Bayesian Computation Statistics
Mahdieh Bayati
Abstract
This study generalizes the joint empirical likelihood (JEL) to what we name the joint penalized empirical likelihood (JPEL), and presents a comparative analysis of two empirical likelihood methods: the restricted penalized empirical likelihood (RPEL) and the joint penalized empirical likelihood. These methods extend traditional empirical likelihood approaches by incorporating criteria based on the minimum variance and unbiasedness of the estimating equations. In RPEL, estimators are obtained under these two criteria, while JPEL applies the estimating equations used in RPEL jointly, allowing broader applicability. We evaluate RPEL and JPEL in regression models through simulation studies, focusing on parameter accuracy, model selection (measured by the Empirical Bayesian Information Criterion), predictive accuracy (mean squared error), and robustness to outliers. The results indicate that RPEL consistently outperforms JPEL across all criteria, yielding simpler models and more reliable estimates, particularly as sample sizes increase. These findings suggest that RPEL provides greater stability and interpretability for regression models, making it the preferable choice over JPEL for the scenarios tested in this study.
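One of the evaluation criteria above, predictive accuracy via mean squared error, can be sketched in a minimal simulation harness. This is an illustration only: ordinary least squares stands in for the RPEL/JPEL estimators (which the abstract does not specify in computable detail), and the function names `simulate` and `mse` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, beta=(1.0, 2.0), sigma=1.0):
    """Generate a simple linear-regression data set y = X @ beta + noise."""
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array(beta) + rng.normal(scale=sigma, size=n)
    return X, y

def mse(y, y_hat):
    """Mean squared prediction error, the predictive-accuracy criterion."""
    return float(np.mean((y - y_hat) ** 2))

# Fit by ordinary least squares as a stand-in estimator; an RPEL or
# JPEL estimate of beta would be plugged into mse() the same way.
X, y = simulate(200)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
X_new, y_new = simulate(100)
print(mse(y_new, X_new @ beta_hat))
```

Comparing estimators then amounts to repeating this over many simulated data sets and averaging the resulting MSE values per method.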
Bayesian Network
Vahid Rezaei Tabar
Abstract
The aim of this paper is to learn a Bayesian network structure for discrete variables. For this purpose, we introduce a Gibbs sampling method in which each sample represents a Bayesian network, so the sampling process yields a set of Bayesian networks. To obtain a single graph that best fits the data, we use the mode of the graphs sampled after burn-in: the most frequent edges among these graphs are taken to form the best single graph. Results on well-known Bayesian networks show that our method achieves higher accuracy in learning a Bayesian network structure.
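The edge-frequency aggregation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: graphs are represented as sets of directed edges, and the function name `aggregate_graphs` and the majority threshold are assumptions.

```python
from collections import Counter

def aggregate_graphs(sampled_graphs, threshold=0.5):
    """Combine post-burn-in graph samples into one graph by keeping
    each directed edge appearing in more than `threshold` of the
    samples (the per-edge mode described in the abstract)."""
    counts = Counter()
    for edges in sampled_graphs:
        counts.update(edges)
    n = len(sampled_graphs)
    return {edge for edge, c in counts.items() if c / n > threshold}

# Toy example: three sampled structures over variables A, B, C.
samples = [
    {("A", "B"), ("B", "C")},
    {("A", "B")},
    {("A", "B"), ("B", "C")},
]
print(aggregate_graphs(samples))  # edges present in a majority of samples
```

In practice the retained edge set would also be checked for acyclicity, since the aggregate of several DAGs is not guaranteed to be a DAG.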
Ehsan Ormoz
Abstract
In this paper, we introduce a Bayesian semiparametric model for both the intercept and the regression coefficients. Meta-analysis and meta-regression usually rely on a parametric family; recently, however, the growing use of Bayesian nonparametric and semiparametric models has reached this area as well. Moreover, although some work exists on Bayesian nonparametric and semiparametric models, it focuses on the intercept and pays little attention to the regression coefficient(s). We also examine the efficiency of the proposed model via simulation and give an illustrative example.