Statistical Computing
Raheleh Zamini
Abstract
In various statistical models, such as density estimation and estimation of regression curves or hazard rates, monotonicity constraints can arise naturally. A frequently encountered problem in nonparametric statistics is to estimate a monotone density function f on a compact interval. A well-known estimator of a density f under the restriction that f is decreasing is the Grenander estimator, which is the left derivative of the least concave majorant of the empirical distribution function of the data. Many authors have worked on this estimator and obtained very useful properties for it. The Grenander estimator is a step function and consequently is not smooth. In this paper, we discuss the estimation of a decreasing density function by the kernel smoothing method. Much work has been done because of the importance and applicability of Berry-Esseen bounds for density estimators. In this paper, we study a Berry-Esseen type bound for a smoothed version of the Grenander estimator.
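As a rough illustration of the construction described above, the Python sketch below computes the Grenander estimator as the slopes of the least concave majorant of the empirical distribution function and then kernel-smooths the resulting step function; the Gaussian kernel, bandwidth, and grid-based convolution are illustrative choices, not the paper's exact smoothing scheme.

```python
import numpy as np

def grenander(x):
    """Grenander estimator of a decreasing density on [0, inf):
    left derivative of the least concave majorant (LCM) of the ECDF."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # ECDF knots, including the origin (0, 0)
    pts = np.column_stack((np.concatenate(([0.0], x)), np.arange(n + 1) / n))
    # upper convex hull (= least concave majorant) via a monotone-chain scan
    hull = []
    for p in pts:
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])
            if cross >= 0:        # 'a' lies on or below the chord o--p: drop it
                hull.pop()
            else:
                break
        hull.append(p)
    hull = np.array(hull)
    knots = hull[:, 0]
    slopes = np.diff(hull[:, 1]) / np.diff(hull[:, 0])   # non-increasing slopes
    return knots, slopes

def grenander_eval(t, knots, slopes):
    """Evaluate the piecewise-constant Grenander estimator at points t."""
    t = np.asarray(t, dtype=float)
    idx = np.clip(np.searchsorted(knots, t, side="left") - 1, 0, len(slopes) - 1)
    return np.where(t > knots[-1], 0.0, slopes[idx])

def smoothed_grenander(t, x, h):
    """Kernel-smoothed Grenander estimator: Gaussian kernel with bandwidth h,
    computed by numerical convolution on a grid (illustrative choices)."""
    knots, slopes = grenander(x)
    grid = np.linspace(0.0, knots[-1], 2000)
    f_hat = grenander_eval(grid, knots, slopes)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    K = np.exp(-0.5 * ((t[:, None] - grid[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return (K * f_hat).sum(axis=1) * (grid[1] - grid[0])

# toy example: data from a decreasing density (standard exponential)
rng = np.random.default_rng(0)
sample = rng.exponential(size=300)
print(smoothed_grenander([0.5, 1.0, 2.0], sample, h=0.3))
```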
Rahim Mahmoudvand; Paulo Canas Rodrigues
Abstract
In a referendum conducted in the United Kingdom (UK) on June 23, 2016, $51.6\%$ of the participants voted to leave the European Union (EU). The outcome of this referendum had major policy and financial impact for both the UK and the EU, and was seen as a surprise because the predictions consistently indicated that ``Remain'' would get a majority. In this paper, we investigate whether the outcome of the Brexit referendum could have been predicted from polls data. The data consist of 233 polls conducted between January 2014 and June 2016 by YouGov, Populus, ComRes, Opinion, and others, with sample sizes ranging from 500 to 20,058. We used Singular Spectrum Analysis (SSA), an increasingly popular and widely adopted filtering technique for both short and long time series. We found that the real outcome of the referendum is very close to our point estimate and within our prediction interval, which reinforces the usefulness of SSA for predicting polls data.
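A minimal SSA sketch in Python, assuming the basic embed-decompose-reconstruct-forecast recipe; the window length, number of components, and the synthetic "poll share" series are placeholders, not the paper's settings.

```python
import numpy as np

def ssa_reconstruct(y, L, r):
    """Reconstruct a series from its first r SSA components
    (embedding -> SVD -> grouping -> diagonal averaging)."""
    y = np.asarray(y, dtype=float)
    N = y.size
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])   # trajectory matrix (L x K)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]                   # rank-r approximation
    rec = np.zeros(N)                                     # diagonal (Hankel) averaging
    counts = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return rec / counts, U[:, :r]

def ssa_forecast(y, L, r, steps):
    """Recurrent SSA forecasting: extend the reconstructed series with the
    linear recurrence implied by the leading left singular vectors."""
    rec, U = ssa_reconstruct(y, L, r)
    pi = U[-1, :]                        # last coordinates of the eigenvectors
    nu2 = np.sum(pi ** 2)
    R = (U[:-1, :] @ pi) / (1.0 - nu2)   # recurrence coefficients (length L-1)
    series = list(rec)
    for _ in range(steps):
        series.append(float(R @ np.array(series[-(L - 1):])))
    return np.array(series[-steps:])

# toy usage on a synthetic poll-share series
rng = np.random.default_rng(1)
polls = 45 + 3 * np.sin(np.arange(120) / 10) + rng.normal(0, 1, 120)
print(ssa_forecast(polls, L=40, r=3, steps=4))
```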
Mina Norouzirad; Mohammad Arashi; Mahdi Roozbeh
Abstract
The partial linear model is very flexible, as the relation between the covariates and the response contains both parametric and nonparametric components. However, estimation of the regression coefficients is challenging since one must also estimate the nonparametric component simultaneously. As a remedy, the differencing approach, which eliminates the nonparametric component before estimating the regression coefficients, can be used. Here, the regression vector-parameter is assumed to lie in a sub-space hypothesis. In situations where the use of the difference-based least absolute shrinkage and selection operator (D-LASSO) is desired, we propose a restricted D-LASSO estimator. To improve its performance, LASSO-type shrinkage estimators are also developed. The relative dominance picture of the suggested estimators is investigated. In particular, the suitability of estimating the nonparametric component based on the Speckman approach is explored. A real data example is given to compare the proposed estimators. The numerical analysis shows that the partial difference-based shrinkage estimators perform better than the difference-based regression model in terms of average prediction error.
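The differencing idea can be illustrated with a short sketch: after ordering by the nonparametric covariate, first differences remove the smooth component and a standard LASSO is fitted to the differenced data. The restricted and shrinkage variants of the paper are not reproduced; this only shows the D-LASSO starting point on illustrative data.

```python
import numpy as np
from sklearn.linear_model import Lasso

def difference_based_lasso(y, X, t, alpha=0.1):
    """First-order difference-based LASSO for the partial linear model
    y_i = x_i' beta + g(t_i) + e_i: sorting by t and differencing removes
    the smooth term g, then LASSO is applied to the differenced data."""
    order = np.argsort(t)
    y, X = y[order], X[order]
    dy = np.diff(y)                 # approx. (x_{(i+1)} - x_{(i)})' beta + noise
    dX = np.diff(X, axis=0)
    model = Lasso(alpha=alpha, fit_intercept=False).fit(dX, dy)
    return model.coef_

# toy example with a sparse beta and a smooth nuisance g(t) = sin(2*pi*t)
rng = np.random.default_rng(2)
n, p = 200, 10
t = rng.uniform(size=n)
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.5] + [0.0] * (p - 2))
y = X @ beta + np.sin(2 * np.pi * t) + rng.normal(0, 0.5, n)
print(np.round(difference_based_lasso(y, X, t), 2))
```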
Statistical Computing
Soroush Pakniat
Abstract
This paper presents approximate confidence intervals for each function of parameters in a Banach space based on a bootstrap algorithm. We apply a kernel density approach to estimate the persistence landscape. In addition, we evaluate the quality of the distribution function estimator of random variables using the integrated mean square error (IMSE). The results of simulation studies show a significant improvement achieved by our approach compared to the standard version of the confidence interval algorithm. Finally, a real data analysis shows the accuracy of our method compared to that of previous works for computing the confidence interval.
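A generic percentile-bootstrap band for a function-valued statistic, sketched in Python; the kernel density estimate below merely stands in for the persistence landscape, and the grid, bandwidth, and number of replicates are illustrative.

```python
import numpy as np

def bootstrap_percentile_band(sample, statistic, grid, B=500, alpha=0.05, seed=0):
    """Pointwise percentile bootstrap confidence band for a function-valued
    statistic evaluated on a grid (a generic stand-in for a functional
    parameter such as a persistence landscape)."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    boot = np.empty((B, len(grid)))
    for b in range(B):
        resample = rng.choice(sample, size=n, replace=True)
        boot[b] = statistic(resample, grid)
    lower = np.quantile(boot, alpha / 2, axis=0)
    upper = np.quantile(boot, 1 - alpha / 2, axis=0)
    return lower, upper

def gaussian_kde_on_grid(sample, grid, h=0.3):
    """Simple Gaussian kernel density estimate used here as the functional."""
    z = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
data = rng.normal(size=200)
grid = np.linspace(-3, 3, 50)
lo, hi = bootstrap_percentile_band(data, gaussian_kde_on_grid, grid)
print(lo[:3], hi[:3])
```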
Computational Statistics
Reza Pourtaheri
Abstract
Traditionally, statistical quality control techniques utilize either an attribute or a variable product quality measure. Recently, some methods, such as the three-level control chart, have been developed for monitoring multi-attribute processes. A control chart usually has three design parameters: the sample size (n), the sampling interval (h) and the control limit coefficient (k). The design parameters of the control chart are generally specified according to statistical and/or economic criteria. The variable sampling interval (VSI) control scheme has been shown to increase the detecting efficiency of the control chart with a fixed sampling rate (FRS). In this paper, a method is proposed to conduct the economic-statistical design for the variable sampling interval of three-level control charts. We use the cost model developed by Costa and Rahim and optimize this model by a genetic algorithm approach. We compare the expected cost per unit time of the VSI and FRS three-level control charts. Results indicate that the proposed chart has improved performance.
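A minimal sketch of the optimization step, assuming a placeholder cost surrogate in place of the Costa-Rahim model (whose process and cost parameters are not reproduced here); it only shows how a simple real-coded genetic algorithm can search over the design parameters (n, h, k).

```python
import math
import numpy as np

rng = np.random.default_rng(4)

def expected_cost(n, h, k, delta=1.0):
    """Placeholder surrogate for an expected cost per unit time: it penalizes
    frequent sampling, large samples, false alarms and slow detection of a
    shift of size delta.  The real Costa-Rahim model would plug in the
    process, shock-distribution and cost parameters of the paper."""
    norm_cdf = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    alpha = 2 * (1 - norm_cdf(k))                                   # false-alarm probability
    power = 1 - norm_cdf(k - delta * math.sqrt(n)) + norm_cdf(-k - delta * math.sqrt(n))
    sampling_cost = (0.5 + 0.1 * n) / h
    alarm_cost = 50 * alpha / h
    detection_cost = 100 * h / max(power, 1e-6)                     # slow detection is costly
    return sampling_cost + alarm_cost + detection_cost

def genetic_search(cost, pop_size=40, generations=200):
    """Minimal real-coded genetic algorithm over (n, h, k)."""
    lo = np.array([2.0, 0.1, 1.0])     # lower bounds for n, h, k (illustrative)
    hi = np.array([30.0, 8.0, 4.0])    # upper bounds
    pop = lo + rng.random((pop_size, 3)) * (hi - lo)
    for _ in range(generations):
        fitness = np.array([cost(int(round(p[0])), p[1], p[2]) for p in pop])
        parents = pop[np.argsort(fitness)][: pop_size // 2]          # truncation selection
        mates = parents[rng.permutation(len(parents))]
        children = 0.5 * (parents + mates)                           # arithmetic crossover
        children += rng.normal(0, 0.05, children.shape) * (hi - lo)  # mutation
        pop = np.clip(np.vstack([parents, children]), lo, hi)
    best = pop[np.argmin([cost(int(round(p[0])), p[1], p[2]) for p in pop])]
    return int(round(best[0])), best[1], best[2]

print(genetic_search(expected_cost))
```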
Bahman Tarvirdizade; Nader Nematollahi
Abstract
In this article, we consider the problem of estimating the stress-strength reliability $Pr (X > Y)$ based on upper record values when $X$ and $Y$ are two independent but not identically distributed random variables from the power hazard rate distribution with common scale parameter $k$. When the parameter $k$ is known, the maximum likelihood estimator (MLE), the approximate Bayes estimator and the exact confidence intervals of stress-strength reliability are obtained. When the parameter $k$ is unknown, we obtain the MLE and some bootstrap confidence intervals of stress-strength reliability. We also apply the Gibbs sampling technique to study the Bayesian estimation of stress-strength reliability and the corresponding credible interval. An example is presented in order to illustrate the inferences discussed in the previous sections. Finally, to investigate and compare the performance of the different proposed methods in this paper, a Monte Carlo simulation study is conducted.
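For intuition, a nonparametric plug-in estimate of R = P(X > Y) with a percentile bootstrap interval is sketched below; the paper's record-value likelihood, the Bayes and Gibbs procedures, and the power hazard rate distribution itself are not reproduced here.

```python
import numpy as np

def reliability_estimate(x, y):
    """Nonparametric plug-in estimate of R = P(X > Y)."""
    return np.mean(x[:, None] > y[None, :])

def bootstrap_ci(x, y, B=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for P(X > Y); a generic
    stand-in for the bootstrap intervals of the paper, which are built from
    upper record values of the power hazard rate distribution."""
    rng = np.random.default_rng(seed)
    stats = np.array([
        reliability_estimate(rng.choice(x, len(x), replace=True),
                             rng.choice(y, len(y), replace=True))
        for _ in range(B)
    ])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(5)
x = rng.weibull(2.0, 50) * 1.5     # "strength" sample (illustrative distribution)
y = rng.weibull(2.0, 50)           # "stress" sample
print(reliability_estimate(x, y), bootstrap_ci(x, y))
```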
Statistical Simulation
Farzad Eskandari
Abstract
Imprecise measurement tools produce imprecise data. Interval-valued data are usually used to deal with such imprecision, so interval-valued variables are used in estimation methods. They have recently been modeled by linear regression models. If the response variable follows a statistical distribution, interval-valued variables are modeled within the generalized linear models framework. In this article, we propose a new consistent estimator of a parameter in generalized linear models with respect to distributions of the response variable in the exponential family. A simulation study shows that the new estimator is better than others on the basis of particular distributions of the response variable. We also present optimality properties of the estimators.
Statistical Computing
Mohammad Hossein Naderi; Mohammad Bameni Moghadam; Asghar Seif
Abstract
A proper method of monitoring a stochastic system is to use the control charts of statistical process control, in which a drift in the characteristics of the output may be due to one or several assignable causes. In the establishment of X charts in statistical process control, an assumption is made that there is no correlation within the samples. However, in practice, there are many cases where the correlation does exist within the samples. It would be more appropriate to assume that each sample is a realization of a multivariate normal random vector. Using three different loss functions in the concept of quality control charts with economic and economic-statistical design leads to better decisions in industry. Although some research works have considered the economic design of control charts under a single assignable cause and correlated data, the economic-statistical design of the X control chart for multiple assignable causes and correlated data under a Weibull shock model with three different loss functions has not been presented yet. Based on the optimization of the average cost per unit of time and taking into account different combinations of the values of the Weibull distribution parameters, optimal design values of the sample size, sampling interval and control limit coefficient were derived and calculated. Then the cost models under non-uniform and uniform sampling schemes were compared. The results revealed that the model under multiple assignable causes with correlated samples and non-uniform sampling, integrated with three different loss functions, has a lower cost than the model with uniform sampling.
Bayesian Computation Statistics
Ehsan Ormoz
Abstract
In the meta-analysis of clinical trials, the data of each trial are usually summarized by one or more outcome measure estimates, which are reported along with their standard errors. When the summary data are multi-dimensional, the analysis is usually performed as a number of separate univariate analyses. In such a case, the correlation between the summary statistics is ignored. In contrast, a multivariate meta-analysis model uses these correlations to synthesize the outcomes jointly and estimate the multiple pooled effects simultaneously. In this paper, we present a nonparametric Bayesian bivariate random effect meta-analysis.
Esmaeil Shirazi
Abstract
Estimation of a quantile density function from biased data is a frequent problem in industrial life testing experiments and medical studies. The estimation of a quantile density function in the biased nonparametric regression model is investigated. We propose and develop a new wavelet-based methodology for this problem. In particular, an adaptive hard thresholding wavelet estimator is constructed. Under mild assumptions on the model, we prove that it enjoys powerful mean integrated squared error properties over Besov balls. The performance of the proposed estimator is investigated by a numerical study. In this study, we develop two types of wavelet estimators for the quantile density function when data come from a biased distribution function. Our wavelet hard thresholding estimator, which is introduced as a nonlinear estimator, is adaptive with respect to q(x). We show that these estimators attain optimal and nearly optimal rates of convergence over a wide range of Besov function classes.
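A generic hard-thresholding step is sketched below with PyWavelets, assuming equally spaced noisy evaluations of a curve; the biased-data correction and the Besov-ball theory of the paper are not reflected in this toy version, and the wavelet family, level, and threshold rule are illustrative choices.

```python
import numpy as np
import pywt

def hard_threshold_estimate(raw, wavelet="db4", level=4):
    """Hard-thresholding wavelet estimator for equally spaced noisy curve
    values: decompose, kill small detail coefficients with a universal-type
    threshold, and reconstruct."""
    coeffs = pywt.wavedec(raw, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale from finest level
    thr = sigma * np.sqrt(2 * np.log(len(raw)))             # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="hard") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(raw)]

# toy example: noisy observations of a quantile-density-like curve q(u) on a grid
rng = np.random.default_rng(6)
u = np.linspace(0.01, 0.99, 256)
q = 1.0 / np.sqrt(1.0 - u)                                  # some heavy-right-tail shape
noisy = q + rng.normal(0, 0.2, u.size)
q_hat = hard_threshold_estimate(noisy)
print(q_hat[:5])
```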
Mozhgan Taavoni
Abstract
This paper considers an extension of the linear mixed model, called the semiparametric mixed effects model, for longitudinal data when multicollinearity is present. To overcome this problem, a new mixed ridge estimator is proposed, while the nonparametric function in the semiparametric model is approximated by the kernel method. The proposed approach integrates the ridge method into the semiparametric mixed effects modeling framework in order to account for both the correlation induced by repeatedly measuring an outcome on each individual over time and the potentially high degree of correlation among possible predictor variables. The asymptotic normality of the exhibited estimator is established. To improve efficiency, the estimation of the covariance function is accomplished using an iterative algorithm. The performance of the proposed estimator is assessed through a simulation study and an analysis of the CD4 data.
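A rough sketch of the kernel-plus-ridge idea, assuming a Speckman-type partial-residual fit; the random-effect structure and the iterative covariance estimation described above are omitted, and the bandwidth and ridge penalty are illustrative.

```python
import numpy as np

def kernel_smooth(t, values, bandwidth):
    """Nadaraya-Watson smoother of `values` against `t` (columns smoothed
    independently)."""
    W = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    W /= W.sum(axis=1, keepdims=True)
    return W @ values

def kernel_ridge_partial(y, X, t, lam=1.0, bandwidth=0.2):
    """Speckman-type partial-residual fit of y = X beta + f(t) + error with a
    ridge penalty on beta."""
    X_tilde = X - kernel_smooth(t, X, bandwidth)     # part of X not explained by t
    y_tilde = y - kernel_smooth(t, y, bandwidth)
    p = X.shape[1]
    beta = np.linalg.solve(X_tilde.T @ X_tilde + lam * np.eye(p), X_tilde.T @ y_tilde)
    f_hat = kernel_smooth(t, y - X @ beta, bandwidth)
    return beta, f_hat

rng = np.random.default_rng(7)
n, p = 300, 5
t = rng.uniform(size=n)
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + rng.normal(0, 0.05, n)           # near-collinear predictors
beta = np.array([1.0, 1.0, 0.0, 0.5, 0.0])
y = X @ beta + np.sin(2 * np.pi * t) + rng.normal(0, 0.3, n)
print(np.round(kernel_ridge_partial(y, X, t)[0], 2))
```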
Bayesian Computation Statistics
Sima Naghizadeh
Abstract
Bayesian variable selection analysis is widely used as a new methodology in air quality control trials and generalized linear models. One of the important and, of course, controversial topics in this area is the selection of the prior distribution of the unknown model parameters. The aim of this study is to present a substitute for the mixture of priors which, besides preserving their benefits and computational efficiencies, obviates the existing paradoxes and contradictions. In this research we consider two points of view, empirical Bayes and fully Bayesian. In particular, a mixture of priors and its theoretical characteristics are introduced. Finally, the proposed model is illustrated with a real example.
Statistical Computing
Hassan Rashidi; Hamed Heidari; Marzie Movahedin; Maryam Moazami Gudarzi; Mostafa Shakerian
Abstract
The purpose of this research is to identify and introduce effective factors in the adoption of e-learning based on the technology adoption model. Accordingly, by considering the studies conducted in this field, several variables such as computer self-efficacy, content quality, system support, interface design, technology tools and computer anxiety were extracted as factors influencing the adoption of an e-learning system, and a conceptual model of the research was developed based on them. To measure the model and the relationships between the variables in the model, a questionnaire was designed and provided to users of the electronic education system of Qazvin University of Medical Sciences. Using the structural equation modeling method, the data analysis confirmed all hypotheses except the effect of technology tools on the acceptance of the e-learning system. The findings of this study will help university administrators and the professors associated with this system to encourage students to make effective use of the system by providing the necessary conditions for the influential factors.
Mahsa Ghajarbeigi; Hamid Reza Vakely fard; Ramzanali Roeayi
Abstract
The purpose of this paper was to investigate the impact of audit quality on the reduction of collateral facilities, taking into account the role of major shareholders in companies listed on the Tehran Stock Exchange during the period 2017 to 2022. Considering the research conditions, 179 companies were selected as the statistical sample (from a total of 895 companies). In terms of nature and content, this is descriptive, applied research. The panel data method was used to test the research hypotheses. The findings emphasize that audit quality reduces collateral facilities and that auditor rotation increases collateral facilities, while the auditor's industry expertise does not have a significant effect on collateral facilities. On the other hand, the ownership percentage of major shareholders does not affect the intensity of the impact of audit quality, industry expertise and auditor rotation on collateral facilities.
Bayesian Computation Statistics
Rashin Nimaei; Farzad Eskandari
Abstract
Recent advancements in technology have led to an increase in the growth rate of data. Given the amount of data generated, ensuring effective analysis using traditional approaches becomes very complicated. One of the methods of managing and analyzing big data is classification, and one of the data mining methods used commonly and effectively to classify big data is MapReduce. In this paper, a feature weighting technique to improve Bayesian classification algorithms for big data is developed based on the correlative Naive Bayes classifier and the MapReduce model. The classification models include the Naive Bayes classifier, correlated Naive Bayes, and correlated Naive Bayes with feature weighting. Correlated Naive Bayes classification generalizes the Naive Bayes classification model by considering the dependence between features. This paper uses the feature weighting technique and Laplace calibration to improve the correlated Naive Bayes classification. The performance of all described methods is evaluated in terms of accuracy, sensitivity and specificity.
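As a simplified stand-in for the approach above, the sketch below weights each feature's Gaussian log-likelihood by its mutual information with the class; the correlated (non-independent) likelihood, the Laplace calibration, and the MapReduce distribution of the computation are not shown.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

class WeightedGaussianNB:
    """Gaussian Naive Bayes in which each feature's log-likelihood is scaled
    by a weight (here its mutual information with the class), a simple
    stand-in for feature-weighted Bayesian classification."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.w_ = mutual_info_classif(X, y, random_state=0)
        self.w_ = self.w_ / (self.w_.sum() + 1e-12) * X.shape[1]   # normalized weights
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.logprior_ = np.log(np.array([np.mean(y == c) for c in self.classes_]))
        return self

    def predict(self, X):
        ll = -0.5 * (np.log(2 * np.pi * self.var_[:, None, :])
                     + (X[None, :, :] - self.mu_[:, None, :]) ** 2 / self.var_[:, None, :])
        scores = self.logprior_[:, None] + (self.w_ * ll).sum(axis=2)
        return self.classes_[np.argmax(scores, axis=0)]

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 0).astype(int)
print((WeightedGaussianNB().fit(X, y).predict(X) == y).mean())
```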
Bayesian Computation Statistics
Ehsan Ormoz; Farzad Eskandari
Abstract
This paper introduces a novel semiparametric Bayesian approach for bivariate meta-regression. The method extends traditional binomial models to trinomial distributions, accounting for positive, neutral, and negative treatment effects. Using a conditional Dirichlet process, we develop a model to compare treatment and control groups across multiple clinical centers. This approach addresses the challenges posed by confounding factors in such studies. The primary objective is to assess treatment efficacy by modeling response outcomes as trinomial distributions. We employ Gibbs sampling and the Metropolis-Hastings algorithm for posterior computation. These methods generate estimates of treatment effects while incorporating auxiliary variables that may influence outcomes. Simulations across various scenarios demonstrate the model’s effectiveness. We also establish credible intervals to evaluate hypotheses related to treatment effects. Furthermore, we apply the methodology to real-world data on economic activity in Iran from 2009 to 2021. This application highlights the practical utility of our approach in meta-analytic contexts. Our research contributes to the growing body of literature on Bayesian methods in meta-analysis. It provides valuable insights for improving clinical study evaluations.
Azar Ghyasi; Hanieh Rashidi
Abstract
Due to the inherent complexity and increasing competition, today's business environment requires new approaches in organizing and managing. One of the new approaches is business intelligence, which is the most critical technology to help manage and deliver smart services, especially business reporting. Business intelligence enables firms to manage their business efficiently to meet the needs of businesses at different macro, middle and even operations levels. In this paper, while investigating the feasibility of implementing business intelligence in firms, designing business intelligence to report and present new services is discussed. In order to demonstrate the capabilities of this type of intelligence, an approach based on the concept of Bayesian network in the application layer of business intelligence is presented. This approach is implemented for one of the companies governed by the Iranian Industrial Development and Renovation Organization, and the effects of important accounting and financial variables on the firm goals are investigated.
Mathematical Computing
Mohammad Arashi
Abstract
The multilinear normal distribution is a widely used tool in the tensor analysis of magnetic resonance imaging (MRI). Diffusion tensor MRI provides a statistical estimate of a symmetric 2nd-order diffusion tensor for each voxel within an imaging volume. In this article, tensor elliptical (TE) distribution is introduced as an extension to the multilinear normal (MLN) distribution. Some properties, including the characteristic function and distribution of affine transformations are given. An integral representation connecting densities of TE and MLN distributions is exhibited that is used in deriving the expectation of any measurable function of a TE variate.
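For reference, the vectorized characterization of the multilinear normal benchmark is written below in the usual Kronecker form; the paper's tensor elliptical parametrization may differ in detail.

```latex
% Vectorized characterization of the multilinear normal (MLN) benchmark;
% p = p_1 p_2 \cdots p_K and \otimes denotes the Kronecker product.
X \sim \mathrm{MLN}_{p_1,\ldots,p_K}\!\left(\mathcal{M};\,\Sigma_1,\ldots,\Sigma_K\right)
\quad\Longleftrightarrow\quad
\operatorname{vec}(X) \sim N_{p}\!\left(\operatorname{vec}(\mathcal{M}),\,
\Sigma_K \otimes \cdots \otimes \Sigma_1\right).
```

In this notation, a tensor elliptical variate keeps the same Kronecker-structured scale matrix but replaces the Gaussian density generator with a general elliptical one, so the density depends on the data only through the quadratic form $\operatorname{vec}(x-\mathcal{M})^{\top}(\Sigma_K \otimes \cdots \otimes \Sigma_1)^{-1}\operatorname{vec}(x-\mathcal{M})$.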
Machine Learning
Sahar Abbasi; Radmin Sadeghian; Maryam Hamedi
Abstract
Multi-label classification assigns multiple labels to each instance, which is crucial for tasks like cancer detection in images and text categorization. However, machine learning methods often struggle with the complexity of real-life datasets. To improve efficiency, researchers have developed feature selection methods to identify the most relevant features. Traditional methods, requiring all features upfront, fail in dynamic environments like media platforms with continuous data streams. To address this, novel online methods have been created, yet they often neglect optimizing conflicting objectives. This study introduces an objective search approach using mutual information, feature interaction, and the NSGA-II algorithm to select relevant features from streaming data. The strategy aims to minimize feature overlap, maximize relevance to labels, and optimize online feature interaction analysis. By applying a modified NSGA-II algorithm, a set of non-dominated solutions is identified. Experiments on eleven datasets show that the proposed approach outperforms advanced online feature selection techniques in predictive accuracy, statistical analysis, and stability assessment.
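The objective-evaluation step for a newly arriving feature can be sketched as follows, using mutual information for relevance and average mutual information with already-selected features for redundancy; the NSGA-II loop, the feature-interaction term, and the multi-label setting are simplified to a single label here.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def streaming_objectives(X_selected, x_new, y):
    """Two objectives a multi-objective search such as NSGA-II would trade off
    when a new feature arrives in the stream: relevance to the label (to be
    maximized) and average redundancy with the already-selected features (to
    be minimized).  Only the objective evaluation is shown, not the search."""
    relevance = mutual_info_classif(x_new.reshape(-1, 1), y, random_state=0)[0]
    if X_selected.shape[1] == 0:
        redundancy = 0.0
    else:
        redundancy = np.mean([
            mutual_info_regression(x_new.reshape(-1, 1), X_selected[:, j], random_state=0)[0]
            for j in range(X_selected.shape[1])
        ])
    return relevance, redundancy

rng = np.random.default_rng(9)
y = rng.integers(0, 2, 300)
X_sel = rng.normal(size=(300, 3))            # features already in the selected set
x_new = y + rng.normal(0, 1.0, 300)          # informative candidate feature
print(streaming_objectives(X_sel, x_new, y))
```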
Machine Learning
Mostafa Azghandi; Mahdi Yaghoobi; Elham Fariborzi
Abstract
By focusing on the fuzzy Delphi technique (FDM), the current research introduces a novel approach to modeling Persian vernacular architecture. Fuzzy Delphi is a more advanced version of the Delphi Method, which utilizes triangulation statistics to determine the distance between the levels of consensus within the expert panel and deals with the measurement uncertainty of qualitative data. In this sense, the main objective of the Delphi method is to acquire the most reliable consensus of a group of expert opinions; an advantage that helps the current study to answer the main question of the research, that is, determining the efficacy of fuzzy Delphi technique in intelligent modeling of Persian vernacular architecture. Therefore, in order to identify the main factors of the research model, systematic literature reviews as well as semi-structured interviews with experts were conducted. Then, with the usage of Qualitative Content Analysis (QCA), various themes were obtained and employed as the main factors of the research model. Finally, by utilizing the fuzzy Delphi technique, the present study examined the degree of certainty and accuracy of the factors in two stages and identified 28 factors in the modeling of Persian vernacular architecture.
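A minimal fuzzy Delphi aggregation step is sketched below, assuming a five-point linguistic scale mapped to triangular fuzzy numbers and a 0.7 acceptance threshold; the scale, aggregation rule, and threshold are illustrative choices rather than the study's exact settings.

```python
import numpy as np

# Map a 5-point linguistic scale to triangular fuzzy numbers (l, m, u) on [0, 1].
SCALE = {
    1: (0.0, 0.0, 0.25),    # very low
    2: (0.0, 0.25, 0.5),
    3: (0.25, 0.5, 0.75),
    4: (0.5, 0.75, 1.0),
    5: (0.75, 1.0, 1.0),    # very high
}

def fuzzy_delphi(ratings, threshold=0.7):
    """Aggregate one factor's expert ratings: minimum of the lower bounds,
    geometric mean of the modes, maximum of the upper bounds, then defuzzify
    by the simple average (l + m + u) / 3 and compare with the threshold."""
    tfns = np.array([SCALE[r] for r in ratings], dtype=float)
    l = tfns[:, 0].min()
    m = np.exp(np.log(np.clip(tfns[:, 1], 1e-9, None)).mean())   # geometric mean of modes
    u = tfns[:, 2].max()
    score = (l + m + u) / 3.0
    return score, score >= threshold

print(fuzzy_delphi([4, 5, 4, 3, 5, 4]))   # one candidate factor rated by six experts
```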
Statistical Computing
Farzad Eskandari
Abstract
Interval-valued data are observed as ranges instead of single values and contain richer information than single-valued data. Meanwhile, interval-valued data are used for interval-valued characteristics. An interval generalized linear model is proposed for the first time in this research. Then a suitable model is presented to estimate the parameters of the interval generalized linear model. The two models are built on the basis of interval arithmetic. The estimation procedure for the parameters of the suitable model is the same as the estimation procedure for the parameters of the interval generalized linear model. The least-squares (LS) estimation of the suitable model is developed according to a nice distance in the interval space. The LS estimation is resolved analytically through a constrained minimization problem. Then some desirable properties of the estimators are checked. Finally, both the theoretical and the empirical performance of the estimators are investigated.
Statistical Simulation
Vadood Keramati; Ramin Sadeghian; Maryam Hamedi; Ashkan Shabbak
Abstract
Record linkage is a tool used to gather information and data from different sources. It is used in activities related to government, such as e-government and the production of register-based data. This method compares the strings in the databases, and there are different approaches to record linkage, such as deterministic and probabilistic ones. This paper presents a proposed expert system for the record linkage of data received from multiple databases. The system is designed to save time and reduce errors in the process of aggregating data. The inputs for this system include several linked fields, thresholds, and metric methods, which are explained along with the evaluation of the used method. To validate the system, inputs from two databases and seven information fields, comprising 100,000 simulated records, were used. The results reveal a higher accuracy of probabilistic record linkage compared to deterministic linkage. Furthermore, the highest linkage was achieved using five fields with varying thresholds. In assessing the various metric methods, it was found that some metric methods achieved less than 80% accuracy, while the Winkler metric method achieved over 86% accuracy. These findings demonstrate that the implementation of the proposed automated system significantly saves time and enhances the flexibility of selection methods.
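A toy version of the decision rule such a system automates is sketched below; difflib's similarity ratio stands in for the Jaro-Winkler-type metrics mentioned above, and the field names, weights, and thresholds are illustrative.

```python
from difflib import SequenceMatcher

def field_similarity(a, b):
    """Normalized string similarity in [0, 1]; difflib's ratio is used as a
    stand-in for Jaro-Winkler-type metrics."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def classify_pair(rec_a, rec_b, fields, weights, upper=0.85, lower=0.6):
    """Probabilistic-style decision rule: a weighted similarity score over the
    chosen fields is compared with two thresholds to declare a link, a
    possible link (for clerical review) or a non-link."""
    score = sum(w * field_similarity(rec_a[f], rec_b[f]) for f, w in zip(fields, weights))
    score /= sum(weights)
    if score >= upper:
        return "link", score
    if score >= lower:
        return "possible link", score
    return "non-link", score

# hypothetical records from two databases
a = {"name": "Mohammad Rezaei", "birth": "1988-04-12", "city": "Tehran"}
b = {"name": "Mohamad Rezaee",  "birth": "1988-04-12", "city": "Teheran"}
print(classify_pair(a, b, fields=["name", "birth", "city"], weights=[2.0, 3.0, 1.0]))
```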
Alireza Safariyan; Reza Arabi Belaghi
Abstract
In this paper, the probability of failure-free operation until time t, along with the stress-strength probability, is studied in a family of lifetime distributions based on progressive censoring data. Since the number of observations in a progressive censoring scheme is usually reduced, shrinkage methods have been used to improve the classical estimator. For estimation purposes, preliminary test and Stein-type shrinkage estimators are proposed and their exact distributional properties are derived. To demonstrate the numerical superiority of the proposed estimation strategies, some improved bootstrap confidence intervals are constructed. The theoretical results are illustrated by a real data example and an extensive simulation study. The simulation evidence reveals that our proposed shrinkage strategies perform well in the estimation of parameters based on progressive censoring data.
Machine Learning
Mohammad Zahaby; Iman Makhdoom
Abstract
Breast cancer (BC) is one of the leading causes of death in women worldwide. Early diagnosis of this disease can save many women's lives. The Breast Imaging Reporting and Data System (BIRADS) is a standard method developed by the American College of Radiology (ACR). However, physicians have shown many inconsistencies in determining the BIRADS value, and the methods used so far have not considered all aspects of the patients in diagnosing this disease. In this article, a novel decision support system (DSS) is presented. In the proposed DSS, c-means clustering was first used to determine the molecular subtype for patients who did not have this value, by combining the processing of mammography reports with the hospital information system (HIS) data obtained from their electronic files. Then several classifiers, such as convolutional neural networks (CNN), decision tree (DT), multi-level fuzzy min-max neural network (MLF), multi-class support vector machine (SVM), and XGBoost, were trained to determine the BIRADS. Finally, the values obtained by these classifiers were combined using weighted ensemble learning with the majority voting algorithm to obtain the appropriate BIRADS value, which helps physicians in the early diagnosis of BC. The results were evaluated in terms of accuracy, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), and F1-measure using the confusion matrix; the obtained values were 97.94%, 98.79%, 92.08%, 92.34%, 98.80%, and 92.19%, respectively.
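A schematic of the final combination step, assuming scikit-learn base learners and soft weighted voting on synthetic data; the CNN, fuzzy min-max, and XGBoost components of the actual system are replaced here by simpler stand-ins, and the voting weights are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic multi-class data standing in for mammography/HIS features.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Weighted soft voting over several base classifiers; in the paper the weights
# would reflect each classifier's validation performance.
ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft", weights=[1, 2, 2])
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```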
Computational Statistics
Manijeh Mahmoodi; Mohammad Reza Salehi Rad; Farzad Eskandari
Abstract
The novel coronavirus (COVID-19) spreads quickly from person to person, and one of the basic aspects of country management has been to prevent the spread of this disease, so the prediction of its expansion is very important. In such matters, the estimation of new cases and deaths from COVID-19 has been considered by researchers. We propose a statistical model for predicting the new cases and the new deaths by using the vector autoregressive (VAR) model with the multivariate skew normal (MSN) distribution for the asymmetric shocks, and we predict from the sample data. The maximum likelihood (ML) method is applied to the estimation of this model for the weekly data on the new cases and the new deaths of COVID-19. Data are taken from the World Health Organization (WHO) for Iran, from March 2020 until March 2023. The performance of the model is evaluated with the Akaike and Bayesian information criteria, and the mean absolute prediction error (MAPE) is interpreted.
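A Gaussian-VAR illustration of the fit-and-forecast step with statsmodels, run on simulated weekly series; the multivariate skew normal shocks and the WHO data of the paper are not used here, so this only shows the generic VAR/MAPE workflow.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulated weekly new-case / new-death counts standing in for the WHO series.
rng = np.random.default_rng(10)
n = 150
cases = np.cumsum(rng.normal(0, 5, n)) + 300
deaths = 0.05 * cases + rng.normal(0, 1, n)
data = pd.DataFrame({"new_cases": cases, "new_deaths": deaths})

# Hold out the last 8 weeks, fit a Gaussian VAR with the lag order chosen by AIC,
# and score the forecasts with the mean absolute percentage error (MAPE).
train, test = data.iloc[:-8], data.iloc[-8:]
fit = VAR(train).fit(maxlags=4, ic="aic")
forecast = fit.forecast(train.values[-fit.k_ar:], steps=len(test))

mape = np.mean(np.abs((test.values - forecast) / test.values)) * 100
print(f"lag order = {fit.k_ar}, MAPE = {mape:.1f}%")
```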