Mahsa Ghajarbeigi; Hamid Reza Vakely fard; Ramzanali Roeayi
Abstract
The purpose of this paper was to investigate the impact of audit quality on the reduction of collateral facilities, taking into account the role of major shareholders in companies listed on the Tehran Stock Exchange during the period 2017 to 2022. Given the research conditions, 179 companies were selected as the statistical sample (from a total of 895 companies). In terms of nature and content, this is descriptive, applied research. The panel data method was used to test the research hypotheses. The findings indicate that audit quality reduces collateral facilities, while auditor rotation increases them; the auditor's industry expertise has no significant effect on collateral facilities. Moreover, the ownership percentage of major shareholders does not moderate the impact of audit quality, auditor industry expertise, or auditor rotation on collateral facilities.
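As a rough illustration of the panel-data test described above, the sketch below fits a two-way fixed-effects regression of collateral facilities on audit quality, major-shareholder ownership, and their interaction. The variable names and synthetic data are hypothetical; the paper's actual model specification is not reproduced here.

```python
# A minimal sketch of a two-way fixed-effects panel regression, assuming
# hypothetical variable names and synthetic data.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
n_firms, years = 179, range(2017, 2023)  # sample described in the abstract
idx = pd.MultiIndex.from_product([range(n_firms), years],
                                 names=["firm", "year"])
df = pd.DataFrame(
    {"audit_quality": rng.normal(size=len(idx)),
     "major_ownership": rng.uniform(0, 1, len(idx))},
    index=idx,
)
# The interaction term is what lets the moderating role of major
# shareholders be tested.
df["interaction"] = df["audit_quality"] * df["major_ownership"]
df["collateral"] = -0.3 * df["audit_quality"] + rng.normal(size=len(df))

model = PanelOLS(
    df["collateral"],
    df[["audit_quality", "major_ownership", "interaction"]],
    entity_effects=True,  # firm fixed effects
    time_effects=True,    # year fixed effects
)
print(model.fit(cov_type="clustered", cluster_entity=True).summary)
```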
Azar Ghyasi; Hanieh Rashidi
Abstract
Due to its inherent complexity and increasing competition, today's business environment requires new approaches to organization and management. One such approach is business intelligence, one of the most critical technologies for supporting management and delivering smart services, especially business reporting. Business intelligence enables firms to manage their business efficiently and to meet needs at the macro, middle, and even operational levels. In this paper, while investigating the feasibility of implementing business intelligence in firms, we discuss the design of business intelligence for reporting and presenting new services. To demonstrate the capabilities of this type of intelligence, an approach based on Bayesian networks in the application layer of business intelligence is presented. This approach is implemented for one of the companies governed by the Iranian Industrial Development and Renovation Organization, and the effects of important accounting and financial variables on the firm's goals are investigated.
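To make the application-layer idea concrete, here is a minimal sketch of a discrete Bayesian network using the pgmpy library. The network structure, the financial variables, and all probability values are invented for illustration; the paper's actual network is not reproduced here.

```python
# A minimal sketch of a Bayesian network relating hypothetical financial
# variables to a firm goal; structure and probabilities are illustrative.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical structure: liquidity and profitability drive goal attainment.
bn = BayesianNetwork([("liquidity", "goal"), ("profitability", "goal")])
cpd_liq = TabularCPD("liquidity", 2, [[0.6], [0.4]])
cpd_prof = TabularCPD("profitability", 2, [[0.5], [0.5]])
cpd_goal = TabularCPD(
    "goal", 2,
    [[0.9, 0.6, 0.7, 0.2],   # P(goal=0 | liquidity, profitability)
     [0.1, 0.4, 0.3, 0.8]],  # P(goal=1 | liquidity, profitability)
    evidence=["liquidity", "profitability"], evidence_card=[2, 2],
)
bn.add_cpds(cpd_liq, cpd_prof, cpd_goal)

# Query the effect of observed financial evidence on the firm goal.
print(VariableElimination(bn).query(["goal"], evidence={"liquidity": 1}))
```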
Machine Learning
Mostafa Azghandi; Mahdi Yaghoobi; Elham Fariborzi
Abstract
Focusing on the fuzzy Delphi method (FDM), the current research introduces a novel approach to modeling Persian vernacular architecture. Fuzzy Delphi is a more advanced version of the Delphi method that uses triangular fuzzy statistics to measure the distance between levels of consensus within the expert panel and to handle the measurement uncertainty of qualitative data. The main objective of the Delphi method is to obtain the most reliable consensus from a group of expert opinions, an advantage that helps the current study answer its main research question: how effective is the fuzzy Delphi technique in the intelligent modeling of Persian vernacular architecture? To identify the main factors of the research model, a systematic literature review and semi-structured interviews with experts were conducted. Then, using qualitative content analysis (QCA), themes were extracted and employed as the main factors of the research model. Finally, applying the fuzzy Delphi technique, the study examined the certainty and accuracy of the factors in two stages and identified 28 factors for the modeling of Persian vernacular architecture.
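The screening step of fuzzy Delphi can be sketched as follows: expert ratings on a linguistic scale are mapped to triangular fuzzy numbers, aggregated, defuzzified, and compared with a retention threshold. The 5-point scale mapping and the 0.7 threshold below are common conventions assumed for illustration; the study's actual scale and threshold are not stated in the abstract.

```python
# A minimal sketch of fuzzy Delphi factor screening with triangular fuzzy
# numbers; the scale mapping and threshold are assumptions.
import numpy as np

# Hypothetical mapping: score 1..5 -> triangular fuzzy number (l, m, u).
TFN = {1: (0.0, 0.0, 0.25), 2: (0.0, 0.25, 0.5), 3: (0.25, 0.5, 0.75),
       4: (0.5, 0.75, 1.0), 5: (0.75, 1.0, 1.0)}

def screen(ratings, threshold=0.7):
    """Aggregate expert ratings for one factor and decide retention."""
    tfns = np.array([TFN[r] for r in ratings])
    l, u = tfns[:, 0].min(), tfns[:, 2].max()
    m = tfns[:, 1].mean()
    crisp = (l + m + u) / 3.0  # simple centroid defuzzification
    return crisp, crisp >= threshold

ratings = [4, 5, 4, 3, 5, 4]  # one factor, six hypothetical experts
print(screen(ratings))
```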
Statistical Simulation
Vadood keramati; Ramin Sadeghian; Maryam Hamedi; Ashkan Shabbak
Abstract
Record linkage is a tool for gathering information and data from different sources. It is used in government-related activities such as e-government and the production of register-based data. The method compares strings across databases, and there are different approaches to record linkage, such as deterministic and probabilistic linkage. This paper presents a proposed expert system for linking records received from multiple databases, designed to save time and reduce errors in the process of aggregating data. The inputs of the system include the linkage fields, thresholds, and string metrics, which are explained along with an evaluation of the method used. To validate the system, inputs from two databases and seven information fields, comprising 100,000 simulated records, were used. The results show that probabilistic record linkage achieves higher accuracy than deterministic linkage. Furthermore, the highest linkage rate was achieved using five fields with varying thresholds. In assessing the various string metrics, the Winkler metric achieved over 86% accuracy, while the other metrics remained below 80%. These findings demonstrate that the proposed automated system significantly saves time and enhances the flexibility of method selection.
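A minimal sketch of threshold-based probabilistic matching is shown below, using the Jaro-Winkler similarity from the jellyfish library. The field names and weights are invented, and the two cut-offs merely echo the 80%/86% figures reported in the abstract; none of this is the proposed expert system itself.

```python
# A minimal sketch of probabilistic record linkage with two decision
# thresholds; fields and thresholds are illustrative.
import jellyfish

FIELDS = ["first_name", "last_name", "city"]

def match_score(rec_a, rec_b):
    """Average Jaro-Winkler similarity over the linkage fields."""
    sims = [jellyfish.jaro_winkler_similarity(rec_a[f], rec_b[f])
            for f in FIELDS]
    return sum(sims) / len(sims)

def classify(rec_a, rec_b, upper=0.86, lower=0.80):
    score = match_score(rec_a, rec_b)
    if score >= upper:
        return "link", score
    if score >= lower:
        return "possible link", score  # routed to clerical review
    return "non-link", score

a = {"first_name": "Jon", "last_name": "Smith", "city": "Tehran"}
b = {"first_name": "John", "last_name": "Smyth", "city": "Tehran"}
print(classify(a, b))
```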
Computational Statistics
Manijeh Mahmoodi; Mohammad Reza Salehi Rad; Farzad Eskandari
Abstract
The novel coronavirus (COVID-19) spreads quickly from person to person, and one of the basic aspects of country management has been preventing the spread of this disease, so predicting its expansion is very important. In such matters, estimating new cases and new deaths of COVID-19 has been of interest to researchers. We propose a statistical model for predicting new cases and new deaths using a vector autoregressive (VAR) model with the multivariate skew-normal (MSN) distribution for the asymmetric shocks, and we use it to predict the sample data. The maximum likelihood (ML) method is applied to estimate this model for the weekly numbers of new COVID-19 cases and deaths. The data are taken from the World Health Organization (WHO) for Iran, from March 2020 to March 2023. The performance of the model is evaluated with the Akaike and Bayesian information criteria, and the mean absolute prediction error (MAPE) is interpreted.
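The basic two-variable VAR setup can be sketched as below. Note that statsmodels' VAR assumes Gaussian shocks and is used here only as a stand-in; the paper's contribution is precisely to replace the Gaussian shocks with a multivariate skew-normal distribution, which is not available off the shelf. The data are synthetic.

```python
# A minimal sketch of a weekly two-variable VAR (Gaussian stand-in for the
# MSN-shock model); all data here are synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
weeks = pd.date_range("2020-03-01", periods=150, freq="W")
cases = rng.poisson(5000, len(weeks)).astype(float)
deaths = 0.02 * cases + rng.normal(0, 20, len(weeks))
data = pd.DataFrame({"new_cases": cases, "new_deaths": deaths}, index=weeks)

fit = VAR(data).fit(2)  # fixed lag order of 2, purely for illustration
print(fit.summary())
print(fit.forecast(data.values[-fit.k_ar:], steps=4))  # 4 weeks ahead
```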
Computational Statistics
Arezoo Asoodeh; Amir Hossein Ghatari; Ehsan Bahrami Samani
Abstract
We propose a novel parametric distribution, termed the Beta Modified Exponential Power Series (BMEPS) distribution, capable of modeling increasing, decreasing, bathtub-shaped, and unimodal failure rates. Constructed by addressing a latent complementary risk problem, this distribution arises from a combination of the Beta Modified Exponential (BME) and power series distributions. Several important distributions discussed in the literature, such as the Beta Modified Exponential Poisson (BMEP), Beta Modified Exponential Geometric (BMEG), and Beta Modified Exponential Logarithmic (BMEL) distributions, are special submodels of this new distribution. This work provides a comprehensive mathematical treatment of the new distribution, offering closed-form expressions for its density, cumulative distribution, survival function, failure rate function, r-th raw moment, and moments of order statistics. Furthermore, we discuss maximum likelihood estimation and present formulas for the elements of the Fisher information matrix. Finally, to showcase the flexibility and potential applicability of the new distribution, we apply it to a real dataset.
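To illustrate the compounding construction behind the BMEP special case, the sketch below pushes a Beta Modified Exponential baseline through a zero-truncated Poisson power series, as in the complementary-risk (maximum of N) setup. The parameterization is a common one from the beta-G and modified-exponential literature and may differ in detail from the paper's.

```python
# A minimal sketch of the BMEP cdf via power-series compounding; the
# baseline parameterization is an assumption, not the paper's formula.
import numpy as np
from scipy.stats import beta

def me_cdf(x, lam, gam):
    """Modified exponential baseline: 1 - exp(-lam * x * exp(gam * x))."""
    return 1.0 - np.exp(-lam * x * np.exp(gam * x))

def bme_cdf(x, a, b, lam, gam):
    """Beta-G construction: regularized incomplete beta of the baseline."""
    return beta.cdf(me_cdf(x, lam, gam), a, b)

def bmep_cdf(x, a, b, lam, gam, theta):
    """Zero-truncated Poisson compounding (complementary risk: max of N)."""
    g = bme_cdf(x, a, b, lam, gam)
    return np.expm1(theta * g) / np.expm1(theta)

x = np.linspace(0.01, 3, 5)
print(bmep_cdf(x, a=2.0, b=1.5, lam=1.0, gam=0.3, theta=1.2))
```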
Machine Learning
Morteza Amini; Kiana Ghasemifard
Abstract
The diabetes data set gathered by Michael Kahn at Washington University, St. Louis, MO, available online at the UCI Machine Learning Repository, is one of the rarely used data sets, especially for glucose prediction in diabetic patients. In this paper, we study the problem of blood glucose range prediction, rather than raw glucose prediction, along with two other important tasks: detecting increases or decreases in glucose, and predicting abnormal values, based on the regular and NPH insulin doses in this data set. Two machine learning approaches commonly used for time series data, LSTM and CNN, are employed alongside a promising statistical regression approach, the non-parametric multivariate Gaussian additive mixed model. We observe that, although the LSTM and CNN models are preferable in terms of prediction error, the statistical method performs significantly better at abnormal value detection, a critical task for diabetic patients.
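An LSTM classifier for glucose range prediction might look like the sketch below. The window length, feature set, three range classes, and all data are invented stand-ins; the paper's exact architecture and preprocessing are not specified in the abstract.

```python
# A minimal sketch of an LSTM glucose-range classifier; architecture and
# data are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES, CLASSES = 8, 3, 3  # e.g. glucose, regular and NPH doses

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),  # low/normal/high
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in for windowed patient time series.
rng = np.random.default_rng(2)
X = rng.normal(size=(256, WINDOW, FEATURES)).astype("float32")
y = rng.integers(0, CLASSES, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1]))  # class probabilities for one window
```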
Machine Learning
Seyyed Mousa Khademi; Abbas Shams Vala; Somayyeh Jafari
Abstract
The purpose of this research is to explain the application of business intelligence in managing knowledge assets by applying co-word analysis to scientific productions related to "knowledge assets management and business intelligence". In this applied research, content analysis together with co-word analysis, social network analysis, hierarchical clustering, and strategic diagrams were used. The research corpus consists of 929 scientific productions related to "business intelligence and knowledge management" indexed in the Web of Science database from the 1990s to 2022. Data analysis was conducted using HistCite, BibExcel, UCINET, and Excel, while the maps were created using VOSviewer and SPSS. The results indicated that the average annual growth rates of publications and of production impact were 28% and 8.9%, respectively. Among the keywords, "big data," "data mining," and "data warehouse" exhibited the highest frequency; "big data," "management," and "system" the most links; and "design science," "Industry 4.0," and "discovery" the most citations. Co-word analysis yielded eight clusters comprising a total of 138 keywords. In the hierarchical clustering, five clusters, including business intelligence tools in knowledge management, infrastructures and technologies of business intelligence, and business process management through the management of knowledge assets, are considered mature and positioned at the center of this research field. By identifying the main topics and clusters in the fields of business intelligence and knowledge management, this research provides a comprehensive perspective that can be valuable for researchers, educators, policymakers, and organizational managers.
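The core co-word step is simple enough to sketch: build a keyword co-occurrence matrix from per-document keyword lists, then cluster it hierarchically. The toy keyword lists below are invented stand-ins for records exported from Web of Science, and this Python sketch merely mirrors what the study did with tools such as BibExcel and SPSS.

```python
# A minimal sketch of co-word analysis: co-occurrence matrix + Ward
# clustering; the document keyword lists are invented.
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

docs = [
    ["big data", "data mining", "business intelligence"],
    ["big data", "data warehouse", "management"],
    ["knowledge management", "business intelligence", "industry 4.0"],
    ["data mining", "data warehouse", "business intelligence"],
]
terms = sorted({t for d in docs for t in d})
idx = {t: i for i, t in enumerate(terms)}

co = np.zeros((len(terms), len(terms)))
for d in docs:
    for a, b in combinations(sorted(set(d)), 2):
        co[idx[a], idx[b]] += 1
        co[idx[b], idx[a]] += 1

# Ward clustering on co-occurrence profiles, cut into a fixed cluster count.
labels = fcluster(linkage(co, method="ward"), t=3, criterion="maxclust")
for term, cluster in zip(terms, labels):
    print(cluster, term)
```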
Bayesian Computation Statistics
Iman Makhdoom
Abstract
This article focuses on the M/M/1/K queuing model. In this model, the inter-arrival times of customers to the system are random variables with an exponential distribution with parameter λ, and the service times of customers are random variables with an exponential distribution with parameter µ. We aim to estimate the traffic intensity parameter of this model using Bayesian, E-Bayesian, and hierarchical Bayesian methods. These methods utilize the entropy loss function and appropriate prior distributions for the independent parameters λ and µ. Additionally, we employ the shrinkage-based maximum likelihood estimation method to obtain parameter estimates. To choose among the traffic intensity estimates, we introduce a decision criterion based on a cost function and a fuzzy criterion called the Average Customer Satisfaction Index (ACSI); the goal is to select the estimate with the higher ACSI index. The estimates are compared via Monte Carlo simulation and two numerical examples based on the ACSI index.
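A basic Bayesian estimate of the traffic intensity ρ = λ/µ can be sketched with conjugate gamma priors on the two exponential rates: under the entropy loss L(ρ, δ) = δ/ρ − ln(δ/ρ) − 1, the Bayes estimate is 1/E[1/ρ | data]. The priors and sample sizes below are illustrative, not the paper's, and the E-Bayesian and hierarchical layers are omitted.

```python
# A minimal sketch of Bayesian estimation of rho = lambda/mu with gamma
# priors and exponential data; priors and sample sizes are illustrative.
import numpy as np

rng = np.random.default_rng(3)
lam_true, mu_true, n = 1.5, 2.0, 200
inter_arrivals = rng.exponential(1 / lam_true, n)
service_times = rng.exponential(1 / mu_true, n)

# Gamma(a, b) priors are conjugate for exponential rates.
a_l, b_l, a_m, b_m = 1.0, 1.0, 1.0, 1.0
post_lam = rng.gamma(a_l + n, 1 / (b_l + inter_arrivals.sum()), 50_000)
post_mu = rng.gamma(a_m + n, 1 / (b_m + service_times.sum()), 50_000)

rho = post_lam / post_mu  # posterior draws of the traffic intensity
print("posterior mean:", rho.mean())
print("entropy-loss Bayes estimate:", 1 / np.mean(1 / rho))
```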
Statistical Simulation
S.M.T. K. MirMostafaee
Abstract
In this paper, we introduce a new extension of the XLindley distribution, called the exponentiated new XLindley distribution. The new model has an increasing or bathtub-shaped hazard rate function, making it suitable for modeling real-life phenomena. We study important properties of the new model, such as the moments, moment generating function, incomplete moments, mean deviations from the mean and the median, Bonferroni and Lorenz curves, mean residual life function, Rényi entropy, order statistics, and k-record values. We also address the estimation of parameters using the maximum likelihood and bootstrap methods. A Monte Carlo simulation study is conducted to evaluate the estimators discussed in the paper. Additionally, we analyze two real data applications, including rainfall and COVID-19 data sets, to demonstrate the applicability and flexibility of the new distribution. Our results show that the new model fits the data sets better than several other recognized or recently introduced distributions, based on some well-known goodness-of-fit criteria.
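Maximum likelihood fitting of an exponentiated model of this kind, F(x)^α, can be sketched numerically as below. Since the "new XLindley" cdf is not given in the abstract, the original XLindley cdf is used here as a stand-in baseline, and the data are synthetic.

```python
# A minimal sketch of ML estimation for an exponentiated XLindley-type
# model F(x)^alpha; the baseline cdf is the original XLindley, used as a
# stand-in for the "new XLindley".
import numpy as np
from scipy.optimize import minimize

def xlindley_cdf(x, t):
    return 1.0 - (1.0 + t * x / (1.0 + t) ** 2) * np.exp(-t * x)

def xlindley_pdf(x, t):
    return t**2 * (2.0 + t + x) * np.exp(-t * x) / (1.0 + t) ** 2

def neg_loglik(params, x):
    t, alpha = params
    F, f = xlindley_cdf(x, t), xlindley_pdf(x, t)
    return -np.sum(np.log(alpha) + np.log(f) + (alpha - 1.0) * np.log(F))

rng = np.random.default_rng(4)
x = rng.gamma(2.0, 1.0, 300)  # synthetic positive data for illustration
res = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,),
               bounds=[(1e-6, None), (1e-6, None)], method="L-BFGS-B")
print(res.x)  # fitted (theta, alpha)
```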
Machine Learning
Negin Bagherpour; Behrang Ebrahimi
Abstract
Feature selection is crucial for improving the quality of classification and clustering. It aims to enhance machine learning performance and reduce computational cost by eliminating irrelevant or redundant features. However, existing methods often overlook intricate feature relationships and select redundant features; dependencies frequently remain hidden or inadequately identified, mainly because traditional algorithms fail to capture nonlinear relationships. To address these limitations, novel feature selection algorithms are needed that consider intricate feature relationships and capture high-order dependencies, improving the accuracy and efficiency of data analysis.

In this paper, we introduce a feature selection algorithm based on an adjacency matrix, applicable to supervised data. The algorithm identifies pertinent features in three steps. In the first step, the correlation between each feature and the class is measured to eliminate irrelevant features. In the second step, the algorithm calculates pairwise relationships among the selected features and constructs an adjacency matrix. In the third step, clustering techniques partition the adjacency matrix into k clusters, where k is the number of desired features, and the most representative feature of each cluster is selected for subsequent analysis.

This algorithm provides a systematic approach to identifying relevant features in supervised data, thereby significantly enhancing the efficiency and accuracy of data analysis. By accounting for both linear and nonlinear dependencies between features and effectively detecting them across multiple feature sets, it overcomes the limitations of previous methods.
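The three-step scheme can be sketched as follows. Mutual information stands in for the relevance and pairwise dependence measures (it captures nonlinear dependence), and the relevance threshold, data set, and cluster count are all illustrative; the paper's actual measures may differ.

```python
# A minimal sketch of the three-step scheme: relevance filter, pairwise
# adjacency matrix, cluster-and-pick; measures and thresholds are assumed.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_classification
from sklearn.feature_selection import (mutual_info_classif,
                                       mutual_info_regression)

X, y = make_classification(n_samples=300, n_features=12, n_informative=4,
                           n_redundant=4, random_state=0)

# Step 1: drop features with negligible relevance to the class.
rel = mutual_info_classif(X, y, random_state=0)
keep = np.where(rel > 0.01)[0]

# Step 2: adjacency matrix of pairwise (possibly nonlinear) dependencies.
A = np.zeros((len(keep), len(keep)))
for i, fi in enumerate(keep):
    for j, fj in enumerate(keep):
        if i != j:
            A[i, j] = mutual_info_regression(
                X[:, [fi]], X[:, fj], random_state=0)[0]

# Step 3: cluster the adjacency rows into k groups, then keep the most
# class-relevant feature of each cluster as its representative.
k = 4
labels = AgglomerativeClustering(n_clusters=k).fit_predict(A)
selected = [keep[labels == c][np.argmax(rel[keep][labels == c])]
            for c in range(k)]
print(sorted(selected))
```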
Statistical Computing
Ali Dolati; Samane Al-sadat Mousavi; Ali Dastbaravarde
Abstract
Performance measures are essential for evaluating portfolio performance in the risk management and fund industries, with the Sharpe ratio being a widely adopted risk-adjusted metric. This ratio compares the excess expected return to its standard deviation, enabling investors to assess the returns of risk-taking activities against risk-free options. Its popularity stems from its ease of calculation and straightforward interpretation. However, the actual Sharpe ratio value is often unavailable and must be estimated empirically, typically under the assumption of normally distributed asset returns. In practice, financial assets usually exhibit non-normal distributions and nonlinear dependencies, which can compromise the accuracy of the Sharpe ratio estimate when normality is assumed. This paper challenges the normality assumption, aiming to enhance the accuracy of Sharpe ratio estimates. We investigate the impact of dependency on the Sharpe ratio of a two-asset portfolio using copulas. Theoretical findings and extensive simulations demonstrate the effectiveness of the proposed copula-based approach relative to the classic Sharpe ratio estimate.
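The copula idea can be sketched by simulating two asset returns whose dependence is a Clayton copula (sampled by the standard conditional method) with heavy-tailed t margins, then computing the empirical Sharpe ratio of an equal-weight portfolio. The copula family, parameter values, margins, and risk-free rate are all illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of a copula-driven two-asset Sharpe ratio; all
# parameters, margins, and the risk-free rate are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, theta = 10_000, 2.0  # Clayton parameter -> lower-tail dependence

# Conditional sampling of the Clayton copula.
u1 = rng.uniform(size=n)
w = rng.uniform(size=n)
u2 = (u1 ** -theta * (w ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)

# Heavy-tailed t margins instead of the usual normality assumption.
r1 = 0.06 + 0.2 * stats.t.ppf(u1, df=4) / np.sqrt(2)   # t_4 scaled to unit var
r2 = 0.04 + 0.15 * stats.t.ppf(u2, df=4) / np.sqrt(2)

port = 0.5 * r1 + 0.5 * r2
rf = 0.02  # hypothetical risk-free rate
print("Sharpe ratio:", (port.mean() - rf) / port.std(ddof=1))
```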