Neural Network
Ghasem Ahmadi
Abstract
Accurate weather prediction plays a vital role in many sectors, such as agriculture, disaster preparedness, transportation systems, and urban planning. Traditional meteorological models face challenges in capturing complex atmospheric dynamics, leading to increased reliance on artificial neural networks (ANNs) for improved forecasting accuracy. ANNs have been widely applied in meteorology due to their ability to model nonlinear relationships and temporal dependencies. Based on Sinc numerical methods, the modified Sinc neural network (MSNN) has been introduced recently. This model exploits advantageous properties of the Sinc function, such as its smoothness and oscillatory behavior, while improving the ability to model nonlinear dependencies and temporal dynamics in environmental data. This work utilizes the MSNN for time series forecasting, with its parameters adjusted by a discrete-time online Lyapunov-based learning algorithm, and then applies it to weather forecasting. The model is evaluated on datasets containing various meteorological variables. The data used in this article relate to the city of Khorramabad, Iran. The results show that despite its simple structure, the MSNN is highly effective for weather forecasting.
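The abstract does not spell out the MSNN architecture or the Lyapunov-based update rule, so the following is only a minimal Python sketch of the general idea: a small network with Sinc activations whose output weights are adapted online with a normalized, error-driven step of the kind Lyapunov stability arguments typically yield. The SincNet class, its sizes, and the synthetic series are all illustrative.

```python
import numpy as np

def sinc(x):
    # numpy's np.sinc(x) = sin(pi x)/(pi x), so np.sinc(x/pi) = sin(x)/x
    return np.sinc(x / np.pi)

class SincNet:
    """One-hidden-layer network with Sinc activations (illustrative only;
    the paper's MSNN architecture and exact Lyapunov update are not given here).
    Only the output weights are adapted online in this sketch."""
    def __init__(self, n_in, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.v = rng.normal(scale=0.5, size=n_hidden)
        self.lr = lr

    def predict(self, x):
        self.h = sinc(self.W @ x)          # hidden activations
        return self.v @ self.h

    def update(self, x, y):
        """Online step on one sample; the normalization mimics the kind of
        damping that Lyapunov-stable learning rules introduce."""
        e = self.predict(x) - y
        self.v -= self.lr * e * self.h / (1.0 + self.h @ self.h)
        return e

# Toy one-step-ahead forecasting on a synthetic "temperature" series
t = np.arange(400)
series = 10 + 8 * np.sin(2 * np.pi * t / 60) + np.random.default_rng(1).normal(0, 0.5, 400)
net, lag = SincNet(n_in=5, n_hidden=20), 5
for i in range(lag, len(series) - 1):
    net.update(series[i - lag:i], series[i])
```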
Machine Learning
Parviz Nasiri; Heydar Mokhtari Farivar
Abstract
Principal Component Analysis (PCA) is a cornerstone technique for dimensionality reduction and data analysis. However, classic PCA can exhibit instability in high-dimensional settings where the number of variables significantly exceeds the number of observations. Shrinkage-based PCA addresses this limitation by incorporating regularization into the covariance matrix estimation process, leading to more stable and interpretable results. This paper provides a robust mathematical and statistical foundation for shrinkage-based PCA, compares its performance with classic PCA, and demonstrates its advantages through theoretical analysis, numerical simulations, and real-world data experiments. It is important to note that using the idea of a shrinkage estimator increases the efficiency of the estimator; meanwhile, this paper shows that the covariance matrix estimator resulting from the shrinkage estimator is very efficient. It is also worth mentioning that, to further increase the efficiency of the shrinkage estimator, the recently discussed interval shrinkage estimator can be used.
Keywords: principal component analysis, shrinkage, estimation, covariance structures, simulation.
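As a concrete illustration of the shrinkage idea, the sketch below performs PCA on a covariance estimate shrunk toward a scaled-identity target; the paper's exact shrinkage target and intensity-selection rule may differ.

```python
import numpy as np

def shrinkage_pca(X, lam):
    """PCA on a shrunken covariance estimate (sketch):
    Sigma_hat = (1 - lam) * S + lam * (tr(S)/p) * I."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / (n - 1)                 # sample covariance
    target = np.trace(S) / p * np.eye(p)    # scaled-identity target
    sigma = (1 - lam) * S + lam * target
    evals, evecs = np.linalg.eigh(sigma)    # ascending eigenvalues
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

# High-dimensional toy setting, p >> n, where classic PCA (lam = 0) is unstable
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))
evals_classic, _ = shrinkage_pca(X, lam=0.0)
evals_shrunk, _ = shrinkage_pca(X, lam=0.5)
```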
Mathematical Computing
Farnood Freidooni; Ali Rajabpour; Sina Nasiri
Abstract
Capillary action and water uptake are fundamental and practical phenomena used in a wide range of applications, from industrial and medical to agricultural fields. This work provides a detailed numerical investigation and statistical data sampling of capillary action and water uptake, considering hysteresis associated with density, surface tension, contact angle, gravity, tube diameter, and inclination angle effects. The main computational domain is 1 mm in diameter and 5 mm in height. A pressure-based solver is chosen, and time-dependent data sampling is utilized. The working fluid is modeled as an incompressible, homogeneous Newtonian fluid with constant properties. The finite volume method on a co-located grid system is used, and algebraic multigrid schemes accelerate the solution. The code is parallelized with the Message Passing Interface (MPI), using bisection algorithms for domain partitioning. The pressure and velocity fields were coupled using the PISO algorithm. The results show that increasing the capillary tube diameter or the surface tension enhances uptake velocity by 98–100% and reduces filling time by 49–50%, respectively, though inertial and dissipative effects caused minor deviations (1–12%) in the surface tension cases. Flow velocity scaled approximately linearly with contact angle, with a doubled contact angle doubling the filling time, while gravitational acceleration induced only marginal delays with negligible impact on the meniscus, supporting its omission in engineering models. Transient meniscus asymmetry occurred in inclined tubes (45°) due to the contact angle disparity between the two halves, yet the filling duration remained identical to that of the vertical and horizontal orientations despite geometric differences in meniscus evolution.
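The full finite-volume/PISO simulation cannot be reproduced in a few lines, but the classical Lucas-Washburn reduction below relates the same parameters (surface tension, contact angle, viscosity, gravity, tube radius) in runnable form. The fluid values are generic water properties, not the paper's setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical Lucas-Washburn rise model (a 1-D reduction, NOT the paper's
# full finite-volume/PISO simulation), balancing capillary pressure,
# hydrostatic head, and viscous drag:
#   8*mu*h*dh/dt / R**2 = 2*sigma*cos(theta)/R - rho*g*h
sigma, theta = 0.072, np.deg2rad(30)   # surface tension [N/m], contact angle
rho, mu, g = 998.0, 1.0e-3, 9.81       # water density, viscosity, gravity (SI)
R = 0.5e-3                             # tube radius [m] (1 mm diameter, as above)

def washburn(t, h):
    dp = 2 * sigma * np.cos(theta) / R - rho * g * h[0]   # net driving pressure
    return [dp * R**2 / (8 * mu * max(h[0], 1e-6))]       # viscous-limited rate

sol = solve_ivp(washburn, (0.0, 2.0), [1e-5])
h_eq = 2 * sigma * np.cos(theta) / (rho * g * R)          # Jurin equilibrium height
print(f"height at t=2 s: {sol.y[0, -1]*1e3:.2f} mm (Jurin height {h_eq*1e3:.1f} mm)")
```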
Neural Network
Navid Ashraf; Shokouh Shahbeyk; Hossein Teimoori Faal
Abstract
This study explores the application of Recurrent Neural Networks (RNNs) for predicting loan defaults, with a particular emphasis on incorporating uncertainty estimation into the predictive framework. Conventional RNN models demonstrate high accuracy, but they fail to provide quantitative measures of prediction uncertainty. To address this limitation, a dual modeling approach is proposed: a standard RNN model for achieving high predictive accuracy and an uncertainty-aware RNN model incorporating Bayesian inference. The uncertainty-aware model enables enhanced risk assessment through confidence level estimation and improved capture of complex temporal dependencies in financial data. Experimental results indicate that both proposed models outperform traditional methods, with the uncertainty-aware variant offering superior risk evaluation capabilities through its probabilistic outputs. These findings contribute to advancing credit risk assessment methodologies and offer practical value for financial institutions seeking more robust default prediction systems.
Keywords: Recurrent Neural Networks (RNNs), loan default prediction, uncertainty quantification, credit risk assessment.
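The abstract does not specify the Bayesian machinery, so the sketch below uses Monte Carlo dropout, one common practical approximation to an uncertainty-aware RNN, to show how per-borrower predictive means and dispersions can be obtained. The DefaultRNN model, its dimensions, and the input layout are hypothetical.

```python
import torch
import torch.nn as nn

class DefaultRNN(nn.Module):
    """LSTM default-probability model; dropout is kept active at inference
    (Monte Carlo dropout) as one practical stand-in for the Bayesian
    uncertainty-aware variant described above (not the paper's exact model)."""
    def __init__(self, n_features, hidden=32, p_drop=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(self.drop(out[:, -1])))  # P(default)

def mc_predict(model, x, n_samples=100):
    """Sample predictions with dropout on; the mean is the point estimate
    and the std a simple per-borrower uncertainty measure."""
    model.train()                              # keep dropout stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)

model = DefaultRNN(n_features=8)
x = torch.randn(4, 12, 8)                      # 4 borrowers, 12 monthly records
p_mean, p_std = mc_predict(model, x)
```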
Statistical Computing
Fatemeh Ghapani
Abstract
This article focuses on diagnostic measures for identifying high-leverage points and influential observations in linear mixed measurement error (LMME) models. This is achieved by imposing stochastic restrictions on the parameters and incorporating the ridge estimator to tackle the issue of multicollinearity. To this end, generalized leverage matrices are defined using the restricted ridge estimator (RRE) to identify high-leverage observations. Additionally, analogs of Cook's distance and the likelihood distance are proposed to determine influential observations through a case deletion approach. Simulation studies and real-life applications support the theoretical results. To the best of our knowledge, there has been no significant attention in the literature regarding diagnostics for leverage and influence measures concerning the outcomes of the RRE in LMME models. Hence, this paper evaluates the influence of observations by using leverage and influence measures to identify influential observations on the RREs of fixed effects and the prediction of random effects in LMME models.
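For intuition, here is a simplified numpy sketch of ridge-based leverage and a Cook-type case-deletion distance in an ordinary linear model; the paper's RRE diagnostics for LMME models with stochastic restrictions are substantially more involved.

```python
import numpy as np

def ridge_diagnostics(X, y, k):
    """Leverage and a Cook-type distance for the ridge estimator in a plain
    linear model (a simplification of the paper's RRE/LMME setting)."""
    n, p = X.shape
    A = np.linalg.inv(X.T @ X + k * np.eye(p))
    H = X @ A @ X.T                          # generalized leverage matrix
    beta = A @ X.T @ y                       # ridge estimate
    resid = y - X @ beta
    s2 = resid @ resid / (n - np.trace(H))   # residual variance estimate
    h = np.diag(H)
    # Cook-type distance combining residual size and leverage
    cook = resid**2 * h / (s2 * p * (1 - h)**2)
    return h, cook

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X[:, 3] = X[:, 2] + 0.01 * rng.normal(size=50)      # induce multicollinearity
y = X @ np.array([1.0, -1.0, 2.0, 0.5]) + rng.normal(size=50)
y[10] += 8                                           # plant an influential case
leverage, cook_d = ridge_diagnostics(X, y, k=1.0)
print(np.argmax(cook_d))                             # expected to flag case 10
```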
Computational Statistics
Zahra Karimiezmareh; Behdad Mostafaiy
Abstract
In this paper, we propose a new two-parameter discrete distribution based on central Bell expansion, which is zero-inflated and designed to effectively model overdispersed count data. We study several structural properties of the proposed distribution and demonstrate that it is infinitely divisible, which adds theoretical strength and potential for wider applicability. The paper also discusses parameter estimation techniques for the distribution, focusing on two common approaches: the method of moments and the maximum likelihood estimation method. Both methods are developed and explained in detail. To evaluate the accuracy and reliability of these estimators, a simulation study is conducted across different sample sizes, allowing us to assess their performance under various conditions. To illustrate the practical importance and usefulness of the new distribution, we apply it to two real data sets and show how well it fits the observed data, reinforcing its value as a flexible tool for analyzing count data.
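The central-Bell pmf is not reproduced in the abstract, so the sketch below runs the same two-parameter maximum-likelihood workflow on a stand-in zero-inflated Poisson model for overdispersed, zero-inflated counts.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

# Stand-in zero-inflated Poisson: P(0) = pi + (1-pi)*e^(-lam),
# P(k) = (1-pi)*Poisson(k; lam) for k >= 1. The paper's central-Bell
# distribution would replace this pmf.
def zip_logpmf(x, pi, lam):
    base = poisson.pmf(x, lam)
    return np.log(np.where(x == 0, pi + (1 - pi) * base, (1 - pi) * base))

def fit_mle(x):
    nll = lambda t: -zip_logpmf(x, t[0], t[1]).sum()
    res = minimize(nll, x0=[0.3, np.mean(x) + 0.5],
                   bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
    return res.x  # (pi_hat, lambda_hat)

# Simulation check of the estimator, as in the paper's simulation study
rng = np.random.default_rng(0)
n, pi_true, lam_true = 500, 0.25, 3.0
x = np.where(rng.uniform(size=n) < pi_true, 0, rng.poisson(lam_true, n))
print(fit_mle(x))   # should be near (0.25, 3.0)
```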
Statistical Computing
Lazhar BENKHELIFA
Abstract
The paper suggests a novel model defined on the unit interval, termed the unit linear exponential distribution, constructed via an inversion of the exponential function. The proposed model contains the unit exponential and the unit Rayleigh distributions as special submodels. Fundamental properties of the introduced distribution are discussed, including the quantile function, moments, incomplete moments, probability weighted moments, order statistics, stochastic orderings, stress-strength reliability, and the Tsallis and Rényi entropies. The distribution has two unknown parameters, which are estimated utilizing the following methods: maximum likelihood, maximum product spacing, least and weighted least squares, Cramér-von Mises, and Anderson-Darling. The behavior of these estimators is assessed through a simulation study. Furthermore, the paper develops a novel quantile regression model based on the suggested distribution, which is shown to be a good alternative to existing models such as the Kumaraswamy, beta, and unit Chen quantile regression models. We estimate the parameters of the regression model utilizing maximum likelihood. Two well-known real data applications are given to prove the modeling capability of the newly suggested distribution and quantile regression model.
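Since the unit linear exponential cdf is not given in the abstract, the maximum-product-spacings method listed above can be illustrated on a stand-in Kumaraswamy distribution, another unit-interval model named in the abstract, as in this sketch.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in Kumaraswamy distribution on (0, 1): F(x) = 1 - (1 - x**a)**b.
def kuma_cdf(x, a, b):
    return 1 - (1 - x**a)**b

def mps_fit(x):
    """Maximum product spacings: maximize the mean log spacing
    D_i = F(x_(i)) - F(x_(i-1)), with F(x_(0)) = 0 and F(x_(n+1)) = 1."""
    xs = np.sort(x)
    def neg_log_spacings(t):
        F = np.concatenate(([0.0], kuma_cdf(xs, t[0], t[1]), [1.0]))
        sp = np.clip(np.diff(F), 1e-12, None)
        return -np.log(sp).mean()
    res = minimize(neg_log_spacings, x0=[1.0, 1.0],
                   bounds=[(1e-3, None), (1e-3, None)])
    return res.x

rng = np.random.default_rng(0)
u = rng.uniform(size=400)
x = (1 - (1 - u) ** (1 / 3.0)) ** (1 / 2.0)   # Kumaraswamy(a=2, b=3) sample
print(mps_fit(x))                              # should be near (2, 3)
```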
Machine Learning
Seyede Fatemeh Noorani; Mohammad Reza Mohammadi; Maryam Karimi; Nasrin Taherkhani
Abstract
In this study, a multi-stage approach based on online exam data analysis was proposed to identify and rank students according to a Cheating Rank. Initially, student response sheets were clustered using the K-means++ algorithm with dynamic determination of K, forming groups with similar characteristics. Subsequently, in later stages, each student's Cheating Rank was determined based on various behavioral and performance parameters. Results demonstrated that the Cheating Rank derived from online exam data effectively differentiates between students suspected of cheating and normal students, with statistically significant differences between the two groups. These findings underscore the validity and efficacy of the proposed method in detecting cheating in online exams. Additionally, the impact of threshold selection for group differentiation highlighted the importance of appropriate parameter tuning in enhancing detection accuracy and influencing model sensitivity and reliability. The use of in-person exam scores as a reference criterion strengthened the results' credibility and enabled more objective model evaluation. Given the limitations in sample size and data scope, future research should focus on larger, more diverse, and multidimensional datasets to improve both diagnostic accuracy and model generalizability. Furthermore, integrating this approach with advanced machine learning techniques and behavioral analytics could significantly enhance online exam integrity monitoring systems. Overall, this research represents a significant step toward developing cost-effective, reliable, and efficient methods to reduce cheating in online educational environments, fostering greater trust among instructors and students in assessment processes.
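One common way to realize "dynamic determination of K" is to scan candidate values and keep the silhouette-optimal clustering; the sketch below does that with scikit-learn's k-means++ initialization. The feature construction is a toy stand-in, and the paper's actual selection rule may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_with_dynamic_k(X, k_range=range(2, 9), seed=0):
    """K-means++ with K chosen by silhouette score (one common dynamic-K
    heuristic; the paper's rule is not detailed in the abstract)."""
    best = None
    for k in k_range:
        km = KMeans(n_clusters=k, init="k-means++", n_init=10,
                    random_state=seed).fit(X)
        score = silhouette_score(X, km.labels_)
        if best is None or score > best[0]:
            best = (score, k, km.labels_)
    return best  # (silhouette, chosen K, labels)

# Toy "response sheet" features: per-student answer-pattern statistics
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(40, 5)) for m in (0.0, 1.5, 3.0)])
score, k, labels = cluster_with_dynamic_k(X)
print(k)  # expected 3 for this synthetic data
```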
Bayesian Computation Statistics
Morteza Amini; Moein Monemi; Mahmoud Taheri; Mohammad Arashi
Abstract
We investigate the problem of weight uncertainty originally proposed by [Blundell et al. (2015). Weight uncertainty in neural networks. In International conference on machine learning, 1613-1622, PMLR.] in the context of neural networks designed for regression tasks, and we extend their framework by incorporating variance uncertainty into the model. Our analysis demonstrates that explicitly modeling uncertainty in the variance parameter can significantly enhance the predictive performance of Bayesian neural networks. By considering a full posterior distribution over the variance, the model achieves improved generalization compared to approaches that treat variance as fixed or deterministic. We evaluate the generalization capability of our proposed approach through a function approximation example and further validate it on the riboflavin genetic dataset. Our exploration encompasses both fully connected dense networks and dropout neural networks, employing Gaussian and spike-and-slab priors respectively for the network weights, providing a comprehensive assessment of how variance uncertainty affects model performance across different architectural choices.
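A minimal toy sketch of the core idea, variational posteriors over both the weights and log sigma^2 rather than a fixed noise variance, is given below for a one-parameter linear model; the priors and architecture are drastically simplified relative to the paper's dense and dropout networks.

```python
import torch

# Sketch of "variance uncertainty": mean-field Gaussian posteriors over the
# weight, bias, AND s = log sigma^2, trained by reparameterization. A toy
# linear model stands in for the paper's Bayesian neural networks.
torch.manual_seed(0)
x = torch.linspace(-2, 2, 200).unsqueeze(1)
y = 1.5 * x - 0.5 + 0.3 * torch.randn_like(x)   # true noise variance 0.09

params = {k: torch.zeros(1, requires_grad=True) for k in
          ("mu_w", "rho_w", "mu_b", "rho_b", "mu_s", "rho_s")}
opt = torch.optim.Adam(params.values(), lr=0.05)

def sample(mu, rho):
    return mu + torch.exp(rho) * torch.randn_like(mu)  # reparameterization trick

for step in range(2000):
    w = sample(params["mu_w"], params["rho_w"])
    b = sample(params["mu_b"], params["rho_b"])
    s = sample(params["mu_s"], params["rho_s"])        # sampled log-variance
    nll = 0.5 * (s + (y - (w * x + b)) ** 2 / torch.exp(s)).mean()
    # Crude zero-mean penalty standing in for the full KL-to-prior term
    kl = 0.001 * sum((params[f"mu_{n}"] ** 2).sum() for n in ("w", "b", "s"))
    opt.zero_grad(); (nll + kl).backward(); opt.step()

# exp of the posterior mean of log sigma^2: should approach the true 0.09
print(float(torch.exp(params["mu_s"])))
```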
Machine Learning
Reza Sookhtsaraei; Mehdi Sakhaei-nia; Fereshteh Azadi Parand
Abstract
In mission-critical applications, ultra-low latency and high reliability are required to facilitate accurate and timely decision-making. Although cloud platforms provide abundant computing resources, their intrinsic latency constraints make them inadequate for such latency-sensitive applications. This work explores the cloud-to-edge computing continuum as an opportunistic paradigm and presents an enhanced service placement framework using deep reinforcement learning. In particular, the proposed method leverages the Proximal Policy Optimization algorithm to carry out real-time placement decisions while adapting dynamically to environmental changes. To accelerate convergence and improve adaptability, transfer learning techniques are incorporated into the learning process. Additionally, a sensitivity-aware fault-tolerance mechanism is envisioned, which modulates its response according to the criticality level of incoming service requests to maintain seamless service continuity in the event of failure. By prioritizing reliability and minimizing response time, the model significantly enhances the rate of deadline-compliant service deliveries. Experimental tests prove that the proposed method surpasses state-of-the-art approaches in supporting delay-sensitive and mission-critical workloads, providing a robust and intelligent orchestration strategy along the computing continuum.
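As a hedged illustration of the training loop (not the paper's environment, state features, or reward design), the sketch below trains stable-baselines3's PPO on a hypothetical three-node placement task with a deadline-compliance reward.

```python
import numpy as np
import gymnasium as gym
from stable_baselines3 import PPO

class ToyPlacementEnv(gym.Env):
    """Hypothetical cloud-to-edge placement task: place each request on one
    of 3 nodes; edge nodes are fast but small, the cloud is slow but large."""
    LATENCY = np.array([1.0, 2.0, 8.0])     # two edge nodes, one cloud node
    CAPACITY = np.array([2.0, 2.0, 100.0])

    def __init__(self):
        self.observation_space = gym.spaces.Box(0.0, np.inf, shape=(4,))
        self.action_space = gym.spaces.Discrete(3)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.load, self.t = np.zeros(3), 0
        self.deadline = self.np_random.uniform(2.0, 10.0)
        return self._obs(), {}

    def _obs(self):
        return np.append(self.load, self.deadline).astype(np.float32)

    def step(self, action):
        busy = self.load[action] >= self.CAPACITY[action]
        latency = self.LATENCY[action] * (3.0 if busy else 1.0)  # overload penalty
        reward = 1.0 if latency <= self.deadline else -1.0       # deadline compliance
        self.load[action] += 1.0
        self.t += 1
        self.deadline = self.np_random.uniform(2.0, 10.0)        # next request
        return self._obs(), reward, self.t >= 50, False, {}

model = PPO("MlpPolicy", ToyPlacementEnv(), verbose=0)
model.learn(total_timesteps=20_000)
```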
Bayesian Computation Statistics
Ali Sakhaei; Iman Makhdoom
Abstract
The Weighted Marshall-Olkin Bivariate Exponential (WMOBE) distribution was first proposed by Jamalizadeh and Kundu (2013), who examined its different characteristics and properties. Bayesian estimation of the model parameters is carried out using both the squared error loss (SEL) function, which is symmetric, and the linear-exponential (LINEX) loss function, which is asymmetric. These estimators are derived under both informative and non-informative gamma priors. Given the complexity of the four-parameter model, explicit analytical solutions for the Bayesian estimators are not attainable, making it necessary to employ the Gibbs sampling procedure. Markov Chain Monte Carlo (MCMC) methods are widely utilized to compute and implement these estimates. Furthermore, the convergence behavior of the Markov chain toward a stationary distribution is carefully analyzed. Credible intervals, particularly the highest posterior density (HPD) intervals for the unknown parameters, are also constructed. To assess and compare the effectiveness of these estimation approaches, Monte Carlo simulations are performed. Finally, the methodology is applied to a real-world dataset for illustrative purposes.
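Given posterior draws from such a Gibbs sampler, the SEL and LINEX point estimates and an HPD interval follow from standard formulas, sketched here on a stand-in gamma posterior sample: SEL gives the posterior mean, and LINEX(a) gives -(1/a) log E[exp(-a theta)].

```python
import numpy as np

def bayes_estimates(theta_draws, a=0.5):
    """Point estimates from posterior (e.g. Gibbs) draws:
    SEL -> posterior mean; LINEX(a) -> -(1/a) * log E[exp(-a*theta)]."""
    sel = theta_draws.mean()
    linex = -np.log(np.mean(np.exp(-a * theta_draws))) / a
    return sel, linex

def hpd_interval(draws, cred=0.95):
    """Shortest interval containing a 'cred' fraction of sorted draws."""
    d = np.sort(draws)
    m = int(np.ceil(cred * len(d)))
    starts, ends = d[: len(d) - m + 1], d[m - 1:]
    i = np.argmin(ends - starts)
    return starts[i], ends[i]

# Stand-in posterior sample for one rate parameter (a real analysis would
# use the Gibbs draws for the four WMOBE parameters)
draws = np.random.default_rng(0).gamma(shape=5.0, scale=0.4, size=5000)
print(bayes_estimates(draws), hpd_interval(draws))
```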
Bayesian Computation Statistics
Mohammad Mehdi Abdollahi; M. Bameni Moghadam
Abstract
This study investigates the application of Bayesian two-sample testing in Hilbert space to monitor the health status of industrial equipment. The proposed method evaluates distributional differences between operational data samples to detect faults or anomalies. Unlike traditional multivariate techniques, our approach provides higher sensitivity to subtle distributional shifts and supports visual insights into posterior distributions. Real-world experiments on industrial sensor outputs demonstrate the method's effectiveness in early fault detection, reducing maintenance costs and downtime. This makes Bayesian two-sample testing in Hilbert space a powerful tool for predictive maintenance strategies.
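The Hilbert-space discrepancy underlying such tests can be illustrated with the squared maximum mean discrepancy (MMD) between kernel mean embeddings; the Bayesian layer the paper adds on top of this discrepancy is not shown, and all data below are synthetic.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=0.5):
    """Squared maximum mean discrepancy: the distance between the samples'
    kernel mean embeddings in a reproducing-kernel Hilbert space (biased
    estimator; only the frequentist statistic is sketched here)."""
    return (rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean()
            - 2 * rbf(X, Y, gamma).mean())

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(200, 3))        # baseline sensor windows
faulty = rng.normal(0.3, 1.3, size=(200, 3))         # subtly shifted condition
null = mmd2(healthy, rng.normal(0.0, 1.0, size=(200, 3)))
print(mmd2(healthy, faulty), null)                   # shifted pair scores higher
```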