Document Type: Original

Authors

1 Department of Computer Engineering, Faculty of Engineering, Bu-Ali Sina University, Hamedan, Iran

2 Department of Computer Engineering, Faculty of Engineering, Bu-Ali Sina University, Hamedan, Iran

3 Department of Computer Science, Allameh Tabataba'i University, Tehran, Iran

10.22054/jdsm.2026.86787.1073

Abstract

In mission-critical applications, ultra-low latency and high reliability are required for accurate and timely decision-making. Although cloud platforms provide abundant computing resources, their intrinsic latency makes them inadequate for such latency-sensitive applications. This work explores the cloud-to-edge computing continuum as a promising paradigm and presents an enhanced service placement framework based on deep reinforcement learning. In particular, the proposed method leverages the Proximal Policy Optimization (PPO) algorithm to make real-time placement decisions while adapting dynamically to environmental changes. To accelerate convergence and improve adaptability, transfer learning techniques are incorporated into the learning process. Additionally, a sensitivity-aware fault-tolerance mechanism is introduced that modulates its response according to the criticality of incoming service requests, maintaining seamless service continuity in the event of failures. By prioritizing reliability and minimizing response time, the model significantly increases the rate of deadline-compliant service deliveries. Experimental results show that the proposed method outperforms state-of-the-art approaches in supporting delay-sensitive and mission-critical workloads, providing a robust and intelligent orchestration strategy across the computing continuum.
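To make the sensitivity-aware fault-tolerance idea concrete, the sketch below maps a request's criticality score to a recovery strategy (replication factor, preferred placement tier, and checkpoint frequency). This is only an illustrative sketch: the thresholds, tier names, and the `PlacementDecision` structure are assumptions for exposition, not values taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class PlacementDecision:
    replicas: int      # concurrent service instances kept alive
    tier: str          # preferred layer of the cloud-to-edge continuum
    checkpoint_s: int  # checkpoint interval in seconds


def fault_tolerance_policy(criticality: float) -> PlacementDecision:
    """Map a criticality score in [0, 1] to a recovery strategy.

    Higher criticality -> more replicas, placement closer to the edge
    (lower latency on failover), and more frequent checkpoints.
    Thresholds here are hypothetical.
    """
    if not 0.0 <= criticality <= 1.0:
        raise ValueError("criticality must be in [0, 1]")
    if criticality >= 0.8:   # mission-critical request
        return PlacementDecision(replicas=3, tier="edge", checkpoint_s=5)
    if criticality >= 0.5:   # delay-sensitive request
        return PlacementDecision(replicas=2, tier="edge", checkpoint_s=30)
    return PlacementDecision(replicas=1, tier="cloud", checkpoint_s=120)
```

In this sketch, low-criticality services fall back to the resource-rich cloud tier, while critical ones are replicated at the edge so that a node failure does not break deadline compliance.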
