Praveen Kumar Donta
Senior Lecturer
About me
Praveen Kumar Donta is currently a Senior Lecturer at the Department of Computer and Systems Sciences, Stockholm University, Sweden. He worked in the Distributed Systems Group at TU Wien as a postdoctoral researcher from July 2021 to June 2024. He received his PhD from the Department of Computer Science and Engineering at the Indian Institute of Technology (Indian School of Mines), Dhanbad, in June 2021. During his doctoral studies he was a visiting researcher at the Mobile & Cloud Lab, University of Tartu, Estonia, from July 2019 to January 2020. He received his Master's and Bachelor's degrees in Technology from the Department of Computer Science and Engineering at JNTUA, Ananthapur, with distinction, in 2014 and 2012, respectively. He is a Senior Member of IEEE and a Professional Member of ACM. He serves on the editorial boards of the IEEE Internet of Things Journal, Computing (Springer), ETT (Wiley), PLOS One, and the Measurement and Computer Communications (Elsevier) journals. His current research focuses on learning-driven distributed computing continuum systems, causal and conscious continuum systems, and intelligent data protocols.
Research
- Distributed Computing Continuum Systems
- Learning Techniques in IoT
- AI/ML for Computing Systems
- Cognition and Causality in Computing Systems
- Cyber-physical Continuum
Publications
A selection from Stockholm University's publication database
- Equilibrium in the Computing Continuum through Active Inference
2024. Boris Sedlak (et al.). Future Generation Computer Systems 160, 92-108
Article: Computing Continuum (CC) systems are challenged to ensure the intricate requirements of each computational tier. Given the system’s scale, the Service Level Objectives (SLOs), which are expressed as these requirements, must be disaggregated into smaller parts that can be decentralized. We present our framework for collaborative edge intelligence, enabling individual edge devices to (1) develop a causal understanding of how to enforce their SLOs and (2) transfer knowledge to speed up the onboarding of heterogeneous devices. Through collaboration, they (3) increase the scope of SLO fulfillment. We implemented the framework and evaluated a use case in which a CC system is responsible for ensuring Quality of Service (QoS) and Quality of Experience (QoE) during video streaming. Our results showed that edge devices required only ten training rounds to ensure four SLOs; furthermore, the underlying causal structures were also rationally explainable. The addition of new types of devices can be done a posteriori; the framework allowed them to reuse existing models, even though the device type had been unknown. Finally, rebalancing the load within a device cluster allowed individual edge devices to recover their SLO compliance after a network failure from 22% to 89%.
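To make the SLO idea above concrete, here is a minimal, self-contained Python sketch (not the authors' implementation; all names, metrics, and thresholds are invented) of an edge device checking a handful of SLOs over its runtime metrics and lowering video resolution when one of them is violated. The paper's framework additionally learns the causal structure behind such violations and shares it across devices; the sketch shows only the reactive check.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class SLO:
        name: str
        predicate: Callable[[Dict[str, float]], bool]  # True if the SLO is fulfilled

    def violated_slos(slos: List[SLO], metrics: Dict[str, float]) -> List[str]:
        """Return the names of the SLOs that the current metrics violate."""
        return [s.name for s in slos if not s.predicate(metrics)]

    def adapt(config: Dict[str, int], violated: List[str]) -> Dict[str, int]:
        """Toy adaptation policy: step the resolution down while anything is violated."""
        if violated:
            config["resolution"] = max(config["resolution"] - 240, 240)
        return config

    slos = [
        SLO("latency", lambda m: m["latency_ms"] <= 100),
        SLO("fps",     lambda m: m["fps"] >= 24),
        SLO("energy",  lambda m: m["power_w"] <= 6.0),
        SLO("quality", lambda m: m["resolution"] >= 480),
    ]

    config = {"resolution": 1080}
    metrics = {"latency_ms": 140, "fps": 27, "power_w": 5.2, "resolution": config["resolution"]}

    violated = violated_slos(slos, metrics)
    print("violated SLOs:", violated)            # ['latency']
    print("adapted config:", adapt(config, violated))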
- Edge Intelligence—Research Opportunities for Distributed Computing Continuum Systems
2023. Victor Casamayor Pujol (et al.). IEEE Internet Computing 27 (4), 53-74
ArtikelEdge intelligence and, by extension, any distributed computing continuum system will bring to our future society a plethora of new and useful applications, which will certainly revolutionize our way of living. Nevertheless, managing these systems challenges all previously developed technologies for Internet-distributed systems. In this regard, this article presents a set of techniques and concepts that can help manage these systems; these are framed in the main paradigm for autonomic computing, the well-known monitor–analyze–plan–execute over shared knowledge, or MAPE-K. All in all, this article aims at unveiling research opportunities for these new systems, encouraging the community to work together toward new technologies to make edge intelligence a reality.
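The MAPE-K loop mentioned in the abstract can be illustrated with a short Python sketch. The class name, the single scaling knob ("replicas"), and the synthetic latency monitor below are assumptions made for illustration, not an API from the article.

    import random

    class MapeK:
        def __init__(self):
            # K: shared knowledge used by all four phases
            self.knowledge = {"replicas": 1, "target_latency_ms": 100}

        def monitor(self):
            # Stand-in for real telemetry: latency falls as replicas grow.
            return {"latency_ms": random.uniform(120, 260) / self.knowledge["replicas"]}

        def analyze(self, symptoms):
            # A symptom exists if the latency objective is missed.
            return symptoms["latency_ms"] > self.knowledge["target_latency_ms"]

        def plan(self):
            # Simplest possible plan: add one replica.
            return {"replicas": self.knowledge["replicas"] + 1}

        def execute(self, change):
            self.knowledge.update(change)

        def step(self):
            symptoms = self.monitor()
            if self.analyze(symptoms):
                self.execute(self.plan())
            return symptoms["latency_ms"], self.knowledge["replicas"]

    loop = MapeK()
    for _ in range(5):
        print("latency=%.1f ms, replicas=%d" % loop.step())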
- On Distributed Computing Continuum Systems
2023. Schahram Dustdar, Victor Casamayor Pujol, Praveen Kumar Donta. IEEE Transactions on Knowledge and Data Engineering 35 (4), 4092-4105
Article: This article presents our vision of the need to develop new management technologies to harness distributed “computing continuum” systems. These systems are concurrently executed in multiple computing tiers: Cloud, Fog, Edge, and IoT. This simple idea develops manifold challenges due to the inherent complexity inherited from the underlying infrastructures of these systems, which makes current methodologies for managing Internet distributed systems inappropriate, since those were built for early client/server architectures and systems completely specified by the application software. We present a new methodology to manage distributed “computing continuum” systems. It is based on a mathematical artifact called the Markov blanket, which sets these systems in a Markovian space more suitable for coping with their complex characteristics. Furthermore, we develop the concept of equilibrium for these systems, providing a more flexible management framework compared with the threshold-based one currently in use for Internet-based distributed systems. Finally, we also link the equilibrium with the development of adaptive mechanisms. However, we are aware that developing the entire methodology requires a big effort and the use of learning techniques; therefore, we finish this article with an overview of the techniques required to develop this methodology.
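The Markov blanket used here has a compact definition: for a variable X in a directed acyclic graph, it is the union of X's parents, X's children, and the other parents of those children. A small stand-alone Python sketch on an invented graph of system variables:

    def markov_blanket(dag, node):
        """Blanket of `node` = parents ∪ children ∪ other parents of its children."""
        parents   = {u for u, kids in dag.items() if node in kids}
        children  = set(dag.get(node, ()))
        coparents = {u for c in children
                       for u, kids in dag.items() if c in kids and u != node}
        return parents | children | coparents

    # Toy causal graph; edges point from cause to effect.
    dag = {
        "workload":   {"cpu_load", "latency"},
        "cpu_load":   {"latency", "energy"},
        "network_bw": {"latency"},
        "latency":    set(),
        "energy":     set(),
    }

    print(markov_blanket(dag, "cpu_load"))
    # {'workload', 'latency', 'energy', 'network_bw'}

Conditioned on its blanket, "cpu_load" is independent of everything outside it, which is what allows each component of the continuum to be managed with only local information.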
- Markov Blanket Composition of SLOs
2024. Boris Sedlak (et al.). 2024 IEEE International Conference on Edge Computing and Communications (EDGE), 128-138
Conference: Smart environments use composable microservices pipelines to process Internet of Things (IoT) data, where each service is dependent on the outcome of its predecessor. To ensure Quality of Service (QoS), individual services must fulfill Service Level Objectives (SLOs); however, SLO fulfillment is dependent on resources (e.g., processing or storage), which are scarcely available within the Edge. Hence, when distributing services over heterogeneous devices, this raises the question of where to deploy each service to best fulfill both its own SLOs as well as those imposed by dependent services. In this paper, we maximize SLO fulfillment of a pipeline-based application by analyzing these dependencies. To achieve this, services and hosting devices alike are extended with a Markov blanket (MB) - a probabilistic view into their internal processes - which are composed into one overarching model. Given a mutable set of services, hosts, and SLOs, the composed MB allows inferring the optimal assignment between services and edge devices. We evaluated our method for a smart city scenario, which assigned pipelined services (e.g., video processing) under constraints from subsequent services (e.g., consumer latency). The results showed how our method can support infrastructure providers by optimizing SLO fulfillment for arbitrary devices currently available.
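As a rough illustration of the assignment problem described above (not the paper's Markov-blanket inference), the sketch below brute-forces all service-to-device assignments, discards those that exceed device CPU capacity, and keeps the one meeting the most latency SLOs. All services, devices, and numbers are invented.

    from itertools import product

    services = {"detect": {"cpu": 2.0, "max_latency": 50},
                "track":  {"cpu": 1.0, "max_latency": 80}}
    devices  = {"edge-a": {"cpu": 3.0, "latency": 40},
                "edge-b": {"cpu": 2.0, "latency": 70}}

    def slos_met(assignment):
        """Score an assignment: -1 if infeasible, else number of latency SLOs met."""
        used = {d: 0.0 for d in devices}
        met = 0
        for svc, dev in assignment.items():
            used[dev] += services[svc]["cpu"]
            if devices[dev]["latency"] <= services[svc]["max_latency"]:
                met += 1
        if any(used[d] > devices[d]["cpu"] for d in devices):
            return -1
        return met

    best = max((dict(zip(services, combo))
                for combo in product(devices, repeat=len(services))),
               key=slos_met)
    print(best, "SLOs met:", slos_met(best))

Brute force only works for tiny instances; the point of the paper is precisely to avoid this enumeration by inferring the assignment from the composed Markov blankets.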
- A Privacy Enforcing Framework for Data Streams on the Edge
2024. Boris Sedlak (et al.). IEEE Transactions on Emerging Topics in Computing, 1-12
Article: Recent developments in machine learning (ML) allow for efficient data stream processing and also help in meeting various privacy requirements. Traditionally, predefined privacy policies are enforced in resource-rich and homogeneous environments such as in the cloud to protect sensitive information from being exposed. However, large amounts of data streams generated from heterogeneous IoT devices often result in high computational costs, cause network latency, and increase the chance of data interruption as data travels away from the source. Therefore, this article proposes a novel privacy-enforcing framework for transforming data streams by executing various privacy policies close to the data source. To achieve our proposed framework, we enable domain experts to specify high-level privacy policies in a human-readable form. Then, the edge-based runtime system analyzes data streams (i.e., generated from nearby IoT devices), interprets privacy policies (i.e., deployed on edge devices), and transforms data streams if privacy violations occur. Our proposed runtime mechanism uses a Deep Neural Networks (DNN) technique to detect privacy violations within the streamed data. Furthermore, we discuss the framework, processes of the approach, and the experiments carried out on a real-world testbed to validate its feasibility and applicability.
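A minimal sketch of the enforcement step, assuming a single human-readable policy and using a regular expression as a stand-in for the DNN-based violation detector described in the article; the policy format and field names are invented.

    import re

    # Hypothetical policy: redact anything in "payload" that looks like an SSN.
    POLICY = {"field": "payload", "forbid": r"\b\d{3}-\d{2}-\d{4}\b", "action": "redact"}

    def enforce(record, policy=POLICY):
        """Transform a stream record on the edge device if it violates the policy."""
        value = record.get(policy["field"], "")
        if re.search(policy["forbid"], value):   # privacy violation detected
            value = re.sub(policy["forbid"], "[REDACTED]", value)
            record = dict(record, **{policy["field"]: value})
        return record

    stream = [
        {"device": "cam-3", "payload": "badge 123-45-6789 entered gate"},
        {"device": "cam-3", "payload": "no person detected"},
    ]
    for rec in stream:
        print(enforce(rec))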
- Invited Paper: DeepSLOs for the Computing Continuum
2024. Victor Casamayor Pujol (et al.). ApPLIED'24
Conference: The advent of the computing continuum, i.e., the blending of all existing computational tiers, calls for novel techniques and methods that consider its complex dynamics. This work presents the DeepSLO as a novel design paradigm to define and structure Service Level Objectives (SLOs) for distributed computing continuum systems. Hence, when multiple stakeholders are involved, the DeepSLO allows them to plan the overarching behaviors of the system. Further, the techniques employed (Bayesian networks, Markov blanket, Active inference) provide autonomy and decentralization to each SLO, while the DeepSLO hierarchy remains to account for objective dependencies. Finally, DeepSLOs are represented graphically, as are individual SLOs, enabling a human interpretation of the system performance.
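One way to picture a DeepSLO is as a tree of SLOs in which a parent objective holds only when its own check and all of its children hold. The sketch below is purely illustrative; the structure, names, and thresholds are not taken from the paper.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class SLONode:
        name: str
        check: Callable[[Dict[str, float]], bool]
        children: List["SLONode"] = field(default_factory=list)

        def evaluate(self, metrics):
            # A node is fulfilled only if its own check and every child hold.
            ok = self.check(metrics) and all(c.evaluate(metrics) for c in self.children)
            print(f"{self.name}: {'fulfilled' if ok else 'violated'}")
            return ok

    root = SLONode("end-to-end-qoe", lambda m: m["stalls"] == 0, [
        SLONode("edge-latency",  lambda m: m["latency_ms"] <= 100),
        SLONode("cloud-backlog", lambda m: m["queue_len"] <= 10),
    ])

    root.evaluate({"stalls": 0, "latency_ms": 85, "queue_len": 14})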
- Cooperative Transmission Scheduling and Computation Offloading With Collaboration of Fog and Cloud for Industrial IoT Applications
2023. Abhishek Hazra (et al.). IEEE Internet of Things Journal 10 (5), 3944-3953
Article: Energy consumption for large amounts of delay-sensitive applications brings serious challenges with the continuous development and diversity of Industrial Internet of Things (IIoT) applications in fog networks. In addition, conventional cloud technology cannot adhere to the delay requirement of sensitive IIoT applications due to long-distance data travel. To address this bottleneck, we design a novel energy–delay optimization framework called transmission scheduling and computation offloading (TSCO), while maintaining energy and delay constraints in the fog environment. To achieve this objective, we first present a heuristic-based transmission scheduling strategy to transfer IIoT-generated tasks based on their importance. Moreover, we also introduce a graph-based task-offloading strategy using constrained-restricted mixed linear programming to handle high traffic in rush-hour scenarios. Extensive simulation results illustrate that the proposed TSCO approach significantly optimizes energy consumption and delay up to 12%–17% during computation and communication over the traditional baseline algorithms.
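The two ingredients named in the abstract, importance-based transmission scheduling and delay-aware offloading, can be caricatured in a few lines of Python. This is not the TSCO algorithm itself (which uses a graph-based formulation and mixed linear programming); every task, tier, and number below is invented.

    tasks = [  # (name, importance, deadline_ms, megacycles)
        ("valve-alarm", 3, 20,  5),
        ("vibration",   2, 60, 40),
        ("log-batch",   1, 500, 200),
    ]
    tiers = {"fog":   {"rtt_ms": 5,  "mcycles_per_ms": 2},
             "cloud": {"rtt_ms": 60, "mcycles_per_ms": 20}}

    def delay(task, tier):
        """Estimated completion delay: network round trip plus compute time."""
        _, _, _, cycles = task
        t = tiers[tier]
        return t["rtt_ms"] + cycles / t["mcycles_per_ms"]

    # (1) heuristic schedule: most important first, earliest deadline breaks ties
    schedule = sorted(tasks, key=lambda t: (-t[1], t[2]))

    # (2) greedy offloading decision per task
    for task in schedule:
        name, _, deadline, _ = task
        best = min(tiers, key=lambda tier: delay(task, tier))
        ok = delay(task, best) <= deadline
        print(f"{name}: {best} ({delay(task, best):.1f} ms, "
              f"deadline {deadline} ms, {'ok' if ok else 'missed'})")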
- Towards Intelligent Data Protocols for the Edge
2023. Praveen Kumar Donta, Schahram Dustdar.
Conference: The computing continuum is growing as more devices are added daily. Edge devices play a key role in this because computation is decentralized or distributed. Edge computing is advanced by using AI/ML algorithms to become more intelligent. In addition, edge data protocols are used for transmitting and receiving data between devices. Efficient computation is possible only when data reaches the Edge in a timely manner, which in turn requires data protocols that are efficient, reliable, and fast. Most edge data protocols are defined with a static set of rules, and their primary purpose is to provide standardized and reliable data communication. Edge devices need autonomous or dynamic protocols that enable interoperability, autonomous decision making, scalability, and adaptability. This paper examines the limitations of popular data protocols used in edge networks, the need for intelligent data protocols, and their implications. We also explore possible ways to simplify learning for edge devices and discuss how intelligent data protocols can mitigate challenges such as congestion, message filtering, message expiration, prioritization, and resource handling.
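As a hint of what "intelligence" in an edge data protocol could mean in practice, the sketch below shows a sender that prioritizes messages, drops expired ones, and backs off its send rate when congestion is signalled. It is an invented example, not a protocol specification from the paper.

    import heapq, time

    class AdaptiveSender:
        def __init__(self):
            self.queue = []           # heap of (priority, expiry_ts, payload)
            self.interval = 0.01      # seconds between sends

        def publish(self, payload, priority=5, ttl=1.0):
            # Lower priority number = sent earlier; ttl implements message expiration.
            heapq.heappush(self.queue, (priority, time.time() + ttl, payload))

        def on_congestion(self, congested):
            # Multiplicative backoff under congestion, gentle recovery otherwise.
            self.interval = (min(self.interval * 2, 1.0) if congested
                             else max(self.interval * 0.9, 0.01))

        def send_next(self):
            while self.queue:
                priority, expiry, payload = heapq.heappop(self.queue)
                if time.time() <= expiry:          # skip expired messages
                    print(f"send p{priority}: {payload} (interval {self.interval:.3f}s)")
                    return payload
            return None

    s = AdaptiveSender()
    s.publish("temp=21.5", priority=5)
    s.publish("overpressure!", priority=0, ttl=0.5)
    s.on_congestion(True)
    s.send_next()   # the high-priority alarm goes out first
    s.send_next()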
Show all publications by Praveen Kumar Donta at Stockholm University