A RUDN University mathematician and his colleagues from China, Egypt, Saudi Arabia, the United Kingdom, and Qatar have developed an algorithm that distributes computing tasks between IoT devices and the cloud in an optimal way. As a result, power and time costs are reduced by about a factor of three. The study was published in Big Data.
With the development of technologies and devices, Internet of Things (IoT) applications require ever more computing power. The amount of data that IoT devices need to process can be so large that it makes sense to migrate the computing to the cloud. Cloud computing provides flexible data processing and storage capabilities. But computation offloading, that is, transferring resource-intensive processes to the cloud, requires additional power and time. In addition, while information is in transit, the device is less protected from malware. The RUDN mathematician and his colleagues proposed an algorithm that reduces offloading time and power consumption by a factor of 2.8. The algorithm also protects the devices during the offloading period with additional encryption.
“Numerous approaches and models for computation offloading have emerged in the literature with the goal of decreasing energy consumption, reducing computation latency, and/or allocating radio resources efficiently. However, obtaining an optimal offloading solution in complex and dynamic multi-user wireless systems is a challenging task. In addition, the security of data during transmission from mobile devices to edge devices is a challenge due to, for example, sniffing, jamming, and eavesdropping attacks. These security threats have not been addressed in most offloading approaches in the literature. We present a deep reinforcement learning model to handle performance optimization in multi-user and multi-task systems that is capable of protecting data during transmission to the edge server,” said Ammar Muthanna, PhD, of RUDN University’s Applied Mathematics and Communications Technology Institute.
The mathematicians developed the offloading model within the framework of mobile edge computing (MEC). The model describes a set of devices connected to the network — phones, computers, smart devices, and other gadgets — and a wireless base station that provides computational and storage services. Each device has a certain number of tasks to complete locally or transmit to the base station. The mathematicians described the amount of power and time needed to compute and transmit the data in the form of a system of equations. For this system, the researchers solved the optimization problem, meaning they found an algorithm that distributes the tasks between the devices and the base station so that the costs are minimal. To do this, the researchers used machine learning.
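The trade-off behind such a model can be sketched with a toy cost function. The sketch below uses made-up CPU frequencies, link rates, and power figures (the paper's actual model is far richer and is optimized with deep reinforcement learning rather than a per-task comparison):

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles required to finish the task
    data_bits: float   # input data to upload if the task is offloaded

# Hypothetical system parameters (assumed for illustration only).
LOCAL_FREQ = 1e9        # device CPU speed, cycles/s
EDGE_FREQ = 10e9        # base-station server speed, cycles/s
UPLINK_RATE = 5e6       # wireless uplink, bits/s
LOCAL_POWER = 0.9       # W drawn while computing locally
TX_POWER = 1.3          # W drawn while transmitting
W_TIME, W_ENERGY = 0.5, 0.5  # weights in the joint time/energy cost

def local_cost(t: Task) -> float:
    """Weighted cost of executing the task on the device itself."""
    delay = t.cycles / LOCAL_FREQ
    return W_TIME * delay + W_ENERGY * LOCAL_POWER * delay

def offload_cost(t: Task) -> float:
    """Weighted cost of uploading the task and running it at the edge."""
    tx_delay = t.data_bits / UPLINK_RATE
    delay = tx_delay + t.cycles / EDGE_FREQ
    energy = TX_POWER * tx_delay  # device spends energy only on transmission
    return W_TIME * delay + W_ENERGY * energy

def decide(tasks):
    """Pick the cheaper option per task. This greedy per-task search stands
    in for the paper's learned policy over the joint multi-user state."""
    return ["offload" if offload_cost(t) < local_cost(t) else "local"
            for t in tasks]

# A compute-heavy task with little data favors the edge; a data-heavy
# task with modest compute favors staying local.
print(decide([Task(cycles=2e9, data_bits=1e6),
              Task(cycles=5e8, data_bits=4e7)]))  # → ['offload', 'local']
```

The decision flips with the uplink rate: a faster channel makes offloading cheap even for data-heavy tasks, which is why the optimum depends on the whole system state rather than on each task in isolation.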
They tested the algorithm experimentally in a simulation of a real system with five mobile devices, each running a face-recognition app. From a computational point of view, this application consists of three independent tasks: face detection, pre-processing, and feature extraction and classification, for a total of 15 tasks executed simultaneously. The mathematicians compared the model with three other approaches: processing all computational tasks locally, processing them all remotely, and distributing them between the base station and the mobile devices at random. The new algorithm reduced the cost of computation by a factor of 2 to 4, depending on the number of devices connected to the system, or by 64.7% on average.
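Why an optimized split always beats these baselines can be shown with a toy comparison. The costs below are arbitrary random numbers, not figures from the paper; the point is only that a per-task optimum can never cost more than all-local, all-remote, or random assignment:

```python
import random

random.seed(0)
N_TASKS = 15  # 5 devices x 3 face-recognition subtasks, as in the experiment

# Hypothetical per-task costs (arbitrary units) for the two placements.
local_costs = [random.uniform(1.0, 4.0) for _ in range(N_TASKS)]
remote_costs = [random.uniform(0.5, 5.0) for _ in range(N_TASKS)]

all_local = sum(local_costs)                      # baseline 1: everything on-device
all_remote = sum(remote_costs)                    # baseline 2: everything at the edge
random_split = sum(random.choice(pair)            # baseline 3: coin-flip placement
                   for pair in zip(local_costs, remote_costs))
optimized = sum(min(l, r)                         # pick the cheaper side per task
                for l, r in zip(local_costs, remote_costs))

# The per-task optimum is a lower bound on every baseline by construction.
assert optimized <= min(all_local, all_remote, random_split)
```

In the real system the gain is smaller than this idealized bound suggests, because tasks compete for shared radio and server resources, which is exactly what the learned policy has to account for.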
“In ongoing and future work, a new effective compression layer will be added to our model. This addition will compress the transmitted data to reduce the transmission time and enhance the overall system performance,” said Ammar Muthanna of RUDN University.