
A Latency-Aware Power-Efficient Reinforcement Learning Approach for Task Offloading in Multi-Access Edge Networks

EasyChair Preprint 9329

6 pages
Date: November 16, 2022

Abstract

Since some cloud resources are deployed as edge servers near mobile devices, these devices can offload some of their tasks to those servers, accelerating task execution to meet the growing computing demands of mobile applications. Various approaches have been proposed for making offloading decisions. In this paper we present a Reinforcement Learning (RL) approach that accounts for delayed feedback from the environment, a more realistic setting than the one assumed by conventional RL methods. Simulation results show that the proposed method handles the randomly delayed feedback of the environment properly and significantly outperforms conventional reinforcement learning methods.
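
The paper's model and algorithm details are not reproduced on this page, so the following is only an illustrative sketch: a minimal tabular Q-learning loop for binary offloading decisions (execute locally vs. offload to an edge server) in which the reward for each action arrives after a random delay, and the Q-update is therefore deferred until that feedback becomes available. All names, state/action spaces, cost terms, and hyperparameters below are assumptions made for illustration, not the authors' actual formulation.

import heapq
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
N_STATES, N_ACTIONS = 4, 2               # coarse task-size levels x {local, offload} (assumed)
MAX_DELAY = 3                            # feedback may arrive up to 3 steps late (assumed)

Q = defaultdict(lambda: [0.0] * N_ACTIONS)
pending = []   # min-heap of (arrival_step, state, action, reward, next_state)

def reward(state, action):
    # Toy latency/energy cost (hypothetical): offloading pays off for larger tasks.
    latency = (1 + state) * (0.3 if action == 1 else 1.0) + 0.1 * random.random()
    energy = 0.5 if action == 1 else 0.2 * (1 + state)
    return -(latency + energy)

state = random.randrange(N_STATES)
for step in range(10_000):
    # epsilon-greedy action selection: 0 = execute locally, 1 = offload
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])

    next_state = random.randrange(N_STATES)          # i.i.d. task arrivals (assumed)
    arrival = step + random.randint(1, MAX_DELAY)    # feedback arrives after a random delay
    heapq.heappush(pending, (arrival, state, action, reward(state, action), next_state))

    # apply Q-updates only for feedback whose delay has elapsed by the current step
    while pending and pending[0][0] <= step:
        _, s, a, r, s2 = heapq.heappop(pending)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])

    state = next_state

print({s: [round(q, 2) for q in Q[s]] for s in range(N_STATES)})

Buffering each transition until its delayed reward arrives keeps the standard Q-learning update unchanged; only the time at which it is applied shifts, which is one simple way to accommodate randomly delayed environment feedback.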

Keyphrases: Mobile Edge Computing, Reinforcement Learning, Task Offloading

BibTeX entry
BibTeX does not have a suitable entry type for preprints; the following @booklet entry is a workaround for producing a correct reference:
@booklet{EasyChair:9329,
  author       = {Ali Aghasi and Rituraj Rituraj},
  title        = {A Latency-Aware Power-Efficient Reinforcement Learning Approach for Task Offloading in Multi-Access Edge Networks},
  howpublished = {EasyChair Preprint 9329},
  year         = {2022}
}