Multi-Objective Reinforcement Learning for Reconfiguring Data Stream Analytics on Edge Computing

Published in ICPP, 2019


Abstract

There is increasing demand for handling massive amounts of data in a timely manner via Data Stream Processing (DSP). A DSP application is often structured as a directed graph whose vertices are operators that perform transformations over the incoming data and whose edges represent the data streams between operators. DSP applications are traditionally deployed on the Cloud to exploit its virtually unlimited pool of resources. Edge computing has emerged as a suitable paradigm for executing parts of DSP applications by offloading certain operators from the Cloud and placing them close to where the data is generated, hence minimising the overall time required to process data events (i.e., the end-to-end latency). Operator reconfiguration consists of changing the initial placement by reassigning operators to different devices in order to meet target performance metrics. In this work, we model operator reconfiguration as a Reinforcement Learning (RL) problem and define a multi-objective reward that accounts for the cost of reconfiguration as well as improvements to the infrastructure and the application. Experimental results show that reconfiguration algorithms that minimise only end-to-end processing latency can have a substantial impact on WAN traffic and communication cost. The results also demonstrate that, when reconfiguring operators, RL algorithms improve the performance of the initial placement provided by state-of-the-art approaches by over 50%.
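To illustrate the idea of a multi-objective reward that balances reconfiguration cost against infrastructure and application improvements, the sketch below combines relative gains on several metrics into a single scalar. The metric names, weights, and penalty term are illustrative assumptions for exposition, not the exact formulation used in the paper.

```python
from dataclasses import dataclass

@dataclass
class Metrics:
    """Snapshot of application/infrastructure metrics (illustrative names)."""
    latency: float              # end-to-end processing latency (seconds)
    wan_traffic: float          # data crossing WAN links (bytes/s)
    communication_cost: float   # monetary cost of inter-site transfers

def multi_objective_reward(before: Metrics, after: Metrics,
                           migration_overhead: float,
                           weights=(0.5, 0.25, 0.25),
                           migration_penalty: float = 0.1) -> float:
    """Weighted sum of relative improvements, minus a reconfiguration penalty.

    Rewarding relative gains on several metrics at once means an agent
    cannot minimise latency alone while inflating WAN traffic or cost.
    """
    def gain(old: float, new: float) -> float:
        return (old - new) / old if old > 0 else 0.0

    improvements = (
        gain(before.latency, after.latency),
        gain(before.wan_traffic, after.wan_traffic),
        gain(before.communication_cost, after.communication_cost),
    )
    weighted = sum(w * g for w, g in zip(weights, improvements))
    return weighted - migration_penalty * migration_overhead
```

In this hypothetical formulation, a reconfiguration that reduces latency but increases WAN traffic or communication cost receives a lower reward than one that improves all three, which reflects the trade-off highlighted in the experimental results.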

Recommended citation: da Silva Veith, Alexandre; de Souza, Felipe Rodrigo; Dias de Assunção, Marcos; Lefèvre, Laurent; Santos dos Anjos, Julio Cesar. "Multi-Objective Reinforcement Learning for Reconfiguring Data Stream Analytics on Edge Computing." In Proceedings of the International Conference on Parallel Processing (ICPP), 2019.
Download Paper | Download Slides