Optimization Framework for Splitting DNN Inference Jobs over Computing Networks

Published 13 Nov 2021 in cs.NI (arXiv:2111.07006v3)

Abstract: Ubiquitous AI is considered one of the key services in 6G systems. AI services typically rely on deep neural networks (DNNs), which require heavy computation. Hence, in order to support ubiquitous AI, it is crucial to provide a solution for offloading or distributing the computational burden of DNNs, especially at end devices with limited resources. We develop an optimization framework for assigning the computation tasks of DNN inference jobs to computing resources in the network, so as to reduce the inference latency. To this end, we propose a layered graph model with which simple conventional routing jointly solves the problems of selecting nodes for computation and paths for data transfer between nodes. We show that, using our model, the existing approaches to splitting DNN inference jobs can be equivalently reformulated as a routing problem that possesses better numerical properties. We also apply the proposed framework to derive algorithms for minimizing the end-to-end inference latency. We show through numerical evaluations that our new formulation finds a solution for DNN inference job distribution much faster than the existing formulation, and that our algorithms select computing nodes and data paths adaptively to the computational attributes of given DNN inference jobs, thereby reducing the end-to-end latency.
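To make the layered-graph idea concrete, the sketch below shows one simplified way such a construction could work: the compute nodes are replicated once per DNN layer, an edge between consecutive layers encodes "transfer the intermediate tensor, then run the next layer here," and an off-the-shelf shortest-path routine then jointly picks the split points and the data paths. This is a minimal illustration under assumptions of our own (fully connected nodes, latency = transfer time + compute time); all names and numbers (`layer_flops`, `node_speed`, `link_rate`, etc.) are hypothetical and not taken from the paper, whose exact formulation also models routing through the network.

```python
# Illustrative layered-graph sketch for splitting a DNN inference job.
# All parameters below are made-up examples, not values from the paper.
import networkx as nx

layer_flops = [1e9, 2e9, 1.5e9]                # work per DNN layer (FLOPs)
layer_out_bits = [8e6, 4e6, 1e5]               # output size of each layer
input_bits = 16e6                              # size of the raw input
node_speed = {"device": 1e9, "edge": 1e10, "cloud": 1e11}  # FLOP/s
link_rate = {("device", "edge"): 1e8, ("edge", "cloud"): 1e9,
             ("device", "cloud"): 5e7}         # bit/s, treated as symmetric

def transfer_time(bits, u, v):
    """Seconds to move `bits` from node u to node v (0 if the same node)."""
    if u == v:
        return 0.0
    return bits / (link_rate.get((u, v)) or link_rate[(v, u)])

G = nx.DiGraph()
nodes = list(node_speed)
L = len(layer_flops)

# Vertex (i, u): the input of layer i currently resides on node u.
# Edge (i, u) -> (i + 1, v): ship that data to v and run layer i there,
# so its weight is transfer time plus compute time.
for i in range(L):
    data_in = input_bits if i == 0 else layer_out_bits[i - 1]
    for u in nodes:
        for v in nodes:
            w = transfer_time(data_in, u, v) + layer_flops[i] / node_speed[v]
            G.add_edge((i, u), (i + 1, v), weight=w)

# Return the final output to the requesting device.
for v in nodes:
    G.add_edge((L, v), "done",
               weight=transfer_time(layer_out_bits[-1], v, "device"))

# Plain shortest-path routing on the layered graph now jointly selects
# where each layer runs and how intermediate tensors move between nodes.
path = nx.shortest_path(G, (0, "device"), "done", weight="weight")
latency = nx.shortest_path_length(G, (0, "device"), "done", weight="weight")
print([p for p in path if p != "done"], f"end-to-end latency: {latency:.3f} s")
```

In this toy instance the returned path reads off a node assignment per layer, e.g. run the first layer on the device and offload the rest once the intermediate tensor becomes small enough to ship cheaply; the point of the construction is that node selection and path selection fall out of a single standard routing computation.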
