Evaluating Multi-Instance DNN Inferencing on Multiple Accelerators of an Edge Device

Published 12 Mar 2025 in cs.DC (arXiv:2503.09546v1)

Abstract: Edge devices like Nvidia Jetson platforms now offer several on-board accelerators -- including GPU CUDA cores, Tensor Cores, and Deep Learning Accelerators (DLA) -- which can be concurrently exploited to boost deep neural network (DNN) inferencing. In this paper, we extend previous work by evaluating the performance impacts of running multiple instances of the ResNet50 model concurrently across these heterogeneous components. We detail the effects of varying batch sizes and hardware combinations on throughput and latency. Our expanded analysis highlights not only the benefits of combining CUDA and Tensor Cores, but also the performance degradation from resource contention when integrating DLAs. These findings, together with insights on precision constraints and workload allocation challenges, motivate further exploration of intelligent scheduling mechanisms to optimize resource utilization on edge platforms.
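
As a rough illustration of the setup the abstract describes, the sketch below builds two TensorRT engines for the same ResNet50 ONNX model: one targeting the GPU (CUDA cores, with Tensor Cores engaged via FP16) and one offloaded to a DLA core. It uses the TensorRT 8.x Python API that ships with JetPack on Jetson devices. The model path `resnet50.onnx`, the input tensor name `input`, the batch size, and the DLA core index are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: one TensorRT engine per accelerator for the same
# ResNet50 model, mirroring the paper's multi-accelerator configuration.
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, use_dla=False, dla_core=0, batch_size=8):
    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    # The DLA supports only FP16/INT8; FP16 also engages Tensor Cores
    # on the GPU, matching the precision constraints the abstract notes.
    config.set_flag(trt.BuilderFlag.FP16)
    if use_dla:
        config.default_device_type = trt.DeviceType.DLA
        config.DLA_core = dla_core
        # Layers the DLA cannot run fall back to the GPU.
        config.set_flag(trt.BuilderFlag.GPU_FALLBACK)

    # Assumes the ONNX input is named "input" with 3x224x224 images.
    profile = builder.create_optimization_profile()
    profile.set_shape("input",
                      (1, 3, 224, 224),
                      (batch_size, 3, 224, 224),
                      (batch_size, 3, 224, 224))
    config.add_optimization_profile(profile)
    return builder.build_serialized_network(network, config)

gpu_engine = build_engine("resnet50.onnx", use_dla=False)
dla_engine = build_engine("resnet50.onnx", use_dla=True, dla_core=0)
```

The `GPU_FALLBACK` flag is relevant to the findings: layers unsupported by the DLA execute on the GPU instead, so DLA instances still compete for GPU resources, which is consistent with the contention-driven degradation the abstract reports. Engines built this way can then be deserialized and executed from separate threads or CUDA streams to reproduce a concurrent multi-instance workload.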
