A Survey on LUT-based Deep Neural Networks Implemented in FPGAs

Published 9 Jun 2025 in cs.AR (arXiv:2506.07367v1)

Abstract: Low-latency, energy-efficient deep neural network (DNN) inference is critical for edge applications, where traditional cloud-based deployment suffers from high latency and security risks. Field-Programmable Gate Arrays (FPGAs) offer a compelling solution, balancing reconfigurability, power efficiency, and real-time performance. However, conventional FPGA-based DNNs rely heavily on digital signal processing (DSP) blocks for multiply-accumulate (MAC) operations, which limits scalability. LUT-based DNNs address this challenge by fully leveraging FPGA lookup tables (LUTs) for computation, improving resource utilization and reducing inference latency. This survey provides a comprehensive review of LUT-based DNN architectures, including their evolution, design methodologies, and performance trade-offs, and outlines promising directions for future research.
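The core idea named in the abstract, replacing DSP-based MAC operations with table lookups, can be sketched in a few lines. The Python snippet below is a simplified illustration rather than anything taken from the paper: a single neuron with a small fan-in and low-bit quantized inputs has its MAC plus activation exhaustively enumerated into a truth table, the structure that would then be mapped onto FPGA LUTs. All parameter names, bit widths, and weight values here are assumptions chosen for illustration.

```python
import itertools
import numpy as np

# Hypothetical parameters for illustration (not from the paper):
FAN_IN = 4           # inputs per neuron, kept small so the table stays LUT-sized
IN_BITS = 2          # bits per quantized input
OUT_BITS = 2         # bits for the quantized output activation
IN_LEVELS = 2 ** IN_BITS

rng = np.random.default_rng(0)
weights = rng.normal(size=FAN_IN)   # placeholder "trained" weights
bias = 0.1

def neuron(x):
    """Reference MAC + ReLU + quantization for one neuron."""
    y = float(np.dot(weights, x) + bias)
    y = max(y, 0.0)                                        # ReLU
    max_in = IN_LEVELS - 1
    scale = (2 ** OUT_BITS - 1) / (np.abs(weights).sum() * max_in + abs(bias) + 1e-9)
    return min(int(round(y * scale)), 2 ** OUT_BITS - 1)

# Enumerate every possible quantized input pattern and record the neuron's
# output: this truth table is what a LUT-based DNN flow would map onto LUTs.
truth_table = {
    pattern: neuron(np.array(pattern))
    for pattern in itertools.product(range(IN_LEVELS), repeat=FAN_IN)
}

# Inference is now a table lookup instead of a multiply-accumulate.
sample = (3, 0, 2, 1)
assert truth_table[sample] == neuron(np.array(sample))
print(f"input {sample} -> output {truth_table[sample]}")
```

The sketch shows why fan-in and input bit width must stay small: the table has IN_LEVELS ** FAN_IN entries, so the enumeration only fits in LUT fabric when both are aggressively constrained, which is the central trade-off that LUT-based DNN architectures manage.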

Authors (1)
