
How deep the machine learning can be

Published 2 May 2020 in cs.DC and cs.LG (arXiv:2005.00872v1)

Abstract: Today we live in the age of artificial intelligence and machine learning: from small startups to hardware and software giants, everyone wants to build machine-intelligence chips and applications. The task, however, is hard, and not only because of the size of the problem: the technology one can utilize (and the paradigm it is based upon) strongly limits the chances of succeeding efficiently. Single-processor performance has practically reached the limits that the laws of nature permit. The only feasible way to achieve the needed high computing performance seems to be parallelizing many sequentially working units. The laws of (massively) parallel computing, however, differ from those experienced when assembling and utilizing systems comprising just a few single processors. As machine learning is mostly based on conventional computing (processors), we scrutinize the known, but somewhat faded, laws of parallel computing as they concern AI. This paper attempts to review some of the caveats, especially those concerning scaling the computing performance of AI solutions.
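The "laws of (massively) parallel computing" the abstract alludes to include Amdahl's law, under which the non-parallelizable fraction of a workload caps the achievable speedup no matter how many processing units are added. A minimal sketch (the function name and parameter choices are illustrative, not taken from the paper):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup of a workload whose parallelizable
    fraction is p when run on n processing units.

    The sequential fraction (1 - p) is executed on one unit;
    only the fraction p benefits from the n-way parallelism.
    """
    return 1.0 / ((1.0 - p) + p / n)

# Even a highly parallel workload saturates quickly:
# with p = 0.95 the speedup can never exceed 1 / (1 - 0.95) = 20x,
# regardless of how many units are thrown at it.
for n in (8, 64, 1024, 65536):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

This saturation is one concrete reason why, as the abstract argues, simply assembling ever more sequentially working units does not scale the computing performance of AI solutions indefinitely.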

Citations (7)

