Computational power of fixed-precision Transformers with positional encodings
Characterize the computational power of Transformer networks with positional encodings when all internal computations are carried out in fixed (finite) precision; in particular, determine the class of formal languages recognizable by such fixed-precision Transformers.
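As a concrete illustration of the setting, here is a minimal sketch (Python/NumPy) of a single attention head over inputs with sinusoidal positional encodings, where every intermediate value is rounded to a fixed-point grid. The `quantize` helper, the bit width `frac_bits`, the toy dimensions, and the omission of feed-forward layers and layer norm are all assumptions made for illustration; none of this is part of the original problem statement or the paper's construction.

```python
import numpy as np

def quantize(x, frac_bits=4):
    """Hypothetical fixed-point rounding: keep `frac_bits` fractional bits,
    so every intermediate value lies on the finite grid {k / 2**frac_bits}."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def positional_encoding(n, d, frac_bits=4):
    """Standard sinusoidal positional encodings, quantized like all other values."""
    pos = np.arange(n)[:, None]
    dim = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (dim // 2)) / d)
    pe = np.where(dim % 2 == 0, np.sin(angles), np.cos(angles))
    return quantize(pe, frac_bits)

def fixed_precision_attention(Q, K, V, frac_bits=4):
    """Single attention head in which every intermediate result is quantized."""
    scores = quantize(Q @ K.T / np.sqrt(K.shape[1]), frac_bits)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = quantize(weights / weights.sum(axis=-1, keepdims=True), frac_bits)
    return quantize(weights @ V, frac_bits)

# Toy run: 4 positions, model dimension 8; inputs carry positional encodings.
rng = np.random.default_rng(0)
X = quantize(rng.standard_normal((4, 8))) + positional_encoding(4, 8)
print(fixed_precision_attention(X, X, X))
```

The only nonstandard piece is the `quantize` call after every operation; removing it recovers ordinary floating-point attention. With a fixed number of fractional bits, the activations range over a finite set of values, which is precisely what rules out the arbitrary-precision mechanism the Turing-completeness proof relies on.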
References
Although no longer Turing complete, one can still study the computational power of fixed-precision Transformers. We leave this as future work.
— On the Turing Completeness of Modern Neural Network Architectures
(arXiv:1901.03429, Pérez et al., 2019), Section 3.3, Differences with [Transformer]'s framework — The need of arbitrary precision