On the Precision-Redundancy Relation in the Design of Source Coding Algorithms
Abstract: We study the effects of finite-precision representation of a source's probabilities on the efficiency of classic source coding algorithms, such as Shannon, Gilbert-Moore, or arithmetic codes. In particular, we establish the following simple connection between the redundancy $R$ and the number of bits $W$ necessary to represent the source's probabilities in a computer's memory ($R$ is assumed to be small): \begin{equation*} W \lesssim \eta \log_2 \frac{m}{R}, \end{equation*} where $m$ is the cardinality of the source's alphabet, and $\eta \leqslant 1$ is an implementation-specific constant. In the case of binary alphabets ($m=2$) we show that there exist codes for which $\eta = 1/2$, and in the $m$-ary case ($m > 2$) we show that there exist codes for which $\eta = m/(m+1)$. In the general case, however (which includes designs relying on progressive updates of frequency counters), we show that $\eta = 1$. The usefulness of these results for the practical design of source coding algorithms is also discussed.
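The bound stated in the abstract can be turned into a back-of-the-envelope calculator for the register width needed at a given target redundancy. The sketch below is a hypothetical helper (the function name and rounding choice are assumptions, not from the paper); it simply evaluates $W \approx \eta \log_2(m/R)$ and rounds up to a whole number of bits.

```python
import math

def precision_bits(m, R, eta=1.0):
    """Estimate the number of bits W needed to store the source's
    probabilities so that the coding redundancy stays near R,
    using the abstract's bound W <~ eta * log2(m / R).

    m   -- alphabet cardinality (m >= 2)
    R   -- target redundancy in bits/symbol (small, R > 0)
    eta -- implementation-specific constant, eta <= 1
           (1/2 for the binary codes, m/(m+1) for m-ary codes,
            1 in the general case, per the abstract)
    """
    return math.ceil(eta * math.log2(m / R))

# Binary alphabet with eta = 1/2 and redundancy target R = 2^-10:
# W ~ 0.5 * log2(2 / 2^-10) = 0.5 * 11 = 5.5, so 6 bits suffice.
w_binary = precision_bits(2, 2**-10, eta=0.5)

# General case (eta = 1) for a byte alphabet, m = 256, R = 0.01:
w_general = precision_bits(256, 0.01)
```

Note how the gap between $\eta = 1/2$ and $\eta = 1$ halves the required width for binary sources, which is the practical payoff the paper quantifies.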