On The Decoding Error Weight of One or Two Deletion Channels

Published 7 Jan 2022 in cs.IT and math.IT | (2201.02466v2)

Abstract: This paper tackles two problems that fall under the study of coding for insertions and deletions. These problems are motivated by several applications, among them reconstructing strands in DNA-based storage systems. Under this paradigm, a word is transmitted over some fixed number of identical independent channels, and the goal of the decoder is to output the transmitted word or some close approximation of it. The first part of the paper studies optimal decoding for a special case of the deletion channel, referred to as the $k$-deletion channel, which deletes exactly $k$ symbols of the transmitted word uniformly at random. In this part, the goal is to understand how an optimal decoder operates in order to minimize the expected normalized distance. A full characterization of an efficient optimal decoder for this setup, referred to as the maximum likelihood* (ML*) decoder, is given for a channel that deletes one or two symbols. The second part of this paper studies the deletion channel that deletes a symbol with some fixed probability $p$, while focusing on two instances of this channel. Since operating the maximum likelihood (ML) decoder in this case is computationally infeasible, we study a slightly degraded version of this decoder for two channels and analyze its expected normalized distance. We observe that the dominant error patterns are deletions in the same run or errors resulting from alternating sequences. Based on these observations, we derive lower bounds on the expected normalized distance of the degraded ML decoder for any transmitted $q$-ary sequence of length $n$ and any deletion probability $p$. We further show that as the word length approaches infinity and the channel's deletion probability $p$ approaches zero, these bounds converge to approximately $\frac{3q-1}{q-1} p^2$. These theoretical results are verified by corresponding simulations.
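The $k$-deletion channel described in the abstract is easy to simulate directly: delete exactly $k$ positions of the word, chosen uniformly at random, and measure a decoder's error weight via edit distance. The sketch below is illustrative only (function names and structure are not from the paper); it also shows why deletions inside the same run are indistinguishable at the channel output, which is one of the dominant error patterns the authors identify.

```python
import random

def k_deletion_channel(word, k, rng=random):
    """Model of the k-deletion channel: delete exactly k symbols of
    `word`, with the k positions chosen uniformly at random."""
    positions = set(rng.sample(range(len(word)), k))
    return ''.join(c for i, c in enumerate(word) if i not in positions)

def edit_distance(a, b):
    """Standard Levenshtein DP; used to measure the (unnormalized)
    decoding error weight between a decoder's output and the input."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Deleting any single symbol of a run yields the same output, so the
# channel output cannot reveal which position in the run was deleted:
outputs = {k_deletion_channel("aaab", 1) for _ in range(100)}
print(outputs)  # only "aab" (run deletion) and "aaa" are possible
```

Normalizing `edit_distance` by the word length $n$ gives the expected normalized distance the paper bounds; averaging it over many channel uses for a candidate decoder is how such bounds are typically checked in simulation.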

Citations (2)
