Author Institution:
EE Dept., Technion, Haifa 32000, Israel. Email: yaniva@tx.technion.ac.il
Abstract:
Universally achievable error exponents pertaining to certain families of channels (most notably, discrete memoryless channels (DMCs)) and various ensembles of random codes are studied by combining the competitive minimax approach, proposed by Feder and Merhav, with Gallager's techniques for the analysis of error exponents. In particular, we derive a single-letter expression for a lower bound to the largest universally achievable fraction ξ of the optimum error exponent pertaining to optimal maximum-likelihood (ML) decoding. To demonstrate the tightness of this lower bound, we show that ξ = 1 for the binary symmetric channel (BSC) when the random coding distribution is uniform over: (i) all codes of a given rate, and (ii) all linear codes, in agreement with well-known results. We also show that ξ = 1 for the uniform ensemble of systematic linear codes, and for that of time-varying convolutional codes in the bit-error-rate sense. For the latter case, we also show how the corresponding universal decoder can be efficiently implemented using a slightly modified version of the Viterbi algorithm that employs two trellises.
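As a schematic illustration of the competitive minimax criterion behind the fraction ξ (the notation below, with decoders q, channel index θ, block length n, and average error probabilities, is assumed here for exposition and is not quoted from the paper), one may write

\[
K_n(\xi) \;=\; \min_{q}\,\max_{\theta}\,
\frac{\bar{P}_{e}(q,\theta)}{\bigl[\bar{P}_{e}^{\mathrm{ML}}(\theta)\bigr]^{\xi}} ,
\]

so that, roughly speaking, the largest ξ for which K_n(ξ) does not grow exponentially with n corresponds to a universal decoder whose error exponent is at least ξ·E(R,θ) simultaneously for every channel θ in the family, E(R,θ) denoting the exponent of optimal ML decoding at rate R.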