DocumentCode :
1253534
Title :
Identification without randomization
Author :
Ahlswede, Rudolf ; Cai, Ning
Author_Institution :
Fak. für Math., Bielefeld Univ., Germany
Volume :
45
Issue :
7
fYear :
1999
fDate :
11/1/1999
Firstpage :
2636
Lastpage :
2642
Abstract :
In the theory of identification via noisy channels, randomization in the encoding has a dramatic effect on the optimal code size: it grows double-exponentially in the blocklength, whereas in the theory of transmission it has the familiar exponential growth. We consider now, instead of the discrete memoryless channel (DMC), more robust channels such as the familiar compound channel (CC) and arbitrarily varying channel (AVC). They can be viewed as models for jamming situations. We make the pessimistic assumption that the jammer knows the input sequence before he acts. This forces the communicators to use the maximal error concept and also makes randomization in the encoding superfluous. Now, for a DMC W, by a simple observation made by Ahlswede and Dueck (1989), in the absence of randomization the identification capacity, say C_NRI(W), equals the logarithm of the number of different row vectors of W. We generalize this to compound channels. A formidable problem arises if the DMC W is replaced by the AVC W. In fact, for 0-1 matrices only in W we are, exactly as for transmission, led to the equivalent zero-error capacity of Shannon. But for general W the identification capacity C_NRI(W) is quite different from the transmission capacity C(W). An observation is that the separation codes of Ahlswede (1989) are also relevant here. We present a lower bound on C_NRI(W). It implies, for instance, for W = {(0 1; 1 0), (δ 1-δ; 1 0)}, δ ∈ (0, 1/2), that C_NRI(W) = 1, which is obviously tight. It exceeds C(W), which is known to exceed 1 - h(δ), where h is the binary entropy function. We observe that a separation code with worst case average list size L̄ (which we call an NRA code) can be partitioned into L̄ · 2^{nε} transmission codes. This gives a non-single-letter characterization of the capacity of the AVC with maximal probability of error in terms of the capacity of codes with list decoding. We also prove that randomization in the decoding does not increase C_I(W) and C_NRI(W). Finally, we draw attention to related work on source coding.
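A minimal Python sketch of the Ahlswede-Dueck observation quoted above: for a DMC, the no-randomization identification capacity C_NRI(W) is the logarithm of the number of distinct row vectors of the channel matrix W. The function name c_nri_dmc and the rounding tolerance are illustrative assumptions, not from the paper.

    import numpy as np

    def c_nri_dmc(W, decimals=9):
        # C_NRI(W) = log2 |{distinct row vectors of W}|
        # (Ahlswede-Dueck, 1989). Rows are rounded before
        # comparison to absorb floating-point noise; the
        # rounding is an implementation choice, not part of
        # the result.
        distinct_rows = {tuple(np.round(row, decimals)) for row in np.asarray(W)}
        return float(np.log2(len(distinct_rows)))

    # Binary symmetric channel with crossover 0 < delta < 1/2:
    # its two rows differ, so C_NRI = log2(2) = 1 bit. This is
    # the same value 1 that the abstract reports as tight for
    # its two-matrix AVC example.
    delta = 0.1
    W = [[1 - delta, delta],
         [delta, 1 - delta]]
    print(c_nri_dmc(W))  # 1.0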
Keywords :
channel capacity; channel coding; decoding; error statistics; identification; jamming; matrix algebra; noise; random processes; source coding; 0-1 matrices; Shannon; arbitrarily varying channels; binary entropy function; blocklength; channel coding; compound channels; decoding randomization; discrete memoryless channel; double-exponential growth; encoding; equivalent zero-error capacity; exponential growth; identification capacity; identification theory; input sequence; jamming; lower bound; maximal error concept; maximal error probability; noisy channels randomization; optimal code size; row vectors; separation codes; source coding; transmission codes; transmission theory; worst case average list size; Automatic voltage control; Capacity planning; Decoding; Entropy; Jamming; Memoryless systems; Probability distribution; Robustness; Source coding;
fLanguage :
English
Journal_Title :
IEEE Transactions on Information Theory
Publisher :
IEEE
ISSN :
0018-9448
Type :
jour
DOI :
10.1109/18.796419
Filename :
796419