### Statistical Encoding of Succinct Data Structures

#### Rodrigo González and Gonzalo Navarro

In recent work, Sadakane and Grossi [SODA 2006] introduced a scheme to
represent any sequence *S[1,n]*, over an alphabet of size *s*, using
*n Hk(S) + O((n / log_s n)(k log s + log log n))* bits of space, where
*Hk(S)* is the *k*-th order empirical entropy of *S*. The
representation permits extracting any substring of size *Theta(log_s n)*
in constant time, and thus it completely replaces *S* under the
RAM model. This is extremely important because it permits converting any
succinct data structure requiring *o(|S|) = o(n log s)* bits
in addition to *S*, into another requiring *n Hk(S) + o(n log s)*
(overall) for any *k = o(log_s n)*. They achieve this result using
Ziv-Lempel compression, and conjecture that it can, in particular, be
useful for implementing compressed full-text indexes.
In this paper we extend their result, obtaining the same space and time
complexities with a simpler scheme based on statistical encoding.
We show that the scheme supports appending symbols in constant amortized time.
In addition, we prove some results on the applicability of the scheme for
full-text self-indexing.
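
To make the entropy measure concrete, the following is a minimal sketch (not part of the paper) of computing the *k*-th order empirical entropy *Hk(S)* under its standard definition: each symbol is charged the zeroth-order entropy of the multiset of symbols sharing its length-*k* preceding context. The function names `h0` and `hk` are illustrative choices, not identifiers from the paper.

```python
from collections import Counter, defaultdict
from math import log2

def h0(s):
    # Zeroth-order empirical entropy: sum over symbols c of
    # (n_c/n) * log2(n/n_c), where n_c is the count of c in s.
    n = len(s)
    if n == 0:
        return 0.0
    return sum((c / n) * log2(n / c) for c in Counter(s).values())

def hk(s, k):
    # k-th order empirical entropy: (1/n) * sum over length-k
    # contexts w of |S_w| * H0(S_w), where S_w lists the symbols
    # that follow each occurrence of w in s. The first k symbols
    # have no full context and are not charged, per the usual
    # definition.
    n = len(s)
    if k == 0:
        return h0(s)
    contexts = defaultdict(list)
    for i in range(k, n):
        contexts[s[i - k:i]].append(s[i])
    return sum(len(v) * h0(v) for v in contexts.values()) / n
```

For instance, the alternating string `abababab` has *H0 = 1* bit per symbol but *H1 = 0*, since each symbol is fully determined by its predecessor; this gap is what compression to *n Hk(S)* bits exploits.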