Awards

Best Paper Awards

The Data Storage Technical Committee (DSTC) presents two paper awards: the IEEE Data Storage Best Paper Award and the IEEE Data Storage Best Student Paper Award.

2020 – 2021 Paper Awards

  • IEEE Data Storage Best Paper Award for 2020 – 2021

    Awardees: Jin Sima and Jehoshua Bruck

    Paper: “On optimal k-deletion correcting codes,” IEEE Transactions on Information Theory, vol. 67, no. 6, June 2021, pp. 3360-3375.

  • IEEE Data Storage Best Student Paper Award for 2020 – 2021

    Awardees: Ruida Zhou, Chao Tian, Hua Sun, and Tie Liu

    Paper: “Capacity-achieving private information retrieval codes from MDS-coded databases with minimum message size,” IEEE Transactions on Information Theory, vol. 66, no. 8, August 2020, pp. 4904-4916.

2018 – 2019 Paper Awards

  • IEEE Data Storage Best Paper Award for 2018 – 2019

    Awardees: Daesung Kim, Krishna R. Narayanan, and Jeongseok Ha

    Paper: “Symmetric Block-wise Concatenated BCH Codes for NAND Flash Memories,” published in IEEE Transactions on Communications, October 2018.

    Abstract: This paper introduces a high rate error-correcting coding scheme called symmetric block-wise concatenated Bose-Chaudhuri-Hocquenghem (symmetric BC-BCH) codes tailored for storage devices with hard-decision outputs, e.g., storage devices based on NAND flash memory. It will be shown that a careful integration of the symmetry and 2-D block-wise concatenation is especially beneficial to achieve improvements of error-rate performance when an iterative hard-decision-based decoding (IHDD) is assumed. The claim is substantiated by proving that the proposed symmetric concatenation is optimal in terms of error-rate performance in the low error-rate regime over other 2-D block-wise concatenations. Besides, this paper proposes a novel way to design constituent codes, which enables us to enjoy advantages of primitive BCH codes and to efficiently break stopping sets associated with the IHDD in the low error-rate regime. We consider error-control systems made up of a symmetric BC-BCH code, the IHDD, and simple auxiliary decoders specifically targeting to break stopping sets caused in the IHDD. It will be shown that the auxiliary decoders significantly improve error-rate performance at a negligible amount of extra complexity. Performance comparisons are also carried out between error-control systems with the proposed and other coding schemes such as BCH codes, quasi-primitive BC-BCH codes, and low-density parity-check codes.

  • IEEE Data Storage Best Student Paper Award for 2018 – 2019

    Awardees: Yuval Ben-Hur and Yuval Cassuto

    Paper: “Detection and Coding Schemes for Sneak-Path Interference in Resistive Memory Arrays,” published in IEEE Transactions on Communications, June 2019.

    Abstract: Resistive memory is a promising technology for achieving unprecedented storage densities and new in-memory computing features. However, to fulfill their promise, resistive memories require array architectures suffering from a severe interference effect called “sneak paths.” In this paper, we address the sneak-path problem through a communication-theory framework. Starting from the fundamental problem of readout with parallel-resistance interference, we develop several tools for detection and coding that significantly improve memory reliability. For the detection problem, we formulate and derive the optimal detector for a realistic array model, and then propose simplifications that enjoy similarly good performance and simpler implementation. Complementing detection for better error rates is done by a new coding scheme that shapes the stored bits to get lower sneak-path incidence. For the same storage rates, the new coding scheme exhibits error rates lower by an order of magnitude compared to known shaping techniques.

  • IEEE Data Storage Best Student Paper Award for 2018 – 2019

    Awardees: Ahmed Hareedy, Chinmayi Lanka, Nian Guo, and Lara Dolecek

    Paper: “A Combinatorial Methodology for Optimizing Non-Binary Graph-Based Codes: Theoretical Analysis and Applications in Data Storage,” published in IEEE Transactions on Information Theory, April 2019.

    Abstract: Non-binary (NB) low-density parity-check (LDPC) codes are graph-based codes that are increasingly being considered as a powerful error correction tool for modern dense storage devices. Optimizing NB-LDPC codes to overcome their error floor is one of the main code design challenges facing storage engineers upon deploying such codes in practice. Furthermore, the increasing levels of asymmetry incorporated by the channels underlying modern dense storage systems, e.g., multi-level Flash systems, exacerbate the error floor problem by widening the spectrum of problematic objects that contribute to the error floor of an NB-LDPC code. In recent research, the weight consistency matrix (WCM) framework was introduced as an effective combinatorial NB-LDPC code optimization methodology that is suitable for modern Flash memory and magnetic recording (MR) systems. The WCM framework was used to optimize codes for asymmetric Flash channels, MR channels that have intrinsic memory, in addition to canonical symmetric additive white Gaussian noise channels. In this paper, we provide an in-depth theoretical analysis needed to understand and properly apply the WCM framework. We focus on general absorbing sets of type two (GASTs) as the detrimental objects of interest. In particular, we introduce a novel tree representation of a GAST called the unlabeled GAST tree, using which we prove that the WCM framework is optimal in the sense that it operates on the minimum number of matrices, which are the WCMs, to remove a GAST. Then, we enumerate WCMs and demonstrate the significance of the savings achieved by the WCM framework in the number of matrices processed to remove a GAST. Moreover, we provide a linear-algebraic analysis of the null spaces of WCMs associated with a GAST. We derive the minimum number of edge weight changes needed to remove a GAST via its WCMs, along with how to choose these changes. In addition, we propose a new set of problematic objects, namely oscillating sets of type two (OSTs), which contribute to the error floor of NB-LDPC codes with even column weights on asymmetric channels, and we show how to customize the WCM framework to remove OSTs. We also extend the domain of the WCM framework applications by demonstrating its benefits in optimizing column weight 5 codes, codes used over Flash channels with additional soft information, and spatially coupled codes. The performance gains achieved via the WCM framework range between 1 and nearly 2.5 orders of magnitude in the error floor region over interesting channels.

2016 – 2017 Paper Awards

  • IEEE Data Storage Best Paper Award for 2016 – 2017

    Awardees: Min Ye and Alexander Barg

    Paper: “Explicit Constructions of High-Rate MDS Array Codes With Optimal Repair Bandwidth,” IEEE Transactions on Information Theory, vol. 63, no. 4, pp. 2001-2014, April 2017.

    Abstract: Maximum distance separable (MDS) codes are optimal error-correcting codes in the sense that they provide the maximum failure tolerance for a given number of parity nodes. Suppose that an MDS code with k information nodes and r = n – k parity nodes is used to encode data in a distributed storage system. It is known that if h out of the n nodes are inaccessible and d surviving (helper) nodes are used to recover the lost data, then we need to download at least h/(d + h – k) fraction of the data stored in each of the helper nodes (Dimakis et al., 2010 and Cadambe et al., 2013). If this lower bound is achieved for the repair of any h erased nodes from any d helper nodes, we say that the MDS code has the (h, d)-optimal repair property. We study high-rate MDS array codes with the optimal repair property (also known as minimum storage regenerating codes, or MSR codes). Explicit constructions of such codes in the literature are only available for the cases where there are at most three parity nodes, and these existing constructions can only optimally repair a single node failure by accessing all the surviving nodes. In this paper, given any r and n, we present two explicit constructions of MDS array codes with the (h, d)-optimal repair property for all h ≤ r and k ≤ d ≤ n – h simultaneously. Codes in the first family can be constructed over any base field F as long as |F| ≥ sn, where s = lcm(1, 2, ..., r). The encoding, decoding, repair of failed nodes, and update procedures of these codes all have low complexity. Codes in the second family have the optimal access property and can be constructed over any base field F as long as |F| ≥ n+1. Moreover, both code families have the optimal error resilience capability when repairing failed nodes. We also construct several other related families of MDS codes with the optimal repair property.
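
    As a quick illustrative calculation (the parameter values below are chosen here for concreteness and are not taken from the paper), the repair-bandwidth fraction quoted in the abstract specializes, for a single failed node repaired from all surviving nodes, to

    \[
      \left.\frac{h}{d+h-k}\right|_{h=1,\ d=n-1} \;=\; \frac{1}{n-k},
      \qquad \text{e.g. } n=14,\ k=10 \ \Rightarrow\ \tfrac{1}{4}
    \]

    of the data held by each helper node, versus a fraction of 1 under naive repair that reconstructs the entire object.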

  • IEEE Data Storage Best Student Paper Award for 2016 – 2017

    Awardees: Yongjune Kim, Abhishek A. Sharma, Robert Mateescu, Seung-Hwan Song, Zvonimir Z. Bandic, James A. Bain, and B. V. K. Vijaya Kumar

    Paper: “Locally Rewritable Codes for Resistive Memories,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 9, pp. 2470-2485, September 2016.

    Abstract: Resistive memories, such as phase change memories and resistive random access memories, have attracted significant research interest because of their scalability, non-volatility, fast speed, and rewritability. However, their write endurance needs to be improved substantially for large-scale deployment of resistive memories. In addition, their write power consumption is much higher than the power consumption of read operation. Inspired by locally repairable codes (LRCs) recently introduced for distributed storage systems, we propose locally rewritable codes (LWCs) for resistive memories. We define a novel parameter of rewriting locality, which can be connected to repair locality of LRC. As small values of repair locality of LRC enable fast repair in distributed storage systems, small values of rewriting locality of LWC are able to reduce the problems of write endurance and write power consumption. We show how a small value of rewriting locality can improve write endurance and power consumption by deriving the upper bounds on writing cost. Also, we point out the dual relation of LRC and LWC, which indicates that the existing construction methods of LRC can be applied to construct LWC. Finally, we investigate the construction of LWC with error correcting capability for random errors.

2015 Paper Awards

  • IEEE Data Storage Best Paper Award for 2015

    Awardees: Haitao Xia, Lu Lu, Seongwook Jeong, Lu Pan, and Jun Xiao

    Paper: “Signal processing and detection for array reader magnetic recording system,” IEEE Transactions on Magnetics, Vol. 51, No. 11, pp. 1-4, Nov. 2015.

    Abstract: In the past several years, the single-reader read channel scheme reached its performance limitation. Thus, a new multiple-reader system is evaluated in this paper to have more than one read back signal to improve the hard disk drive performance and capacity. The novel control loops, equalization, and detection are discussed on how to handle the multiple read back signals, estimate head positions based on preamble signal-to-noise ratio, jointly equalize two stream signals as well as detect the final output using pattern-dependent detection.

  • IEEE Data Storage Best Student Paper Award for 2015

    Awardees: Ryan Gabrys, Eitan Yaakobi, and Lara Dolecek

    Paper: “Correcting Grain-Errors in Magnetic Media,” IEEE Transactions on Information Theory, Vol. 61, No. 5, pp. 2256-2272, May 2015.

    Abstract: This paper studies new bounds and code constructions that are applicable to the combinatorial granular channel model previously introduced by Sharov and Roth. We derive new bounds on the maximum cardinality of a grain-error-correcting code and propose constructions of codes that correct grain-errors. We demonstrate that a permutation of the classical group codes (e.g., Constantin-Rao codes) can correct a single grain-error. In many cases of interest, our results improve upon the currently best known bounds and constructions. Some of the approaches adopted in the context of grain-errors may have application to related channel models.

2014 Paper Awards

  • IEEE Data Storage Best Paper Award for 2014

    Awardee: Chao Tian

    Paper: “Characterizing the Rate Region of the (4, 3, 3) Exact-Repair Regenerating Codes,” IEEE Journal on Selected Areas in Communications, Vol. 32, No. 5, May 2014.

    Abstract: Exact-repair regenerating codes are considered for the case (n,k,d) = (4,3,3), for which a complete characterization of the rate region is provided. This characterization answers in the affirmative the open question whether there exists a non-vanishing gap between the optimal bandwidth-storage tradeoff of the functional-repair regenerating codes (i.e., the cut-set bound) and that of the exact-repair regenerating codes. To obtain an explicit information theoretic converse, a computer-aided proof (CAP) approach based on primal and dual relation is developed. This CAP approach extends Yeung’s linear programming (LP) method, which was previously only used on information theoretic problems with a few random variables due to the exponential growth of the number of variables in the corresponding LP problem. The symmetry in the exact-repair regenerating code problem allows an effective reduction of the number of variables, and together with several other problem-specific reductions, the LP problem is reduced to a manageable scale. For the achievability, only one non-trivial corner point of the rate region needs to be addressed in this case, for which an explicit binary code construction is given.

  • IEEE Data Storage Best Student Paper Award for 2014

    Awardees: Farzad Farnoud (Hassanzadeh) and Olgica Milenkovic

    Paper: “Multipermutation Codes in the Ulam Metric for Nonvolatile Memories,” IEEE Journal on Selected Areas in Communications, Vol. 32, No. 5, May 2014.

    Abstract: We address the problem of multipermutation code design in the Ulam metric for novel storage applications. Multipermutation codes are suitable for flash memory where cell charges may share the same rank. Changes in the charges of cells manifest themselves as errors whose effects on the retrieved signal may be measured via the Ulam distance. As part of our analysis, we study multipermutation codes in the Hamming metric, known as constant composition codes. We then present bounds on the size of multipermutation codes and their capacity, for both the Ulam and the Hamming metrics. Finally, we present constructions and accompanying decoders for multipermutation codes in the Ulam metric.
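
    As background for the metric used above (a standard fact about permutations, not a result specific to the paper): the Ulam distance between two permutations, i.e., the minimum number of translocations (delete one symbol and reinsert it elsewhere) needed to transform one into the other, equals n minus the length of their longest common subsequence. A minimal sketch, with names chosen here for illustration:

        def ulam_distance(p, q):
            """Ulam distance between two permutations of the same n symbols."""
            n = len(p)
            # Longest common subsequence by dynamic programming.
            lcs = [[0] * (n + 1) for _ in range(n + 1)]
            for i in range(n):
                for j in range(n):
                    lcs[i + 1][j + 1] = (lcs[i][j] + 1 if p[i] == q[j]
                                         else max(lcs[i][j + 1], lcs[i + 1][j]))
            return n - lcs[n][n]

        # A single translocation (moving the symbol 1 past 2 and 3) relates these permutations.
        assert ulam_distance([1, 2, 3, 4], [2, 3, 1, 4]) == 1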

2013 Paper Awards

  • IEEE Data Storage Best Paper Award for 2013

    Awardees: Itzhak Tamo, Zhiying Wang and Jehoshua Bruck

    Paper: “Zigzag Codes: MDS Array Codes With Optimal Rebuilding,” IEEE Transactions on Information Theory, Vol. 59, No. 3, March 2013.

    Abstract: Maximum distance separable (MDS) array codes are widely used in storage systems to protect data against erasures. We address the rebuilding ratio problem, namely, in the case of erasures, what is the fraction of the remaining information that needs to be accessed in order to rebuild exactly the lost information? It is clear that when the number of erasures equals the maximum number of erasures that an MDS code can correct, then the rebuilding ratio is 1 (access all the remaining information). However, the interesting and more practical case is when the number of erasures is smaller than the erasure correcting capability of the code. For example, consider an MDS code that can correct two erasures: What is the smallest amount of information that one needs to access in order to correct a single erasure? Previous work showed that the rebuilding ratio is bounded between 1/2 and 3/4; however, the exact value was left as an open problem. In this paper, we solve this open problem and prove that for the case of a single erasure with a two-erasure correcting code, the rebuilding ratio is 1/2. In general, we construct a new family of r-erasure correcting MDS array codes that has optimal rebuilding ratio of 1/r in the case of a single erasure. Our array codes have efficient encoding and decoding algorithms (for the cases r=2 and r=3, they use a finite field of size 3 and 4, respectively) and an optimal update property.

  • IEEE Data Storage Best Student Paper Award for 2013

    Awardees: Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, and Lav R. Varshney

    Paper: “Noise-Enhanced Associative Memories,” Proceedings of Neural Information Processing Systems, pp. 1682–1690, 2013.

    Abstract: Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed reliable learning and recall of an exponential number of patterns. Although these designs correct external errors in recall, they assume neurons that compute noiselessly, in contrast to the highly variable neurons in hippocampus and olfactory cortex. Here we consider associative memories with noisy internal computations and analytically characterize performance. As long as the internal noise level is below a specified threshold, the error probability in the recall phase can be made exceedingly small. More surprisingly, we show that internal noise actually improves the performance of the recall phase. Computational experiments lend additional support to our theoretical analysis. This work suggests a functional benefit to noisy neurons in biological neuronal networks.

2011-2012 Paper Awards

  • IEEE Data Storage Best Paper/Best Student Paper Award of 2011/2012

    Awardees: K. V. Rashmi, Nihar B. Shah and P. Vijay Kumar

    Paper: “Optimal Exact-Regenerating Codes for Distributed Storage at the MSR and MBR Points via a Product-Matrix Construction,” IEEE Transactions on Information Theory, Vol. 57, No. 8, August 2011.

    Abstract: Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n=d+1. In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d ≥ 2k-2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network to be chosen independent of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n=d+1, k, d ≥ 2k-1].

    Correction June 11, 2014 — An earlier version stated the above paper was the “2011 and 2012 Best Student Paper Award on Data Storage Area” and that there is no Best Paper Award for these years. It now correctly states that the award is the “IEEE Data Storage Best Paper/Best Student Paper Award of 2011/2012.”

2010 and Earlier Awards

  • 2010 Best Paper Award on Data Storage Area

    Awardees: A. G. Dimakis, P. B. Godfrey, Y. Wu, M. J. Wainwright and K. Ramchandran

    Paper: “Network Coding for Distributed Storage Systems,” IEEE Transactions on Information Theory, Vol. 56, Issue 9, Sept. 2010.

    Abstract: Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.
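
    For reference, the storage-bandwidth tradeoff characterized in this paper is usually stated as follows (notation assumed here: a file of M symbols is spread over n nodes storing α symbols each, and a failed node downloads β symbols from each of d helpers): the file can be reliably stored and repaired when

    \[
      M \;\le\; \sum_{i=0}^{k-1} \min\{\alpha,\,(d-i)\beta\},
    \]

    with the minimum-storage (MSR) and minimum-bandwidth (MBR) regenerating points sitting at the two extremes of this curve.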

  • 2010 Best Student Paper Award on Data Storage Area

    Awardees: Y. Cassuto, M. Schwartz, V. Bohossian, and J. Bruck

    Paper: “Codes for Asymmetric Limited-Magnitude Errors With Application to Multilevel Flash Memories,” IEEE Transactions on Information Theory, Vol. 56, No. 4, April 2010.

    Abstract: Several physical effects that limit the reliability and performance of multilevel flash memories induce errors that have low magnitudes and are dominantly asymmetric. This paper studies block codes for asymmetric limited-magnitude errors over q-ary channels. We propose code constructions and bounds for such channels when the number of errors is bounded by t and the error magnitudes are bounded by l. The constructions utilize known codes for symmetric errors, over small alphabets, to protect large-alphabet symbols from asymmetric limited-magnitude errors. The encoding and decoding of these codes are performed over the small alphabet whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code. Moreover, the size of the codes is shown to exceed the sizes of known codes (for related error models), and asymptotic rate-optimality results are proved. Extensions of the construction are proposed to accommodate variations on the error model and to include systematic codes as a benefit to practical implementation.
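
    The small-alphabet mechanism described above can be pictured with a short sketch (assumptions made here, not taken verbatim from the paper: errors are upward with magnitude at most l, at most t symbols are affected, there is no wrap-around at the top of the alphabet, and decode_small stands in for any t-error-correcting decoder over the alphabet {0, ..., l}):

        def correct_limited_magnitude(read_word, decode_small, l):
            """Recover the stored q-ary word from a read word hit by at most t
            upward errors of magnitude <= l (illustrative sketch only)."""
            residues = [y % (l + 1) for y in read_word]      # project symbols to the small alphabet
            corrected = decode_small(residues)               # fix residues with the small-alphabet code
            # Each error magnitude equals the cyclic gap between read and corrected residues.
            errors = [(r - c) % (l + 1) for r, c in zip(residues, corrected)]
            return [y - e for y, e in zip(read_word, errors)]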

  • 2009 Best Paper Award on Data Storage Area

    Awardees: Anxiao (Andrew) Jiang, Robert Mateescu, Moshe Schwartz, Jehoshua Bruck

    Paper: “Rank Modulation for Flash Memories,” IEEE Transactions on Information Theory, vol. 55, pp. 2659-2673, June 2009.

    Abstract: We explore a novel data representation scheme for multilevel flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. The only allowed charge-placement mechanism is a “push-to-the-top” operation, which takes a single cell of the set and makes it the top-charged cell. The resulting scheme eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. We present unrestricted Gray codes spanning all possible n-cell states and using only “push-to-the-top” operations, and also construct balanced Gray codes. One important application of the Gray codes is the realization of logic multilevel cells, which is useful in conventional storage solutions. We also investigate rewriting schemes for random data modification. We present both an optimal scheme for the worst case rewrite performance and an approximation scheme for the average-case rewrite performance.
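
    The write primitive described in the abstract can be illustrated with a small sketch (the list-of-cell-indices representation, ordered from top-charged to bottom-charged, is a choice made here for illustration, not notation from the paper):

        def push_to_top(ranking, cell):
            """Return the ranking after charging `cell` above all other cells."""
            return [cell] + [c for c in ranking if c != cell]

        # Three cells ranked by charge as [2, 0, 1] (cell 2 holds the most charge);
        # pushing cell 1 to the top yields the new permutation [1, 2, 0].
        state = push_to_top([2, 0, 1], 1)
        assert state == [1, 2, 0]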

  • 2009 Best Student Paper Award on Data Storage Area

    Awardees: Anantha Raman Krishnan, Rathnakumar Radhakrishnan, Bane Vasic, Aleksandar Kavcic, William Ryan, and Fatih Erden

    Paper: “2-D Magnetic Recording: Read Channel Modeling,” IEEE Transactions on Magnetics, vol. 45, pp. 3830-3836, October 2009.

    Abstract: Two-dimensional magnetic recording (TDMR) is a novel storage architecture that, in theory, can achieve a density of up to 10 Tb/in^2. It uniquely differs from other proposed next-generation architectures because of its reliance on sophisticated 2-D signal-processing algorithms. Recently, a number of contributions have been made in the development of read-channel models and detectors for TDMR systems. In this paper, we provide a detailed review on all important read-channel models under consideration. Our discussion focuses on the purpose of each model, placing a special emphasis on the suitability of the Voronoi model for the purpose of designing detectors. We also propose several detection schemes for TDMR based on the Voronoi model and present some numerical results.

  • 2008 Best Paper Award on Data Storage Area

    Awardees: K. Cai and K. Immink

    Paper: “A general construction of constrained parity-check codes for optical recording,” IEEE Transactions on Communications, vol. 56, pp. 1070-1079, July 2008.

    Abstract: This paper proposes a general and systematic code design method to efficiently combine constrained codes with parity-check (PC) codes for optical recording. The proposed constrained PC code includes two component codes: the normal constrained (NC) code and the parity-related constrained (PRC) code. They are designed based on the same finite state machine (FSM). The rates of the designed codes are only a few tenths below the theoretical maximum. The PC constraint is defined by the generator matrix (or generator polynomial) of a linear binary PC code, which can detect any type of dominant error events or error event combinations of the system. Error propagation due to parity bits is avoided, since both component codes are protected by PCs. Two approaches are proposed to design the code in the non-return-to-zero-inverse (NRZI) format and the non-return-to-zero (NRZ) format, respectively. Designing the codes in NRZ format may reduce the number of parity bits required for error detection and simplify post-processing for error correction. Examples of several newly designed codes are illustrated. Simulation results with the Blu-Ray disc (BD) systems show that the new d = 1 constrained 4-bit PC code significantly outperforms the rate 2/3 code without parity, at both nominal density and high density.

  • 2008 Best Student Paper Award on Data Storage Area

    Awardees: O. Shental, N. Shental, S. Shamai, I. Kanter, A. J. Weiss, and Y. Weiss

    Paper: “Discrete-input two-dimensional Gaussian channels with memory: estimation and information rates via graphical models and statistical mechanics,” IEEE Transactions on Information Theory, pp. 1500-1513, April 2008.

    Abstract: Discrete-input two-dimensional (2D) Gaussian channels with memory represent an important class of systems, which appears extensively in communications and storage. In spite of their widespread use, the workings of 2D channels are still very much unknown. In this work, we try to explore their properties from the perspective of estimation theory and information theory. At the heart of our approach is a mapping of a 2D channel to an undirected graphical model, and inferring its a posteriori probabilities (APPs) using generalized belief propagation (GBP). The derived probabilities are shown to be practically accurate, thus enabling optimal maximum a posteriori (MAP) estimation of the transmitted symbols. Also, the Shannon-theoretic information rates are deduced either via the vector-wise Shannon-McMillan-Breiman (SMB) theorem, or via the recently derived symbol-wise Guo-Shamai-Verdu (GSV) theorem. Our approach is also described from the perspective of statistical mechanics, as the graphical model and inference algorithm have their analogues in physics. Our experimental study, based on common channel settings taken from cellular networks and magnetic recording devices, demonstrates that under nontrivial memory conditions, the performance of this fully tractable GBP estimator is almost identical to the performance of the optimal MAP estimator. It also enables a practically accurate simulation-based estimate of the information rate. Rationalization of this excellent performance of GBP in the 2-D Gaussian channel setting is addressed.

  • 2007 Best Paper Award on Data Storage Area

    Awardees: J. B. Soriaga, H. D. Pfister, and P. H. Siegel

    Paper: “Determining and approaching achievable rates of binary intersymbol interference channels using multistage decoding,” IEEE Transactions on Information Theory, vol. 53, no. 4, pp. 1416-1429, April 2007.

    Abstract: By examining the achievable rates of a multistage decoding system on stationary ergodic channels, we derive lower bounds on the mutual information rate corresponding to independent and uniformly distributed (i.u.d.) inputs, also referred to as the i.u.d. information rate. For binary intersymbol interference (ISI) channels, we show that these bounds become tight as the number of decoding stages increases. Our analysis, which focuses on the marginal conditional output densities at each stage of decoding, provides an information rate corresponding to each stage. These rates underlie the design of multilevel coding schemes, based upon low-density parity-check (LDPC) codes and message passing, that in combination with multistage decoding approach the i.u.d. information rate for binary ISI channels. We give example constructions for channel models that have been commonly used in magnetic recording. These examples demonstrate that the technique is very effective even for a small number of decoding stages.

  • 2007 Best Student Paper Award on Data Storage Area

    Awardees: X. Hu, B. V. K. Kumar, Z. Li, and R. Barndt

    Paper: “Error floor estimation of long LDPC codes on partial response channels,” in Proc. Globecom 2007, pp. 259-264, Nov. 2007.

    Abstract: The presence of error floor in low density parity check (LDPC) codes is of great concern for potential applications of LDPC codes to data storage channels, which require the error correcting code (ECC) to maintain the near-capacity error correcting performance at frame error rate as low as 10^-12. In order to investigate the error floor of LDPC codes under partial response channels used in data storage systems, we propose a new estimation method combining analytical tools and simulation, based on the concept of trapping sets. The definition of trapping sets is based on the dominant error patterns observed in the decoding process. The goal is to accurately estimate the error rate in the error floor region for certain types of LDPC codes under the partial response channel and further extend the frame error rate down to 10^-14 or lower. Towards this goal, we first use field programmable gate array (FPGA) hardware simulation to find the trapping sets that cause the decoding failure in the error floor region. For each trapping set, we extract the parameters which are key to the decoding failure rate caused by this trapping set. Then we use a much simpler in situ hardware simulation with these parameters to obtain the conditional decoding failure rate. By considering all the trapping sets we find, we obtain the overall frame error rate in the error floor region. The estimation results for a length-4623 QC-LDPC code under the EPR4 channel are within 0.3 dB of the direct simulation results. In addition, this method allows us to estimate the frame error rate of an LDPC code down to 10^-14 or lower.

  • 2006 Best Paper Award on Data Storage Area

    Awardees: Jing Jiang and Krishna R. Narayanan

    Paper: “Iterative Soft-Input Soft-Output Decoding of Reed–Solomon Codes by Adapting the Parity-Check Matrix,” IEEE Transactions on Information Theory, vol. 52, no. 8, pp. 3746-3756, August 2006.

    Abstract: An iterative algorithm is presented for soft-input soft-output (SISO) decoding of Reed-Solomon (RS) codes. The proposed iterative algorithm uses the sum-product algorithm (SPA) in conjunction with a binary parity-check matrix of the RS code. The novelty is in reducing a submatrix of the binary parity-check matrix that corresponds to less reliable bits to a sparse nature before the SPA is applied at each iteration. The proposed algorithm can be geometrically interpreted as a two-stage gradient descent with an adaptive potential function. This adaptive procedure is crucial to the convergence behavior of the gradient descent algorithm and, therefore, significantly improves the performance. Simulation results show that the proposed decoding algorithm and its variations provide significant gain over hard-decision decoding (HDD) and compare favorably with other popular soft-decision decoding methods.

  • 2006 Best Student Paper Award on Data Storage Area

    Awardees: Riccardo Pighi, Riccardo Raheli, and Umberto Amadei

    Paper: “Multidimensional Signal Processing and Detection for Storage Systems with Data-Dependent Transition Noise,” IEEE Transactions on Magnetics, vol. 42, no. 7, pp. 1905-1916, July 2006.

    Abstract: In the last decade, significant research on detection algorithms capable of mitigating the effects of colored Gaussian thermal noise and transition noise in storage systems has been performed. In this paper, we present a new detection scheme based on a multidimensional detector front end and multidimensional linear prediction, applied to maximum a posteriori probability (MAP) sequence detection. This method improves the bit-error-rate (BER) performance with respect to previous approaches and makes the detector quite insensitive to transition noise. We show that the gain in terms of BER versus signal-to-noise ratio with our detector increases with the user density. The results obtained for a magnetic storage channel are extendable to optical storage systems as well.

  • IEEE ICC 2007 Best Paper Award in Signal Processing and Coding for Storage

    Awardees: Richard Todd and J. R. Cruz

    Paper: “Computing Maximum-Likelihood Bounds for Reed-Solomon Codes over Partial Response Channels,” University of Oklahoma, presented at IEEE ICC 2007.

  • Signal Processing for Storage 2005 Best Paper Award

    Awardees: Aleksandar Kavcic, Xiao Ma, Nedeljko Varnica

    Paper: “Matched Information Rate Codes for Partial Response Channels,” IEEE Transactions on Information Theory, vol. 51, pp. 973-989, March 2005.

    Abstract: In this paper, we design capacity-approaching codes for partial response channels. The codes are constructed as concatenations of inner trellis codes and outer low-density parity-check (LDPC) codes. Unlike previous constructions of trellis codes for partial response channels, we disregard any algebraic properties (e.g., the minimum distance or the run-length limit) in our design of the trellis code. Our design is purely probabilistic in that we construct the inner trellis code to mimic the transition probabilities of a Markov process that achieves a high (capacity-approaching) information rate. Hence, we name it a matched information rate (MIR) design. We provide a set of five design rules for constructions of capacity-approaching MIR inner trellis codes. We optimize the outer LDPC code using density evolution tools specially modified to fit the superchannel consisting of the inner MIR trellis code concatenated with the partial response channel. Using this strategy, we design degree sequences of irregular LDPC codes whose noise tolerance thresholds are only fractions of a decibel away from the capacity. Examples of code constructions are shown for channels both with and without spectral nulls.

  • Signal Processing for Storage 2005 Best Student Paper Award

    Awardee: Shaohua Yang

    Paper: S. Yang, A. Kavcic, S. Tatikonda, “The Feedback Capacity of Finite-State Machine Channels,” IEEE Transactions on Information Theory, vol. 51, pp. 799-810, March 2005.

    Abstract: We consider a finite-state machine channel with a finite memory length (e.g., finite length intersymbol interference channels with finite input alphabets-also known as partial response channels). For such a finite-state machine channel, we show that feedback-dependent Markov sources achieve the feedback capacity, and that the required memory length of the Markov process matches the memory length of the channel. Further, we show that the whole history of feedback is summarized by the causal posterior channel state distribution, which is computed by the sum-product forward recursion of the Bahl-Cocke-Jelinek-Raviv (BCJR) (Baum-Welch, discrete-time Wonham filtering) algorithm. These results drastically reduce the space over which the optimal feedback-dependent source distribution needs to be sought. Further, the feedback capacity computation may then be formulated as an average-reward-per-stage stochastic control problem, which is solved by dynamic programming. With the knowledge of the capacity-achieving source distribution, the value of the capacity is easily estimated using Markov chain Monte Carlo methods. When the feedback is delayed, we show that the feedback capacity can be computed by similar procedures. We also show that the delayed feedback capacity is a tight upper bound on the feedforward capacity by comparing it to tight existing lower bounds. We demonstrate the applicability of the method by computing the feedback capacity of partial response channels and the feedback capacity of run-length-limited (RLL) sequences over binary symmetric channels (BSCs).

  • Past Winners of Best Student Paper Award
  • Year 2004: Zheng Zhang

    Title: “Achievable information rates and coding for MIMO systems over ISI channels and frequency-selective fading channels,” IEEE Trans. Commun., vol. 52, pp. 1698-1710, Oct. 2004.

    Abstract: We introduce a simulation-based method to compute the information rates of intersymbol interference (ISI) channels with additive colored Gaussian noise and/or signal-dependent Gaussian noise when the inputs are binary and independent identically distributed. The method extends the idea advanced by Arnold and Loeliger (2001), which focuses on the ISI channels with additive white Gaussian noise (AWGN). With the new method, we can compute the information rates of the Lorentzian channel with media noise that represents a suitable model for practical magnetic recording channels. We illustrate the use of the technique via several examples and show that media noise is preferable to AWGN in terms of the achievable information rates, at low signal-to-noise ratios, in particular. We also present examples of turbo codes for Lorentzian channels with media noise and compare the performance with the achievable information rates. The results demonstrate, not surprisingly, that improved detectors are necessary to achieve the channel capacity for magnetic recording channels when the media noise is dominant.

  • Year 2001: Dieter Arnold

    Title: “On the information rate of binary-input channels with memory,” ICC 2001, Helsinki, Finland.

    Abstract: The entropy rate of a finite-state hidden Markov model can be estimated by forward sum-product trellis processing (i.e., the forward recursion of the Baum-Welch/BCJR algorithm) of simulated model output data. This can be used to compute information rates of binary-input AWGN channels with memory.