_D 05.05.2006 AU Dr Frederique Oggier, Caltech TI   AB Recently, attention has been focused on wireless networks. Methods that use the antennas of different users cooperatively have been studied, with the goal of obtaining the diversity known to be achievable in the point-to-point multiple-antenna case. We consider a wireless network in which a transmitter and a receiver are distinguished: they have enough computational capacity and power to encode/decode the data, and they use several antennas to transmit/receive the signal. The other nodes are relays, small devices with little power and limited computational ability, each with only one antenna (as, for example, in sensor networks). In this scenario, it is not realistic to assume that the nodes can decode the information. We therefore propose a strategy based on the idea of Distributed Space-Time Coding, in which the nodes perform only a very simple operation before forwarding the information. Our scheme relies on known algebraic Space-Time Codes. It is suitable for any number of transmit/receive antennas and nodes. It is shown to perform better than random codes, with less encoding complexity. We also briefly discuss the scenario where nodes in the network may fail, and show that our strategy is resistant to node failures. _F TBA

_D 28.04.2006 AU Dr Daniel Bleichenbacher, Bell Labs TI New Attacks on RSA with Small Secret CRT-Exponents AB It is well-known that there is an efficient method for decrypting/signing with RSA when the secret exponent d is small modulo p-1 and q-1. We call such an exponent d a small CRT-exponent. It is one of the major open problems in attacking RSA whether there exists a polynomial time attack for small CRT-exponents, i.e., a result that can be considered an equivalent of the Wiener and Boneh-Durfee bounds for small d. At Crypto 2002, May presented a partial solution in the case of an RSA modulus N=pq with unbalanced prime factors p and q. In this talk we present two improved algorithms for breaking RSA with small CRT-exponents. _F TBA
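The attraction of small CRT-exponents is computational: decryption needs only two short exponentiations, one modulo p and one modulo q. A minimal sketch of this standard technique (toy textbook parameters, not cryptographically secure, and unrelated to the attack itself):

```python
# RSA decryption via the Chinese Remainder Theorem: only the small
# CRT-exponents d_p = d mod (p-1) and d_q = d mod (q-1) are used.
p, q = 61, 53            # toy primes; real moduli are 1024+ bits
n = p * q
e = 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # full secret exponent (Python 3.8+ modular inverse)
d_p, d_q = d % (p - 1), d % (q - 1)

def decrypt_crt(c):
    m_p = pow(c, d_p, p)                     # partial result mod p
    m_q = pow(c, d_q, q)                     # partial result mod q
    # Garner recombination: m = m_q + q * ((m_p - m_q) * q^{-1} mod p)
    h = (pow(q, -1, p) * (m_p - m_q)) % p
    return m_q + h * q

m = 1234
c = pow(m, e, n)
assert decrypt_crt(c) == m
```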

_D 15.12.2005 AU Cyril Measson, LTHC TI (G)EXIT: Entry point to MAP decoding AB There is a fundamental relationship between belief propagation (BP) and
 maximum a posteriori (MAP) decoding which is reminiscent of Maxwell's
 construction in thermodynamics. BP and MAP decoding are connected
 through a common object, the (G)EXIT function. The (general)
 area theorem is the central element of this theory. As a main
 application of the area theorem, it can be shown that
 a Maxwell-type construction determines the MAP threshold from the BP
 (G)EXIT curve. But there are many other potential applications of
 (G)EXIT curves. (G)EXIT analysis turns out to be an efficient
 machinery which, for example, makes it possible to extend to general
 channels the matching condition already known for the erasure channel.
 This condition shows, using only quantities that appear naturally in
 the context of iterative coding, that transmission above capacity is
 not possible.
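For the erasure channel the area theorem takes a particularly simple form: the area under the MAP EXIT curve equals the code rate. A small numerical check using the standard closed-form EXIT functions of two simple codes on the BEC (these formulas are textbook results, not taken from the talk):

```python
# EXIT functions on the BEC with erasure probability eps:
#   [n,1] repetition code:         h(eps) = eps^(n-1)        (rate 1/n)
#   [n,n-1] single parity check:   h(eps) = 1-(1-eps)^(n-1)  (rate (n-1)/n)
# The area theorem predicts: integral_0^1 h(eps) d eps = code rate.
n = 5
N = 100000                                   # grid points for integration

def area(h):
    # trapezoidal rule on a uniform grid over [0, 1]
    s = sum(h(i / N) for i in range(1, N)) + 0.5 * (h(0.0) + h(1.0))
    return s / N

h_rep = lambda e: e ** (n - 1)               # repetition code EXIT curve
h_spc = lambda e: 1.0 - (1.0 - e) ** (n - 1) # single-parity-check EXIT curve

a_rep = area(h_rep)
a_spc = area(h_spc)
print(a_rep)   # ~0.2 = rate 1/5
print(a_spc)   # ~0.8 = rate 4/5
```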

_F cyril1205.pdf

_D 01.12.2005 AU Professor Alex Vardy, UC San Diego TI Asymptotic Improvements of the Gilbert-Varshamov and Minkowski Bounds for Codes and Sphere Packings

AB Given positive integers $n$ and $d$, let $A_2(n,d)$ denote the maximum 
 size of a binary code of length $n$ and minimum distance $d$.  
 The well-known Gilbert-Varshamov bound asserts that 
 $A_2(n,d) \geq 2^n/V(n,d-1)$, where $V(n,d) = \sum_{i=0}^d {n \choose i}$ 
 is the volume of a Hamming sphere of radius $d$.  We show that, in fact, 
 there exists a positive constant $c$ such that 
 $A_2(n,d) \geq  c \frac{2^n}{V(n,d-1)} \log_2 V(n,d-1)$. This exceeds the 
 Gilbert-Varshamov bound by a factor that is linear in the length $n$. 
 The result follows by recasting the Gilbert-Varshamov bound into a 
 graph-theoretic framework and using the fact that the corresponding graph 
 is locally sparse.  This result has several interesting generalizations. 
 In particular, we have recently used the same approach to improve upon the 
 classical Minkowski bound on the density of sphere-packings in the Euclidean
 space $\mathbb{R}^n$ by a factor that is linear in $n$.  Unlike previous such 
 improvements, our proof is "constructive" in a certain well-defined sense. 
 The new bound also gives rise to many interesting questions.  For example, 
 it is well known that random linear codes achieve the GV bound with 
 probability approaching 1 for large $n$. Are they likely to achieve the new 
 bound with high probability?  We have recently settled this question in the 
 negative.  Thus we now have a proof that there exist binary codes that are 
 asymptotically better than random codes. *joint work with Tao Jiang 
 (University of Miami), Michael Krivelevich  and Simon Litsyn (Tel-Aviv 
 University) 
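The quantities in the bound are straightforward to evaluate; a small numerical sketch (the constant $c$ is not specified by the theorem, so only the guaranteed log-factor improvement is computed, for illustrative parameters of my choosing):

```python
from math import comb, log2

# V(n, d) = sum_{i=0}^{d} C(n, i): volume of a Hamming ball of radius d.
def V(n, d):
    return sum(comb(n, i) for i in range(d + 1))

n, d = 128, 15
gv_bound = 2 ** n // V(n, d - 1)   # classical Gilbert-Varshamov guarantee
log_factor = log2(V(n, d - 1))     # extra factor in the new bound, up to c

print(gv_bound)                    # size guaranteed by the GV bound
print(log_factor)                  # multiplicative improvement (linear in n)
```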

_F notyet

_D 17.11.2005 AU Dr Henry Pfister, LTHC TI Capacity Achieving Codes with Bounded Complexity AB Error-correcting codes which employ iterative decoding algorithms are now
 considered state of the art in communications.  There is a large 
 collection of code families which achieve a small gap to capacity with 
 feasible decoding complexity.  Examples are low-density parity-check (LDPC) 
 codes, irregular repeat-accumulate (IRA) codes, and Raptor codes. For each 
 of these code families, one can construct code sequences which provably 
 achieve capacity on the binary erasure channel (BEC).  In each case, 
 however, the decoding complexity becomes unbounded as the gap to capacity 
 vanishes. This talk will focus on recently constructed code families whose 
 complexity remains bounded as the gap to capacity vanishes. Assuming only 
 basic knowledge of LDPC codes, three closely related ensembles will be 
 described: IRA codes, accumulate-repeat-accumulate (ARA) codes, and 
 accumulate-LDPC (ALDPC) codes.  Using the duality between these ensembles 
 and a simplified approach to density evolution, we will construct a variety 
 of codes which achieve capacity with bounded complexity. This is joint work 
 with Igal Sason and Ruediger Urbanke.
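On the BEC, density evolution reduces to a one-dimensional recursion. As a minimal illustration of the machinery (for a regular (3,6) LDPC ensemble, not one of the bounded-complexity ensembles from the talk), the BP threshold can be located by bisection:

```python
# Density evolution for a regular (dv, dc) LDPC ensemble on the BEC:
# x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1), where x_l is the erasure
# probability of variable-to-check messages.  Decoding succeeds iff x_l -> 0.
def bp_succeeds(eps, dv=3, dc=6, iters=2000, tol=1e-12):
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

# Bisection for the largest eps at which BP decoding still succeeds.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if bp_succeeds(mid):
        lo = mid
    else:
        hi = mid
print(lo)   # ~0.4294, the known BP threshold of the (3,6) ensemble
```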

_F NA18.pdf

_D 31.10.2005 AU Dr Michael Luby, Digital Fountain TI Application-layer FEC erasure codes and cellular multicast/broadcast standards AB There has recently been, and continues to be, a large amount of work in
 cellular commercial standards focused on multimedia delivery using 
 broadcast and multicast channels.  The reasons for these standardization 
 efforts are self-evident: as cellular services become ever more popular 
 and cellular multimedia applications grow richer, the available bandwidth
 for delivering these services remains fairly limited, and thus 
 broadcast/multicast delivery is the only viable solution operators see to 
 this obvious revenue-threatening bottleneck.  Examples of such commercial 
 standardization efforts include 3GPP MBMS, DVB-H IPDatacast, 3GPP2 BCMCS and
 DMB. One of the primary challenges in using broadcast/multicast channels for
 delivering multimedia is to provide a reliable solution that efficiently 
 uses the channel.  For cellular unicast delivery, there are a number of 
 methods to assure reliability and efficiency, including automatic modulation
 and error-correction adjustments at the physical layer, automatic 
 retransmission  of lost radio packets, and automatic adjustments of transmit
 power, all based on continual feedback from each individual receiver.  For 
 cellular broadcast/multicast delivery, none of these methods are applicable,
 and the primary method of providing reliability instead is the use of 
 application-layer FEC codes. In this talk we describe some of the 
 broadcast/multicast services to be offered and specifically how 
 application-layer FEC codes provide reliability and channel efficiency for 
 each of the services.  We also describe in some detail Raptor R10, the 
 application-layer FEC code that has been chosen by 3GPP MBMS and DVB-H 
 IPDatacast, and that is under consideration in all of the remaining cellular
 standards that have not yet made a selection.
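As a toy illustration of the application-layer FEC principle, here is a random linear fountain code over GF(2): any set of roughly k linearly independent encoded symbols recovers the k source packets, regardless of which symbols were lost. (Actual Raptor R10 codes are structured for linear-time encoding and decoding; this sketch uses Gaussian elimination and is only meant to show the basic idea.)

```python
import random

random.seed(1)
k = 8                                        # number of source packets
source = [random.getrandbits(32) for _ in range(k)]

def encode():
    """One encoded symbol: XOR of a uniformly random nonempty subset."""
    mask = random.randrange(1, 1 << k)       # bit i set => source[i] included
    val = 0
    for i in range(k):
        if mask >> i & 1:
            val ^= source[i]
    return mask, val

def decode(symbols):
    """Gaussian elimination over GF(2); returns the sources, or None."""
    pivots = {}                              # leading bit -> (mask, value)
    for mask, val in symbols:
        while mask:
            lead = mask.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = (mask, val)
                break
            pm, pv = pivots[lead]
            mask, val = mask ^ pm, val ^ pv  # eliminate the leading bit
    if len(pivots) < k:
        return None                          # system not yet full rank
    out = [0] * k
    for lead in sorted(pivots):              # solve lowest leading bit first
        mask, val = pivots[lead]
        for j in range(lead):                # bits below lead already solved
            if mask >> j & 1:
                val ^= out[j]
        out[lead] = val
    return out

# The receiver collects symbols until the system becomes solvable; the
# overhead beyond k symbols is small with high probability.
received = []
decoded = None
while decoded is None:
    received.append(encode())
    decoded = decode(received)
assert decoded == source
```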

_F Luby1005.pdf