Min-Max Decoding for Non-Binary LDPC Codes (2008)
LTE: 3GPP TS 36.213 V8.6.0 (2009-03), Release 8 (Chinese edition)
Contents
Foreword
Copyright Notification
No part may be reproduced except as authorized by written permission. The copyright and the foregoing restriction extend to reproduction in all media.
© 2009, 3GPP Organizational Partners (ARIB, ATIS, CCSA, ETSI, TTA, TTC). All rights reserved.
UMTS™ is a Trade Mark of ETSI registered for the benefit of its members.
3GPP™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners.
LTE™ is a Trade Mark of ETSI currently being registered for the benefit of its Members and of the 3GPP Organizational Partners.
GSM® and the GSM logo are registered and owned by the GSM Association.
Hong Kong Identity Card Numbers
For this example d = (1 0 0)^T. The error occurred at the 4th position. D is called a parity-check matrix.
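The example above is fragmentary, so the following is a hedged reconstruction: it assumes the standard (7,4) Hamming arrangement in which column j of the parity-check matrix is j written in binary, so that the syndrome, read as a binary number, points to the error position (here (1 0 0)^T, i.e. binary 100 = 4). The matrix D itself is not shown in the text; this sketch uses that assumed arrangement.

```python
# Hedged reconstruction: assume a (7,4) Hamming parity-check matrix whose
# column j is j in binary; then the syndrome read as binary gives the
# position of a single-bit error.
H = [[(j >> k) & 1 for j in range(1, 8)] for k in (2, 1, 0)]

def error_position(received):
    # syndrome d = H r^T (mod 2)
    d = [sum(h * r for h, r in zip(row, received)) % 2 for row in H]
    return d[0] * 4 + d[1] * 2 + d[2]   # 0 means no single-bit error detected

received = [0, 0, 0, 1, 0, 0, 0]        # all-zero codeword with bit 4 flipped
print(error_position(received))          # → 4, syndrome d = (1 0 0)^T
```

With this column ordering the syndrome is literally the error position, which is why the text can read the position straight off d.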
Probability of Correct Transmission
Suppose we are transmitting a message of 4 digits over a binary channel with q = 0.9, where q is the probability that a single digit is received correctly. Then the probability that the whole message is received correctly is: no coding: q^4 = 0.6561; repetition code:
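The repetition-code figure is cut off in the source, so the sketch below assumes 3-fold repetition with per-digit majority voting; that assumption is mine, not taken from the text. The no-coding figure matches the q^4 = 0.6561 stated above.

```python
# Probability of receiving a 4-digit message correctly when each digit is
# correct with probability q = 0.9. The repetition-code case assumes 3-fold
# repetition with majority voting per digit (the source's figure is cut off).
q = 0.9

p_no_coding = q ** 4

# one digit survives 3-fold repetition if at least 2 of its 3 copies arrive
p_digit = q**3 + 3 * q**2 * (1 - q)
p_repetition = p_digit ** 4

print(round(p_no_coding, 4))   # → 0.6561
print(round(p_repetition, 4))  # → 0.8926
```

Under this assumption the repetition code lifts the message success probability from about 0.66 to about 0.89, at the cost of tripling the number of transmitted digits.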
Consider the identity card number X354670(?), where the check digit z is to be determined. The scheme requires

9(58) + 8(33) + 7(3) + 6(5) + 5(4) + 4(6) + 3(7) + 2(0) + z = 902 + z

to be divisible by 11, so z = 0. We can use modular arithmetic to simplify the computation. Writing the symbol values as α, β, a, b, c, d, e, f:

z ≡ -(9α + 8β + 7a + 6b + 5c + 4d + 3e + 2f) ≡ 2α + 3β + 4a + 5b + 6c + 7d + 8e + 9f (mod 11),

so

z ≡ 2(58) + 3(33) + 4(3) + 5(5) + 6(4) + 7(6) + 8(7) + 9(0)
  ≡ 2(3) + 3(0) + 1 + 3 + 2 + 9 + 1 + 0
  = 6 + 0 + 1 + 3 + 2 + 9 + 1 + 0 = 22 ≡ 0 (mod 11).

Hence X354670(0) is a valid Hong Kong identity card number.
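The whole computation can be sketched in a few lines, using the symbol values from the worked example above (58 and 33 for the two leading characters); any values congruent to these mod 11 give the same result. The function name is mine, for illustration only.

```python
# Check-digit computation for the mod-11 scheme above. `values` holds the
# eight weighted symbols in order, with weights 9 down to 2; the two leading
# characters take the values 58 and 33 as in the worked example.
def hkid_check_digit(values):
    total = sum(w * v for w, v in zip(range(9, 1, -1), values))
    return (-total) % 11        # a result of 10 is conventionally written 'A'

print(hkid_check_digit([58, 33, 3, 5, 4, 6, 7, 0]))  # → 0, so X354670(0)
```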
A 13-digit number a1 a2 ... a13 (the scheme used by EAN-13/ISBN-13 codes) is valid when

a1 + a3 + a5 + a7 + a9 + a11 + a13 + 3(a2 + a4 + a6 + a8 + a10 + a12) ≡ 0 (mod 10).
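The mod-10 condition above can be checked directly; here is a minimal sketch (the function name is mine), tried on the ISBN-13 of a real book, 978-0-306-40615-7.

```python
# Validity check for the mod-10 condition: odd positions (a1, a3, ...) carry
# weight 1 and even positions (a2, a4, ...) carry weight 3.
def check_mod10(digits):
    odd = sum(digits[0::2])     # a1, a3, ..., a13
    even = sum(digits[1::2])    # a2, a4, ..., a12
    return (odd + 3 * even) % 10 == 0

print(check_mod10([9, 7, 8, 0, 3, 0, 6, 4, 0, 6, 1, 5, 7]))  # → True
```

Changing any single digit breaks the congruence, which is exactly the single-error detection this check provides.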
Chapter 5: Lossless Coding
Theorem 5.1 (Kraft Inequality). A necessary and sufficient condition for the existence of a binary prefix code whose code words have lengths n1, n2, ..., nL is

  sum_{i=1}^{L} 2^(-n_i) <= 1.

Kraft theorem
- Question: find a real-time (instantaneous), uniquely decodable code.
- Method: study the partition condition that the code words must satisfy.
- Introduction: the concept of a "code tree".
- Conclusion: the condition for a real-time, uniquely decodable (prefix) code.

Each source symbol xi in {x1, x2, ..., xr} is assigned a code word of length li. How do we code losslessly? If the statistical characteristics of the source are ignored, the code lengths must satisfy the Kraft inequality above.

Code tree correspondences:
(3) Node ↔ part of a code word
(4) Terminal node ↔ end of a code word
(5) Number of branches ↔ code length
(6) Non-full branch tree ↔ variable-length code
(7) Full branch tree ↔ fixed-length code

Variable-length coding theorem (Shannon's first theorem)
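The Kraft condition can be checked mechanically, and whenever it holds a prefix code with the given lengths can be built greedily along the code tree. A minimal sketch (the function names are mine, not from the notes):

```python
# Kraft inequality test for binary code lengths, plus a greedy construction
# of a prefix code when the test passes: assign code words in order of
# increasing length, extending the previous value to each new length.
def kraft_sum(lengths):
    return sum(2 ** -n for n in lengths)

def prefix_code(lengths):
    assert kraft_sum(lengths) <= 1, "no binary prefix code exists"
    code, next_val, prev_len = {}, 0, 0
    for i, n in sorted(enumerate(lengths), key=lambda t: t[1]):
        next_val <<= (n - prev_len)        # extend to the new length
        code[i] = format(next_val, f"0{n}b")
        next_val += 1
        prev_len = n
    return code

print(kraft_sum([1, 2, 3, 3]))   # → 1.0, so a prefix code exists
print(prefix_code([1, 2, 3, 3])) # → {0: '0', 1: '10', 2: '110', 3: '111'}
```

The resulting code words are exactly the terminal nodes of a code tree, matching correspondences (3)-(7) above.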
SunSet™ T1 Test Set: T1 Specifications
SPECIFICATIONS

Connectors
  Bantam jacks (Eq Tx, Eq Rx, Fac Tx, Fac Rx)
  8-pin mini DIN RS232C serial port, DTE

Access
  Single Mode
    DSX Monitor: 100Ω
    Bridged Monitor: > 1000Ω
    Terminated: 100Ω
    Terminated Loop: 100Ω
    Bridged Loop: > 1000Ω
    DSX Monitor Loop: 100Ω
  Dual Mode
    Thru A/B, Split A/B, Split E/F, Loop E/F, Mon E/F
  Termination
    Thru, Split, Loop: 100Ω
    Mon: > 1000Ω

Transmitter
  Framing: SF-D4, ESF, SLC-96, T1DM
  Coding: AMI, B8ZS
  Line Build Out (LBO): 0, 7.5, 15 dB
  DSX pre-equalization: 0 to 655 ft, 133 ft per step
  Clock: internal (1.544 MHz ± 5 ppm), looped, external
  Pulse shape to Telcordia TR-TSY-000499; reference: G.703, CB113, CB119, CB132, CB143, PUB62508, PUB62411

Transmit Patterns
  Repeating: 3 in 24, 1 in 8 (1:7), all 1s, 1 in 16, 55 octet, alt 1010, all 0s, T1-T6, DDS1-DDS6
  User-programmable pattern, 1 to 2048 bits; store up to 10 programmable patterns with alphanumeric names
  Pseudorandom: QRS, PRBS, n = 6, 7, 9, 11, 15, 20, 23
  Test pattern inversion
  Insert errors: BPV, logic, frame errors; programmable error burst 1 to 9999 counts, or error rate 2 x 10^-3 to 1 x 10^-9

Receiver
  Input sensitivity
    Terminate, Bridge: +6 to -36 dB cable loss
    DSX MON: -15 to -30 dB, resistive
  Coding: AMI, B8ZS, auto
  Framing: SF, ESF, SLC-96, T1DM, auto frame
  Frequency range: 1542 kHz to 1546 kHz
  Auto pattern synchronization; received pattern sync independent of transmitted pattern
  Programmable loss-of-frame criteria, error averaging interval

Basic Measurements
  Summary Measurements: elapsed time, remaining time, framing, line coding, transmitted pattern, received pattern, BPV count and rate, bit error count and rate, framing bit error count, pulse level (dB), CRC-6 block error count, line frequency, errored second count and percent, severely errored second count and percent, error-free second percent, available second percent, unavailable second count and percent
  Logical Error Measurements: bit error count and current rate, average bit error rate since start, bit slips, bit errored seconds and percent, severely bit errored seconds and percent, available seconds and percent, unavailable seconds and percent, degraded minutes count and percent, loss-of-sync seconds count and percent
  Signal Measurements: signal available seconds count and percent, loss-of-signal seconds count and percent, low-density seconds count, excess-0s seconds count, AIS seconds count, signal unavailable seconds percent
  Simplex current: 1 to 150 mA, ± 1 mA ± 5%
  Receive bit rate: 1542 to 1546 kbps, ± 1 bps ± clock source accuracy, external or internal clock
  Receive level (volts and dBdsx)
    Peak to peak: 60 mV to 15 V, ± 10 mV ± 5%
    Positive pulse: 30 mV to 7.5 V, ± 10 mV ± 5%
    Negative pulse: -30 mV to -7.5 V, ± 10 mV ± 5%
  Line Error Measurements: BPV count and rate (current and average), BPV errored seconds count and percent, BPV SES count and percent, BPV AS count and percent, BPV UAS count and percent, BPV degraded minutes count and percent
  Path - Frame Measurements: frame bit error count and rate (current and average), frame slip count, OOF second count, COFA count, frame sync loss seconds, yellow alarm second count, frame errored second count and percent, frame severely errored second count and rate, frame available second count and percent, frame unavailable second count and percent
  Path - CRC-6 Measurements: CRC-6 block error count and rate (current and average), CRC-6 errored second count and percent, CRC-6 severely errored second count and percent, CRC-6 available second count and percent, CRC-6 unavailable second count and percent
  Frequency Measurements: moving bar graph of slip rate, received signal frequency, max frequency, min frequency, clock slips, frame slips, max positive wander, max negative wander

Other Measurements
  View Received Data: view T1 data in binary, hex, ASCII; shows data in bytes by time slot, 8 time slots per display page; captures 256 consecutive time slots as test pattern
  Propagation Delay: measure round-trip propagation delay in unit intervals ± 1 UI, with translation to microseconds and one-way distance over cable
  Quick Test I and II: 2 programmable automated loopback tests that save time when performing standardized acceptance tests
  Bridge Tap: automated transmission and measurement of 21 different patterns to identify possible bridge taps at some point on the line

Loopbacks
  Loopback Control, In-band: CSU, NIU, 100000; 10 programmable user patterns, 1 to 32 bits
  Loopback Control, ESF Facility Data Link: Payload, Line, Network; 10 programmable user patterns, 1 to 32 bits
  Westell & Teltrend Looping Devices Control (SW1010): automated looping of Westell and Teltrend line and central office repeaters. Includes SF and ESF modes, arm, loop up/down, loopback query, sequential loopback, power loop query, span power down/up, unblocking.

Voice Frequency Capability
  Monitor speaker with volume control; built-in microphone for talk
  View all 24 channels' A, B (C, D) bits
  Control A, B (C, D) bits (E&M ground/loop start, FXO, FXS, on/off hook, wink)
  Generator: 404, 1004, 1804, 2713, 2804 Hz @ 0 dBm and -13 dBm
  DTMF dialing, 32 digits, 10 preprogrammable speed-dial numbers
  Programmable tone and interdigital period
  Companding law: µ-law
  Hitless drop and insert
  Programmable idle channel A, B (C, D) bits; selectable idle channel code, 7F or FF hex

VF Level, Frequency & Noise Measurement (SW111)
  Generator: 50 to 3950 Hz @ 1 Hz step; +3 to -60 dBm @ 1 dBm step
  Level, frequency measurements: 50 to 3950 Hz, +3 to -60 dBm
  Noise: 3 kHz flat, C-message, C-notch, S/N

MF/DTMF/DP Dialing, Decoding and Analysis (SW141)
  MF/DTMF/DP dialing
  Programmable DP %break and interdigital period @ 10 pps
  MF/DTMF decode up to 40 received digits; analyze number, high/low frequencies, high/low levels, twist, tone period, interdigital time
  DP decode up to 40 digits; analyze number, %break, pps, interdigital time

Signaling Analysis
  Live: graphical display of A, B (C, D) signaling state changes
  Trigger: programmable A, B (C, D) trigger state to start analysis on the opposite side
  MFR1: timing analysis of signaling transition states and decoding of dialed digits
  MFR1M: modified MFR1 CO switch signaling analysis
  MIXTONE: decode a signaling sequence that has both MF and DTMF digits

Fractional T1 (SW105, SW1010)
  Error measurements, channel configuration verification
  Nx64 kbps, Nx56 kbps, N = 1 to 24; sequential, alternating, or random channels
  Auto scan and auto configure to any FT1 order; scan for active channels
  Rx and Tx do not need to be the same channels
  Hitless drop and insert
  Programmable idle channel A, B (C, D) bits; selectable idle channel code, 7F or FF hex

ESF Facility Data Link (SW107, SW1010)
  Read and send T1.403 messages on the FDL (PRM and BOM); automatic HDLC protocol handling
  YEL ALM, LLB ACT, LLB DEA, PLB ACT, PLB DEA
  AT&T 54016 24-hour performance report retrieval
  T1.403 24-hour PRM collection per 15-minute interval

SLC-96 Data Link (SW107, SW1010)
  Send and receive messages; WP1, WP1B, NOTE formats
  Alarms, switch-to-protect, far-end loop
  To Telcordia TR-TSY-000008 specifications; SLC-96 FEND loop

CSU/NI Emulation (SW106, SW1010)
  Bidirectional (equipment and facility directions); CSU/NI replacement emulation
  Responds to loopback commands, in-band and datalink
  Graphic indication of incoming signal status in both directions; simultaneous display of T1 line measurements
  Automatic generation of AIS
  Loopbacks: facility: line and payload loopback; equipment: line loopback; simultaneous loopbacks in both directions; local and remote loopback control

Remote Control (SW100)
  VT100 emulation with the same graphical interface used by the test set
  Circuit status table provides current and historical information on test set LEDs
  Uses the test set's serial port at 9600 baud, 8-pin mini DIN; the serial port cannot be connected to a printer during remote control

Westell PM NIU and MSS (SW120)
  Supports Westell performance-monitoring network interface unit and maintenance switch system with ramp
  Set/query NIU time and date; query performance data by hour or all
  Reset performance registers; read data over the ramp line; perform the maintenance switch function for Westell and Teltrend

Pulse Mask Analysis (SW130)
  Scan period: 800 ns
  Measurements: pass/fail, rise time (ns), fall time (ns), pulse width (ns), %overshoot, %undershoot
  Resolution: 1 ns or 1%, as applicable
  Masks: ANSI T1.102, T1.403, AT&T CB119, Pub 62411
  Pulse/mask display: test set screen and SS118 printer

DDS Basic Package (SW170)
  Choose receive and transmit time slots independently
  Test rates: 2.4, 4.8, 9.6, 19.2, 56, 64 kbps
  Patterns: 2047, 511, 127, 63, all 1s, all 0s, DDS-1 through DDS-6, 8-bit user
  Loopbacks: latching, interleaved, CSU, DSU, OCU, DS0-DP, 8-bit user
  Measurements: bit errors, bit error rate
  Control code send/receive: abnormal, mux out of sync, idle
  Access mode: loopback tests require intrusive access to the T1

Teleos & Switched 56 Tests (SW144)
  Switched 56 call setup: supervision and dialing
  Send test patterns: 2047, 511, 127, 63, all 1s, all 0s, FOX, DDS1-6, USER
  Bit error and bit error rate measurement
  Teleos signaling sequence timing analysis and dialed digits decoding

GENERAL
  Operating temperature: 0°C to 50°C
  Operating humidity: 5% to 90%, noncondensing
  Storage temperature: -20°C to 70°C
  Size: 2.4" (max) x 4.2" (max) x 10.5"
  Weight: 2.7 lb [1.2 kg]
  Battery operation time: 2.5 hr nominal
  AC operation: 110 V/120 V @ 60 Hz, or 220 V/240 V @ 50/60 Hz

ORDERING INFORMATION

Test Set
  SS100  SunSet T1 chassis. Includes battery charger, user's manual, instrument stand. Software cartridge must be ordered separately. CLEI: T1TUW04HAA; CPR: 674488

Software Options
  SW1000  Software T1. Includes basic measurements, loopback control, test pattern send/receive, bridge tap, propagation delay, quick test. Also includes VF channel capabilities: talk/listen, view/control A, B (C, D), DTMF dialing, send 5 tones at 2 levels. CLEI: T1TUW01HAA; CPR: 674485
  SW1010  Software FT1. Includes all Software T1 features and adds fractional T1, Teltrend/Westell looping device control, CSU/NIU emulation, ESF/SLC-96 data link control. CLEI: T1TUW02HAA; CPR: 674486
  SW100  Remote Control. Graphical, menu-driven VT100 emulation. Includes SS115 & SS122.
  SW105  Fractional T1. Purchased with SW1000 only.
  SW106  CSU/NIU Emulation. Purchased with SW1000 only.
  SW107  ESF & SLC-96 Data Link Send and Receive. Purchased with SW1000 only.
  SW111  VF Level, Frequency & Noise Measurement
  SW120  Westell Maintenance Switch, PM NIU, RAMP. Purchased with SW1010 only.
  SW130  Pulse Mask Analysis
  SW141  MF/DTMF/DP Dialing, Decoding, and Analysis
  SW144  Teleos/Northern Switched 56 Tests
  SW170  Basic DDS Package

Accessories
  SS101  Carrying Case
  SS104  Cigarette Lighter Battery Charger
  SS105  Repeater Extender
  SS106  Single Bantam to Single Bantam Cable, 6'
  SS107  Dual Bantam to Dual Bantam Cable, 6'
  SS108  Single Bantam to Single 310 Cable, 6'
  SS109  Single Bantam to Probe Clip Cable, 6'
  SS110  Dual Bantam to 15-pin D Connector Cable, Male, 6'
  SS111  Dual Bantam to 15-pin D Connector Cable, Female, 6'
  SS112  Dual Bantam to 8-position Modular Plug Cable, 6'
  SS113A  AC Battery Charger, 120 VAC
  SS113B  AC Battery Charger, 110 VAC
  SS114  SunSet T1 User's Manual
  SS115  DIN-8 to RS232C Printer Cable
  SS115B  DIN-8 to DB-9 Printer Cable
  SS116  Instrument Stand
  SS117A  Printer Paper, 5 rolls, for SS118B/C
  SS118B  High-Capacity Thermal Printer with 110 VAC charger. Includes SS115B.
  SS118C  High-Capacity Thermal Printer with 220 VAC charger. Includes SS115B.
  SS121A  SunSet AC Charger, 230 VAC, 50/60 Hz, European-style connector
  SS121B  SunSet AC Charger, 220 VAC, 50/60 Hz, 3-prong IEC connector
  SS121C  SunSet AC Charger, 240 VAC, 50/60 Hz, 3-prong IEC connector
  SS122  Null Modem Adapter, DB-25
  SS122A  Null Modem Adapter, DB-9
  SS123A  SunSet Jacket
  SS125  SunSet T1 Training Tape, English
  SS130A  Removable SunSet Rack Mount, 19"/23"
  SS130B  Permanent SunSet Rack Mount, 19"/23"
  SS132  Two Single Bantams to 4-position Modular Plug Cable

Note: Specifications subject to change without notice.
© 2001 Sunrise Telecom Incorporated. All rights reserved. Printed in USA.
Code of Practice for Foamed (Air-Bubble Mixed) Lightweight Soil

Contents
1. General provisions
2. Terms and symbols
   2.1 Terms
   2.2 Symbols
3. Materials and properties
   3.1 Materials
   3.2 Properties
4. Design
   4.1 General provisions
   4.2 Performance design
   4.3 Structure design
   4.4 Subsidiary engineering design
   4.5 Design calculation
5. Mix proportion
   5.1 General provisions
   5.2 Mix proportion calculation
   5.3 Trial mix
   5.4 Mix proportion adjustment
6. Engineering construction
   6.1 Pouring preparation
   6.2 Pouring
   6.3 Subsidiary engineering construction
   6.4 Curing
7. Quality inspection and acceptance
   7.1 General provisions
   7.2 Quality inspection
   7.3 Quality acceptance
Appendix A: Foaming agent performance test
Appendix B: Wet density test
Appendix C: Adaptability test
Appendix D: Flow value test
Appendix E: Air-dry density and saturated density test
Appendix F: Compressive strength and saturated compressive strength test
Appendix G: Tables for quality inspection and acceptance
Explanation of wording in this code
List of cited standards
Explanation of provisions

1 General Provisions
1.0.1 This code is formulated to standardize the design and construction of foamed lightweight soil, to unify quality inspection standards, and to ensure that foamed lightweight soil fill works are safe and serviceable, technically advanced, and economical.
Low Latency Decoding of EG LDPC Codes
Publication History: 1. First printing, TR-2005-036, June 2005
Juntan Zhang, Jonathan S. Yedidia and Marc P. C. Fossorier
MITSUBISHI ELECTRIC RESEARCH LABORATORIES
This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved.

Copyright © Mitsubishi Electric Research Laboratories, Inc., 2005
201 Broadway, Cambridge, Massachusetts 02139
Research Statement
Parikshit Gopalan

My research focuses on fundamental algebraic problems such as polynomial reconstruction and interpolation arising from various areas of theoretical computer science. My main algorithmic contributions include the first algorithm for list-decoding a well-known family of codes called Reed-Muller codes [13], and the first algorithms for agnostically learning parity functions [3] and decision trees [11] under the uniform distribution. On the complexity-theoretic side, my contributions include the best-known hardness results for reconstructing low-degree multivariate polynomials from noisy data [12] and the discovery of a connection between representations of Boolean functions by polynomials and communication complexity [2].

1 Introduction

Many important recent developments in theoretical computer science, such as probabilistic proof checking, deterministic primality testing and advancements in algorithmic coding theory, share a common feature: the extensive use of techniques from algebra. My research has centered around the application of these methods to problems in coding theory, computational learning, hardness of approximation and Boolean function complexity. While at first glance these might seem like four research areas that are not immediately related, there are several beautiful connections between them. Perhaps the best illustration of these links is the noisy parity problem, where the goal is to recover a parity function from a corrupted set of evaluations. The seminal Goldreich-Levin algorithm solves a version of this problem; this result initiated the study of list-decoding algorithms for error-correcting codes [5]. An alternate solution is the Kushilevitz-Mansour algorithm [19], which is a crucial component in algorithms for learning decision trees and DNFs [17]. Håstad's ground-breaking work on the hardness of this problem has revolutionized our understanding of inapproximability [16]. All these results rely on insights into the Fourier structure
of Boolean functions. As I illustrate below, my research has contributed to a better understanding of these connections, and yielded progress on some important open problems in these areas.

2 Coding Theory

The broad goal of coding theory is to enable meaningful communication in the presence of noise, by suitably encoding the messages. The natural algorithmic problem associated with this task is that of decoding, or recovering the transmitted message from a corrupted encoding. The last twenty years have witnessed a revolution with the discovery of several powerful decoding algorithms for well-known families of error-correcting codes. A key role has been played by the notion of list-decoding, a relaxation of the classical decoding problem where we are willing to settle for a small list of candidate transmitted messages rather than insisting on a unique answer. This relaxation allows one to break the classical half-the-minimum-distance barrier for decoding error-correcting codes. We now know powerful list-decoding algorithms for several important code families; these algorithms have also made a huge impact on complexity theory [5, 15, 23].

List-Decoding Reed-Muller Codes: In recent work with Klivans and Zuckerman, we give the first such list-decoding algorithm for a well-studied family of codes known as Reed-Muller codes, obtained from low-degree polynomials over the finite field F2 [13]. The highlight of this work is that our algorithm is able to tolerate error-rates much higher than what is known as the Johnson bound in coding theory. Our results imply new combinatorial bounds on the error-correcting capability of these codes. While Reed-Muller codes have been studied extensively in both the coding theory and computer science communities, our result is the first to show that they are resilient to remarkably high error-rates. Our algorithm is based on a novel view of the Goldreich-Levin algorithm as a reduction from list-decoding to unique-decoding; our view readily extends to polynomials of arbitrary degree over any field. Our result complements recent work on the Gowers norm, showing that Reed-Muller codes are testable up to large distances [21].

Hardness of Polynomial Reconstruction: In the polynomial reconstruction problem, one is asked to recover a low-degree polynomial from its evaluations at a set of points, where some of the values could be incorrect. The reconstruction problem is ubiquitous in both coding theory and computational learning; both the noisy parity problem and the Reed-Muller decoding problem are instances of it. In joint work with Khot and Saket, we address the complexity of this problem and establish the first hardness results for multivariate polynomials of arbitrary degree [12]. Previously, the only hardness known was for degree 1, which follows from the celebrated work of Håstad [16]. Our work introduces a powerful new algebraic technique called global folding which allows one to bypass a module called consistency testing that is crucial to most hardness results. I believe this technique will find other applications.

Average-Case Hardness of NP: Algorithmic advances in decoding of error-correcting codes have helped us gain a deeper understanding of the connections between worst-case and average-case complexity [23, 24]. In recent work with Guruswami, we use this paradigm to explore the average-case complexity of problems in NP against algorithms in P [8]. We present the first hardness amplification result in this setting by giving a construction of an error-correcting code where most of the symbols can be recovered correctly from a corrupted codeword by a deterministic algorithm that probes very few locations in the codeword. The novelty of our work is that our decoder is deterministic, whereas previous algorithms for this task were all randomized.

3 Computational Learning

Computational learning aims to understand the algorithmic issues underlying how we learn from examples, and to explore how the complexity of learning is influenced by factors such
as the ability to ask queries and the possibility of incorrect answers. Learning algorithms for a concept class typically rely on understanding the structure of that class, which naturally ties learning to Boolean function complexity. Learning in the presence of noise has several connections to decoding from errors. My work in this area addresses the learnability of basic concept classes such as decision trees, parities and halfspaces.

Learning Decision Trees Agnostically: The problem of learning decision trees is one of the central open problems in computational learning. Decision trees are also a popular hypothesis class in practice. In recent work with Kalai and Klivans, we give a query algorithm for learning decision trees with respect to the uniform distribution on inputs in the agnostic model: given black-box access to an arbitrary Boolean function, our algorithm finds a hypothesis that agrees with it on almost as many inputs as the best decision tree [11]. Equivalently, we can learn decision trees even when the data is corrupted adversarially; this is the first polynomial-time algorithm for learning decision trees in a harsh noise model. Previous decision-tree learning algorithms applied only to the noiseless setting. Our algorithm can be viewed as the agnostic analog of the Kushilevitz-Mansour algorithm [19]. The core of our algorithm is a procedure to implicitly solve a convex optimization problem in high dimensions using approximate gradient projection.

The Noisy Parity Problem: The noisy parity problem has come to be widely regarded as a hard problem. In work with Feldman et al., we present evidence supporting this belief [3]. We show that in the setting of learning from random examples (without queries), several outstanding open problems such as learning juntas, decision trees and DNFs reduce to restricted versions of the problem of learning parities with random noise. Our result shows that in some sense noisy parity captures the gap between learning from random examples and learning with queries, as it is believed to be hard in the former setting and is known to be easy in the latter. On the positive side, we present the first non-trivial algorithm for the noisy parity problem under the uniform distribution in the adversarial noise model. Our result shows that, somewhat surprisingly, adversarial noise is no harder to handle than random noise.

Hardness of Learning Halfspaces: The problem of learning halfspaces is a fundamental problem in computational learning. One could hope to design algorithms that are robust even in the presence of a few incorrectly labeled points. Indeed, such algorithms are known in the setting where the noise is random. In work with Feldman et al., we show that the setting of adversarial errors might be intractable: given a set of points where 99% are correctly labeled by some halfspace, it is NP-hard to find a halfspace that correctly labels even 51% of the points [3].

4 Prime versus Composite Problems

My thesis work focuses on new aspects of an old and famous problem: the difference between primes and composites. Beyond basic problems like primality and factoring, there are many other computational issues that are not yet well understood. For instance, in circuit complexity, we have excellent lower bounds for small-depth circuits with mod 2 gates, but the same problem for circuits with mod 6 gates is wide open. Likewise in combinatorics, set systems where the sizes of the sets need to satisfy certain modular conditions are well studied. Again the prime case is well understood, but little is known for composites. In all these problems, the algebraic techniques that work well in the prime case break down for composites.

Boolean function complexity: Perhaps the simplest class of circuits for which we have been unable to show lower bounds is small-depth circuits with And, Or and Mod m gates where m is composite; indeed this is one of the frontier open problems in circuit complexity. When m is prime, such bounds were proved by Razborov and
Smolensky [20, 22]. One reason for this gap is that we do not fully understand the computational power of polynomials over composites; Barrington et al. were the first to show that such polynomials are surprisingly powerful [1]. In joint work with Bhatnagar and Lipton, we solve an important special case: when the polynomials are symmetric in their variables [2]. We show an equivalence between computing Boolean functions by symmetric polynomials over composites and multi-player communication protocols, which enables us to apply techniques from communication complexity and number theory to this problem. We use these techniques to show tight degree bounds for various classes of functions where no bounds were known previously. Our viewpoint simplifies previously known results in this area, and reveals new connections to well-studied questions about Diophantine equations.

Explicit Ramsey Graphs: A basic open problem regarding polynomials over composites is: can asymmetry in the variables help us compute a symmetric function with low degree? I show a connection between this question and an important open problem in combinatorics, which is to explicitly construct Ramsey graphs, or graphs with no large cliques and independent sets [6]. While good Ramsey graphs are known to exist by probabilistic arguments, explicit constructions have proved elusive. I propose a new algebraic framework for constructing Ramsey graphs and show how several known constructions can all be derived from this framework in a unified manner. I show that all known constructions rely on symmetric polynomials, and that such constructions cannot yield better Ramsey graphs. Thus the question of symmetry versus asymmetry of variables is precisely the barrier to better constructions by such techniques.

Interpolation over Composites: A basic problem in computational algebra is polynomial interpolation, which is to recover a polynomial from its evaluations. Interpolation and related algorithmic tasks which are easy for primes become much harder, even intractable, over composites. This difference stems from the fact that over primes the number of roots of a polynomial is bounded by the degree, but no such theorem holds for composites. In lieu of this theorem I presented an algorithmic bound: I show how to compute a bound on the degree of a polynomial given its zero set [7]. I use this to give the first optimal algorithms for interpolation, learning and zero-testing over composites. These algorithms are based on new structural results about the zeroes of polynomials. These results were subsequently useful in ruling out certain approaches for better Ramsey constructions [6].

5 Other Research Highlights

My other research work spans areas of theoretical computer science ranging from algorithms for massive data sets to computational complexity. I highlight some of this work below.

Data Stream Algorithms: Algorithmic problems arising from complex networks like the Internet typically involve huge volumes of data. This has led to increased interest in highly efficient algorithmic models like sketching and streaming, which can meaningfully deal with such massive data sets. A large body of work on streaming algorithms focuses on estimating how sorted the input is, motivated by the realization that sorting the input is intractable in the one-pass data stream model. In joint work with Krauthgamer, Jayram and Kumar, we presented the first sub-linear space data stream algorithms to estimate two well-studied measures of sortedness: the distance from monotonicity (or Ulam distance for permutations), and the length of the longest increasing subsequence, or LIS. In more recent work with Anna Gál, we prove optimal lower bounds for estimating the length of the LIS in the data-stream model [4]. This is established by proving a direct-sum theorem for the communication complexity of a related problem. The novelty of our techniques is the model of communication that they address. As a corollary, we obtain a separation between two models of
communication that are commonly studied in relation to data stream algorithms.Structural Properties of SAT solutions:The solution space of random SAT formulae has been studied with a view to better understanding connections between computational hardness and phase transitions from satisfiable to unsatisfiable.Recent algorithmic approaches rely on connectivity properties of the space and break down in the absence of connectivity.In joint work with Kolaitis,Maneva and Papadimitriou,we consider the problem:Given a Boolean formula,do its solutions form a connected subset of the hypercube?We classify the worst-case complexity of various connectivity properties of the solution space of SAT formulae in Schaefer’s framework[14].We show that the jump in the computational hardness is accompanied by a jump in the diameter of the solution space from linear to exponential.Complexity of Modular Counting Problems:In joint work with Guruswami and Lipton,we address the complexity of counting the roots of a multivariate polynomial over afinitefield F q modulo some number r[9].We establish a dichotomy showing that the problem is easy when r is a power of the characteristic of thefield and intractable otherwise.Our results give several examples of problems whose decision versions are easy,but the modular counting version is hard.6Future Research DirectionsMy broad research goal is to gain a complete understanding of the complexity of problems arising in coding theory,computational learning and related areas;I believe that the right tools for this will come from Boolean function complexity and hardness of approximation.Below I outline some of the research directions I would like to pursue in the future.List-decoding algorithms have allowed us to break the unique-decoding barrier for error-correcting codes.It is natural to ask if one can perhaps go beyond the list-decoding radius and solve the problem offinding the codeword nearest to a received word at even higher error rates. 
On the negative side, we do not currently know any examples of codes where one can do this. But I think that recent results on Reed-Muller codes do offer some hope [13, 21]. Algorithms for solving the nearest codeword problem, if they exist, could also have exciting implications in computational learning. There are concept classes which are well-approximated by low-degree polynomials over finite fields lying just beyond the threshold of what is currently known to be learnable efficiently [20, 22]. Decoding algorithms for Reed-Muller codes that can tolerate very high error rates might present an approach to learning such concept classes.

One of the challenges in algorithmic coding theory is to determine whether known algorithms for list-decoding Reed-Solomon codes [15] and Reed-Muller codes [13, 23] are optimal. This raises both computational and combinatorial questions. I believe that my work with Khot et al. represents a good first step towards understanding the complexity of the decoding/reconstruction problem for multivariate polynomials. Proving similar results for univariate polynomials is an excellent challenge which seems to require new ideas in hardness of approximation.

There is a large body of work proving strong NP-hardness results for problems in computational learning. However, all such results only address the proper learning scenario, where the learning algorithm is restricted to produce a hypothesis from some particular class H which is typically the same as the concept class C. In contrast, known learning algorithms are mostly improper algorithms which could use more complicated hypotheses. For hardness results that are independent of the hypothesis class H used by the algorithm, one currently has to resort to cryptographic assumptions. In ongoing work with Guruswami and Raghavendra, we are investigating the possibility of proving NP-hardness for improper learning.

Finally, I believe that there are several interesting directions to explore in the agnostic learning model. An exciting
insight in this area comes from the work of Kalai et al., who show that L1 regression is a powerful tool for noise-tolerant learning [18]. A powerful paradigm in computational learning is to prove that the concept has some kind of polynomial approximation and then recover the approximation. Algorithms based on L1 regression require a weaker polynomial approximation in comparison with previous algorithms (which use L2 regression), but use more powerful machinery for the recovery step. Similar ideas might allow us to extend the boundaries of efficient learning even in the noiseless model; this is a possibility I am currently exploring.

Having worked in areas ranging from data stream algorithms to Boolean function complexity, I view myself as both an algorithm designer and a complexity theorist. I have often found that working on one aspect of a problem gives insights into the other; indeed much of my work has originated from such insights ([12] and [13], [10] and [4], [6] and [7]). I find that this is increasingly the case across several areas in theoretical computer science. My aim is to maintain this balance between upper and lower bounds in my future work.

References

[1] D. A. Barrington, R. Beigel, and S. Rudich. Representing Boolean functions as polynomials modulo composite numbers. Computational Complexity, 4:367-382, 1994.
[2] N. Bhatnagar, P. Gopalan, and R. J. Lipton. Symmetric polynomials over Z_m and simultaneous communication protocols. Journal of Computer & System Sciences (special issue for FOCS'03), 72(2):450-459, 2003.
[3] V. Feldman, P. Gopalan, S. Khot, and A. K. Ponnuswami. New results for learning noisy parities and halfspaces. In Proc. 47th IEEE Symp. on Foundations of Computer Science (FOCS'06), 2006.
[4] A. Gál and P. Gopalan. Lower bounds on streaming algorithms for approximating the length of the longest increasing subsequence. In Proc. 48th IEEE Symp. on Foundations of Computer Science (FOCS'07), 2007.
[5] O. Goldreich and L. Levin. A hard-core predicate for all one-way functions. In Proc. 21st ACM Symposium on the Theory of
Computing (STOC'89), pages 25-32, 1989.
[6] P. Gopalan. Constructing Ramsey graphs from Boolean function representations. In Proc. 21st IEEE Symposium on Computational Complexity (CCC'06), 2006.
[7] P. Gopalan. Query-efficient algorithms for polynomial interpolation over composites. In Proc. 17th ACM-SIAM Symposium on Discrete Algorithms (SODA'06), 2006.
[8] P. Gopalan and V. Guruswami. Deterministic hardness amplification via local GMD decoding. Submitted to 23rd IEEE Symp. on Computational Complexity (CCC'08), 2008.
[9] P. Gopalan, V. Guruswami, and R. J. Lipton. Algorithms for modular counting of roots of multivariate polynomials. In Latin American Symposium on Theoretical Informatics (LATIN'06), 2006.
[10] P. Gopalan, T. S. Jayram, R. Krauthgamer, and R. Kumar. Estimating the sortedness of a data stream. In Proc. 18th ACM-SIAM Symposium on Discrete Algorithms (SODA'07), 2007.
[11] P. Gopalan, A. T. Kalai, and A. R. Klivans. Agnostically learning decision trees. In Proc. 40th ACM Symp. on Theory of Computing (STOC'08), 2008.
[12] P. Gopalan, S. Khot, and R. Saket. Hardness of reconstructing multivariate polynomials over finite fields. In Proc. 48th IEEE Symp. on Foundations of Computer Science (FOCS'07), 2007.
[13] P. Gopalan, A. R. Klivans, and D. Zuckerman. List-decoding Reed-Muller codes over small fields. In Proc. 40th ACM Symp. on Theory of Computing (STOC'08), 2008.
[14] P. Gopalan, P. G. Kolaitis, E. N. Maneva, and C. H. Papadimitriou. Computing the connectivity properties of the satisfiability solution space. In Proc. 33rd Intl. Colloquium on Automata, Languages and Programming (ICALP'06), 2006.
[15] V. Guruswami and M. Sudan. Improved decoding of Reed-Solomon and Algebraic-Geometric codes. IEEE Transactions on Information Theory, 45(6):1757-1767, 1999.
[16] J. Håstad. Some optimal inapproximability results. J. ACM, 48(4):798-859, 2001.
[17] J. Jackson. An efficient membership-query algorithm for learning DNF with respect to the uniform distribution. Journal of Computer and System Sciences, 55:414-440, 1997.
[18] A. T. Kalai, A. R. Klivans, Y. Mansour, and R. A. Servedio. Agnostically learning halfspaces. In Proc. 46th IEEE Symp. on Foundations of Computer Science, pages 11-20, 2005.
[19] E. Kushilevitz and Y. Mansour. Learning decision trees using the Fourier spectrum. SIAM Journal of Computing, 22(6):1331-1348, 1993.
[20] A. Razborov. Lower bounds for the size of circuits of bounded depth with basis {∧, ⊕}. Mathematical Notes of the Academy of Science of the USSR, (41):333-338, 1987.
[21] A. Samorodnitsky. Low-degree tests at large distances. In Proc. 39th ACM Symposium on the Theory of Computing (STOC'07), pages 506-515, 2007.
[22] R. Smolensky. Algebraic methods in the theory of lower bounds for Boolean circuit complexity. In Proc. 19th Annual ACM Symposium on Theory of Computing (STOC'87), pages 77-82, 1987.
[23] M. Sudan, L. Trevisan, and S. P. Vadhan. Pseudorandom generators without the XOR lemma. J. Comput. Syst. Sci., 62(2):236-266, 2001.
[24] L. Trevisan. List-decoding using the XOR lemma. In Proc. 44th IEEE Symposium on Foundations of Computer Science (FOCS'03), pages 126-135, 2003.
The Decision Reliability of MAP, Log-MAP, Max-Log-MAP and SOVA Algorithms for Turbo Codes

Lucian Andrei Perişoară and Rodica Stoian

Abstract — In this paper, we study the reliability of decisions of the MAP, Log-MAP, Max-Log-MAP and SOVA decoding algorithms for turbo codes, in terms of the a priori information, a posteriori information, extrinsic information and channel reliability. We also analyze how important an accurate estimate of the channel reliability factor is to the good performance of the iterative turbo decoder. The simulations are made for parallel concatenation of two recursive systematic convolutional codes with a block interleaver at the transmitter, an AWGN channel, and iterative decoding with the mentioned algorithms at the receiver.

Keywords — Convolutional Turbo Codes, Channel Reliability, Decision Reliability, Extrinsic Information, Iterative Decoding.

I. INTRODUCTION

In communication systems, like cellular, satellite and computer networks, the information is represented as a sequence of binary digits. The binary message is modulated to an analog signal and transmitted over a communication channel affected by noise that corrupts the transmitted signal. Channel coding is used to protect the information from noise and to reduce the number of error bits.

One of the most used channel codes are convolutional codes, with the decoding strategy based on the Viterbi algorithm. The advantages of convolutional codes are exploited in Turbo Codes (TC), which can achieve performance within 2 dB of channel capacity [1]. These codes are parallel concatenations of two Recursive Systematic Convolutional (RSC) codes separated by an interleaver.

Fig. 1. The turbo encoder with rate 1/3.
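The parallel concatenation shown in Fig. 1 can be sketched in code. This is a minimal illustration, not the paper's implementation; the feedback polynomial 1 + D + D^2 and feedforward polynomial 1 + D^2 are an assumed constraint-length-3 choice, and `perm` is an arbitrary illustrative interleaver permutation.

```python
def rsc_encode(u):
    """Toy rate-1/2 RSC encoder, feedback 1 + D + D^2, feedforward 1 + D^2
    (illustrative constraint-length-3 choice). Returns (systematic, parity)."""
    s1 = s2 = 0                 # two-cell shift register, initial zero state
    parity = []
    for bit in u:
        a = bit ^ s1 ^ s2       # feedback sum: 1 + D + D^2
        parity.append(a ^ s2)   # feedforward tap: 1 + D^2
        s1, s2 = a, s1          # shift
    return list(u), parity

def turbo_encode(u, perm):
    """Rate-1/3 parallel concatenation: systematic frame u, parity c1 from
    the first RSC, parity c2 from the second RSC fed the interleaved frame."""
    sys_, c1 = rsc_encode(u)
    _, c2 = rsc_encode([u[i] for i in perm])
    return sys_, c1, c2
```

For an input frame of length k the encoder emits 3k bits (no tail bits here), matching the rate 1/3 of Fig. 1.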
The performance of turbo codes is due to the parallel concatenation of the component codes, the interleaver scheme and the iterative decoding using Soft Input Soft Output (SISO) algorithms [2], [3].

In this paper we study the decision reliability problem for turbo coding schemes in the case of two different decoding strategies: the Maximum A Posteriori (MAP) algorithm and the Soft Output Viterbi Algorithm (SOVA). For the MAP algorithm we also consider two improved versions, named the Log-MAP and Max-Log-MAP algorithms. The first one is a simplified algorithm which offers the same optimal performance with a reasonable complexity. The second one and the SOVA are less complex again, but give a slightly degraded performance.

The paper is organized as follows. In Section II, the turbo encoder is presented. In Section III, the turbo decoder is explained in detail, presenting firstly the iterative decoding principle (turbo principle) and specifying the concepts of a priori information, a posteriori information, extrinsic information, channel reliability and source reliability. Then, we review the MAP, Log-MAP, Max-Log-MAP and SOVA decoding algorithms, for which we discuss the decision reliability. In Section IV we analyze the influence of the channel reliability factor on decoding performance for the mentioned decoding algorithms. Section V presents some simulation results which we obtained.

Manuscript received December 10, 2008. This work was supported in part by the Romanian National University Research Council (CNCSIS) under Grant type TD (young doctoral students), no. 24.

L. A. Perişoară is with the Applied Electronics and Information Engineering Department, Politehnica University of Bucharest, Romania (e-mail: lucian@orfeu.pub.ro, lperisoara@, www.orfeu.pub.ro).

R. Stoian is with the Applied Electronics and Information Engineering Department, Politehnica University of Bucharest, Romania (e-mail: rodica@orfeu.pub.ro, rodicastoian2004@, www.orfeu.pub.ro).
II. THE TURBO CODING SCHEME

The turbo encoder can use two different or identical Recursive Systematic Convolutional (RSC) codes connected in parallel, see Fig. 1. The first encoder operates on the input bits, represented by the frame u, in their original order, while the second encoder operates on the input bits which are permuted by the interleaver, frame u', [4]. The output of the turbo encoder is represented by the frame:

v = (u, c_1, c_2) = (u_1, c_{1,1}, c_{2,1}, u_2, c_{1,2}, c_{2,2}, ..., u_k, c_{1,k}, c_{2,k}),   (1)

where frame c_1 is the output of the first RSC and frame c_2 is the output of the second RSC. If the input frame u is of length k and the output frame v is of length n, then the encoder rate is R = k/n.

For block encoding, data is segmented into non-overlapping blocks of length k, with each block encoded (and decoded) independently. This scheme imposes the use of a block interleaver, with the constraint that the RSCs must begin in the same state for each new block. This requires either trellis termination or trellis truncation. Trellis termination needs appending extra symbols (usually named tail bits) to the input frame to ensure that the shift registers of the constituent RSC encoders start and end at the same zero state. If the encoder has code rate 1/3, then it maps k data bits into 3k coded bits plus 3m tail bits. Trellis truncation simply involves resetting the state of the RSCs for each new block.

The interleaver used for parallel concatenation is a device that permutes coordinates either on a block basis (a generalized "block" interleaver) or on a sliding window basis (a generalized "convolutional" interleaver).
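A block interleaver of the kind just described can be sketched as follows. The rows-by-columns shape and the write-row-wise/read-column-wise convention are assumptions for the example, not the paper's exact scheme.

```python
def block_interleave(bits, rows, cols):
    """Write the frame row by row into a rows x cols array, then read it
    out column by column (one common block-interleaver convention)."""
    assert len(bits) == rows * cols
    # element (r, c) sits at index r*cols + c in the row-major frame
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(bits, rows, cols):
    """Inverse permutation: write column by column, read row by row."""
    assert len(bits) == rows * cols
    out = [0] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out
```

Because the interleaver is a fixed permutation, deinterleaving recovers the original frame exactly; the decoder uses the same permutation to pass extrinsic information between the two constituent decoders.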
The interleaver ensures that the set of code sequences generated by the turbo code has nice weight properties, which reduces the probability that the decoder will mistake one codeword for another. The output codeword v = (u, c_1, c_2) is then modulated, for example with Binary Phase Shift Keying (BPSK), resulting in the sequence x = (x^s, x^{p1}, x^{p2}), which is transmitted over an Additive White Gaussian Noise (AWGN) channel.

It is known that turbo codes are among the best practical codes due to their performance at low SNR. One reason for their better performance is that turbo codes produce high weight code words [4]. For example, if the input sequence u is originally low weight, the systematic output u and parity output c_1 may produce a low weight codeword. However, the parity output c_2 is less likely to be a low weight codeword due to the interleaver in front of it. The interleaver shuffles the input sequence u in such a way that, when introduced to the second encoder, it is more likely to produce a high weight codeword. This is ideal for the code because high weight code words result in better decoder performance.

III. THE TURBO DECODING SCHEME

Let y = (y^s, y^{p1}, y^{p2}) be the received sequence of length n, where the vector y^s = (y_1^s, y_2^s, ..., y_k^s) is formed only by the received information symbols, while y^{p1} = (y_1^{p1}, ..., y_k^{p1}) and y^{p2} = (y_1^{p2}, ..., y_k^{p2}) are the received parity symbols. These three streams are applied to the input of the turbo decoder presented in Fig. 2.

At time j, decoder 1, using the partial received information (y_j^s, y_j^{p1}), makes its decision and outputs the a posteriori information L^+(x_j^s). Then, the extrinsic information is computed as L^e(x_j^s) = L^+(x_j^s) - L^-(x_j^s) - L_c y_j^s. Decoder 2 makes its decision based on the extrinsic information L^e(x_j^s) from decoder 1 and the received information (y_j^{s'}, y_j^{p2}).
The term L^+(x_j^{s'}) is the a posteriori information derived from decoder 2 and used by decoder 1 as a priori information about the received sequence, noted with L^-(x_j^s). Now the second iteration can begin, and the first decoder decodes the same channel symbols, but now with additional information about the value of the input symbols provided by the second decoder in the first iteration. After some iterations, the algorithm converges and the extrinsic information values remain the same. Now the decision about the message bits u_j is made based on the a posteriori values L^+(x_j^s).

Fig. 2. The turbo decoder.

Each constituent decoder operates based on the Logarithm Likelihood Ratio (LLR).

A. The Decision Reliability of MAP Decoder

Bahl, Cocke, Jelinek and Raviv proposed the Maximum A Posteriori (MAP) decoding algorithm for convolutional codes in 1974 [1]. The iterative decoder developed by Berrou et al. [5] in 1993 has received greatly increased attention. In their paper, they considered the iterative decoding of two RSC codes concatenated in parallel through a non-uniform interleaver, and the MAP algorithm was modified to minimize the sequence error probability instead of the bit error probability.

Because of its increased complexity, the MAP algorithm was simplified in [6] and the optimal MAP algorithm called the Log-MAP algorithm was developed. The LLR of a transmitted bit is defined as [2]:

L(x_j^s) = log [ P(x_j^s = +1) / P(x_j^s = -1) ] = L^-(x_j^s),   (2)

where the sign of the LLR L(x_j^s) indicates whether the bit x_j^s is more likely to be +1 or -1, and the magnitude of the LLR gives an indication of the correct value of x_j^s. The term L^-(x_j^s) is defined as the a priori information about x_j^s.

In channel coding theory we are interested in the probability that x_j^s = ±1, based or conditioned on some received sequence y_j^s.
Hence, we use the conditional LLR:

L(x_j^s | y_j^s) = log [ P(x_j^s = +1 | y_j^s) / P(x_j^s = -1 | y_j^s) ] = L^+(x_j^s).

The conditional probabilities P(x_j^s = ±1 | y_j^s) are the a posteriori probabilities of the decoded bit x_j^s, and L^+(x_j^s) is the a posteriori information about x_j^s: the information that the decoder gives us, including the received frame, the a priori information for the systematic symbol y_j^s and the a priori information for the symbol x_j^s. It is the output of the MAP algorithm. In addition, we will use the conditional LLR L(y_j^s | x_j^s), based on the probability that the receiver's output would be y_j^s when the transmitted bit x_j^s was either +1 or -1:

L(y_j^s | x_j^s) = log [ P(y_j^s | x_j^s = +1) / P(y_j^s | x_j^s = -1) ].   (3)

For an AWGN channel using BPSK modulation, we can write the conditional probability density functions [7]:

P(y_j^s | x_j^s = ±1) ∝ exp [ -(E_b / N_0) (y_j^s ∓ a)^2 ],   (4)

where E_b is the transmitted energy per bit, a is the fading amplitude and N_0/2 is the noise variance. We can rewrite (3) as follows:

L(y_j^s | x_j^s) = -(E_b / N_0) [ (y_j^s - a)^2 - (y_j^s + a)^2 ] = 4a (E_b / N_0) y_j^s = L_c y_j^s,   (5)

where L_c = 4a E_b / N_0 is the channel reliability, a is the fading amplitude and N_0 is the noise power. For non-fading AWGN channels a = 1 and L_c = 4 E_b / N_0. The ratio E_b / N_0 is defined as the Signal to Noise Ratio (SNR) of the channel.

The a posteriori LLR decomposes as L^+(x_j^s) = L^-(x_j^s) + L_c y_j^s + L^e(x_j^s), so the extrinsic information can be computed as [1], [2], [9]:

L^e(x_j^s) = L^+(x_j^s) - L^-(x_j^s) - L_c y_j^s.   (6)

The a posteriori information defined in (2) can be written as [1], [10]:

L^+(x_j^s) = log [ Σ^+ α_{j-1}(s') γ_j^e(s', s) β_j(s) / Σ^- α_{j-1}(s') γ_j^e(s', s) β_j(s) ],   (7)

where Σ^+ is the summation over all possible transition branch pairs (s', s) in the trellis, at time j, given the transmitted symbol x_j^s = +1.
Analogously, Σ^- is for the transmitted symbol x_j^s = -1. The forward and backward terms, represented in Fig. 3 as transitions between two consecutive states in the trellis, can be computed recursively as follows [7], [10], [11]:

α_j(s) = Σ_{s'} α_{j-1}(s') γ_j(s', s),   (8)

β_{j-1}(s') = Σ_s β_j(s) γ_j(s', s).   (9)

For systematic codes, which is our case, the branch transition probabilities γ_j(s', s) are given by the relation:

γ_j(s', s) = exp [ (1/2) L^-(x_j^s) x_j^s + (1/2) L_c x_j^s y_j^s ] γ_j^e(s', s),   (10)

where:

γ_j^e(s', s) = exp [ (1/2) L_c x_j^{p1} y_j^{p1} + (1/2) L_c x_j^{p2} y_j^{p2} ].   (11)

At each iteration and for each frame y, L^+(x_j^s) is computed at the output of the second decoder and the decision is made, symbol by symbol, j = 1...k, based on the sign of L^+(x_j^s), the original information bit u_j being estimated as [2], [3]:

û_j = sign { L^+(x_j^s) }.   (12)

In the iterative decoding procedure, the extrinsic information L^e(x_j^s) is permuted by the interleaver and becomes the a priori information L^-(x_j^s) for the next decoder.

B. The Decision Reliability of Max-Log-MAP Decoder

The MAP algorithm as described in the previous section is much more complex than the Viterbi algorithm and, with hard decision outputs, performs almost identically to it. Therefore, for almost 20 years it was largely ignored. However, its application in turbo codes renewed interest in this algorithm. Its complexity can be dramatically reduced without greatly affecting its performance by using the sub-optimal Max-Log-MAP algorithm, proposed in [12]. This technique simplifies the MAP algorithm by transferring the recursions into the log domain and invoking the approximation:

ln ( Σ_i e^{x_i} ) ≈ max_i x_i,   (13)

where max_i x_i means the maximum value of x_i.
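A quick numerical sketch of approximation (13), not from the paper, shows both the exact log-sum and the Max-Log shortcut:

```python
import math

def logsumexp(xs):
    """Exact ln(sum_i e^{x_i}), computed stably by factoring out the maximum."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def max_log(xs):
    """The Max-Log approximation (13): ln(sum_i e^{x_i}) ~ max_i x_i."""
    return max(xs)
```

The approximation always under-estimates the true value, by at most ln(n) for n terms; the gap is largest when the competing metrics are close (for two equal metrics the error is exactly ln 2), which is precisely when soft-decision accuracy matters most.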
If we note:

A_j(s) = ln α_j(s),   (14)

B_j(s) = ln β_j(s),   (15)

and:

G_j(s', s) = ln γ_j(s', s),   (16)

then equations (8), (9) and (10) can be written as:

A_j(s) = ln α_j(s) = ln Σ_{s'} exp [ A_{j-1}(s') + G_j(s', s) ] ≈ max_{s'} [ A_{j-1}(s') + G_j(s', s) ],   (17)

B_{j-1}(s') = ln β_{j-1}(s') = ln Σ_s exp [ B_j(s) + G_j(s', s) ] ≈ max_s [ B_j(s) + G_j(s', s) ],   (18)

G_j(s', s) = C + (1/2) x_j^s L^-(x_j^s) + (1/2) L_c x_j^s y_j^s,   (19)

where C is a constant and the a priori information enters through the term x_j^s L^-(x_j^s).

Fig. 3. Trellis state transitions for α_j(s) and β_{j-1}(s').

Finally, the a posteriori LLR L^+(x_j^s) which the Max-Log-MAP algorithm calculates is:

L^+(x_j^s) ≈ max_{(s',s): u_j = +1} [ A_{j-1}(s') + G_j(s', s) + B_j(s) ] - max_{(s',s): u_j = -1} [ A_{j-1}(s') + G_j(s', s) + B_j(s) ].   (20)

In [12] and [13] the authors show that the complexity of the Max-Log-MAP algorithm is larger than twice that of a classical Viterbi algorithm. Unfortunately, the storage requirements are much greater for the Max-Log-MAP algorithm, due to the need to store both the forward and backward recursively calculated metrics A_j(s) and B_j(s) before the L^+(x_j^s) values can be calculated.

C. The Decision Reliability of Log-MAP Decoder

The Max-Log-MAP algorithm gives a slight degradation in performance compared to the MAP algorithm due to the approximation (13). When used for the iterative decoding of turbo codes, Robertson found this degradation to result in a drop in performance of about 0.35 dB [12]. However, the approximation (13) can be made exact by using the Jacobian logarithm:

ln(e^{x_1} + e^{x_2}) = max(x_1, x_2) + ln(1 + exp(-|x_1 - x_2|)) = max(x_1, x_2) + f(|x_1 - x_2|) = g(x_1, x_2),   (21)

where f(δ) can be thought of as a correction term. The maximization in (17) and (18) is then completed by the correction term f(δ) in (21). This means that the exact, rather than approximate, values of A_j(s) and B_j(s) are calculated. For binary trellises, the maximization will be done only for two terms.
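The Jacobian logarithm (21) can be checked numerically. The sketch below (an illustration, not the paper's implementation) shows that `max_star` is exact, so folding it over a list of metrics reproduces ln Σ e^{x_i} to machine precision:

```python
import math

def max_star(x1, x2):
    """g(x1, x2) from (21): max(x1, x2) plus the correction f(|x1 - x2|)."""
    return max(x1, x2) + math.log1p(math.exp(-abs(x1 - x2)))

def max_star_n(xs):
    """Nested application of g, reducing ln(sum_i e^{x_i}) pairwise."""
    acc = xs[0]
    for x in xs[1:]:
        acc = max_star(acc, x)
    return acc
```

In a practical Log-MAP decoder the `log1p(exp(-d))` evaluation is replaced by the small look-up table over δ proposed in [12], trading a negligible quantization error for a large reduction in arithmetic cost.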
Therefore we can correct the approximations in (17) and (18) by adding the term f(δ), where δ is the magnitude of the difference between the metrics of the two merging paths. This is the basis of the Log-MAP algorithm proposed by Robertson, Villebrun and Hoeher in [12]. We must generalize the previous equation to more than two terms by nesting the g(x_1, x_2) operations as follows:

ln ( Σ_{i=1}^{n} e^{x_i} ) = g(x_n, g(x_{n-1}, ..., g(x_3, g(x_2, x_1)) ...)).   (22)

The correction term f(δ) need not be computed for every value of δ, but can instead be stored in a look-up table. In [12], Robertson found that such a look-up table need contain only eight values for δ, ranging between 0 and 5. This means that the Log-MAP algorithm is only slightly more complex than the Max-Log-MAP algorithm, but it gives exactly the same performance as the MAP algorithm. Therefore, it is a very attractive algorithm to use in the component decoders of an iterative turbo decoder.

D. The Decision Reliability of SOVA Decoder

The MAP algorithm has a high computational complexity for providing the Soft Input Soft Output (SISO) decoding. However, it easily yields the optimal a posteriori probabilities for each decoded symbol.

The Viterbi algorithm provides the Maximum Likelihood (ML) decoding for convolutional codes, with optimal sequence estimation. The conventional Viterbi decoder has two main drawbacks for a serial decoding scheme: the inner Viterbi decoder produces bursts of error bits and hard decision outputs, which degrade the performance of the outer Viterbi decoder [3]. Hagenauer and Hoeher modified the classical Viterbi algorithm and provided a substantially less complex and suboptimal alternative in their Soft Output Viterbi Algorithm (SOVA). The performance improvement is obtained if the Viterbi decoders are able to produce reliability values or soft outputs by using a modified metric [14].
These reliability values are passed on to the subsequent Viterbi decoders as a priori information.

In soft decision decoding, the receiver doesn't assign a zero or a one to each received symbol from the AWGN channel, but uses multi-bit quantized values for the received sequence y, because the channel alphabet is greater than the source alphabet [3]. In this case, the metric derived from the Maximum Likelihood principle is used instead of the Hamming distance. For an AWGN channel, soft decision decoding produces a gain of 2-3 dB over hard decision decoding, and an eight-level quantization offers enough performance in comparison with infinite-precision quantization [7].

The original Viterbi algorithm searches for an information sequence u that maximizes the a posteriori probability P(s | y), s being the state sequence generated by the message u. Using the Bayes theorem, and taking into account that the received sequence y is fixed for the metric computation and can be discarded, the maximization of P(s | y) is:

max_u P(s | y) = max_u P(y | s) P(s).   (23)

For a systematic code, this relation can be expanded to:

max_u { Π_{j=1}^{k} P( (y_j^s, y_j^{p1}, y_j^{p2}) | (s_{j-1}, s_j) ) P(s_j) }.   (24)

Taking into account that:

P( (y_j^s, y_j^{p1}, y_j^{p2}) | (s_{j-1}, s_j) ) = P(y_j^s | x_j^s) P(y_j^{p1} | x_j^{p1}) P(y_j^{p2} | x_j^{p2}),   (25)

where (s_{j-1}, s_j) denotes the transition between the states at time j-1 and the states at time j, the SOVA metric is obtained from (24) as [15]:

M_j = M_{j-1} + Σ x_j^* log [ P(y_j^* | x_j^* = +1) / P(y_j^* | x_j^* = -1) ] + u_j log [ P(u_j = 1) / P(u_j = 0) ],   (26)

where x_j^* = (u_j, c_{1,j}, c_{2,j}) is the RSC output code word at time j, at the channel input, and y_j^* = (y_j^s, y_j^{p1}, y_j^{p2}) is the channel output.
The summation is made for each pair of information symbols (u_j, y_j^s) and for each pair of parity symbols (c_{1,j}, y_j^{p1}) and (c_{2,j}, y_j^{p2}).

According to [14] and [7], relation (26) can be reduced to:

M_j = M_{j-1} + L_c Σ x_j^* y_j^* + u_j L(u_j),   (27)

where the source reliability L(u_j), defined in (26), is the log-likelihood ratio of the binary symbol u_j. The sign of L(u_j) is the hard decision of u_j and the magnitude of L(u_j) is the decision reliability.

According to [10], the SOVA metric includes values from the past metric M_{j-1}, the channel reliability L_c and the source reliability L(u_j) as an a priori value. If the channel is very good, the second term in (27) is greater than the third term and the decoding relies on the received channel values. If the channel is very bad, the decoding relies on the a priori information L(u_j).

If M_j^1, M_j^2 are the two metrics of the survivor path and the concurrent path in the trellis at time j, then the metric difference is defined as [7]:

Δ_j^0 = (1/2) (M_j^1 - M_j^2).   (28)

The probability of path m at time j is related to its metric as:

P(path m) = P(s_j^m) = exp(M_j^m / 2),   (29)

where s_j^m is the states vector of path m and M_j^m is its metric. The probability of choosing the survivor path is:

P(correct) = P(path 1) / [ P(path 1) + P(path 2) ] = e^{Δ_j^0} / (1 + e^{Δ_j^0}).   (30)

The reliability of this path decision is calculated as:

Δ_j^0 = log [ P(correct) / (1 - P(correct)) ].   (31)

The reliability values along the survivor path, for a particular node at time j, are denoted Δ_j^d, where d is the distance from the current node at time j. If the survivor path bit is the same as the associated bit on the competing path, then there would be no error if the competing path were chosen, and the reliability value remains unchanged.

To improve the reliability values, an updating process must be used, so the "soft" value of a decision symbol is:

L(u'_{j-d}) = u'_{j-d} Σ_{i=0}^{d} Δ_j^i,   (32)

which can be approximated as:

L(u'_{j-d}) ≈ u'_{j-d} min_{i=0...d} Δ_j^i.   (33)
The SOVA algorithm described in this section is the least complex of all the SISO decoders discussed here. In [12], Robertson shows that the SOVA algorithm is about half as complex as the Max-Log-MAP algorithm. However, the SOVA algorithm is also the least accurate of the algorithms described in this section and, when used in an iterative turbo decoder, performs about 0.6 dB worse than a decoder using the MAP algorithm. The outputs of the SOVA algorithm are significantly more noisy than those from the MAP algorithm, so an increased number of decoding iterations must be used for SOVA to obtain the same performance as for the MAP algorithm.

The same results are also reported for the iterative decoding (turbo decoding) of turbo product codes, which are based on two concatenated Hamming block codes, not on convolutional codes [19].

IV. THE INFLUENCE OF L_c ON DECODING PERFORMANCE

In this section we analyze how important an accurate estimate of the channel reliability factor L_c is to the good performance of an iterative turbo decoder which uses the MAP, SOVA, Max-Log-MAP and Log-MAP algorithms.

In the MAP algorithm, the channel inputs and the a priori information are used to calculate the transition probabilities from one state to another, which are then used to calculate the forward and backward recursion terms [2], [8]. Finally, the a posteriori information L^+(x_j^s) is computed and the decision about the original message is made based on it.

In iterative decoding with the MAP algorithm, the channel reliability is calculated from the received channel values. At the first iteration, decoder 1 has no a priori information available (L^-(x_j^s) is zero) and the output of the algorithm is calculated based on the channel values.
If an incorrect value of L_c is used, the decoder will make more decision errors, and the extrinsic information at the output of the first decoder will have incorrect values relative to the soft channel inputs [16].

In the SOVA algorithm, the channel values are used to recursively calculate the metric M_j for the current state s along a path, from the metric M_{j-1} for the previous state along that path, added to an a priori information term and to a cross-correlation term between the transmitted and the received channel values, x_j^* and y_j^*, using (27). The channel reliability factor L_c is used to scale this cross-correlation. When we use an incorrect value of L_c, e.g. L_c = 1, we are scaling the channel values applied to the inputs of the component decoders by a factor of one instead of the correct value of L_c. This has the effect of scaling all the metrics by the same factor, see (27), and the metric differences are also scaled by the same factor, see (28). This scaling of the metrics does not affect the path chosen by the algorithm as the survivor path or as the Maximum Likelihood (ML) path, so the hard decisions given by the algorithm are not affected by using an incorrect value of L_c [16]-[18].

In iterative decoding with the SOVA algorithm, in the first iteration we assume that no a priori information about the transmitted bits is available to the decoder (the a priori information is zero), so the first component decoder takes only the channel values. If the channel reliability factor is incorrect, the channel values are scaled, the extrinsic information will also be scaled by the same factor, and the a priori information for the second decoder will also be scaled. Because of the linearity of the SOVA, the effect of using an incorrect value of the channel reliability factor is that the output LLR from the decoder is scaled by a constant factor.
The relative importance of the two inputs to the decoder, the a priori information and the channel information, will not change, since the LLRs for both these sources of information are scaled by the same factor. In the final iteration, the soft outputs from the final component decoder will have the same sign as those that would have been calculated using the correct value of L_c. So, the hard outputs from a turbo decoder using the SOVA algorithm are not affected by the channel reliability factor [16].

The Max-Log-MAP algorithm has the same linearity that is found in the SOVA algorithm. Instead of one metric, two metrics A_j(s) and B_j(s) are now calculated, for the forward and backward recursions, see (17), (18) and (19), where only simple additions of the cross-correlations of the transmitted and received symbols are used. If an incorrect value of the channel reliability factor L_c is used, all the metrics are simply scaled by a factor, as in the SOVA algorithm. The soft outputs, given by the differences in metrics between different paths, will also be scaled by the same factor, with the sign unchanged, and the final hard decisions given by the turbo decoder will not be affected.

The Log-MAP algorithm is identical to the Max-Log-MAP algorithm except for the correction term f(δ) = ln(1 + exp(-δ)), used in the calculation of the forward and backward metrics A_j(s) and B_j(s) and of the soft output LLRs. The function f(δ) is not a linear function; it decreases asymptotically towards zero as δ increases. Hence the linearity that is present in the Max-Log-MAP and SOVA algorithms is not present in the Log-MAP algorithm. This non-linearity causes more hard decision errors in the component decoders if the channel reliability factor L_c is incorrect, and the extrinsic information derived from the first component decoder has incorrect amplitudes, which become the a priori information for the second decoder in the first iteration.
In subsequent iterations, both decoders will receive a priori information with incorrect amplitudes relative to the soft channel inputs. In iterative decoding with the Log-MAP algorithm, the exchange of extrinsic information from one component decoder to the other produces a rapid decrease in the BER as the number of iterations increases. When an incorrect value of L_c is used, no such rapid fall in the BER occurs, due to the incorrect scaling of the a priori information relative to the channel inputs. In fact, the performance of the decoder is then largely unaffected by the number of iterations used. For wireless communications, some of which are modeled as Multiple Input Multiple Output (MIMO) systems [23], the channel is considered to be a Rayleigh or Rician fading channel. If the Channel State Information (CSI) is not known at the receiver, a natural approach is to estimate the channel impulse response and to use the estimated values to compute the channel reliability factor L_c required by the MAP algorithm to calculate the correct decoding metric. In [20], the degradation in the performance of a turbo decoder using the MAP algorithm is studied when the channel SNR is not correctly estimated. The authors propose a method for blind estimation of the channel SNR, using the ratio of the average squared received channel value to the square of the average of the magnitudes of the received channel values. In addition, they show that using these SNR estimates gives the turbo decoder almost identical performance to that obtained with the true SNR. In [8], the authors propose a simple estimation scheme for L_c based on statistics computed over a block of matched filter outputs. The channel estimator includes the error variance of the channel estimates.
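The blind statistic described for [20], the ratio of the average squared received value to the square of the average received magnitude, can be sketched for BPSK over AWGN as below (the symbol alphabet, noise levels and sample count are our own choices; inverting the statistic into an SNR value is omitted):

```python
import random
random.seed(0)

def snr_statistic(received):
    """Ratio of the mean squared value to the squared mean magnitude,
    the pilot-free statistic described for [20]."""
    n = len(received)
    mean_sq = sum(y * y for y in received) / n
    mean_abs = sum(abs(y) for y in received) / n
    return mean_sq / (mean_abs * mean_abs)

def bpsk_rx(sigma, n=20000):
    """BPSK +/-1 symbols through AWGN with noise std sigma (illustrative)."""
    return [random.choice((-1.0, 1.0)) + random.gauss(0.0, sigma)
            for _ in range(n)]

# The statistic grows monotonically as the SNR drops (towards pi/2 for pure
# noise), so it can be inverted numerically or via a lookup table to get an
# SNR estimate and hence Lc.
high_snr = snr_statistic(bpsk_rx(sigma=0.3))
low_snr = snr_statistic(bpsk_rx(sigma=1.0))
assert high_snr < low_snr
```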
In [24], the Minimum Mean Squared Error (MMSE) estimation criterion is used, and iterative joint MMSE channel estimation and MAP decoding is studied. None of the above works requires a training sequence with pilot symbols to estimate the channel reliability factor. Other studies, such as [22] and [25], use pilot symbols to estimate the channel parameters. In [22] it is shown that it is not necessary to estimate the channel SNR for a turbo decoder with the Max-Log-MAP or SOVA algorithms. If the MAP or the Log-MAP algorithm is used, then the value of L_c does not have to be very close to the true value for a good BER performance to be obtained.

V. SIMULATION RESULTS

This section presents some simulation results for the turbo code ensembles, with the MAP, Max-Log-MAP, Log-MAP and SOVA decoding algorithms. The turbo encoder is the same for the four decoding algorithms and is described by two identical RSC codes with constraint length 3 and the generator polynomials G_f = 1 + D + D^2 and G_b = 1 + D^2. No tail bits and no puncturing are used. The two constituent encoders are parallel concatenated by a classical block interleaver, with dimensions variable according to the frame
IndraDrive Series Fault Codes
Error Messages

F9001 Error internal function call
F9002 Error internal RTOS function call
F9003 Watchdog
F9004 Hardware trap
F8000 Fatal hardware error
F8010 Autom. commutation: Max. motion range when moving back
F8011 Commutation offset could not be determined
F8012 Autom. commutation: Max. motion range
F8013 Automatic commutation: Current too low
F8014 Automatic commutation: Overcurrent
F8015 Automatic commutation: Timeout
F8016 Automatic commutation: Iteration without result
F8017 Automatic commutation: Incorrect commutation adjustment
F8018 Device overtemperature shutdown
F8022 Enc. 1: Enc. signals incorr. (can be cleared in ph. 2)
F8023 Error mechanical link of encoder or motor connection
F8025 Overvoltage in power section
F8027 Safe torque off while drive enabled
F8028 Overcurrent in power section
F8030 Safe stop 1 while drive enabled
F8042 Encoder 2 error: Signal amplitude incorrect
F8057 Device overload shutdown
F8060 Overcurrent in power section
F8064 Interruption of motor phase
F8067 Synchronization PWM-Timer wrong
F8069 +/-15Volt DC error
F8070 +24Volt DC error
F8076 Error in error angle loop
F8078 Speed loop error
F8079 Velocity limit value exceeded
F8091 Power section defective
F8100 Error when initializing the parameter handling
F8102 Error when initializing power section
F8118 Invalid power section/firmware combination
F8120 Invalid control section/firmware combination
F8122 Control section defective
F8129 Incorrect optional module firmware
F8130 Firmware of option 2 of safety technology defective
F8133 Error when checking interrupting circuits
F8134 SBS: Fatal error
F8135 SMD: Velocity exceeded
F8140 Fatal CCD error
F8201 Safety command for basic initialization incorrect
F8203 Safety technology configuration parameter invalid
F8813 Connection error mains choke
F8830 Power section error
F8838 Overcurrent external braking resistor
F7010 Safely-limited increment exceeded
F7011 Safely-monitored position, exceeded in pos. direction
F7012 Safely-monitored position, exceeded in neg. direction
F7013 Safely-limited speed exceeded
F7020 Safe maximum speed exceeded
F7021 Safely-limited position exceeded
F7030 Position window Safe stop 2 exceeded
F7031 Incorrect direction of motion
F7040 Validation error parameterized - effective threshold
F7041 Actual position value validation error
F7042 Validation error of safe operation mode
F7043 Error of output stage interlock
F7050 Time for stopping process exceeded
8.3.15 F7051 Safely-monitored deceleration exceeded
8.4 Travel Range Errors (F6xxx)
8.4.1 Behavior in the Case of Travel Range Errors
8.4.2 F6010 PLC Runtime Error
8.4.3 F6024 Maximum braking time exceeded
8.4.4 F6028 Position limit value exceeded (overflow)
8.4.5 F6029 Positive position limit exceeded
8.4.6 F6030 Negative position limit exceeded
8.4.7 F6034 Emergency-Stop
8.4.8 F6042 Both travel range limit switches activated
8.4.9 F6043 Positive travel range limit switch activated
8.4.10 F6044 Negative travel range limit switch activated
8.4.11 F6140 CCD slave error (emergency halt)
8.5 Interface Errors (F4xxx)
8.5.1 Behavior in the Case of Interface Errors
8.5.2 F4001 Sync telegram failure
8.5.3 F4002 RTD telegram failure
8.5.4 F4003 Invalid communication phase shutdown
8.5.5 F4004 Error during phase progression
8.5.6 F4005 Error during phase regression
8.5.7 F4006 Phase switching without ready signal
8.5.8 F4009 Bus failure
8.5.9 F4012 Incorrect I/O length
8.5.10 F4016 PLC double real-time channel failure
8.5.11 F4017 S-III: Incorrect sequence during phase switch
8.5.12 F4034 Emergency-Stop
8.5.13 F4140 CCD communication error
8.6 Non-Fatal Safety Technology Errors (F3xxx)
8.6.1 Behavior in the Case of Non-Fatal Safety Technology Errors
8.6.2 F3111 Refer. missing when selecting safety related end pos
8.6.3 F3112 Safe reference missing
8.6.4 F3115 Brake check time interval exceeded
Troubleshooting Guide | Rexroth IndraDrive | Electric Drives and Controls | Bosch Rexroth AG
8.6.5 F3116 Nominal load torque of holding system exceeded
8.6.6 F3117 Actual position values validation error
8.6.7 F3122 SBS: System error
8.6.8 F3123 SBS: Brake check missing
8.6.9 F3130 Error when checking input signals
8.6.10 F3131 Error when checking acknowledgment signal
8.6.11 F3132 Error when checking diagnostic output signal
8.6.12 F3133 Error when checking interrupting circuits
8.6.13 F3134 Dynamization time interval incorrect
8.6.14 F3135 Dynamization pulse width incorrect
8.6.15 F3140 Safety parameters validation error
8.6.16 F3141 Selection validation error
8.6.17 F3142 Activation time of enabling control exceeded
8.6.18 F3143 Safety command for clearing errors incorrect
8.6.19 F3144 Incorrect safety configuration
8.6.20 F3145 Error when unlocking the safety door
8.6.21 F3146 System error channel 2
8.6.22 F3147 System error channel 1
8.6.23 F3150 Safety command for system start incorrect
8.6.24 F3151 Safety command for system halt incorrect
8.6.25 F3152 Incorrect backup of safety technology data
8.6.26 F3160 Communication error of safe communication
8.7 Non-Fatal Errors (F2xxx)
8.7.1 Behavior in the Case of Non-Fatal Errors
8.7.2 F2002 Encoder assignment not allowed for synchronization
8.7.3 F2003 Motion step skipped
8.7.4 F2004 Error in MotionProfile
8.7.5 F2005 Cam table invalid
8.7.6 F2006 MMC was removed
8.7.7 F2007 Switching to non-initialized operation mode
8.7.8 F2008 RL The motor type has changed
8.7.9 F2009 PL Load parameter default values
8.7.10 F2010 Error when initializing digital I/O (-> S-0-0423)
8.7.11 F2011 PLC - Error no. 1
8.7.12 F2012 PLC - Error no. 2
8.7.13 F2013 PLC - Error no. 3
8.7.14 F2014 PLC - Error no. 4
8.7.15 F2018 Device overtemperature shutdown
8.7.16 F2019 Motor overtemperature shutdown
8.7.17 F2021 Motor temperature monitor defective
8.7.18 F2022 Device temperature monitor defective
8.7.19 F2025 Drive not ready for control
8.7.20 F2026 Undervoltage in power section
8.7.21 F2027 Excessive oscillation in DC bus
8.7.22 F2028 Excessive deviation
8.7.23 F2031 Encoder 1 error: Signal amplitude incorrect
8.7.24 F2032 Validation error during commutation fine adjustment
8.7.25 F2033 External power supply X10 error
8.7.26 F2036 Excessive position feedback difference
8.7.27 F2037 Excessive position command difference
8.7.28 F2039 Maximum acceleration exceeded
8.7.29 F2040 Device overtemperature 2 shutdown
8.7.30 F2042 Encoder 2: Encoder signals incorrect
8.7.31 F2043 Measuring encoder: Encoder signals incorrect
8.7.32 F2044 External power supply X15 error
8.7.33 F2048 Low battery voltage
8.7.34 F2050 Overflow of target position preset memory
8.7.35 F2051 No sequential block in target position preset memory
8.7.36 F2053 Incr. encoder emulator: Pulse frequency too high
8.7.37 F2054 Incr. encoder emulator: Hardware error
8.7.38 F2055 External power supply dig. I/O error
8.7.39 F2057 Target position out of travel range
8.7.40 F2058 Internal overflow by positioning input
8.7.41 F2059 Incorrect command value direction when positioning
8.7.42 F2063 Internal overflow master axis generator
8.7.43 F2064 Incorrect cmd value direction master axis generator
8.7.44 F2067 Synchronization to master communication incorrect
8.7.45 F2068 Brake error
8.7.46 F2069 Error when releasing the motor holding brake
8.7.47 F2074 Actual pos. value 1 outside absolute encoder window
8.7.48 F2075 Actual pos. value 2 outside absolute encoder window
8.7.49 F2076 Actual pos. value 3 outside absolute encoder window
8.7.50 F2077 Current measurement trim wrong
8.7.51 F2086 Error supply module
8.7.52 F2087 Module group communication error
8.7.53 F2100 Incorrect access to command value memory
8.7.54 F2101 It was impossible to address MMC
8.7.55 F2102 It was impossible to address I2C memory
8.7.56 F2103 It was impossible to address EnDat memory
8.7.57 F2104 Commutation offset invalid
8.7.58 F2105 It was impossible to address Hiperface memory
8.7.59 F2110 Error in non-cyclical data communic. of power section
8.7.60 F2120 MMC: Defective or missing, replace
8.7.61 F2121 MMC: Incorrect data or file, create correctly
8.7.62 F2122 MMC: Incorrect IBF file, correct it
8.7.63 F2123 Retain data backup impossible
8.7.64 F2124 MMC: Saving too slowly, replace
8.7.65 F2130 Error comfort control panel
8.7.66 F2140 CCD slave error
8.7.67 F2150 MLD motion function block error
8.7.68 F2174 Loss of motor encoder reference
8.7.69 F2175 Loss of optional encoder reference
8.7.70 F2176 Loss of measuring encoder reference
8.7.71 F2177 Modulo limitation error of motor encoder
8.7.72 F2178 Modulo limitation error of optional encoder
8.7.73 F2179 Modulo limitation error of measuring encoder
8.7.74 F2190 Incorrect Ethernet configuration
8.7.75 F2260 Command current limit shutoff
8.7.76 F2270 Analog input 1 or 2, wire break
8.7.77 F2802 PLL is not synchronized
8.7.78 F2814 Undervoltage in mains
8.7.79 F2815 Overvoltage in mains
8.7.80 F2816 Softstart fault power supply unit
8.7.81 F2817 Overvoltage in power section
8.7.82 F2818 Phase failure
8.7.83 F2819 Mains failure
8.7.84 F2820 Braking resistor overload
8.7.85 F2821 Error in control of braking resistor
8.7.86 F2825 Switch-on threshold braking resistor too low
8.7.87 F2833 Ground fault in motor line
8.7.88 F2834 Contactor control error
8.7.89 F2835 Mains contactor wiring error
8.7.90 F2836 DC bus balancing monitor error
8.7.91 F2837 Contactor monitoring error
8.7.92 F2840 Error supply shutdown
8.7.93 F2860 Overcurrent in mains-side power section
8.7.94 F2890 Invalid device code
8.7.95 F2891 Incorrect interrupt timing
8.7.96 F2892 Hardware variant not supported
8.8 SERCOS Error Codes / Error Messages of Serial Communication
9 Warnings (Exxxx)
9.1 Fatal Warnings (E8xxx)
9.1.1 Behavior in the Case of Fatal Warnings
9.1.2 E8025 Overvoltage in power section
9.1.3 E8026 Undervoltage in power section
9.1.4 E8027 Safe torque off while drive enabled
9.1.5 E8028 Overcurrent in power section
9.1.6 E8029 Positive position limit exceeded
9.1.7 E8030 Negative position limit exceeded
9.1.8 E8034 Emergency-Stop
9.1.9 E8040 Torque/force actual value limit active
9.1.10 E8041 Current limit active
9.1.11 E8042 Both travel range limit switches activated
9.1.12 E8043 Positive travel range limit switch activated
9.1.13 E8044 Negative travel range limit switch activated
9.1.14 E8055 Motor overload, current limit active
9.1.15 E8057 Device overload, current limit active
9.1.16 E8058 Drive system not ready for operation
9.1.17 E8260 Torque/force command value limit active
9.1.18 E8802 PLL is not synchronized
9.1.19 E8814 Undervoltage in mains
9.1.20 E8815 Overvoltage in mains
9.1.21 E8818 Phase failure
9.1.22 E8819 Mains failure
9.2 Warnings of Category E4xxx
9.2.1 E4001 Double MST failure shutdown
9.2.2 E4002 Double MDT failure shutdown
9.2.3 E4005 No command value input via master communication
9.2.4 E4007 SERCOS III: Consumer connection failed
9.2.5 E4008 Invalid addressing command value data container A
9.2.6 E4009 Invalid addressing actual value data container A
9.2.7 E4010 Slave not scanned or address 0
9.2.8 E4012 Maximum number of CCD slaves exceeded
9.2.9 E4013 Incorrect CCD addressing
9.2.10 E4014 Incorrect phase switch of CCD slaves
9.3 Possible Warnings When Operating Safety Technology (E3xxx)
9.3.1 Behavior in Case a Safety Technology Warning Occurs
9.3.2 E3100 Error when checking input signals
9.3.3 E3101 Error when checking acknowledgment signal
9.3.4 E3102 Actual position values validation error
9.3.5 E3103 Dynamization failed
9.3.6 E3104 Safety parameters validation error
9.3.7 E3105 Validation error of safe operation mode
9.3.8 E3106 System error safety technology
9.3.9 E3107 Safe reference missing
9.3.10 E3108 Safely-monitored deceleration exceeded
9.3.11 E3110 Time interval of forced dynamization exceeded
9.3.12 E3115 Prewarning, end of brake check time interval
9.3.13 E3116 Nominal load torque of holding system reached
9.4 Non-Fatal Warnings (E2xxx)
9.4.1 Behavior in Case a Non-Fatal Warning Occurs
9.4.2 E2010 Position control with encoder 2 not possible
9.4.3 E2011 PLC - Warning no. 1
9.4.4 E2012 PLC - Warning no. 2
9.4.5 E2013 PLC - Warning no. 3
9.4.6 E2014 PLC - Warning no. 4
9.4.7 E2021 Motor temperature outside of measuring range
9.4.8 E2026 Undervoltage in power section
9.4.9 E2040 Device overtemperature 2 prewarning
9.4.10 E2047 Interpolation velocity = 0
9.4.11 E2048 Interpolation acceleration = 0
9.4.12 E2049 Positioning velocity >= limit value
9.4.13 E2050 Device overtemp. prewarning
9.4.14 E2051 Motor overtemp. prewarning
9.4.15 E2053 Target position out of travel range
9.4.16 E2054 Not homed
9.4.17 E2055 Feedrate override S-0-0108 = 0
9.4.18 E2056 Torque limit = 0
9.4.19 E2058 Selected positioning block has not been programmed
9.4.20 E2059 Velocity command value limit active
9.4.21 E2061 Device overload prewarning
9.4.22 E2063 Velocity command value > limit value
9.4.23 E2064 Target position out of num. range
9.4.24 E2069 Holding brake torque too low
9.4.25 E2070 Acceleration limit active
9.4.26 E2074 Encoder 1: Encoder signals disturbed
9.4.27 E2075 Encoder 2: Encoder signals disturbed
9.4.28 E2076 Measuring encoder: Encoder signals disturbed
9.4.29 E2077 Absolute encoder monitoring, motor encoder (encoder alarm)
9.4.30 E2078 Absolute encoder monitoring, opt. encoder (encoder alarm)
9.4.31 E2079 Absolute enc. monitoring, measuring encoder (encoder alarm)
9.4.32 E2086 Prewarning supply module overload
9.4.33 E2092 Internal synchronization defective
9.4.34 E2100 Positioning velocity of master axis generator too high
9.4.35 E2101 Acceleration of master axis generator is zero
9.4.36 E2140 CCD error at node
9.4.37 E2270 Analog input 1 or 2, wire break
9.4.38 E2802 HW control of braking resistor
9.4.39 E2810 Drive system not ready for operation
9.4.40 E2814 Undervoltage in mains
9.4.41 E2816 Undervoltage in power section
9.4.42 E2818 Phase failure
9.4.43 E2819 Mains failure
9.4.44 E2820 Braking resistor overload prewarning
9.4.45 E2829 Not ready for power on
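The code groups in the listing follow the prefix convention given by the section headings (F8xxx fatal errors, F7xxx safety technology errors, F6xxx travel range errors, F4xxx interface errors, F3xxx non-fatal safety technology errors, F2xxx non-fatal errors, and the E8/E4/E3/E2 warning classes). A minimal lookup sketch; the function and table are our own, and only the category names come from the headings:

```python
# Map an IndraDrive diagnostic code to its category by its two-character
# prefix, following the section headings of the listing above. The dict and
# function name are illustrative, not part of the Rexroth firmware.
CATEGORIES = {
    "F9": "Fatal system error",
    "F8": "Fatal error",
    "F7": "Safety technology error",
    "F6": "Travel range error",
    "F4": "Interface error",
    "F3": "Non-fatal safety technology error",
    "F2": "Non-fatal error",
    "E8": "Fatal warning",
    "E4": "Warning of category E4xxx",
    "E3": "Safety technology warning",
    "E2": "Non-fatal warning",
}

def classify(code: str) -> str:
    return CATEGORIES.get(code[:2].upper(), "unknown")

print(classify("F2026"))  # → Non-fatal error
print(classify("E8025"))  # → Fatal warning
```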
Research on Visual Inspection Algorithms for Defects in Textured Objects (graduate thesis)
Abstract
In the fiercely competitive environment of automated industrial production, machine vision plays a decisive role in product quality control, and its application to defect inspection has become increasingly widespread. Compared with conventional inspection techniques, automated visual inspection systems are more economical, faster, more efficient and safer. Textured objects are ubiquitous in industrial production: substrates used in semiconductor assembly and packaging, light-emitting diodes, printed circuit boards in modern electronic systems, and cloth and fabrics in the textile industry can all be regarded as objects with textured features. This thesis focuses on defect inspection techniques for textured objects, with the goal of providing efficient and reliable inspection algorithms for their automated inspection. Texture is an important feature for describing image content, and texture analysis has been successfully applied to texture segmentation and texture classification. This study proposes a defect inspection algorithm based on texture analysis and reference comparison. The algorithm tolerates image registration errors caused by object deformation and is robust to the influence of texture. It aims to provide rich and physically meaningful descriptions of the detected defect regions, such as their size, shape, brightness contrast and spatial distribution. Moreover, when a reference image is available, the algorithm can be applied to both homogeneously and non-homogeneously textured objects, and it also achieves good results on non-textured objects. Throughout the inspection process we employ steerable-pyramid texture analysis and reconstruction. Unlike traditional wavelet texture analysis, we add a tolerance-control algorithm in the wavelet domain to handle object deformation and texture influence, achieving tolerance to object deformation and robustness to texture. Finally, steerable-pyramid reconstruction guarantees accurate recovery of the physical meaning of the defect regions. In the experimental stage, we tested a series of images of practical application value. The experimental results show that the proposed defect inspection algorithm for textured objects is efficient and easy to implement.
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction
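The reference-comparison idea from the abstract can be illustrated with a toy sketch: a test pixel is flagged as defective only if no reference pixel within a small window matches it within a tolerance, which absorbs small registration errors caused by object deformation. This is only a schematic stand-in; the thesis operates on steerable-pyramid subbands rather than raw pixels, and the arrays and thresholds below are invented:

```python
# Toy reference-comparison defect detector. A pixel is defective only if no
# reference pixel within a +/-shift window explains it to within `tol`, so
# small registration/deformation errors do not raise false alarms. For each
# defect we also report a simple contrast value, echoing the thesis's goal of
# physically meaningful defect descriptions. All values here are made up.
def detect_defects(ref, test, tol=30, shift=1):
    rows, cols = len(ref), len(ref[0])
    defects = []
    for i in range(rows):
        for j in range(cols):
            best = min(
                abs(test[i][j] - ref[i + di][j + dj])
                for di in range(-shift, shift + 1)
                for dj in range(-shift, shift + 1)
                if 0 <= i + di < rows and 0 <= j + dj < cols
            )
            if best > tol:                    # no nearby reference pixel fits
                defects.append((i, j, best))  # position plus contrast
    return defects

ref = [[100, 100, 100], [100, 100, 100], [100, 100, 100]]
test = [[100, 102, 100], [100, 200, 100], [99, 100, 100]]
print(detect_defects(ref, test))  # → [(1, 1, 100)]
```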
Foreign Literature on Convolutional Codes
A Comparative Study of Viterbi and Fano Decoding Algorithm for Convolution Codes

Kapil Gupta*, P. K. Ghosh*, R. N. Piplia** and Anup Dey***
* Mody Institute of Technology & Science, Faculty of Engineering & Technology, ECE Department, Lakshmangarh, District Sikar, Rajasthan-332311, India
** Modi Institute of Technology, ECE Department, Kota, Rajasthan, India
*** Kalyani Government Engineering College, ECE Department, Kalyani, West Bengal, India

Abstract—In this paper, simulation of a Viterbi decoder and a Fano decoder for decoding convolutional codes over an AWGN channel is carried out. Graphs are plotted for the Viterbi algorithm and the Fano algorithm decoding convolutional codes of fixed code rate and fixed constraint length. The results show that the performance of the Viterbi decoder is better than that of the Fano decoder for the same code rate, constraint length and decoding delay.

I. INTRODUCTION

In recent years, there has been an increasing demand for efficient and reliable digital data transmission and storage systems. A major concern of the designer is the control of errors so that reliable reproduction of data can be obtained at the receiver end. The system parameters available to the designer are the transmitted signal power and the channel bandwidth. These two parameters, together with the power spectral density of the receiver noise, determine the signal energy per bit-to-noise power spectral density ratio E_b/N_0. For a fixed E_b/N_0, the only practical option available for changing data quality from problematic to an acceptable level is to use error control coding. Error correction coding is essentially a signal processing technique used to improve the reliability of communication systems over digital channels. There are two kinds of forward error correction techniques, namely block codes and convolutional codes [1]. Conceptually, the encoder for a block code is a memoryless device, which maps a k-symbol input sequence into an n-symbol output sequence.
The term "memoryless" indicates that each n-symbol block depends only upon a specific k-symbol block. The encoder for a convolutional code is a device with memory that accepts binary symbols in sets of k bits and outputs binary symbols in sets of n bits. Each set of n output code symbols is determined by the current input set and a span of v of the preceding input symbols. For convolutional codes, two important decoding techniques rigorously used in various communication systems are Fano decoding and Viterbi decoding. Fano decoding can perform very well with long-constraint-length convolutional codes, but it has a variable decoding time [2]. Viterbi decoding is a dominant decoding technique for convolutional codes, and has the advantages of highly satisfactory bit error rate performance, high-speed operation, ease of implementation and low cost. In digital communication, SNR is usually measured in terms of E_b/N_0, which stands for energy per bit divided by the one-sided noise density [3]. The usual measure of performance of a coded system is the average error rate that is achieved at a specified signal-to-noise ratio. The interleaved (2, 1, 7) convolutional codes with Viterbi decoding are adopted as an anti-interference scheme in a mobile image communication system [4]. A better bit error rate (BER) performance is obtained by preparing more codes for the biorthogonal system than in conventional direct sequence spread spectrum (DS/SS) systems using binary convolutional coding methods [5]. Decoder delay is one of the most important aspects by which we are in a position to select the decoding algorithm for decoding convolutional codes in various communication systems. The organization of this paper is as follows. Section II describes the convolutional encoder. The decoding algorithms for convolutional codes are described in Section III. Results are shown in Section IV.
Finally, the conclusion is drawn in Section V.

II. CONVOLUTIONAL ENCODER

During encoding, k input bits are mapped to n output bits to give a rate k/n coded bit stream. The encoder consists of a shift register of K stages, where K is the constraint length of the code. Convolutional encoders are physically constructed by using shift registers with taps determined by the generator functions. The rate of the encoder is defined as the ratio of input to output symbols. The number of taps on the shift register determines how many output bits each input bit influences; the number of shifts over which a single input bit can influence the output is the constraint length. A convolutional code looks very much like a discrete-time filter. Instead of having a single input and output stream, however, we have k input streams and n output streams.

Figure 1. Convolutional encoder for K=3, rate=1/2

An encoder is illustrated in Figure 1 for K=3 and v=2. Here M1 through M3 are 1-bit storage devices such as flip-flops. The outputs v1 and v2 of the adders are given by

v1 = s1 ⊕ s2 ⊕ s3 (1)
v2 = s1 ⊕ s3 (2)

The operation of the encoder proceeds as follows. The shift register is assumed to be clear initially. The 1st bit of the input data stream is entered into M1. During this message bit interval the commutator samples the adder outputs v1 and v2. The next message bit enters M1 while the previous bit in M1 transfers to M2, and the commutator again samples all v adder outputs. This process continues until eventually the last bit of the message has been entered into M1.

CP1324, International Conference on Methods and Models in Science and Technology (ICM2ST-10), edited by R. B. Patel and B. P. Singh, © 2010 American Institute of Physics 978-0-7354-0879-1/10/$30.00
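Equations (1) and (2) and the flushing procedure can be exercised with a short sketch (bit ordering s1 = newest register content, as in the text; the example input is our own):

```python
# Rate-1/2, K=3 convolutional encoder of Figure 1: v1 = s1 XOR s2 XOR s3 and
# v2 = s1 XOR s3, with s1 the newest bit. A sketch of the encoder described
# in the text, including the K-1 flushing zeros appended at the end.
def encode(msg):
    s2 = s3 = 0                          # registers M2, M3, initially clear
    out = []
    for s1 in msg + [0, 0]:              # two zeros flush the last bits out
        out += [s1 ^ s2 ^ s3, s1 ^ s3]   # v1, v2 per equations (1) and (2)
        s2, s3 = s1, s2                  # shift: M1 -> M2 -> M3
    return out

print(encode([1, 0, 1]))  # → [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
```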
Thereafter, in order that every message bit may proceed entirely through the shift register to complete the process, two zeros are added to the message to carry the last bit of the message through M3 and hence out of the shift register. The shift register then finds itself in its initial clear condition again. Table 1 describes the encoder stages for the K=3, rate 1/2 encoder of Figure 1.

Table 1. Encoder stages for K=3, code rate=1/2.

The tree representation of this encoder is displayed in Figure 2.

III. DECODING ALGORITHMS FOR CONVOLUTIONAL CODES

There are a variety of algorithms for decoding the received coded information sequences to recover the original data. The Viterbi algorithm is one of the practical techniques [6].

Viterbi decoding algorithm: The Viterbi algorithm steps are summarized below. The equivalence between maximum likelihood decoding and minimum distance decoding for a binary symmetric channel implies that a convolutional code may be decoded by choosing a path in the code tree whose coded sequence differs from the received one in the fewest number of places. The change of stages for the input bits is shown in Figure 3. Since a code tree is equivalent to a trellis, the trellis representation is considered, as depicted in Figure 4.

Figure 2. Tree representation of the encoder (rate=1/2, K=3)
Figure 3. Encoder input bits and output symbols
Figure 4. The trellis for t=17 for the given input.

The reason for preferring the trellis over the tree is that the number of nodes at any level of the trellis does not continue to grow as the number of message bits increases; rather, it remains constant at 2^(K-1), where K is the constraint length of the code. The Viterbi algorithm operates by computing a metric or discrepancy for every possible path in the trellis. The metric for a particular path is defined as the Hamming distance between the coded sequence represented by the path and the received sequence. Thus, for each node (state) in the trellis, the algorithm compares the two paths entering that node.
The path with the lower metric is retained while the other path is discarded. The paths that are retained by the algorithm are the survivors. The Viterbi algorithm proceeds in a step-by-step fashion as follows:

Initial step: Label the left-most state of the trellis (i.e., the all-zero state at level 0) as 0, since there is no discrepancy at this initial point in the computation.

Computation step j+1: All survivor paths have been identified. The survivor path and its metric for each state of the trellis are stored. Then at level j+1, compute the metric for all the paths entering each state of the trellis by adding the metric of the incoming branches to the metric of the connecting survivor path from level j. For each state, identify the path with the lowest metric as the survivor of step j+1, thereby updating the computation.

Final step: Continue the computation until the algorithm completes its forward search through the trellis and reaches the termination node (i.e., the all-zero state). At this time it makes the decision on the maximum likelihood path. Then, the sequence of symbols associated with that path is released to the destination as the decoded version of the received sequence. After the initial step, a decision is made on the "best" path and the symbol associated with the last branch on the path is dropped. Next, the decoding window is moved forward one time interval, the decision on the next code frame is made, and so on.

For the above-mentioned steps, the encoder trellis diagram and the survivor path are shown in Figures 5 and 6, respectively.

Figure 5. Encoder trellis diagram
Figure 6. Tracing of the right path

Once this information is built up, the Viterbi decoder is ready to recreate the sequence of bits that were input to the convolutional encoder. This is accomplished by the following steps:

1. First, select the state having the smallest accumulated error metric and save the state number of that state.
2. Iteratively perform the following step until the beginning of the trellis is reached: working backward through the state history table, for the selected state select the new state which is listed in the state history table as the predecessor of that state. Save the state number of each selected state. This step is called traceback.

3. Now work forward through the list of selected states saved in the previous steps. Look up which input bit corresponds to a transition from each predecessor state to its successor state.

Table 2 shows the accumulated metric for the full 15-bit (plus two flushing bits) message at each time t. It is interesting to note that for this hard-decision-input Viterbi decoder example, the smallest accumulated error metric in the final state indicates how many channel symbol errors occurred. Table 3 shows the states selected when tracing the path back through the survivor states.

Table 2. Accumulated metric for t=17
Table 3. Selected states when traced backward.

Fano decoding algorithm: The Fano decoding algorithm searches for the most probable path through the tree or trellis by examining one path at a time. The increment added to the metric along each branch is proportional to the probability of the received bit sequence for that branch, and a negative constant is also added to each branch metric. The value of the negative constant is chosen such that the metric for the correct path will increase on the average, while that for an incorrect path will decrease on the average. By comparing the metric of the paths with an increasing threshold, the Fano algorithm detects and discards incorrect paths. The advantage of Fano decoding is that it allows the decoder to avoid the lengthy process of testing every branch of the possible 2^K branches of the code tree in the decoding of a single message bit.
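The add-compare-select recursion and the traceback steps above can be sketched end to end for the K=3, rate-1/2 encoder of Figure 1, using a hard-decision Hamming metric (the message and the injected channel error are our own illustration):

```python
# Hard-decision Viterbi decoding for the K=3, rate-1/2 code of Figure 1
# (v1 = s1^s2^s3, v2 = s1^s3). A state is the (M1, M2) register contents, so
# there are 2^(K-1) = 4 states, as noted in the text.
def encode(msg):
    s2 = s3 = 0
    out = []
    for s1 in msg + [0, 0]:                     # K-1 flushing zeros
        out += [s1 ^ s2 ^ s3, s1 ^ s3]
        s2, s3 = s1, s2
    return out

def viterbi(rx):
    n = len(rx) // 2
    INF = float("inf")
    metric = {(0, 0): 0, (0, 1): INF, (1, 0): INF, (1, 1): INF}
    history = []                                # survivor predecessors per step
    for t in range(n):
        r1, r2 = rx[2 * t], rx[2 * t + 1]
        new_metric, pred = {}, {}
        for (s2, s3) in metric:                 # old state
            for s1 in (0, 1):                   # hypothesised input bit
                v1, v2 = s1 ^ s2 ^ s3, s1 ^ s3
                d = metric[(s2, s3)] + (v1 != r1) + (v2 != r2)  # Hamming
                ns = (s1, s2)                   # new state after the shift
                if d < new_metric.get(ns, INF): # add-compare-select
                    new_metric[ns], pred[ns] = d, ((s2, s3), s1)
        metric = new_metric
        history.append(pred)
    state = min(metric, key=metric.get)         # smallest accumulated metric
    bits = []
    for pred in reversed(history):              # trace back through survivors
        state, bit = pred[state]
        bits.append(bit)
    return bits[::-1][: n - 2]                  # drop the two flushing bits

msg = [1, 0, 1, 1, 0, 0, 1]
rx = encode(msg)
rx[3] ^= 1                                      # inject one channel bit error
print(viterbi(rx) == msg)  # → True: the single error is corrected
```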
It proceeds from node to node, taking the most probable branch at each node and increasing the threshold such that the threshold is never more than some pre-assigned value, say τ, below the metric. If the decoder takes an incorrect path because of noise, that path at first appears more probable than the correct path. Since the metric of an incorrect path decreases on the average, the metric will fall below the current threshold, say τ0. When this occurs, the decoder backs up and takes alternate paths through the tree, in order of decreasing branch metrics, with the aim of finding another path that exceeds the threshold τ0. If it is successful in finding such an alternative path, it continues along that path, always selecting the most probable branch at each node. On the other hand, if no path exists that exceeds the threshold τ0, the threshold is reduced by an amount τ and the original path is retraced. If the original path does not stay above the new threshold value, the decoder resumes its backward search for another path. This procedure is continued, with the threshold reduced by τ for each repetition, until the decoder finds a path that remains above the threshold value. In Fano decoding, at the arrival of the first v code bits the decoder compares these bits with the two branches diverging from the starting node. If one of the branches matches these v code bits exactly, the decoder follows that branch. If there are errors in the received bits due to noise, the decoder follows the branch with the smaller discrepancy. At the second node a similar comparison is made between the diverging branches and the second set of bits, and so on at succeeding nodes. If, in the transmission of any v bits, errors have corrupted more than half of the bits, then at the node from which the branch diverges the decoder will make a mistake. In such a case the entire decoding on this path must be in error.
To combat this, the decoder keeps a record of the total number of discrepancies between the received code bits and the corresponding bits encountered along the path. The decoder is programmed to retrace its path back to the node at which the apparent errors occurred and to choose an alternative branch out of that node. In this way the decoder finds a path through K nodes.

The steps involved in simulating a communication channel using convolutional encoding with Viterbi and Fano decoding are as follows:
(1) Generate the input bits (binary data).
(2) Pass the data through the convolutional encoder to produce channel symbols.
(3) Add noise to the transmitted channel symbols; these are the received channel symbols.
(4) Pass the received channel symbols through the Viterbi and Fano decoders.
(5) Compare the decoded data bits with the input bits.
(6) Count the number of errors.
(7) Repeat the process for multiple Eb/N0 values.
(8) Plot BER against Eb/N0.

IV. RESULTS

Monte Carlo simulations are performed: five independent trials are carried out and their average is plotted. As illustrated in Figure 7, the plot shows the BER of decoding convolutional codes with a Viterbi decoder for constraint length K = 3, rate 1/2, and decoding delays D of 4, 5 and 6 times the constraint length. For a bit error rate of 10^-3 with decoding delays D = 4K, 5K and 6K, the required SNR is 5 dB, 4.1 dB and 3.6 dB respectively. Taking a decoding delay of 5K or 6K thus gains 0.9 dB or 1.4 dB, respectively, over a decoding delay of 4K for the same constraint length K = 3.

Figure 7. Performance of the Viterbi decoder, rate = 1/2, K = 3, delay = 4K, 5K and 6K (BER vs. Eb/N0).

Figure 8 shows the BER of the Viterbi decoder for constraint length K = 3, rate 1/2 and decoding delay D = 5K, together with the Fano decoder. The performance of the Viterbi decoder with decoding delay 5K is much better than that of the Fano sequential decoder.
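Steps (1)-(8) can be sketched end to end. The following uses a rate-1/3 repetition code with majority-vote decoding as a placeholder for the paper's convolutional encoder and Viterbi/Fano decoders, so its curves are illustrative only; all function names and parameters are our own.

```python
import random

def simulate_ber(ebno_db, n_bits=20000, seed=1):
    """BPSK over AWGN with hard decisions; a rate-1/3 repetition code
    and majority-vote decoder stand in for the convolutional encoder
    and Viterbi/Fano decoders of the paper (placeholder pipeline)."""
    rng = random.Random(seed)
    rate = 1 / 3
    ebno = 10 ** (ebno_db / 10)
    sigma = (1 / (2 * rate * ebno)) ** 0.5           # noise std per symbol
    errors = 0
    for _ in range(n_bits):                          # (1) generate a data bit
        bit = rng.randrange(2)
        symbols = [1.0 - 2 * bit] * 3                # (2) encode, BPSK map
        received = [s + rng.gauss(0, sigma) for s in symbols]  # (3) add noise
        votes = sum(r < 0 for r in received)         # (4) hard-decision decode
        decoded = 1 if votes >= 2 else 0
        errors += decoded != bit                     # (5)-(6) compare, count
    return errors / n_bits

# (7)-(8): sweep Eb/N0 and tabulate BER (plotting omitted here).
for ebno_db in (0, 2, 4, 6):
    print(ebno_db, simulate_ber(ebno_db))
```

The BER falls as Eb/N0 rises, which is the qualitative behavior the plotted curves in this section exhibit.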
The difference in SNR is about 5 dB.

Figure 8. Performance of the Viterbi and Fano decoders for R = 1/2, K = 3, D = 5K (BER vs. Eb/N0).

Figure 9 depicts the BER of the Viterbi decoder for constraint length K = 3, rate 1/2 and decoding delay D = 6K, together with the Fano sequential decoder for the same constraint length K = 3. The gain in SNR at a BER of 10^-3 is found to be 4.1 dB. Hence the performance of the Viterbi decoder with decoding delay 6K is better than that of the Fano sequential decoder.

Figure 9. Performance of the Viterbi and Fano decoders for R = 1/2, K = 3, D = 6K (BER vs. Eb/N0).

V. CONCLUSION

The numerical results show that the performance of the system improves as the decoding delay increases in the case of Viterbi decoding. For Viterbi decoding, the required SNR is 3.9 dB when the delay is D = 6K and 4.1 dB when D = 5K. For D = 5K, rate 1/2 and K = 3, the required SNR is 9 dB with Fano decoding and 4.1 dB with Viterbi decoding. The gain in SNR at a BER of 10^-3 is 4.1 dB for D = 6K, rate 1/2 and K = 3. Overall, the Viterbi decoding algorithm performs better than the Fano algorithm for the same decoding delay.

Copyright of AIP Conference Proceedings is the property of the American Institute of Physics, and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.
Nonbinary Quantum Reed-Muller Codes
arXiv:quant-ph/0502001v1 31 Jan 2005

Nonbinary Quantum Reed-Muller Codes

Pradeep K. Sarvepalli
Dept. of Computer Science, Texas A&M University
College Station, TX 77843-3112, USA
Email: pradeep@

Andreas Klappenecker
Dept. of Computer Science, Texas A&M University
College Station, TX 77843-3112, USA
Email: klappi@

Abstract — We construct nonbinary quantum codes from classical generalized Reed-Muller codes and derive the conditions under which these quantum codes can be punctured. We provide a partial answer to a question raised by Grassl, Beth and Rötteler on the existence of q-ary quantum MDS codes of length n with q ≤ n ≤ q^2 − 1.

I. INTRODUCTION

The quest to build a scalable quantum computer that is resilient against decoherence errors and operational noise has sparked a lot of interest in quantum error-correcting codes. The early research was confined to the study of binary quantum error-correcting codes, but more recently the theory was extended to nonbinary codes, which are useful in the realization of fault-tolerant computations.

The purpose of this paper is to derive families of quantum stabilizer codes based on classical generalized Reed-Muller (GRM) codes. Recall that one cannot arbitrarily shorten a quantum stabilizer code. We therefore study the puncture codes of GRM stabilizer codes, which determine the possible lengths of shortened codes.

We now give an overview of our main theorems. We omit some technical details to keep the exposition brief, but the subsequent sections provide all missing details. Let q be a power of a prime. We denote by R_q(ν, m) the q-ary generalized Reed-Muller code of order ν with parameters [q^m, k(ν), d(ν)]_q and dual distance d(ν^⊥). Our first two results concern the construction of two families of nonbinary quantum stabilizer codes:

Theorem 1: For 0 ≤ ν1 ≤ ν2 ≤ m(q − 1) − 1, there exist pure quantum stabilizer codes with the parameters [[q^m, k(ν2) − k(ν1), min{d(ν1^⊥), d(ν2)}]]_q.

Theorem 2: For 0 ≤ ν ≤ m(q − 1) − 1, there exists a pure [[q^{2m}, q^{2m} − 2k(ν), d(ν^⊥)]]_q quantum stabilizer code.

Puncturing a quantum stabilizer code is restricted because the underlying classical code must remain self-orthogonal. Our next two results show when quantum Reed-Muller codes can be punctured.

Theorem 3: For 0 ≤ ν1 ≤ ν2 ≤ m(q − 1) − 1 and 0 ≤ µ ≤ ν2 − ν1, if R_q(µ, m) has a codeword of weight r, then there exists an [[r, ≥ k(ν2) − k(ν1) − q^m + r, ≥ d]]_q quantum stabilizer code with d = min{d(ν2), d(ν1^⊥)}.

Theorem 4: Let C = R_{q^2}(ν, m) with 0 ≤ ν ≤ m(q − 1) − 1 and (q + 1)ν ≤ µ ≤ m(q^2 − 1) − 1. If R_{q^2}(µ, m)^⊥|_{F_q} contains a vector of weight r, then there exists a quantum stabilizer code with parameters [[r, ≥ r − 2k(ν), ≥ d(ν^⊥)]]_q.

Our last result deals with the existence of quantum MDS codes of length n with q ≤ n ≤ q^2 − 1. These are derived from the previous results on punctured codes, making use of some additional properties of MDS codes.

Theorem 5: There exist quantum MDS codes with the parameters [[(ν + 1)q, (ν + 1)q − 2ν − 2, ν + 2]]_q for 0 ≤ ν ≤ q − 2.

The paper is organized as follows. In the next section we review the basics of generalized Reed-Muller codes and construct two series of quantum codes using a nonbinary version of the CSS construction and a Hermitian construction. In Section III, we derive our results concerning the puncturing of these stabilizer codes. In Section IV, we consider a special case of GRM codes to obtain some quantum MDS codes. The MDS nature of these codes allows us to make more definitive statements about their puncture codes, making it possible to show the existence of quantum MDS codes of lengths between q and q^2.

Notation: We denote the Euclidean inner product of two vectors x and y in F_q^n by ⟨x|y⟩ = x1 y1 + ··· + xn yn. We write ⟨x|y⟩_h = x1 y1^q + ··· + xn yn^q to denote the Hermitian inner product of two vectors x and y in F_{q^2}^n. We denote the Euclidean dual of a code C ⊆ F_q^n by C^⊥, and the Hermitian dual of a code D ⊆ F_{q^2}^n by D^{⊥h}.

II. GENERALIZED REED-MULLER CODES

Primitive generalized Reed-Muller codes were introduced by Kasami, Lin, and Peterson [8] as a generalization of Reed-Muller codes [10], [13]. We follow Assmus and Key [2], [3] in our approach to GRM codes. Our main goal is to derive two series of quantum stabilizer codes.

A. Classical Codes

Let (P1, ..., Pn) denote an enumeration of all points in F_q^m with n = q^m. We denote by L_m(ν) the subspace of F_q[x1, ..., xm] generated by polynomials of degree ν or less, where ν is an integer in the range 0 ≤ ν < m(q − 1). Let ev denote the evaluation function ev f = (f(P1), ..., f(Pn)). The generalized Reed-Muller code R_q(ν, m) of order ν is defined as

R_q(ν, m) = {(f(P1), ..., f(Pn)) | f ∈ L_m(ν)} = {ev f | f ∈ L_m(ν)}.   (1)

The dimension k(ν) of the code R_q(ν, m) equals

k(ν) = Σ_{j=0}^{m} (−1)^j C(m, j) C(m + ν − jq, ν − jq),   (2)

and its minimum distance d(ν) is given by

d(ν) = (R + 1) q^Q,   (3)

where m(q − 1) − ν = (q − 1)Q + R with 0 ≤ R < q − 1; see [2], [3], [8], [11].

It is clear that R_q(ν, m) ⊆ R_q(ν′, m) holds for all parameters ν ≤ ν′. More interesting is the fact that the dual code of R_q(ν, m) is again a generalized Reed-Muller code: R_q(ν, m)^⊥ = R_q(ν^⊥, m) with ν^⊥ = m(q − 1) − 1 − ν. We need the following result for determining the distances and purity of quantum codes.

Lemma 1: Let C1 = R_q(ν1, m) and C2 = R_q(ν2, m). If ν1 < ν2, then C1 ⊂ C2 and wt(C2 \ C1) = wt(C2).

Proof: We already know that C1 ⊂ C2 if ν1 < ν2. We denote the minimum distances of the codes C1 and C2 by d1 = wt(C1) = (R1 + 1) q^{Q1} and d2 = wt(C2) = (R2 + 1) q^{Q2}.
It suffices to show that ν1 < ν2 implies d2 < d1, because in that case C2 \ C1 must contain a vector of weight d2, which shows that wt(C2 \ C1) = wt(C2), as claimed. Since ν1 < ν2, we have

m(q − 1) − ν2 < m(q − 1) − ν1.   (4)

If we set m(q − 1) − νk = (q − 1)Qk + Rk with 0 ≤ Rk < q − 1 for k ∈ {1, 2}, then it follows from (4) that Q1 ≥ Q2. If Q1 = Q2, then R1 > R2; hence d1 = (R1 + 1) q^{Q1} > (R2 + 1) q^{Q2} = d2. On the other hand, if Q1 > Q2, then d1 ≥ (0 + 1) q^{Q1} and d2 ≤ (q − 2 + 1) q^{Q2} = (q − 1) q^{Q2}, and it follows that d1 ≥ q^{Q2 + 1} > d2.

Theorem 1: For 0 ≤ ν1 ≤ ν2 ≤ m(q − 1) − 1, there exists a pure [[q^m, k(ν2) − k(ν1), min{d(ν1^⊥), d(ν2)}]]_q quantum stabilizer code, where the parameters k(ν) and d(ν) are given by equations (2) and (3), respectively.

Proof: For ν1 ≤ ν2, C1 = R_q(ν1, m) ⊆ R_q(ν2, m) = C2. By Lemma 2, we know there exists a pure [[q^m, k(ν2) − k(ν1), min{d(ν1^⊥), d(ν2)}]]_q quantum code. The purity of the code follows from Lemma 1.

The next construction starts from a generalized Reed-Muller code over F_{q^2}. If such a code is contained in its Hermitian dual code, then it can be used to construct quantum codes. Therefore, our immediate goal is to find such self-orthogonal Reed-Muller codes.

Lemma 4: If ν is an order in the range 0 ≤ ν ≤ m(q − 1) − 1, then R_{q^2}(ν, m) ⊆ R_{q^2}(ν, m)^{⊥h}.

Proof: Recall that ev f = (f(P1), ..., f(Pn)). The code R_{q^2}(0, m) is generated by 1, the all-one vector. The relation R_{q^2}(0, m)^⊥ = R_{q^2}(m(q^2 − 1) − 1, m) shows that ⟨ev f | 1⟩ = 0 for all polynomials f in L_m(ν) with deg f ≤ m(q^2 − 1) − 1.
If x1^{a1} ··· xm^{am} and x1^{b1} ··· xm^{bm} are monomials in L_m(ν), then

⟨ev x1^{a1} ··· xm^{am} | ev x1^{b1} ··· xm^{bm}⟩_h = ⟨ev x1^{a1} ··· xm^{am} | ev x1^{q b1} ··· xm^{q bm}⟩ = ⟨ev x1^{a1 + q b1} ··· xm^{am + q bm} | 1⟩ = 0,

where the last equality holds because the monomial has degree at most (m(q − 1) − 1)(q + 1) < m(q^2 − 1) − 1. Since the monomials generate L_m(ν), it follows that ⟨ev f | ev g⟩_h = 0 for all f, g in L_m(ν).

Theorem 2: For 0 ≤ ν ≤ m(q − 1) − 1, there exist pure quantum codes [[q^{2m}, q^{2m} − 2k(ν), d(ν^⊥)]]_q, where

k(ν) = Σ_{j=0}^{m} (−1)^j C(m, j) C(m + ν − j q^2, ν − j q^2),

and d(ν^⊥) = (R + 1) q^{2Q}, with ν + 1 = (q^2 − 1)Q + R and 0 ≤ R < q^2 − 1.

Proof: First we note that wt(R_{q^2}(ν, m)^{⊥h}) = wt(R_{q^2}(ν, m)^⊥) = d(ν^⊥). Recall that d(ν^⊥) can be computed using equation (3), keeping in mind that these codes are over F_{q^2}. From Lemma 4 and Lemma 5 we can conclude that there exists a pure quantum code [[q^{2m}, q^{2m} − 2k(ν), d(ν^⊥)]]_q, where k(ν) is the dimension of R_{q^2}(ν, m) as given by equation (2). The purity of the code follows from Lemma 1.

III. PUNCTURING QUANTUM GRM CODES

Puncturing provides a means to construct new codes from existing codes. Puncturing quantum stabilizer codes, however, is not as straightforward as in the classical case. Rains introduced the notion of a puncture code [12], which simplified this problem and provided a means to find out when punctured codes are possible. Further extensions of these ideas can be found in [6]. With the help of these results we now study the puncturing of GRM codes.

Recall that with every quantum code constructed using the CSS construction we can associate two classical codes, C1 and C2. Define C to be the direct sum of C1 and C2^⊥, viz. C = C1 ⊕ C2^⊥. The puncture code P(C) [6, Theorem 12] is defined as

P(C) = {(ai bi)_{i=1}^n | a ∈ C1, b ∈ C2^⊥}^⊥.   (5)

The usefulness of the puncture code lies in the fact that if it contains a vector of nonzero weight r, then the corresponding quantum code can be punctured to length r with minimum distance greater than or equal to the distance of the parent code.
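The GRM parameters k(ν) and d(ν) that appear throughout these statements can be evaluated directly from equations (2) and (3); the following is a minimal numerical sketch, with helper names of our own choosing.

```python
from math import comb

def grm_dim(q, m, nu):
    """Dimension of R_q(nu, m) via formula (2)."""
    total = 0
    for j in range(m + 1):
        if nu - j * q < 0:
            continue                      # binomial coefficient vanishes
        total += (-1) ** j * comb(m, j) * comb(m + nu - j * q, nu - j * q)
    return total

def grm_dist(q, m, nu):
    """Minimum distance of R_q(nu, m) via formula (3):
    write m(q-1) - nu = (q-1)Q + R with 0 <= R < q-1; then d = (R+1)q^Q."""
    Q, R = divmod(m * (q - 1) - nu, q - 1)
    return (R + 1) * q ** Q

# Sanity check: for m = 1, R_q(nu, 1) has k = nu+1 and d = q-nu, an MDS
# [q, nu+1, q-nu] code -- the case underlying Corollary 6.
q = 5
for nu in range(q - 1):
    print(nu, grm_dim(q, 1, nu), grm_dist(q, 1, nu))
```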
Theorem 3: For 0 ≤ ν1 ≤ ν2 ≤ m(q − 1) − 1 and 0 ≤ µ ≤ ν2 − ν1, if R_q(µ, m) has a codeword of weight r, then there exists an [[r, ≥ k(ν2) − k(ν1) − q^m + r, ≥ d]]_q quantum code, where d = min{d(ν2), d(ν1^⊥)}. In particular, there exists a [[d(µ), ≥ k(ν2) − k(ν1) − q^m + d(µ), ≥ d]]_q quantum code.

Proof: Let Ci = R_q(νi, m) with 0 ≤ ν1 ≤ ν2 ≤ m(q − 1) − 1, for i ∈ {1, 2}. By Theorem 1, a [[q^m, k(ν2) − k(ν1), d]]_q quantum code Q with d = min{d(ν2), d(ν1^⊥)} exists. It follows from equation (5) that P(C)^⊥ = R_q(ν1 + ν2^⊥, m), so

P(C) = R_q(m(q − 1) − ν1 − ν2^⊥ − 1, m) = R_q(ν2 − ν1, m).   (6)

By [6, Theorem 11], if there exists a vector of weight r in P(C), then there exists an [[r, k′, d′]]_q quantum code with k′ ≥ k(ν2) − k(ν1) − q^m + r and distance d′ ≥ d. Since P(C) = R_q(ν2 − ν1, m) ⊇ R_q(µ, m) for all 0 ≤ µ ≤ ν2 − ν1, the weight distributions of the codes R_q(µ, m) give all the lengths to which Q can be punctured. Moreover, P(C) certainly contains vectors of weight r = d(µ), the minimum weight of R_q(µ, m). Thus there exist punctured quantum codes with the parameters [[d(µ), ≥ k(ν2) − k(ν1) − q^m + d(µ), ≥ d]]_q.

IV. QUANTUM MDS CODES VIA GRM CODES

MDS codes occupy a special place in coding theory in view of their optimality with respect to the Singleton bound, and also because of their many connections to other branches of mathematics. In this section we construct some linear quantum MDS codes as a special case of quantum GRM codes. The usefulness of this approach will be appreciated later, when we try to puncture them.

A. Quantum MDS Codes

The previous results enable us to derive some quantum MDS codes very easily as a special case.

Corollary 6: There exist quantum MDS codes with the parameters [[q, q − 2ν − 2, ν + 2]]_q for 0 ≤ ν ≤ (q − 2)/2, and [[q^2, q^2 − 2ν − 2, ν + 2]]_q for 0 ≤ ν ≤ q − 2.

Proof: These codes and the corresponding ranges of ν are a direct consequence of Corollary 3 and Theorem 2 with m = 1. (Since C is over F_{q^2}, q^2 should be used in equations (2) and (3).) In both cases it can be verified that k(ν) = ν + 1 and d(ν^⊥) = ν + 2. The quantum codes [[q, q − 2ν − 2, ν + 2]]_q and [[q^2, q^2 − 2ν − 2, ν + 2]]_q follow on substituting the values of k(ν) and d(ν^⊥) in these constructions. It can easily be verified that these quantum codes satisfy the quantum Singleton bound [12].

Lemma 8: Let C = R_{q^2}(ν, 1) with 0 ≤ ν ≤ q − 2; then the puncture code P_h(C) has a vector of weight (ν + 1)q.

Proof: Let C = R_{q^2}(ν, 1). By Theorem 4, we know that P_h(C) ⊇ R_{q^2}(µ, 1)^⊥|_{F_q}, where (q + 1)ν ≤ µ ≤ q^2 − 2, and

P_h(C) ⊇ R_{q^2}(µ, 1)^⊥|_{F_q} = R_{q^2}(q^2 − µ − 2, 1)|_{F_q}.   (16)

Choose µ = (ν + 1)q − 2. (Note that (q + 1)ν ≤ (ν + 1)q − 2 ≤ q^2 − 2 holds for 0 ≤ ν ≤ q − 2.) Then

P_h(C) ⊇ R_{q^2}(q^2 − (ν + 1)q, 1)|_{F_q}.   (17)

We will show that R_q(q − ν − 1, 2) is embedded in R_{q^2}(q^2 − (ν + 1)q, 1)|_{F_q}, and thus in P_h(C) also. By equation (3), wt(R_q(q − ν − 1, 2)) = d(q − ν − 1) = (ν + 1)q. By Lemma 7, R_q(q − ν − 1, 2) is embedded in R_{q^2}(q^2 − (ν + 1)q, 1)|_{F_q} ⊆ P_h(C). Hence P_h(C) contains a vector of weight (ν + 1)q.

V. CONCLUSION

We constructed a family of nonbinary quantum codes based on classical generalized Reed-Muller codes, and then studied when these codes can be punctured to give more quantum codes. As a special case we derived a series of quantum MDS codes from the generalized Reed-Muller codes. We provided a partial answer to the question of the existence of q-ary MDS codes with lengths in the range q to q^2 by analytically proving the existence of a series of codes with lengths in this range.

ACKNOWLEDGMENT

This research was supported by NSF CAREER award CCF 0347310, NSF grant CCR 0218582, a Texas A&M TITF initiative, and a TEES Select Young Faculty award.

REFERENCES

[1] A. Ashikhmin and E. Knill, "Nonbinary quantum stabilizer codes," IEEE Trans. Inform. Theory, vol. 47, no. 7, pp. 3065-3072, 2001.
[2] E. Assmus, Jr. and J. Key, Designs and their Codes. Cambridge: Cambridge University Press, 1992.
[3] E. Assmus, Jr. and J. Key, "Polynomial codes and finite geometries," in Handbook of Coding Theory, V. Pless and W. Huffman, Eds., vol. II. Amsterdam: Elsevier, 1998, pp. 1269-1343.
[4] A. Calderbank, E. Rains, P. Shor, and N. Sloane, "Quantum error correction via codes over GF(4)," IEEE Trans. Inform. Theory, vol. 44, pp. 1369-1387, 1998.
[5] P. Delsarte, "On subfield subcodes of Reed-Solomon codes," IEEE Trans. Inform. Theory, vol. 21, no. 5, pp. 575-576, 1975.
[6] M. Grassl, T. Beth, and M. Rötteler, "On optimal quantum codes," Internat. J. Quantum Information, vol. 2, no. 1, pp. 757-775, 2004.
[7] W. C. Huffman and V. Pless, Fundamentals of Error-Correcting Codes. Cambridge: Cambridge University Press, 2003.
[8] T. Kasami, S. Lin, and W. W. Peterson, "New generalizations of the Reed-Muller codes, Part I: Primitive codes," IEEE Trans. Inform. Theory, vol. 14, no. 2, pp. 189-199, 1968.
[9] J.-L. Kim and J. Walker, "Nonbinary quantum error-correcting codes from algebraic curves," submitted for publication.
[10] D. Muller, "Applications of Boolean algebra to switching circuit design and to error correction," IRE Trans. Electron. Comp., vol. 3, pp. 6-12, 1954.
[11] R. Pellikaan and X.-W. Wu, "List decoding of q-ary Reed-Muller codes," IEEE Trans. Inform. Theory, vol. 50, no. 4, pp. 679-682, 2004.
[12] E. Rains, "Nonbinary quantum codes," IEEE Trans. Inform. Theory, vol. 45, pp. 1827-1832, 1999.
[13] I. Reed, "A class of multiple-error-correcting codes and a decoding scheme," IEEE Trans. Inform. Theory, vol. 4, pp. 38-49, 1954.
[14] A. Steane, "Quantum Reed-Muller codes," IEEE Trans. Inform. Theory, vol. 45, no. 5, pp. 1701-1703, 1999.
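As a numerical sanity check on Lemma 4 above, the following sketch verifies the Hermitian self-orthogonality of R_{q^2}(ν, m) in the smallest nontrivial case q = 2, m = 2, ν = 1 (a code over F_4). The field tables and helper names are our own construction, not the paper's.

```python
from itertools import product

LOG = {1: 0, 2: 1, 3: 2}          # F_4 = {0, 1, w, w^2} coded as {0, 1, 2, 3}
EXP = [1, 2, 3]

def f4_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 3]

def f4_add(a, b):                  # characteristic 2: addition is XOR
    return a ^ b

def herm_inner(x, y):
    """Hermitian inner product <x|y>_h = sum_i x_i * y_i^q over F_4 (q=2)."""
    total = 0
    for xi, yi in zip(x, y):
        total = f4_add(total, f4_mul(xi, f4_mul(yi, yi)))  # yi^2
    return total

# Evaluation vectors of the monomials 1, x1, x2 (degree <= nu = 1) at all
# 16 points of F_4^2: these span R_4(1, 2).
points = list(product(range(4), repeat=2))
basis = [
    [1 for _ in points],           # ev 1
    [p[0] for p in points],        # ev x1
    [p[1] for p in points],        # ev x2
]

# Every pair of basis vectors (hence the whole code) is Hermitian
# orthogonal, as Lemma 4 asserts: R_4(1,2) is contained in its
# Hermitian dual.
print(all(herm_inner(u, v) == 0 for u in basis for v in basis))
```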
Principles of Digital Communication
Robert G. Gallager
January 5, 2008

Preface: introduction and objectives

The digital communication industry is an enormous and rapidly growing industry, roughly comparable in size to the computer industry. The objective of this text is to study those aspects of digital communication systems that are unique to those systems. That is, rather than focusing on hardware and software for these systems, which is much like hardware and software for many other kinds of systems, we focus on the fundamental system aspects of modern digital communication.

Digital communication is a field in which theoretical ideas have had an unusually powerful impact on system design and practice. The basis of the theory was developed in 1948 by Claude Shannon, and is called information theory. For the first 25 years or so of its existence, information theory served as a rich source of academic research problems and as a tantalizing suggestion that communication systems could be made more efficient and more reliable by using these approaches. Other than small experiments and a few highly specialized military systems, the theory had little interaction with practice. By the mid-1970s, however, mainstream systems using information-theoretic ideas began to be widely implemented. The first reason for this was the increasing number of engineers who understood both information theory and communication system practice. The second reason was that the low cost and increasing processing power of digital hardware made it possible to implement the sophisticated algorithms suggested by information theory. The third reason was that the increasing complexity of communication systems required the architectural principles of information theory.

The theoretical principles here fall roughly into two categories: the first provide analytical tools for determining the performance of particular systems, and the second put fundamental limits on the performance of any system.
Much of the first category can be understood by engineering undergraduates, while the second category is distinctly graduate in nature. It is not that graduate students know so much more than undergraduates, but rather that undergraduate engineering students are trained to master enormous amounts of detail and to master the equations that deal with that detail. They are not used to the patience and deep thinking required to understand abstract performance limits. This patience comes later with thesis research.

My original purpose was to write an undergraduate text on digital communication, but experience teaching this material over a number of years convinced me that I could not write an honest exposition of principles, including both what is possible and what is not possible, without losing most undergraduates. There are many excellent undergraduate texts on digital communication describing a wide variety of systems, and I didn't see the need for another. Thus this text is now aimed at graduate students, but accessible to patient undergraduates.

The relationship between theory, problem sets, and engineering/design in an academic subject is rather complex. The theory deals with relationships and analysis for models of real systems. A good theory (and information theory is one of the best) allows for simple analysis of simplified models. It also provides structural principles that allow insights from these simple models to be applied to more complex and realistic models. Problem sets provide students with an opportunity to analyze these highly simplified models, and, with patience, to start to understand the general principles. Engineering deals with making the approximations and judgment calls needed to create simple models that focus on the critical elements of a situation, and from there to design workable systems.

The important point here is that engineering (at this level) cannot really be separated from theory.
Engineering is necessary to choose appropriate theoretical models, and theory is necessary to find the general properties of those models. To oversimplify, engineering determines what the reality is and theory determines the consequences and structure of that reality. At a deeper level, however, the engineering perception of reality heavily depends on the perceived structure (all of us carry oversimplified models around in our heads). Similarly, the structures created by theory depend on engineering common sense to focus on important issues. Engineering sometimes becomes overly concerned with detail, and theory overly concerned with mathematical niceties, but we shall try to avoid both of these excesses here.

Each topic in the text is introduced with highly oversimplified toy models. The results about these toy models are then related to actual communication systems, and this is used to generalize the models. We then iterate back and forth between analysis of models and creation of models. Understanding the performance limits on classes of models is essential in this process.

There are many exercises designed to help understand each topic. Some give examples showing how an analysis breaks down if the restrictions are violated. Since analysis always treats models rather than reality, these examples build insight into how the results about models apply to real systems. Other exercises apply the text results to very simple cases, and others generalize the results to more complex systems. Yet others explore the sense in which theoretical models apply to particular practical problems.

It is important to understand that the purpose of the exercises is not so much to get the 'answer' as to acquire understanding. Thus students using this text will learn much more if they discuss the exercises with others and think about what they have learned after completing each exercise.
The point is not to manipulate equations (which computers can now do better than students) but rather to understand the equations (which computers cannot do).

As pointed out above, the material here is primarily graduate in terms of abstraction and patience, but requires only a knowledge of elementary probability, linear systems, and simple mathematical abstraction, so it can be understood at the undergraduate level. For both undergraduates and graduates, I feel strongly that learning to reason about engineering material is more important, both in the workplace and in further education, than learning to pattern-match and manipulate equations.

Most undergraduate communication texts aim at familiarity with a large variety of different systems that have been implemented historically. This is certainly valuable in the workplace, at least for the near term, and provides a rich set of examples that are valuable for further study. The digital communication field is so vast, however, that learning from examples is limited, and in the long term it is necessary to learn the underlying principles. The examples from undergraduate courses provide a useful background for studying these principles, but the ability to reason abstractly that comes from elementary pure mathematics courses is equally valuable. Most graduate communication texts focus more on the analysis of problems, with less focus on the modeling, approximation, and insight needed to see how these problems arise. Our objective here is to use simple models and approximations as a way to understand the general principles.
We will use quite a bit of mathematics in the process, but the mathematics will be used to establish general results precisely rather than to carry out detailed analyses of special cases.

Contents

1 Introduction to digital communication
1.1 Standardized interfaces and layering
1.2 Communication sources
1.2.1 Source coding
1.3 Communication channels
1.3.1 Channel encoding (modulation)
1.3.2 Error correction
1.4 Digital interface
1.4.1 Network aspects of the digital interface
1.5 Supplementary reading

2 Coding for Discrete Sources
2.1 Introduction
2.2 Fixed-length codes for discrete sources
2.3 Variable-length codes for discrete sources
2.3.1 Unique decodability
2.3.2 Prefix-free codes for discrete sources
2.3.3 The Kraft inequality for prefix-free codes
2.4 Probability models for discrete sources
2.4.1 Discrete memoryless sources
2.5 Minimum L for prefix-free codes
2.5.1 Lagrange multiplier solution for the minimum L
2.5.2 Entropy bounds on L
2.5.3 Huffman's algorithm for optimal source codes
2.6 Entropy and fixed-to-variable-length codes
2.6.1 Fixed-to-variable-length codes
2.7 The AEP and the source coding theorems
2.7.1 The weak law of large numbers
2.7.2 The asymptotic equipartition property
2.7.3 Source coding theorems
2.7.4 The entropy bound for general classes of codes
2.8 Markov sources
2.8.1 Coding for Markov sources
2.8.2 Conditional entropy
2.9 Lempel-Ziv universal data compression
2.9.1 The LZ77 algorithm
2.9.2 Why LZ77 works
2.9.3 Discussion
2.10 Summary of discrete source coding
2.E Exercises

3 Quantization
3.1 Introduction to quantization
3.2 Scalar quantization
3.2.1 Choice of intervals for given representation points
3.2.2 Choice of representation points for given intervals
3.2.3 The Lloyd-Max algorithm
3.3 Vector quantization
3.4 Entropy-coded quantization
3.5 High-rate entropy-coded quantization
3.6 Differential entropy
3.7 Performance of uniform high-rate scalar quantizers
3.8 High-rate two-dimensional quantizers
3.9 Summary of quantization
3A Appendix A: Nonuniform scalar quantizers
3B Appendix B: Nonuniform 2D quantizers
3.E Exercises

4 Source and channel waveforms
4.1 Introduction
4.1.1 Analog sources
4.1.2 Communication channels
4.2 Fourier series
4.2.1 Finite-energy waveforms
4.3 L2 functions and Lebesgue integration over [−T/2, T/2]
4.3.1 Lebesgue measure for a union of intervals
4.3.2 Measure for more general sets
4.3.3 Measurable functions and integration over [−T/2, T/2]
4.3.4 Measurability of functions defined by other functions
4.3.5 L1 and L2 functions over [−T/2, T/2]
4.4 The Fourier series for L2 waveforms
4.4.1 The T-spaced truncated sinusoid expansion
4.5 Fourier transforms and L2 waveforms
4.5.1 Measure and integration over R
4.5.2 Fourier transforms of L2 functions
4.6 The DTFT and the sampling theorem
4.6.1 The discrete-time Fourier transform
4.6.2 The sampling theorem
4.6.3 Source coding using sampled waveforms
4.6.4 The sampling theorem for [∆−W, ∆+W]
4.7 Aliasing and the sinc-weighted sinusoid expansion
4.7.1 The T-spaced sinc-weighted sinusoid expansion
4.7.2 Degrees of freedom
4.7.3 Aliasing: a time-domain approach
4.7.4 Aliasing: a frequency-domain approach
4.8 Summary
4A Appendix: Supplementary material and proofs
4A.1 Countable sets
4A.2 Finite unions of intervals over [−T/2, T/2]
4A.3 Countable unions and outer measure over [−T/2, T/2]
4A.4 Arbitrary measurable sets over [−T/2, T/2]
4.E Exercises

5 Vector spaces and signal space
5.1 The axioms and basic properties of vector spaces
5.1.1 Finite-dimensional vector spaces
5.2 Inner product spaces
5.2.1 The inner product spaces Rn and Cn
5.2.2 One-dimensional projections
5.2.3 The inner product space of L2 functions
5.2.4 Subspaces of inner product spaces
5.3 Orthonormal bases and the projection theorem
5.3.1 Finite-dimensional projections
5.3.2 Corollaries of the projection theorem
5.3.3 Gram-Schmidt orthonormalization
5.3.4 Orthonormal expansions in L2
5.4 Summary
5A Appendix: Supplementary material and proofs
5A.1 The Plancherel theorem
5A.2 The sampling and aliasing theorems
5A.3 Prolate spheroidal waveforms
5.E Exercises

6 Channels, modulation, and demodulation
6.1 Introduction
6.2 Pulse amplitude modulation (PAM)
6.2.1 Signal constellations
6.2.2 Channel imperfections: a preliminary view
6.2.3 Choice of the modulation pulse
6.2.4 PAM demodulation
6.3 The Nyquist criterion
6.3.1 Band-edge symmetry
6.3.2 Choosing {p(t−kT); k∈Z} as an orthonormal set
6.3.3 Relation between PAM and analog source coding
6.4 Modulation: baseband to passband and back
6.4.1 Double-sideband amplitude modulation
6.5 Quadrature amplitude modulation (QAM)
6.5.1 QAM signal set
6.5.2 QAM baseband modulation and demodulation
6.5.3 QAM: baseband to passband and back
6.5.4 Implementation of QAM
6.6 Signal space and degrees of freedom
6.6.1 Distance and orthogonality
6.7 Carrier and phase recovery in QAM systems
6.7.1 Tracking phase in the presence of noise
6.7.2 Large phase errors
6.8 Summary of modulation and demodulation
6.E Exercises

7 Random processes and noise
7.1 Introduction
7.2 Random processes
7.2.1 Examples of random processes
7.2.2 The mean and covariance of a random process
7.2.3 Additive noise channels
7.3 Gaussian random variables, vectors, and processes
7.3.1 The covariance matrix of a jointly-Gaussian random vector
7.3.2 The probability density of a jointly-Gaussian random vector
7.3.3 Special case of a 2-dimensional zero-mean Gaussian random vector
7.3.4 Z = AW where A is orthogonal
7.3.5 Probability density for Gaussian vectors in terms of principal axes
7.3.6 Fourier transforms for joint densities
7.4 Linear functionals and filters for random processes
7.4.1 Gaussian processes defined over orthonormal expansions
7.4.2 Linear filtering of Gaussian processes
7.4.3 Covariance for linear functionals and filters
7.5 Stationarity and related concepts
7.5.1 Wide-sense stationary (WSS) random processes
7.5.2 Effectively stationary and effectively WSS random processes
7.5.3 Linear functionals for effectively WSS random processes
7.5.4 Linear filters for effectively WSS random processes
7.6 Stationary and WSS processes in the frequency domain
7.7 White Gaussian noise
7.7.1 The sinc expansion as an approximation to WGN
7.7.2 Poisson process noise
7.8 Adding noise to modulated communication
7.8.1 Complex Gaussian random variables and vectors
7.9 Signal-to-noise ratio
7.10 Summary of random processes
7A Appendix: Supplementary topics
7A.1 Properties of covariance matrices
7A.2 The Fourier series expansion of a truncated random process
7A.3 Uncorrelated coefficients in a Fourier series
7A.4 The Karhunen-Loeve expansion
7.E Exercises

8 Detection, coding, and decoding
8.1 Introduction
8.2 Binary detection
8.3 Binary signals in white Gaussian noise
8.3.1 Detection for PAM antipodal signals
8.3.2 Detection for binary non-antipodal signals
8.3.3 Detection for binary real vectors in WGN
8.3.4 Detection for binary complex vectors in WGN
8.3.5 Detection of binary antipodal waveforms in WGN
. . . . . 2618.4 M-ary detection and sequence detection (264)8.4.1 M-ary detection (265)8.4.2 Successive transmissions of QAM signals in WGN . . . . . . . . . . . . . 2668.4.3 Detection with arbitrary modulation schemes . . . . . . . . . . . . . . . . 2688.5 Orthogonal signal sets and simple channel coding (271)8.5.1 Simplex signal sets (271)8.5.2 Bi-orthogonal signal sets (272)8.5.3 Error probability for orthogonal signal sets . . . . . . . . . . . . . . . . . 2738.6 Block Coding (276)8.6.1 Binary orthogonal codes and Hadamard matrices . . . . . . . . . . . . . . 2768.6.2 Reed-Muller codes (278)8.7 The noisy-channel coding theorem (280)8.7.1 Discrete memoryless channels (280)8.7.2 Capacity (282)8.7.3 Converse to the noisy-channel coding theorem . . . . . . . . . . . . . . . . 2838.7.4 noisy-channel coding theorem, forward part . . . . . . . . . . . . . . . . . 2848.7.5 The noisy-channel coding theorem for WGN. . . . . . . . . . . . . . . . . 2878.8 Convolutional codes (288)8.8.1 Decoding of convolutional codes (290)8.8.2 The Viterbi algorithm (291)8.9 Summary (292)8A Appendix:Neyman-Pearson threshold tests (293)8.E Exercises (298)9 Wireless digital communication 3059.1 Introduction (305)9.2 Physical modeling for wireless channels. . . . . . . . . . . . . . . . . . . . . . . . 3089.2.1 Free space, fixed transmitting and receiving antennas . . . . . . . . . . . 309x CONTENTS9.2.2 Free space,moving antenna (311)9.2.3 Moving antenna,reflecting wall (311)9.2.4 Reflection from a ground plane (313)9.2.5 Shadowing (314)9.2.6 Moving antenna,multiple reflectors. . . . . . . . . . . . . . . . . . . . . . 3149.3 Input/output models of wireless channels . . . . . . . . . . . . . . . . . . . . . . 3159.3.1 The system function and impulse response for LTV systems . . . . . . . . 3169.3.2 Doppler spread and coherence time (319)9.3.3 Delay spread,and coherence frequency. . . . . . . . . . . . . . . . . . . . 3219.4 Baseband system functions and impulse responses . . . . . . . . . 
. . . . . . . . 3239.4.1 A discrete-time baseband model (325)9.5 Statistical channel models (328)9.5.1 Passband and baseband noise (330)9.6 Data detection (331)9.6.1 Binary detection inflat Rayleigh fading . . . . . . . . . . . . . . . . . . . 3329.6.2 Non-coherent detection with known channel magnitude . . . . . . . . . . 3349.6.3 Non-coherent detection in flat Rician fading . . . . . . . . . . . . . . . . . 3369.7 Channel measurement (338)9.7.1 The use of probing signals to estimate the channel . . . . . . . . . . . . . 3399.7.2 Rake receivers (343)9.8 Diversity (346)9.9 CDMA;The IS95Standard (349)9.9.1 Voice compression (350)9.9.2 Channel coding and decoding (351)9.9.3 Viterbi decoding for fading channels . . . . . . . . . . . . . . . . . . . . . 3529.9.4 Modulation and demodulation (353)9.9.5 Multiaccess Interference in IS95 (355)9.10Summary of Wireless Communication (357)9A Appendix: Error probability for non-coherent detection . . . . . . . . . . . . . . 3589.E Exercises (360)。
Digital Fundamentals
Summary
Basic Logic Functions: AND is true only if all input conditions are true; OR is true if one or more input conditions are true; NOT indicates the opposite condition.
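These three basic functions can be captured directly as truth-value helpers (a minimal sketch; the function names are ours, not the slide's):

```python
def AND(*inputs):
    # True only if all input conditions are true
    return all(inputs)

def OR(*inputs):
    # True if one or more input conditions are true
    return any(inputs)

def NOT(a):
    # Indicates the opposite condition
    return not a

# Truth-table spot checks
assert AND(True, True) and not AND(True, False)
assert OR(False, True) and not OR(False, False)
assert NOT(False)
```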
Example: in a CD audio system, the CD drive reads digital data (e.g., 10110011101), a digital-to-analog converter produces an analog reproduction of the music audio signal, and a linear amplifier drives the speaker to produce sound waves.
Floyd, Digital Fundamentals, 10th ed
Digital Waveforms: Digital waveforms change between the LOW and HIGH levels and are made up of a series of pulses. A positive-going pulse is one that goes from a normally LOW logic level to a HIGH level and then back again.
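The pulse description above can be sketched as a simple waveform builder (an illustrative helper, not from the text; levels and widths are arbitrary sample counts):

```python
def positive_pulse(low=0, high=1, width=3, baseline=2):
    """Build a positive-going pulse: a normally LOW level that goes
    HIGH for `width` samples and then returns to LOW."""
    return [low] * baseline + [high] * width + [low] * baseline

# A digital waveform as a series of pulses
waveform = positive_pulse() + positive_pulse()
print(waveform)  # [0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0]
```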
Logic levels: a voltage at or above VH(min) is read as HIGH; a voltage between VL(min) and VL(max) is read as LOW; voltages between VL(max) and VH(min) fall in the invalid region and are not interpreted as either level.
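The level bands can be sketched as a classifier. The threshold values below are illustrative TTL-like numbers chosen for the example, not taken from the text; real logic families specify their own VH(min) and VL(max):

```python
# Illustrative thresholds (TTL-like values chosen for this example)
VH_MIN = 2.0   # volts: lowest voltage read as HIGH
VL_MAX = 0.8   # volts: highest voltage read as LOW

def logic_level(v):
    """Map an input voltage to HIGH, LOW, or the invalid region."""
    if v >= VH_MIN:
        return "HIGH"
    if v <= VL_MAX:
        return "LOW"
    return "invalid"

print(logic_level(3.3))  # HIGH
print(logic_level(0.2))  # LOW
print(logic_level(1.4))  # invalid
```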
© 2009 Pearson Education, Upper Saddle River, NJ 07458. All Rights Reserved
An Improved Extended Min-Sum Decoding Algorithm for Non-Binary LDPC Codes Based on Genetic-Algorithm Ideas
Zhou Yaqiang; Dong Qianhui; Li Yibing (College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China)

Abstract: This paper studies the extended min-sum (EMS) decoding algorithm for non-binary low-density parity-check (LDPC) codes and adopts ideas from genetic algorithms to reduce its computational complexity. During decoding, highly reliable variable nodes are selected, and an additional operation is applied to these nodes: a correction parameter is used to enlarge the belief value of the symbol corresponding to the hard-decision position, so that the other belief values shrink relative to it. This speeds up convergence and reduces the number of decoding iterations, ultimately lowering the decoding computation and saving computational resources. Simulation results and complexity analysis show that the improved algorithm reduces the computational load without degrading error-correction performance.

Journal: Applied Science and Technology (应用科技), 2018, 45(1): 56-60.
Keywords: non-binary LDPC codes; channel coding; EMS algorithm; genetic algorithm; error-correcting codes; belief propagation; information theory; Galois field

Low-density parity-check (LDPC) codes are linear block codes defined by sparse parity-check matrices.
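The node-boosting step described in the abstract can be sketched as follows. This is a hedged illustration only: the function and parameter names are ours, and the threshold and correction-factor values are placeholders, not the paper's:

```python
import numpy as np

def boost_reliable_nodes(beliefs, reliability_threshold=0.9, alpha=1.2):
    """Sketch of the abstract's idea: for variable nodes whose largest
    symbol probability exceeds a threshold, multiply that symbol's
    belief by a correction factor alpha and renormalize, which
    relatively shrinks the other beliefs and speeds convergence.

    beliefs: (num_var_nodes, q) array of per-symbol probabilities.
    """
    boosted = beliefs.copy()
    for i, row in enumerate(boosted):
        k = np.argmax(row)                 # hard-decision symbol
        if row[k] >= reliability_threshold:
            row[k] *= alpha                # enlarge the decided symbol's belief
            boosted[i] = row / row.sum()   # renormalize the distribution
    return boosted

b = np.array([[0.95, 0.03, 0.02],          # highly reliable node -> boosted
              [0.40, 0.35, 0.25]])         # unreliable node -> untouched
out = boost_reliable_nodes(b)
```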
On Sequential Monte Carlo Sampling Methods for Bayesian Filtering
methods, see (Akashi et al., 1975), (Handschin et al., 1969), (Handschin, 1970) and (Zaritskii et al., 1975). Possibly owing to the severe computational limitations of the time, these Monte Carlo algorithms were largely neglected until recently. In the late 1980s, massive increases in computational power allowed the rebirth of numerical integration methods for Bayesian filtering (Kitagawa, 1987). Current research has now focused on MC integration methods, which have the great advantage of not being subject to the assumption of linearity or Gaussianity in the model; relevant work includes (Müller, 1992), (West, 1993), (Gordon et al., 1993), (Kong et al., 1994) and (Liu et al., 1998). The main objective of this article is to include in a unified framework many old and more recent algorithms proposed independently in a number of applied science areas. Both (Liu et al., 1998) and (Doucet, 1997; Doucet, 1998) underline the central role of sequential importance sampling in Bayesian filtering. However, contrary to (Liu et al., 1998), which emphasizes the use of hybrid schemes combining elements of importance sampling with Markov chain Monte Carlo (MCMC), we focus here on computationally cheaper alternatives. We also describe how it is possible to improve current existing methods via Rao-Blackwellisation for a useful class of dynamic models. Finally, we show how to extend these methods to compute the prediction and fixed-interval smoothing distributions as well as the likelihood. The paper is organised as follows. In section 2, we briefly review the Bayesian filtering problem, and classical Bayesian importance sampling is proposed for its solution. We then present a sequential version of this method which allows us to obtain a general recursive MC filter: the sequential importance sampling (SIS) filter. Under a criterion of minimum conditional variance of the importance weights, we obtain the optimal importance function for this method.
Unfortunately, for numerous models of applied interest the optimal importance function leads to non-analytic importance weights, and hence we propose several suboptimal distributions and show how to obtain as special cases many of the algorithms presented in the literature. Firstly we consider local linearisation methods of either the state space model
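The SIS recursion described above can be sketched for a toy model. This is a minimal illustration using the prior as the importance function (the "bootstrap" special case); the state-space model and all parameter values are our choices for the sketch, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def sis_filter(y, n_particles=500, sigma_v=1.0, sigma_w=1.0):
    """Minimal SIS sketch for the toy linear-Gaussian model
    x_t = x_{t-1} + v_t,  y_t = x_t + w_t,
    using the prior as the importance function. Returns the filtered
    mean E[x_t | y_1..t] at each step (no resampling, pure SIS)."""
    x = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
    logw = np.zeros(n_particles)                 # log importance weights
    means = []
    for yt in y:
        x = x + rng.normal(0.0, sigma_v, n_particles)  # propagate via prior
        logw += -0.5 * ((yt - x) / sigma_w) ** 2       # likelihood update
        w = np.exp(logw - logw.max())
        w /= w.sum()                                   # normalized weights
        means.append(np.sum(w * x))
    return np.array(means)

est = sis_filter(np.array([0.5, 1.0, 1.5]))
```

In practice the weights of a pure SIS filter degenerate over time, which is exactly what motivates the resampling and hybrid schemes discussed in the text.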
ADC0801S040TS Data Sheet
ADC0801S040
Single 8-bit ADC, up to 40 MHz
Rev. 02 — 18 August 2008, Product data sheet

1. General description
The ADC0801S040 is an 8-bit universal analog-to-digital converter (ADC) for video and general purpose applications. It converts the analog input signal from 2.7 V to 5.5 V into 8-bit binary-coded digital words at a maximum sampling rate of 40 MHz. All digital inputs and outputs are CMOS/Transistor-Transistor Logic (TTL) compatible. A sleep mode allows reduction of the device power consumption to 4 mW.

2. Features
- 8-bit resolution
- Operation between 2.7 V and 5.5 V
- Sampling rate up to 40 MHz
- DC sampling allowed
- High signal-to-noise ratio over a large analog input frequency range (7.3 effective bits at 4.43 MHz full-scale input at fclk = 40 MHz)
- CMOS/TTL compatible digital inputs and outputs
- External reference voltage regulator
- Power dissipation only 30 mW (typical value)
- Low analog input capacitance, no buffer amplifier required
- Sleep mode (4 mW)
- No sample-and-hold circuit required

3. Applications
- Video data digitizing
- Camera
- Camcorder
- Radio communication
- Car alarm system

4. Quick reference data
Table 1. Quick reference data
(VDDA = VDDD = VDDO = 3.3 V; VSSA, VSSD and VSSO shorted together; Vi(a)(p-p) = 1.84 V; CL = 20 pF; Tamb = 0 °C to 70 °C; typical values measured at Tamb = 25 °C unless otherwise specified.)
- VDDA, analog supply voltage: 2.7 min / 3.3 typ / 5.5 max V
- VDDD, digital supply voltage: 2.7 / 3.3 / 5.5 V
- VDDO, output supply voltage: 2.5 / 3.3 / 5.5 V
- dVDD, supply voltage difference: VDDA − VDDD: −0.2 to +0.2 V; VDDD − VDDO: −0.2 to +2.25 V
- IDDA, analog supply current: 4 typ / 6 max mA
- IDDD, digital supply current: 5 typ / 8 max mA
- IDDO, output supply current (fclk = 40 MHz; ramp input; CL = 20 pF): 12 mA max
- INL, integral non-linearity (ramp input; see Figure 6): ±0.5 typ / ±0.75 max LSB
- DNL, differential non-linearity (ramp input; see Figure 7): ±0.25 typ / ±0.5 max LSB
- fclk(max), maximum clock frequency: 40 MHz min
- Ptot, total power dissipation (VDDA = VDDD = VDDO = 3.3 V): 30 typ / 53 max mW

5. Ordering information
Table 2. Ordering information
- ADC0801S040TS: SSOP20, plastic shrink small outline package; 20 leads; body width 4.4 mm; version SOT266-1

6. Block diagram
Fig 1. Block diagram (analog-to-digital converter core with resistor ladder Rlad and reference taps RB, RM, RT; clock driver on CLK and SLEEP; latches and CMOS outputs D0 (LSB) to D7 (MSB); supplies VDDA/VSSA, VDDD/VSSD, VDDO/VSSO)

7. Pinning information
7.1 Pinning
Fig 2. Pin configuration (SSOP20, ADC0801S040TS)
7.2 Pin description
Table 3. Pin description
- CLK (1): clock input
- SLEEP (2): sleep mode input
- VDDD (3): digital supply voltage (2.7 V to 5.5 V)
- VSSD (4): digital ground
- VDDA (5): analog supply voltage (2.7 V to 5.5 V)
- VSSA (6): analog ground
- RB (7): reference voltage BOTTOM input
- RM (8): reference voltage MIDDLE
- VI (9): analog input voltage
- RT (10): reference voltage TOP input
- VSSO (11): output stage ground
- D0 (12): data output; bit 0 (Least Significant Bit (LSB))
- D1 (13): data output; bit 1
- D2 (14): data output; bit 2
- D3 (15): data output; bit 3
- D4 (16): data output; bit 4
- D5 (17): data output; bit 5
- D6 (18): data output; bit 6
- D7 (19): data output; bit 7 (Most Significant Bit (MSB))
- VDDO (20): positive supply voltage for output stage (2.7 V to 5.5 V)

8. Limiting values
Table 4. Limiting values (in accordance with the Absolute Maximum Rating System, IEC 60134)
- VDDA, analog supply voltage [1]: −0.3 to +7.0 V
- VDDD, digital supply voltage [1]: −0.3 to +7.0 V
- VDDO, output supply voltage [1]: −0.3 to +7.0 V
- dVDD, supply voltage difference (VDDA − VDDD; VDDD − VDDO; VDDA − VDDO): −0.1 to +4.0 V
- VI, input voltage (referenced to VSSA): −0.3 to +7.0 V
- Vi(clk)(p-p), peak-to-peak clock input voltage (referenced to VSSD): up to VDDD
- IO, output current: 10 mA max
- Tstg, storage temperature: −55 to +150 °C
- Tamb, ambient temperature: −20 to +75 °C
- Tj, junction temperature: 150 °C max
[1] The supply voltages VDDA, VDDD and VDDO may have any value between −0.3 V and +7.0 V provided that the supply voltage difference dVDD remains as indicated.

9. Thermal characteristics
Table 5. Thermal characteristics
- Rth(j-a), thermal resistance from junction to ambient, in free air: 120 K/W

10. Characteristics
Table 6. Characteristics
(Same conditions as Table 1.)
Supplies: as listed in Table 1 (VDDA, VDDD, VDDO, dVDD, IDDA, IDDD, IDDO, Ptot).
Clock input CLK (referenced to VSSD) [1]:
- VIL, LOW-level input voltage: 0 to 0.3 VDDD
- VIH, HIGH-level input voltage: 0.6 VDDD to VDDD (VDDD <= 3.6 V); 0.7 VDDD to VDDD (VDDD > 3.6 V)
- IIL, LOW-level input current (Vclk = 0.3 VDDD): −10 to +1 uA
- IIH, HIGH-level input current (Vclk = 0.7 VDDD): 5 uA max
- Zi, input impedance (fclk = 40 MHz): 4 kOhm typ
- Ci, input capacitance (fclk = 40 MHz): 3 pF typ
Input SLEEP (referenced to VSSD; see Table 8):
- VIL: 0 to 0.3 VDDD; VIH: 0.6 VDDD to VDDD (VDDD <= 3.6 V), 0.7 VDDD to VDDD (VDDD > 3.6 V)
- IIL (VIL = 0.3 VDDD): −1 uA min; IIH (VIH = 0.7 VDDD): +1 uA max
Analog input VI (referenced to VSSA):
- IIL (VI = VRB): 0 uA typ; IIH (VI = VRT): 9 uA typ
- Zi (fi = 1 MHz): 20 kOhm typ; Ci (fi = 1 MHz): 2 pF typ
Reference voltages for the resistor ladder (see Table 7):
- VRB, voltage on pin RB: 1.1 min / 1.2 typ V
- VRT, voltage on pin RT (VRT <= VDDA): 2.7 min / 3.3 typ / VDDA max V
- Vref(dif), differential reference voltage (VRT − VRB): 1.5 / 2.1 / 2.7 V
- Iref, reference current: 0.95 mA typ
- Rlad, ladder resistance: 2.2 kOhm typ
- TCRlad, ladder resistor temperature coefficient: 4092 mOhm/K typ
- Voffset, offset voltage BOTTOM [2]: 170 mV typ; TOP [2]: 170 mV typ
- Vi(a)(p-p), peak-to-peak analog input voltage [3]: 1.4 min / 1.76 typ / 2.4 max V
Digital outputs D7 to D0 and IR (referenced to VSSD):
- VOL, LOW-level output voltage (IO = 1 mA): 0 to 0.5 V
- VOH, HIGH-level output voltage (IO = −1 mA): VDDO − 0.5 to VDDO
- IOZ, OFF-state output current (0.4 V < VO < VDDO): −20 to +20 uA
Clock timing (see Figure 4) [1]:
- fclk(max), maximum clock frequency: 40 MHz min
- tw(clk)H, HIGH clock pulse width: 9 ns min; tw(clk)L, LOW clock pulse width: 9 ns min
Analog signal processing (fclk = 40 MHz):
- INL (ramp input; see Figure 6): ±0.5 typ / ±0.75 max LSB; DNL (ramp input; see Figure 7): ±0.25 typ / ±0.5 max LSB
- B, bandwidth: 10 MHz (full-scale sine wave [4]); 13 MHz (75 % full-scale sine wave); 20 MHz (50 % full-scale sine wave); 350 MHz (small signal at mid scale, Vi = ±10 LSB at code 128)
- ts(LH), LOW to HIGH settling time (full-scale square wave): 35 ns; ts(HL), HIGH to LOW settling time (full-scale square wave): 35 ns (see Figure 8) [5]
- THD, total harmonic distortion (fi = 4.43 MHz): −50 dB typ (see Figure 9) [6]
- S/N, signal-to-noise ratio without harmonics (fi = 4.43 MHz): 47 dB typ [6]
- ENOB, effective number of bits: 7.8 bits (fi = 300 MHz); 7.3 bits (fi = 4.43 MHz) [6]
- Gdif, differential gain (PAL modulated ramp) [7]: 1.5 % typ

Table 6 notes:
[1] In addition to a good layout of the digital and analog ground, it is recommended that the rise and fall times of the clock are not less than 1 ns.
[2] Analog input voltages producing code 0 up to and including code 255: a) Voffset BOTTOM is the difference between the analog input which produces data equal to 00 and the reference voltage on pin RB (VRB) at Tamb = 25 °C. b) Voffset TOP is the difference between the reference voltage on pin RT (VRT) and the analog input which produces data outputs equal to code 255 at Tamb = 25 °C.
[3] To ensure the optimum linearity performance of such a converter architecture, the lower and upper extremities of the converter reference resistor ladder are connected to pins RB and RT via offset resistors ROB and ROT, as shown in Figure 3. a) The current flowing into the resistor ladder is I = (VRT − VRB) / (ROB + RL + ROT), and the full-scale input range at the converter, to cover codes 0 to 255, is VI = RL × I = [RL / (ROB + RL + ROT)] × (VRT − VRB) = 0.838 × (VRT − VRB). b) Since RL, ROB and ROT have similar behavior with respect to process and temperature variation, the ratio RL / (ROB + RL + ROT) will be kept reasonably constant from device to device. Consequently, variation of the output codes at a given input voltage depends mainly on the difference VRT − VRB and its variation with temperature and supply voltage. When several ADCs are connected in parallel and fed with the same reference source, the matching between them is optimized.
[4] The analog bandwidth is defined as the maximum input sine wave frequency which can be applied to the device.
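As a numerical check of note [3] and of the output-coding table (Table 7), the fixed ladder ratio RL/(ROB + RL + ROT) = 0.838 and the typical reference voltages reproduce the typical full-scale input range, and an idealized transfer function can then map input voltages to output codes. This is a sketch of ideal behavior only; the real converter adds offset and linearity errors:

```python
# Note [3]: full-scale input range from the typical reference voltages
# and the fixed ladder ratio R_L / (R_OB + R_L + R_OT) = 0.838.
V_RT, V_RB = 3.3, 1.2           # typical reference voltages (V)
LADDER_RATIO = 0.838

full_scale = LADDER_RATIO * (V_RT - V_RB)
print(round(full_scale, 2))      # 1.76, the typical V_i(a)(p-p) in Table 6

def adc_code(v, v_bottom=1.37, v_top=3.13, bits=8):
    """Idealized transfer function implied by Table 7: inputs below
    v_bottom underflow to code 0, inputs above v_top overflow to the
    top code, and the span in between maps linearly onto 2**bits codes."""
    top = 2 ** bits - 1
    code = round((v - v_bottom) / (v_top - v_bottom) * top)
    return max(0, min(top, code))

print(adc_code(1.0))             # 0   (underflow)
print(adc_code(3.5))             # 255 (overflow)
```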
No glitches greater than 2 LSB, nor any significant attenuation, is observed in the reconstructed signal.
[5] The analog input settling time is the minimum time required for the input signal to be stabilized after a sharp full-scale input (square wave signal) in order to sample the signal and obtain correct output data.
[6] Effective bits are obtained via a Fast Fourier Transform (FFT) treatment taking 8000 acquisition points per equivalent fundamental period. The calculation takes into account all harmonics and noise up to half of the clock frequency (Nyquist frequency). Conversion to signal-to-noise ratio: S/N = ENOB × 6.02 + 1.76 dB.
[7] Measurement carried out using video analyzer VM700A, where the video analog signal is reconstructed through a DAC.
[8] Output data acquisition: the output data is available after the maximum delay time of td(o).

Table 6. Characteristics (continued)
- phi(dif), differential phase (PAL modulated ramp) [7]: 0.25 deg typ
Timing (fclk = 40 MHz; CL = 20 pF; see Figure 4) [8]:
- td(s), sampling delay time: 5 ns max
- th(o), output hold time: 5 ns min
- td(o), output delay time: 8 min / 12 typ / 15 max ns at VDDO = 4.75 V; 8 / 17 / 20 ns at VDDO = 3.15 V; 8 / 18 / 21 ns at VDDO = 2.7 V
3-state output delay times (see Figure 5):
- tdHZ, active HIGH to float delay time: 14 typ / 18 max ns
- tdZL, float to active LOW delay time: 16 / 20 ns
- tdZH, float to active HIGH delay time: 16 / 20 ns
- tdLZ, active LOW to float delay time: 14 / 18 ns

11. Additional information relating to Table 6
Fig 3. Explanation of Table 6 note [3]: the ladder Rlad sits between offset resistors ROB (to pin RB) and ROT (to pin RT); the ladder current is I = (VRT − VRB) / (ROB + RL + ROT), and the full-scale input range covering codes 0 to 255 is VI = RL × I = [RL / (ROB + RL + ROT)] × (VRT − VRB) = 0.838 × (VRT − VRB).

Table 7. Output coding and input voltage (typical values; referenced to VSSA)
- Underflow (Vi < 1.37 V): 0000 0000
- Code 0 (Vi = 1.37 V): 0000 0000
- Code 1: 0000 0001
- (codes increase one LSB at a time)
- Code 254: 1111 1110
- Code 255 (Vi = 3.13 V): 1111 1111
- Overflow (Vi > 3.13 V): 1111 1111

Table 8. Mode selection
- SLEEP = 1: outputs D7 to D0 high impedance; IDDA + IDDD = 1.2 mA typ
- SLEEP = 0: active; IDDA + IDDD = 9 mA typ

Fig 4. Timing diagram (samples N, N+1, N+2 on CLK; data D0 to D7 valid after output delay td(o); clock pulse widths tw(clk)H and tw(clk)L; sampling delay td(s); output hold time th(o))
Fig 5. Timing diagram and test conditions of the 3-state output delay times (frequency on pin SLEEP = 100 kHz; load 20 pF, 3.3 kOhm)
Fig 6. Typical integral non-linearity (INL) performance (amplitude in LSB versus output code)
Fig 7. Typical differential non-linearity (DNL) performance (amplitude in LSB versus output code)
Fig 8. Analog input settling-time diagram (ts(LH) and ts(HL) between codes 0 and 255)
Fig 9. Typical fast Fourier transform (fclk = 40 MHz; fi = 4.43 MHz; effective bits 7.32; THD = −51.08 dB; harmonic levels: 2nd −68.99 dB, 3rd −51.62 dB, 4th −66.05 dB, 5th −63.23 dB, 6th −72.79 dB)
Figs 10 to 14. Equivalent circuits of the CMOS data outputs (D7 to D0), the VI analog input, the SLEEP 3-state input, the RB, RM and RT inputs, and the CLK input

12. Application information
12.1 Application diagrams
The analog and digital supplies should be separated and decoupled. The external voltage reference generator must be built in such a way that a good supply voltage ripple rejection is achieved with respect to the LSB value. Eventually, the reference ladder voltages can be derived from a well regulated VDDA supply through a resistor bridge and a decoupling capacitor.
Fig 15. Application diagram (100 nF supply decoupling; RB, RM and RT decoupled to VSSA)

13. Package outline
Fig 16. Package outline SOT266-1 (SSOP20: plastic shrink small outline package; 20 leads; body width 4.4 mm; references MO-152 (JEDEC))

14. Revision history
Table 9. Revision history
- ADC0801S040_2, 18 August 2008, product data sheet; supersedes ADC0801S040_1. Modifications: corrections made to the table notes in Figure 1, to Table 3, to a symbol in Table 4, to Table 6 and to Figure 13.
- ADC0801S040_1, 12 June 2008, product data sheet.

15. Legal information
15.1 Data sheet status
[1] Please consult the most recently issued document before initiating or completing a design.
[2] The term 'short data sheet' is explained in the section "Definitions".
[3] The product status of device(s) described in this document may have changed since this document was published and may differ in case of multiple devices. The latest product status information is available on the Internet.
15.2 Definitions
Draft — The document is a draft version only. The content is still under internal review and subject to formal approval, which may result in modifications or additions.
NXP Semiconductors does not give any representations or warranties as to the accuracy or completeness of information included herein and shall have no liability for the consequences of use of such information.
Short data sheet — A short data sheet is an extract from a full data sheet with the same product type number(s) and title. A short data sheet is intended for quick reference only and should not be relied upon to contain detailed and full information. For detailed and full information see the relevant full data sheet, which is available on request via the local NXP Semiconductors sales office. In case of any inconsistency or conflict with the short data sheet, the full data sheet shall prevail.
15.3 Disclaimers
General — Information in this document is believed to be accurate and reliable. However, NXP Semiconductors does not give any representations or warranties, expressed or implied, as to the accuracy or completeness of such information and shall have no liability for the consequences of use of such information.
Right to make changes — NXP Semiconductors reserves the right to make changes to information published in this document, including without limitation specifications and product descriptions, at any time and without notice. This document supersedes and replaces all information supplied prior to the publication hereof.
Suitability for use — NXP Semiconductors products are not designed, authorized or warranted to be suitable for use in medical, military, aircraft, space or life support equipment, nor in applications where failure or malfunction of an NXP Semiconductors product can reasonably be expected to result in personal injury, death or severe property or environmental damage.
NXP Semiconductors accepts no liability for inclusion and/or use of NXP Semiconductors products in such equipment or applications and therefore such inclusion and/or use is at the customer's own risk.
Applications — Applications that are described herein for any of these products are for illustrative purposes only. NXP Semiconductors makes no representation or warranty that such applications will be suitable for the specified use without further testing or modification.
Limiting values — Stress above one or more limiting values (as defined in the Absolute Maximum Ratings System of IEC 60134) may cause permanent damage to the device. Limiting values are stress ratings only and operation of the device at these or any other conditions above those given in the Characteristics sections of this document is not implied. Exposure to limiting values for extended periods may affect device reliability.
Terms and conditions of sale — NXP Semiconductors products are sold subject to the general terms and conditions of commercial sale, as published at /profile/terms, including those pertaining to warranty, intellectual property rights infringement and limitation of liability, unless explicitly otherwise agreed to in writing by NXP Semiconductors.
In case of any inconsistency or conflict between information in this document and such terms and conditions, the latter will prevail.
No offer to sell or license — Nothing in this document may be interpreted or construed as an offer to sell products that is open for acceptance or the grant, conveyance or implication of any license under any copyrights, patents or other industrial or intellectual property rights.
Quick reference data — The Quick reference data is an extract of the product data given in the Limiting values and Characteristics sections of this document, and as such is not complete, exhaustive or legally binding.
15.4 Trademarks
Notice: All referenced brands, product names, service names and trademarks are the property of their respective owners.
16. Contact information
For more information, please visit the NXP website. For sales office addresses, please send an email to: salesaddresses@

Data sheet status (from section 15.1):
- Objective [short] data sheet — Development — This document contains data from the objective specification for product development.
- Preliminary [short] data sheet — Qualification — This document contains data from the preliminary specification.
- Product [short] data sheet — Production — This document contains the product specification.

17. Contents
1 General description 1; 2 Features 1; 3 Applications 1; 4 Quick reference data 2; 5 Ordering information 2; 6 Block diagram 3; 7 Pinning information 4; 7.1 Pinning 4; 7.2 Pin description 4; 8 Limiting values 5; 9 Thermal characteristics 5; 10 Characteristics 5; 11 Additional information relating to Table 6 9; 12 Application information 15; 12.1 Application diagrams 15; 13 Package outline 16; 14 Revision history 17; 15 Legal information 18; 15.1 Data sheet status 18; 15.2 Definitions 18; 15.3 Disclaimers 18; 15.4 Trademarks 18; 16 Contact information 18; 17 Contents 19

Please be aware that important notices concerning this document and the product(s) described herein have been included in the section 'Legal information'.
© NXP B.V. 2008. All rights reserved. Date of release: 18 August 2008.
ADC08D1500 High Performance, Low Power, Dual 8-Bit, 1.5 GSPS A/D Converter

General Description
Note: This product is currently in development. All specifications are design targets and are subject to change.
The ADC08D1500 is a dual, low-power, high-performance CMOS analog-to-digital converter that digitizes signals to 8-bit resolution at sampling rates up to 1.7 GSPS. Consuming a typical 1.9 Watts at 1.5 GSPS from a single 1.9 Volt supply, this device is guaranteed to have no missing codes over the full operating temperature range. The unique folding and interpolating architecture, the fully differential comparator design, the innovative design of the internal sample-and-hold amplifier and the self-calibration scheme enable a very flat response of all dynamic parameters beyond Nyquist, producing a high 7.25 ENOB with a 748 MHz input signal and a 1.5 GHz sample rate while providing a 10^-18 B.E.R. Output formatting is offset binary and the LVDS digital outputs are compliant with IEEE 1596.3-1996, with the exception of an adjustable common-mode voltage between 0.8 V and 1.2 V.
Each converter has a 1:2 demultiplexer that feeds two LVDS buses and reduces the output data rate on each bus to half the sampling rate. The two converters can be interleaved and used as a single 3 GSPS ADC. The converter typically consumes less than 3.5 mW in the Power Down Mode and is available in a 128-lead, thermally enhanced exposed-pad LQFP, and operates over the industrial (-40˚C ≤ TA ≤ +85˚C) temperature range.

Features
• Internal Sample-and-Hold
• Single +1.9 V ±0.1 V Operation
• Choice of SDR or DDR output clocking
• Interleave Mode for 2x Sampling Rate
• Multiple ADC Synchronization Capability
• Guaranteed No Missing Codes
• Serial Interface for Extended Control
• Fine Adjustment of Input Full-Scale Range and Offset
• Duty Cycle Corrected Sample Clock

Key Specifications
• Resolution: 8 Bits
• Max Conversion Rate: 1.5 GSPS (min)
• Bit Error Rate: 10^-18 (typ)
• ENOB @ 748 MHz Input: 7.25 Bits (typ)
• DNL: ±0.15 LSB (typ)
• Power Consumption — Operating: 1.9 W (typ); Power Down Mode: 3.5 mW (typ)

Applications
• Direct RF Down Conversion
• Digital Oscilloscopes
• Satellite Set-top boxes
• Communications Systems
• Test Instrumentation

[Block Diagram figure omitted]
PRELIMINARY, June 2005. © 2005 National Semiconductor Corporation.

Ordering Information (Industrial Temperature Range, -40˚C < TA < +85˚C)
• ADC08D1500CIYB: 128-Pin Exposed Pad LQFP (NS Package)
• ADC08D1500EVAL: Evaluation Board

Pin Configuration
*The exposed pad on the back of the package must be soldered to the ground plane to ensure rated performance.

Pin Descriptions and Equivalent Circuits
Pin 3, OutV/SCLK — Output Voltage Amplitude and Serial Interface Clock. Tie this pin high for normal differential DCLK and data amplitude. Ground this pin for a reduced differential output amplitude and reduced power consumption (Section 1.1.6). When the extended control mode is enabled, this pin functions as the SCLK input which clocks in the serial data. See Section 1.2 for details on the extended control mode and Section 1.3 for a description of the serial interface.
Pin 4, OutEdge/DDR/SDATA — DCLK Edge Select, Double Data Rate Enable and Serial Data Input. This input sets the output edge of DCLK+ at which the output data transitions (Section 1.1.5.2). When this pin is floating or connected to 1/2 the supply voltage, DDR clocking is enabled. When the extended control mode is enabled, this pin functions as the SDATA input. See Sections 1.2 and 1.3.
Pin 15, DCLK_RST — DCLK Reset. A positive pulse on this pin is used to reset and synchronize the DCLK outputs of multiple converters. See Section 1.5.
Pins 26, 29, PD, PDQ — Power Down Pins. A logic high on the PD pin puts the entire device into the Power Down Mode. A logic high on the PDQ pin puts only the "Q" ADC into the Power Down Mode.
Pin 30, CAL — Calibration Cycle Initiate. A minimum of 80 input clock cycles low followed by a minimum of 80 input clock cycles high on this pin initiates the self-calibration sequence. See Section 2.4.2 for an overview of self-calibration and Section 2.4.2.2 for a description of on-command calibration.
Pin 14, FSR/ECE — Full Scale Range Select and Extended Control Enable. In non-extended control mode, a logic low on this pin sets the full-scale differential input range to 650 mVP-P; a logic high sets it to 870 mVP-P (Section 1.1.4). To enable the extended control mode, whereby the serial interface and control registers are employed, allow this pin to float or connect it to a voltage equal to VA/2. See Section 1.2.
Pin 127, CalDly/DES/SCS — Calibration Delay, Dual Edge Sampling and Serial Interface Chip Select. With a logic high or low on pin 14, this pin functions as Calibration Delay and sets the number of input clock cycles after power-up before calibration begins (Section 1.1.1). With pin 14 floating, this pin acts as the enable pin for the serial interface input and the CalDly value becomes "0" (short delay with no provision for a long power-up calibration delay). When this pin is floating or connected to a voltage equal to VA/2, DES (Dual Edge Sampling) mode is selected, where the "I" input is sampled at twice the input clock rate and the "Q" input is ignored. See Section 1.1.5.1.
Pins 18, 19, CLK+, CLK− — LVDS clock input pins for the ADC. The differential clock signal must be a.c. coupled to these pins. The input signal is sampled on the falling edge of CLK+. See Section 1.1.2 for a description of acquiring the input and Section 2.3 for an overview of the clock inputs.
Pins 11, 10, 22, 23, VIN I+, VIN I−, VIN Q+, VIN Q− — Analog signal inputs to the ADC. The differential full-scale input range is 650 mVP-P when the FSR pin is low, or 870 mVP-P when the FSR pin is high.
Pin 7, VCMO — Common Mode Voltage. The voltage output at this pin is required to be the common-mode input voltage at VIN+ and VIN− when d.c. coupling is used. This pin should be grounded when a.c. coupling is used at the analog inputs. This pin is capable of sourcing or sinking 100 µA. See Section 2.2.
Pin 31, VBG — Bandgap output voltage, capable of 100 µA source/sink.
Pin 126, CalRun — Calibration Running indication. This pin is at a logic high while calibration is running.
Pin 32, REXT — External bias resistor connection. Nominal value is 3.3 kΩ (±0.1%) to ground. See Section 1.1.1.
Pins 34, 35, Tdiode_P, Tdiode_N — Temperature Diode Positive (Anode) and Negative (Cathode) for die temperature measurements. See Section 2.6.2.
Pins 83/78, 84/77, 85/76, 86/75, 89/72, 90/71, 91/70, 92/69, 93/68, 94/67, 95/66, 96/65, 100/61, 101/60, 102/59, 103/58, DI7−/DQ7− through DI0+/DQ0+ — I and Q channel LVDS data outputs that are not delayed in the output path. Compared with the DId and DQd outputs, these outputs represent the later time samples. These outputs should always be terminated with a 100 Ω differential resistor.
Pins 104/57, 105/56, 106/55, 107/54, 111/50, 112/49, 113/48, 114/47, 115/46, 116/45, 117/44, 118/43, 122/39, 123/38, 124/37, 125/36, DId7−/DQd7− through DId0+/DQd0+ — I and Q channel LVDS data outputs that are delayed by one CLK cycle in the output path. Compared with the DI/DQ outputs, these outputs represent the earlier time sample. These outputs should always be terminated with a 100 Ω differential resistor.
Pins 79, 80, OR+, OR− — Out Of Range output. A differential high at these pins indicates that the differential input is out of range (outside the range ±325 mV or ±435 mV as defined by the FSR pin).
Pins 82, 81, DCLK+, DCLK− — Differential clock outputs used to latch the output data. Delayed and non-delayed data outputs are supplied synchronous to this signal. This signal is at 1/2 the input clock rate in SDR mode and at 1/4 the input clock rate in DDR mode.
Pins 2, 5, 8, 13, 16, 17, 20, 25, 28, 33, 128, VA — Analog power supply pins. Bypass these pins to ground.
Pins 40, 51, 62, 73, 88, 99, 110, 121, VDR — Output driver power supply pins. Bypass these pins to DR GND.
Pins 1, 6, 9, 12, 21, 24, 27, 41, GND — Ground return for VA.
Pins 42, 53, 64, 74, 87, 97, 108, 119, DR GND — Ground return for VDR.
Pins 52, 63, 98, 109, 120, NC — No connection. Make no connection to these pins.

Absolute Maximum Ratings (Notes 1, 2)
If Military/Aerospace specified devices are required, please contact the National Semiconductor Sales Office/Distributors for availability and specifications.
• Supply Voltage (VA, VDR): 2.2 V
• Voltage on Any Input Pin: −0.15 V to (VA + 0.15 V)
• Ground Difference |GND − DR GND|: 0 V to 100 mV
• Input Current at Any Pin (Note 3): ±25 mA
• Package Input Current (Note 3): ±50 mA
• Power Dissipation at TA ≤ 85˚C: 2.0 W
• ESD Susceptibility (Note 4): Human Body Model 2500 V; Machine Model 250 V
• Soldering Temperature, Infrared, 10 seconds (Note 5; applies to standard plated package only): 235˚C
• Storage Temperature: −65˚C to +150˚C

Operating Ratings (Notes 1, 2)
• Ambient Temperature Range: −40˚C ≤ TA ≤ +85˚C
• Supply Voltage (VA): +1.8 V to +2.0 V
• Driver Supply Voltage (VDR): +1.8 V to VA
• Analog Input Common Mode Voltage: VCMO ± 50 mV
• VIN+, VIN− Voltage Range (Maintaining Common Mode): 200 mV to VA
• Ground Difference (|GND − DR GND|): 0 V
• CLK Pins Voltage Range: 0 V to VA
• Differential CLK Amplitude: 0.4 VP-P to 2.0 VP-P

Package Thermal Resistance (128-Lead Exposed Pad LQFP): θJA 26˚C/W; θJC (top of package) 10˚C/W; θJ-PAD (thermal pad) 2.8˚C/W

Converter Electrical Characteristics
NOTE: This product is currently in development and the parameters specified in this section are DESIGN TARGETS.
The specifications in this section cannot be guaranteed until device characterization has taken place.
The following specifications apply after calibration for VA = VDR = +1.9 V DC, OutV = 1.9 V, VIN FSR (a.c. coupled) = differential 870 mVP-P, CL = 10 pF, differential a.c.-coupled sinewave input clock, fCLK = 1.5 GHz at 0.5 VP-P with 50% duty cycle, VBG = floating, non-extended control mode, SDR mode, REXT = 3300 Ω ±0.1%, analog signal source impedance = 100 Ω differential. Boldface limits apply for TA = TMIN to TMAX; all other limits TA = 25˚C, unless otherwise noted. (Notes 6, 7.) Entries below are given as Symbol, Parameter, Conditions: Typical (Note 8), Limits (Note 8), Units.

STATIC CONVERTER CHARACTERISTICS
• INL, Integral Non-Linearity (best fit), DC coupled 1 MHz sine wave overranged: ±0.3 typ, ±TBD LSB (max)
• DNL, Differential Non-Linearity, DC coupled 1 MHz sine wave overranged: ±0.15 typ, ±TBD LSB (max)
• Resolution with No Missing Codes: 8 Bits
• VOFF, Offset Error: −0.45 typ; −TBD LSB (min), TBD LSB (max)
• VOFF_ADJ, Input Offset Adjustment Range, Extended Control Mode: ±45 mV
• PFSE, Positive Full-Scale Error (Note 9): −0.6 typ, ±TBD mV (max)
• NFSE, Negative Full-Scale Error (Note 9): −1.31 typ, ±TBD mV (max)
• FS_ADJ, Full-Scale Adjustment Range, Extended Control Mode: ±20 typ, ±15 %FS

NORMAL MODE (Non DES) DYNAMIC CONVERTER CHARACTERISTICS
• FPBW, Full Power Bandwidth, Normal Mode (non DES): 1.7 GHz
• B.E.R., Bit Error Rate: 10^-18 Error/Sample
• Gain Flatness: d.c. to 500 MHz ±0.5 dBFS; d.c. to 1 GHz ±1.0 dBFS
• ENOB, Effective Number of Bits, VIN = FSR − 0.5 dB, fIN = 248/498/748 MHz: 7.4/7.4/7.25 typ, TBD Bits (min)
• SINAD, Signal-to-Noise Plus Distortion Ratio, fIN = 248/498/748 MHz: 46.3/46.3/45.4 typ, TBD dB (min)
• SNR, Signal-to-Noise Ratio, fIN = 248/498/748 MHz: 47.1/47.1/46.3 typ, TBD dB (min)
• THD, Total Harmonic Distortion, fIN = 248/498/748 MHz: −55/−55/−53 typ, −TBD dB (max)
• 2nd Harm, Second Harmonic Distortion, fIN = 248/498/748 MHz: −60 dB each
• 3rd Harm, Third Harmonic Distortion, fIN = 248/498/748 MHz: −65 dB each
• SFDR, Spurious-Free Dynamic Range, fIN = 248/498/748 MHz: 55/55/53 typ, TBD dB (min)
• IMD, Intermodulation Distortion, fIN1 = 321 MHz, fIN2 = 326 MHz, VIN = FSR − 7 dB: −50 dB
• Out of Range Output Code (in addition to OR output high): (VIN+) − (VIN−) > +Full Scale: 255; (VIN+) − (VIN−) < −Full Scale: 0

INTERLEAVE MODE (DES Pin 127 = Float) DYNAMIC CONVERTER CHARACTERISTICS
• FPBW (DES), Full Power Bandwidth, Dual Edge Sampling Mode: 900 MHz
• ENOB, VIN = FSR − 0.5 dB, fIN = 498/748 MHz: 7.3/7.2 typ, TBD Bits (min)
• SINAD, fIN = 498/748 MHz: 46/45 typ, TBD dB (min)
• SNR, fIN = 498/748 MHz: 46.4/45.4 typ, TBD dB (min)
• THD, fIN = 498/748 MHz: −58/−57 typ, −TBD dB (min)
• 2nd Harm, fIN = 498/748 MHz: −64/−62 dB
• 3rd Harm, fIN = 498/748 MHz: −69/−69 dB
• SFDR, fIN = 498/748 MHz: 57/57 typ, TBD dB (min)

ANALOG INPUT AND REFERENCE CHARACTERISTICS
• VIN, Full Scale Analog Differential Input Range, FSR pin 14 low: 650 typ, 570 mVP-P (min), 730 mVP-P (max); FSR pin 14 high: 870 typ, 790 mVP-P (min), 950 mVP-P (max)
• VCMI, Analog Input Common Mode Voltage: VCMO typ; VCMO − 50 mV (min), VCMO + 50 mV (max)
• CIN, Analog Input Capacitance, normal operation (Notes 10, 11): differential 0.02 pF; each input pin to ground 1.6 pF. DES mode: differential 0.08 pF; each input pin to ground 2.2 pF
• RIN, Differential Input Resistance: 100 typ, 94 Ω (min), 106 Ω (max)

ANALOG OUTPUT CHARACTERISTICS
• VCMO, Common Mode Output Voltage: 1.26 typ, 0.95 V (min), 1.45 V (max)
• VCMO_LVL, VCMO input threshold to set DC coupling mode: 0.60 V at VA = 1.8 V; 0.66 V at VA = 2.0 V
• TC VCMO, Common Mode Output Voltage Temperature Coefficient, TA = −40˚C to +85˚C: 118 ppm/˚C
• CLOAD VCMO, maximum VCMO load capacitance: 80 pF
• VBG, Bandgap Reference Output Voltage, IBG = ±100 µA: 1.26 typ, 1.20 V (min), 1.33 V (max)
• TC VBG, Bandgap Reference Voltage Temperature Coefficient, TA = −40˚C to +85˚C, IBG = ±100 µA: 28 ppm/˚C
• CLOAD VBG, maximum Bandgap Reference load capacitance: 80 pF

TEMPERATURE DIODE CHARACTERISTICS
• ΔVBE, Temperature Diode Voltage, 192 µA vs. 12 µA: 71.23 mV at TJ = 25˚C; 85.54 mV at TJ = 85˚C

CHANNEL-TO-CHANNEL CHARACTERISTICS
• Offset Match: 1 LSB
• Positive Full-Scale Match, zero offset selected in Control Register: 1 LSB
• Negative Full-Scale Match, zero offset selected in Control Register: 1 LSB
• Phase Matching (I, Q), fIN = 1.0 GHz: <1 Degree
• X-TALK, Crosstalk from I (aggressor) to Q (victim) channel, aggressor = 867 MHz F.S., victim = 100 MHz F.S.: −71 dB
• X-TALK, Crosstalk from Q (aggressor) to I (victim) channel, aggressor = 867 MHz F.S., victim = 100 MHz F.S.: −71 dB

CLOCK INPUT CHARACTERISTICS
• VID, Differential Clock Input Level, sine wave clock: 0.6 typ, 0.4 VP-P (min), 2.0 VP-P (max); square wave clock: 0.6 typ, 0.4 VP-P (min), 2.0 VP-P (max)
• II, Input Current, VIN = 0 or VIN = VA: ±1 µA
• CIN, Input Capacitance (Notes 10, 11): differential 0.02 pF; each input to ground 1.5 pF

DIGITAL CONTROL PIN CHARACTERISTICS
• VIH, Logic High Input Voltage (Note 12): 0.85 × VA V (min)
• VIL, Logic Low Input Voltage (Note 12): 0.15 × VA V (max)
• CIN, Input Capacitance (Notes 11, 13), each input to ground: 1.2 pF

DIGITAL OUTPUT CHARACTERISTICS
• VOD, LVDS Differential Output Voltage, measured differentially, OutV = VA, VBG = floating (Note 15): 710 typ, 400 mVP-P (min), 920 mVP-P (max); OutV = GND, VBG = floating (Note 15): 510 typ, 280 mVP-P (min), 720 mVP-P (max)
• ΔVO DIFF, Change in LVDS Output Swing Between Logic Levels: ±1 mV
• VOS, Output Offset Voltage (see Figure 1): 800 mV with VBG = floating; 1200 mV with VBG = VA (Note 15)
• ΔVOS, Output Offset Voltage Change Between Logic Levels: ±1 mV
• IOS, Output Short Circuit Current, Output+ and Output− connected to 0.8 V: ±4 mA
• ZO, Differential Output Impedance: 100 Ohms
• VOH, CalRun H level output, IOH = −400 µA (Note 12): 1.65 typ, 1.5 V (min)
• VOL, CalRun L level output, IOH = 400 µA (Note 12): 0.15 typ, 0.3 V (max)

POWER SUPPLY CHARACTERISTICS
• IA, Analog Supply Current: PD = PDQ = Low: 660 typ, TBD mA (max); PD = Low, PDQ = High: 430 typ, TBD mA (max); PD = PDQ = High: 1.8 mA
• IDR, Output Driver Supply Current: PD = PDQ = Low: 200 typ, TBD mA (max); PD = Low, PDQ = High: 112 typ, TBD mA (max); PD = PDQ = High: 0.012 mA
• PD, Power Consumption: PD = PDQ = Low: 1.9 typ, TBD W (max); PD = Low, PDQ = High: 1.2 typ, TBD W (max); PD = PDQ = High: 3.5 mW
• PSRR1, D.C. Power Supply Rejection Ratio, change in Full Scale Error with change in VA from 1.8 V to 2.0 V: 30 dB
• PSRR2, A.C. Power Supply Rejection Ratio, 248 MHz, 50 mVP-P riding on VA: 51 dB

AC ELECTRICAL CHARACTERISTICS
• fCLK1, Maximum Input Clock Frequency, Normal Mode (non DES) or DES Mode: 1.7 typ, 1.5 GHz (min)
• fCLK2, Minimum Input Clock Frequency: Normal Mode (non DES) 200 MHz; DES Mode 500 MHz
• Input Clock Duty Cycle, 200 MHz ≤ input clock frequency ≤ 1.5 GHz, Normal Mode (Note 12): 50 typ, 20% (min), 80% (max); 500 MHz ≤ input clock frequency ≤ 1.5 GHz, DES Mode (Note 12): 50 typ, 20% (min), 80% (max)
• tCL, Input Clock Low Time (Note 11): 333 typ, 133 ps (min)
• tCH, Input Clock High Time (Note 11): 333 typ, 133 ps (min)
• DCLK Duty Cycle (Note 11): 50 typ, 45% (min), 55% (max)
• tRS, Reset Setup Time (Note 11): 150 ps
• tRH, Reset Hold Time (Note 11): 250 ps
• tSD, Synchronizing Edge to DCLK Output Delay: 3.5 ns at fCLKIN = 1.5 GHz; 3.85 ns at fCLKIN = 200 MHz
• tRPW, Reset Pulse Width (Note 11): 4 Clock Cycles (min)
• tLHT, Differential Low-to-High Transition Time, 10% to 90%, CL = 2.5 pF: 250 ps
• tHLT, Differential High-to-Low Transition Time, 10% to 90%, CL = 2.5 pF: 250 ps
• tOSK, DCLK to Data Output Skew, 50% of DCLK transition to 50% of data transition, SDR Mode and DDR Mode, 0˚ DCLK (Note 11): ±50 ps (max)
• tSU, Data to DCLK Set-Up Time, DDR Mode, 90˚ DCLK (Note 11): 667 ps
• tH, DCLK to Data Hold Time, DDR Mode, 90˚ DCLK (Note 11): 667 ps
• tAD, Sampling (Aperture) Delay, input CLK+ fall to acquisition of data: 1.3 ns
• tAJ, Aperture Jitter: 0.4 ps rms
• tOD, Input Clock to Data Output Delay (in addition to Pipeline Delay), 50% of input clock transition to 50% of data transition: 3.1 ns
• Pipeline Delay (Latency) (Notes 11, 14): DI outputs 13 input clock cycles; DId outputs 14; DQ outputs 13 (Normal Mode), 13.5 (DES Mode); DQd outputs 14 (Normal Mode), 14.5 (DES Mode)
• Over Range Recovery Time, differential VIN step from ±1.2 V to 0 V to get accurate conversion: 1 Input Clock Cycle
• tWU, PD low to Rated Accuracy Conversion (Wake-Up Time): 500 ns
• fSCLK, Serial Clock Frequency (Note 11): 100 MHz
• tSSU, Data to Serial Clock Setup Time (Note 11): 2.5 ns (min)
• tSH, Data to Serial Clock Hold Time (Note 11): 1 ns (min)
• Serial Clock Low Time: 4 ns (min); Serial Clock High Time: 4 ns (min)
• tCAL, Calibration Cycle Time: 1.4 × 10^5 Clock Cycles
• tCAL_L, CAL Pin Low Time, see Figure 9 (Note 11): 80 Clock Cycles (min)
• tCAL_H, CAL Pin High Time, see Figure 9 (Note 11): 80 Clock Cycles (min)
• tCalDly, Calibration delay determined by pin 127, see Section 1.1.1, Figure 9 (Note 11): 2^25 Clock Cycles (min), 2^31 Clock Cycles (max)

Note 1: Absolute Maximum Ratings indicate limits beyond which damage to the device may occur. There is no guarantee of operation at the Absolute Maximum Ratings. Operating Ratings indicate conditions for which the device is functional, but do not guarantee specific performance limits. For guaranteed specifications and test conditions, see the Electrical Characteristics. The guaranteed specifications apply only for the test conditions listed. Some performance characteristics may degrade when the device is not operated under the listed test conditions.
Note 2: All voltages are measured with respect to GND = DR GND = 0 V, unless otherwise specified.
Note 3: When the input voltage at any pin exceeds the power supply limits (that is, less than GND or greater than VA), the current at that pin should be limited to 25 mA. The 50 mA maximum package input current rating limits the number of pins that can safely exceed the power supplies with an input current of 25 mA to two. This limit is not placed upon the power, ground and digital output pins.
Note 4: Human body model is a 100 pF capacitor discharged through a 1.5 kΩ resistor. Machine model is 220 pF discharged through zero Ohms.
Note 5: See AN-450, "Surface Mounting Methods and Their Effect on Product Reliability".
Note 6: The analog inputs are protected as shown in the equivalent-circuit figure. Input voltage magnitudes beyond the Absolute Maximum Ratings may damage this device.
Note 7: To guarantee accuracy, it is required that VA and VDR be well bypassed. Each supply pin must be decoupled with separate bypass capacitors. Additionally, achieving rated performance requires that the backside exposed pad be well grounded.
Note 8: Typical figures are at TA = 25˚C, and represent most likely parametric norms. Test limits are guaranteed to National's AOQL (Average Outgoing Quality Level).
Note 9: Calculation of Full-Scale Error for this device assumes that the actual reference voltage is exactly its nominal value. Full-Scale Error for this device, therefore, is a combination of Full-Scale Error and Reference Voltage Error. See Figure 2. For the relationship between Gain Error and Full-Scale Error, see Specification Definitions for Gain Error.
Note 10: The analog and clock input capacitances are die capacitances only. Additional package capacitances of 0.65 pF differential and 0.95 pF each pin to ground are isolated from the die capacitances by lead and bond wire inductances.
Note 11: This parameter is guaranteed by design and is not tested in production.
Note 12: This parameter is guaranteed by design and/or characterization and is not tested in production.
VIRTUTECH WHITE PAPER
VIRTUTECH DML – DEVICE MODELING LANGUAGE
JUNE 2008
JAKOB ENGBLOM

INTRODUCTION
Virtual software development is a development methodology where the actual hardware of a system is replaced with a virtual model running on a workstation or PC. The virtual hardware can run the same binary software as the physical hardware, fast enough to be used as an alternative to physical hardware for software development. The virtual hardware provides additional benefits: better debugging facilities, checkpointing and restart at any point, superior convenience and stability, access to the target long before prototype hardware exists so that software development can start early, and the ability to test faults and boundary cases with complete control and precision.
The key to realizing the benefits of virtualized software development for a particular application is the ability to quickly develop high-performance virtual systems. This is the task of system modeling, and this Virtutech white paper discusses how system modeling is supported by the Virtutech-developed device modeling language known as DML.
At its core, Simics is a very fast transaction-level model (TLM) simulator. Simics features an efficient simulation infrastructure that has been honed by active use for more than ten years, very fast processor simulators, optimized target memory handling, and a proven API for device modeling. All Simics models are transaction-level, in all their interfaces.
DML is a language for quickly creating fast functional models of hardware devices, created explicitly to address the productivity issue for transaction-level virtual platforms.

WHY DML?
Creating transaction-level virtual platform models is fundamentally a programming task. Today, most of this work is performed in sequential programming languages like C, C++, Python, Java, and SystemC. These languages do not provide constructs that really correspond to the concepts involved in creating transaction-level models of hardware.
Features that are missing and have to be recreated by each programmer include memory-map decoding, bit-field manipulation, interpretation of network packets and other packed structures, working with non-native endianness, and parallelism between devices.
This lack of explicit language support for the TLM domain leads to suboptimal programmer productivity and carries a large risk of functional mistakes and performance-degrading modeling mistakes.
Further hampering productivity, a large part of the model code in current simulators is interfacing code towards the simulation framework itself. Such code is voluminous and time-consuming to write but adds no real value to the final models. The result is code where it is very easy to make mistakes, both functional and performance-wise, and that takes a significant amount of time to write.
Virtutech has been building TLM virtual platforms since 1991, and we recognized this problem many years ago. In 2004 we started to use a language called DML, the Device Modeling Language, for internal development of models. In 2005, DML was released to Virtutech customers. It has since undergone two major revisions and is currently at version 1.2. It has proven a real boon to modeling productivity for all users, both internal and external to Virtutech.

WHEN DML?
A system model in Simics consists of four broad classes of hardware models:
• Processor cores – the CPUs actually running processor instructions. For example PPC 464, MPC e500, Core 2 Duo, MIPS 5Kc, or ARM9.
• Interconnects – networks and buses connecting devices, machines, boards, and cabinets together. For example serial, Ethernet, I2C, PCI, SCSI, USB, or MIL-STD-1553.
• Memory – RAM, ROM, EEPROM, FLASH, and other types of memory devices that store large amounts of code or data.
• Devices – anything else: all the peripheral units that move data between machines or do work that is not instruction processing.
Timers, interrupt controllers, ADCs, DACs, network interfaces, I2C controllers, serial ports, LED drivers, displays, media accelerators, pattern matchers, table lookup engines, and memory controllers are just some examples of devices.
In general, most of the work in creating a new system model is spent modeling devices, and they are the most numerous and least standardized of the hardware components. DML is designed to help in this task, as device modeling is the task that experience tells us could benefit the most from a better programming system.
Processor and memory models are highly reusable, as the number of varieties is comparatively low. They also need significant detail work in order to ensure simulation performance. In Simics, processor models are either provided by Virtutech, or written in any language and plugged into Simics using the Simics processor API (available in Simics 4.0 and later versions). Interconnects are usually standardized and reusable across simulated systems. They tend to be quite complex when implemented to support distributed and multithreaded simulation, and the implementation of one interconnect can be quite different from another. For this reason, interconnects in Simics are usually programmed in C or C++, since there is less commonality to be leveraged into a custom language.
Thus, devices form the bulk of the modeling effort, and the work is repetitive enough to warrant creating a domain-specific language to increase productivity and simplify programming.

DEVICE MODELING IN SIMICS
A hardware device model essentially has four interfaces, as illustrated in Figure 1:
• Programming register map.
• Communication networks and links.
• Tightly-coupled devices.
• Simulator core.
The memory-mapped programming register map is where the device driver software writes and reads registers in the device to control its behavior. This is often known as the "front end" of a device model.
It can be as simple as a few bytes or contain many thousands of registers of varying size.
Communication buses and networks are where the device model communicates with the world external to the chip it is contained in. Typical examples are serial lines and Ethernet networks, but it could also be something as simple as I2C or as complex as a model of a plant being controlled. In Simics, most such interfaces are implemented with an explicit network or communications link. This makes it easier to support multithreaded simulation, and also ensures that all input and output to a device model can be recorded and replayed.
Devices can also be tightly coupled to each other. Typically, devices within the same chip need to access specific functions of other devices that are not really suitable to model as explicit communications networks. For example, a multiprocessor system controller will need to route interrupts from devices to processors and pass interrupts between processors in the system. Network-processing accelerators could have direct data connections to the network interfaces of an SoC, where network frames go directly without touching system memory.
The interface to the simulator core is used to drive time forward and to perform housekeeping and infrastructure tasks. In Simics, this includes checkpointing, support for attributes, reverse execution, logging, posting and reacting to events and haps, and general access to the external world. The part of the model that interacts with the simulator core is also the part that ties the activity on the other interfaces together. Note that a large portion of the kernel interface code is automatically generated for DML models.
If you want to know more about the general principles of device modeling in Simics and the applicable methodology, please see our white paper on modeling.
This white paper is focused on the specifics of the DML language.

INTRODUCING DML
DML is a textual language that provides a way to write transaction-level device models with less code and fewer mistakes than plain C/C++ and a simulator API. DML models are much more concise than corresponding C-language models, and much easier to write, read, understand, and maintain. DML lets users easily express complex register maps for devices, including bit fields and sparse register maps.
DML also explicitly supports an iterative development style for device models by providing default implementations for many device aspects and strong support for marking unimplemented aspects. DML supports and encourages inline documentation in a model, and this documentation is extracted and used to help Simics provide a better end-user interface for a model. Eclipse and Emacs editing modes specific to DML are provided with Simics to offer a custom smart editing environment.

Compilation Process
DML does not compile directly to binary models; rather, DML is compiled to C code that is then compiled by the native C compiler. The DML compiler also generates the plumbing code needed to connect a particular model to Simics, reducing the Simics API knowledge needed to create models and keeping the code relatively free from explicit Simics API calls. This includes automatic support for checkpointing, attributes, reverse execution, logging, and other Simics features.
Figure 2 illustrates the steps of the DML compilation process. Note that the entire process is transparent to DML users, who just need to invoke the Simics module makefile automatically generated by Simics to compile a DML module. The C code is compiled like a Simics module written in C, using the same Simics API header files as all other Simics modules.
The common denominator for all Simics modules is the C-level Simics API.
By generating C code with Simics API calls, DML code integrates naturally with any other Simics module. It also makes it possible to combine C and DML source code into a single module, which can be used to wrap existing C-language simulations behind a DML front end, or to make use of particular C-language libraries to implement module functionality.

Reactive TLM

DML is designed to code models in a reactive style. DML describes models as a set of methods (functions) which are called from the simulation kernel and across the device interfaces whenever something occurs that the model has to respond to. There is no main loop in a DML model; the simulation kernel and DML compiler take care of sequencing and activation of the different pieces of sequential code in a model.

This is the industry-standard way to write high-performance transaction-level models. In SystemC terms, it is equivalent to using SC_METHOD rather than SC_THREAD. In C, it means creating a model to be driven by function calls from other parts of the simulation rather than trying to spawn an operating-system thread or using a user-level threading package.

Model Performance

Not all models are equal, and the final performance of the simulation is directly affected by how models are designed and written. Modelers need to think carefully about the design of a model to make it fast, and make sure that no implementation details accidentally spoil performance.

With DML, it is harder to make performance mistakes. The DML language itself encourages a reactive TLM style of coding that does not rely on periodic updates to implement functionality. The DML compiler generates the integration code to the simulation platform, which removes the risk of misusing the platform API in a performance-zapping way. DML itself does not introduce any overhead compared to native C coding of a model.

The DML compiler can also optimize the generated code and perform analysis on a device model to do smart things to the final code.
For example, theimplementation of a device register map address decoder can be generated in away suitable to how a particular register map is setup.DML FEATURESRegister MapsThe most obvious contribution of DML is in simplifying the specification and implementation of device programming registers. This was the main problem that we wanted to solve when DML was initially designed. The typical way a device register map decoder is written in a C-family language is a “big switch”.Figure 3. UART programming register decoder in SystemC, “big switch”int uart::IPmodel(accessHandle t){// get the data from transactiondata.set(t->getMData());// READ behaviourif (t->getMCmd()==Generic_MCMD_RD){// Which offset was accessed?switch ((unsigned int)t->getMAddr()){case 0x0:// address 0x00 has different behavior based on DLABif (DLAB)(*(gs_uint8*)data.getPointer())=DLL;else{if(FCR0){RBR=RCVR_fifo.read(); // Read from FIFO if enabledif(RCVR_fifo.num_available()<rcvr_trigger_level)IssueInterrupt(CRDA);}if( !FCR0 || RCVR_fifo.num_available()==0 )LSR=LSR&0xFE; // Clean DR (Data Ready) bit(*(gs_uint8*)data.getPointer())=RBR;}break;case 0x1:Figure 3 shows an excerpt from a typical big switch code. The code needs to work out if a memory access is a read or write, and then look at the address and take the appropriate action. The result is normally two large switch statements, one for the read and one for the write case. The drawback of this coding style is that it mixes the declaration of the register map with the implementation of the behavior for each register, and that read and write behavior tends to end up in two separate locations.Note that doing the outer switch on the address accessed and then for each case determining if the operation is a read or write ends up being just as hard to read. 
The core problem here is that doing this type of dispatch is not part of the C-style languages, and you end up coding a particular implementation of the decoder rather than simply declaring it.

In DML, the declaration of the register map is much more declarative in style. As shown in Figure 4, the model source code can describe the register layout (which registers have which addresses) separately from the implementation, making it much easier to read.

Figure 4. UART programming register layout in DML

    bank uart {
        parameter register_size = 1;
        parameter byte_order = "little-endian";
        register rbr @ 0x0 "Receiver Buffer register";
        // non-mapped registers to take care of the multiple personality
        // of the register at offset 0
        register thr @ undefined "Transmitter Holding Register";
        register dll @ undefined "Divisor Latch LSB";
        register ier @ 0x1 "Interrupt Enable register";
        // non-mapped register to take care of the multiple personality
        // of the register at offset 1
        register dlm @ undefined "Divisor Latch MSB";
        register iir @ 0x2 "Interrupt Identification register";
        // non-mapped register to take care of the multiple personality
        // of the register at offset 2
        register fcr @ undefined "FIFO Control Register";
        register lcr @ 0x3 "Line Control register";
        register mcr @ 0x4 is (unimplemented) "MODEM Control register";
        register lsr @ 0x5 "Line Status register";
        register msr @ 0x6 is (unimplemented) "MODEM status register";
        register scr @ 0x7 "Scratch pad register";
    }

The actions to be taken on memory accesses are described in separate read and write methods for each register, and these are usually defined in a block of code separate from the main register map declaration. Figure 7 shows an example of how this code looks. You are allowed to put both the declaration of the register sizes and offsets and the definition of the actual functionality into the same block of code, but it is strongly recommended that you separate them, to make the source code easier to read.
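The closest a plain C model can get to this declarative separation is a table-driven register map, roughly the shape of code the DML compiler generates from a bank declaration. This is a sketch under assumptions, with invented names, not actual DML output:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct device device_t;

/* Layout declared in one place: offset, name, and behaviour hooks. */
typedef struct {
    uint32_t offset;
    const char *name;
    uint32_t (*read)(device_t *);
    void (*write)(device_t *, uint32_t);
} reg_desc_t;

struct device { uint32_t scr; };

/* Behaviour defined separately from the layout table. */
static uint32_t scr_read(device_t *d)              { return d->scr; }
static void     scr_write(device_t *d, uint32_t v) { d->scr = v; }

static const reg_desc_t regs[] = {
    { 0x7, "Scratch pad register", scr_read, scr_write },
};

/* Generic decoder: one lookup instead of per-direction switches. */
static const reg_desc_t *find_reg(uint32_t offset)
{
    for (size_t i = 0; i < sizeof regs / sizeof regs[0]; i++)
        if (regs[i].offset == offset)
            return &regs[i];
    return NULL;  /* unmapped */
}
```

In DML the table, the lookup, and the glue are all generated; the modeler writes only the declarations and the read/write methods.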
Note that there is no need to repeat the size or offset of a register when declaring the functionality; the DML compiler takes care of combining all information specified for a register.

Note the use of parameter statements in Figure 4 to specify the endianness and default register size for the register bank called uart. Such parameters are common throughout DML, providing values for defaults or specifying the particular behavior of a particular object. Figure 6 also shows some parameters giving the reset values of a register, which is another way in which DML makes device coding easier.

Bit Field Decoding

Device programming registers tend to be divided up into fields consisting of a few bits each, and decoding the contents of such fields ends up as a series of very error-prone bit masking and shifting operations in most C code. Figure 5 shows another SystemC example of such code, looking at data in the lowest two bits of a register.

Figure 5. Bit field decode in SystemC

    case 0x3:
        LCR = *(gs_uint8*)(data.getPointer());
        // This is the word size, stored in a convenience
        // attribute for easy AND
        if ((LCR & 0x03) == 0) mask = 0x1F;
        else if ((LCR & 0x03) == 1) mask = 0x3F;
        else if ((LCR & 0x03) == 2) mask = 0x7F;
        else if ((LCR & 0x03) == 3) mask = 0xFF;
        break;

As shown in Figure 6, DML allows the user to declare the bits inside each register that make up each field. It is possible to use both little-endian and big-endian (Power Architecture-style) bit field numbering, to align with how the device manuals describe the bit fields.

Figure 6. Bit field decode in DML

    register lcr {
        parameter soft_reset_value = 0x00;
        parameter hard_reset_value = 0x00;
        field wls [1:0] "Word length select";
        field stb [2] "Number of stop bits (0 = 1, 1 = 2)";
        field pen [3] "Parity enable (0 = disable, 1 = enable)";
        field eps [4] "Even parity select (0 = Odd, 1 = Even)";
        field stick_parity [5] "Stick parity";
        field set_break [6] "Set break";
        field dlab [7] "Divisor latch access bit";
        // After a write to this register, check the contents of WLS and
        // set the character length mask appropriately
        method after_write(memop) {
            if ($wls == 0) $mask = 0x1F;
            else if ($wls == 1) $mask = 0x3F;
            else if ($wls == 2) $mask = 0x7F;
            else $mask = 0xFF;
        }
    }

The method at the end of Figure 6 performs the same work as the SystemC code in Figure 5, and note that no explicit masks are needed. The DML compiler takes care of generating the shifting and masking code, and the user does not need to care about how this is done.

Endianness and Partial Accesses

An advantage of using a declarative style for register maps is that the DML compiler knows the register layout explicitly. This enables the DML compiler to automatically generate code supporting tricky cases like accesses that hit only part of a register (a single byte in a 16-bit register, for example), and accesses that cover several registers in a single memory operation. Depending on the specification of the target device, such accesses are either flagged as software errors or handled by routing accesses to the correct parts of a device. This is set by a simple parameter in the source code for a bank. Endianness is also explicitly declared for a register bank, and that endianness is handled correctly regardless of the endianness of the host. Note that there is both byte endianness, which determines how data arriving in a memory transaction is interpreted, and bit endianness, which deals with how bits are numbered inside a register.
These two are independent. DML makes handling cases like a two-byte access in the middle of an eight-byte big-endian register on a little-endian host trivial. Coding such an access in C is fraught with problems and risks for error. It also reduces the mental strain on the programmer, who has to deal with mixed endianness in a target system.

C-like Core Language

The code that actually performs the work in DML is an extended subset of C. The code can access fields and variables defined in the model using a $-prefix on names, and define and call methods. The DML compiler ties the local definitions of behavior together into a coherent device model; the programmer need not care about sequencing. Small snippets of core code are shown in Figure 6 and Figure 7.

Figure 7. UART register access handling in DML

    bank uart {
        register rbr {
            method read -> (uint8 chr) {
                if ($fcr.fifo_enable == 1) {
                    // RBR is filled up here from the rcv FIFO
                    call $rx_fifo.dequeue;
                }
                chr = ($this & $char_len_mask);
                $uart.lsr.dr = 0;
                call $update_interrupt;
            }
            method write(value) {
                // pass write along to thr register
                // since rbr and thr are at the same offset
                call $thr.write(value);
            }
        }
        ...

Compared to normal C, some additional features are available in DML. In particular, C++-style new and delete are used to manage dynamic memory, and there is support for try-catch exception handling. Other extensions help express common simulation functions like logging information, error reporting, and asserts concisely.

The use of C style is intentional, to make DML easier to learn. As a first approximation, DML can be considered a macro language tying together snippets of C code. There is, however, much more to DML than that.

Templates

To reduce code size, DML uses templates to describe recurring functionality, for example "always zero" or "clear on read". Users can define their own templates. Groups and arrays are provided in DML to describe repeating patterns and register structures.
Using DML mechanisms, even very large and complex register banks can be described succinctly. We have many examples of devices with several thousand registers modeled in DML. Thanks to the domain-specific nature of DML, these mechanisms are more powerful and easier to use than C++ templates.

The DML compiler comes with a set of predefined templates for common register cases. A single register or field can combine several templates, as long as the templates do not affect the same aspect of the register behavior. The table in Figure 8 below gives some examples of DML templates available in Simics 4.0 with DML 1.2.

Figure 8. Example templates for common register and field behaviors

    clear_on_read    When read, return the current value and set the value to zero.
    ignore           Writes have no effect; reads always return zero.
    read_constant    All reads return the value set in the model source code; writes have no effect.
    reserved         Log accesses as reading or writing reserved bits; remember written values; reads return the last value written.
    read_only        Writes are ignored and logged.
    unimplemented    Log accesses as accessing an unimplemented feature to the Simics console; remember written values; reads return the last value written.
    write_1_clears   Writes clear the bits marked by ones in the written value.

Templates usually carry parameters that specify aspects of their behavior, making them quite general in applicability. For example, the constant template carries a value parameter providing the value to which it is fixed.

It is worth noting that register semantics in Simics are often expressed as a combination of computing the results corresponding to the target machine computations and simulation-specific side effects like logging. This makes it easier to support iterative development of models, and provides a richer communication of the information the simulator provides about the system and its execution.

Arrays and Groups

Hardware often contains repeated groups of registers with identical functionality repeated multiple times.
It can be the per-processor registers in a multiprocessor interrupt controller, or replicated functional units in an accelerator. To make the specification of such units simpler, DML contains the concepts of arrays and groups. Groups group related registers together, and arrays can repeat individual registers or groups of registers multiple times.

Figure 9. DML groups and arrays used in an interrupt controller

    bank pic {
        ...
        group GT[g in 0 .. NUM_TIMER_GROUPS - 1] {
            register TFRR is (read_write)
                "Timer frequency reporting register";
            register GTCCR[NUM_TIMER_INTERRUPTS] is (gt_curcount)
                "Global timer i current count register";
            register GTBCR[NUM_TIMER_INTERRUPTS] is (gt_count)
                "Global timer i base count register";
            register GTVPR[NUM_TIMER_INTERRUPTS] is (VPR)
                "Global timer i vector/priority register";
            register GTDR[NUM_TIMER_INTERRUPTS] is (DR)
                "Global timer i destination register";
            register TCR is (gt_control)
                "Timer control register";
        }
        ...
    }

Figure 9 shows an example of the use of arrays of groups from an interrupt controller. Note that there are a number of sets of GT registers, and each such set contains several register arrays like GTCCR. The declaration also makes extensive use of DML compile-time constants, so that the exact number of registers can be set from a file including this general declaration. In this way, the same source code can be shared across multiple similar devices. The is statements indicate the use of templates for functionality, some of which are user-defined (VPR, DR, gt_control), and some of which are standard with Simics (read_write).

Putting such nested structures into the right places in memory would be painful if the offset of each register had to be specified individually. To support this, DML features computed offsets using index variables, as shown in Figure 10.

Figure 10. DML computed offsets

    bank pic {
        ...
        group GT[g in 0 .. NUM_TIMER_GROUPS - 1] {
            register TFRR @ 0x010f0 + 0x1000*$g;
            register GTCCR[NUM_TIMER_INTERRUPTS] @ 0x01100 + 0x1000*$g + 0x40*$i;
            register GTBCR[NUM_TIMER_INTERRUPTS] @ 0x01110 + 0x1000*$g + 0x40*$i;
            register GTVPR[NUM_TIMER_INTERRUPTS] @ 0x01120 + 0x1000*$g + 0x40*$i;
            register GTDR[NUM_TIMER_INTERRUPTS] @ 0x01130 + 0x1000*$g + 0x40*$i;
            register TCR @ 0x01300 + 0x1000*$g;
        }
        ...
    }

Memory Layouts

Devices quite often operate on data structures in main memory, without involvement of the main processor in the system. Typical cases are descriptors, structures describing a set of work for a device to do, and data network packets that are processed (or deposited in memory) by devices. To support the modeling of such devices, DML has a memory layout data type. Memory layouts look similar to structure types in C, and are used as types for variables. Unlike C structs, DML layouts map directly and explicitly to the data layout in memory (in C, the compiler is actually free to insert padding between fields).

Figure 11 shows a snippet of DML code involving a layout. Note that the layout has an explicit endianness, and includes single bits in bit fields as addressable units. It is used as the data type for a local variable, into which the contents of memory are copied. The manipulation of a layout is local to the code of the device, which avoids repeated calls to the simulated memory system to collect the data, enhancing locality and speed in the simulator.

Figure 11. Memory layouts in DML

    // example from Simics documentation
    typedef layout "big-endian" {
        uint32 addr;
        uint16 len;
        uint8 offset;
        bitfields 8 {
            uint1 ext @ [0:0];
        } flags;
    } sg_list_block_row_t;
    ...
    {
        // local variable of layout type
        local sg_list_block_row_t row;
        memcpy(&row, ptr, sizeof(row));
        ptr += sizeof(row);
        if (row.flags.ext) {
            ...
        }
    }

Interfaces to Other Models

As discussed above, models also need to interface to other models within the same tightly coupled system component and to various interconnect links outside.
In Simics, all such interfaces are expressed as sets of function calls called interfaces. The interfaces are unidirectional, providing a way for one device to call into another device. In DML, this is mirrored by the connect and implement statements.

Incoming connections are defined in an implement block, which defines the behavior for each function call in the interface. Outgoing connections in connect statements provide the model with a configuration attribute that can be used to tell it which device to connect to. This is set by the system configuration, and provides the model with what is essentially a pointer to the other device. But it is a pointer stored in an attribute, and thus fully checkpointable and configurable at runtime.

Simics provides common header files for interfaces like PCI, I2C, and Ethernet-MAC, to ensure model interoperability and design reuse.
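At the C level, such an interface boils down to a named struct of function pointers: one model implements the functions (the role of implement), and a peer reaches it through a configured target pointer (the role of connect). The following sketch uses invented names for the interface and devices; it is not a real Simics interface definition:

```c
#include <assert.h>

/* Hypothetical interrupt interface: a struct of function pointers. */
typedef struct {
    void (*raise_interrupt)(void *dev, int line);
    void (*lower_interrupt)(void *dev, int line);
} interrupt_iface_t;

/* A toy interrupt controller implementing the interface. */
typedef struct { unsigned pending; } pic_t;

static void pic_raise(void *dev, int line) { ((pic_t *)dev)->pending |=  (1u << line); }
static void pic_lower(void *dev, int line) { ((pic_t *)dev)->pending &= ~(1u << line); }

static const interrupt_iface_t pic_iface = { pic_raise, pic_lower };

/* A device with an outgoing connection holds the target object and its
 * interface pointer -- in Simics the target would be set through a
 * checkpointable configuration attribute. */
typedef struct {
    void *irq_dev;
    const interrupt_iface_t *irq_iface;
} uart_conn_t;
```

Because the connection is just data, the system configuration can rewire it at runtime, and a checkpoint can record it like any other attribute.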
Min-Max decoding for non binary LDPC codes

Valentin Savin, CEA-LETI, MINATEC, Grenoble, France, valentin.savin@cea.fr

ISIT 2008, Toronto, Canada, July 6 - 11, 2008

Abstract—Iterative decoding of non-binary LDPC codes is currently performed using either the Sum-Product or the Min-Sum algorithms, or slightly different versions of them. In this paper, several low-complexity quasi-optimal iterative algorithms are proposed for decoding non-binary codes. The Min-Max algorithm is one of them, and it has the benefit of two possible LLR-domain implementations: a standard implementation, whose complexity scales as the square of the Galois field's cardinality, and a reduced-complexity implementation called selective implementation, which makes the Min-Max decoding very attractive for practical purposes.

In this paper we propose several new algorithms for decoding non-binary LDPC codes, one of which is called the Min-Max algorithm. They are all independent of thermal noise estimation errors and perform quasi-optimal decoding, meaning that they present a very small performance loss with respect to the optimal iterative decoding (Sum-Product). We also propose two implementations of the Min-Max algorithm, both in the LLR domain, so that the decoding is computationally stable: a "standard implementation", whose complexity scales as the square of the Galois field's cardinality, and a reduced-complexity "selective implementation". This makes the Min-Max decoding very attractive for practical purposes.

The Sum-Product algorithm can be efficiently implemented in the probability domain using binary Fourier transforms [4], and its complexity is dominated by O(q log2 q) sum and product operations for each check node processing, where q is the cardinality of the Galois field of the non-binary LDPC code. The Min-Sum decoding can be implemented either in the log-probability domain or in the log-likelihood ratio (LLR) domain, and its complexity is dominated by O(q^2) sum operations for each check node processing. In the LLR domain, a reduced selective implementation of the Min-Sum decoding, called Extended Min-Sum, was proposed in [5], [6]. Here "selective" means that the check-node processing uses the incoming messages concerning only a part of the Galois field elements. Non-binary LDPC codes were also investigated in [7], [8], [9].

The paper is organized as follows. In the next section we briefly review several realizations of the Min-Sum algorithm for non binary LDPC codes. This is intended to keep the paper self-contained, but also to justify some of our choices regarding the new decoding algorithms introduced in section III. The implementation of the Min-Max decoder is discussed in section IV. Section V presents simulation results and section VI concludes the paper.

The following notations will be used throughout the paper.

Notations related to the Galois field:
• GF(q) = {0, 1, . . . , q − 1}, the Galois field with q elements, where q is a power of a prime number. Its elements will be called symbols, in order to distinguish them from ordinary integers.
• a, s, x will be used to denote GF(q)-symbols.
• Bold letters, e.g. a, will be used to denote vectors of GF(q)-symbols. For instance, a = (a1, . . . , aI) ∈ GF(q)^I, etc.

Notations related to LDPC codes:
• H ∈ MM,N(GF(q)), the q-ary parity check matrix of the code.
• C, the set of codewords of the LDPC code.
• Cn(a), the set of codewords with the nth coordinate equal to a, for given 1 ≤ n ≤ N and a ∈ GF(q).
• x = (x1, x2, . . . , xN), a q-ary codeword transmitted over the channel.

Notations related to the Tanner graph:
• H, the Tanner graph of the code.
• n ∈ {1, 2, . . . , N}, a variable node of H.
• m ∈ {1, 2, . . . , M}, a check node of H.
• H(n), the set of neighbor check nodes of the variable node n.
• H(m), the set of neighbor variable nodes of the check node m.
• L(m), the set of local configurations verifying the check node m, i.e. the set of sequences of GF(q)-symbols a = (an)n∈H(m) verifying the linear constraint Σn∈H(m) hm,n · an = 0, where hm,n is the corresponding entry of H.
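The core of the Min-Max check-node rule (minimize over local configurations, maximize over incoming reliabilities) can be sketched for the simplest possible case: a degree-2 check over GF(4) with unit edge coefficients, so that the field addition reduces to bitwise XOR (GF(4) has characteristic 2). This is an illustrative toy under those stated assumptions, not the paper's full algorithm, which handles arbitrary check degrees and nonzero coefficients hm,n:

```c
#include <assert.h>
#include <limits.h>

enum { Q = 4 };  /* GF(4): field addition in characteristic 2 is bitwise XOR */

/* Toy Min-Max check-node update for a degree-2 check with unit coefficients:
 *   out[a] = min over (a1, a2) with a1 + a2 = a of max(in1[a1], in2[a2]).
 * Messages are LLR-like reliabilities: lower value = more reliable symbol. */
void minmax_check(const int in1[Q], const int in2[Q], int out[Q])
{
    for (int a = 0; a < Q; a++) {
        int best = INT_MAX;
        for (int a1 = 0; a1 < Q; a1++) {
            int a2 = a ^ a1;  /* enforces a1 + a2 = a in GF(4) */
            int m = in1[a1] > in2[a2] ? in1[a1] : in2[a2];
            if (m < best)
                best = m;
        }
        out[a] = best;
    }
}
```

The double loop over symbols is exactly where the O(q^2) per-check cost of the standard implementation comes from; the selective implementation reduces it by restricting attention to the most reliable incoming symbols.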