Improved decoding of Reed-Solomon and algebraic-geometric codes


Channel Coding and Decoding Techniques in Communication Systems


In modern communication systems, channel coding and decoding techniques play a crucial role.

Channel coding is the process of encoding the source data so that the signal can be transmitted over the channel more reliably.

At the receiver, channel decoding is the process of decoding the received encoded data and recovering the original data.

This article introduces the channel coding and decoding techniques commonly used in communication systems.

I. Forward Error Correction (FEC)

Forward error correction is a coding technique that can actively correct errors introduced during transmission.

The principle is to add redundant information to the original data so that, when the receiver gets a corrupted packet, it can use the redundancy to correct the errors and recover the correct data.

1. Common FEC coding schemes

(1) Hamming codes. The Hamming code was one of the earliest FEC schemes applied in the field of communications.

By adding parity-check bits to the original data, it can correct single-bit errors and detect (but not correct) two-bit errors.

Hamming encoding and decoding are relatively simple, but the error-correcting capability is limited.
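To make the idea concrete, here is a minimal sketch of a systematic Hamming(7,4) encoder and single-error-correcting decoder in Python. The particular generator/parity-check layout below is one common textbook convention chosen for illustration; it is an assumption, not something specified in this article.

```python
import numpy as np

# Systematic Hamming(7,4): codeword = [d1 d2 d3 d4 p1 p2 p3]
# Parity equations (one common convention, assumed here):
#   p1 = d1 ^ d2 ^ d4,  p2 = d1 ^ d3 ^ d4,  p3 = d2 ^ d3 ^ d4
G = np.array([[1,0,0,0, 1,1,0],
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 0,1,1],
              [0,0,0,1, 1,1,1]], dtype=int)
H = np.array([[1,1,0,1, 1,0,0],
              [1,0,1,1, 0,1,0],
              [0,1,1,1, 0,0,1]], dtype=int)

def encode(data4):
    return (np.array(data4) @ G) % 2

def decode(word7):
    word7 = np.array(word7).copy()
    syndrome = (H @ word7) % 2
    if syndrome.any():                       # nonzero syndrome -> locate the single error
        for pos in range(7):                 # the syndrome equals the column of H at the error position
            if np.array_equal(H[:, pos], syndrome):
                word7[pos] ^= 1
                break
    return word7[:4]                         # systematic: the first 4 bits are the data

msg = [1, 0, 1, 1]
cw = encode(msg)
cw[2] ^= 1                                   # inject a single-bit error
print(decode(cw))                            # -> [1 0 1 1]
```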

(2) LDPC codes (Low-Density Parity-Check codes). LDPC codes are an FEC scheme based on graph theory.

By using a sparse (low-density) parity-check matrix to define the check equations, they achieve strong error-correcting performance.

LDPC codes are widely used in modern communication systems, especially in satellite and wireless communications.
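The defining object of an LDPC code is its parity-check matrix H: a word c is a codeword exactly when H·cᵀ = 0 (mod 2), and a nonzero syndrome flags errors that an iterative decoder then tries to resolve. The toy matrix below is purely illustrative (a real LDPC matrix is far larger and sparser); it is a sketch of the codeword membership test, not of a full belief-propagation decoder.

```python
import numpy as np

# Toy (6,3) code in systematic form H = [A | I]; each row is one parity check.
H = np.array([[1,1,0, 1,0,0],
              [0,1,1, 0,1,0],
              [1,0,1, 0,0,1]], dtype=int)

def syndrome(word):
    """All-zero syndrome <=> the word satisfies every parity check."""
    return (H @ np.array(word)) % 2

codeword = np.array([1,0,1, 1,1,0])      # data [1,0,1] plus the matching parity bits
print(syndrome(codeword))                # -> [0 0 0]  (valid codeword)

received = codeword.copy()
received[1] ^= 1                         # channel flips one bit
print(syndrome(received))                # -> [1 1 0]  (the first two checks are violated)
```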

(3) RS codes (Reed-Solomon codes). RS codes are an FEC scheme widely used in disk storage and digital communications.

By appending redundant symbols to the original data, they can correct a bounded number of symbol errors.

RS encoding and decoding are relatively complex, but the error-correcting capability is strong, which makes RS codes well suited to channels of poor quality.
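As a quick illustration, the pure-Python `reedsolo` package (an assumed third-party dependency, `pip install reedsolo`) provides a Reed-Solomon codec over GF(2^8); the sketch below shows byte-level encoding and correction of a few corrupted bytes. The exact return format of `decode()` has varied between package versions, so treat this as a hedged example rather than a fixed API reference.

```python
from reedsolo import RSCodec

rsc = RSCodec(10)                 # 10 parity bytes -> corrects up to 5 byte errors per block

message = b"CHANNEL CODING DEMO"
codeword = rsc.encode(message)    # message followed by 10 parity bytes

corrupted = bytearray(codeword)
corrupted[0] ^= 0xFF              # corrupt three bytes
corrupted[5] ^= 0x55
corrupted[12] ^= 0x0F

# In recent versions decode() returns a tuple whose first element is the message.
decoded = rsc.decode(corrupted)[0]
assert bytes(decoded) == message
```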

2. Advantages and applications of FEC

FEC offers the following advantages in communication systems: (1) Improved reliability: FEC can correct a certain number of errors introduced by the channel, reducing the error rate of the transmission.

(2) Bandwidth savings: although FEC adds redundancy, it reduces the bandwidth wasted on retransmissions caused by errors.

FEC is widely applied in wireless, satellite, and optical communications.

For example, in satellite communication systems, where the transmission distance is long and interference is significant, FEC can substantially improve link quality.

New Optical Modulation Techniques Approaching the Shannon Limit


Optical transmission technology has evolved through several generations, and spectral efficiency has improved markedly. The industry has begun to ask what the fundamental linear and nonlinear signal-channel limits implied by Shannon's theory are for fiber-optic transmission systems, so that next-generation techniques can exceed the transmission performance of today's 100G coherent systems and push spectral efficiency and total capacity further toward the Shannon limit.

These techniques include more complex modulation formats and channel coding/decoding schemes; pre-filtering combined with multi-symbol detection; multicarrier techniques such as optical orthogonal frequency-division multiplexing (OFDM) and Nyquist wavelength-division multiplexing (Nyquist WDM); and nonlinearity compensation schemes.

With further optimization, these techniques are likely to be adopted in beyond-100G optical transport systems to meet ever-growing bandwidth demand.

Keywords: spectral efficiency; Shannon limit; Gaussian noise; optical signal-to-noise ratio; modulation; nonlinearity compensation

1. Traffic and Optical Transport Capacity Requirements

With the rapid growth of massive video services, large-scale cloud computing, and the mobile Internet, traffic on telecommunication networks will continue to grow at a high rate.
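The Shannon limit referred to above is the additive-white-Gaussian-noise capacity C = B·log2(1 + SNR), and spectral efficiency is C/B. The short calculation below is a generic illustration (the SNR values and modulation formats are assumptions, not figures from any cited system); it shows how the achievable spectral efficiency per polarization grows with SNR and which formats sit inside that bound.

```python
import math

def shannon_spectral_efficiency(snr_db: float) -> float:
    """AWGN capacity per unit bandwidth, in bit/s/Hz: log2(1 + SNR)."""
    snr = 10 ** (snr_db / 10)
    return math.log2(1 + snr)

# Gross spectral efficiency of some common formats (bits per symbol, single polarization).
formats = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

for snr_db in (5, 10, 15, 20):
    cap = shannon_spectral_efficiency(snr_db)
    within = ", ".join(name for name, bits in formats.items() if bits <= cap)
    print(f"SNR = {snr_db:2d} dB  Shannon limit = {cap:4.1f} b/s/Hz  formats within the limit: {within}")
```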


The Decoding Flow of Tail-Biting Convolutional Codes


Reed-Solomon codes are an important type of error-correcting code widely used in various communication systems. They are known for their ability to correct errors and detect data corruption reliably and efficiently.


One application of Reed-Solomon codes is in concatenated codes, where a Reed-Solomon code is used in conjunction with another error-correcting code such as a convolutional code. This combination enhances the error-correcting capability of the system, making it more robust against noise and interference.


Tail-biting convolutional codes are a special class of convolutional codes in which the final state of the encoder equals its initial state. Because no termination (tail) bits are needed, there is no rate loss at the end of the block, and the codeword can be decoded circularly, which makes tail-biting codes attractive for many communication applications.
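Here is a short sketch of the tail-biting idea for a rate-1/2, constraint-length-3 feedforward encoder with generators (7, 5) in octal, a common textbook choice used here purely as an assumption. The shift register is preloaded with the last two message bits, so after encoding the whole block the register returns to that same state and no tail bits are appended.

```python
def tail_biting_encode(msg):
    """Rate-1/2 tail-biting convolutional code, generators (7, 5) octal, memory m = 2.

    Requires len(msg) >= 2. Returns 2*len(msg) coded bits.
    """
    # Pre-load the register with the last two message bits: the encoder then ends
    # in the same state it started in (the tail-biting property).
    reg = [msg[-1], msg[-2]]              # reg[0] = most recent bit, reg[1] = oldest
    out = []
    for bit in msg:
        u = [bit] + reg                   # [u_t, u_{t-1}, u_{t-2}]
        c1 = u[0] ^ u[1] ^ u[2]           # generator 111 (octal 7)
        c2 = u[0] ^ u[2]                  # generator 101 (octal 5)
        out += [c1, c2]
        reg = [bit, reg[0]]               # shift the register
    assert reg == [msg[-1], msg[-2]]      # final state equals the initial state
    return out

msg = [1, 0, 1, 1, 0, 0, 1, 0]
print(tail_biting_encode(msg))
```

A decoder can exploit the same property by running a trellis decoder circularly (wrapping around the block) instead of forcing a known start/end state.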



Retimed Decomposed Serial Berlekamp-Massey (BM) Architecture for High-Speed Reed-Solomon Decoding

Shahid Rizwan
Korea Advanced Institute of Science and Technology, Republic of Korea
shahidrizwan06@

Abstract

This paper presents a retimed decomposed inversion-less serial Berlekamp-Massey (BM) architecture for Reed-Solomon (RS) decoding. The key idea is to apply the retiming technique to the critical path in order to achieve high decoding performance. The standard-basis irregular fully parallel multiplier is separated into partial product generation (PPG) and partial product reduction (PPR) stages to implement the proposed modified decomposed inversion-less serial BM algorithm. The proposed RS(255,239) decoder is implemented in Verilog HDL and synthesized with a 0.18 µm CMOS std130 standard cell library. The proposed architecture achieves almost a 76% increase in speed and throughput and can be used in high-speed, high-throughput applications such as DVD and optical fiber communications.

1. Introduction

Among the many error-correcting codes developed so far, linear block codes such as Reed-Solomon and BCH codes are used in many digital applications because of their powerful error-correction capability and efficient encoding and decoding procedures [1]. These applications include DVD, DTV, satellite communications, wireless systems, and optical communications.

Because of the increasing capacity demands of optical communications, high-speed, high-throughput implementations of RS decoders are needed to meet higher data-rate requirements. However, all existing RS decoder architectures have limitations in speed and throughput. Among the decoding steps, the second block, the key equation solver (KES), which computes the coefficients of the error-locator polynomial, is known to dominate the speed, area, and power of the decoder because of its complexity and delay, and it has therefore been the bottleneck to achieving high-speed decoding. Either the Berlekamp-Massey (BM) algorithm or the Euclidean algorithm can be used to solve the key equation for the error-locator and error-evaluator polynomials; the BM algorithm leads to more efficient software and hardware implementations. The literature offers three distinct BM architectures: the inversion-less parallel BM architecture [5], the inversion-less serial BM architecture [7], and the inversion-less decomposed serial BM architecture [6]. The parallel BM architecture has a shorter latency but a lower decoding speed; the serial BM architecture has a higher latency but a higher decoding speed; the decomposed serial BM structure has a medium latency compared with the other two and the highest decoding speed, making it the fastest of the existing BM architectures. In the decomposed serial BM algorithm, the critical path includes a multiplier, an adder, and a multiplexer; because this path lies in a feedback loop, it cannot easily be pipelined. This paper proposes a retimed decomposed inversion-less serial BM architecture that breaks the critical path to achieve high-speed, high-throughput Reed-Solomon decoding.

The rest of this paper is organized as follows. Section 2 describes the Reed-Solomon decoder, Section 3 reviews the retiming technique, and Section 4 explains the proposed retimed BM architecture;
the pipelined error-evaluator block is given in Section 5, and concluding remarks are made in Section 6.

2. Reed-Solomon Decoder

A general RS code, denoted (n, k), can correct up to t = ⌊(n − k)/2⌋ symbol errors [7], where n and k represent the block length and the number of information symbols, respectively.

A syndrome-based RS decoder consists of three components [1]. The first is the syndrome calculator, which generates a syndrome polynomial that is used by the second component, the key equation solver (KES). The Berlekamp-Massey (BM) algorithm is used to solve the key equation for the error-locator and error-evaluator polynomials because it is considered to have the least hardware complexity; the BM algorithm computes the error-locator polynomial in 2t iterations [2]. Computing the error-locator and error-evaluator polynomials in parallel results in lower latency (the merged architecture). In the third component, these two polynomials are used to compute the error locations and the corresponding error values using the Chien search and Forney's algorithm. In addition, delay elements are placed in parallel with these three components to buffer the received symbols according to the latency of the components. The blocks of the RS decoder usually operate in parallel (pipelined mode), as shown in Figure 1. The main objective of this paper is to improve the KES block and thereby the speed of Reed-Solomon decoding. The KES stage is hard to pipeline because of its feedback signals; the use of a specialized technique, retiming, yields a high-throughput, high-speed Reed-Solomon decoding architecture.

3. Retiming Technique

Retiming is a transformation technique used to change the locations of the delay elements in a circuit without affecting its input/output characteristics. Retiming maps a circuit G to a retimed circuit G_r. A retiming solution is characterized by a value r(V) for each node V in the graph. Let w(e) denote the weight of edge e in the original graph G, and let w_r(e) denote the weight of edge e in the retimed graph G_r. The weight of an edge e from U to V in the retimed graph is computed from the weight of the edge in the original graph as

w_r(e) = w(e) + r(V) − r(U).    (A)

For example, consider the filter circuit shown in graph form in Figure 2(a) and the retimed filter in Figure 2(b). The retiming values r(1) = 0, r(2) = 1, r(3) = 0, and r(4) = 0 can be used to obtain the retimed data flow graph (DFG) in Figure 2(b) from the DFG in Figure 2(a). For example, the edge from node 3 to node 2 in the retimed DFG contains

w_r(e_{3→2}) = w(e_{3→2}) + r(2) − r(3) = 0 + 1 − 0 = 1 delay,

and the edge from node 2 to node 1 contains

w_r(e_{2→1}) = w(e_{2→1}) + r(1) − r(2) = 1 + 0 − 1 = 0 delays.

A retiming solution is feasible if w_r(e) ≥ 0 holds for all edges. The solution that maps Figure 2(a) to Figure 2(b) is feasible because all of the edges in Figure 2(b) have nonnegative weights.

Although the filters in Figure 2(a) and Figure 2(b) have delays at different locations, they have the same input/output characteristics; the two filters can be derived from one another using retiming. The critical path of the filter in Figure 2(a) passes through one multiplier and one adder and has a computation time of 3 time units, whereas the retimed filter in Figure 2(b) has a critical path through two adders and a computation time of 2 time units. By retiming the filter in Figure 2(a) to obtain the filter in Figure 2(b), the clock period has therefore been reduced from 3 to 2, i.e., by 33% [3].

Retiming has many applications in synchronous circuit design, including reducing the clock period of a circuit, reducing the number of registers, reducing power consumption, and logic synthesis.

Cutset retiming is a useful special case of retiming. A cutset is a set of edges that can be removed from the data flow graph (DFG) to create two disconnected sub-graphs; cutset retiming only affects the weights of the edges in the cutset. It is often used in combination with slow-down: each delay in the DFG is first replaced with N delays to create an N-slow version of the DFG, and cutset retiming is then performed on the N-slow DFG. In an N-slow system, N − 1 null operations (or zero samples) must be interleaved after each useful signal sample to preserve the functionality of the algorithm [3].
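Equation (A) is easy to check mechanically. The sketch below is not from the paper: the node names, the retiming values, and the two edge weights follow the Figure 2 example as described in the text, while the remaining edges of the small graph are assumed purely to close the loop. It applies a retiming assignment r to a data-flow graph and verifies feasibility, i.e., that every retimed edge weight w_r(e) = w(e) + r(V) − r(U) stays nonnegative.

```python
# Edges of a small DFG as (U, V, w): an edge U -> V carrying w delay registers.
# Edges 2->1 (w=1) and 3->2 (w=0) follow the worked example; the others are assumed.
edges = [(1, 4, 0), (4, 3, 0), (3, 2, 0), (2, 1, 1)]

r = {1: 0, 2: 1, 3: 0, 4: 0}   # retiming value per node, as in the Figure 2 example

def retime(edges, r):
    """Apply w_r(e) = w(e) + r(V) - r(U) and report whether the solution is feasible."""
    retimed = [(u, v, w + r[v] - r[u]) for (u, v, w) in edges]
    feasible = all(w >= 0 for (_, _, w) in retimed)
    return retimed, feasible

retimed, feasible = retime(edges, r)
for (u, v, w) in retimed:
    print(f"edge {u}->{v}: {w} delay(s)")
print("feasible:", feasible)       # True: all retimed edge weights are nonnegative
```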
4. Retimed Decomposed Inversion-less Berlekamp-Massey (BM) Architecture

The modified decomposed inversion-less BM algorithm is defined by two recursions, (1) and (2): recursion (1) updates each coefficient λ_j^(i) of the error-locator polynomial, and recursion (2) accumulates the partial discrepancy Δ_j^(i+1) from the updated coefficients and the syndromes. Here λ^(i)(x) = λ_0^(i) + λ_1^(i)x + ⋯ + λ_t^(i)x^t is the error-locator polynomial, the λ_j^(i) are its coefficients, and the Δ_j^(i+1) are the partial results in computing the discrepancy Δ^(i+1).

The i-th iteration can be decomposed into 2t + 2 cycles. Computing λ_j^(i) requires at most two finite-field multipliers (FFMs), and computing Δ_j^(i+1) requires only one FFM. A standard-basis irregular fully parallel multiplier with separate partial product generation (PPG) and partial product reduction (PPR) stages is used in our design [4]: in the first cycle the PPG operation is performed, and in the next cycle the PPR operation is performed.

To implement this idea we propose a retimed decomposed inversion-less BM architecture. The steps used to modify the original decomposed BM structure into the retimed structure are as follows.

i. Identify the nodes and the delay elements in the original circuit and draw a data flow graph of the original circuit (Figure 3).
ii. Make a 2-slow version of the original circuit (Figure 4).
iii. Apply the retiming technique by giving a retiming value of −1 to all the multiplier nodes, adjust the delays of the different branches to retain the original operation of the circuit, and then shift the registers inside the multipliers to break the critical path (Figure 5).
iv. Use register minimization to reduce the number of registers in the circuit (Figure 6).

The retimed decomposed serial BM architecture for computing the error-locator polynomial of the RS(255, 239) decoder is shown in Figure 6. All operations are performed in the standard basis, so there is no need to convert between the dual basis and the standard basis, as was done in [6]. A similar architecture operating in parallel with this one can be used to compute the error-evaluator polynomial. The proposed architecture achieves almost a 76% increase in speed and throughput.
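The PPG/PPR split used above is easy to mimic in software: partial-product generation is a carry-less (XOR) multiply of the two field elements, and partial-product reduction folds the raw 15-bit result back modulo the field polynomial. The sketch below assumes the common primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D) for GF(2^8); the paper's RS(255,239) code is defined over GF(2^8), but its exact field polynomial is not stated in this excerpt.

```python
PRIM = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, a standard choice for GF(2^8) (assumed)

def ppg(a: int, b: int) -> int:
    """Partial-product generation: carry-less multiply of two GF(2^8) elements (result < 2^15)."""
    acc = 0
    for i in range(8):
        if (b >> i) & 1:
            acc ^= a << i
    return acc

def ppr(x: int) -> int:
    """Partial-product reduction: fold the raw product modulo the field polynomial."""
    for i in range(14, 7, -1):            # clear bits 14..8, highest first
        if (x >> i) & 1:
            x ^= PRIM << (i - 8)
    return x

def gf_mul(a: int, b: int) -> int:
    return ppr(ppg(a, b))                 # the two pipeline stages, composed

assert gf_mul(0x53, 0x01) == 0x53         # multiplying by 1 is the identity
assert gf_mul(0x02, 0x87) == 0x13         # 0x87<<1 = 0x10E, reduced: 0x10E ^ 0x11D = 0x13
```

In hardware, the point of the split is that a pipeline register can sit between `ppg` and `ppr`, which is exactly where the retimed architecture places the registers moved inside the multipliers.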
5. Pipelined Error Evaluator Block

The basic error-evaluator block given in the literature has a delay of about 2.70 ns [6]; its critical path consists of an inverter, a multiplier, an adder, and a multiplexer. Adding two pipeline stages results in a considerable speed improvement (Figure 7). Synthesis results are given in Table 2.

Table 2. Synthesis results for the error-corrector block (t = 8 case).

                    Basic error-corrector block                Two-stage pipeline              Three-stage pipeline
  Cycle time (ns)   2.70                                       2.13                            1.53
  Speed (MHz)       370.4                                      469.5                           653.6
  Critical path     inverter + multiplier + adder + mux        inverter + multiplier (PPG)     inverter

6. Conclusions

In this paper, we have proposed a retimed decomposed inversion-less serial BM architecture that uses the irregular fully parallel multiplier. In the original decomposed inversion-less serial BM algorithm, the critical path includes a multiplier, an adder, and a multiplexer. The retimed architecture breaks this critical path using the retiming technique and yields considerable speed and throughput improvements, at the expense of a few extra registers. In addition, pipelining the error-evaluator block gives a further speed improvement. The proposed architecture was implemented in Verilog HDL and synthesized with a UMC 0.18 µm std130 standard cell library. It achieves almost a 76% increase in speed and throughput and can be used in high-speed, high-throughput applications such as optical fiber communication. Layout was done using the Milkyway and Apollo tools and is shown in Figure 8; the chip size is 2.0 mm × 1.9 mm.

References

[1] S. B. Wicker and V. K. Bhargava, Reed-Solomon Codes and Their Applications. IEEE Press, 1994.
[2] R. E. Blahut, Theory and Practice of Error Control Codes. Addison-Wesley, 1983.
[3] K. K. Parhi, VLSI Digital Signal Processing Systems: Design and Implementation. Wiley-Interscience, 1999.
[4] L. Gao and K. K. Parhi, "Custom VLSI design of efficient low latency and low power finite field multiplier for Reed-Solomon codec," in Proc. IEEE Int. Symp. Circuits and Systems (ISCAS), 2001.
[5] I. S. Reed and M. T. Shih, "VLSI design of inverse-free Berlekamp-Massey algorithm," IEE Proceedings-E: Computers and Digital Techniques, vol. 138, no. 5, Sept. 1991.
[6] H.-C. Chang, C. B. Shung, and C.-Y. Lee, "A Reed-Solomon product-code (RS-PC) decoder chip for DVD applications," IEEE Journal of Solid-State Circuits, vol. 36, no. 2, Feb. 2001.
[7] H.-J. Kang and I.-C. Park, "A high-speed and low-latency Reed-Solomon decoder based on a dual-line structure," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), 2002.
[8] H. Lee, M.-L. Yu, and L. Song, "VLSI design of Reed-Solomon decoder architectures," in Proc. IEEE Int. Symp. Circuits and Systems (ISCAS), Geneva, 2000.


High-Speed Architectures for Reed-Solomon Decoders


IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, VOL. 9, NO. 5, OCTOBER 2001

High-Speed Architectures for Reed-Solomon Decoders

Dilip V. Sarwate, Fellow, IEEE, and Naresh R. Shanbhag, Member, IEEE

Abstract—New high-speed VLSI architectures for decoding Reed-Solomon codes with the Berlekamp-Massey algorithm are presented in this paper. The speed bottleneck in the Berlekamp-Massey algorithm is in the iterative computation of discrepancies followed by the updating of the error-locator polynomial. This bottleneck is eliminated via a series of algorithmic transformations that result in a fully systolic architecture in which a single array of processors computes both the error-locator and the error-evaluator polynomials. In contrast to conventional Berlekamp-Massey architectures, in which the critical path passes through two multipliers and an adder chain whose depth grows with the error-correcting capability t, the critical path of the proposed architectures passes through only one multiplier and one adder.

II. Reed-Solomon Encoding and Decoding

A. Encoding of Reed-Solomon Codes

An (n, k) Reed-Solomon code over GF(2^m) is defined by a generator polynomial g(x) of degree n − k whose roots are consecutive powers of a primitive element α. In systematic encoding, the data polynomial d(x), of degree less than k, is multiplied by x^{n−k} and divided by g(x); the codeword c(x) = x^{n−k} d(x) − [x^{n−k} d(x) mod g(x)] is a multiple of g(x) and consists of the data symbols followed by the n − k parity-check symbols.

B. Decoding of Reed-Solomon Codes

Let r(x) = c(x) + e(x) denote the received word polynomial, where e(x) is the error polynomial. The decoder knows neither c(x) nor e(x); it must determine them from r(x). Up to t = ⌊(n − k)/2⌋ symbol errors can always be corrected.

The decoder begins its task of error correction by computing the 2t syndrome values S_i = r(α^i), one for each root of g(x); the syndromes depend only on e(x). If all syndromes are zero, r(x) is a codeword and it is assumed that no errors have occurred. Otherwise, the decoder forms the syndrome polynomial S(x) and defines the error-locator polynomial Λ(x), whose roots identify the error positions (4), and the error-evaluator polynomial Ω(x) (5). These polynomials are related to S(x) through the key equation

Λ(x) S(x) ≡ Ω(x) mod x^{2t}.    (6)

Solving the key equation to determine both Λ(x) and Ω(x) from S(x) is the heart of the decoding problem. Usually, the decoder evaluates Λ(x) at successive powers of α as each received symbol leaves the decoder circuit; this process is called a Chien search [1], [3]. If the evaluation yields zero, the corresponding position is one of the error locations, and the error value there follows from Forney's formula (7), which uses the formal derivative Λ'(x). In a field of characteristic two, x·Λ'(x) is just the sum of the odd-degree terms of Λ(x), so it can be found during the evaluation of Λ(x) and does not require a separate computation. Note also that (7) can be simplified by a suitable choice of the syndrome indexing.

The throughput bottleneck in Reed-Solomon decoders is in the key equation solver (KES) block, which solves (6); in contrast, the syndrome computation (SC) and Chien-search/error-evaluation (CSEE) blocks are relatively straightforward to implement. Hence, in this paper we focus on developing high-speed architectures for the KES block. As mentioned earlier, the key equation (6) can be solved via the extended Euclidean (eE) algorithm (see [19] and [17] for implementations) or via the BM algorithm (see [5] for implementations). In this paper, we develop high-speed architectures for a reformulated version of the BM algorithm, because we believe that this reformulated algorithm can achieve much higher speeds than other implementations of the BM and eE algorithms. Furthermore, as we show in Section IV-B4, these new architectures also have lower gate complexity and a simpler control structure than architectures based on the eE algorithm.

III. EXISTING BERLEKAMP-MASSEY (BM) ARCHITECTURES

In this section, we give a brief description of different versions of the Berlekamp-Massey (BM) algorithm and then discuss a generic architecture, similar to that in the paper by Reed et al. [13], for implementation of the algorithm.

A. The
Berlekamp-Massey Algorithm

The BM algorithm is an iterative procedure for solving (6). In the form originally proposed by Berlekamp [1], the algorithm begins with a pair of initial polynomials and, over 2t iterations, determines polynomials satisfying the congruence (6); both the error-locator and the error-evaluator polynomials are updated in every iteration, which increases the storage requirement. In the version due to Massey [10], only the error-locator polynomial is computed during the 2t iterations, and the error-evaluator polynomial is obtained afterwards from Λ(x) and S(x); this trades storage against additional clock cycles. Although this version of the BM algorithm trades off space against time, it suffers from the same problem as the Berlekamp version: during some iterations it is necessary to divide each coefficient of the updated polynomial by the discrepancy, and these divisions, which occur inside an iterative loop, are more time consuming than multiplications. Obviously, if these divisions could be replaced by multiplications, the resulting circuit implementation would have a smaller critical path delay and higher clock speeds would be usable.² A less well-known version of the BM algorithm [4], [13] has precisely this property and has recently been employed in practice [13], [5]. We focus on this version of the BM algorithm in this paper.

The inversionless BM (iBM) algorithm is described by the pseudocode shown below. The iBM algorithm actually finds scalar multiples of the polynomials defined in (4) and (5) rather than the polynomials themselves; however, the Chien search finds the same error locations, and it follows from (7) that the same error values are obtained. Hence, we continue to refer to the polynomials computed by the iBM algorithm as Λ(x) and Ω(x).

The iBM algorithm:
  Initialization: λ_0(0) = b_0(0) = 1; λ_i(0) = b_i(0) = 0 for i = 1, ..., t; k(0) = 0; γ(0) = 1.
  Input: syndromes s_i, i = 0, 1, ..., 2t − 1.
  for r = 0 step 1 until 2t − 1 do begin
    Step iBM.1:  δ(r) = s_r λ_0(r) + s_{r−1} λ_1(r) + ··· + s_{r−t} λ_t(r)
    Step iBM.2:  λ_i(r+1) = γ(r) λ_i(r) − δ(r) b_{i−1}(r),  i = 0, 1, ..., t
    Step iBM.3:  if δ(r) ≠ 0 and k(r) ≥ 0 then begin
                    b_i(r+1) = λ_i(r), i = 0, ..., t;  γ(r+1) = δ(r);  k(r+1) = −k(r) − 1
                 end else begin
                    b_i(r+1) = b_{i−1}(r), i = 0, ..., t;  γ(r+1) = γ(r);  k(r+1) = k(r) + 1
                 end
  end
  Step iBM.4:  ω_i(2t) = s_i λ_0(2t) + s_{i−1} λ_1(2t) + ··· + s_{i−t} λ_t(2t),  i = 0, 1, ..., t − 1
  Output: λ_i(2t), i = 0, ..., t; ω_i(2t), i = 0, ..., t − 1.

As a minor implementation detail, coefficients with negative indices, such as s_{r−i} for i > r and b_{−1}(r), are identically zero in Steps iBM.1, iBM.2, and iBM.4.

²The astute reader will have noticed that the Forney error-value formula (7) also involves a division. Fortunately, these divisions can be pipelined because they are feed-forward computations. Similarly, the polynomial evaluations needed in the CSEE block (as well as those in the SC block) are feed-forward computations that can be pipelined. Unfortunately, the divisions in the KES block occur inside an iterative loop and, hence, pipelining the computation becomes difficult. Thus, as was noted in Section II, the throughput bottleneck is in the KES block.
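For readers who want to experiment with the inversionless update rule, here is a compact software sketch of the iBM key-equation solver over GF(2^8) that returns (a scalar multiple of) the error-locator polynomial. The field polynomial, the t = 2 test vector, and the helper names are assumptions for illustration; the loop body mirrors Steps iBM.1-iBM.3 above, with XOR serving as subtraction in characteristic two.

```python
PRIM = 0x11D                           # assumed field polynomial for GF(2^8)
EXP, LOG = [0] * 512, [0] * 256        # antilog/log tables for fast multiplication
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_eval(p, xv):
    """Evaluate polynomial p (lowest-degree coefficient first) at xv via Horner's rule."""
    acc = 0
    for c in reversed(p):
        acc = gf_mul(acc, xv) ^ c
    return acc

def ibm(s, t):
    """Inversionless BM: return a scalar multiple of the error locator from 2t syndromes."""
    lam = [1] + [0] * t                # lambda(x)
    b   = [1] + [0] * t                # correction polynomial b(x)
    gamma, k = 1, 0
    for r in range(2 * t):
        # Step iBM.1: discrepancy (terms with negative syndrome index are zero)
        delta = 0
        for i in range(min(r, t) + 1):
            delta ^= gf_mul(lam[i], s[r - i])
        # Step iBM.2: lambda(x) <- gamma*lambda(x) - delta*x*b(x)
        new_lam = [gf_mul(gamma, lam[i]) ^ (gf_mul(delta, b[i - 1]) if i > 0 else 0)
                   for i in range(t + 1)]
        # Step iBM.3: update b(x), gamma, k
        if delta != 0 and k >= 0:
            b, gamma, k = lam, delta, -k - 1
        else:
            b, k = [0] + b[:-1], k + 1     # b(x) <- x*b(x)
        lam = new_lam
    return lam

# Assumed test case: two errors at locators X1 = alpha^3, X2 = alpha^10; s_r = sum_j Y_j * X_j^r.
t, Y = 2, [0x5A, 0x17]
s = [gf_mul(Y[0], EXP[(3 * r) % 255]) ^ gf_mul(Y[1], EXP[(10 * r) % 255]) for r in range(2 * t)]
lam = ibm(s, t)
for j in (3, 10):                          # lambda must vanish at the inverse locators
    assert poly_eval(lam, EXP[(255 - j) % 255]) == 0
print("error-locator coefficients:", [hex(c) for c in lam])
```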
B. A Generic iBM Architecture

The architecture consists of arithmetic units for computing the discrepancy (the DC block) and arithmetic units for updating the polynomials (the ELU block), as shown in Fig. 1. During a clock cycle, the DC block computes the discrepancy δ(r) and a control signal and passes them to the ELU block, which updates the polynomials during the same clock cycle; all arithmetic operations are completed within one clock cycle. Testing whether δ(r) is nonzero requires two-input OR gates arranged in a binary tree over the bits of δ(r). If the counter k(r) is implemented in two's-complement representation, the sign test k(r) ≥ 0 and the update k(r + 1) = −k(r) − 1 require only the complementation of all the bits of k(r). From Step iBM.4, it follows that the "discrepancies" computed during t further iterations after the main loop yield the coefficients of the error-evaluator polynomial.

Fig. 1. The iBM architecture.
Fig. 2. The discrepancy computation (DC) block.

The hardware requirements of the DC block are multipliers, adders, and m-bit latches, plus miscellaneous circuitry (counters, an arithmetic adder or ring counter, OR gates, inverters, and latches) in the control unit. From Fig. 2, the critical path of the DC block passes through a multiplier followed by the adder tree that accumulates δ(r). While the discrepancy and the control signal are produced in the DC block, the polynomial coefficient updates of Steps iBM.2 and iBM.3 are performed simultaneously in the ELU block. The processor element PE0 (hereinafter the PE0 processor) that updates one coefficient of the error-locator polynomial is illustrated in Fig. 3(a); the complete ELU architecture is shown in Fig. 3(b), where the discrepancy, γ(r), and the control signal are broadcast to all the PE0 processors. The latches in the PE0 processors are initialized to zero, except those holding λ_0 and b_0, which are initialized to 1 ∈ GF(2^m). The critical path delay of the complete iBM architecture is essentially that of the DC block, under the assumption that the path through the control unit (which generates the control signal) into the ELU-block multiplexer is faster than the direct data path; this is a reasonable assumption in most technologies.

A. Reformulation of the iBM Algorithm

1) Simultaneous Computation of Discrepancies and Updates: Viewing Steps iBM.2 and iBM.3 in terms of polynomials, Step iBM.2 computes λ(r+1)(x) = γ(r) λ(r)(x) − δ(r) x b(r)(x) (10), while Step iBM.3 sets b(r+1)(x) either to λ(r)(x) or to x b(r)(x). Next, note that the discrepancy δ(r) is the coefficient of x^r in the polynomial product λ(r)(x) S(x) (11).³ Much faster implementations are possible if the decoder computes all the coefficients of this product, even though only one of them is needed to compute δ(r) and to decide whether b(x) is to be set to λ(x). Suppose that at the beginning of a clock cycle the decoder has available all the coefficients of Δ(r)(x) = λ(r)(x) S(x) and of Θ(r)(x) = b(r)(x) S(x). Then δ(r) is available at the beginning of the clock cycle, and the decoder can compute λ(r+1)(x) and b(r+1)(x); furthermore, it follows from (10) and (11) that Δ(r+1)(x) and Θ(r+1)(x) are computed in exactly the same manner as λ(r+1)(x) and b(r+1)(x). Thus all four polynomial updates can be computed simultaneously, and the iterations of Step iBM.4 are not needed: the high-order coefficients of Δ(2t)(x) already form a polynomial of degree at most t − 1 that can serve as the error evaluator, so (7) can be rewritten in the form (12), which uses these coefficients directly. We next show that this variation of the error-evaluation formula has certain architectural advantages.

³Since the error-evaluator polynomial has degree less than t, the corresponding array needs only t PE0 processors.

2) The riBM Algorithm: Additional reformulation of the iBM algorithm eliminates the multiplexers that would otherwise route operands to the discrepancy computation. Syndromes consumed in earlier iterations cannot affect the value of any later discrepancy, so they can be discarded and the remaining coefficients shifted; with this form of update, the current discrepancy is always found in a fixed (zeroth) position of the updated polynomial. This update ultimately produces the coefficients needed by (12), so (12) can be used for error evaluation in the CSEE block. The resulting riBM algorithm initializes the discrepancy polynomial from the syndromes, updates it in each iteration with the same γ(r)/δ(r) rule used for λ(x) and b(x), and keeps the control structure of Step iBM.3: when the zeroth coefficient is nonzero and k(r) ≥ 0, the auxiliary polynomial is replaced by the shifted discrepancy polynomial, γ(r+1) takes the value of the discrepancy, and k(r+1) = −k(r) − 1; otherwise the auxiliary polynomial and γ are retained and k(r+1) = k(r) + 1. After 2t iterations, the outputs are the error-locator coefficients, i = 0, ..., t, and the error-evaluator coefficients, i = 0, ..., t − 1.
Fig. 4. The rDC block diagram. (a) The PE1 processor. (b) The rDC architecture.

Next, we consider architectures that implement the riBM algorithm.

B. High-Speed Reed-Solomon Decoder Architectures

As in the iBM architecture described in Section III, the riBM architecture consists of a reformulated discrepancy computation (rDC) block connected to an ELU block.

1) The rDC Architecture: The rDC block uses the processor PE1 shown in Fig. 4(a) and the rDC architecture shown in Fig. 4(b). Processor PE1 is very similar to processor PE0 of Fig. 3(a); however, the contents of the upper latch "flow through" PE1 while the contents of the lower latch "recirculate", whereas in processor PE0 the lower latch contents "flow through" and the upper latch contents "recirculate". The hardware complexity and the critical path delays of processors PE0 and PE1 are identical, and the critical path delay of the rDC block is independent of the error-correction capability t.

Fig. 5. The systolic riBM architecture.

Since the rDC and ELU blocks operate in parallel, the proposed riBM architecture achieves the same critical path delay as a single processor element, which is less than half the critical path delay of the iBM architecture. After 2t iterations, the low-order delay elements of the rDC array contain the coefficients that can be used for error evaluation as needed in (12). Ignoring the control unit, the hardware requirement of this architecture exceeds that of the iBM architecture by the additional multipliers, adders, and latches of the rDC array.

2) The RiBM Architecture: In the riBM architecture, the error-locator polynomial and the discrepancy polynomials are updated in separate arrays. As the iterations proceed, the polynomials are updated in the processors at the left-hand end of the array (effectively, the coefficients get updated and shifted leftwards), and after 2t iterations they reside in the lowest-numbered processors. The two computations pass through disjoint sets of processors and do not interfere with one another, so the same homogeneous array can carry out both of them.

Fig. 6. The homogeneous systolic RiBM architecture.

Because the polynomials shift leftwards, they do not overwrite the coefficients belonging to the other computation. We denote the contents of the single array of 3t + 1 processors in the RiBM architecture by one combined polynomial.
Its initial values are taken from the syndromes and from the iBM initialization: positions 0 through 2t − 1 are loaded with s_0, ..., s_{2t−1}, positions 2t through 3t − 1 are initialized to zero, position 3t is initialized to one, and k(0) = 0, γ(0) = 1. For r = 0 step 1 until 2t − 1, each step updates the whole array with the same update rule used in the riBM algorithm (Step RiBM.1) and applies the same control: if the zeroth array element is nonzero and k(r) ≥ 0, then the auxiliary contents are reloaded from the shifted array, γ(r+1) takes the value of the zeroth element, and k(r+1) = −k(r) − 1; otherwise k(r+1) = k(r) + 1. The output after 2t iterations consists of the error-locator coefficients, i = 0, ..., t, and the error-evaluator coefficients ω_i(2t), i = 0, ..., t − 1.

The riBM and RiBM architectures require considerably more gates than the conventional iBM architecture (Blahut's version), but they achieve a much shorter critical path delay; Table I compares the hardware complexity and path delays of the various architectures. The hypersystolic decoder of [2] needs several multiplexers to route the various operands to the arithmetic units, and additional latches to store one addend until the other addend has been computed by the multiplier; as a result, the architecture described in [2] requires not only many more latches and multiplexers, but also many more clock cycles than the riBM and RiBM architectures, and its critical path delay is slightly larger because of the multiplexers in the various paths. On the other hand, finite-field multipliers themselves consist of large numbers of gates, so the multiplier count dominates the overall complexity. The processing element PE1 as synthesized is shown in Fig. 7(a), where the upper eight latches store one GF(2^8) element; the RiBM architecture was synthesized in a 3.3-V, 0.25-µm CMOS technology [Fig. 7(b)]. In the next section, we develop a pipelined architecture that further reduces the critical path delay by as much as an order of magnitude by using a block-interleaved code.

TABLE I. Comparison of hardware complexity and path delays.

Fig. 7. The RiBM architecture synthesized in a 3.3-V, 0.25-µm CMOS technology. (a) The PE1 processing element. (b) The RiBM architecture.

V. PIPELINED REED-SOLOMON DECODERS

The iterations in the original BM algorithm were pipelined using the look-ahead transformation [12] by Liu et al. [9], and the same method can be applied to the riBM and RiBM algorithms; however, such pipelining requires complex overhead and control hardware. On the other hand, pipeline interleaving (also described in [12]) of a decoder for a block-interleaved Reed-Solomon code is a simple and efficient technique that can reduce the critical path delay in the decoder by an order of magnitude. We describe our results for only the RiBM architecture of Section IV, but the same techniques can also be applied to the riBM architecture as well as to the decoder architectures described in Section III.

A. Block-Interleaved Reed-Solomon Codes

1) Block Interleaving: Error-correcting codes for use on channels in which errors occur in bursts are often interleaved so that symbols from the same codeword are not transmitted consecutively. A burst of errors thus causes single errors in multiple codewords rather than multiple errors in a single codeword; the latter occurrence is undesirable, since it can easily overwhelm the error-correcting capability of the code and cause a decoder failure or decoder error. Two types of interleavers, block interleavers and convolutional interleavers, are commonly used (see, e.g., [16], [18]); we restrict our attention to block-interleaved codes. Block-interleaving an (n, k) code to depth M places M codewords row-wise into an M × n memory array, which is then read out column by column to form the transmitted stream, so that each of the M rows carries one codeword.
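Block interleaving itself is simple to express in code. The generic sketch below (not tied to the delay-scaled architecture; names and sizes are illustrative) writes M codewords of length n row-wise and reads them out column-wise, so a burst of up to M consecutive channel errors lands in M different codewords.

```python
def block_interleave(codewords):
    """Write M codewords of length n row-wise, read the array out column-wise."""
    M, n = len(codewords), len(codewords[0])
    return [codewords[row][col] for col in range(n) for row in range(M)]

def block_deinterleave(stream, M):
    """Invert block_interleave: symbol idx belongs to codeword idx % M, position idx // M."""
    n = len(stream) // M
    rows = [[None] * n for _ in range(M)]
    for idx, sym in enumerate(stream):
        rows[idx % M][idx // M] = sym
    return rows

cws = [[f"c{r}{c}" for c in range(4)] for r in range(3)]   # 3 codewords of length 4
tx = block_interleave(cws)
print(tx)            # c00 c10 c20 c01 ... : adjacent transmitted symbols come from different codewords
assert block_deinterleave(tx, 3) == cws
```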
In essence, the interleaved data stream is treated as an M-channel data stream, and the stream in each channel is encoded with the same generator polynomial; the data symbol ordering is exactly that produced by interleaving the data stream in blocks of k symbols to depth M. The corresponding delay-scaled encoder is obtained by replacing every delay element in the conventional encoder with M delay elements and retiming the architecture to account for the additional delays; it then produces a block-interleaved Reed-Solomon codeword directly. Note that while the interleaver memory array has been eliminated, the delay-scaled encoder uses M times as much memory as the conventional encoder. The decoder can be delay-scaled in the same way: for example, delay-scaling the PE1 processors in the RiBM architecture of Fig. 6 results in the delay-scaled processor DPE1 shown in Fig. 8, in which each latch is replaced by M latches holding the corresponding quantities of the M interleaved codewords. The latches are initialized with the syndromes of the respective codewords, and after the 2t iterations (now spread over M times as many clock cycles) the processors contain the interleaved error-locator polynomials.

Fig. 8. Delay-scaled DPE1 processor; initial conditions in the latches are indicated in ovals. The delay-scaled RiBM architecture is obtained by replacing the PE1 processors in Fig. 6 with DPE1 processors and delay-scaling the control unit as well.

We remark that delay-scaled decoders can also be used to decode block-interleaved Reed-Solomon codewords produced by memory-array interleavers; however, the data symbols at the output of the decoder will still be interleaved, and a memory array is then needed to deinterleave them.

B. The Pipelined RiBM (pRiBM) Architecture

1) A Pipelined Multiplier Architecture: While pipelining a multiplier, especially a feedforward structure, is ordinarily trivial, it is not so in this case, because for RS decoders the pipelining must be done in such a manner that the initial conditions in the pipelining latches are consistent with the syndrome values generated by the SC block. The design of finite-field multipliers depends on the choice of basis for the representation; here we consider only the standard polynomial basis. The pipelined multiplier architecture is based on writing the product of two field elements as a sum of partial terms, one term per stage, computed by a chain of multiplier processing elements (MPEs); the pipelined multiplier thus consists of a cascade of MPE stages. Because the initial contents of the pipelining latches can be chosen to be syndrome values, and because the initial contents of the upper latches of the MPEs appear in succession at the output during the first clock cycles, the pipelined multiplier behaves consistently with the rest of the decoder; this property is crucial to the proper operation of the proposed pipelined decoder. Tremendous speed gains can be achieved if the pipelined multiplier architecture is used in decoding a block-interleaved Reed-Solomon code. Practical considerations such as the delays of the pipelining latches, clock skew, and jitter will prevent the fullest realization of these gains; nevertheless, the pipelined multiplier structure in combination with the systolic architecture provides significant gains over existing approaches.

Fig. 9. The pipelined multiplier block diagram. (a) The multiplier processing element (MPE). (b) The multiplier architecture; initial conditions of the latches at the y input are indicated in ovals.

Fig. 10. Pipelined PPE1 processor; initial conditions in the latches are indicated in ovals. The pipelined RiBM architecture is obtained by replacing the PE1 processors in Fig. 6 with PPE1 processors and employing the pipelined delay-scaled controller.

2) The Pipelined Control Unit: If the pipelined multiplier architecture described above (and shown in Fig. 9) is used in the DPE1 processors of Fig. 8, the critical path delay of DPE1 is reduced from the delay of a full multiplier and adder to that of a single pipeline stage. The delays in the M-delay-scaled RiBM architecture (see Fig. 8) can be retimed to the outputs of the control unit and then used to pipeline the control unit as well. Designers of decoders for noninterleaved codes should give serious thought to the following strategy: read in and internally interleave the received words, decode them with the pipelined architecture, and deinterleave the output. For large interleaving depths, however, the pRiBM architecture may be too large to implement on a single chip.
In such a case, one can deinterleave first and then reinterleave to a suitable depth. In fact, the "deinterleave and reinterleave" strategy can be used to construct a universal decoder around a single decoder chip with a fixed interleaving depth.

VI. CONCLUDING REMARKS

We have shown that the application of algorithmic transformations to the Berlekamp-Massey algorithm results in the riBM and RiBM architectures, whose critical path delay is less than half that of conventional architectures such as the iBM architecture. The riBM and RiBM architectures use systolic arrays of identical processor elements. For block-interleaved codes, the deinterleaver can be embedded in the decoder architecture via delay scaling. Furthermore, pipelining the multiplications in the delay-scaled architecture results in an order-of-magnitude reduction in the critical path delay. In fact, the high speeds at which the pRiBM architecture can operate make it feasible to use it to decode noninterleaved codes by the simple stratagem of internally interleaving the received words, decoding the resulting interleaved word using the pRiBM architecture, and then deinterleaving the output.

Future work is being directed toward integrated-circuit implementations of the proposed architectures and their incorporation into broadband communication systems such as those for very high-speed digital subscriber loops and wireless systems.

ACKNOWLEDGMENT

The authors would like to thank the reviewers for their constructive criticisms, which have resulted in significant improvements in the manuscript.

REFERENCES

[1] E. R. Berlekamp, Algebraic Coding Theory. New York: McGraw-Hill, 1968 (revised ed., Laguna Hills, CA: Aegean Park, 1984).
[2] E. R. Berlekamp, G. Seroussi, and P. Tong, "A hypersystolic Reed-Solomon decoder," in Reed-Solomon Codes and Their Applications, S. B. Wicker and V. K. Bhargava, Eds. Piscataway, NJ: IEEE Press, 1994.
[3] R. E. Blahut, Theory and Practice of Error-Control Codes. Reading, MA: Addison-Wesley, 1983.
[4] H. O. Burton, "Inversionless decoding of binary BCH codes," IEEE Trans. Inform. Theory, vol. IT-17, pp. 464-466, Sept. 1971.
[5] H.-C. Chang and C. Shung, "A Reed-Solomon product code (RS-PC) decoder for DVD applications," in Int. Solid-State Circuits Conf., San Francisco, CA, Feb. 1998, pp. 390-391.
[6] H.-C. Chang and C. B. Shung, "New serial architectures for the Berlekamp-Massey algorithm," IEEE Trans. Commun., vol. 47, pp. 481-483, Apr. 1999.
[7] M.
A.Hasan,V.K.Bhargava,and T.Le-Ngoc,Reed–SolomonCodes and Their Applications,S.B.Wicker and V.K.Bhargava, Eds.Piscataway,NJ:IEEE Press,1994.Algorithms and architecturesfor a VLSI Reed–Solomon codec.[8]S.Kwon and H.Shin,“An area-efficient VLSI architecture of aReed–Solomon decoder/encoder for digital VCRs,”IEEE Trans.Consumer Electron.,pp.1019–1027,Nov.1997.[9]K.J.R.Liu,A.-Y.Wu,A.Raghupathy,and J.Chen,“Algorithm-basedlow-power and high-performance multimedia signal processing,”Proc.IEEE,vol.86,pp.1155–1202,June1998.[10]J.L.Massey,“Shift-register synthesis and BCH decoding,”IEEE Trans.Inform.Theory,vol.IT-15,pp.122–127,Mar.1969.[11]J.Nelson,A.Rahman,and E.McQuade,“Systolic architectures for de-coding Reed-Solomon codes,”in Proc.Int.Conf.Application Specific Array Processors,Princeton,NJ,Sept.1990,pp.67–77.[12]K.K.Parhi and D.G.Messerschmitt,“Pipeline interleaving and paral-lelism in recursive digital filters—Parts I and II,”IEEE Trans.Acoust.Speech Signal Processing,vol.37,pp.1099–1134,July1989.[13]I.S.Reed,M.T.Shih,and T.K.Truong,“VLSI design of inverse-freeBerlekamp–Massey algorithm,”Proc.Inst.Elect.Eng.,pt.E,vol.138, pp.295–298,Sept.1991.[14]H.M.Shao,T.K.Truong,L.J.Deutsch,J.H.Yuen,and I.S.Reed,“A VLSI design of a pipeline Reed–Solomon decoder,”IEEE Trans.Comput.,vol.C-34,pp.393–403,May1985.[15]P.Tong,“A40MHz encoder-decoder chip generated by aReed-Solomon code compiler,”Proc.IEEE Custom IntegratedCircuits Conf.,pp.13.5.1–13.5.4,May1990.[16]R. B.Wells,Applied Coding and Information Theory for Engi-neers.Upper Saddle River,NJ:Prentice-Hall,1999.[17]S.R.Whitaker,J.A.Canaris,and K.B.Cameron,“Reed–SolomonVLSI codec for advanced television,”IEEE Trans.Circuits Syst.Video Technol.,vol.1,pp.230–236,June1991.[18]S.B.Wicker,Error Control Systems for Digital Communication andStorage.Englewood Cliffs,NJ:Prentice-Hall,1995.[19]W.Wilhelm,“A new scalable VLSI architecture for Reed–Solomon de-coders,”IEEE J.Solid-State Circuits,vol.34,pp.388–396,Mar.1999.Dilip V.Sarwate(S’68–M’73–SM’78–F’90)received the Bachelor of Science degree in physicsand mathematics from the University of Jabalpur,Jabalpur,India,in1965,the Bachelor of Engineeringdegree in electrical communication engineering fromthe Indian Institute of Science,Bangalore,India,in1968,and the Ph.D.degree in electrical engineeringfrom Princeton University,Princeton,NJ,in1973.Since January1973,he has been with the Univer-sity of Illinois at Urbana-Champaign,where he is cur-rently a Professor of Electrical and Computer Engi-neering and a Research Professor in the Coordinated Science Laboratory.His research interests are in the general areas of communication systems and in-formation theory,with emphasis on multiuser communications,error-control coding,and signal design.Dr.Sarwate has served as the Treasurer of the IEEE Information Theory Group,as an Associate Editor for Coding Theory of the IEEE T RANSACTIONS ON I NFORMATION T HEORY,and as a member of the editorial board of IEEE P ROCEEDINGS.He was a Co-Chairman of the18th,19th,31st,and32nd AnnualAllerton Conferences on Communication,Control,and Computing held in1980, 1981,1993,and1994,respectively.In1985,he served as a Co-Chairman of the Army Research Office Workshop on Research Trends in Spread Spectrum Sys-tems.He has also been a member of the program committees for the IEEE Sym-posia on Spread Spectrum Techniques and Their Applications(ISSSTA)and the 1998and2001Conferences on Sequences and Their Applications(SETA),as well as of several advisory committees for international conferences.。
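For readers who want a behavioral reference against which to check hardware such as the riBM/RiBM datapaths described above, the underlying recursion is the inversionless Berlekamp–Massey iteration. The following is a minimal software sketch of that recursion only (not of the systolic hardware); the field polynomial 0x11D and all helper names are illustrative assumptions, not taken from the paper.

```python
# Minimal software sketch of the inversionless Berlekamp-Massey recursion.
# Behavioral model only; 0x11D and all names are illustrative assumptions.

def gf_mul(a, b, prim=0x11D):
    """Multiply two GF(2^8) elements, reducing by the primitive polynomial."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= prim
        b >>= 1
    return p

def berlekamp_massey(syndromes, t):
    """Return the error-locator polynomial (lowest degree first) from 2t syndromes."""
    lam = [1] + [0] * (2 * t)   # error-locator Lambda(x)
    b   = [1] + [0] * (2 * t)   # auxiliary polynomial B(x)
    gamma, L = 1, 0
    for r in range(2 * t):
        # discrepancy: delta = sum_{j=0..L} Lambda_j * S_{r-j}
        delta = 0
        for j in range(L + 1):
            if r - j >= 0:
                delta ^= gf_mul(lam[j], syndromes[r - j])
        # Lambda(x) <- gamma*Lambda(x) - delta*x*B(x)  (over GF(2^8), '-' is XOR)
        new_lam = [gf_mul(gamma, lam[i]) ^ (gf_mul(delta, b[i - 1]) if i > 0 else 0)
                   for i in range(2 * t + 1)]
        if delta != 0 and 2 * L <= r:
            b, gamma, L = lam, delta, r + 1 - L   # length change: keep old Lambda as B
        else:
            b = [0] + b[:-1]                      # no length change: B(x) <- x*B(x)
        lam = new_lam
    return lam
```

Feeding in the 2t syndromes of a received word yields Lambda(x), whose roots (located by a Chien search in hardware) give the error positions.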

New Technologies in Mobile Communications (细说移动通信中的新技术)

As people's living space, activity space, and areas of participation keep expanding, mobile phones are expected to do far more than carry conversations. Moreover, existing communication systems leave much to be desired in areas such as system capacity, voice distortion, dropped calls, power radiation, and slow data transmission, so existing techniques alone cannot meet users' new demands for communication. New communication technologies are therefore needed, and a variety of them have emerged; the following are some of the new techniques that 3G systems may adopt.

1. Channel coding and decoding
This technique may be used in the DS-CDMA communication standard, where channel coding and decoding mainly serve to reduce the required transmit power and to combat the fading that is unavoidable in a wireless environment. Coding combined with interleaving improves bit-error-rate (BER) performance: compared with an uncoded system, convolutional codes can improve the bit error rate by about two orders of magnitude, to roughly 10^-3 to 10^-4, while a DS-CDMA system using Turbo codes can reach about 10^-6. Candidate channel codes for DS-CDMA include Reed-Solomon and Turbo codes; because their decoding performance can approach the Shannon limit, Turbo codes can serve as the data coding and decoding scheme for 3G, while convolutional codes are mainly used for low-rate speech and signalling.

2. Smart antennas
Smart antennas have become one of the most active fields in the development of mobile communication technology, and in recent years almost all advanced mobile systems have planned to use them; the benefits smart antennas bring to a mobile system are, at present, hard to replace with any other technique. Smart antenna technology uses adaptive beamforming to increase the gain in the direction of arrival of the desired user, while the nulls of the antenna pattern are used to suppress interference from high-power users elsewhere in space. The main difficulties lie in the inconsistency among the multiple RF channels and the calibration techniques needed to correct it, in the complexity of the baseband processing when combined with RAKE-receiver combining, and, with FDD, in the mismatch between the uplink and downlink directions of arrival.

3. Multiuser detection
In third-generation mobile systems, WCDMA is a typical example of the application of multiuser detection. As one of the key techniques in WCDMA, multiuser detection lets the system achieve good performance in high-speed channel environments; by removing intra-cell interference it improves system performance and increases capacity, and it effectively mitigates the near-far effect in DS-SS WCDMA systems. Its difficulty is the high complexity of the baseband processing.

4. Soft handover
The complaint mobile users raise most often about the network is dropped calls; people not only suffer from them during conversations, but some also worry that if networks come to support fax services, a wireless fax could be cut off in mid-transmission.
Dropped calls occur because most mobile systems perform a "hard" handover: when the phone moves from the coverage area of one base station into that of another, it first breaks the link with the original base station and then searches for the new one, which is commonly referred to as "break before make." The interruption normally lasts only a few hundred milliseconds and is imperceptible, but if the phone enters a shielded area, or the target channels are busy and it cannot reach a new base station, the call is dropped. CDMA systems instead use "soft" handover: during the handover the phone keeps its link to the original base station while it also establishes contact with the new one, and only after the new link has been confirmed is the original link released, i.e. "make before break," so dropped calls are almost entirely avoided.

5. PHS
PHS stands for Personal Handyphone System. The network system was developed by the Japan Telegraph Company; it uses digital transmission and combines advanced radio-access techniques with intelligent digital-network capabilities. PHS transmits radio signals at low power, so each base station covers a small area; it is better suited to urban areas and its tariffs are relatively low. PHS provides complete communication services, with data-transmission capability sufficient to support wireless multimedia communication. It also offers a variety of Internet access interfaces, such as radio access, telephone lines, and optical fibre cable, and because its base stations are very light, they can be installed in venues such as KTV lounges and on streets.
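To make the channel-coding discussion above a little more concrete, here is a minimal sketch of a rate-1/2 convolutional encoder of the kind used for low-rate speech and signalling. The constraint length 7 and the octal generators 171/133 are common textbook choices assumed here for illustration; they are not parameters taken from this article.

```python
# Rate-1/2 convolutional encoder sketch (constraint length 7, generators 171/133 octal).
# The generator choice is a common convention and an assumption, not from this article.

G1, G2 = 0o171, 0o133   # generator polynomials
K = 7                   # constraint length

def parity(x):
    """Parity (XOR of all bits) of an integer."""
    p = 0
    while x:
        p ^= x & 1
        x >>= 1
    return p

def conv_encode(bits):
    """Encode a bit list; emits two coded bits per input bit (rate 1/2)."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):          # append tail bits to flush the encoder
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(parity(state & G1))
        out.append(parity(state & G2))
    return out

print(conv_encode([1, 0, 1, 1]))
```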

Erasure-Coding Based Routing for Opportunistic NetworksY ong Wang,Sushant Jain†,Margaret Martonosi,Kevin Fall‡Princeton University,†University of Washington,‡Intel Research BerkeleyABSTRACTRouting in Delay Tolerant Networks(DTN)with unpredictable node mobility is a challenging problem because disconnections are preva-lent and lack of knowledge about network dynamics hinders good decision making.Current approaches are primarily based on redun-dant transmissions.They have either high overhead due to exces-sive transmissions or long delays due to the possibility of making wrong choices when forwarding a few redundant copies.In this pa-per,we propose a novel forwarding algorithm based on the idea of erasure codes.Erasure coding allows use of a large number of re-lays while maintaining a constant overhead,which results in fewer cases of long delays.We use simulation to compare the routing performance of using erasure codes in DTN with four other categories of forwarding al-gorithms proposed in the literature.Our simulations are based on a real-world mobility trace collected in a large outdoor wild-life environment.The results show that the erasure-coding based algo-rithm provides the best worst-case delay performance with afixed amount of overhead.We also present a simple analytical model to capture the delay characteristics of erasure-coding based forward-ing,which provides insights on the potential of our approach. Categories and Subject DescriptorsC.2.2[Network Protocols]:Routing protocolsGeneral TermsAlgorithms,Performance,TheoryKeywordsRouting,Delay Tolerant Network,Erasure Coding1.INTRODUCTIONOpportunistic networks are an important class of DTNs in which contacts(time-window when data can be exchanged)appear op-portunistically without any prior information.Examples of such networks are sparse mobile ad hoc networks,such as ZebraNet[8], Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on thefirst page.To copy otherwise,to republish,to post on servers or to redistribute to lists,requires prior specific permission and/or a fee.SIGCOMM’05Workshops,August22–26,2005,Philadelphia,PA,USA. 
Copyright2005ACM1-59593-026-4/05/0008...$5.00.where no contemporaneous end-to-end path may exist due to radio range limitations.Routing becomes challenging in such networks because contact dynamics are not known in advance and no sin-gle path can be relied upon.Most current approaches are based on some kind of data replication over multiple paths[14,8].In this paper,we propose an alternate method of improving delay perfor-mance.The basic idea is to erasure code a message and distribute the generated code-blocks over a large number of pared to sending a full copy of the message over a relay,only a fraction of code-blocks are sent over each relay.This fraction allows us to control the routing overhead in terms of bytes transmitted.For sce-narios like ZebraNet,where nodes are energy constrained,limiting such overhead is an important design goal.The basic idea of using erasure coding is simple and has been explored in many applications[11].However,it is not clear if and when it will perform better than simpler alternatives based on pure replications in DTNs.In this paper,we study the performance of an erasure coding approach and other existing alternatives on a diverse mobility scenarios with different node densities and moving pat-terns.We use both synthetic and real-world DTN mobility traces as input to our simulations.We discover that the erasure coding approach can provide good delay guarantees by using afixed over-head.Fundamentally,the benefits of erasure coding arise in elimi-nating cases when long delays arise due to bad choice of forward-ing relays.Erasure coding allows the transmission to be spread over multiple relays while using afixed amount of overhead.This results in a protocol much more robust to failures of a few relays or some bad choices.Wefind that the erasure-coding based algorithm is the least sensitive to different parameters in terms of message latency and message delivery rate.Also,we derive an expression for the delay distribution under a simple network model to argue when and why the erasure coding approach outperforms other sim-pler alternatives.In one extreme case,we show that the average delay of a simple replication strategy will be infinite,whereas,by using erasure coding the average delay can be reduced to a small constant.Erasure coding can also help combat packet loss due to bad chan-nel quality or packet drops due to congestion.A full investigation of the benefits of this aspect is outside the scope of this paper.Here, our focus is on a less-conventional use of erasure coding:to achieve better delay performance using afixed amount of replication.2.BACKGROUNDIn an opportunistic network,reliable data delivery is often achieved using replication to send identical copies of a message over multi-ple paths to mitigate the effects of disconnections.Typical algo-rithms differ based on their decisions as to who forwards the data, at what time is the data forwarded,and to whom is the data sent.Inthe following discussions,we define a contact as an opportunity to communicate between two nodes and a relay denotes a forwarding node.•Flooding(flood):each node forwards any non duplicated messages(including messages received on behalf of other nodes)to any other node that it encounters.flood delivers messages with the minimum delay if there are no resource constraints,such as link bandwidth or node storage.•Direct contact(direct):the source holds the data until it comes in contact with the destination.direct uses minimal resources since each message is transmitted at most 
once. However, it may incur long delays.
• Simple replication (srep(r)): this is a simple replication strategy in which identical copies of the message are sent over the first r contacts. Here, r is the replication factor. Only the source of the message sends multiple copies. The relay nodes are allowed to send only to the destination; they cannot forward it to another relay. This leads to small overhead as the message flooding is controlled to take place only near the source. This class of forwarding algorithms is also known as the two-hop relay algorithm [3, 2]. There is a natural trade-off between overhead (r) and data delivery latency. A higher r leads to more storage/transmissions but has lower delays.
• History-based (history(r)): here history is used as an indicator of the probability of delivery. Each node keeps track of the probability that a given node will deliver its messages. The r highest ranked relays (based on delivery probability) are selected as forwarding nodes. ZebraNet uses the frequency at which a node encounters the destination as an indicator of the delivery probability. We use the same implementation as [8] in our simulations.
A summary of these forwarding algorithms is listed in Table 1.

Table 1: Summary of various forwarding algorithms.
Algorithm   | Who         | When        | To whom
flood       | all nodes   | new contact | all new
direct      | source only | destination | destination only
srep(r)     | source only | new contact | r first contacts
history(r)  | all nodes   | new contact | r highest ranked

3. THE ERASURE-CODING BASED FORWARDING ALGORITHM
As discussed in the previous section, most current approaches for routing in opportunistic networks are based on sending multiple identical copies over different paths. There is a fundamental trade-off between overhead and delay. On one extreme, flooding achieves the best possible delay but results in very high overhead. The other extreme is protocols like direct which have low overhead because they send only few copies or none at all. Lack of knowledge about the topology dynamics prevents distinguishing good paths from bad ones. Therefore, these protocols may result in long delays if bad paths are selected. In this section, we describe a forwarding algorithm based on the idea of erasure coding. Our algorithm achieves better worst-case delay performance than existing approaches with a fixed overhead.
3.1 Erasure coding background
Erasure codes operate by converting a message into a larger set of code blocks such that any sufficiently large subset of the generated code blocks can be used to reconstruct the original message. More precisely, an erasure encoding takes as input a message of size M and a replication factor r. The algorithm produces M*r/b equal sized code blocks of size b, such that any (1+ε)·M/b erasure coded blocks can be used to reconstruct the message. Here, ε is a small constant and varies depending on the exact algorithm used, such as Reed-Solomon codes or Tornado codes. The selection of algorithms involves trade-offs between coding/decoding efficiency and the minimum number of code blocks needed to reconstruct a message.
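The block arithmetic just described (produce M·r/b code blocks of size b; any roughly M/b of them suffice when ε is negligible) is easy to sanity-check with a small counting sketch. The relay split used by the ec algorithm in the next subsection is included for context; all parameter values below are illustrative, and the sketch only counts blocks rather than implementing the coding itself.

```python
# Accounting sketch of erasure-coding based forwarding (ec), assuming an ideal
# erasure code (epsilon = 0).  It only tracks block counts, not the coding itself.

def ec_split(message_bytes, block_size_b, replication_r, k):
    """Return (#relays, blocks per relay, blocks needed to decode)."""
    m_blocks = message_bytes // block_size_b          # M/b message-sized blocks
    total_blocks = m_blocks * replication_r           # M*r/b code blocks generated
    relays = k * replication_r                        # code blocks spread over k*r relays
    per_relay = total_blocks // relays
    return relays, per_relay, m_blocks

def decodable(delivered_blocks, m_blocks):
    """Message is decodable once any M/b code blocks have reached the destination."""
    return delivered_blocks >= m_blocks

relays, per_relay, need = ec_split(message_bytes=1_000_000, block_size_b=1_000,
                                   replication_r=2, k=8)
# With r=2 and k=8: 16 relays, each carrying 1/8 of a message copy;
# any 8 relays that reach the destination deliver 8 * per_relay = need blocks.
print(relays, per_relay, need, decodable(8 * per_relay, need))
```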
For example,Tornado codes have efficient encoding and decod-ing steps based on simple operations such as XOR,at the cost of slightly higher .A thorough discussion of the various trade-offs is presented in[11].The choice of exact erasure coding algorithm is not important in our forwarding algorithm.The key aspect is that when using erasure coding with a replication factor of r,only1/r of the code blocks are required to decode the message.Therefore, we ignore constant for simplicity.Constant b is the block-size and is implementation dependent.3.2Erasure coding based forwarding(ec)Our erasure-coding based forwarding algorithm can be under-stood as an enhancement to the simple replication algorithm(srep) described in Section2.In srep with a replication factor r,the source sends r identi-cal copies over r contacts and relays are only allowed to send di-rectly to the destination.In the erasure-coding based algorithm,we first encode the message at the source and generate a large number of code blocks.The generated code blocks are then equally split among thefirst kr relays,for some constant k.In comparison with srep,this approach uses a factor of k more relays and each relay carries a factor of1/k less data.However,the number of bytes generated are rM,the same as the number of bytes generated by srep(r).Now by definition of erasure coding(with rate r,message size M),the message can be decoded at the destination if1/r of the generated code blocks are received.Since code blocks are divided equally among kr relays,the message can be decoded as soon as any k relays deliver their data if we assume that no code blocks are lost during transmissions to and from a relay.When k=1,the erasure coding approach has the same effect as the simple replication approach,which is,to use thefirst r relays and to each carry a copy of the original message.3.3Benefits of erasure coding in forwarding In simple replication,r relays are used to improve the delay per-formance.The erasure-coding based approach,instead,utilizes kr relays for the same amount of overhead.Therefore,one can ex-pect that the chances of at least some relays having low delays are higher,compared to using only r relays.At the same time,erasure coding requires at least k relays to succeed(instead of1in srep) before the data can be reconstructed.Therefore,if the number of such low-delay relays are larger than k,the erasure-coding based approach will successfully deliver the message with a lower delay than simple replication.Thus,the fundamental question is whether to use r relays and wait for one to succeed or use r∗k relays and wait for k relays to succeed.We answer this question using a sim-ple analytical model in Section5.The main observation is that if k is large,the delay distribution converges to a constant.Therefore, with the erasure-coding based approach,one can be almost assured of a constant delay.4.EV ALUATIONIn this section,we use simulation to compare forwarding al-gorithms discussed in Section2and the erasure-coding based ap-proach presented in Section3.4.1MethodologyWe use dtnsim,the discrete event simulator for DTN environ-ments from[6].We implemented the following routing algorithms in dtnsim:flooding(flood),direct contact routing(direct), history-based routing(history),simple replication routing(srep)and erasure-coding based routing(ec).For srep and ec,we rep-resent different replication factors and number of relays used tosplit,using srep-rep r and ec-rep r-p n.Here,r is the replication factor and n are the number of relays among 
which code blocks aredivided.We simulate using a real-world mobility trace collected as partof a wildlife tracking experiment in Kenya.The mobile networkwas deployed by the ZebraNet group in January,2004[15].Track-ing collars are placed on the necks of selected zebras.Each collaruses GPS to record its position data every8minutes,and period-ically sends back position log data to a mobile base station(e.g.,a vehicle).Due to extreme weather and waterproofing issues,aswell as antenna problems,only one tracking collar returned a com-plete set of uninterrupted movement data for the whole32-hour duration.Due to such limitations1,we create a semi-synthetic mo-bility model as follows:we synthesize node speed and turn angledistributions from the observed data and create other node move-ments following the same distribution.We scale the grid size to 6km×6km with a radio range of1km.Initially,the nodes are ran-domly distributed in the grid.The base station moves along a rect-angular path near the grid boundary.All messages are of size1M.Each node generates12messages every day.The total duration ofsimulation is16days.Another mobility model based on heavy-tailed inter-contact times is discussed in Section4.4.We compare the routing performance of different forwarding al-gorithms using the following three metrics:•Data success rate:the ratio of the number of messages that are delivered to the destination within a time T(deadline).If T is unspecified,it is considered to be the whole durationof the simulation,i.e16days.•Data latency:the duration between message generation and message reception(at its destination).In a DTN,latency may not be the most critical issue.However,it is always desirable to have fast data delivery whenever possible.The latency distribution metric measures how efficiently a protocol uses the available contact opportunities.•Routing overhead:the ratio of the number of bytes trans-mitted to the number of bytes generated during the simula-tion time.This metric measures the extra data transmitted for each message generated,while a metric based solely on the number of message transmissions will overlook the fact that ec has smaller message sizes.The radio transmission energy is proportional to the total number of byes transmit-ted.Therefore,this metric reflects the energy efficiency of the forwarding algorithm.1At the moment,we are working on collecting more node traces during our secondfield trip in June,2005.We will work on adjust-ing the model once we have those node traces available.0.20.40.60.810 5 10 15 20 25 30 35 40 45 50CDFDelay (hours)Link 1Link 2Link 3Link 4(a)Inter-Contact time distribution0.20.40.60.810 2 4 6 8 10CDFDelay (hours)Link 1Link 2Link 3Link 4(b)Contact duration distributionFigure1:Cumulative distribution plots for inter-contact times and contact durations for the ZebraNet trace.Thefigure plots these two metrics for four randomly selected links.Other links show similar characteristics.The contact duration distribution uses a different x-axis range to separate different curves.Ob-serve that inter-contact time patterns show significant variation and can be very long in some cases.4.2Zebra trace analysisTo begin our analysis,wefirst characterize the contact opportu-nities in the ZebraNet trace,with a focus on inter-contact time and contact durations.These two metrics are important in understand-ing the behavior of different forwarding algorithms on the ZebraNet trace.Simply put,inter-contact time is the time interval for which a link is down(no communications are 
possible during this time) and contact duration is the interval for which a link is up.Figure1plots the distribution of these two metrics for four ran-domly selected pairs of nodes(links)in the ZebraNet trace.Since almost all the links in the trace show similar characteristics,we just use these four random links as examples.As shown in Figure1(a),the inter-contact time distribution has few cases when a link is broken for a very long time.This ob-servation is important because such inter-contact time patterns can lead to extremely long delays when using a naive forwarding al-gorithm.As expected in such a sparse network,link up-times are relatively short(as compared to the link down times)and therefore, it is important to efficiently utilize the available communication op-portunity.4.3Impact of node density4.3.1Data latency distributionFigure2(a)and2(b)show the data latency distribution for the0 0.20.40.60.8 1 02040 8010060C C D FDelay (hours)(a)34nodes0 0.20.4 0.60.8 12040 8010060C C D FDelay (hours)(b)66nodesFigure 2:Latency distribution for different forwarding algo-rithms.Traffic injection rate is 12messages per day.The distribution is shown in Complementary CDF (CCDF)curve.A numeric presentation of this figure is in Table 2which lists the exact 50th ,90th and 99th percentiles delay.The erasure-coding based approach has significantly smaller tail than other approaches (except flood).flood has the lowest latencies but has high overhead as discussed later.ZebraNet trace with 34nodes and 66nodes respectively.Discount-ing source and destination,the total number of relays are 32and 64respectively.The distribution is shown in Complementary CDF (CCDF)curve.Table 2shows various data latency percentiles for both 34-node and 66-node experiments to facilitate the comparison of worse-case delay performance among all the algorithms considered.Generally,ec has a higher 50th percentile compared to other al-gorithms as shown in both Figure 2(a)and Figure 2(b)but a lower 99th percentile.This is because it takes longer to find enough relays to distribute data replicas.However,once ec distributes enough code blocks by forwarding along multiple relays (the num-ber of relays is larger than that used by srep ),it takes a much shorter time to transfer the messages to the destination since any n/r relays are required to be successful.Since n is much larger than r ,ec can fully utilize the diversity of multiple relays and is very robust to bad performance of individual relays.That is,in the presence of unpredicted failures or mobility of some of the re-lays,ec still has a good chance of sending the messages to the destination by routing code blocks through other functional relays.Algorithm 34nodes 66nodes 50%90%99%50%90%99%ec-rep2-p80.440.84 1.32———ec-rep2-p160.530.85 1.210.510.83 1.17ec-rep2-p32———0.590.82 1.04srep-rep20.240.88 1.700.250.89 1.91direct 0.49 1.63 3.270.51 1.79 3.54history 0.180.879.500.140.7210.83flood0.0130.0440.120.000120.00910.032Table 2:Latency (in days)for different algorithms for two dif-ferent node densities.This is the same data as shown in Figure 2.We see that ec has significantly lower 99th percentile la-tency.This indicates that ec is effective in getting rid of very high latency cases.Therefore,erasure-coding based routing is a promising candidate for opportunistic networks where (1)relay failures are prevalent and delays are unpredictable,and (2)minimizing the worst-case delay is important.This observation is further supported by the data shown in Fig-ure 2(b)where node density is 
higher.Given more contacts and re-lays,the CCDF curves of all forwarding algorithms become steeper.This is because there are more contacts overall.ec ,as we have ex-plained,still has the lowest 99th percentile and the sharpest data latency curve.Therefore,given enough relay opportunities,ec has the best performance in delivering most of the messages the fastest among all the algorithms considered.Simple replication,direct contact,and history-based algorithms,on the other hand,have very long tails (messages with much longer delays).This is because they use a small number of relays.There-fore,they cannot guarantee when these relays will see the desti-nation.Very likely,some packets may encounter very long delays by selecting some relays that fail to deliver the message promptly.In the long run however,with suf ficient buffer space,all messages will eventually be delivered.The lower the replication factor r ,the longer the tail will be.This is illustrated by comparing the CCDF of srep-rep2and direct .Since srep-rep2replicates its data to two other relays,the chance of losing contact opportunities is lower than that for direct .Hence,srep-rep2has a shorter tail than direct .The history approach,though having the lowest 50th percentile delay,also has the longest tail among all the algorithms considered.The performance of history is dependent on the accuracy of its selection of highest ranked relays based on past statistics.If the decision is relatively accurate,it tends to find relays that will for-ward the data to the destinations very quickly.On the contrary,if the relays selected based on this heuristic do not re flect future for-warding probabilities,very long delays may be incurred.However,using certain timeout and retransmission schemes,these long-delay messages might be masked out which makes the history approach more attractive over the others in networks with predictable node movement.This is an interesting research direction to explore.Finally,observe that the flood protocol in Figure 2(a)and Fig-ure 2(b),has latency distribution curves which are almost vertical.This shows that flood has very low delays for all messages.4.3.2Routing overheadTable 3lists the routing overhead corresponding to each forward-ing algorithm.Routing overhead is measured using the ratio of bytes transmitted to the bytes generated.Since both ec and srep transmit a fixed amount of data with respect to the data generated,AlgorithmOverhead(34nodes)(66nodes)ec-rep2-p8 3.96—ec-rep2-p16 3.96 3.98ec-rep2-p32— 3.98srep-rep2 3.98 3.99direct 1.0 1.0history30.2859.61flood68.0132.0Table3:Routing overhead of different forwarding algorithms for two node densities.Forwarding algorithms(such as ec and srep)which employ replication only at the source has signifi-cantly lower overhead.flood has almost an order of magni-tude higher overhead and does not scale well as the number of nodes increase.The high overhead of history results from our implementation in dtnsim2where a copy of message is transmitted even when some copy of the original message has been delivered.Some timeout scheme can solve this problem by reducing unnecessary message transmissions.their overhead is constant.For an algorithm with a replication factor of2,the overhead should be4,with2from the source to the relay and from the relay to the destination and the other2for the other relay.On the other hand,in both history and flood where relays also forward to other relays(and there are no restric-tions on replication factor),multiple identical copies of the 
original message are transmitted even after thefirst delivery of the origi-nal message.As Table3shows,normally history has a higher overhead than srep and ec.This situation becomes worse when more contacts are available and very likely,more duplicate mes-sages will be transmitted.For flood,almost all the nodes could receive a copy potentially and the overhead is proportional to2n, where n is the number of nodes.The factor of two comes because every relay sends to the destination(even if the destination has al-ready received the message)in our implementation.Some simple timeout scheme,such as one that imposes a maximum number of hops a message can traverse,can alleviate this problem.However, data delivery rate will decrease if the number of hops a message can traverse is too small.The exploration of such a trade-off is part of our future work.In summary,in terms of routing overhead,ec and srep scale well with node density and network size,while flood does not.4.3.3Data success rateAlgorithm0.25day1day2days4days8days ec-rep2-p822.6%95.9%100%100%100% ec-rep2-p169.2%94.6%100%100%100%srep-rep251.8%92.5%99.6%99.9%99.9%direct32.0%74.6%94.2%99.5%99.9%history58.4%87.9%92.7%94.6%95.3%flood100%100%100%100%100% Table4:Data success rate of different algorithms for different deadlines.Even with extremely large deadline of8days sim-ple replication can not transfer all its data.Also note that,ec has low data success rate when deadlines are extremely small and hence,caution must be used before deciding to use erasure coding.Table4shows the data success rate for different algorithms with(1)(4)(5)(3)(2)(1) ec−rep2−p8(3) srep−rep2(4) direct(5) history(2) ec−rep2−p160.20.40.60.810.01 0.1 1 100 100010CCDFDelay (hours)Figure3:Latency distribution of different forwarding algo-rithms for the Pareto trace.We use a log-scaled x-axis for clar-ity.Similar to the ZebraNet trace we observe that tails are sig-nificantly smaller when ec is used,i.e.,the worst case delays for other approaches are significantly higher.Since x-axis is log scale,the ratio of the worst case delay values is higher than in the ZebraNet trace.deadlines smaller than the total simulation time.All deadlines are specified in units of days.The data success rate for ec is low if the deadlines are less than6hours long.However,for relatively long deadlines(between1and2days),ec has the highest data success rate.This result can be observed directly by looking at the data latency distribution curve.Because ec has a lower99th percentile of latency distribution,it will deliver more messages before that time and hence a higher data success rate.Therefore,if achieving low latencies for all messages or high success rate within certain reasonable deadlines are the application requirement,ec should be used.On the other hand,history has the highest data success rate when the deadline is less than6hours.This is because history canfind good relays without the need to distribute copies of data to many relays.The performance improvement of history upon direct and srep comes directly from the efficiency of its selec-tion of good relays.However,since history has long tails in its data latency distribution curve,its data success rate is relatively low compared to other approaches.4.4Impact of mobility modelIn this section,we evaluate the performance of ec and other ap-proaches on a different mobility model.Our results here demon-strate that the idea of using erasure-coding based routing can be applied to different scenarios other than the ZebraNet trace.We find that the 
benefits of erasure coding are greater when the inter-contact times are heavy-tailed.We use such a heavy-tailed distri-bution for simulations in this section.The mobility model is based on the approximate power law distribution for inter-contact times observed for another set of real-world traces described in[2]. Figure3plots the CCDF of the data latency distribution for the Pareto trace.The other simulation parameters are exactly the same as in Section4.1.Observe that all curves are much sharper than the ZebraNet traces.Again,ec has the sharpest CCDF curve and the lowest99th percentile delay,while all the other algorithms have higher worst case delays.5.DELAY DISTRIBUTION ANALYSISThis section discusses the theoretical behavior of the delay dis-。

Performance Analysis of Space-Time Codes Concatenated with RS Codes over Fading Channels (衰落信道下空时编码级联RS码的性能分析与研究)
包涛; 梁永玲

Abstract: The application of MIMO technology can remarkably increase the data transmission rate, but many problems exist in complex wireless communication environments, including multipath fading, random noise, burst errors, and so forth. We believe that combining space-time codes with the channel-coding techniques of a MIMO system can improve the data rate while assuring communication quality at the same time. A novel concatenated code with a full-rank space-time code as the inner code and an RS code as the outer code is constructed, and the performance of this concatenated code is analyzed quantitatively under Rayleigh and Rician fading channels. The simulation results and their analysis show, preliminarily, that compared with the space-time code alone, this novel concatenated code can reduce the BER and the loss of diversity gain and coding gain under fading channels, especially at high channel signal-to-noise ratios.

Polar Coding Principles (极化编码原理)

Polar coding is a revolutionary technique in the field of coding theory. It was introduced by Professor Erdal Arikan in 2008 and has since gained tremendous popularity for its exceptional error-correction performance. Polar coding achieves channel capacity under the successive cancellation decoding algorithm and provides an efficient way to approach the Shannon limit. This has made it a preferred choice for advanced communication systems such as 5G, satellite, and optical communication.

One of the key advantages of polar coding is its simplicity and universality. It can be used in various communication scenarios and is particularly effective in scenarios with high latency or limited feedback.
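As an illustration of the structure behind these claims, the following sketch applies the basic polar transform x = u·F^{tensor n} (the recursive (u XOR v, v) butterfly) to a block of bits. The selection of frozen versus information positions, which is the actual code design, and the bit-reversal reordering are both omitted, so treat this as a structural sketch only.

```python
# Structural sketch of the polar transform x = u * F^{tensor n} over GF(2).
# Frozen/information bit selection and bit-reversal reordering are omitted.

def polar_transform(u):
    """Apply the recursive (u XOR v, v) butterfly; len(u) must be a power of two."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    left = polar_transform(u[:half])
    right = polar_transform(u[half:])
    # combine: first half becomes left XOR right, second half stays right
    return [l ^ r for l, r in zip(left, right)] + right

# Encoding a length-8 input vector (a mix of would-be frozen zeros and data bits).
print(polar_transform([0, 0, 0, 1, 0, 1, 1, 1]))
```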

reed-solomon详解-回复Reed-Solomon (RS) codes are a class of error-correcting codes that are widely used in various applications, including data storage, communication systems, and digital broadcasting. In this article, I will provide a detailed explanation of RS codes, covering their fundamentals, encoding process, decoding algorithm, and error correction capability.Fundamentals of Reed-Solomon Codes:Reed-Solomon codes are based on the mathematical principles of finite fields. A finite field is a mathematical structure that shares some properties with ordinary arithmetic but has a finite number of elements. The number of elements in a finite field is denoted by q, which is usually a prime number or power of a prime.RS codes are formed by constructing polynomials over a finite field. The degree of the polynomial represents the number of information symbols in a code word, while the number of check symbols is determined by the desired error correction capability. The check symbols are added to the information symbols to create the final codeword.Encoding Process:The encoding process of RS codes involves two main steps: message polynomial generation and codeword construction.1. Message Polynomial Generation:To start the encoding process, the information symbols from the message are mapped to coefficients of a polynomial. For example, if we have a message with k information symbols, the message polynomial can be written as:M(x) = m0 + m1x + m2x^2 + ... + mk-1x^(k-1)2. Codeword Construction:In this step, the RS code adds check symbols to the message polynomial to create the final codeword polynomial. These check symbols help in detecting and correcting errors during the decoding process. The codeword polynomial can be written as:C(x) = M(x) * G(x)Here, G(x) represents the generator polynomial, which is derived from the field elements and the desired error correction capability.The multiplication of the message polynomial and the generator polynomial gives the codeword polynomial.Decoding Algorithm:The decoding algorithm of RS codes is based on the concept of syndrome calculations and error locator polynomials.1. Syndrome Calculations:During the decoding process, the received codeword is multiplied by a syndromes generator polynomial. The syndrome polynomial is a mathematical representation of the errors present in the received codeword.2. Error Locator Polynomials:The syndrome polynomial is then used to find the error locations in the codeword. By using a mathematical process called the Berlekamp-Massey algorithm, the error locator polynomial is obtained. This polynomial helps in locating the positions of errors in the codeword.3. Error Correction:Once the error locations are determined, the error values at those locations can be calculated using various methods, such as the Forney algorithm or the Peterson-Gorenstein-Zierler algorithm. These error values are then used to correct the received codeword and retrieve the original message.Error Correction Capability:The error correction capability of RS codes depends on the number of check symbols added during the encoding process. The number of errors that can be corrected is given by the formula:t = (n - k) / 2Here, n represents the total number of symbols in the codeword, while k represents the number of information symbols. 
The parameter t denotes the maximum number of errors that can be corrected; decoding succeeds provided the number of symbol errors does not exceed t, i.e., provided twice the number of errors does not exceed the n - k check symbols.

Conclusion:
Reed-Solomon codes are powerful error-correcting codes used in various communication and storage systems. They provide a robust mechanism for detecting and correcting errors, ensuring reliable transmission and storage of data. By understanding the fundamentals, encoding process, decoding algorithm, and error correction capability of RS codes, we can appreciate their significance in modern information technology.
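To connect the syndrome step described above to actual arithmetic, here is a small sketch that computes the syndromes S_j = r(alpha^j) of a received word over GF(2^8). The primitive polynomial 0x11D and the starting exponent j = 1 are conventional choices assumed for illustration; different RS standards start at j = 0 or use other field polynomials.

```python
# Syndrome computation sketch over GF(2^8); 0x11D and j starting at 1 are assumptions.

def gf_mul(a, b, prim=0x11D):
    """Carry-less multiply modulo the primitive polynomial of GF(2^8)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= prim
        b >>= 1
    return p

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def syndromes(received, num_check):
    """S_j = r(alpha^j), j = 1..num_check, alpha = 2; all zero <=> no detected error."""
    alpha = 2
    out = []
    for j in range(1, num_check + 1):
        s, x = 0, gf_pow(alpha, j)
        for coeff in received:            # Horner evaluation, highest-degree symbol first
            s = gf_mul(s, x) ^ coeff
        out.append(s)
    return out

# A valid codeword (here the trivial all-zero word) gives all-zero syndromes.
print(syndromes([0] * 255, 16))
```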

plank

– (2007 – C)
• Cleversafe: CRS from , w = 8.
– (2008 – Java, based on Luby)
• RDP/EVENODD: Added to Jerasure.
Open Source Tests - Encoding
Big File
Bit Matrix Codes
Cauchy Reed Solomon (CRS) Codes [Blomer95]
• Bit matrix derived from a Reed-Solomon code.
• Same constraints: all good as long as n ≤ 2^w.
• [Plank&Xu06]: optimization to reduce ones.
• Further optimization [Plank07].
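The phrase "bit matrix derived from a Reed-Solomon code" can be made concrete: each GF(2^w) coefficient e is expanded into a w x w binary matrix whose column i is the bit pattern of e·x^i, so that multiplication by e becomes a sequence of XORs. The sketch below shows that expansion for w = 8; the field polynomial 0x11D is an assumed choice, and real CRS implementations additionally pick Cauchy matrix entries so as to minimize the number of ones.

```python
# Expand a GF(2^8) element into the 8x8 bit matrix used by Cauchy Reed-Solomon coding.
# Column i holds the bits of e * x^i; multiplying by e then costs one XOR per 1-bit.

W, PRIM = 8, 0x11D   # word size and an assumed primitive polynomial

def gf_mul(a, b):
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & (1 << W):
            a ^= PRIM
        b >>= 1
    return p

def bit_matrix(e):
    """Rows of the w x w binary matrix representing multiplication by e."""
    cols = [gf_mul(e, 1 << i) for i in range(W)]            # e * x^i
    return [[(cols[i] >> row) & 1 for i in range(W)] for row in range(W)]

for row in bit_matrix(3):   # matrix for multiplying by the element 3 (= x + 1)
    print(row)
```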
Erasure Coding Basics/Nomenclature
You start with n disks:
Erasure Coding Basics/Nomenclature
Partition them into k data and m coding disks.
Call it what you want: “k of n.” “k and m,” “[k,m].” But please use k, m and n.
[Figure: each data block D0, D1, ... is divided into data strips DS_i,0 ... DS_i,s-1; coding strips CS_0,j ... CS_m-1,j complete each encoding stripe.]

Design and Implementation of an RS(255,239) Decoder Based on FPGA (基于FPGA的RS(255,239)译码器的设计与实现)
胡雪川; 刘会杰

Abstract: To address problems that arise in RS decoding, such as the complexity of the decoding process, low decoding speed, and the high price of dedicated RS decoder chips, the RS(255,239) code is taken as an example, and the RS decoding theory based on an improved inversionless Berlekamp-Massey (BM) iterative algorithm is introduced. On an FPGA platform, each submodule of the decoder is designed and simulated in the Verilog hardware description language with Xilinx ISE 13.4, and a pipelined approach is used throughout the decoder design. Timing simulation shows that when there are no more than eight errors, the decoder can, after an inherent latency of 295 clock cycles, output the corrected codeword continuously in each clock cycle, and the error-correcting capability of the RS decoder meets expectations.

The Deep Hole Problem of Generalized Reed-Solomon Codes (广义ReedSolomon码的深洞问题)
Error detection and correction
Generalized Reed-Solomon codes can both detect errors and correct them.
Error correction
Generalized Reed-Solomon codes can correct errors in multiple bit positions, achieving relatively strong error-correcting capability.

Implementation of generalized Reed-Solomon codes
Encoding
Choosing the generator: selecting the generator is the key step of encoding; a polynomial is normally used as the generator.
Encoding algorithm: the information symbols are turned into a codeword by multiplying them with the generator.

Application scenarios and outlook
Application scenarios
Data storage and transmission: in distributed storage systems, cloud storage, and wireless communication networks, generalized Reed-Solomon codes are commonly used to correct errors that occur during data transmission.
Information security: generalized Reed-Solomon codes have important applications in digital watermarking, copyright protection, and authentication, where they can be used to detect and correct information that has been tampered with or forged.

Correcting multiple bit errors: errors are corrected iteratively until all of them have been removed.
Optimizing correction efficiency: specific algorithms and data structures, such as BCH codes, are used to improve correction efficiency.

The deep hole problem of generalized Reed-Solomon codes
Definition of the deep hole problem
A generalized Reed-Solomon code (GRS) is a linear block code over a finite field with relatively high error-correcting capability and coding efficiency.
When the input data contains long runs of consecutive errors, the structural features of a generalized Reed-Solomon code may make it impossible to find enough error positions to perform correction, so decoding fails.
This problem is especially prominent in certain application scenarios, such as data transmission and storage in high-noise environments.
Approaches to the deep hole problem
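Concretely, a generalized Reed-Solomon codeword can be produced in evaluation form: the message defines a polynomial f of degree less than k, and the codeword is (v_1·f(a_1), ..., v_n·f(a_n)) for fixed distinct evaluation points a_i and nonzero column multipliers v_i. The sketch below works over the small prime field GF(17) purely for readability; the field size and the particular a_i and v_i are illustrative assumptions, not values from these slides.

```python
# Evaluation-form encoder for a generalized Reed-Solomon (GRS) code over GF(17).
# Field size and the choice of evaluation points / multipliers are illustrative only.

P = 17                                   # a small prime field for readability

def poly_eval(coeffs, x):
    """Evaluate f(x) = coeffs[0] + coeffs[1]*x + ... over GF(P) (Horner form)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def grs_encode(message, points, multipliers):
    """Codeword c_i = v_i * f(a_i), where the message gives the coefficients of f."""
    assert len(points) == len(multipliers)
    return [(v * poly_eval(message, a)) % P for a, v in zip(points, multipliers)]

# [n, k] = [8, 3]: two distinct codewords agree in at most k-1 = 2 positions,
# so the minimum distance is n - k + 1 = 6.
a = [1, 2, 3, 4, 5, 6, 7, 8]             # distinct evaluation points
v = [1, 1, 2, 3, 1, 5, 1, 7]             # nonzero column multipliers
print(grs_encode([4, 0, 11], a, v))
```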

Introduction to Fault-Tolerant Coding for Storage Systems (存储系统容错编码简介)
When are they useful?
Anytime you need to tolerate failures.
P. M. Chen, E. K. Lee, G. A. Gibson, R. H. Katz, and D. A. Patterson. RAID: High-performance, reliable secondary storage. ACM Computing Surveys, 26(2):145–185, June 1994.
Why is this such a pain?
Coding theory historically has been the purview of coding theorists.
Their goals have had their roots elsewhere (noisy communication lines, byzantine memory systems, etc).
Evaluating Parity
MDS. Rate: R = n/(n+1) - very space efficient.
Optimal encoding/decoding/update: n-1 XORs to encode & decode, 2 XORs to update.
Extremely popular (RAID Level 5). Downside: m = 1 is limited.
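The XOR counts quoted above are easy to verify in code: encoding the parity block takes n-1 block XORs over the data blocks, and updating one data block takes 2 block XORs (old data and new data folded into the parity). Here is a minimal sketch; the byte-string block representation is just a convenience.

```python
# RAID-5 style parity sketch: n-1 block XORs to encode, 2 block XORs to update.

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_parity(data_blocks):
    """Parity = XOR of all n data blocks (n-1 block XOR operations)."""
    parity = data_blocks[0]
    for block in data_blocks[1:]:
        parity = xor_blocks(parity, block)
    return parity

def update_parity(parity, old_block, new_block):
    """Small-write update: fold out the old data, fold in the new (2 block XORs)."""
    return xor_blocks(xor_blocks(parity, old_block), new_block)

data = [bytes([i] * 4) for i in (1, 2, 3)]
p = encode_parity(data)
p2 = update_parity(p, data[1], bytes([9] * 4))
# Recover a lost block by XOR-ing the parity with the surviving blocks.
recovered = xor_blocks(xor_blocks(p2, data[0]), data[2])
print(recovered == bytes([9] * 4))   # True
```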

QR Code Error-Correction Coding Based on the Reed-Solomon Algorithm (基于Reed-Solomon算法的QR码纠错编码)
Elements of GF(2^8) can be written as polynomials of degree at most 7 over GF(2):

α(x) = α_7x^7 + α_6x^6 + α_5x^5 + α_4x^4 + α_3x^3 + α_2x^2 + α_1x + α_0
β(x) = β_7x^7 + β_6x^6 + β_5x^5 + β_4x^4 + β_3x^3 + β_2x^2 + β_1x + β_0

For example, the element 10011010 can be represented as x^7 + x^4 + x^3 + x; then α, β ...

The data to be encoded are likewise written as a polynomial

d(x) = d_{k-1}x^{k-1} + ... + d_2x^2 + d_1x + d_0,

whose coefficients are the data symbols. In encoding we make use of the notion of a generator polynomial: the code we want is produced from the generator polynomial. For the encoding of cyclic codes we have the following theorem.

Theorem 1: In an [n, k] cyclic code over GF(2^m) there exists a unique monic polynomial g(x) of degree n-k; every code polynomial C(x) is a multiple of g(x), and every multiple of g(x) of degree at most n-1 is a code polynomial (for a proof see reference 2).

The Reed-Solomon encoding we are going to perform is a kind of cyclic code, so we can construct the generator polynomial as follows: let a ∈ GF(2^m) be a primitive field element and let m_i(x) be the minimal polynomial of a^i (i = 0, 1, ..., r-1); then the generator polynomial ...

(Source: Computer Engineering, Vol. 29, No. 1, January 2003; article no. 1000-3428(2003)01-0093-03.)
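The construction that the truncated sentence above leads up to is conventionally the product of first-degree factors (x - a^i) over the chosen range of exponents. The sketch below builds such a generator polynomial over GF(2^8) using the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D) and exponents starting at 0, matching the i = 0, ..., r-1 convention quoted above; both choices should be read as assumptions for illustration rather than as the exact parameters of the paper.

```python
# Build an RS generator polynomial g(x) = (x - a^0)(x - a^1)...(x - a^(r-1)) over GF(2^8).
# Field polynomial 0x11D and the exponent range are assumptions for illustration.

PRIM = 0x11D

def gf_mul(a, b):
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return p

def poly_mul(f, g):
    """Multiply two polynomials with GF(2^8) coefficients (lowest degree first)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] ^= gf_mul(fi, gj)
    return out

def generator_poly(r, alpha=2):
    """g(x) = product of (x + a^i), i = 0..r-1  ('+' and '-' coincide in GF(2^8))."""
    g = [1]
    a_i = 1                         # a^0
    for _ in range(r):
        g = poly_mul(g, [a_i, 1])   # factor (x + a^i), lowest degree first
        a_i = gf_mul(a_i, alpha)
    return g

print(generator_poly(4))   # degree-4 generator for r = 4 check symbols
```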

Reed-Solomen_RAID


Graduation Project (Thesis). Title: Application of the "Reed-Solomon" Algorithm in RAID Systems. Department: Computer Science and Engineering. Major: Computer Science and Technology. Student: 张伟强. Class/Student No.: 0213519. Supervisor: 张京生. Duration: February 20, 2006 to June 2, 2006. Beijing Information Science and Technology University.

Abstract: The world of the future is a world of information. Since mankind opened the door to the information age in the twentieth century, the pace of informatization has kept accelerating, and the explosive growth of information has made it certain that in the twenty-first century and beyond people will live side by side with information everywhere: whether in personal life, work, and study, or in industrial production, finance, and national defense, all of society's activities will fully enter the information age.

Thus there is no doubt that in the coming information society the core resource will be the data of every kind stored in billions of terminals of all descriptions.

However, precisely because every unit of society has become closely tied to data, the security of the data itself has leapt to the top of the list of concerns that people must address.

Compared with the great regret of an individual user whose treasured files are destroyed, the destruction of the data of banks and financial systems, enterprise groups, government agencies, the military and national defense, aerospace, and other units of society would cause enormous economic losses, or worse.

The instantaneous paralysis of a financial economy, or winning a battle without bloodshed, could all become possible, and at the heart of it all is whether data storage is secure or not.

However, virus damage, fire, earthquakes, terrorist attacks, human error, flaws in logical systems, and above all the certainty that storage media themselves will eventually fail mean that no matter how good a storage medium is, it can never be absolutely safe.

Facing this reality, this thesis tries to break through the security bottleneck from a different angle. Through a step-by-step study of error-correcting coding theory and the mathematics of finite fields, it attempts to apply the Galois-field Reed-Solomon algorithm to RAID (Redundant Array of Independent Disks) storage and to implement a small experimental RAID 6 level program, thereby offering an approach to more secure storage technology.

Keywords: Reed-Solomon algorithm, error-correcting codes, RAID 6, disk arrays, finite fields, Galois fields

Abstract: The world of the future will be a world built on information of every kind. When people entered the information age in the 20th century, its progress never stopped and only grew faster. As information now increases exponentially, the world of the 21st century and beyond will be surrounded by information everywhere: personal life, work, and study, as well as industrial production, the financial system, and national defense - all social activities will be informationized across the board. In this case, there is no doubt that in the information society the key resource is the data stored in billions of devices and terminals.

However, just because every unit of society is concerned with data, the security of the data itself has become the most serious problem people have to face. Compared with the regret over the loss of an individual's keepsake data, once the stored data of banks and finance systems, corporate groups, governments, the army and national defense, or the aerospace sector is destroyed, it will cause enormous economic damage and even worse. The sudden paralysis of an economy and finance system, or winning a war without battle, would become real possibilities, and the key to all of this is whether the data is secure or not. Is the data secure? No: computer viruses, fire, earthquakes, terrorist attacks, human error, bugs in logical systems, and especially the inevitability of physical failure all mean that however good the storage device, the data will not be absolutely secure.

In the face of the above, this paper attempts to break through the bottleneck of data security in another way. Through the study of error-correction coding and finite-field theory, it applies the Reed-Solomon algorithm over the Galois field to a Redundant Array of Independent Disks, and tries to realize a simple RAID 6 level program, thereby offering an idea for more secure storage.

Keywords: Reed-Solomon theory, error correction code, RAID 6, Redundant Array of Independent Disks, finite field, Galois field

Contents
Abstract (Chinese, English)
Chapter 1  Introduction
  1.1  The extreme importance of data-storage security in the information age
  1.2  The current state of data storage and data security
  1.3  Research content and goals of this project
  1.4  The state of research in this field
Chapter 2  Theoretical basis of RAID (Redundant Array of Independent Disks) and construction of a RAID 6 model
  2.1  What a redundant array of independent disks is
  2.2  Principles, performance comparison, and analysis of the RAID levels already developed
  2.3  Inheritance and development: the idea and model of a RAID 6 disk array
Chapter 3  The theory of the Reed-Solomon algorithm
  3.1  Basic concepts and properties of groups, rings, and fields
  3.2  Definition and properties of finite fields
  3.3  Arithmetic in the binary field
  3.4  The Reed-Solomon algorithm based on the Galois field GF()
  3.5  Applying the Galois-field GF() Reed-Solomon algorithm in a RAID 6 disk array: generating the P and Q parities
Chapter 4  Simulated implementation of a RAID 6 disk array based on the Reed-Solomon algorithm: creating the RAID 6
  4.1  Implementing the RAID 6 model framework
  4.2  Implementing the P parity
  4.3  Implementing the Galois field GF()
  4.4  Implementing the Q parity
  4.5  Handling files whose byte count is not even
  4.6  Simulated implementation and verification of the RAID 6 disk array
Chapter 5  Simulated implementation of a RAID 6 disk array based on the Reed-Solomon algorithm: data recovery
  5.1  The recovery algorithm when two disks of a RAID 6 array fail
  5.2  Implementing the recovery algorithm
  5.3  A recovery demonstration with two randomly failed simulated disks
Chapter 6  Summary and outlook
Acknowledgements

Chapter 1  Introduction
1.1  The extreme importance of data-storage security in the information age
The world of the future will be a world described with 0s and 1s. Since mankind opened the door to the information age at the end of the twentieth century, the pace of informatization has kept accelerating, and the explosive growth of information has made it certain that the twenty-first century and beyond will be inseparable from information everywhere: whether in people's personal lives, work, and study, or in industrial production, finance, and national defense, all of society's activities will fully enter the information age.
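The P and Q parities referred to in Section 3.5 of the contents are conventionally computed as P = d_0 xor d_1 xor ... xor d_{n-1} and Q = d_0 xor g·d_1 xor g^2·d_2 xor ..., with g a generator of GF(2^8). The sketch below follows that convention, using the common 0x11D field polynomial and g = 2; these are standard RAID 6 choices assumed here for illustration and are not necessarily the exact parameters the thesis implements.

```python
# RAID-6 style P/Q parity sketch over GF(2^8); 0x11D and g = 2 are assumed conventions.

PRIM = 0x11D

def gf_mul2(a):
    """Multiply a GF(2^8) element by the generator g = 2 (i.e., by x)."""
    a <<= 1
    return a ^ PRIM if a & 0x100 else a

def pq_parity(data_bytes):
    """Return (P, Q) for one byte position across the data disks d_0..d_{n-1}."""
    p = 0
    q = 0
    for d in reversed(data_bytes):     # Horner form: Q = d0 ^ g*(d1 ^ g*(d2 ^ ...))
        p ^= d
        q = gf_mul2(q) ^ d
    return p, q

# With P and Q stored on two extra disks, any two simultaneous disk failures
# can be repaired by solving the resulting two equations over GF(2^8).
print(pq_parity([0x11, 0x22, 0x33, 0x44]))
```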


Improved Decoding of Reed-Solomon and Algebraic-Geometry Codes
Venkatesan Guruswami    Madhu Sudan
October 23, 1999

Abstract
Given an error-correcting code over strings of length n and an arbitrary input string also of length n, the list decoding problem is that of finding all codewords within a specified Hamming distance from the input string. We present an improved list decoding algorithm for decoding Reed-Solomon codes. The list decoding problem for Reed-Solomon codes reduces to the following "curve-fitting" problem over a field F: Given n points (x_i, y_i), x_i, y_i in F, a degree parameter k and an error parameter e, find all univariate polynomials p of degree at most k such that y_i = p(x_i) for all but at most e values of i. We give an algorithm that solves this problem for e < n - sqrt(kn), which improves over previous results for every choice of k and n; in particular, for k/n > 1/3 the result yields the first asymptotic improvement in four decades [21]. The algorithm generalizes to solve the list decoding problem for other algebraic codes, specifically alternant codes (a class of codes including BCH codes) and algebraic-geometry codes. In both cases, we obtain a list decoding algorithm that corrects up to n - sqrt(n(n - d')) errors, where d' is the designed distance of the code.

Laboratory for Computer Science, MIT, 545 Technology Square, Cambridge, MA 02139, USA. email: venkat,madhu@.

1 Introduction
An error correcting code of block length n, rate r, and distance d over a q-ary alphabet (an [n, rn, d]_q code, for short) is a mapping from the message space to the codeword space of q-ary strings of length n, such that any pair of strings in the range of the mapping differ in at least d locations out of n. (Usually an error correcting code is defined as a set of codewords, but for ease of exposition we describe it in terms of the underlying mapping, which also specifies the encoding method, rather than just the set of codewords.) We focus on linear codes so that the set of codewords forms a linear subspace. Reed-Solomon codes are a classical, and commonly used, construction of linear error-correcting codes; they yield [n, k + 1, n - k]_q codes for any k < n ≤ q. The alphabet for such a code is a finite field F. The message specifies a polynomial of degree at most k over F in some formal variable x (by giving its k + 1 coefficients). The mapping maps this message to its evaluation at n distinct values of x chosen from F (hence it needs q ≥ n). The distance property follows immediately from the fact that two distinct polynomials of degree at most k can agree in at most k places.

The decoding problem for an [n, k, d]_q code is the problem of finding a codeword in the code that is within a distance of e from a "received" word. In particular, it is interesting to study the error rate e/n that can be corrected as a function of the information rate. For a family of Reed-Solomon codes of constant message rate and constant error rate, the two brute-force approaches to the decoding problem (compare with all codewords, or look at all words in the vicinity of the received word) take time exponential in n. It is therefore a non-trivial task to solve the decoding problem in time polynomial in n. Surprisingly, a classical algorithm due to Peterson [21] manages to solve this problem in polynomial time, as long as the number of errors is less than d/2. The bound d/2 is also a natural barrier: if more errors are allowed, then there may exist several different codewords within distance e of a received word, and so the decoding algorithm cannot possibly always recover the "correct" message if it outputs only one solution. This motivates the list decoding problem, first defined in [7] (see also [8]) and sometimes also termed the bounded-distance decoding problem, that asks, given a received word, to reconstruct a list of all codewords within a distance e from the received word. List decoding offers a potential for recovery from errors beyond the traditional "error-correction" bound (i.e., the quantity d/2) of a code. Loosely, we refer to a list decoding algorithm reconstructing all codewords within distance e of a received word as an "e error-correcting" algorithm. Again, for a family of Reed-Solomon codes, we can study the error rate that can be list decoded in polynomial time as a function of the rate. Till recently, no significant benefits were achieved using the list decoding approach to recover from errors. The only improvements known over the algorithm of [21] were decoding algorithms due to Sidelnikov [25] and Dumer [6], which correct only marginally more than d/2 errors. More recently, Sudan [27] gave a list decoding algorithm that corrects substantially more errors at low
algorithm of[21]were decoding algorithms due to Sidelnikov[25]and Dumer[6]which correct,thus achievingUsually an error correcting code is defined as a set of codewords,but for ease of exposition we describe it in terms of the underlying mapping,which also specifies the encoding method,rather than just the set of codewords.rate ,thus allowing for nearly twice as many errors as the classical approach.For codes of rate greater than ,however,this algorithm does not improve over the algorithm of [21].This case is of interest since applications in practice tend to use codes of high rates.00.10.20.30.40.50.60.70.80.9100.10.20.30.40.50.60.70.80.91e r r o r rate This paper [Sudan][Berlekamp-Massey]Diameter Bound Figure 1:Error-correcting capacity plotted against the rate of the code for known algorithms.In this paper we present a new polynomial-time algorithm for list-decoding of Reed-Solomon codes (in fact Generalized Reed-Solomon codes,to be defined in Section 2)that corrects up to (exactly)).Thus our algorithm has a bettererror-correction rate than previous algorithms for every choice of;and in particular,for our result yields the first asymptotic improvement in the error-rate ,since the original algorithm of [21].(See Figure 1for a graphical depiction of the relative error handled by our algorithm in comparison to previous ones.)We solve the decoding problem by solving the following (more general)curve fitting problem:Given pairs of elements where ,a degree parameter and an error parameter ,find all univariate polynomials such that for at least values of .Our algorithm solves this curve fitting problem fordeterministically overfinitefields in time polynomial in the size of thefield or probabilistically in time polynomial in the logarithm of the size of thefield and can also be solved deterministically over the rationals and reals[14,17,18].Thus our algorithm ends up solving the curve-fitting problem over fairly generalfields.It is interesting to contrast our algorithm with results which show bounds on the number of codewords that may exist with a distance of from a received word.One such result,due to Goldreich et al.[13],shows that the number of solutions to the list decoding problem for a code with block length and minimum distance,is bounded by a polynomial in as long asas wefix and let.These bounds are of interest in that they hint at a potential limitation to further improvements to the list decoding approach.Finally we point out that the main focus of this paper is on getting polynomial time algorithms maximizing the number of errors that may be corrected,and not optimizing the runtime of any of our algorithms.Extensions to Algebraic-Geometry Codes Algebraic-geometry codes are a class of algebraic codes that include the Reed-Solomon codes as a special case.These codes are of significant inter-est because they yield explicit construction of codes that beat the Gilbert-Varshamov bound over small alphabet sizes[29](i.e.,achieve higher value of for infinitely many choices of and than that given by the probabilistic method).Decoding algorithms for algebraic-geometry codes are typically based on decoding algorithms for Reed-Solomon codes.In particular,Shokrollahi and Wasserman[24]generalize the algorithm of Sudan[27]for the case of algebraic-geometry codes.Specifically,they provide algorithms for factoring polynomials over some algebraic func-tionfields;and then show how to decode using this factoring ing a similar approach, we extend our decoding algorithm to the case of algebraic-geometry 
codes and obtain a list decod-ing algorithm correcting an algebraic-geometry code for up toerrors(here is the genus of the algebraic curve underlying the code).This algorithm uses a root-finding algorithm for uni-variate polynomials over algebraic functionfields as a subroutine and some additional algorithmic assumptions about the underlying algebraic structures:The assumptions are described precisely in Section4.Other extensions One aspect of interest with decoding algorithms is how they tackle a combina-tion of erasures(i.e,some letters are explicitly lost in the transmission)and errors.Our algorithm generalizes naturally to this case.Another interesting extension of our algorithm is the solution to a weighted version of the curve-fitting problem:Given a set of pairs and asso-ciated non-negative integer weights,find all polynomials such that2.1Informal description of the algorithmOur algorithm is based on the algorithm of[27],and so we review that algorithmfirst.The algo-rithm has two phases:In thefirst phase itfinds a polynomial in two variables which“fits”the points,wherefitting implies for all.Then in the second phase itfinds all small degree roots of i.efinds all polynomials of degree at most such thator equivalently is a factor of;and these polynomials form candidates for theoutput.The main assertions are that(1)if we allow to have a sufficiently large degree then thefirst phase will be successful infinding such a bivariate polynomial,and(2)if and have lowdegree in comparison to the number of points where,then will be a factor of.Our algorithm has a similar plan.We willfind of low weighted degree that“fits”the points.But now we will expect more from the“fit”.It will not suffice that is zero—we will require that every point is a“singularity”rmally,a singularity is a point where the curve given by intersects itself.We will make this notion formal as we go along. 
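To make the two-phase plan just described concrete, the following is a minimal sketch, in Python over the tiny prime field GF(13), of the interpolate-then-root-find idea: Phase 1 finds a nonzero bivariate polynomial Q that vanishes with multiplicity r = 2 at every data point (the multiplicity is imposed through the coefficients of the shifted polynomial, in the spirit of the shifting made formal below), and Phase 2 recovers candidate messages from Q. Everything here is an illustrative assumption rather than the implementation analyzed in this paper: the parameters p = 13, k = 2, r = 2, the weighted-degree bound 8, all function names, and in particular the brute-force search over low-degree polynomials in Phase 2, which stands in for the proper bivariate root-finding discussed later.

```python
# A minimal sketch (illustration only) of the two-phase procedure over GF(p):
# Phase 1 finds a nonzero Q(X, Y) vanishing with multiplicity r at every
# data point; Phase 2 extracts candidate message polynomials from Q.
# All parameters and names here are toy choices, not those of the paper.
from itertools import product
from math import comb

p = 13      # prime field GF(p)
k = 2       # message polynomials have degree < k
r = 2       # required vanishing multiplicity at each point


def poly_eval(coeffs, x):
    """Evaluate a polynomial given by its coefficients (lowest degree first)."""
    res = 0
    for c in reversed(coeffs):
        res = (res * x + c) % p
    return res


def nullspace_vector(rows, ncols):
    """Return one nonzero solution of a homogeneous system over GF(p)
    (plain Gaussian elimination); assumes ncols exceeds the rank."""
    m = [list(row) for row in rows]
    pivots, row_i = {}, 0
    for col in range(ncols):
        piv = next((i for i in range(row_i, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[row_i], m[piv] = m[piv], m[row_i]
        inv = pow(m[row_i][col], p - 2, p)
        m[row_i] = [c * inv % p for c in m[row_i]]
        for i in range(len(m)):
            if i != row_i and m[i][col]:
                f = m[i][col]
                m[i] = [(a - f * b) % p for a, b in zip(m[i], m[row_i])]
        pivots[col] = row_i
        row_i += 1
    free = next(c for c in range(ncols) if c not in pivots)
    sol = [0] * ncols
    sol[free] = 1
    for col, ri in pivots.items():
        sol[col] = (-m[ri][free]) % p
    return sol


def interpolate(points, wdeg):
    """Phase 1: find a nonzero Q of (1, k-1)-weighted degree <= wdeg that
    vanishes with multiplicity r at every point.  The multiplicity is
    imposed by requiring all low-order coefficients of the shifted
    polynomial Q(x + X, y + Y) to vanish."""
    monos = [(a, b) for b in range(wdeg // (k - 1) + 1)
                    for a in range(wdeg - (k - 1) * b + 1)]
    rows = []
    for (x, y) in points:
        for u in range(r):
            for v in range(r - u):      # all coefficients of degree < r
                rows.append([comb(a, u) * comb(b, v)
                             * pow(x, a - u, p) * pow(y, b - v, p) % p
                             if a >= u and b >= v else 0
                             for (a, b) in monos])
    return monos, nullspace_vector(rows, len(monos))


def is_root(monos, qcoeffs, f):
    """Phase 2 check: Q(x, f(x)) = 0 for all x in GF(p).  Since the degree
    of Q(x, f(x)) is below p here, this is equivalent to (Y - f(X)) | Q."""
    for x in range(p):
        y = poly_eval(f, x)
        if sum(c * pow(x, a, p) * pow(y, b, p)
               for (a, b), c in zip(monos, qcoeffs)) % p:
            return False
    return True


# Toy run: encode f(X) = 3 + 2X at all 13 field elements and corrupt 6
# positions -- one more error than the classical radius of 5 for n=13, k=2.
f_true = [3, 2]
xs = list(range(p))
ys = [poly_eval(f_true, x) for x in xs]
for i in range(6):
    ys[i] = (ys[i] + 1) % p

monos, q = interpolate(list(zip(xs, ys)), wdeg=8)   # 39 constraints, 45 unknowns
decoded = [list(f) for f in product(range(p), repeat=k)
           if is_root(monos, q, list(f))]           # brute-force root search
print(decoded)      # the list is guaranteed to contain f_true = [3, 2]
```

With these toy parameters the sketch recovers f(X) = 3 + 2X despite 6 errors among 13 positions, one beyond the classical unique-decoding radius; the guarantee that f appears in the output list follows from the divisibility argument formalized in the lemmas below.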
In ourfirst phase the additional constraints will force us to raise the allowed degree of.However we gain(much more)in the second phase.In this phase we look for roots of and now we know that passes through many singularities of,rather than just points on.In such a case we need only half as many singularities as regular points,and this is where our advantage comes from.Pushing the idea further,we can force to intersect itself at each point as many times as we want:in the algorithm described below,this will be a parameter.There is no limit on what we can choose to be:only our running time increases with.We will choose sufficiently large to handle as many errors as feasible.(In the weighted version of the curvefitting problem, we force the polynomial to pass through different points a different number times,where is proportional to the weight of the point.)Finally,we come to the question of how to define“singularities”.Traditionally,one uses thepartial derivatives of to define the notion of a singularity.This definition is,however,not goodfor us since the partial derivatives overfields with small characteristic are not well-behaved.So weavoid this direction and define a singularity as follows:Wefirst shift our coordinate system so thatthe point is the origin.In the shifted world,we insist that all the monomials of with a non-zero coefficient be of sufficiently high degree.This will turn out to be the correct notion.(The algorithm of[27]can be viewed as a special case,where the coefficient of the constant term of the shifted polynomial is set to zero.)Wefirst define the shifting method precisely:For a polynomial and we willsay that the shifted polynomial is the polynomial given by. Observe that the following explicit relation between the coefficients of and the coefficients of holds:In particular observe that the coefficients are obtained by a linear transformation of the original coefficients.2.2AlgorithmDefinition3(weighted degree)For non-negative weights,the-weighted degree of the monomial is defined to be.For a bivariate polynomial,and non-negative weights,the-weighted degree of,denoted--,is the maximum over all monomials with non-zero coefficients in of the-weighted degree of the monomial.We now describe our algorithm for the polynomial reconstruction problem. Algorithm Poly-Reconstruct:Inputs:,,where.Step0:Compute parameters such thatandNow,by construction,has no coefficients of total degree less than.Thus by substituting for,we are left with a polynomial such that divides.Shifting back we have divides.All that needs to be shown now is that a polynomial as sought for in Step1does exist.The lemma below shows this conditionally.Lemma6IfLemma7If satisfy,then for the choice of made in our algorithm,which simplifies to or,equivalently,Hence it suffices to pick to be an integer greater than the larger root of the above quadratic,and therefore pickingsuffices,and this is exactly the choice made in the algorithm..Proof:Follows from Lemmas5,6and7.in a Generalized Reed-Solomon code.This bound is already known even for general(even non-linear codes)[13,22].Our result can be viewed as a constructive proof of this bound for the specific case of Generalized Reed-Solomon codes.Proposition9The number of codewords that lie within an Hamming ball of radius(which is in turn).Proof:By Lemma5,the number of such codewords is at most the degree of the bivariate polynomial in.Since the-weighted degree of is at most,. 
Choosingas desired.,for any constant ,is.2.4Runtime of the AlgorithmWe now verify that the algorithm above can be implemented to run efficiently(in polynomial in time)and also provide rough(but explicit)upper bounds on the number of operations it performs. Proposition11The algorithm above can be implemented to run usingfield operations over,provided.Proof:(Sketch)The homogeneous system of equations solved in Step1of the algorithm clearly has at most unknowns(since and).Hence using standard methods,Step1can be implemented usingfield operations.We claim that this is the dominant portion of the runtime and that Step2can be implemented to run within this time using standard bivariate polynomial factorization techniques.We sketch some details on the implementation of Step2below.To implement Step2,wefirst compute the discriminant of with respect to(treating it as a polynomial in with coefficients in).Therefore,and also where,are the degrees of in and respectively.This bound on the degree of follows easily from the definition of the discriminant(see for instance[5]),and it is also easy to prove that the discriminant can be computed infield operations.Next wefind an such that.This can be done deterministically by trying out an arbitrary set offield elements because of the bound we know on the degree of. Now,by the definition of the discriminant,for such an,is square-free as an element of .We then compute the shifted polynomial,so that is square-free. Now we use the algorithm in[11]that can compute all roots of a bivariate polynomial such that is square-free,in time.This gives us a list of all polynomials such that divides;by computing for each such gives us the desired list of roots of.It is clear that once is computed,all the above steps can be performed in at mostfield operations.Summing up,Step2can be performed usingfield operations.The entire algorithm can thus be implemented to run infield operations and since Theorem12The polynomial reconstruction problem can be solved in time,providedIn this analysis as well as the rest of the paper,we use the big-Oh notation to hide constants.We stress that these are universal constants and not functions of thefield size.Proof:Follows from Proposition11and Theorem8.can be list-decoded in time.When.3Some ConsequencesFirst of all,since the classical Reed-Solomon codes are simply a special case of Generalized Reed-Solomon codes,Corollary13above holds for Reed-Solomon codes as well.We now describe some other easy consequences and extensions of the algorithm of Section2.Thefirst three results are just applications of the curve-fitting algorithm.The fourth result revisits the curve-fitting algorithm to get a solution to a weighted curve-fitting problem.3.1Alternant codesWefirst describe a family of codes called alternant codes that includes a wide family of codes such as BCH codes,Goppa codes etc.Definition14(Alternant Codes([19],12.2))For positive integers,prime power,the field,a vector of distinct elements,and a vector of nonzero elements,the alternant code comprises of all the codewords of the Generalized Reed-Solomon code defined by that lie in.Since the Generalized Reed-Solomon code has distance exactly,it follows that the respective alternant code,being a subcode of the Generalized Reed-Solomon code,has distance at least.We term this the designed distance of the alternant code. 
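As a rough numerical illustration of the error-correction radius established above (and of the comparison in Figure 1), the short Python sketch below tabulates, for block length 255, the approximate number of correctable errors under classical unique decoding and under the bound of this paper. The closed-form expressions used here, (n - k)/2 and n - sqrt(kn), are approximations chosen only to convey the scale of the improvement; the exact integer conditions appear in Theorem 12 and Corollary 13.

```python
# Rough comparison of correctable errors for [n, k] Reed-Solomon codes
# (closed-form approximations only; the exact conditions in Theorem 12
# and Corollary 13 involve integer rounding and the degree parameter k - 1).
import math

n = 255
print("  k   rate   classical   this paper")
for k in (32, 64, 128, 192, 224):
    classical = (n - k) // 2                  # about half the minimum distance
    improved = n - math.isqrt(k * n) - 1      # about n - sqrt(kn)
    print(f"{k:3d}   {k/n:.2f}   {classical:6d}      {improved:6d}")
```

At each of these rates the new radius exceeds the classical one, with the gain largest at low rates, consistent with the discussion in the introduction.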
The actual rate and distance of the code are harder to determine.The rate lies somewhere between and and thus the distance lies between and.Playing with the vector might alter the rate and the distance(which is presumably why it is used as a parameter).The decoding algorithm of the previous section can be used to decode alternant codes as well. Given a received word,we use as input to the polynomial reconstruction problem the pairs,where and are elements of.The list of polynomials output includes all possible codewords from the alternant code.Thus the decoding algorithm for the earlier section is really a decoding algorithm for alternant codes as well;with the caveat that its performance can only be compared with the designed distance,rather than the actual distance.The following theorem summarizes the scope of the decoding algorithm.Theorem15Let be an alternant code with designed distance(and thus satisfying errors.(We note that decoding algorithms for alternant codes given in classical texts seem to correct errors.For the more restricted BCH codes,there are algorithms that decode beyond half the designed distance(cf.[9]and also[4,Chapter9]).3.2Errors and Erasures decodingThe algorithm of Section2is also capable of dealing with other notions of corruption of infor-mation.A much weaker notion of corruption(than an“error”)in data transmission is that of an“erasure”:Here a transmitted symbol is either simply“lost”or received in obviously corruptedshape.We now note that the decoding algorithm of Section2handles the case of errors and era-sure naturally.Suppose symbols were transmitted and were received and symbols got erased.(We stress that the problem definition specifies that the receiver knows which symbolsare erased.)The problem just reduces to a polynomial reconstruction problem on points.Anapplication of Theorem12yields that errors can be corrected provided.The classical results of this nature show that one can solve the decoding problem if.To compare the two results we restate both result.The classical result can be rephrased asBy the AM-GM inequality it is clear that the second one holds whenever thefirst holds.3.3Decoding with uncertain receptionsConsider the situation when,instead of receiving a single word,for each we receive a list of possibilities such that one of them is correct(but we do not know which one).Once again,as in normal list decoding,we wish tofind out all possible codewords which could have been possibly transmitted,except that now the guarantee given to us is not in terms of the number of errors possible,but in terms of the maximum number of uncertain possibilities at each position of the received word.Let us call this problem decoding from uncertain receptions.Applying Theorem12(in particular by applying the theorem on point sets where the ’s are not distinct)we get the following result.Theorem17List decoding from uncertain receptions on a Reed-Solomon code can be done in polynomial time provided the number of“uncertain possibilities”at each position is(strictly)less than.3.4Weighted curvefittingAnother natural extension of the algorithm of Section2is to the case of weighted curvefitting. 
This case is somewhat motivated by a decoding problem called the soft-decision decoding problem (see[31]for a formal description),as one might use the reliability information on the individualsymbols in the received word moreflexibly by encoding them appropriately as the weights below instead of declaring erasures.At this point we do not have any explicit connection between the two.Instead we just state the weighted curvefitting problem and describe our solution to this problem.Problem3(Weighted polynomial reconstruction)I NPUT:points,non-negative integer weights,and param-eters and.O UTPUT:All polynomials such that is at least.The algorithm of Section2can be modified as follows:In Step1,we couldfind a polynomial which has a singularity of order at the point.Thus we would now have constraints.If a polynomial passes through the points for,then will appear as a factor of provided is greater than--.Optimizing over the weighted degree of yields the following theorem.Theorem18The weighted polynomial reconstruction problem can be solved in time polynomial in the sum of’s providederrorsin an code,improving over the result of[24],which corrects up to,where:denoting its algebraic closure.is a set of points(typically some subset of(variety in)is a subset of.tois a non-negative integer called the genus of.The components of always satisfy the following properties:1.is afield extension of:is endowed with operations and giving it afieldstructure.Furthermore,for,the functions and satisfyand,provided and are defined.Finally, corresponding to every,there exists a function s.t.for everyis analogous to the degree ofa function.If,then the negative of is the number of zeroes has atthe point.The following axioms may make a lot of sense when this is kept in mind.)For every,,,then.(This property is just the general-ization of the well-known theorem showing that a degree polynomial may have at most zeroes.)5.Rate property:Observe that,by Property3(c)above,the set of functionsform a vector space over,for anyand nowhere else.Let denote the set,we have that is also a vector space over.The rate prop-erty is that for every,,is a vector space of dimension at least.(This property is obtained from the famed Riemann-Roch theorem for the actual realizations of,and in fact the dimension is exactly if.)The following lemma shows how to construct a code from an algebraic functionfield,given rational points.Lemma19If there exists an algebraic functionfield with distinct rational points,then the linear space form an code for some and.Proof:For,by Property2,we have that,and by Property3a we have that.Thus.By Property4,the map given byis one-one,and hence.By Property5, this implies has dimension at least,yielding.Finally,considerthat agree in places.If and agree at,then and thus by Property3a, .Furthermore,we have that for every,we haveCodes constructed as above and achieving(in the limit of large)are known for constant alphabet size.In fact,such codes achieving bounds better than those known by probabilistic constructions are known for[29].4.2The Decoding AlgorithmWe now describe the extension of our algorithm to the case of algebraic-geometry codes.As usual we will try to describe the data points by some polynomial.We follow[24]and let be a polynomial in a formal variable with coefficients from(i.e.,).Now given a value of,will yield an element of.By definition such an element of has a value at and just as in[24]we will also require to evaluate to zero. 
We,however,will require more and insist that“behave”like a zero of multiplicity of; since and,we need to be careful in specifying the conditions to achieve this.We, as in[24],also insist that has a small(but positive)order at for any substitution of with a function in of order at most at the point.Having found such a,we then look for roots of.What remains to be done is to explicitly express the conditions(i)behaves like a zero of order of for,and(ii)for any,where is a parameter that will be set later(and which will play the same role as the in our decoding algorithm for Reed-Solomon codes).To do so,we assume that we are explicitly given functionssuch that and such that.LetLemma21Given functions of distinct orders at satisfying and a rational point,there exist functions with and such that there exist for such that.Proof:We prove a stronger statement by induction on:If are linearly independent (over)functions such that for,then there are functionssuch that that generate the’s over.Note that this will imply our lemma as are linearly independent using Property3(c)and the fact that the ’s have distinct pole orders at.W.l.o.g.assume that is a function with largest order at ,by assumption.We let.Now,for,setif.If,using Lemma20to the pairof functions,we get such that the function satisfies.Since in this case,we conclude that in any case,for,and generate.Now are linearly indepen-dent(since are)and for,so the inductive hypothesis applied to the functions now yields as required.The shifting to is achieved by defining.The terms in that are divisible by contribute towards the multiplicity of as a zero of,or,equivalently,the multiplicity of as a zero of.We have(2) whereSince,we can achieve our condition on being a zero of mul-tiplicity at least by insisting that for all,such that. 
Having developed the necessary machinery,we now proceed directly to the formal specification of our algorithm.Implicit Parameters:;;;.Assumptions:We assume that we“know”functions of distinct orders at with,as well as functionssuch that for any,the functions satisfy.The notion of“knowledge”is explicit in the following two objects that we assume are available to our algorithm.1.The set such that for every,.This assumption is a very reasonable one since Lemma21essen-tially describes an algorithm to compute this set given the ability to perform arithmeticin the functionfield.2.A polynomial-time algorithm tofind roots(in)of polynomials in where thecoefficients(elements of)are specified as a formal sum of’s.(The cases forwhich such algorithms are known are described in[24,11].)The Algorithm:Inputs:,,.Step0:Computer parameters such thatandStep1:Find of the form,i.efind values of the coefficients such that the following conditions hold:1.At least one is non-zero.2.For every,,,such that,Step2:Find all roots of the polynomial.For each such,check if for at least values of,and if so,include in output list.(This step can be performed by either completely factoring using algorithms presented in[24],or more efficiently by using the root-finding algorithm of[11].)The following proposition says that the above algorithm can be implemented efficiently modulo some(reasonable)assumptions.Proposition22Given the ability to performfield operations in the subset of the functionfield when elements are expressed as a formal combination of the’s for,the above algorithm reduces the decoding problem of an algebraic geometry code(with designed distance)in time(measured in operations over)at mostto a root-finding problem over the functionfield of a univariate polynomial of degree at most with coefficients having pole order at most,whereProof:We have,for any such,and using(2),this yieldsSince for,,and if is defined by ,then,we get as desired.,implying.Thus is a root of and hence divides.,then a as sought in Step1does exist(and can be found in polynomial time by solving a linear system).Proof:The proof follows that of Lemma6.The computational task in Step1is once again that of solving a homogeneous linear system.A non-trivial solution exists as long as the number of unknowns exceeds the number of constraints.The number of constraints in the linear system is ,while the number of unknowns equalsLemma26If satisfy,then for the choice of made in the algorithm, which simplifies toIf,it suffices to pick to be an integer greater than the larger root of the above quadratic,and therefore pickingsuffices,and this is exactly the choice made in the algorithm.errors for a rate Reed-Solomon code and generalized the algorithm for the broader class of Algebraic-Geometry codes. 
Our algorithm is able to correct a number of errors exceeding half the minimum distance for any rate. A natural question not addressed in our work is the more efficient implementation of these decoding algorithms. Subsequent work in [23] addresses this issue for Reed-Solomon codes, and [11, 20] addresses this issue for both Reed-Solomon codes and Algebraic-Geometry codes. The list decoding problem remains an interesting question and it is not clear what the true limit is on the number of efficiently correctable errors. Deriving better upper or lower bounds on the number of correctable errors remains a challenging and interesting pursuit.

Acknowledgments

We would like to thank the anonymous referees for numerous comments which improved and clarified the presentation a lot. We would also like to express our thanks to Elwyn Berlekamp, Peter Elias, Jorn Justesen, Ronny Roth, Amin Shokrollahi and Alex Vardy for useful comments on the paper.

References

[1] S. Ar, R. Lipton, R. Rubinfeld and M. Sudan. Reconstructing algebraic functions from mixed data. SIAM Journal on Computing, 28(2):488-511, 1999.

[2] E. R. Berlekamp. Algebraic Coding Theory. McGraw Hill, New York, 1968.
