A change detection measure based on a likelihood ratio and statistical properties of SAR images

合集下载

高考英语语法填空动词形式练习题30题

高考英语语法填空动词形式练习题30题

高考英语语法填空动词形式练习题30题1<背景文章>Dr. Smith is a renowned scientist who has dedicated his life to researching a new type of material. His journey began when he stumbled upon an interesting phenomenon in his laboratory. Intrigued by what he saw, he decided to embark on a long and arduous research project.He spent countless hours in the lab, conducting experiments and analyzing data. The process was not easy. He faced many challenges along the way. One of the main difficulties was finding the right combination of elements to create the new material. He tried various combinations, but none seemed to work.However, Dr. Smith did not give up. He continued to persevere and eventually made a breakthrough. After months of hard work, he discovered a unique combination of elements that showed great promise. This led to the creation of the new material, which had remarkable properties.The new material was stronger than steel, yet lighter than aluminum. It also had excellent heat resistance and could withstand extreme temperatures. Dr. Smith's discovery has the potential to revolutionize many industries, from aerospace to construction.1. What was Dr. Smith's initial inspiration for his research?A. A book he read.B. An accident in the lab.C. A conversation with a colleague.D. A news article.答案:B。

江苏省苏州市2024-2025学年高二上学期11月期中英语试题

江苏省苏州市2024-2025学年高二上学期11月期中英语试题

江苏省苏州市2024-2025学年高二上学期11月期中英语试题一、阅读理解When you volunteer through United Way, you’ re joining 1.5 million people who are giving back so others can get ahead. Use your time and talent to create social change where you work or live — join our community of game changers.If you are accepted to become a United Way volunteer, you will be required to complete the series of immunization (免疫) requirements listed below. It is requested that you complete all of your immunization requirements through your own healthcare provider. However, if this presents a financial hardship for you, Employee Health or Occupational Health may be able to assist with these requirements.V olunteer duties vary, but may include visiting and socializing with patients, helping withlight tasks such as sorting or filing, and assisting patients/visitors with wayfinding.V olunteers cannot perform any “hand-on” or clinical duties, perform the same job as staff members, or shadow or observe medical staff.We do not offer any research or internship opportunities.1.What is the purpose of the United Way?A.To inform volunteers to take the vaccines.B.To attract volunteers to make a difference.C.To create social change in the community.D.To provide an opportunity of internship. 2.What can be learned about immunization requirements?A.Two doses of MMR are necessary for Measles.B.TDaP should be shot at the age of 17 immediately.C.2 negative TB tests are given in prior 12 months.D.Hepatitis B Vaccine is only for regional volunteers.3.What are the volunteers expected to do?A.Assist dying patients file their wills.B.Direct the way for patients and visitors.C.Take medical workers’ place in case.D.Observe doctors in a clinical setting.In 2015, Brian Peterson, a car designer for Kia Motors, moved to Santa Ana, California, with his wife, Vanessa. There, they often met a homeless man named Matt Faris, who would frequently shout on the street corner, sometimes disturbing their sleep. Initially, Peterson had no interaction with Faris, but everything changed after reading the book Love Does, which stressed the power of love in action. Inspired by the book, Peterson decided to introduce himself to Faris.In their first conversation, Peterson learned that Faris had moved to Southern California from Kentucky in pursuit of a music career but had fallen on hard times, living on the streets for over a decade. Despite Faris’s rough appearance, Peterson saw beauty in him and felt forced to paint his portrait, even though he hadn’t picked up a paintbrush in eight years. Faris agreed, marking the start of a transformative project.So Peterson went to establish Faces of Santa Ana, a nonprofit organization dedicated to painting portraits of unhoused individuals in the community. He captures each subject’s personality through colors and then sells the portraits, splitting the earnings with the subject. Half of the funds are placed in a “love account”, which helps them to address their personal needs.However, Peterson learned the importance of asking people directly how they wanted to use the money rather than assuming what they need most. For example, Faris used the funds from his portrait to record an album, while another subject, Kimberly Sondoval, used the money to help pay her daughter’s rent.Over the years, Peterson’s project expanded, leading him to found Faces of Mankind, where artists nationwide paint portraits of the homeless. To date, Peterson has personally painted 41 portraits. His work not only provides financial assistance but also fosters understanding and connection between the buyers and the subjects, with many buyers developing friendships with the individuals they initially overlooked. Peterson hopes his work will continue to change how people perceive the homeless.4.What inspired Brian Peterson to approach Matt Faris?A.He saw Faris had artistic talent.B.He read a book about love in action.C.He wanted to complain about Faris’s shouting.D.He intended to found a nonprofitorganization.5.How does Brian Peterson deal with the money he earns from selling the portraits?A.He donates it to charities.B.He uses it to expand his project.C.He sponsors the homeless to buy art supplies.D.He keeps half and gives the other half to his subjects.6.What is true about Peterson in Paragraph 3?A.He believes in giving without accepting.B.He prefers to decide how the money should be spent.C.He helps people based on their personal needs.D.He funds them to develop art-related projects.7.What would be the best title for this passage?A.Painting for Homeless B.Art Can Cure HomelessnessC.The Story of Matt Faris D.Selling Portraits for CharityScientists from the Hefei Institutes of Physical Science (HIPS) under the Chinese Academy of Sciences have designed a wristwatch that can measure essential chemicals in body sweat. Theirfindings were published in the journal ACS Nano.Sweat contains electrolytes (电解质), primarily potassium, sodium and calcium. The balance of these essential minerals is crucial for supporting muscle function, nerve health and regular heartbeat. The wristwatch collects sweat from the skin and analyzes it in real time using a sensor chip with sensitive membrane. When sweat enters the device, it will come into contact with the membrane that contains three tubes capable of measuring sodium, potassium and calcium levels respectively.Although they are not the first to invent sweat sensors, the Chinese researchers emphasized the wrist watch’s solid interface for long-term reliability. “It surpasses the stability of many other sensors by consistently monitoring human sweat for over six months,” said the lead researcher Huang Xingjiu.Since athletes use electrolyte drinks to counteract (抵制) the loss of energy and refill it, researchers in the study measured the sweat composition of these chemicals in athletes running long distances. The accuracy reached about 95 percent when compared to the standard detection method.“When there are electrolyte abnormalities, the device will remind users to supplement (补充) them quickly. The aim of developing this device is to provide warnings for electrolyte loss and reduce exercise-related injury risks.”For ordinary people, the traditional electrolyte test requires samples of body fluids taken in hospitals. The new wristwatch has the potential to serve as an alternative to needles for measuring electrolytes.The next goal of the research team is to design various sensitive membrane materials for monitoring more information. The researchers noted that compared to popular fitness watches on the market, the device they designed is larger and heavier, making it less comfortable to wear. However, they expect to develop wearable sweat sensors suitable for market applications in the next five years. The team of researchers also aims to adapt the device for environmental monitoring.8.What is Paragraph 2 mainly about?A.The working principle of the device.B.The composition of sweat.C.The function of a particular sensor.D.The balance of the minerals.9.Which of the following is the advantage of the wristwatch?A.accurate and popular.B.large and wearable.C.convenient and comfortable.D.reliable and stable.10.What does the author imply in the last paragraph?A.The wristwatch has no equal now.B.The product hasn’t hit the market yet.C.The researchers are content with the product.D.The device will be definitely put into extensive use.11.What is the purpose of the passage?A.To provide warnings for readers to reduce risks.B.To introduce a new product on monitoring health.C.To inform the readers of the importance of eletrolytes.D.To analyze the relationship between sweat and health.Victories are temporary in China’s fast-changing economy. Earlier this month Colin Huang, the founder of Pinduoduo, became China’s richest man. The company, founded in 2015, today is China’s third-largest e-commerce firm by sales, behind only JD.com and Alibaba.Mr Huang’s time atop China’s rich list was, however, brief. On August 26th Pinduoduo’s share price decreased by nearly 30% after it reported sales for the quarter from April to June fell short of the market’s high expectations and gave warning that a long-run decline in profitability was “unavoidable”.Pinduoduo’s misfortunes are set against a backdrop of weakening consumer spending in China. In June sales from the “618” shopping festival fell for the first time, despite a number of platforms extending their sales periods this year. A fierce price war is adding to the trouble. Visit any Chinese e-commerce site and you will be impressed by signs advertising huge discounts and promising the cheapest deals online. Competition has grown more intense because of attacks into e-commerce by short-video apps such as Douyin and Xiaohongshu. Some merchants are piling yet more pressure on the industry. Some Chinese e-commerce companies juice their sales by fining merchants for late deliveries or product mismatches. Last month hundreds of suppliers surrounded the offices of Temu, Pinduoduo’s foreign branch, in the southern city of Guangzhou to protest against such punishment.Pinduoduo may be hoping that international expansion will rescue it from deteriorating conditions at home. That will not be straightforward. Although the number of people using Temu, which launched in America in 2022, has rocketed, owing in no small part to the vast amounts it has spent on advertising, turning that into profit has proved trickier.What is more, America’s e-commerce Amazon is fighting back against the Chinese upstart. During its Prime Day sale in July it offered discounts of up to 70% on some products. It is also reportedly planning to launch a discount section on its site which will feature cheap items. 12.Why did Pinduoduo’s share price decrease sharply?A.China is experiencing a major economic boom.B.Colin Huang failed to run the company well.C.Sales for the second quarter didn’t meet the expectations.D.Consumers were unwilling to spend money on the platform.13.What difficulty is Pinduoduo faced with?A.The rapid rise of JD.com and Alibaba.B.The pressure from consumers and suppliers.C.The invasion of many overseas e-commerce giants.D.The intense price competition and weak purchasing power.14.What does the underlined word “deteriorating” in paragraph 4 probably mean?A.Declining.B.Alarming.C.Increasing.D.Changing. 15.How does the author find Pinduoduo’s expansion of international market?A.Hopeful.B.Cautious.C.Unclear.D.Favourable.People are constantly bombarded (轰炸) with unrealistic and potentially harmful images of “ideal” body types. 16 It’s also important to learn what your body can physically do and become comfortable. In order to accept your body, it’s important to get in touch with both of these aspects of your body on their own terms.17 This means not trying to change who you are or focus on qualities you don’t like. Learn to enjoy your body —— how you move, feel, and get around. Let go of how you used to look, especially if your body has undergone changes from injuries, or medical conditions. Be kind to your body as it is right now.Replace negative thoughts with positive ones. As soon as you notice yourself starting to have a negative thought, replace it with something positive about yourself. Give yourself time to get into the habit of thinking positively. Try starting each day by thinking a few positive thoughts.18Compare yourself only to yourself. 19 There’s no point in comparing yourself to others, regardless of whether the person is a celebrity or classmate sitting next to you. Instead, compare yourself in terms of how you’ ve progressed over time, now that you’ve created your own realistic goals. For example, you might think to yourself that you’ve improved your appearance compared to a few years ago.Know when to seek help. Understand that nearly everyone struggles to maintain positive body image all the time and it’s normal to have ups and downs. 20 There are various signs that your body issues are severe and require professional help.A.Accept your body as it is.B.Identify what you like about your body and appearance.C.Negative thoughts do nothing to improve your self-image.D.The world would be a pretty boring place if we all looked the same.E.But you should also honestly consider if you need to speak with a counselor, or specialist. F.This can make it difficult to accept, love, and feel confident in your own body, which is critical. G.Remind yourself of these thoughts throughout the day when you start feeling critical of yourself.二、完形填空Ever since I was little, the doctors told my parents that someday I would need hearing aids. Of all my 21 my ears are the ones I hate the most. Although my hearing was getting22 I hadn’t told anyone. The ocean sound that was always in my head had been getting louder,23 people’s voices, I even couldn’t hear teachers in class. But I knew if I told Mom about it, I’d 24 hearing aids.Then in my annual checkup, I 25 the audiology test and the doctor said, “Dude, it’s time.” And he 26 me to a special ear doctor. When the ear doctor first pulled thehearing aids out for me, I groaned.Normal aids usually have a part that wraps around the outer ear to hold the inner bud 27 . But since I didn’t have outer ears, they had to put the earbuds on this heavy-duty headband to wrap around the back of my 28 . I could imagine how strange I’d look — my classmates would laugh at me, and even my teachers, my friends would be 29 at me!“Can’t wear that, Mom; I’ll look like Lobot!” I complained.“Lobot?” The ear doctor smiled as he looked at the headphones and made some 30 . “The Empire Strikes Back? The bald guy?”“You know Star Wars stuff?” I asked. “Hey, Lobot’s cool,” said he, 31 the earphones on my head carefully. “There you go. So how’s that?”“It’s so quiet in my ears and I don’t hear that noise anymore! Thanks so much, Dr. James!”I answered 32 .The first day I showed up at school with the hearing aids, I thought kids would make a big 33 about it. But no one did. Now that I look back, I don’t know why I was so 34 about it all this time. Funny how sometimes you worry a lot about something and it turns out to be 35 .21.A.favorites B.features C.figures D.frights 22.A.worse B.less C.sharper D.lower 23.A.giving out B.making out C.bringing out D.drowning out 24.A.end up with B.keep up with C.put up with D.break up with 25.A.had B.failed C.escaped D.passed 26.A.invited B.brought C.sent D.showed 27.A.in place B.in order C.in use D.in store 28.A.ears B.head C.back D.chest 29.A.surprised B.amazed C.scared D.annoyed 30.A.differences B.efforts C.comments D.adjustments 31.A.hanging B.sliding C.striking D.arranging 32.A.automatically B.loudly C.greedily D.excitedly 33.A.fortune B.choice C.deal D.decision 34.A.stressed B.curious C.mad D.disappointed35.A.something B.everything C.anything D.nothing三、语法填空阅读下面短文,在空白处填入1个适当的单词或括号内单词的正确形式Breakdancing (霹雳舞) made its debut as an Olympic sport after 36 ( include) in this year’s games in Paris. It was the only new event 37 broke into the Olympics at this year’s games.38 (begin) in the Bronx in the 1970s and popularized globally through media, breaking has faced skepticism about its classification 39 a sport. The term “breakdancing” was coined by journalists 40 is not used by supporters. Indeed breakers never seriously 41 ( seek) a place at the games. They were generally 42 ( interest) in gold chains than gold medals. Thanks to the World Dance Sport Federation, breaking’s inclusion aimed to attract younger viewers and refresh Olympic viewership, highlighting both challenges and opportunities for its 43 ( participate).The flips (翻转) and 44 ( freeze) may be short-lived, however. Breaking will not return in Los Angeles in 2028. The IOC’s charter (章程) caps the number of athletes at 10,500 and host cities have 45 final say over their games. Organizers in LA chose to include larger and better-funded sports such as baseball and cricket.四、单词拼写46.Her sharp (辨别力) in artworks allowed her to distinguish genuine pieces and clever reproductions. (根据汉语提示单词拼写)47.A fortunate (相遇) on the street brought the two friends together after a long separation. (根据汉语提示单词拼写)48.We can provide you with a (全面的) guide to local hotels and restaurants. (根据汉语提示单词拼写)49.The (零花钱) her parents gave her each month taught the young student the value of saving and budgeting responsibly. (根据汉语提示单词拼写)50.Smartphones should be used as our tools rather than (主宰) our lives. (根据汉语提示单词拼写)51.R could I find a book that I was deeply absorbed in. (根据首字母单词拼写) 52.The government is c to reducing poverty and has made remarkable progress. (根据首字母单词拼写)53.The bomb exploded during the night, t lots of people in the building. (根据首字母单词拼写)54.The sudden news of his death s her suddenly, leaving her stunned and unable to react. (根据首字母单词拼写)55.The report presented a f account of the events that took place, without any exaggeration. (根据首字母单词拼写)五、书信写作56.假如你是李华,你校将给美国的友好学校赠送一批具有中国特色的学生书画作品,请你代表学校给该校校长Mr. Thomson写一封信,内容要点如下:1. 赠书画作品的目的;2. 简要介绍其中一幅书画;3. 表示需要与其沟通捐赠细节。

2023年上半年6级试卷

2023年上半年6级试卷

2023年上半年6级试卷一、写作(30分钟)题目: The Importance of Lifelong Learning。

要求:1. 阐述终身学习的重要性;2. 应包含具体事例或理由;3. 字数不少于150词。

二、听力理解(30分钟)Section A.Directions: In this section, you will hear two long conversations. At the end of each conversation, you will hear four questions. Both the conversation and the questions will be spoken only once. After you hear a question, you must choose the best answer from the four choices marked A), B), C) and D).Conversation One.1. A) He is a college student majoring in history.B) He is a tour guide in a local museum.C) He is a researcher in an archaeological institute.D) He is a teacher in a high school.2. A) To visit some historical sites.B) To attend an academic conference.C) To do some research on ancient civilizations.D) To take a vacation with his family.3. A) It has a long history.B) It is famous for its architecture.C) It has many rare artifacts.D) It is well - preserved.4. A) He is very interested in it.B) He has studied it before.C) He wants to write a paper about it.D) He is required to do so by his boss. Conversation Two.1. A) She is a fashion designer.B) She is a marketing manager.C) She is a magazine editor.D) She is a TV presenter.2. A) The latest fashion trends.B) How to promote products effectively.C) The influence of social media on fashion.D) How to choose the right models for shows.3. A) By conducting market surveys.B) By following fashion bloggers.C) By attending fashion shows.D) By reading fashion magazines.4. A) It is very challenging.B) It is quite rewarding.C) It is rather boring.D) It is extremely stressful.Section B.Directions: In this section, you will hear three short passages. At the end of each passage, you will hear some questions. Both the passage and the questions will be spoken only once. After you hear a question, you must choose the best answer from the four choices marked A), B), C) and D).Passage One.1. A) It was founded in the 19th century.B) It is located in a small town.C) It has a large number of students.D) It offers a wide range of courses.2. A) To improve students' practical skills.B) To attract more international students.C) To keep up with the latest technological developments.D) To meet the needs of local employers.3. A) They are required to do internships.B) They have to pass a series of exams.C) They need to write a thesis.D) They must complete a project.4. A) It has a high employment rate.B) It has a beautiful campus.C) It has excellent teaching facilities.D) It has a strong faculty.Passage Two.1. A) How to deal with stress.B) The causes of stress.C) The effects of stress on health.D) Different types of stress.2. A) Work pressure.B) Family problems.C) Financial difficulties.D) All of the above.3. A) By taking some medications.B) By doing regular exercise.C) By changing one's lifestyle.D) By seeking professional help.4. A) Stress can be completely eliminated.B) Stress is always harmful to people.C) A certain amount of stress can be beneficial.D) People should avoid stress as much as possible. Passage Three.1. A) It is a new form of energy.B) It is a renewable resource.C) It is widely used in industry.D) It is very expensive to produce.2. A) Solar panels.B) Wind turbines.C) Hydroelectric power plants.D) Nuclear reactors.3. A) It is clean and environmentally friendly.B) It can be stored easily.C) It is more efficient than other energy sources.D) It is not affected by weather conditions.4. A) The lack of government support.B) The high cost of installation.C) The limited availability of resources.D) The technical problems in production.Section C.Directions: In this section, you will hear a passage three times. When the passage is read for the first time, you should listen carefully for its general idea. When the passage is read for the second time, you are required to fill in the blanks with the exact words you have just heard. Finally, when the passage is read for the third time, you should check what you have written.The development of artificial intelligence (AI) has been one of the most significant technological trends in recent years. AI has the potential to (1) _revolutionize_ many industries, including healthcare, finance, and transportation.In healthcare, AI can be used to assist doctors in diagnosing diseases. For example, it can analyze medical images such as X - rays and CT scans more (2) _accurately_ than human doctors in some cases. This can lead to earlier detection of diseases and better treatment outcomes.In finance, AI can be used for fraud detection. It can analyze large amounts of financial data in real - time to identify (3) _suspicious_ transactions. This helps financial institutions to protect their customers' money and maintain the integrity of the financial system.However, the development of AI also raises some concerns. One concernis the potential loss of jobs. As AI systems become more capable, they may replace humans in certain tasks. Another concern is the ethical use of AI.For example, how should AI be used in decision - making processes that affect people's lives?To address these concerns, governments and international organizations need to develop appropriate regulations. At the same time, researchers and developers should also be (4) _responsible_ for ensuring the ethical use of AI.三、阅读理解(40分钟)Section A.Directions: In this section, there is a passage with ten blanks. You are required to select one word for each blank from a list of choices given in a word bank following the passage. Read the passage through carefully before making your choices. Each choice in the word bank is identified by a letter. Please mark the corresponding letter for each item on Answer Sheet 2. You may not use any of the words in the word bank more than once.The concept of "smart cities" has been around for a while, but it is only in recent years that it has really started to gain (1) _momentum_. A smart city is a city that uses information and communication technologies (ICTs) to improve the quality of life of its residents, enhance urban (2) _efficiency_, and promote sustainable development.There are many different aspects to a smart city. For example, in the area of transportation, smart cities can use ICTs to manage traffic flow more effectively. This can include things like intelligent traffic lights that (3) _adjust_ their timing based on real - time traffic conditions, and public transportation systems that provide passengers with real - time information about bus or train arrivals.In the area of energy management, smart cities can use ICTs to monitor and control energy consumption. This can help to reduce energy waste and (4) _lower_ carbon emissions. For example, smart meters can be installed in homes and businesses to provide real - time information about energy usage, and building management systems can be used to automatically adjust heating, ventilation, and air - conditioning (HVAC) systems based on occupancy levels.Another important aspect of smart cities is the use of ICTs for public safety. This can include things like surveillance cameras that are connected to a central monitoring system, and emergency response systemsthat can quickly (5) _dispatch_ police, fire, or ambulance services when needed.However, building a smart city is not without challenges. One of the biggest challenges is the need for (6) _interoperability_ between different ICT systems. For example, if a city wants to integrate its traffic management system with its energy management system, the two systems needto be able to communicate with each other effectively.Another challenge is the issue of data privacy and security. As smart cities collect and use large amounts of data about their residents, it is essential to ensure that this data is protected from unauthorized accessand misuse.Despite these challenges, the potential benefits of smart cities are significant. By improving the quality of life of their residents, enhancing urban efficiency, and promoting sustainable development, smart cities can play an important role in the future of urban (7) _development_.Word Bank:A) adjust.B) dispatch.C) efficiency.D) interoperability.E) lower.F) momentum.G) privacy.H) security.I) sustainable.J) urban.Section B.Directions: In this section, you will read several passages. Each passage is followed by several questions based on its content. You are to choose the best answer to each question. Answer all the questions following each passage on the basis of what is stated or implied in that passage.Passage One.The sharing economy has been growing rapidly in recent years. Companies like Airbnb and Uber have disrupted traditional industries by enabling individuals to share their assets, such as their homes or cars, with others.One of the main drivers of the sharing economy is the increasing use of mobile technology. Mobile apps have made it easy for people to connect with others who are interested in sharing their assets. Another driver is thegrowing awareness of the environmental and economic benefits of sharing. For example, sharing a car instead of owning one can reduce traffic congestion and carbon emissions.However, the sharing economy also faces some challenges. One challenge is the lack of regulation in some areas. For example, in some cities, Airbnb has faced criticism for not following local housing regulations. Another challenge is the issue of trust. Since sharing economy transactions often involve strangers, there is a need to build trust between the parties involved.1. What is the main idea of this passage?A) The sharing economy is a new and innovative business model.B) The sharing economy has both advantages and challenges.C) Mobile technology is the key to the success of the sharing economy.D) The sharing economy is facing serious regulatory problems.2. According to the passage, what are the main drivers of the sharing economy?A) Mobile technology and environmental awareness.B) The need to reduce traffic congestion and carbon emissions.C) The desire to make more money by sharing assets.D) The lack of trust in traditional industries.3. What is one of the challenges faced by the sharing economy?A) The high cost of using sharing economy platforms.B) The difficulty in finding people to share assets with.C) The lack of trust between the parties involved.D) The over - regulation of sharing economy activities.Passage Two.The rise of e - commerce has had a significant impact on traditional retail stores. In recent years, many brick - and - mortar stores have been closing down as consumers increasingly prefer to shop online.One of the main reasons for the popularity of e - commerce is the convenience it offers. Consumers can shop from the comfort of their own homes, at any time of the day or night. Another reason is the widerselection of products available online. E - commerce platforms can offer a much larger range of products than a single physical store.However, e - commerce also has some disadvantages. One disadvantage is the lack of the ability to physically examine products before purchasing. This can lead to problems such as receiving damaged or defective products. Another disadvantage is the potential for fraud. Since e - commerce transactions are often conducted over the Internet, there is a risk of identity theft and other forms of fraud.1. What is the main impact of e - commerce on traditional retail stores?A) It has made traditional retail stores more competitive.B) It has led to the closure of many traditional retail stores.C) It has forced traditional retail stores to improve their services.D) It has had no significant impact on traditional retail stores.2. What are the main reasons for the popularity of e - commerce?A) The lower prices and better quality of products.B) The convenience and wider selection of products.C) The ability to interact with other shoppers online.D) The faster delivery times and better customer service.3. What are the disadvantages of e - commerce?A) The high cost of shipping and handling.B) The limited selection of products.C) The inability to physically examine products and the potential for fraud.D) The lack of personal interaction with salespeople.Section C.Directions: There are 2 passages in this section. Each passage is followed by some questions or unfinished statements. For each of them there are four choices marked A), B), C) and D). You should decide on the best choice and mark the corresponding letter on Answer Sheet 2 with a single line through the center.Passage One.Genetically modified (GM) foods have been a controversial topic for many years. Supporters of GM foods argue that they can help to solve global food security problems by increasing crop yields and reducing the need for pesticides. They also claim that GM foods are safe for human consumption, as they have been extensively tested.However, opponents of GM foods have several concerns. One concern is the potential long - term health effects of consuming GM foods. Although there is currently no scientific evidence to suggest that GM foods are harmful, some people worry that there may be unforeseen consequences in the future. Another concern is the impact of GM crops on the environment. For example, some GM crops are engineered to be resistant to pesticides, which could lead to the overuse of pesticides and the development of pesticide - resistant weeds.1. What is the main argument of supporters of GM foods?A) GM foods are cheaper than non - GM foods.B) GM foods can solve global food security problems.C) GM foods are more nutritious than non - GM foods.D) GM foods are easier to grow than non - GM foods.2. What are the concerns of opponents of GM foods?A) The short - term health effects and the impact on the environment.B) The long - term health effects and the impact on the environment.C) The taste and quality of GM foods.D) The availability and cost of GM foods.3. What can be inferred from the passage about the safety of GM foods?A) GM foods are definitely safe for human consumption.B) GM foods are definitely harmful to human consumption.C) There is currently no scientific evidence to suggest that GM foods are harmful.D) There is scientific evidence to suggest that GM foods are harmful.Passage Two.The Internet has changed the way we communicate, learn, and do business. One of the most significant changes has been in the field of education. Online education has become increasingly popular in recent years, offering students a flexible and convenient way to study.There are many different types of online education programs available. Some are offered by traditional universities, while others are provided by specialized online education providers. These programs can range from short - term courses to full - degree programs.One of the advantages of online education is that it allows students to study at their own pace. They can log in to the course materials whenever they have time and complete the assignments at their own speed. Another advantage is that it can be more cost - effective than traditional education. Since there are no physical classrooms or campus facilities to maintain, online education providers can often offer lower tuition fees.However, online education also has some challenges. One challenge isthe lack of face - to - face interaction with instructors and other students. This can make it difficult for some students to stay motivatedand engaged in the learning process. Another challenge is the need for students to have a high level of self - discipline, as they are responsible for managing their own study time.1. What is the main change in education brought about by the Internet?A) It has made education more expensive.B) It has made education more difficult.C) It has made online education more popular.D) It has made traditional education obsolete.2. What are the advantages of online education?A) It allows students to study at their own pace and is cost - effective.B) It provides more face - to - face interaction with instructors.C) It has more comprehensive course materials.D) It offers a more traditional learning environment.3. What are the challenges of online education?A) The high cost of tuition and the lack of course variety.B) The lack of face - to - face interaction and the need for self - discipline.C) The difficulty in accessing course materials and the slow Internet speed.D) The short duration of courses and the lack of accreditation.四、翻译(30分钟)题目:中国的城市化(urbanization)将会充分释放潜在内需(domestic demand)。

小学上册第16次英语第2单元寒假试卷

小学上册第16次英语第2单元寒假试卷

小学上册英语第2单元寒假试卷英语试题一、综合题(本题有100小题,每小题1分,共100分.每小题不选、错误,均不给分)1.We can ___ a snowman. (build)2.Many plants change color in ______ (秋天).3.My mom likes to decorate the ____.4.My mom enjoys __________ (旅行) with the family.5.The chemical formula for lead(II) sulfide is _____.6.The chemical symbol for xenon is ______.7.What is the name of the famous monument in India?A. Taj MahalB. Qutub MinarC. Gateway of IndiaD. Red FortA8.What do you call a baby rabbit?A. KidB. BunnyC. PupD. KitB9. A ______ can be found in gardens and helps plants grow.10.The ______ helps with the detection of temperature changes.11.The invention of the telescope expanded our understanding of _____.12.The _______ has intricate patterns on its leaves.13.What do we call the study of living things?A. ChemistryB. BiologyC. PhysicsD. GeographyB14.小鸭子) follows its mother closely. The ___15.I drink _____ milk in the morning. (cold)16.The birds are ______ together in the trees. (singing)17.________ (生态影响) shapes landscapes.18.The ________ was a major turning point in the fight for social justice.19.Which instrument has keys and is played with fingers?A. GuitarB. ViolinC. PianoD. Drum20.The _______ (Great Fire of London) occurred in 1666 and destroyed much of the city.21.I love to watch my _________ (玩具火车) go around the track.22.Which word means "a young child"?A. TeenagerB. AdultC. ToddlerD. SeniorC23.What do bees produce?A. MilkB. HoneyC. EggsD. Butter24.What is the term for the effect of rotation on weather patterns on Earth?A. Coriolis EffectB. Weather SystemC. Atmospheric CirculationD. Wind Patterns25.My brother is a __________ (交通规划师).26.What is the term for the distance around a shape?A. AreaB. PerimeterC. VolumeD. Surface areaB Perimeter27.The parrot's beak is strong enough to crack ________________ (坚果).28.What is the capital of Malaysia?A. Kuala LumpurB. SingaporeC. BangkokD. JakartaA Kuala Lumpur29.This ________ (玩具) fuels my creativity.30.My uncle loves to __________ (分享) his recipes.31.I love to _______ (记录) my experiences.32.What do we call the practice of growing plants in water?A. HydroponicsB. AquacultureC. AgricultureD. HorticultureA33. A dilute solution has a low concentration of _____.34.The capital of Azerbaijan is ________ (巴库).35.What do you call a person who repairs computers?A. TechnicianB. EngineerC. ProgrammerD. DeveloperA36.The ancient Greeks developed the theater as an art form in ________ (雅典).37. A _______ (小蚊子) buzzes around during summer nights.38.How many pairs of wings does a butterfly have?A. 1B. 2C. 3D. 4B39.What is the name of the famous statue in Rio de Janeiro?A. Christ the RedeemerB. Statue of LibertyC. DavidD. Venus de MiloA Christ the Redeemer40.The element with the symbol I is __________.41. A _____ (生态旅游) can focus on plant life.42.The basic unit of a protein is an ________.43.We should ________ our hands before eating.44.We drink ________ (water) when we are thirsty.45.The _____ (ferns) thrive in shady areas.46.The ____ is a wise creature often associated with knowledge.47. A __________ can lead to the formation of new islands.48.The ______ helps people improve their skills.49.Fermentation is a chemical process that produces _______.50.The chemical formula for silver nitrate is ______.51.The ______ helps with the absorption of nutrients.52.What is 7 x 3?A. 18B. 20C. 21D. 2453.I see a butterfly near the ___. (flower)54.What is the primary color that mixes with red to create purple?A. BlueB. YellowC. GreenD. OrangeA55.The __________ is a popular tourist destination in Thailand. (普吉岛)56.What is the primary season for growing crops?A. WinterB. SpringC. SummerD. FallB57.The flowers in the garden are _______ and fragrant.58.Which planet is known for having a great red spot?A. JupiterB. SaturnC. MarsD. VenusA59.The ______ (自然灾害) can damage plant life.60.The park is very ________.61.ts produce _____ (毒素) for protection. Some pla62.What is the name of the first artificial satellite launched into space?A. Apollo 11B. HubbleC. Sputnik 1D. Voyager 163.My grandmother tells me about ____.64. A ______ has a unique pattern on its fur.65.The parrot is very _______ (善于交际).66.What do we call the process of making decisions based on evidence?A. GuessingB. EstimationC. ReasoningD. IntuitionC67.The chemical process of respiration converts glucose into ______.68.The ______ is known for her artistic vision.69.I saw a ________ in the flower bed.70.What do bees make?A. ButterB. HoneyC. JamD. SyrupB Honey71.The _____ (kite/balloon) is flying high.72.The sun is very ________ today.73. A volcano that is currently erupting is called an ______ volcano.74.The __________ (历史的教训) is invaluable.75.__________ are substances that can conduct electricity when dissolved in water.76.Acids turn blue litmus paper _______.77.Matter can exist in different states, including solid, liquid, and ________.78.Many plants have _____ (刺) to protect themselves.79.n Tea Party was a protest against __________ (税收). The Bost80.Which instrument has 88 keys?A. OrganB. HarpC. PianoD. Accordion答案:C81. A horse makes a ______ sound.82.The __________ (古代文明) had complex societies.83.The clock is ___. (ticking)84.The chemical formula for glycine is ______.85. A well-maintained garden can be a source of ______. (一个维护良好的花园可以成为快乐的源泉。

电容测量英语翻译

电容测量英语翻译

A scheme,the features,the operation and the metrological characteristics of a new simple instrument formeasuring small capacitances with a linear measurement characteristic and a sensitivity of 0.01 pF aredescribed.Key words:measuring instrument,capacitance,sensitivity.The use of capacitive sensitive elements (capacitive sensors) in different areas of activity is due to their simple con-struction,accessibility,and high sensitivity of the primary measuring transducers. Their output quantity is the change in the electrical capacitance due to the action of the input parameter being measured. In some cases,capacitive sensors are given preference over other sensitive elements due to the possibility of using them for noncontact nondestructive testing of tech-nological processes (in particular in an automatic system) or for determining the parameters of fabricated articles.All possible applications of capacitive sensors are determined by the dependence of the capacitance of the capaci-tor on the geometrical dimensions and electrical parameters of the surroundings. The equation for the capacitance of a par-allel-plate capacitor (ignoring edge effects) isC = ε0εS /d ,where ε0is the permittivity of free space,εis the relative permittivity of the medium between the capacitor plates,S is the area of the plates,and d is the distance between them.In general,a capacitive sensor is a capacitor of arbitrary shape and dimensions. Only the dependence of its capac-itance on the parameter being measured (or monitored) is important. It is sometimes desirable to ensure that the electrical capacitance depends linearly on the external action.According to the expression given above,one can distinguish sensitive capacitive elements with a varying geome-try of the capacitor and sensitive elements which react to a change in the permittivity of the object being monitored. The lat-ter feature is of the greatest interest since it has a wider range of application.Capacitive sensors based on a measurement of the change in the permittivity are used both to determine the com-position of different substances and materials,and to measure level when the filling of the gap between the capacitor plates with a uniform medium varies. Hence,one can monitor the moisture content of different solids,friable and viscous materi-als,and also certain food products. The monitored foreign inclusions can be air bubbles or electrically conducting particles in solid sheet insulating materials for known dimensions of the samples and values of the permittivity of the material itself.In this case the material is monitored without damaging its structure (flaw detection).Measurement Techniques,Vol. 46,No.7,2003AN AUTONOMOUS INSTRUMENT FOR MEASURINGSMALL ELECTRICAL CAPACITANCES WITHA LINEAR CHARACTERISTICELECTROMAGNETIC MEASUREMENTSK. K. Kim,V . Yu. Barbarovich,and V . I. Asmus UDC 621.317.39(075.8)Translated from Izmeritel’naya Tekhnika,No. 7,pp. 27–30,July,2003. Original article submitted February 27,2003.0543-1972/03/4607-0673$25.00 ©2003 Plenum Publishing Corporation 673When measuring the thickness of layers of electrically insulating materials (films,fabrics,lacquer coatings,count-ing the number of sheets,etc.) or electrically conducting films (foils etc.) the material being investigated is placed in the gap between the capacitor plates. It should be noted that,under automatic conditions,such materials can be monitored at high speed since this method is practically inertia-free and the determination of the information parameter of the capacitive sen-sor is limited solely by the operating frequency of the capacitance measuring instrument employed.A no less important element of any measuring circuit,which uses a capacitive sensor as the primary transducer,is the instrument used to measure the capacitance,with which the readings are made. A number of requirements,governed by the measurement conditions,are imposed on such measuring instruments.First,its measurement range must provide the sensitivity necessary for the results to be reliable. In many cases,the capacitance of the sensor ranges from a few picofarads to tens of picofarads,and the change in the capacitance of the sensor under the action of the parameter being measured may lie in a narrow range,for example,1–10% of the steady value of the capacitance (as,for example,when monitoring the thickness or moisture content of sheet materials).Second,the working frequency must be sufficiently high (greater than 104Hz) in order to reduce the reactance and inertia of the measuring circuit when recording high-speed processes [1].Third,the measuring instrument must make measurements in a three-terminal circuit to reduce the effect of the capacitance of the conducting leads and of external electromagnetic interference.Fourth,in order to ensure that the instrument can be used under steady field conditions,the instrument must have an autonomous supply from built-in voltage sources of not more than 9–12 V,and these should be simple and convenient to use and have small dimensions and mass.The scale of the measuring instrument can be calibrated in units of capacitance,if the form of the dependence of the sensor capacitance on the monitored parameter is not known exactly or another calibration is inconvenient (a single sample, the instrument is multifunctional,etc.). The scale of the measuring instrument can also be calibrated in units of the measured physical quantity.As a rule,up to recently,the capacitance was measured using resonance circuits and alternating-current bridges with self-balancing,which had the following drawbacks:a nonlinear measurement characteristic,and a dependence of the opera-tion of the instrument on an external supply (220 V). This limits the range of application of capacitive sensors to practical-ly steady measurement conditions,and prevents their wide use under field conditions,for example,for checking the safety of reinforced concrete structures for transport or in the electric power industry.During the last few years,portable instruments for measuring capacitance have appeared on the market with a power supply from built-in sources,with a measurement limit of 1 nF or greater at frequencies of 150–1000 Hz. However,their sen-sitivity characteristics and working frequency do not always satisfy the requirements for recording small differences in capac-itance under complex measurement conditions. The relatively low working frequency (1000 Hz) limits their area of applica-tion when monitoring high-speed technological processes. The sensitivity threshold of these instruments is not less than 1 pF, and capacitances are measured in a two-terminal circuit,which,when recording small values of capacitance,means that the conducting wires and the surrounding objects have a considerable influence on the overall error of the results of measurements.We have developed an instrument for measuring capacitance which enables the area of application of capacitive sen-sors to be extended.The proposed instrument with small dimensions,which do not exceed the dimensions of the instruments measured above,and an autonomous supply from ordinary batteries (1.5–9 V),possesses the sensitivity of laboratory instruments. Existing samples of the instrument when tested showed a sensitivity threshold of better than 0.01 pF. We did not tackle the problem of attaining the greatest sensitivity,but if the limit of measurements is reduced to less than 10 pF,the sensitivity of the instrument can be increased.The first sample of the instrument designed with a magnetoelectric output instrument had a linear measurement characteristic,a measurement limit of 50 pF,and a sensitivity threshold of 0.3–0.5 pF. No requirements regarding the oper-ating frequency and voltage of the autonomous power supply were introduced,and hence the instrument had a frequency of 1000 Hz and a supply of 9 V. The limit of the reduced error of the output instrument was ±1% (accuracy class 1.0). The error of the instrument,determined by comparison with KVDP-1 and R597 capacitance standards in a three-terminal circuit,674did not exceed the error limit of the output instrument. The sensitivity threshold (0.3–0.5 pF) was limited by the resolution of the scale of the magnetoelectric output instrument.A later sample was designed to be included in an instrument for counting banknotes. Taking into account the high rate of motion of the banknotes and the relatively large gap between the electrodes of the capacitive sensor compared with the thickness of the notes,a capacitance measuring instrument was constructed with a measurement range of 10 pF,a sensi-tivity threshold of 0.01 pF,and a working frequency of 50 kHz.We also constructed a multilimit instrument for measuring capacitance with the following technical characteristics:measurement characteristic . . . . . . . . . . . . . . . . .linearmeasurement range,pF . . . . . . . . . . . . . . . . . . . .0.01–106working frequency,Hz . . . . . . . . . . . . . . . . . . . . .100–5·104limit of permissible reduced error,% . . . . . . . . . .0.5–1.0supply voltage,V . . . . . . . . . . . . . . . . . . . . . . . . .±(1.5–9)current consumption,mA . . . . . . . . . . . . . . . . . . . 2.0–9.0mass,kg,not greater than . . . . . . . . . . . . . . . . . .0.3overall dimensions,mm,not greater than . . . . . . .50 ×100 ×150It should be noted that the overall dimensions,mass,and other technical characteristics are determined by the com-ponents used and can be improved.The capacitance measuring instrument was based on a function generator which we developed,a block diagram of which is shown in Fig. 1. The proposed function generator is the fundamental measurement transducer which responds to a change in the input capacitance of the instrument.Electronic circuits which generate signals of special form,for example,triangular,rectangular or sinusoidal,are called function generators [2].The function generator contains a master oscillator 1and an adder 2,connected in a circuit consisting of an inte-grator 3and a Schmitt trigger 4connected in series,where the output of the master oscillator 1is connected to one of the inputs of the adder 2,to the second input of which the output of the Schmitt trigger 4is connected,while the output of the adder 2is connected to the input of the integrator 3.One can use as the master oscillator any type of generator which has a rectangular voltage at the output,symmetri-cal with respect to the point of zero potential of the instrument. As the adder 2,one can employ any device which carries out logical summation of the voltages of the master oscillator 1and of the Schmitt trigger 4,so that,at the output of the adder,the voltage is equal to zero if voltages of different sign are applied to its inputs,and is identical in sign with its input voltage if these voltages have the same sign.It should be borne in mind that in this circuit the Schmitt trigger acts as a comparator,which compares two voltages:the output voltage of the integrator and its own output voltage,and can be replaced by another type of comparator.675Fig. 1. Block diagram of the function generator:1) master oscillator;2) adder; 3) integrator; 4) Schmitt trigger.Fig. 2. V oltage diagrams.The function generator operates as follows. The master oscillator 1generates rectangular signals,symmetrical about the point of zero potential of the instrument,in a special case with a duty ratio of 2. These signals are applied to the input of the adder2,where they are compared with the output voltage of the Schmitt trigger 4. At the instant when the compared voltages have the same sign,the circuit is triggered. The adder 2generates a voltage of the same sign as the Schmitt trigger 4and the mas-ter oscillator 1. This voltage is applied to the input of the integrator 3,which in turn begins to generate a linearly varying voltage.If the time taken for the output voltage of the integrator 3to reach the operating threshold of the Schmitt trigger 4 is less than the pulse length of the master oscillator 1(or the frequency of natural self-excited oscillations of the integrator 3 and the Schmitt trigger 4is greater than the frequency of the master oscillator 1),the output voltage of the Schmitt trigger changes sign earlier than the master oscillator. The signals from the Schmitt trigger and the master oscillator are then in antiphase,and at the output of the adder the value of the voltage will be equal to zero,as a result of which the output voltage of the integrator will not change. This state will continue until the master oscillator changes the sign of its own output volt-age and then is of the same polarity as the output voltage of the Schmitt trigger. At the output of the adder there will be a voltage of the same sign as at the outputs of the Schmitt trigger and the master oscillator. The output voltage of the integra-tor will vary linearly in the opposite direction and will again reach the operating threshold of the Schmitt trigger earlier than the output voltage of the master oscillator reverses. The voltage at the output of the adder will be equal to zero,and the out-put voltage of the integrator will not change. This state will continue until the master oscillator changes the sign of its own output voltage,in which case the output voltage of the integrator begins to vary linearly in the opposite direction.Hence,the function generator generates periodic signals u m.o,the frequency of which is specified by the frequencyof the oscillations of the master oscillator (Fig. 2). The integrator generates signals u i of trapezoidal form,in which case thefrequency and start of the signals are identical with the frequency and start of the signals of the master oscillator. The Schmitt 676trigger,controlled by the integrator,generates signals u t of rectangular form,which have the same frequency but are shifted with respect to the signals of the master oscillator by a time ∆t,determined by the difference in the pulse length t m.o of the master oscillator and the integration time t i of the integrator∆t= t m.o– t i.The adder generates width-modulated signals u a of rectangular form and opposite polarity with a pulse length of t a, determined solely by the integration time t i of the integrator:t a= t i= t m.o– ∆t.The length t a of the rectangular pulses at the output of the adder or the time shift ∆t between the Schmitt trigger sig-nals and the master oscillator signals,due to the well known properties of the integrator [2],can be regulated by the param-eters of the time-setting circuit of the integrator with an extremely linear characteristic:t a= t i= Kτi;(1)∆t= t m.o– t a= t m.o– Kτi,(2)where τi is the integration constant of the integrator and K is a coefficient of proportionality,which depends only on the oper-ating threshold of the comparator.The characteristic is linear if a single-value change in the function by an amount ∆y= a∆x corresponds to an incre-ment in the argument by ∆x,where a is a coefficient of proportionality [3]. Expressions (1) and (2) satisfy this requirement. If the time-setting circuit of the integrator is constructed in the form of an RC-circuit,we havet i= RC,and a change in any of the arguments R or C by ∆R of ∆C leads to a single-valued change in the function t a= K∆τi or ∆(∆t)=–K∆τi.Hence,the time shift ∆t between the signals of the Schmitt trigger and the master oscillator can be linearly changed by one of the parameters of the time-setting circuit of the integrator from 0 to t m.o which,on changing to angle characteris-tics,will correspond to a phase shift from 0 to 180°.The function generator enables one to obtain two sequences of rectangular pulses of the same frequency,shifted in time,where the time shift can vary linearly depending on the parameters of the time-setting circuit of the integrator. The time shift can be varied from 0 to 180°in as small a step as desired,determined solely by the characteristics of the components used in the circuit. The duration of the rectangular pulses at the output of the adder is a function of the capacitance of the time-setting circuit (or a function of the capacitance being measured).The properties of the function generator were used to construct an instrument for measuring electrical capacitance. In this case,the information parameter,which affects the operation of the function generator,is the external measured capac-itance of the capacitive sensor,which,in turn,depends on the permittivity of the medium being investigated,the moisture content,the filling level,the composition,the capacitor geometry and other parameters being investigated.The instrument can also be used as a built-in unit in a multimeter or as a separate instrument for measuring electri-cal capacitances with high sensitivity.REFERENCES1. P. Profos (ed.),Measurements in Industry. Methods of Measurement and Apparatus.A Reference Book[in Russian],Metallurgiya,Moscow (1990).2. W. Tittse and K. Shenk,Semiconductor Circuit Technique[Russian translation],Mir,Moscow (1982).3. V. O. Arutyunov,Electrical Measuring Instruments and Measurements[in Russian],GÉI,Moscow,Leningrad (1958).677。

Design and validation of a uniform flow microreactor

Design and validation of a uniform flow microreactor

Journal of Mechanical Science and Technology 28 (1) (2014) 157~166/content/1738-494xDOI 10.1007/s12206-013-0954-5Design and validation of a uniform flow microreactor†Seung Jae Yi1, Ji Min Park3, Seung-Cheol Chang2,* and Kyung Chun Kim1,*1 School of Mechanical Engineering, Pusan National University, Busan, 609-735, Korea Institute of BioPhysio Sensor Technology, Pusan National University, Busan, 609-735, Korea 3 LT development team, R&D Division, Global HQ, Hankook Tire Co., LTD., Daejeon, 305-725, Korea 2(Manuscript Received March 20, 2013; Revised August 1, 2013; Accepted August 6, 2013) ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------AbstractWe present a design method to characterize uniform flows in a microreactor for high performance surface plasmon resonance (SPR) a general-purpose biosensor chips. The shape of the microreactor is designed based on an approximate pressure drop model. The number of micro-pillars and the slopes of the inlet and outlet linear chambers are two dominant parameters used to minimize the velocity difference in the microreactor. The flow uniformity was examined quantitatively by numerical and experimental visualization methods. A computational fluid dynamics (CFD) analysis demonstrates that the designed microreactor has a fairly uniform velocity profile in the reaction zone for a wide range of flow rates. The velocity field in the fabricated microreactor was measured using the micro-particle image velocimetry (μ-PIV) method, and the flow uniformity was confirmed experimentally. The performance of the uniform flow microreactor was verified using the fluorescence antibody technique.Keywords: Microreactor design; Uniform flow; CFD; Micro-PIV; Fluorescence antibody technique ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------1. IntroductionChemical and mechanical engineers have become increasingly interested in immuno-sensors. Many microreactor and lab-on-a-chip devices have been improved by recent advances in microfabrication processes and nanotechnology. Immunosensors also have been paid special attention on the device design and fabrication processes because the high characteristics of selectivity and sensitivity of biological moieties [1-3]. Surface plasmon resonance (SPR) is a sensitive technique used to detect small changes occurring on the surfaces of thin noble metal films. Many researchers have selected this technique for its high sensitivity sensing range in which the sensing device presents an optimum working condition. The SPR has many applications: surface-enhanced spectroscopy [4-6], lithographic fabrication [7, 8], and biological and chemical sensing [9-12]. In biological and chemical sensing research, SPR is a surface-sensitive optical technique that can be used to study nano-scale thin (organic) layers on noble metal films. The technique is based on measuring the change in the refractive index due to surface modification; for example, the binding of molecules to the surface [13]. The SPR technique has been used to measure thermophysiCorresponding author. Tel.: +82 51 510 2324; +82 51 510 2276, Fax.: +82 51 515 7866; +82 51 514 2122 E-mail address: kckim@pusan.ac.kr, s.c.chang@pusan.ac.kr † Recommended by Associate Editor Simon Song © KSME & Springer 2014*cal properties of liquid media. Kim and Kihm (2006) conducted label-free visualization of microfluidic mixture concentration fields using SPR reflectance imaging [14]. They successfully extended their method to real-time and full-field detection of near-wall salinity using SPR reflectance [15]. The feasibility of SPR imaging thermometry has been tested as a potential tool for full-field and real-time temperature field mapping for thermally transient liquid media based on the temperature dependence of SPR reflectance intensity variation [16]. For biological sensing in our laboratory, a closed loop flowing SPR spectroscope (K-MAC Co. Ltd., Korea) was used to detect immune reactions such as the antigen-antibody reaction. An SPR chip with a gold layer to immobilize the antibody was used for the SPR analysis. Fig. 1(a) shows the geometry of a conventional SPR chip. A 50 nm-thick hexagonal gold layer was deposited onto the glass bottom wall of the hexagonal reaction chamber. The chamber was connected to inlet and outlet straight microchannels for reagent flow. There are several steps in conducting the SPR analysis such as washing with EtOH, pumping de-ionized (DI) water for cleaning, flowing a ProlinkerTM mixture for immobilizing antigens, and flowing antibodies for immune reactions. These procedures are processed on the SPR spectroscopy, therefore, uniform flow in the SPR chip is an important issue in this system. If there are recirculating flow zones in the SPR chip, many kinds of reactants could remain in the reaction chamber de-158S. J. Yi et al. / Journal of Mechanical Science and Technology 28 (1) (2014) 157~166carried out. Finally, a fluorescent antibody technique was used to confirm the performance of the reaction between antibodies and antigens in the reactor chamber.2. Design and fabrication of a uniform flow microreactor2.1 Approximate model for flow uniformity(a)(b)(c) Fig. 1. Conventional SPR chip: (a) schematic of the SPR chip; (b) velocity vector field by CFD analysis; (c) velocity profile along the line (A)-(A’).spite the washing processes. In this case, the sensitivity and accuracy of an SPR measurement cannot be ensured. Thus, the uniform flow in the reaction chamber is very important for better SPR spectroscopy performance. We performed a computational fluid dynamics (CFD) analysis of the flow field in a conventional SPR chip and found that the flow field had very large recirculating flow regions in the reaction chamber, as shown in Fig. 1(b). Due to sudden expansion from the inlet microchannel to the hexagonal reaction chamber, fairly large negative velocity vectors are created as shown in Fig. 1(c). As a result, the velocity profile along the center of the reaction chamber has severe non-uniformity, which degrades the performance of the SPR analysis. We designed a uniform flow microreactor for a generalpurpose biosensor chip. For this purpose, an approximate model was developed based on a simplified description of the microreactor as a network of equivalent rectangular ducts. The resulting design using the approximated model was then validated using CFD simulation and an experimental verification of the flow field was conducted using μ-PIV measurements. A comparison between the CFD and experimental results wasMicroreactors present advantages over macroscopic reactors, particularly in heat management [17], by facilitating isothermal operation or enabling the coupling of endothermic and exothermic reactions. In addition, the small characteristic dimensions of microreactors can provide safe operation with highly reactive or hazardous products, and these characteristics suggest new and original designs capable of reducing the number of reactive steps in long synthesis processes. Such advantages could improve selectivity and reduce energy consumption. However, poor uniformity of fluid distribution can limit severely the inherent advantages of microreactors. Commenge et al. suggested an optimal design guideline for flow uniformity in microchannel reactors. They examined the influence of the geometrical dimensions of the reactor microstructure on the velocity distribution between channels [18]. Many types of microreactor, such as surface plasmon resonance (SPR) chip, should have a clean reaction chamber without any structures. Instead of microchannels, we used micropillar structures at the upstream and downstream of the reaction chamber. Between these micro-pillars, there is microreactor similar to a microchannel reactor. There is microreactor, which is similar to a microchannel reactor. Therefore, we adopted the design guideline suggested by Commenge et al.. Fig. 2(a) shows a plate with Nc channels divided into Nc zones. Each zone was oriented perpendicularly to a bisecting line whose origin was located in the inlet or outlet chamber associated with the corresponding chamber. The zones were equally spaced along the chamber geometry, each with an identical zone length Le. Since the position of each zone varied along the chamber geometry, the width Wi of each zone i depended on its position. In this design, the inlet and outlet chambers were symmetrical, leading to an identical zone construction for both chambers of a given plate. Once the widths Wi of the Nc zones are known, the approximate model can be constructed according to the principle presented in Fig. 2(b). We assumed that the width and depth of each channel was the same, the channel width is We, and the depth is e. The starting point at the inlet zone had height width W1, and the height width of the ending point at the outlet zone was defined as Wnc. Each zone-channel pair was considered to be a portion of a duct with a rectangular cross section along which the pressure drop could be easily calculated with standard hydraulic formulae. A simple approach for studying flow distribution in parallel structures involves calculation of the pressure variation in the distribution system. The entire structure can be considered as a resistive network of ducts in which resistance is due to flow friction. The linearS. J. Yi et al. / Journal of Mechanical Science and Technology 28 (1) (2014) 157~166159NcOutletThe pressure drop relation for the approximate model results from the assumption that the boundaries of chamber zones are iso-pressure contours. Thus,DPchannel ,i + DPoutletzone ,Nc +1-i = DPinletzone ,i+1 + DPchannel ,i+1 for 1 £ i £ N c - 1 .(5)InletNc(a) Description of chamber zones with corresponding channels ranging from 1 to NcV'1As a result, pressure at the entrance to channel i and inlet zone i are equal. Equal velocities in channels i and i + 1 imply equal pressure drops through these channels. Thus,DPchannel ,i = DPchannel ,i+1 for 1 £ i £ N c - 1 .V'2V'NCOutletLe(6)WeThe condition for a uniform velocity distribution impliesDPoutletzone ,Nc +1-i = DPinletzone ,i+1 for 1 £ i £ N c - 1 .(7)U1UiUNCWi eSide view of cross section AALCInletW1V1V2Vi AAVNCWNFor equal velocities Uc through channels, mass balances on the zones from i to Nc of the inlet chambers and on the zones from i to Nc of the outlet chambers are, respectively,C(b) Channels of a plate with different parameters and variables involved in the approximate model for velocity distribution Fig. 2. The principle of designed model for flow uniformity in the microreactor.WV i i = ( N c + 1 - i )WeU cand Wi 'Vi ' = ( N c + 1 - i )WeU c . (8)For symmetric inlet and outlet chambers,Wi ' = Wi ; thus Vi ' = Vi .(9)pressure drop due to friction through a rectangular duct in laminar creeping flow with length L, width W, and thickness e is equal toDPf = 32lncUsing Eq. (5) and the mass balances, the condition for a uniform velocity distribution between the channels implies the following equality condition:Nc - i ö Wi+1 æ 1 ç1 - 0.351 ÷ e ç w / e ( i+1-i ) ÷ è ø2m Lum2 DH(1)=i WNc +1-i æ 1 ç1 - 0.351 e ç wNc +1-i / e èwhere μ is the dynamic viscosity of fluid, um is a mean velocity, the hydraulic diameter (DH) is defined as2We DH = W +e()ö ÷ ÷ ø2(10) thereby resulting in a direct relation between the width Wi+1 of inlet zone i+1 and the width WNc+1-i of outlet zone Nc+1-i. Determination of an optimum chamber geometry yielding a uniform velocity distribution can be performed by calculating the Wi that satisfy the above condition. For linear chamber geometry, the widths of the chamber zones are related to each other bywi w1 i - 1 æ wNc w1 ö = + - ÷. ç e e Nc - 1 è e e ø(2)and the noncircularity coefficient of the duct λnc is a function of the ratio of duct thickness to duct width:lnc =1.5 ææ e öæ e öö ç ç 1 - 0.351 ÷ç 1 + ÷ ÷ w øè w øø èè2.(3)(11)The pressure drop can be calculated by following any of the Nc substreams flowing through the channel structure. The pressure drop of substream j is given byDPplate = å i =1 DPi ,inletzone + å i =c1 + DPj , chamber .j N -1- jDPi , outlet + DPj , channel(4)The constraint of a linear chamber wall is too strong to allow completely uniform velocity distributions to be obtained. Nevertheless, the differences can be reduced to less than a few percent in most cases, which is sufficient for practical applications. Direct calculation was used for the linear chambers. For160S. J. Yi et al. / Journal of Mechanical Science and Technology 28 (1) (2014) 157~166upstream and downstream of the reaction chamber. The length of the pillar was 2.5 mm and the thickness was 0.5 mm. The microchannel between pillars was 0.2 mm wide, and the depth of the microreactor was 0.5 mm. The shapes of the inlet and outlet chambers were identical, and the slope angle (83º) was determined from the approximate model. A diffuser-type microchannel was connected from the circular inlet port to the inlet chamber. 2.2 Fabrication of uniform flow microreactor(a) Geometry of newly-designed reaction chip(b) Newly-fabricated reaction chip Fig. 3. Newly-designed reaction chip for flow uniformity.fixed values of Nc, Le, and Wce, all possible linear chamber geometries can be represented as points (W1, WNc) that give the minimum velocity differences between the channels of a plate. The design procedure is as follows: (1) The widths and thicknesses of the channels are chosen. (2) The thickness of the walls between channels can be chosen based on the technical capabilities of the micro-fabrication process. The wall thickness and channel width determine the chamber zone length Le. (3) Channel length Lc and fluid velocity Uc are chosen depending on the space and time required for reaction as well as the maximum pressure drop through the reactor. (4) The total number of channels Nc can then be calculated depending on the flow rate that must circulate through the reactor. (5) Once all the other geometric parameters have been determined, the optimum linear chamber can be designed using an approximate model to determine the widths of the first and last zones of the chamber. Fig. 3(a) shows the design result obtained from the approximate model with 10 pillars (11 channels), and linear inlet and outlet chambers. For the SPR measurement antibodyantigen reaction, a rectangular reaction chamber with dimensions of 6.0 mm by 7.2 mm was provided. The design flow rate was 3 mL/min, which is the normal operation condition of the SPR analysis. And micro-pillar structures were positionedThe microreactor must be optically accessible for μ-PIV measurement and the fluorescent antibody technique, so transparent materials were chosen. The pillars, reaction chamber, and inlet/outlet channels were made of polydimethylsiloxane (PDMS) and were bonded to a glass wafer. The mold for the PDMS microreactor was made from steel plate (S45C) and was manufactured using a micro-machining process. To make the PDMS microreactor, PDMS solution and hardener were mixed in a 10:1 ratio, and the mixture solution was poured into the metal mold. The mold with the PDMS mixture solution was baked at 80ºC in a vacuum oven for two hr. After the curing process, the PDMS part was reefed off from the mold. The PDMS product was used for the upper part of the antibody-antigen reaction chamber. The bottom side of the reaction chamber was made using a micro-electro-mechanical system (MEMS) process. Twelve identical patterns were arranged on the 4 inch glass wafer, and the size of the each pattern was 18 mm x 18 mm. A hexagonal gold layer was deposited in the center of each chip pattern with a thickness of 50 nm and side length of 3 mm using the E-beam evaporator. The final antibody-antigen reaction chip were fabricated by bonding these two parts--the PDMS microreactor and the square glass chip with the gold layer--as shown in Fig. 3(b).3. Validation of uniform flow microreactor3.1 CFD analysis of the microreactor To validate flow uniformity, a finite volume method CFD analysis was performed for the design condition (Q = 3.0 mL/min) and off-design conditions (Q = 1.0 and 5.0 mL/min) using the commercial code FLUENT. The three-dimensional (3-D) grid model of the microreactor shown in Fig. 4(a) was used for numerical simulation. To generate the grid system, GRIDGEN software was used. The structured grid model was composed of 41,000 cells. The de-ionized water was used for the working fluid to compare the results with experiment. Density and viscosity of the working fluid are 998.2 kg/m3 and 1.003 X 10-3 kg/m·s, respectively. The entrance flow rates for the initial conditions were 1.0, 3.0, and 5.0 mL/min, and an incompressible laminar flow model was used. Static pressure was applied at the outlet boundary with no-slip conditions at the wall boundaries. The equations used in the numerical study were the continuity equation (Eq. (12)) and the NavierStokes equation (Eq. (13)):S. J. Yi et al. / Journal of Mechanical Science and Technology 28 (1) (2014) 157~166161(a) Grid generation for CFD analysis(a)(iii)(ii)(i)(b) Velocity contour map obtained by CFD analysis: 3 mL/min Fig. 4. Grid generation for CFD analysis and velocity magnitude contour map.(b)¶uk =0 ¶xk(12)¶u j ö æ ¶ 2u k ¶P = + h ÷ ç ç ¶x 2 ¶xk ÷ ¶x j ø è k ö ÷ ÷ ørç çæ ¶u j è ¶t+ uk(13)where u is the velocity, P is the pressure, ρ is the density, and η is the apparent viscosity. The SIMPLEC algorithm was used for pressure-velocity coupling. To increase the calculation accuracy, 2nd order upwind scheme was employed for spatial discretization of pressure and momentum. The solution was iterated until the residues for each equation were less than 10-4. Fig. 4(b) shows the results of the flow velocity field at the center plane (0.25 mm in height from the bottom wall) in the microreactor by 3-D CFD analysis. The liquid sample (water) was divided into 11 micro-channels due to the slope of the inlet chamber, and 10 pillars. Fig. 4(b) shows the contour map of the velocity magnitude at the center plane for the design condition. There are no recirculation flows in the reaction chamber. Figs. 5(a)-(c) illustrate the velocity profiles obtained at the inlet (line i), center (line ii), and outlet (line iii) of the reaction chamber, respectively. There are eleven velocity magnitude peaks since the flow between each pillar behaved as a multiple jet. The jet from the largest cross section of the inlet chamber (left corner of line i) shows a higher velocity magnitude compared to other jets (see Fig. 5(a)). This is because the inlet chamber was designed as a linear chamber without optimiza-(c) Fig. 5. Results of CFD analysis: (a) velocity profiles along the line (i) inlet of the chamber for on-and-off design flow conditions (1, 3, and 5 mL/min); (b) velocity profiles along the line (ii) center of the chamber; (c) velocity profiles along the line (iii) outlet of the chamber.tion. In addition, the viscous effect from the side wall of the reaction chamber could squeeze the stream tube from the first microchannel. The same behavior appeared in the jet from the last channel (right corner of the line i). As the flow rate decreased, the difference between the peak jet velocities decreased. Fig. 5(b) shows velocity profiles at the center of the reaction chamber, which is the center position of the deposited gold layer. The main goal of this research is to provide uniform flow in this region. Within the range of simulated flow rates, the velocity profiles showed fairly uniform distribution except at the side wall region where the viscous effect is strong. The maximum velocity was observed near the side162S. J. Yi et al. / Journal of Mechanical Science and Technology 28 (1) (2014) 157~166wall. The uniformity of flow distribution can be defined as |(1 – umax/ucenter)| x 100%. The calculated uniformities are 1.8, 3.0, and 5.3% for Q = 1.0, 3.0, and 5.0 mL/min, respectively. These values are acceptable for the SPR analysis. The velocity profile at the outlet of the reaction chamber is the inverse of the velocity profile at the inlet, as shown in Fig. 5(c), since the outlet chamber is exactly asymmetric with respect to the inlet chamber. We note that the difference in peak velocities of the jets were higher than that of the inlet zone due to a higher viscosity effect. Although the initial jet-like flow showed a strong non-uniformity in the flow distribution, the velocity profile at the center of the reactor depicts a fairly uniform profile due to viscous diffusion. The uniformity and stability of flow in the microreactor were validated theoretically using CFD analysis. 3.2 μ-PIV measurement of the microreactor A standard two-dimensional (2-D) μ-PIV method was used for measuring the velocity field of the fabricated reaction chamber. Fig. 6(a) shows a schematic diagram of the experimental apparatus used in this study [19, 20]. The prepared DI water sample and fluorescent particles (Dp = 10 μm, Dantec) were injected to the reaction chamber using a syringe pump. A 4X objective lens was adopted in the microscope configuration (BX51, Olympus). The flow rate was set to the design condition of 3 mL/min. A halogen lamp was used for the light source. A 10-bit high speed camera (1280 × 1024 pixels, pco.1200 hs, PCO) was used to acquire particle images in the chamber. The size of the field-of-view was 3.45 mm × 2.76 mm. The overall size of the chamber, 7.2 mm × 6.0 mm, was too large to measure the entire flow field at one time; therefore, the measurement section was divided into six subsections. Fig. 6(b) shows the divided subsections and particle images. Three subsections were located at the inlet side, and the others were located at the outlet side. The focal plane was set to the center z-plane of the reaction chamber. For each section, 1001 images were acquired, the frame rate to capture images was 638.88 frames/s, and the images were interrogated using the two-frame cross-correlation technique. The interrogation windows were taken to be 48×48 pixels. The FFT window size was set to 64×64 pixels, and a 50% window overlap was used. False vectors in the raw vector fields were eliminated using the magnitude difference technique with an 80% threshold, and then interpolating with 3×3 Gaussian convolution. After post-processing, an ensemble-averaged velocity vector field was acquired for each section. The whole velocity vector field for the reaction chamber was obtained by combining six subsections. The velocity vectors in the overlapped regions of the subsections were interpolated and averaged. Fig. 6(c) shows the ensemble-averaged velocity field, and Fig. 7(a) illustrates the velocity magnitude contour map from the measured velocity field. There was no recirculating flow in the reaction chamber, which is in agreement with the results of the(a)(b)(c) Fig. 6. Results of μ-PIV measurement for the design flow condition (Q = 3 mL/min): (a) experimental setup; (b) measurement sections of the microreactor and raw particle images; (c) velocity vector field.CFD analysis. We confirmed the uniform flow vector field in the reaction chamber by experimental measurements. 3.3 Comparison of CFD and experimental results To compare the flow velocity vector field with the CFD analysis results, velocity profiles were extracted from the ensemble-averaged velocity vector field obtained by μ-PIV measurement. Fig. 7(b) shows the velocity profile at the inlet of the reaction chamber (line (i)). Eleven peaks were evident in the μ-PIV results. The measured peak jet velocities were in a range from about 0.015 to 0.025 m/s, and the CFD results were in the range from 0.035 to 0.060 m/s (see Fig. 5(a)). Fig. 7(c) shows the velocity profile at the center of the chamber (line (ii)). Due to viscous diffusion, the jet-like nature of the profile disappeared and a fairly flat velocity profile was obtained. The measured velocity at the center was about 0.015S. J. Yi et al. / Journal of Mechanical Science and Technology 28 (1) (2014) 157~166163m/s(ii)(i)(a)(b)(c)Fig. 7. Comparison between the CFD and μ-PIV results: (a) velocity magnitude contour profile in the reaction chamber; (b) velocity profiles at the inlet of the chamber; (c) velocity profiles at the center of the reaction chamber. The CFD results are averaged velocity profiles taking into account the depth of correlation of the μ-PIV measurement.m/s, while the CFD result predicted 0.021 m/s (see Fig. 5(b)). In general, the velocity magnitudes obtained by experiment have lower values than those obtained by CFD analysis. The main reason is that the CFD result gave the velocity profile at exactly the center plane, but the PIV result reflected the averaged velocity along the depth direction due to the defocused particle images (although the focal plane of the camera was set to the center plane). For a better comparison, the depth of correlation should be taken into account [21]. Since the μ-PIV method adopts the volume illumination technique, the velocity vectors are averaged through the depth of correlation. The depth of correlation can be defined as the depth over which particles significantly contribute to the correlation function. According to Olsen and Adrian’s formula (Eq. (14)), the depth of correlation equation is as follows:1nique was adopted for the confirmation of antibody-antigen immune binding in the proposed system. A fluorescein isothiocyanate (FITC) conjugated antibody was used for a sandwich-type immunoassay format with the microreactor. The FITC has excitation and emission spectrum peaks at 495 nm and 521 nm, respectively. When the FITC conjugated antibody bound to an antigen that was coupled with a modified antibody on the surface of the microreactor, a green light (λ = 521 nm) was emitted from the FITC conjugated antibody under the illumination of the excitation light (λ = 495 nm). 4.1 Reagents and materials Anti-rabbit IgG antibodies produced in goats, IgG from rabbit serum, FITC-conjugated goat anti-rabbit IgG, and bovine serum albumin (BSA) were obtained from Sigma-Aldrich (USA). We used a commercial binder (Prolinker, Proteogen, Korea). All other buffer chemicals were analytical grade and were supplied by Sigma-Aldrich (USA). All aqueous solutions were prepared in double-distilled water obtained from a Milli-Q water purifying system (18 MΩ cm). 4.2 Preparation of sample for FA measurement A micro-pump (ISMATEC SA, Switzerland) was used to pump the materials through the microreactor. The flow rate for all processes was 3 mL/min. Ethanol was pumped through the microreactor for 1 hr for washing. Then, distilled water was pumped through the microreactor for 1 hr. An aliquot of 2 mM Prolinker prepared in formaldehyde was pumped through the microreactor for 1 hr to form a self-assembled monolayer (SAM) surface on the gold layer. To remove unbound Prolinker and for washing, an aliquot of distilled water and 0.1 M PBS were injected into the microreactor for 1 hr. An aliquot of 0.25 mg/mL anti-IgG antibody solution prepared in PBS was pumped through the microreactor to form a modified antibody layer. After antibody modification, an aliquot of 0.5 mg/mL IgG antigen was flowed onto the gold layer to allow the immune reaction for 1 hr. Blocking of unbound anti-IgGZ corröæ 2 1.49( M + 1) 2 l 2 ö ù 2 1 é 1 - e æ n2 = ê - 1÷ç d p + ç ÷ú 2 2ë M 2 NA2 ê e è NA ú øè øû(14)where dp is the particle diameter, λ is the light wavelength, M is the image magnification, no is the refractive index of the lens immersion fluid, NA is the numerical aperture of the lens, and ε is the relative threshold, normally set to be equal to 0.01. For our μ-PIV experiment with NA = 0.13, dp= 10 μm, M = 4, λ = 570 nm, and ε = 0.01, the estimated Zcorr is 138.02 μm. In Fig. 7, the dashed lines are the averaged values of the CFD velocity profile along the depth direction, taking into account the depth of correlation. As shown in Fig. 7, the agreement between CFD and experimental results are excellent.4. Confirmation of antibody-antigen immune binding using the fluorescent antibody techniqueFluorescent antibody (FA) techniques are the most popular and widely used methods in immunoassay because of their simplicity and easy results readout using conventional optical devices [22-24]. Because of these advantages, an FA tech-。

Received Revised Accepted

Received Revised Accepted

STRUCTURE DISCOVERY IN SEQUENTIALLY-CONNECTEDDATA STREAMSJEFFREY COBLE, DIANE J. COOK AND LAWRENCE B. HOLDERDepartment of Computer Science and EngineeringThe University of Texas at ArlingtonBox 19015, Arlington, TX 76019, USA{coble,cook,holder}@ReceivedRevisedAcceptedMuch of current data mining research is focused on discovering sets of attributes that discriminatedata entities into classes, such as shopping trends for a particular demographic group. In contrast,we are working to develop data mining techniques to discover patterns consisting of complexrelationships between entities. Our research is particularly applicable to domains in which thedata is event driven, such as counter-terrorism intelligence analysis. In this paper we describe analgorithm designed to operate over relational data received from a continuous stream. Ourapproach includes a mechanism for summarizing discoveries from previous data increments sothat the globally best patterns can be computed by examining only the new data increment. Wethen describe a method by which relational dependencies that span across temporal incrementboundaries can be efficiently resolved so that additional pattern instances, which do not resideentirely in a single data increment, can be discovered. We also describe a method for changedetection using a measure of central tendency designed for graph data. We contrast twoformulations of the change detection process and demonstrate the ability to identify salientchanges along meaningful dimensions and recognize trends in a relational data stream.Keywords: Relational Data Mining; Stream Mining, Change Detection1. IntroductionMuch of current data mining research is focused on algorithms that can discover sets of attributes that discriminate data entities into classes, such as shopping or banking trends for a particular demographic group. In contrast, our work is focused on data mining techniques to discover relationships between entities. Our work is particularly applicable to problems where the data is event driven, such as the types of intelligence analysis performed by counter-terrorism organizations. Such problems require discovery of relational patterns between the events in the environment so that these patterns can be exploited for the purposes of prediction and action.Also common to these domains is the continuous nature of the discovery problems. For example, Intelligence Analysts often monitor particular regions of the world or focus on long-term problems like Nuclear Proliferation over the course of many years. To assist in such tasks, we are developing data mining techniques that can operate with data that is received incrementally.In this paper we present Incremental Subdue (ISubdue), which is the result of our efforts to develop an incremental discovery algorithm capable of evaluating data received incrementally. ISubdue iteratively discovers and refines a set of canonical patterns, considered to be most representative of the accumulated data. We also describe an approach for change detection in relational data streams and contrast two approaches to the problem formulation.2. Structure DiscoveryThe work we describe in this paper is based upon Subdue 1, which is a graph-based data mining system designed to discover common structures from relational data. Subdue represents data in graph form and can support either directed or undirected edges. Subdue operates by evaluating potential substructures for their ability to compress the entire graph, as illustrated in Figure 1. The better a particular substructure describes a graph, the more the graph will be compressed by replacing that substructure with a placeholder. Repeated iterations will discover additional substructures, potentially those that are hierarchical, containing previously compressed substructures.Subdue uses the Minimum Description Length Principle 2 as the metric by which graph compression is evaluated. Subdue is also capable of using an inexact graph match parameter to evaluate substructure matches so that slight deviations between two patterns can be considered as the same pattern.Fig. 1. Subdue discovers common substructures within relational data by evaluating their ability to compress the graph.Compressed Graph3. Incremental DiscoveryFor our work on ISubdue, we assume that data is received in incremental blocks. Repeatedly reprocessing the accumulated graph after receiving each new increment would be intractable because of the combinatoric nature of substructure evaluation, so instead we wish to develop methods to incrementally refine the substructure discoveries with a minimal amount of reexamination of old data.3.1. Independent dataIn our previous work 3, we developed a method for incrementally determining the best substructures within sequential data where each new increment is a distinct graph structure independent of previous increments. The accumulation of these increments is viewed as one large but disconnected graph.We often encounter a situation where local applications of Subdue to the individual data increments will yield a set of locally-best substructures that are not the globally best substructures that would be found if the data could be evaluated as one aggregate block. To overcome this problem, we introduced a summarization metric, maintained from each incremental application of Subdue, that allows us to derive the globally best substructure without reapplying Subdue to the accumulated data.To accomplish this goal, we rely on a few artifacts of Subdue’s discovery algorithm. First, Subdue creates a list of the n best substructures discovered from any dataset, where n is configurable by the user.Second, we use the value metric Subdue maintains for each substructure. Subdue measures graph compression with the Minimum Description Length principle as illustrated in Equation 1, where DL(S) is the description length of the substructure being evaluated, DL(G|S) is the description length of the graph as compressed by the substructure, and DL(G) is the description length of the original graph. The better our substructure performs, the smaller the compression ratio will be. For the purposes of our research, we have used a simple description length measure for graphs (and substructures) consisting of the number of vertices plus the number of edges. C.f. Cook and Holder 1994 for a full discussion of Subdue’s MDL graph encoding algorithm 4.Subdue’s evaluation algorithm ranks the best substructure by measuring the inverse of the compression value in Equation 1. Favoring larger values serves to pick a substructure )G (DL )S |G (DL )S (DL n Compressio +=(1)that minimizes DL(S) + DL(G|S), which means we have found the most descriptive substructure.For ISubdue, we must use a modified version of the compression metric to find the globally best substructure, illustrated in Equation 2.With Equation 2 we calculate the compression achieved by a particular substructure, S i , up through and including the current data increment m . The DL(S i ) term is the description length of the substructure, S i , under consideration. The termrepresents the description length of the accumulated graph after it is compressed by the substructure S i .Finally, the termrepresents the full description length of the accumulated graph.At any point we can then reevaluate the substructures using Equation 3 (inverse of Equation 2), choosing the one with the highest value as globally best.After running the discovery algorithm over each newly acquired increment, we store the description length metrics for the top n local subs in that increment. By applying our algorithm over all of the stored metrics for each increment, we can then calculate the global top n substructures.(2) ==+=m j j m j i j i i m )G (DL )S |G (DL )S (DL )S (Compress 11 + ==m j i j i m j j )S |G (DL )S (DL )G (DL )i max(arg 11(3) =m j i j )S |G (DL 1 =m j j )G (DL 14. Sequentially Connected DataWe now turn our attention to the challenge of incrementally modifying our knowledge of the most representative patterns when dependencies exist across sequentially received data increments. As each new data increment is received, it may contain new edges that extend from vertices in the new data increment to vertices in previous increments.Figure 2 illustrates an example where two data increments are introduced over successive time steps. Common substructures have been identified and two instances extend across the increment boundary. Referring back to our counterterrorism example, it is easy to see how analysts would continually receive new information regarding previously identified groups, people, targets, or organizations.Common Increment T 0T 1Fig. 2. Sequentially connected data Fig. 3 Flowchart illustrates the high-level steps of the discovery algorithm for sequentially-connected relational data4.1. AlgorithmOur only prerequisite for the algorithm is that any pattern spanning the increment boundary that is prominent enough to be of interest is also present in the local increments. As long as we have seen the pattern previously and above some threshold of support, then we can recover all instances that span the increment boundary. Figure 3 illustrates the basic steps of the discovery algorithm at a high level. We discuss the details of the algorithm in the following sections.4.1.1. ApproachLet•G n = set of top-n globally-best substructures•I s = set of pattern instances associated with a substructure s∈ G n•V b = set of vertices with an edge spanning the increment boundary and that are potential members of a top-n substructure•S b = 2-vertex pairs of seed substructure instances with an edge spanning the increment boundary•C i = set of candidate substructure instances that span the increment boundary and that have the potential of growing into an instance of a top-n substructure. The first step in the discovery process is to apply the algorithm we developed for the independent increments discussed above. This involves running Subdue discovery on the data contained exclusively within the new increment, ignoring the edges that extend to previous increments. We then update the statistics stored with the increment and compute the set of globally best substructures G n. This process is illustrated in Figure 4.process is to evaluate the local data in the newincrementWe perform this step to take advantage of all available data in forming our knowledge about the set of patterns that are most representative of the system generating the data. Although the set of top-n substructures computed at this point in the algorithm does not consider substructure instances spanning the increment boundary and therefore will not be accurate in terms of the respective strength of the best substructures, it will be more accurate than if we were to ignore the new data entirely prior to addressing the increment boundary.vertices that could possibly be part of an instance ofa top n pattern. The third step is to create 2-vertexsubstructure instances by joining the vertices thatspan the increment boundary.The second step of our algorithm is to identify the set of boundary vertices, V b, where each vertex has a spanning edge that extends to a previous increment and is potentially a member of one of the top-n best substructures in G n. We can identify all boundary vertices, V b, in O(m), where m is the number of edges in the new increment. Where p = |V b| <<m, then for each boundary vertex in V b we can identify those that are potential members of a top-n substructure in O(k), where k is the number of vertices in the set of substructures G n, for a total complexity of O(pk). Figure 5 illustrates this process.extension, we create a reference graph,which we keep extended one step aheadof the instances it represents.new candidate instances for evaluation against the top-nsubstructures. Failed extensions are propagated back into thereference graph with marked edges and vertices, to guidefuture extensions.For the third step we create a set of 2-vertex substructure seed instances by connecting each vertex in V b with the spanning edge to its corresponding vertex in a previous increment. We immediately discard any instance where the second vertex is not a member of a top-n substructure (all elements of V b are already members of a top-n substructure), which again can be done in O(pk). A copy of each seed instance is associated with each top-n substructure, s i∈ G n, for which it is a subset.To facilitate an efficient process for growing the seed instances into potential instances of a top-n substructure, we now create a set of reference graphs. We create one reference graph for each copy of a seed instance, which is in turn associated with one top-n substructure. Figure 6 illustrates this process.the set of seed instances until they are either growninto a substructure from S t or discarded.We create the initial reference graph by extending the seed instance by one edge and vertex in all possible directions. We can then extend the seed instance with respect to the reference graph to create a set of candidate instances C i, for each top-n substructure s i∈G n, illustrated in Figure 7. The candidate instances represent an extension by a single edge and a single vertex, with one candidate instance being generated for each possible extension beyond the seed instance. We then evaluate each candidate instance, c ij∈C i and keep only those where c ij is still a subgraph of s i. This evaluation requires a subgraph isomorphism test, which is an NP-complete algorithm, but since most patterns discovered by Subdue are relatively small in size, the cost is negligible in practice. For each candidate instance that is found to not be a subgraph of a top-n substructure, we mark the reference graph to indicate the failed edge and possibly a vertex that is a dead end. This prevents redundant exploration in future extensions and significantly prunes the search space.In the fifth step (Figure 8), we repeatedly extend each instance, c ij∈C i, in all possible directions by one edge and one vertex. When we reach a point where candidate instances remain but all edges and vertices in the reference graph have already been explored, then we again extend the reference graph frontier by one edge and one vertex. After each instance extension we discard any instance in C i that is no longer a subgraph of asubstructure in G n. Any instance in C i that is an exact match to a substructure in G n is added to the instance list for that substructure, I s, and removed from C i.Once we have exhausted the set of instances in C i so that they have either been added to a substructure’s instance list or discarded, we update the increment statistics to reflect the new instances and then we can recalculate the top-n set, G n, for the sake of accuracy, or wait until the next increment.4.2. Discovery EvaluationTo validate our approach to discovery from relational streams, we have conducted two sets of experiments, one on synthetic data and another on data simulated for the counterterrorism domain.substructure embedded insynthetic data.4.2.1 Synthetic dataOur synthetic data consists of a randomly generated graph segment with vertex labels drawn uniformly from the 26 letters of the alphabet. Vertices have between one and three outgoing edges where the target vertex is selected at random and may reside in a previous data increment, causing the edge to span the increment boundary. In addition to the random segments, we intersperse multiple instances of a predefined substructure. For the experiments described here, the predefined substructure we used is depicted in Figure 9. We embed this substructure internal to the increments and also insert instances that span the increment boundary to test that these instances are detected by our discovery algorithm.Figure 10 illustrates the results for a progression of five experiments. The x-axis indicates the number of increments that were processed and the respective size in termsof vertices and edges. To illustrate the experiment methodology, consider the 15-increment experiment. We provided ISubdue with the 15 increments in sequential order as fast as the algorithm could process them. The time (38 seconds) depicted is for processing all 15 increments. We then aggregated all 15 increments and processed them with Subdue for the comparison. The five results shown in Figure 10 are not cumulative, meaning that each experiment includes a new set of increments. It is reasonable to suggest then that adding five new increments – from 15 to 20 – would require approximately three additional seconds of processing time for ISubdue, whereas Subdue would require the full 1130 seconds because of the need to reprocess all of the accumulated data. Figure 11 depicts a similar set of experimentsFig. 10. Comparison of ISubdue and Subue on on increasingnumber of increments for synthetic data.In addition to the speedup achieved by virtue of the fact that ISubdue need only process the new increment, additional speedup is achieved because of a sampling effect. This is illustrated in Figure 10 where each independent experiment produces a significant run-time improvement for ISubdue even when processing an identical amount of data as standard Subdue. The sampling effect is an artifact of the way in which patterns are grown from the data. Since ISubdue is operating from smaller samples of data, there are fewer possible pattern instances to evaluate. There are limiting conditions to the speedup achievable with the sampling effect but a full discussion is beyond the scope of this paper.Fig. 11. A section of the graph representation of the counterterrorism data used for our evaluation. 4.2.2. Counterterrorism DataThe counterterrorism data was generated by a simulator created as part of the Evidence Assessment, Grouping, Linking, and Evaluation (EAGLE) program, sponsored by the U.S. Air Force Research Laboratory. The simulator was created by a program participant after extensive interviews with Intelligence Analysts and several studies with respect to appropriate ratios of noise and clutter. The data we use for discovery represents the activities of terrorist organizations as they attempt to exploit vulnerable targets, represented by the execution of five different event types. They are:Two-way-Communication: Involves one initiating person and one responding person. N-way-Communication: Involves one initiating person and multiple respondents. Generalized-Transfer: One person transfers a resource.Applying-Capability: One person applies a capability to a targetApplying-Resource: One person applies a resource to a targetFig. 12. Comparison of run-times for ISubdue and Subdue onincreasing numbers of increments for counterterrorism data.Fig. 13. The top 3 substructures discovered by both ISubdueand Subdue for the counterterrorism data.The data also involves targets and groups, groups being comprised of member agents who are the participants in the aforementioned events. All data is generalized so that no specific names are used. Figure 11 illustrates a small cross-section of the data used in our experiments.The intent of this experiment was to evaluate the performance of our research on ISubdue against the performance of the original Subdue algorithm. We are interested inmeasuring performance along two dimensions, run-time and the best reported substructures.Figure 12 illustrates the comparative run-time performance of ISubdue and Subdue on the same data. As for the synthetic data, ISubdue processes all increments successively whereas Subdue batch processes an aggregation of the increments for the comparative result. Each experiment was independent as it was for the synthetic data.Figure 13 depicts the top three substructures consistently discovered by both ISubdue and Subdue for all five experiments introduced in Figure 12.4.3 Qualitative AnalysisIn this paper we have described an algorithm to facilitate complete discovery in connected graph data to ensure that we can accurately evaluate the prevalence of specific patterns, even when those patterns are connected across temporal increment boundaries. The basis for using Subdue is that the prevalence of patterns is important, with the prevalence being derived from the number of instances present in the data. If we did not fully evaluate the increment boundaries, we would lose pattern instances and therefore patterns could not be accurately evaluated for their prevalence.To illustrate this process, we conducted an experiment using synthetic data similar to that described in section 4.2.1, where the increments were processed both with and without boundary evaluation. The synthetic data consisted of 50 increments, each with 560 vertices and approximately 1175 edges. Interspersed within the random graph data are instances of two different patterns, illustrated in Figure 14, shown with the total number of instances each. The difference is that most of the instances for the first patternFig. 14 Two patterns embedded in the random graphs in each data increment and spanning the increment boundary 625 instances250 instances gspan the increment boundaries, where all of the instances of the second pattern fall completely within single increments.We ran three separate tests on this data. The first is a benchmark test with the original Subdue algorithm run over the aggregate data, which totals 53,500 vertices and 108,666 edges. As expected, the most prevalent pattern (g1) is reported by Subdue as the most prevalent with 625 instances discovered.The second test was run with ISubdue with boundary evaluation enabled. Again, the most prevalent pattern (g1) was found with 624 instances discovered (occasionally the order of instance growth will result in the loss of an instance and happens for both Subdue and ISubdue).The last test was run with ISubdue with boundary evaluation disabled. In this case, the second pattern (g2) is returned as the most prevalent with 250 instances discovered. The first pattern (g1) was reported as the fourth best with 125 instances discovered. The second-best pattern was a subset of pattern g1and the third-best pattern was a small subset of pattern g2.Clearly this experiment illustrates the importance of including our boundary evaluation algorithm into the full ISubdue capability. Without the ability to recover pattern instances that do not reside entirely within a single increment we sacrifice the ability to provide accurate discovery results.5. Detecting ChangeResearchers from several fields, such as Machine Learning and Econometrics, have developed methods to address the challenges of a continuous system, like the sliding window approach5, where only the last n data points are used to learn a model. Many of these approaches have been used with success in attribute-based data mining. Other attribute-based methods, such as those involving Ensemble Classifiers6 and continuously updated Decision Trees7, have also been successfully demonstrated. However, these methods do not easily translate into relational discovery because of the complex, interconnected nature of the data. Since the data is relationally structured, it can be difficult to quantify the ways in which it may change over time. For example, an attribute vector can be changed by altering the probability distribution of the discrete set of attributes. Conversely, a relational data set may contain a collection of entities and relationship types that can be configured in a large number of different permutations. For a diverse relational dataset, the number of unique patterns may be intractably large. This makes it difficult to quantify the nature of change and so it is not straightforward to apply methods that rely on sampling, such as the sliding window approach.The remainder of this paper is devoted to describing a process by which we are able to compute a representative point in graph space for sequential sets of patterns discovered by ISubdue from successive data increments received from a continuous stream. We can use these representative points in the graph space, along with a distance calculation, to iteratively measure the evolving set of discoveries to detect and assess change. The objective of this work is to enable a method for measuring pattern drift in relational data streams, where the salient patterns may change in prevalence or structure over time. With a measure of central tendency for graph data, along with a method for calculating graph distance, we can begin to adapt time-series techniques to relational data streams. As part of our evaluation we have experimented with two different formulations for computing the median graphs – by aggregating sequential sets of local discoveries and by considering the evolving sets of globally-ranked discoveries. Each is discussed below.5.1. Error-correcting graph isomorphismThe ability to compute the distance between two graphs is essential to our approach for detecting and measure change in incremental discoveries. To compute this distance we rely on the error-correcting graph isomorphism (ecgi)8. An ecgi is a bijective function, f , that maps a graph g 1 to a graph g 2, such that:where V 1 and V 2 are the vertices from graphs g 1 and g 2, respectively. The ecgi function, f(v) = u , provides vertex mappings such that 1V v ′∈ and 2V u ′∈, where 1V ′ and 2V ′ are the sets of vertices for which a mapping from g 1 to g 2 can be found. The vertices 11V - V ′ in g 1 are deleted and the vertices 22V - V ′ in g 2 are inserted. A cost is assigned for each deletion and insertion operation. Depending on the formulation, a direct substitution may not incur cost, which is intuitive if we are looking to minimize cost based on the difference between the graphs. It should be noted that the substitution of a vertex from g 1 to g 2 may not be an identical substitution. For instance, if the vertices have different labels, then the substitution would be indirect and a cost would be incurred for the label transformation. The edit operations for edges are similar to those for vertices. We again have the situation where the edge mappings may not be identical substitutions. The edges may differ in their labeling as well as their direction.Figure 15 illustrates the mapping of vertices from graph g 1 to g 2. Vertex substitutions are illustrated with dashed lines in Figure 15.b, along with the label transformation. Figure 15.c depicts the edge substitution, deletion and insertion operations.221121V V V V V V :f ⊆′⊆′′→′ ; ;The ecgi returns the optimal graph isomorphism, where optimality is determined by the least-cost set of edit operations that transform g 1 to g 2. The costs of edit operations can be determined to suit the needs of a particular application or unit costs can be assigned. The optimal ecgi solution is known to be NP-complete and is intractable for even relatively small graphs. In the following sections we discuss an approximation method for the ecgi calculation.5.2. Graph metricsThe objective of this work is to develop methods for applying metrics to relational data so that we can quantify change over time. At the heart of almost all statistical analysis techniques is a measure of central tendency for the data sample being evaluated. The most prevalent of such measures is the mean. By definition, a mean is the point where the sum of all distances from the mean to the sample data points equals zero, such that:where ∆x i is the distance from the mean to data point x i . This definition can be rephrased to say that an equal amount of variance lies on each side of the number line from the mean. Unfortunately there is no straightforward translation of this definition into the space of graphs, since there is no concept of positive and negative that can be applied systematically. A mean graph was defined in Bunke and Guenter 20019, but only for a single pair of input graphs. This is possible because a mean graph can be found that is equidistant from the two input graphs, which causes it to satisfy the form of a statistical1=∆=n i i x Fig. 15. Example of error-correcting graphisomorphism function mapping g 1 to g 2.label transformation (c)。

epoch酶标仪操作规程

epoch酶标仪操作规程

epoch酶标仪操作规程Introduction:The Epoch酶标仪 is an essential tool used in biochemical research to measure enzyme activity. This instrument is widely used in laboratories for various applications, including protein quantification, enzyme kinetics, and DNA/RNA analysis. In this document, I will outline the operating procedures for the Epoch酶标仪.1. Preparing the Instrument:Before starting any experiment, it is crucial to ensure that the Epoch酶标仪 is properly set up and calibrated. This involves checking the power supply, connecting the necessary cables, and verifying the alignment of theoptical system. Additionally, it is essential to clean the instrument thoroughly to avoid any contamination that may affect the accuracy of the results.2. Setting up the Experiment:Once the instrument is ready, it is time to set up the experiment. This includes selecting the appropriate detection mode (absorbance, fluorescence, or luminescence) based on the nature of the assay. For example, if I am quantifying protein concentration using a Bradford assay, I would choose the absorbance mode. I would also need to select the appropriate wavelength and set the path length according to the assay requirements.3. Loading the Samples:After setting up the experiment, it is time to load the samples onto the Epoch酶标仪. This involves pipetting the assay mixture into the designated wells of a microplate. It is crucial to arrange the samples and controls in a systematic manner to ensure accurate and consistent readings. For example, I would load my samples intriplicates and include positive and negative controls for comparison.4. Running the Assay:Once the samples are loaded, I would start the assay by initiating the measurement on the Epoch酶标仪. The instrument will then read the absorbance, fluorescence, or luminescence signals based on the selected detection mode. It is important to allow sufficient time for the instrument to stabilize before recording the measurements. I would typically set a specific time interval for data collection, such as every 30 seconds for a kinetic assay.5. Data Analysis:After completing the assay, I would analyze the data obtained from the Epoch酶标仪. This involves using appropriate software or tools to process the raw data and calculate the desired parameters. For example, if I am measuring enzyme activity, I would determine the initial reaction rate by analyzing the slope of the absorbance curve. I would also compare the sample readings with the controls to assess the significance of the results.中文回答:Epoch酶标仪操作规程。

abnormality_detection1

abnormality_detection1

A Particle Filtering Approach to Abnormality Detection in Nonlinear Systems and its Application to Abnormal ActivityDetectionNamrata Vaswani,Rama ChellappaCenter for Automation ResearchDepartment of Electrical and Computer EngineeringUniversity of Maryland,College ParkMD20742,USAnamrata,rama@AbstractWe study abnormality detection in partially observed nonlinear dynamic systems tracked using particlefilters.An‘abnormality’is defined as a change in the system model,which could be drastic or gradual,with the parameters of the changed system unknown.If the change is drastic the particle filter will lose track rapidly and the increase in tracking error can be used to detect the change. In this paper we propose a new statistic for detecting‘slow’changes or abnormalities which do not cause the particlefilter to lose track for a long time.In a previous work,we have proposed a partially observed nonlinear dynamical system for modeling the configuration dynamics of a group of interacting point objects and formulated abnormal activity detection as a change detection problem. We show here results for abnormal activity detection comparing our proposed change detection strategy with others used in literature.1IntroductionIn many practical problems arising in quality control or surveillance problems like abnormal activity detection,the underlying system in its normal state can be modeled as a parametric stochastic model (which may be linear or nonlinear).In most cases,the observations are noisy(making the system partially observed)and the transformation between the observation and the state may again be linear or nonlinear.Such a system,in the most general case,forms a Partially Observed Non-Linear Dynamical (PONLD)system which can be tracked using a particlefilter[1].An‘abnormality’constitutes a change, which could be drastic or gradual,in the system model,with the parameters of the system model after the change unknown.If the change in the system is‘drastic’(or abrupt)the particlefilter will lose track rapidly and the increase in tracking error can be used to detect the change[2].In this paper we propose a new strategy for detecting‘slow’(gradual)changes or abnormalities in partially observed nonlinear systems which do not cause the particlefilter to lose track for very long and whose parameters are unknown,and show its application to abnormal activity detection.In a previous work[3],we have modeled the changing configuration of a group of moving point objects(an“activity”)as a moving deformable shape in.We have proposed a PONLD system to model an activity with the learnt shape deformation and motion models forming the system model and the measurement noise in the object locations(configuration space)forming the observation model.The mapping from state(shape+motion) space to observation(configuration)space is nonlinear.In this paper,we apply our proposed change detection strategy for abnormal activity(suspicious behavior)detection in this framework and compare its performance,for both slow and drastic abnormalities,with other statistics used in literature.1.1Previous WorkOnline detection of changes for partially observed linear dynamical systems has been studied exten-sively.For known changed system parameters the CUSUM[4]algorithm can be used directly.The CUSUM algorithm uses as change detection statistic,the maximum(taken over all previous time in-stants)of the likelihood ratio assuming change occurred at time,i.e.Extended Kalman Filtering[5]and change detection methods for linear systems are the main tools for change detection[4].Linearization techniques are computationally efficient but are not always applica-ble(require a good initial guess at each time step and hence not robust to noise spikes).[6]is an attempt to use a Particle Filtering(PF)approach for abrupt change detection in Partially Observed Non-Linear Dynamical(PONLD)systems without linearization.It uses knowledge of the parameters of the changes system and defines a modification of the CUSUM change detection statistic that can be evaluated recursively.It runs a sequence(for different)of PF’s to evaluate the probability of the observations given that the change occurred at time and compares each with the observations’probability evaluated using a PF which assumes that no change occurred.Another statistic commonly used for abrupt change detection in partially observed systems and which does not require knowledge of changed system parameters is tracking error[2].Tracking error is the distance(usually Euclidean distance)between the current observation and its prediction based on past observations.Another class of approaches(eg.see[7,8])used extensively with particlefilters uses a discrete state variable to denote the mode that the system is operating in.This is typically used when the system can operate in multiple modes each associated with a different system and/or observation model.The mode variable’s transition between states is governed by which mode maximizes the likelihood of the observations.But this approach also assumes known change parameters.Particlefiltering(PF)[9]also known as the sequential Monte Carlo method[10]or condensation algorithm[11]provides afinite dimensional approximate solution to the conditional density of the state given past observations.In this paper,we use a PF(described in section3)to track the PONLD system. The PF provides at each time instant,an-sample monte carlo estimate of the poterior distribution of the state of the system,given observations upto time.We propose to use as a change detection statistic,the expectation under this posterior state distribution of the log of the a-priori pdf of the state at time and we call this statistic”Expected Log Likelihood”or ELL.This is discussed in section4. In section5,we prove convergence of the Huber M-estimate[12]of this statistic,evaluated using the posterior distribution estimated by the PF,to its true value.Also,to be able to detect both slow and drastic changes,we propose to use a combination of ELL and tracking error.Finally in section6we show results for abnormal activity detection using ELL,tracking error,observation likelihood,, and the combined ELL-tracking error strategy and discuss future work in section7.2Problem Statement and AssumptionsGiven a partially observed nonlinear dynamic(PONLD)system[13]with an valued state process and an valued observation process,the aim is to detect a change in the system model(model for).The system(or state)process for the normal system is assumed to be a Markov process with state transition kernel and the observation process is defined bywhere is an i.i.d.noise process and is in general a nonlinear function.The prior initial state distribution denoted by,the observation likelihood denoted by and the state transition kernel are known.We assume in this work that the prior is absolutely continuous(w.r.t.to the Lebesgue measure)i.e.it admits a density(pdf),the transition kernel is Feller[13]and also absolutely continuous for all values of and for all.Also the observation likelihood is absolutely continuous i.e.it admits a density which is a continuous and everywhere positive function.Abnormality detection in a PONLD system is posed as a change detection problem with parameters of the changed system unknown.If the abnormality is a drastic one,it would cause the PF to lose track (tracking error will increase)and hence will get detected.But in a lot of cases,the abnormality is a slow change that does not cause the PF to lose track for a long time(there will be a large detection delay if tracking error is used)and this is the problem that we address here.To be specific,the problem is to detect a gradual change in the system model i.e.detect with minimum delay,the time instantwhen the state starts following a different system model with transition kernel which is unknown.3Particlefiltering(PF)The particlefilter[13]is a recursive algorithm which produces at each time,a cloud of particles, ,whose empirical measure closely“follows”the posterior distribution of the state given past observations(denoted by in the rest of the paper).It starts with sampling times from the initial state distribution to approximate it by,the predictionstep samples the new state from the distribution.Thus the empirical distribution of this new cloud of particles,.. Note that both and approximate but the resampling step is used because it increases the sampling efficiency by eliminating samples with very low weights.4Change/Abnormality DetectionFirst let us define some notation.The integral of a function w.r.t.a measure is denoted by with(1)Given the prior initial state distribution and the transition kernel,the prior state distribution at any time is.Since the transition kernel is absolutely continuous,this state distribution admits a pdf.In a lot of cases(for example if the system model is linear Gaussian with Gaussian initial state pdf)it is possible to define the pdf in closed form.Now,if the system were fully observed,one could evaluate from the observation and then (negative log-likelihood of the state coming from a normal system)could be used as a change detection statistic;the negative log-likelihood of the state of the changed or abnormal system would be smaller than that of the normal system.But for a partially observed system we can only approximate(using a PF)the posterior distribution of the state given past observations,by.The negative log-likelihood of state coming from a normal system can now be replaced by its expectation under the posterior distribution of the state.Thus for the partially observed system tracked using a PF,we propose to use as the change detection statistic,the expected log-likelihood(ELL),,approximated by. It is interesting to note that ELL is also the Kerridge inaccuracy[14]between.This is approximated using thePF as.The inaccuracy between the posterior state distribution corresponding to normal observations and the prior will be smaller than that between the posterior corresponding to abnormal observations and the prior.Kerridge inaccuracy can also be interpreted as the average length of code designed for rather than[15].The advantage of this statistic over others used in literature(for PONLD systems tracked using PF’s) like tracking error[2]or log-likelihood of current observation given past observations[6]is that it can detect slow changes much better.Consider as an example to motivate our approach,a PONLD system with a linear Gaussian system model i.e..is i.i.d.zero mean Gaussian noise,is a zero mean Gaussian,so that is also a zero mean Gaussian density.Let us assume that the abnormality causes the system model to change to,whereis a small constant bias added to the state or part of the state vector.If the bias added along a direction is small or comparable to the noise variance along that direction,it will get tracked correctly by the PF and hence tracking error will not show a significant change.But a systematically increasing bias along certain directions will cause the mean of the posterior distribution of the state to increase to a significantly nonzero value within a few time steps,thus causing the Kerridge inaccuracy between the posterior and the normal system pdf to increase.For such a problem,ELL(or Kerridge inaccuracy)will detect the abnormality much faster than tracking error.Now,if the abnormality/change is a drastic one,it will cause the PF to lose track quite rapidly(the number of particles,,is afinite number,and so there will be very few particles around the expected value of the abnormal state),thus causing the tracking error to show a sudden increase which can be used for detecting an abrupt change.Due to the same reason,the estimate of the posterior distribution in this case is no longer reliable and hence the ELL evaluated with it will also be inaccurate.For this case tracking error is a more reliable change detection statistic.Hence a robust change detection strategy which can detect both slow and drastic changes with minimum delay would be to use both tracking error and ELL to detect a change;if either one exceeds its respective threshold then a change is declared.5Convergence of ApproximationLet be a probability space,then the sequence is a sequence of random measures, and is a deterministic probability measure.denotes the set ofprobability measures over the Borel sigma algebra on.Theorem1[13]:The sequence of(empirical)posterior state distributions estimated using a PF (that satisfies certain smoothness assumptions discussed in[13])converges weakly to the true posterior ,P-a.s.,i.e.P-a.s.(2) where denotes the set of continuous bounded functions on.First note that the PF used in this work satisfies the assumptions of[13]:As discussed earlier in section2, the observation likelihood,is a continuous and everywhere positive funciton and the transition kernel is Feller.The multinomial resampling step(described in section3)produces unbiased samples from with variance of the weights estimated from the samples bounded by(so it goes to zero as). Now,the negative log likelihood of state is an unbounded function while the theorem stated above works for bounded continuous functions.But since it is a non-negative function, we can use the standard mathematical analysis trick of approximating it by a sequence of increasing bounded functions which will converge to,i.e.define(3)Then we have.Now,is a continuous bounded function and so we can use theorem1with to prove convergence of to,P-a.s.,as, i.e.P-a.s,(4) Since is a sequence of non-negative increasing functions that converge pointwise to,we can use Monotone Convergence Theorem([16],page87)to prove convergence of to,i.e.(5) Combining equations(4)and(5),we getCorollary1P-a.s(6)or that the error between the posterior expectation of the bounded approximation of negative log-likelihood of state,evaluated using an-particle PF,and its true value can be made as small as one wants by choosing the bound to be large enough and given the bound,choosing to be large enough.Remark1:Interpreting the above result in terms of Kerridge inaccuracy,we haveslow changes.But a drastic or abrupt change with large enough to track only the normal system,will cause the PF to lose track.Now,if the PF does lose track due to a change,tracking error will detect the change while as long as it is able to track the changing system,the evaluated ELL will increase from its normal value,so that ELL can detect such a change.6Abnormal Activity DetectionIn a previous work[3],we have defined a PONLD system for modeling the configuration dynamics of a group of interacting point objects which we treat as a moving and deforming shape.The shape deformation model and the motion(scale,rotation)model form the system model and the measurement noise in the point object configuration constitutes the observation noise.We have assumed a stationary shape deformation model,and so we are able to learn a single mean shape for the data.We use the tangent space coordinates w.r.t this mean shape[19]as the shape coordinates.Also,since translation is a linear operation,we use a translation normalized configuration vector as the observation vector(thus avoiding the need to have translation as part of the state vector).We give below the equations used for the observation model and for the system model for shape,scale and rotation.6.1Dynamical ModelObservation Modelwhere(9)is a complex vector of observations containing the object locations measurements,is a nonlin-ear function of the state,where is the vector of the tangent coordinates of shape, is the rotation angle,and is the scale.System ModelWe set as detection threshold for a statistic,its maximum value for a normal test ing this threshold we plot the detection delay against the rate of change(walk away velocity)for the three statis-tics.We do this for in3(a),(b),(c).Now detection delay using ELL is least except when the change is very drastic(velocity=16,32)where tracking error or Obs.likelihood perform better. The criterion for choosing a change detection statistic and its threshold is to minimize the detection delay for afixed mean time between false alarms[4].Since we are assuming unknown change param-eters,we plot the average delay in detecting abnormality(averaged over different change rates)against the mean time between false alarms.We do this infigure4(a),(b),(c)for.As can be seen from thefigures,ELL has the best average detection delay performance.But we know from the previousfigure that tracking error has better performance for drastic abnormalities.Thus as discussed in section4,we combine ELL and tracking error,i.e.we declare a change if either ELL or tracking error exceed their respective thresholds.To choose an operating point,we vary thresholds for both ELL and tracking error and plot the average detection delay versus mean time between false alarms.Each broken green line plot plots the detection delay for ELL thresholdfixed,tracking error threshold varying.As is evident fromfigure4,for most values of mean time between false alarm,we can obtain an operating point with this combined strategy that has lower detection delay than either ELL or tracking error alone.7Discussion and Future WorkNow[6]defines a CUSUM like likelihood ratio based statistic for change detection.But in our problem, the parameters of the changed system are unknown and so we cannot define the probability of observa-tions under the changed system and hence the likelihood ratio cannot be defined.One could try to adapt the idea to the case of unknown parameters by trying to usewhere is the probability of the observations under the normal system hypothesis and is a normalcy threshold for the observation ing this kind of a statistic will detect the change time more accurately and have lesser false alarms than just the current‘Obs.likelihood’as defined by us but it is computationally more complex.Also,it is not clear how to set the thresholds.As part of future work,one could attempt to define a CUSUM like statistic for ELL of state given observations i.e.define and compare its performance with defined above.We can set to be the expected value over observation sequences of the ELL at time,(a)(b)Figure 1:(a):A ‘normal activity’frame with 4people,(b):Abnormality:One person walking away in a weirddirection.,which is equal to the expected value underof(which is the entropyof).For the case of Gaussian prior this simplifies to a constant times the dimensionality of the datai.e..For small observation noises,one could assume the system to be fully observed and replace ELLby log likelihood of.It would be interesting to study at what observation noise,thefully observed assumption starts to fail.Finally,we hope to compare performance of the change detec-tion statistics for different kinds of abnormalities in this and other applications and also for non-white and non-Gaussian observation noise.In non-Gaussian noise,one interesting case would be how to dis-tinguish outlier observations caused due to say Cauchy noise from an abnormality.For such a case,a sequence based statistic likeorwould prove useful.Another interesting problem would be switchingbetween a couple of known modes using a discrete mode variable as in [7,8]and defining the sequence to be abnormal only if the posterior state distribution is far from the priors corresponding to all the modes.We have studied the change detection problem in more detail in a recent work [18],which analyzes the errors in approximating the ELL using a PF optimal for the normal system.It also discusses more rigorously,the complementary behavior of ELL and Obs.Likelihood for change detection.tE L L(a)(b)(c)Figure 2:We show plots of ELL,tracking error and obs.likelihood (in (a),(b),(c)respectively)for normal activityand increasing walk away velocities (abnormal behavior)as a function of time.Abnormality is introduced at and.(a)(b)(c)Figure 3:We plot the detection delay for zero false alarm against the rate of change (walk away velocity)for thethree statistics.(a),(b),(c)are for increasing observation noise.References[1]A.Doucet,N.deFreitas,and N.Gordon,Sequential Monte Carlo Methods in Practice ,Springer,2001.[2]Y .Bar-Shalom and T.E.Fortmann,Tracking and Data Association ,Academic Press,1988.A v e r a g e D e t e c t i o n D e l a y(a)(b)(c)Figure 4:We plot the ROC (detection delay averaged over all rates of change versus mean time between falsealarms)for the three statistics and the green dotted lines are the ROC for combining ELL and tracking error.(a),(b),(c)are for increasing observation noise.[3]N.Vaswani,A.RoyChowdhury,and R.Chellappa,“Activity recognition using the dynamics ofthe configuration of interacting objects,”in IEEE Conference on Computer Vision and Pattern Recognition (CVPR),2003.[4]M.Basseville and I Nikiforov,Detection of Abrupt Changes:Theory and Application ,PrenticeHall,1993.[5]T.Kailath,A.H.Sayed,and B.Hassibi,Linear Estimation ,Prentice Hall,2000.[6]B.Azimi-Sadjadi and P.S.Krishnaprasad,“Change detection for nonlinear systems:A particlefiltering approach,”in American Control Conference ,2002.[7]R.Dearden and D.Clancy,“Particle filters for real-time fault detection in planetary rovers,”.[8]Shaohua Zhou and Rama Chellappa,“Probabilistic human recognition from video,”in EuropeanConference on Computer Vision ,2002,pp.681–697.[9]N.J.Gordon,D.J.Salmond,and A.F.M.Smith,“Novel approach to nonlinear/nongaussian bayesianstate estimation,”IEE Proceedings-F (Radar and Signal Processing),pp.140(2):107–113,1993.[10]P.Fearnhead,“Sequential monte carlo methods infilter theory,”in PhD Thesis,Merton College,Universoty of OXford,1998.[11]M.Isard and A.Blake,“Contour tracking by stochastic propagation of conditional density,”Euro-pean Conference on Computer Vision,pp.343–356,1996.[12]G.Casella and R.Berger,Statistical Inference,Duxbury Thomson Learning,second edition,2002.[13]D.Crisan and A.Doucet,“Convergence of sequential monte carlo methods,”in Technical Report,Cambridge University,2000.[14]D.F.Kerridge,“Inaccuracy and inference,”J.Royal Statist.Society,Ser.B,vol.231961.[15]Rudolf Kulhavy,“A geometric approach to statistical estimation,”in IEEE Conference on Decisionand Control(CDC),Dec.1995.[16]H.L.Royden,Real Analysis,Prentice Hall,1995.[17]LeGland F.and Oudjane N.,“Stability and Uniform Approximation of Nonlinear Filters using theHilbert Metric,and Application to Particle Filters,”Technical report,RR-4215,INRIA,2002.[18]N.Vaswani and R.Chellappa,“Change detection in partially observed nonlinear dynamical sys-tems with change parameters unknown,”in submitted to American Control Conference(ACC), 2004.[19]I.L.Dryden and K.V.Mardia,Statistical Shape Analysis,John Wiley and Sons,1998.。

济南2024年02版小学5年级上册第六次英语第4单元期中试卷

济南2024年02版小学5年级上册第六次英语第4单元期中试卷

济南2024年02版小学5年级上册英语第4单元期中试卷考试时间:100分钟(总分:120)A卷考试人:_________题号一二三四五总分得分一、综合题(共计100题)1、听力题:The burning of fossil fuels releases carbon ______.2、听力题:The Great Wall is located in _______.3、What do we call the period of time before written records?A. PrehistoryB. Ancient historyC. Middle AgesD. Modern history答案:A4、听力题:The formula for carbon dioxide is _______.5、听力题:The cat loves to curl up on a _____ warm lap.6、填空题:My ________ (玩具名称) is a character from a movie.7、听力题:Birds lay ______ in nests.8、听力题:The process of distillation separates liquids based on their _____.9、听力题:The symbol for zirconium is _____.10、填空题:The ________ was an important event in the history of the United States.11、听力题:The teacher gives us _____ for homework. (assignments)12、听力题:A __________ is a large area of flat land that is higher than the surrounding area.13、What is the opposite of 'big'?A. HugeB. SmallC. TallD. Short答案:B. Small14、填空题:The ______ (树叶) fall off in winter for some trees.15、填空题:My favorite snack to share is ______.16、What do we call a young person in school?A. StudentB. ScholarC. LearnerD. Pupil17、填空题:My sister's favorite color is ______ (粉红色). She has many ______ (玩具).18、听力题:A _______ is a high, steep face of rock.19、What is the capital of Argentina?A. SantiagoB. Buenos AiresC. LimaD. Montevideo答案:B20、What is the name of the famous explorer known for his voyages to the Pacific?A. James CookB. Ferdinand MagellanC. Christopher ColumbusD. Vasco da Gama答案: A21、选择题:What is the name of the famous bear who loves adventures?A. Winnie the PoohB. PaddingtonC. BalooD. Yogi22、Which shape has three sides?A. SquareB. RectangleC. TriangleD. Circle答案:C23、What is the process of water changing into vapor called?A. CondensationB. EvaporationC. PrecipitationD. Infiltration答案: B24、听力题:The chemical formula for carbon dioxide is __________.25、What do you call a person who writes poems?A. PoetB. LyricistC. AuthorD. All of the above答案:A26、填空题:The ________ is a friendly creature that loves to cuddle.27、What is the capital of Canada?A. TorontoB. OttawaC. VancouverD. Calgary答案: B28、填空题:My uncle is a __________ (教育专家).29、What do we call the act of embracing diversity?A. InclusionB. AcceptanceC. ToleranceD. All of the Above答案:D30、听力题:A ________ is a raised area of land, often with steep sides.31、What is the capital of Bangladesh?A. DhakaB. ChittagongC. KhulnaD. Sylhet答案:A. Dhaka32、听力填空题:I believe that expressing our feelings is important for __________.33、听力题:The movie was very ___ (interesting).34、ers are associated with ______ and traditional celebrations. (某些花与节日和传统庆祝活动密切相关。

depthmap使用手册

depthmap使用手册

31.131Abstract Here we present Depthmap, a program designed to perform visibility graph analysis of spatial environments. The program allows a user to import a 2D layout in drawing exchange format (DXF), and to fill the open spaces of this layout with a grid of points. The user may then use the program to make a visibility graph representing the visible connections between those point locations. Once the graph has been constructed the user may perform various analyses of the graph, concentrating on those which have previously been found to be useful for spatial description and movement forecasting within the Space Syntax literature. Some measures which have not received so much mainstream attention have also been imple-mented. In addition to graph analysis, a few sundry tools are also enabled so that the user may make minor adjustments to the graph. Depthmap has been implemented for the Windows 95/98/NT and 2000 platforms.1 IntroductionThe application of visibility graph analysis (VGA) to building environments was first intro-duced by Braaksma and Cook (1980). Braaksma and Cook calculate the co-visibility of vari-ous units within an airport layout, and produce an adjaceny matrix to represent these relation-ships, placing a 1 in the matrix where two locations are mutually visible, and a 0 where they are not. From this matrix they propose a metric to compare the number of existing visibility relationships with the number which could possibly exist, in order to quantify how usefully a plan of an airport satisfies a goal of total mutual visibility of locations. This type of analysis was recently rediscovered by Turner et al. (Turner and Penn, 1999; Turner et al., 2001), through considering recent developments in Space Syntax of various isovist approaches (see Hanson,1998, for further background). They recast the adjaceny matrix as a visibility graph of loca-tions, where a graph edge connects vertices representing mutually visible locations. Turner et al. then use several graph metrics as a few representative measures which could be performed on such graphs. The metrics they apply are taken from a combination of techniques used in Space Syntax and those employed in the analysis of small-worlds networks by Watts and Strogatz (1998).Here we present a tool which enables a user to perform VGA on both building and urban environments, allowing the user to perform the kind of analysis proposed by Turner et al..We call this tool Depthmap . Depthmap first allows the user to import layouts in 2D DXF format and to fill the layout with a rectilinear grid of points. These points may then be used to construct a visibility graph, and we describe this procedure in section 2. After the graph hasDepthmap A program to perform visibility graph analysisAlasdair Turner University College London, UK Keywords:visibility graph,spatial analysis,software develop-ment Alasdair Turner Center for Advanced Spatial Analysis,Torrington Place Site, University College London,Gower Street,London WC1E 6BT ,UK alasdair .turner@31.2been constructed, Depthmap gives the user several options to perform analysis on the graph,which we describe in detail in section 3. The methods include those employed by Turner et al.,but also some, such as control (Hillier and Hanson, 1984) or point depth entropy (following a formula proposed by Hiller et al., 1987), which have not been previously applied to visibility graph analysis. We supply algorithmic and mathematical details of each measure, as well as samples of results, and short explanations of why we believe these may be of interest to other researchers. In section 4 we include a brief description of extra facilities included in Depthmap to adjust the edge connections in the visibility graph, and give guidance for their use in spatial analysis. Finally, we present a summary in section 5.2 Making a visibility graph from a DXF fileIn this section we describe how the user may get from a layout to a visibility graph using Depthmap. When the user first enters Depthmap, she or he may create a new graph file from the File menu. This graph file initially appears blank, and the first thing the user must do is to import a layout to be analysed. The import format used is AutoDesk s drawing exchangeformat (DXF), and only two dimensional layouts maybe imported. Once loaded, the user is shown the lay-out on the screen, and she or he may move it andzoom in and out as desired. Figure 1 shows the pro-gram at this stage. After the DXF file has been im-ported, the user is ready to add a set of locations to beused as a basis for the visibility graph.2.1 Making a grid of locations To make a set of locations, the user selects the Fill tool from the toolbar, and then clicks on a seed point in open space to make a grid of points, as shown in figure 2. When the user clicks for the first time, she or he is given the option of choosing a grid spacing,based on the units used in the original DXF, and may use for example, a 1m grid. The grid is filled from the seed point using a simple flood fill algorithm. Points may be deleted by selecting either individual points or ranges of points. Once the user is happy with the grid,she or he may then go on to make the visibility graph.The user is restricted to a rectilinear grid of points at present. We realise that this is a problem for anyone attempting a methodological investigation of VGA,rather than simply using it to analyse environments.This is due to using a grid assumption when we calculate point intervisibility, and unfortu-nately a constraint. However, various themes on the basic grid are still available, for example by using different resolutions or by rotating the DXF prior to import, or by only partiallyfilling grids.Figure 1: Theapplication as itappears after the DXFfile has beenimportedFigure 2: The userfills a section of openspace using the filltool31.32.2 Making the graphThe graph is made by Depthmap by selecting the Make Graph option from the Tools menu. The program attempts to find all the visible locations from each grid location in the layout one by one, and uses a simple point visibility test radiating from the current location to achieve this. As each location is considered, a vertex is added to the graph for this point,and the set of visible vertices is stored. Note that this is only one way that the visibility graph could be stored. It is actually more space efficient only to store the edges, but this would lead to slower algorithms later when analysing the graph. We can write a simplified form of the algorithm in pseudocode as follows. In the algorithm we use graph set notation: V(G) is the set of all locations or vertices that exist, and v i an individual location or vertex in the graph we are making. Each vertex v i will have a set of vertices connected to it, which will labelled the set V (G i ), otherwise known as the vertex s neighbourhood.The number of vertices in the neighbourhood is obviouslyeasily calculable, and Depthmap records these neighbourhoodsizes as it makes the graph. In graph theory, the neighbourhoodsize for a vertex is commonly written ki, and may be expressed as in equation 1..(1)where E(G) is the set of all edges (i.e., visibility connections) inthe graph. Note that the set E(G) is not actually stored byDepthmap, and so k i is actually calculated using the first formof this equation. Figure 3 shows a simple layout after the vis-ibility graph has been made using Depthmap. As the actual number of connections is huge for each vertex, only k i values are shown for each location rather than the mess of actual connections. Depthmap colours k i values using a spectral range from indigo for low values through blue, cyan, green, yellow, orange, red to magenta for high values. The user may change the bounds for this range, or choose to use a greyscale instead, using a menu option from the View menu. Since this paper is reproduced in greyscale, all the figures shown have used the greyscale option, where black represents low values and white represents high values.Once the graph has been constructed, the user has various options, including graph analysis,which we describe in the next section, and modification of the graph, which we describe insection 4.Figure 3:Neighbourhood size calculated for a sample layout31.4 3 Measurement of the graph In this section we describe the graph measures available to a user of Depthmap. Analysis of the graph is split into two types: global measures (which are constructed using information from all the vertices in the graph) and local measures (which are constructed using informa-tion from the immediate neighbourhood of each vertex in the graph). The user may elect to perform either or both of these types of measure by selecting from the Tools menu. She or he is presented with a dialog box, as shown in figure 4. The radius box allows the user to restrict global measures to only include vertices up to n-edge steps from each vertex. When the user clicks OK, the program calculates the measures for the whole system. The key global measures are mean depth and point depth entropy, while the key local measures are clustering coefficient and control. Once the measures have been calculated, these and derived measures are given as options on the View menu, and may be exported as plain text for use in GIS and statistical packages from File menu. We now turn to a discussion of the algorithmic imple-mentation of each measure, and explore possibilities for their use within Space Syntax re-search.3.1 Clustering Coefficient Clustering coefficient, g i , is described in detail by Watts (1999) and has its origin in theanalysis of small-world networks. Turner et al. found it useful for the detection of junction points in environments. Clustering coefficient is defined as the proportion of vertices which are actually connected within the neighbourhood of the current vertex, compared to thenumber that could possibly be connected, as shown in equation 2.(2)where E(G i ) is the set of edges in the neighbourhood of v i and k i is the as previously calculated. This is implemented in Depthmap by the following algorithm for each vertex inthe graph*Figure 4: The optionsdialog box for graphanalysis * Again note that as the set of edges E(G i ) is not recorded, the information must be recovered from the vertices in the neighbourhood, V(G i )31.5Figure 5 shows the clustering coefficient calculated for a sample spatial configuration. As noted by Turner et al. junctions in the configuration are picked out. However, as they suggest,and looking at the figure, it actually seems to pick out better the changes in visual information as the system is navigated, so low clustering coefficients occur where a new area of the system may be discovered. Examining g i , seems a promising line of investigation when looking at how visual information varies within an environment - for example, as suggested by Conroy (2000) for finding pause points on journeys.3.2 ControlControl for a location, which we will labelci, is defined by Hillier and Hanson (1984), andis calculated by summing the reciprocals of theneighbourhood sizes adjoining the vertex, asshown in equation 3.(3)A simple algorithm can be used to calculate thisvalue as follows:It should be noted that in VGA many of theimmediately adjoining neighbourhoods willoverlap, so that perhaps a better definition ofVGA control would be the area of the currentneighbourhood with respect to the total areaof the immediately adjoining neighbourhood- that is, rather than use the sum the size of allthe adjoining neighbourhoods, use the size ofthe union of those adjoining neighbourhoodsas shown in equation 4. The results of applyingthis method are shown in figure 6, althoughDepthmap is also capable of calculating controlas defined by Hillier and Hanson.(4)Figure 5: Clusteringcoefficient calculatedfor a sample layoutFigure 6: Controlcalculated for asample layout31.63.3 Mean DepthThe mean path length L i from a vertex is the average number of edge steps to reach any other vertex in the graph using the shortest number of steps possible in each case. This sort of graph measure has a long history stretching back as far as Wiener (1947), and is pertinent to visibility graph analysis due to the parallels with the use of integration in space syntax theory (Hillier et al., 1993), showing how visually connected a vertex is to all other vertices in the system. We calculate L i by constructing the set of point depths, as follows. The algorithm we use is not the most time efficient, as shortest paths are recalculated for each vertex, rather than being stored in a cache. However, the memory constraints on current personal computers mean that storing all the shortest paths in the system would rapidly use up the available memory. Hence, the algorithm that follows works in O(n 2) time. It obtains point depths for all the vertices in the system from the current vertex, by adding ordered pairs of vertices and depths to the set P.An asterisk, such as in the set pair {v 1, *}, represents a wild card matching operation. For example, {v 1, *} matches any of {v 1,1}, {v 1, 2} or {v 1, 4}.Once the point depth set has been constructed, it is facile to calculate measures such as mean depth and integration. Figure 7 shows mean depth for our sample system. As has been shown, this measure would seem to be useful understanding movement of people within building environments, where it isdifficult to apply traditional Space Syntax methods such as axial analyses at high resolutions.However, in urban environments, since we are measuring numbers of turns from location to location, VGA integration quickly approximates to axial integration (albeit with each line weighted by the street area), and due to speed considerations, it may not be as beneficial to useVGA integration in these situations.Figure 7: Mean depthcalculated for asample layout31.73.4 Point Depth EntropyIn addition to calculating measures such as mean depth, the point depth set P i allows us to explore measures based on the frequency distribution of the depths. One such measure is the point depth entropy of a location, s i , which we can express using Shannon s formula of uncertainty, as shown in equation 5. Entropy occurs in many fields, including informatics,and is proposed for use in Space Syntax by Hiller et al. (1987).(5)where d max is the maximum depth from vertex v i and p d is the frequency of point depth *d*from the vertex. This is implemented algorithmically in Depthmap as follows:.Calculating point depth entropy can give an insight into how ordered the system is from a location. For example, if a doorway is connected to a main street then there is a marked disorder in the point depths from the point of view of the doorway: at depth 1 there are only a few locations visible from the doorway, then at depth 2 there are many locations from the street, and then order contained within further depths will depend on how the street is integrated within its environment. Figure 8 shows the point depth entropy as calculated by Depthmap. Other entropy-like measures are also calculated. The information from a point is calculated using the frequency with respect to the expected frequency of locations at each depth, as shown in equation 6. Obviously, what is meant by expected is debatable, but as the frequency is based on the probability of events occurring (that is, of the j graph splitting), it seems appropriate to use a Poisson distribution (see Turner, 2001, for a more detailed discussion), which has the advantage of depending only on a single variable, the mean depth of the j graph. The resulting formula is shown in equation 6 and is similar to that used by Hiller et al. for relativised entropy. So, why calculate the entropy or information from a point?The answer is threefold: firstly, it was found to be useful by Hiller et al.; secondly, it appeals intuitively to a tentative model of people occupation of a system, in that the entropy corre-sponds to how easy it is to traverse to a certain depth within the system (low disorder is easy,31.8high disorder is hard); and thirdly, it remedies the problem that VGA integration is heavily biased towards large open areas. In axial integration, because the system is dimensionless,large open areas do not unduly weight the values of the lines; that is, the large areas only weight the values by their increased connections, not through their area. By contrast, in VGA integration the measure approximates a mean of distance times area, as discussed in the previous section. Hence, by using a topological measure such as point depth entropy we eliminate the area dependence of the measure, and instead concentrate on the visual accessi-bility of a point from all other points.(6)4 Further tools available in Depthmap As well as allowing the user to make and analyse graphs, Depthmap includes the facility to modify the graph connections. We believe this will be useful when the user wishes to model something other than a two dimensional plan - for example, when modelling a building with more than one storey, or trying to include one way entrances or exit, escalators and so on. The method to modify connections in Depthmap is as follows. Once the graph has been made, the user first selects an area by dragging a select box across the desired portion of the graph, as she or he would in many computer applications. She or he may also select other areas by holding down the Shift key and then selecting again. The user then pins this selection, using a toolbar button. Now the user may select another area in the same way.Once these two areas have been selected, the user can alter the connections between them by selecting the Edit Connections dia-log box from the Edit menu. An example is shown in figure 9.The dialog box gives several options but essentially these reduce to set operations on the neighbourhoods of the selected points. The Add option simply adds the selected points in the other set to the neighbourhood, and is useful for turning points, for example a stairwell landing. The Merge option allows the user to union the neighbourhoods of the two selected sets, and is usefulfor adding seamless merges, for example, descending an incline. Finally the Remove option can be used to take away any connections from the selected set, and for example, might be useful to convert a two way entrance to a one way entrance.5 ConclusionIn this paper, we have presented a description of the Depthmap program, designed to perform visibility graph analysis (VGA). Depthmap can be used to analyse a layout to obtain various graph measures, including integration, which has been shown to correlate well with observed movement patterns when used with axial maps (see Hillier et al., 1993, for ex-ample), and also shown to correlate with movement patterns when used with VGA (see Turner and Penn, 1999). Although we have talked only about the overall application ofDepthmap to a system to make and analyse graphs, Depthmap also has many other features Figure 8: Point depthcalculated for asample layout31.9which a user would expect from a program, such as printing, summarising graph data and so on, which we have restricted to a user manual. What we do hope to have given here is a flavour of what is achievable with the program, an insight into how the graph is analysed,and our reasons for choosing the graph measures we have included.Finally, it is worth noting that Depthmap is designed to be a tool for Space Syntax researchers. This means that we have chosen DXF as the input medium, and that the pro-gram runs interactively, allowing the graph to be modified and the analysis to be reapplied. It also means that sometimes we have not used the fastest algorithm to achieve a task, but have instead designed the program to work with the memory available on most personal comput-ers today. On a 333 MHz machine running Windows 98, Depthmap takes around an hour to process a graph with 10 000 point locations, from start to finish, and the program has been tested on graphs with up to 30 000 point locations. We hope that the Space Syntax commu-nity will enjoy using our tool and we look forward to improving it with the input and insight of future researchers.References Braaksma, J P and Cook, W J, 1980, Human orientation in transportation terminals Transportation Engineering Journal 106(TE2) 189-203Conroy, R, 2000 Spatial Navigation in Immersive Vir-tual Environments PhD thesis, Bartlett School of Graduate Studies, UCL Hanson, J, 1998 Decoding Houses and Homes (Cam-bridge University Press, Cambridge, UK)Hiller, B, Hanson, J and Peponis, J, 1987, The syntactic analysis of settlements Architecture and Behaviour 3(3) 217-231Hillier, B and Hanson, J, 1984 The Social Logic of Space (Cambridge University Press, Cambridge,UK)Hillier, B, Penn, A, Hanson, J, Grajewski, T and Xu, J, 1993, Natural movement: or configura-tion and attraction in urban pedestrian move-ment Environment and Planning B: Planning and Design 20 29-66Turner, A, 2001, Angular analysis Proceedings of the 3rd International Symposium on Space Syntax Georgia Institute of Technology, Atlanta, Geor-gia.Turner, A, Doxa, M, O Sullivan, D and Penn, A,2001, From isovists to visibility graphs: a methodology for the analysis of architectural space Environment and Planning B: Planning and Design 28(1) Forthcoming Turner, A and Penn, A, 1999, Making isovists syntatic: Isovist integration analysis Proceedings of the 2nd International Symposium on Space Syntax Vol. 3, Universidad de Brasil, Brasilia, Brazil Watts, D J, 1999 Small Worlds (Princeton University Press, Princeton, NJ)Watts, D J and Strogatz, S H, 1998, Collective dynamics of small-world networks Nature 393440-442Wiener, H, 1947, Structural determination of paraffin boiling points Journal of the American Chemistry Society 6917-20Figure 9: Editing connections。

Outof-domain detection based on confidence measures from multiple topic classification

Outof-domain detection based on confidence measures from multiple topic classification

OUT-OF-DOMAIN DETECTION BASED ON CONFIDENCE MEASURES FROMMULTIPLE TOPIC CLASSIFICATIONIan ne1,2,Tatsuya Kawahara1,2,Tomoko Matsui3,2,Satoshi Nakamura21School of Informatics,Kyoto UniversitySakyo-ku,Kyoto606-8501,Japan2ATR Spoken Language Translation Laboratories2-2-2Hikaridai,Seika-cho,Soraku-gun,Kyoto619-0288,Japan3The Institute of Statistical Mathematics4-6-7Minami-Azabu,Mitato-ku,Tokyo106-8569,JapanABSTRACTOne significant problem for spoken language systems is how to cope with users’OOD(out-of-domain)utterances which cannot be handled by the back-end system.In this paper,we propose a novel OOD detection framework,which makes use of classification con-fidence scores of multiple topics and trains a linear discriminant in-domain verifier using GPD.Training is based on deleted inter-polation of the in-domain data,and thus does not require actual OOD data,providing high portability.Three topic classification schemes of word N-gram models,LSA,and SVM are evaluated, and SVM is shown to have the greatest discriminative ability.In an OOD detection task,the proposed approach achieves an ab-solute reduction in EER of6.5%compared to a baseline method based on a simple combination of multiple-topic classifications. Furthermore,comparison with a system trained using OOD data demonstrates that the proposed training scheme realizes compara-ble performance while requiring no knowledge of the OOD data set.1.INTRODUCTIONMost spoken language systems,excluding general-purpose dicta-tion systems,operate over definite domains as a user interface to a service provided by the back-end system.However,users,es-pecially novice users,do not always have an exact concept of the domains served by the system.Thus,they often attempt utterances that cannot be handled by the system.These are referred to as OOD(out-of-domain)in this paper.Definitions of OOD for three typical spoken language systems are described in Table1.For an improved interface,spoken language systems should predict and detect such OOD utterances.In order to predict OOD utterances,the language model should allow some margin in its coverage.A mechanism is also required for the detection of OOD utterances,which is addressed in this paper.Performing OOD de-tection will improve the system interface by enabling users to de-termine whether to reattempt the current task after being confirmed as in-domain,or to halt attempts due to being OOD.For exam-ple,in a speech-to-speech translation system,an utterance may be in-domain but unable to be accurately translated by the back-end system;in this case the user is requested to re-phrase the input utterance,making translation possible.In the case of an OOD ut-terance,however,re-phrasing will not improve translation,so theTable1.Definitions of Out-of-domain for various systems System Out-of-Domain definition Spoken Dialogue User’s query does not relate to back-endinformation sourceCall Routing User’s query does not relate to anycall destinationSpeech-to-Speech Translation system does not provide Translation coverage for offered topicuser should be informed that the utterance is OOD and provided with a list of tractable domains.Research on OOD detection is limited,and conventional stud-ies have typically focused on using recognition confidences for re-jecting erroneous recognition outputs(e.g.,[1],[2]).In these ap-proaches there is no discrimination between in-domain utterances that have been incorrectly recognized and OOD utterances,and thus effective user feedback cannot be generated.One area where OOD detection has been successfully applied is call routing tasks such as that described in[3].In this work,classification models are trained for each call destination,and a garbage model is ex-plicitly trained to detect OOD utterances.To train these models,a large amount of real-world data is required,consisting of both in-domain and OOD training examples.However,reliance on OOD training data is problematic:first,an operational on-line system is required to gather such data,and second,it is difficult to gain an appropriate distribution of data that will provide sufficient cover-age over all possible OOD utterances.In the proposed approach,the domain is assumed to consist of multiple sub-domain topics,such as call destinations in call-routing,sub-topics in translation systems,and sub-domains in com-plex dialogue systems.OOD detection is performed byfirst cal-culating classification confidence scores for all in-domain topic classes and then applying an in-domain verification model to this confidence vector,which results in an OOD decision.The verifi-cation model is trained using GPD(gradient probabilistic descent) and deleted interpolation,allowing the system to be developed by using only in-domain data.2.SYSTEM OVERVIEWIn the proposed framework,the training set is initially split into multiple topic classes.In the work described in this paper,topic classes are predefined and the training set is hand-labeled appropri-«by applying topic-dependent language models.We demonstratedthe effectiveness of such an approach in[4].An overview of the OOD detection framework is shown inFigure1.First,speech recognition is performed by applying ageneralized language model that covers all in-domain topics,andN-best recognition hypotheses(s1,...,s N)are generated.Next, topic classification confidence scores(C(t1|X),...,C(t M|X)) are generated for each topic class based on these hypotheses.Fi-nally,OOD detection is performed by applying an in-domain veri-fication model G in−domain(X)to the resulting confidence vector. The overall performance of the proposed approach is affected by the accuracy of the topic classification method and the in-domain verification model.These aspects are described in detail in the following sections.3.TOPIC CLASSIFICATIONIn this paper three topic classification schemes are evaluated:topic-dependent word N-gram,LSA(latent semantic analysis),and SVM (support vector machines).Based on a given feature set,topic models are trained using the above methods.Topic classification is performed and confidence scores(in the range[0,1])are calculated by applying a sigmoid transform to these results.When classifica-tion is applied to an N-best speech recognition result,confidence scores are calculated as shown in Equation1.Topic classification is applied independently to each N-best hypothesis,and these are linearly combined by weighting each with the posterior probability of that hypothesis given by ASR.C(t j|X)=NXi=1p(s i|X)C(t j|s i)(1)C(t j|X):confidence score of topic t j for input utterance Xp(s i|X):posterior probability of i-th best sentencehypothesis s i by ASRN:number of N-best hypotheses3.1.Topic Classification FeaturesVarious feature sets for topic classification are investigated.A feature vector consists of either word baseform(word token with no tense information;all variants are merged),full-word(surface form of words,including variants),or word+POS(part-of-speech) tokens.The inclusion of N-gram features that combine multiple neighboring tokens is also investigated.Appropriate cutoffs are applied during training to remove features with low occurrence.3.2.Topic-dependent Word N-gramIn this approach,N-gram language models are trained for each topic class.Classification is performed by calculating the log-likelihood of each topic model for the input sentence.Topic clas-sification confidence scores are calculated by applying a sigmoid transform to this log-likelihood measure.tent Semantic AnalysisLSA(latent semantic analysis)[5]is a popular technique for topic classification.Based on a vector space model,each sentence is represented as a point in a large dimension space,where vector components relate to the features described in Section3.1.Be-cause the vector space tends to be extremely large(10,000-70,000 features),traditional distance measures such as the cosine distance become unreliable.To improve performance,SVD(singular value decomposition)is applied to reduce the large space to100-300di-mensions.Each topic class is represented as a single document vector composed of all training sentences,and projected to this reduced space.Classification is performed by projecting the vector represen-tation of the input sentence to the reduced space and calculating the cosine distance between this vector and each topic class vec-tor.The resulting distance is normalized by applying a sigmoid transform generating classification confidence scores.3.4.Support Vector MachinesSVM(support vector machines)[6]is another popular classifica-tion ing a vector space model,SVM classifiers are trained for each in-domain topic class.Sentences that occur in the training set of that topic are used as positive examples and the remainder of the training set is used as negative examples.Classification is performed by feeding the vector representa-tion of the input sentence to each SVM classifier.The perpendicu-lar distance between this vector and each SVM hyperplane is used as the classification measure.This value is positive if the input sen-tence is in-class and negative otherwise.Again,confidence scores are generated by applying a sigmoid transform to this distance.4.IN-DOMAIN VERIFICATIONThefinal stage of OOD detection consists of applying an in-domain verification model G in−domain(X)to the vector of confidence scores generated during topic classification.We adopt a linear dis-criminant model(Eqn.2).Linear discriminant weights (λ1,...,λM)are applied to the confidence scores from topic clas-sification(C(t1|X),...,C(t M|X)),and a threshold(ϕ)is ap-plied to obtain a binary decision of in-domain or OOD.G in−domain(X)=(1ifPMj=1λj C(t j|X)≥ϕ(in-domain)0otherwise.(OOD)(2)C(t j|X):confidence score of topic t j for input utterance XM:number of topic classes4.1.Training using Deleted InterpolationThe in-domain verification model is trained using only in-domain data.An overview of the proposed training method combining GPD(gradient probabilistic descent)[7]and deleted interpolation¬Table2.Deleted Interpolation based Training for each topic i in[1,M]set topic i as temporary OODset remaining topic classes as in-domaincalculate(λ1,...,λM)using GPD(λi excluded) average(λ1,...,λM)over all iterationsTable3.Experiment CorpusDomain:Basic Travel ExpressionsIn-Domain:11topics(transit,accommodation,...)OOD:1topic(shopping)Training Set:11topics,149540sentences(in-domain data only) Lexicon Size:17000wordsTest set:In-Domain:1852utterancesOOD:138utterancesis given in Table2.Each topic is iteratively set to be temporar-ily OOD,and the classifier corresponding to this topic is removed from the model.The discriminant weights of the remaining topic classifiers are estimated using GPD.In this step,the temporary OOD data is used as negative training examples,and a balanced set of the remaining topic classes are used as positive(in-domain) examples.Upon completion of estimation by GPD,thefinal model weights are calculated by averaging over all interpolation steps. In the experimental evaluation,a topic-independent class“basic”covering general utterances exists,which is not removed during deleted interpolation.4.2.Incorporation of Topic-dependent VerifierImproved OOD detection accuracy can be achieved by applying more elaborate verification models.In this paper,a model consist-ing of multiple linear discriminant functions is investigated.Topic dependent functions are added for topics not modeled sufficiently. Their weights are trained specifically for verifying that topic.For verification,the topic with maximum classification confidence is selected,and a topic-dependent function is applied if one exists, otherwise a topic-independent function(Eqn.2)is applied.5.EXPERIMENTAL EV ALUATIONThe ATR BTEC corpus[8]is used to investigate the performance of the proposed approach.An overview of the corpus is given in Table3.In this experiment,we use“shopping”as OOD of the speech-to-speech translation system.The training set consisting of11in-domain topics is used to train both the language model for speech recognition and the topic classification models.Recogni-tion is performed with the Julius recognition engine.The recognition performance for the in-domain(ID)and OOD test sets are shown in Table4.Although the OOD test set has much greater error rates and out-of-vocabulary rate compared with the in-domain test set,more than half of the utterances are correctly recognized,since the language model covers the general travel do-main.This indicates that the OOD set is related to the in-domain task,and discrimination between these sets will be difficult.System performance is evaluated by the following measures: FRR(False Rejection Rate):Percentage of in-domainutterances classified as OOD FAR(False Acceptance Rate):Percentage of OOD utterancesclassified as in-domainEER(Equal Error Rate):Error rate at an operating pointwhere FRR and FAR are equalTable4.Speech Recognition Performance#Utt.WER(%)SER(%)OOV(%) In-Domain18527.2622.40.71 Out-of-Domain13812.4945.3 2.56 WER:Word Error Rate SER:Sentence Error RateOOV:Out of V ocabularyparison of Feature Sets&Classification Models Method Token Set Feature Set#Feat.EER(%)SVM base-form1-gram877129.7SVM full-word1-gram989923.9SVM word+POS1-gram1000623.3SVM word+POS1,2-gram4075421.7SVM word+POS1,2,3-gram7306519.6LSA word+POS1-gram1000623.3LSA word+POS1,2-gram4075424.1LSA word+POS1,2,3-gram7306523.0 NGRAM word+POS1-gram1000624.8 NGRAM word+POS1,2-gram4075425.2 NGRAM word+POS1,2,3-gram7306524.2 SVM:Support Vector Machines LSA:Latent Semantic Analysis NGRAM:Topic-dependent Word N-gram5.1.Evaluation of Topic Classification and Feature Sets First,the discriminative ability of various feature sets as described in Section3.1were investigated.Initially,SVM topic classifica-tion models were trained for each feature set.A closed evaluation was performed for this preliminary experiment.Topic classifica-tion confidence scores were calculated for the in-domain and OOD test sets using the above SVM models,and used to train the in-domain verification model using GPD.During training,in-domain data were used as positive training examples,and OOD data were used as negative examples.Model performance was evaluated by applying this closed model to the same confidence vectors used for training.The performance in terms of EER is shown in thefirst section of Table5.The EER when word-baseform features were used was29.7%. Full-word or word+POS features improved detection accuracy sig-nificantly:with EERs of23.9%and23.3%,respectively.The in-clusion of context-based2-gram and3-gram features further im-proved detection performance.A minimum EER of19.6%was obtained when3-gram features were incorporated.Next,LSA and N-gram-based classification models were eval-uated.Both approaches showed lower performance than SVM, and the inclusion of context-based features did not improve per-formance.SVM with a feature set containing1-,2-,and3-gram offered the lowest OOD detection error rate,so it is used in the following experiments.5.2.Deleted Interpolation-based TrainingNext,performance of the proposed training method combining GPD and deleted interpolation was evaluated.We compared the OOD detection performances of the proposed method(proposed), a reference method in which the in-domain verification model was trained using both in-domain and OOD data(as described in Sec-tion5.1)(closed-model),and a baseline system.In the baseline system,topic detection was applied and an utterance was classi-fied as OOD if all binary SVM decisions were negative.Other-¬10203040506070010203040506070FRRF A RFig.2.OOD Detection Performance on Correct Transcriptions102030baselineproposedclosed-modelVerification MethodE r r o r R a t e (%)Fig.3.OOD Detection Performance on ASR Result wise it was classi fied as in-domain.The ROC graph of the three systems obtained by altering the veri fication threshold (ϕin Eqn.2)is shown in Figure 2.The baseline system has a FRR of 25.2%,a FAR of 29.7%,and an EER of 27.7%.The proposed method provides an abso-lute reduction in EER of 6.5%compared to the baseline system.Furthermore,it offers comparable performance to the closed eval-uation case (21.2%vs.19.6%)while being trained with only in-domain data.This shows that the deleted interpolation approach is successful in training the OOD detection model in the absence of OOD data.5.3.Evaluation with ASR ResultsNext,the performances of the above three systems were evaluated on a test set of 1990spoken utterances.Speech recognition was performed and the 10-best recognition results were used to gen-erate a topic classi fication vector.The FRR,FAR and percentage of falsely rejected utterances with recognition errors are shown in Figure 3.The EER of the proposed system when applied to the ASR re-sults is 22.7%,an absolute increase of 1.5%compared to the case for the correct transcriptions.This small increase in EER suggests that the system is strongly robust against recognition errors.Fur-ther investigation showed that the falsely rejected set had a SER of around 43%,twice that of the in-domain test set.This suggests that utterances that incur recognition errors are more likely to be rejected than correctly recognized utterances.5.4.Effect of Topic-dependent Veri fication ModelFinally,the topic-dependent in-domain veri fication model described in Section 4.2was also incorporated.Evaluation was performed on spoken utterances as in the above section.The addition of atopic-dependent function (for the topic “basic ”)reduced the EER to 21.2%.The addition of further topic-dependent functions,how-ever,did not provide signi ficant improvement in performance over the two function case.The topic class “basic ”is the most vague and is poorly modeled by the topic-independent model.A topic-dependent function effectively models the complexities of this class.6.CONCLUSIONSWe proposed a novel OOD (out-of-domain)detection method based on con fidence measures from multiple topic classi fication.A novel training method combining GPD and deleted interpolation was in-troduced to allow the system to be trained using only in-domain data.Three classi fication methods were evaluated (topic depen-dent word N-gram,LSA and SVM),and SVM-based topic classi fi-cation using word and N-gram features proved to have the greatest discriminative ability.The proposed approach reduced OOD detection errors by 6.5%compared to the baseline system based on a simple combination of binary topic classi fications.Furthermore,it provides similar per-formance to the same system trained on both in-domain and OOD data (EERs of 21.2%and 19.6%,respectively)while requiring no knowledge of the OOD data set.Addition of a topic dependent veri fication model provides a further reduction in detection errors.Acknowledgements:The research reported here was supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled,”A study of speech dialogue trans-lation technology based on a large corpus”.7.REFERENCES[1]T.Hanzen,S.Seneff,and J.Polifroni.Recognition con fidenceand its use in speech understanding systems.In Computer Speech and Language ,2002.[2]C Ma,M.Randolph,and J.Drish.A support vector machines-based rejection technique for speech recognition.In ICASSP ,2001.[3]P.Haffner,G.Tur,and J.Wright.Optimizing svms for com-plex call classi fication.In ICASSP ,2003.[4]ne,T.Kawahara,and nguage model switch-ing based on topic detection for dialog speech recognition.In ICASSP ,2003.[5]S.Deerwester,S.Dumais,G.Furnas,ndauer,andR.Harshman.Indexing by latent semantic analysis.In Journ.of the American Society for information science,41,pp.391-407,1990.[6]T.Joachims.Text categorization with support vector ma-chines.In Proc.European Conference on Machine Learning ,1998.[7]S.Katagiri,C.-H.Lee,and B.-H.Juang.New discriminativetraining algorithm based on the generalized probabilistic de-scent method.In IEEE workshop NNSP ,pp.299-300,1991.[8]T.Takezawa,M.Sumita,F.Sugaya,H.Yamamoto,and Ya-mamoto S.Towards a broad-coverage bilingual corpus for speech translation of travel conversations in the real world.In Proc.LREC,pp.147-152,2002.¬。

第十四届全国复杂网络大会日程

第十四届全国复杂网络大会日程
16:20 - 18:00
18:30 8:30-10:10 10:10 - 10:40 10:40-12:20 12:20 - 13:30 14 日 13:30 - 14:50
14:50 - 15:10
15:10 - 16:40
16:40 - 17:00
主持人 贾韬
陈关荣 吕金虎 汪小帆 蒋国平 曹进德
尔积
Modeling the complex 质大学(北京) multidimensional information time series to
characterize the volatility pattern evolution
16:40-17:00 黄昌巍 北京邮电大学理学院 Persistence paves the way for cooperation in evolutionary games
Predicting Human Contacts Through Alternating Direction Method of Multipliers
分组报告 A1:网络中的传播 (主持人:许小可)
201 会议厅(二楼)
时间
报告人
单位
题目
14:00-14:20 刘宗华 华东师范大学
复杂网络上的热传递进展
10 月 13 日
最佳学生论文答辩 I (主持人:方锦清)
时间 报告人
单位
401 会议厅(四楼) 题目
14:00-14:20 孙孟锋
上海大学
An Exploration and Simulation of Epidemic Spread and its Control in Multiplex
Networks

传感器与检测技术英文书籍英语

传感器与检测技术英文书籍英语

传感器与检测技术英文书籍英语Sensors and Detection Technologies.Sensors and detection technologies are essential components of modern instrumentation and control systems. They provide the means to measure and monitor physical, chemical, and biological parameters, and transmit this information to other devices for processing and analysis.There is a wide variety of sensors and detection technologies available, each with its own unique set of capabilities and limitations. The choice of sensor for a particular application depends on factors such as the parameter to be measured, the desired accuracy and precision, the operating environment, and the cost.Some of the most common types of sensors include:Temperature sensors measure the temperature of a substance. They can be based on a variety of principles,including thermocouples, resistance temperature detectors (RTDs), and thermistors.Pressure sensors measure the pressure of a gas or liquid. They can be based on a variety of principles, including strain gauges, diaphragms, and piezoresistive elements.Flow sensors measure the flow rate of a gas or liquid. They can be based on a variety of principles, including differential pressure, thermal dispersion, and ultrasonic waves.Level sensors measure the level of a liquid or solidin a tank or other container. They can be based on avariety of principles, including float switches, ultrasonic waves, and capacitance probes.Gas sensors measure the concentration of a gas in a sample. They can be based on a variety of principles, including electrochemical cells, semiconductor sensors, and optical sensors.Chemical sensors measure the concentration of a chemical species in a sample. They can be based on avariety of principles, including ion-selective electrodes, potentiometric sensors, and amperometric sensors.Biological sensors measure the presence or concentration of a biological molecule in a sample. Theycan be based on a variety of principles, including immunoassays, DNA hybridization, and protein binding assays.Detection technologies are used to convert the outputof a sensor into a digital signal that can be processed and analyzed by a computer or other device. Some of the most common types of detection technologies include:Analog-to-digital converters (ADCs) convert an analog signal into a digital signal.Digital-to-analog converters (DACs) convert a digital signal into an analog signal.Counters count the number of pulses or events that occur over a period of time.Timers measure the duration of a period of time.Data acquisition systems collect and store data from sensors and other devices.Sensors and detection technologies are used in a wide variety of applications, including:Industrial automation.Medical diagnostics.Environmental monitoring.Military and defense.Scientific research.The development of new sensors and detectiontechnologies is an active area of research and development. New sensors are being developed to measure a wider range of parameters with greater accuracy and precision. New detection technologies are being developed to improve the signal-to-noise ratio and reduce the cost of data acquisition systems.The continued development of sensors and detection technologies will enable new and innovative applications in a wide variety of fields.。

专八英语阅读

专八英语阅读

英语专业八级考试TEM-8阅读理解练习册(1)(英语专业2012级)UNIT 1Text AEvery minute of every day, what ecologist生态学家James Carlton calls a global ―conveyor belt‖, redistributes ocean organisms生物.It’s planetwide biological disruption生物的破坏that scientists have barely begun to understand.Dr. Carlton —an oceanographer at Williams College in Williamstown,Mass.—explains that, at any given moment, ―There are several thousand marine species traveling… in the ballast water of ships.‖ These creatures move from coastal waters where they fit into the local web of life to places where some of them could tear that web apart. This is the larger dimension of the infamous无耻的,邪恶的invasion of fish-destroying, pipe-clogging zebra mussels有斑马纹的贻贝.Such voracious贪婪的invaders at least make their presence known. What concerns Carlton and his fellow marine ecologists is the lack of knowledge about the hundreds of alien invaders that quietly enter coastal waters around the world every day. Many of them probably just die out. Some benignly亲切地,仁慈地—or even beneficially — join the local scene. But some will make trouble.In one sense, this is an old story. Organisms have ridden ships for centuries. They have clung to hulls and come along with cargo. What’s new is the scale and speed of the migrations made possible by the massive volume of ship-ballast water压载水— taken in to provide ship stability—continuously moving around the world…Ships load up with ballast water and its inhabitants in coastal waters of one port and dump the ballast in another port that may be thousands of kilometers away. A single load can run to hundreds of gallons. Some larger ships take on as much as 40 million gallons. The creatures that come along tend to be in their larva free-floating stage. When discharged排出in alien waters they can mature into crabs, jellyfish水母, slugs鼻涕虫,蛞蝓, and many other forms.Since the problem involves coastal species, simply banning ballast dumps in coastal waters would, in theory, solve it. Coastal organisms in ballast water that is flushed into midocean would not survive. Such a ban has worked for North American Inland Waterway. But it would be hard to enforce it worldwide. Heating ballast water or straining it should also halt the species spread. But before any such worldwide regulations were imposed, scientists would need a clearer view of what is going on.The continuous shuffling洗牌of marine organisms has changed the biology of the sea on a global scale. It can have devastating effects as in the case of the American comb jellyfish that recently invaded the Black Sea. It has destroyed that sea’s anchovy鳀鱼fishery by eating anchovy eggs. It may soon spread to western and northern European waters.The maritime nations that created the biological ―conveyor belt‖ should support a coordinated international effort to find out what is going on and what should be done about it. (456 words)1.According to Dr. Carlton, ocean organism‟s are_______.A.being moved to new environmentsB.destroying the planetC.succumbing to the zebra musselD.developing alien characteristics2.Oceanographers海洋学家are concerned because_________.A.their knowledge of this phenomenon is limitedB.they believe the oceans are dyingC.they fear an invasion from outer-spaceD.they have identified thousands of alien webs3.According to marine ecologists, transplanted marinespecies____________.A.may upset the ecosystems of coastal watersB.are all compatible with one anotherC.can only survive in their home watersD.sometimes disrupt shipping lanes4.The identified cause of the problem is_______.A.the rapidity with which larvae matureB. a common practice of the shipping industryC. a centuries old speciesD.the world wide movement of ocean currents5.The article suggests that a solution to the problem__________.A.is unlikely to be identifiedB.must precede further researchC.is hypothetically假设地,假想地easyD.will limit global shippingText BNew …Endangered‟ List Targets Many US RiversIt is hard to think of a major natural resource or pollution issue in North America today that does not affect rivers.Farm chemical runoff残渣, industrial waste, urban storm sewers, sewage treatment, mining, logging, grazing放牧,military bases, residential and business development, hydropower水力发电,loss of wetlands. The list goes on.Legislation like the Clean Water Act and Wild and Scenic Rivers Act have provided some protection, but threats continue.The Environmental Protection Agency (EPA) reported yesterday that an assessment of 642,000 miles of rivers and streams showed 34 percent in less than good condition. In a major study of the Clean Water Act, the Natural Resources Defense Council last fall reported that poison runoff impairs损害more than 125,000 miles of rivers.More recently, the NRDC and Izaak Walton League warned that pollution and loss of wetlands—made worse by last year’s flooding—is degrading恶化the Mississippi River ecosystem.On Tuesday, the conservation group保护组织American Rivers issued its annual list of 10 ―endangered‖ and 20 ―threatened‖ rivers in 32 states, the District of Colombia, and Canada.At the top of the list is the Clarks Fork of the Yellowstone River, whereCanadian mining firms plan to build a 74-acre英亩reservoir水库,蓄水池as part of a gold mine less than three miles from Yellowstone National Park. The reservoir would hold the runoff from the sulfuric acid 硫酸used to extract gold from crushed rock.―In the event this tailings pond failed, the impact to th e greater Yellowstone ecosystem would be cataclysmic大变动的,灾难性的and the damage irreversible不可逆转的.‖ Sen. Max Baucus of Montana, chairman of the Environment and Public Works Committee, wrote to Noranda Minerals Inc., an owner of the ― New World Mine‖.Last fall, an EPA official expressed concern about the mine and its potential impact, especially the plastic-lined storage reservoir. ― I am unaware of any studies evaluating how a tailings pond尾矿池,残渣池could be maintained to ensure its structural integrity forev er,‖ said Stephen Hoffman, chief of the EPA’s Mining Waste Section. ―It is my opinion that underwater disposal of tailings at New World may present a potentially significant threat to human health and the environment.‖The results of an environmental-impact statement, now being drafted by the Forest Service and Montana Department of State Lands, could determine the mine’s future…In its recent proposal to reauthorize the Clean Water Act, the Clinton administration noted ―dramatically improved water quality since 1972,‖ when the act was passed. But it also reported that 30 percent of riverscontinue to be degraded, mainly by silt泥沙and nutrients from farm and urban runoff, combined sewer overflows, and municipal sewage城市污水. Bottom sediments沉积物are contaminated污染in more than 1,000 waterways, the administration reported in releasing its proposal in January. Between 60 and 80 percent of riparian corridors (riverbank lands) have been degraded.As with endangered species and their habitats in forests and deserts, the complexity of ecosystems is seen in rivers and the effects of development----beyond the obvious threats of industrial pollution, municipal waste, and in-stream diversions改道to slake消除the thirst of new communities in dry regions like the Southwes t…While there are many political hurdles障碍ahead, reauthorization of the Clean Water Act this year holds promise for US rivers. Rep. Norm Mineta of California, who chairs the House Committee overseeing the bill, calls it ―probably the most important env ironmental legislation this Congress will enact.‖ (553 words)6.According to the passage, the Clean Water Act______.A.has been ineffectiveB.will definitely be renewedC.has never been evaluatedD.was enacted some 30 years ago7.“Endangered” rivers are _________.A.catalogued annuallyB.less polluted than ―threatened rivers‖C.caused by floodingD.adjacent to large cities8.The “cataclysmic” event referred to in paragraph eight would be__________.A. fortuitous偶然的,意外的B. adventitious外加的,偶然的C. catastrophicD. precarious不稳定的,危险的9. The owners of the New World Mine appear to be______.A. ecologically aware of the impact of miningB. determined to construct a safe tailings pondC. indifferent to the concerns voiced by the EPAD. willing to relocate operations10. The passage conveys the impression that_______.A. Canadians are disinterested in natural resourcesB. private and public environmental groups aboundC. river banks are erodingD. the majority of US rivers are in poor conditionText CA classic series of experiments to determine the effects ofoverpopulation on communities of rats was reported in February of 1962 in an article in Scientific American. The experiments were conducted by a psychologist, John B. Calhoun and his associates. In each of these experiments, an equal number of male and female adult rats were placed in an enclosure and given an adequate supply of food, water, and other necessities. The rat populations were allowed to increase. Calhoun knew from experience approximately how many rats could live in the enclosures without experiencing stress due to overcrowding. He allowed the population to increase to approximately twice this number. Then he stabilized the population by removing offspring that were not dependent on their mothers. He and his associates then carefully observed and recorded behavior in these overpopulated communities. At the end of their experiments, Calhoun and his associates were able to conclude that overcrowding causes a breakdown in the normal social relationships among rats, a kind of social disease. The rats in the experiments did not follow the same patterns of behavior as rats would in a community without overcrowding.The females in the rat population were the most seriously affected by the high population density: They showed deviant异常的maternal behavior; they did not behave as mother rats normally do. In fact, many of the pups幼兽,幼崽, as rat babies are called, died as a result of poor maternal care. For example, mothers sometimes abandoned their pups,and, without their mothers' care, the pups died. Under normal conditions, a mother rat would not leave her pups alone to die. However, the experiments verified that in overpopulated communities, mother rats do not behave normally. Their behavior may be considered pathologically 病理上,病理学地diseased.The dominant males in the rat population were the least affected by overpopulation. Each of these strong males claimed an area of the enclosure as his own. Therefore, these individuals did not experience the overcrowding in the same way as the other rats did. The fact that the dominant males had adequate space in which to live may explain why they were not as seriously affected by overpopulation as the other rats. However, dominant males did behave pathologically at times. Their antisocial behavior consisted of attacks on weaker male,female, and immature rats. This deviant behavior showed that even though the dominant males had enough living space, they too were affected by the general overcrowding in the enclosure.Non-dominant males in the experimental rat communities also exhibited deviant social behavior. Some withdrew completely; they moved very little and ate and drank at times when the other rats were sleeping in order to avoid contact with them. Other non-dominant males were hyperactive; they were much more active than is normal, chasing other rats and fighting each other. This segment of the rat population, likeall the other parts, was affected by the overpopulation.The behavior of the non-dominant males and of the other components of the rat population has parallels in human behavior. People in densely populated areas exhibit deviant behavior similar to that of the rats in Calhoun's experiments. In large urban areas such as New York City, London, Mexican City, and Cairo, there are abandoned children. There are cruel, powerful individuals, both men and women. There are also people who withdraw and people who become hyperactive. The quantity of other forms of social pathology such as murder, rape, and robbery also frequently occur in densely populated human communities. Is the principal cause of these disorders overpopulation? Calhoun’s experiments suggest that it might be. In any case, social scientists and city planners have been influenced by the results of this series of experiments.11. Paragraph l is organized according to__________.A. reasonsB. descriptionC. examplesD. definition12.Calhoun stabilized the rat population_________.A. when it was double the number that could live in the enclosure without stressB. by removing young ratsC. at a constant number of adult rats in the enclosureD. all of the above are correct13.W hich of the following inferences CANNOT be made from theinformation inPara. 1?A. Calhoun's experiment is still considered important today.B. Overpopulation causes pathological behavior in rat populations.C. Stress does not occur in rat communities unless there is overcrowding.D. Calhoun had experimented with rats before.14. Which of the following behavior didn‟t happen in this experiment?A. All the male rats exhibited pathological behavior.B. Mother rats abandoned their pups.C. Female rats showed deviant maternal behavior.D. Mother rats left their rat babies alone.15. The main idea of the paragraph three is that __________.A. dominant males had adequate living spaceB. dominant males were not as seriously affected by overcrowding as the otherratsC. dominant males attacked weaker ratsD. the strongest males are always able to adapt to bad conditionsText DThe first mention of slavery in the statutes法令,法规of the English colonies of North America does not occur until after 1660—some forty years after the importation of the first Black people. Lest we think that existed in fact before it did in law, Oscar and Mary Handlin assure us, that the status of B lack people down to the 1660’s was that of servants. A critique批判of the Handlins’ interpretation of why legal slavery did not appear until the 1660’s suggests that assumptions about the relation between slavery and racial prejudice should be reexamined, and that explanation for the different treatment of Black slaves in North and South America should be expanded.The Handlins explain the appearance of legal slavery by arguing that, during the 1660’s, the position of white servants was improving relative to that of black servants. Thus, the Handlins contend, Black and White servants, heretofore treated alike, each attained a different status. There are, however, important objections to this argument. First, the Handlins cannot adequately demonstrate that t he White servant’s position was improving, during and after the 1660’s; several acts of the Maryland and Virginia legislatures indicate otherwise. Another flaw in the Handlins’ interpretation is their assumption that prior to the establishment of legal slavery there was no discrimination against Black people. It is true that before the 1660’s Black people were rarely called slaves. But this shouldnot overshadow evidence from the 1630’s on that points to racial discrimination without using the term slavery. Such discrimination sometimes stopped short of lifetime servitude or inherited status—the two attributes of true slavery—yet in other cases it included both. The Handlins’ argument excludes the real possibility that Black people in the English colonies were never treated as the equals of White people.The possibility has important ramifications后果,影响.If from the outset Black people were discriminated against, then legal slavery should be viewed as a reflection and an extension of racial prejudice rather than, as many historians including the Handlins have argued, the cause of prejudice. In addition, the existence of discrimination before the advent of legal slavery offers a further explanation for the harsher treatment of Black slaves in North than in South America. Freyre and Tannenbaum have rightly argued that the lack of certain traditions in North America—such as a Roman conception of slavery and a Roman Catholic emphasis on equality— explains why the treatment of Black slaves was more severe there than in the Spanish and Portuguese colonies of South America. But this cannot be the whole explanation since it is merely negative, based only on a lack of something. A more compelling令人信服的explanation is that the early and sometimes extreme racial discrimination in the English colonies helped determine the particular nature of the slavery that followed. (462 words)16. Which of the following is the most logical inference to be drawn from the passage about the effects of “several acts of the Maryland and Virginia legislatures” (Para.2) passed during and after the 1660‟s?A. The acts negatively affected the pre-1660’s position of Black as wellas of White servants.B. The acts had the effect of impairing rather than improving theposition of White servants relative to what it had been before the 1660’s.C. The acts had a different effect on the position of white servants thandid many of the acts passed during this time by the legislatures of other colonies.D. The acts, at the very least, caused the position of White servants toremain no better than it had been before the 1660’s.17. With which of the following statements regarding the status ofBlack people in the English colonies of North America before the 1660‟s would the author be LEAST likely to agree?A. Although black people were not legally considered to be slaves,they were often called slaves.B. Although subject to some discrimination, black people had a higherlegal status than they did after the 1660’s.C. Although sometimes subject to lifetime servitude, black peoplewere not legally considered to be slaves.D. Although often not treated the same as White people, black people,like many white people, possessed the legal status of servants.18. According to the passage, the Handlins have argued which of thefollowing about the relationship between racial prejudice and the institution of legal slavery in the English colonies of North America?A. Racial prejudice and the institution of slavery arose simultaneously.B. Racial prejudice most often the form of the imposition of inheritedstatus, one of the attributes of slavery.C. The source of racial prejudice was the institution of slavery.D. Because of the influence of the Roman Catholic Church, racialprejudice sometimes did not result in slavery.19. The passage suggests that the existence of a Roman conception ofslavery in Spanish and Portuguese colonies had the effect of _________.A. extending rather than causing racial prejudice in these coloniesB. hastening the legalization of slavery in these colonies.C. mitigating some of the conditions of slavery for black people in these coloniesD. delaying the introduction of slavery into the English colonies20. The author considers the explanation put forward by Freyre andTannenbaum for the treatment accorded B lack slaves in the English colonies of North America to be _____________.A. ambitious but misguidedB. valid有根据的but limitedC. popular but suspectD. anachronistic过时的,时代错误的and controversialUNIT 2Text AThe sea lay like an unbroken mirror all around the pine-girt, lonely shores of Orr’s Island. Tall, kingly spruce s wore their regal王室的crowns of cones high in air, sparkling with diamonds of clear exuded gum流出的树胶; vast old hemlocks铁杉of primeval原始的growth stood darkling in their forest shadows, their branches hung with long hoary moss久远的青苔;while feathery larches羽毛般的落叶松,turned to brilliant gold by autumn frosts, lighted up the darker shadows of the evergreens. It was one of those hazy朦胧的, calm, dissolving days of Indian summer, when everything is so quiet that the fainest kiss of the wave on the beach can be heard, and white clouds seem to faint into the blue of the sky, and soft swathing一长条bands of violet vapor make all earth look dreamy, and give to the sharp, clear-cut outlines of the northern landscape all those mysteries of light and shade which impart such tenderness to Italian scenery.The funeral was over,--- the tread鞋底的花纹/ 踏of many feet, bearing the heavy burden of two broken lives, had been to the lonely graveyard, and had come back again,--- each footstep lighter and more unconstrained不受拘束的as each one went his way from the great old tragedy of Death to the common cheerful of Life.The solemn black clock stood swaying with its eternal ―tick-tock, tick-tock,‖ in the kitchen of the brown house on Orr’s Island. There was there that sense of a stillness that can be felt,---such as settles down on a dwelling住处when any of its inmates have passed through its doors for the last time, to go whence they shall not return. The best room was shut up and darkened, with only so much light as could fall through a little heart-shaped hole in the window-shutter,---for except on solemn visits, or prayer-meetings or weddings, or funerals, that room formed no part of the daily family scenery.The kitchen was clean and ample, hearth灶台, and oven on one side, and rows of old-fashioned splint-bottomed chairs against the wall. A table scoured to snowy whiteness, and a little work-stand whereon lay the Bible, the Missionary Herald, and the Weekly Christian Mirror, before named, formed the principal furniture. One feature, however, must not be forgotten, ---a great sea-chest水手用的储物箱,which had been the companion of Zephaniah through all the countries of the earth. Old, and battered破旧的,磨损的, and unsightly难看的it looked, yet report said that there was good store within which men for the most part respect more than anything else; and, indeed it proved often when a deed of grace was to be done--- when a woman was suddenly made a widow in a coast gale大风,狂风, or a fishing-smack小渔船was run down in the fogs off the banks, leaving in some neighboring cottage a family of orphans,---in all such cases, the opening of this sea-chest was an event of good omen 预兆to the bereaved丧亲者;for Zephaniah had a large heart and a large hand, and was apt有…的倾向to take it out full of silver dollars when once it went in. So the ark of the covenant约柜could not have been looked on with more reverence崇敬than the neighbours usually showed to Captain Pennel’s sea-chest.1. The author describes Orr‟s Island in a(n)______way.A.emotionally appealing, imaginativeB.rational, logically preciseC.factually detailed, objectiveD.vague, uncertain2.According to the passage, the “best room”_____.A.has its many windows boarded upB.has had the furniture removedC.is used only on formal and ceremonious occasionsD.is the busiest room in the house3.From the description of the kitchen we can infer that thehouse belongs to people who_____.A.never have guestsB.like modern appliancesC.are probably religiousD.dislike housework4.The passage implies that_______.A.few people attended the funeralB.fishing is a secure vocationC.the island is densely populatedD.the house belonged to the deceased5.From the description of Zephaniah we can see thathe_________.A.was physically a very big manB.preferred the lonely life of a sailorC.always stayed at homeD.was frugal and saved a lotText BBasic to any understanding of Canada in the 20 years after the Second World War is the country' s impressive population growth. For every three Canadians in 1945, there were over five in 1966. In September 1966 Canada's population passed the 20 million mark. Most of this surging growth came from natural increase. The depression of the 1930s and the war had held back marriages, and the catching-up process began after 1945. The baby boom continued through the decade of the 1950s, producing a population increase of nearly fifteen percent in the five years from 1951 to 1956. This rate of increase had been exceeded only once before in Canada's history, in the decade before 1911 when the prairies were being settled. Undoubtedly, the good economic conditions of the 1950s supported a growth in the population, but the expansion also derived from a trend toward earlier marriages and an increase in the average size of families; In 1957 the Canadian birth rate stood at 28 per thousand, one of the highest in the world. After the peak year of 1957, thebirth rate in Canada began to decline. It continued falling until in 1966 it stood at the lowest level in 25 years. Partly this decline reflected the low level of births during the depression and the war, but it was also caused by changes in Canadian society. Young people were staying at school longer, more women were working; young married couples were buying automobiles or houses before starting families; rising living standards were cutting down the size of families. It appeared that Canada was once more falling in step with the trend toward smaller families that had occurred all through theWestern world since the time of the Industrial Revolution. Although the growth in Canada’s population had slowed down by 1966 (the cent), another increase in the first half of the 1960s was only nine percent), another large population wave was coming over the horizon. It would be composed of the children of the children who were born during the period of the high birth rate prior to 1957.6. What does the passage mainly discuss?A. Educational changes in Canadian society.B. Canada during the Second World War.C. Population trends in postwar Canada.D. Standards of living in Canada.7. According to the passage, when did Canada's baby boom begin?A. In the decade after 1911.B. After 1945.C. During the depression of the 1930s.D. In 1966.8. The author suggests that in Canada during the 1950s____________.A. the urban population decreased rapidlyB. fewer people marriedC. economic conditions were poorD. the birth rate was very high9. When was the birth rate in Canada at its lowest postwar level?A. 1966.B. 1957.C. 1956.D. 1951.10. The author mentions all of the following as causes of declines inpopulation growth after 1957 EXCEPT_________________.A. people being better educatedB. people getting married earlierC. better standards of livingD. couples buying houses11.I t can be inferred from the passage that before the IndustrialRevolution_______________.A. families were largerB. population statistics were unreliableC. the population grew steadilyD. economic conditions were badText CI was just a boy when my father brought me to Harlem for the first time, almost 50 years ago. We stayed at the hotel Theresa, a grand brick structure at 125th Street and Seventh avenue. Once, in the hotel restaurant, my father pointed out Joe Louis. He even got Mr. Brown, the hotel manager, to introduce me to him, a bit punchy强力的but still champ焦急as fast as I was concerned.Much has changed since then. Business and real estate are booming. Some say a new renaissance is under way. Others decry责难what they see as outside forces running roughshod肆意践踏over the old Harlem. New York meant Harlem to me, and as a young man I visited it whenever I could. But many of my old haunts are gone. The Theresa shut down in 1966. National chains that once ignored Harlem now anticipate yuppie money and want pieces of this prime Manhattan real estate. So here I am on a hot August afternoon, sitting in a Starbucks that two years ago opened a block away from the Theresa, snatching抓取,攫取at memories between sips of high-priced coffee. I am about to open up a piece of the old Harlem---the New York Amsterdam News---when a tourist。

外文翻译----数字图像处理和模式识别技术关于检测癌症的应用

外文翻译----数字图像处理和模式识别技术关于检测癌症的应用

引言英文文献原文Digital image processing and pattern recognition techniques for the detection of cancerCancer is the second leading cause of death for both men and women in the world , and is expected to become the leading cause of death in the next few decades . In recent years , cancer detection has become a significant area of research activities in the image processing and pattern recognition community .Medical imaging technologies have already made a great impact on our capabilities of detecting cancer early and diagnosing the disease more accurately . In order to further improve the efficiency and veracity of diagnoses and treatment , image processing and pattern recognition techniques have been widely applied to analysis and recognition of cancer , evaluation of the effectiveness of treatment , and prediction of the development of cancer . The aim of this special issue is to bring together researchers working on image processing and pattern recognition techniques for the detection and assessment of cancer , and to promote research in image processing and pattern recognition for oncology . A number of papers were submitted to this special issue and each was peer-reviewed by at least three experts in the field . From these submitted papers , 17were finally selected for inclusion in this special issue . These selected papers cover a broad range of topics that are representative of the state-of-the-art in computer-aided detection or diagnosis(CAD)of cancer . They cover several imaging modalities(such as CT , MRI , and mammography) and different types of cancer (including breast cancer , skin cancer , etc.) , which we summarize below .Skin cancer is the most prevalent among all types of cancers . Three papers in this special issue deal with skin cancer . Y uan et al. propose a skin lesion segmentation method. The method is based on region fusion and narrow-band energy graph partitioning . The method can deal with challenging situations with skin lesions , such as topological changes , weak or false edges , and asymmetry . T ang proposes a snake-based approach using multi-direction gradient vector flow (GVF) for the segmentation of skin cancer images . A new anisotropic diffusion filter is developed as a preprocessing step . After the noise is removed , the image is segmented using a GVF1snake . The proposed method is robust to noise and can correctly trace the boundary of the skin cancer even if there are other objects near the skin cancer region . Serrano et al. present a method based on Markov random fields (MRF) to detect different patterns in dermoscopic images . Different from previous approaches on automatic dermatological image classification with the ABCD rule (Asymmetry , Border irregularity , Color variegation , and Diameter greater than 6mm or growing) , this paper follows a new trend to look for specific patterns in lesions which could lead physicians to a clinical assessment.Breast cancer is the most frequently diagnosed cancer other than skin cancer and a leading cause of cancer deaths in women in developed countries . In recent years , CAD schemes have been developed as a potentially efficacious solution to improving radiologists’diagnostic accuracy in breast cancer screening and diagnosis . The predominant approach of CAD in breast cancer and medical imaging in general is to use automated image analysis to serve as a “second reader”, with the aim of improving radiologists’diagnostic performance . Thanks to intense research and development efforts , CAD schemes have now been introduces in screening mammography , and clinical studies have shown that such schemes can result in higher sensitivity at the cost of a small increase in recall rate . In this issue , we have three papers in the area of CAD for breast cancer . Wei et al. propose an image-retrieval based approach to CAD , in which retrieved images similar to that being evaluated (called the query image) are used to support a CAD classifier , yielding an improved measure of malignancy . This involves searching a large database for the images that are most similar to the query image , based on features that are automatically extracted from the images . Dominguez et al. investigate the use of image features characterizing the boundary contours of mass lesions in mammograms for classification of benign vs. Malignant masses . They study and evaluate the impact of these features on diagnostic accuracy with several different classifier designs when the lesion contours are extracted using two different automatic segmentation techniques . Schaefer et al. study the use of thermal imaging for breast cancer detection . In their scheme , statistical features are extracted from thermograms to quantify bilateral differences between left and right breast regions , which are used subsequently as input to a fuzzy-rule-based classification system for diagnosis.Colon cancer is the third most common cancer in men and women , and also the third mostcommon cause of cancer-related death in the USA . Y ao et al. propose a novel technique to detect colonic polyps using CT Colonography . They use ideas from geographic information systems to employ topographical height maps , which mimic the procedure used by radiologists for the detection of polyps . The technique can also be used to measure consistently the size of polyps . Hafner et al. present a technique to classify and assess colonic polyps , which are precursors of colorectal cancer . The classification is performed based on the pit-pattern in zoom-endoscopy images . They propose a novel color waveler cross co-occurence matrix which employs the wavelet transform to extract texture features from color channels.Lung cancer occurs most commonly between the ages of 45 and 70 years , and has one of the worse survival rates of all the types of cancer . Two papers are included in this special issue on lung cancer research . Pattichis et al. evaluate new mathematical models that are based on statistics , logic functions , and several statistical classifiers to analyze reader performance in grading chest radiographs for pneumoconiosis . The technique can be potentially applied to the detection of nodules related to early stages of lung cancer . El-Baz et al. focus on the early diagnosis of pulmonary nodules that may lead to lung cancer . Their methods monitor the development of lung nodules in successive low-dose chest CT scans . They propose a new two-step registration method to align globally and locally two detected nodules . Experments on a relatively large data set demonstrate that the proposed registration method contributes to precise identification and diagnosis of nodule development .It is estimated that almost a quarter of a million people in the USA are living with kidney cancer and that the number increases by 51000 every year . Linguraru et al. propose a computer-assisted radiology tool to assess renal tumors in contrast-enhanced CT for the management of tumor diagnosis and response to treatment . The tool accurately segments , measures , and characterizes renal tumors, and has been adopted in clinical practice . V alidation against manual tools shows high correlation .Neuroblastoma is a cancer of the sympathetic nervous system and one of the most malignant diseases affecting children . Two papers in this field are included in this special issue . Sertel et al. present techniques for classification of the degree of Schwannian stromal development as either stroma-rich or stroma-poor , which is a critical decision factor affecting theprognosis . The classification is based on texture features extracted using co-occurrence statistics and local binary patterns . Their work is useful in helping pathologists in the decision-making process . Kong et al. propose image processing and pattern recognition techniques to classify the grade of neuroblastic differentiation on whole-slide histology images . The presented technique is promising to facilitate grading of whole-slide images of neuroblastoma biopsies with high throughput .This special issue also includes papers which are not derectly focused on the detection or diagnosis of a specific type of cancer but deal with the development of techniques applicable to cancer detection . T a et al. propose a framework of graph-based tools for the segmentation of microscopic cellular images . Based on the framework , automatic or interactive segmentation schemes are developed for color cytological and histological images . T osun et al. propose an object-oriented segmentation algorithm for biopsy images for the detection of cancer . The proposed algorithm uses a homogeneity measure based on the distribution of the objects to characterize tissue components . Colon biopsy images were used to verify the effectiveness of the method ; the segmentation accuracy was improved as compared to its pixel-based counterpart . Narasimha et al. present a machine-learning tool for automatic texton-based joint classification and segmentation of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope . The proposed approach has minimal user intervention and can achieve high classification accuracy . El Naqa et al. investigate intensity-volume histogram metrics as well as shape and texture features extracted from PET images to predict a patient’s response to treatment . Preliminary results suggest that the proposed approach could potentially provide better tools and discriminant power for functional imaging in clinical prognosis.We hope that the collection of the selected papers in this special issue will serve as a basis for inspiring further rigorous research in CAD of various types of cancer . We invite you to explore this special issue and benefit from these papers .On behalf of the Editorial Committee , we take this opportunity to gratefully acknowledge the autors and the reviewers for their diligence in abilding by the editorial timeline . Our thanks also go to the Editors-in-Chief of Pattern Recognition , Dr. Robert S. Ledley and Dr.C.Y. Suen , for their encouragement and support for this special issue .英文文献译文数字图像处理和模式识别技术关于检测癌症的应用世界上癌症是对于人类(不论男人还是女人)生命的第二杀手。

GroupNetwork

GroupNetwork

GroupNetworkNovel Techniques for Fraud Detection inMobile Telecommunication NetworksYves Moreau, Bart Preneel, Dept. Electrical Engineering-ESAT K.U.Leuven{yves.moreau,bart.preneel}@esat.kuleuven.ac.bePeter Burge, John Shawe-Taylor, Dept. Computer Science, Royal Holloway{peteb,jst}@/doc/79c8b920aaea998fcc220e13.htmlChristof Stoermann, Siemens AG******************************.deChris Cooke, Vodafonechris.cooke@/doc/79c8b920aaea998fcc220e13.htmlGroup: NetworkAbstract: This paper discusses the status of research on detection of fraud undertaken as part of the European Commission-funded ACTS ASPeCT (Advanced Security for Personal Communications Technologies) project. A first task has been the identification of possible fraud scenarios and of typical fraud indicators which can be mapped to data in toll tickets. Currently, the project is exploring the detection of fraudulent behaviour based on a combination of absolute and differential usage. Three approaches are being investigated: a rule-based approach, an approach based on neural network, where both supervised and unsupervised learning are considered. Special attention is being paid to the feasibility of the implementations.Introduction:It is estimated that the mobile communication industry loses several million ECUs per year due to fraud. Therefore, prevention and early detection of fraudulent activity is an important goal for network operators. It is clear that the additional security measures taken in GSM and in the future UMTS (Universal Mobile Telecommunications System) make these networks less vulnerable to fraud than the analogue networks. Nevertheless, certain types of commercial fraud are very hard to preclude by technical means. It is also anticipated that the introduction of new services can lead to the development of new ways to defraud the system. The use of sophisticated fraud detection techniques can assist in early detection of commercial frauds, and will also reduce the effectivity of technical frauds.One of the tasks of the European Commission-funded ACTS project ASPeCT (Advanced Security for Personal Communications Technologies) [1] is the development of new techniques and concepts for the detection of fraud in mobile telecommunication networks. This paper intends to report on the progress made during the first year. For a more detailed status report, the reader is referred to [2].The remainder of this paper is organised as follows: section 1 discusses the identification of possible fraud scenarios and of fraud indicators; section 2 discusses the general approach of user profiling; section 3 and 4 present respectively the rule based approach and the neural net based approach to fraud detection.Section 1: Possible frauds and their indicatorsA first step of the work consists of the identification of possible fraud scenarios in tele-communication networks and particularly in mobile networks. These scenarios have been classified by the technical manner in which they are committed; also an investigation has been undertaken to identify which parts of the mobile telecommunication network are abused in order to commit any particular fraud. Other characteristics that have been studied are whether frauds are technical fraud operated for financial gain, or they are fraud related to personal use - hence not employed for profiteering. A further classification is achieved by considering whether the network abuse is the result of administrative fraud, procurement fraud, or application fraud.Subsequently, typical indicators have been identified which may be used for the purposes of detecting fraud committed using mobile telephones. In order to provide an indication of the likely ability of particular indicators to identify a specific fraud, these indicators have been classified both by their type and by their use.The different types are :-usage indicators, related to the way in which a mobile telephone is used;mobility indicators, related to the mobility of the telephone;deductive indicators, which arise as a by-product of fraudulent behaviour (e.g., overlapping calls and velocity checks). Indicators have also been classified by use:-primary indicators can, in principle, be employed in isolation to detect fraud;secondary indicators provide useful information in isolation (but are not sufficient by themselves) tertiary indicators provide supporting information when combined with other indicators.A selection has been made of those scenarios which cannot be easily detected using existing tools, but which could be identified using more sophisticated approaches.The potential fraud indicators have been mapped to network data required to measure them. The information required to monitor the use of the communications network is contained in the toll tickets. Toll Tickets are data records containing details pertaining to every mobile phone call attempt that is made. Toll Tickets are transmitted to the network operator by the cells or switches that the mobile phone was communicating with at the time due to proximity. They are used to determine the charge to the subscriber, but they also provide information about customer usage and thus facilitate the detection of any possible fraudulent use. It has been investigated which fields in the GSM toll tickets can be used as indicators for fraudulent behaviour.Before use in the fraud detection engine, the toll tickets are being preprocessed. An essential component of this process is the encryption of all personal information in the toll tickets (such as telephone numbers). This allows for the protection of the privacy of users during the development of the fraud detection tools, while at the same time the network operators will be able to obtain the identity of fraudulent users.Section 2: User profilingAbsolute or differential analysisExisting fraud detection systems tend to interrogate sequences of Toll Tickets comparing a function of the various fields with fixed criteria known as triggers. A trigger, if activated, raises an alert status which cumulatively would lead to an investigation by the network operator. Such fixed trigger systems perform what is known as an absolute analysis of the Toll Tickets and are good at detecting the extremes of fraudulent activity.Another approach to the problem is to perform a differential analysis. Here we monitor behavioural patterns of the mobile phone comparing its most recent activities with a history of its usage. Criteria can then be derived to use as triggers that are activated when usage patterns of the mobile phone change significantly over a short period of time. A change in the behaviour pattern of a mobile phone is a common characteristic in nearly all fraud scenarios excluding those committed on subscription where there is no behavioural pattern established.There are many advantages to performing a differential analysis through profiling the behaviour of a user. Firstly, certain behavioural patterns may be considered anomalous for one type of user, and hence potentially indicative of fraud, that are considered acceptable for another. With a differential analysis flexible criteria can be developed that detect any change in usage based on a detailed history profile of user behaviour. This takes fraud detection down to the personal level comparing like with like enabling detection of less obvious frauds that may only be noticed at the personal usage level. An absolute usage system would not detect fraud at this level. In addition, however, because a typical user is not a fraudster, the majority of criteria that would have triggered an alarm in an absolute usage system will be seen as a large change in behaviour in a differential usage system. In this way a differential analysis can be seen as incorporating the absolute approach.The differential approachMost fraud indicators do not become apparent from an individual Toll Ticket. With the possible exception of a velocity trap, we can only gain confidence in detecting a real fraud through investigating a fairly long sequence of Toll Tickets. This is particularly the case when considering more subtle changes in a user’s behaviour by performing a differential analysis.A differential usage system requires information concerning the users history of behaviour plus a more recent sample of the mobile phones activities. An initial approach might be to extract and encode information from Toll Tickets and to store it in record format. This would require two windows or spans over the sequence of transactions for each user. The shorter sequence might be called the Current User Profile (CUP) and the longer sequence, the User Profile History (UPH). Both profiles could be treated and maintained as finite length queues. When a new Toll Ticket arrives for a given user, the oldestentry from the UPH would be discarded and the oldest entry from the CUP would move to the back of the UPH queue. The new record encoded from the incoming Toll Ticket would then join the back of the CUP queue.Clearly it is not optimal to search and retrieve historical information concerning a user’s activities prior to each calculation, on receipt of a new Toll Ticket. A more suitable approach is to compute a single cumulative CUP and UPH, for each user, from incoming Toll Tickets which can be stored as individual records, possibly in a database. So that we maintain the concept of having two different spans over the Toll Tickets without retaining a database record for each Toll Ticket, we will need to decay both profiles before the influence of a new Toll Ticket can be taken into consideration. A straight forward decay factor may not be suitable as this will potentially dilute information relating to encoded parameters stored in the user’s profile. An important concern here is the potential creation of false behaviour patterns. Several decaying systems are currently being investigated.Relevant toll ticket dataThere are two important requirements for user profiling. At first, efficiency is of the foremost concern for storing the user data and for performing updates. Secondly, user profiles have to realise a precise description of user behaviour to facilitate reliable fraud detection. All the information that a fraud detection tool will need to handle is derived from the toll tickets provided by the network operator.The following toll ticket components have been viewed to be the most fraud relevant measures:Charged_IMSI1(identifies the user)First_Cell_Id(location characteristic for mobile originating calls)Chargeable_Duration(base for all cost estimations)B_Type_of_Number(for distinguishing between national / international calls)Non_Charged_Party (the number dialed)These components will continually be picked out of the toll tickets and incorporated into the user profiles in a cumulative manner.It is also anticipated that the analysis of cell congestion can provide useful ancillary information. Section 3: Rule-based approach to fraud detectionIn ASPeCT, several approaches are taken to identify fraudulent behaviour. In the rule-based approach, both the absolute and differential usage are verified against certain rules. This approach works best with user profiles containing explicit information, where fraud criteria given as rules can be referred to. User profiles are maintained for the directory number of the calling party (A-number), for the directory number of the called party (B-number) and also for the cells used to make/receive the calls. A-number profiles represent user behaviour and are useful for the detection of most types of fraud, while B-number profiles point to hot destinations and thus allow the detection of frauds based upon call forwarding. All deviations from normal user behaviour resulting from the different analysing processes are collected and alarms will finally be raised if the results in combination fulfil given alarm criteria. The implementation of this solution is based on an existing rule-based tool for audit trail analysis PDAT (Protocol Data Analysis Tool) [3]. PDAT is a rule based tool for intrusion detection developed by Siemens ZFE (Corporate Research and Development). PDAT works in heterogeneous environments, has the possibility of on-line analysis, and provides a performance of about 200 KB input per second. Important goals were flexibility and broad applicability, including the analysis of general protocol data, which is achieved by the special language PDAL (Protocol Data Analysis Language). PDAL allows the programming of analysis criteria as well as a GUI-aided configuration of the analysis at runtime.Intrusion detection and mobile fraud detection are quite similar problem fields and the flexibility and broad applicability of PDAT are promising for using this tool for mobile fraud detection too. The main difference between intrusion detection and mobile fraud detection seems to be the kind of input data. The recording for intrusion detection produces 50 MB per day and user, but only for the few users of one UNIX-system. In comparison, fraud detection has to deal with a huge amount of mobile phone subscribers (roughly 1 Million), each of whom, however, produces only about 300 bytes of data per day. PDAT was able to keep all interim results in main memory, since only a few users had to be dealt with. For fraud detection, however, intermediate data has of course to be stored on hard disc. Because 1 International Mobile Subscriber Identityof these new requirements it was necessary to develop some completely new concepts such as user profiling and fast swapping for the updating of user profiles. Also, the internal architecture had to be changed to a great extent. The new architecture is depicted in figure 1.Figure 1 Architecture of rule-based fraud detection toolSection 4: Neural network based approach to fraud detectionA second approach to identify fraudulent behaviour uses neural networks. The multiplicity and heterogeneity of the fraud scenarios require the use of intelligent detection systems. The fraud detection engine has to be flexible enough to cope with the diversity of fraud. It should also be adaptive in order to face new fraud scenarios, since fraudsters are likely to develop new forms of fraud once older attacks become impractical. Further, fraud appears in the billing system as abnormal usage patterns in the toll ticket records of one or more users. The function of the fraud detection engine is to recognise such patterns and produce the necessary alarms. High flexibility and adaptivity for a pattern recognition problem directly point to neural networks as a potential solution. Neural networks are systems of elementary decision units that can be adapted by training in order to recognise and classify arbitrary patterns. The interaction of a high number of elementary units makes it possible to learn arbitrarily complex tasks. For fraud detection in telephone networks, neural network engines are currently being developed worldwide. As a closely related application, neural networks are now routinely used for the detection of credit card fraud.There are two main forms of learning in neural networks: unsupervised learning and supervised learning. In unsupervised learning, the network groups similar training patterns in clusters. It is then up to the user to recognise what class or behaviour has to be associated to each cluster. When patterns are presented to the network after training, they are associated to the cluster they are closest to, and are recognised as belonging to the class corresponding to that cluster. In supervised learning, the patterns have to be a priori labelled as belonging to some class. During learning, the network tries to adapt its units so that it produces the correct label at its output for each training pattern. Once training is finished the units are frozen, and when a new pattern is presented, it is classified according to the output produced by the network.Unsupervised learning presents some difficulties. The problem is that patterns have to be presented -that is, encoded - in such a way that the data from fraudulent usage will form groups that are distinct enough from regular data. On the other hand, these systems can be trained using clean data only. With supervised learning, the difficulty is that one must obtain a significant amount of fraudulent data, and label it as such. This represents a significant effort. Further, it is not clear how such systems will handle new fraud strategies. Therefore, none of the approaches appears to be a priori superior to the other, and both directions are being investigated.References[1]ACTS AC095, project ASPeCT, Initial report on security requirements,AC095/ATEA/W21/DS/P/02/B, February 1996 .[2]ACTS AC095, project ASPeCT, Definition of fraud detection concepts,AC095/KUL/W22/DS/P/06/A, September 1996.[3]PDAT: Author 1, Author 2, Author 3, etc: Reference Title. Source, City, Date..。

小学上册第一次英语第一单元测验试卷

小学上册第一次英语第一单元测验试卷

小学上册英语第一单元测验试卷英语试题一、综合题(本题有100小题,每小题1分,共100分.每小题不选、错误,均不给分)1.The ocean is a source of ______ for many organisms.2.In a chemical equation, the products are shown on the _____ side.3.They are having a ______ (party) this weekend.4.The cat is very _______ (可爱).5.What do we call a person who is trained to help sick people?A. NurseB. DoctorC. ParamedicD. All of the above6.My sister loves to watch ______ (鸟) in the garden.7.What is the name of the famous music festival held in California?A. CoachellaB. LollapaloozaC. BonnarooD. GlastonburyA Coachella8.The ______ is very cold. (water)9.The _______ of a balloon can change when it is heated.10.What do we call the science of numbers?A. ChemistryB. PhysicsC. MathematicsD. BiologyC11.My sister is good at __________ (跳舞).12.The ancient Greeks are famous for their _______ and philosophy.13.What is the name of the toy that can spin?A. Yo-yoB. KiteC. BallD. PuzzleA Yo-yo14.The chemical symbol for arsenic is _______.15. A reaction that absorbs heat is called an ______ reaction.16.can Revolution was influenced by Enlightenment ________ (理念). The Amer17.Which body part do you use to hear?A. EyesB. NoseC. EarsD. MouthC18.I like to ______ my friends' birthdays. (celebrate)19.My cat enjoys climbing up on ______ (家具).20.My cat has sharp ______ (爪子).21.The country of Italy is shaped like a __________.22.The ancient Greeks established the concept of ________ (民主).23.Which planet is known as the Red Planet?A. EarthB. MarsC. VenusD. JupiterB24.The __________ (绿色空间) is essential for health.25.We will have _____ (fun/work) at the fair.26.The ______ (植物的生长速度) can vary based on conditions.27.Have you ever seen a _____ (考拉) in Australia?28.What is a common animal found on a farm?A. TigerB. PigC. ElephantD. LionB29.How do you say "mountain" in Spanish?A. MontañaB. MontC. MontagneD. Moutain30.I see a _____ (car/bike) on the road.31.The country known for its fashion is ________ (以时尚闻名的国家是________).32.The Great Red Spot on Jupiter is a permanent ______.33.I wear _____ (shoes/socks) to school.34.The ______ helps with the detection of temperature changes.35.Which animal is known for its long neck?A. ElephantB. GiraffeC. ZebraD. KangarooB36.The chemical formula for ammonium sulfate is _______.37.The solid part of a solution after filtration is called ______.38. A _____ (生态平衡) is essential for a healthy environment.39.ts are __________ (有毒的) and should be handled carefully. Some pla40.I found a _______ (小蛇) in the grass.41.My brother loves to play __________. (篮球)42.What do we call a young kangaroo?A. CalfB. JoeyC. KidD. Pup43. A chemical change can result in a change in ______.44.The _____ (园艺技巧) can enhance your gardening skills.45.The process of making sugar involves _______.46.What is the capital of Norway?A. OsloB. BergenC. StavangerD. Trondheim47.In the garden, we have many ________ (树) and flowers that attract ________ (蜜蜂).48.ts can survive in very ______ (干燥) conditions. Some pla49.The main components of air are nitrogen and _____.50.My favorite season is ______ (春天) because of the flowers.51.What is the name of the famous American author known for "The Grapes of Wrath"?A. John SteinbeckB. Ernest HemingwayC. F. Scott FitzgeraldD. Mark Twain52.The _____ (植物繁荣) occurs in favorable conditions.53.What is the capital of Norway?A. CopenhagenB. StockholmC. OsloD. HelsinkiC54.Liquids have a definite __________ but no fixed shape.55.The chemical formula for glucose is ________.56. A ______ (温暖的天气) encourages plant growth.57.The party is at my ________.58. A frog's tongue is fast and ______ (黏).59.My uncle is a ______. He works with computers.60.I love to explore ________ (国家公园) during vacations.61.He is ___ his homework. (finishing)62.The ______ is important for clean air.63.The book is ________ and interesting.64.What is the name of the famous clock tower in London?A. Big BenB. Eiffel TowerC. Leaning Tower of PisaD. Burj Khalifa65.I like to __________ (动词) with my __________ (玩具名) after school.66.The ______ (老虎) has stripes and is very powerful.67. A thunderstorm can be very ______ (可怕).68.We are going to ___ dinner. (eat)69.The __________ were fierce warriors from Scandinavia. (维京人)70.The antelope is quick on its ____.71.What do we call a type of poetry that tells a story?A. LyricB. EpicC. OdeD. Sonnet72.I like to ______ my grandparents during holidays. (visit)73.Wildflowers grow without ______ (照顾).74.The bird builds a nest in the _______ (鸟在_______中筑巢).75.I like _______ (观察) the stars at night.76.What is the name of the famous ancient city in Peru?A. TenochtitlanB. Machu PicchuC. PetraD. Angkor WatB77.What do you call a group of stars?A. GalaxyB. UniverseC. ConstellationD. Solar SystemC78.The first female aviator to fly solo across the Atlantic was _______ Earhart.79.My teacher is _______ (友好的).80.The ____ is known for its incredible speed and agility.81.My dad enjoys fixing ____ (electronics).82. A telescope makes distant objects appear ______.83.The movie was ______ (really) entertaining.84.What is the longest river in the world?A. NileB. AmazonC. MississippiD. Yangtze85.What is the main ingredient in hummus?A. BeanB. ChickpeaC. LentilD. PeaB86.He is very _____ (善于沟通) in meetings.87. A chemical reaction that absorbs heat is called _______.88.What is the main purpose of the Hubble Space Telescope?A. To study the EarthB. To study the MoonC. To observe distant galaxiesD. To communicate with aliens89.The main use of sulfuric acid is in _____.90.My mom loves to _______ (动词) to relax. 她觉得这个很 _______ (形容词).91.The ancient Romans believed in many _______. (神灵)92.What do you call the action of preparing food for a meal?A. CookingB. BakingC. RoastingD. GrillingA93.What is the capital of the Dominican Republic?A. Santo DomingoB. SantiagoC. Puerto PlataD. La RomanaA94.My cousin is very __________ (活跃的) in school clubs.95.The __________ (历史的多样性) highlights richness.96.My brother loves to play ____.97.My favorite food is ________ (意大利面) with sauce.98.What is the weather like when it rains?A. DryB. WetC. SunnyD. ColdB99.The cake is _____ with candles. (decorated)100.The __________ can reveal patterns in sedimentation and erosion over time.。

小学下册O卷英语第二单元期末试卷

小学下册O卷英语第二单元期末试卷

小学下册英语第二单元期末试卷英语试题一、综合题(本题有100小题,每小题1分,共100分.每小题不选、错误,均不给分)1.What is the largest organ in the human body?A. HeartB. LiverC. SkinD. Brain2._____ (城市绿化) initiatives improve air quality.3. A primary standard is a reagent of known ______.4.Which of these is an insect?A. SpiderB. ButterflyC. WormD. Snail5.What is 8 3?A. 4B. 5C. 6D. 7B6.My favorite ________ is purple.7.ers open up in the morning and ______ at night. (有些花在早晨开放,晚上闭合。

) Some fru8.My favorite season is ______ (春天). Everything comes back to ______ (生命).9.I love to explore ________ (树林) near my house.10.Jupiter has a very strong ______ field.11.What is the name of the famous American singer known as the "King of Rock and Roll"?A. Frank SinatraB. Elvis PresleyC. Johnny CashD. Buddy HollyB12.The coyote is known for its ________________ (嚎叫).13.What time is it when the big hand is on 12 and the small hand is on 3?A. 12:00B. 3:00C. 6:00D. 9:0014.The ____ has a beautiful song and is often found in trees.15.I like to plant ________ in the garden.16.Which sport is played on a court with a net?A. SoccerB. TennisC. BaseballD. Golf17.The _______ (Palace of Versailles) was the royal residence of French kings.18.Which instrument has keys and is played by pressing them?A. GuitarB. DrumsC. PianoD. ViolinC19.I like to eat ________ (水果) every day.20. A ______ is a large animal that can graze on grass.21. A dolphin communicates with clicks and ________________ (哨声).22. A bee's role as a pollinator is essential for many plants and ________________ (作物).23.What do you call a vehicle that travels on tracks?A. CarB. TrainC. BusD. BicycleB24.How many chambers does the human heart have?A. TwoB. ThreeC. FourD. FiveC25.The _______ (海马) is unique among fish.26.The boy draws very ________.27.The invention of the printing press allowed for mass _____.28.What is a baby sheep called?A. CalfB. KittenC. LambD. PuppyC29. A ________ (园艺技巧) can enhance growth.30.The campfire is _______ (温暖的)。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

This article was downloaded by: [Boli Xiong]On: 17 August 2011, At: 08:11Publisher: Taylor & FrancisInforma Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UKRemote Sensing LettersPublication details, including instructions for authors andsubscription information:/loi/trsl20A change detection measure basedon a likelihood ratio and statisticalproperties of SAR intensity imagesBoli Xiong a b , Jing M. Chen b & Gangyao Kuang aa Department of Information Engineering, School of ElectronicScience and Engineering, National University of DefenseT echnology, Changsha, Hunan, PR Chinab Department of Geography and Programming in Planning,University of T oronto, T oronto, ON, CanadaAvailable online: 04 Aug 2011PLEASE SCROLL DOWN FOR ARTICLERemote Sensing LettersVol.3,No.3,May 2012,267–275A change detection measure based on a likelihood ratio and statisticalproperties of SAR intensity imagesBOLI XIONG*†‡,JING M.CHEN‡and GANGYAO KUANG††Department of Information Engineering,School of Electronic Science and Engineering,National University of Defense Technology,Changsha,Hunan,PR China ‡Department of Geography and Programming in Planning,University of Toronto,Toronto,ON,Canada(Received 8September 2010;in final form 10March 2011)With its weather-and illumination-independent characteristics,synthetic aperture radar (SAR)has become an important tool for change detection.There are two critical steps in SAR image change detection:designing a change detector and choosing a decision rule.Given a measure from a change detector,the change detection results could be sensitive to the decision rule,such as the selection of a threshold.This letter presents a change detection measure based on a likelihood ratio and the statistical distribution of SAR intensity images.The likelihood ratio is defined as the ratio between the joint probability density functions (PDFs)of a pair of SAR images.Under the condition that both PDFs follow the gamma distribu-tion,the histogram of this change detection measure deduced from the likelihood ratio has a single and steep peak that can be used to reliably and easily determine the change detection threshold.Analyses of SAR image pairs from different plat-forms show that the proposed change detection measure is simple and effective in detecting changes.1.IntroductionChange detection is a process of acquiring time-dependent information of objects of interest from the same scene observed at different times.Synthetic aperture radar (SAR)has been an important technique for change detection for various applica-tions such as environmental monitoring (Wang and Allen 2008,Makkeasorn et al.2009),agricultural surveys,urban studies,forest monitoring and disaster evaluation (Bovolo and Bruzzone 2007).Changes in a SAR scene are manifested mainly as differences of intensity or local texture among multi-temporal SAR images.Image dif-ferencing is one of the most widely used change detection algorithms for a variety of geographical environments (Singh 1989).Image ratioing is another change detection method which is also as simple and quick as image differencing.It is widely used in SAR image change detection because of multiplicative noise reduction when data are ratioed on a pixel-by-pixel basis.It is well known that,intrinsic to all active imaging systems,SAR images suffer from speckle noise due to coherence from distributed tar-gets (Franceschetti and Lanari 1999).The variety of the noise and imaging conditions*Corresponding author.Email:bolixiong@Remote Sensing LettersISSN 2150-704X print/ISSN 2150-7058online ©2012Taylor &Francis/journals DOI:10.1080/01431161.2011.572093D o w n l o a d e d b y [B o l i X i o n g ] a t 08:11 17 A u g u s t 2011268 B.Xiong et al .in the data acquisition process affects the performance and stability of SAR change detection.In particular,a simple threshold of 0for image differencing and 1for image ratioing may not be appropriate to identify change from no-change.In this letter we propose a change detection measure that is based on a likelihood ratio of SAR inten-sity images and a theoretical analysis of SAR statistical properties.With the proposed change detection measure,one can estimate an optimum threshold that differentiates the changes from the non-changed areas.2.The proposed change detection measureThere are two key steps in classical SAR image change detection (Inglada and Mercier 2007).First is the development of a change detection measure,and the second is to make a decision with this measure.Image differencing appears to be an appropri-ate measure in change detection if the data are affected by additive Gaussian noise (Coppin and Bauer 1996).However,SAR speckle is a multiplicative noise,and thus image ratioing of the multi-temporal radar intensity data is better adapted to the statistical features of SAR data than image differencing (Rignot and Zyl 1993).An alternative change detection measure widely used in SAR change detection is the log-ratio (Bazi et al.2006),which converts the linear scale of SAR data to a logarithmic scale before the differencing operation,and thus transforms the multiplicative noise into additive noise.The results after image differencing or ratioing are called ‘residual data’(Jha and Unni 1994).The choice of a threshold differentiating the change from no-change in unsupervised change detection can be made automatically on the basis of residual data.The automatic selection of thresholds is of particular interest in SAR change detection (Bruzzone and Prieto 2000,Rogerson 2002,Moser and Serpico 2006).Based on the likelihood ratio and the gamma distribution of SAR intensity images,a sta-tistical hypothesis testing is proposed for change detection of SAR data (Conradsen et al.2003).With this hypothesis testing,a new change detection measure is deduced,which is not only as simple as image differencing,but also reliable and provides an easy method for selecting a threshold.In this letter,each pixel in a SAR image is assumed to be independent of the other pixels.If x expresses the intensity value of a pixel in a homogenous region of an L -look SAR image,the probability density function (PDF)of x is (Dekker 1998,Oliver and Quegan 2004)P (x )=1(L )L u L x L −1exp −Lxu(1)where (·)is a gamma function.From the moment estimation,the mean (E )andvariance (D )of x are as follows:E (x )=u ,D (x )=u 2L(2)It can be assumed that there are M groups of SAR images,and each group contains a SAR image after an L -look processing.Given a pixel x ij in a homogenous area,which is surrounded by N -1adjacent pixels that follow the same PDF ,the joint PDF of x ij (i =1,2,...,N ,j =1,2,...,M )can be expressed asD o w n l o a d e d b y [B o l i X i o n g ] a t 08:11 17 A u g u s t 2011A measure for SAR intensity images change detection269P (X )=Px 11,x 12,...,x NM =N i =1M j =11L u jLx L −1ijexp−x ij Lu j(3)where u j is the average intensity of pixels in the homogenous area of each image.When M =2,the first image is regarded as the so-called master image,and the second the slave for change ing the moment estimation in equation (2),the change detection problem is addressed by using the following binary hypothesis test (Chatelain and Tourneret 2007):H 0(unchanged pattern):u 1=E (x 1)=u 2=E (x 2)=u 0H 1(changed pattern):u 1=E (x 1)=u 2=E (x 2)where x 1and x 2are the pixels in the master and slave images,bined with equation (3),the conditional probabilities in a likelihood ratio test can be described as follows:λ1=P (X |H 0)=N i =11 2(L )L u 02Lx L −1i 1x L −1i 2exp −L u 0(x i 1+x i 2)(4)λ2=P (X |H 1)=N i =112(L )L 2u 1u 2Lx L −1i 1x L −1i 2exp −Lx i 1u 1+x i 2u 2(5)The likelihood ratio,λ,is obtained by combining equations (4)and (5):λ=P (X |H 1)P (X |H 0)=u 20u 1u 2LNexp ⎛⎜⎜⎜⎝−L ⎛⎜⎜⎜⎝Ni =1x i 1u 1+N i =1x i 2u 2−N i =1x i 1u 0−Ni =1x i 2u 0⎞⎟⎟⎟⎠⎞⎟⎟⎟⎠(6)Let η1=1N Ni =1x i 1,η2=1NNi =1x i 2,then the ratio can be expressed as λ=P (X |H 1)P (X |H 0)=u 20u 1u 2LNexp−NLη1u 1+η2u 2−η1u 0−η2u 0(7)From a maximum likelihood estimation,it is easy to get the estimations of u 0,u 1andu 2with ˆu0=arg max(λ1)(u 1,u 2)and (ˆu 1,ˆu 2)=arg max(λ2)(u 1,u 2):⎧⎪⎪⎨⎪⎪⎩ˆu0=12NNi =1(x i 1+x i 2)=12(η1+η2)(ˆu 1,ˆu 2)= 1N Ni =1x i 1,1N N i =1x i 2 =(η1,η2)(8)D o w n l o a d e d b y [B o l i X i o n g ] a t 08:11 17 A u g u s t 2011270B.Xiong et al .Likelihood ratiox i 1, i = 1,2,...,N x i 2, i = 1,2,...,NH 0 or H 1λ(X )+−τ0Figure 1.SAR change detection scheme based on a likelihood ratio of SAR images.Thus,equation (7)can be expressed as follows:λ=P (x |H 1)P (x |H 0)= (η1+η2)24η1η2LN exp(0)H 1><H 0τ0(9)Figure 1shows the scheme of change detection based on SAR statistical properties.Simplification of equation (9)results inη=η1η2+η2η1H 1><H 04τ1LN 0−2=τ(10)The expression η= η1 η2 + η2 η1is the new change detection measure,which is a simplified form of the likelihood ratio of the SAR intensity images.The symbol τis the new threshold for change detection and ηk ,k =1,2,are the average values in the sub-homogenous areas of the master and slave SAR images at the same loca-tion,respectively.To minimize computational requirements,small windows are used to avoid the necessity of searching for homogenous areas.It is noted that for 8-bit data,when η1equals η2,ηachieves its minimum value ηmin =2.Therefore,we map ηto η with the range between 0and 255.If there is no obvious change identified,then η will be close to 0.Then a threshold is automatically chosen in this range based on the histogram of η .3.Results and analysisA pair of European remote sensing satellite (ERS-2)SAR images and a pair of airborne SAR images are used to test the proposed change detection measure.3.1ERS-2SAR imagesFigure 2(a )and 2(b )shows two co-registered C-band SAR images after radiometric calibration acquired by ERS-2over the city of Bern,Switzerland,on 20April and 25May 1999.The pixel size of the images is about 12.5m.Flooding over the farmland,evident in figure 2(b ),is mapped as the reference image with a visual interpretation in figure 2(c ).For the Bern data set,residual data were calculated using a 3×3window,which was assumed to represent a homogenous area.The normalized histogram and its local amplification of the residual data are shown in figure 3(a )and 3(b ).The single,sharp peak of the histogram is assumed to represent the unchanged part of the image,and the long tail of the histogram the changed part of the image.Since the unchanged pix-els are very close to 0and concentrated in the narrow peak,the value of the transitionalD o w n l o a d e d b y [B o l i X i o n g ] a t 08:11 17 A u g u s t 2011A measure for SAR intensity images change detection 271(a )(b )(c )050010002000 mFigure 2.A pair of ERS-2SAR images of Bern,Switzerland:(a )before and (b )after floods.(c )Reference image of change (represented by darkpixels).50100150200250Grey-level value0.80.70.60.50.40.30.20.10.0N o r m a l i z e d h i s t o g r am1020304050Grey-level value 0.80.70.60.50.40.30.20.10.0N o r m a l i z e d h i s t o g r a m50100150200Grey-level value654R a t i o3210(15, 0.8)(a )(b )(c )Figure 3.(a )Normalized histogram and (b )the local amplification of residual data from the Bern data set.(c )The ratio curve of the histogram and the first point with the ratio under 1.0are taken as the threshold for change detection.point between the peak and the tail can be chosen as the threshold when the histogram curve changes from steep to smooth.The ratios of the histogram at two adjacent grey-level values on the right side of the peak can be used to find the appropriate threshold.The first point with a ratio less than 1.0is taken as the transitional point.After that point,the histogram enters into an oscillating state and the ratios fluctuate randomly around 1.0.Figure 3(c )is the ratio curve of the histogram.The first point with the ratio less than 1.0is marked with a circle and its value is 15.Figure 4(a )is the residual data before thresholding and (b )is the result of the change detection with the selected threshold,which corresponds well with the reference image.Since log-ratio is a widely used measure in SAR change detection (Bazi et al.2005,Bovolo and Bruzzone 2005),this letter compares the change detection results with a supervised manual trial-and-error procedure (MTEP)based on a log-ratio detector,which consists of finding the threshold value that minimizes the overall errors,and an unsupervised change detection with Kittler–Illingworth (KI)threshold selection based on general Gaussian (GG)model and the log-ratio detector (Bazi et al.2005).Table 1lists the change detection results,the overall error and the kappa coefficient to evaluate these methods.The kappa coefficients and overall errors show that the proposed method is better than that of the GG method and is very close to the result of MTEP .D o w n l o a d e d b y [B o l i X i o n g ] a t 08:11 17 A u g u s t 2011272 B.Xiong et al.(a)(b )Figure 4.(a )The residual data with the proposed change detection measure applied to the data shown in figure 2.(b )The change detection result after thresholding high values as changes and mapping them to black pixels.Table 1.Results achieved by the proposed methods,MTEP and GG method (Bern SARdata set).Correctly identified changed pixelsCorrectly identified unchanged pixels Unchanged pixels classified as changedChanged pixels classified as unchangedOverall error Kappa coefficient MTEP1210127,2591402724120.853GG method 1349126,9504491335820.820Proposed method1165127,2881113174280.8433.2High-resolution airborne SAR imagesFigure 5(a )and 5(b )shows two X-band airborne SAR intensity images acquired over a field in Beijing,China,on 4and 6April 2004.The pixel size of the images is 0.5m.The local incident angle was approximately 35◦,and the flight azimuth approximately 75◦.The numbers and positions of the vehicles on the field were different during the two data acquisition dates.These data sets were only approximately co-registered,with a registration error of about 1–2pixels.Figure 5(c )is the ground change reference image with a visual interpretation.Since notable speckle noise is present in these two images,a Frost filter with a 3×3window was applied four times before the change detection operation,which resulted in a good trade-off between noise reduction and degradation of the spatial details of the data.Then the same 3×3windows were used to replace the homogenous areas to acquire the residual data with the proposed change detection measure.Figure 6(a )and 6(b )shows the histogram and its local amplification of the residual data,and (c )is the ratio curve of the histogram.The grey-level value 18was selected as the threshold value with the same method in the former experiment.Figure 7(a )is the residual data before thresholding and (b )is the result of the change detection with the selected threshold,which also corresponds well with the ground reference image.D o w n l o a d e d b y [B o l i X i o n g ] a t 08:11 17 A u g u s t 2011A measure for SAR intensity images change detection273(a )(b )(c)Figure 5.(a )and (b )A pair of high-resolution airborne SAR images with different vehicle distributions acquired 2days apart.(c )Reference image of change (represented by darkpixels).0.70.60.50.40.30.20.1N o r m a l i z e d h i s t o g r a m050100150200250Grey-level value (a )(b )(c )0.70.60.50.40.30.20.1N o r m a l i z e d h i s t o g r a m010********Grey-level value 6543210R a t i o50100150200Grey-level value(18, 0.99)Figure 6.(a )Normalized histogram and (b )the local amplification of residual data from the airborne SAR data set.(c )The ratio curve of the histogram and the first point with the ratio under 1.0are taken as the threshold for changedetection.(a )(b )Figure 7.(a )The residual data with the proposed change detection measure applied to the data shown in figure 5.(b )The change detection result after thresholding high values as changes and mapping them to black pixels.D o w n l o a d e d b y [B o l i X i o n g ] a t 08:11 17 A u g u s t 2011274 B.Xiong et al .Table 2.Results achieved by the proposed methods,MTEP and GG method (airborne SARdata set).Correctly identified changed pixelsCorrectly identified unchanged pixels Unchanged pixels classified as changed Changed pixels classified as unchanged Overall error Kappa coefficient MTEP3263804,911550127618260.780GG method 3013804,989472152620560.750Proposed method3795803,899156274423060.766Table 2lists the evaluation of the change detection result.The kappa coefficients show that the proposed method is more accurate than the GG method and is very close to the result of MTEP .Though the overall error of the proposed method is higher than the benchmark methods,the number of correctly identified changed pixels is much larger than that of the other methods.Most of the false alarm pixels due to mis-registration potentially can be removed with a post change-classification procedure.4.ConclusionA change detection measure based on the likelihood ratio and statistical properties of SAR intensity images was developed in this letter.With the proposed change detec-tion measure,the histogram of the residual image shows a single,sharp peak for the unchanged areas,which makes it straightforward to select a threshold to differentiate the changed pixels from the unchanged ones.The analyses of two SAR image pairs from different sensors with different resolutions show that the proposed measure can detect the change information from multi-temporal SAR images very well.The change detection results are very close to that from the MTEP and better than that from the unsupervised change detection method with KI thresholding based on the GG model.AcknowledgementsThe authors would like to thank the Chinese Scholarship Council for providing funds for this study.The authors are also thankful to Dr.Timothy Warner and the anony-mous reviewers for their detailed comments,which helped very much to improve this letter.ReferencesB AZI ,Y .,B RUZZONE ,L.and M ELGANI ,F .,2005,An unsupervised approach based on the gen-eralized Gaussian model to automatic change detection in multitemporal SAR images.IEEE Transactions on Geoscience and Remote Sensing ,43,pp.874–887.B AZI ,Y .,B RUZZONE ,L.and M ELGANI ,F .,2006,Automatic identification of the number andvalues of decision thresholds in the log-ratio image for change detection in SAR images.IEEE Geoscience and Remote Sensing Letters ,3,pp.349–353.B OVOLO ,F .and B RUZZONE ,L.,2005,A detail-preserving scale-driven approach to changedetection in multitemporal SAR images.IEEE Transactions on Geoscience and Remote Sensing ,43,pp.2963–2972.B OVOLO ,F .and B RUZZONE ,L.,2007,A split-based approach to unsupervised change detectionin large-size multitemporal images:application to tsunami-damage assessment.IEEE Transactions on Geoscience and Remote Sensing ,45,pp.1658–1670.D o w n l o a d e d b y [B o l i X i o n g ] a t 08:11 17 A u g u s t 2011A measure for SAR intensity images change detection 275B RUZZONE ,L.and P RIETO ,D.F .,2000,Automatic analysis of the difference image for unsu-pervised change detection.IEEE Transactions on Geoscience and Remote Sensing ,38,pp.1171–1182.C HATELAIN ,F .and T OURNERET ,J.-Y .,2007,Bivariate gamma distributions for multisensorSAR images.In International Conference on Acoustics,Speech and Signal Processing ,15–20April,Honolulu,HI,pp.1237–1240.C ONRADSEN ,K.,N IELSEN ,A.A.,S CHOU ,J.and S KRIVER ,H.,2003,A test statistic in the com-plex Wishart distribution and its application to change detection in polarimetric SAR data.IEEE Transactions on Geoscience and Remote Sensing ,41,pp.4–19.C OPPIN ,P .R.and B AUER ,M.E.,1996,Digital change detection in forest ecosystems with remotesensing imagery.International Journal of Remote Sensing ,13,pp.207–234.D EKKER ,R.J.,1998,Speckle filtering in satellite SAR change detection imagery.InternationalJournal of Remote Sensing ,19,pp.1133–1146.F RANCESCHETTI ,G.and L ANARI ,R.,1999,Synthetic Aperture Radar Processing ,1st Ed.(BocaRaton,FL:CRC Press LLC).I NGLADA ,J.and M ERCIER ,G.,2007,A new statistical similarity measure for change detectionin multitemporal SAR images and its extension to multiscale change analysis.IEEE Transactions on Geoscience and Remote Sensing ,45,pp.1432–1445.J HA , C.S.and U NNI ,N.V .M.,1994,Digital change detection of forest conversion ofa dry tropical Indian forest region.International Journal of Remote Sensing ,15,pp.2543–2552.M AKKEASORN ,A.,C HANG ,N.-B.and L I ,J.,2009,Seasonal change detection of riparian zoneswith remote sensing images and genetic programming in a semi-arid watershed.Journal of Environmental Management ,90,pp.1069–1080.M OSER ,G.and S ERPICO ,S.B.,2006,Generalized minimum-error thresholding for unsupervisedchange detection from SAR amplitude imagery.IEEE Transactions on Geoscience and Remote Sensing ,44,pp.2972–2982.O LIVER ,C.and Q UEGAN ,S.,2004,Understanding Synthetic Aperture Radar Images ,2nd Ed.(Raleigh,NC:SciTech Publishing Inc.).R IGNOT ,E.J.M.and Z YL ,J.J.V .,1993,Change detection techniques for ERS-1SAR data.IEEETransactions on Geoscience and Remote Sensing ,31,pp.896–906.R OGERSON ,P .A.,2002,Change detection thresholds for remotely sensed images.Journal ofGeographical Systems ,4,pp.85–97.S INGH ,A.,1989,Digital change detection techniques using remotely-sensed data.InternationalJournal of Remote Sensing ,10,pp.989–1003.W ANG ,Y .and A LLEN ,T.R.,2008,Estuarine shoreline change detection using Japanese ALOSPALSAR HH and JERS-1L-HH SAR data in the Albemarle-Pamlico Sounds,North Carolina,USA.International Journal of Remote Sensing ,29,pp.4429–4442.D o w n l o a d e d b y [B o l i X i o n g ] a t 08:11 17 A u g u s t 2011。

相关文档
最新文档