Reducing Assortment: An Attribute-Based Approach


Foreign Literature and Translation

Original Text and Translation

Original Text

DATABASE

A database may be defined as a collection of interrelated data stored together with as little redundancy as possible to serve one or more applications in an optimal fashion. The data are stored so that they are independent of the programs which use them. A common and controlled approach is used in adding new data and in modifying and retrieving existing data within the database. One system is said to contain a collection of databases if they are entirely separate in structure. A database may be designed for batch processing, real-time processing, or in-line processing. A database system involves application programs, a DBMS, and the database itself.

THE INTRODUCTION TO DATABASE MANAGEMENT SYSTEMS

The term database is often used to describe a collection of related files that is organized into an integrated structure that provides different people varied access to the same data. In many cases this resource is located in different files in different departments throughout the organization, often known only to the individuals who work with their specific portion of the total information. In these cases, the potential value of the information goes unrealized because a person in another department who may need it does not know of it or cannot access it efficiently. In an attempt to organize their information resources and provide for timely and efficient access, many companies have implemented databases.

A database is a collection of related data. By data, we mean known facts that can be recorded and that have implicit meaning. For example, consider the names, telephone numbers, and addresses of all the people you know. You may have recorded this data in an indexed address book, or you may have stored it on a diskette using a personal computer and software such as dBASE III or Lotus 1-2-3. This is a collection of related data with an implicit meaning and hence is a database.

The above definition of a database is quite general. For example, we may consider the collection of words that make up this page of text to be related data and hence a database. However, the common use of the term database is usually more restricted. A database has the following implicit properties:

● A database is a logically coherent collection of data with some inherent meaning. A random assortment of data cannot be referred to as a database.

● A database is designed, built, and populated with data for a specific purpose. It has an intended group of users and some preconceived applications in which these users are interested.

● A database represents some aspect of the real world, sometimes called the miniworld. Changes to the miniworld are reflected in the database.

In other words, a database has some source from which data are derived, some degree of interaction with events in the real world, and an audience that is actively interested in the contents of the database.

A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users.
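A compact way to see these three parts working together is a toy session with a relational DBMS. The sketch below is only an illustration, using Python's built-in sqlite3 module; the file name and the contacts table are made up, and SQLite simply stands in for the storage, manipulation, and interface roles described above.

```python
import sqlite3

# (1) Storage subsystem: SQLite keeps the data and its indexes in a single
#     file ("example.db" is a hypothetical name) and performs the physical
#     reads and writes for us.
conn = sqlite3.connect("example.db")

# (3) Interface: the connection and cursor objects are the boundary between
#     the user (or application program) and the DBMS.
cur = conn.cursor()

# (2) Modeling and manipulation subsystem: SQL statements organize the data
#     (CREATE TABLE) and add, update, delete, and retrieve it.
cur.execute("""
    CREATE TABLE IF NOT EXISTS contacts (
        name    TEXT,
        phone   TEXT,
        address TEXT
    )
""")
cur.execute("INSERT INTO contacts VALUES (?, ?, ?)",
            ("Ann Lee", "555-0100", "12 Elm St"))
conn.commit()

for row in cur.execute("SELECT name, phone FROM contacts"):
    print(row)

conn.close()
```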
Several major trends are emerging that enhance the value and usefulness of database management systems:

● Managers who require more up-to-date information to make effective decisions.

● Customers who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.

● Users who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.

● Organizations that discover information has a strategic value; they utilize their database systems to gain an edge over their competitors.

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers. In a file-oriented system, users needing special information may communicate their needs to a programmer, who, when time permits, will write one or more programs to extract the data and prepare the information. The availability of a DBMS, however, offers users a much faster alternative communications path.

DATABASE QUERY

If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. However, it does not automatically leave an audit trail of actions and does not provide the kinds of controls necessary in a multi-user organization. These controls are only available when a set of application programs is customized for each data entry and updating function.

Software for personal computers that performs some of the DBMS functions has been very popular. Personal computers were intended for use by individuals for personal information storage and processing. Small enterprises and professionals such as doctors, architects, engineers, and lawyers have also used these machines extensively. By the nature of the intended usage, database systems on these machines are exempt from several of the requirements of full-fledged database systems. Since data sharing is not intended, and concurrent operation even less so, the software can be less complex. Security and integrity maintenance are de-emphasized or absent. As data volumes will be small, performance efficiency is also less important. In fact, the only aspect of a database system that remains important is data independence. Data independence, as stated earlier, means that application programs and user queries need not be aware of the physical organization of data on secondary storage. The importance of this aspect, particularly for the personal computer user, is that it greatly simplifies database usage.
The user can store, access, and manipulate data at a high level (close to the application) and be totally shielded from the low-level (close to the machine) details of data organization.

DBMS STRUCTURING TECHNIQUES

Spatial data management has been an active area of research in the database field for two decades, with much of the research focused on developing data structures for storing and indexing spatial data. However, no commercial database system provides facilities for directly defining and storing spatial data, or for formulating queries based on search conditions on spatial data.

There are two components to temporal data management: history data management and version management. Both have been the subjects of research for over a decade. The troublesome aspect of temporal data management is that the boundary between applications and database systems has not been clearly drawn. Specifically, it is not clear how much of the typical semantics and facilities of temporal data management can and should be directly incorporated in a database system, and how much should be left to applications and users. In this section, we will provide a list of short-term research issues that should be examined to shed light on this fundamental question.

The focus of research into history data management has been on defining the semantics of time and time intervals, and on issues related to understanding the semantics of queries and updates against history data stored in an attribute of a record. Typically, in the context of relational databases, a temporal attribute is defined to hold a sequence of history data for the attribute. A history data item consists of a data item and a time interval for which the data item is valid. A query may then be issued to retrieve history data for a specified time interval for the temporal attribute. The mechanism for supporting temporal attributes is similar to that for supporting set-valued attributes in a database system, such as UniSQL.

In the absence of support for temporal attributes, application developers who need to model and store history data have simply simulated temporal attributes by creating an attribute for the time interval, along with the "temporal" attribute. This of course may result in duplication of records in a table, and in more complicated search predicates in queries. One necessary topic of research in history data management is to quantitatively establish the performance (and even productivity) differences between using a database system that directly supports temporal attributes and using a conventional database system that supports neither set-valued nor temporal attributes.

Data security, integrity, and independence

Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or to subsets of the database, called subschemas. For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data.

Data integrity refers to the accuracy, correctness, or validity of the data in the database. In a database system, data integrity means safeguarding the data against invalid alteration or destruction. In large on-line database systems, data integrity becomes a more severe problem and two additional complications arise. The first has to do with many users accessing the database concurrently.
For example, if two travel agents book the same seat on the same flight at nearly the same time, the first agent's booking will be lost. In such cases the technique of locking the record or field provides the means for preventing one user from accessing a record while another user is updating the same record.

The second complication relates to hardware, software, or human error during the course of processing and involves the database transaction, which is a group of database modifications treated as a single unit. For example, an agent booking an airline reservation involves several database updates (i.e., adding the passenger's name and address and updating the seats-available field), which comprise a single transaction. The database transaction is not considered to be completed until all updates have been completed; otherwise, none of the updates will be allowed to take place.

An important point about database systems is that the database should exist independently of any of the specific applications. Traditional data processing applications are data dependent. When a DBMS is used, the detailed knowledge of the physical organization of the data does not have to be built into every application program. The application program asks the DBMS for data by field name; for example, a coded representation of "give me customer name and balance due" would be sent to the DBMS. Without a DBMS the programmer must reserve space for the full structure of the record in the program. Any change in data structure requires changes in all the application programs.

Database Management System (DBMS)

The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). A DBMS will usually be handling multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a "data model". At the present time, there are four underlying structures for database management systems. They are:

List structures.
Relational structures.
Hierarchical (tree) structures.
Network structures.

Management Information System (MIS)

An MIS can be defined as a network of computer-based data processing procedures developed in an organization and integrated as necessary with manual and other procedures for the purpose of providing timely and effective information to support decision making and other necessary management functions. One of the most difficult tasks of the MIS designer is to develop the information flow needed to support decision making. Generally speaking, much of the information needed by managers who occupy different levels and who have different responsibilities is obtained from a collection of existing information systems (or subsystems).

Structured Query Language (SQL)

SQL is a database processing language endorsed by the American National Standards Institute. It is rapidly becoming the standard query language for accessing data in relational databases. With its simple, powerful syntax, SQL represents great progress in database access for all levels of management and computing professionals. SQL falls into two forms: interactive SQL and embedded SQL.
Embedded SQL usage is close to traditional programming in third-generation languages. It is the interactive use of SQL that makes it most applicable for the rapid answering of ad hoc queries. With an interactive SQL query you just type in a few lines of SQL and you get the database response immediately on the screen.
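The transaction and ad hoc query ideas discussed above can be illustrated with a small, self-contained sketch. It again uses Python's sqlite3 module; the flights/passengers schema and the booking scenario are hypothetical stand-ins for the airline example in the text, not an actual reservation system.

```python
import sqlite3

# Hypothetical schema for the airline example used in the text.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE flights (
        flight_no       TEXT PRIMARY KEY,
        seats_available INTEGER
    )
""")
conn.execute("CREATE TABLE passengers (name TEXT, address TEXT, flight_no TEXT)")
conn.execute("INSERT INTO flights VALUES ('L3-101', 1)")

# A booking is one transaction: both updates succeed, or neither is applied.
try:
    with conn:  # sqlite3 commits on success, rolls back on exception
        conn.execute("INSERT INTO passengers VALUES (?, ?, ?)",
                     ("Ann Lee", "12 Elm St", "L3-101"))
        cur = conn.execute(
            "UPDATE flights SET seats_available = seats_available - 1 "
            "WHERE flight_no = ? AND seats_available > 0", ("L3-101",))
        if cur.rowcount == 0:
            raise RuntimeError("no seat left")   # abort the whole booking
except RuntimeError:
    pass  # the passenger row is rolled back along with the failed update

# An ad hoc, interactive-style query: type it, see the answer immediately.
print(conn.execute(
    "SELECT flight_no, seats_available FROM flights").fetchall())
# -> [('L3-101', 0)]
```

The `with conn:` block is what gives the all-or-nothing behaviour described in the text: if any statement inside it fails, every change made within the block is rolled back.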

Xiamen No. 1 High School of Fujian, 2023-2024 Academic Year, Senior Three, First Semester November Mid-term English Exam

福建省厦门第一中学2023-2024学年高三上学期11月期中英语试题学校:___________姓名:___________班级:___________考号:___________一、阅读理解FIVE UNUSUAL SPORTSWhat sports are you into? Football? Tennis? Swimming? If you’re looking for a change, you might like to try one of these.OctopushOctopush (or underwater hockey as it’s also known) is a form of hockey that’s played in a swimming pool. Participants wear a mask and snorkel and try to move a puck (水球) across the bottom of a pool. The sport has become popular in countries such as the UK, Australia, Canada, New Zealand and South Africa. An ability to hold your breath for long periods of time is a definite plus.ZoobombingZoobombing involves riding a children’s bike down a steep hill. The sport originated in the US city of Portland in Oregon in 2002. Participants carry their bikes on the MAX Light Rail and go to the Washington Park station next to Oregon Zoo (which is why it’s called “zoobombing”). From there, they take a lift to the surface, and then ride the mini-bikes down the hills in the area.Office Chair RacingOffice Chair Racing consists of racing down a hill in office chairs that can reach speeds of up to 30kph. Strict rules are in place for competitors: they’re allowed to fit in-line skate wheels and handles to their chairs, but no motors. “We check each chair carefully in advance,”one of the organizers explained. The participants race in pairs wearing protective padding as they launch themselves from a ramp (坡道). Prizes are given to the fastest competitors and also for the best-designed chairs.Fit 4 DrumsFit 4 Drums is a new form of cardio-rhythmic exercise. Led by an instructor, the class involves beating a specially-designed drum with two sticks while dancing at the same time. It’s the first group fitness activity where you get to play a drum while getting an intense workout. A sense of rhythm is a definite advantage!Horse BoardingHorse Boarding involves being towed behind a horse at 35mph on an off-road skateboard. Professional stuntman Daniel Fowler Prime invented the sport after he strung a rope between his off-road “mountain board” and a horse. Participants stand on a board while holding onto a rope, attempting to maintain their balance as the horse gallops (疾驰) ahead. “The horse rider and the horse have to work together because if they don’t, the horse goes flying,”Daniel explained.So, which sport would you like to try?1.What do you need to do if you want to play Octopush?A.Swim on the surface of the water.B.Hold your breath before the sport.C.Play it by the side of the seashore.D.Wear underwater breathing devices. 2.Which activity will you choose if you want to take part in collective fitness?A.Zoobombing B.Office Chair RacingC.Fit 4 Drums D.Horse Boarding3.What proverb does Horse Boarding tell us?A.The spirit is willing, but the flesh is weak.B.Never let your feet run faster than your shoes.C.The bigger they come, the harder they fall.D.Every chess master was once a beginner.Eliana Yi dreamed of pursuing piano performance in college, never mind that her fingers could barely reach the length of an octave (八度音阶). Unable to fully play many works by Romantic-era composers, including Beethoven and Brahms, she tried anyway — and in her determination to spend hours practicing one of Chopin’s compositions which is known for being “stretchy”, wound up injuring herself.“I would just go to pieces,” the Southern Methodist University junior recalled. “There were just too many octaves. 
I wondered whether I was just going to play Bach and Mozart for the rest of my life.”The efforts of SMU keyboard studies chair Carol Leone are changing all that. Twenty years ago, the school became the first major university in the U.S. to incorporate smaller keyboards into its music program, leveling the playing field for Yi and other piano majors.Yi reflected on the first time she tried one of the smaller keyboards: “I remember beingreally excited because my hands could actually reach and play all the right notes,” she said. Ever since, “I haven’t had a single injury, and I can practice as long as I want.”For decades, few questioned the size of the conventional piano. If someone’s hand span was less than 8.5 inches — the distance considered ideal to comfortably play an octave — well, that’s just how it was.Those who attempt “stretchy” passages either get used to omitting notes or risk tendon (腱) injury with repeated play. Leone is familiar with such challenges. Born into a family of jazz musicians, she instead favored classical music and pursued piano despite her small hand span and earned a doctorate in musical arts.A few years after joining SMU’s music faculty in 1996, the decorated pianist read an article in Piano and Keyboard magazine about the smaller keyboards. As Leone would later write, the discovery would completely renew her life and career.In 2000, she received a grant to retrofit a department Steinway to accommodate a smaller keyboard, and the benefits were immediate. In addition to relieving injury caused by overextended fingers, she said, it gave those with smaller spans the ability to play classic compositions taken for granted by larger-handed counterparts.Smaller keyboards instill many with new confidence. It’s not their own limitations that have held them back, they realize; it’s the limitations of the instruments themselves. For those devoted to a life of making music, it’s as if a cloud has suddenly lifted.4.What is the similarity between Eliana Yi and Carol Leone?A.Their interest in jazz extended to classical music.B.Short hand span used to restrict their music career.C.They both joined SMU’s music faculty years ago.D.Romantic-era composers’ music was easy for them.5.Why did SMU initiate an effort to scale down the piano?A.To reduce the number of octaves.B.To incorporate Bach into its music program.C.To provide fair opportunities for piano majors.D.To encourage pianists to spend more hours practicing.6.How did Yi probably feel when she played the retrofitted piano?A.Confident.B.Frustrated.C.Challenging.D.Determined. 7.Which of the following is the best title of the passage?A.Who Qualifies as an Ideal Pianist?B.Traditional or Innovative Piano?C.Hard-working Pianists Pays offD.The Story behind Retrofitted PianosThe curb cut (路缘坡). It’s a convenience that most of us rarely, if ever, notice. Yet, without it, daily life might be a lot harder—in more ways than one. Pushing a baby stroller onto the curb, skateboarding onto a sidewalk or taking a full grocery cart from the sidewalk to your car—all these tasks are easier because of the curb cut.But it was created with a different purpose in mind.It’s hard to imagine today, but back in the 1970s, most sidewalks in the United States ended with a sharp drop-off. That was a big deal for people in wheelchairs because there were no ramps to help them move along city blocks without assistance. According to one disability rights leader, a six-inch curb “might as well have been Mount Everest”. 
So, activists from Berkeley, California, who also needed wheelchairs, organized a campaign to create tiny ramps at intersections to help people dependent on wheels move up and down curbs independently.I think about the “curb cut effect” a lot when working on issues around health equity (公平). The first time I even heard about the curb cut was in a 2017 Stanford Social Innovation Review piece by Policy Link CEO Angela Blackwell. Blackwell rightly noted that many people see equity as “a zero-sum game (零和游戏)” and that it is commonly believed that there is a “prejudiced societal suspicion that intentionally supporting one group hurts another.” What the curb cut effect shows though, Blackwell said, is that “when society creates the circumstances that allow those who have been left behind to participate and contribute fully, everyone wins.”There are multiple examples of this principle at work. For example, investing in policies that create more living-wage jobs or increase the availability of affordable housing certainly benefits people in communities that have limited options. But, the action also empowers those people with opportunities for better health and the means to become contributing members of society—and that benefits everyone. Even the football huddle (密商) was initially created to help deaf football players at Gallaudet College keep their game plans secret from opponents who could have read their sign language. Today, it’s used by every team to prevent theopponent from learning about game-winning strategies.So, next time you cross the street, or roll your suitcase through a crosswalk or ride your bike directly onto a sidewalk—think about how much the curb cut, that change in design that broke down walls of exclusion for one group of people at a disadvantage, has helped not just that group, but all of us.8.What does the underlined quote from the disability rights leader imply concerning asix-inch curb?A.It is an unforgettable symbol.B.It is an impassable barrier.C.It is an important sign.D.It is an impressive landmark. 9.According to Angela Blackwell, what do many people believe?A.It’s not worthwhile to promote health equity.B.It’s necessary to go all out to help the disabled.C.It’s impossible to have everyone treated equally.D.It’s fair to give the disadvantaged more help than others.10.Which of the following examples best illustrates the “curb cut effect” principle?A.Spaceflight designs are applied to life on earth.B.Four great inventions of China spread to the west.C.Christopher Columbus discovered the new world.D.Classic literature got translated into many languages.11.What conclusion can be drawn from the passage?A.Caring for disadvantaged groups may finally benefit all.B.Action empowers those with opportunities for better solutions.C.Society should create circumstances that get everyone involved.D.Everyday items are originally invented for people in need of help.Casting blame is natural: it is tempting to fault someone else for a mistake rather than taking responsibility yourself. But blame is also harmful. It makes it less likely that people will own up to mistakes, and thus less likely that organizations can learn from them. Research published in 2015 suggests that firms whose managers pointed to external factors to explain their failings underperformed companies that blamed themselves.Blame culture can spread like a virus. 
Just as children fear mom and dad’s punishment if they admit to wrongdoing, in a blaming environment, employees are afraid of criticism andpunishment if they acknowledge making a mistake at work. Blame culture asks, “who dropped the ball?” instead of “where did our systems and processes fail?” The focus is on the individuals, not the processes. It’s much easier to point fingers at a person or department instead of doing the harder, but the more beneficial, exercise of fixing the root cause, in which case the problem does not happen again.The No Blame Culture was introduced to make sure errors and deficiencies (缺陷) were highlighted by employees as early as possible. It originated in organizations where tiny errors can have catastrophic (灾难性的) consequences. These are known as high reliability organizations (HROs) and include hospitals, submarines and airlines. Because errors can be so disastrous in these organizations, it’s dangerous to operate in an environment where employees don’t feel able to report errors that have been made or raise concerns about that deficiencies may turn into future errors. The No Blame Culture maximizes accountability because all contributions to the event occurring are identified and reviewed for possible change and improvement.The National Transportation Safety Board (NTSB), which supervises air traffic across the United States, makes it clear that its role is not to assign blame or liability but to find out what went wrong and to issue recommendations to avoid a repeat. The proud record of the airline industry in reducing accidents partly reflects no-blame processes for investigating crashes and close calls. The motive to learn from errors also exist when the risks are lower. That is why software engineers and developers routinely investigate what went wrong if a website crashes or a server goes down.There is an obvious worry about embracing blamelessness. What if the website keeps crashing and the same person is at fault? Sometimes, after all, blame is deserved. The idea of the “just culture”, a framework developed in the 1990s by James Reason, a psychologist, addresses the concern that the incompetent and the malevolent (恶意的) will be let off the hook. The line that Britain’s aviation regulator draws between honest errors and the other sort is a good starting-point. It promises a culture in which people “are not punished for actions or decisions taken by them that match with their experience and training”. 
That narrows room for blame but does not remove it entirely.12.According to the research published in 2015, companies that ______ had better performance.A.blamed external factors B.admitted their mistakesC.conducted investigations D.punished the under performers 13.According to the passage, what do you learn about the No Blame Culture?A.It encourages the early disclosure of errors.B.It only exists in high reliability organizations.C.It enables people to shift the blame onto others.D.It prevents organizations from making any error.14.What is the major concern about embracing blamelessness according to the passage?A.Innocent people might take the blame by admitting their failure.B.Being blamed for mistakes can destroy trust in employees.C.The line between honest errors and the other sort is not clear.D.People won’t learn their lessons if they aren’t blamed for failures.15.Which of the following is the best title for the passage?A.Why We Fail to Learn from Our Own MistakesB.How to Avoid Disastrous Errors in OrganizationsC.Why We Should Stop the Blame Game at WorkD.How to Deal with Workplace Blame Culture二、七选五You’ve reached that special time — you are getting ready to leave your job and move on to the next step in your career. But the end of an employment relationship is not necessarily the end of the relationship — with either the leader or the company. 16I learned this relatively early in my career. At first, I was concerned I might lose my relationship with my now former boss, as I truly liked him. 17 My boss enthusiastically stayed in touch with me, and I helped him onboard my replacement and consulted on other projects. And now, more than 2 decades since I left, we are still in communication and friends.That isn’t to say it always goes like this. When I left another role, in spite of my desire to maintain communication, my former supervisor seemed indifferent and the relationship ended. Sometimes your boss was a nightmare and you want to end the relationship. 18 You don’t owe the bad bosses anything. That’s exactly what I did when I was fired from a freelance role after I asked to be paid for my completed work!But for the good bosses and organizations, the ones that invested in your talent and celebrated your achievements, things are different. 19 The breakup can become a breakthrough.20 Especially when you have a truly delightful and respectful boss, you may feel guilt, sadness, or regret. But your overall responsibility is to yourself and your career — not to one organization. And given the right circumstances, it is almost always possible — and usually beneficial — to leave gracefully.A.But it turned out I had no reason to fear.B.So the way I left contributed to this breakup.C.It’s completely understandable not to engage further.D.It is normal to have mixed emotions when you leave a job.E.Here are some ways to build a win-win with your former leader.F.The concusion of the employment can start a new era of cooperation.G.You can leave your company and keep the relationship at the same time.三、完形填空While doing some cleaning in my kitchen, I noticed a tiny black pellet(小球)on the shelf.was gone.Now I 33 peek(窥视)inside the dishwasher and the oven before turning them on. 34 , I know I am not the only one looking out for geckos. 
No 35 is too small for us to love.21.A.remembered B.discovered C.thought D.wished 22.A.approved of B.sought for C.fed on D.got into 23.A.fixed B.touched C.hurt D.lost 24.A.trouble B.danger C.failure D.pleasure 25.A.starvation B.thirst C.climate D.poverty 26.A.different B.simple C.interesting D.tough 27.A.kitchen B.bedroom C.garden D.lab 28.A.books B.woods C.stones D.bottles 29.A.arranged B.grasped C.cleaned D.removed 30.A.dropped B.obtained C.spotted D.rescued 31.A.agreed B.hoped C.feared D.promised 32.A.counted B.checked C.picked D.locked 33.A.even B.never C.still D.already 34.A.Nevertheless B.Instead C.Therefore D.Otherwise 35.A.place B.dream C.human D.creature四、用单词的适当形式完成短文阅读下面短文,在空白处填入1个适当的单词或括号内单词的正确形式。

Foxconn English Written Test Questions and Answers

富士康英语笔试题及答案一、词汇题(每题1分,共10分)1. The company has a large number of _______ employees.A. permanentB. temporaryC. casualD. part-time答案: A2. The _______ of the new product was a great success.A. introductionB. innovationC. initiationD. induction答案: A3. The _______ of the meeting has been postponed due to bad weather.A. commencementB. completionC. cancellationD. termination答案: A4. She has a _______ knowledge of the subject.A. superficialB. profoundC. elementaryD. rudimentary答案: B5. The _______ of the old building was a difficult task.A. renovationB. demolitionC. constructionD. destruction答案: B6. The _______ of the company's profits has been steady over the past decade.A. fluctuationB. stabilityC. increaseD. decrease答案: B7. The _______ of the new policy was met with mixed reactions.A. implementationB. enforcementC. initiationD. establishment答案: A8. The _______ of the project was completed on schedule.A. executionB. performanceC. operationD. function答案: A9. The _______ of the company's assets is a complex process.A. evaluationB. valuationC. assessmentD. estimation答案: B10. The _______ of the new CEO was announced at the annual meeting.A. appointmentB. nominationC. electionD. designation答案: A二、阅读理解题(每题2分,共20分)Passage 1In recent years, the rise of e-commerce has significantly impacted the retail industry. Traditional brick-and-mortar stores are facing challenges as online shopping becomes more popular. However, some companies have adapted to thesechanges by integrating their online and offline presence to create a seamless shopping experience for customers.Questions:11. What has been the impact of e-commerce on the retail industry?A. It has led to the decline of online shopping.B. It has caused an increase in the popularity ofphysical stores.C. It has significantly impacted the way people shop.D. It has resulted in the closure of all physical stores.答案: C12. How have some companies adapted to the rise of e-commerce?A. By closing their physical stores.B. By focusing solely on online sales.C. By integrating their online and offline presence.D. By ignoring the changes in consumer behavior.答案: CPassage 2The development of renewable energy sources is crucial for reducing our reliance on fossil fuels and combating climatechange. Solar and wind power are two of the most promising renewable energy sources, offering clean and sustainable alternatives to traditional energy production methods.Questions:13. Why is the development of renewable energy sources important?A. To increase our reliance on fossil fuels.B. To reduce the cost of energy production.C. To combat climate change and reduce reliance on fossil fuels.D. To make energy production more difficult.答案: C14. Which two renewable energy sources are mentioned in the passage?A. Solar and nuclear power.B. Wind and hydro power.C. Solar and wind power.D. Fossil fuels and hydro power.答案: C三、完形填空题(每题1.5分,共15分)In the modern world, technology plays a vital role in our daily lives. It has transformed the way we communicate, work, and learn. However, with the rapid advancement of technology, there are also concerns about its impact on society.15. Technology has made our lives _______ easier.A. muchB. littleC. notD. no答案: A16. The _______ of technology is not without its drawbacks.A. progressB. developmentC. advancementD. growth答案: C17. People are increasingly _______ about the effects of technology on privacy.A. concernedB. informedC. interestedD. curious答案: A18. Despite。

Advanced English Vocabulary List

recital superficial syndicated columns tabloid ticker top the list virulence wire services adversity arbitrary arthritis barricade bittersweet come to terms conserve contingent (up) on cost of Living debilitating defective deprivation desolation dignified diagnostic discrepant drastically eradicate euphemism excruciatingly existential folder fraudulent fulfilling funnel herein hearing aids housing humiliating impair inflationary inherently inhospitable superstitious tippet wring accelerator backboard blocks blueberry 使整洁 打听,窥探(常作贬义,口语化用 boulevard 词) bulge 勒死,掐死,限制,阻止 click 小事,琐事 cop 放松,松弛 crib (人)瘦长而结实的 dash 使羞愧,使窘迫 dent 理解;领悟 dopey 秘密的,暗地的,偷偷地 dribble 狡猾,狡诈的 droop 滑稽的 exhale (嗓音)颤抖,结结巴巴地说 feel like oneself 烦躁,坐立不安,惹人生厌 fender 烦躁的,不安的 flip 焦急,不安;心绪不宁 hot dog 短柄小斧 hot-shot 铰链 (美国历史上)移民到某地定居并 ignition 耕种政府分给的土地 ignition key 用打成花结连接 irk 孤单的,寂寞的,偏远的 jerk 衬裙 lettuce 用小布拼缝被子

Measurements of Cross Sections and Forward-Backward Asymmetries at the Z Resonance and Determination of Electroweak Parameters


arXiv:hep-ex/0002046 v1, 16 Feb 2000

EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH

CERN-EP/2000-022
February 04, 2000

Measurements of Cross Sections and Forward-Backward Asymmetries at the Z Resonance and Determination of Electroweak Parameters

The L3 Collaboration

Abstract

We report on measurements of hadronic and leptonic cross sections and leptonic forward-backward asymmetries performed with the L3 detector in the years 1993-95. A total luminosity of 103 pb⁻¹ was collected at centre-of-mass energies √s ≈ m_Z ± 1.8 GeV, which corresponds to 2.5 million hadronic and 245 thousand leptonic events selected. These data lead to a significantly improved determination of Z parameters. From the total cross sections, combined with our measurements in 1990-92, we obtain the final results: m_Z = 91189.8 ± 3.1 MeV, Γ_Z = 2502.4 ± 4.2 MeV, Γ_had = 1751.1 ± 3.8 MeV, Γ_ℓ = 84.14 ± 0.17 MeV. An invisible width of Γ_inv = 499.1 ± 2.9 MeV is derived, which in the Standard Model yields for the number of light neutrino species N_ν = 2.978 ± 0.014. Adding our results on the leptonic forward-backward asymmetries and the tau polarisation, the effective vector and axial-vector coupling constants of the neutral weak current to charged leptons are determined to be ḡ_V^ℓ = −0.0397 ± 0.0017 and ḡ_A^ℓ = −0.50153 ± 0.00053. Including our measurements of the Z → b b̄ forward-backward and quark charge asymmetries, a value for the effective electroweak mixing angle of sin²θ̄_W […] is derived.

1 Introduction

The Standard Model (SM) of electroweak interactions [1,2] is tested with great precision by the experiments performed at the LEP and SLC e+e− colliders running at centre-of-mass energies, √s, […] on the treatment of the t-channel contributions in e+e−→e+e−(γ) and on technicalities of the fit procedures, respectively.
parameters,ap-plying the energy model which is based on calibration by resonant depolarisation[29].This model traces the time variation of the centre-of-mass energy of typically1MeV per hour.The average centre-of-mass energies are calculated for each data sample individually as luminosity weighted averages.Slightly different values are obtained for different reactions because of small differences in the usable luminosity.The errors on the centre-of-mass energies and their correlations for the1994data and for the two scans performed in1993and1995are given in form of a7×7covariance matrix in Table1.The uncertainties on the centre-of-mass energy for the data samples not included in this matrix,i.e.the1993and1995pre-scans,are18MeV and10MeV,respectively.Details of the treatment of these errors in thefits can be found in Appendix B.The energy distribution of the particles circulating in an e+e−-storage ring has afinite width due to synchrotron oscillations.An experimentally observed cross section is therefore a convolution of cross sections at energies which are distributed around the average value in a gaussian form.The spread of the centre-of-mass energy for the L3interaction point as obtained from the observed longitudinal length of the particle bunches in LEP is listed in Table2[9]. The time variation of the average energy causes a similar,but smaller,effect which is included in these numbers.All cross sections and forward-backward asymmetries quoted below are corrected for the energy spread to the average value of the centre-of-mass energy.The relative corrections on the measured hadronic cross sections amount to+1.7per mill(‰)at the Z pole and to−1.1‰and−0.6‰at the peak−2and peak+2energy,respectively.The absolute corrections on the forward-backward asymmetries are very small.The largest correction is−0.0002for the muon and tau peak−2data sets.The error on the energy spread is propagated into thefits,resulting in very small contributions to the errors of thefitted parameters(see Appendix B).The largest effect is on the total width of the Z,contributing approximately0.3MeV to its error.During the operation of LEP,no evidence for an average longitudinal polarisation of the electrons or positrons has been observed.Stringent limits on residual polarisation during lumi-nosity runs are set such that the uncertainties on the determination of electroweak observables are negligible compared to their experimental errors[30].The determination of the LEP centre-of-mass energy in1990−92is described in Refer-ences[31].From these results the LEP energy error matrix given in Table3is derived.5Luminosity MeasurementThe integrated luminosity L is determined by measuring the number of small-angle Bhabha interactions e+e−→e+e−(γ).For this purpose two cylindrical calorimeters consisting of arraysof BGO crystals are located on either side of the interaction point.Both detectors are dividedinto two half-rings in the vertical plane to allow the opening of the detectors duringfilling ofLEP.A silicon strip detector,consisting of two layers measuring the polar angle,θ,and one layer measuring the azimuthal angle,φ,is situated in front of each calorimeter to preciselydefine thefiducial volume.A detailed description of the luminosity monitor and the luminosity determination can be found in Reference[10].The selection of small-angle Bhabha events is based on the energy depositions in adjacentcrystals of the BGO calorimeters which are grouped to form clusters.The highest-energy cluster on each side is considered for the 
luminosity analysis.For about98%of the cases a hitin the silicon detectors is matched with a cluster and its coordinate is used;otherwise the BGOcoordinate is retained.The event selection criteria are:1.The energy of the most energetic cluster is required to exceed0.8E b and the energy onthe opposite side must be greater than0.4E b,where E b is the beam energy.If the energyof the most energetic cluster is within±5%of E b the minimum energy requirement onthe opposite side is reduced to0.2E b in order to recover events with energy lost in the gaps between crystals.The distributions of the energy of the most energetic cluster andthe cluster on the opposite side as measured in the luminosity monitors are shown in Figure1for the1993data.All selection cuts except the one under study are applied.2.The cluster on one side must be confined to a tightfiducial volume:•32mrad<θ<54mrad;|φ−90◦|>11.25◦and|φ−270◦|>11.25◦.The requirements on the azimuthal angle remove the regions where the half-rings of thedetector meet.The cluster on the opposite side is required to be within a largerfiducialvolume:•27mrad<π−θ<65mrad;|φ−90◦|>3.75◦and|φ−270◦|>3.75◦.This ensures that the event is fully contained in the detectors and edge effects in the reconstruction are avoided.3.The coplanarity angle∆φ=φ(z<0)−φ(z>0)between the two clusters must satisfy|∆φ−180◦|<10◦.The distribution of the coplanarity angle is shown in Figure2.Very good agreement with theMonte Carlo simulation is observed.Four samples of Bhabha events are defined by applying the tightfiducial volume cut to oneof theθ-measuring silicon layers.Taking the average of the luminosities obtained from thesesamples minimizes the effects of relative offsets between the interaction point and the detectors. The energy and coplanarity cuts reduce the background from random beam-gas coincidences.The remaining contamination is very small:(3.4±2.2)·10−5.This number is estimated using the sidebands of the coplanarity distribution,10◦<|∆φ−180◦|<30◦,after requiring that neither of the two clusters have an energy within±5%of E b.The accepted cross section is determined from Monte Carlo e+e−→e+e−(γ)samples gen-√erated with the BHLUMI event generator at afixed centre-of-mass energy ofs=91.25GeV the acceptedcross section is determined to be69.62nb.The statistical error on the Monte Carlo sample con-tributes0.35‰to the uncertainty of the luminosity measurement.The theoretical uncertainty on the Bhabha cross section in ourfiducial volume is estimated to be0.61‰[12].The experimental errors of the luminosity measurement are small.Important sources of systematic errors are:geometrical uncertainties due to the internal alignment of the silicon detectors(0.15‰to0.27‰),temperature expansion effects(0.14‰)and the knowledge on the longitudinal position of the silicon detectors(0.16‰to0.60‰).The precision depends on the accuracy of the detector surveys and on the stability of the detector and wafer positions during the different years.The polar angle distribution of Bhabha scattering events used for the luminosity measure-ment is shown in Figure3.The structure seen in the central part of the+z side is due to the flare in the beam pipe on this side.The imperfect description in the Monte Carlo does not pose any problem as it is far away from the edges of thefiducial volume.The overall agreement between the data and Monte Carlo distributions of the selection quantities is good.Small discrepancies in the energy distributions at high energies are due to contamination of Bhabha events with 
beam-gas interactions and,at low energies,due to an imperfect description of the cracks between crystals.The selection uncertainty is estimated by varying the selection criteria over reasonable ranges and summing in quadrature the resulting contributions.This procedure yields errors between0.42‰and0.48‰for different years.The luminosities determined from the four samples described above agree within these errors.The trigger inefficiency is measured using a sample of events triggered by only requiring an energy deposit exceeding30GeV on one side.It is found to be negligible.The various sources of uncertainties are summarized in bining them in quadra-ture yields total experimental errors on the luminosity of0.86‰,0.64‰and0.68‰in1993,1994 and1995.Correlations of the total experimental systematic errors between different years are studied and the correlation matrix is given in Table5.The error from the theory is fully correlated.Because of the1/s dependence of the small angle Bhabha cross section,the uncertainty on the centre-of-mass energies causes a small additional uncertainty on the luminosity measure-ment.For instance,this amounts to0.1‰for the high statistics data sample of1994.This effect is included in thefits performed in Section12and13,see Appendix B.The statistical error on the luminosity measurement from the number of observed small angle Bhabha events is also included in thosefits.Table6lists the number of observed Bhabha events for the nine data samples and the corresponding errors on cross section measurements.√Combining all data sets taken in1993−95at6e+e−→hadrons(γ)Event SelectionHadronic Z decays are identified by their large energy deposition and high multiplicity in theelectromagnetic and hadron calorimeters.The selection criteria are similar to those applied in our previous analysis[4]:1.The total energy observed in the detector,E vis,normalised to the centre-of-mass energy√must satisfy0.5<E vis/√s′is the effective centre-of-mass energy after initial state s′>0.1√s is estimated to be photon radiation.The acceptance for events in the data withnegligible.They are not considered as part of the signal and hence not corrected for.The interference between initial andfinal state photon radiation is not accounted for in the event generator.This effect modifies the angular distribution of the events in particular at very low polar angles where the detector inefficiencies are largest.However,the error from the imperfect simulation on the measured cross section,which includes initial-final state interference as part of the signal,is estimated to be very small(≪0.1pb)in the centre-of-mass energyrange considered here.Quark pairs originating from pair production from initial state radiation√are considered as part of the signal if their invariant mass exceeds50%ofDifferences of the implementation of QED effects in both programs are studied and found tohave negligible impact on the acceptance.Hadronic Z decays are triggered by the energy,central track,muon or scintillation counter multiplicity triggers.The combined trigger efficiency is obtained from the fraction of events with one of these triggers missing as a function of the polar angle of the event thrust axis. 
This takes into account most of the correlations among triggers.A sizeable inefficiency is only observed for events in the very forward region of the detector,where hadrons can escape through the beam pipe.Trigger efficiencies,including all steps of the trigger system,between99.829% and99.918%are obtained for the various data sets.Trigger inefficiencies determined for data sets taken in the same year are statistically bining those data sets results in statistical errors of at most0.12‰which is assigned as systematic error to all data sets.The background from other Z decays is found to be small:2.9‰essentially only from e+e−→τ+τ−(γ).The uncertainty on this number is negligible compared to the total systematic error.The determination of the non-resonant background,mainly e+e−→e+e−hadrons,is based on the measured distribution of the visible energy shown in Figure5.The Monte Carlo program PHOJET is used to simulate two-photon collision processes.The absolute cross section isderived by scaling the Monte Carlo to obtain the best agreement with our data in the low end√of the E vis spectrum:0.32≤E vis/s is observed.This is in agreement with results of a similar calculation performed with the DIAG36program.Beam related background(beam-gas and beam-wall interactions)is small.To the extent that the E vis spectrum is similar to that of e+e−→e+e−hadrons,it is accounted for by determining the absolute normalisation from the data.As a check,the non-resonant background is estimated by extrapolating an exponential dependence of the E vis spectrum from the low energy part into the signal region.This method yields consistent results.Based on these studies we assign an error on the measured hadron cross section of3pb due to the understanding of the non-resonant background.This errorassignment is supported by our measurements of the hadronic cross section at high energies √(130GeV≤certainties which scale with the cross section and absolute uncertainties are separated because they translate in a different way into errors on Z parameters,in particular on the total width. 
The scale error is further split into a part uncorrelated among the data samples,in this case consisting of the contribution of Monte Carlo statistics,and the rest which is taken to be fully correlated and amounts to0.39‰.The results of the e+e−→hadrons(γ)cross section measurements are discussed in Sec-tion10.7e+e−→µ+µ−(γ)Event SelectionThe selection of e+e−→µ+µ−(γ)in the1993and1994data is similar to the selection applied in previous years described in Reference[4].Two muons in the polar angular region|cosθ|<0.8 are required.Most of the muons,88%,are identified by a reconstructed track in the muon spectrometer.Muons are also identified by their minimum ionising particle(MIP)signature in the inner sub-detectors,if less than two muon chamber layers are hit.A muon candidate is denoted as a MIP,if at least one of the following conditions is fulfilled:1.A track in the central tracking chamber must point within5◦in azimuth to a cluster inthe electromagnetic calorimeter with an energy less than2GeV.2.On a road from the vertex through the barrel hadron calorimeter,at leastfive out of amaximum of32cells must be hit,with an average energy of less than0.4GeV per cell.3.A track in the central chamber or a low energy electromagnetic cluster must point within10◦in azimuth to a muon chamber hit.In addition,both the electromagnetic and the hadronic energy in a cone of12◦half-opening angle around the MIP candidate,corrected for the energy loss of the particle,must be less than 5GeV.Events of the reaction e+e−→µ+µ−(γ)are selected by the following criteria:1.The event must have a low multiplicity in the calorimeters N cl≤15.2.If at least one muon is reconstructed in the muon chambers,the maximum muon momen-tum must satisfy p max>0.6E b.If both muons are identified by their MIP signature there must be two tracks in the central tracking chamber with at least one with a transverse momentum larger than3GeV.3.The acollinearity angleξmust be less than90◦,40◦or5◦if two,one or no muons arereconstructed in the muon chambers.4.The event must be consistent with an origin of an e+e−-interaction requiring at least onetime measurement of a scintillation counter,associated to a muon candidate,to coincide within±3ns with the beam crossing.Also,there must be a track in the central tracking chamber with a distance of closest approach to the beam axis of less than5mm.As an example,Figure11shows the distribution of the maximum measured muon momen-tum for candidates in the1993−94data compared to the expectation for signal and backgroundprocesses.The acollinearity angle distribution of the selected muon pairs is shown in Figure12. 
The experimental angular resolution and radiation effects are well reproduced by the Monte Carlo simulation.

The analysis of the 1995 data in addition uses the newly installed forward-backward muon chambers. The fiducial volume is extended to |cosθ| < 0.9. Each event must have at least one track in the central tracking chamber with a distance of closest approach in the transverse plane of less than 1 mm and a scintillation counter time coinciding within ±5 ns with the beam crossing. The rejection of cosmic ray muons in the 1995 data is illustrated in Figure 13. For events with muons reconstructed in the muon chambers the maximum muon momentum must be larger than 2 […]. […] e+e−→µ+µ−(γ) are summarised in Table 8. Resonant four-fermion final states with a high-mass muon pair and a low-mass fermion pair are accepted. These events are considered as part of the signal if the invariant mass of the muon pair exceeds 0.5√s.

The forward-backward asymmetry, A_FB, is defined as

A_FB = (σ_F − σ_B) / (σ_F + σ_B) ,   (2)

where σ_F is the cross section for events with the fermion scattered into the hemisphere which is forward with respect to the e− beam direction. The cross section in the backward hemisphere is denoted by σ_B. Events with hard photon bremsstrahlung are removed from the sample by requiring that the acollinearity angle of the event be less than 15°. The differential cross section in the angular region |cosθ| < 0.9 can then be approximated by the lowest-order angular dependence to sufficient precision:

dσ/dcosθ ∝ (3/8)(1 + cos²θ) + A_FB cosθ ,   (3)

with θ being the polar angle of the final-state fermion with respect to the e− beam direction. For each data set the forward-backward asymmetry is determined from a maximum likelihood fit to our data, where the likelihood function is defined as the product over the selected events, labelled i, of the differential cross section evaluated at their respective scattering angles θ_i:

L = ∏_i [ (3/8)(1 + cos²θ_i) + A_FB cosθ_i ] .

The probability of charge confusion strongly depends on the number of muon chamber layers used in the reconstruction. The charge confusion is determined for each event class individually. The average charge confusion probability, almost entirely caused by muons only measured in the central tracking chamber, is (3.2 ± 0.3)‰, (0.8 ± 0.1)‰ and (1.0 ± 0.3)‰ for the years 1993, 1994 and 1995, respectively, where the errors are statistical. The improvement in the charge determination for 1994 and 1995 reflects the use of the silicon microvertex detector. The correction for charge confusion is proportional to the forward-backward asymmetry and it is less than 0.001 for all data sets. To estimate a possible bias from a preferred orientation of events with the two muons measured to have the same charge, we determine the forward-backward asymmetry of these events using the track with a measured momentum closer to the beam energy. The asymmetry of this subsample is statistically consistent with the standard measurement. Including these like-sign events in the 1994 sample would change the measured asymmetry by 0.0008. Half of this number is taken as an estimate of a possible bias of the asymmetry measurement from charge confusion in the 1993−94 data. The same procedure is applied to the 1995 data and the statistical precision limits a possible bias to 0.0010. Differences of the momentum reconstruction in forward and backward events would cause a bias of the asymmetry measurement because of the requirement on the maximum measured muon momentum. We determine the loss of efficiency due to this cut separately for forward and backward events by selecting muon pairs without cuts on the reconstructed momentum. No significant difference is observed and the statistical error of this comparison limits the possible effect on the
forward-backward asymmetry to be less than0.0004and0.0009for the1993−94 and1995data,respectively.Other possible biases from the selection cuts on the measurement of the forward-backward asymmetry are negligible.This is verified by a Monte Carlo study which shows that events not selected for the asymmetry measurement,but inside thefiducial volume and withξ<15◦,do not have a different A FB value.The background from e+e−→τ+τ−(γ)events is found to have the same asymmetry as the signal and thus neither necessitates a correction nor causes a systematic uncertainty.The effect of the contribution from the two-photon process e+e−→e+e−µ+µ−,further reduced by the tighter acollinearity cut on the measured muon pair asymmetry,can be neglected.The forward-backward asymmetry of the cosmic ray muon background is measured to be−0.02±0.13using the events in the sideband of the distribution of closest approach to the interaction point. Weighted by the relative contribution to the data set this leads to corrections of−0.0007and +0.0003to the peak−2and peak+2asymmetries,respectively.On the peak this correction is negligible.The statistical uncertainty of the measurement of the cosmic ray asymmetry causes a systematic error of0.0001on the peak and between0.0003and0.0005for the peak−2and peak+2data sets.The systematic uncertainties on the measurement of the muon forward-backward asymmetry are summarised in Table9.In1993−94the total systematic error amounts to0.0008at the peak points and to0.0009at the off-peak points due to the larger contamination of cosmic ray muons.For the1995data the determination of systematic errors is limited by the number of events taken with the new detector configuration and the total error is estimated to be0.0015.In Figure15the differential cross sections dσ/dcosθmeasured from the1993−95data sets are shown for three different centre-of-mass energies.The data are corrected for detector acceptance and charge confusion.Data sets with a centre-of-mass energy close to m Z,as well as the data at peak−2and the data at peak+2,are combined.The data are compared to the differential cross section shape given in Equation3.The results of the total cross section and forward-backward asymmetry measurements in e+e−→µ+µ−(γ)are presented in Section10.8e+e−→τ+τ−(γ)Event SelectionThe selection of e+e−→τ+τ−(γ)events aims to select all hadronic and leptonic decay modes of the tau.Z decays into tau leptons are distinguished from other Z decays by the lower visible energy due to the presence of neutrinos and the lower particle multiplicity as compared to hadronic Z pared to our previous analysis[4]the selection of e+e−→τ+τ−(γ) events is extended to a larger polar angular range,|cosθt|≤0.92,whereθt is defined by the thrust axis of the event.Event candidates are required to have a jet,constructed from calorimetric energy de-posits[36]and muon tracks,with an energy of at least8GeV.Energy deposits in the hemisphere opposite to the direction of this most energetic jet are combined to form a second jet.The two jets must have an acollinearity angleξ<10◦.There is no energy requirement on the second jet.High multiplicity hadronic Z decays are rejected by allowing at most three tracks matched to any of the two jets.In each of the two event hemispheres there should be no track with an angle larger than18◦with respect to the jet axis.Resonant four-fermionfinal states with a high mass tau pair and a low mass fermion pair are mostly kept in the sample.The multiplicity cut affects only tau decays into three charged 
particles with the soft fermion close in space, leading to corrections of less than 1‰.
If the energy in the electromagnetic calorimeter of the first jet exceeds 85%, or the energy of the second jet exceeds 80%, of the beam energy with a shape compatible with an electromagnetic shower, the event is classified as e+e− → e+e−(γ) background and hence rejected. Background from e+e− → µ+µ−(γ) is removed by requiring that there be no isolated muon with a momentum larger than 80% of the beam energy and that the sum of all muon momenta does not exceed 1.5 E_b. Events are rejected if they are consistent with the signature of two MIPs.
To suppress background from cosmic ray events, the time of scintillation counter hits associated to muon candidates must be within ±5 ns of the beam crossing. In addition, the track in the muon chambers must be consistent with originating from the interaction point.
In Figures 16 to 19 the energy in the most energetic jet, the number of tracks associated to both jets, the acollinearity between the two jets and the distribution of |cosθt| are shown for the 1994 data. Data and Monte Carlo expectations are compared after all cuts are applied, except the one under study. Good agreement between data and Monte Carlo is observed. Small discrepancies seen in Figure 17 are due to the imperfect description of the track reconstruction efficiency in the central chamber. Their impact on the total cross section measurement is small and is included in the systematic error given below.
Tighter selection cuts must be applied in the region between the barrel and end-cap parts of the BGO calorimeter and in the end-cap itself, reducing the selection efficiency (see Figure 19). This is due to the increasing background from Bhabha scattering. Most importantly, the shower shape in the hadron calorimeter is also used to identify candidate electrons, and the cuts on the energy of the first and second jet in the electromagnetic end-cap calorimeter are tightened to 75% of the beam energy.
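As an aside to the forward-backward asymmetry fit described above for the muon-pair sample, the lowest-order angular shape of Equation 3 and the event-by-event likelihood can be turned into a small numerical sketch. The following Python snippet is only an illustration under stated assumptions (the 3/8(1+cos²θ) + A_FB·cosθ shape, a |cosθ| < 0.9 fiducial range, and invented toy angles); it is not the experiment's analysis code, and all function names are hypothetical.

import numpy as np
from scipy.optimize import minimize_scalar

C_MAX = 0.9  # assumed fiducial range |cos(theta)| < 0.9


def neg_log_likelihood(afb, cos_theta):
    # Lowest-order shape: dsigma/dcos(theta) ~ 3/8 (1 + cos^2 theta) + A_FB * cos(theta)
    pdf = 3.0 / 8.0 * (1.0 + cos_theta ** 2) + afb * cos_theta
    # Normalise over the fiducial range; the A_FB term integrates to zero there.
    norm = 2.0 * (3.0 / 8.0) * (C_MAX + C_MAX ** 3 / 3.0)
    return -np.sum(np.log(pdf / norm))


def fit_afb(cos_theta):
    # Bounds are chosen so the shape stays positive over the fiducial range.
    result = minimize_scalar(neg_log_likelihood, args=(cos_theta,),
                             bounds=(-0.7, 0.7), method="bounded")
    return result.x


# Toy usage with uniformly generated angles; real input would be the
# measured polar angles of the selected muon pairs.
rng = np.random.default_rng(1)
print(fit_afb(rng.uniform(-C_MAX, C_MAX, size=5000)))

Corrections such as those for charge confusion or the momentum cut, as discussed in the text, would be applied to the fitted value afterwards rather than inside the likelihood itself.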

Choosing and Persisting: A Comparison of Intertemporal Choice and Delay of Gratification


An appropriately matched pair of SS and LL rewards is usually chosen through pilot testing, so that the difference between SS and LL is large enough for participants to be willing to choose the latter, yet small enough for the SS to remain sufficiently tempting to children, thereby avoiding ceiling and floor effects in waiting time (Mischel & Underwood, 1974). Animal experiments appear in research on both intertemporal choice and delay of gratification; the paradigms used in these animal studies are intuitive and easy to grasp, and they are also instructive for research with child participants. As research questions have expanded and paradigms have improved, intertemporal choice and delay of gratification have tended to converge in the populations they study.

3.2 Research content

Intertemporal choice focuses on participants' time discounting, whereas delay of gratification is more concerned with individual differences in waiting time and with self-control strategies and their effectiveness (Ainslie, 1975; Mischel et al., 1989). If intertemporal-choice researchers place their emphasis on relatively high-level cognitive processes such as computation, analysis, reasoning, and weighing of options, then delay-of-gratification researchers place theirs on more basic, instinctive responses such as emotion, willpower, and motivational strength.

Time discounting is the basic assumption of intertemporal-choice research and also one of its central topics. It refers to the fact that, in an intertemporal choice, an individual first discounts the value of the delayed outcome according to the length of the delay and only then compares the two outcomes (Frederick et al., 2002; Scholten & Read, 2010). Economists strive to find a general formula describing the relationship between the degree of discounting, the outcome, and the delay, which takes the form of refining mathematical models; psychologists are more concerned with how external factors influence an individual's degree of time discounting, which takes the form of uncovering cognitive and neural mechanisms.

Delay-of-gratification researchers do not focus on the degree of discounting. They are more interested in individual differences in waiting time, examining in detail which self-control strategies participants select and how they use them, and they favour longitudinal studies that relate children's performance in the experiment to their personality and behavioural characteristics (Mischel et al., 1989).

3.3 Research paradigms

Research on intertemporal choice focuses on the participant's choice process and requires a series of choices; delay of gratification focuses more on the persistence process, requiring participants to complete the waiting period that follows their choice. Although some intertemporal-choice tasks also involve waiting, this differs fundamentally from delay of gratification: in a delay-of-gratification task the participant can choose to terminate the wait at any time, whereas in an intertemporal-choice task the participant can only passively wait for the delay to end (Evans & Beran, 2007).
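To make the time-discounting assumption described in Section 3.2 concrete, the short Python sketch below compares a smaller-sooner (SS) and a larger-later (LL) reward under a hyperbolic discount function V = A/(1 + kD). The specific amounts, delays, and k values are invented for illustration and are not taken from any study cited above.

def hyperbolic_value(amount, delay, k):
    # Hyperbolically discounted present value: V = A / (1 + k * D)
    return amount / (1.0 + k * delay)


def choose(ss_amount, ss_delay, ll_amount, ll_delay, k):
    # Return the option with the larger discounted value for discount rate k.
    v_ss = hyperbolic_value(ss_amount, ss_delay, k)
    v_ll = hyperbolic_value(ll_amount, ll_delay, k)
    return "SS" if v_ss > v_ll else "LL"


# 20 units now versus 35 units in 30 days:
print(choose(20, 0, 35, 30, k=0.01))  # weak discounter  -> "LL"
print(choose(20, 0, 35, 30, k=0.10))  # steep discounter -> "SS"

A steeper discount rate k shrinks the present value of the delayed reward faster, which is exactly the individual difference that the intertemporal-choice literature summarises with a single discount parameter.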

Effects of Different Chemical Fruit-Thinning Agents on Thinning Efficacy and Fruit Quality of 'Fuji' Apple


Tianjin Agricultural Sciences, 2021, 27(3): 17-24, 28 · Crop Cultivation and Protected Horticulture

WANG Anli 1, LI Wensheng 1, ZHOU Wenjing 1, WU Zezhen 1, ZHANG Zhenjun 2, HU Anhong 2
(1. College of Forestry and Horticulture, Xinjiang Agricultural University, Urumqi, Xinjiang 830052, China; 2. Aksu Regional Institute of Forestry Science, Aksu, Xinjiang 843000, China)

Abstract: To compare the thinning efficacy and cost of different chemical fruit-thinning agents on 'Red Fuji' apple and to identify agents and concentrations suited to 'Red Fuji' production in Aksu, with the aims of reducing the cost of manual flower thinning, regulating crop load, and improving fruit quality, 'Xinhong No. 1' trees at the full-bearing stage were used as the test material. Following an orthogonal experimental design, different concentrations of carbaryl, NAA (naphthaleneacetic acid), 6-BA, and a commercial thinning agent (Shandong) were sprayed when fruit diameter was 6-8 mm, 10-12 mm, 14-16 mm, and 18-20 mm, with hand thinning as the control, and the thinning efficacy, cost, and effect on fruit quality of each treatment were analysed. Different agents and different concentrations differed in thinning efficacy, cost, and effect on fruit quality. Judged by a combined comparison of single-fruit rate, double-fruit rate, empty fruiting-spur rate, thinning rate, fruit diameter, and cost, spraying 800 mg·L-1 carbaryl at a fruit diameter of 6-8 mm came closest to hand thinning, at only 13.08% of its cost; spraying 1200 mg·L-1 carbaryl at 10-12 mm came closest to hand thinning, at 16.59% of its cost; spraying 400 mg·L-1 carbaryl at 14-16 mm came closest to hand thinning, at 9.57% of its cost; and spraying 150 mg·L-1 6-BA at 18-20 mm came closest to hand thinning, at 67.74% of its cost. None of the treatments had a significant effect on the main fruit-quality indices.

Key words: 'Fuji' apple; chemical thinning agent; fruit setting rate; cost; quality
CLC number: S661.1    Document code: A    DOI: 10.3969/j.issn.1006-6500.2021.03.005

TPO 35: Three Reading Passages (Original Text, Translation, Questions, Answers, and Background Knowledge)


tpo35三篇阅读原文译文题目答案译文背景知识阅读-1 (1)原文 (2)译文 (5)题目 (8)答案 (17)背景知识 (18)阅读-2 (21)原文 (21)译文 (24)题目 (27)答案 (36)背景知识 (36)阅读-3 (39)原文 (39)译文 (43)题目 (46)答案 (54)背景知识 (55)阅读-1原文Earth’ s Age①One of the first recorded observers to surmise a long age for Earth was the Greek historian Herodotus, who lived from approximately 480 B.C. to 425 B.C. He observed that the Nile River Delta was in fact a series of sediment deposits built up in successive floods. By noting that individual floods deposit only thin layers of sediment, he was able to conclude that the Nile Delta had taken many thousands of years to build up. More important than the amount of time Herodotus computed, which turns out to be trivial compared with the age of Earth, was the notion that one could estimate ages of geologic features by determining rates of the processes responsible for such features, and then assuming the rates to be roughly constant over time. Similar applications of this concept were to be used again and again in later centuries to estimate the ages of rock formations and, in particular, of layers of sediment that had compacted and cemented to form sedimentary rocks.②It was not until the seventeenth century that attempts were madeagain to understand clues to Earth's history through the rock record. Nicolaus Steno (1638-1686) was the first to work out principles of the progressive depositing of sediment in Tuscany. However, James Hutton (1726-1797), known as the founder of modern geology, was the first to have the important insight that geologic processes are cyclic in nature. Forces associated with subterranean heat cause land to be uplifted into plateaus and mountain ranges. The effects of wind and water then break down the masses of uplifted rock, producing sediment that is transported by water downward to ultimately form layers in lakes, seashores, or even oceans. Over time, the layers become sedimentary rock. These rocks are then uplifted sometime in the future to form new mountain ranges, which exhibit the sedimentary layers (and the remains of life within those layers) of the earlier episodes of erosion and deposition.③Hutton's concept represented a remarkable insight because it unified many individual phenomena and observations into a conceptual picture of Earth’s history. With the further assumption that these geologic processes were generally no more or less vigorous than they are today, Hutton's examination of sedimentary layers led him to realize that Earth's history must be enormous, that geologic time is anabyss and human history a speck by comparison.④After Hutton, geologists tried to determine rates of sedimentation so as to estimate the age of Earth from the total length of the sedimentary or stratigraphic record. Typical numbers produced at the turn of the twentieth century were 100 million to 400 million years. These underestimated the actual age by factors of 10 to 50 because much of the sedimentary record is missing in various locations and because there is a long rock sequence that is older than half a billion years that is far less well defined in terms of fossils and less well preserved.⑤Various other techniques to estimate Earth's age fell short, and particularly noteworthy in this regard were flawed determinations of the Sun's age. It had been recognized by the German philosopher Immanuel Kant (1724-1804) that chemical reactions could not supply the tremendous amount of energy flowing from the Sun for more than about a millennium. 
Two physicists during the nineteenth century both came up with ages for the Sun based on the Sun's energy coming from gravitational contraction. Under the force of gravity, the compressionresulting from a collapse of the object must release energy. Ages for Earth were derived that were in the tens of millions of years, much less than the geologic estimates of the lime.⑥It was the discovery of radioactivity at the end of the nineteenth century that opened the door to determining both the Sun’s energy source and the age of Earth. From the initial work came a suite of discoveries leading to radio isotopic dating, which quickly led to the realization that Earth must be billions of years old, and to the discovery of nuclear fusion as an energy source capable of sustaining the Sun's luminosity for that amount of time. By the 1960s, both analysis of meteorites and refinements of solar evolution models converged on an age for the solar system, and hence for Earth, of 4.5 billion years.译文地球的年龄①希腊历史学家希罗多德是最早有记录的推测地球年龄的观察家之一,他生活在大约公元前480年到公元前425年。

Peters (2010) Episodic Future Thinking Reduces Reward Delay Discounting


NeuronArticleEpisodic Future Thinking ReducesReward Delay Discounting through an Enhancement of Prefrontal-Mediotemporal InteractionsJan Peters1,*and Christian Bu¨chel11NeuroimageNord,Department of Systems Neuroscience,University Medical Center Hamburg-Eppendorf,Hamburg20246,Germany*Correspondence:j.peters@uke.uni-hamburg.deDOI10.1016/j.neuron.2010.03.026SUMMARYHumans discount the value of future rewards over time.Here we show using functional magnetic reso-nance imaging(fMRI)and neural coupling analyses that episodic future thinking reduces the rate of delay discounting through a modulation of neural decision-making and episodic future thinking networks.In addition to a standard control condition,real subject-specific episodic event cues were presented during a delay discounting task.Spontaneous episodic imagery during cue processing predicted how much subjects changed their preferences toward more future-minded choice behavior.Neural valuation signals in the anterior cingulate cortex and functional coupling of this region with hippo-campus and amygdala predicted the degree to which future thinking modulated individual preference functions.A second experiment replicated the behavioral effects and ruled out alternative explana-tions such as date-based processing and temporal focus.The present data reveal a mechanism through which neural decision-making and prospection networks can interact to generate future-minded choice behavior.INTRODUCTIONThe consequences of choices are often delayed in time,and in many cases it pays off to wait.While agents normally prefer larger over smaller rewards,this situation changes when rewards are associated with costs,such as delays,uncertainties,or effort requirements.Agents integrate such costs into a value function in an individual manner.In the hyperbolic model of delay dis-counting(also referred to as intertemporal choice),for example, a subject-specific discount parameter accurately describes how individuals discount delayed rewards in value(Green and Myer-son,2004;Mazur,1987).Although the degree of delay discount-ing varies considerably between individuals,humans in general have a particularly pronounced ability to delay gratification, and many of our choices only pay off after months or even years. 
It has been speculated that the capacity for episodic future thought(also referred to as mental time travel or prospective thinking)(Bar,2009;Schacter et al.,2007;Szpunar et al.,2007) may underlie the human ability to make choices with high long-term benefits(Boyer,2008),yielding higher evolutionaryfitness of our species.At the neural level,a number of models have been proposed for intertemporal decision-making in humans.In the so-called b-d model(McClure et al.,2004,2007),a limbic system(b)is thought to place special weight on immediate rewards,whereas a more cognitive,prefrontal-cortex-based system(d)is more involved in patient choices.In an alternative model,the values of both immediate and delayed rewards are thought to be repre-sented in a unitary system encompassing medial prefrontal cortex(mPFC),posterior cingulate cortex(PCC),and ventral striatum(VS)(Kable and Glimcher,2007;Kable and Glimcher, 2010;Peters and Bu¨chel,2009).Finally,in the self-control model, values are assumed to be represented in structures such as the ventromedial prefrontal cortex(vmPFC)but are subject to top-down modulation by prefrontal control regions such as the lateral PFC(Figner et al.,2010;Hare et al.,2009).Both the b-d model and the self-control model predict that reduced impulsivity in in-tertemporal choice,induced for example by episodic future thought,would involve prefrontal cortex regions implicated in cognitive control,such as the lateral PFC or the anterior cingulate cortex(ACC).Lesion studies,on the other hand,also implicated medial temporal lobe regions in decision-making and delay discounting. In rodents,damage to the basolateral amygdala(BLA)increases delay discounting(Winstanley et al.,2004),effort discounting (Floresco and Ghods-Sharifi,2007;Ghods-Sharifiet al.,2009), and probability discounting(Ghods-Sharifiet al.,2009).Interac-tions between the ACC and the BLA in particular have been proposed to regulate behavior in order to allow organisms to overcome a variety of different decision costs,including delays (Floresco and Ghods-Sharifi,2007).In line with thesefindings, impairments in decision-making are also observed in humans with damage to the ACC or amygdala(Bechara et al.,1994, 1999;Manes et al.,2002;Naccache et al.,2005).Along similar lines,hippocampal damage affects decision-making.Disadvantageous choice behavior has recently been documented in patients suffering from amnesia due to hippo-campal lesions(Gupta et al.,2009),and rats with hippocampal damage show increased delay discounting(Cheung and Cardinal,2005;Mariano et al.,2009;Rawlins et al.,1985).These observations are of particular interest given that hippocampal138Neuron66,138–148,April15,2010ª2010Elsevier Inc.damage impairs the ability to imagine novel experiences (Hassa-bis et al.,2007).Based on this and a range of other studies,it has recently been proposed that hippocampus and parahippocam-pal cortex play a crucial role in the formation of vivid event repre-sentations,regardless of whether they lie in the past,present,or future (Schacter and Addis,2009).The hippocampus may thus contribute to decision-making through its role in self-projection into the future (Bar,2009;Schacter et al.,2007),allowing an organism to evaluate future payoffs through mental simulation (Johnson and Redish,2007;Johnson et al.,2007).Future thinking may thus affect intertemporal choice through hippo-campal involvement.Here we used model-based fMRI,analyses of functional coupling,and extensive behavioral procedures to investigate how episodic future 
thinking affects delay discounting.In Exper-iment 1,subjects performed a classical delay discounting task(Kable and Glimcher,2007;Peters and Bu¨chel,2009)that involved a series of choices between smaller immediate and larger delayed rewards,while brain activity was measured using fMRI.Critically,we introduced a novel episodic condition that involved the presentation of episodic cue words (tags )obtained during an extensive prescan interview,referring to real,subject-specific future events planned for the respective day of reward delivery.This design allowed us to assess individual discount rates separately for the two experimental conditions,allowing us to investigate neural mechanisms mediating changes in delay discounting associated with episodic thinking.In a second behavioral study,we replicated the behavioral effects of Exper-iment 1and addressed a number of alternative explanations for the observed effects of episodic tags on discount rates.RESULTSExperiment 1:Prescan InterviewOn day 1,healthy young volunteers (n =30,mean age =25,15male)completed a computer-based delay discounting proce-dure to estimate their individual discount rate (Peters and Bu ¨-chel,2009).This discount rate was used solely for the purpose of constructing subject-specific trials for the fMRI session (see Experimental Procedures ).Furthermore,participants compiled a list of events that they had planned in the next 7months (e.g.,vacations,weddings,parties,courses,and so forth)andrated them on scales from 1to 6with respect to personal rele-vance,arousal,and valence.For each participant,seven subject-specific events were selected such that the spacing between events increased with increasing delay to the episode,and that events were roughly matched based on personal rele-vance,arousal,and valence.Multiple regression analysis of these ratings across the different delays showed no linear effects (relevance:p =0.867,arousal:p =0.120,valence:p =0.977,see Figure S1available online).For each subject,a separate set of seven delays was computed that was later used as delays in the control condition.Median and range for the delays used in each condition are listed in Table S1(available online).For each event,a label was selected that would serve as a verbal tag for the fMRI session.Experiment 1:fMRI Behavioral ResultsOn day 2,volunteers performed two sessions of a delay dis-counting procedure while fMRI was measured using a 3T Siemens Scanner with a 32-channel head-coil.In each session,subjects made a total of 118choices between 20V available immediately and larger but delayed amounts.Subjects were told that one of their choices would be randomly selected and paid out following scanning,with the respective delay.Critically,in half the trials,an additional subject-specific episodic tag (see above,e.g.,‘‘vacation paris’’or ‘‘birthday john’’)was displayed based on the prescan interview (see Figure 1)indicating which event they had planned on the particular day (episodic condi-tion),whereas in the remaining trials,no episodic tag was pre-sented (control condition).Amount and waiting time were thus displayed in both conditions,but only the episodic condition involved the presentation of an additional subject-specific event tag.Importantly,nonoverlapping sets of delays were used in the two conditions.Following scanning,subjects rated for each episodic tag how often it evoked episodic associations during scanning (frequency of associations:1,never;to 6,always)and how vivid these associations were (vividness of associa-tions:1,not 
vivid at all;to 6,highly vivid;see Figure S1).Addition-ally,written reports were obtained (see Supplemental Informa-tion ).Multiple regression revealed no significant linear effects of delay on postscan ratings (frequency:p =0.224,vividness:p =0.770).We averaged the postscan ratings acrosseventsFigure 1.Behavioral TaskDuring fMRI,subjects made repeated choices between a fixed immediate reward of 20V and larger but delayed amounts.In the control condi-tion,amounts were paired with a waiting time only,whereas in the episodic condition,amounts were paired with a waiting time and a subject-specific verbal episodic tag indicating to the subjects which event they had planned at the respective day of reward delivery.Events were real and collected in a separate testing session prior to the day of scanning.NeuronEpisodic Modulation of Delay DiscountingNeuron 66,138–148,April 15,2010ª2010Elsevier Inc.139and the frequency/vividness dimensions,yielding an‘‘imagery score’’for each subject.Individual participants’choice data from the fMRI session were then analyzed byfitting hyperbolic discount functions to subject-specific indifference points to obtain discount rates (k-parameters),separately for the episodic and control condi-tions(see Experimental Procedures).Subjective preferences were well-characterized by hyperbolic functions(median R2 episodic condition=0.81,control condition=0.85).Discount functions of four exemplary subjects are shown in Figure2A. For both conditions,considerable variability in the discount rate was observed(median[range]of discount rates:control condition=0.014[0.003–0.19],episodic condition=0.013 [0.002–0.18]).To account for the skewed distribution of discount rates,all further analyses were conducted on the log-trans-formed k-parameters.Across subjects,log-transformed discount rates were significantly lower in the episodic condition compared with the control condition(t(29)=2.27,p=0.016),indi-cating that participants’choice behavior was less impulsive in the episodic condition.The difference in log-discount rates between conditions is henceforth referred to as the episodic tag effect.Fitting hyperbolic functions to the median indifference points across subjects also showed reduced discounting in the episodic condition(discount rate control condition=0.0099, episodic condition=0.0077).The size of the tag effect was not related to the discount rate in the control condition(p=0.56). 
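The hyperbolic fit to indifference points described in the preceding paragraph can be sketched as follows. This is a generic reconstruction under stated assumptions (a fixed immediate reference of 20 euros, indifference points expressed as the delayed amount judged equivalent to it, and a simple least-squares fit), not the authors' actual analysis code; the data values and helper names are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

IMMEDIATE = 20.0  # assumed fixed immediate reference amount (euros)


def indifference_amount(delay, k):
    # Hyperbolic discounting implies 20 = A / (1 + k * D), so the
    # indifference (delayed) amount is A = 20 * (1 + k * D).
    return IMMEDIATE * (1.0 + k * delay)


def fit_discount_rate(delays, indiff_amounts):
    # Least-squares estimate of the discount rate k from indifference points.
    popt, _ = curve_fit(indifference_amount,
                        np.asarray(delays, dtype=float),
                        np.asarray(indiff_amounts, dtype=float),
                        p0=[0.01], bounds=(1e-6, 10.0))
    return popt[0]


# Hypothetical indifference points for one subject in one condition.
delays = [7, 30, 90, 180]            # days
indiff = [22.0, 27.0, 42.0, 60.0]    # delayed euros judged equal to 20 now
k = fit_discount_rate(delays, indiff)
print(k, np.log(k))  # the log-transformed k would enter the group statistics

Fitting the two conditions separately and comparing log(k_control) with log(k_episodic) reproduces, in spirit, the tag effect defined in the text.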
We next hypothesized that the tag effect would be positively correlated with postscan ratings of episodic thought(imagery scores,see above).Robust regression revealed an increase in the size of the tag effect with increasing imagery scores (t=2.08,p=0.023,see Figure2B),suggesting that the effect of the tags on preferences was stronger the more vividly subjects imagined the episodes.Examples of written postscan reports are provided in the Supplemental Results for participants from the entire range of imagination ratings.We also correlated the tag effect with standard neuropsychological measures,the Sensation Seeking Scale(SSS)V(Beauducel et al.,2003;Zuck-erman,1996)and the Behavioral Inhibition Scale/Behavioral Approach Scale(BIS/BAS)(Carver and White,1994).The tag effect was positively correlated with the experience-seeking subscale of the SSS(p=0.026)and inversely correlated with the reward-responsiveness subscale of the BIS/BAS scales (p<0.005).Repeated-measures ANOVA of reaction times(RTs)as a func-tion of option value(lower,similar,or higher relative to the refer-ence option;see Experimental Procedures and Figure2C)did not show a main effect of condition(p=0.712)or a condition 3value interaction(p=0.220),but revealed a main effect of value(F(1.8,53.9)=16.740,p<0.001).Post hoc comparisons revealed faster RTs for higher-valued options relative to similarly (p=0.002)or lower valued options(p<0.001)but no difference between lower and similarly valued options(p=0.081).FMRI DataFMRI data were modeled using the general linear model(GLM) as implemented in SPM5.Subjective value of each decision option was calculated by multiplying the objective amount of each delayed reward with the discount fraction estimated behaviorally based on the choices during scanning,and included as a parametric regressor in the GLM.Note that discount rates were estimated separately for the control and episodic conditions(see above and Figure2),and we thus used condition-specific k-parameters for calculation of the subjective value regressor.Additional parametric regressors for inverse delay-to-reward and absolute reward magnitude, orthogonalized with respect to subjective value,were included in theGLM.Figure2.Behavioral Data from Experiment1Shown are experimentally derived discount func-tions from the fMRI session for four exemplaryparticipants(A),correlation with imagery scores(B),and reaction times(RTs)(C).(A)Hyperbolicfunctions werefit to the indifference points sepa-rately for the control(dashed lines)and episodic(solid lines,filled circles)conditions,and thebest-fitting k-parameters(discount rates)and R2values are shown for each subject.The log-trans-formed difference between discount rates wastaken as a measure of the effect of the episodictags on choice preferences.(B)Robust regressionrevealed an association between log-differences indiscount rates and imagery scores obtained frompostscan ratings(see text).(C)RTs were signifi-cantly modulated by option value(main effectvalue p<0.001)with faster responses in trialswith a value of the delayed reward higher thanthe20V reference amount.Note that althoughseven delays were used for each condition,somedata points are missing,e.g.,onlyfive delay indif-ference points for the episodic condition areplotted for sub20.This indicates that,for the twolongest delays,this subject never chose the de-layed reward.***p<0.005.Error bars=SEM.Neuron Episodic Modulation of Delay Discounting140Neuron66,138–148,April15,2010ª2010Elsevier Inc.Episodic Tags Activate the Future Thinking 
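The construction of the trial-wise subjective-value regressor described above can be illustrated with the minimal sketch below. It only shows the value computation (delayed amount multiplied by the condition-specific hyperbolic discount fraction) for a few invented trials; convolution with the hemodynamic response and the orthogonalization of the additional regressors are omitted, and none of the names come from the original analysis.

import numpy as np


def discount_fraction(delay_days, k):
    # Hyperbolic discount fraction 1 / (1 + k * D).
    return 1.0 / (1.0 + k * delay_days)


def subjective_value_modulator(amounts, delays, conditions, k_control, k_episodic):
    # Trial-wise subjective values using the condition-specific discount rate,
    # mean-centred as is usual for a parametric modulator.
    values = np.array([
        amount * discount_fraction(delay, k_episodic if cond == "episodic" else k_control)
        for amount, delay, cond in zip(amounts, delays, conditions)
    ])
    return values - values.mean()


# Hypothetical trials: delayed amount (euros), delay (days), condition.
amounts = [25.0, 40.0, 32.0, 55.0]
delays = [7, 30, 60, 120]
conds = ["control", "episodic", "control", "episodic"]
print(subjective_value_modulator(amounts, delays, conds,
                                 k_control=0.014, k_episodic=0.012))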
NetworkWe first analyzed differences in the condition regressors without parametric pared to those of the control condi-tion,BOLD responses to the presentation of the delayed reward in the episodic condition yielded highly significant activations (corrected for whole-brain volume)in an extensive network of brain regions previously implicated in episodic future thinking (Addis et al.,2007;Schacter et al.,2007;Szpunar et al.,2007)(see Figure 3and Table S2),including retrosplenial cortex (RSC)/PCC (peak MNI coordinates:À6,À54,14,peak z value =6.26),left lateral parietal cortex (LPC,À44,À66,32,z value =5.35),and vmPFC (À8,34,À12,z value =5.50).Distributed Neural Coding of Subjective ValueWe then replicated previous findings (Kable and Glimcher,2007;Kable and Glimcher,2010;Peters and Bu¨chel,2009)using a conjunction analysis (Nichols et al.,2005)searching for regions showing a positive correlation between the height of the BOLD response and subjective value in the control and episodic condi-tions in a parametric analysis (Figure 4A and Table S3).Note that this is a conservative analysis that requires that a given voxel exceed the statistical threshold in both contrasts separately.This analysis revealed clusters in the lateral orbitofrontal cortex (OFC,À36,50,À10,z value =4.50)and central OFC (À18,12,À14,z value =4.05),bilateral VS (right:10,8,0,z value =4.22;left:À10,8,À6,z value =3.51),mPFC (6,26,16,z value =3.72),and PCC (À2,À28,24,z value =4.09),representing subjective (discounted)value in both conditions.We next analyzed the neural tag effect,i.e.,regions in which the subjective value correlation was greater for the episodic condi-tion as compared with the control condition (Figure 4B and Table S4).This analysis revealed clusters in the left LPC (À66,À42,32,z value =4.96,),ACC (À2,16,36,z value =4.76),left dorsolateral prefrontal cortex (DLPFC,À38,36,36,z value =4.81),and right amygdala (24,2,À24,z value =3.75).Finally,we performed a triple-conjunction analysis,testing for regions that were correlated with subjective value in both conditions,but in which the value correlation increased in the episodic condition.Only left LPC showed this pattern (À66,À42,30,z value =3.55,see Figure 4C and Table S5),the same region that we previously identified as delay-specific in valuation (Petersand Bu¨chel,2009).There were no regions in which the subjective value correlation was greater in the control condition when compared with the episodic condition at p <0.001uncorrected.ACC Valuation Signals and Functional Connectivity Predict Interindividual Differences in Discount Function ShiftsWe next correlated differences in the neural tag effect with inter-individual differences in the size of the behavioral tag effect.To this end,we performed a simple regression analysis in SPM5on the single-subject contrast images of the neural tag effect (i.e.,subjective value correlation episodic >control)using the behavioral tag effect [log(k control )–log(k episodic )]as an explana-tory variable.This analysis revealed clusters in the bilateral ACC (right:18,34,18,z value =3.95,p =0.021corrected,left:À20,34,20,z value =3.52,Figure 5,see Table S6for a complete list).Coronal sections (Figure 5C)clearly show that both ACC clusters are located in gray matter of the cingulate sulcus.Because ACC-limbic interactions have previously been impli-cated in the control of choice behavior (Floresco and Ghods-Sharifi,2007;Roiser et al.,2009),we next analyzed functional coupling with the right ACC from the above regression contrast 
(coordinates 18,34,18,see Figure 6A)using a psychophysiolog-ical interaction analysis (PPI)(Friston et al.,1997).Note that this analysis was conducted on a separate first-level GLM in which control and episodic trials were modeled as 10s miniblocks (see Experimental Procedures for details).We first identified regions in which coupling with the ACC changed in the episodic condition compared with the control condition (see Table S7)and then performed a simple regression analysis on these coupling parameters using the behavioral tag effect as an explanatory variable.The tag effect was associated with increased coupling between ACC and hippocampus (À32,À18,À16,z value =3.18,p =0.031corrected,Figure 6B)and ACC and left amygdala (À26,À4,À26,z value =2.95,p =0.051corrected,Figure 6B,see Table S8for a complete list of activa-tions).The same regression analysis in a second PPI with the seed voxel placed in the contralateral ACC region from the same regression contrast (À20,34,22,see above)yielded qual-itatively similar,though subthreshold,results in these same structures (hippocampus:À28,À32,À6,z value =1.96,amyg-dala:À28,À6,À16,z value =1.97).Experiment 2We conducted an additional behavioral experiment to address a number of alternative explanations for the observed effects of tags on choice behavior.First,it could be argued thatepisodicFigure 3.Categorical Effect of Episodic Tags on Brain ActivityGreater activity in lateral parietal cortex (left)and posterior cingulate/retrosplenial and ventro-medial prefrontal cortex (right)was observed in the episodic condition compared with the control condition.p <0.05,FWE-corrected for whole-brain volume.NeuronEpisodic Modulation of Delay DiscountingNeuron 66,138–148,April 15,2010ª2010Elsevier Inc.141tags increase subjective certainty that a reward would be forth-coming.In Experiment 2,we therefore collected postscan ratings of reward confidence.Second,it could be argued that events,always being associated with a particular date,may have shifted temporal focus from delay-based to more date-based processing.This would represent a potential confound,because date-associated rewards are discounted less than delay-associated rewards (Read et al.,2005).We therefore now collected postscan ratings of temporal focus (date-based versus delay-based).Finally,Experiment 1left open the question of whether the tag effect depends on the temporal specificity of the episodic cues.We therefore introduced an additional exper-imental condition that involved the presentation of subject-specific temporally unspecific future event cues.These tags (henceforth referred to as unspecific tags)were obtained by asking subjects to imagine events that could realistically happen to them in the next couple of months,but that were not directly tied to a particular point in time (see Experimental Procedures ).Episodic Imagery,Not Temporal Specificity,Reward Confidence,or Temporal Focus,Predicts the Size of the Tag EffectIn total,data from 16participants (9female)are included.Anal-ysis of pretest ratings confirmed that temporally unspecific and specific tags were matched in terms of personal relevance,arousal,valence,and preexisting associations (all p >0.15).Choice preferences were again well described by hyperbolic functions (median R 2control =0.84,unspecific =0.81,specific =0.80).We replicated the parametric tag effect (i.e.,increasing effect of tags on discount rates with increasing posttest imagery scores)in this independent sample for both temporally specific (p =0.047,Figure 7A)and 
temporally unspecific (p =0.022,Figure 7A)tags,showing that the effect depends on future thinking,rather than being specifically tied to the temporal spec-ificity of the event cues.Following testing,subjects rated how certain they were that a particular reward would actually be forth-coming.Overall,confidence in the payment procedure washighFigure 4.Neural Representation of Subjective Value (Parametric Analysis)(A)Regions in which the correlation with subjective value (parametric analysis)was significant in both the control and the episodic conditions (conjunction analysis)included central and lateral orbitofrontal cortex (OFC),bilateral ventral striatum (VS),medial prefrontal cortex (mPFC),and posterior cingulate cortex(PCC),replicating previous studies (Kable and Glimcher,2007;Peters and Bu¨chel,2009).(B)Regions in which the subjective value correlation was greater for the episodic compared with the control condition included lateral parietal cortex (LPC),ante-rior cingulate cortex (ACC),dorsolateral prefrontal cortex (DLPFC),and the right amygdala (Amy).(C)A conjunction analysis revealed that only LPC activity was positively correlated with subjective value in both conditions,but showed a greater regression slope in the episodic condition.No regions showed a better correlation with subjective value in the control condition.Error bars =SEM.All peaks are significant at p <0.001,uncorrected;(A)and (B)are thresholded at p <0.001uncorrected and (C)is thresholded at p <0.005,uncorrected for display purposes.NeuronEpisodic Modulation of Delay Discounting142Neuron 66,138–148,April 15,2010ª2010Elsevier Inc.(Figure 7B),and neither unspecific nor specific tags altered these subjective certainty estimates (one-way ANOVA:F (2,45)=0.113,p =0.894).Subjects also rated their temporal focus as either delay-based or date-based (see Experimental Procedures ),i.e.,whether they based their decisions on the delay-to-reward that was actually displayed,or whether they attempted to convert delays into the corresponding dates and then made their choices based on these dates.There was no overall significant effect of condition on temporal focus (one-way ANOVA:F (2,45)=1.485,p =0.237,Figure 7C),but a direct comparison between the control and the temporally specific condition showed a significant difference (t (15)=3.18,p =0.006).We there-fore correlated the differences in temporal focus ratings between conditions (control:unspecific and control:specific)with the respective tag effects (Figure 7D).There were no correlations (unspecific:p =0.71,specific:p =0.94),suggesting that the observed differences in discounting cannot be attributed to differences in temporal focus.High-Imagery,but Not Low-Imagery,Subjects Adjust Their Discount Function in an Episodic ContextFor a final analysis,we pooled the samples of Experiments 1and 2(n =46subjects in total),using only the temporally specific tag data from Experiment 2.We performed a median split into low-and high-imagery participants according to posttest imagery scores (low-imagery subjects:n =23[15/8Exp1/Exp2],imagery range =1.5–3.4,high-imagery subjects:n =23[15/8Exp1/Exp2],imagery range =3.5–5).The tag effect was significantly greater than 0in the high-imagery group (t (22)=2.6,p =0.0085,see Figure 7D),where subjects reduced their discount rate by onaverage 16%in the presence of episodic tags.In the low-imagery group,on the other hand,the tag effect was not different from zero (t (22)=0.573,p =0.286),yielding a significant group difference (t (44)=2.40,p 
=0.011).DISCUSSIONWe investigated the interactions between episodic future thought and intertemporal decision-making using behavioral testing and fMRI.Experiment 1shows that reward delay dis-counting is modulated by episodic future event cues,and the extent of this modulation is predicted by the degree of sponta-neous episodic imagery during decision-making,an effect that we replicated in Experiment 2(episodic tag effect).The neuroi-maging data (Experiment 1)highlight two mechanisms that support this effect:(1)valuation signals in the lateral ACC and (2)neural coupling between ACC and hippocampus/amygdala,both predicting the size of the tag effect.The size of the tag effect was directly related to posttest imagery scores,strongly suggesting that future thinking signifi-cantly contributed to this effect.Pooling subjects across both experiments revealed that high-imagery subjects reduced their discount rate by on average 16%in the episodic condition,whereas low-imagery subjects did not.Experiment 2addressed a number of alternative accounts for this effect.First,reward confidence was comparable for all conditions,arguing against the possibility that the tags may have somehow altered subjec-tive certainty that a reward would be forthcoming.Second,differences in temporal focus between conditions(date-basedFigure 5.Correlation between the Neural and Behavioral Tag Effect(A)Glass brain and (B and C)anatomical projection of the correlation between the neural tag effect (subjective value correlation episodic >control)and the behav-ioral tag effect (log difference between discount rates)in the bilateral ACC (p =0.021,FWE-corrected across an anatomical mask of bilateral ACC).(C)Coronal sections of the same contrast at a liberal threshold of p <0.01show that both left and right ACC clusters encompass gray matter of the cingulate gyrus.(D)Scatter-plot depicting the linear relationship between the neural and the behavioral tag effect in the right ACC.(A)and (B)are thresholded at p <0.001with 10contiguous voxels,whereas (C)is thresholded at p <0.01with 10contiguousvoxels.Figure 6.Results of the Psychophysiolog-ical Interaction Analysis(A)The seed for the psychophysiological interac-tion (PPI)analysis was placed in the right ACC (18,34,18).(B)The tag effect was associated with increased ACC-hippocampal coupling (p =0.031,corrected across bilateral hippocampus)and ACC-amyg-dala coupling (p =0.051,corrected across bilateral amygdala).Maps are thresholded at p <0.005,uncorrected for display purposes and projected onto the mean structural scan of all participants;HC,hippocampus;Amy,Amygdala;rACC,right anterior cingulate cortex.NeuronEpisodic Modulation of Delay DiscountingNeuron 66,138–148,April 15,2010ª2010Elsevier Inc.143。

Harbin Normal University Affiliated High School, Heilongjiang Province: October Monthly English Test for Senior Three, First Semester, 2024-2025 Academic Year


Part One: Listening comprehension (multiple-choice questions)
1. How many of the dresses does the woman have?
   A. One.  B. Two.  C. Three.
2. How does the man feel about the shoes?
   A. Satisfied.  B. Embarrassed.  C. Dissatisfied.
3. Where are the speakers probably?
   A. In a store.  B. In an office.  C. In a classroom.
4. What is the relationship between the speakers?
   A. Strangers.  B. Friends.  C. Husband and wife.
5. What is the weather like now?
   A. Cloudy.  B. Sunny.  C. Rainy.
Listen to the following longer conversation and answer the questions below.

6. What do we know about the woman?
   A. She likes the outdoors.  B. She tripped up on a rock.  C. She never camped in the woods.
7. What is hard in the dark according to the man?
   A. Setting up a tent.  B. Avoiding rocks.  C. Building a fire.
Listen to the following longer conversation and answer the questions below.

8. What did the man do yesterday?
   A. He called his friends.  B. He visited the gallery.  C. He made a reservation.
9. What is the man's problem?
   A. He found the gallery was full of people.
   B. He didn't know where to pick up the tickets.
   C. His name is not on the list.
10. What will the woman most likely do next?
   A. Give some tickets to the man.  B. Close the gallery.  C. Contact a lady.
Listen to the following longer conversation and answer the questions below.

Removing Arsenic from Water


Journal of Hazardous Materials 182 (2010) 156–161Contents lists available at ScienceDirectJournal of HazardousMaterialsj o u r n a l h o m e p a g e :w w w.e l s e v i e r.c o m /l o c a t e /j h a z m atAs(III)removal using an iron-impregnated chitosan sorbentDaniel Dianchen Gang a ,∗,Baolin Deng b ,LianShin Lin caDepartment of Civil Engineering,University of Louisiana at Lafayette,Lafayette,LA 70504,USAbDepartment of Civil and Environmental Engineering,University of Missouri,Columbia,MO 65211,USA cDepartment of Civil and Environmental Engineering,West Virginia University,Morgantown,WV 26506,USAa r t i c l e i n f o Article history:Received 18December 2009Received in revised form 28May 2010Accepted 1June 2010Available online 9 June 2010Keywords:Trivalent arsenic Iron-chitosan AdsorptionAs(III)adsorption kinetics Adsorption isotherma b s t r a c tAn iron-impregnated chitosan granular adsorbent was newly developed to evaluate its ability to remove arsenic from water.Since most existing arsenic removal technologies are effective in removing As(V)(arsenate),this study focused on As(III).The adsorption behavior of As(III)onto the iron-impregnated chi-tosan absorbent was examined by conducting batch and column studies.Maximum adsorption capacity reached 6.48mg g −1at pH =8with initial As(III)concentration of 1007␮g L −1.The adsorption isotherm data fit well with the Freundlich model.Seven hundred and sixty eight (768)empty bed volumes (EBV)of 308␮g L −1of As(III)solution were treated in column experiments.These are higher than the empty bed volumes (EBV)treated using iron-chitosan composites as reported by previous researchers.The investi-gation has indicated that the iron-impregnated chitosan is a very promising material for As(III)removal from water.© 2010 Elsevier B.V. 
All rights reserved.1.IntroductionArsenic,resulting from industrial and mine waste discharges or from natural erosion of arsenic containing rocks,is found in many surface and ground waters [1].Common chemical forms of arsenic in the environment include arsenate (As(V)),arsenite (As(III)),dimethylarsinic acid (DMA),and monomethylarsenic acid (MMA).Inorganic forms of arsenic (As(V)and As(III))are more toxic than the organic forms [2].Arsenite can be predominant in ground-water with low oxygen levels and is generally more difficult to be removed than arsenate [3].Due to the negative impacts of arsenic on human health that range from acute lethality to chronic and car-cinogenic effects,the U.S.Environmental Protection Agency revised the maximum contaminant level (MCL)of arsenic in drinking water from 50to 10␮g L −1[4].This new regulation has posed a chal-lenge for the research of new technologies capable of selectively removing low levels of arsenic.Existing technologies that are being used for arsenic removal include precipitation [5],membrane separation,ion exchange,and adsorption [6–9].While these approaches can remove arsenic to below 10␮g L −1under optimal conditions,most of the systems are expensive,not suitable for small communities with limited resources.Of these methods,much work has been done on arsenic removal through adsorption because it is one of the most effec-∗Corresponding author.Tel.:+13374825184;fax:+13374826688.E-mail addresses:ddgang@ ,digang@ ,Gang@ (D.D.Gang).tive and inexpensive methods for arsenic treatment [7].Therefore,development of highly effective adsorbents is a key for adsorption-based technologies.Several iron(III)oxides,such as amorphous hydrous ferric oxide [5]and crystalline hydrous ferric oxide [10]are well known for their ability to remove both As(V)and As(III)from aqueous solutions.In general,arsenate is more readily removed by ferric (hydr)oxides than arsenite [11].Reported mechanisms for arsenic removal include adsorption onto the hydroxide surfaces,entrapment of adsorbed arsenic in the flocculants,and formation of complexes and ferric arsenate (FeAsO 4)[12].The presence of other anions such as sulfate,chloride,and in particular,silicates,phosphate,and natural organic matters,can significantly affect arsenic adsorption [13–15].The use of iron (hydr)oxides in fine powdered or amor-phous forms was found to be effective for arsenic removal,but the process requires follow-up solid/water separation.For packed-bed adsorption systems,high-efficient granular forms of adsorbent are essential.Recently,several iron based granular materials and processes have been developed for arsenic removal.Dong et al.[16]devel-oped iron coated pottery granules (ICPG)for both As(III)and As(V)removal from drinking water.The column tests showed that ICPG consistently removed total arsenic from test water to below 5␮g L −1level.In another study,Gu et al.[17]used iron-containing granular activated carbon for arsenic adsorption.This iron-containing granular activated carbon was shown to remove arsenic most efficiently when the iron content was approximately 6%.Viraraghavan et al.[18]reported a green sand filtration process and found a strong correlation between influent Fe(II)concen-0304-3894/$–see front matter © 2010 Elsevier B.V. 
All rights reserved.doi:10.1016/j.jhazmat.2010.06.008D.D.Gang et al./Journal of Hazardous Materials182 (2010) 156–161157tration and arsenic removal percentage.The removal percentage increased from41%to above80%as the ratio of Fe/As was increased from0to20.Granular ferric hydroxide(GFH),another iron based granular material,showed a high treatment capacity for arsenic removal in a column setting before the breakthrough concentration reached10␮g L−1[19].It was found that complexes were formed upon the adsorption of arsenate on GFH[20].Selvin et al.[21]con-ducted laboratory-scale tests over50different media for arsenic removal and found GFH with a particle size of0.8–2.0mm was the most effective one among the tested media.However,some disad-vantages with GFH exist,including quick head loss buildup within 2days because of thefine particle size,and significant reduction (50%)in adsorption capacity with larger sized media(1.0–2.0mm).Chitin and its deacetylated product,chitosan,are the world’s second most abundant natural polymers after cellulose.These polymers contain primary amino groups,which are useful for chemical modifications and can be used as potential separa-tors in water treatment and other industrial applications.Many researchers focused on chitosan as an adsorbent because of its non-toxicity,chelating ability with metals,and biodegradability[22]. Several studies have demonstrated that chitosan and its deriva-tives could be used to remove arsenic from aqueous solutions [23,24].Based on the fact that both iron(III)oxides and chitosan exhib-ited high affinity for arsenic,this study focused on examining the effectiveness of an iron-impregnated chitosan granular adsorbent for arsenic removal.Most arsenic removal technologies are more effective for removing arsenate than for arsenite[12].We found in this study that the iron-impregnated chitosan was effective for arsenite removal from experiments in both batch and column set-tings.2.Experimental2.1.Preparation of iron-chitosan beadsThe experimental procedure for the preparation of iron-chitosan beads was described in detail by Vasireddy[25].To summarize, approximately10g of medium molecular weight chitosan(Aldrich Chemical Corporation,Wisconsin,USA)was added to0.5L of0.01N Fe(NO3)3·9H2O solution under continuous stirring at60◦C for2h to form a viscous gel.The beads were formed by drop-wise addition of chitosan gel into a0.5M NaOH precipitation bath under room temperature.Maintaining this concentration of NaOH was critical for forming spherically shaped beads[25].The beads were then separated from the0.5M NaOH solution and washed several times with deionized water to a neutral pH.The wet beads were then dried in an oven under vacuum and in air.Thefinal iron content of the chitosan bead was about8.4%.2.2.Arsenic measurementAn atomic absorption spectrometer(AAS)(Thermo Electron Corporation)equipped with an arsenic hollow cathode lamp was employed to measure arsenic concentration.An automatic inter-mittent hydride generation device was used to convert arsenic in water samples to arsenic hydride.The hydrides were then purged continuously by argon gas into the atomizer of an atomic absorption spectrometer for concentration measurements.As(III)stock solution(1000mg L−1)was prepared by dissolving 1.32g of As2O3(obtained from J.T.Baker)in distilled water con-taining4g NaOH,which was then neutralized to pH about7with 1%HCl and diluted to1L with distilled water.All the working solu-tions were prepared with standard stock solution.To50mL of each sample 
solution(i.e.,reagent blank,standard solutions,and water samples),5mL1%HCl and5mL of100g L−1NaI solution were used to convert arsenic in water samples to arsenic hydride.2.3.Arsenic adsorption experimentsEach arsenic solution(100mL)of desired concentration was mixed with the iron-chitosan beads in a250mL conicalflask.The solution pH was adjusted with0.1M HCl or0.1M NaOH to obtain the desired pHs.A pH buffer was not used to avoid potential com-petition of buffer with As(III)sorption.One sample of the same concentration solution without adsorbent(blank),used to estab-lish the initial concentration of the samples,was also treated under same conditions as the samples containing the adsorbent.The solu-tions were placed in a shaker for afixed amount time,followed by filtration to remove the adsorbent.Thefiltrate was then analyzed for thefinal concentration of arsenic using the atomic absorption spectrometer.The solid phase concentration was calculated using the following formula:q=(C i−C f)VM(1) where,q(␮g g−1)is the solid phase concentration,C i(␮g L−1)is the initial concentration of arsenic in solution,C f(␮g L−1)is thefinal concentration of arsenic in treated solution;V(L)is the volume of the solution,and M(g)is the weight of the iron-chitosan adsorbent.2.4.Kinetic experimentsAdsorption kinetics was examined with various initial concen-trations at25◦C.The pH of the solutions was chosen at8.0for optimal adsorption.The adsorbent loading for three different ini-tial concentrations of306,584,and994␮g L−1was all0.2g L−1.A predetermined quantity of iron-chitosan adsorbent(20mg)was placed in separate conicalflasks with pH-adjusted As(III)solution. The conicalflasks were covered with parafilm and placed in a shaker (150rpm),and sub-samples of the solutions were then removed periodically andfiltered prior to arsenic analysis.To determine the reaction rate constants of arsenic adsorption onto iron-chitosan,both the pseudo-first-order and pseudo-second-order models were used.Kinetics of the pseudo-first-order model can be expressed as[26]:ln(q e−q t)=ln q e−k1t(2) where,k1(min−1)is the rate constant of pseudo-first-order adsorp-tion,q t(mg g−1)is the amount of As(III)adsorbed at time t(min), and q e(mg g−1)is the amount of adsorption at equilibrium.The model parameters k1and q e can be estimated from the slope and intercept of the plot of ln(q e−q t)vs t.The pseudo-second-order model can be expressed as follow[27]:tq t=tq e+1k2q2e(3)where,k2(g mg−1min−1)is the pseudo-second-order reaction rate. 
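The kinetic models of Eqs. (2) and (3) are usually fitted to batch data through their linearized forms, and a minimal Python sketch of that procedure is given below. The time series is invented for illustration and the helper names are not from the paper; only the algebra of the two equations is taken from the text.

import numpy as np


def fit_pseudo_first_order(t, q_t, q_e_exp):
    # Eq. (2): ln(q_e - q_t) = ln(q_e) - k1 * t.
    # Fit a line to ln(q_e_exp - q_t); slope = -k1, intercept = ln(q_e).
    mask = q_t < q_e_exp
    slope, intercept = np.polyfit(t[mask], np.log(q_e_exp - q_t[mask]), 1)
    return -slope, np.exp(intercept)          # (k1, fitted q_e)


def fit_pseudo_second_order(t, q_t):
    # Eq. (3): t/q_t = t/q_e + 1/(k2 * q_e**2).
    # Fit a line to t/q_t versus t; slope = 1/q_e, intercept = 1/(k2 * q_e**2).
    slope, intercept = np.polyfit(t, t / q_t, 1)
    q_e = 1.0 / slope
    return 1.0 / (intercept * q_e ** 2), q_e  # (k2, fitted q_e)


# Hypothetical batch kinetic data: time (min), adsorbed amount (mg g^-1).
t = np.array([5.0, 15.0, 30.0, 60.0, 120.0, 240.0])
q = np.array([0.55, 1.05, 1.30, 1.42, 1.48, 1.50])
print(fit_pseudo_first_order(t, q, q_e_exp=1.51))
print(fit_pseudo_second_order(t, q))

Comparing the goodness of fit of the two straight lines is the usual way of deciding which model describes the data better, which is how the R2 values reported later in Table 1 are read.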
Parameters k2and q e can be estimated from the intercept and slope of the plot of(t/q t)vs t.2.5.Isotherm modelsAdsorption isotherms such as the Freundlich or Langmuir mod-els are commonly utilized to describe adsorption equilibrium.The Freundlich isotherm model is represented mathematically as:q e=k f C1/ne(4) where,q e(mg g−1)is the amount of As(III)adsorbed,C e(␮g L−1) is the concentration of arsenite in solution(␮g L−1),k f and1/n158 D.D.Gang et al./Journal of Hazardous Materials182 (2010) 156–161Fig.1.Scanning electron micrograph(SEM)of iron-chitosan bead.are parameters of the Freundlich isotherm,denoting a distribu-tion coefficient(L g−1)and intensity of adsorption,respectively.The Langmuir equation is another widely used equilibrium adsorption model.It has the advantage of providing a maximum adsorption capacity q max(mg g−1)that can be correlated to adsorption proper-ties.The Langmuir model can be represented as:q e=q maxK L C e1+K L C e(5)where,q max(mg g−1)and K L(L mg−1)are Langmuir constants representing maximum adsorption capacity and binding energy, respectively.2.6.Column studyColumn study was conducted to investigate the use of iron-chitosan as a low-cost treatment technology for arsenite removal. Experiments were conducted with a12-mm-ID glass column packed with1.5g iron-chitosan as afixed bed.The influent solu-tion had an inlet As(III)concentration of308␮g L−1at pH8,and was passed the column at aflow rate of25mL h−1.Effluent solu-tion samples were collected and analyzed for arsenic concentration during the column test.3.Results and discussion3.1.Structure characterization of iron-chitosan beadsThe prepared iron-chitosan beads were examined by scanning electron microscope(SEM)(AMRAY1600)for the surface morphol-ogy.A working distance of5–10mm,spot size of2–3,secondary electron(SE)mode,and accelerating voltage of20keV were used to view the samples.It can be seen from Fig.1that the beads are porous in structure.X-ray Photoelectron Spectroscopy(XPS),a sur-face sensitive analytic tool to determine the surface composition and electronic state of a sample,was used in this study.In XPS analysis,a survey scan was used to determine the elements exist-ing on the surface.The high resolution utility scans were then used to measure the atomic concentrations of Fe,C,N and O in the sam-ple.Fig.2shows the peak positions of carbon,nitrogen,oxygen,and iron obtained by the XPS for iron-chitosan beads.In Fig.2,the car-bon1s peak was observed at283.0eV with a FWHM(full width at maximum height)of2.015.The Fe peak was observed at730.0eV. The N-1s peak for iron-chitosan bead was found at398.0eV(FWHM 2.00eV),which can be attributed to the amino groups inchitosan.Fig.2.XPS spectrum of iron-chitosan bead.3.2.Effect of pHThe effect of pH on arsenite removal with the iron-chitosan adsorbent was examined using100mL As(III)solution with an initial concentration of314␮g L−1and a solid loading rate of 0.15g L−1.The solution pH was adjusted with0.1M HCl or0.1M NaOH to obtain pHs ranging from4to12.Lower pHs were avoided because the acid environments could lead to partial dissolution of the chitosan polymer and make the beads unstable[25,28]. 
The solutions were placed in a shaker(150rpm)for20h at room temperature(25◦C),followed byfiltration to remove the adsor-bent.The amounts of As(III)adsorbed,calculated using Eq.(1),are present in Fig.3.Under the experimental conditions,approximately 2.0mg g−1of As(III)was adsorbed and that amount did not change significantly in the pH range4–9.However,when pH was higher than9.2,arsenite removal decreased dramatically with increasing pH.The results can be explained using arsenic chemical speciation in different pH ranges[29].Arsenite remains mostly as a neutral molecule for pH<9.2,and negatively charged at pH>9.2.So at pH>9.2,arsenite sorption is less because of the unfavorable electro-static interaction with negatively charged surfaces.This adsorptive behavior is common for arsenite with other adsorbents[17,30].Gu et al.[17]reported that pH had no obvious effect on As(III)removal in the range of4.4–9.0,with removal efficiency above95%.Another study indicated that the uptake of As(III)by fresh andimmobi-Fig.3.Arsenite removal of the iron-chitosan adsorbent(0.15g L−1)as a function of pH for initial arsenite concentration of314␮g L−1at T=25◦C.D.D.Gang et al./Journal of Hazardous Materials182 (2010) 156–161159Fig.4.Adsorption kinetics for different initial arsenite concentrations with iron-chitosan adsorbent loading of0.2g L−1at pH=8and T=25◦C.lized biomass was not greatly affected by solution pH with optimal biosorption occurring at around pH6–8[30].Raven et al.[11] reported that a maximum adsorption of arsenite on ferrihydrite was observed at approximately pH9.3.3.Kinetics of adsorptionFig.4illustrates the adsorption kinetics for three different ini-tial arsenite concentrations.More than60%of the arsenite was adsorbed by iron-chitosan within thefirst30min,then adsorption leveled off after2h.Given the initial concentrations and adsorbent loading,equilibrium was reached after about2h.The adsorption capacity increased from1.51to4.60mg g−1as the initial arsen-ite concentration was increased from306to994␮g L−1.The rapid adsorption in the beginning can be attributed to the greater con-centration gradient and more available sites for adsorption.This is a common behavior with adsorption processes and has been reported in other studies[31].The sorption rate of As(III)on nat-urally available red soil was initially rapid in thefirst2h and slowed down thereafter[32].Elkhatib et al.[33]reported that the initial adsorption was rapid,with more than50%of As(III) adsorbed during thefirst0.5h in an arsenite adsorption study. Fuller et al.[34]reported that As(V)adsorption onto synthesized ferrihydrite had a rapid initial phase(<5min)and adsorption con-tinued for182h.Raven et al.[11]studied the kinetics of As(V) and As(III)adsorption on ferrihydrite and found that most of the adsorption occurred within thefirst2h.It has been reported that arsenite forms both inner-and outer-sphere surface complexes on amorphous Fe oxide[35].Another possible adsorption mech-anism is hydrogen bond formation between As(III)and chitosan bead[24].Figs.5and6illustrate modelfits of the kinetic data for the pseudo-first-order and pseudo-second-order kinetic models. 
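Before turning to the comparison of those kinetic fits, a short note on the speciation argument invoked in the pH discussion above. The drop in uptake above pH 9.2 follows directly from the first dissociation constant of arsenious acid, as the small sketch below illustrates. It uses a mono-protic approximation; the pKa value, the neglect of higher dissociation steps, and the function name are assumptions made purely for illustration.

```python
PKA1_H3ASO3 = 9.2   # approximate first acid dissociation constant of arsenious acid

def fraction_neutral(ph, pka=PKA1_H3ASO3):
    """Fraction of arsenite present as neutral H3AsO3 (mono-protic approximation)."""
    return 1.0 / (1.0 + 10.0**(ph - pka))

for ph in (4, 7, 8, 9.2, 10, 12):
    print(f"pH {ph:>4}: {100 * fraction_neutral(ph):5.1f}% neutral H3AsO3")
```

Below about pH 8 the neutral form dominates (over 90%), the neutral and anionic forms are equal at pH 9.2, and by pH 12 less than 1% remains neutral, which mirrors the loss of uptake onto negatively charged surfaces reported above.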
In general,the pseudo-second-order characterized the kinetic data better than the pseudo-first-order model.Table1summa-Fig.5.Adsorption kinetics of the iron-chitosan adsorbent(0.2g L−1)for three initial arsenite concentrations at pH=8and T=25◦C,and corresponding pseudo-first-ordermodels.Fig.6.Adsorption kinetics of the iron-chitosan adsorbent(0.2g L−1)for three initial arsenite concentrations at pH=8and T=25◦C,and corresponding pseudo-second-order models.rizes adsorption capacities determined from the modelfits.It is noted that the second order rate constant(k2)decreased from 3.19×10−2to 1.15×10−2g mg−1min−1as the initial concen-tration increased from306to994␮g L−1.The initial rate(k2q2e) increased from8.48×10−2to27.97×10−2with increasing initial As(III)concentration.Because as initial concentration increased,the concentration difference between the adsorbent surface and bulk solution increased.Jimenez-Cedillo et al.[36]investigated arsenic adsorp-tion kinetics on iron,manganese and iron-manganese-modified clinoptilolite-rich tuffs and concluded that the adsorption pro-cesses could be described by the pseudo-second-order model.Table1Adsorption capacities and parameter values of kinetic models for three initial arsenite concentrations and iron-chitosan loading of0.2g L−1at pH=8.Initial conc.(␮g L−1)Pseudo-first order Pseudo-second orderk1×102(min−1)R2q e,exp(mg g−1)q e,col(mg g−1)k1×102(g mg−1min−1)R2q e,exp(mg g−1)q e,col(mg g−1)k2q2e×102306 2.630.98 1.51 1.24 3.190.99 1.51 1.638.48584 2.380.96 2.90 2.30 1.310.99 2.90 3.1913.28994 2.370.93 4.60 3.26 1.150.99 4.60 4.9327.97160 D.D.Gang et al./Journal of Hazardous Materials182 (2010) 156–161Fig.7.Adsorption isotherms of the iron-chitosan adsorbent (0.2g L −1)for three initial arsenite concentrations at pH =8,and corresponding isotherm models.Thirunavukkarasu et al.[37]examined As(III)adsorption kinet-ics with granular ferric hydroxide (GFH)and found that most of As(III)adsorption onto GFH occurred at pH 7.6,with 68%of As(III)removed within 1h and 97%removed at the equilibrium time of 6h.Kinetic data fitted the pseudo-second-order kinetic model well with a kinetic rate constant of 0.003g GFH h −1␮g −1As,which is equivalent to 5.0×10−2g mg −1min −1[37].In our study,the kinetic rate constants were from 3.19×10−2to 1.15×10−2g mg −1min −1,which were smaller than using GFH.This could be attributed to the differences in adsorbent parti-cle size and initial arsenic concentrations between these two studies.3.4.Adsorption isothermsFig.7presents the adsorption isotherm data and two isotherm models at pH 8.The maximum adsorption capacity was found to increase from 1.97to 6.48mg g −1as the initial concentration of As(III)increased from 295to 1007␮g L −1.Maximum adsorp-tion capacity reached 6.48mg g −1with initial As(III)concentration of 1007␮g L −1.Chen and Chung [24]reported that the adsorp-tion capacity of As(III)was 1.83mg As g −1for pure chitosan bead.This study confirmed that impregnating iron into chitosan could significantly increase the As(III)adsorption capacity of the chi-tosan bead.In another study,Driehaus et al.[19]reported that the adsorption capacity could reach 8.5mg As g −1of granular fer-ric hydroxide (GFH).Model parameters and regression coefficients are listed in Table 2.The Freundlich model agreed better with the experimental data compared to the Langmuir model.The adsorp-tion intensity (1/n )and the distribution coefficient (k f )increased as the initial arsenite concentration increased.This indicated the dependence of adsorption on 
initial concentration.Low 1/n values (<1)of the Freundlich isotherm suggested that any large change in the equilibrium concentration of arsenic would not result in a significant change in the amount of arsenic adsorbed.Selim and Zhang [38]reported that adsorption isotherms of three differ-ent soils for As(V)were better fit to the Freundlich modelandFig.8.Breakthrough curve for an inlet arsenite concentration of 308␮g L −1at pH =8for a column reactor packed with the iron-chitosan adsorbent.adsorption intensity values ranged from 0.270to 0.340.Salim and Munekage [39]found that adsorptions of As(III)onto silica ceramic were well fit by the Freundlich isotherm.Similarly low 1/n values for As(V)adsorption have been reported by others [40].3.5.Column studyFig.8shows a breakthrough curve for an inlet arsenite con-centration of 308␮g L −1at pH 8.The break point was observed after 768empty bed volumes (EBV)and adsorbent was exhausted at 1400bed volumes.In comparison,Boddu et al.[23]reported that the break through point was about 40and 120EBV for As(III)and As(V),respectively using chitosan-coated biosorbent.Gupta et al.[41]conducted column tests using iron-chitosan compos-ites for removal of As(III)and As(V)from arsenic contaminated real life groundwater.Their result showed that the iron-chitosan flakes (ICF)could treat 147EBV of As(III)and 112EBV of As(V)spiked groundwater with an As(III)or As(V)concentration of 0.5mg L −1.Given the difference of the initial concentrations between the two studies,the numbers of EBV were lower than what we found in this study.This can be partially attributed to the difference of the water constituents in the real grounder water used in the previous study [41].Gu et al.[17]examined the arsenic breakthrough behaviors for an As-GAC sample prepared from Dacro 20×40LI with an inlet concentration of 56.1␮g L −1As(III).Their results demonstrated that the adsorbent could effectively remove arsenic from ground-water in a column setting.Dong et al.[16]also reported that average removal efficiencies for total arsenic,As(III),and As(V)for a 2-week test period were 98%,97%,and 99%,respectively,at an average flow rate of 4.1L h −1and Empty Bed Contact Time (EBCT)>3min.Table 2Values of the Freundlich and Langmuir isotherm model parameters for three arsenite concentrations with iron-chitosan loading of 0.2g L −1at pH 8.Initial concentration(␮g L −1)Freundlich parameters Longmuir constants k f (L g −1)1/n R 2q max (mg g −1)K L (L mg −1)R 22950.590.240.98 2.000.120.985960.640.260.95 2.820.070.9410070.740.330.996.820.010.95D.D.Gang et al./Journal of Hazardous Materials182 (2010) 156–1611614.ConclusionsOverall,the study has demonstrated that iron-impregnated chi-tosan can effectively remove As(III)from aqueous solutions under a wide range of experimental conditions and removal efficiency depends on various factors including pH,adsorption time,adsor-bent loading,and initial concentration of As(III)in the solution. 
Results from the kinetic batch experiments indicated that more than60%of the arsenic was adsorbed by the iron-chitosan within 30min of adsorption.Kinetic resultsfit the pseudo-second-order model well.The second order reaction rate constants were found to decrease from3.19×10−2to1.15×10−2g mg−1min−1as the initial As(III)concentration increased from306to994␮g L−1.Adsorp-tion isotherm results indicated that maximum adsorption capacity increased from1.97to6.48mg g−1at pH=8as the initial concen-tration of As(III)increased from0.3to1mg L−1.The adsorption isotherm datafit well to the Freundlich model.Column experi-ments of As(III)removal were conducted using12-mm-ID column at aflow rate of25mL h−1with an initial As(III)concentration of 308␮g L−1.This study corroborates that impregnating iron into chitosan can significantly increase As(III)adsorption capacity of the chitosan bead.Advantages of using the iron-impregnated chitosan include its high efficiency for As(III)treatment and low cost compared with the pure chitosan bead.We expect that the iron-impregnated chi-tosan is a useful adsorbent for As(III)and could be used both in conventional packed-bedfiltration tower and Point of Use(POU) systems.The possible concerns include the physicochemical sta-bility of the adsorbent because of the biodegradable nature of the chitosan material.Further research is underway to examine the adsorbent stability and whether the iron-impregnated chitosan can maintain its capability after several regeneration andCompeting adsorption of other ions will also be AcknowledgmentsThe authors would like to thank Mr.Ravi K.Kadari and Ms. Dhanarekha Vasireddy for conducting the laboratory experiments. The authors are grateful forfinancial support from the U.S.Depart-ment of Energy(Grant No.:DE-FC26-02NT41607).References[1]C.K.Jain,I.Ali,Arsenic:occurrence,toxicity and speciation,Water Res.34(2000)4304–4312.[2]W.R.Cullen,K.J.Reimer,Arsenic speciation in the environment,Chem.Rev.89(1989)713–764.[3]L.Dambies,Existing and prospective sorption technologies for the removal ofarsenic in water,Sep.Sci.Technol.39(2004)603–627.[4]Fed.Regist.67(246)(2002)78203–78209.[5]M.B.Baskan,A.Pala,Determination of arsenic removal efficiency by ferric ionsusing response surface methodology,J.Hazard Mater.166(2009)796–801. 
[6]A.H.Malik,Z.M.Khan,Q.Mahmood,S.Nasreen,Z.A.Bhatti,Perspectives of lowcost arsenic remediation of drinking water in Pakistan and other countries,J.Hazard Mater.168(2009)1–12.[7]D.Mohan, C.U.Pittman,Arsenic removal from water/wastewater usingadsorbents—a critical review,J.Hazard Mater.142(2007)1–53.[8]V.Fierro,G.Muniz,G.Gonzalez-Sanchez,M.L.Ballinas,A.Celzard,Arsenicremoval by iron-doped activated carbons prepared by ferric chloride forced hydrolysis,J.Hazard Mater.168(2009)430–437.[9]Y.Masue,R.H.Loeppert,T.A.Kramer,Arsenate and arsenite adsorption anddesorption behavior on coprecipitated aluminum:iron hydroxides,Environ.Sci.Technol.41(2007)837–842.[10]X.Q.Chen,m,Q.J.Zhang,B.C.Pan,M.Arruebo,K.L.Yeung,Synthesis ofhighly selective magnetic mesoporous adsorbent,J.Phys.Chem.C113(2009) 9804–9813.[11]K.P.Raven,A.Jain,H.L.Richard,Arsenite and arsenate adsorption on ferrihy-drite:kinetics,equilibrium,and adsorption envelopes,Environ,Sci.Technol.32 (1998)344–349.[12]J.G.Hering,M.Elimelech,Arsenic Removal by Enhanced Coagulation and Mem-brane Processes,AWWA Research Foundation,Denver,CO,1996.[13]D.Pokhrel,T.Viraraghavan,Arsenic removal from aqueous solution by ironoxide-coated biomass:common ion effects and thermodynamic analysis,Sep.Sci.Technol.43(2008)3345–3562.[14]B.Xie,M.Fan,K.Banerjee,J.van Leeuwen,Modeling of arsenic(V)adsorptiononto granular ferric hydroxide,J.Am.Water Works Assoc.99(2007)92–102.[15]M.Jang,W.F.Chen,F.S.Cannon,Preloading hydrous ferric oxide into gran-ular activated carbon for arsenic removal,Environ.Sci.Technol.42(2008) 3369–3374.[16]L.J.Dong,P.V.Zinin,J.P.Cowen,L.C.Ming,Iron coated pottery granules forarsenic removal from drinking water,J.Hazard Mater.168(2009)626–632. [17]Z.Gu,F.Jun,B.Deng,Preparation and evaluation of GAC-based iron-containingadsorbents for arsenic removal,Environ.Sci.Technol.39(2005)3833–3843.[18]T.Viraraghavan,K.S.Subramanian,J.A.Arduldoss,Arsenic in drinking water-problems and solutions,Water Sci.Technol.40(1999)69–76.[19]W.Driehaus,M.Jekel,U.Hildevrand,Granular ferric hydroxide—a new adsor-bent for the removal of arsenic from natural water,J.Water Serv.Res.Technol.47(1998)30–35.[20]X.H.Guan,J.M.Wang,C.C.Chusuei,Removal of arsenic from water using gran-ular ferric hydroxide:macroscopic and microscopic studies,J.Hazard Mater.156(2008)178–185.[21]N.Selvin,G.Messham,J.Simms,I.Pearson,J.Hall,The development of gran-ular ferric media—arsenic removal and additional uses in water treatment,in: Proceedings of Water Quality Technology Conference,Salt Lake City,UT,2000, pp.483–494.[22]S.Hansan,A.Krishnaiah,T.K.Ghosh,Adsorption of chromium(VI)on chitosan-coated perlite,Sep.Sci.Technol.38(2003)3775–3793.[23]V.M.Boddu,K.Abburi,J.L.Talbott,E.D.Smith,R.Haasch,Removal of arsenic(III)and arsenic(V)from aqueous medium using chitosan-coated biosorbent,Water Res.42(2008)633–642.[24]C.C.Chen,Y.C.Chung,Arsenic removal using a biopolymer chitosan sorbent,J.Environ.Sci.Health A41(2006)645–658.[25]D.Vasireddy,Arsenic adsorption onto iron-chitosan composite from drink-ing water,M.S.Thesis,Department of Civil and Environmental Engineering, University of Missouri,Columbia,MO,2005.[26]D.Sarkar,D.K.Chattoraj,Activation parameters for kinetics of protein adsorp-tion at silica-water interface,J.Colloid Interface Sci.157(1993)219–226. 
[27]Y.S.Ho,G.Mckay,Pseudo-second order model for sorption processes,ProcessBiochem.34(1999)451–465.[28]E.Guibal,ot,J.M.Tobin,Metal-anion sorption by chitosan beads:equilib-rium and kinetic studies,Ind.Eng.Chem.Res.37(1998)1454–1463.[29]S.K.Gupta,K.Y.Chen,Arsenic removal by adsorption,J.Water Pollut.ControlFed.50(1978)493–506.[30]C.T.Kamala,K.H.Chu,N.S.Chary,P.K.Pandey,S.L.Ramesh,A.R.K.Sastry,K.C.Sekhar,Removal of arsenic(III)from aqueous solutions using fresh and immo-bilized plant biomass,Water Res.39(2005)2815–2826.[31]H.D.Ozsoy,H.Kumbur,Adsorption of Cu(II)ions on cotton boll,J.Hazard Mater.136(2006)911–916.[32]P.D.Nemade,A.M.Kadam,H.S.Shankar,Adsorption of arsenic from aqueoussolution on naturally available red soil,J.Environ.Biol.30(2009)499–504. [33]E.A.Elkhatib,O.L.Bennett,R.J.Wright,Kinetics of arsenite adsorption in soils,Soil Sci.Am.J.48(1984)758–762.[34]C.C.Fuller,J.A.Davis,G.A.Waychunas,Surface chemistry of ferrihydrite.Part2.Kinetics of arsenate adsorption and coprecipitation,Geochim.Cosmochim.Ac.32(1993)344–349.[35]S.Goldberg,C.T.Johnston,Mechanisms of arsenic adsorption on amorphousoxides evaluated using macroscopic measurements,vibrational spectroscopy, and surface complexation modeling,J.Colloid Interface Sci.234(2001) 204–216.[36]M.J.Jimenez-Cedillo,M.T.Olguin, C.Fall,Adsorption kinetic of arsen-ates as water pollutant on iron,manganese and iron-manganese-modified clinoptilolite-rich tuffs,J.Hazard Mater.163(2009)939–945.[37]O.S.Thirunavukkarasu,T.Viraraghavan,K.S.Subramanian,Arsenic removalfrom drinking water using granular ferric hydroxide,Water SA29(2003) 161–170.[38]H.M.Selim,H.Zhang,Kinetics of arsenate adsorption–desorption in soils,Env-iron.Sci.Technol.39(2005)6101–6108.[39]M.Salim,Y.Munekage,Removal of arsenic from aqueous solution using sil-ica ceramic:adsorption kinetics and equilibrium studies,Int.J.Environ.Res.3 (2009)13–22.[40]B.A.Manning,S.Goldberg,Arsenic(III)and arsenic(V)adsorption on three Cal-ifornia soils,Soil Sci.162(1997)886–895.[41]A.Gupta,V.S.Chauhan,N.Sankararamakrishnan,Preparation and evaluationof iron-chitosan composites for removal of As(III)and As(V)from arsenic con-taminated real life groundwater,Water Res.43(2009)3862–3870.。

发光调控英语Title: The Complexity and Intricacies of Luminescence RegulationLuminescence regulation, a field at the intersection of physics, chemistry, and biology, holds immense potential in various applications ranging from displays and lighting to biomedical imaging and sensing. It involves the precise control of the emission of light from a material, either spontaneously or in response to an external stimulus. This article delves into the complexities and intricacies of luminescence regulation, exploring its principles, techniques, and evolving applications.Firstly, it's crucial to understand the fundamental mechanisms of luminescence. Luminescence occurs when a material absorbs energy, either in the form of light, electricity, or heat, and subsequently emits light. This process is typically characterized by the excitation of electrons within the material, followed by their relaxation and emission of photons. The color and intensity of the emitted light depend on the material's chemical composition, structure, and the nature of the excitation.Luminescence regulation involves manipulating these mechanisms to achieve desired emission properties. One approach is through the use of dopants or activators, which introduce additional energy states within the material. These dopants can enhance or modify the emission spectrum, enabling the tuning of color and intensity. Another method involves manipulating the material's physical structure, such as through nanostructuring or the use of porous materials, to alter the path and efficiency of light emission.Moreover, the field of luminescence regulation has benefited significantly from the advancement of synthetic techniques and material science. The ability to synthesize materials with precise compositional and structural control has opened new avenues for precise luminescence tuning. For instance, the development of colloidal quantum dots and perovskite nanocrystals has enabled the creation of luminescent materials with tunable emission wavelengths and high brightness.In terms of applications, luminescence regulation finds widespread use in various fields. In displays and lighting,luminescent materials are used to generate vibrant colors and efficient light emission. The precise control of emission properties enables the creation of displays with high color accuracy and contrast, as well as lighting systems with optimized energy efficiency.In the biomedical field, luminescent materials have revolutionized imaging and sensing techniques. Fluorescence microscopy, for instance, relies on the ability to label specific molecules or cells with luminescent probes, enabling their visualization with high spatial and temporal resolution. Luminescent probes are also used in biosensing applications, where they can detect and quantify biological analytes with high sensitivity and specificity.Furthermore, the emergence of photoluminescence-based solar cells has highlighted the potential of luminescence regulation in renewable energy applications. By engineering the luminescent properties of photovoltaic materials, researchers aim to improve the efficiency and stability of solar cells, addressing key challenges in solar energy conversion.However, the field of luminescence regulation remains challenging and evolving. The complexity of luminescent mechanisms, coupled with the diverse range of materials and applications, poses significant challenges in achieving precise and reliable luminescence control. 
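Since the essay repeatedly refers to tuning the emission wavelength, one small worked example may help fix ideas: the wavelength of an emitted photon is tied to the transition energy by λ = hc/E, which is why shrinking a quantum dot (widening its band gap) shifts its emission toward the blue. The band-gap figures below are rough illustrative values, not data from the text.

```python
H_PLANCK = 6.626e-34   # Planck constant, J s
C_LIGHT = 2.998e8      # speed of light, m/s
EV_TO_J = 1.602e-19    # joules per electron-volt

def emission_wavelength_nm(transition_energy_ev):
    """Approximate emission wavelength (nm) for a given transition energy (eV)."""
    return H_PLANCK * C_LIGHT / (transition_energy_ev * EV_TO_J) * 1e9

# Rough, illustrative transition energies for nanocrystals of decreasing size.
for label, e_ev in [("larger dot, ~2.0 eV", 2.0),
                    ("mid-size dot, ~2.4 eV", 2.4),
                    ("smaller dot, ~2.8 eV", 2.8)]:
    print(f"{label}: ~{emission_wavelength_nm(e_ev):.0f} nm")
```

This is the sense in which synthesis-controlled band gaps translate into the "tunable emission wavelengths" discussed above.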
Ongoing research efforts are focused on developing novel materials and techniques that can further enhance the performance and versatility of luminescent systems.

In conclusion, luminescence regulation represents a vibrant and dynamic field with vast potential for innovation and applications. As the understanding of luminescent mechanisms deepens and synthetic techniques improve, the capabilities of luminescent materials will continue to expand, opening new doors in fields ranging from displays and lighting to biomedicine and renewable energy.

Network impacts of a road capacity reduction:Empirical analysisand model predictionsDavid Watling a ,⇑,David Milne a ,Stephen Clark baInstitute for Transport Studies,University of Leeds,Woodhouse Lane,Leeds LS29JT,UK b Leeds City Council,Leonardo Building,2Rossington Street,Leeds LS28HD,UKa r t i c l e i n f o Article history:Received 24May 2010Received in revised form 15July 2011Accepted 7September 2011Keywords:Traffic assignment Network models Equilibrium Route choice Day-to-day variabilitya b s t r a c tIn spite of their widespread use in policy design and evaluation,relatively little evidencehas been reported on how well traffic equilibrium models predict real network impacts.Here we present what we believe to be the first paper that together analyses the explicitimpacts on observed route choice of an actual network intervention and compares thiswith the before-and-after predictions of a network equilibrium model.The analysis isbased on the findings of an empirical study of the travel time and route choice impactsof a road capacity reduction.Time-stamped,partial licence plates were recorded across aseries of locations,over a period of days both with and without the capacity reduction,and the data were ‘matched’between locations using special-purpose statistical methods.Hypothesis tests were used to identify statistically significant changes in travel times androute choice,between the periods of days with and without the capacity reduction.A trafficnetwork equilibrium model was then independently applied to the same scenarios,and itspredictions compared with the empirical findings.From a comparison of route choice pat-terns,a particularly influential spatial effect was revealed of the parameter specifying therelative values of distance and travel time assumed in the generalised cost equations.When this parameter was ‘fitted’to the data without the capacity reduction,the networkmodel broadly predicted the route choice impacts of the capacity reduction,but with othervalues it was seen to perform poorly.The paper concludes by discussing the wider practicaland research implications of the study’s findings.Ó2011Elsevier Ltd.All rights reserved.1.IntroductionIt is well known that altering the localised characteristics of a road network,such as a planned change in road capacity,will tend to have both direct and indirect effects.The direct effects are imparted on the road itself,in terms of how it can deal with a given demand flow entering the link,with an impact on travel times to traverse the link at a given demand flow level.The indirect effects arise due to drivers changing their travel decisions,such as choice of route,in response to the altered travel times.There are many practical circumstances in which it is desirable to forecast these direct and indirect impacts in the context of a systematic change in road capacity.For example,in the case of proposed road widening or junction improvements,there is typically a need to justify econom-ically the required investment in terms of the benefits that will likely accrue.There are also several examples in which it is relevant to examine the impacts of road capacity reduction .For example,if one proposes to reallocate road space between alternative modes,such as increased bus and cycle lane provision or a pedestrianisation scheme,then typically a range of alternative designs exist which may differ in their ability to accommodate efficiently the new traffic and routing patterns.0965-8564/$-see front matter Ó2011Elsevier Ltd.All rights 
reserved.doi:10.1016/j.tra.2011.09.010⇑Corresponding author.Tel.:+441133436612;fax:+441133435334.E-mail address:d.p.watling@ (D.Watling).168 D.Watling et al./Transportation Research Part A46(2012)167–189Through mathematical modelling,the alternative designs may be tested in a simulated environment and the most efficient selected for implementation.Even after a particular design is selected,mathematical models may be used to adjust signal timings to optimise the use of the transport system.Road capacity may also be affected periodically by maintenance to essential services(e.g.water,electricity)or to the road itself,and often this can lead to restricted access over a period of days and weeks.In such cases,planning authorities may use modelling to devise suitable diversionary advice for drivers,and to plan any temporary changes to traffic signals or priorities.Berdica(2002)and Taylor et al.(2006)suggest more of a pro-ac-tive approach,proposing that models should be used to test networks for potential vulnerability,before any reduction mate-rialises,identifying links which if reduced in capacity over an extended period1would have a substantial impact on system performance.There are therefore practical requirements for a suitable network model of travel time and route choice impacts of capac-ity changes.The dominant method that has emerged for this purpose over the last decades is clearly the network equilibrium approach,as proposed by Beckmann et al.(1956)and developed in several directions since.The basis of using this approach is the proposition of what are believed to be‘rational’models of behaviour and other system components(e.g.link perfor-mance functions),with site-specific data used to tailor such models to particular case studies.Cross-sectional forecasts of network performance at specific road capacity states may then be made,such that at the time of any‘snapshot’forecast, drivers’route choices are in some kind of individually-optimum state.In this state,drivers cannot improve their route selec-tion by a unilateral change of route,at the snapshot travel time levels.The accepted practice is to‘validate’such models on a case-by-case basis,by ensuring that the model—when supplied with a particular set of parameters,input network data and input origin–destination demand data—reproduces current mea-sured mean link trafficflows and mean journey times,on a sample of links,to some degree of accuracy(see for example,the practical guidelines in TMIP(1997)and Highways Agency(2002)).This kind of aggregate level,cross-sectional validation to existing conditions persists across a range of network modelling paradigms,ranging from static and dynamic equilibrium (Florian and Nguyen,1976;Leonard and Tough,1979;Stephenson and Teply,1984;Matzoros et al.,1987;Janson et al., 1986;Janson,1991)to micro-simulation approaches(Laird et al.,1999;Ben-Akiva et al.,2000;Keenan,2005).While such an approach is plausible,it leaves many questions unanswered,and we would particularly highlight two: 1.The process of calibration and validation of a network equilibrium model may typically occur in a cycle.That is to say,having initially calibrated a model using the base data sources,if the subsequent validation reveals substantial discrep-ancies in some part of the network,it is then natural to adjust the model parameters(including perhaps even the OD matrix elements)until the model outputs better reflect the validation data.2In this process,then,we allow the adjustment of potentially a large number of network parameters 
and input data in order to replicate the validation data,yet these data themselves are highly aggregate,existing only at the link level.To be clear here,we are talking about a level of coarseness even greater than that in aggregate choice models,since we cannot even infer from link-level data the aggregate shares on alternative routes or OD movements.The question that arises is then:how many different combinations of parameters and input data values might lead to a similar link-level validation,and even if we knew the answer to this question,how might we choose between these alternative combinations?In practice,this issue is typically neglected,meaning that the‘valida-tion’is a rather weak test of the model.2.Since the data are cross-sectional in time(i.e.the aim is to reproduce current base conditions in equilibrium),then in spiteof the large efforts required in data collection,no empirical evidence is routinely collected regarding the model’s main purpose,namely its ability to predict changes in behaviour and network performance under changes to the network/ demand.This issue is exacerbated by the aggregation concerns in point1:the‘ambiguity’in choosing appropriate param-eter values to satisfy the aggregate,link-level,base validation strengthens the need to independently verify that,with the selected parameter values,the model responds reliably to changes.Although such problems–offitting equilibrium models to cross-sectional data–have long been recognised by practitioners and academics(see,e.g.,Goodwin,1998), the approach described above remains the state-of-practice.Having identified these two problems,how might we go about addressing them?One approach to thefirst problem would be to return to the underlying formulation of the network model,and instead require a model definition that permits analysis by statistical inference techniques(see for example,Nakayama et al.,2009).In this way,we may potentially exploit more information in the variability of the link-level data,with well-defined notions(such as maximum likelihood)allowing a systematic basis for selection between alternative parameter value combinations.However,this approach is still using rather limited data and it is natural not just to question the model but also the data that we use to calibrate and validate it.Yet this is not altogether straightforward to resolve.As Mahmassani and Jou(2000) remarked:‘A major difficulty...is obtaining observations of actual trip-maker behaviour,at the desired level of richness, simultaneously with measurements of prevailing conditions’.For this reason,several authors have turned to simulated gaming environments and/or stated preference techniques to elicit information on drivers’route choice behaviour(e.g. 1Clearly,more sporadic and less predictable reductions in capacity may also occur,such as in the case of breakdowns and accidents,and environmental factors such as severe weather,floods or landslides(see for example,Iida,1999),but the responses to such cases are outside the scope of the present paper. 
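The equilibrium notion relied on throughout this introduction, that no driver can reduce their travel time by unilaterally switching route, can be made concrete with a deliberately tiny example. The sketch below is not the authors' assignment model: the BPR-style cost function, the two-route network and every numerical value are assumptions chosen purely for illustration.

```python
def bpr_time(flow, t0, capacity, alpha=0.15, beta=4.0):
    """BPR-style link travel-time function (an assumed functional form)."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

def two_route_equilibrium(demand, t0_a, cap_a, t0_b, cap_b, tol=1e-9):
    """Bisect on the flow assigned to route A until the two route times equalise."""
    lo, hi = 0.0, demand
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        gap = bpr_time(mid, t0_a, cap_a) - bpr_time(demand - mid, t0_b, cap_b)
        if gap > 0:          # route A slower -> shift flow away from A
            hi = mid
        else:                # route A faster -> shift more flow onto A
            lo = mid
    return 0.5 * (lo + hi)

flow_a = two_route_equilibrium(demand=2000, t0_a=10.0, cap_a=1200, t0_b=15.0, cap_b=1800)
print(f"route A: {flow_a:.0f} veh/h, route B: {2000 - flow_a:.0f} veh/h")
print(f"times:   {bpr_time(flow_a, 10, 1200):.2f} min vs {bpr_time(2000 - flow_a, 15, 1800):.2f} min")
```

With these numbers most of the demand ends up on the shorter free-flow route, loaded to the point where its congested travel time matches that of the longer route, which is exactly the unilateral-no-improvement condition described above.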
2Some authors have suggested more systematic,bi-level type optimization processes for thisfitting process(e.g.Xu et al.,2004),but this has no material effect on the essential points above.D.Watling et al./Transportation Research Part A46(2012)167–189169 Mahmassani and Herman,1990;Iida et al.,1992;Khattak et al.,1993;Vaughn et al.,1995;Wardman et al.,1997;Jou,2001; Chen et al.,2001).This provides potentially rich information for calibrating complex behavioural models,but has the obvious limitation that it is based on imagined rather than real route choice situations.Aside from its common focus on hypothetical decision situations,this latter body of work also signifies a subtle change of emphasis in the treatment of the overall network calibration problem.Rather than viewing the network equilibrium calibra-tion process as a whole,the focus is on particular components of the model;in the cases above,the focus is on that compo-nent concerned with how drivers make route decisions.If we are prepared to make such a component-wise analysis,then certainly there exists abundant empirical evidence in the literature,with a history across a number of decades of research into issues such as the factors affecting drivers’route choice(e.g.Wachs,1967;Huchingson et al.,1977;Abu-Eisheh and Mannering,1987;Duffell and Kalombaris,1988;Antonisse et al.,1989;Bekhor et al.,2002;Liu et al.,2004),the nature of travel time variability(e.g.Smeed and Jeffcoate,1971;Montgomery and May,1987;May et al.,1989;McLeod et al., 1993),and the factors affecting trafficflow variability(Bonsall et al.,1984;Huff and Hanson,1986;Ribeiro,1994;Rakha and Van Aerde,1995;Fox et al.,1998).While these works provide useful evidence for the network equilibrium calibration problem,they do not provide a frame-work in which we can judge the overall‘fit’of a particular network model in the light of uncertainty,ambient variation and systematic changes in network attributes,be they related to the OD demand,the route choice process,travel times or the network data.Moreover,such data does nothing to address the second point made above,namely the question of how to validate the model forecasts under systematic changes to its inputs.The studies of Mannering et al.(1994)and Emmerink et al.(1996)are distinctive in this context in that they address some of the empirical concerns expressed in the context of travel information impacts,but their work stops at the stage of the empirical analysis,without a link being made to net-work prediction models.The focus of the present paper therefore is both to present thefindings of an empirical study and to link this empirical evidence to network forecasting models.More recently,Zhu et al.(2010)analysed several sources of data for evidence of the traffic and behavioural impacts of the I-35W bridge collapse in Minneapolis.Most pertinent to the present paper is their location-specific analysis of linkflows at 24locations;by computing the root mean square difference inflows between successive weeks,and comparing the trend for 2006with that for2007(the latter with the bridge collapse),they observed an apparent transient impact of the bridge col-lapse.They also showed there was no statistically-significant evidence of a difference in the pattern offlows in the period September–November2007(a period starting6weeks after the bridge collapse),when compared with the corresponding period in2006.They suggested that this was indicative of the length of a‘re-equilibration process’in a conceptual sense, though did not explicitly 
compare their empiricalfindings with those of a network equilibrium model.The structure of the remainder of the paper is as follows.In Section2we describe the process of selecting the real-life problem to analyse,together with the details and rationale behind the survey design.Following this,Section3describes the statistical techniques used to extract information on travel times and routing patterns from the survey data.Statistical inference is then considered in Section4,with the aim of detecting statistically significant explanatory factors.In Section5 comparisons are made between the observed network data and those predicted by a network equilibrium model.Finally,in Section6the conclusions of the study are highlighted,and recommendations made for both practice and future research.2.Experimental designThe ultimate objective of the study was to compare actual data with the output of a traffic network equilibrium model, specifically in terms of how well the equilibrium model was able to correctly forecast the impact of a systematic change ap-plied to the network.While a wealth of surveillance data on linkflows and travel times is routinely collected by many local and national agencies,we did not believe that such data would be sufficiently informative for our purposes.The reason is that while such data can often be disaggregated down to small time step resolutions,the data remains aggregate in terms of what it informs about driver response,since it does not provide the opportunity to explicitly trace vehicles(even in aggre-gate form)across more than one location.This has the effect that observed differences in linkflows might be attributed to many potential causes:it is especially difficult to separate out,say,ambient daily variation in the trip demand matrix from systematic changes in route choice,since both may give rise to similar impacts on observed linkflow patterns across re-corded sites.While methods do exist for reconstructing OD and network route patterns from observed link data(e.g.Yang et al.,1994),these are typically based on the premise of a valid network equilibrium model:in this case then,the data would not be able to give independent information on the validity of the network equilibrium approach.For these reasons it was decided to design and implement a purpose-built survey.However,it would not be efficient to extensively monitor a network in order to wait for something to happen,and therefore we required advance notification of some planned intervention.For this reason we chose to study the impact of urban maintenance work affecting the roads,which UK local government authorities organise on an annual basis as part of their‘Local Transport Plan’.The city council of York,a historic city in the north of England,agreed to inform us of their plans and to assist in the subsequent data collection exercise.Based on the interventions planned by York CC,the list of candidate studies was narrowed by considering factors such as its propensity to induce significant re-routing and its impact on the peak periods.Effectively the motivation here was to identify interventions that were likely to have a large impact on delays,since route choice impacts would then likely be more significant and more easily distinguished from ambient variability.This was notably at odds with the objectives of York CC,170 D.Watling et al./Transportation Research Part A46(2012)167–189in that they wished to minimise disruption,and so where possible York CC planned interventions to take place at times of day and 
of the year where impacts were minimised;therefore our own requirement greatly reduced the candidate set of studies to monitor.A further consideration in study selection was its timing in the year for scheduling before/after surveys so to avoid confounding effects of known significant‘seasonal’demand changes,e.g.the impact of the change between school semesters and holidays.A further consideration was York’s role as a major tourist attraction,which is also known to have a seasonal trend.However,the impact on car traffic is relatively small due to the strong promotion of public trans-port and restrictions on car travel and parking in the historic centre.We felt that we further mitigated such impacts by sub-sequently choosing to survey in the morning peak,at a time before most tourist attractions are open.Aside from the question of which intervention to survey was the issue of what data to collect.Within the resources of the project,we considered several options.We rejected stated preference survey methods as,although they provide a link to personal/socio-economic drivers,we wanted to compare actual behaviour with a network model;if the stated preference data conflicted with the network model,it would not be clear which we should question most.For revealed preference data, options considered included(i)self-completion diaries(Mahmassani and Jou,2000),(ii)automatic tracking through GPS(Jan et al.,2000;Quiroga et al.,2000;Taylor et al.,2000),and(iii)licence plate surveys(Schaefer,1988).Regarding self-comple-tion surveys,from our own interview experiments with self-completion questionnaires it was evident that travellersfind it relatively difficult to recall and describe complex choice options such as a route through an urban network,giving the po-tential for significant errors to be introduced.The automatic tracking option was believed to be the most attractive in this respect,in its potential to accurately map a given individual’s journey,but the negative side would be the potential sample size,as we would need to purchase/hire and distribute the devices;even with a large budget,it is not straightforward to identify in advance the target users,nor to guarantee their cooperation.Licence plate surveys,it was believed,offered the potential for compromise between sample size and data resolution: while we could not track routes to the same resolution as GPS,by judicious location of surveyors we had the opportunity to track vehicles across more than one location,thus providing route-like information.With time-stamped licence plates, the matched data would also provide journey time information.The negative side of this approach is the well-known poten-tial for significant recording errors if large sample rates are required.Our aim was to avoid this by recording only partial licence plates,and employing statistical methods to remove the impact of‘spurious matches’,i.e.where two different vehi-cles with the same partial licence plate occur at different locations.Moreover,extensive simulation experiments(Watling,1994)had previously shown that these latter statistical methods were effective in recovering the underlying movements and travel times,even if only a relatively small part of the licence plate were recorded,in spite of giving a large potential for spurious matching.We believed that such an approach reduced the opportunity for recorder error to such a level to suggest that a100%sample rate of vehicles passing may be feasible.This was tested in a pilot study conducted by the project team,with 
dictaphones used to record a100%sample of time-stamped, partial licence plates.Independent,duplicate observers were employed at the same location to compare error rates;the same study was also conducted with full licence plates.The study indicated that100%surveys with dictaphones would be feasible in moderate trafficflow,but only if partial licence plate data were used in order to control observation errors; for higherflow rates or to obtain full number plate data,video surveys should be considered.Other important practical les-sons learned from the pilot included the need for clarity in terms of vehicle types to survey(e.g.whether to include motor-cycles and taxis),and of the phonetic alphabet used by surveyors to avoid transcription ambiguities.Based on the twin considerations above of planned interventions and survey approach,several candidate studies were identified.For a candidate study,detailed design issues involved identifying:likely affected movements and alternative routes(using local knowledge of York CC,together with an existing network model of the city),in order to determine the number and location of survey sites;feasible viewpoints,based on site visits;the timing of surveys,e.g.visibility issues in the dark,winter evening peak period;the peak duration from automatic trafficflow data;and specific survey days,in view of public/school holidays.Our budget led us to survey the majority of licence plate sites manually(partial plates by audio-tape or,in lowflows,pen and paper),with video surveys limited to a small number of high-flow sites.From this combination of techniques,100%sampling rate was feasible at each site.Surveys took place in the morning peak due both to visibility considerations and to minimise conflicts with tourist/special event traffic.From automatic traffic count data it was decided to survey the period7:45–9:15as the main morning peak period.This design process led to the identification of two studies:2.1.Lendal Bridge study(Fig.1)Lendal Bridge,a critical part of York’s inner ring road,was scheduled to be closed for maintenance from September2000 for a duration of several weeks.To avoid school holidays,the‘before’surveys were scheduled for June and early September.It was decided to focus on investigating a significant southwest-to-northeast movement of traffic,the river providing a natural barrier which suggested surveying the six river crossing points(C,J,H,K,L,M in Fig.1).In total,13locations were identified for survey,in an attempt to capture traffic on both sides of the river as well as a crossing.2.2.Fishergate study(Fig.2)The partial closure(capacity reduction)of the street known as Fishergate,again part of York’s inner ring road,was scheduled for July2001to allow repairs to a collapsed sewer.Survey locations were chosen in order to intercept clockwiseFig.1.Intervention and survey locations for Lendal Bridge study.around the inner ring road,this being the direction of the partial closure.A particular aim wasFulford Road(site E in Fig.2),the main radial affected,with F and K monitoring local diversion I,J to capture wider-area diversion.studies,the plan was to survey the selected locations in the morning peak over a period of approximately covering the three periods before,during and after the intervention,with the days selected so holidays or special events.Fig.2.Intervention and survey locations for Fishergate study.In the Lendal Bridge study,while the‘before’surveys proceeded as planned,the bridge’s actualfirst day of closure on Sep-tember11th2000also 
marked the beginning of the UK fuel protests(BBC,2000a;Lyons and Chaterjee,2002).Trafficflows were considerably affected by the scarcity of fuel,with congestion extremely low in thefirst week of closure,to the extent that any changes could not be attributed to the bridge closure;neither had our design anticipated how to survey the impacts of the fuel shortages.We thus re-arranged our surveys to monitor more closely the planned re-opening of the bridge.Unfor-tunately these surveys were hampered by a second unanticipated event,namely the wettest autumn in the UK for270years and the highest level offlooding in York since records began(BBC,2000b).Theflooding closed much of the centre of York to road traffic,including our study area,as the roads were impassable,and therefore we abandoned the planned‘after’surveys. As a result of these events,the useable data we had(not affected by the fuel protests orflooding)consisted offive‘before’days and one‘during’day.In the Fishergate study,fortunately no extreme events occurred,allowing six‘before’and seven‘during’days to be sur-veyed,together with one additional day in the‘during’period when the works were temporarily removed.However,the works over-ran into the long summer school holidays,when it is well-known that there is a substantial seasonal effect of much lowerflows and congestion levels.We did not believe it possible to meaningfully isolate the impact of the link fully re-opening while controlling for such an effect,and so our plans for‘after re-opening’surveys were abandoned.3.Estimation of vehicle movements and travel timesThe data resulting from the surveys described in Section2is in the form of(for each day and each study)a set of time-stamped,partial licence plates,observed at a number of locations across the network.Since the data include only partial plates,they cannot simply be matched across observation points to yield reliable estimates of vehicle movements,since there is ambiguity in whether the same partial plate observed at different locations was truly caused by the same vehicle. 
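The matching ambiguity just described can be illustrated with a toy sketch: every pair of observations sharing a partial plate within a plausible travel-time window is only a candidate match, and several candidates can compete for the same observation. The observation lists and the 2–20 minute window below are invented; this is not the authors' estimation procedure.

```python
from itertools import product

# Observations as (minutes past 08:00, partial plate); values invented for illustration.
upstream   = [(0.0, "A12"), (3.5, "K93"), (7.0, "A12"), (15.0, "C41")]
downstream = [(6.0, "A12"), (12.0, "A12"), (18.0, "Z77"), (24.0, "C41")]

def candidate_matches(origin_obs, dest_obs, min_tt=2.0, max_tt=20.0):
    """All origin/destination pairs sharing a partial plate and a plausible travel time."""
    matches = []
    for (t_o, plate_o), (t_d, plate_d) in product(origin_obs, dest_obs):
        if plate_o == plate_d and min_tt <= t_d - t_o <= max_tt:
            matches.append((t_o, t_d, plate_o))
    return matches

for t_o, t_d, plate in candidate_matches(upstream, downstream):
    print(f"plate {plate}: {t_o:4.1f} -> {t_d:4.1f} min past 08:00 "
          f"(candidate travel time {t_d - t_o:.1f} min)")
```

Four candidates are produced here, of which at most three can be genuine (and possibly fewer); resolving that ambiguity statistically is precisely the task of the methods described in this section.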
Indeed,since the observed system is‘open’—in the sense that not all points of entry,exit,generation and attraction are mon-itored—the question is not just which of several potential matches to accept,but also whether there is any match at all.That is to say,an apparent match between data at two observation points could be caused by two separate vehicles that passed no other observation point.Thefirst stage of analysis therefore applied a series of specially-designed statistical techniques to reconstruct the vehicle movements and point-to-point travel time distributions from the observed data,allowing for all such ambiguities in the data.Although the detailed derivations of each method are not given here,since they may be found in the references provided,it is necessary to understand some of the characteristics of each method in order to interpret the results subsequently provided.Furthermore,since some of the basic techniques required modification relative to the published descriptions,then in order to explain these adaptations it is necessary to understand some of the theoretical basis.3.1.Graphical method for estimating point-to-point travel time distributionsThe preliminary technique applied to each data set was the graphical method described in Watling and Maher(1988).This method is derived for analysing partial registration plate data for unidirectional movement between a pair of observation stations(referred to as an‘origin’and a‘destination’).Thus in the data study here,it must be independently applied to given pairs of observation stations,without regard for the interdependencies between observation station pairs.On the other hand, it makes no assumption that the system is‘closed’;there may be vehicles that pass the origin that do not pass the destina-tion,and vice versa.While limited in considering only two-point surveys,the attraction of the graphical technique is that it is a non-parametric method,with no assumptions made about the arrival time distributions at the observation points(they may be non-uniform in particular),and no assumptions made about the journey time probability density.It is therefore very suitable as afirst means of investigative analysis for such data.The method begins by forming all pairs of possible matches in the data,of which some will be genuine matches(the pair of observations were due to a single vehicle)and the remainder spurious matches.Thus, for example,if there are three origin observations and two destination observations of a particular partial registration num-ber,then six possible matches may be formed,of which clearly no more than two can be genuine(and possibly only one or zero are genuine).A scatter plot may then be drawn for each possible match of the observation time at the origin versus that at the destination.The characteristic pattern of such a plot is as that shown in Fig.4a,with a dense‘line’of points(which will primarily be the genuine matches)superimposed upon a scatter of points over the whole region(which will primarily be the spurious matches).If we were to assume uniform arrival rates at the observation stations,then the spurious matches would be uniformly distributed over this plot;however,we shall avoid making such a restrictive assumption.The method begins by making a coarse estimate of the total number of genuine matches across the whole of this plot.As part of this analysis we then assume knowledge of,for any randomly selected vehicle,the probabilities:h k¼Prðvehicle is of the k th type of partial registration 
plate) (k = 1, 2, ..., m), where \( \sum_{k=1}^{m} h_k = 1 \).
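A hedged back-of-envelope use of the h_k probabilities defined above: if partial plates were assigned independently, the chance that two unrelated vehicles share a plate type would be the sum of the squared h_k, so the number of spurious candidate matches scales roughly with the product of the observation counts at the two sites. The plate-type count and observation counts below are assumptions, and this is only an order-of-magnitude check, not the graphical method of Watling and Maher.

```python
m = 1000                       # assumed number of distinct partial-plate types
h = [1.0 / m] * m              # assumed (uniform) plate-type probabilities, summing to 1
n_origin, n_dest = 800, 750    # assumed observation counts at the two sites

p_share = sum(p * p for p in h)                     # P(two unrelated vehicles share a plate)
expected_spurious = n_origin * n_dest * p_share     # rough count of spurious candidate pairs

print(f"P(two unrelated vehicles share a partial plate) = {p_share:.4f}")
print(f"expected spurious candidate matches ~ {expected_spurious:.0f}")
```

Comparing such a rough expectation against the number of candidate pairs actually formed is one crude way to gauge how many of them could be genuine.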

AAGBI SAFETY GUIDELINE Arterial line blood sampling: preventinghypoglycaemic brain injuryPublished byThe Association of Anaesthetists of Great Britain & Ireland2014SeptemberThis guideline was originally published in Anaesthesia. If you wish to refer to this guideline, please use the following reference:Association of Anaesthetists of Great Britain and Ireland. Arterial line blood sampling: preventing hypoglycaemic brain injury 2014. Anaesthesia 2014, 69: pages 380–385This guideline can be viewed online via the following URL:/doi/10.1111/anae.12536/abstract© The Association of Anaesthetists of Great Britain & IrelandGuidelinesArterial line blood sampling:preventing hypoglycaemic brain injury2014The Association of Anaesthetists of Great Britain and IrelandMembership of the Working Party:T.E.Woodcock,T.M.Cook, K.J.Gupta and A.HartleSummaryDrawing samples from an indwelling arterial line is the method of choice for frequent blood analysis in adult critical care areas.Sodium chloride0.9%is the recommendedflush solution for maintaining the patency of arterial catheters,but it is easy to confuse with glucose-con-taining bags on rapid visual examination.The unintentional use of a glucose-containing solution has resulted in artefactually high glucose concentrations in blood samples drawn from the arterial line,leading to insulin administration causing hypoglycaemia and fatal neuroglycopenic brain injury.Recent data show that it remains a common error for incorrectfluids to be administered as arterial lineflush infusions. Adherence to the National Patient Safety Agency’s2008Rapid Response Report on this topic may not be enough to prevent such errors.This guideline makes detailed recommendations on the prescription,checking and administration of arterial line infusions in adult practice.We also make recommendations about storage,arterial pressure monitoring and sampling systems and techniques.Finally,we make recommendations about glucose monitoring and insulin administration.It is intended that adherence to these guidelines will reduce the frequency of samplecontamination errors in arterial line use and capture events,when they do occur,before they cause patient harm. .................................................................................................... 
This is a consensus document produced by expert members of a Sprint Working Party established by the Association of Anaesthetists of Great Britain and Ireland(AAGBI).It has been seen and approved by the AAGBI Board.Accepted:5November2013•What other guideline statements are available on this topic?In July2008,a National Patient Safety Agency(NPSA)Rapid Response Report was released[1],highlighting examples of patient harm result-ing from glucose-containingflush infusions contaminating blood sam-ples drawn from arterial lines[2,3].Subsequently,it was reported to the Safe Anaesthesia Liaison Group in2011that the NPSA had received169further incident reports,featuring31glucose monitoring errors(personal communication,Prof.D.Cousins,NHS England).The Medicines and Healthcare products Regulatory Authority (MHRA)issued a Drug Safety Update on the issue in2012[4].•Why was this guideline developed?Experiment and experience show that compliance with the procedures required in the NPSA’s2008Rapid Response Report,even with good sampling technique using a simple open arterial line system,is not suf-ficient to prevent injury or death arising from sample contamination error.In a series of102cases where a glucose-containing solution was incorrectly infused,sample contamination error occurred in30(per-sonal communication,Prof.D.Cousins,NHS England).Recent data show that using incorrect arterial linefluid infusions is a common error,with an average of one such event reported to the National Reporting and Learning System every week[5].More than30%of intensive care units have reported recent arterial line errors,with a fur-ther30%reporting errors from operating theatres or the emergency department[5].In one case,neuroglycopenia contributed to the patient’s death[6].•How and why does this statement differ from existing guidelines?The Working Party identified three error-prone processes that can lead to iatrogenic hypoglycaemia.Our recommendations aredesigned to address each of these processes and reduce the likelihood of active errors and latent risks leading to patient harm[7].The error-prone processes are:•use of arterialflush solution(prescription,dispensing,check-ing and administration)•blood sampling technique for glucose concentration measure-ment•administration of insulin to treat apparent hyperglycaemia.•For each error-prone procedure,the Working Party systematically considered whether it was possible or practicable to:eliminate the procedure;apply safeguard technology;use warning or alarm sys-tems;specify training requirements for practitioners;and protect the patient from the consequences of occasional error.The evidence that informs these guidelines mostly concerns arterial blood sampling in adult patients.Our recommendations can also be applied to venous cannulae that areflushed and used for blood sam-pling.There may be additional considerations for paediatric practice that are outside our remit.Basic knowledge for all practitioners‘Dextrose’and‘glucose’are interchangeable biological terms for dextro-rotatory glucose(C6H12O6).Figure1illustrates the arrangement of ‘open’and‘closed’systems for maintaining patency and drawing blood samples fromflushed vascular access systems.The open system has a single three-way tap,close to the vascular catheter,that is used for both removing residualflush solution and obtaining the blood sample.The volume of residualfluid between the sampling point and the blood-stream is referred to as the dead space.To prevent contamination of a blood sample withflush solution,it 
has been recommended that 3× the dead space volume be withdrawn and discarded (or saved for return to the circulation) before a sample is taken [8]. However, bench-top experiments have shown that significant glucose contamination of the blood sample occurs even with 5× dead space removal when using an open arterial line sampling system with a glucose 5% flush solution [9]. A solution of glucose 5% contains approximately 280 mmol.l−1, hence contamination of a 1-ml blood sample with just 0.03 ml flush solution would conceal true hypoglycaemia or make a normal sample hyperglycaemic, potentially leading to inappropriate insulin therapy (see the worked example below). Erroneous administration of glucose 10% or 20% would result in even greater contamination. With sodium chloride 0.9% flush, only gross sample contamination (e.g. due to incorrect sampling technique) will cause dangerous sampling errors. Contamination of a sample with heparinised saline also leads to artefactually increased phosphate concentrations. Erratic or highly varying sequential test results should heighten the suspicion of blood sample contamination error.

[Figure 1. Open and closed systems for sampling from arterial lines. The syringe indicated is for removal of dead space volume and the red arrow indicates the sample drawing point.]

The closed system has a port for removal of dead space beyond a separate sampling point and, if used as intended by its designers, effectively eliminates the risk of significant contamination of the sample [9]. Moreover, it reduces the risk of bacteraemia and minimises wastage of blood because the withdrawn flush and blood can be returned to the circulation without opening the system.

Nervous tissue is not able to sustain functional or basal metabolic activity during hypoglycaemia, and prolonged neural glucose deprivation (neuroglycopenia) leads to permanent or fatal neural injury. Hypoglycaemia reduces conscious level and causes sympathetically mediated symptoms of anxiety, tachycardia, tachypnoea, pupillary dilation and sweating. Beta-blocker therapy blunts some of these symptoms. Hypoglycaemia is difficult to diagnose clinically in patients with an altered level of consciousness and in those treated with exogenous catecholamines. In these patients, practitioners should therefore check for hypoglycaemia in the presence of a new increase in heart rate or respiratory rate, sweating, convulsions, pupillary changes, or a fall in conscious level. Continuous electroencephalography has the potential to detect neuroglycopenia in monitored patients [10]. Fatal neuroglycopenic brain injury can occur within two hours of the onset of hypoglycaemia [2].
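To make the contamination arithmetic above concrete, the short Python sketch below mixes a 1-ml blood sample with 0.03 ml of glucose 5% flush (approximately 280 mmol.l−1) and reports the apparent glucose concentration. The blood glucose values used are illustrative assumptions, not figures from the guideline.

```python
def apparent_glucose(blood_ml, blood_mmol_l, flush_ml, flush_mmol_l=280.0):
    """Glucose concentration measured after a blood sample is diluted by flush solution.

    Simple volume-weighted mixing: (total glucose amount) / (total volume).
    """
    total_glucose = blood_ml * blood_mmol_l + flush_ml * flush_mmol_l
    total_volume = blood_ml + flush_ml
    return total_glucose / total_volume

# Illustrative values: a truly hypoglycaemic sample (2.5 mmol/l) and a normal one (5.0 mmol/l),
# each contaminated with just 0.03 ml of glucose 5% flush (~280 mmol/l).
for true_glucose in (2.5, 5.0):
    measured = apparent_glucose(blood_ml=1.0, blood_mmol_l=true_glucose, flush_ml=0.03)
    print(f"true {true_glucose:.1f} mmol/l -> measured {measured:.1f} mmol/l")
# Both results read as apparent hyperglycaemia (~10.6 and ~13.0 mmol/l),
# which is how contamination can conceal true hypoglycaemia and prompt insulin therapy.
```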
Neuroglycopenia is therefore not reliably prevented by routine checking of glucose levels(e.g.once per nursing shift)with blood from an alter-native site.Sodium chloride0.9%with glucose5%has been recommended as an intravenousfluid for use in paediatric practice[11].Figure2illus-trates how thisfluid is particularly easy to confuse with sodium chloride 0.9%.Currently,at least twelve otherfluids containing combinations of sodium chloride,glucose and potassium exist for use in a variety of clin-ical circumstances.They all carry increased risk of misidentification, accidental use as arterial lineflush solution and glucose contamination of blood samples.Peripheral glucose testing by pricking afinger or ear lobe may be inac-curate in patients with poor peripheral perfusion,on vasopressor therapy, or with severe peripheral oedema[12].Continuous intravascular glucose(a)(b)Figure2Appearance of sodium chloride0.9%and glucose5%fluid bag (a)when placed inside a pressure bag and(b)when removed,showing the ease of confusion with plain sodium chloride.Reproduced from[6], with permission.monitoring is under development for introduction into clinical practice [13,14],but until that technique is established,it is necessary to draw frequent blood samples for glucose analysis to guide insulin therapy in critically ill patients.RecommendationsThe Working Party’s recommendations apply to any clinical areas where blood is sampled from arterial access devices and includes critical care areas,operating theatres and emergency departments.Policy and trainingHospitals must raise awareness among relevant staff of the serious patient harm that can result from arterial line sample contamination errors and must have a policy that defines local procedures for arterial line use,including prescribing,administering and monitoringflush solutions and blood sampling technique.Policies about the prescribing,administering and monitoring of intravenous infusions and policies about intravenous insulin therapyshould be cross-referenced to the arterial line and blood sampling policy.All staff involved in the insertion of,management of,or sampling from,arterial lines must be appropriately trained and competent to deli-ver the standards set out in the policy,and performance against these standards should be regularly audited.Fluids forflush infusionsSodium chloride0.9%,with or without heparin,should be the only solu-tion to be used for arterial line infusion andflushing.Blood sampling from a cannula lumen that carries other solutions is not recommended. 
Identifying arterial lines
Arterial infusion lines must be clearly identified; labels and colour differentiation are appropriate measures to achieve this.

Fluid stock and storage
In clinical areas that use arterial lines, bags of sodium chloride 0.9% for use as arterial line flush should be stored away from fluids for intravenous use. Bags should be stored in a suitable receptacle and not scattered across a shelf.
Only those fluid solutions in regular use should be stored in a clinical area. For example, sodium chloride 0.9% with glucose 5%, which is recommended for use in paediatric practice, should not be stored in clinical areas where paediatric practice is unusual. If practicality dictates that such solutions must be stored, an additional risk assessment and management plan must be made to prevent their erroneous use.

Prescription and setting up
An arterial line flush solution must be documented by prescription or record of administration (e.g. anaesthetic chart) or standard operating procedure as defined in the hospital's arterial line policy.
The flush solution must be independently double-checked by a second practitioner before setting up and attaching to an arterial line. Also known as independent validation, an independent double-check of a high-alert medication is a procedure in which two clinicians separately check, alone and apart from each other, then compare results of each component of prescribing, dispensing and verifying the high-alert medication before administering it to the patient [15].

Pressurising devices
All pressurising devices must be designed to permit unimpaired inspection of the contained flush infusion bag while in use. A fully transparent front panel is strongly recommended.

Checking during use
The flush infusion bag must be independently double-checked at least once during each nursing shift and whenever nursing care of the patient is handed over. This double-check must include removal of the flush bag from its pressurising device (Fig. 2).

Sampling techniques
To avoid sample contamination with flush infusions, the use of 'closed' arterial line sampling systems is recommended.
Where an 'open' system is to be used for blood sampling, inevitable contamination must be kept to a minimum by making the dead space volume between the sampling port and the arterial lumen as small as practicable in the clinical situation. The syringe used for removal of dead space volume must be readily distinguishable from the sampling syringe. Throughout the sampling process and until the sample syringe is removed, the sampling technique must avoid flush solution entering the dead space, the sample or any three-way tap at the sampling site.
Glucose concentration thresholdsWhen an arterial line is used to take blood samples for measurement of blood glucose concentrations,a value that is unexpectedly high must trigger a medical review and a check of the blood sampling system for possible sample contamination error.The source of the blood sample should be checked to ensure no possibility of sample contamination.If this is not possible,a confirmatory sample must be drawn from the most appropriate alternative site.Initiating and increasing insulin infusionsBefore commencing an insulin infusion in a patient not previously known to be an insulin-dependent diabetic and before increasing an insulin infusion rate above a policy-defined threshold(e.g.6IU.hÀ1)on the basis of samples drawn from aflushed line,there must be a medical review.The source of the blood sample should be checked to ensure no possibility of sample contamination.If this is not possible,a confirma-tory sample must be drawn from the most appropriate alternative site.Abnormal blood testsA blood test that shows an unexpected abnormality or unusual variation from previous results should prompt a check of the source of the blood sample to ensure no possibility of sample contamination.If this is not possible,a confirmatory sample must be drawn from the most appropri-ate alternative site.Recording trends in measured glucose levels and physiological variablesVariations in blood chemistry and vital signs are easier to appreciate from a graphic trend display,which may facilitate earlier detection of hypoglycaemia.Graphic trend displays,particularly of glucose readings and of vital signs including heart rate and respiratory rate,are recom-mended.Monitoring for signs of hypoglycaemiaHypoglycaemia should be considered and specifically checked for in any sedated or unconscious patient receiving insulin therapy who exhibits a new increase in heart rate or respiratory rate,or sweating,pupillary changes or a fall in conscious level.Incident reportingAny incident causing potential or actual patient harm related to con-tamination of blood samples obtained from an arterial line should be reported to both local and national incident reporting systems.Any incident causing patient harm must be disclosed to the patient or his/ her next of kin in accordance with local policy.National monitoring of incidentsNational patient safety organisations should monitor the occurrence of incidents relating to arterial line andflush infusion errors to determine whether further actions are needed to reduce their incidence. Engineered solutionsEquipment manufacturers,pharmaceutical suppliers and clinicians should engage in a collaboration to develop safer systems for the prevention of arterial cannula clotting.Potential solutions to be considered will include:•highly visible and easily distinguishablefluid bags for exclusive use with arterial pressure monitoring and sampling systems.•special connections between thefluid bag and the arterial pressure monitoring and sampling system.•integrated systems,possibly incorporating both the above solutions.Supporting healthcare providers when errors occurWhen errors occur,staff and organisations must“promise to learn and commit to act”[16].It is often the systems,procedures,conditions,envi-ronment and constraints faced by healthcare providers that lead to patient safety problems.Rather than blame individuals,trust should be placed in the goodwill and good intentions of the staff and attention focused on learning from(and remembering)errors[16,17]. 
AcknowledgementsWe are grateful to all individuals and organisations who provided com-ments on thefirst draft of this guideline,and to the following people who attended a Consensus Conference for Stakeholders at21Portland Place on10October2013:Chris Quinn(Beckton Dickson);Iswori Thakuri and Markku Ahtiainen(British Anaesthetic and Recovery Nurses Association); Catherine Plowright(British Association of Critical Care Nurses);Tim Lewis(College of Operating Department Practitioners);Jane Harper (Intensive Care Society);Caroline Wilson(Group of Anaesthetists in Training);Archie Naughton(AAGBI Lay Representative);Desmond Wat-son(Medical and Dental Defence Union of Scotland);Gary Duncum and Hayley Bird(Smiths Medical);Sharon Maris(Teleflex);Clare Crowley and Mark Tomlin(UK Clinical Pharmacy Association);Graham Milward (Vygon).Competing interestsNo external funding and no competing interests declared. References1.National Patient Safety Agency.Infusions and sampling from arterial lines.RapidResponse Report.NPSA/2008/RRR006./resources/?EntryId45=59891(accessed01/08/2013).2.Sinha S,Jayaram R,Hargreaves CG.Fatal neuroglycopaenia after accidental use of aglucose5%solution in a peripheral arterial cannulaflush system.Anaesthesia 2007;62:615–20.3.Panchagnula U,Thomas AN.The wrong arterial lineflush solution.Anaesthesia2007;62:1077–8.4.Medicines and Healthcare products Regulatory Authority.Glucose solutions:falseblood glucose readings when used toflush arterial lines,leading to incorrect insulinadministration and potentially fatal hypoglycaemia.Drug Safety Update2012;5: A2./Safetyinformation/DrugSafetyUpdate/CON175433 (accessed01/08/2013).5.Leslie R,Gouldson S,Habib N,et al.Management of arterial lines and blood sam-pling in intensive care:a threat to patient safety.Anaesthesia2013;68:1114–19.6.Gupta KJ,Cook TM.Accidental hypoglycaemia caused by an arterialflush drug error:a case report and contributory causes analysis.Anaesthesia2013;68:1178–87.7.Reason J.Human error:models and management.British Medical Journal2000;320:768–70.8.Burnett RW,Covington AK,Fogh-Andersen N.Recommendations on whole bloodsampling,transport,and storage for simultaneous determination of pH,blood gases,and electrolytes.International Federation of Clinical Chemistry Scientific Divi-sion.Journal of the International Federation of Clinical Chemistry1994;6:115–20.9.Brennan KA,Eapen G,Turnbull D.Reducing the risk of fatal and disabling hypoglyca-emia:a comparison of arterial blood sampling systems.British Journal of Anaesthesia 2010;104:446–51.10.Remvig LS,Elsborg R,Sejling AS,et al.Hypoglycemia-related electroencephalogramchanges are independent of gender,age,duration of diabetes,and awareness status in type1diabetes.Journal of Diabetes Science and Technology2012;6:1337–44. 
11. National Patient Safety Agency. Reducing the risk of hyponatraemia when administering intravenous infusions to children. Patient Safety Alert. NPSA/2007/22. /resources/?EntryId45=59809 (accessed 01/08/2013).
12. Jacobi J, Bircher N, Krinsley J, et al. Guidelines for the use of an insulin infusion for the management of hyperglycemia in critically ill patients. Critical Care Medicine 2012; 40: 3251–76.
13. Beier B, Musick K, Matsumoto A, Panitch A, Nauman E, Irazoqui P. Toward a continuous intravascular glucose monitoring system. Sensors (Basel) 2011; 11: 409–24.
14. Romey M, Jovanovic L, Bevier W, Markova K, Strasma P, Zisser H. Use of an intravascular fluorescent continuous glucose sensor in subjects with type 1 diabetes mellitus. Journal of Diabetes Science and Technology 2012; 6: 1260–6.
15. ISMP Medication Safety Alert! Nurse Advise-ERR, Volume 6, Edition 12, December 2008. /Newsletters/nursing/Issues/NurseAdviseERR200812.pdf (accessed 13/10/2013).
16. National Advisory Group on the Safety of Patients in England. A promise to learn, a commitment to act – improving the safety of patients in England. https://www./government/uploads/system/uploads/attachment_data/file/226703/Berwick_Report.pdf (accessed 13/10/2013).
17. Smith A. Lest we forget: learning and remembering in clinical practice. Anaesthesia 2013; 68: 1099–103.

21 Portland Place, London, W1B 1PY
Tel: 020 7631 1650
Fax: 020 7631 4352
Email: info@

Marketing Vocabulary Summary (营销词汇_汇总)


1.Accessible 可接触的,可达到的2.Accessory 附件3.Accumulated Production 累积产量4.Acquisition 获取,并购5.Actionable 可行动的6.Actors 演员,参与者7.Actual Product 实际产品8.Adaptive 可适应调节的9.Administered VMS 管理型垂直营销系统10.Adopter 采用者11.Adoption 采用12.Advertising 广告13.Affordable 付得起的14.Affordable Method 量入为出法15.AIDA “阿依达”准则(关注,兴趣,欲望,行动)16.Alternative 选项17.Appeal 诉求18.Assortment 多样的汇合19.Audience 受众20.Augmented Product 扩展产品21.Available 可得到的22.Average Cost 平均成本23.Awareness 知晓24.Behavioral Segmentation 行为细分25.Brand contact 品牌接触点26.Brand Equity 品牌资产27.Brand Extensions 品牌延伸28.Brand Sponsorship 品牌持有29.Branding 运用品牌30.Break-Even Analysis 盈亏平衡分析31.Business Analysis 经营分析32.Business Market 企业市场33.Business Portfolio 业务组合34.Business Unit 业务单元35.Buyer-readiness stage 购买者就绪阶段36.Buzz marketing 口碑营销37.By-Product Pricing 副产品定价38.Captive-Product Pricing 必用品定价39.Capture 捕获40.Cash Discount 付现折扣41.Cash Rebate 返现金42.Celebrity 名人43.Channel behavior 渠道行为44.Channel conflict 渠道冲突45.Channel level 渠道层次46.Channel member 渠道成员47.CLV 顾客终身价值48.Co-Brand 联合品牌49.Cognitive Dissonance 认知失调municable 可传播的pany Resources 公司资源petitive Advantage 竞争优势petitive-Parity Method 竞争看齐法plex Buying Behavior 复杂购买行为55.Concentrated Marketing 集中化营销56.Consumer Involvement 消费者卷入57.Consumerism 消费维权/主义58.Consumer-oriented marketing 消费者导向营销59.Contractual VMS 合同型VMS60.Conventional distribution channel 传统分销渠道61.Core Benefit 核心利益62.Corporate Level 公司总部层次63.Corporate VMS 公司型VMS64.Cost-Based Pricing 基于成本的定价65.Cost-Plus Pricing 成本加成定价66.Coupons 折价/优惠券67.Criterion/Criteria (判定)准则/标准68.CRM 客户关系管理69.Customer Equity 顾客资产70.Customer Perceived Value 顾客感知价值71.Customer Profitability 顾客盈利性72.Customer Satisfaction 顾客满意73.Customer-Driven 顾客驱动74.Customer-value marketing 顾客价值营销75.Dealer 经销商76.Deal-Prone 优惠依赖77.Deceptive practice 欺瞒性做法78.Decline Stage 衰退期79.Deficient products 不良产品80.Delight 高兴,欣喜81.Deliver 交付,提供82.Demand Curve 需求曲线83.Demands 需求84.Demographic Segmentation 人口统计细分85.Demography 人口统计学86.Desirable products 可取产品87.Differentiable 可差异的88.Differentiated Marketing 差异化营销89.Direct marketing 直接营销90.Disintermediation 去中介,脱媒91.Dissonance-Reducing Buying Behavior 减少失调购买行为92.Distinct 截然不同的93.Distribution channel 分销渠道94.Distributor 分销商95.Drop Strategy 放弃战略96.Dynamic Pricing 动态定价97.Early Adopters 早期采用者98.Early Majority 早期大众99.Emotional 情感的/情绪的100.Enlightened marketing 开明营销101.Environmental Forces 环境力量102.Environmentalism 环保主义103.Evaluate 评估104.Evaluation Of Alternatives 选项评估105.Excessive markup 过分加价106.Exclusive distribution 专营性分销107.Expectation 期望108.Experience 体验109.Experience Curve 经验曲线110.False wants 不切实际的/虚假的需求111.Fixed Cost 固定成本112.Franchise organization 特许组织113.Functional Strategy 职能战略114.Gender Segmentation 性别细分115.Geodemographics 地理人口的116.Geographic Segmentation 地理细分117.Geographical Pricing 地理定价118.Good-Value Pricing 超值/物有所值定价119.Growth Stage 成长期120.Habitual Buying Behavior 习惯性购买行为121.Harvest Strategy 收割战略122.Horizontal Marketing System 水平营销系统123.IMC 整合营销传播124.Indirect marketing channels 间接营销渠道125.Individual Marketing 个体化营销126.Industrial Product 工业产品127.Inelastic Demand 非弹性需求rmation Search 寻找信息129.Innovation 创新130.Innovative marketing 创新性营销131.Innovators 创新者132.Intangible 无形的133.Integrated 整合的134.Intensive distribution 密集性分销135.International Pricing 国际定价136.Introduction Stage 导入期beling 用标签ggards 落后者te Majority 后期大众140.Level Off 持平141.Licensed Brand 特许品牌142.Line Extensions 产品线延伸143.List Price 公布价格144.Local Marketing 本地化营销145.Logistics Information Management 物流信息管理146.Loss Leader 牺牲品147.Loyalty 忠诚148.Loyalty Status 忠诚状态149.Macroenvironment 宏观环境150.Main Product 主产品151.Maintain Strategy 维持战略152.Manufacturer 
Brand 制造商品牌153.Market Offering 营销物154.Marketer 营销者155.Marketing channel 营销渠道156.Marketing communication 营销传播157.Marketing Concept 营销理念158.Marketing Effort 营销努力/工作159.Marketing ethics 营销伦理160.Marketing Intermediary 营销中介161.Marketing logistics 营销物流162.Marketing Mix 营销组合163.Marketing Program 营销方案164.Marketing Strategy 营销战略165.Marketplace 市场166.Markup 加价167.Mass Customization 大众化定制168.Mass Marketing 大众化营销169.Matching 匹配170.Materialism 物质主义171.Maturity Stage 成熟期172.Measurable 可度量的173.Microenvironment 微观环境174.Micromarketing 微观化营销175.Mission 愿景176.Moral 道义177.Multibrands 多品牌178.Multiple Segmentation Bases 多重细分基础179.Need Recognition 认识需要180.Needs 需要181.Niche Marketing 缝隙影响182.Nonpersonal 非人际的183.Non-Price Strategy 非价格战略184.Objective-and Task Method 目标任务法185.Occasion Segmentation 场合/时机细分186.Opinion leader 意见领头人187.Optional-Product Pricing 选配品定价188.Outlet 零售终端189.Overhead 费用190.Penetration Pricing 渗透定价191.Percentage-of-Sales Method 销售百分比法192.Personal selling 人员推销193.Physical Distribution 实物分销194.Planned obsolescence 计划性陈旧195.PLC 产品寿命周期196.Pleasing products 愉悦产品197.POP (Point-of-purchase) 售卖点198.Positioning 定位199.Positioning Map 定位图200.Postpurchase Behavior 购后行为201.Predatory 掠夺性的202.Preemptive 先发制人的203.Press release 新闻发布204.Price Adjustment 价格浮动/调节205.Price Change 变价206.Price Elasticity 价格弹性207.Pricing Power 定价权208.Private Brand 商家品牌209.PRM 伙伴关系管理210.Proactive 主动的211.Product Attribute 产品属性212.Product Bundle Pricing 产品组合定价213.Product Category 产品品类214.Product Class 产品种类215.Product Concept 产品理念(5种理念之一) 216.Product Item 产品项217.Product Life-Cycle 产品生命周期218.Product Line 产品线219.Product Line Filling 产品线填充220.Product Line Pricing 产品线定价221.Product Line Stretching 产品线拉伸222.Product Mix 产品组合223.Product Mix Consistency 产品组合一致性224.Product Mix Depth 产品组合深度225.Product Mix Length 产品组合长度226.Product Mix Width 产品组合宽度227.Product Portfolio 产品组合228.Product Position 产品位置229.Product-Form Segment Pricing 产品形式细分定价230.Production Concept 生产理念231.Profitable 有盈利的232.Promotion mix 推广组合233.Promotional Pricing 推广定价234.Psychographic Segmentation 心理图案学细分235.Psychological Pricing 心理定价236.Public relation 公共关系237.Publics 公众238.Pull strategy 拉式战略239.Purchase Decision 购买决定240.Push strategy 推式战略241.Quantity Discount 数量折扣242.Rational 理性的243.Reference Price 参考价格244.Reseller 再销售商245.Retailer 零售商246.Retention 维系247.Revenue 收入248.Sales promotion 促销249.Salutary products 有益产品250.Segment 细分市场251.Segmentation 细分252.Segmentation Base 细分基础253.Segmentation Variable 细分变量254.Segmented Marketing 细分营销255.Segmented Pricing 细分定价256.Selective distribution 选择性分销257.Selling Concept 推销理念258.Sense-of-mission marketing 使命感营销259.Shoddy product 劣质产品260.Skimming Pricing 撇脂定价261.Social criticisms 社会批评262.Social goods 公共产品263.Socially Responsible Marketing 对社会负责任的营销264.Societal marketing 社会营销265.Societal Marketing Concept 社会营销理念266.Strategic Planning 战略计划267.Structural Attractiveness 结构性吸引力268.Substantial 足量的269.Superior 优异的270.Supply Chain 供应链271.Sustainable Marketing 可持续的营销272.Target Costing 目标成本法273.Target Profit Pricing 目标利润定价274.Targeting 确定目标275.Technological Environment 技术环境276.Temporary 暂时的277.Total Cost 总成本278.Trade Discount 中间商折扣279.Trade-In Allowance 以旧换新补贴280.Trial 试用281.Two-Part Pricing (服务业中)两分定价282.Unanticipated Situational Factor 突发情况因素283.Undifferentiated Marketing 无差异营销284.Value delivery network 价值交付网络285.Value Proposition 价值主张286.Value-Added Pricing 高附加值定价287.Value-Based Pricing 基于价值的定价288.Variability 变化性289.Variable Cost 变动成本290.Variety-Seeking Behavior 寻求变化购买行为291.Vertical Marketing System 垂直营销系统292.Wants 欲望293.Warehousing 仓储294.Wholesaler 
批发商295.Word-of-mouth influence 口碑。

nrd4163 The role of ligand efficiency metrics in drug discovery

The properties of small-molecule drugs, especially those that are orally bioavailable, are concentrated in a relatively narrow range of physicochemical space known as ‘drug-like’ space1,2. In studies of extensive data sets of small molecules, the fundamental properties of molecular size, lipophilicity, shape, hydrogen-bonding properties and polarity have been correlated — to varying degrees — with solubility 3, membrane permeability 4, metabolic stability 5,6, receptor promiscuity 7, in vivo toxicity 8,9 and attrition10,11 in drug development pipelines. Lipophilicity and hydrogen bond donor count seem to be key properties, as they have remained essentially constant in oral drugs over time7,12–15. The median and mean molecular mass of approved drugs has risen by only around 50 Da (15%) over the past three decades, whereas the median and mean molecular mass of synthesized experimental compounds has risen by over 100 Da (30%)16. By contrast, the molecules that are being published in the literature15 and patented by the pharmaceutical industry 17, as well as those entering clinical development pipelines10, are more lipophilic, larger and less three-dimensional14,18 than approved oral drugs. However, analyses indicate that compounds that have a higher molecular mass and higher lipophilicity have a higher probability of failure at each stage of clinical development10,19,20. The control of physicochemical properties is dependent on the specific drug target, the mode of perturbation and the target product profile — all of which may just­ ify developing compounds that lie beyond the norm — and it is also dependent on the variable drug discovery practices of originating institutions7,17. Individual drug discovery projects often justify the pursuit of molecules that have additional risks associated with suboptimal physicochemical properties, as long as experimental project goals and the target product profile criteria are met. However, when viewed in aggregate at a company portfolio level, the physicochemical properties of investigational drugs can have an important influence on the overall attrition rates10,19,20 and therefore ultimately on the return on investment. Three factors have been proposed to underlie the observed inflation in physicochemical properties21,22 of investigational drugs over the past three decades. First, the discovery of initial hit compounds with inflated physicochemical properties has been linked to the rise of high-throughput screening (HTS)23. Larger and more lipophilic compounds, potentially with a higher binding affinity, are more likely to be detected in HTS assays, which are often based on a single affinity end point. Second, the tendency of HTS methods to identify large and lipophilic compounds is amplified by the observed tendency of the lead optimization process to inflate physicochemical properties24,25. Third, the portfolio of drug targets being tackled by the industry includes a growing number of targets that are less druggable than those pursued previously, which may justify the development of compounds with less optimal physicochemical properties20. We believe that the overemphasis on potency, as well as the associated tendency to inflate physicochemical properties, can be remedied by monitoring and

Reduction of Target Gene Expression by a Modified U1 snRNA


M OLECULAR AND C ELLULAR B IOLOGY ,0270-7306/01/$04.00ϩ0DOI:10.1128/MCB.21.8.2815–2825.2001Apr.2001,p.2815–2825Vol.21,No.8Copyright ©2001,American Society for Microbiology.All Rights Reserved.Reduction of Target Gene Expression by a Modified U1snRNAS.A.BECKLEY,1P.LIU,1M.L.STOVER,1S.I.GUNDERSON,2A.C.LICHTLER,1ANDD.W.ROWE 1*Department of Genetics and Developmental Biology,University of Connecticut Health Center,Farmington,Connecticut 06030,1and Department of Molecular Biology and Biochemistry,Rutgers University,Piscataway,New Jersey 088542Received 3August 2000/Returned for modification 28September 2000/Accepted 17January 2000Although the primary function of U1snRNA is to define the 5؅donor site of an intron,it can also block the accumulation of a specific RNA transcript when it binds to a donor sequence within its terminal exon.This work was initiated to investigate if this property of U1snRNA could be exploited as an effective method for in-activating any target gene.The initial 10-bp segment of U1snRNA,which is complementary to the 5؅donor sequence,was modified to recognize various target mRNAs (chloramphenicol acetyltransferase [CAT],␤-ga-lactosidase,or green fluorescent protein [GFP]).Transient cotransfection of reporter genes and appropriate U1antitarget vectors resulted in >90%reduction of transgene expression.Numerous sites within the CAT transcript were suitable for targeting.The inhibitory effect of the U1antitarget vector is directly related to the hybrid formed between the U1vector and target transcripts and is dependent on an intact 70,000-molecular-weight binding domain within the U1gene.The effect is long lasting when the target (CAT or GFP)and U1antitarget construct are inserted into fibroblasts by stable transfection.Clonal cell lines derived from stable transfection with a pOB4GFP target construct and subsequently stably transfected with the U1anti-GFP con-struct were selected.The degree to which GFP fluorescence was inhibited by U1anti-GFP in the various clonal cell lines was assessed by fluorescence-activated cell sorter analysis.RNA analysis demonstrated reduction of the GFP mRNA in the nuclear and cytoplasmic compartment and proper 3؅cleavage of the GFP resid-ual transcript.An RNase protection strategy demonstrated that the transfected U1antitarget RNA level varied between 1to 8%of the endogenous U1snRNA level.U1antitarget vectors were demonstrated to have potential as effective inhibitors of gene expression in intact cells.Reducing the output of a target gene has a prominent role in therapeutic strategies for heritable diseases resulting from a dominant negative mutation and in assessing gene function during development.While inactivation at the level of the gene is most definitive,current approaches are time-consuming (22,62)or are still in early stages of development (19,43).Target-ing the mRNA transcripts of a specific gene with antisense oligonucleotides (77)or genes that express an antisense RNA (67)or a ribozyme (39)has shown variable success.Since no clear effector design has proven to be superior,new strategies are continually being introduced.In particular,imbedding the antisense or ribozyme effector within expression loci of snRNA or tRNA genes is proving to have a distinct advantage of high expression and nuclear localization (8).For example,an anti-HIV ribozyme imbedded within a U1snRNA-derived vector reduced the expression of HIV RNA transcripts by 60%within Xenopus laevis oocytes (59).Subsequently,stable trans-fection of the same effector into Jurkat cells 
dramatically re-duced intracellular HIV transcript levels (58).Ribozymes in-corporated into the U1snRNA gene reduced fibrillin 1gene expression in cell culture (60).Antisense delivered within the U7snRNA gene inhibited the expression of aberrantly spliced ␤-globin mRNA by 60%in a ␤-thalassemia cell line (79).Neuregulin-1was significantly reduced in developing chick em-bryos by expression of multiple ribozymes imbedded in a tRNAgene and delivered to the chick in the context of a replication competent retrovector (85).Further improvements in the de-sign of the chimeric tRNA-ribozyme construct have increased catalytic activity (46,57).Here,we report an alternative approach for reducing the mRNA output of a target gene using a modified U1snRNA transcript as the effector.The first 10-nt of the human U1snRNA gene,which normally binds to 5Јss (CAG ԽGTAAGTA [vertical bar shows splice site])in pre-mRNA (6,34,48,61),were replaced by a sequence complementary to a 10-nt seg-ment in the terminal exon of the target mRNA.While this U1targeting strategy,like ribozyme and antisense methods,de-pends on the formation of an RNA-RNA hybrid,a mechanism different from antisense mediated RNase H destruction (26),antisense mediated inosine substitutions (44),or ribozyme cleavage (51,80)is utilized.Rather,binding of the U1snRNA effector to a terminal exon appears to interfere with posttran-scriptional processing of that transcript,resulting in reduced accumulation of that mRNA (23,37).U1snRNA is a compo-nent of the U1snRNP complex,which also contains seven common snRNP proteins and three specific U1snRNP pro-teins (73,74,83).It initiates spliceosome association with pre-mRNA by defining the 3Јboundary of exons (71).As the splicing reaction proceeds,U1snRNP and the other spliceo-some components are sequentially released from the transcript (41).Factors that affect the dissociation of U1snRNP from a transcript have been found to control mRNA expression in several natural and engineered situations.Persistent binding of U1snRNA to a ␤-globin transcript containing a mutant splice donor site is postulated to account for low ␤-globin accumu-*Corresponding author.Mailing address:Department of Genetics and Developmental Biology,Mail Code 1231,University of Connect-icut Health Center,263Farmington Ave.,Farmington,CT 06030.Phone:(860)679-2324.Fax:(860)679-8345.E-mail:drowe@.2815at Penn State Univ on February 7, 2008 Downloaded fromlation in certain forms of␤-thalassemia(10).Failure of the splicing reaction to remove this segment of RNA by exon skipping results in nuclear retention of the transcript.This mechanism for inhibiting RNA expression can be overcome by the HIV translocation protein,REV,(5,15,64)or engineered suppressors of mutations,e.g.,U1snRNA containing sequence complementary to mutant5Јss(18,33,86).A second factor affecting RNA processing is the proximity of the major splice donor site to the pA signal.In the HIV genome,the pA signal within the5Јlong terminal repeat is located immediately downstream of the transcription start site and upstream of the major5Јss(2).In this orientation,U1snRNA binds to the5Јss and suppresses the upstream pA,allowing formation of the full-length transcript.However,placing this5Јss site further from this pA signal reduces expression of the full-length tran-script because it is truncated at the now-activated upstream cleavage-pA site(3).Persistent U1snRNA binding to a site in proximity to the pA signal may account for the observation that a cryptic or an unpaired5Јss within the 
terminal exon of an mRNA also prevents cytoplasmic accumulation of that mRNA, such as within the mouse polyomavirus,the BPV,and the U1A gene(23,29,37).We reasoned that directing a modified U1snRNA to a unique sequence within the terminal exon of a target gene would reduce the amount of target RNA accumulating in the cytoplasm.The reduction in gene expression would occur as a consequence of U1snRNA binding either by interfering with the splicing reaction,inhibiting the cleavage-pA reaction,or blocking nucleo-cytoplasmic transport.Thus,the sequence of the human U1snRNA gene was modified to specifically com-plement coding sequence in the targeted transgenes coding for CAT,␤-Gal,or GFP(eGFP;Clontech).The magnitude,spec-ificity,adaptability,and persistence of the U1snRNA-based inhibition were assessed by measuring the reduction in levels of transgene RNA and protein following transient and stable transfection of the modified U1snRNA vectors and transgene expression constructs.MATERIALS AND METHODSAbbreviations.The following abbreviations have been used in this work:5Јss, 5Јsplice sites;70K,70,000molecular weight;␤-Gal,beta-galactosidase;BPV, bovine papillomavirus;CAT,chloramphenicol acetyltransferase;FACS,fluores-cence-activated cell sorter;GFP,greenfluorescent protein;GAPDH,glyceral-dehyde-3-phosphate dehydrogenase;GTC,guanidinium thiocyanate containing 7␮l of␤-mercaptoethanol per100ml(17);HIV,human immunodeficiency virus; MOPS,morpholinepropanesulfonic acid;nt,nucleotides;pA,polyadenylation; PAP,poly(A)polymerase;PBS,phosphate-buffered saline;RFU,relative log fluorescent units;RSV,Rous sarcoma virus;SDS,sodium dodecyl sulfate;SET buffer,1%SDS–1mM EDTA–10nM Tris buffer;SV40,simian virus40;TK, thymidine kinase;UTR,untranslated region.U1targeting constructs.The parental recombinant U1snRNA gene(63) consists of thefive snRNA-specific enhancer elements in the315-bp promoter, the U1coding sequence,and a unique3Јtermination sequence(Fig.1A,panel i).The wild-type U1snRNA will be referred to herein as U1snRNA.Modified constructs will be identified as U1antitarget gene followed by thefirst base number of targeted sequence,e.g.,U1anti-␤-Gal1800.The target numbering begins from the AUG translation start codon.U1antitarget vectors were created by PCR-mutagenesis of the5Јsequence, between basesϩ1andϩ10,which normally complements the5Јsplice donor. 
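Since each U1 antitarget vector simply replaces the first 10 nt of U1 snRNA with a sequence complementary to a 10-nt stretch of the target mRNA, the recognition sequence can be derived mechanically as the reverse complement of the target segment. The Python sketch below is an illustrative reconstruction of that step (it is not the authors' protocol); the example target is the CAT488 segment listed in Table 1, and the printed result matches the complementary core embedded in the corresponding mutagenic primer.

```python
# Minimal sketch: derive the 10-nt recognition sequence of a U1 antitarget vector
# (the reverse complement of the chosen 10-nt target segment, written 5'->3' as DNA).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def u1_recognition_sequence(target_10nt: str) -> str:
    """Return the 10-nt sequence that base-pairs with the given target segment."""
    target = target_10nt.upper()
    assert len(target) == 10, "the modified U1 5' end replaces exactly 10 nt"
    return target.translate(COMPLEMENT)[::-1]   # complement, then reverse

# Target segment of the CAT mRNA at position 488 (from Table 1): 5'-CAGGTTCATC-3'
print(u1_recognition_sequence("CAGGTTCATC"))    # -> GATGAACCTG, the core found in the
                                                #    primer 5'GGCCCAAGATCTCAGATGAACCTGGCAGG3'
```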
The5Ј(mutagenic)primers(Table1and Fig.1A,panel i)contain a proximal Bgl II restriction site(underlined)for insertion into positionϪ8bp in the U1 promoter.The3Ј(selection)primer(5ЈAGTGCCAAGCTTGCATGCCAGCA GGTC3Ј)extends through the U1termination sequence and into the pUC18 polylinker,terminating with a Hin dIII site.A base change(underlined)was made to destroy a Pst I site proximal to the Hin dIII site to allow selection against plasmids containing the original gene.The PCR product was digested with a combination of Bgl II,Hin dIII,and Pst I.The resulting clones were selected by the absence of the Pst I site.The same strategy was used to adapt U1BPV and U1BPV*⌬loop1(30).Because these constructs are located in an RNA expres-sion plasmid(SP6),they had to be adapted to the U1gene construct used for the cell expression studies.The5Ј-mutagenic oligonucleotide consisted of the Bgl II adapter,the CAT737recognition sequence,and an11bp sequence that over-lapped U1BPV and U1BPV*⌬loop1from bp11to22(Table1).The3Јselection oligo(5ЈAGT CTA GA T CTA CTT TTG AAA C TC CAG AAA GT C AGG GGA AAG CGC GAA CG3Ј)consisted of18bp that overlapped U1BPV and U1BPV*⌬loop1at bp165to183(in italic type),followed by the U1poly(A) sequence(in boldface type)and the Xba I site(underlined).The PCR-derived fragments were inserted into the Bgl II and Xba I site of the U1gene,producing U1antiCAT737a,which is identical to U1anti737,and U1antiCAT737⌬70K.All constructs were verified by sequencing.A second series of constructs were engineered to distinguish stable expression of U1antitarget transcripts from the endogenous U1snRNA transcripts.These U1snRNA constructs contain a Hin dIII site introduced into loop III,a non-functional component of the U1snRNA gene(Fig.1A,panel ii)(9,31a,59),and are distinguished by the use of U1(H)in their construct names.To perform this step,the U1antitarget constructs were subcloned into pBSSK IIϩutilizing the Sst I and Hin dIII restriction sitesflanking the locus.An internal Hin dIII sitewas FIG.1.(A)U1snRNA locus.(i)Parental U1snRNA construct with enhancer elements A through E.(ii)Map of the U1(H)construct. 
The arrows show the specific PCR primers used to introduce the mutations.(B)Target expression vectors.The pOB4family of con-structs has a single splice unit in which a cassette containing a triple stop unit(vertical lines)and the reporter gene are included in the ter-minal exon.(i)RSV␤-Gal is a single exon construct;(ii)pOB4CAT; (iii)pOB4CAT(PvuII737);(iv)pOB4GFP.2816BECKLEY ET AL.M OL.C ELL.B IOL.at Penn State Univ on February 7, 2008 Downloaded fromthen created within the constructs by single-stranded site-directed mutagenesis(45)using the following primer:5ЈGCGATTTCCCCAAGCTTGGGAAACTCG3Ј.In vivo expression constructs.Several reporter genes were used to demon-strate the activity of the U1antigene constructs.The RSV␤-gal expression vectorhas the RSV promoter from pRSV2CAT(27)driving expression of the␤-Galgene(32)and terminates with the bovine growth hormone pA signal(68).Thereare no splicing elements in this construct(Fig.1B,panel i).The expression vector pOB4CAT(4)contains the SV40enhancer-promoter,an untranslated SV40exon,a single intron,a second exon containing stop codonsin each reading frame(triple stop),the CAT gene,and the SV40early pA site(Fig.1B,panel ii).A second CAT expression vector pOB4CAT737PvuII,wascreated to evaluate the specificity of U1targeting vectors(Fig.1B,panel iii).Itcontains six mutated nucleotides atϩ737bp of the vector located in the3ЈUTRupstream of the SV40pA signal.Seven bases(in boldface type)of the originalsequence(5ЈGAATGGCAG AAATTCG CCGG3Ј)were replaced to generate themutant sequence(5ЈGAATGGCAG CTGTATA CCGG3Ј)containing a diagnos-tic Pvu II site(underlined).This was performed by internal PCR mutagenesis ofthe pOB4CAT vector using a5Јoligonucleotide,5ЈTTAAACGTGGCCAATATGGACAAC3Ј,that incorporated a Bal I site(underlined)and the3Јoligonu-cleotide,5ЈCTCGAGTCCGG TATACAG CTGCCATTCATC3Ј,containing themutagenic sequence(in boldface type)and a terminal Xho I site(underlined).The Bal I/Xho I-digested PCR fragment was cloned into an upstream unique Bal Iand downstream Xho I site,rendered unique by prior destruction of the secondupstream Xho I site present in the original pOB4CAT sequence.The eGFP expression vector(pOB4eGFP)was derived from the pOB4CATvector by substitution of the eGFP gene(Clontech)for the CAT gene(Fig.1B,panel iv).Initially the unique Xba I site downstream of the triple stop andupstream of CAT was replaced with a Bgl II site.The Xho I site located at the3Јend of the SV40enhancer was then replaced with an Xba I site,making the Xho Isite at the3Јend of the CAT sequence unique.The eGFP gene was then insertedas a Bam HI-Sal I fragment into the Bgl II/Xho I sites.Cell culture and transfection.NIH3T3fibroblasts were grown and passaged every3days in F12medium containing5%fetal calf serum with2mM glutamine,1%nonessential amino acids,100U of penicillin per ml,and100␮g of strepto-mycin per ml.Transfection was performed using a calcium phosphate precipitateprotocol(75).Transient transfections were performed with10␮g of U1DNA,2.0␮g of reporter DNA and1␮g of TK-luciferase DNA.Three100-mm-diameterplates or three35-mm-diameter wells of six-well plates(Falcon)were used foreach experimental group,with at least three transfections per data point.Anal-ysis was performed on the cell extract harvested48h following the transfection.Stable cotransfection experiments utilized10␮g of U1construct,2.0␮g oftarget gene,and1␮g of SV2Neo selection plasmid.Transformants were selectedwith G418(200␮g/ml)for2weeks.Individual clones were picked,and theremaining colonies on each plate 
were pooled and expanded.Sequential stabletransfection experiments were performed with10␮g of target gene DNA and1␮g of SV2Neo selection DNA.Transformants were selected with G418(200␮g/ml)for2weeks to obtain clones,one of which was used in all subsequent experiments.U1(H)antitarget DNA(10␮g)was subsequently transfected with1␮g of a TK-hygromycin selection plasmid into this cell line.After two weeks,individual clonal populations were selected and expanded under the dual anti-biotic selection for later analysis.The remaining colonies on each plate werepooled and expanded.RNA extraction and analysis.Total cellular RNA was extracted in700␮l or 2ml of Trizol(BRL)from35-or100-mm-diameter confluent plates,respec-tively,as per manufacturer’s protocol with the exception of an additional pre-cipitation step.The RNA pellet was dissolved in300␮l of GTC(guanadinium thiocyanate containing7␮l of␤-mercaptoethanol per100ml[17])and precip-itated overnight with300␮l of isopropanol atϪ20°C,dried,and resuspended in H2O.Nuclear and cytoplasmic RNA fractions were obtained from10to12confluent 100-mm-diameter plates.The cells were lysed with reticulocyte swelling buffer ac-cording to previously established methods(24).The cytoplasmic fraction was extracted in1ϫSET buffer containing proteinase K(10␮g/ml)and resuspended in100to200␮l of diethyl pyrocarbonate-H2O as previously described(17).The nuclear fraction was extracted a second time in reticulocyte swelling buffer, dispersed in2ml of GTC,extracted with acid phenol(17),and resuspended in 50␮l of H2O.Northern analysis utilized5to10␮g of RNA separated in7%formalde-hyde–1X MOPS–1%agarose gel for255V-h.The RNA was then transferred to a nylon-reinforced nitrocellulose membrane(Schleicher and Schuell)by capillary action and UV crossed-linked twice at1,200␮J.Hybridization was performed for 12h at42°C with afinal concentration of radioactive probe between3ϫ106and 5ϫ106counts per ml of hybridization solution.Direct RNase protection was performed with[␣32P]rUTP uniformly labeled probes transcribed from either the T7or T3bacteriophage promoter in linear-ized pBSSK IIϩ(Strategene)plasmids.The probe was hybridized to10␮g of test RNA or tRNA in hybridization buffer.The sample was digested with crude T1-T2 RNase(60U/ml)at30to34°C for1.5to2h(50),denatured,and separated on a6%denaturing acrylamide gel.Transgene analysis.␤-Gal enzyme activity analysis was performed on the cell extract from one confluent well of a six-well plate.The cells were lysed in500␮l of0.1%SDS for5min,and150␮l of supernatant was used in each reaction mixture(65).A colorimetric stain for␤-Gal activity was performed on cells from parallel wells of the transfection experiment used in the assays for enzyme activity.The staining reaction was terminated after1h.CAT was extracted by lysing cells in1ϫreporter lysis buffer(Promega).The fluor-diffusion assay was performed with10to50␮l of extract using100,000cpm of3H-acetyl coenzyme A(16,69).CAT activity was normalized to luciferase ac-tivity by mixing10to20␮l of cell extract with50␮l of luciferin substrate(Pro-mega)at room temperature and immediately measuring luminescence in a Mono-light2001luminometer(Analytical Luminescence Laboratory,Inc.)for10s. Fluorescence microscopy was performed with an IMT4Olympus inverted microscope using an eGFP optimizedfilter set(chroma,41017;excitation wave-length,470or40nm emission wavelength,525or500nm;dichroic,495LP). 
FACS analysis was performed on GFP expressing cultures trypsinized to a single cell suspension.The cells were washed twice in PBS and resuspended at3ϫ105 to5ϫ105cells per ml in PBS.The cells were excited at480nm(argon laser),and fluorescence was recorded with a500-nm long-passfilter on a Becton Dickinson FACS Calibur Cytometer.The effect of the U1antitarget construct was assessed by thefluorescence index of the sample.This value was calculated as the product of the percentage of the cell population that exceeded thefluorescence intensity of the control cells and the meanfluorescence intensity of this population.RESULTSThe U1snRNA targeting vectors were expressed from the endogenous U1snRNA gene that utilizes a polymerase II promoter and a U1snRNA-specific termination sequence. Modifications were made to U1snRNA fromϩ1toϩ10,the 5Јss recognition sequence,to produce the U1antitarget vector to a specific RNA target.In addition a6-bp change was in-TABLE1.U1snRNA constructs used to inactivate the␤-Gal,CAT,and GFP target genesConstruct name(s)Target sequence Whole sequence aU1snRNA CAGGTAAGTA Not applicableU1anti␤-gal1800CAGTTCTGTA5ЈGGCCCAAGATCTCATACAGAACTGGCAGG3ЈU1antiCAT488CAGGTTCATC5ЈGGCCCAAGATCTCAGATGAACCTGGCAGG3ЈU1antiCAT568CAGGTTCATC5ЈGGCCCAAGATCTCAGATGAACCTGGCAGG3ЈU1antiGFP490CCGACAAGCA5ЈGGCCCAAGATCTCATGCTTGTCGGGCAGG3ЈU1antiCAT737CAGAAATTCG5ЈCCCAAGATCTCACGAATTTCTGGCAGG GGAGATACC3ЈU1antiCAT737PvuII CAGCTGTATA5ЈCCCAAGATCTCATATACAGCTGGCAGG GGAGATACC3ЈU1anti737a and U1antiCAT737⌬70K CAGAAATTCG5ЈCCCAAGATCTCACGAATTTCTGGCAGG GGAGATACC3Јa Underlined bases indicate the Bgl II restriction site used to insert the mutagenized DNA into the context of the U1snRNA expression vectors.The last three sequences have an additional9bp at the3Јend(italicized)which its complementary to the U1gene from bp14to22.V OL.21,2001U1snRNA REDUCTION OF TARGET GENE EXPRESSION2817at Penn State Univ on February 7, 2008 Downloaded fromserted by site-directed mutagenesis into loop III of the U1snRNA gene to distinguish the modified U1snRNA transcript from the endogenous U1snRNA in cultured cells.Three ex-pression plasmids,each with a reporter as the terminal exon,were used to demonstrate the inhibitory effects of the modified U1snRNA targeting vectors in intact cells.U1antitarget vec-tors were initially tested in transient cotransfection with the reporter genes.To determine the persistence of the inhibitory effect,stable cotransfection experiments were performed with the modified U1snRNA vector and target.Then,to approx-imate inhibition of an endogenous gene,sequential stable transfections were performed in which the target gene’s activ-ity had been established prior to introduction of the U1anti-target vector.Inhibition of target gene expression was assessed by measurements of protein activity and mRNA levels.Finally,the specificity and a possible inhibitory mechanism(s)of the U1antitarget vector have been investigated.Reduction of transgene expression in transient-cotransfec-tion experiments.The RSV ␤-Gal reporter gene was cotrans-fected with either U1snRNA or U1anti-␤-Gal1800.The U1anti-␤-Gal vector reduced ␤-Gal enzyme activity by Ͼ90%compared to cells transfected with U1snRNA (Fig.2A).Par-allel plates stained for ␤-Gal protein expression ranged in intensity from dark to light blue in both the control and test cultures.An average of 25(Ϯ2)␤-Gal-positive cells per high-power field was observed in U1snRNA cotransfections (values in parentheses are standard deviations unless otherwise not-ed),while cotransfection with 
U1anti-␤-Gal reduced ␤-Gal expression to 3(Ϯ1)cells per high-power field (data not shown).These results demonstrated that a modified U1snRNA could qualitatively and quantitatively reduce protein expression from a targeted gene.Since our goal was to target the terminal exon of selected genes containing multiple exons,the two-exon-unit expression vector pOB4CAT (4),in which the CAT gene is the terminal exon,was used.The pOB4CAT expression vector was cotrans-fected with U1anti-CAT targeted to either nt 448to 457or nt 568to 577of the CAT mRNA sequence.A titration experi-ment of the CAT reporter to U1antitarget vector was per-formed with U1anti-CAT568(Fig.2B).The amount of re-porter construct in the transfection was kept constant,and the amount of U1anti-CAT568vector varied from a ratio of 4:1to 1:5.It was observed that inhibition of CAT expression by U1anti-CAT568was maximal at a ratio of 1:5.This ratio was used for subsequent experiments to demonstrate the effectiveness of U1anti-CAT constructs targeted to randomly selected areas in the CAT mRNA sequence.U1anti-CAT448and U1anti-CAT568vectors inhibited CAT enzyme activity to 5to 10%of controls (Fig.2C).The specificity of the U1anti-CAT vector for a target se-quence was assessed by directing a U1anti-CAT vector to noncoding sequence in the 3ЈUTR of the pOB4CAT reporter,altering the targeted sequence,and then directing a new U1anti-CAT vector to the altered sequence.The inhibitory effect of U1anti-CAT737vector is shown against the parent pOB4CAT and against the mutated pOBCAT737PvuII report-er transgenes (Fig.2D).U1anti-CAT737inhibited the expres-sion of CAT protein by 90%when targeted to pOB4CAT transfected cells but was ineffective when targeted to the mu-tated pOBCAT737PvuII reporter.However,the inhibitory ef-fect was reestablished when U1antiCAT737PvuII was targeted to the mutated reporter.This experiment demonstrates that the inhibitory effect of the U1antitarget vector is dependentonFIG.2.(A)Reduction in ␤-Gal activity from three separate transient U1anti-␤-Gal–RSV ␤-Gal cotransfection experiments.(B)Titration of various amounts of pOB4CAT to 10␮g of U1anti-CAT568,yielding approximate molar ratio of CAT to U1anti-CAT of 4:1,2:1,2:3,and 1:5respectively.(C)Reduction of CAT enzyme activity by U1anti-CAT448and anti-CAT568using cotransfection.(D)Sequence-specific reduction of CAT activity in transient U1anti-CAT-pOB4CAT cotransfection experiments.pOB4CAT was used in lanes 1and 2,and pOB4CAT 737PvuII was used in lanes 3and 4.The U1snRNA constructs were as follows:lane 1,U1snRNA;lane 2,U1anti-CAT737;lane 3,U1anti-CAT737;lane 4,U1anti-CAT737PvuII.These constructs were transiently cotransfected with TK-luciferase construct into NIH 3T3cells.Error bars,standard deviations.2818BECKLEY ET AL.M OL .C ELL .B IOL .at Penn State Univ on February 7, 2008 Downloaded fromthe hybrid formed between U1antitarget RNA and the com-plimentary sequence in the target mRNA.Reduction of transgene expression in stable transfection experiments.To demonstrate the persistence of this inhibitory effect on a target transcript,cells were cotransfected with the pOB4CAT vector,SV2Neo,and either U1anti-CAT568or U1snRNA.After G418selection,six randomly picked clones were analyzed.The CAT activity of the U1snRNA-transfected clones ranged between 33,000and 140,000cpm/h/␮g of pro-tein,with a single clone having an activity of 2,100cpm/h/␮g of protein,while the CAT activity of the U1anti-CAT-transfected clones ranged between 1,050and 2,040cpm/h/␮g of protein,with a single clone 
having an activity of 105,000cpm/h/␮g of protein (Fig.3A).In addition,CAT enzyme activity from pools of the remaining U1snRNA-transfected clones was 22,800Ϯ3,000cpm/h/␮g of protein,and that of U1anti-CAT568was 9,600Ϯ2,200cpm/h/␮g of protein,consistent with the data obtained from the individual clones.Since each clone repre-sents a different set of integration events with respect to both CAT and the U1anti-CAT vector,it is difficult to compare individual populations.The pooled cells represent a larger cross-section of the integration events,but there remains the question of simultaneous incorporation of both genes into the same cell.To develop a system in which both genes are represented in the cells,U1antitarget vectors were introduced into cells in which the reporter gene had previously been inserted.An established pOB4CAT-expressing stable cell line derived from a single positive clone was subsequently transfected with either U1snRNA or U1anti-CAT568,and five randomly selected individual clones were expanded from both.CAT enzyme ac-tivity in control clones ranged between 82and 3,200cpm/h/␮g of protein (Fig.3B).CAT activity in the U1anti-CAT-trans-fected clones was undetectable.The GFP reporter was chosen as a target because we could assess the U1antitarget activity in living cells using FACS analysis and fluorescence ing a similar sequen-tial transfection protocol,a single GFP-expressing clone was used to establish a cell line that was subsequently transfected with either U1snRNA or U1anti-GFP.The inhibitory effect was assessed by fluorescence microscopy (Fig.4A)and quantitat-ed using FACS analysis (Fig.4B).Cells transfected with U1snRNA were of uniform bright green fluorescence (Fig.4A,panel 3).In U1anti-GFP transfections,some of the cells were equally fluorescent as the control cells,a smaller percentage were less fluorescent and approximately half were not visibly fluorescent (Fig.4A-4).The visual impression of reduced GFP expression in U1anti-GFP transfected cells was confirmed by FACS analysis of the cells.The autofluorescence background,established with NIH 3T3cells (Fig.4B,panel 1),was less than 1.5RFU.The GFP-transfected cell line (Fig.4B,panel 2)had a mode dis-tribution of 3.2RFU,2log units greater than untransfect-ed cells.Pools of multiple colonies transfected with U1(H)snRNA (Fig.4B,panel 3)showed a single peak at 3.2RFU,while pools from U1anti-GFP transfections (Fig.4B,panel 4)were seen as three peaks:50%low fluorescence (Ͻ2RFU),25%intermediate (2to 3RFU)and 25%high (Ն3RFU).A fluorescence index was established to reflect the inhibition by U1antiGFP vector (Table 2).This index emphasizes the mag-nitude of signal generated by the GFP-transfected cell lines over nontransfected cells.The reduction in GFP signal strength in the polyclonal population of U1(H)anti-GFP-transfected cells is approximately 70%.Clones from this transfection were developed for subsequent RNA analysis.Clonal populations of U1snRNA-and U1anti-GFP-trans-fected GFP-expressing cells were randomly selected,expand-ed,and analyzed by FACS (Fig.4C).The FACS profiles of untransfected cells (clone A)and GFP parental cells (clone B)were similar to those of the cells presented in Fig.4B.The cells from the U1(H)snRNA transfections (clones C and D)had a strong narrow peak of fluorescence at approximately 3.0RFU,FIG.3.(A)Reduction of CAT activity from clonally expanded stable U1anti-CAT568/pOB4CAT cotransfection experiments.Bars 1to 6represent the U1snRNA-transfected CAT-expressing cells.Bars 7to 12represent the 
U1anti-CAT transfected CAT-expressing cells. (B) CAT activity from a clonal cell line derived by stable pOB4CAT and subsequently transfected with U1antiCAT568. Bars 1 to 5 represent the U1snRNA-transfected clones, and bars 6 to 10 represent the U1anti-CAT-transfected clones.

TABLE 2. Fluorescence of a pooled population of GFP-expressing cells as determined by flow cytometry

Sample  Description                        % in M2 window(a)  Mean intensity  Fluorescence index
1       Control (nontransfected)           0.16               333             52
2       Parental cells (nontransfected)    94.5               1,804           171,000
3       Parental + U1(H)snRNA              92.5               1,873           170,000
4       Parental + U1(H)antiGFP            60.5               851             51,000

(a) M2 is the window in the FACS scan where the fluorescent signal exceeds the background of GFP-negative cells. (See Fig. 4B.)
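The fluorescence index in Table 2 is defined earlier in the text as the product of the percentage of cells exceeding the control fluorescence (the M2 window) and the mean fluorescence intensity of that population. The short Python sketch below recomputes the index for the pooled populations and the roughly 70% reduction quoted for the U1(H)anti-GFP pool; the values are transcribed from Table 2, and small differences from the published indices reflect rounding.

```python
# Fluorescence index = (% of cells above control background, i.e. in the M2 window)
#                      x (mean fluorescence intensity of that population).
samples = {
    "Parental cells":          {"pct_m2": 94.5, "mean_intensity": 1804},
    "Parental + U1(H)snRNA":   {"pct_m2": 92.5, "mean_intensity": 1873},
    "Parental + U1(H)antiGFP": {"pct_m2": 60.5, "mean_intensity": 851},
}

index = {name: s["pct_m2"] * s["mean_intensity"] for name, s in samples.items()}
for name, value in index.items():
    print(f"{name}: fluorescence index ~ {value:,.0f}")

# Reduction in GFP signal strength caused by the U1(H)anti-GFP vector,
# relative to the parental GFP-expressing line (~70% in the text).
reduction = 1 - index["Parental + U1(H)antiGFP"] / index["Parental cells"]
print(f"reduction ~ {reduction:.0%}")
```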

Reduced cost-based ranking for generating promising subproblems


Reduced Cost-Based Rankingfor Generating Promising SubproblemsMichela Milano1and Willem J.van Hoeve21DEIS,University of BolognaViale Risorgimento2,40136Bologna,Italymmilano@deis.unibo.ithttp://www-lia.deis.unibo.it/Staff/MichelaMilano/2CWI,P.O.Box94079,1090GB Amsterdam,The Netherlandsw.j.van.hoeve@cwi.nlhttp://www.cwi.nl/~wjvh/Abstract.In this paper,we propose an effective search procedure thatinterleaves two steps:subproblem generation and subproblem solution.We mainly focus on thefirst part.It consists of a variable domain valueranking based on reduced costs.Exploiting the ranking,we generate,in a Limited Discrepancy Search tree,the most promising subproblemsfirst.An interesting result is that reduced costs provide a very preciseranking that allows to almost alwaysfind the optimal solution in thefirst generated subproblem,even if its dimension is significantly smallerthan that of the original problem.Concerning the proof of optimality,we exploit a way to increase the lower bound for subproblems at higherdiscrepancies.We show experimental results on the TSPand its timeconstrained variant to show the effectiveness of the proposed approach,but the technique could be generalized for other problems.1IntroductionIn recent years,combinatorial optimization problems have been tackled with hybrid methods and/or hybrid solvers[11,18,13,19].The use of problem relax-ations,decomposition,cutting planes generation techniques in a Constraint Pro-gramming(CP)framework are only some examples.Many hybrid approaches are based on the use of a relaxation R,i.e.an easier problem derived from the original one by removing(or relaxing)some constraints.Solving R to optimality provides a bound on the original problem.Moreover,when the relaxation is a linear prob-lem,we can derive reduced costs through dual variables often with no additional computational cost.Reduced costs provide an optimistic esteem(a bound)of each variable-value assignment cost.These results have been successfully used for pruning the search space and for guiding the search toward promising re-gions(see[9])in many applications like TSP[12],TSPTW[10],scheduling with sequence dependent setup times[8]and multimedia applications[4].We propose here a solution method,depicted in Figure1,based on a two step search procedure that interleaves(i)subproblem generation and(ii)subproblemP.Van Hentenryck(Ed.):CP2002,LNCS2470,pp.1–16,2002.c Springer-Verlag Berlin Heidelberg20022Michela Milano and Willem J.van Hoevesolution.In detail,we solve a relaxation of the problem at the root node and we use reduced costs to rank domain values;then we partition the domain of eachvariable X i in two sets,i.e.,the good part D goodi and the bad part D badi.We searchthe tree generated by using a strategy imposing on the left branch the branchingconstraint X i∈D goodi while on the right branch we impose X i∈D bad i.At eachleaf of the subproblem generation tree,we have a subproblem which can now be solved(in the subproblem solution tree).Exploring with a Limited Discrepancy Strategy the resulting search space, we obtain that thefirst generated subproblems are supposed to be the most promising and are likely to contain the optimal solution.In fact,if the ranking criterion is effective(as the experimental results will show),thefirst generated subproblem(discrepancy equal to0)P(0),where all variables range on the good domain part,is likely to contain the optimal solution.The following generatedsubproblems(discrepancy equal to1)P(1)i have all variables but the i-th rangingon the 
Exploring the resulting search space with a Limited Discrepancy Search strategy, the first generated subproblems are supposed to be the most promising and are likely to contain the optimal solution. In fact, if the ranking criterion is effective (as the experimental results will show), the first generated subproblem (discrepancy equal to 0), P(0), where all variables range on the good domain part, is likely to contain the optimal solution. The following generated subproblems (discrepancy equal to 1), P(1)_i, have all variables but the i-th ranging on the good domain and are likely to contain worse solutions with respect to P(0), but still good ones. Clearly, subproblems at higher discrepancies are supposed to contain the worst solutions.

A surprising aspect of this method is that even by using low cardinality good sets, we almost always find the optimal solution in the first generated subproblem. Thus, reduced costs provide extremely useful information indicating for each variable which values are the most promising. Moreover, this property of reduced costs is independent of the tightness of the relaxation. Tight relaxations are essential for the proof of optimality, but not for the quality of reduced costs. Solving only the first subproblem, we obtain a very effective incomplete method that finds the optimal solution in almost all test instances. To be complete, the method should solve all subproblems at all discrepancies to prove optimality. Clearly, even if each subproblem could be efficiently solved, if all of them had to be considered, the proposed approach would not be applicable. The idea is that by generating the optimal solution soon and tightening the lower bound with considerations based on the discrepancies shown in the paper, we do not have to explore all subproblems, but can prune many of them.

In this paper, we have considered as an example the Travelling Salesman Problem and its time constrained variant, but the technique could be applied to a large family of problems. The contribution of this paper is twofold: (i) we show that reduced costs provide an extremely precise indication for generating promising subproblems, and (ii) we show that LDS can be used to effectively order the subproblems. In addition, the use of discrepancies enables us to tighten the problem bounds for each subproblem.

The paper is organized as follows: in Section 2 we give preliminaries on Limited Discrepancy Search (LDS) and on the TSP and its time constrained variant. In Section 3 we describe the proposed method in detail. Section 4 discusses the implementation, focussing mainly on the generation of the subproblems using LDS. The quality of the reduced cost-based ranking is considered in Section 5; in that section the size of the subproblems is also tuned. Section 6 presents the computational results. Conclusion and future work follow.

[Fig. 1. The structure of the search tree]

2 Preliminaries

2.1 Limited Discrepancy Search

Limited Discrepancy Search (LDS) was first introduced by Harvey and Ginsberg [15]. The idea is that one can often find the optimal solution by exploring only a small fraction of the search space by relying on tuned (often problem dependent) heuristics. However, a perfect heuristic is not always available. LDS addresses the problem of what to do when the heuristic fails.

At each node of the search tree, the heuristic is supposed to provide the good choice (corresponding to the leftmost branch) among the possible alternative branches. Any other choice would be bad and is called a discrepancy. In LDS, one tries to find first the solution with as few discrepancies as possible. In fact, a perfect heuristic would provide the optimal solution immediately. Since this is not often the case, we have to increase the number of discrepancies so as to make it possible to find the optimal solution after correcting the mistakes made by the heuristic. However, the goal is to use only few discrepancies, since in general good solutions are provided soon.

LDS builds a search tree in the following way: the first solution explored is the one suggested by the heuristic. Then the solutions that follow the heuristic for every variable but one are explored: these are the solutions of discrepancy equal to one. Then, solutions at discrepancy equal to two are explored, and so on. It has been shown that this search strategy achieves a significant cutoff of the total number of nodes with respect to a depth first search with chronological backtracking and iterative sampling [20].
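As a picture of the order in which LDS visits the leaves, the short Python fragment below enumerates all 0/1 choice vectors over n binary branching decisions (0 = follow the heuristic, 1 = a discrepancy) sorted by their number of discrepancies. It is only meant to illustrate the visiting order; a real LDS implementation explores the tree lazily rather than materializing the leaves.

    from itertools import combinations

    def lds_leaves(n):
        # yield choice vectors ordered by increasing number of discrepancies
        for k in range(n + 1):
            for positions in combinations(range(n), k):
                leaf = [0] * n
                for p in positions:
                    leaf[p] = 1                # a discrepancy at this decision
                yield leaf

For n = 3, for example, the order is [0,0,0] first, then the three leaves with a single discrepancy, then the three with two, and finally [1,1,1].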
2.2 TSP and TSPTW

Let G = (V, A) be a digraph, where V = {1,...,n} is the vertex set and A = {(i,j) : i,j ∈ V} the arc set, and let c_{ij} ≥ 0 be the cost associated with arc (i,j) ∈ A (with c_{ii} = +∞ for each i ∈ V). A Hamiltonian circuit (tour) of G is a partial digraph \bar{G} = (V, \bar{A}) of G such that |\bar{A}| = n and, for each pair of distinct vertices v_1, v_2 ∈ V, both the path from v_1 to v_2 and the path from v_2 to v_1 exist in \bar{G} (i.e. digraph \bar{G} is strongly connected). The Travelling Salesman Problem (TSP) looks for a Hamiltonian circuit G* = (V, A*) whose cost \sum_{(i,j) \in A*} c_{ij} is a minimum. A classic Integer Linear Programming formulation for the TSP is as follows:

    v(TSP) = \min \sum_{i \in V} \sum_{j \in V} c_{ij} x_{ij}                                  (1)
    subject to
        \sum_{i \in V} x_{ij} = 1,   j \in V                                                   (2)
        \sum_{j \in V} x_{ij} = 1,   i \in V                                                   (3)
        \sum_{i \in S} \sum_{j \in V \setminus S} x_{ij} \geq 1,   S \subset V, S \neq \emptyset  (4)
        x_{ij} integer,   i,j \in V                                                            (5)

where x_{ij} = 1 if and only if arc (i,j) is part of the solution. Constraints (2) and (3) impose in-degree and out-degree of each vertex equal to one, whereas constraints (4) impose strong connectivity.

Constraint Programming relies in general on a different model, where we have a domain variable Next_i (resp. Prev_i) that identifies the city visited after (resp. before) node i. Domain variable Cost_i identifies the cost to be paid to go from node i to node Next_i. Clearly, we need a mapping between the CP model and the ILP model: Next_i = j ⇔ x_{ij} = 1. The domain of variable Next_i will be denoted as D_i. Initially, D_i = {1,...,n}.

The Travelling Salesman Problem with Time Windows (TSPTW) is a time constrained variant of the TSP where the service at a node i should begin within a time window [a_i, b_i] associated to the node. Early arrivals are allowed, in the sense that the vehicle can arrive before the time window lower bound; however, in this case the vehicle has to wait until the node is ready for the beginning of service. As concerns the CP model for the TSPTW, we add to the TSP model a domain variable Start_i which identifies the time at which the service begins at node i.
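The successor-based CP model can be illustrated with a couple of small helper functions (hypothetical names written for this text, not taken from the paper): a tour is represented by an array nxt with nxt[i] = j whenever Next_i = j, i.e. whenever x_{ij} = 1 in the ILP model.

    def tour_cost(nxt, c):
        # sum of c[i][j] over the arcs (i, j) with nxt[i] = j; this mirrors the
        # objective (1) restricted to the arcs selected by the successor model
        return sum(c[i][nxt[i]] for i in range(len(nxt)))

    def is_hamiltonian(nxt):
        # the successor assignment is a single tour iff, starting from city 0,
        # following nxt visits every city exactly once before returning to 0
        n, seen, i = len(nxt), set(), 0
        while i not in seen:
            seen.add(i)
            i = nxt[i]
        return i == 0 and len(seen) == n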
A well known relaxation of the TSP and the TSPTW, obtained by eliminating from the TSP model constraints (4) and the time window constraints, is the Linear Assignment Problem (AP) (see [3] for a survey). The AP is the graph theory problem of finding a set of disjoint subtours such that all the vertices in V are visited and the overall cost is a minimum. When the digraph is complete, as in our case, the AP always has an optimal integer solution and, if such a solution is composed of a single tour, it is then optimal for the TSP, satisfying constraints (4).

The information provided by the AP relaxation is a lower bound LB for the original problem and the reduced cost matrix \bar{c}. At each node of the decision tree, each \bar{c}_{ij} estimates the additional cost to pay to put arc (i,j) in the solution. More formally, a valid lower bound for the problem where x_{ij} = 1 is LB|_{x_{ij}=1} = LB + \bar{c}_{ij}. It is well known that when the AP optimal solution is obtained through a primal-dual algorithm, as in our case (we use a C++ adaptation of the AP code described in [2]), the reduced cost values are obtained without extra computational effort during the AP solution. The solution of the AP relaxation at the root node requires in the worst case O(n^3) time, whereas each following AP solution can be efficiently computed in O(n^2) time through a single augmenting path step (see [2] for details). However, the AP does not provide a tight bound for either the TSP or the TSPTW. Therefore we will improve the relaxation in Section 3.1.

3 The Proposed Method

In this section we describe the method proposed in this paper. It is based on two interleaved steps: subproblem generation and subproblem solution. The first step is based on the optimal solution of a (possibly tight) relaxation of the original problem. The relaxation provides a lower bound for the original problem and the reduced cost matrix. Reduced costs are used for ranking (the lower the better) variable domain values. Each domain is then partitioned according to this ranking in two sets called the good set and the bad set. The cardinality of the good set is problem dependent and is experimentally defined; however, it should be significantly lower than the dimension of the original domains.

Exploiting this ranking, the search proceeds by choosing at each node the branching constraint that imposes the variable to range on the good domain, while on backtracking we impose the variable to range on the bad domain. By exploring the resulting search tree with an LDS strategy we generate first the most promising problems, i.e., those where no or few variables range on the bad sets. Each time we generate a subproblem, the second step starts for optimally solving it. Experimental results will show that, surprisingly, even if the subproblems are small, the first generated subproblem almost always contains the optimal solution. The proof of optimality should then proceed by solving the remaining problems. Therefore, a tight initial lower bound is essential. Moreover, by using some considerations on discrepancies, we can increase the bound and prove optimality fast.
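A small sketch of how reduced costs are turned into per-assignment bounds may help here; it assumes (as is standard for a primal-dual AP algorithm) that dual values u for the rows and v for the columns are available once the AP is solved, and the function names are illustrative rather than the authors'.

    def reduced_costs(c, u, v):
        # classical formula: rc[i][j] = c[i][j] - u[i] - v[j], non-negative at optimality
        n = len(c)
        return [[c[i][j] - u[i] - v[j] for j in range(n)] for i in range(n)]

    def bound_if_assigned(lb, rc, i, j):
        # valid lower bound for the problem in which Next_i = j (i.e. x_ij = 1) is imposed
        return lb + rc[i][j]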
The idea of ranking domain values has been previously used in incomplete algorithms, like GRASP [5]. The idea is to produce for each variable the so called Restricted Candidate List (RCL), and to explore the subproblem generated only by the RCLs of the variables. This method provides in general a good starting point for performing local search. Our ranking method could in principle be applied to GRASP-like algorithms. Another connection can be made with iterative broadening [14], where one can view the breadth cutoff as corresponding to the cardinality of our good sets. The first generated subproblem of both approaches is then the same. However, iterative broadening behaves differently on backtracking (it gradually restarts, increasing the breadth cutoff).

3.1 Linear Relaxation

In Section 2.2 we presented a relaxation, the Linear Assignment Problem (AP), for both the TSP and the TSPTW. This relaxation is indeed not very tight and does not provide a good lower bound. We can improve it by adding cutting planes. Many different kinds of cutting planes for these problems have been proposed and the corresponding separation procedures have been defined [22]. In this paper, we used the Sub-tour Elimination Cuts (SECs) for the TSP. However, adding linear inequalities to the AP formulation changes the structure of the relaxation, which is no longer an AP. On the other hand, we are interested in maintaining this structure since we have a polynomial and incremental algorithm that solves the problem. Therefore, as done in [10], we relax the cuts in a Lagrangean way, thus maintaining an AP structure. The resulting relaxation, which we call AP_cuts, still has an AP structure, but provides a tighter bound than the initial AP. More precisely, it provides the same objective function value as the linear relaxation where all cuts defining the sub-tour polytope are added. In many cases, in particular for TSPTW instances, the bound is extremely close to the optimal solution.

3.2 Domain Partitioning

As described in Section 2.2, the solution of an Assignment Problem provides the reduced cost matrix with no additional computational cost. We recall that the reduced cost \bar{c}_{ij} of a variable x_{ij} corresponds to the additional cost to be paid if this variable is inserted in the solution, i.e., x_{ij} = 1. Since these variables are mapped into the CP variables Next, we obtain the same estimate also for variable domain values. Thus, it is likely that domain values that have a relatively low reduced cost value will be part of an optimal solution to the TSP.

This property is used to partition the domain D_i of a variable Next_i into D_i^good and D_i^bad, such that D_i = D_i^good ∪ D_i^bad and D_i^good ∩ D_i^bad = ∅ for all i = 1,...,n. Given a ratio r ≥ 0, we define for each variable Next_i the good set D_i^good by selecting from the domain D_i the values j that have the r·n lowest \bar{c}_{ij}. Consequently, D_i^bad = D_i \ D_i^good. The ratio defines the size of the good domains, and will be discussed in Section 5. Note that the optimal ratio should be experimentally tuned in order to obtain the optimal solution in the first subproblem, and it is strongly problem dependent. In particular, it depends on the structure of the problem we are solving. For instance, for the pure TSP instances considered in this paper, a good ratio is 0.05 or 0.075, while for the TSPTW the best ratio observed is around 0.15. With this ratio, the optimal solution of the original problem is indeed located in the first generated subproblem in almost all test instances.
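Under these definitions, the partitioning step itself is straightforward; the sketch below (illustrative, with the rounding of r·n up to the next integer an assumption not stated in the paper) sorts each domain by reduced cost and keeps the cheapest r·n values as the good set.

    import math

    def partition_domains(domains, rc, r):
        n = len(domains)
        size_good = max(1, math.ceil(r * n))              # assumed rounding of r*n
        good, bad = [], []
        for i, dom in enumerate(domains):
            ranked = sorted(dom, key=lambda j: rc[i][j])  # lowest reduced cost first
            good.append(set(ranked[:size_good]))
            bad.append(set(ranked[size_good:]))
        return good, bad

With n = 100 cities and r = 0.05, for instance, each good set would contain the five successors of lowest reduced cost.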
3.3 LDS for Generating Subproblems

In the previous section, we described how to partition the variable domains in a good and a bad set by exploiting information on reduced costs. Now, we show how to explore a search tree on the basis of this domain partitioning. At each node corresponding to the choice of variable X_i, whose domain has been partitioned in D_i^good and D_i^bad, we impose on the left branch the branching constraint X_i ∈ D_i^good, and on the right branch X_i ∈ D_i^bad. Exploring this tree with a Limited Discrepancy Search strategy, we first explore the subproblem suggested by the heuristic, where all variables range on the good set; then the subproblems where all variables but one range on the good set, and so on.

If the reduced cost-based ranking criterion is accurate, as the experimental results confirm, we are likely to find the optimal solution in the subproblem P(0) generated by imposing all variables ranging on the good set of values. If this heuristic fails once, we are likely to find the optimal solution in one of the n subproblems (P(1)_i with i ∈ {1,...,n}) generated by imposing all variables but one (variable i) ranging on the good sets and one ranging on the bad set. Then, we go on generating the n(n−1)/2 problems P(2)_{ij} where all variables but two (namely i and j) range on the good set and two range on the bad set, and so on. In Section 4 we will see an implementation of this search strategy that in a sense squeezes the subproblem generation tree shown in Figure 1 into a constraint.
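In terms of code, a subproblem of discrepancy k is identified by the set of k variables that are forced onto their bad sets; the following generator (an illustration, not the constraint-based implementation of Section 4) makes the counting explicit.

    from itertools import combinations

    def subproblems_at_discrepancy(k, good, bad):
        n = len(good)
        for S in combinations(range(n), k):
            # variables in S range on their bad set, all others on their good set
            yield S, [bad[i] if i in S else good[i] for i in range(n)]

Discrepancy 0 yields the single problem P(0), discrepancy 1 the n problems P(1)_i, discrepancy 2 the n(n−1)/2 problems P(2)_{ij}, and so on.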
3.4 Proof of Optimality

If we are simply interested in a good solution, without proving optimality, we can stop our method after the solution of the first generated subproblem. In this case, the proposed approach is extremely effective since we almost always find the optimal solution in that subproblem. Otherwise, if we are interested in a provably optimal solution, we have to prove optimality by solving all subproblems at increasing discrepancies. Clearly, even if all subproblems could be efficiently solved, generating and solving all of them would not be practical. However, if we exploit a tight initial lower bound, as explained in Section 3.1, which is successively improved with considerations on the discrepancy, we can stop the generation of subproblems after few trials, since we prove optimality fast.

An important part of the proof of optimality is the management of lower and upper bounds. The upper bound is decreased as we find better solutions, and the lower bound is increased as a consequence of the discrepancy increase. The idea is to find an optimal solution in the first subproblem, providing the best possible upper bound. The ideal case is that all subproblems but the first can be pruned since they have a lower bound higher than the current upper bound, in which case we only need to consider a single subproblem.

The initial lower bound LB_0, provided by the Assignment Problem AP_cuts at the root node, can be improved each time we switch to a higher discrepancy k. For i ∈ {1,...,n}, let c*_i be the lowest reduced cost value associated with D_i^bad, corresponding to the solution of AP_cuts, i.e. c*_i = \min_{j \in D_i^bad} \bar{c}_{ij}. Clearly, a first trivial bound is LB_0 + \min_{i \in \{1,...,n\}} c*_i for all problems at discrepancy greater than or equal to 1. We can increase this bound: let L be the nondecreasing ordered list of the c*_i values, containing n elements; L[i] denotes the i-th element in L. The following theorem achieves a better bound improvement [21].

Theorem 1. For k ∈ {1,...,n}, LB_0 + \sum_{i=1}^{k} L[i] is a valid lower bound for the subproblems corresponding to discrepancy k.

Proof. The proof is based on the concept of additive bounding procedures [6,7], which states as follows: first we solve a relaxation of a problem P. We obtain a bound LB, in our case LB_0, and a reduced-cost matrix \bar{c}. Now we define a second relaxation of P having cost matrix \bar{c}. We obtain a second lower bound LB^{(1)}. The sum LB + LB^{(1)} is a valid lower bound for P. In our case, the second relaxation is defined by the constraints imposing in-degree of each vertex less than or equal to one, plus a linear version of the k-discrepancy constraint: \sum_{i \in V} \sum_{j \in D_i^bad} x_{ij} = k. Thus, \sum_{i=1}^{k} L[i] is exactly the optimal solution of this second relaxation. ✷

Note that in general reduced costs are not additive, but in this case they are. As a consequence of this result, optimality is proven as soon as LB_0 + \sum_{i=1}^{k} L[i] > UB for some discrepancy k, where UB is the current upper bound. We used this bound in our implementation.
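The bound of Theorem 1 and the resulting pruning test are easy to state in code as well; the sketch below (illustrative names, assuming the reduced cost matrix of AP_cuts and the bad sets are at hand) computes LB_0 + \sum_{i=1}^{k} L[i] and compares it with the current upper bound.

    def discrepancy_lower_bound(lb0, rc, bad, k):
        # c*_i = cheapest reduced cost in the bad set of variable i (empty sets skipped)
        c_star = [min(rc[i][j] for j in bad[i]) for i in range(len(bad)) if bad[i]]
        L = sorted(c_star)                    # nondecreasing list of the c*_i
        return lb0 + sum(L[:k])               # LB_0 plus the k smallest entries

    def can_prune(lb0, rc, bad, k, ub):
        # optimality is proven for discrepancies >= k once the bound exceeds UB
        return discrepancy_lower_bound(lb0, rc, bad, k) > ub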
3.5 Solving Each Subproblem

Once the subproblems are generated, we can solve them with any complete technique. In this paper, we have used the method and the code described in [12] and in [10] for the TSPTW. As a search heuristic we have used the one behaving best for each problem.

4 Implementation

Finding a subproblem of discrepancy k is equivalent to finding a set S ⊆ {1,...,n} with |S| = k such that Next_i ∈ D_i^bad for i ∈ S, and Next_i ∈ D_i^good for i ∈ {1,...,n} \ S. The search for such a set S and the corresponding domain assignments have been 'squeezed' into a constraint, the discrepancy constraint discr_cst. It takes as input the discrepancy k, the variables Next_i and the domains D_i^good. Declaratively, the constraint holds if and only if exactly k variables take their values in the bad sets. Operationally, it keeps track of the number of variables that take their value in either the good or the bad domain. If, during the search for a solution in the current subproblem, the number of variables ranging on their bad domain reaches k, all other variables are forced to range on their good domain. Equivalently, if the number of variables ranging on their good domain reaches n − k, the other variables are forced to range on their bad domain. The subproblem generation is defined as follows (in pseudo-code):

    for (k = 0..n) {
        add(discr_cst(k, next, D_good));
        solve subproblem;
        remove(discr_cst(k, next, D_good));
    }

where k is the level of discrepancy, next is the array containing the Next_i, and D_good is the array containing D_i^good for all i ∈ {1,...,n}. The command 'solve subproblem' is shorthand for solving the subproblem as described in Section 3.5.

A more traditional implementation of LDS (referred to as 'standard' in Table 1) exploits tree search, where at each node the domain of a variable is split into the good set or the bad set, as described in Section 3.3. In Table 1, the performance of this traditional approach is compared with the performance of the discrepancy constraint (referred to as discr_cst in Table 1). In this table, results on TSPTW instances (taken from [1]) are reported. All problems are solved to optimality and both approaches use a ratio of 0.15 to scale the size of the good domains. In the next section, this choice is experimentally derived.

Although one method does not outperform the other, the overall performance of the discrepancy constraint is in general slightly better than that of the traditional LDS approach. In fact, for solving all instances, the traditional LDS approach takes a total time of 2.75 seconds with 1465 fails, while using the constraint we need 2.61 seconds and 1443 fails.

[Table 1. Comparison of traditional LDS and the discrepancy constraint: running time (seconds) and number of fails on the twenty TSPTW instances rbg016a through rbg027a, for the 'standard' implementation and for discr_cst.]

5 Quality of Heuristic

In this section we evaluate the quality of the heuristic used. On the one hand, we would like the optimal solution to be in the first subproblem, corresponding to discrepancy 0. This subproblem should be as small as possible, in order to be able to solve it fast. On the other hand, we need to have a good bound to prove optimality. For this we need relatively large reduced costs in the bad domains, in order to apply Theorem 1 effectively. This would typically induce a larger first subproblem. Consequently, we should make a tradeoff between finding a good first solution and proving optimality. This is done by tuning the ratio r, which determines the size of the first subproblem. We recall from Section 3.2 that |D_i^good| ≤ r·n for i ∈ {1,...,n}.

In Tables 2 and 3 we report the quality of the heuristic with respect to the ratio. The TSP instances are taken from TSPLIB [23] and the asymmetric TSPTW instances are due to Ascheuer [1]. All subproblems are solved to optimality with a fixed strategy, so as to make a fair comparison. In the tables, 'size' is the actual relative size of the first subproblem with respect to the initial problem. The size is calculated as (1/n) \sum_{i=1}^{n} |D_i^good| / |D_i|. The domains in the TSPTW instances are typically much smaller than the number of variables, because of the time window constraints that already remove a number of domain values. Therefore, the first subproblem might sometimes be relatively large, since only a few values are left after pruning, and they might be equally promising. The next columns in the tables are 'opt' and 'pr'. Here 'opt' denotes the level of discrepancy at which the optimal solution is found; typically, we would like this to be 0. The column 'pr' stands for the level of discrepancy at which optimality is proved; this would preferably be 1. The column 'fails' denotes the total number of backtracks during search needed to solve the problem to optimality.

[Table 2. Quality of heuristic with respect to ratio for the TSP: size, opt, pr and fails for ratios 0.025, 0.05, 0.075 and 0.1 on the instances gr17, gr21, gr24, fri26, bayg29 and bays29, together with the averages.]

Concerning the TSP instances, already for a ratio of 0.05 all solutions are in the first subproblem. For a ratio of 0.075, we can also prove optimality at discrepancy 1. Taking into account also the number of fails, we can argue that both 0.05 and 0.075 are good ratio candidates for the TSP instances.
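The 'size' column can be reproduced with a one-line computation (again an illustrative helper, not code from the paper), directly implementing (1/n) \sum_{i=1}^{n} |D_i^good| / |D_i|:

    def first_subproblem_size(good, domains):
        # average fraction of each original domain that ends up in the good set
        n = len(domains)
        return sum(len(good[i]) / len(domains[i]) for i in range(n)) / n

A value of 0.05, for example, means that the first subproblem restricts each variable, on average, to 5% of its original domain.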
For the TSPTW instances, we notice that a smaller ratio does not necessarily increase the total number of fails, although it makes it more difficult to prove optimality. Hence, we have a slight preference for the ratio 0.15, mainly because of its overall (average) performance. An important aspect we are currently investigating is the dynamic tuning of the ratio.

6 Computational Results

We have implemented and tested the proposed method using ILOG Solver and Scheduler [17,16]. The algorithm runs on a Pentium 1 GHz with 256 MB RAM, and uses CPLEX 6.5 as LP solver. The two sets of test instances are taken from the TSPLIB [23] and from Ascheuer's asymmetric TSPTW problem instances [1].

Table 4 shows the results for small TSP instances. Time is measured in seconds; fails again denote the total number of backtracks to prove optimality. The time limit is set to 300 seconds. Observe that our method (LDS) needs a smaller number of fails than the approach without subproblem generation (No LDS). This comes with a cost, but still our approach is never slower and in some cases considerably faster. The problems were solved both with a ratio of 0.05 and of 0.075, the best of which is reported in the table. Observe that in some cases a ratio of 0.05 is best, while in other cases 0.075 is better. For three instances optimality could not be proven directly after solving the first subproblem. Nevertheless, the optimum was found in this subproblem (indicated by objective 'obj' is 'opt'). Time and the number of backtracks (fails) needed in the first subproblem are reported for these instances.

In Table 5 the results for the asymmetric TSPTW instances are shown. Our method (LDS) uses a ratio of 0.15 to solve all these problems to optimality. It is compared to our code without the subproblem generation (No LDS), and to the results by Focacci, Lodi and Milano (FLM2002) [10]. Up to now, FLM2002 has the fastest solution times for this set of instances, to our knowledge. When comparing the time results (measured in seconds), one should take into account that FLM2002 uses a Pentium III 700 MHz. Our method behaves in general quite well. In many cases it is much faster than FLM2002. However, in some cases the subproblem generation does not pay off. This is for instance the case for rbg040a and rbg042a. Although our method finds the optimal solution in the first branch (discrepancy 0) quite fast, the initial bound LB_0 is too low to be able to prune the search tree at discrepancy 1. In those cases we need more time to prove optimality than we would have needed if we

REDUCING ASSORTMENT: AN ATTRIBUTE-BASED APPROACH
Peter Boatwright and Joseph C. Nunes*
LAST REVISED: August, 2000
* Peter Boatwright is Assistant Professor of Marketing at the Graduate School of Industrial Administration at Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213-3890. Joseph C. Nunes is Assistant Professor of Marketing at the University of Southern California’s Marshall School of Business, Los Angeles, CA 90089-1421. Both authors contributed equally and are listed alphabetically. The authors thank a major online grocery shopping and delivery service. Questions should be directed to either Joseph C. Nunes at jnunes@ or Peter Boatwright at pbhb@. The authors thank Ajay Kalra, Shantanu Dutta and Xavier Dréze for their valuable comments. They would also like to thank the participants at the Sheth Symposium (1999) at the Katz Graduate School of Business and the participants of the 1999 Southern California UCI/USC/UCLA Marketing Symposium for their comments.
KEY WORDS: Product Assortment, Grocery Retailing, Internet Marketing, Size Loyalty, Brand Perceptions, Brand Loyalty, Electronic Commerce
When it comes to product assortment, the conventional wisdom among supermarket managers has been “more is better.” Consequently, the average number of SKUs (stock-keeping units) at a supermarket has grown from 6,000 a generation ago to more than 30,000 items today (Food Marketing Institute Report 1993, Drèze, Hoch and Purk 1994). Recognizing that a reduction in SKUs can clear away clutter and lower costs, grocery retailers have been under immense pressure in recent years to begin offering a more “efficient assortment” by simply eliminating the low- or non-selling items within a category. Yet retailers are generally reluctant to cut items for fear that consumers, unhappy with their offerings, may leave the store and never come back.

Most grocers realize that consumers often prefer stores that carry large assortments of products for a number of reasons (Arnold, Oum and Tigert 1983). For one, the larger the selection, the more likely the consumer is to find a product that matches their exact specifications (Baumol and Ide 1956). In addition, more products mean more flexibility, which is important if the consumer has uncertain preferences (Koopmans 1964, Reibstein, Youngblood and Fromkin 1975, Kreps 1979, Kahn and Lehman 1991) or is predisposed to variety-seeking (Berlyne 1960, Helson 1964, McAlister and Pessemier 1982, Kahn 1995).

Recent research, however, suggests that consumer choice is affected by the perception of variety among a selection, which depends on more than just the number of distinct products on the shelves. The consumer’s perception of variety can be influenced by the space devoted to the category, the presence or absence of the consumer’s favorite item (Broniarczyk, Hoyer and McAlister 1998), the arrangement of an assortment and the presence of repeated items (Hoch, Bradlow and Wansink 1999), and the number of alternatives (Kahn and Lehman 1991). Hence, many observers within industry and academia believe that grocers can make sizable reductions in the number of SKUs offered without impacting sales negatively, if done properly. In fact, a study by Drèze, Hoch and Purk (1994) saw aggregate sales actually go up nearly 4% in eight test categories after experimenters deleted 10 percent of the less popular SKUs and dedicated more shelf space to high-selling items. Their experiment lasted 16 weeks and tracked sales at 30 test stores and 30 control stores.1

Broniarczyk, Hoyer and McAlister (1998) examined the link between the number of items offered, assortment from the consumer’s perspective, and sales. They found that reductions (up to 54%) in the number of low-selling SKUs need not affect perceptions of variety, and thus sales. In a field study, they eliminated approximately half of the low-selling items in five categories (candy, beer, soft drinks, salty snacks and cigarettes) in two test convenience stores while holding shelf space constant. Neither sales nor consumers’ perceptions of variety differed significantly between the two test stores and two control stores. However, as Broniarczyk, Hoyer and McAlister (1998) point out [p. 175], their findings are limited by the extent to which their results would generalize to other categories, and how the specific features of the category might make consumers more or less sensitive to SKU reductions (e.g., a category with a small number of brands).

This research addresses both of these issues directly. First and foremost, we examine how different types of SKU reductions – defined by how they affect the attributes available in a category (i.e., the number of brands, sizes, and flavors) – affect sales differently. Fader and Hardie (1996) posed the question, “Is it sufficient to drop the slowest-selling items or is it wiser to eliminate all items sharing an