Simulation study of World Wide Web traffic over the ATM ABR service
跨学科视野的研究英语作文
跨学科视野的研究英语作文Title: The Importance of Interdisciplinary Perspectives in Research。
In today's rapidly evolving world, interdisciplinary research has emerged as a crucial approach to addressing complex challenges and advancing human knowledge. By integrating insights and methodologies from multiple disciplines, interdisciplinary research not only fosters innovation but also facilitates a deeper understanding of interconnected phenomena. In this essay, we will explore the significance of interdisciplinary perspectives in research and examine how they contribute to solving real-world problems.Interdisciplinary research transcends the boundaries of traditional academic disciplines, bringing together experts from diverse fields to collaborate on common objectives. This collaboration enables researchers to tackle multifaceted issues that cannot be adequately addressedwithin the confines of a single discipline. For example, in the field of environmental sustainability,interdisciplinary teams may combine expertise in ecology, economics, and policy analysis to develop comprehensive strategies for addressing climate change.One of the key advantages of interdisciplinary research is its ability to generate novel insights by integrating perspectives from different disciplines. By examining a problem from various angles, researchers can uncover hidden connections and patterns that may not be apparent within a single disciplinary framework. This holistic approach not only expands our understanding of complex phenomena but also stimulates innovative thinking and problem-solving.Moreover, interdisciplinary research encourages creativity and innovation by fostering collaboration across disciplinary boundaries. By bringing together individuals with diverse expertise and perspectives, interdisciplinary teams can generate new ideas and approaches that may not have emerged within a single discipline. This collaborative process often leads to breakthroughs and discoveries thathave the potential to transform entire fields of study.Interdisciplinary research also plays a critical rolein addressing real-world problems that defy simple solutions. Many of today's most pressing challenges, such as climate change, public health crises, and social inequality, are multifaceted issues that require comprehensive, interdisciplinary approaches. By integrating insights from multiple disciplines, researchers can develop more effective strategies for addressing these complex problems and promoting positive change.Furthermore, interdisciplinary research fosters interdisciplinary communication and collaboration, breaking down silos between academic disciplines and facilitating the exchange of ideas and knowledge. By promoting dialogue and collaboration across disciplines, interdisciplinary research creates opportunities for researchers to learn from one another, share best practices, and build upon each other's work. This collaborative ethos not only enriches the research process but also contributes to the development of a more interconnected and inclusivescientific community.In conclusion, interdisciplinary research is essential for addressing the complex challenges of the 21st century and advancing human knowledge. By integrating insights and methodologies from multiple disciplines, interdisciplinary research fosters innovation, generates novel insights, and facilitates the development of comprehensive solutions to real-world problems. As we continue to confrontincreasingly complex challenges, interdisciplinary perspectives will be indispensable for driving progress and promoting positive change.This essay draws attention to the critical role of interdisciplinary research in addressing complex challenges and advancing human knowledge. By integrating insights and methodologies from multiple disciplines, interdisciplinary research fosters innovation, generates novel insights, and facilitates the development of comprehensive solutions to real-world problems. As we continue to confrontincreasingly complex challenges, interdisciplinaryperspectives will be indispensable for driving progress and promoting positive change.。
科学文献查询途径
数学学术网站收录大全1. /《科学》杂志2. /The NASA Astrophysics Data System -- 世界最大免费全文网站,超过300,000篇全文.主要学科:天体物理学3. 美国洛斯阿拉莫斯核物理实验室的论文预印本服务器,全世界物理学研究者最重要的交流工具,覆盖几乎全部的物理学,大部分计算机科学和一部分数学。
4. /数学论文预印本服务器搜索系统,可以查阅大部分数学分支的预印本资源5. /theses/学位论文库,大部分文章可以看全文6. 计算机科学研究报告和论文7. /《科学美国人》杂志8./计算数学,数值分析/epigone/alt.math.undergrad数学论坛。
一个关于大学本科数学问题和难题的论坛。
数学论坛包括了读者可以发表文章参与讨论的文档,以及可以查看或检索文档化的信息。
/pubs/monthly.html美国数学月刊(MMA联机版)——美国数学协会(MAA)Roger A.Horn,编辑。
本月刊刊登数学方面的论文,评论以及其它相关的文章。
/software/uasft.htmlArizona数学软件。
超过60个教育程序的集合,可被教师和学生在课堂上、实验室和家庭环境中使用。
/BNALib,一个运行于个人pc上的数值分析软件库。
以源代码形式提供的BNALib软件包,是一个子例程,函数和演示程序组成的工具箱,它可进行数值分析计算。
/ccma/计算数学和应用中心------Penn州立大学数学系。
Penn州数值分析和应用数学的教学与科研工作中心之一。
应用与计算数学研究班课程系列,PDEs与数值方法研究班课程系列,教职人员与研究生的情况等。
/cdsns/动力系统和非线性研究中心———Georgia Tech数学学院。
始建于1988年9月,以加强数学学院已经开展了的研究活动,研究重点包括动力系统,微分方程,非线性分析和应用。
/CyberMath/Waterloo Maple公司。
World-Wide-Web
World-Wide-WebWorld Wide WebThe World Wide Web (commonly abbreviated as "the Web") is a system of interlinked hypertext documents accessed via the Internet. With a Web browser, one can view Web pages that may contain text, images, videos, and other multimedia and navigate between them using hyperlinks. Using concepts from earlier hypertext systems, the World Wide Web was started in 1989 by the English physicist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium, and later by Robert Cailliau, a Belgian computer scientist, while both were working at CERN in Geneva, Switzerland. In 1990, they proposed building a "web of nodes" storing "hypertext pages" viewed by "browsers" on a network, and released that web in December. Connected by the existing Internet, other websites were created, around the world, adding international standards for domain names & the HTML language. Since then, Berners-Lee has played an active role in guiding the development of Web standards (such as the markup languages in which Web pages are composed), and in recent years has advocated his vision of a Semantic Web.The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularizing use of the Internet. Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet. The Web is an application built on top of the Internet.HistoryIn March 1989, Berners-Lee wrote a proposal which referenced ENQUIRE and described a more elaborate information management system. With help from Robert Cailliau, he published a more formal proposal (on November 12, 1990) to build a "Hypertext project" called "WorldWideWeb" (one word, also "W3") as a "web of nodes" with "hypertext documents" to store data. That data would be viewed in "hypertext pages" (webpages) by various "browsers" (line-mode or full-screen) on the computer network, using an "access protocol" connecting the "Internet and DECnet protocol worlds".The proposal had been modeled after EBT's (Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University) Dynatext SGML reader that CERN had licensed. The Dynatext system, although technically advanced (a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime), was considered too expensive and with an inappropriate licensing policy for general HEP (HighEnergy Physics) community use: a fee for each document and each time a document was changed.A NeXT Computer was used by Berners-Lee as the world's first Web server and also to write the first Web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first Web browser (which was a Web editor as well), the first Web server, and the first Web pages which described the project itself.On August 6, 1991, he posted a short summary of the World Wide Web project on the alt.hypertext newsgroup. This date also marked the debut of the Web as a publicly available service on the Internet.The first server outside Europe was set up at SLAC in December 1991.The crucial underlying concept of hypertext originated with older projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University--- among others Ted Nelson and Andries van Dam--- Ted Nelson's Project Xanadu and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based "memex," which was described in the 1945 essay "As We May Think".Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested that a marriage between the two technologies was possible to members of both technical communities, but when no one took up his invitation, he finally tackled the project himself. In the process, he developed a system of globally unique identifiers for resources on the Web and elsewhere: the Uniform Resource Identifier.The World Wide Web had a number of differences from other hypertext systems that were then available. The Web required only unidirectional links rather than bidirectional ones. This made it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing Web servers and browsers (in comparison to earlier systems), but in turn presented the chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions.On April 30, 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due. Coming two months after the announcement that the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and towards the Web. An early popular Web browser was ViolaWWW, which was based upon HyperCard.Scholars generally agree that a turning point for the World Wide Web began with the introduction of the Mosaic Web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the U.S. High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, one of several computing developments initiated by U.S. Senator Al Gore. Prior to the release of Mosaic, graphics were not commonly mixed with text in Web pages, and its popularity was less than older protocols in use over the Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the Web to become, by far, the most popular Internet protocol.The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October, 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA)—which had pioneered the Internet—and the European Commission.By the end of 1994, while the total number of websites was still minute compared to present standards, quite a number of notable websites were already active, many of whom are the precursors or inspiring examples of today's most popular services.How it worksThe terms Internet and World Wide Web are often used in every-day speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. In short, the Web is an application running on the Internet.Viewing a Web page on the World Wide Web normally begins either by typing the URL of the page into a Web browser, or by following a hyperlink to that page or resource. The Web browser then initiates a series of communication messages, behind the scenes, in order to fetch and display it.First, the server-name portion of the URL is resolved into an IP address using the global, distributed Internet database known as the domain name system, or DNS. This IP address is necessary to contact the Web server. The browser thenrequests the resource by sending an HTTP request to the Web server at that particular address. In the case of a typical Web page, the HTML text of the page is requested first and parsed immediately by the Web browser, which then makes additional requests for images and any other files that form parts of the page. Statistics measuring a website's popularity are usually based either on the number of 'page views' or associated server 'hits' (file requests) that take place. Having received the required files from the Web server, the browser then renders the page onto the screen as specified by its HTML, CSS, and other Web languages. Any images and other resources are incorporated to produce theon-screen Web page that the user sees.Most Web pages will themselves contain hyperlinks to other related pages and perhaps to downloads, source documents, definitions and other Web resources. Such a collection of useful, related resources, interconnected via hypertext links, is what was dubbed a "web" of information. Making it available on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990.Ajax updatesJavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within Web pages. The standardized version is ECMAScript. To overcome some of the limitations of the page-by-page model described above, some web applications also use Ajax (asynchronous JavaScript and XML). JavaScript is delivered with the page that can make additional HTTP requests to the server, either in response to user actions such as mouse-clicks, or based on lapsed time. The server's responses are used to modify the current page rather than creating a new page with each response. Thus the server only needs to provide limited, incremental information. Since multiple Ajax requests can be handled at the same time, users can interact with a page even while data is being retrieved. Some web applications regularly poll the server to ask if new information is available.WWW prefix in Web addressesMany Web addresses begin with www, because of the long-standing practice of naming Internet hosts (servers) according to the services they provide. So, the host name for a web server is often www as it is ftp for an FTP server, and news or nntp for a USENET news server etc. These host names then appear as DNS subdomain names, as in "".The use of such subdomain names is not required by any technical or policy standard; indeed, the first ever web server was called "nxoc01.cern.ch", and many web sites exist without a www subdomain prefix, or with some other prefix such as "www2", "secure" etc. These subdomain prefixes have no consequence; they are simply chosen names. Many web servers are set up such that both the domain by itself (e.g., ) and the www subdomain (e.g.,) refer to the same site, others require one form or the other, or they may map to different web sites.When a single word is typed into the address bar and the return key is pressed, some web browsers automatically try adding "www." to the beginning of it and possibly ".com", ".org" and ".net" at the end. For example, typing'microsoft<return>' may resolve to and'openoffice<return>' to . This feature was beginning to be included in early versions of Mozilla Firefox (when it still had the working title 'Firebird') in early 2003. It is reported that Microsoft was granted a US patent for the same idea in 2008, but only with regard to mobile devices.The 'http://' or 'https://' part of web addresses does have meaning: These refer to Hypertext Transfer Protocol and to HTTP Secure and so define the communication protocol that will be used to request and receive the page and all its images and other resources. The HTTP network protocol is fundamental to the way the World Wide Web works, and the encryption involved in HTTPS adds an essential layer if confidential information such as passwords or bank details are to be exchanged over the public internet. Web browsers often prepend this 'scheme' part to URLs too, if it is omitted. In overview, RFC 2396 defined web URLs to have the following form:<scheme>://<path>?<query>#<fragment>.Pronunciation of "www"In English, www is pronounced by individually pronouncing the name of characters (double-u double-u double-u). Although some technical users pronounce it dub-dub-dub this is not widespread. The English writer Douglas Adams once quipped:The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for.– Douglas Adams, The Independent on Sunday, 1999It is also interesting that in Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng (万维网), which satisfies www and literally means "myriad dimensional net",a translation thatvery appropriately reflects the design concept and proliferation of the World Wide Web.Tim Berners-Lee's web-space states that World Wide Web is officially spelled as three separate words, each capitalized, with no intervening hyphens. Additionally, Web (with a capital W) is used to indicate its status as an abbreviation.StandardsMany formal standards and other technical specifications define the operation of different aspects of the World Wide Web, the Internet, and computer information exchange. Many of the documents are the work of the World Wide Web Consortium (W3C), headed by Berners-Lee, but some are produced by the Internet Engineering Task Force (IETF) and other organizations.Usually, when Web standards are discussed, the following publications are seen as foundational:•Recommendations for markup languages,especially HTML and XHTML, from theW3C. These define the structure andinterpretation of hypertext documents. •Recommendations for stylesheets, especially CSS, from the W3C.•Standards for ECMAScript (usually in theform of JavaScript), from EcmaInternational.•Recommendations for the Document Object Model, from W3C.Additional publications provide definitions of other essential technologies for the World Wide Web, including, but not limited to, the following:•Uniform Resource Identifier (URI), which isa universal system for referencing resourceson the Internet, such as hypertextdocuments and images. URIs, often calledURLs, are defined by the IETF's RFC 3986/ STD 66: Uniform Resource Identifier (URI): Generic Syntax, as well as its predecessorsand numerous URI scheme-defining RFCs; •HyperText Transfer Protocol (HTTP),especially as defined by RFC 2616:HTTP/1.1 and RFC 2617: HTTPAuthentication, which specify how thebrowser and server authenticate each other. PrivacyComputer users, who save time and money, and who gain conveniences and entertainment, may or may not have surrendered the right to privacy in exchange for using a number of technologies including the Web. Worldwide, more than a half billion people have used a social network service, and of Americans who grew up with the Web, half created an online profile and arepart of a generational shift that could be changing norms. Facebook progressed from U.S. college students to a 70% non-U.S. audience and estimates that only 20% of its members use privacy settings.Privacy representatives from 60 countries have resolved to ask for laws to complement industry self-regulation, for education for children and otherminors who use the Web, and for default protections for users of social networks. They also believe data protection for personally identifiable information benefits business more than the sale of that information. Users can opt-in to features in browsers to clear their personal histories locally and block some cookies andadvertising networks but they are still tracked in websites' server logs, and particularly Web beacons. Berners-Lee and colleagues see hope in accountability and appropriate use achieved by extending the Web's architecture to policy awareness, perhaps with audit logging, reasoners and appliances.Among services paid for by advertising, Yahoo! could collect the most data about users of commercial websites, about 2,500 bits of information per month about each typical user of its site and its affiliated advertising network sites. Yahoo! was followed by MySpace with about half that potential and then byAOL-TimeWarner, Google, Facebook, Microsoft, and eBay.SecurityThe Web has become criminals' preferred pathway for spreading malware. Cybercrime carried out on the Web can include identity theft, fraud, espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns, and as measured by Google, about one in ten Web pages may contain malicious code. Most Web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia.The most common of all malware threats is SQL injection attacks against websites. Through HTML and URIs the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript and were exacerbated to some degree by Web 2.0 and Ajax web design that favors the use of scripts. Today by one estimate, 70% of all websites are open to XSS attacks on their users.Proposed solutions vary to extremes. Large security vendors like McAfee already design governance and compliance suites to meet post-9/11 regulations, and some, like Finjan have recommended active real-time inspection of code and all content regardless of its source. Some have argued that for enterprise to see security as a business opportunity rather than a cost center, "ubiquitous, always-on digital rights management" enforced in the infrastructure by a handful of organizations must replace the hundreds of companies that today secure data and networks. Jonathan Zittrain has said users sharing responsibility for computing safety isfar preferable to locking down the Internet.AccessibilityAccess to the Web is for everyone regardless of disability including visual, auditory, physical, speech, cognitive, and neurological. Accessibility features also help others with temporary disabilities like a broken arm and an agingpopulation as their abilities change. The Web is used for receiving information as well as providing information and interacting with society, making it essential that the Web be accessible in order to provide equal access and equal opportunity to people with disabilities.The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect. —Tim Berners-LeeMany countries regulate web accessibility as a requirement for websites. International cooperation in the W3C Web Accessibility Initiative led to simple guidelines that Web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology.InternationalizationThe W3C Internationalization Activity assures that Web technology will work in all languages, scripts, and cultures. Beginning in 2004 or 2005, Unicode gained ground and eventually in December 2007 surpassed both ASCII and Western European as the Web's most frequently used character encoding. Originally RFC 3986 allowed resources to be identified by URI in a subset of US-ASCII. RFC 3987 allows more characters—any character in the Universal Character Set—and now a resource can be idenfified by IRI in any language. StatisticsAccording to a 2001 study, there were massively more than 550 billion documents on the Web, mostly in the invisible Web, or deep Web. A 2002 survey of 2,024 million Web pages determined that by far the most Web content was in English: 56.4%; next were pages in German (7.7%), French (5.6%), and Japanese (4.9%). A more recent study, which used Web searches in 75 different languages to sample the Web, determined that there were over 11.5 billion Web pages in the publicly indexable Web as of the end of January 2005. As of March 2009, the indexable web contains at least 25.21 billion pages. On July 25, 2008, Google software engineers Jesse Alpert and Nissan Hajaj announced that Google Search had discovered one trillion unique URLs.As of May 2009, over 109.5 million websites operated. Of these 74% were commercial or other sites operating in the .com generic top-level domain.Speed issuesFrustration over congestion issues in the Internet infrastructure and the high latency that results in slow browsing has led to an alternative, pejorative name for the World Wide Web: the World Wide Wait. Speeding up the Internet is an ongoing discussion over the use of peering and QoS technologies. Other solutions to reduce the World Wide Wait can be found at W3C.Standard guidelines for ideal Web response times are:•0.1 second (one tenth of a second). Idealresponse time. The user doesn't sense anyinterruption.• 1 second. Highest acceptable response time.Download times above 1 second interruptthe user experience.•10 seconds. Unacceptable response time.The user experience is interrupted and theuser is likely to leave the site or system. CachingIf a user revisits a Web page after only a short interval, the page data may not need to be re-obtained from the source Web server. Almost all Web browsers cache recently-obtained data, usually on the local hard drive. HTTP requests sent by a browser will usually only ask for data that has changed since the last download. If the locally-cached data are still current, it will be reused.Caching helps reduce the amount of Web traffic on the Internet. The decision about expiration is made independently for each downloaded file, whether image, stylesheet, JavaScript, HTML, or whatever other content the site may provide. Thus even on sites with highly dynamic content, many of the basic resources only need to be refreshed occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that theycan be cached efficiently. This helps reduce page download times and lowers demands on the Web server.There are other components of the Internet that can cache Web content. Corporate and academic firewalls often cache Web resources requested by one user for the benefit of all. (See also Caching proxy server.) Some search engines, such as Google or Yahoo!, also store cached content from websites.Apart from the facilities built into Web servers that can determine when files have been updated and so need to be re-sent, designers of dynamically-generated Web pages can control the HTTP headers sent back to requesting users, so that transient or sensitive pages are not cached. Internet banking and news sites frequently use this facility.Data requested with an HTTP 'GET' is likely to be cached if other conditions are met; data obtained in response to a 'POST' is assumed to depend on the data that was POSTed and so is not cached.。
World-Wide-Web
World Wide WebThe World Wide Web (commonly abbreviated as "the Web") is a system of interlinked hypertext documents accessed via the Internet. With a Web browser, one can view Web pages that may contain text, images, videos, and other multimedia and navigate between them using hyperlinks. Using concepts from earlier hypertext systems, the World Wide Web was started in 1989 by the English physicist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium, and later by Robert Cailliau, a Belgian computer scientist, while both were working at CERN in Geneva, Switzerland. In 1990, they proposed building a "web of nodes" storing "hypertext pages" viewed by "browsers" on a network, and released that web in December. Connected by the existing Internet, other websites were created, around the world, adding international standards for domain names & the HTML language. Since then, Berners-Lee has played an active role in guiding the development of Web standards (such as the markup languages in which Web pages are composed), and in recent years has advocated his vision of a Semantic Web.The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularizing use of the Internet. Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet. The Web is an application built on top of the Internet.HistoryIn March 1989, Berners-Lee wrote a proposal which referenced ENQUIRE and described a more elaborate information management system. With help from Robert Cailliau, he published a more formal proposal (on November 12, 1990) to build a "Hypertext project" called "WorldWideWeb" (one word, also "W3") as a "web of nodes" with "hypertext documents" to store data. That data would be viewed in "hypertext pages" (webpages) by various "browsers" (line-mode or full-screen) on the computer network, using an "access protocol" connecting the "Internet and DECnet protocol worlds".The proposal had been modeled after EBT's (Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University) Dynatext SGML reader that CERN had licensed. The Dynatext system, although technically advanced (a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime), was considered too expensive and with an inappropriate licensing policy for general HEP (High Energy Physics) community use: a fee for each document and each time a document was changed.A NeXT Computer was used by Berners-Lee as the world's first Web server and also to write the first Web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first Web browser (which was a Web editor as well), the first Web server, and the first Web pages which described the project itself.On August 6, 1991, he posted a short summary of the World Wide Web project on the alt.hypertext newsgroup. This date also marked the debut of the Web as a publicly available service on the Internet.The first server outside Europe was set up at SLAC in December 1991.The crucial underlying concept of hypertext originated with older projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University--- among others Ted Nelson and Andries van Dam--- Ted Nelson's Project Xanadu and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based "memex," which was described in the 1945 essay "As We May Think".Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested that a marriage between the two technologies was possible to members of both technical communities, but when no one took up his invitation, he finally tackled the project himself. In the process, he developed a system of globally unique identifiers for resources on the Web and elsewhere: the Uniform Resource Identifier.The World Wide Web had a number of differences from other hypertext systems that were then available. The Web required only unidirectional links rather than bidirectional ones. This made it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing Web servers and browsers (in comparison to earlier systems), but in turn presented the chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions.On April 30, 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due. Coming two months after the announcement that the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and towards the Web. An early popular Web browser was ViolaWWW, which was based upon HyperCard.Scholars generally agree that a turning point for the World Wide Web began with the introduction of the Mosaic Web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University ofIllinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen. Funding for Mosaic came from the U.S. High-Performance Computing and Communications Initiative, a funding program initiated by the High Performance Computing and Communication Act of 1991, one of several computing developments initiated by U.S. Senator Al Gore. Prior to the release of Mosaic, graphics were not commonly mixed with text in Web pages, and its popularity was less than older protocols in use over the Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the Web to become, by far, the most popular Internet protocol.The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October, 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA)—which had pioneered the Internet—and the European Commission.By the end of 1994, while the total number of websites was still minute compared to present standards, quite a number of notable websites were already active, many of whom are the precursors or inspiring examples of today's most popular services.How it worksThe terms Internet and World Wide Web are often used in every-day speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. In short, the Web is an application running on the Internet.Viewing a Web page on the World Wide Web normally begins either by typing the URL of the page into a Web browser, or by following a hyperlink to that page or resource. The Web browser then initiates a series of communication messages, behind the scenes, in order to fetch and display it.First, the server-name portion of the URL is resolved into an IP address using the global, distributed Internet database known as the domain name system, or DNS. This IP address is necessary to contact the Web server. The browser then requests the resource by sending an HTTP request to the Web server at that particular address. In the case of a typical Web page, the HTML text of the page is requested first and parsed immediately by the Web browser, which then makes additional requests for images and any other files that form parts of the page. Statistics measuring a website'spopularity are usually based either on the number of 'page views' or associated server 'hits' () that take place.Having received the required files from the Web server, the browser then renders the page onto the screen as specified by its HTML, CSS, and other Web languages. Any images and other resources are incorporated to produce the on-screen Web page that the user sees.Most Web pages will themselves contain hyperlinks to other related pages and perhaps to downloads, source documents, definitions and other Web resources. Such a collection of useful, related resources, interconnected via hypertext links, is what was dubbed a "web" of information. Making it available on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990.Ajax updatesJavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within Web pages. The standardized version is ECMAScript. To overcome some of the limitations of the page-by-page model described above, some web applications also use Ajax (asynchronous JavaScript and XML). JavaScript is delivered with the page that can make additional HTTP requests to the server, either in response to user actions such as mouse-clicks, or based on lapsed time. The server's responses are used to modify the current page rather than creating a new page with each response. Thus the server only needs to provide limited, incremental information. Since multiple Ajax requests can be handled at the same time, users can interact with a page even while data is being retrieved. Some web applications regularly poll the server to ask if new information is available.in Web addressesMany Web addresses begin with www, because of the long-standing practice of naming Internet hosts (servers) according to the services they provide. So, the host name for a web server is often it is an , and news or nntp for a USENET news server etc. These host names then appear as DNS subdomain names, as in "".The use of such subdomain names is not required by any technical or policy standard; indeed, the first ever web server was called "nxoc01.cern.ch", and many web sites exist without a prefix, or with some other prefix such as "www2", "secure" etc. These subdomain prefixes have no consequence; they are simply chosen names. Many web servers are set up such that both the domain by itself (e.g., ) and the (e.g., ) refer to the same site, others require one form or the other, or they may map to different web sites.When a single word is typed into the address bar and the return key is pressed, some web browsers automatically try adding "www." to the beginning of it and possibly ".com", ".org" and ".net" at the end. For example, typing 'microsoft<return>' may resolve to and 'openoffice<return>' to . This feature was beginning to be included in early versions of Mozilla Firefox (when it still had the working title 'Firebird') in early 2003. It is reported that Microsoft was granted a US patent for the same idea in 2008, but only with regard to mobile devices.The 'http://' or 'https://' part of web addresses does have meaning: These refer to Hypertext Transfer Protocol and to HTTP Secure and so define the communication protocol that will be used to request and receive the page and all its images and other resources. The HTTP network protocol is fundamental to the way the World Wide Web works, and the encryption involved in HTTPS adds an essential layer if confidential information such as passwords or bank details are to be exchanged over the public internet. Web browsers often prepend this 'scheme' part to URLs too, if it is omitted. In overview, RFC 2396 defined web URLs to have the following form:<scheme>://<path>?<query>#<fragment>.Pronunciation of "www"In English, pronounced by individually pronouncing the name of characters (double-u double-u double-u). Although some technical users pronounce itdub-dub-dub this is not widespread. The English writer Douglas Adams once quipped:The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for.– Douglas Adams, The Independent on Sunday, 1999It is also interesting that in Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng (万维网), which satisfies literally means "myriad dimensional net",a translation that very appropriately reflects the design concept and proliferation of the World Wide Web.Tim Berners-Lee's web-space states that World Wide Web is officially spelled as three separate words, each capitalized, with no intervening hyphens. Additionally, Web (with a capital W) is used to indicate its status as an abbreviation.StandardsMany formal standards and other technical specifications define the operation of different aspects of the World Wide Web, the Internet, and computer informationexchange. Many of the documents are the work of the World Wide Web Consortium (W3C), headed by Berners-Lee, but some are produced by the Internet Engineering Task Force (IETF) and other organizations.Usually, when Web standards are discussed, the following publications are seen as foundational:•Recommendations for markup languages, especially HTML and XHTML, from the W3C.These define the structure and interpretation of hypertext documents.•Recommendations for stylesheets, especially CSS, from the W3C.•Standards for ECMAScript (usually in the form of JavaScript), from Ecma International.•Recommendations for the Document Object Model, from W3C.Additional publications provide definitions of other essential technologies for the World Wide Web, including, but not limited to, the following:•Uniform Resource Identifier (URI), which is a universal system for referencing resources on the Internet, such as hypertext documents and images. URIs, often called URLs, aredefined by the IETF's RFC 3986 / STD 66: Uniform Resource Identifier (URI): GenericSyntax, as well as its predecessors and numerous URI scheme-defining RFCs;•HyperText Transfer Protocol (HTTP), especially as defined by RFC 2616: HTTP/1.1 and RFC 2617: HTTP Authentication, which specify how the browser and server authenticate each other.PrivacyComputer users, who save time and money, and who gain conveniences and entertainment, may or may not have surrendered the right to privacy in exchange for using a number of technologies including the Web. Worldwide, more than a half billion people have used a social network service, and of Americans who grew up with the Web, half created an online pro are part of a generational shift that could be changing norms. Facebook progressed from U.S. college students to a 70% non-U.S. audience and estimates that only 20% of its members use privacy settings.Privacy representatives from 60 countries have resolved to ask for laws to complement industry self-regulation, for education for children and other minors who use the Web, and for default protections for users of social networks. They also believe data protection for personally identifiable information benefits business more than the sale of that information. Users can opt-in to features in browsers to clear their personal histories locally and block some cookies and advertising networks but they are still tracked in websites' server logs, and particularly Web beacons. Berners-Lee and colleagues see hope in accountability and appropriate use achieved by extending the Web's architecture to policy awareness, perhaps with audit logging, reasoners and appliances.Among services paid for by advertising, Yahoo! could collect the most data about users of commercial websites, about 2,500 bits of information per month about each typical user of its site and its affiliated advertising network sites. Yahoo! was followed by MySpace with about half that potential and then by AOL-TimeWarner, Google, Facebook, Microsoft, and eBay.SecurityThe Web has become criminals' preferred pathway for spreading malware. Cybercrime carried out on the Web can include identity theft, fraud, espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns, and as measured by Google, about one in ten Web pages may contain malicious code. Most Web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia.The most common of all malware threats is SQL injection attacks against websites. Through HTML and URIs the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript and were exacerbated to some degree by Web 2.0 and Ajax web design that favors the use of scripts. Today by one estimate, 70% of all websites are open to XSS attacks on their users.Proposed solutions vary to extremes. Large security vendors like McAfee already design governance and compliance suites to meet post-9/11 regulations, and some, like Finjan have recommended active real-time inspection of code and all content regardless of its source. Some have argued that for enterprise to see security as a business opportunity rather than a cost center, "ubiquitous, always-on digital rights management" enforced in the infrastructure by a handful of organizations must replace the hundreds of companies that today secure data and networks. Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet.AccessibilityAccess to the Web is for everyone regardless of disability including visual, auditory, physical, speech, cognitive, and neurological. Accessibility features also help others with temporary disabilities like a broken arm and an aging population as their abilities change. The Web is used for receiving information as well as providing information and interacting with society, making it essential that the Web be accessible in order to provide equal access and equal opportunity to people with disabilities.The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect. —Tim Berners-LeeMany countries regulate web accessibility as a requirement for websites. International cooperation in the W3C Web Accessibility Initiative led to simple guidelines that Web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology.InternationalizationThe W3C Internationalization Activity assures that Web technology will work in all languages, scripts, and cultures. Beginning in 2004 or 2005, Unicode gained ground and eventually in December 2007 surpassed both ASCII and Western European as the Web's most frequently used character encoding. Originally RFC 3986 allowed resources to be identified by URI in a subset of US-ASCII. RFC 3987 allows more characters—any character in the Universal Character Set—and now a resource can be idenfified by IRI in any language.StatisticsAccording to a 2001 study, there were massively more than 550 billion documents on the Web, mostly in the invisible Web, or deep Web. A 2002 survey of 2,024 million Web pages determined that by far the most Web content was in English: 56.4%; next were pages in German (7.7%), French (5.6%), and Japanese (4.9%). A more recent study, which used Web searches in 75 different languages to sample the Web, determined that there were over 11.5 billion Web pages in the publicly indexable Web as of the end of January 2005. As of March 2009, the indexable web contains at least 25.21 billion pages. On July 25, 2008, Google software engineers Jesse Alpert and Nissan Hajaj announced that Google Search had discovered one trillion unique URLs. As of May 2009, over 109.5 million websites operated. Of these 74% were commercial or other sites operating in the .com generic top-level domain.Speed issuesFrustration over congestion issues in the Internet infrastructure and the high latency that results in slow browsing has led to an alternative, pejorative name for the World Wide Web: the World Wide Wait. Speeding up the Internet is an ongoing discussion over the use of peering and QoS technologies. Other solutions to reduce the World Wide Wait can be found at W3C.Standard guidelines for ideal Web response times are:•0.1 second (one tenth of a second). Ideal response time. The user doesn't sense any interruption.• 1 second. Highest acceptable response time. Download times above 1 second interrupt the user experience.•10 seconds. Unacceptable response time. The user experience is interrupted and the user is likely to leave the site or system.CachingIf a user revisits a Web page after only a short interval, the page data may not need to be re-obtained from the source Web server. Almost all Web browsers cacherecently-obtained data, usually on the local hard drive. HTTP requests sent by a browser will usually only ask for data that has changed since the last download. If the locally-cached data are still current, it will be reused.Caching helps reduce the amount of Web traffic on the Internet. The decision about expiration is made independently for each downloaded file, whether image, stylesheet, JavaScript, HTML, or whatever other content the site may provide. Thus even on sites with highly dynamic content, many of the basic resources only need to be refreshed occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that they can be cached efficiently. This helps reduce page download times and lowers demands on the Web server. There are other components of the Internet that can cache Web content. Corporate and academic firewalls often cache Web resources requested by one user for the benefit of all. (See also Caching proxy server.) Some search engines, such as Google or Yahoo!, also store cached content from websites.Apart from the facilities built into Web servers that can determine when files have been updated and so need to be re-sent, designers of dynamically-generated Web pages can control the HTTP headers sent back to requesting users, so that transient or sensitive pages are not cached. Internet banking and news sites frequently use this facility.Data requested with an HTTP 'GET' is likely to be cached if other conditions are met; data obtained in response to a 'POST' is assumed to depend on the data that was POSTed and so is not cached.。
毕业论文外文翻译--虚拟现实技术的发展过程及研究现状(适用于毕业论文外文翻译+中英文对照)
虚拟现实技术的发展过程及研究现状虚拟现实技术是近年来发展最快的技术之一,它与多媒体技术、网络技术并称为三大前景最好的计算机技术。
与其他高新技术一样,客观需求是虚拟现实技术发展的动力。
近年来,在仿真建模、计算机设计、可视化计算、遥控机器人等领域,提出了一个共同的需求,即建立一个比现有计算机系统更为直观的输入输出系统,成为能与各种船感器相联、更为友好的人机界面、人能沉浸其中、超越其上、进出自如、交互作用的多维化信息环境。
VR技术是人工智能、计算机图形学、人机接口技术、多媒体技术、网络技术、并行计算技术等多种技术的集成。
它是一种有效的模拟人在自然环境中视听、动等行为的高级人机交互技术。
虚拟现实(Virtual Reality ):是一种最有效的模拟人在自然环境中视、听、动等行为的高级人机交互技术,是综合计算机图形技术、多媒体技术、并行实时计算技术、人工智能、仿真技术等多种学科而发展起来的20世纪90年代计算机领域的最新技术。
VR以模拟方式为使用者创造一个实时反映实体对象变化与相互作用的三维图像世界,在视、听、触、嗅等感知行为的逼真体验中,使参与者可直接探索虚拟对象在所处环境中的作用和变化;仿佛置身于虚拟的现实世界中,产生沉浸感(immersive)、想象(imaginative和实现交互性interactive) 。
VR技术的每一步都是围绕这三个特征而前进的。
这三个特征为沉浸特征、交互特征和构想特征。
这三个重要特征用以区别相邻近的技术,如多媒体技术、计算机可视化技术沉浸特征,即在VR提供的虚拟世界中,使用户能感觉到是真实的进入了一个客观世界;交互特征,要求用户能用人类熟悉的方式对虚拟环境中的实体进行观察和操纵;构想特征:即“从定性和定量综合集成环境中得到感性和理性的认识:从而化概念和萌发新意”。
1.VR技术发展的三个阶段VR技术的发展大致可分为三个阶段:20世纪50年代至70年代VR技术的准备阶段;80年代初80年代中期,是VR 技术系统化、开始走出实验室进入实际应用的阶段;80年代末至90年代初,是VR技术迅猛发展的阶段。
模型参数不确定条件下的自主水下航行器(AUV)自适应编队控制研究
模型参数不确定条件下的自主水下航行器(AUV)自适应编队控制研究Chapter 1 Introduction1.1 Background and motivation1.2 Research objectives1.3 Contribution of the study1.4 Outline of the paperChapter 2 Literature Review2.1 Overview of Autonomous Underwater Vehicles (AUVs)2.2 Formation control of AUVs2.3 Adaptive control for AUVs2.4 Uncertainty analysis in AUVsChapter 3 Mathematical Modelling and Control Design3.1 Modelling of AUVs3.2 Formation control of AUVs3.3 Feedback linearization control design3.4 Adaptive control design3.5 Robust control designChapter 4 Simulation and Experimental Setup4.1 Simulation setup4.2 Experimental setup4.3 Results and analysisChapter 5 Conclusion and Future Work5.1 Summary of the study5.2 Limitations and future work5.3 Significance and contributions of the study5.4 Implications for future research.Chapter 1: IntroductionIn recent years, Autonomous Underwater Vehicles (AUVs) have gained significant attention as they are capable of performing various underwater tasks, including environmental monitoring, underwater exploration, and mapping. The AUVs offer several key advantages over human-operated vehicles, including their ability to work efficiently in hazardous and inaccessible locations. However, the development of AUVs' reliable and adaptive control during real-time operation still remains a significant challenge.The motivation behind this research is to address the limitations and extend the capabilities of AUVs through the implementation of a self-adaptive formation control method. The study aims to investigate and develop an adaptive control algorithm that allows AUVs to maintain a desired formation without relying on accurate information about the model parameters of the system. The proposed control design will enhance the autonomy, reliability, and robustness of AUVs, and it is expected to have significant implications for future underwater exploration.The research objectives can be broken down into four primary areas. Firstly, to analyze the uncertainties in AUVs' mathematical models and evaluate how they affect the system's dynamics. Secondly, to propose an adaptive control method that can manage the uncertainties without relying on perfect knowledge of the models' parameters. Thirdly, to design a robust control scheme that will ensure the stability and performance of the system, even in the presence of external disturbances or model uncertainties. Finally, to evaluate the proposed control methods using a combination ofsimulation and experimental tests.The contribution of this study is to develop a self-adaptive formation control strategy that can be applied to any AUV system using only limited information about the model parameters. The proposed control method will allow AUVs to operate autonomously and perform various underwater tasks more efficiently, with greater reliability, and with reduced intervention from human operators. The research findings have significant implications for applications in underwater exploration, ocean monitoring, and other related areas.The paper's outline is organized as follows. Chapter 2 provides an overview of autonomous underwater vehicles, formation control, adaptive control, and uncertainty analysis for AUVs. Chapter 3 discusses mathematical modeling of AUVs, control design, and sensitivity analysis. Chapter 4 describes the simulation and experimental setup for evaluating the algorithm's performance. Chapter 5 concludes the study, summarizes the contributions, and outlines future directions for research.Chapter 2: Literature Review2.1 Autonomous Underwater Vehicles (AUVs)Autonomous Underwater Vehicles (AUVs) are unmanned vehicles capable of operating in underwater environments. They are designed to perform various tasks, including environmental monitoring, mapping, and underwater exploration. AUVs offer several advantages over human-operated vehicles, including their ability to work efficiently in hazardous and inaccessible locations, providing a cost-effective solution for ocean exploration.2.2 Formation ControlFormation control refers to the ability of multiple vehicles or robots to maintain a specific geometric configuration while performing a task. In underwater applications, this involves controlling multiple AUVs to move and operate in a predetermined formation to complete a task collectively. Formation control research has applications in various fields, including agriculture, aerospace, and underwater systems.2.3 Adaptive ControlAdaptive control is a branch of control theory that deals with the development of control systems that can adjust themselves in real-time based on changes in the system's behavior. This makes adaptive control an ideal choice for systems with changing dynamics, such as AUVs operating in complex underwater environments. Adaptive control designs often have features such as parameter estimation, self-tuning, and fault tolerance.2.4 Uncertainty Analysis for AUVsAUVs are subject to a variety of uncertainties, including environmental disturbances and vehicle dynamics. The uncertainties can result in inaccurate control of the AUVs, which can limit their effectiveness in performing tasks. Uncertainty analysis involves evaluating the impact of these uncertainties on the AUV's dynamics and developing control systems robust enough to manage their effects.2.5 Related WorkVarious control methods have been proposed for AUVs. Researchers have developed adaptive control algorithms using various techniques, including neural networks, fuzzy logic, and reinforcement learning. Other studies have investigated formation control for AUVs using centralized and decentralized control strategies. However, few studies have combined formation control and adaptive control for AUVs.2.6 ConclusionThis chapter provided an overview of the relevant literature related to the research topic. Specifically, it explored the concepts of autonomous underwater vehicles, formation control, adaptive control, and uncertainty analysis for AUVs. The chapter also highlighted related work, identifying gaps in existing research and setting the foundation for this study's proposed methodology. Chapter 3 will delve into the mathematical modeling of AUVs, control design, and sensitivity analysis.Chapter 3: Methodology3.1 Mathematical Modeling of AUVsThe first step in developing a control system for AUVs is to create a mathematical model of the system that describes its dynamics. The model must include the vehicle's physical properties, such as mass, drag, and buoyancy, as well as the environmental disturbances acting on the vehicle. In this study, we will use the six-degree-of-freedom (6-DOF) model to represent the AUV'sdynamics. This model includes equations of motion for the vehicle's translational and rotational motion.3.2 Control DesignThe next step in developing a control system for AUVs is to design a control algorithm that emulates the desired behavior. We will use an adaptive control algorithm that can adjust its parameters to account for changing dynamics in real-time. The adaptive control algorithm will be based on the backstepping control method, which is a nonlinear control technique that can handle uncertainties in the system.3.3 Formation ControlAfter designing the adaptive control algorithm, we will incorporate it into the formation control system. Formation control involves coordinating the movements of multiple AUVs to maintain a predetermined geometric pattern while performing a task. We will use a centralized formation control approach where a leader AUV provides the reference trajectory, and the follower AUVs adjust their position and heading to maintain the formation.3.4 Sensitivity AnalysisTo evaluate the effectiveness of the control system, we will perform sensitivity analysis on the system parameters. Sensitivity analysis involves studying the response of the system to small variations in the input parameters. By examining the system's sensitivity to different parameters, we can determine the criticalparameters that affect the system's performance and optimize their values.3.5 ValidationAfter building and simulating the control system, we will validate its effectiveness through experiments. We will use a simulator to simulate the AUVs' dynamics and simulate scenarios in which the AUVs operate in varying oceanic conditions. The validation process will involve comparing the performance of the control system with traditional control systems in various scenarios.3.6 ConclusionThis chapter outlined the methodology used to develop a formation control and adaptive control system for AUVs. The methodology involved creating a mathematical model of the AUV's dynamics, designing an adaptive control algorithm, incorporating formation control, performing sensitivity analysis, and validating the control system through simulations. Chapter 4 will present the results of the simulations and sensitivity analysis.Chapter 4: Results and Analysis4.1 Simulation SetupThis chapter presents the results and analysis of the simulations performed to evaluate the performance of the proposed formation control and adaptive control system for AUVs. The simulations were performed using Matlab Simulink software, and the AUVs were modeled using the 6-DOF mathematical model.The simulations were carried out in a large water tank, where the AUVs had to perform two tasks: (1) move in a straight line at a constant speed, and (2) follow a circular trajectory. The formation pattern used was a line formation, where the leader AUV moved straight ahead, and the follower AUVs adjusted their position and heading to maintain a constant distance from each other.4.2 ResultsThe simulations were carried out under varying oceanic conditions such as current and wave disturbances. The results show that the proposed formation control and adaptive control system were effective in maintaining the desired formation pattern and completing the tasks successfully.Under ideal conditions, the AUVs moved in a straight line at a constant speed without any deviation from the desired trajectory. The follower AUVs adjusted their position and heading to maintain a constant distance from each other without colliding with the leader AUV.Under oceanic disturbances, the control system adjusted its parameters in real-time to compensate for the disturbances and maintain the formation pattern. The simulations showed that the adaptive control algorithm was effective in handling uncertainties in the system and maintaining stability.4.3 Sensitivity AnalysisSensitivity analysis was performed by varying the control parameters and observing the effect on the system's performance. The parameters varied were the gain values of the adaptive control algorithm, the distance between the leader and follower AUVs, and the velocity of the leader AUV.The results showed that the gain values of the adaptive control algorithm had a significant impact on the system's performance, and optimal values needed to be selected for different tasks and oceanic disturbances. The distance between the leader and follower AUVs and the velocity of the leader AUV also affected the system's performance, with smaller distances and faster velocities resulting in better performance.4.4 AnalysisThe simulations and sensitivity analysis showed that the proposed formation control and adaptive control system were effective in maintaining the desired formation pattern and completing tasks under varying oceanic conditions. The adaptive control algorithm was effective in handling uncertainties in the system and maintaining stability.The system's performance was sensitive to the control parameters, with different optimal values required for different tasks and oceanic disturbances. The results showed that the control system's performance could be improved by selecting optimal values for the control parameters.4.5 ConclusionThis chapter presented the results and analysis of the simulations performed to evaluate the performance of the proposed formation control and adaptive control system for AUVs. The simulations showed that the control system was effective in maintaining the desired formation pattern and completing tasks under varying oceanic conditions.Sensitivity analysis showed that the system's performance was sensitive to the control parameters, and optimal values needed to be selected for different tasks and oceanic disturbances. The next chapter will provide a summary of the study's findings and recommendations for future research.Chapter 5: Conclusion and Future Directions5.1 Summary of FindingsThis study presented a formation control and adaptive control system for AUVs to maintain a desired formation pattern under varying oceanic conditions. The simulations showed that the proposed control system was effective in achieving the desired formation pattern and completing tasks successfully.The adaptive control algorithm was effective in handling uncertainties in the system and maintaining stability. Sensitivity analysis showed that the system's performance was sensitive to the control parameters, and optimal values needed to be selected for different tasks and oceanic disturbances.5.2 ContributionsThis study contributes to the field of underwater robotics by proposing a formation control and adaptive control system for AUVs that can handle uncertainties in the system and maintain stability under varying oceanic conditions. The proposed control system can be used in a variety of underwater applications, such as underwater surveillance, exploration, and maintenance.The sensitivity analysis conducted in this study provides insight into the effect of control parameters on the system's performance and highlights the importance of selecting optimal values for different tasks and oceanic disturbances.5.3 Future DirectionsFuture research can be conducted to further improve the proposed control system. One area of research is to investigate the use of machine learning algorithms to optimize the control parameters based on real-time data from the sensors. This approach can improve the control system's performance and accuracy in handling uncertainties in the system.Another area of research is to investigate the use of different formation patterns, such as circular or triangular formations, and different control algorithms to achieve the desired formation pattern. This can provide more flexibility in the control system's application and expand its capabilities in underwater tasks. Finally, more experimental studies can be conducted in real-world underwater environments to evaluate the proposed control system'seffectiveness and robustness. This can provide further validation of the control system's performance and its practical application in underwater tasks.5.4 ConclusionThis study proposed a formation control and adaptive control system for AUVs to maintain a desired formation pattern under varying oceanic conditions. The simulations showed that the control system was effective in achieving the desired formation pattern and completing tasks under different oceanic disturbances.Sensitivity analysis showed that the control system's performance was sensitive to the control parameters, and optimal values needed to be selected for different tasks and oceanic disturbances.Future research can focus on further improving the proposed control system using machine learning algorithms, investigating different formation patterns and control algorithms, and conducting experimental studies in real-world underwater environments.。
交通流
Network impacts of a road capacity reduction:Empirical analysisand model predictionsDavid Watling a ,⇑,David Milne a ,Stephen Clark baInstitute for Transport Studies,University of Leeds,Woodhouse Lane,Leeds LS29JT,UK b Leeds City Council,Leonardo Building,2Rossington Street,Leeds LS28HD,UKa r t i c l e i n f o Article history:Received 24May 2010Received in revised form 15July 2011Accepted 7September 2011Keywords:Traffic assignment Network models Equilibrium Route choice Day-to-day variabilitya b s t r a c tIn spite of their widespread use in policy design and evaluation,relatively little evidencehas been reported on how well traffic equilibrium models predict real network impacts.Here we present what we believe to be the first paper that together analyses the explicitimpacts on observed route choice of an actual network intervention and compares thiswith the before-and-after predictions of a network equilibrium model.The analysis isbased on the findings of an empirical study of the travel time and route choice impactsof a road capacity reduction.Time-stamped,partial licence plates were recorded across aseries of locations,over a period of days both with and without the capacity reduction,and the data were ‘matched’between locations using special-purpose statistical methods.Hypothesis tests were used to identify statistically significant changes in travel times androute choice,between the periods of days with and without the capacity reduction.A trafficnetwork equilibrium model was then independently applied to the same scenarios,and itspredictions compared with the empirical findings.From a comparison of route choice pat-terns,a particularly influential spatial effect was revealed of the parameter specifying therelative values of distance and travel time assumed in the generalised cost equations.When this parameter was ‘fitted’to the data without the capacity reduction,the networkmodel broadly predicted the route choice impacts of the capacity reduction,but with othervalues it was seen to perform poorly.The paper concludes by discussing the wider practicaland research implications of the study’s findings.Ó2011Elsevier Ltd.All rights reserved.1.IntroductionIt is well known that altering the localised characteristics of a road network,such as a planned change in road capacity,will tend to have both direct and indirect effects.The direct effects are imparted on the road itself,in terms of how it can deal with a given demand flow entering the link,with an impact on travel times to traverse the link at a given demand flow level.The indirect effects arise due to drivers changing their travel decisions,such as choice of route,in response to the altered travel times.There are many practical circumstances in which it is desirable to forecast these direct and indirect impacts in the context of a systematic change in road capacity.For example,in the case of proposed road widening or junction improvements,there is typically a need to justify econom-ically the required investment in terms of the benefits that will likely accrue.There are also several examples in which it is relevant to examine the impacts of road capacity reduction .For example,if one proposes to reallocate road space between alternative modes,such as increased bus and cycle lane provision or a pedestrianisation scheme,then typically a range of alternative designs exist which may differ in their ability to accommodate efficiently the new traffic and routing patterns.0965-8564/$-see front matter Ó2011Elsevier Ltd.All rights reserved.doi:10.1016/j.tra.2011.09.010⇑Corresponding author.Tel.:+441133436612;fax:+441133435334.E-mail address:d.p.watling@ (D.Watling).168 D.Watling et al./Transportation Research Part A46(2012)167–189Through mathematical modelling,the alternative designs may be tested in a simulated environment and the most efficient selected for implementation.Even after a particular design is selected,mathematical models may be used to adjust signal timings to optimise the use of the transport system.Road capacity may also be affected periodically by maintenance to essential services(e.g.water,electricity)or to the road itself,and often this can lead to restricted access over a period of days and weeks.In such cases,planning authorities may use modelling to devise suitable diversionary advice for drivers,and to plan any temporary changes to traffic signals or priorities.Berdica(2002)and Taylor et al.(2006)suggest more of a pro-ac-tive approach,proposing that models should be used to test networks for potential vulnerability,before any reduction mate-rialises,identifying links which if reduced in capacity over an extended period1would have a substantial impact on system performance.There are therefore practical requirements for a suitable network model of travel time and route choice impacts of capac-ity changes.The dominant method that has emerged for this purpose over the last decades is clearly the network equilibrium approach,as proposed by Beckmann et al.(1956)and developed in several directions since.The basis of using this approach is the proposition of what are believed to be‘rational’models of behaviour and other system components(e.g.link perfor-mance functions),with site-specific data used to tailor such models to particular case studies.Cross-sectional forecasts of network performance at specific road capacity states may then be made,such that at the time of any‘snapshot’forecast, drivers’route choices are in some kind of individually-optimum state.In this state,drivers cannot improve their route selec-tion by a unilateral change of route,at the snapshot travel time levels.The accepted practice is to‘validate’such models on a case-by-case basis,by ensuring that the model—when supplied with a particular set of parameters,input network data and input origin–destination demand data—reproduces current mea-sured mean link trafficflows and mean journey times,on a sample of links,to some degree of accuracy(see for example,the practical guidelines in TMIP(1997)and Highways Agency(2002)).This kind of aggregate level,cross-sectional validation to existing conditions persists across a range of network modelling paradigms,ranging from static and dynamic equilibrium (Florian and Nguyen,1976;Leonard and Tough,1979;Stephenson and Teply,1984;Matzoros et al.,1987;Janson et al., 1986;Janson,1991)to micro-simulation approaches(Laird et al.,1999;Ben-Akiva et al.,2000;Keenan,2005).While such an approach is plausible,it leaves many questions unanswered,and we would particularly highlight two: 1.The process of calibration and validation of a network equilibrium model may typically occur in a cycle.That is to say,having initially calibrated a model using the base data sources,if the subsequent validation reveals substantial discrep-ancies in some part of the network,it is then natural to adjust the model parameters(including perhaps even the OD matrix elements)until the model outputs better reflect the validation data.2In this process,then,we allow the adjustment of potentially a large number of network parameters and input data in order to replicate the validation data,yet these data themselves are highly aggregate,existing only at the link level.To be clear here,we are talking about a level of coarseness even greater than that in aggregate choice models,since we cannot even infer from link-level data the aggregate shares on alternative routes or OD movements.The question that arises is then:how many different combinations of parameters and input data values might lead to a similar link-level validation,and even if we knew the answer to this question,how might we choose between these alternative combinations?In practice,this issue is typically neglected,meaning that the‘valida-tion’is a rather weak test of the model.2.Since the data are cross-sectional in time(i.e.the aim is to reproduce current base conditions in equilibrium),then in spiteof the large efforts required in data collection,no empirical evidence is routinely collected regarding the model’s main purpose,namely its ability to predict changes in behaviour and network performance under changes to the network/ demand.This issue is exacerbated by the aggregation concerns in point1:the‘ambiguity’in choosing appropriate param-eter values to satisfy the aggregate,link-level,base validation strengthens the need to independently verify that,with the selected parameter values,the model responds reliably to changes.Although such problems–offitting equilibrium models to cross-sectional data–have long been recognised by practitioners and academics(see,e.g.,Goodwin,1998), the approach described above remains the state-of-practice.Having identified these two problems,how might we go about addressing them?One approach to thefirst problem would be to return to the underlying formulation of the network model,and instead require a model definition that permits analysis by statistical inference techniques(see for example,Nakayama et al.,2009).In this way,we may potentially exploit more information in the variability of the link-level data,with well-defined notions(such as maximum likelihood)allowing a systematic basis for selection between alternative parameter value combinations.However,this approach is still using rather limited data and it is natural not just to question the model but also the data that we use to calibrate and validate it.Yet this is not altogether straightforward to resolve.As Mahmassani and Jou(2000) remarked:‘A major difficulty...is obtaining observations of actual trip-maker behaviour,at the desired level of richness, simultaneously with measurements of prevailing conditions’.For this reason,several authors have turned to simulated gaming environments and/or stated preference techniques to elicit information on drivers’route choice behaviour(e.g. 1Clearly,more sporadic and less predictable reductions in capacity may also occur,such as in the case of breakdowns and accidents,and environmental factors such as severe weather,floods or landslides(see for example,Iida,1999),but the responses to such cases are outside the scope of the present paper. 2Some authors have suggested more systematic,bi-level type optimization processes for thisfitting process(e.g.Xu et al.,2004),but this has no material effect on the essential points above.D.Watling et al./Transportation Research Part A46(2012)167–189169 Mahmassani and Herman,1990;Iida et al.,1992;Khattak et al.,1993;Vaughn et al.,1995;Wardman et al.,1997;Jou,2001; Chen et al.,2001).This provides potentially rich information for calibrating complex behavioural models,but has the obvious limitation that it is based on imagined rather than real route choice situations.Aside from its common focus on hypothetical decision situations,this latter body of work also signifies a subtle change of emphasis in the treatment of the overall network calibration problem.Rather than viewing the network equilibrium calibra-tion process as a whole,the focus is on particular components of the model;in the cases above,the focus is on that compo-nent concerned with how drivers make route decisions.If we are prepared to make such a component-wise analysis,then certainly there exists abundant empirical evidence in the literature,with a history across a number of decades of research into issues such as the factors affecting drivers’route choice(e.g.Wachs,1967;Huchingson et al.,1977;Abu-Eisheh and Mannering,1987;Duffell and Kalombaris,1988;Antonisse et al.,1989;Bekhor et al.,2002;Liu et al.,2004),the nature of travel time variability(e.g.Smeed and Jeffcoate,1971;Montgomery and May,1987;May et al.,1989;McLeod et al., 1993),and the factors affecting trafficflow variability(Bonsall et al.,1984;Huff and Hanson,1986;Ribeiro,1994;Rakha and Van Aerde,1995;Fox et al.,1998).While these works provide useful evidence for the network equilibrium calibration problem,they do not provide a frame-work in which we can judge the overall‘fit’of a particular network model in the light of uncertainty,ambient variation and systematic changes in network attributes,be they related to the OD demand,the route choice process,travel times or the network data.Moreover,such data does nothing to address the second point made above,namely the question of how to validate the model forecasts under systematic changes to its inputs.The studies of Mannering et al.(1994)and Emmerink et al.(1996)are distinctive in this context in that they address some of the empirical concerns expressed in the context of travel information impacts,but their work stops at the stage of the empirical analysis,without a link being made to net-work prediction models.The focus of the present paper therefore is both to present thefindings of an empirical study and to link this empirical evidence to network forecasting models.More recently,Zhu et al.(2010)analysed several sources of data for evidence of the traffic and behavioural impacts of the I-35W bridge collapse in Minneapolis.Most pertinent to the present paper is their location-specific analysis of linkflows at 24locations;by computing the root mean square difference inflows between successive weeks,and comparing the trend for 2006with that for2007(the latter with the bridge collapse),they observed an apparent transient impact of the bridge col-lapse.They also showed there was no statistically-significant evidence of a difference in the pattern offlows in the period September–November2007(a period starting6weeks after the bridge collapse),when compared with the corresponding period in2006.They suggested that this was indicative of the length of a‘re-equilibration process’in a conceptual sense, though did not explicitly compare their empiricalfindings with those of a network equilibrium model.The structure of the remainder of the paper is as follows.In Section2we describe the process of selecting the real-life problem to analyse,together with the details and rationale behind the survey design.Following this,Section3describes the statistical techniques used to extract information on travel times and routing patterns from the survey data.Statistical inference is then considered in Section4,with the aim of detecting statistically significant explanatory factors.In Section5 comparisons are made between the observed network data and those predicted by a network equilibrium model.Finally,in Section6the conclusions of the study are highlighted,and recommendations made for both practice and future research.2.Experimental designThe ultimate objective of the study was to compare actual data with the output of a traffic network equilibrium model, specifically in terms of how well the equilibrium model was able to correctly forecast the impact of a systematic change ap-plied to the network.While a wealth of surveillance data on linkflows and travel times is routinely collected by many local and national agencies,we did not believe that such data would be sufficiently informative for our purposes.The reason is that while such data can often be disaggregated down to small time step resolutions,the data remains aggregate in terms of what it informs about driver response,since it does not provide the opportunity to explicitly trace vehicles(even in aggre-gate form)across more than one location.This has the effect that observed differences in linkflows might be attributed to many potential causes:it is especially difficult to separate out,say,ambient daily variation in the trip demand matrix from systematic changes in route choice,since both may give rise to similar impacts on observed linkflow patterns across re-corded sites.While methods do exist for reconstructing OD and network route patterns from observed link data(e.g.Yang et al.,1994),these are typically based on the premise of a valid network equilibrium model:in this case then,the data would not be able to give independent information on the validity of the network equilibrium approach.For these reasons it was decided to design and implement a purpose-built survey.However,it would not be efficient to extensively monitor a network in order to wait for something to happen,and therefore we required advance notification of some planned intervention.For this reason we chose to study the impact of urban maintenance work affecting the roads,which UK local government authorities organise on an annual basis as part of their‘Local Transport Plan’.The city council of York,a historic city in the north of England,agreed to inform us of their plans and to assist in the subsequent data collection exercise.Based on the interventions planned by York CC,the list of candidate studies was narrowed by considering factors such as its propensity to induce significant re-routing and its impact on the peak periods.Effectively the motivation here was to identify interventions that were likely to have a large impact on delays,since route choice impacts would then likely be more significant and more easily distinguished from ambient variability.This was notably at odds with the objectives of York CC,170 D.Watling et al./Transportation Research Part A46(2012)167–189in that they wished to minimise disruption,and so where possible York CC planned interventions to take place at times of day and of the year where impacts were minimised;therefore our own requirement greatly reduced the candidate set of studies to monitor.A further consideration in study selection was its timing in the year for scheduling before/after surveys so to avoid confounding effects of known significant‘seasonal’demand changes,e.g.the impact of the change between school semesters and holidays.A further consideration was York’s role as a major tourist attraction,which is also known to have a seasonal trend.However,the impact on car traffic is relatively small due to the strong promotion of public trans-port and restrictions on car travel and parking in the historic centre.We felt that we further mitigated such impacts by sub-sequently choosing to survey in the morning peak,at a time before most tourist attractions are open.Aside from the question of which intervention to survey was the issue of what data to collect.Within the resources of the project,we considered several options.We rejected stated preference survey methods as,although they provide a link to personal/socio-economic drivers,we wanted to compare actual behaviour with a network model;if the stated preference data conflicted with the network model,it would not be clear which we should question most.For revealed preference data, options considered included(i)self-completion diaries(Mahmassani and Jou,2000),(ii)automatic tracking through GPS(Jan et al.,2000;Quiroga et al.,2000;Taylor et al.,2000),and(iii)licence plate surveys(Schaefer,1988).Regarding self-comple-tion surveys,from our own interview experiments with self-completion questionnaires it was evident that travellersfind it relatively difficult to recall and describe complex choice options such as a route through an urban network,giving the po-tential for significant errors to be introduced.The automatic tracking option was believed to be the most attractive in this respect,in its potential to accurately map a given individual’s journey,but the negative side would be the potential sample size,as we would need to purchase/hire and distribute the devices;even with a large budget,it is not straightforward to identify in advance the target users,nor to guarantee their cooperation.Licence plate surveys,it was believed,offered the potential for compromise between sample size and data resolution: while we could not track routes to the same resolution as GPS,by judicious location of surveyors we had the opportunity to track vehicles across more than one location,thus providing route-like information.With time-stamped licence plates, the matched data would also provide journey time information.The negative side of this approach is the well-known poten-tial for significant recording errors if large sample rates are required.Our aim was to avoid this by recording only partial licence plates,and employing statistical methods to remove the impact of‘spurious matches’,i.e.where two different vehi-cles with the same partial licence plate occur at different locations.Moreover,extensive simulation experiments(Watling,1994)had previously shown that these latter statistical methods were effective in recovering the underlying movements and travel times,even if only a relatively small part of the licence plate were recorded,in spite of giving a large potential for spurious matching.We believed that such an approach reduced the opportunity for recorder error to such a level to suggest that a100%sample rate of vehicles passing may be feasible.This was tested in a pilot study conducted by the project team,with dictaphones used to record a100%sample of time-stamped, partial licence plates.Independent,duplicate observers were employed at the same location to compare error rates;the same study was also conducted with full licence plates.The study indicated that100%surveys with dictaphones would be feasible in moderate trafficflow,but only if partial licence plate data were used in order to control observation errors; for higherflow rates or to obtain full number plate data,video surveys should be considered.Other important practical les-sons learned from the pilot included the need for clarity in terms of vehicle types to survey(e.g.whether to include motor-cycles and taxis),and of the phonetic alphabet used by surveyors to avoid transcription ambiguities.Based on the twin considerations above of planned interventions and survey approach,several candidate studies were identified.For a candidate study,detailed design issues involved identifying:likely affected movements and alternative routes(using local knowledge of York CC,together with an existing network model of the city),in order to determine the number and location of survey sites;feasible viewpoints,based on site visits;the timing of surveys,e.g.visibility issues in the dark,winter evening peak period;the peak duration from automatic trafficflow data;and specific survey days,in view of public/school holidays.Our budget led us to survey the majority of licence plate sites manually(partial plates by audio-tape or,in lowflows,pen and paper),with video surveys limited to a small number of high-flow sites.From this combination of techniques,100%sampling rate was feasible at each site.Surveys took place in the morning peak due both to visibility considerations and to minimise conflicts with tourist/special event traffic.From automatic traffic count data it was decided to survey the period7:45–9:15as the main morning peak period.This design process led to the identification of two studies:2.1.Lendal Bridge study(Fig.1)Lendal Bridge,a critical part of York’s inner ring road,was scheduled to be closed for maintenance from September2000 for a duration of several weeks.To avoid school holidays,the‘before’surveys were scheduled for June and early September.It was decided to focus on investigating a significant southwest-to-northeast movement of traffic,the river providing a natural barrier which suggested surveying the six river crossing points(C,J,H,K,L,M in Fig.1).In total,13locations were identified for survey,in an attempt to capture traffic on both sides of the river as well as a crossing.2.2.Fishergate study(Fig.2)The partial closure(capacity reduction)of the street known as Fishergate,again part of York’s inner ring road,was scheduled for July2001to allow repairs to a collapsed sewer.Survey locations were chosen in order to intercept clockwiseFig.1.Intervention and survey locations for Lendal Bridge study.around the inner ring road,this being the direction of the partial closure.A particular aim wasFulford Road(site E in Fig.2),the main radial affected,with F and K monitoring local diversion I,J to capture wider-area diversion.studies,the plan was to survey the selected locations in the morning peak over a period of approximately covering the three periods before,during and after the intervention,with the days selected so holidays or special events.Fig.2.Intervention and survey locations for Fishergate study.In the Lendal Bridge study,while the‘before’surveys proceeded as planned,the bridge’s actualfirst day of closure on Sep-tember11th2000also marked the beginning of the UK fuel protests(BBC,2000a;Lyons and Chaterjee,2002).Trafficflows were considerably affected by the scarcity of fuel,with congestion extremely low in thefirst week of closure,to the extent that any changes could not be attributed to the bridge closure;neither had our design anticipated how to survey the impacts of the fuel shortages.We thus re-arranged our surveys to monitor more closely the planned re-opening of the bridge.Unfor-tunately these surveys were hampered by a second unanticipated event,namely the wettest autumn in the UK for270years and the highest level offlooding in York since records began(BBC,2000b).Theflooding closed much of the centre of York to road traffic,including our study area,as the roads were impassable,and therefore we abandoned the planned‘after’surveys. As a result of these events,the useable data we had(not affected by the fuel protests orflooding)consisted offive‘before’days and one‘during’day.In the Fishergate study,fortunately no extreme events occurred,allowing six‘before’and seven‘during’days to be sur-veyed,together with one additional day in the‘during’period when the works were temporarily removed.However,the works over-ran into the long summer school holidays,when it is well-known that there is a substantial seasonal effect of much lowerflows and congestion levels.We did not believe it possible to meaningfully isolate the impact of the link fully re-opening while controlling for such an effect,and so our plans for‘after re-opening’surveys were abandoned.3.Estimation of vehicle movements and travel timesThe data resulting from the surveys described in Section2is in the form of(for each day and each study)a set of time-stamped,partial licence plates,observed at a number of locations across the network.Since the data include only partial plates,they cannot simply be matched across observation points to yield reliable estimates of vehicle movements,since there is ambiguity in whether the same partial plate observed at different locations was truly caused by the same vehicle. Indeed,since the observed system is‘open’—in the sense that not all points of entry,exit,generation and attraction are mon-itored—the question is not just which of several potential matches to accept,but also whether there is any match at all.That is to say,an apparent match between data at two observation points could be caused by two separate vehicles that passed no other observation point.Thefirst stage of analysis therefore applied a series of specially-designed statistical techniques to reconstruct the vehicle movements and point-to-point travel time distributions from the observed data,allowing for all such ambiguities in the data.Although the detailed derivations of each method are not given here,since they may be found in the references provided,it is necessary to understand some of the characteristics of each method in order to interpret the results subsequently provided.Furthermore,since some of the basic techniques required modification relative to the published descriptions,then in order to explain these adaptations it is necessary to understand some of the theoretical basis.3.1.Graphical method for estimating point-to-point travel time distributionsThe preliminary technique applied to each data set was the graphical method described in Watling and Maher(1988).This method is derived for analysing partial registration plate data for unidirectional movement between a pair of observation stations(referred to as an‘origin’and a‘destination’).Thus in the data study here,it must be independently applied to given pairs of observation stations,without regard for the interdependencies between observation station pairs.On the other hand, it makes no assumption that the system is‘closed’;there may be vehicles that pass the origin that do not pass the destina-tion,and vice versa.While limited in considering only two-point surveys,the attraction of the graphical technique is that it is a non-parametric method,with no assumptions made about the arrival time distributions at the observation points(they may be non-uniform in particular),and no assumptions made about the journey time probability density.It is therefore very suitable as afirst means of investigative analysis for such data.The method begins by forming all pairs of possible matches in the data,of which some will be genuine matches(the pair of observations were due to a single vehicle)and the remainder spurious matches.Thus, for example,if there are three origin observations and two destination observations of a particular partial registration num-ber,then six possible matches may be formed,of which clearly no more than two can be genuine(and possibly only one or zero are genuine).A scatter plot may then be drawn for each possible match of the observation time at the origin versus that at the destination.The characteristic pattern of such a plot is as that shown in Fig.4a,with a dense‘line’of points(which will primarily be the genuine matches)superimposed upon a scatter of points over the whole region(which will primarily be the spurious matches).If we were to assume uniform arrival rates at the observation stations,then the spurious matches would be uniformly distributed over this plot;however,we shall avoid making such a restrictive assumption.The method begins by making a coarse estimate of the total number of genuine matches across the whole of this plot.As part of this analysis we then assume knowledge of,for any randomly selected vehicle,the probabilities:h k¼Prðvehicle is of the k th type of partial registration plateÞðk¼1;2;...;mÞwhereX m k¼1h k¼1172 D.Watling et al./Transportation Research Part A46(2012)167–189。
国开大学英语试题及答案
国开大学英语试题及答案一、选择题(每题2分,共20分)1. The correct spelling of the word is ______.A) seperateB) seperatelyC) separateD) seprate2. Which of the following is NOT a type of fruit?A) AppleB) BananaC) CarrotD) Orange3. Fill in the blank: The _______ of the project will be completed by the end of this month.A) implementationB) implicationC) simplificationD) limitation4. Choose the best synonym for the word "innovate".A) conserveB) imitateC) createD) replicate5. The sentence "She is a _______ of her father'sintelligence." is best completed by:A) beneficiaryB) descendantC) inheritorD) successor6. The phrase "to turn a blind eye" means:A) to be helpfulB) to be observantC) to ignoreD) to be curious7. What is the past tense of "begin"?A) begunB) beganC) beginD) beginning8. The word "meticulous" is closest in meaning to:A) carelessB) detailedC) vagueD) general9. The opposite of "extrovert" is:A) introvertB) extrovertedC) introvertedD) introversion10. The idiom "break the ice" is used to describe:A) stopping a conversationB) starting a conversationC) ending a meetingD) making a decision二、填空题(每题1分,共10分)11. The _______ of the company is to provide high-quality products to its customers.[Mission]12. She is fluent in _______ languages.[Three]13. The _______ of the book is very interesting.[Plot]14. He was _______ to the idea of going on a vacation.[Averse]15. The _______ of the building was completed in 2020.[Construction]16. She is a _______ of the arts.[Appreciator]17. The _______ of the meeting has been postponed.[Commencement]18. The _______ of the project was a success.[Outcome]19. The _______ of the speech was very moving.[Delivery]20. The _______ of the new policy was met with mixed reactions.[Announcement]三、阅读理解(每题2分,共20分)Read the following passage and answer the questions.Passage:[A passage about the history and development of the internet.]21. When was the internet first developed?A) 1960sB) 1970sC) 1980sD) 1990s22. What was the primary purpose of the internet in its early stages?A) CommunicationB) EducationC) Military useD) Entertainment23. Which of the following is NOT a feature of the modern internet?A) EmailB) Social mediaC) Online shoppingD) Fax machines24. What does the term "World Wide Web" refer to?A) A collection of interconnected networksB) A specific type of internet browserC) A military communication systemD) A database of websites25. According to the passage, how has the internet changed over time?A) It has become more centralizedB) It has become more decentralizedC) It has remained largely the sameD) It has been replaced by new technology四、完形填空(每题1分,共10分)[A passage with missing words, followed by four options for each blank.]26. The _______ of the new technology was a turning point in the industry.A) introductionB) conclusionC) rejectionD) limitation27. Despite the _______, the team continued to work on the project.A) challengesB) rewardsC) opportunitiesD) resources28. The _______ of the company's success was due to its innovative approach.A) factorB) featureC) aspectD) element29. The _______ of the plan was carefully considered before implementation.A) feasibilityB) popularityC) complexityC) duration。
虚拟现实的世界英语作文
虚拟现实的世界英语作文1Virtual reality is a fascinating and rapidly evolving technology that has captured the attention of people all over the world.It has some remarkable features. For example, it can create an immersive environment that makes users feel like they are truly in a different place or situation. This sense of presence is one of the most attractive aspects of virtual reality.The applications of virtual reality are wide-ranging. In the field of gaming, it allows players to have more interactive and exciting experiences. In education, it can bring history and science to life, making learning more engaging and memorable. In healthcare, it has been used to help patients cope with pain and anxiety.However, virtual reality also has its influences. On the positive side, it provides new ways for people to have fun and learn. But on the negative side, some people might become addicted to it and neglect their real lives.In conclusion, virtual reality holds great promise for the future. It is likely to become even more advanced and integrated into various aspects of our lives. We should embrace its benefits while being aware of its potential drawbacks.Virtual reality has become an increasingly prominent technology in today's world. Some people believe that it brings numerous benefits and opportunities, while others are concerned about its potential negative impacts.From the perspective of technological progress, virtual reality has opened up new frontiers. It allows us to experience things that were previously impossible, such as immersive space exploration or historical reenactments. This not only enriches our knowledge but also stimulates our imagination.However, when it comes to social influence, there are mixed consequences. On one hand, it has changed the way we socialize. People can interact with others from different parts of the world in a virtual environment. But on the other hand, it may reduce face-to-face communication and make people more isolated in the real world.In addition, the rise of virtual reality has had a significant impact on traditional industries. For instance, the gaming industry has been transformed, but some traditional forms of entertainment might suffer.In conclusion, virtual reality is a double-edged sword. We should embrace its advantages while being cautious of its drawbacks. We need to ensure that it is used in a way that benefits humanity and does not cause harm to our social fabric.One day, I decided to explore the virtual reality world. The reason was that I was curious about what kind of wonders it could bring me.I put on the VR headset and entered a magical forest. The trees were so tall and the leaves were shining under the sunlight. I walked along a path, hearing the sounds of birds and the rustling of leaves. Suddenly, a fierce tiger appeared in front of me. I was scared at first, but then I remembered that this was a virtual world. So, I bravely faced it and found a way to escape.During this adventure, I felt both excited and nervous. The virtual reality world made me feel like I was really in a different place, experiencing things that I couldn't in the real world.In the end, this experience taught me that virtual reality can bring us amazing experiences, but we also need to be clear that it's not the same as the real world. We should enjoy it while keeping a balance with reality.4Virtual reality (VR) is a fascinating technology that has been making waves in recent years. It creates an artificial environment that feels incredibly real to the user.VR works by using special headsets or goggles that display images and sounds. These devices track the user's head movements, allowing them to look around the virtual world as if they were really there. Sensorsalso capture the user's hand and body movements, enabling interaction with the virtual objects and environments.The technology behind VR is complex and involves many components. Powerful computers are needed to generate the realistic graphics and process the data quickly. Special software is used to create the virtual scenes and simulate physics and interactions. High-resolution displays and advanced audio systems enhance the immersive experience.For example, in some VR games, players can feel like they are really in a fantasy world, fighting monsters or exploring ancient ruins. In education, students can visit historical places or conduct experiments that would be impossible in the real world.In the future, VR is expected to become even more advanced and widespread. It could revolutionize fields such as healthcare, where doctors could practice surgeries in a virtual environment. It might also change the way we work, allowing for virtual meetings and collaborative projects.Overall, virtual reality holds great promise and has the potential to transform our lives in countless ways.5The investigation question of this report is to explore the current situation and problems of the virtual reality world. The purpose is to provide a comprehensive understanding and in-depth analysis of thisemerging field.The survey methods include questionnaires and interviews. A total of 500 people from different age groups were randomly selected for the questionnaires, and 50 in-depth interviews were conducted. The data shows that among the respondents, 70% of young people aged 15-25 are highly interested in virtual reality and have a certain degree of acceptance. However, only 40% of people aged 35-50 show interest.The analysis of the results reveals that younger generations are more inclined to embrace new technologies and are more receptive to the immersive experience provided by virtual reality. However, for the older age group, concerns such as health risks and practicality may limit their acceptance.In conclusion, virtual reality has great potential but also faces challenges. It is recommended that developers pay more attention to user experience and safety to expand the user base. At the same time, education and publicity should be strengthened to increase public understanding and acceptance of virtual reality.。
Diffusion-based caching along routing paths,” presented at NLANL Web Caching Workshop
May 26, 1997 Boston University Computer Science Department 111 Cummington Street Boston, MA 02215 Phone: (617) 353-8919 Fax: (617) 353-6457
Caching for the Web can be bene cial in di erent ways: not only can it reduce network tra c and client response time, but it can also enable large scale server load balancing. In this paper, we present preliminary simulation data to characterize the performance of WebWave, a di usion-based caching protocol for server load balancing that we have recently proposed. Initial results suggest that WebWave indeed achieves load balance, even under self-similar request load. Furthermore, the number of cache copies created by WebWave appears to be within acceptable levels.
Байду номын сангаас
1
2 WebWave Simulation
For this performance study we used MaRS (Maryland Routing Simulator) 1]. MaRS is an event driven simulator designed to evaluate routing algorithms. In MaRS, a network consists of storeand-forward entities connected by links, routing algorithms and workload generators (static sourcesink pairs). To evaluate the performance of WebWave, we adapted MaRS by introducing clients, servers, and documents. Our modi cations to MaRS account for protocol dependent tasks. Servers and communication links are taxed for both load gossip and the creation of new cache copies; additionally, request packets are charged 2 msec for passing through the lter. The documents requested by a collection of clients are determined using a synthetic self-similar trace generated by SURGE 4], and explained in more detail later in this section. Each event in the trace le represents a document request. Client requests are scheduled using exponential inter-arrival times 3]. For each request, a client generates a request packet to the home server (document publisher). In our model, a client request can be intercepted and serviced by an intermediate WebWave server caching the requested document. To achieve this goal, routers under MaRS were modi ed so as to be able to interact with the attached cache server and exchange cache related information. Each server provides its underlying router with lter code. As a packet moves through the network, routers inspect its header and determine its type. All request packets are passed through the lter to determine if the document will be served locally (a lter injected by a home server will intercept all requests to documents published by it). If a packet is intercepted, the lter composes a pseudo header, attaches it to the request and hands it over to the local server. Pseudo headers contain information|inserted by routers|such as the identity of the attached WebWave server that this packet ew-by. Servers are de ned using two attributes: capacity (in bytes/sec) and gossip period length. For each server, the simulator periodically computes and records the number of hits to each document and the load (utilization). For each client request, the simulator records the document requested, its size, the home server publishing the document, the server intercepting the request, number of hops to this server, and the response time. To drive our simulation, we employ SURGE (Scalable URL Request Generator), a synthetic Web load generator designed and implemented by Barford and Crovella 4]. SURGE generates a sequence of le requests that satis es the same statistical properties that characterize experimentally measured Web loads for a set of clients. In particular, the resulting synthetic request trace has Zipf le popularity distribution and heavy-tailed le size distribution 10]. Furthermore, the load mimics empirically measured temporal and spatial locality properties, which are critical for the accurate evaluation of cache performance 2]. This means that the trace exhibits the following characteristics: The fraction of requests for each le is inversely proportional to its rank by popularity. File size distributions show heavy tails with parameter < 2:0. As declines, tra c generated becomes increasingly self-similar 15]. When each request in the stream is expressed as the number of requests since the same le was last requested before, the resulting stack distance distribution is lognormal. This causes the generated load to mimic the locality of reference observed in real traces.
基于身份密码系统和区块链的跨域认证协议
第44卷第5期2021年5月Vol.44Ao.5May2021计算机学报CHINESE JOURNAL OF COMPUTERS基于身份密码系统和区块链的跨域认证协议魏松杰年,李莎莎年王佳贺年年(南京理工大学计算机科学与工程学院南京210094)2南京理工大学网络空间安全学院南京210094)摘要随着信息网络技术的快速发展和网络规模的持续扩张,网络环境中提供的海量数据和多样服务的丰富性和持久性都得到了前所未有的提升•处于不同网络管理域中的用户与信息服务实体之间频繁交互,在身份认证、权限管理、信任迁移等方面面临一系列安全问题和挑战•本文针对异构网络环境中用户访问不同信任域网络服务时的跨域身份认证问题,基于NC身份密码系统,结合区块链技术的分布式对等网络架构,提出了一种联盟链上基于身份密码体制的跨信任域身份认证方案本先网十对基于NC架构下固有的实体身份即时撤销困难问题,通过加入安全仲裁节点来实现用户身份管理,改进了一种基于安全仲裁的身份签名方案mIBE,在保证功能有效性和安全性的基础上,mNS性能较IN-BME方案节省1次哈希运算、2次点乘运算和3次点加运算.其次,本文设计了区块链证书用于跨域认证,利用联盟链分布式账本存储和验证区块链证书,实现域间信任实体的身份核验和跨域认证本提出的跨域认证协议通过安全性分析证明了其会话密钥安全,并且协议的通信过程有效地减轻了用户端的计算负担本过真实机器上的算法性能测试,与现有同类方案在统一测试标准下比较,本文方案在运行效率上也体现出了明显的优势处关键词区块链;身份密码擞字签名;安全仲裁;可信共识中图法分类号TP304DOI号年2年97/SP.J.年16.2021.00908A Cross-Domain Authentication Protocol by Identity-Based Cryptography onConsortium BlockchaitWRI SongOe年实LI ShrShy WANG年南chool of Computer Science and Engineering,Nanjing UniuersiLy of Science and Technology,Nanjing210094) 2(School of Cyberspace SecuriLo,Nanjing UniversiLo of Science anE Technologo,Nanjiny210094)Abstrach With the exciting growth ol global Internet services and applications in the past decades,tremendoue amounS of varioue dats and service resourcee are prevailing on networO and attracting usere frow different administration domaine all oven the world.The Internes cyberspaceis nevee short of security threate and resource abusere.Reliable and efficient netwoW entity authenticatione and identifica/tion veiificatione an the cornee stonee foe all typee of secure networe applicatiod environmente and usage scenarioe.Especiallp how to verify an entity's identity outside ite origin,and how to extend such authentication capability acrose different administration domainein network without obvious securitp wean point os performance bottlenece,it is a realistic challengefoe traditional cryptography basen authentication schemee.Eithee the encryption kep basen oe thePKI certificate Used approaches suffee the threate on credential managemente and the inefficiencb revocation.Towards the problec of cross-domain authentication when users in heterogeneous netwom environmente accese netwom servicee from different trust homains,this papee proposee收稿日期=2019-11-30;在线发布日期:2021-01-22.本课题得到国家自然科学基金南1年2年6本1年2年9)、赛尔网络下一代互联网创新项目南GI1年年3上海航天科技创新基金(SAST2019-033)资助.魏松杰实士实教授实国计算机学会南CF)会员实要研究方向为网络安全、区块链技术、网络协议分析.E-mail:swei@njush edu.co.李莎莎,硕士研究生,主要研究方向为区块链技术、分布式系统.王佳贺(通信作者),硕士,主要研究方向为区块链技术、协议设计、身份认证.E-mail:jhwang@.本5期魏松杰等:基于身份密码系统和区块链的跨域认证协议909a new design of blockchain certificate to implement cross-domain authentication based on theidentity-based cryptosystem and the distributed architecturs oi blockchain technology.A novel cross-trust-domaie authenticatioe scheme baset on IBC system b constructeO and evaluated.Firstly,to solvo the problem oi instantaneout entity identity revocatioe based on the ICC architecture,n security-mediatos based identity signature scheme,mIBS,is proposed with optimized identity managemebC scheme.A securitp mediatoc servee t c trust.^五!!to approw or decline anp nthmtidon attempt.Cp retaininy part oi each entity?s identity authenticatiob key in the domaib,the security mmia_tor ca_n quickly collaboratc with other nodee to eithm verify the entity?s identity or fail its requesh for authenticatiob?i.e.The proposed mIBS algorithm for IBC-based authenticatiob,ensuree entitp nthentica/tion functionalitp u O securitp,with the computatioo overheaU reduced greatlp compared with the IC-BMS scheme.The cross-domaio authenticatioo is supported and implemented on c consortium blockchaio system.Wc optimizc the PKI certificatc structurr nd desigo c blockchaio certificatc to record domaio credential on blockchaio.Clockchaio certificatc authoritiec,just liks CAc in X.509,arc organized and coordinated together to ruo the(:01150讥;11111ledgee a_s the domaio credentiaO storage,veiificatioo and exchanys pared with ths centralized CA organizatioo,ths distributed ledgee on blockchaio nodee hae better replicatioo of certificate data,higher scalabilitp,cryptography-guaranteed informatioo integritp,and decentralized consensuc calculation capabilitp.The proposed mIBS algorithm and the blockchain-based authentication protocol arc thoroughly evaluated foe securitp and eCciencp.TheorCca!analysie and deduction show the new scheme holds the same securhp strength pc the original IBA system,buh saves some on the operation execution overhead.The state-of-the-art distributed usee authentication schemee in literature arc used u benchmarke to evaluate the proposed blockchain-based distribution authentication.The new scheme is robusU enough to survive any typical networU attacke and interruptions,and with sigXicotly improved computation overheaU efficiency when beiny measured alive on experimentai machinee.Keywords blockchain;identity-based cryptography;digital signature;security mediator;trush consensue1引言以Interneh为代表的信息网络技术,极大地拓展了数字服务用户行为的持续时间和延展范围,让原本属于不同服务区域、用户体系、业务流程的信息,能够依托网络基础设施而自由流动、广泛传播. Interneh作为海量异构网络的融合连接体,造就了网络资源的全球覆盖与服务应用域的全面联通,在带来数字服务繁荣普及的同时,也使得用户在不同应用服务域间的信息交互愈发频繁.用户在跨域访问网络资源时,由于身份认证和权限验证过程所带来的额外开销不可避免,因而设计面向全域网络环境的身份认证机制,实现身份的有效验证、一致认证、统一管理显得尤为重要.针对Internet等大规模网络应用场景下,特定数字空间范围内不同信息服务实体(Information Service Entity,ISE)间荣杂的交互过程,实现用户在不同ISE间的跨域认证过程具有广泛的应用意义和工程价值.跨域认证,即用户在多个可信区域之间完成一致的身份验证过程,既要保证全域信任关系建立的可信性、高效认证的可用性、认证过程的可靠性等,又要实现多信任域内的认证系统对有效用户的及时统一认证和即时管理巴在分布式系统的实施场景下,出现了三种主流的跨域认证框架和实践方案:⑴应用对称密钥技术设计认证架构;⑵采用公钥基础设施(Publie Key Infrastructure,PKI)实施分布式认证;(3)基于身份密码学(Identity-based Cryptography,IBC)设计认证架构.这三种架构方案所采用的密码学技术具有特质差异,适用于不同场景,也造成各自不同的优劣效果.基于传统对称密钥技术的方案运行速度快、认证效率高,但面临密钥912计算机学报010年5泄露的安全风险.鉴于网络空间内的恶意攻击和安全威胁愈加复杂多样和广泛持久⑵,这类方案的应用场景具有局限性.采用PKI体系的认证架构有效避免了对称密钥难以管理的困境,尤其适合于分布式应用场景,具有优良的系统扩展性和实践灵活性3但PKI认证过程在数字证书的管理、分发过程中存在计算复杂、开销冗杂现象,性能不佳.基于体C的认证方案直接以实体本身的有效标识作为公钥,使得认证过程不再囿于证书机制,简化了实体身份对应密钥的管理过程.但体C认证系统的实体私钥依赖密钥生成中心(Key Generation Center, KGC)集中计算产生,依然需要密钥托管,因此体C 方案适用于小规模信任域网络中.现有体C方案中实体身份撤销是通过KCC定期停止提供私钥来实现,撤销过程缺乏即时有效能力.总之,目前的实体跨域认证方案或技术均未能兼顾有效性、安全性、高效性,无法支撑用户与认E间跨域认证的完整需求4为解决现有身份认证方案在大规模跨域场景应用过程中存在的问题,本文在体C认证系统的基础上上行改进,结合区块链分布式存储与共识的特性,设计了区块链证书结构以支撑跨域认证过程具新性地构造了基于身份密码体制的跨信任域认证方案.具体工作内容和研究成果包括:(南针对原有体C架构下实体身份难以及时撤销的问题,改进设计了基于安全仲裁的身份签名方案mIBh;(0)结合身份密码体制与区块链技术设计并实现了分布式跨域认证方案,体C模式用于域内认证,借鉴联盟链分布式共识方法实现了域间认证,采用区块链证书支持跨域身份认证的完整过程;身)设计了一种多域信任的身份认证协议,既保证了密钥协商过程的安全性,又有效降低网络通信与节点计算开销,提高了认证效率,满足用户与ISS间在大规模分布式应用场景下的跨域认证需求.2相关工作异军突起的区块链技术,提供了一种具有多中心、防篡改、可追溯、易扩展特点的分布式数据记录实现方法.不断扩增的数据单元按照顺序组织起来,其间通过哈希摘要相关联,以数据发布者、记录者和确认者的电子签名为保障期通过数据加密、离散共识、时序关联等手段,区块链实现了去中心化的点对点可信事务交互,提供了融合数据可用、内容可验、操作可溯能力的分布式安全应用服务,是信用进化史上继生物血亲、贵重金属、国家货币信用后的第四座里程碑2基于身份的密码技术体c脱胎并借鉴于PKI 技术,同样采用公钥密码认证体系,但以用户身份信息来绑定生成公钥,避免了PKI体系中依赖证书的公钥认证管理过程.以此为基础,众多学者针对分布式应用场景下如何处理不同信任域间的认证传递问题,即跨域身份认证技术,展开了研究并取得了-系列成果.例如一种方案⑴提出利用PKI体制构建新型虚拟网桥CA(Certification Authority)信任模型,用以实现虚拟企业间有效的跨域认证过程.这类方案采用分布式可验证秘密共享协议和基于椭圆曲线密码系统的签名算法,实施简单且应用场景广泛.基于PKI体系搭建分布式跨域信任平台,实现域间可信的模控制与管理,在而支持多模环境下的信任传递9此外,一些方案利用逐步成熟的体C体制构建无证书的跨域认证系统和认证协议,实现多域WMA环境下安全高效的实体认证和通信功能—国内已有科研人员设计了基于身份的签名算法,尝试利用椭圆曲线加法群上运算实现身份匿名条件下的跨域认证过程年•Wang等人给出了一种认证密钥协商协议⑴,,结合异构签密实现了认C和PKI系统之间的认证转换,具有更高的安全性和更好的可用性.区块链技术在比特币等数字货币应用领域取得广泛成功,其相关设计理念和系统架构也为众多研究者们在探索跨域身份认证问题提供了新的思路. Wang等人年采用区块链技术提出了跨域认证模型BlockCAN及其跨域认证协议,将根证书颁发机构作为验证节点组织在联盟链上,解决了用户在访问多域资源时面临的安全和效率问题,展现了优于基于PKI体系的跨域身份验证能力.国内研究者结合区块链技术、分C域和PKI体系设计跨异构域认证方案,采用国密SA4和区块链代理协同生成密钥,同样通过构造联盟链提供跨域认证过程的可靠性,实现可SOV逻辑证明的协议安全性与实用性年.区块链技术极大地拓宽了解决跨域认证问题的探索空间,技巧性地融合了自证身份、互证信任、共证真实等一系列信息安全功能.本文基于联盟区块链技术尝试解决相互独立的认C系统间的跨域认证和信任传递问题,实现用户身份多域一致性的安全保证.5期魏松杰等:基于身份密码系统和区块链的跨域认证协议9113基于身份密码体制的签名算法首先对跨域认证模型中基于IBC 的域内认证方案进行改进,设计了基于仲裁的身份签名和认证 算法,简要称为mIBS.本节给出基于仲裁的IBC 域 结构设计,描述了 mIBS 方案的实现原理和方法,并对其进行安全性分析.3. 1基于仲裁的IBC 域结构设计高可靠性的用户身份认证和权限管理系统,必不可少地需要支持用户身份的实时可信验证、有效 控制与及时撤销.在ICC 系统中,公钥是基于用户身份信息关联生成的,理论上可以通过撤销用户身 份来使得对应的公钥失效1.但用户身份信息作为公开数据被广泛发布用于验证,也正是IBC 系统的 特色所在,实际应用中难以直接核销用户身份.因此,类似PKI 体系中的证书撤销机制,对于IBC 系 统无法有效适用1.既然身份验证是为了对用户行 为和权限进行限定,也可以设置权限仲裁,通过控制 某一身份用户在体C 系统中服务权限验证结果,来实现密钥管理和身份撤销的效果1.即基于安全仲 裁(Security Mediator,SEM )的 IBC 域内方案.这里KGC 和SEM 在系统内分立,KGC 密钥生成中心为系统内用户生成私钥,SEM 在系统运行过程中给用 户使用密码服务提供信令,如图1所示.相比于基于“ID ||有效期”的公钥撤销方式,KGC 和SEM 的分立能够提供更高细粒度的安全控制,提高系统的访问控制灵活性.这里的体C 认证过程仅在域内采②返回签名信令裁I 仲▼全安①申请签名信令用户终端1的另一部分私钥用户终端1的一部分私钥用户终端1图1基于仲裁的体C 域结构用,节点数量和网络规模可控,而SEM 信令仅运行一个数据量不大的简单运算,因此整体运算负载和 网络开销不大,性能可控.3.2基于仲裁的身份签名方案借鉴身份签名体S 算法,本文方案同样根据双 线性映射构造mBS 数字签名,并保证以下两点:(l)mlCE 算法在SEM 签发签名信令前能够验证签名请求消息的来源合法性,即判断是否来自合法用户;(2)用户发送给SEM 作为验证签名的依据不为 明文,需要隐藏好待签的明文信息1.整个方案包括参数生成(Setup )、密钥生私(KeyGen)、签名(Sign)和验证(Verify)四个算法,具体描述如下.Setup 阶段:设置安全系数儿初始化得到阶为大素数P (K >)的循环群(V )K),(V,X),这里G 的生成元为 P.选取双线性映射e :基)犌—犌,满足双线性对的可计算性、非退化性和双线性要求.选取哈希函数H 1: {0,也 Gi* 和 H ,: {0,也 * X G ) fZ 犖,具和 G )代表G /0}和G )⑴.KGC 随机选取s 狊[1> —)作为系统主密钥,并计算群G 的元素PpTs . 作为系统主公钥也]为取整运算,系统主密钥对为VK )g).KGC 保存系统私钥s,并公开系统参数为2 S 具具的本系具具,具);消息空间M =(0即)",签名空间 Stgn = GrXG 犖.KeyGen 阶段:对于一个用户标识为犐犇⑴系和GC 为其计算公钥犘犐和私钥犱D :P d = H 的犐和犌的⑴犱D = G ]为犇(2)然后KGC 对该用户私钥进行切分,先随机选择S d GG 本T 为并根据下式计算:d-Do 为P d (3)dDDg d T D —d : = E 15—s id DJ.id(4)犱由KGC 发送给用户,本M"发给SEM.Sign 阶段:给定消息底犕计算消息犿的正确签名如下:⑴用户签名消息观之前① 随机选择任意点P l GG 和任意整数咗 G 本-叮,计算群G 冲元素g ,q = eKP, ,K)(5)② 计算整数ng= H⑴系、V 、③ 计算签名S p ,S p =+ 莒犱0(7)912计算机学报2022年④向SEM发送请求Reques狋⑴gtruU.(2)SEM收到用户签名后①首先检索该用户的身份查看是否属于被撤销的情况,若已撤销则停止服务.②接着计算签名信令SsEM,拼接得到完整的消息犿的签名犛犿,SsEM=(8)S m S use e I SsEM(9)③SEM根据式⑴计算P m并验证元素g的正确性:g'=e(S”,P)•e(.P ID,—间(19)若=则证明消息观的签名申请是合法的,SsEM可由SEM发送给用户.根据哈希函数的特性,如果需要签名的消息那么签名信令S semi H S sem)即信令难以被重用.⑴用户签名为了验证目标信令SsEM的有效性,用户在收到S sem后,计算S”和g'则=g时输出签名〈S”〉根Verify阶段:对签名S,的验证过程是根据g'计算g',g'=犎⑵'(19) g'=g时证明该签名S,正确.3.3安全性证明为了检验更高细粒度下跨域身份认证过程的可靠性和有效性,下面对mIBS签名算法从计算和算法设计两方面作简要安全证明.⑴计算安全性证明群G中生成系统和用户的密钥,群G)产生用户和SEM签名,攻击者利用田1(D)和s,H年IDS 推导S和S d的难度等同于求解椭圆曲线上离散对数难题.同理,由签名信令S sem通过式⑴年求解(s+sd)的难度也相当,因此离散对数难题与哈希函数的安全性假设保障了方案的计算安全性.S sem=gd D M=g(s+s D)求⑵)(12)⑴算法安全性证明在PBS方案中,判断签名申请的合法性和判断S sem签名结果的有效性都需要验证因此只需要证明"立便能保证可信域内的身份双向安全认证,即式⑴).用户在向SEM申请消息犿签名〈S”〉时发送Request=⑴ggeU,不包含原始消息,这也保证了待签名消息,的隐私性.U计犲犿不计(d)——om)g=e(kP)+g<5?D r+'i MM t)•ed,,——om和=e(.kP)+gd ID—))ed,,——om间=e(gsPm判求e(kPi这)•ed,,——间=e(⑵),$因间•e⑵Pi则算e(⑵)则oo K g—e(kP9,P')—q(13) 4基于区块链的跨域身份认证模型mIBS算法可以在域内实现基于IBC的用户身份认证,本节利用区块链技术实现用户和信息服务实体间跨域的交互认证过程,设计了区块链系统模型,并描述了区块链证书的结构和认证工作原理. 4.1跨域认证协议设计本文设计的基于IBC和区块链架构的跨域认证模型,遵循如下设计目标:(1)基于区块链的分布式系统架构将多个IBC信任域在链上组织起来作为跨域信任机制的共同参与者.⑴通过区块链交易共识的方式建立域间的信任验证和身份管理.每个IBC域的代理服务器作为区块链节点参与交易传播和共识,同时依照区块链记录交易的方式对信任授权进行管理.⑴在区块链上存储目标域证书,用于快速组装和验证跨域身份认证交易2.本文采用联盟区块链架构来设计跨域身份认证模型,采用基于身份的方式来认证分属于不同IBC 信任域的用户实体和信息服务实体.如图2中所示,域内信息服务实体的私钥通过KGC密钥拆分两部分后,分发给仲裁机构SEM和实体本身.作为区块链的区域代理节点,在每个IBC域设有区块链证书服务器(Blockchain Certificate Authority,PCCA).信息服务实体ISE与用户间的认证过程如下:(1)当用户请求同域内某信息服务时,首先向ISEM出认证请求,ISE随即向SEM发起请求,收到SEM签名信令后完成-系列签名操作,签名结果发到域内身份验证服务器(Identity Authentication Server,IAS)基行认证中.如需撤销一个ISE的身份不证要求SEM停止为其发送签名信令.最后,用户可以根据IAS发回的认证响应决定ISE是否通过认证.同域内的认证过程可视为跨域认证的特殊情况,具体过程描述从略.⑴当请求用户和实体服务资源分属不同信任域时,通过区块链来进行跨域信任传递,完成用户与ISE间的认证过程.在图2中,假设用户U】与IS方进行交互,认证过程如下.用C】域和IBQ域前期通过BCC域和BCC域基于区块链证书完成域间认5期魏松杰等:基于身份密码系统和区块链的跨域认证协议913IBC 域] BCCA]图0跨域认证系统模型(步骤1:跨域认证请求Req. 4步骤2、3记认C 域间公共参数交换,步骤4发送会话密钥ST,步骤5期、7:信息服务实体体已权限认证,步骤8、开会话密钥K'认证)证,同时交换两个域认证系统的公开参数和公钥生 成算法WCCH 会为用户生成会话密钥,发送给身份认证服务器体方魏性收到认证请求后,向本域内的SER 申请签名信令,方法参照0. 2节中的相 关描述.通过SER 认证后,系E?会将完整签名结果发送给用户所在访问域的IAN ,待其验证签名信息后将认证结果返回用户.这时用户即可根据认证结 果来访问体-中的相应服务.4.2区块链证书设计为了解决0. 3节体G 域和体Q 域代理BCO和BCCH 的可信认证问题,本文利用区块链的不 可篡改性来完善数字证书,将设计的区块链证书 作为信任凭证支撑身份跨域认证过程.具体地,区块链证书依照PKI 体系中X.504数字证书标准进行改进,由各域中参与联盟链的BCCH 生成并记录 在链上.针对于身份跨域认证需求,图3比较了本文构造的区块链证书和原始X.504数字证书,具体改|签发者|磁者珂X509证书履甬看丽丽|使用者公钥|版本号1有效期1序列号1签发者1起始II 结束娥者id 服务的URL签名算漓拓展项|CA 签名||拓展项|i 使用者区块链证书|囲者版本号aw 序列号起始H 结束|跨域凭证1|图3区块链证书进内容如下:通)省略了 X.504数字证书中的签名算法内 容.签名算法用于验证证书的真实一致性.区块链本身已经采用相应密码学方法实现了链上数据的原始 真实性、完整一致性保障.各个域中的BCCH 区块 链代理,只需要生成区块链证书后将其哈希值记入区块链账本中,证书可以在链上查验.传统X.504数字证书的签名和验证过程被链上证书的存储和查验 操作所替代,这有助于提高用户认证的效率.通)取消了 X.5期证书中用于撤销检查服务的统一资源定位符URL 模块.区块链证书直接存储 在数据时序关联的联盟链上,随时可通过检索链上数据来查询证书状态,或者通过发送交易来记录新 数据,不再需要提供在线证书状态协议0CSP 和证书吊销列表CRL 管理服务.可以通过向链上发送交易签发(Issue )和交易撤销(Revoke )两种类型的数 据操作来管理证书的实时状态.这避免了传统X.504 使用OCSP 和CLL 带来的通信和查询开销.5基于区块链的跨域认证协议针对基于区块链的跨域身份认证模型,本节描述了用户跨域认证协议,详细给出了身份跨域认证中双方会话密钥的具体协商步骤,同时采用理 论分析和实验测试方法,评估协议的安全性和有 效性.914计算制学报2029年5.1跨域认证协议设计作为协议运行的初始状态,假定所有IBC域的KGC、SEM和IAS等服务节点都是诚实可靠的,域内实体间认证已经完成.每个KCC公开一致性参数(1假具],假具Pot认认),但各KCC系统显然具有不同的主公钥P a和主密钥和根联盟区块链上可以提供各域BCCA的证书状态查询.如图2中所示,以C域中用户U访问IBC)域中信息服务资源ISE?为例,描述协议的跨域认证工作过程.KGC】和KGQ分别为两个域的密钥生成中心认密钥分别为S1和S2P[1假-年,对应的系统公钥分别为Pone=1(,和P am=1,表9中定义了以下协议过程描述用到的重要符号.表1协议符号定义符号定义A f B:{m}从实体A到实体:送消息)Encry(C对消息)实施非对称加密计算M S⑵)对消息,实施基于身份的签名计算D E⑵)对消息,实施基于身份的加密计算CerLscA)IBCA的区块链证书(1)Ui^-BCCA::基I M u,I-Disn2,不,Request^,D SdDuU l D el|(e es狋犜当U]向BCC域发起IS方身份认证请求,不为时间戳,为证明自己的合法性,使用自身私钥对消息进行IBS签名操作.(2)SCCA^BCCA):基""『(D方方,认)Cerhcu,认描uest2))BCCA,收到请求后确认用户UU法身份,确认□在有效范围内,并在链上查询ISE)寸应的域代理BCCA,:BCC域选取时间戳犜、1方)、区块链证书CerhcA,s认证请求,加密后发送给BCCA.(3)BCCA—BCCA:基"c r y(gokK,认当BCCA收到并解密BCCA:的消息,验证T,后,与IAS联合查验C(cca的合法性,若证书有效则响应Request,;同时BCCA,将所在域的系统主公钥Pom与时间戳丁3加密后返回给BCCA.(4)BCCA—BCCA:{Ewcr-ydodK,并当BCCAif IAA:{LBE(K,并’Request)}K—H:(IMu:21-Disn,2,*1(,P public BCCA收到并解密BCCA发送的消息,验证T T效后,保存C域的系统公钥Poc KCCA:同样需要将自身域的系统公钥Pone与一个时间戳T密后返回给BCC,同时BCCA根据上式计算会话密钥K IBE加密后发送给IAS:(5)BCCA2^ISE2:{IBE(B/,R6,Request)} K=H)(M:2D)和[和.PodK BCCA收到BCCA】的消息,验证U有效后,保存l域的系统公钥Poe.BCCA计算会话密钥K(显然K z=H),IBn加密后发给ISM.(^ISEz—IASi:基,〈S”〉},C=—㊉KISE?获得BCCA发送的消息,验证八有效后,保存会话密钥K根据mIBS算法,验鸟通过仲裁获得消息犿的完整签名〈S”〉.最后,ISM计算密文C后,同签名结果〈S,}-起发给IASi:(7)IASif U:11X10,—,验)}在步骤显)中,S会解密来自BCCA的消息,验证八后获得会话密钥K通过K解密C获得消息观,就可以用mIBS方名算法验证〈S”〉获知ISM 的合法性.验证通过后,IAS将成功认证消息、会话密钥K及时间戳基于身份加密后发送给用户U 否则中止验证过程.5.2安全性分析5.2.1会话密钥安全证明会话密钥KK的安全性是基于攻击者UM 模型提出的.在这个模型中,会话密钥需具备如下安全性质2.性质1若通信双方都没有被攻陷且会话匹配验证方获得的会话密钥是一致的.引理1假定EncryD采用ECC加密算法, IBS(显)和I B E(显)基于椭圆曲线的双线性映射实现,均为选择密文攻击(Chosen CiphertexU Attack—CCA)安全的.若U、ISE、SSM、IAS及BCCA等实体或节点都未被攻陷验证话密钥KK将能够在协议执行过程中被成功地协商.证明.反证法.假设攻击者成功伪造协议认证过程中实体间传递的消息的概率e不可忽略对么对于上述协议各个步骤的破解概率为:第显)步攻击者成功伪造消息的概率等同于破解IBS算法的概率不s;第显)和显)步中,攻击者伪造消息的成功概率,等同于破解ECC加密算法的概率不c-第显)和显)步,攻击者伪造消息的成功概率,等同于破解IBE算法的概率不c)第显)步,攻击者伪造消息的成功概率,等同于破解IBS算法的概率不bs;第显)步,攻击者伪造消息的成功概率,等同于破解IBE算法的概率不e;综上r V不s+2不c+3不+不)M,概率不。
Computer simulation
Computer simulationA computer simulation, a computer model, or a computational model is a computer program, or network of computers, that attempts to simulate an abstract model of a particular system. Computer simulations have become a useful part of mathematical modeling of many natural systems in physics (computational physics), astrophysics, chemistry and biology, human systems in economics, psychology, social science, and engineering. Simulations can be used to explore and gain new insights into new technology, and to estimate the performance of systems too complex for analytical solutions.Computer simulations vary from computer programs that run a few minutes, to network-based groups of computers running for hours, to ongoing simulations that run for days. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using the traditional paper-and-pencil mathematical modeling. Over 10 years ago, a desert-battle simulation, of one force invading another, involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program;a 1-billion-atom model of material deformation (2002); a 2.64-million-atom model of the complex maker of protein in all organisms, a ribosome, in 2005;[3] and the Blue Brain project at EPFL (Switzerland), began in May 2005, to create the first computer simulation of the entire human brain, right down to the molecular level.[Simulation versus modelingTraditionally, forming large models of systems has been via a mathematical model, which attempts to find analytical solutions to problems and thereby enable the prediction of the behavior of the system from a set of parameters and initial conditions.While computer simulations might use some algorithms from purely mathematical models, computers can combine simulations with reality or actual events, such as generating input responses, to simulate test subjects who are no longer present.Whereas the missing test subjects are being modeled/simulated, the system they use could be the actual equipment, revealing performance limits or defects in long-term use by these simulated users.Note that the term computer simulation is broader than computer modeling, which implies that all aspects are being modeled in the computer representation. However, computer simulation also includes generating inputs from simulated users to run actual computer software or equipment, with only part of the system being modeled: an example would be flight simulators which can run machines as well as actual flight software.Computer simulations are used in many fields, including science, technology, entertainment, health care, and business planning and puter simulation was developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitution for, modeling systems for which simple closed form analytic solutions are not possible. There are many different types of computer simulations; the common feature they all share is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible. Computer models were initially used as a supplement for other arguments, but their use later became rather widespread.Computer simulation in scienceComputer simulation of the process of osmosisGeneric examples of types of computer simulations in science, which are derived from an underlying mathematical description:a numerical simulation of differential equations which cannot be solved analytically, theories which involve continuous systems such as phenomena in physical cosmology, fluid dynamics (e.g. climate models, roadway noise models, roadway air dispersion models), continuummechanics and chemical kinetics fall into this category.a stochastic simulation, typically used for discrete systems where events occur probabilistically, and which cannot be described directly with differential equations (this is a discrete simulation in the above sense). Phenomena in this category include genetic drift, biochemical or gene regulatory networks with small numbers of molecules. (see also: Monte Carlo method).Specific examples of computer simulations follow:statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting .agent based simulation has been used effectively in ecology, where it is often called individual based modeling and has been used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).time stepped dynamic model. In hydrology there are several such hydrology transport models such as the SWMM and DSSAM Models developed by the U.S. Environmental Protection Agency for river water quality puter simulations have also been used to formally model theories of human cognition and performance, e.g. ACT-Rcomputer simulation using molecular modeling for drug puter simulation for studying the selective sensitivity of bonds by mechanochemistry during grinding of organic molecules.]Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. There are one-, two- and three- dimensional models used. A one dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows one to simplify this complex subject to down-to-earth presentations of molecular theory.Notable, and sometimes controversial, computer simulations used in scienceinclude: Donella Meadows' World3 used in the Limits to Growth, James Lovelock's Daisyworld and Thomas Ray's puter simulation in practical contexts.Smog around Karl Marx Stadt (Chemnitz), Germany: computer simulation in 1990Computer simulations are used in a wide variety of practical contexts, such as:analysis of air pollutant dispersion using atmospheric dispersion modelingdesign of complex systems such as aircraft and also logistics systems.design of Noise barriers to effect roadway noise mitigation.flight simulators to train pilots.weather forecasting.Simulation of other computers is emulation..forecasting of prices on financial markets (for example Adaptive Modeler).behavior of structures (such as buildings and industrial parts) under stress and other conditions.design of industrial processes, such as chemical processing plants.Strategic Management and Organizational Studies.Reservoir simulation for the petroleum engineering to model the subsurface reservoir.Process Engineering Simulation tools.Robot simulators for the design of robots and robot control algorithms.Urban Simulation Models that simulate dynamic patterns of urban development and responses to urban land use and transportation policies. See a more detailed article on Urban Environment Simulation.Traffic engineering to plan or redesign parts of the street network from single junctions over cities to a national highway network, for transportation system planning, design and operations. See a more detailed article on Simulation in Transportation.modeling car crashes to test safety mechanisms in new vehicle modelsThe reliability and the trust people put in computer simulations depends on the validity of the simulation model, therefore verification and validation are of crucial importance in the development of computer simulations. Another important aspect of computer simulations is that of reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, this is a special point of attention in stochastic simulations, where random numbers should actually be semi-random numbers. An exception to reproducibility are human in the loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard if not impossible to reproduce exactly.Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build a unique prototype and test it. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.[7]Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time e.g. in training simulations. In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data.。
carry out a simulation study to
carry out a simulation study toSimulation studies are a valuable tool for understanding complex systems and processes, especially when it's not possible or practical to conduct real-world experiments. A simulation study can help to answer a variety of research questions by replicating the conditions of a real-world scenario in a controlled environment.To carry out a simulation study, you need to follow these general steps:Define the research question: Start by clarifying the purpose of your simulation study. What do you hope to learn by simulating this system or process? Identify the key variables and parameters that will be part of your simulation.Select the appropriate simulation software: There are various simulation software packages available, such as MATLAB, Simulink, or Python-based packages like SimPy or DEAP. Choose one that suits your needs and has the capabilities to handle your specific simulation requirements.Build the simulation model: Based on your research question, construct a mathematical model that represents the system or process you're simulating. This model should capture the essential features and dynamics of the system while abstracting away unnecessary details. Parameterize the model: Specify the values for the variables andparameters in your model. Collect data or make assumptions about these values based on prior knowledge or real-world observations.Run the simulation: Program the simulation using the selected software and execute it to generate results. The software will iteratively update the state of the system according to the defined rules and parameters. Analyze and interpret the results: Examine the data generated by the simulation, looking for patterns, trends, or other insights. Compare these findings with predictions or expectations based on your research question. Validate the results by comparing them with existing literature or real-world data if available.Draw conclusions: Draw conclusions from your analysis, answering your research question and addressing any limitations of your study. Use these insights to inform future research or provide practical guidance for decision-making in related fields.Communicate the findings: Share your results with others through presentations, reports, or academic publications. Provide a clear explanation of your methods, results, and their relevance to the field you're studying.Remember that the quality of your simulation study depends on the validity and realism of your model, as well as the choice of appropriate software and techniques for simulating your system or process.。
外文翻译--一种新型的基于Web的在线考试系统的计算机科学教育
译文一种新型的基于Web的在线考试系统的计算机科学教育Yuan Zhenming1, Zhang Liang2, Zhan Guohua3摘要:基于Web的网上考试系统是一种为大众教育的评价有效的解决方案。
我们已经开发一种在线考试系统,它是基于浏览器/服务器架构.该系统已经发布了考试和自动评级;是关于客观的问题和操作问题,以供客观的问题和经营问题他作为一个程序,经营的Microsoft Windows ,编辑的Microsoft Word ,Excel和PowerPoint等等。
她已经成功地应用于距离评价的基本操作技能的计算机科学,如过程的计算机技能,在大学和在全国范围内考试,为高中毕业生在中国浙江省进行实行。
指数条款:考试系统和自动分级系统,它们都是基于Web的和DCOM的系统。
1. 导言:在中国,基本电脑操作技能的教育已广泛展开.这项技术包括经营的Windows的MS Office ,网络技巧等等,这是结合不同的课程,以及作为电子政府的基础。
现在,每一个本科生必须通过的计算机技能的考试.然而,每一位公务员必须通过相应的计算机操作考试过程。
此外,在高中的过程中进行了计算机基础教育。
自20世纪90年代末,在浙江省成千上万的人们开始参加不同的层次计算机教育和测试.这是非常有必要的,对于建立一个基于网络的学习和考试系统.它是为众多的人民,作为一个有效的解决办法去集体的学习和评价计算机基础教育的。
几个基于Web的学习和测试系统已经被设计完,比如说WebCT [1],QUIZIT [2] ,ASSYST[3]和PILOT[4]。
在线可以很容易使用最广泛的问题类型的基于Web 的考试制度是客观的测试和测验,其中承担简单的答案可以正式检查和评估。
典型的问题是以有/无问题,多-choice/singleanswer问题有限的,multiple-choice/multiple-answer问题,并填写在的问题与一个字符串和数值答案。
机器人操作臂奇异路径约束最优轨迹规划_连广宇 2002
s ( k) =
∑$ s
i= 0
i
m n a a 其中 x, x ∈ R 为机械臂末端位姿和速度, H ,H ∈R 为 关节角位移和速度. 本文讨论非冗余机械臂即 m =
k = 0, 1, …, L ( 14) k = 0 对应于解曲线初始点 , 有 s ( 0) = 0, 当 k = L 时,
552
线性方程 , 其解为 n + 1 维空间的曲线 , 若存在参数 来表示该解曲线 , 则此参数即为所求的奇异路径参 数 s . 一个自然的选择是解曲线弧长 , 因解曲线所在 空间包含关节变量和 K , 称此弧长参数为扩展关节空 间路径参数 s . 由拟弧长算法可求取方程 ( 8) 的解曲 线, 选定 s 后 , 即得到 H ( s) , K ( s) . 3. 2 基于拟弧长法的奇异路径跟踪 设( H 0K 0) 为方程 ( 8) 的初始解 , 从该解出发采用
3 奇异路径的参数化 ( Parameterization of singular path)
使机械臂 Jacobian 矩阵降秩的关节位形称为运 动学奇异点, 穿越奇异点的路径称为奇异路径. 在奇 异点邻域内, 求取常规路径参数到关节变量的映射 因 Jacobian 矩阵降秩而变得困难 , 为此引入 路径跟 踪方程在扩展关节空间中解曲线的概念, 用于求解 奇异路径跟踪问题 , 并以此定义广义路径参数. 3. 1 扩展关节空间路径参数 刚性机械臂正运动学和微分运动学可表示为 k : Rn → Rm : x = k ( H ) a= J ( H a a= 5k H x )H 5H ( 5) ( 6)
a k ) 为该点处解曲线的单位切向量. 在正则点 H ( H ,K ) K a a a 的零空间是一维的 , 此时切向量 y k = ( H k. K k ) H ,K ) ya 构造辅助方程 N ( y k , $s k ) = y T k ( y - y k ) - $ sk = 0 ( 12) 上式的几何意义为垂直于 ya k 且与 y k 点距离为 $ sk 的 超平面, $ sk 为局部参数 . 联立( 8) ( 12) 得 G= F( y ) N ( y, $ s ) = 0 ( 13) ( 11)
WS以及NW小世界网络的生成(MATLAB)
WS以及NW⼩世界⽹络的⽣成(MATLAB)WS⼩世界⽹络⽣成算法,⼀般⼩世界⽹络⽣成算法速度慢,节点度分布与数学推导不符,在⽹络仿真中造成不便,这⾥针对实际⽹络动⼒学仿真过程撰写了WS⼩世界⽹络的MATLAB⽣成算法,并考虑了矩阵化,具有较⾼的速度。
以下是対应的代码:% The simulation of WS-smallworld network% the algorithm of WS-smallworld's generation has been improved in speed,% and tend to be easily understood% writen by winter-my-dream@% Example:% N = 100; %network size (number of nodes)% m = 6; %2*m is the average edges of each nodes% p = 0.1; %rewiring probability% matrix = small_world_WS_new(N,m,p);function matrix = small_world_WS_new(N,m,p)rng('default')rng('shuffle')matrix=zeros(N,N);% generate regular networkfor i=m+1:N-mmatrix(i,i-m:i+m)=1;endfor i=1:mmatrix(i,1:i+m)=1;endfor i=N-m+1:Nmatrix(i,i-m:N)=1;endfor i=1:mmatrix(i,N-m+i:N)=1;matrix(N-m+i:N,i)=1;end% rewiring the networkfor i = 1:N% then rewiring the edges with the probability of p[series1,series2] = range_sort(N,m,i);index0 = series1(rand(2*m,1)>1-p);if(~isempty(index0))matrix(i,index0) = 0;matrix(i,series2(randperm(length(series2),length(index0))))=1;endendmatrix = matrix -diag(diag(matrix));endfunction [series1,series2] = range_sort(N,m,i)% select the index of nodes in row i for rewiringif(i-m>0 && i+m<=N)series1 = i-m:i+m;series2 = setdiff(1:N,series1);elseif(i-m<=0)series1 = [1:i+m,N-m+i:N];series2 = setdiff(1:N,series1);elseseries1 = [1:m-N+i,i-m:N];series2 = setdiff(1:N,series1);end% Without considering the connection of diagonal elementsseries1(series1==i) = [];end参考⽂献:Watts D J, Strogatz S H. Collective dynamics of ‘small-world’networks[J]. nature, 1998, 393(6684): 440-442.NW⼩世界⽹络的⽣成⽅法相对简单,我这⾥附加对应代码:% 基于Matlab 的⼩世界⽹络仿真% 经过矩阵化修改后,⽣成速度已经⼤⼤加快function matrix = small_world_NW(N,m,p)% N=50;m=3;p=0.1;% matrix=sparse([]);matrix = zeros(N,N);for i=m+1:N- mmatrix(i,i- m:i+m)=1;endfor i=1:mmatrix(i,1:i+m)=1;endfor i=N- m+1:Nmatrix(i,i- m:N)=1;endfor i=1:mmatrix(i,N- m+i:N)=1;matrix(N- m+i:N,i)=1;end% Random add edgekk=(rand(N,N)<p);matrix = logical(matrix + kk);matrix = matrix -diag(diag(matrix));对应⽣成⽹络的测试图的代码:clear,clc,close all% load A.txtN=10;m=2;p=0.1;% A= small_world_WS_new(N,m,p);A = small_world_NW(N, m, p);t=linspace(0,2*pi,N+1);x=sin(t);y=cos(t);figureset(gcf,'color','w')plot(x,y,'o','markerfacecolor','k'),hold onfor i=1:Nfor j=1:Nif (A(i,j)==1)fp1=plot([x(i),x(j)],[y(i),y(j)],'r-'); hold on set(fp1,'linesmoothing','on')endendendaxis([-1.05,1.05,-1.05,1.05])axis squareaxis offsum(sum(A))。
国外海洋体育研究热点与动态分析
第40卷第1期2021年2月福建体育科技Fujian Sports Science and TechnologyVol.40No.1February2021国外海洋体育研究热点与动态分析董亚琦钟建伟2,余翰龙1,黄勤山彳,詹晓梅$(1.共青城市中学教研室,江西九江332020;2.江西师范大学体育学院,江西南昌330022;3.江西理工大学理学院;江西赣州341000)摘要:以Web of Science核心合集收录的2001-2018年海洋体育相关研究文献为基础,采用定量与定性相结合的研究方法,旨在研究国外海洋体育研究的热点、主题演变和知识基础,梳理其发展脉络,为我国海洋体育的发展提供参考。
运用Cite Space V对研究文献进行统计分析,结果显示,海洋体育研究的发文量呈波浪式增长趋势,高产国家、机构和作者主要集中在美国、澳大利亚、英国、加拿大和巴西等欧美国家。
海洋体育研究形成了体育学、生理学、环境生态学、营养学和休闲学等多学科融合的复合型学科群。
领域研究热点主要包括锻炼、表现、体育、体力活动、体成分等;聚类主要是水疗法、水上竞技、海洋休闲和运动伤害四类。
未来海洋体育现代治理、海洋体育体医融合、海洋体育进校园、海洋体育安全系统可能成为主要的研究方向。
关键词:海洋体育;研究热点;动态分析文章编号:1004-8790(2021)01-0022-04中图分类号:G811.6文献标识码:AHotspots and Dynamic Analysis of Foreign Marine Sports ResearchDONG Ya-qi1'2,ZHONG Jian-wei2,YU Han-long1,HUANG Qin-shan3,ZHAN Xiao-mei2(1.Teaching and Research Office of Gongqingcheng Middle School,Jiujiang332020,China;2.College of Physical Education,Jiangxi Normal University,Nanchang330022,China;3. School of Science,Jiangxi University of Science and Technology,Ganzhou341000,China)Abstract:Based on the research literature of marine sports related to the collection of the World of Web of Science from2001to2018,the research method combining quantitative and qualitative is aimed at studying the hotspots, theme evolution and knowledge base of foreign marine sports research.It sorts out the development context and provides reference for the development of marine sports in ing Cite Space V to conduct statistical analysis of the research literature,the results show that the number of publications in marine sports research is wave-like.The high-yielding countries,institutions and authors are mainly concentrated in the United States,Australia, the United Kingdom,Canada and Brazil.Marine sports research has formed a multidisciplinary group of disciplines such as physical education,physiology,environmental ecology,nutrition and leisure.Field research hotspots mainly include exercise,performance,physical activity,physical activity,body composition,etc.;clustering is mainly hydrotherapy,water sports,marine leisure and sports injuries.The future modem management of marine sports, marine sports and medical integration,marine sports into the campus,and marine sports safety systems may become the main research directions.Key words:marine sports;research hotspot;dynamic analysis基金项目:江西省教育科学“十三五”规划2018年度课题(18PTYB058)o作者简介:董亚琦(1991-),男,山西运城人,硕士,研究方向:体育产业。
数字化世界英文作文
数字化世界英文作文Title: Embracing the Digital World: Exploring the Impact of Digitization。
In today's era, the digital realm permeates every aspect of our lives, revolutionizing how we communicate, work, learn, and entertain ourselves. The rapid advancement of technology has ushered in an era where digitization is ubiquitous, shaping the way we interact with the world around us. This essay delves into the multifaceted impact of the digital world on society.First and foremost, the digitalization of information has democratized access to knowledge on an unprecedented scale. The internet serves as a vast repository of information, readily accessible to anyone with an internet connection. This accessibility has empowered individuals to educate themselves on a wide array of subjects, breaking down traditional barriers to learning. From online courses to educational websites, the digital world has transformededucation into a lifelong pursuit that knows no bounds.Moreover, the digital revolution has revolutionized communication, facilitating instant connectivity across the globe. Social media platforms, messaging apps, and email have transformed the way we stay in touch with friends, family, and colleagues. Distance is no longer a barrier to communication, allowing individuals to forge connections and collaborate regardless of geographical boundaries. However, the omnipresence of digital communication has also raised concerns about privacy, online harassment, and the erosion of face-to-face interactions.In addition to communication and education, the digital world has reshaped the landscape of commerce and industry. E-commerce platforms have revolutionized the way we shop, offering unparalleled convenience and choice to consumers. Businesses, both large and small, have leveraged digital technologies to streamline operations, reach new markets, and enhance customer experiences. The rise of remote work has further accelerated this transformation, blurring the lines between traditional office spaces and virtual workenvironments.Furthermore, the entertainment industry has undergone a paradigm shift in the digital age. Streaming services have disrupted traditional media consumption patterns, offering on-demand access to a vast library of content. From movies and music to video games and virtual reality experiences, the digital world has unlocked new forms of entertainment and immersive storytelling. However, concerns about digital piracy, copyright infringement, and the commodification of creativity persist in this rapidly evolving landscape.Beyond its impact on individuals and industries, the digital world has also sparked broader societal transformations. The proliferation of social media has fueled movements for social change, enabling grassroots activism and mobilizing communities around shared causes. However, it has also given rise to echo chambers, misinformation, and the spread of divisive ideologies, highlighting the double-edged nature of digital connectivity.In conclusion, the digital world has fundamentally transformed the way we live, work, and interact with one another. While it has brought about unprecedented opportunities for connection, innovation, and progress, it has also presented new challenges and complexities. As we navigate this digital landscape, it is essential to harness the potential of technology for the greater good while remaining vigilant to its pitfalls. Only by embracing the opportunities and addressing the challenges of digitization can we fully realize its transformative potential for society.。
虚拟现实技术探秘英语作文
Virtual reality, or VR, is a technology that has been capturing the imagination of the techsavvy and the general public alike. As a high school student with a keen interest in emerging technologies, Ive always been fascinated by the potential of VR to transform our lives. This essay is a personal exploration of the world of virtual reality, drawing from my experiences and the insights Ive gained from various sources.My journey into the realm of VR began with a simple curiosity about how it works. I learned that VR is a computergenerated simulation of a threedimensional environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with a screen inside or gloves equipped with sensors. The technology immerses users in a digital world, blurring the lines between the virtual and the real.One of the most memorable experiences I had with VR was during a school trip to a local tech exhibition. We were given the opportunity to try out a VR headset that transported us to the depths of the ocean. As I put on the headset, the bustling exhibition hall around me faded away, replaced by the tranquility of the underwater world. Schools of colorful fish swam past, and I could almost feel the cool water against my skin. It was a magical experience that left me with a profound sense of awe.The potential applications of VR are vast and varied. In the field of education, for instance, VR can provide immersive learning experiences that traditional classroom settings cannot match. Imagine learning about the solar system by actually walking on the surface of Mars or exploringthe depths of the ocean without leaving the classroom. This level of immersion can make learning more engaging and memorable.In healthcare, VR is being used to help patients overcome phobias and manage pain. It has also been employed in physical therapy, providing patients with a motivating and controlled environment to practice movements that may be difficult in real life. The ability of VR to create controlled environments is also beneficial in training scenarios. For example, pilots and astronauts undergo VR simulations to prepare for various situations they might encounter.Gaming is another area where VR has made significant strides. The immersive nature of VR gaming offers a new level of engagement that traditional gaming cannot provide. Players can physically move around and interact with the game environment, creating a more realistic and thrilling experience.However, with great potential comes great responsibility. The immersive nature of VR raises questions about its impact on users, especially young people. There are concerns about the potential for addiction, as well as the effects of prolonged exposure to virtual environments on mental health. As with any technology, its essential to use VR in moderation and be aware of its potential effects.In conclusion, virtual reality is a technology with immense potential that is still in its infancy. As it continues to evolve, it will undoubtedly bring about significant changes in various aspects of our lives. From education tohealthcare, entertainment to training, VR is poised to redefine our experiences and interactions with the world. As a high school student, I am excited to witness and be a part of this technological revolution, and I look forward to seeing where VR will take us in the future.。
汽车行业英文缩写
A/D/V Analysis/Development/Validation 分析/发展/ 验证审批体系AA Approve Architecture实际完成日期ACD Actual Completion Date装配线平衡系统ALBS Assembly Line Balance SystemANDON 暗灯提前采购AP Advanced PurchasingAPI Advanced Product Information 先进的产品信息APQP Advanced Product Quality Planning 先期产品质量策划ATT Actual Tact Time 实际单件工时制造质量BIQ Building in QualityBIW Body In White 白车身设计清单BOD Bill of Design设备清单BOE Bill of Equipment装载清单BOL Bill of LogisticBOM Bill of Material 原料清单过程清单BOP Bill of Process业务计划实施BPD Business Plant DeploymentCAD Computer-Aided Design 计算机辅助设计CAE Computer-Aided Engineering 计算机辅助工程(软件)CARE Customer Acceptance & Review Evaluation 用户接受度和审查评估持续改进CIP Continue Improve Process隔间融合为组CIT Compartment Integration Team完全拆缷CKD Complete Knockdown坐标测量仪CMM Coordinate Measuring MachinesCPV Cost per Vehicle 单车成本CR&W Controls/Robotics & Welding 控制/机器人技术和焊接合同签订CS Contract SigningCTD Cumulative Trauma Disadjust 累积性外伤失调零件技术规格CTS Component Technical SpecificationCVIS Completed Vehicle Inspection Standards 整车检验标准设计分析过程DAP Design Analysis Process设计中心DES Design Center装配设计DFA Design for Assembly试验设计DOE Design Of ExperimentsDOL Die Operation Line-Up 冲模业务排行单车缺陷数DPV Defect per Vehicle设计质量验证DQV Design Quality Verification设计发布工程师DRE Design Release EngineerDRL Direct Run Loss 直行损失率直行率DRR Direct Run Run决策支持中心DSC Decision Support CenterECD Estimated Completion Date 计划完成日期 3 l工程组经理EGM Engineering Group ManagerENG Engineering 工程技术、工程学精选文库EOA End of Accelerati on停止加速EPC&L En gi neeri ng P roduction Control & Logistics 工程生产控制和后勤EQF Early Quality Feedback 早期质量反馈EWO Engin eeri ng Work Order 工程工作指令 FA Final App roval 最终认可 FE Functional Evaluati on功能评估FEDR Fu nctio nal Evaluation Dis position Rep ort 功能评估部署报告FFF Free Form Fabricati on 自由形态制造FIN Finan cial 金融的FPS Fixed Poi nt Stop 定点停FTP File Tran sfer P rotocol 文件传送协议 FTQ First Time Quality 一次送检合格率GA Gen eral Assembly 总装GA ShopGeneral Assembly Sho p总装车间P ai nt Sho p 涂装车间 Body Sho p 车身车间 Press Shop 冲压车间GCAGD&T GDSGM GMA PGME GMIO GMIQ GMPTG GMS GP GQTS GSB HVAC IC ICD Global Customer AuditGeometric Dime nsioning & Tolera ncingGlobal Delivery SurveyGeneral Motors GM Asia P acific Gen eral Motors Europe Gen eral Motors Intern ati onal Op erati ons General Motors In itial Quality Gen eral Motors Po wertra in Group Global Manu facturi ng System Gen eral P rocedureGlobal Quality Track ing System Global Strategy Board全球顾客评审 几何尺寸及精度 全球发运检查 通用汽车 通用亚太 通用汽车欧洲 通用汽车国际运作 通用汽车初始质量 通用汽车动力组 通用全球制造系统 通用程序全球质量跟踪系统 全球战略部加热、通风及空调IE ILRS IO IOM IOS IPC IPTV IQS IR ISP ITPHeati ng. Ven tilati on ,and Air Con diti oning 初始租约 界面控制文件工业工程 间接劳动报告系统 国际业务 检验操作方法 检验操作概要 国际产品中心 每千辆车的故障率 初始质量调查 事故报告综合计划 综合培训方法In itiate CharterIn terface Con trol Docume nt In dustrial Engin eeri ngIn direct Labor Reporting System Intern ati onal Op erati ons Insp ecti on Op eratio n MethodInspection Op erati on Summary Intern ati onal Product Cen ter In cide nts Per Thousa nd Vehicles In itial Quality Survey In cide nt ReportIn tegrated Scheduli ng Project In tegrated Training Process精选文库In terior Tech nical Sp ecificati on Draw ing Intern ati onal Uniform Vehicle Audit Job Eleme nt Sheet Job Issue SheetJust in Time Job per hourKey Con trol CharacteristicsKey Characteristics Desig natio n System Key p roduct Characteristic Look atMetal Fabricati on Divisi on Manu facturi ng Op erati onsManu facturi ng In tegrati on Engin eer Material Labor Bala nee System Manu facturi ng Engin eeri ng Milford Proving Grou nd Master P rocess In dex Master Parts List Material Planning System Material Required DateMaterial Safety Data Sheets Manu facturi ng System Engin eer Mea n Time Betwee n Failures Manu facturi ng Tech nical Sp ecificati on Motor Vehicle Safety Sta ndards North America n Market An alysis North America n Op erati ons NAO Contain erization Numerically Con trolled Notice of Authorizati on NAO Strategy Board Orga ni zati on and Empio yee Development Occu patio nal Safety & HealthOccup atio nal Safety & Health Act Product ion Con trol and Logistics金属预制件区 制造过程)制造综合工程师 物化劳动平衡系统制造工程 试验场 主程序索引 主零件列表 原料计划系统物料需求日期 化学品安全数据单制造系统工程 平均故障时间 生产技术规范汽车发动机安全标准 北美市场分析 北美业务NAO 货柜运输数字控制 授权书 北美业务部组织和员工发展 职业安全健康T职业安全与健康法案职业安全健康管理体系 职业安全标准 生产结果 产品临时授权绩效评估委员会 项目评估和控制条件 产品装配文件零件准备跟踪系统问题信息生产控制和支持ITSD lUVA JESJIS JIT JPH KCC KCDS KPC LT MFD MFG MIE MLBS MNG MPG MPI MPL MPS MRD MSDS MSE MTBF MTS MVSS NAMA NAO NAOC NC NOA NSB OED OSH OSHA OSHMS OSHS PA PAA PAC P ACE PAD P ARTS PC PCLOccup atio nal Safety & Health Man ageme nt System Occup ati onal Safety & Health Stan dards Producti on Achieveme nt P roduct Actio n Authorizati on P erforma nee Assessme nt CommitteeProgram Assessme nt and Con trol En viro nment P roduct Assembly Docume nt Part Readi ness Track ing System Problem Communi cati on内部技术规范图 国际统一车辆审核 工作要素单 工作要素单 准时制 每小时工作量关键控制特性关键特性标识系统关键产品特性精选文库Process Con trol Man ager Problem Communi cati on Report P roduct Data Man ageme nt Product Descri pti on System Product Development TeamP roduct ion Engin eeri ng Dep artme nt P roduct Evaluati on P rogram PersonnelP rogram Executi on Team P rogram Man ageme nt People In volveme nt Product ion Launch ProcessProcess Modeli ng In tegrati on Program Manu facturi ng Man ager P roduct Manu facturability Requireme ntsP roducti on Order Man ageme nt System Point of Pu rchase P ush PullP roductio n Part App roval P rocess Personal P rotective Equipment P roblems Per Hun dred P roblems Per Millio n P ractical P roblem Solvi ng P erforma nee ReviewP roblem Reporting and Resoluti onProblem Resoluti on and Track ing System Po rtfolio Strategy Cou ncil Plant Support Team Primary Tryout P roduct ion Trial Run Purchas ing Quality AuditQuality Assessme nt Process Quality Build Concern Quality CharacteristicQuality Con trol Op erati on Sheets Quality Engin eerQuality Engin eeri ng Team Quality Function Deployment Quality, Reliability and Durability Quality System Quality Review CharterRequired CompI eti on DatePCM PCR PDM PDS PDT PED PEP PER PET PGM PI PLP PMI PMM PMR POMS POP PP PPAP PPE PPH PPM PPS PR PR/R P RTS PSC PST PTO PTR PUR QA QAP QBC QC QCOS QE QET QFD QRD QS QUA RCRCD工艺控制负责人 问题交流报告 产品资料管理 产品说明系统 产品发展小组 产品工程部 产品评估程序 人员项目执行小组项目管理 人员参与 生产启动程序 加工建模一体化 项目制造经理 产品制造能要求 产品指令管理小组 采购点 推拉生产零部件批准程序个人防护用品百辆车缺陷数 百万辆车缺陷数 实际问题解决 绩效评估问题报告和解决 问题解决跟踪系统 部长职务策略委员会 工厂支持小组 第一次试验生产试运行 采购 质量评审质量评估过程 质量体系构建关系 质量特性质量风险控制 质量工程师质量工程小组 质量功能配置 质量、可靠性和耐久力 质量体系质量评估特许 必须完成日期精选文库RFQ Request For Quotati on 报价请求RGM Reliability Growth Man ageme nt 可靠性增长小组RONA Retur n on Net Assets 净资产评估RPO Regular Product ion Op ti on 正式产品选项RQA Routi ng Quality Assessme nt 程序安排质量评定RT&TM Rigorous Tracki ng and Throughout Ma nageme nt 严格跟踪和全程管理SDC Strategic Decisi on Cen ter 战略决策中心SF Styli ng Freeze 造型冻结SIL Si ngle Issue List 单一问题清单SIP Stan dardized Insp ecti on Process 标准化检验过程SL System Layouts 系统规划SLT Short Leadi ng Team 缩短制造周期SMB P Synchronous Math-Based Process 理论同步过程SMT Systems Man ageme nt Team 系统管理小组SNR 坏路实验SOP Start of P roduct ion 生产启动SOP Safe Op erati ng Practice 安全操作规程SOR Stateme nt of Requireme nts 技术要求SOS Stan dardizati on Op erati on Sheet 标准化工作操作单SOW Stateme nt of Work 工作说明SPA Shipping Priority Audit 发运优先级审计SPC Statistical Process Con trol 统计过程控制SPE Surface and Protot ype Engin eeri ng 表面及原型工程SPO Service P arts Op erati ons 配件组织SPT Sin gle Point Team 专一任务小组SQA Supp lier Quality Assura nee 供应商质量保证(供应商现场工程师)SQC Supp lier Quality Con trol 供方质量控制SQD Supp lier Quality Development 供应方质量开发SQE Supp lier Quality Engin eer 供方质量工程师SQ IP Su pp lier Quality Imp roveme nt P rocess 供应商质量改进程序SSLT Subsystem Leadersh ip Team 子系统领导组SSTS Subsystem Tech ni cal Sp ecificati on 技术参数子系统STD Stan dardizati on 标准化STO Secon dary Tryout 二级试验SUI 安全作业指导书SUW Sta ndard Unit of Work 标准工作单位SWE Simulated Work En viro nment 模拟工作环境TAG Timi ng An alysis Group 定时分析组TBD To Be Determ ined 下决定TCS Tract ion Con trol System 牵引控制系统TDC Tech no logy Devel opment Centre 技术中心TDMF Text Data Man ageme nt Facility 文本数据管理设备TG Tooli ng 工具TIMS Test In cide nt Man ageme nt System 试验事件管理系统精选文库TIR Test In cide nt Report 试验事件报告TMIE Total Manu facturi ng In tegrati on Engin eer 总的制造综合工程TOE Total Own ersh ip Exp erie nee 总的物主体验TPM Total Product ion Maintenance 全员生产维护TSM Trade Study Methodology 贸易研究方法TT Tact Time 单件工时TVDE Total Vehicle Dime nsional Engin eer 整车外型尺寸工程师TVIE Total Vehicle In tegrati on Engin eer 整车综合工程师TWS Tire and Wheel System 轮胎和车轮系统UAW Un ited Auto Workers 班组UCL Uniform Criteria List 统一的标准表UDR Un verified Data Release 未经核对的资料发布UPC Uniform Parts Classificati on 统一零件分级VAE Vehicle Assembly Engin eer 车辆装配工程师VCD Vehicle Chief Desig ner 汽车首席设计师VCE Vehicle Chief Engin eer 汽车总工程师CVCRI Validati on Cross-Refere nee In dex 确认父叉引用索引VDR Verified Data Release 核实数据发布VDS Vehicle Descri pti on Summary 汽车描述概要VDT Vehicle Development Team 汽车发展组VEC Vehicle Engin eeri ng Cen ter 汽车工程中心VIE Vehicle In tegrati on Engin eer 汽车综合工程师VIN Vehicle Ide ntificatio n Number 车辆识别代码VIS Vehicle In formati on System 汽车信息系统VLE Vehicle Li ne Executive 总装线主管VLM Vehicle Launch Man ager 汽车创办经理VOC Voice of Customer 顾客的意见VOD Voice of Desig n 设计意见VS Validatio n Statio n 确认站VSAS Vehicle Syn thesis An alysis and Simulation 汽车综合、分析和仿真VSE Vehicle System Engin eer 汽车系统工程师VTS Vehicle Tech ni cal Sp ecificati on 汽车技术说明书WOT Wide Open Throttle 压制广泛开放WPO Work Place Organi zati on 工作场地布置WWP Worldwide Purchas ing 全球采购COMMWIP Correctio n 纠错浪费。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
ABSTRACT
1. INTRODUCTION
In this section, we give a brief overview of TCP congestion control, the ABR ow control mechanism, and the ERICA+ rate allocation algorithm. TCP provides a reliable, connection-oriented service. TCP connections provide window-based end-to-end ow control. The receiver's window (rcvwnd) is enforced by the receiver as a measure of its bu ering capacity. The congestion window (cwnd) is used at the sender as a measure of the available capacity of the network. The sender cannot send more than the minimum of rcvwnd and cwnd. The TCP congestion control scheme6 consists of the \slow start" and \congestion avoidance" phases. In the \slow start" phase, cwnd is initialized to one TCP segment. The cwnd is incremented by one segment for each acknowledgment received, so the cwnd doubles every round trip. The cwnd can reach up to ssthresh (initialized to 64K bytes). In the \congestion avoidance" phase, the cwnd is incremented by 1=cwnd for every segment acknowledged. If an acknowledgment is not received by the timeout period, the segment is considered to be lost, and \slow-start" phase is entered. The cwnd is set to one, and ssthresh is set to max(2, min(cwnd/2, rcvwnd)). Figure 1 shows the di erent phases of the TCP congestion control mechanism. It has been shown that TCP/IP performance can be improved with the fast retransmit and recovery7 and selective acknowledgments8 options.
Proceedings of SPIE Symposium on Voice, Video and Data Communications, Vol. 3530, Conference on Performance and Cooston, MA November 1998, pp. 415-422. c SPIE. Further author information: Tel: (614) 688-4482; Fax: (614) 292-2911; E-mail: fvandalor, jaing@; WWW: /~jain/
Asynchronous transfer mode (ATM) is the technology chosen for implementing the Broadband Integrated Services Digital Network (B-ISDN). The performance of Internet protocols over ATM is an extremely important research area. As web tra c forms a major portion of the Internet tra c, we model world wide web (WWW) servers and clients running over an ATM network using the available bit rate (ABR) service. The WWW servers are modeled using a variant of the SPECweb961 benchmark, while the WWW clients are based on a model proposed in 2]. The tra c generated is typically bursty, having active and idle transmission periods. A timeout occurs after a certain idle interval. During idle periods, the underlying TCP congestion windows remain large until the timer expires. When the application becomes active again, these large windows may be used to send data in a burst. This raises the possibility of large queues at the switches, if the source rates are not controlled by ABR. We study this problem and show that ABR scales well to a large number of bursty TCP sources in the system. Keywords: ATM, WWW model, TCP/IP, ABR service ATM is a high speed, connection-oriented, cell switching technology. It uses small xed-size packets (also called cells) to transport its tra c. It is designed to provide an integrated service for supporting audio, video and data tra c. ATM provides multiple categories of service to support di erent quality of service (QoS) requirements. The current set of service categories speci ed are: the constant bit rate (CBR), real-time variable bit rate (rt-VBR), non-real time variable bit rate (nrt-VBR), available bit rate (ABR), and unspeci ed bit rate (UBR). The CBR service is aimed at transporting voice and synchronous data applications. The VBR (rt- and nrt-) services provide support for video and audio applications which do not require isochronous transfer. The ABR service transports data tra c. It guarantees a minimum cell rate (MCR) allocation and uses closed-loop feedback to control source rates. Cell loss is small in ABR, though ABR does not give bounded cell loss guarantees. UBR provides \best-e ort" delivery for data applications and does not give any guarantees. The ATM technology is already being used in the backbone of the Internet and many campus backbones. The transport control protocol/internet protocol (TCP/IP) suite of protocols are the most important protocols of the Internet. The performance of TCP/IP over ATM has been addressed in 3{5]. As large ATM networks are built, it is important to study the performance of real-world applications like the World Wide Web over ATM. Such applications typically have low average bandwidth demands from the network, but desire good response time when they become active. It is interesting from the tra c management perspective to study the aggregate e ect of hundreds of such applications browsing or down-loading large documents over an ATM backbone. This study focuses on the performance of WWW tra c over the ATM ABR service. Section 2 gives an overview of the TCP congestion control algorithm, the congestion control mechanism in the ATM ABR service, and the ERICA+ algorithm. Section 3 discusses the issues involved in transporting WWW tra c over ATM. Section 4 discusses the implications of using the HTTP/1.1 standard and presents the WWW server model and the WWW client model used in this study. Section 5 explains the simulation con gurations, and section 6 gives the TCP/IP and the ERICA+ parameter values used in the simulations. Section 7 discusses the simulation results. Finally, section 8 provides a summary of the paper.