Foreign Literature and Its Translation: E-Government Information

Foreign Reference Literature Translation (Chinese)

High-Speed Railway Mobile Communication System Based on 4G LTE Technology
Prof. K. S. Solanki and Kratika Chouhan, Ujjain Engineering College, Ujjain, Madhya Pradesh, India

Abstract: As time has progressed, high-speed railway (HSR) has come to require reliable and secure train operation and passenger communication. To achieve this goal, HSR systems need higher bandwidth and shorter response times, and the older HSR technologies must evolve: new technologies have to be developed, existing architectures improved, and costs controlled. To meet this requirement, HSR adopted GSM-R, an evolution of GSM, but it cannot satisfy customers' needs. A new technology, LTE-R, was therefore adopted; it provides higher bandwidth and greater customer satisfaction at high speeds. This paper introduces LTE-R, presents a comparison between GSM-R and LTE-R, and describes which railway mobile communication system performs better at high speed.

Keywords: high-speed railway, LTE, GSM, communication and signaling systems

I. Introduction

High-speed railway raises the requirements placed on the mobile communication system. With this improvement, the network architecture and hardware equipment must accommodate train speeds of up to 500 km/h. HSR also needs fast handover. To solve these problems, HSR therefore needs a new technology called LTE-R; an HSR based on LTE-R offers high data transmission rates, greater bandwidth, and low latency. LTE-R can handle the growing traffic volume, ensure passenger safety, and provide real-time multimedia information. As train speeds keep increasing, a reliable broadband communication system is essential for HSR mobile communication. Quality-of-service (QoS) measures for HSR applications include data rate, bit error rate (BER), and transmission delay.
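As a toy illustration of how such QoS measures can be checked against operational targets, the sketch below tests measured values against thresholds. The threshold numbers are hypothetical placeholders, not values taken from the article.

```python
# Minimal sketch: checking measured link quality against illustrative
# HSR QoS thresholds (data rate, bit error rate, transmission delay).
# The threshold values below are hypothetical examples only.

QOS_THRESHOLDS = {
    "min_data_rate_mbps": 2.0,   # minimum acceptable data rate
    "max_ber": 1e-6,             # maximum acceptable bit error rate
    "max_delay_ms": 50.0,        # maximum acceptable transmission delay
}

def qos_ok(data_rate_mbps: float, ber: float, delay_ms: float) -> bool:
    """Return True if all three QoS measures meet the thresholds."""
    return (data_rate_mbps >= QOS_THRESHOLDS["min_data_rate_mbps"]
            and ber <= QOS_THRESHOLDS["max_ber"]
            and delay_ms <= QOS_THRESHOLDS["max_delay_ms"])

if __name__ == "__main__":
    print(qos_ok(data_rate_mbps=4.5, ber=2e-7, delay_ms=30.0))  # True
    print(qos_ok(data_rate_mbps=1.2, ber=2e-7, delay_ms=30.0))  # False
```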

为了实现HSR的运营需求,需要⼀个能够与 LTE保持⼀致的能⼒的新系统,提供新的业务,但仍能够与GSM-R长时间共存。

HSR系统选择合适的⽆线通信系统时,需要考虑性能,服务,属性,频段和⼯业⽀持等问题。

4G LTE系统与第三代(3G)系统相⽐,它具有简单的扁平架构,⾼数据速率和低延迟。

在LTE的性能和成熟度⽔平上,LTE- railway(LTE-R)将可能成为下⼀代HSR通信系统。

⼆ LTE-R系统描述考虑LTE-R的频率和频谱使⽤,对为⾼速铁路(HSR)通信提供更⾼效的数据传输⾮常重要。

Latest Format Example for Foreign Literature Translation

Undergraduate Graduation Project (Foreign Literature Translation): Translation and Original Text of the Foreign Reference. School: School of Information Engineering. Major: Information Engineering (Electronic Information Engineering). Class: Grade 2006, Class 4. Student ID: 3206003186. Student: 柯思怡. Supervisor: 田妮莉. June 2010.

Contents: Getting to Know Microsoft SQL Server (Chinese translation) — Section A: Introduction; Section B: Database Scalability Revisited; Section C: Features for Database Development. Get Your Arms around Microsoft SQL Server (English original) — Section A: Introduction to SQL Server 2005; Section B: Database Scalability Revisited; Section C: Features for Database Development.

Getting to Know Microsoft SQL Server

Section A: Introduction

SQL Server 2005 is the most anticipated product in Microsoft's SQL product line.

After millions of e-mails, hundreds of specifications, and dozens of revisions, Microsoft promises that SQL Server 2005 is the latest database development platform for Windows-based database applications. This section points out some of the important features of the SQL Server 2005 product.

SQL Server 2005 covers almost all of OLTP and OLAP technology; Microsoft's flagship database product covers nearly everything. After more than five years in the making, this software has become a product completely different from any of its predecessors. This section introduces most of the product's functionality. When looking for particular features and technologies, readers can pick out the most important and most interesting content, including the history of the SQL Server Engine's evolution, the various SQL Server 2005 editions, scalability, availability, maintenance of very large databases, and business intelligence, as follows:

● Database engine enhancements. SQL Server 2005 makes many improvements to the database engine and introduces new features.
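As a small, hedged illustration of using SQL Server as an application development platform, the sketch below runs a query over ODBC from Python. It assumes the third-party pyodbc package and an installed ODBC driver; the server, credentials, and table names are hypothetical.

```python
# Minimal sketch: querying a SQL Server database over ODBC.
# Assumes the third-party pyodbc package and a suitable ODBC driver;
# the connection details and table are hypothetical.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=SampleDb;"
    "UID=app_user;PWD=app_password"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # Hypothetical table used purely for illustration.
    cursor.execute("SELECT TOP 10 id, name FROM dbo.Customers ORDER BY id")
    for row in cursor.fetchall():
        print(row.id, row.name)
```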

Internet Web Services: English-Chinese Foreign Literature Translation

(The document contains the English original and a Chinese translation.)

An Internet-Based Logistics Management System for Enterprise Chains

Developing the internet-based application tool

Web services offer new opportunities in the business landscape, facilitating a global marketplace where businesses rapidly create innovative products and serve customers better. Whatever the business need is, Web services have the flexibility to meet the demand and allow outsourcing to be accelerated. In turn, the developer can focus on building core competencies to create customer and shareholder value. Application development is also more efficient because existing Web services, regardless of where they were developed, can easily be reused.

Many of the technology requirements for Web services exist today, such as open standards for business-to-business applications, mission-critical transaction platforms, and secure integration and messaging products. However, to enable robust and dynamic integration of applications, industry standards and tools that extend the capabilities of today's business-to-business interoperability are required. The key to taking full advantage of Web services is to understand what Web services are and how the market is likely to evolve. One needs to be able to invest in platforms and applications today that will enable the developer to quickly and effectively realize these benefits, as well as to meet specific needs and increase business productivity.

Typically, there are two basic technologies to be implemented when dealing with internet-based applications, namely server-based and client-based. Both technologies have their strong points regarding development of the code and the facilities they provide. Server-based applications involve the development of dynamically created web pages. These pages are transmitted to the web browser of the client and contain code in the form of HTML and JavaScript. The HTML part is the static part of the page that contains forms and controls for user needs, and the JavaScript part is the dynamic part of the page. Typically, the structure of the code can be completely changed through the intervention of web server mechanisms added on the transmission part and implemented by server-based languages such as ASP, JSP, PHP, etc. This leads to an integrated dynamic-page application where the user's wishes regarding problem peculiarities (calculating shortest paths, executing routing algorithms, transacting with the database, etc.) are implemented by appropriately invoking different parts of the dynamic content of such pages. In server-based applications all calculations are executed on the server. In client-based applications, Java applets prevail. Communication with the user is guaranteed by the well-known Java mechanism that acts as the medium between the user and the code. Everything is executed on the client side. Data in this case have to be retrieved once, and this might be the time-consuming part of the transaction.

In server-based applications, server resources are used for all calculations, and this requires powerful server facilities with respect to hardware and software. Client-based applications are burdened with data transmission (chiefly related to road network data). There is a remedy to that, namely caching: once loaded, the data are left in the cache archives of the web browser to be instantly recalled when needed. In our case, a client-based application was developed.
The main reason was the demand, from the users' point of view, for discretion regarding their clients' personal data. In fact, this information was kept secret in our system even from the server side involved.

Data management plays a major role in the good functioning of our system. This role becomes more substantial when the distribution takes place within a large and detailed road network like that of a major complex city. More specifically, in order to produce the proposed routing plan, the system uses information about:

● the locations of the depot and the customers within the road network of the city (their co-ordinates attached in the map of the city),
● the demand of the customers serviced,
● the capacity of the vehicles used,
● the spatial characteristics of road segments of the network examined,
● the topography of the road network,
● the speed of the vehicle, considering the spatial characteristics of the road and the area within which it moves,
● the synthesis of the company fleet of vehicles.

Consequently, the system combines, in real time, the available spatial characteristics with all other information mentioned above, and with tools for modelling, spatial, non-spatial, and statistical analysis, and image processing, forming a scalable, extensible and interoperable application environment. The validation and verification of customers' addresses ensure the accurate estimation of travel times and distances travelled. Where there is a bound on the total route duration, underestimates of travel time may lead to failure of the programmed routing plan, whereas overestimates can lower the utilization of drivers and vehicles and create unproductive wait times as well (Assad, 1991). The data corresponding to the area of interest involved two levels of detail: a more detailed network, appropriate for geocoding (approximately 250,000 links), and a less detailed one for routing (about 10,000 links). The two networks overlapped exactly. The tool provides solutions to the problem of effectively determining the shortest path, expressed in terms of travel time or distance travelled, within a specific road network, using Dijkstra's algorithm (Winston, 1993). In particular, Dijkstra's algorithm is used in two cases during the process of developing the routing plan. In the first case, it calculates the travel times between all possible pairs of depot and customers so that the optimizer can generate the vehicle routes connecting them; in the second case, it determines the shortest path between two involved nodes (depot or customer) in the routing plan, as determined by the algorithm previously. Because U-turn and left-/right-turn restrictions were taken into consideration at network junctions, an arc-based variant of the algorithm was used (Jiang, Han, & Chen, 2002). The system uses the optimization algorithms mentioned in the following part in order to automatically generate the set of vehicle routes (which vehicles should deliver to which customers and in which order), minimizing simultaneously the vehicle costs and the total distance travelled by the vehicles. This process involves activities that tend to be more strategic and less structured than operational procedures.
The system helps planners and managers to view information in a new way and examine issues such as:

● the average cost per vehicle and route,
● the vehicle and capacity utilization,
● the service level and cost,
● the modification of the existing routing scenario by adding or subtracting customers.

In order to support the above activities, the interface of the proposed system provides a variety of analyzed geographic and tabulated data capabilities. Moreover, the system can graphically represent each vehicle route separately, cutting it off from the final routing plan and offering the user the capability of perceiving the road network and the locations of the depot and customers in full detail.

Translation: Logistics management system, developing the internet-based application tool. Web services offer new opportunities in the business landscape, fostering a global marketplace in which businesses quickly launch innovative products and serve customers better.
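To make the shortest-path step described in the article above concrete, the following is a minimal re-implementation sketch of Dijkstra's algorithm on a travel-time-weighted adjacency list. It is not the article's actual code: the node names and weights are hypothetical, and the arc-based variant that handles turn restrictions is not modelled.

```python
# Minimal sketch of Dijkstra's algorithm on a travel-time-weighted graph.
# Illustrative only: node names and weights are hypothetical, and the
# arc-based variant handling turn restrictions is not modelled here.
import heapq

def dijkstra(graph, source):
    """Return a dict of shortest travel times from source to every reachable node.

    graph: {node: [(neighbour, travel_time), ...]}
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical depot/customer network (edge weights = travel minutes).
road_network = {
    "depot": [("c1", 7.0), ("c2", 12.0)],
    "c1": [("c2", 4.0), ("c3", 9.0)],
    "c2": [("c3", 5.0)],
    "c3": [],
}
print(dijkstra(road_network, "depot"))
# {'depot': 0.0, 'c1': 7.0, 'c2': 11.0, 'c3': 16.0}
```

In the routing context above, the same routine would be run once per depot-customer pair to fill the travel-time matrix handed to the route optimizer.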

Big Data Foreign Literature Translation: A Survey of References

(The document contains the English original and a Chinese translation.)

Original text: Data Mining and Data Publishing

Data mining is the extraction of vast interesting patterns or knowledge from huge amounts of data. The initial idea of privacy-preserving data mining (PPDM) was to extend traditional data mining techniques to work with data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. Privacy-preserving data mining considers the problem of running data mining algorithms on confidential data that is not supposed to be revealed even to the party running the algorithm. In contrast, privacy-preserving data publishing (PPDP) may not necessarily be tied to a specific data mining task, and the data mining task may be unknown at the time of data publishing. PPDP studies how to transform raw data into a version that is immunized against privacy attacks but that still supports effective data mining tasks. Privacy preservation for both data mining (PPDM) and data publishing (PPDP) has become increasingly popular because it allows sharing of privacy-sensitive data for analysis purposes. One well-studied approach is the k-anonymity model [1], which in turn led to other models such as confidence bounding, l-diversity, t-closeness, (α,k)-anonymity, etc. In particular, all known mechanisms try to minimize information loss, and such an attempt provides a loophole for attacks. The aim of this paper is to present a survey of most of the common attack techniques for anonymization-based PPDM and PPDP and explain their effects on data privacy.

Although data mining is potentially useful, many data holders are reluctant to provide their data for data mining for fear of violating individual privacy. In recent years, studies have been made to ensure that the sensitive information of individuals cannot be identified easily.

Anonymity Models: k-anonymization techniques have been the focus of intense research in the last few years. In order to ensure anonymization of data while at the same time minimizing the information loss resulting from data modifications, several extending models have been proposed, which are discussed as follows.

1. k-Anonymity

k-anonymity is one of the most classic models; it is a technique that prevents joining attacks by generalizing and/or suppressing portions of the released microdata so that no individual can be uniquely distinguished from a group of size k. In the k-anonymous tables, a data set is k-anonymous (k ≥ 1) if each record in the data set is indistinguishable from at least (k − 1) other records within the same data set. The larger the value of k, the better the privacy is protected. k-anonymity can ensure that individuals cannot be uniquely identified by linking attacks.

2. Extending Models

Since k-anonymity does not provide sufficient protection against attribute disclosure, the notion of l-diversity attempts to solve this problem by requiring that each equivalence class has at least l well-represented values for each sensitive attribute. The technology of l-diversity has some advantages over k-anonymity, because a k-anonymous dataset permits strong attacks due to lack of diversity in the sensitive attributes. In this model, an equivalence class is said to have l-diversity if there are at least l well-represented values for the sensitive attribute. There are semantic relationships among the attribute values, and different values have very different levels of sensitivity.
After anonymization, in any equivalence class, the frequency (in fraction) of a sensitive value is no more than α.

3. Related Research Areas

Several polls show that the public has an increased sense of privacy loss. Since data mining is often a key component of information systems, homeland security systems, and monitoring and surveillance systems, it gives a wrong impression that data mining is a technique for privacy intrusion. This lack of trust has become an obstacle to the benefit of the technology. For example, the potentially beneficial data mining research project Terrorism Information Awareness (TIA) was terminated by the US Congress due to its controversial procedures of collecting, sharing, and analyzing the trails left by individuals. Motivated by the privacy concerns on data mining tools, a research area called privacy-preserving data mining (PPDM) emerged in 2000. The initial idea of PPDM was to extend traditional data mining techniques to work with the data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. The solutions were often tightly coupled with the data mining algorithms under consideration. In contrast, privacy-preserving data publishing (PPDP) may not necessarily tie to a specific data mining task, and the data mining task is sometimes unknown at the time of data publishing. Furthermore, some PPDP solutions emphasize preserving the data truthfulness at the record level, but PPDM solutions often do not preserve such a property. PPDP differs from PPDM in several major ways, as follows:

1) PPDP focuses on techniques for publishing data, not techniques for data mining. In fact, it is expected that standard data mining techniques are applied on the published data. In contrast, the data holder in PPDM needs to randomize the data in such a way that data mining results can be recovered from the randomized data. To do so, the data holder must understand the data mining tasks and algorithms involved. This level of involvement is not expected of the data holder in PPDP, who usually is not an expert in data mining.

2) Both randomization and encryption do not preserve the truthfulness of values at the record level; therefore, the released data are basically meaningless to the recipients. In such a case, the data holder in PPDM may consider releasing the data mining results rather than the scrambled data.

3) PPDP primarily "anonymizes" the data by hiding the identity of record owners, whereas PPDM seeks to directly hide the sensitive data. Excellent surveys and books on randomization and cryptographic techniques for PPDM can be found in the existing literature.

A family of research work called privacy-preserving distributed data mining (PPDDM) aims at performing some data mining task on a set of private databases owned by different parties. It follows the principle of Secure Multiparty Computation (SMC), and prohibits any data sharing other than the final data mining result. Clifton et al. present a suite of SMC operations, like secure sum, secure set union, secure size of set intersection, and scalar product, that are useful for many data mining tasks. In contrast, PPDP does not perform the actual data mining task, but concerns itself with how to publish the data so that the anonymous data are useful for data mining. We can say that PPDP protects privacy at the data level while PPDDM protects privacy at the process level. They address different privacy models and data mining scenarios.
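To make the k-anonymity and l-diversity definitions above concrete, the sketch below checks both properties for a small released table. The records, the choice of quasi-identifiers, and the parameter values are hypothetical examples, not data from the surveyed papers.

```python
# Minimal sketch: checking k-anonymity and l-diversity of a released table.
# Records, quasi-identifier choice, and parameters are hypothetical examples.
from collections import defaultdict

def equivalence_classes(records, quasi_identifiers):
    """Group records by their quasi-identifier values."""
    groups = defaultdict(list)
    for rec in records:
        groups[tuple(rec[q] for q in quasi_identifiers)].append(rec)
    return groups

def is_k_anonymous(records, quasi_identifiers, k):
    """Every equivalence class must contain at least k records."""
    return all(len(g) >= k
               for g in equivalence_classes(records, quasi_identifiers).values())

def is_l_diverse(records, quasi_identifiers, sensitive, l):
    """Every equivalence class must contain at least l distinct sensitive values."""
    return all(len({r[sensitive] for r in g}) >= l
               for g in equivalence_classes(records, quasi_identifiers).values())

table = [
    {"zip": "476**", "age": "2*", "disease": "flu"},
    {"zip": "476**", "age": "2*", "disease": "cancer"},
    {"zip": "479**", "age": "3*", "disease": "flu"},
    {"zip": "479**", "age": "3*", "disease": "flu"},
]
qi = ["zip", "age"]
print(is_k_anonymous(table, qi, k=2))           # True
print(is_l_diverse(table, qi, "disease", l=2))  # False: second class holds only "flu"
```

The failing l-diversity check mirrors the homogeneity attack mentioned later: a class that is k-anonymous can still expose the sensitive value when that value does not vary within the class.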
In the field of statistical disclosure control (SDC), the research works focus on privacy-preserving publishing methods for statistical tables. SDC focuses on three types of disclosures, namely identity disclosure, attribute disclosure, and inferential disclosure. Identity disclosure occurs if an adversary can identify a respondent from the published data. Revealing that an individual is a respondent of a data collection may or may not violate confidentiality requirements. Attribute disclosure occurs when confidential information about a respondent is revealed and can be attributed to the respondent. Attribute disclosure is the primary concern of most statistical agencies in deciding whether to publish tabular data. Inferential disclosure occurs when individual information can be inferred with high confidence from statistical information of the published data.

Some other works of SDC focus on the study of the non-interactive query model, in which the data recipients can submit one query to the system. This type of non-interactive query model may not fully address the information needs of data recipients because, in some cases, it is very difficult for a data recipient to accurately construct a query for a data mining task in one shot. Consequently, there is a series of studies on the interactive query model, in which the data recipients, including adversaries, can submit a sequence of queries based on previously received query results. The database server is responsible for keeping track of all queries of each user and determining whether or not the currently received query has violated the privacy requirement with respect to all previous queries. One limitation of any interactive privacy-preserving query system is that it can only answer a sublinear number of queries in total; otherwise, an adversary (or a group of corrupted data recipients) will be able to reconstruct all but a 1 − o(1) fraction of the original data, which is a very strong violation of privacy. When the maximum number of queries is reached, the query service must be closed to avoid privacy leaks. In the case of the non-interactive query model, the adversary can issue only one query and, therefore, the non-interactive query model cannot achieve the same degree of privacy defined by the interactive model. One may consider that privacy-preserving data publishing is a special case of the non-interactive query model.
This paper presents a survey of most of the common attack techniques for anonymization-based PPDM and PPDP and explains their effects on data privacy. k-anonymity is used to protect respondents' identity and reduces linking attacks; in the case of a homogeneity attack, a simple k-anonymity model fails, and we need a concept that prevents this attack, the solution being l-diversity. All tuples are arranged in a well-represented form so that the adversary is diverted across l places or l sensitive attribute values. l-diversity is limited in the case of a background-knowledge attack, because no one can predict the knowledge level of an adversary. It is observed that when using generalization and suppression we also apply these techniques to attributes that do not need this extent of privacy, and this reduces the precision of the published table. e-NSTAM (extended Sensitive Tuples Anonymity Method) is applied to sensitive tuples only and reduces information loss; this method also fails in the case of multiple sensitive tuples. Generalization with suppression is also a cause of data loss, because suppression emphasizes not releasing values that are not suited to the k factor. Future work on this front can include defining a new privacy measure along with l-diversity for multiple sensitive attributes, and focusing on generalizing attributes without suppression, using other techniques that achieve k-anonymity, because suppression reduces the precision of the published table.

Translation: Data Mining and Data Publishing. Data mining is the extraction of large numbers of interesting patterns or knowledge from vast amounts of data.
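As an illustration of the generalization step discussed above (as opposed to suppression), the following sketch coarsens quasi-identifier values into ranges and prefixes. It is a hypothetical example of generalization in general, not the e-NSTAM method.

```python
# Minimal sketch of attribute generalization for quasi-identifiers:
# ages are coarsened to 10-year bands and ZIP codes to 3-digit prefixes.
# This is an illustrative transformation, not the e-NSTAM method.

def generalize_record(record):
    age_band_low = (record["age"] // 10) * 10
    return {
        "age": f"{age_band_low}-{age_band_low + 9}",
        "zip": record["zip"][:3] + "**",
        "disease": record["disease"],   # sensitive attribute kept as-is
    }

raw = [
    {"age": 23, "zip": "47677", "disease": "flu"},
    {"age": 27, "zip": "47602", "disease": "cancer"},
]
print([generalize_record(r) for r in raw])
# [{'age': '20-29', 'zip': '476**', 'disease': 'flu'},
#  {'age': '20-29', 'zip': '476**', 'disease': 'cancer'}]
```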

English Literature and Its Translation: An Internet-Based Reliability Information System

The Reliability of an Internet-Based Information System

Abstract: This paper focuses on the reliability of an information system built on a wide-area network and server structure. Existing clients of the system were amended, and server HTTP tasks were transformed to perform analysis and advanced graphics. The article also explains the technical background of the World Wide Web as well as client/server systems analysis. With this system, design engineers and reliability analysts can analyze the reliability of a system more quickly and easily.

Keywords: information system, WWW, client/server architecture, reliability

1. Introduction

Information systems have a wide range of practical applications and can help decision makers form decisive strategies. An information system is generally considered to be built on a model of the data flows of a particular organizational structure. In reliability engineering, researchers encounter difficulties in accessing and analyzing data. The system development process accumulates the data from which most analysts obtain reliability figures. Component data, computer failure rates for each component, and application-specific data (for example, the importance of the application, the number of function pins, and so on) are very important to developers designing the system. Within organizations, the client/server architecture has become established as a good way of integrating computer data. Compared with the traditional centralized computing environment, in a client/server environment users share data and applications, and processes are easier to handle [1]. The ability to balance workloads plays an important role in client/server applications.

The development of the Internet supports interactive data display and distribution as a means of transmission. The Internet has been a great success in standardizing the interaction of information between client and server. Similarly, an Internet-based system can be created quickly if no special resources are required when developing the client and server software or network protocols.

In this paper, we explain an Internet-based, client/server implementation of a reliability information system. Chapter II provides an overview of client/server computing in relation to the Internet. Chapter III describes the implementation details of the reliability information system, and Chapter IV summarizes and discusses further study.

2. Internet and client/server architecture

The client/server structure can be described as a relationship between two processes that run a number of tasks in cooperation. It supports the integrity and scalability of information systems [2]. Lyu (1995) demonstrated four advantages of the client/server structure: cost reduction, productivity improvement, a longer system life cycle, and better availability. Therefore, the client/server architecture is considered a viable structure for information systems.

With the development of the Internet, the simplest possible way to realize a client/server structure is for the client software task to display and format, using a web browser, the information obtained from the server. Many bibliographic retrieval systems are typical examples.
When a web browser is used as the client to access an existing client/server platform, only one class of system code (HTML and helper code) needs to be maintained. But for other systems, in which the client software performs additional tasks on behalf of the server or the user, coordination mechanisms are needed for a web-browser-based client to run these jobs. A typical solution is to use Common Gateway Interface (CGI) programs. However, for various reasons this approach is not satisfactory: in a CGI-based system, all tasks usually handled by the client must be simulated by the CGI program, which increases the burden on the server. Another way of accessing client/server applications from a standard Internet browser was invented by Dossick and Kaiser [3]. They put forward an HTTP proxy that connects to the existing client/server network system; the HTTP proxy intercepts HTTP requests for data and transfers them, as the original set of requests, to the source system.

Using APIs such as Netscape's embedded, browser-specific tools to create a client/server system with a browser-based client is feasible. However, Web-based client software generated with such APIs is limited to a proprietary platform as well as a dedicated web browser. These unnecessary restrictions offset many of the benefits of creating Web-based clients.

3. System

The Electronics and Telecommunications Research Institute (ETRI) has developed a reliability information system called ERIS. It can synthesize computer-system failure rates and perform reliability calculations [4]. The ERIS clients comprise two native components for different hardware platforms: workstations and personal computers. Users who are not familiar with the UNIX environment find it inconvenient to use.

The reliability software tools made by Birolini should also be noted. For software to become useful to users, beyond other requirements, a large enough database is very important. In a stand-alone environment, each user can have independent data storage, which wastes computer resources and time. Most existing tools are stand-alone, which makes it inconvenient to share data between users. Based on the ERIS test requirements above and the views collected, we set the following elements:

- Friendly user interface: a man-machine interface is very important for effectively handling large amounts of data; at the same time, it is very helpful for understanding the results of the analysis.
- Openness: the information service must be widely usable; end users should be able to access the reliability information easily from other client applications.
- Data sharing: once part of the data is entered into the DBMS, this data should be shared by other users.
- User management: the stored user information must cope effectively with an increase in users.
- Security: the design must appropriately protect the data from being opened to the outside world; only users with the correct user identification number (ID) and password may enter the database server.

Based on the above requirements, the functions of ERIS were developed as follows. The system is divided into two categories: user/database management and reliability analysis. We developed ERIS as a combination of methods for connecting to the Internet and the original client/server structure.
The web browser's display and formatting of information can be used effectively by all users, and web-browser management concepts are used for ERIS: ERIS allows management through the user's web browser application. From the home page, users can apply for ERIS through the use of an ID; once his or her ID is registered in the user database, he or she can download the ERIS client program from anywhere.

The ERIS client is realized much like a Windows program. It lets the server handle the original function-specific applications through the client. The clients have a better, easier, more user-friendly interface for storing data, for incorporating reliability knowledge, and for providing good queries during the design process. The server process and the client processes exchange data in line with the standard TCP/IP protocol. ERIS is provided as a composite structure of Internet and client/server architectures. The server has two processes, CGI and COM. The CGI component resolves the HTTP requests issued from a web-browser client and returns the corresponding results. The COM process manages data-link requests. There is a temporary database with error-filter components and user information; only authenticated users and their information data can be registered.

The server operating system is UNIX running on workstations, and the client is a PC. The Informix database management system is used to manage users and data. The server process is written in the ESQL/C language, and the client is developed with the MS Visual C++ and Delphi development tools.

4. Conclusion

ERIS is a development system widely used by design engineers and reliability analysts. By combining the concepts of the Internet and the client/server structure, we quickly set up an environment in which design engineers can understand reliability. Through the use of the Internet, the time to distribute and install the tool is greatly reduced compared with before. ERIS also provides services to other organizations via the Internet. The development of Internet technology will stimulate the change from traditional client/server systems to popular Internet-based systems.

Translation: The Reliability of an Internet-Based Information System. This paper mainly discusses how the reliability of information systems has evolved with the development of wide-area networks and server structures.
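As a hedged illustration of the TCP/IP request-response exchange between client and server processes described above, the sketch below runs a one-shot server and a client in a single script. The port, message format, and reliability figure are hypothetical; this is not the actual ERIS protocol.

```python
# Minimal sketch of a TCP request/response exchange between a client and a
# server process, illustrating the client/server data exchange over TCP/IP
# described above. Port, message format, and values are hypothetical.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050  # hypothetical endpoint
ready = threading.Event()

def serve_once():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # tell the client the server is listening
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            # Pretend to look up a reliability figure for the requested component.
            conn.sendall(f"RELIABILITY {request.strip()} 0.997".encode())

server = threading.Thread(target=serve_once, daemon=True)
server.start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"COMPONENT-42\n")
    print(cli.recv(1024).decode())       # RELIABILITY COMPONENT-42 0.997
server.join()
```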

Internet Financial Security: English-Chinese Foreign Literature Translation

(The document contains the English original and a Chinese translation.)

Database Security in a Web Environment

Introduction

Databases have been common in government departments and commercial enterprises for many years. Today, databases in any organization are increasingly opened up to a multiplicity of suppliers, customers, partners and employees - an idea that would have been unheard of a few years ago. Numerous applications and their associated data are now accessed by a variety of users requiring different levels of access via manifold devices and channels - often simultaneously. For example:

• Online banks allow customers to perform a variety of banking operations - via the Internet and over the telephone - whilst maintaining the privacy of account data.
• E-Commerce merchants and their Service Providers must store customer, order and payment data on their merchant server - and keep it secure.
• HR departments allow employees to update their personal information - whilst protecting certain management information from unauthorized access.
• The medical profession must protect the confidentiality of patient data - whilst allowing essential access for treatment.
• Online brokerages need to be able to provide large numbers of simultaneous users with up-to-date and accurate financial information.

This complex landscape leads to many new demands upon system security. The global growth of complex web-based infrastructures is driving a need for security solutions that provide mechanisms to segregate environments; perform integrity checking and maintenance; enable strong authentication and non-repudiation; and provide for confidentiality. In turn, this necessitates comprehensive business and technical risk assessment to identify the threats, vulnerabilities and impacts, and from this define a security policy. This leads to security definitions throughout the infrastructure - operating system, database management system, middleware and network.

Financial, personal and medical information systems and some areas of government have strict requirements for security and privacy. Inappropriate disclosure of sensitive information to the wrong parties can have severe social, legal and regulatory consequences. Failure to address the basics can result in substantial direct and consequential financial losses - witness the fraud losses through the compromise of several million credit card numbers in merchants' databases [Occf], plus associated damage to brand-image and loss of consumer confidence.

This article discusses some of the main issues in database and web server security, and also considers important architecture and design issues.

A Simple Model

At the simplest level, a web server system consists of front-end software and back-end databases with interface software linking the two. Normally, the front-end software will consist of server software and the network server operating system, and the back-end database will be a relational or object-oriented database fulfilling a variety of functions, including recording transactions, maintaining accounts and inventory. The interface software typically consists of Common Gateway Interface (CGI) scripts used to receive information from forms on web sites, to perform online searches and to update the database. Depending on the infrastructure, middleware may be present; in addition, security management subsystems (with session and user databases) that address the web server's and related applications' requirements for authentication, access control and authorization may be present.
Communications between this subsystem and either the web server, middleware or database are via application program interfaces (APIs). This simple model is depicted in Figure 1. Security can be provided by the following components:

• Web server.
• Middleware.
• Operating system.
• Database and Database Management System.
• Security management subsystem.

(Figure 1: A Simple Model.)

The security of such a system addresses aspects of authenticity, integrity and confidentiality and is dependent on the security of the individual components and their interactions. Some of the most common vulnerabilities arise from poor configuration, inadequate change control procedures and poor administration. However, even if these areas are properly addressed, vulnerabilities still arise. The appropriate combination of people, technology and processes holds the key to providing the required physical and logical security. Attention should additionally be paid to the security aspects of planning, architecture, design and implementation.

In the following sections, we consider some of the main security issues associated with databases, database management systems, operating systems and web servers, as well as important architecture and design issues. Our treatment seeks only to outline the main issues and the interested reader should refer to the references for a more detailed description.

Database Security

Database management systems normally run on top of an operating system and provide the security associated with a database. Typical operating system security features include memory and file protection, resource access control and user authentication. Memory protection prevents the memory of one program interfering with that of another and limits access and use of the objects employing techniques such as memory segmentation. The operating system also protects access to other objects (such as instructions, input and output devices, files and passwords) by checking access with reference to access control lists. Security mechanisms in common operating systems vary tremendously and, for those that are lacking, there exists special-purpose security software that can be integrated with the existing environment. However, this can be an expensive, time-consuming task and integration difficulties may also adversely impact application behaviors.

Most database management systems consist of a number of modules - including database querying and database and file management - along with authorization, concurrent access and database description tables. These management systems also use a variety of languages: a data definition language supports the logical definition of the database; developers use a data manipulation language; and a query language is used by non-specialist end-users.

Database management systems have many of the same security requirements as operating systems, but there are significant differences since the former are particularly susceptible to the threat of improper disclosure, modification of information and also denial of service. Some of the most important security requirements for database management systems are:

• Multi-Level Access Control.
• Confidentiality.
• Reliability.
• Integrity.
• Recovery.

These requirements, along with security models, are considered in the following sections.

Multi-Level Access Control

In a multi-application and multi-user environment, administrators, auditors, developers, managers and users - collectively called subjects - need access to database objects, such as tables, fields or records.
Access control restricts the operations available to a subject with respect to particular objects and is enforced by the database management system. Mandatory access controls require that each controlled object in the database must be labeled with a security level, whereas discretionary access controls may be applied at the choice of a subject. Access control in database management systems is more complicated than in operating systems since, in the latter, all objects are unrelated whereas in a database the converse is true. Databases are also required to make access decisions based on a finer degree of subject and object granularity. In multi-level systems, access control can be enforced by the use of views - filtered subsets of the database - containing the precise information that a subject is authorized to see. A general principle of access control is that a subject with high level security should not be able to write to a lower level object, and this poses a problem for database management systems that must read all database objects and write new objects. One solution to this problem is to use a trusted database management system.

Confidentiality

Some databases will inevitably contain what is considered confidential data. For example, it could be inherently sensitive or its source may be sensitive, or it may belong to a sensitive table, thus making it difficult to determine what is actually confidential. Disclosure is also difficult to define, as it can be direct, indirect, involve the disclosure of bounds or even mere existence. An inference problem exists in database management systems whereby users can infer sensitive information from relatively insensitive queries. A trivial example is a request for information about the average salary of an employee where the number of employees turns out to be just one, thus revealing the employee's salary. However, much more sophisticated statistical inference attacks can also be mounted. This highlights the fact that, although the data itself may be properly controlled, confidential information may still leak out.

Controls can take several forms: not divulging sensitive information to unauthorized parties (which depends on the respective subject and object security levels), logging what each user knows or masking response data. The first control can be implemented fairly easily, the second quickly becomes unmanageable for a large number of users and the third leads to imprecise responses, and also exemplifies the trade-off between precision and security. Polyinstantiation refers to multiple instances of a data object existing in the database and it can provide a partial solution to the inference problem whereby different data values are supplied, depending on the security level, in response to the same query. However, this makes consistency management more difficult. Another issue that arises is when the security level of an aggregate amount is different to that of its elements (a problem commonly referred to as aggregation). This can be addressed by defining appropriate access control using views.

Reliability, Integrity and Recovery

Arguably, the most important requirements for databases are to ensure that the database presents consistent information to queries and can recover from any failures.
An important aspect of consistency is that transactions execute atomically; that is, they either execute completely or not at all. Concurrency control addresses the problem of allowing simultaneous programs access to a shared database, while avoiding incorrect behavior or interference. It is normally addressed by a scheduler that uses locking techniques to ensure that the transactions are serializable and independent. A common technique used in commercial products is two-phase locking (or variations thereof), in which the database management system controls when transactions obtain and release their locks according to whether or not transaction processing has been completed. In a first phase, the database management system collects the necessary data for the update; in a second phase, it updates the database. This means that the database can recover from incomplete transactions by repeating either of the appropriate phases. This technique can also be used in a distributed database system using a distributed scheduler arrangement.

System failures can arise from the operating system and may result in corrupted storage. The main copy of the database is used for recovery from failures and communicates with a cached version that is used as the working version. In association with the logs, this allows the database to recover to a very specific point in the event of a system failure, either by removing the effects of incomplete transactions or applying the effects of completed transactions. Instead of having to recover the entire database after a failure, recovery can be made more efficient by the use of checkpointing. It is used during normal operations to write additional updated information - such as logs, before-images of incomplete transactions, after-images of completed transactions - to the main database, which reduces the amount of work needed for recovery. Recovery from failures in distributed systems is more complicated, since a single logical action is executed at different physical sites and the prospect of partial failure arises.

Logical integrity, at field level and for the entire database, is addressed by the use of monitors to check important items such as input ranges, states and transitions. Error-correcting and error-detecting codes are also used.

Security Models

Various security models exist that address different aspects of security in operating systems and database management systems. For example, the Bell-LaPadula model defines security in terms of mandatory access control and addresses confidentiality only. The Bell-LaPadula model, and other models including the Biba model for integrity, are described more fully in [Cast95] and [Pfle89]. These models are implementation-independent and provide a powerful insight into the properties of secure systems, lead to design policies and principles, and some form the basis for security evaluation criteria.

Web Server Security

Web servers are now one of the most common interfaces between users and back-end databases, and as such, their security becomes increasingly important. Exploitation of vulnerabilities in the web server can lead to unforeseen attacks on middleware and back-end databases, bypassing any controls that may be in place. In this section, we focus on common web server vulnerabilities and how the authentication requirements of web servers and databases are met. In general, a web server platform should not be shared with other applications and should be the only machine allowed to access the database.
Using a firewall can provide additional security - either between the web server and users or between the web server and back-end database - and often the web server is placed on a de-militarized zone (DMZ) of a firewall. While firewalls can be used to block certain incoming connections, they must allow HTTP (and HTTPS) connections through to the web server, and so attacks can still be launched via the ports associated with these connections.

Vulnerabilities

Vulnerabilities appear on a weekly basis and, here, we prefer to focus on some general issues rather than specific attacks. Common web server vulnerabilities include:

• No policy exists.
• The default configuration is on.
• Reusable passwords appear in clear.
• Unnecessary ports available for network services are not disabled.
• New security holes are not tracked. Even if they are, well-known vulnerabilities are not always fixed, as the source code patches are not applied by the system administrator and old programs are not re-compiled or removed.
• Security tools are not used to scan the network for weaknesses and changes or to detect intrusions.
• Faulty and buggy software - for example, buffer overflow and stack smashing attacks.
• Automatic directory listings - this is of particular concern for the interface software directories.
• Server root files are generally visible or accessible.
• Lack of logs and backups.
• File access is often not explicitly configured by the system administrator according to the security policy. This applies to configuration, client, administration and log files, administration programs, and CGI program sources and executables. CGI scripts allow dynamic web pages and make program development (in, for example, Perl) easy and rapid. However, their successful exploitation may allow execution of malicious programs, launching of denial-of-service attacks and, ultimately, privilege escalation on a server.

Web Server and Database Authentication

While user, browser and web server authentication are relatively well understood [Garf97], [Ghos98] and [Tree98], the introduction of additional components, such as databases and middleware, raises a number of authentication issues. There are a variety of options for authentication in a simple model (Figure 1). Firstly, both the web server and database management system can individually authenticate a user. This option requires the user to authenticate twice, which may be unacceptable in certain applications, although a single sign-on device (which aims to manage authentication in a user-transparent way) may help. Secondly, a common approach is for the database to automatically grant user access based on web server authentication. However, this option should only be used for accessing publicly available information. Finally, the database may grant user access employing the web server authentication credentials as a basis for its own user authentication, using security management subsystems (Figure 1). We consider this last option in more detail.

Web-based communications use the stateless HTTP protocol with the implication that state, and hence authentication, is not preserved when browsing successive web pages. Cookies, or files placed on a user's machine by a web server, were developed as a means of addressing this issue and are often used to provide authentication. However, after initial authentication, there is typically no re-authentication per page in the same realm, only the use of unencrypted cookies (sometimes in association with IP addresses).
This approach provides limited security as both cookies and IP addresses can be tampered with or spoofed. A stronger authentication method, commonly used by commercial implementations, uses digitally signed cookies. This allows additional systems, such as databases, to use digitally signed cookie data, including a session ID, as a basis for authentication. When a user has been authenticated by a web server (using a password, for example), a session ID is assigned and is stored in a security management subsystem database. When a user subsequently requests information from a database, the database receives a copy of the session ID, the security management subsystem checks this session ID against its local copy and, if authentication is successful, user access is granted to the database.

The session ID is typically transmitted in the clear between the web server and database, but may be protected by SSL or even by physical security measures. The communications between the browser and web servers, and the web servers and security management subsystem (and its databases), are normally protected by SSL and use a web server security API that is used to digitally sign and verify browser cookies. The communications between the back-end databases and security management subsystem (and its databases) are also normally protected by SSL and use a database security API that verifies session IDs originating from the database and provides additional user authorization credentials. The web server security API is generally proprietary while, for the database security API, many vendors have adopted standards such as the Generic Security Services API (GSS-API) or CORBA [RFC2078] and [Corba].

Architecture and Design

Security requirements for designing, building and implementing databases are important so that the systems, as part of the overall infrastructure, meet their requirements in actual operation. The various security models provide an important insight into the design requirements for databases and their management systems.

Secure Database Management System Architectures

In multi-level database management systems, a variety of architectures are possible: trusted subject, integrity locked, kernels and replicated. Trusted subject is used by most of the leading database management system vendors and can be integrated in existing products. Basically, the trusted subject architecture allows users to access a database via an untrusted front-end, a trusted database management system and a trusted operating system. The operating system provides physical access to the database and the database management system provides multilevel object protection. The other architectures - integrity locked, kernels and replicated - all vary in detail, but they use a trusted front-end and an untrusted database management system. For details of these architectures and research prototypes, the reader is referred to [Cast95]. Different architectures are suited to different environments: for example, the trusted subject architecture is less integrated with the underlying operating system and is best suited when a trusted path can be assured between applications and the database management system.

Secure Database Management System Design

As discussed above, there are several fundamental differences between operating system and database management system design, including object granularity, multiple data types, data correlations and multi-level transactions.
Other differences include the fact that database management systems include both physical and logical objects and that the database lifecycle is normally longer. These differences must be reflected in the design requirements, which include:

• Access, flow and inference controls.
• Access granularity and modes.
• Dynamic authorization.
• Multi-level protection.
• Polyinstantiation.
• Auditing.
• Performance.

These requirements should be considered alongside basic information integrity principles, such as:

• Well-formed transactions - to ensure that transactions are correct and consistent.
• Continuity of operation - to ensure that data can be properly recovered, depending on the extent of a disaster.
• Authorization and role management - to ensure that distinct roles are defined and users are authorized.
• Authenticated users - to ensure that users are authenticated.
• Least privilege - to ensure that users have the minimal privilege necessary to perform their tasks.
• Separation of duties - to ensure that no single individual has access to critical data.
• Delegation of authority - to ensure that the database management system policies are flexible enough to meet the organization's requirements.

Of course, some of these requirements and principles are not met by the database management system, but by the operating system and also by organizational and procedural measures.

Database Design Methodology

Various approaches to design exist, but most contain the same main stages. The principal aim of a design methodology is to provide a robust, verifiable design process and also to separate policies from how policies are actually implemented. An important requirement during any design process is that different design aspects can be merged, and this equally applies to security. A preliminary analysis should be conducted that addresses the system risks, environment, existing products and performance. Requirements should then be analyzed with respect to the results of a risk assessment. Security policies should be developed that include specification of granularity, privileges and authority. These policies and requirements form the input to the conceptual design that concentrates on subjects, objects and access modes without considering implementation details. Its purpose is to express information and process flows in a complete and consistent way. The logical design takes into account the operating system and database management system that will be used and which of the security requirements can be provided by which mechanisms. The physical design considers the actual physical realization of the logical design and, indeed, may result in a revision of the conceptual and logical phases due to physical constraints.

Security Assurance

Once a product has been developed, its security assurance can be assessed by a number of methods including formal verification, validation, penetration testing and certification. For example, if a database is to be certified as TCSEC Class B1, then it must implement the Bell-LaPadula mandatory access control model in which each controlled object in the database must be labeled with a security level. Most of these methods can be costly and lengthy to perform and are typically specific to particular hardware and software configurations.
However, the international Common Criteria certification scheme provides the added benefit of a mutual recognition arrangement, thus avoiding the prospect of multiple certifications in different countries.

Conclusion

This article has considered some of the security principles that are associated with databases and how these apply in a web-based environment. It has also focused on important architecture and design principles. These principles have focused mainly on the prevention, assurance and recovery aspects, but other aspects, such as detection, are equally important in formulating a total information protection strategy. For example, host-based intrusion detection systems as well as a robust and tested set of business recovery procedures should be considered. Any fit-for-purpose, secure e-business infrastructure should address all the above aspects: prevention, assurance, detection and recovery. Certain industries are now starting to specify their own set of global, secure e-business requirements. International card payment associations have recently started to require minimum information security standards from electronic commerce merchants handling credit card data, to help manage fraud losses and associated impacts such as brand-image damage and loss of consumer confidence.

Translation: Database Security in a Web Environment. Introduction: databases have been in widespread use in government departments and commercial organizations for many years.
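To illustrate the digitally signed cookie mechanism described in the Web Server and Database Authentication section above, here is a minimal sketch that signs and verifies a session ID with an HMAC. The key handling and cookie format are simplified assumptions, not a particular vendor's security API.

```python
# Minimal sketch of signing and verifying a session-ID cookie with an HMAC,
# illustrating the "digitally signed cookie" idea discussed above.
# Key management and the cookie format are hypothetical simplifications.
import hmac
import hashlib
import secrets

SERVER_KEY = secrets.token_bytes(32)   # shared by web server and security subsystem

def issue_cookie(session_id: str) -> str:
    sig = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{sig}"

def verify_cookie(cookie: str):
    """Return the session ID if the signature checks out, otherwise None."""
    session_id, _, sig = cookie.rpartition(".")
    expected = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(sig, expected) else None

cookie = issue_cookie(secrets.token_hex(16))
tampered = cookie[:-1] + ("0" if cookie[-1] != "0" else "1")
print(verify_cookie(cookie) is not None)    # True: genuine cookie accepted
print(verify_cookie(tampered) is not None)  # False: tampered cookie rejected
```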

Foreign Literature with Chinese Translation: Databases

English original 2: DBA Survivor: Become a Rock Star DBA, by Thomas LaRock, published by Apress, 2010.

You know that a database is a collection of logically related data elements that may be structured in various ways to meet the multiple processing and retrieval needs of organizations and individuals. There's nothing new about databases - early ones were chiseled in stone, penned on scrolls, and written on index cards. But now databases are commonly recorded on magnetizable media, and computer programs are required to perform the necessary storage and retrieval operations. You'll see in the following pages that complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers. In a file-oriented system, users needing special information may communicate their needs to a programmer, who, when time permits, will write one or more programs to extract the data and prepare the information [4]. The availability of a DBMS, however, offers users a much faster alternative communications path.

If the DBMS provides a way to interactively interrogate and update the database, this capability allows for managing personal databases. However, it does not automatically leave an audit trail of actions and does not provide the kinds of controls necessary in a multiuser organization. These controls are only available when a set of application programs is customized for each data entry and updating function.

Software for personal computers which performs some of the DBMS functions has been very popular. Personal computers were intended for use by individuals for personal information storage and processing. These machines have also been used extensively by small enterprises and by professionals like doctors, architects, engineers, lawyers and so on. By the nature of the intended usage, database systems on these machines are exempt from several of the requirements of full-fledged database systems. Since data sharing is not intended, and concurrent operations even less so, the software can be less complex. Security and integrity maintenance are de-emphasized or absent. As data volumes will be small, performance efficiency is also less important. In fact, the only aspect of a database system that remains important is data independence. Data independence, as stated earlier, means that application programs and user queries need not be aware of the physical organization of data on secondary storage. The importance of this aspect, particularly for the personal computer user, is that it greatly simplifies database usage. The user can store, access and manipulate data at a high level (close to the application) and be totally shielded from the
The user can store, access and manipulate data a( a high level (close to (he application) and be totally shielded from the10low level (close to the machine) details of data organization. We will not discuss details of specific PC DBMS software packages here. Let us summarize in the following the strengths and weaknesses of personal computer data-base software systems:The most obvious positive factor is the user friendliness of the software. A user with no prior computer background would be able to use the system to store personal and professional data, retrieve and perform relayed processing. The user should, of course, satiety himself about the quality of software and the freedom from errors (bugs) so that invest-merits in data arc protected.For the programmer implementing applications with them, the advantage lies in the support for applications development in terms of input screen generations, output report generation etc. offered by theses stems.The main negative point concerns absence of data protection features. Unless encrypted, data cane accessed by whoever has access to the machine Data can be destroyed through mistakes or malicious intent. The second weakness of many of the PC-based systems is that of performance. If data volumes grow up to a few thousands of records, performance could be a bottleneck.For organization where growth in data volumes is expected, availability of. the same or compatible software on large machines should be considered.This is one of the most common misconceptions about database management systems that are used in personal computers. Thoroughly comprehensive and sophisticated business systems can be developed in dBASE, Paradox and other DBMSs. However, they are created by experienced programmers using the DBMS's own programming language. Thai is not the same as users who create and manage personal10files that are not part of the mainstream company system.Transaction Management of DatabaseThe objective of long-duration transactions is to model long-duration, interactive Database access sessions in application environments. The fundamental assumption about short-duration of transactions that underlies the traditional model of transactions is inappropriate for long-duration transactions. The implementation of the traditional model of transactions may cause intolerably long waits when transactions aleph to acquire locks before accessing data, and may also cause a large amount of work to be lost when transactions are backed out in response to user-initiated aborts or system failure situations.The objective of a transaction model is to pro-vide a rigorous basis for automatically enforcing criterion for database consistency for a set of multiple concurrent read and write accesses to the database in the presence of potential system failure situations. The consistency criterion adopted for traditional transactions is the notion of scrializability. Scrializa-bility is enforced in conventional database systems through theuse of locking for automatic concurrency control, and logging for automatic recovery from system failure situations. A “transaction’’ that doesn't provide a basis for automatically enforcing data-base consistency is not really a transaction. To be sure, a long-duration transaction need not adopt seri-alizability as its consistency criterion. 
However, there must be some consistency criterion.Version System Management of DatabaseDespite a large number of proposals on version support in the context of computer aided design and software engineering, the absence of a consensus on version semantics10has been a key impediment to version support in database systems. Because of the differences between files and databases, it is intuitively clear that the model of versions in database systems cannot be as simple as that adopted in file systems to support software engineering.For data-bases, it may be necessary to manage not only versions of single objects (e.g. a software module, document, but also versions of a collection of objects (e.g. a compound document, a user manual, etc. and perhaps even versions of the schema of database (c.g. a table or a class, a collection of tables or classes).Broadly, there arc three directions of research and development in versioning. First is the notion of a parameterized versioning", that is, designing and implementing a versioning system whose behavior may be tailored by adjusting system parameters This may be the only viable approach, in view of the fact that there are various plausible choices for virtually every single aspect of versioning.The second is to revisit these plausible choices for every aspect of versioning, with the view to discardingsome of themes either impractical or flawed. The third is the investigation into the semantics and implementation of versioning collections of objects and of versioning the database.There is no consensus of the definition of the te rm “management information system”. Some writers prefer alternative terminology such as “information processing system”, "information and decision syste m, “organizational information syste m”, or simply “i nformat ion system” to refer to the computer-based information processing system which supports the operations, management, and decision-making functions of an organization. This text uses “MIS” because i t is descriptive and generally understood; it also frequently uses "information system”instead of ''MIS” t o refer to an organizational information system.10A definition of a management information system, as the term is generally understood, is an integrated, user-machine system for providing information 丨o support operations, management, and decision-making functions in an organization. The system utilizes computer hardware and software; manual procedures: models for analysis planning, control and decision making; and a database. The fact that it is an integrated system does not mean that it is a single, monolithic structure: rather, ii means that the parts fit into an overall design. The elements of the definition arc highlighted below: Computer-based user-machine system.Conceptually, a management information can exist without computer, but it is the power of the computer which makes MIS feasible. The question is not whether computers should be used in management information system, but the extent to whichinformation use should be computerized. The concept of a user-machine system implies that some (asks are best performed humans, while others are best done by machine. The user of an MIS is any person responsible for entering input da(a, instructing the system, or utilizing the information output of the system. 
For many problems, the user and the computer form a combined system with results obtained through a set of interactions between the computer and the user.User-machine interaction is facilitated by operation in which the user's input-output device (usually a visual display terminal) is connected lo the computer. The computer can be a personal computer serving only one user or a large computer that serves a number of users through terminals connected by communication lines. The user input-output device permits direct input of data and immediate output of results. For instance, a person using The computer interactively in financial planning poses 4t what10if* questions by entering input at the terminal keyboard; the results are displayed on the screen in a few second.The computer-based user-machine characteristics of an MIS affect the knowledge requirements of both system developer and system user, “computer-based” means that the designer of a management information system must have a knowledge of computers and of their use in processing. The “user-machine” concept means the system designer should also understand the capabilities of humans as system components (as information processors) and the behavior of humans as users of information.Information system applications should not require users Co be computer experts. However, users need to be able lo specify(heir information requirements; some understanding of computers, the nature of information, and its use in various management function aids users in this task.Management information system typically provide the basis for integration of organizational information processing. Individual applications within information systems arc developed for and by diverse sets of users. If there are no integrating processes and mechanisms, the individual applications may be inconsistent and incompatible. Data item may be specified differently and may not be compatible across applications that use the same data. There may be redundant development of separate applications when actually a single application could serve more than one need. A user wanting to perform analysis using data from two different applications may find the task very difficult and sometimes impossible.The first step in integration of information system applications is an overall information system plan. Even though application systems are implemented one at a10time, their design can be guided by the overall plan, which determines how they fit in with other functions. In essence, the information system is designed as a planed federation of small systems.Information system integration is also achieved through standards, guidelines, and procedures set by the MIS function. The enforcement of such standards and procedures permit diverse applications to share data, meet audit and control requirements, and be shares by multiple users. For instance, an application may be developed to run on a particular small computer. Standards for integration may dictate that theequipment selected be compatible with the centralized database. The trend in information system design is toward separate application processing form the data used to support it. The separate database is the mechanism by which data items are integrated across many applications and made consistently available to a variety of users. 
The need for a database in MIS is discussed below.The term “information” and “data” are frequently used interchangeably; However, information is generally defined as data that is meaningful or useful to The recipient. Data items are therefore the raw material for producing information.The underlying concept of a database is that data needs to be managed in order to be available for processing and have appropriate quality. This data management includes both software and organization. The software to create and manage a database is a database management system.When all access to any use of database is controlled through a database management system, all applications utilizing a particular data item access the same data item which is stored in only one place. A single updating of the data item updates it for10all uses. Integration through a database management system requires a central authority for the database. The data can be stored in one central computer or dispersed among several computers; the overriding requirement is that there be an organizational function to exercise control.It is usually insufficient for human recipients to receive only raw data or even summarized data. Data usually needs to be processed and presented in such a way that Che result is directed toward the decision to be made. To do this, processing of dataitems is based on a decision model.For example, an investment decision relative to new capital expenditures might be processed in terms of a capital expenditure decision model.Decision models can be used to support different stages in the decision-making process. “Intelligence’’ models can be used to search for problems and/or opportunities. Models can be used to identify and analyze possible solutions. Choice models such as optimization models maybe used to find the most desirable solution.In other words, multiple approaches are needed to meet a variety of decision situations. The following are examples and the type of model that might be included in an MIS to aid in analysis in support of decision-making; in a comprehensive information system, the decision maker has available a set of general models that can be applied to many analysis and decision situations plus a set of very specific models for unique decisions. Similar models are available tor planning and control. The set of models is the model base for the MIS.Models are generally most effective when the manager can use interactive dialog (o build a plan or to iterate through several decision choices under different conditions.10中文译文2:《数据库幸存者:成为一个摇滚名明星》众所周知,数据库是逻辑上相关的数据元的汇集.这些数据元可以按不同的结构组织起来,以满足单位和个人的多种处理和检索的需要。

电子类文献中英文翻译

电子类文献中英文翻译

外文翻译原文:Progress in ComputersThe first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC was first used in the summer of 1949.These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.As far as computing circuits were concerned, we found ourselves with an embarass de richess. For example, we could use vacuum tube diodes for gates as we did in the EDSAC or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term families of logic came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering along with the rapidly developing subject of electronics.dubbed in the IEE light current electrical engineering.properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers’ ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice Consolidation in the 1960sBy the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the worldhad increased and they were much more reliable than the very early ones . To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops. This led to a range of chips known as the 7400 series. The gates and flip flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.These chips made a new kind of computer possible. It was called a minicomputer. It was something less that a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.Before long minicomputers began to spread and become more powerful. 
The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry ‘going the other way’. As time goes on people get more for their money, not less.Research in Computer Hardware.The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discreet transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputerwith fancy capability logic.The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.The RISC Movement and Its AftermathEarly computers had simple instruction sets. As time went on designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upo n the designer’s intuition.In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instructions Set Computer.Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computersThe RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately born out in practice.Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of, computers available to do their simulations, not just one. 
They refer to such a roomful by the attractive nameof computer farm.The x86 Instruction SetLittle is now heard of pre-RISC instruction sets with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.This dominance of x86 disappoints people like myself who come from the research wings.both academic and industrial.of the computer field. No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research oriented people would like to think otherwise. high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel’s major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious, on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson’s books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.The IA-64 instruction set.Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.Chips for the latest IA-64 processor, namely, the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. 
Such at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.AMD have defined a 64 bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]The Relentless Drive towards Smaller TransistorsThe scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.Falling unit cost was important to the industry because, if the latest chipsare cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.The degree of integration is measured by the feature size, which, for a given technology, is best defined as the half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building upSuspension of LawIn March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy’s law.or Sod’s law as it is usually called in the UK. Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949. 
However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem;A cyclist sets out on a circular cycling tour. Derive an equation giving the direction of the wind at any time.The single-chip computerAt each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.Eventually, shrinkage proceeded to the point at which the whole processor except for the caches could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.From the above time the high density CMOS silicon chip was Cock of the Roost. Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution and other forms of prediction followedEqually surprising is what it has been found possible to put on a single chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91.the giant computer at the top of the System 360 range.are now to be found on microcomputersMurphy’s Law remained in a state of suspension. No longer did it make se nse to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek for ways to get them made. For a time, this was possible, if not easyUnfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is a currently cause for some concern.The Semiconductor Road MapThe extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of theinternational semiconductor industry.At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However about 1980 significant and far reaching changes took place in the laws. The concept of pre-competitive research was introduced. 
Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.The agent by which the pre-competitive research in the semi-conductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.Every two years SIA produces a new version of a document known as the International Technological Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title ‘Roadmap’ was issued in 1994 but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail.over a 15 year horizon. the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months.that is, if Moore’s law is to be maintained.-and if the cost per chip is to fall.In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner.It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller.My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was in later Roadmaps moved forward to 2004 and in the event the industry got there a little sooner.I presented the above information from the 1994 Roadmap, along with such other information that I could obtain, in a lecture to the IEE in London, entitled The CMOS end-point and related topics in Computing and delivered on 8 February 1996.The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster. 
In fact the physical limitations that are now beginning to make themselves felt do not arise through shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that the continuation of progress at present rate will be at risk as we approach 2005 when the roadmap projects that progress will stall without research break-throughs in most technical areas “. This was the most specific statement about the Red Brick Wall, that had so far come from the SIA and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming above, it radiates immense confidence. Prevailing opinion reflects that confidence and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met willdepend on the general economic climate as well as on the financial strength of the semiconductor industry itself.。

数据库安全中英文对照外文翻译文献

数据库安全中英文对照外文翻译文献

中英文对照外文翻译文献(文档含英文原文和中文翻译)Database Security in a Web Environment IntroductionDatabases have been common in government departments and commercial enterprises for many years. Today, databases in any organization are increasingly opened up to a multiplicity of suppliers, customers, partners and employees - an idea that would have been unheard of a few years ago. Numerous applications and their associated data are now accessed by a variety of users requiring different levels of access via manifold devices and channels – often simultaneously. For example:• Online banks allow customers to perform a variety of banking operations - via the Internet and over the telephone – whilst maintaining the privacy of account data.• E-Commerce merchants and their Service Providers must store customer, order and payment data on their merchant server - and keep it secure.• HR departments allow employees to update their personal information –whilst protecting certain management information from unauthorized access.• The medical profession must protect the confidentiality of patient data –whilst allowing essential access for treatment.• Online brokerages need to be able to provide large numbers of simultaneous users with up-to-date and accurate financial information.This complex landscape leads to many new demands upon system security. The global growth of complex web-based infrastructures is driving a need for security solutions that provide mechanisms to segregate environments; perform integrity checking and maintenance; enable strong authentication andnon-repudiation; and provide for confidentiality. In turn, this necessitates comprehensive business and technical risk assessment to identify the threats,vulnerabilities and impacts, and from this define a security policy. This leads to security definitions throughout the infrastructure - operating system, database management system, middleware and network.Financial, personal and medical information systems and some areas of government have strict requirements for security and privacy. Inappropriate disclosure of sensitive information to the wrong parties can have severe social, legal and regulatory consequences. Failure to address the basics can result in substantial direct and consequential financial losses - witness the fraud losses through the compromise of several million credit card numbers in merchants’ databases [Occf], plus associated damage to brand-image and loss of consumer confidence.This article discusses some of the main issues in database and web server security, and also considers important architecture and design issues.A Simple ModelAt the simplest level, a web server system consists of front-end software and back-end databases with interface software linking the two. Normally, the front-end software will consist of server software and the network server operating system, and the back-end database will be a relational orobject-oriented database fulfilling a variety of functions, including recording transactions, maintaining accounts and inventory. The interface software typically consists of Common Gateway Interface (CGI) scripts used to receive information from forms on web sites to perform online searches and to update the database.Depending on the infrastructure, middleware may be present; in addition, security management subsystems (with session and user databases) that address the web server’s and related applications’ requirements for authentication, accesscontrol and authorization may be present. 
Communications between this subsystem and either the web server, middleware or database are via application program interfaces (APIs)..This simple model is depicted in Figure 1.Security can be provided by the following components:• Web server.• Middleware.• Operating system.. Figure 1: A Simple Model.• Database and Database Management System.• Security management subsystem.The security of such a system addressesAspects of authenticity, integrity and confidentiality and is dependent on the security of the individual components and their interactions. Some of the most common vulnerabilities arise from poor configuration, inadequate change control procedures and poor administration. However, even if these areas are properlyaddressed, vulnerabilities still arise. The appropriate combination of people, technology and processes holds the key to providing the required physical and logical security. Attention should additionally be paid to the security aspects of planning, architecture, design and implementation.In the following sections, we consider some of the main security issues associated with databases, database management systems, operating systems and web servers, as well as important architecture and design issues. Our treatment seeks only to outline the main issues and the interested reader should refer to the references for a more detailed description.Database SecurityDatabase management systems normally run on top of an operating system and provide the security associated with a database. Typical operating system security features include memory and file protection, resource access control and user authentication. Memory protection prevents the memory of one program interfering with that of another and limits access and use of the objects employing techniques such as memory segmentation. The operating system also protects access to other objects (such as instructions, input and output devices, files and passwords) by checking access with reference to access control lists. Security mechanisms in common operating systems vary tremendously and, for those that are lacking, there exists special-purpose security software that can be integrated with the existing environment. However, this can be an expensive, time-consuming task and integration difficulties may also adversely impact application behaviors.Most database management systems consist of a number of modules - including database querying and database and file management - along with authorization, concurrent access and database description tables. Thesemanagement systems also use a variety of languages: a data definition language supports the logical definition of the database; developers use a data manipulation language; and a query language is used by non-specialist end-users.Database management systems have many of the same security requirements as operating systems, but there are significant differences since the former are particularly susceptible to the threat of improper disclosure, modification of information and also denial of service. Some of the most important security requirements for database management systems are: • Multi-Level Access Control.• Confidentiality.• Reliability.• Integrity.• Recovery.These requirements, along with security models, are considered in the following sections.Multi-Level Access ControlIn a multi-application and multi-user environment, administrators, auditors, developers, managers and users – collectively called subjects - need access to database objects, such as tables, fields or records. 
Access control restricts the operations available to a subject with respect to particular objects and is enforced by the database management system. Mandatory access controls require that each controlled object in the database must be labeled with a security level, whereas discretionary access controls may be applied at the choice of a subject.Access control in database management systems is more complicated than in operating systems since, in the latter, all objects are unrelated whereas in a database the converse is true. Databases are also required to make accessdecisions based on a finer degree of subject and object granularity. In multi-level systems, access control can be enforced by the use of views - filtered subsets of the database - containing the precise information that a subject is authorized to see.A general principle of access control is that a subject with high level security should not be able to write to a lower level object, and this poses a problem for database management systems that must read all database objects and write new objects. One solution to this problem is to use a trusted database management system.ConfidentialitySome databases will inevitably contain what is considered confidential data. For example, it could be inherently sensitive or its source may be sensitive, or it may belong to a sensitive table, thus making it difficult to determine what is actually confidential. Disclosure is also difficult to define, as it can be direct, indirect, involve the disclosure of bounds or even mere existence.An inference problem exists in database management systems whereby users can infer sensitive information from relatively insensitive queries. A trivial example is a request for information about the average salary of an employee and the number of employees turns out to be just one, thus revealing the employee’s salary. However, much more sophisticated statistical inference attacks can also be mounted. This highlights the fact that, although the data itself may be properly controlled, confidential information may still leak out.Controls can take several forms: not divulging sensitive information to unauthorized parties (which depends on the respective subject and object security levels), logging what each user knows or masking response data. The first control can be implemented fairly easily, the second quickly becomesunmanageable for a large number of users and the third leads to imprecise responses, and also exemplifies the trade-off between precision and security. Polyinstantiation refers to multiple instances of a data object existing in the database and it can provide a partial solution to the inference problem whereby different data values are supplied, depending on the security level, in response to the same query. However, this makes consistency management more difficult.Another issue that arises is when the security level of an aggregate amount is different to that of its elements (a problem commonly referred to as aggregation). This can be addressed by defining appropriate access control using views.Reliability, Integrity and RecoveryArguably, the most important requirements for databases are to ensure that the database presents consistent information to queries and can recover from any failures. 
An important aspect of consistency is that transactions execute atomically; that is, they either execute completely or not at all.Concurrency control addresses the problem of allowing simultaneous programs access to a shared database, while avoiding incorrect behavior or interference. It is normally addressed by a scheduler that uses locking techniques to ensure that the transactions are serial sable and independent. A common technique used in commercial products is two-phase locking (or variations thereof) in which the database management system controls when transactions obtain and release their locks according to whether or not transaction processing has been completed. In a first phase, the database management system collects the necessary data for the update: in a second phase, it updates the database. This means that the database can recover from incomplete transactions by repeatingeither of the appropriate phases. This technique can also be used in a distributed database system using a distributed scheduler arrangement.System failures can arise from the operating system and may result in corrupted storage. The main copy of the database is used for recovery from failures and communicates with a cached version that is used as the working version. In association with the logs, this allows the database to recover to a very specific point in the event of a system failure, either by removing the effects of incomplete transactions or applying the effects of completed transactions. Instead of having to recover the entire database after a failure, recovery can be made more efficient by the use of check pointing. It is used during normal operations to write additional updated information - such as logs, before-images of incomplete transactions, after-images of completed transactions - to the main database which reduces the amount of work needed for recovery. Recovery from failures in distributed systems is more complicated, since a single logical action is executed at different physical sites and the prospect of partial failure arises.Logical integrity, at field level and for the entire database, is addressed by the use of monitors to check important items such as input ranges, states and transitions. Error-correcting and error-detecting codes are also used.Security ModelsVarious security models exist that address different aspects of security in operating systems and database management systems. For example, theBell-LaPadula model defines security in terms of mandatory access control and addresses confidentiality only. The Bell LaPadula models, and other models including the Biba model for integrity, are described more fully in [Cast95] and [Pfle89]. These models are implementation-independent and provide a powerfulinsight into the properties of secure systems, lead to design policies and principles, and some form the basis for security evaluation criteria.Web Server SecurityWeb servers are now one of the most common interfaces between users and back-end databases, and as such, their security becomes increasingly important. Exploitation of vulnerabilities in the web server can lead to unforeseen attacks on middleware and backend databases, bypassing any controls that may be in place. In this section, we focus on common web server vulnerabilities and how the authentication requirements of web servers and databases are met.In general, a web server platform should not be shared with other applications and should be the only machine allowed to access the database. 
Using a firewall can provide additional security - either between the web server and users or between the web server and back-end database - and often the web server is placed on a de-militarized zone (DMZ) of a firewall. While firewalls can be used to block certain incoming connections, they must allow HTTP (and HTTPS) connections through to the web server, and so attacks can still be launched via the ports associated with these connections.VulnerabilitiesVulnerabilities appear on a weekly basis and, here, we prefer to focus on some general issues rather than specific attacks. Common web server vulnerabilities include:• No policy exists.• The default configuration is on.• Reusable passwords appear in clear.• Unnecessary ports available for network services are not disabled.• New security holes are not tracked. Even if they are, well-known vulnerabilities are not always fixed as the source code patches are not applied by system administrator and old programs are not re-compiled or removed.• Security tools are not used to scan the network for weaknesses and changes or to detect intrusions.• Faulty and buggy software - for example, buffer overflow and stack smashingAttacks• Automatic directory listings - this is of particular concern for the interface software directories.• Server root files are generally visible or accessible.• Lack of logs and bac kups.• File access is often not explicitly configured by the system administrator according to the security policy. This applies to configuration, client, administration and log files, administration programs, and CGI program sources and executables. CGI scripts allow dynamic web pages and make program development (in, for example, Perl) easy and rapid. However, their successful exploitation may allow execution of malicious programs, launching ofdenial-of-service attacks and, ultimately, privilege escalation on a server.Web Server and Database AuthenticationWhile user, browser and web server authentication are relatively well understood [Garf97], [Ghos98] and [Tree98], the introduction of additional components, such as databases and middleware, raise a number of authentication issues. There are a variety of options for authentication in a simple model (Figure 1). Firstly, both the web server and database management system can individually authenticate a user. This option requires the user to authenticatetwice which may be unacceptable in certain applications, although a singlesign-on device (which aims to manage authentication in a user-transparent way) may help. Secondly, a common approach is for the database to automatically grant user access based on web server authentication. However, this option should only be used for accessing publicly available information. Finally, the database may grant user access employing the web server authentication credentials as a basis for its own user authentication, using security management subsystems (Figure 1). We consider this last option in more detail.Web-based communications use the stateless HTTP protocol with the implication that state, and hence authentication, is not preserved when browsing successive web pages. Cookies, or files placed on user’s machine by a web server, were developed as a means of addressing this issue and are often used to provide authentication. However, after initial authentication, there is typically no re authentication per page in the same realm, only the use of unencrypted cookies (sometimes in association with IP addresses). 
This approach provides limited security as both cookies and IP addresses can be tampered with or spoofed.A stronger authentication method, commonly used by commercial implementations, uses digitally signed cookies. This allows additional systems, such as databases, to use digitally signed cookie data, including a session ID, as a basis for authentication. When a user has been authenticated by a web server (using a password, for example), a session ID is assigned and is stored in a security management subsystem database. When a user subsequently requests information from a database, the database receives a copy of the session ID, the security management subsystem checks this session ID against its local copy and, if authentication is successful, user access is granted to the database.The session ID is typically transmitted in the clear between the web server and database, but may be protected by SSL or even by physical security measures. The communications between the browser and web servers, and the web servers and security management subsystem (and its databases), are normally protected by SSL and use a web server security API that is used to digitally sign and verify browser cookies. The communications between the back-end databases and security management subsystem (and its databases) are also normally protected by SSL and use a database security API that verifies session Ids originating from the database and provides additional user authorization credentials. The web server security API is generally proprietary while, for the database security API, many vendors have adopted standards such as the Generic Security Services API (GSS-API) or CORBA [RFC2078] and [Corba].Architecture and DesignSecurity requirements for designing, building and implementing databases are important so that the systems, as part of the overall infrastructure, meet their requirements in actual operation. The various security models provide an important insight into the design requirements for databases and their management systems.Secure Database Management System ArchitecturesIn multi-level database management systems, a variety of architectures are possible: trusted subject, integrity locked, kernels and replicated. Trusted subject is used by most of the leading database management system vendors and can be integrated in existing products. Basically, the trusted subject architecture allows users to access a database via an un trusted front-end, a trusted database management system and trusted operating system. The operating systemprovides physical access to the database and the database management system provides multilevel object protection.The other architectures - integrity locked, kernels and replicated - all vary in detail, but they use a trusted front-end and an un trusted database management system. For details of these architectures and research prototypes, the reader is referred to [Cast95]. Different architectures are suited to different environments: for example, the trusted subject architecture is less integrated with the underlying operating system and is best suited when a trusted path can be assured between applications and the database management system.Secure Database Management System DesignAs discussed above, there are several fundamental differences between operating system and database management system design, including object granularity, multiple data types, data correlations and multi-level transactions. 
Other differences include the fact that database management systems include both physical and logical objects and that the database lifecycle is normally longer.These differences must be reflected in the design requirements which include:• Access, flow and infer ence controls.• Access granularity and modes.• Dynamic authorization.• Multi-level protection.• Polyinstantiation.• Auditing.• Performance.These requirements should be considered alongside basic information integrity principles, such as:• Well-formed transactions - to ensure that transactions are correct and consistent.• Continuity of operation - to ensure that data can be properly recovered, depending on the extent of a disaster.• Authorization and role management – to ensure that distinct roles are defined and users are authorized.• Authenticated users - to ensure that users are authenticated.• Least privilege - to ensure that users have the minimal privilege necessary to perform their tasks.• Separation of duties - to ensure that no single individual has access to critical data.• Delegation of authority - to ensure that the database management system policies are flexible enough to meet the organization’s requirements.Of course, some of these requirements and principles are not met by the database management system, but by the operating system and also by organizational and procedural measures.Database Design MethodologyVarious approaches to design exist, but most contain the same main stages. The principle aim of a design methodology is to provide a robust, verifiable design process and also to separate policies from how policies are actually implemented. An important requirement during any design process is that different design aspects can be merged and this equally applies to security.A preliminary analysis should be conducted that addresses the system risks, environment, existing products and performance. Requirements should then beanalyzed with respect to the results of a risk assessment. Security policies should be developed that include specification of granularity, privileges and authority.These policies and requirements form the input to the conceptual design that concentrates on subjects, objects and access modes without considering implementation details. Its purpose is to express information and process flows in a complete and consistent way.The logical design takes into account the operating system and database management system that will be used and which of the security requirements can be provided by which mechanisms. The physical design considers the actual physical realization of the logical design and, indeed, may result in a revision of the conceptual and logical phases due to physical constraints.Security AssuranceOnce a product has been developed, its security assurance can be assessed by a number of methods including formal verification, validation, penetration testing and certification. For example, if a database is to be certified as TCSEC Class B1, then it must implement the Bell-LaPadula mandatory access control model in which each controlled object in the database must be labeled with a security level.Most of these methods can be costly and lengthy to perform and are typically specific to particular hardware and software configurations. 
However, the international Common Criteria certification scheme provides the added benefit of a mutual recognition arrangement, thus avoiding the prospect of multiple certifications in different countries.

Conclusion
This article has considered some of the security principles that are associated with databases and how these apply in a web-based environment. It has also focused on important architecture and design principles. These principles have focused mainly on the prevention, assurance and recovery aspects, but other aspects, such as detection, are equally important in formulating a total information protection strategy. For example, host-based intrusion detection systems as well as a robust and tested set of business recovery procedures should be considered.

Any fit-for-purpose, secure e-business infrastructure should address all the above aspects: prevention, assurance, detection and recovery. Certain industries are now starting to specify their own set of global, secure e-business requirements. International card payment associations have recently started to require minimum information security standards from electronic commerce merchants handling credit card data, to help manage fraud losses and associated impacts such as brand-image damage and loss of consumer confidence.

网络环境下的数据库安全
简介
数据库在政府部门和商业机构得到普遍应用已经很多年了。

电子商务信息安全中英文对照外文翻译文献

电子商务信息安全中英文对照外文翻译文献(文档含英文原文和中文翻译)原文:E-commerce Information Security ProblemsⅠ. IntroductionE-commerce (E-Business) is in open networks, including between enterprises (B2B), business and consumers (B2C) commercial transactions, compared with the traditional business model, e-commerce with efficient, convenient, covered wide range of characteristics and benefits. However, e-commerce open this Internet-based data exchange is great its security vulnerabilities, security is a core e-commerce development constraints and key issues.In this paper, the basic ideas and principles of systems engineering, analyzes the current security threats facing e-commerce, in this based on security technology from the perspective of development trend of e-commerce.Ⅱ. E-commerce modelModern e-commerce technology has focused on the establishment and operation of the network of stores. Network in the department stores and real stores no distinction between structure and function, differences in their function and structure to achieve these methods and the way business operate.Web store from the front view is a special kind of WEB server. WEB site of modern multimedia support and a good interactive feature as the basis for the establishment of this virtual store, so customers can, as in a real supermarket pushing a shopping cart to select goods, and finally in the checkout check out. These online stores also constitute the three pillars of software: catalog, shopping cart and customer checkout. Customers use an electronic currency and transaction must store customers and stores are safe and reliable.Behind the store in the network, enterprises must first have a product storage warehouse and administration; second network to sell products by mail or other delivery channels to customers hands; Third, enterprises should also be responsible for product after-sales service, This service may be through networks, may not. Internet transactions are usually a first Pay the bill and getting goods shopping. For customers, convenience is that the goods purchased will be directly delivered to their home, but hard to feel assured that the goods can not be confirmed until the handsreach into their own hands, what it is.Therefore, the credibility of the store network and service quality is actually the key to the success of e-commerce.Ⅲ.the key to development of electronic commerceE-commerce in the telecommunications network to develop. Therefore, the advanced computer network infrastructure and telecommunications policy easing the development of electronic commerce has become a prerequisite. Currently, telecom services, high prices, limited bandwidth, the service is not timely or not reliable and so the development of e-commerce has become a constraint. Speed up the construction of telecommunications infrastructure, to break the telecommunications market monopoly, introduce competition mechanism to ensure fair competition in the telecommunications business, to promote networking, ensure to provide users with low-cost, high-speed, reliable communications services is a good construction target network environment, but also all of the world common task.E-commerce the most prominent problem is to solve the on-line shopping, trading and clearing of security issues, including the establishment of e-commerce trust between all the main issues, namely the establishment of safety certification system (CA) issues; choose safety standards (such as SET , SSL, PKI, etc.) 
problems; using encryption and decryption method and encryption strength problems. Establishment of security authentication system which is the key.Online trading and traditional face to face or written transactions in different ways, it is transmitted through the network business information and trade activities. The security of online transactions means:Validity: the validity of the contract to ensure online transactions, to prevent system failure, computer viruses, hacker attacks.Confidentiality: the content of the transaction, both transactions account, the password is not recognized by others and stealing.Integrity: to prevent the formation of unilateral transaction information and modify.Therefore, the e-commerce security system should include: secure and reliable communications network to ensure reliable data transmission integrity, prevent viruses, hackers; electronic signatures and other authentication systems; complete data encryption system and so on.Ⅳ.e-commerce security issues facingAs e-commerce network is the computer-based, it inevitably faces a number of security issues.(1) Information leakPerformance in e-commerce for the leakage of business secrets, including two aspects: the parties are dealing transactions by third parties to steal the contents; transaction to the other party to provide documents used illegal use by third parties.(2) AlteredE-commerce information for business performance in the authenticity and integrity issues. Electronic transaction information in the network transmission process may be others to illegally modify, delete or re-changed, so that information about its authenticity and integrity.(3) IdentificationWithout identification, third-party transactions is likely to fake the identity of parties to a deal breaker, damage the reputation of being counterfeit or stolen by one party to the transaction fake results and so on, for identification, the transaction between the two sides can prevent suspicion situation.(4) Computer virusesComputer virus appeared 10 years, a variety of new virus and its variants rapidly increasing, the emergence of the Internet for the spread of the virus has provided the best medium. Many new viruses directly using the network as its transmission, as well as many viruses spread faster through dried networks, frequently causing billions of dollars in economic losses.(5) HackerWith the spread of a variety of application tools, hackers have been popular, and are not in the past; non-computer expert can not be a hacker. Have kicked Yahoo's mafia boy did not receive any special training, only a few attacks to the users to download software and learn how to use the Internet on a big dry.Ⅴ.e-commerce security and safety factorsEnterprise application security is the most worried about e-commerce, and how to protect the security of e-commerce activities, will remain the core of e-commerce research. As a secure e-commerce system, we must first have a safe, reliable communication network, to ensure that transaction information secure and rapidtransmission; second database server to ensure absolute security against hackers break into networks to steal information. 
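To make the integrity and identification requirements just listed concrete, the hedged Java sketch below signs an order message with an RSA key pair and verifies it, showing that any alteration of the message invalidates the signature. The order text, key size and class name are assumptions for illustration; a real e-commerce system would use certificates issued by a CA and an agreed message format rather than an in-memory key pair.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class OrderSignatureDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical order message; real systems sign a canonical encoding.
        byte[] order = "order-4711: 2 items, total 399.00 CNY"
                .getBytes(StandardCharsets.UTF_8);

        // The buyer holds the private key; the merchant verifies with the public key.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair keys = gen.generateKeyPair();

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(order);
        byte[] signature = signer.sign();

        // Verification fails if either the order bytes or the signature were altered,
        // which addresses the "altered" and "identification" threats listed above.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(order);
        System.out.println("signature valid: " + verifier.verify(signature));

        byte[] tampered = "order-4711: 2 items, total 1.00 CNY".getBytes(StandardCharsets.UTF_8);
        verifier.initVerify(keys.getPublic());
        verifier.update(tampered);
        System.out.println("tampered order valid: " + verifier.verify(signature));
    }
}

Digital certificates and timestamps, discussed in the next part, bind such a public key to a verified identity and a point in time.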
E-commerce security technologies include encryption, authentication technology and e-commerce security protocols, firewall technology.(A), encryption technologyTo ensure the security of data and transactions to prevent fraud, to confirm the true identity of transaction parties, e-commerce to adopt encryption technology, encryption technology is through the use of code or password to protect data security. For encrypted data is called plaintext, specifically through the role of a encryption algorithm, the conversion into cipher text, we will express this change as the cipher text is called encryption, the cipher text by the decryption algorithm to form a clear role in the output of this a process known as decryption. Encryption algorithm known as the key parameters used. The longer the key, the key space is large, traverse the key space the more time spent, the less likely cracked.Encryption technology can be divided into two categories: symmetric encryption and asymmetric encryption. Symmetric encryption to the data encryption standard DES (Data Encryption Standard) algorithm is represented. Asymmetric encryption is usually RSA (Rivets Shamir Aleman) algorithm is represented.(B), authenticationCommonly used security authentication technologies: digital signatures, digital certificates, digital time stamp, CA security authentication technology.(C), hacker protection technologyCurrently, hackers have become the biggest e-commerce security threats, thus preventing hacking network security technology has become the main content, by governments and industry are highly valued. Hacking techniques include buffer overflow attacks, Trojans, port scans, IP fraud, network monitoring, password attacks, and denial of service Dos attacks. At present, people have made many effective anti-hacker technologies, including firewalls, intrusion detection, and network security evaluation techniques.Ⅵ.the future security of e-commerceIncreasingly severe security problems, are growing threat to national and global economic security, governments have been based on efforts in the following areas: (1) Strengthen the legislation, refer to the advanced countries have effective legislation, innovative, e-commerce and improve the protection of the laws againstcyber-crime security system.(2) Establishment of relevant institutions, to take practical measures to combat cyber crime. Development of the law, the implementing agencies should also be used for its relevant laws, which must establish an independent oversight body, such as the executing agency to implement the law.(3) Increase investment in network security technology; improve the level of network security technology. E-commerce security law is the prerequisite and basis for development and secure e-commerce security technology is a means of protection. There are many security issues are technical reasons, it should increase the technology resources, and continuously push forward the development of old technologies and developing new security technology.(4) To encourage enterprises to protect themselves against Internet crime against. 
To avoid attack, companies can not hold things to chance, must attach great importance to system vulnerabilities, in time to find security holes to install the operating system and server patches, and network security detection equipment should be used regularly scan the network monitoring, develop a set of complete security protection system to enable enterprises to form a system and combined with the comprehensive protection system.(5) To strengthen international cooperation to strengthen global efforts to combat cyber crime. As e-commerce knows no borders, no geographical, it is a completely open area, so the action against cyber crime e-commerce will also be global. This will require Governments to strengthen cooperation, can not have "the saying which goes, regardless of others, cream tile" misconception.(6) To strengthen the network of national safety education, pay attention to the cultivation of outstanding computer.Ⅶ. ConclusionE-commerce in China has developed rapidly in recent years, but the security has not yet established. This has an impact on the development of electronic commerce as a barrier.To this end, we must accelerate the construction of the e-commerce security systems. This will be a comprehensive, systematic project involving the whole society. Specifically, we want legal recognition of electronic communications records of the effectiveness of legal protection for electronic commerce; we should strengthen the research on electronic signatures, to protect e-commerce technology; we need to build e-commerce authentication system as soon as possible, to organize protection for electronic commerce. Moreover, for e-commerce features without borders, we shouldalso strengthen international cooperation, so that e-commerce truly plays its role. Only in this way, we can adapt to the timesPromoting China's economic development; also the only way we can in the economic globalization today, to participate in international competition, and thus gain a competitive advantage.Source: Michael Hecker, Tharam S. Dillon, and Elizabeth Chang IEEE Internet Computing prentice hall publishing, 2002电子商务中的信息安全问题一、引言电子商务(E-Business)是发生在开放网络上的包括企业之间(B2B)、企业和消费者之间(B2C)的商业交易,与传统商务模式相比,电子商务具有高效、便捷、覆盖范围广等特点和优点。

外文文献及其翻译电子政务信息

1. 政府信息化的含义?
政府信息化是指:政府有效利用现代信息和通信技术,通过不同的信息服务设施,对政府的业务流程、组织结构、人员素质等诸方面进行优化、改造的过程。

2. 广义和狭义的电子政务的定义?
广义的电子政务是指:运用信息技术和通信技术实现党委、人大、政协、政府、司法机关、军队系统和企事业单位的行政管理活动。(电子党务、电子人大、电子政协)
狭义的电子政务是指:政府在其管理和服务职能中运用现代信息和通信技术,实现政府组织结构和工作流程的重组优化,超越时间、空间和部门分隔的制约,全方位地向社会提供优质规范、透明的服务,是政府管理手段的变革。

3. 电子政务的组成部分?
①:政府部门内部办公职能的电子化和网络化;②:政府职能部门之间通过计算机网络实现有权限的实时互通的信息共享;③:政府部门通过网络与公众和企业间开展双向的信息交流与策。

4. 理解电子政务的发展动力?
①:信息技术的快速发展;②:政府自身改革与发展的需要;③:信息化、民主化的社会需求的推动。

5. 电子政务的应用模式?
模式有:1.政府对公务员的电子政务(G2E);2.政府间的电子政务(G2G);3.政府对企业的电子政务(G2B);4.政府对公众的电子政务(G2C)。

6. 电子政务的功能?
①:提高工作效率,降低办公成本;②:加快部门整合,堵塞监管漏洞;③:提高服务水平,便于公众的监督;④:带动社会信息化发展。

7. 我国电子政务发展存在的主要问题?
①:政府公务员与社会公众对电子政务的认识不足;②:电子政务发展缺乏整体规划和统一性标准;③:电子政务管理体制改革远未到位;④:电子政务整体应用水平还较低;⑤:政府公务员的素质有待提高;⑥:电子政务立法滞后;⑦:对电子政务安全问题缺乏正确认识。

8. 政府创新的含义和内容?
含义:是指各级政府为适应公共管理与行政环境的需要,与时俱进地转变观念与职能,探索新的行政方法与途径,形成新的组织结构、业务流程和行政规范,全面提高行政效率,更好地履行行政职责的实践途径。

外文文献及中文翻译

毕业设计说明书英文文献及中文翻译学专 指导教师:2014 年 6 月软件学院JSP Technology Conspectus And Specialties The JSP (Java Server mix) technology is used by the Sun microsystem issued by the company to develop dynamic Web application technology. With its easy, cross-platform, in many dynamic Web application programming languages, in a short span of a few years, has formed a complete set of standards, and widely used in electronic commerce, etc. In China, the JSP now also got more extensive attention, get a good development, more and more dynamic website to JSP technology. The related technologies of JSP are briefly introduced.The JSP a simple technology can quickly and with the method of generating Web pages. Use the JSP technology Web page can be easily display dynamic content. The JSP technology are designed to make the construction based on Web applications easier and efficient, and these applications and various Web server, application server, the browser and development tools work together.The JSP technology isn't the only dynamic web technology, also not the first one, in the JSP technology existed before the emergence of several excellent dynamic web technology, such as CGI, ASP, etc. With the introduction of these technologies under dynamic web technology, the development and the JSP. Technical1 JSP the development background and development historyIn web brief history, from a world wide web that most of the network information static on stock transactions evolution to acquisition of an operation and infrastructure. In a variety of applications, may be used for based on Web client, look no restrictions.Based on the browser client applications than traditional based on client/server applications has several advantages. These benefits include almost no limit client access and extremely simplified application deployment and management (to update an application, management personnel only need to change the program on a server, notthousands of installation in client applications). So, the software industry is rapidly to build on the client browser multi-layer application.The rapid growth of exquisite based Web application requirements development of technical improvements. Static HTML to show relatively static content is right choice, The new challenge is to create the interaction based on Web applications, in these procedures, the content of a Web page is based on the user's request or the state of the system, and are not predefined characters.For the problem of an early solution is to use a CGI - BIN interface. Developers write to interface with the relevant procedures and separate based on Web applications, the latter through the Web server to invoke the former. This plan has serious problem -- each new extensible CGI requirements in a new process on the server. If multiple concurrent users access to this procedure, these processes will use the Web server of all available resources, and the performance of the system will be reduced to extremely low.Some Web server providers have to provide for their server by plugins "and" the API to simplify the Web application development. These solutions are associated with certain Web server, cannot solve the span multiple suppliers solutions. For example, Microsoft's Active Server mix (ASP) technology in the Web page to create dynamic content more easily, but also can work in Microsoft on Personal Web Server and IIS.There are other solutions, but cannot make an ordinary page designers can easily master. 
For example, such as the Servlet Java technologies can use Java language interaction application server code easier. Developers to write such Servlet to receive signals from the Web browser to generate an HTTP request, a dynamic response (may be inquires the database to finish the request), then send contain HTML or XML documents to the response of the browser.note: one is based on a Java Servlet Java technical operation in the server program (with different, the latter operating in the Applet browser end). In this book the Servlet chapter 4.Using this method, the entire page must have made in Java Servlet. If developers or Web managers want to adjust page, you'll have to edit and recompile the Servlet Java, even in logic has been able to run. Using this method, the dynamic content with the application of the page still need to develop skills.Obviously, what is needed is a industry to create dynamic content within the scope of the pages of the solution. This program will solve the current scheme are limited. As follows:can on any Web server or applications.will application page displays and separation.can rapidly developing and testing.simplify the interactive development based on Web application process.The JSP technology is designed to meet such requirements. The JSP specification is a Web server, application server, trading system and develop extensive cooperation between the tool suppliers. From this standard to develop the existing integration and balance of Java programming environment (for example, Java Servlet and JavaBeans) support techniques and tools. The result is a kind of new and developing method based on Web applications, using component-based application logic page designers with powerful functions.2 Overall Semantics of a JSP PageA JSP page implementation class defines a _jspService() method mapping from the request to the response object. Some details of this transformation are specific to the scripting langu age used (see Chapter JSP.9, “Scripting”). Most details are not language specific and are described in this chapter.The content of a JSP page is devoted largely to describing the data that is written into the output stream of the response. (The JSP container usually sends this data back tothe client.) The description is based on a JspWriter object that is exposed through the implicit object out (see Section JSP.1.8.3, “Implicit Objects”). Its value varies: Initially, out is a new JspWriter object. This object may be different from the stream object returned from response.getWriter(), and may be considered to be interposed on the latter in order to implement buffering (see Section JSP.1.10.1, “The page Directive”). This is the initial out object. JSP page authors are prohibited from writing directly to either the PrintWriter or OutputStream associated with the ServletResponse.The JSP container should not invoke response.getWriter() until the time when the first portion of the content is to be sent to the client. This enables a number of uses of JSP, including using JSP as a language to “glue” actions that deliver binary content, or reliably forwarding to a servlet, or change dynamically the content type of the response before generating content. See Chapter JSP.4, “Internationalization Issues”.Within the body of some actions, out may be temporarily re-assigned to a different (nested) instance of a JspWriter object. Whether this is the case depends on the details of the action’s semantics. 
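Before continuing with nested streams, here is a small, assumed illustration of the implicit out object and the buffering attributes of the page directive discussed above. The page content and buffer size are arbitrary; getBufferSize() and getRemaining() are standard JspWriter methods.

<%@ page contentType="text/html; charset=UTF-8" buffer="8kb" autoFlush="true" %>
<html>
  <body>
    <%-- The implicit JspWriter is available as "out" inside scriptlets. --%>
    <% out.println("<p>Buffer size: " + out.getBufferSize() + " bytes</p>"); %>
    <%-- A JSP expression writes its value through the same buffered writer. --%>
    <p>Space left in the buffer: <%= out.getRemaining() %> bytes</p>
  </body>
</html>

Whether output written this way reaches the client immediately or waits in the buffer depends on the buffer and autoFlush settings shown in the directive.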
Typically the conte nt of these temporary streams is appended to the stream previously referred to by out, and out is subsequently re-assigned to refer to the previous (nesting) stream. Such nested streams are always buffered, and require explicit flushing to a nesting stream or their contents will be discarded.If the initial out JspWriter object is buffered, then depending upon the value of the autoFlush attribute of the page directive, the content of that buffer will either be automatically flushed out to the ServletResponse output stream to obviate overflow, or an exception shall be thrown to signal buffer overflow. If the initial out JspWriter is unbuffered, then content written to it will be passed directly through to the ServletResponse output stream.A JSP page can also describe what should happen when some specific events occur. In JSP 2.1, the only events that can be described are the initialization and the destructionof the page. These events are described using “well-known method names” in declaration elements..JavaScript is used for the first kind is browser, the dynamic general purpose of client scripting language. Netscape first proposed in 1995, but its JavaScript LiveScript called. Then quickly Netscape LiveScript renamed JavaScript, Java developers with them from the same issued a statement. A statement Java and JavaScript will complement each other, but they are different, so the technology of the many dismissed the misunderstanding of the two technologies.JavaScript to create user interface control provides a scripting language. In fact, in the browser into the JavaScript code logic. It can support such effect: when the cursor on the Web page of a mobile user input validation or transform image.Microsoft also write out their JavaScript version and the JScript called. Microsoft and Netscape support JavaScript and JScript around a core characteristics and European Manufacturers is.md by (ECMA) standards organization, the control standard of scripting language. ECMA its scripting language ECMAScript named.Servlets and JSPs often include fragments of information that are common to an organization, such as logos, copyrights, trademarks, or navigation bars. The web application uses the include mechanisms to import the information wherever it is needed, since it is easier to change content in one place then to maintain it in every piece of code where it is used. Some of this information is static and either never or rarely changes, such as an organization's logo. In other cases, the information is more dynamic and changes often and unpredictably, such as a textual greeting that must be localized for each user. In both cases, you want to ensure that the servlet or JSP can evolve independently of its included content, and that the implementation of the servlet or JSP properly updates its included content as necessary.You want to include a resource that does not change very much (such as a page fragment that represents a header or footer) in a JSP. Use the include directive in the including JSP page, and give the included JSP segment a .jspf extension.You want to include content in a JSP each time it receives a request, rather than when the JSP is converted to a servlet. Use the jsp:include standard action.You want to include a file dynamically in a JSP, based on a value derived from a configuration file. Use the jsp:include standard action. 
Provide the value in an external properties file or as a configuration parameter in the deployment descriptor.You want to include a fragment of an XML file inside of a JSP document, or include a JSP page in XML syntax. Use the jsp:include standard action for the includes that you want to occur with each request of the JSP. Use the jsp:directive.include element if the include action should occur during the translation phase.You want to include a JSP segment from outside the including file's context. Use the c:import3 The operation principle and the advantages of JSP tagsIn this section of the operating principle of simple introduction JSP and strengths.For the first time in a JSP documents requested by the engine, JSP Servlet is transformed into a document JSP. This engine is itself a Servlet. The operating process of the JSP shown below:(1) the JSP engine put the JSP files converting a Java source files (Servlet), if you find the files have any grammar mistake JSP, conversion process will interrupt, and to the server and client output error messages.(2) if converted, with the engine JSP javac Java source file compiler into a corresponding scale-up files.(3) to create a the Servlet (JSP page), the transformation of the Servlet jspInit () method was executed, jspInit () method in the life cycle of Servlet executed only once.(4) jspService () method invocation to the client requests. For each request, JSP engine to create a new thread for processing the request. If you have multiple clients and request the JSP files, JSP engine will create multiple threads. Each client requests a thread. To execute multi-thread can greatly reduce the requirement of system resources, improving the concurrency value and response time. But also should notice the multi-thread programming, due to the limited Servlet always in response to memory, so is very fast.(5) if the file has been modified. The JSP, server will be set according to the document to decide whether to recompile, if need to recompile, will replace the Servlet compile the memory and continue the process.(6) although the JSP efficiency is high, but at first when the need to convert and compile and some slight delay. In addition, if at any time due to reasons of system resources, JSP engine will in some way of uncertain Servlet will remove from memory. When this happens jspDestroy () method was first call.(7) and then Servlet examples were marked with "add" garbage collection. But in jspInit () some initialization work, if establish connection with database, or to establish a network connection, from a configuration file take some parameters, such as, in jspDestory () release of the corresponding resources.Based on a Java language has many other techniques JSP page dynamic characteristics, technical have embodied in the following aspects:3.1 simplicity and effectivenessThe JSP dynamic web pages with the compilation of the static HTML pages of writing is very similar. Just in the original HTML page add JSP tags, or some of the proprietary scripting (this is not necessary). So, a familiar with HTML page write designpersonnel may be easily performed JSP page development. And the developers can not only, and write script by JSP tags used exclusively others have written parts to realize dynamic pages. So, an unfamiliar with the web developers scripting language, can use the JSP make beautiful dynamic pages. 
And this in other dynamic web development is impossible.3.2 the independence of the programThe JSP are part of the family of the API Java, it has the general characteristics of the cross-platform Java program. In other words, is to have the procedure, namely the independence of the platform, 6 Write bided anywhere! .3.3 procedures compatibilityThe dynamic content can various JSP form, so it can show for all kinds of customers, namely from using HTML/DHTML browser to use various handheld wireless equipment WML (for example, mobile phones and pdas), personal digital equipment to use XML applications, all can use B2B JSP dynamic pages.3.4 program reusabilityIn the JSP page can not directly, but embedded scripting dynamic interaction will be cited as a component part. So, once such a component to write, it can be repeated several procedures, the program of the reusability. Now, a lot of standard JavaBeans library is a good example.中北大学2014届毕业设计英文文献译文JSP技术简介及特点JSP(Java Server Pages)技术是由Sun公司发布的用于开发动态Web应用的一项技术。

微处理器外文翻译文献

微处理器外文翻译文献(文档含中英文对照即英文原文和中文翻译)外文:Microcomputer SystemsElectronic systems are used for handing information in the most general sense; this information may be telephone conversation, instrument read or a company‟s accounts, but in each case the same main type of operation are involved: the processing, storage and transmission of information. in conventional electronic design these operations are combined at the function level; for example a counter, whether electronic or mechanical, stores the current and increments it by one as required. A system such as an electronicclock which employs counters has its storage and processing capabilities spread throughout the system because each counter is able to store and process numbers.Present day microprocessor based systems depart from this conventional approach by separating the three functions of processing, storage, and transmission into different section of the system. This partitioning into three main functions was devised by V on Neumann during the 1940s, and was not conceived especially for microcomputers. Almost every computer ever made has been designed with this structure, and despite the enormous range in their physical forms, they have all been of essentially the same basic design.In a microprocessor based system the processing will be performed in the microprocessor itself. The storage will be by means of memory circuits and the communication of information into and out of the system will be by means of special input/output(I/O) circuits. It would be impossible to identify a particular piece of hardware which performed the counting in a microprocessor based clock because the time would be stored in the memory and incremented at regular intervals but the microprocessor. However, the soft ware which defined the system‟s behavior would contain sections that performed as counters. The apparently rather abstract approach to the architecture of the microprocessor and its associated circuits allows it to be very flexible in use, since the system is defined almost entirely software. The design process is largely one of software engineering, and the similar problems of construction and maintenance which occur in conventional engineering are encountered when producing software.The figure1-1 illustrates how these three sections within a microcomputer are connected in terms of the communication of information within the machine. The system is controlled by the microprocessor which supervises the transfer of information between itself and the memory and input/output sections. The external connections relate to the rest (that is, the non-computer part) of the engineering system.Fig.1-1 Three Sections of a Typical Microcomputer Although only one storage section has been shown in the diagram, in practice two distinct types of memory RAM and ROM are used. In each case, the word …memory‟ is rather inappropriate since a computers memory is more like a filing cabinet in concept; information is stored in a set of numbered …boxes‟ and it is referenced by the serial number of the …box‟ in question.Microcomputers use RAM (Random Access Memory) into which data can be written and from which data can be read again when needed. This data can be read back from the memory in any sequence desired, and not necessarily the same order in which it was written, hence the expression …random‟ access memory. 
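To make the "numbered boxes" picture of memory concrete, here is a deliberately tiny Java model of random-access read/write storage. It is only an analogy under assumed conditions (real RAM is addressed by hardware, not by a Java object), and the sizes and addresses are arbitrary.

public class RamModel {
    // A small model of read/write memory: a set of numbered "boxes"
    // addressed by index, readable and writable in any order.
    private final byte[] cells;

    RamModel(int size) { this.cells = new byte[size]; }

    void write(int address, byte value) { cells[address] = value; }

    byte read(int address) { return cells[address]; }

    public static void main(String[] args) {
        RamModel ram = new RamModel(256);
        ram.write(0x10, (byte) 42);   // store into box 0x10 first
        ram.write(0x05, (byte) 7);    // then into box 0x05
        // Read back in a different order than written - "random" access.
        System.out.println(ram.read(0x05) + ", " + ram.read(0x10));
    }
}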
Another type of ROM (Read Only Memory) is used to hold fixed patterns of information which cannot be affected by the microprocessor; these patterns are not lost when power is removed and are normally used to hold the program which defines the behavior of a microprocessor based system. ROMs can be read like RAMs, but unlike RAMs they cannot be used to store variable information. Some ROMs have their data patterns put in during manufacture, while others are programmable by the user by means of special equipment and are called programmable ROMs. The widely used programmable ROMs are erasable by means of special ultraviolet lamps and are referred to as EPROMs, short for Erasable Programmable Read Only Memories. Other new types of device can be erased electrically without the need for ultraviolet light, which are called Electrically Erasable Programmable Read Only Memories, EEPROMs.The microprocessor processes data under the control of the program, controlling the flow of information to and from memory and input/output devices. Some input/output devices are general-purpose types while others are designed for controlling special hardware such as disc drives or controlling information transmission to other computers. Most types of I/O devices are programmable to some extent, allowing different modes of operation, while some actually contain special-purpose microprocessors to permit quite complex operations to be carried out without directly involving the main microprocessor.The microprocessor processes data under the control of the program, controlling the flow of information to and from memory and input/output devices. Some input/output devices are general-purpose types while others are designed for controlling specialhardware such as disc drives or controlling information transmission to other computers. Most types of I/O devices are programmable to some extent, allowing different modes of operation, while some actually contain special-purpose microprocessors to permit quite complex operations to be carried out without directly involving the main microprocessor.The microprocessor , memory and input/output circuit may all be contained on the same integrated circuit provided that the application does not require too much program or data storage . This is usually the case in low-cost application such as the controllers used in microwave ovens and automatic washing machines . The use of single package allows considerable cost savings to e made when articles are manufactured in large quantities . As technology develops , more and more powerful processors and larger and larger amounts of memory are being incorporated into single chip microcomputers with resulting saving in assembly costs in the final products . For the foreseeable future , however , it will continue to be necessary to interconnect a number of integrated circuits to make a microcomputer whenever larger amounts of storage or input/output are required.Another major engineering application of microcomputers is in process control. Here the presence of the microcomputer is usually more apparent to the user because provision is normally made for programming the microcomputer for the particular application. In process control applications the benefits lf fitting the entire system on to single chip are usually outweighed by the high design cost involved, because this sort lf equipment is produced in smaller quantities. 
Moreover, process controllers are usually more complicated so that it is more difficult to make them as single integrated circuits. Two approaches are possible; the controller can be implemented as a general-purpose microcomputer rather like a more robust version lf a hobby computer, or as a …packaged‟ system, signed for replacing controllers based on older technologies such as electromagnetic relays. In the former case the system would probably be programmed in conventional programming languages such as the ones to9 be introduced later, while in the other case a special-purpose language might be used, for example one which allowed the function of the controller to be described in terms of relay interconnections, In either case programs can be stored in RAM, which allows them to be altered to suit changes in application, but this makes the overall system vulnerable to loss lf power unless batteries are used to ensure continuity of supply. Alternatively programs can be stored in ROM, in which case theyvirtually become part of the electronic …hardware‟and are often referred to as firmware. More sophisticated process controllers require minicomputers for their implementation, although the use lf large scale integrated circuits …the distinction between mini and microcomputers, Products and process controllers of various kinds represent the majority of present-day microcomputer applications, the exact figures depending on one‟s interpretation of the word …product‟. Virtually all engineering and scientific uses of microcomputers can be assigned to one or other of these categories. But in the system we most study Pressure and Pressure Transmitters. Pressure arises when a force is applied over an area. Provided the force is one Newton and uniformly over the area of one square meters, the pressure has been designated one Pascal. Pressure is a universal processing condition. It is also a condition of life on the planet: we live at the bottom of an atmospheric ocean that extends upward for many miles. This mass of air has weight, and this weight pressing downward causes atmospheric pressure. Water, a fundamental necessity of life, is supplied to most of us under pressure. In the typical process plant, pressure influences boiling point temperatures, condensing point temperatures, process efficiency, costs, and other important factors. The measurement and control of pressure or lack of it-vacuum-in the typical process plant is critical.The working instruments in the plant usually include simple pressure gauges, precision recorders and indicators, and pneumatic and electronic pressure transmitters. A pressure transmitter makes a pressure measurement and generates either a pneumatic or electrical signal output that is proportional to the pressure being sensed.In the process plant, it is impractical to locate the control instruments out in the place near the process. It is also true that most measurements are not easily transmitted from some remote location. Pressure measurement is an exception, but if a high pressure of some dangerous chemical is to be indicated or recorded several hundred feet from the point of measurement, a hazard may be from the pressure or from the chemical carried.To eliminate this problem, a signal transmission system was developed. This system is usually either pneumatic or electrical. And control instruments in one location. 
This makes it practical for a minimum number of operators to run the plant efficiently.

When a pneumatic transmission system is employed, the measurement signal is converted into a pneumatic signal by the transmitter, scaled from 0 to 100 percent of the measurement value. This transmitter is mounted close to the point of measurement in the process. The transmitter output - air pressure for a pneumatic transmitter - is piped to the recording or control instrument. The standard output range for a pneumatic transmitter is 20 to 100 kPa, which is almost universally used.

When an electronic pressure transmitter is used, the pressure is converted to an electrical signal that may be current or voltage. Its standard range is from 4 to 20 mA DC for a current signal or from 1 to 5 V DC for a voltage signal. Nowadays, another type of electrical signal, which is becoming common, is the digital or discrete signal. The use of instruments and control systems based on computers is forcing increased use of this type of signal.

Sometimes it is important for analysis to obtain the parameters that describe the sensor/transmitter behavior. The gain is fairly simple to obtain once the span is known. Consider an electronic pressure transmitter with a range of 0~600 kPa. The gain is defined as the change in output divided by the change in input. In this case, the output is an electrical signal (4~20 mA DC) and the input is process pressure (0~600 kPa). Thus the gain is

Kr = (20 mA - 4 mA) / (600 kPa - 0 kPa) = 16 mA / 600 kPa ≈ 0.027 mA/kPa

Besides pressure, we must also measure temperature. Temperature measurement is important in industrial control, as direct indications of system or product state and as indirect indications of such factors as reaction rates, energy flow, turbine efficiency, and lubricant quality. Present temperature scales have been in use for about 200 years; the earliest instruments were based on the thermal expansion of gases and liquids. Such filled systems are still employed, although many other types of instruments are available. Representative temperature sensors include: filled thermal systems, liquid-in-glass thermometers, thermocouples, resistance temperature detectors, thermostats, bimetallic devices, optical and radiation pyrometers and temperature-sensitive paints.

Advantages of electrical systems include high accuracy and sensitivity, practicality of switching or scanning several measurement points, larger distances possible between measuring elements and controllers, replacement of components (rather than the complete system), fast response, and ability to measure higher temperatures. Among the electrical temperature sensors, thermocouples and resistance temperature detectors are most widely used.

Description
The AT89C51 is a low-power, high-performance CMOS 8-bit microcomputer with 4K bytes of Flash programmable and erasable read only memory (PEROM). The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard MCS-51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining a versatile 8-bit CPU with Flash on a monolithic chip, the Atmel AT89C51 is a powerful microcomputer which provides a highly-flexible and cost-effective solution to many embedded control applications.
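Returning to the transmitter ranging arithmetic worked out above (before the AT89C51 description), the short Java sketch below computes the same 0.027 mA/kPa gain and inverts it to recover pressure from a measured loop current. The 0~600 kPa and 4~20 mA figures are the ones assumed in the text, and the class and method names are purely illustrative.

public class TransmitterScaling {
    // Range assumed from the text: 0-600 kPa mapped onto 4-20 mA.
    static final double P_MIN_KPA = 0.0, P_MAX_KPA = 600.0;
    static final double I_MIN_MA = 4.0, I_MAX_MA = 20.0;

    // Gain Kr = change in output / change in input = 16 mA / 600 kPa ≈ 0.027 mA/kPa.
    static double gainMaPerKpa() {
        return (I_MAX_MA - I_MIN_MA) / (P_MAX_KPA - P_MIN_KPA);
    }

    // Convert a measured loop current back to pressure (the inverse scaling).
    static double currentToPressureKpa(double currentMa) {
        return (currentMa - I_MIN_MA) / gainMaPerKpa();
    }

    public static void main(String[] args) {
        System.out.printf("Kr = %.4f mA/kPa%n", gainMaPerKpa());              // prints 0.0267
        System.out.printf("12 mA -> %.1f kPa%n", currentToPressureKpa(12.0)); // prints 300.0
    }
}

The same zero-and-span scaling applies to a 1~5 V DC voltage output or to a pneumatic 20~100 kPa output; only the constants change.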
Function characteristicThe AT89C51 provides the following standard features: 4K bytes of Flash, 128 bytes of RAM, 32 I/O lines, two 16-bit timer/counters, a five vector two-level interrupt architecture, a full duplex serial port, on-chip oscillator and clock circuitry. In addition, the AT89C51 is designed with static logic for operation down to zero frequency and supports two software selectable power saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port and interrupt system to continue functioning. The Power-down Mode saves the RAM contents but freezes the oscillator disabling all other chip functions until the next hardware reset.Pin DescriptionVCC:Supply voltage.GND:Ground.Port 0:Port 0 is an 8-bit open-drain bi-directional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to port 0 pins, the pins can be used as highimpedance inputs.Port 0 may also be configured to be the multiplexed loworder address/data bus during accesses to external program and data memory. In this mode P0 has internal pullups.Port 0 also receives the code bytes during Flash programming,and outputs the code bytes during programverification. External pullups are required during programverification.Port 1Port 1 is an 8-bit bi-directional I/O port with internal pullups.The Port 1 output buffers cansink/source four TTL inputs.When 1s are written to Port 1 pins they are pulled high by the internal pullups and can be used as inputs. As inputs,Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pullups.Port 1 also receives the low-order address bytes during Flash programming and verification.Port 2Port 2 is an 8-bit bi-directional I/O port with internal pullups.The Port 2 output buffers can sink/source four TTL inputs.When 1s are written to Port 2 pins they are pulled high by the internal pullups and can be used as inputs. As inputs,Port 2 pins that are externally being pulled low will source current, because of the internal pullups.Port 2 emits the high-order address byte during fetches from external program memory and during accesses to external data memory that use 16-bit addresses. In this application, it uses strong internal pullupswhen emitting 1s. During accesses to external data memory that use 8-bit addresses, Port 2 emits the contents of the P2 Special Function Register.Port 2 also receives the high-order address bits and some control signals during Flash programming and verification.Port 3Port 3 is an 8-bit bi-directional I/O port with internal pullups.The Port 3 output buffers can sink/source four TTL inputs.When 1s are written to Port 3 pins they are pulled high by the internal pullups and can be used as inputs. As inputs,Port 3 pins that are externally being pulled low will source current (IIL) because of the pullups.Port 3 also serves the functions of various special features of the AT89C51 as listed below:Port 3 also receives some control signals for Flash programming and verification.RSTReset input. A high on this pin for two machine cycles while the oscillator is running resets the device.ALE/PROGAddress Latch Enable output pulse for latching the low byte of the address during accesses to external memory. This pin is also the program pulse input (PROG) during Flash programming.In normal operation ALE is emitted at a constant rate of 1/6 the oscillator frequency, and may be used for external timing or clocking purposes. 
Note, however, that one ALE pulse is skipped during each access to external Data Memory.If desired, ALE operation can be disabled by setting bit 0 of SFR location 8EH. With the bit set, ALE is active only during a MOVX or MOVC instruction. Otherwise, the pin is weakly pulled high. Setting the ALE-disable bit has no effect if the microcontroller is in external execution mode.PSENProgram Store Enable is the read strobe to external program memory.When the AT89C51 is executing code from external program memory, PSEN is activated twice each machine cycle, except that two PSEN activations are skipped during each access to external data memory.EA/VPPExternal Access Enable. EA must be strapped to GND in order to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset.EA should be strapped to VCC for internal program executions.This pin also receives the 12-volt programming enable voltage(VPP) during Flash programming, for parts that require12-volt VPP.XTAL1Input to the inverting oscillator amplifier and input to the internal clock operating circuit. XTAL2Output from the inverting oscillator amplifier.Oscillator CharacteristicsXTAL1 and XTAL2 are the input and output, respectively,of an inverting amplifier which can be configured for use as an on-chip oscillator, as shown in Figure 1.Either a quartz crystal or ceramic resonator may be used. To drive the device from an external clock source, XTAL2 should be left unconnected while XTAL1 is driven as shown in Figure 2.There are no requirements on the duty cycle of the external clock signal, since the input to the internal clocking circuitry is through a divide-by-two flip-flop, but minimum and maximum voltage high and low time specifications must be observed.翻译:微型计算机控制系统(单片机控制系统)广义地说,微型计算机控制系统(单片机控制系统)是用于处理信息的,这种被用于处理的信息可以是电话交谈,也可以是仪器的读数或者是一个企业的帐户,但是各种情况下都涉及到相同的主要操作:信息的处理、信息的存储和信息的传递。

电子通信专业 外文翻译 外文文献 英文文献 电信现代运营

毕业设计(外文翻译材料)Telecommunication Modern Operation TelephoneIn an analogue telephone network, the caller is connected to the person he wants to talk to by switches at various telephone exchanges. The switches form an electrical connection between the two users and the setting of these switches is determined electronically when the caller dials the number. Once the connection is made, the caller's voice is transformed to an electrical signal using a small microphone in the caller's handset. This electrical signal is then sent through the network to the user at the other end where it transformed back into sound by a small speaker in that person's handset. There is a separate electrical connection that works in reverse, allowing the users to converse.The fixed-line telephones in most residential homes are analogue — that is, the speaker's voice directly determines the signal's voltage. Although short-distance calls may be handled from end-to-end as analogue signals, increasingly telephone service providers are transparently converting the signals to digital for transmission before converting them back to analogue for reception. The advantage of this is that digitized voice data can travel side-by-side with data from the Internet and can be perfectly reproduced in long distance communication (as opposed to analogue signals that are inevitably impacted by noise).- 1 --Mobile phones have had a significant impact on telephone networks. Mobile phone subscriptions now outnumber fixed-line subscriptions in many markets. Sales of mobile phones in 2005 totalled 816.6 million with that figure being almost equally shared amongst the markets of Asia/Pacific (204 m), Western Europe (164 m), CEMEA (Central Europe, the Middle East and Africa) (153.5 m), North America (148 m) and Latin America (102 m). In terms of new subscriptions over the five years from 1999, Africa has outpaced other markets with 58.2% growth. Increasingly these phones are being serviced by systems where the voice content is transmitted digitally such as GSM or W-CDMA with many markets choosing to depreciate analogue systems such as AMPS.There have also been dramatic changes in telephone communication behind the scenes. Starting with the operation of TAT-8 in 1988, the 1990s saw the widespread adoption of systems based on optic fibres. The benefit of communicating with optic fibres is that they offer a drastic increase in data capacity. TAT-8 itself was able to carry 10 times as many telephone calls as the last copper cable laid at that time and today's optic fibre cables are able to carry 25 times as many telephone calls as TAT-8. This increase in data capacity is due to several factors: First, optic fibres are physically much smaller than competing technologies. Second, they do not suffer from crosstalk which means several hundred of them can be easily bundled together in a single cable. Lastly, improvements in multiplexing have led to an exponential growth in the data capacity of a single fibre.Assisting communication across many modern optic fibre networks is a protocol known as Asynchronous Transfer Mode (ATM). The ATM protocol allows for the side-by-side data transmission mentioned in the second paragraph. It is suitable for public telephone networks because it establishes a pathway for data through the network and associates a traffic contract with that pathway. 
The traffic contract is essentially an agreement between the client and the network about how the network is to handle the data; if the network cannot meet the conditions of the traffic contract it does not accept the connection. This is important because telephone calls can- 2 --negotiate a contract so as to guarantee themselves a constant bit rate, something that will ensure a caller's voice is not delayed in parts or cut-off completely. There are competitors to ATM, such as Multiprotocol Label Switching (MPLS), that perform a similar task and are expected to supplant ATM in the future.Radio and televisionIn a broadcast system, a central high-powered broadcast tower transmits a high-frequency electromagnetic wave to numerous low-powered receivers. The high-frequency wave sent by the tower is modulated with a signal containing visual or audio information. The antenna of the receiver is then tuned so as to pick up the high-frequency wave and a demodulator is used to retrieve the signal containing the visual or audio information. The broadcast signal can be either analogue (signal is varied continuously with respect to the information) or digital (information is encoded as a set of discrete values).The broadcast media industry is at a critical turning point in its development, with many countries moving from analogue to digital broadcasts. This move is made possible by the production of cheaper, faster and more capable integrated circuits. The chief advantage of digital broadcasts is that they prevent a number of complaints with traditional analogue broadcasts. For television, this includes the elimination of problems such as snowy pictures, ghosting and other distortion. These occur because of the nature of analogue transmission, which means that perturbations due to noise will be evident in the final output. Digital transmission overcomes this problem because digital signals are reduced to discrete values upon reception and hence small perturbations do not affect the final output. In a simplified example, if a binary message 1011 was transmitted with signal amplitudes [1.0 0.0 1.0 1.0] and received with signal amplitudes [0.9 0.2 1.1 0.9] it would still decode to the binary message 1011 — a perfect reproduction of what was sent. From this example, a problem with digital transmissions can also be seen in that if the noise is great enough it can significantly alter the decoded message. Using forward error correction a receiver can- 3 --correct a handful of bit errors in the resulting message but too much noise will lead to incomprehensible output and hence a breakdown of the transmission.In digital television broadcasting, there are three competing standards that are likely to be adopted worldwide. These are the ATSC, DVB and ISDB standards; the adoption of these standards thus far is presented in the captioned map. All three standards use MPEG-2 for video compression. ATSC uses Dolby Digital AC-3 for audio compression, ISDB uses Advanced Audio Coding (MPEG-2 Part 7) and DVB has no standard for audio compression but typically uses MPEG-1 Part 3 Layer 2. The choice of modulation also varies between the schemes. In digital audio broadcasting, standards are much more unified with practically all countries choosing to adopt the Digital Audio Broadcasting standard (also known as the Eureka 147 standard). The exception being the United States which has chosen to adopt HD Radio. 
HD Radio, unlike Eureka 147, is based upon a transmission method known as in-band on-channel transmission that allows digital information to "piggyback" on normal AM or FM analogue transmissions.However, despite the pending switch to digital, analogue receivers still remain widespread. Analogue television is still transmitted in practically all countries. The United States had hoped to end analogue broadcasts on December 31, 2006; however, this was recently pushed back to February 17, 2009. For analogue television, there are three standards in use. These are known as PAL, NTSC and SECAM. For analogue radio, the switch to digital is made more difficult by the fact that analogue receivers are a fraction of the cost of digital receivers. The choice of modulation for analogue radio is typically between amplitude modulation (AM) or frequency modulation (FM). To achieve stereo playback, an amplitude modulated subcarrier is used for stereo FM.The InternetThe Internet is a worldwide network of computers and computer networks that can communicate with each other using the Internet Protocol. Any computer on the Internet has a unique IP address that can be used by other computers to route- 4 --information to it. Hence, any computer on the Internet can send a message to any other computer using its IP address. These messages carry with them the originating computer's IP address allowing for two-way communication. In this way, the Internet can be seen as an exchange of messages between computers.An estimated 16.9% of the world population has access to the Internet with the highest access rates (measured as a percentage of the population) in North America (69.7%), Oceania/Australia (53.5%) and Europe (38.9%).In terms of broadband access, Iceland (26.7%), South Korea (25.4%) and the Netherlands (25.3%) lead the world.The Internet works in part because of protocols that govern how the computers and routers communicate with each other. The nature of computer network communication lends itself to a layered approach where individual protocols in the protocol stack run more-or-less independently of other protocols. This allows lower-level protocols to be customized for the network situation while not changing the way higher-level protocols operate. A practical example of why this is important is because it allows an Internet browser to run the same code regardless of whether the computer it is running on is connected to the Internet through an Ethernet or Wi-Fi connection. Protocols are often talked about in terms of their place in the OSI reference model, which emerged in 1983 as the first step in an unsuccessful attempt to build a universally adopted networking protocol suite.For the Internet, the physical medium and data link protocol can vary several times as packets traverse the globe. This is because the Internet places no constraints on what physical medium or data link protocol is used. This leads to the adoption of media and protocols that best suit the local network situation. In practice, most intercontinental communication will use the Asynchronous Transfer Mode (ATM) protocol (or a modern equivalent) on top of optic fibre. This is because for most intercontinental communication the Internet shares the same infrastructure as the public switched telephone network.- 5 --At the network layer, things become standardized with the Internet Protocol (IP) being adopted for logi cal addressing. 
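As the next sentence notes, the human-readable names used on the world wide web are mapped to IP addresses by the Domain Name System. As a minimal illustration, the hedged Java sketch below resolves a host name with the standard InetAddress API; the example host name is arbitrary, and the program needs network access and a resolvable name to print anything useful.

import java.net.InetAddress;

public class ResolveHost {
    public static void main(String[] args) throws Exception {
        // The host name below is only an example; any reachable name will do.
        String host = (args.length > 0) ? args[0] : "www.example.com";

        // The Domain Name System maps the human-readable name to an IP address;
        // that numeric address is what routers use to deliver packets to the host.
        InetAddress address = InetAddress.getByName(host);
        System.out.println(host + " -> " + address.getHostAddress());
    }
}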
For the world wide web, these "IP addresses" are derived from the human readable form using the Domain Name System (e.g. 72.14.207.99 is derived from ). At the moment, the most widely used version of the Internet Protocol is version four, but a move to version six is imminent.

At the transport layer, most communication adopts either the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). TCP is used when it is essential that every message sent is received by the other computer, whereas UDP is used when it is merely desirable. With TCP, packets are retransmitted if they are lost and placed in order before they are presented to higher layers. With UDP, packets are not ordered or retransmitted if lost. Both TCP and UDP packets carry port numbers with them to specify what application or process the packet should be handled by. Because certain application-level protocols use certain ports, network administrators can restrict Internet access by blocking the traffic destined for a particular port.

Above the transport layer, there are certain protocols that are sometimes used and loosely fit in the session and presentation layers, most notably the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. These protocols ensure that the data transferred between two parties remains completely confidential, and one or the other is in use when a padlock appears at the bottom of your web browser. Finally, at the application layer are many of the protocols Internet users would be familiar with, such as HTTP (web browsing), POP3 (e-mail), FTP (file transfer), IRC (Internet chat), BitTorrent (file sharing) and OSCAR (instant messaging).

Local area networks

Despite the growth of the Internet, the characteristics of local area networks (computer networks that run at most a few kilometres) remain distinct. This is because networks on this scale do not require all the features associated with larger networks and are often more cost-effective and efficient without them.

In the mid-1980s, several protocol suites emerged to fill the gap between the data link and application layers of the OSI reference model. These were AppleTalk, IPX and NetBIOS, with the dominant protocol suite during the early 1990s being IPX due to its popularity with MS-DOS users. TCP/IP existed at this point but was typically only used by large government and research facilities. As the Internet grew in popularity and a larger percentage of traffic became Internet-related, local area networks gradually moved towards TCP/IP, and today networks mostly dedicated to TCP/IP traffic are common. The move to TCP/IP was helped by technologies such as DHCP that allowed TCP/IP clients to discover their own network address — a functionality that came standard with the AppleTalk/IPX/NetBIOS protocol suites.

It is at the data link layer, though, that most modern local area networks diverge from the Internet. Whereas Asynchronous Transfer Mode (ATM) or Multiprotocol Label Switching (MPLS) are typical data link protocols for larger networks, Ethernet and Token Ring are typical data link protocols for local area networks. These protocols differ from the former protocols in that they are simpler (e.g. they omit features such as Quality of Service guarantees) and offer collision prevention. Both of these differences allow for more economical set-ups.

Despite the modest popularity of Token Ring in the 80's and 90's, virtually all local area networks now use wired or wireless Ethernet.
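As a small illustration of the transport-layer distinction discussed earlier (TCP versus UDP) and of DNS name resolution, the sketch below uses Python's standard socket module. The host name and ports are placeholders, and real code would add error handling.

```python
# Minimal sketch of DNS lookup plus one TCP and one UDP socket.
# "example.com" and the ports are placeholders; error handling is omitted.
import socket

# Domain Name System: human-readable name -> IP address
ip = socket.gethostbyname("example.com")
print("resolved to", ip)

# TCP: connection-oriented; lost packets are retransmitted and reordered
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.settimeout(5)
    tcp.connect((ip, 80))              # port 80: a web server, for example
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(64))

# UDP: connectionless; no retransmission or ordering guarantees
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"ping", (ip, 9))       # port 9 (discard) as a placeholder
```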
At the physical layer, most wired Ethernet implementations use copper twisted-pair cables (including the common 10BASE-T networks). However, some early implementations used coaxial cables and some recent implementations (especially high-speed ones) use optic fibres. Optic fibres are also likely to feature prominently in the forthcoming 10-gigabit Ethernet implementations. Where optic fibre is used, the distinction must be made between multi-mode fibre and single-mode fibre. Multi-mode fibre can be thought of as thicker optical fibre that is cheaper to manufacture but that suffers from less usable bandwidth and greater attenuation (i.e. poor long-distance performance).- 7 --电信现代运营电话在一个模拟电话网络, 来电者通过交换机与对方进行不同的电话交流。开关在两用户间形成一个电气连接,其参数是由来电者按键时的电气特性决定的。一旦连接,来电者的声音通过来电端处的电话听筒转化为电信号。然后电信号通过网络发送到另一端的用户,并通过小型扬声器将信号转化为声音。有一个单独的电气连接用于进行转换,以使用户交谈。固定电话,在多数居民区是模拟电话,那就是,发言者的声音,直接决定着信号的电压。虽然距离短,来电可能会被作为模拟信号的端到端信号处理,越来越多电话服务供应商是适度的在传输前将模拟信号数字化以便传输,之后转为模拟信号以便接收。它的优势是,数字化语音数据可以从互联网上以数字形式传输,而且可以完全转载于远程通信。(对比来看,模拟信号无可避免会受到噪声影响。) 手机已对电话网络产生了重大影响。移动电话用户现在在许多市场超过了固定线路用户。手机销量在2005年总额为8.166亿,被一下数字平分,其中亚洲/太平洋(2.04亿),西欧(1.64亿),cemea(中欧,中东和非洲)(1.535亿),北美(1.48亿)和拉丁美洲(1.02亿)。在从1999年之后的五年时间内新增用户来看,非洲已以58.2 %的增长超过了其他地区的市场。手机逐渐采用如GSM或W-CDMA这些可以数字化传输语音信号的系统,从而使AMPS这样的模拟系统衰落。电话通信也隐约地有了戏剧性的变化。开始运作的TAT-8(跨大西洋传输电缆)始于1988年, 20世纪90年代见证了基于光纤系统的普及。光纤传输的优势在于其所提供的数据容量的急剧增加。TAT-8可以传输相当于同轴电缆电话10倍的数据,而现在的光纤能传输25倍于TAT-8的数据。数据能力的增加是由于几个因素:第一,光纤体积远小于其他竞争技术。第二,他们不受到串扰这意味着数百条光纤可以很容易地捆绑在一个单一的电缆内。最后,复用技术的改善导致了单条光纤数据容量的指数增长。基于现代光纤网络的通信是一项称为异步传输模式( ATM )的协议。如第二段所说,ATM协议允许为并排的数据传输。它适用于公共电话网络,因为它建立了通过网络数据通道并以此进行通信。传输协议基本上是一个用户与网络之间的协议,它规定了网络如何来处理数据;如果网络不能满足条件的传输协议,它不接受连接。这很重要,因为电话可以通过协议,保证自己的恒定比特率,这将确保来电者的声音,不是延迟的部分或完全切断。ATM的竞争对手,如多标签交换(MPLS),执行类似的任务,并可望在未来取代ATM。电台和电视台在一个广播系统,中央高功率广播塔传输高频率的电磁波,到众多的低功率接收器上。由广播塔发送的高频率波由信号调制且该信号载有视频或音频信息。接收天线稍作调整,以提取高频率波,解调器用来恢复载有视力或音频信息的信号。广播信号可以是模拟(信号多种多样,载有信息且连续)或数字(信息作为一套离散值,可以编码)。广播媒体业正处于发展中一个关键的转折点,许多国家都从模拟发展到数字广播。此举是可使生产更经济,更快且更能够集成电路。与传统的模拟广播相比,数字广播最大的优势是,他们防止了一些投诉。对电视来说,这包括消除问题,如雪花屏,重影和其他失真。这些发生原因,是因为模拟传输的性质,这意味着噪声干扰会明显影响最后的输出。数字传输,克服了这个问题,因为接收时数字信号变为离散值,这样小扰动不影响最终输出。举一个简单的例子,一个二进制信息1011,已与信号的振幅[ 1.0 0.0 1.0 1.0 ]调制,并收到信号的振幅[ 0.9 0.2 1.1 0.9 ]它将仍然解码为二进制信息1011-一个完美原码再现。从这个例子可以看出,数字传输也由一个问题,如果噪音足够大,它可以大大改变解码信息。使用前向错误校正接收器可以在最终结果中纠正少数比特错误,但太多的噪音将导致难以理解的输出,因此,传输失败。在数字电视广播中,有3个相互竞争的标准,很可能是全世界公认的。它们是ATSC标准,DVB标准和ISDB标准;通过这些标准,到目前为止,应用于标题地图。所有这三个标准,使用MPEG - 2 视频压缩。ATSC标准采用杜比数字AC - 3音频压缩,ISDB利用先进音频编码( MPEG - 2的第7部分),而DVB没有音频压缩标准,但通常使用MPEG - 1第3部分第2层。不同标准所用的调制方式也有所不同。在数字音频广播中,标准更为统一,几乎所有国家都选择采用数字音频广播的标准(也称为作为尤里卡147标准)。也有例外,美国已选择采用高清广播。高清广播,不同于尤里卡147 ,它是基于称为在带内通道传输的传输方法,这使数字化信息,进行“背驮式”AM或FM模拟传输。然而,尽管数字化迫在眉睫,模拟接收机仍然普遍应用。模拟电视仍然传送几乎所有国家。美国希望于2006年12月31日之前结束模拟广播;不过,最近又推到2009年2月17日。对于模拟电视,有三个标准在使用中。它们是PAL制式,NTSC制式和SECAM制式。模拟电台,切换到数字变得更加困难,因为模拟接收器只占数字接收机的一小部分成本。模拟电台调制方式通常采用AM(幅度调制)或FM(频率调制)。为实现立体声播放,振幅调制副载波用于立体声调频。互联网互联网是一个全球计算机组成的网络,也是一种用IP联系在一起的计算机网络。在互联网上的任何一台计算机都有一个唯一的IP地址,其他计算机可以用其进行路由选择。因此,在互联网上,任何一台电脑可以通过IP地址传送讯息给任何其他的计算机。这些带有计算机IP地址的信息,允许计算机之间双向沟通。这样一来,互联网可以被看作是一个计算机之间信息的交换。据估计,16.9 %的世界人口已经进入互联网且具有最高访问率(以人口百分比衡量),它们在北美地区(69.7 %),大洋洲/澳大利亚(53.5%)和欧洲(38.9%)。在宽带接入方面,冰岛(26.7%),韩国(25.4%)和荷兰(25.3 %)世界领先。互联网的成功,部分是因为协议管理计算机和路由器如何互相沟通。计算机网络通信本身的性质,有助于分层实现,此时,协议栈中的各个独立协议或多或少独立于其他协议。这使得低级别的协议适应网络的情况,而不影响高层协议的实现。一个实际的例子可以说明它的重要性,因为它允许一个互联网浏览器上运行相同的代码,不管运行的计算机连接到互联网是通过以太网还是通过Wi - Fi连接。协议经常以其在OSI参考模型中的位置命名,1983年为第一步,也是一次不成功的尝试,它试图建立一个普遍采用的网络协议套件。对于互联网来说,物理介质和数据链路层协议可以不同的数倍包遍历全球。这是因为互联网对所用的物理介质或数据链路协议没有限制。这导致媒体和协议的应用,它们最适合本地网络的情况。在实践中,多数洲际通讯将使用异步转移模式( ATM 
)协议(或一个现代的替代物)并辅以光纤。这是因为,对于大多数的洲际通信来说,互联网与公共交换式电话网络一样拥有相同的基础设施。在网络层,适用于逻辑寻址的IP开始标准化。在万维网上,这些“IP地址”来自通过域名系统处理的人类可读格式(例如72.14.207.99是来自)中。目前,使用最广泛的版本的互联网协议是版本 4 ,但向版本六过渡已是迫在眉睫。在传输层大部分通信采用的是传输控制协议(TCP)或用户数据报协议(UDP)。TCP是基本协议,每条来自其他计算机的消息均需采用TCP,而UDP只有在有利时才会被采用。有了TCP,数据包若在它们置于更高层次前丢失或乱序,它们会被重发。有了UDP,数据包丢失时会乱序,也不会重发。TCP和UDP数据包携带端口以便指出数据包应交由哪些应用程序或进程。因为某些应用级协议使用某些端口,网络管理员可以通过阻断某一特定端口为目的端口的传输限制上网。在传输层之上,有一些协议会用到并适当应用于会话层和表示层,最显着的是安全套接层(SSL)和传输层安全(TLS)协议。这些协议,确保双方之间传输的数据仍然完全保密并且一方或另一方在使用时,挂锁出现于Web浏览器的底部。最后,在应用层,有很多的协议为互联网用户所熟悉,如HTTP ( Web浏览) , 的POP3 (电子邮件),FTP (档案传输),IRC (网上聊天),BitTorrent(文件共享)和OSCAR(即时通讯)。局域网不看互联网的发展, 仅局域网的特点(运行于几公里内的计算机网络)仍然明显。这是因为这种规模的网络并不需要所有与较大的网络有关的功能,因此往往更具成本效益和高效率。在二十世纪八十年代中期,几个协议套件的出现,填补了OSI参考模型中数据链路层和应用层之间的空隙。如AppleTalk,IPX和NetBios与20世纪90年代初占主导地位,因MS-DOS而广受欢迎的协议套件IPX。而TCP / IP,在这一点上,通常只用于大型政府和研究设施。随着互联网的受欢迎程度的增长以及较大的流量与互联网逐渐相关,局域网逐步走向TCP / IP。今天的网络大多用于TCP / IP流量是常见的。向TCP / IP的转变由如允许的TCP / IP客户发现自己的网络地址的DHCP的技术支撑,而这与A ppleTalk,IPX/和N etBIOS协议套件以其成为标准。在数据链路层,最现代的局域网偏离互联网。而异步转移模式(ATM)或多协议标签转换(MPLS)技术是典型的数据链路协议,适用于较大的网络。以太网和令牌环网是典型的局域网数据链路协议。这些协议不同于前协议,因为它们更简单(例如,它们省略了服务质量保证等功能) ,并提供碰撞预防。双方的这些差异,是基于经济成本的考虑。尽管令牌环在80年代和90年代有了一定的普及,但是现在几乎所有的局域网使用有线或无线以太网。在物理层,大多数有线以太网实现使用铜双绞线电缆(包括常用的10 Base-T的网络)。然而,一些早期的实现使用同轴电缆,而最近的一些实现(特别是超高速的)使用光纤。光纤也可能在即将到来的10千兆以太网的实现中有着出色的表现。用光纤时,必须对多模光纤和单模光纤加以区分。对于制造商来说,多模光纤可以被认为是便宜的厚光纤,但只有较少可用的带宽和更大的衰减(即较差的长途性能)。。

外文文献及译文模板


文献、资料题目:
学院:        专业:        班级:
姓名:张三    学号:2010888888
指导教师:
翻译日期:××××年××月××日

临沂大学本科毕业论文外文文献及译文

The National Institute of Standards and Technology (NIST) has been working to develop a new encryption standard to keep government information secure. The organization is in the final stages of an open process of selecting one or more algorithms, or data-scrambling formulas, for the new Advanced Encryption Standard (AES) and plans to make a decision by late summer or early fall. The standard is slated to go into effect next year.

AES is intended to be a stronger, more efficient successor to Triple Data Encryption Standard (3DES), which replaced the aging DES, which was cracked in less than three days in July 1998. "Until we have the AES, 3DES will still offer protection for years to come. So there is no need to immediately switch over," says Edward Roback, acting chief of the computer security division at NIST and chairman of the AES selection committee. "What AES will offer is a more efficient algorithm. It will be a federal standard, but it will be widely implemented in the IT community."

According to Roback, efficiency of the proposed algorithms is measured by how fast they can encrypt and decrypt information, how fast they can present an encryption key and how much information they can encrypt. The AES review committee is also looking at how much space the algorithm takes up on a chip and how much memory it requires. Roback says the selection of a more efficient AES will also result in cost savings and better use of resources. "DES was designed for hardware implementations, and we are now living in a world of much more efficient software, and we have learned an awful lot about the design of algorithms," says Roback. "When you start multiplying this with the billions of implementations done daily, the saving on overhead on the networks will be enormous."

……美国国家标准与技术研究院(NIST)一直在开发新的加密标准,以确保政府的信息安全。
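The article discusses AES only at the level of standards policy. As a rough illustration of the kind of symmetric encryption it refers to, the sketch below uses the third-party Python cryptography package; the package choice, key size and messages are assumptions for the example and are not part of the article.

```python
# Minimal AES illustration (assumes the third-party "cryptography" package).
# It only shows authenticated symmetric encryption and decryption.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)     # 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                        # must be unique per message
plaintext = b"keep government information secure"

ciphertext = aesgcm.encrypt(nonce, plaintext, None)
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
print(len(ciphertext), "bytes of ciphertext")
```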

中文和外文的B2B文献


摘要先行者优势是企业在多阶段的发展过程中内生的。

在最初阶段产生的某种不对称使得企业稍稍领先于其竞争者。

这种不对称一旦产生,就会有一系列的机制帮助该企业尽力利用这一领先地位。

先行者相比于跟进者也存在着一些劣势,如在位惰性、技术与市场的不确定性等。

网络经济具有一些现象或效应与传统经济大不相同,如网络外部性、规模经济、收益递增规律、蝴蝶效应等。

因此有必要对网络企业中的先行者与跟进者进行深入的分析,找出其独特的先行者优势和跟进者优势,以供网络企业作为参考。

本文选取了两个典型的外贸B2B平台——环球资源网、阿里巴巴——通过对比分析其创立时间、营业收入、网站访问量等判断出先行者和跟进者。

并分析先行者优势和跟进者优势。

最后给出辽宁省内B2B外贸平台未来发展的建议。

本文得出以下结论:阿里巴巴已经超越了以前的先行者环球资源,成为了现在外贸B2B行业的先行者。

阿里巴巴的负的网络外部性已逐渐凸现,发展即将到达瓶颈。

环球资源略显保守但稳健的发展势头、良好的口碑以及质量使其仍具备赶超的可能。

关键词 B2B 先行者 外贸 环球资源 阿里巴巴

Title: Analysis of First Mover and Second Mover of Trade B2B Platforms

ABSTRACT

First mover advantages emerge endogenously in the multi-stage development process of the enterprise. Some kind of asymmetry produced in the initial stage makes the enterprise slightly ahead of its competitors. Once this asymmetry is produced, a series of mechanisms help the company make the most of the leading position. Compared to followers, first movers also have some disadvantages, such as incumbent inertia and technological and market uncertainty. The network economy shows phenomena and effects that differ greatly from the traditional economy, such as network externalities, economies of scale, the law of increasing returns and the butterfly effect. It is therefore necessary to analyse the first movers and followers among network enterprises in depth and to identify their distinctive first-mover and follower advantages, as a reference for network enterprises. This paper selects two typical foreign trade B2B platforms, Global Sources and Alibaba, and identifies the first mover and the follower through a comparative analysis of their time of establishment, business revenue, site traffic and other indicators. It then analyses their first-mover and follower advantages. Finally, it summarizes and forecasts the development trends of the two foreign trade B2B platforms and offers suggestions. The article draws the following conclusions: 1. Alibaba has surpassed the previous first mover, Global Sources, and is now the first mover of the foreign trade B2B industry. 2. Alibaba's negative network externalities have gradually emerged, and its development is approaching a bottleneck. 3. Global Sources' slightly conservative but steady development momentum, good reputation and quality mean that it still has the possibility of catching up.

Keywords: B2B, First Mover, foreign trade, Global Sources, Alibaba

摘要 网络的出现加快了大规模市场向市场细分过渡,即营销模式从传统的大规模同质化营销向集中的个性化营销过渡。

3000字的本科毕业外文文献翻译(格式标准)


本科毕业生外文文献翻译
学生姓名:        指导教师:
所在学院:        专业:
中国·大庆  2013 年 5 月

Chapter 1  Introduction

Spread-spectrum techniques are methods by which a signal (e.g. an electrical, electromagnetic, or acoustic signal) generated in a particular bandwidth is deliberately spread in the frequency domain, resulting in a signal with a wider bandwidth. These techniques are used for a variety of reasons, including the establishment of secure communications, increasing resistance to natural interference and jamming, preventing detection, and limiting power flux density (e.g. in satellite downlinks).

1.1 History: Frequency hopping

The concept of frequency hopping was first alluded to in the 1903 U.S. Patents 723,188 and 725,605, filed by Nikola Tesla in July 1900. Tesla came up with the idea after demonstrating the world's first radio-controlled submersible boat in 1898, when it became apparent the wireless signals controlling the boat needed to be secure from "being disturbed, intercepted, or interfered with in any way." His patents covered two fundamentally different techniques for achieving immunity to interference, both of which functioned by altering the carrier frequency or other exclusive characteristic. The first had a transmitter that worked simultaneously at two or more separate frequencies and a receiver in which each of the individual transmitted frequencies had to be tuned in, in order for the control circuitry to respond. The second technique used a variable-frequency transmitter controlled by an encoding wheel that altered the transmitted frequency in a predetermined manner. These patents describe the basic principles of frequency hopping and frequency-division multiplexing, and also the electronic AND-gate logic circuit.

Frequency hopping is also mentioned in radio pioneer Johannes Zenneck's book Wireless Telegraphy (German, 1908, English translation McGraw Hill, 1915), although Zenneck himself states that Telefunken had already tried it several years earlier. Zenneck's book was a leading text of the time, and it is likely that many later engineers were aware of it. A Polish engineer, Leonard Danilewicz, came up with the idea in 1929. Several other patents were taken out in the 1930s, including one by Willem Broertjes (Germany 1929, U.S. Patent 1,869,695, 1932). During World War II, the US Army Signal Corps was inventing a communication system called SIGSALY for communication between Roosevelt and Churchill, which incorporated spread spectrum, but due to its top secret nature, SIGSALY's existence did not become known until the 1980s.

The most celebrated invention of frequency hopping was that of actress Hedy Lamarr and composer George Antheil, who in 1942 received U.S. Patent 2,292,387 for their "Secret Communications System". Lamarr had learned about the problem at defense meetings she had attended with her former husband Friedrich Mandl, who was an Austrian arms manufacturer. The Antheil-Lamarr version of frequency hopping used a piano-roll to change among 88 frequencies, and was intended to make radio-guided torpedoes harder for enemies to detect or to jam. The patent came to light during patent searches in the 1950s when ITT Corporation and other private firms began to develop Code Division Multiple Access (CDMA), a civilian form of spread spectrum, though the Lamarr patent had no direct impact on subsequent technology.
It was in fact ongoing military research at MIT Lincoln Laboratory, Magnavox Government & Industrial Electronics Corporation, ITT and Sylvania Electronic Systems that led to early spread-spectrum technology in the 1950s. Parallel research on radar systems and a technologically similar concept called "phase coding" also had an impact on spread-spectrum development.

1.2 Commercial use

The 1976 publication of Spread Spectrum Systems by Robert Dixon, ISBN 0-471-21629-1, was a significant milestone in the commercialization of this technology. Previous publications were either classified military reports or academic papers on narrow subtopics. Dixon's book was the first comprehensive unclassified review of the technology and set the stage for increasing research into commercial applications.

Initial commercial use of spread spectrum began in the 1980s in the US with three systems: Equatorial Communications System's very small aperture (VSAT) satellite terminal system for newspaper newswire services, Del Norte Technology's radio navigation system for navigation of aircraft for crop dusting and similar applications, and Qualcomm's OmniTRACS system for communications to trucks. In the Qualcomm and Equatorial systems, spread spectrum enabled small antennas that viewed more than one satellite to be used since the processing gain of spread spectrum eliminated interference. The Del Norte system used the high bandwidth of spread spectrum to improve location accuracy.

In 1981, the Federal Communications Commission started exploring ways to permit more general civil uses of spread spectrum in a Notice of Inquiry docket. This docket was proposed to FCC and then directed by Michael Marcus of the FCC staff. The proposals in the docket were generally opposed by spectrum users and radio equipment manufacturers, although they were supported by the then Hewlett-Packard Corp. The laboratory group supporting the proposal would later become part of Agilent.

The May 1985 decision in this docket permitted unlicensed use of spread spectrum in 3 bands at powers up to 1 Watt. FCC said at the time that it would welcome additional requests for spread spectrum in other bands. The resulting rules, now codified as 47 CFR 15.247, permitted Wi-Fi, Bluetooth, and many other products including cordless telephones. These rules were then copied in many other countries. Qualcomm was incorporated within 2 months after the decision to commercialize CDMA.

1.3 Spread-spectrum telecommunications

This is a technique in which a (telecommunication) signal is transmitted on a bandwidth considerably larger than the frequency content of the original information. Spread-spectrum telecommunications is a signal structuring technique that employs direct sequence, frequency hopping, or a hybrid of these, which can be used for multiple access and/or multiple functions. This technique decreases the potential interference to other receivers while achieving privacy. Spread spectrum generally makes use of a sequential noise-like signal structure to spread the normally narrowband information signal over a relatively wideband (radio) band of frequencies. The receiver correlates the received signals to retrieve the original information signal.
Originally there were two motivations: either to resist enemy efforts to jam the communications (anti-jam, or AJ), or to hide the fact that communication was even taking place, sometimes called low probability of intercept (LPI).

Frequency-hopping spread spectrum (FHSS), direct-sequence spread spectrum (DSSS), time-hopping spread spectrum (THSS), chirp spread spectrum (CSS), and combinations of these techniques are forms of spread spectrum. Each of these techniques employs pseudorandom number sequences — created using pseudorandom number generators — to determine and control the spreading pattern of the signal across the allotted bandwidth. Ultra-wideband (UWB) is another modulation technique that accomplishes the same purpose, based on transmitting short duration pulses. Wireless Ethernet standard IEEE 802.11 uses either FHSS or DSSS in its radio interface.

Chapter 2

2.1 Spread-spectrum clock signal generation

Spread-spectrum clock generation (SSCG) is used in some synchronous digital systems, especially those containing microprocessors, to reduce the spectral density of the electromagnetic interference (EMI) that these systems generate. A synchronous digital system is one that is driven by a clock signal and, because of its periodic nature, has an unavoidably narrow frequency spectrum. In fact, a perfect clock signal would have all its energy concentrated at a single frequency and its harmonics, and would therefore radiate energy with an infinite spectral density. Practical synchronous digital systems radiate electromagnetic energy on a number of narrow bands spread on the clock frequency and its harmonics, resulting in a frequency spectrum that, at certain frequencies, can exceed the regulatory limits for electromagnetic interference (e.g. those of the FCC in the United States, JEITA in Japan and the IEC in Europe).

To avoid this problem, which is of great commercial importance to manufacturers, spread-spectrum clocking is used. This consists of using one of the methods described in the Spread-spectrum telecommunications section in order to reduce the peak radiated energy. The technique therefore reshapes the system's electromagnetic emissions to comply with the electromagnetic compatibility (EMC) regulations. It is a popular technique because it can be used to gain regulatory approval with only a simple modification to the equipment.

Spread-spectrum clocking has become more popular in portable electronics devices because of faster clock speeds and the increasing integration of high-resolution LCD displays in smaller and smaller devices. Because these devices are designed to be lightweight and inexpensive, passive EMI reduction measures such as capacitors or metal shielding are not a viable option. Active EMI reduction techniques such as spread-spectrum clocking are necessary in these cases, but can also create challenges for designers. Principal among these is the risk that modifying the system clock will cause clock/data misalignment.

2.2 Direct-sequence spread spectrum

In telecommunications, direct-sequence spread spectrum (DSSS) is a modulation technique. As with other spread spectrum technologies, the transmitted signal takes up more bandwidth than the information signal that is being modulated.
The name 'spread spectrum' comes from the fact that the carrier signals occur over the full bandwidth (spectrum) of a device's transmitting frequency.

2.2.1 Features

1. It phase-modulates a sine wave pseudorandomly with a continuous string of pseudonoise (PN) code symbols called "chips", each of which has a much shorter duration than an information bit. That is, each information bit is modulated by a sequence of much faster chips. Therefore, the chip rate is much higher than the information signal bit rate.
2. It uses a signal structure in which the sequence of chips produced by the transmitter is known a priori by the receiver. The receiver can then use the same PN sequence to counteract the effect of the PN sequence on the received signal in order to reconstruct the information signal.

2.2.2 Transmission method

Direct-sequence spread-spectrum transmissions multiply the data being transmitted by a "noise" signal. This noise signal is a pseudorandom sequence of 1 and −1 values, at a frequency much higher than that of the original signal, thereby spreading the energy of the original signal into a much wider band.

The resulting signal resembles white noise, like an audio recording of "static". However, this noise-like signal can be used to exactly reconstruct the original data at the receiving end, by multiplying it by the same pseudorandom sequence (because 1 × 1 = 1, and −1 × −1 = 1). This process, known as "de-spreading", mathematically constitutes a correlation of the transmitted PN sequence with the PN sequence that the receiver believes the transmitter is using.

For de-spreading to work correctly, the transmit and receive sequences must be synchronized. This requires the receiver to synchronize its sequence with the transmitter's sequence via some sort of timing search process. However, this apparent drawback can be a significant benefit: if the sequences of multiple transmitters are synchronized with each other, the relative synchronizations the receiver must make between them can be used to determine relative timing, which, in turn, can be used to calculate the receiver's position if the transmitters' positions are known. This is the basis for many satellite navigation systems.

The resulting effect of enhancing signal to noise ratio on the channel is called process gain. This effect can be made larger by employing a longer PN sequence and more chips per bit, but physical devices used to generate the PN sequence impose practical limits on attainable processing gain.

If an undesired transmitter transmits on the same channel but with a different PN sequence (or no sequence at all), the de-spreading process results in no processing gain for that signal.
This effect is the basis for the code division multiple access (CDMA) property of DSSS, which allows multiple transmitters to share the same channel within the limits of the cross-correlation properties of their PN sequences.

As this description suggests, a plot of the transmitted waveform has a roughly bell-shaped envelope centered on the carrier frequency, just like a normal AM transmission, except that the added noise causes the distribution to be much wider than that of an AM transmission.

In contrast, frequency-hopping spread spectrum pseudo-randomly re-tunes the carrier, instead of adding pseudo-random noise to the data, which results in a uniform frequency distribution whose width is determined by the output range of the pseudo-random number generator.

2.2.3 Benefits

∙ Resistance to intended or unintended jamming
∙ Sharing of a single channel among multiple users
∙ Reduced signal/background-noise level hampers interception (stealth)
∙ Determination of relative timing between transmitter and receiver

2.2.4 Uses

∙ The United States GPS and European Galileo satellite navigation systems
∙ DS-CDMA (Direct-Sequence Code Division Multiple Access) is a multiple access scheme based on DSSS, by spreading the signals from/to different users with different codes. It is the most widely used type of CDMA.
∙ Cordless phones operating in the 900 MHz, 2.4 GHz and 5.8 GHz bands
∙ IEEE 802.11b 2.4 GHz Wi-Fi, and its predecessor 802.11-1999. (Their successor 802.11g uses OFDM instead)
∙ Automatic meter reading
∙ IEEE 802.15.4 (used e.g. as PHY and MAC layer for ZigBee)

2.3 Frequency-hopping spread spectrum

Frequency-hopping spread spectrum (FHSS) is a method of transmitting radio signals by rapidly switching a carrier among many frequency channels, using a pseudorandom sequence known to both transmitter and receiver. It is utilized as a multiple access method in the frequency-hopping code division multiple access (FH-CDMA) scheme.

A spread-spectrum transmission offers three main advantages over a fixed-frequency transmission:

1. Spread-spectrum signals are highly resistant to narrowband interference. The process of re-collecting a spread signal spreads out the interfering signal, causing it to recede into the background.
2. Spread-spectrum signals are difficult to intercept. An FHSS signal simply appears as an increase in the background noise to a narrowband receiver. An eavesdropper would only be able to intercept the transmission if they knew the pseudorandom sequence.
3. Spread-spectrum transmissions can share a frequency band with many types of conventional transmissions with minimal interference. The spread-spectrum signals add minimal noise to the narrow-frequency communications, and vice versa. As a result, bandwidth can be utilized more efficiently.

2.3.1 Basic algorithm

Typically, the initiation of an FHSS communication is as follows:

1. The initiating party sends a request via a predefined frequency or control channel.
2. The receiving party sends a number, known as a seed.
3. The initiating party uses the number as a variable in a predefined algorithm, which calculates the sequence of frequencies that must be used.
Most often the period of the frequency change is predefined, so as to allow a single base station to serve multiple connections.
4. The initiating party sends a synchronization signal via the first frequency in the calculated sequence, thus acknowledging to the receiving party that it has correctly calculated the sequence.
5. The communication begins, and both the receiving and the sending party change their frequencies along the calculated order, starting at the same point in time.

2.3.2 Military use

Spread-spectrum signals are highly resistant to deliberate jamming, unless the adversary has knowledge of the spreading characteristics. Military radios use cryptographic techniques to generate the channel sequence under the control of a secret Transmission Security Key (TRANSEC) that the sender and receiver share.

By itself, frequency hopping provides only limited protection against eavesdropping and jamming. To get around this weakness, most modern military frequency hopping radios often employ separate encryption devices such as the KY-57. U.S. military radios that use frequency hopping include HAVE QUICK and SINCGARS.

2.3.3 Technical considerations

The overall bandwidth required for frequency hopping is much wider than that required to transmit the same information using only one carrier frequency. However, because transmission occurs only on a small portion of this bandwidth at any given time, the effective interference bandwidth is really the same. Whilst providing no extra protection against wideband thermal noise, the frequency-hopping approach does reduce the degradation caused by narrowband interferers.

One of the challenges of frequency-hopping systems is to synchronize the transmitter and receiver. One approach is to have a guarantee that the transmitter will use all the channels in a fixed period of time. The receiver can then find the transmitter by picking a random channel and listening for valid data on that channel. The transmitter's data is identified by a special sequence of data that is unlikely to occur over the segment of data for this channel, and the segment can have a checksum for integrity and further identification. The transmitter and receiver can use fixed tables of channel sequences so that once synchronized they can maintain communication by following the table. On each channel segment, the transmitter can send its current location in the table.

In the US, FCC Part 15 rules on unlicensed systems in the 900 MHz and 2.4 GHz bands permit more power than non-spread-spectrum systems. Both frequency hopping and direct sequence systems can transmit at 1 Watt. The limit is increased from 1 milliwatt to 1 watt, a thousandfold increase. The Federal Communications Commission (FCC) prescribes a minimum number of channels and a maximum dwell time for each channel.

In a real multipoint radio system, space allows multiple transmissions on the same frequency to be possible using multiple radios in a geographic area. This creates the possibility of system data rates that are higher than the Shannon limit for a single channel. Spread spectrum systems do not violate the Shannon limit. Spread spectrum systems rely on excess signal-to-noise ratios for sharing of spectrum. This property is also seen in MIMO and DSSS systems.
Beam steering and directional antennas also facilitate increased system performance by providing isolation between remote radios.

2.3.4 Variations of FHSS

Adaptive Frequency-hopping spread spectrum (AFH) (as used in Bluetooth) improves resistance to radio frequency interference by avoiding crowded frequencies in the hopping sequence. This sort of adaptive transmission is easier to implement with FHSS than with DSSS.

The key idea behind AFH is to use only the "good" frequencies, by avoiding the "bad" frequency channels -- perhaps those "bad" frequency channels are experiencing frequency selective fading, or perhaps some third party is trying to communicate on those bands, or perhaps those bands are being actively jammed. Therefore, AFH should be complemented by a mechanism for detecting good/bad channels.

However, if the radio frequency interference is itself dynamic, then the strategy of "bad channel removal" applied in AFH might not work well. For example, if there are several co-located frequency-hopping networks (as in a Bluetooth piconet), then they are mutually interfering and the strategy of AFH fails to avoid this interference. In this case, there is a need to use strategies for dynamic adaptation of the frequency hopping pattern. Such a situation can often happen in the scenarios that use unlicensed spectrum. In addition, dynamic radio frequency interference is expected to occur in the scenarios related to cognitive radio, where the networks and the devices should exhibit frequency-agile operation.

Chirp modulation can be seen as a form of frequency hopping that simply scans through the available frequencies in consecutive order.

第一章介绍扩频技术是信号(例如一个电气、电磁,或声信号)生成的特定带宽频率域中特意传播,从而导致更大带宽的信号的方法。
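To make the preceding sections more concrete, here are two toy sketches. The first illustrates the de-spreading idea of section 2.2.2; the 8-chip PN code is an assumption chosen for readability, and real systems use far longer sequences and proper synchronization.

```python
# Toy direct-sequence spread spectrum (DSSS): each data bit is multiplied by
# a short pseudo-noise (PN) chip sequence; the receiver correlates with the
# same sequence to de-spread. The 8-chip code is an illustrative assumption.
PN = [1, -1, 1, 1, -1, 1, -1, -1]              # chips per bit = 8

def spread(bits):
    """Map bits {0,1} to symbols {-1,+1} and multiply by the PN chips."""
    symbols = [1 if b else -1 for b in bits]
    return [s * c for s in symbols for c in PN]

def despread(chips):
    """Correlate each chip block with the PN sequence and take the sign."""
    bits = []
    for i in range(0, len(chips), len(PN)):
        block = chips[i:i + len(PN)]
        corr = sum(x * c for x, c in zip(block, PN))
        bits.append(1 if corr > 0 else 0)
    return bits

data = [1, 0, 1, 1]
assert despread(spread(data)) == data          # matching PN code recovers bits
```

The second sketch illustrates the seed-based hop sequence of section 2.3.1; the channel list and the way the seed is shared are simplified assumptions.

```python
# Toy frequency hopping: both ends derive the same pseudorandom channel
# sequence from a shared seed and then hop in lockstep.
import random

CHANNELS = [2402 + i for i in range(79)]       # example channel list, in MHz

def hop_sequence(seed, hops):
    rng = random.Random(seed)                  # same seed -> same sequence
    return [rng.choice(CHANNELS) for _ in range(hops)]

seed = 12345                                   # the number sent by the receiving party
assert hop_sequence(seed, 10) == hop_sequence(seed, 10)
print(hop_sequence(seed, 10))
```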


外文文献及其翻译电子政务信息

1.政府信息化的含义?政府信息化是指:政府有效利用现代信息和通信技术,通过不同的信息服务设施,对政府的业务流程、组织结构、人员素质等诸方面进行优化、改造的过程。

2.广义和狭义的电子政务的定义?广义的电子政务是指:运用信息技术和通信技术实现党委、人大、政协、政府、司法机关、军队系统和企事业单位的行政管理活动。

(电子党务、电子人大、电子政协)狭义的电子政务是指:政府在其管理和服务职能中运用现代信息和通信技术,实现政府组织结构和工作流程的重组优化,超越时间、空间和部门分隔的制约,全方位的向社会提供优质规范、透明的服务,是政府管理手段的变革。

3.电子政务的组成部分?①:政府部门内部办公职能的电子化和网络化;②:政府职能部门之间通过计算机网络实现有权限的实时互通的信息共享;③:政府部门通过网络与公众和企业间开展双向的信息交流与策;4.理解电子政务的发展动力?①:信息技术的快速发展;②:政府自身改革与发展的需要;③:信息化、民主化的社会需求的推动;5.电子政务的应用模式?模式有:1.政府对公务员的电子政务(G2E);2.政府间的电子政务(G2G);3.政府对企业的电子政务(G2B);4.政府对公众的电子政务(G2C);6.电子政务的功能?①:提高工作效率,降低办公成本;②:加快部门整合,堵塞监管漏洞;③:提高服务水平,便于公众的监督;④:带动社会信息化发展;7.我国电子政务发展存在的主要问题?①:政府公务员与社会公众对电子政务的认识不足;②:电子政务发展缺乏整体规划和统一性标准;③:电子政务管理体制改革远未到位;④:电子政务整体应用水平还较低;⑤:政府公务员的素质有待提高;⑥:电子政务立法滞后;⑦:对电子政务安全问题缺乏正确认识;8.政府创新的含义和内容?含义:是指各级政府为适应公共管理与行政环境的需要,与时俱进的转变观念与职能,探索新的行政方法与途径,形成新的组织结构、业务流程和行政规范,全面提高行政效率,更好的履行行政职责的实践途径。

内容:政府观念改革和创新、政府管理与创新、政府职能与创新、政府服务与创新、政府服务改革与创新、政府业务流程重组与创新、工作方式的改革与创新9.政府流程优化与重组的概念?是指对企业的经营过程进行根本性的重新思考和彻底翻新,以使企业在成本、质量、服务和速度等重大特征上获得明显的改善,并强调通过充分利用信息技术使企业获得巨大提高。

10.政府流程优化与重组的步骤?①:制定计划;②:优化与重组准备;③:审视现有流程;④:重新设计;⑤:实施新流程;⑥:评估反馈;11.政府流程优化与重组的方法?(尤其是流程图法的掌握41页)方法有:流程图法、角色行为图法、IDEF系列方法、统一建模语言法、Petri网方法、工作流方法、柔性建模技术.12.理解“三网一库”?政府信息资源管理中心?(54页)“三网”指:内网、外网和专网;“一库”指:政务信息资源库内网即机关内部办公网,以政府各部门的局域网为基础,建立在保密通信平台上,主要运行党政决策指挥、宏观调控、行政执行、应急处理、监督检查、信息查询等各类相对独立的电子政务应用系统。

外网即公共管理和服务网。

建立在公共通信平台上,主要用于政府信息发布,向社会提供服务。

专网即办公业务资源网。

链接从中央到地方的各级党政机关,上下级相关业务部门。

根据机构职能在业务范围与内网有条件的互联,是吸纳地区级别涉密信息共享。

专网与内网之间采取逻辑隔离。

政府信息资源库,包括党政和各行业的业务数据和管理信息。

它分布于三网之上,按密级和使用要求为不同用户服务。

政府信息资源管理中心建有信息资源元数据库,提供丰富的信息资源供各部门访问。

包括:人口信息,法人单位信息、自然资源信息、空间地理以及宏观经济数据等。

他在数据存储、备份的基础上,为政府部门和企业和公众提供数据共享、数据交换和决策支持服务。

它采取自顶向下的层次结构,分为国家级、省市级和地级市GDC,分别负责本级或本行业的数据服务。

它存储的信息主要分为:基础型、公益型、综合型信息。

它采取统一的目录服务体系。

13.电子政务系统的层次模型?它自下而上可分为:网络系统层、信息管理层、应用服务层、应用业务层。

层次模型框架:电子政务标准和规范体系、面向电子政务的安全体系。

14.了解VPN?了解OSI即开放式通信系统互联参考模型(7个紧密层次)?VPN(Virtual Private Network)即虚拟专用网,它采用了一种称为隧道(Tunnel)的技术,使得政府和企业可以在公网上建立起相互独立、安全、可连接分支机构、分布式网点和移动用户的多个虚拟专用网。

OSI参考模型,即开放式通信系统互联参考模型,是国际标准化组织(ISO)提出的一个试图使各种计算机在世界范围内互连为网络的标准框架。

OSI的框架模型分为7层:物理层、数据链路层、网络层、传输层、会话层、表示层、应用层。

15.TCP/IP协议簇的内容?TCP/IP是互联网上广泛使用的一种协议,它实际上是包含多个协议的协议簇,起源于美国国防部。

可以映射到4层:①网络访问层,负责在线路上传输帧并从线路上接收帧;②网际层,进行路由选择;③传输层,负责管理计算机间的会话,包括TCP/UDP两个协议;④应用层。

16.数据存储与备份?数据存储备份是指:为防止系统出现操作失误或系统故障导致数据丢失,而将全系统或部分数据集合从应用主机的硬盘或阵列复制到其他的存储介质的过程。

数据存储备份的介质:磁盘阵列、磁带库、光盘塔或光盘库、光盘网络镜像服务器。

17.单点登录技术?(84页)单点登录(Single Sign On),简称为 SSO,是目前比较流行的企业业务整合的解决方案之一。

SSO的定义是在多个应用系统中,用户只需要登录一次就可以访问所有相互信任的应用系统。

通常情况下运维内控审计系统、4A系统或者都包含此项功能,目的是简化账号登录过程并保护账号和密码安全,对账号进行统一管理。
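下面给出一个极简的单点登录(SSO)令牌示意代码,仅用来说明“一次登录、多个系统信任同一令牌”的思路;其中的共享密钥、用户名等均为示例假设,并非某一实际产品的实现,真实系统还需要令牌过期时间、HTTPS 传输等配套措施。

```python
# 极简 SSO 令牌示意:认证中心签发带 HMAC 签名的令牌,各应用系统只需验证签名
# 即可信任该用户已登录。密钥、用户名均为示例假设。
import base64, hashlib, hmac, json

SECRET = b"demo-shared-secret"        # 示例共享密钥,仅作演示

def issue_token(username):
    """认证中心:用户登录成功后签发令牌。"""
    payload = json.dumps({"user": username}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token):
    """各应用系统:验证签名,签名有效即视为已登录,无需再次输入口令。"""
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body.encode())
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return json.loads(payload) if hmac.compare_digest(sig, expected) else None

token = issue_token("zhangsan")
print(verify_token(token))            # 系统A、系统B 都可以用同一令牌完成验证
```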

18.了解内部公文处理应用?(87页)发文管理、收文管理=======自己看书本,懒得打字。

19.电子政务决策支持系统的内容(DSS)?城市应急联动(106页)?内容为:制定决策方案、下达决策指令、执行决策指令、系统反馈和修正。

20.移动政务的应用?应用方式:基于消息的应用(典型代表是基于短信的应用,形式为短信预警、短信公告、短信通知);基于移动互联网的应用(GPRS/CDMA乃至未来的3G/4G数据传输技术的应用);基于位置的应用(利用GPS定位或者移动网络自身定位)21.政府信息资源的构成?内容构成:①政府决策信息;②为社会各界服务的信息;③反馈信息;④政府间交流;22.政府信息公开的内容及步骤?内容为:根据行政机关的级别确定为县级以上人民政府及其部门、设区的市级人民政府、县级人民政府及其部门和乡镇人民政府公开的信息。

步骤:见书129页23.以政府信息资源的目录服务体系为依托的信息交换?交换体系是以统一的国家电子政务网络为依托,支持区域、跨部门政府信息资源交换与共享的信息系统。

交换流程为:需求方:需要资源的部门系统提供资源请求;交换平台;目录服务;提供方:交换平台根据目录返回结果定位资源。

24.一站式服务的形式?(142页)一站式服务的特点为:以网络为工具,以用户为中心,以应用为灵魂,以便民为母的。

形式有:以利用现代计算机和通信网络技术,提供政府网上服务,提供全面的政务信息,政府资源共享,个性化服务(据功能)25.政府电子化采购的流程(145页)?流程步骤为:生成采购单、发布采购需求信息、供应商应标、网上开标与定标、签订采购合同、供应商供货、货款结算和支付26.电子政务安全问题产生原因?①:技术保障措施不完善:计算机系统本身的脆弱性;软件系统存在缺陷;网络的开放性②:管理体系不健全:组织及人员风险;管理制度不完善;安全策略有漏洞;缺乏应急体系③:基础设施建设不健全:法律体系不健全;安全标准体系不完整、电子政务的信任体系问题④:社会服务体系问题27电子政务的安全需求是什么?需求是:保护政务信息资源价值不受侵犯,保证信息资产的拥有者面临最小的风险和获取最大的安全利益,使政务的信息基础设施、信息应用服务和信息内容为抵御上述威胁而具有保密性、完整性、真实性、可用性和可控性的能力,从而确保一个政府部门能够有效地完成法律所赋予的政府职能。

表现为:信息的真实性、保密性、完整性、不可否认性、可用性。

28.了解几种网络安全技术?A.防火墙:①:一般分为数据包过滤型、应用级网关型、代理服务器型。

②安全策略是:一切未被允许的都是禁止的;一切未被禁止的都是允许的。
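下面用一小段示意代码说明其中较严格的“一切未被允许的都是禁止的”(默认拒绝)策略;规则表中的协议和端口均为示例假设,并非任何防火墙产品的真实配置。

```python
# “默认拒绝”的包过滤示意:只有命中允许规则的数据包才放行,其余一律丢弃。
# 规则中的协议、端口均为示例假设。
ALLOW_RULES = [
    {"proto": "tcp", "dst_port": 80},     # 允许访问 Web 服务
    {"proto": "tcp", "dst_port": 443},    # 允许 HTTPS
    {"proto": "udp", "dst_port": 53},     # 允许 DNS 查询
]

def filter_packet(packet):
    """默认拒绝:未被任何允许规则匹配的数据包一律禁止。"""
    for rule in ALLOW_RULES:
        if all(packet.get(k) == v for k, v in rule.items()):
            return "ACCEPT"
    return "DROP"

print(filter_packet({"proto": "tcp", "dst_port": 443}))   # ACCEPT
print(filter_packet({"proto": "tcp", "dst_port": 23}))    # DROP,Telnet 未被允许
```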

B. 电子签名技术:是将摘要信息用发送者的私钥加密,与原文一起传送给接收者。

接收者只有用发送者的公钥才能解密被加密的摘要信息,然后用HASH函数对收到的原文产生一个摘要信息,与解密的摘要信息对比。

如果相同,则说明收到的信息是完整的,在传输过程中没有被修改,否则说明信息被修改过,因此数字签名能够验证信息的完整性。
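下面是上述电子签名流程(做摘要、用私钥签名、用公钥验证、比对摘要)的一个示意实现,假设安装了第三方 cryptography 库;密钥长度、填充方式等参数仅为示例,并非某一电子政务系统的实际配置。

```python
# 电子签名流程示意:发送方用私钥对消息摘要签名,接收方用发送方公钥验证,
# 从而确认信息完整、未被篡改。假设已安装第三方 cryptography 库。
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# 发送方:生成密钥对,对原文签名(内部先计算 SHA-256 摘要,再用私钥加密摘要)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
message = "政务公文原文".encode("utf-8")
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# 接收方:用公钥验证签名;原文被篡改时验证失败
def check(msg):
    try:
        public_key.verify(signature, msg, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(check(message))                   # True:完整性得到验证
print(check(message + b"tampered"))     # False:内容被修改过
```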

C.非对称密钥加密技术:需要使用一对相关的密钥:一个加密,一个解密。

分为两种基本模式:加密模式和验证模式。
D.入侵检测系统:是依照一定的安全策略,对网络、系统的运行状况进行监视,尽可能地发现各种攻击企图、攻击行为或攻击结果,以保证网络系统的机密性、完整性和可用性。

类型有:基于主机的入侵检测系统,基于网络的入侵检测系统、分布式入侵检测系统。

方法有:基于特征的检测、基于异常的检测、完整性检验。

E. 防病毒系统:分为主机防病毒和网络防病毒。

主机防病毒是安装防病毒软件进行实时监测,发现病毒立即清除或修复。

网络防病毒是由安全提供商提供一整套的解决方案,针对网络进行全面防护。

F. 漏洞扫描系统:功能有:动态分析系统的安全漏洞,检查用户网络中的安全隐患,发布检测报告,提供有关漏洞的详细信息和最佳解决对策,杜绝漏洞、降低风险,防患未然。

G.安全认证:用户访问系统之前经过身份认证系统,监控器根据用户身份和授权数据库决定用户能否访问某个资源。

电子政务系统必须建立基于CA认证体制的身份认证系统。

数字证书用来证明数字证书持有者的身份。

29.PMI与PKI的概念和区别?概念:PKI即公钥基础设施,又叫公钥体系,是一种利用公钥加密技术为电子商务、电子政务提供一整套安全基础平台的技术和规范,采用数字证书来管理公钥。

PMI即授权管理基础设施,是国家信息安全基础设施的一个重要组成部分,目标是向用户和应用程序提供授权管理服务,提供用户身份到应用授权的映射功能,提供与实际应用处理模式相对应的、与具体应用系统开发和管理无关的授权和访问控制机制,简化具体应用系统的开发和维护。

区别:①:解决问题的不同。

PKI解决对身份的认证问题。

PMI 是身份认证之后,决定你具有的权限和能做什么的问题。
