Cloud Technology and Services: Foreign-Literature Translation References (Chinese-English Parallel Texts)

An English Essay on Cloud Computing

English answer:

Cloud computing has revolutionized the way businesses and individuals store, process, and access data and applications. As a result, cloud computing offers multiple benefits, including improved flexibility, reduced costs, increased security, enhanced collaboration, and innovative services.

Flexibility is one of the key advantages of cloud computing. It allows users to access their data and applications from anywhere with an internet connection, which makes it easier for businesses to operate remotely and for individuals to work from home.

Reduced costs are another advantage. Businesses can avoid the upfront costs of purchasing and maintaining their own IT infrastructure; instead, they can pay for cloud services on a pay-as-you-go basis.

Increased security is another key benefit. Cloud providers have invested heavily in security measures to protect their customers' data, which can make cloud computing a more secure option than storing data on-premises.

Enhanced collaboration is another advantage. Cloud-based applications make it easier for teams to collaborate on projects, because they can be accessed from anywhere and allow users to share files and work together in real time.

Innovative services are another key benefit. Cloud providers are constantly developing new services, such as artificial intelligence, machine learning, and data analytics, which can help businesses improve their operations and make better decisions.

Overall, cloud computing offers a number of benefits that can help businesses and individuals improve their operations, reduce costs, and increase innovation.

Chinese answer: The advantages of cloud computing.

English Counterparts for Chinese References

In academic writing, the reference list is an important part of a paper: it adds to the paper's credibility and scholarly rigor, and it may include both Chinese and English sources.

Below are some common reference types, each with the Chinese entry and its English counterpart:

1. Book
Chinese: 王小明. 计算机网络技术. 北京:清华大学出版社,2018.
English: Wang, X. Computer Network Technology. Beijing: Tsinghua University Press, 2018.

2. Article in an academic journal
Chinese: 张婷婷,李伟. 基于深度学习的影像分割方法. 计算机科学与探索,2019,13(1):61-67.
English: Zhang, T. T., Li, W. Image Segmentation Method Based on Deep Learning. Computer Science and Exploration, 2019, 13(1): 61-67.

3. Conference paper
Chinese: 王维,李丽. 基于云计算的智慧物流管理系统设计. 2019年国际物流与采购会议论文集,2019:112-117.
English: Wang, W., Li, L. Design of a Smart Logistics Management System Based on Cloud Computing. Proceedings of the 2019 International Conference on Logistics and Procurement, 2019: 112-117.

4. Thesis / dissertation
Chinese: 李晓华. 基于模糊神经网络的水质评价模型研究. 博士学位论文,长春:吉林大学,2018.
English: Li, X. H. Research on a Water Quality Evaluation Model Based on Fuzzy Neural Networks. Doctoral dissertation, Changchun: Jilin University, 2018.

5. Report
Chinese: 国家统计局. 2019年国民经济和社会发展统计公报. 北京:中国统计出版社,2019.
English: National Bureau of Statistics. Statistical Communique of the People's Republic of China on the 2019 National Economic and Social Development. Beijing: China Statistics Press, 2019.

These are the most common cases; we hope they are a useful starting point for your own writing.

Cloud Computing: Foreign-Literature Translation References

(This document contains the English original and a Chinese translation; the English original follows.)

Original: Technical Issues of Forensic Investigations in Cloud Computing Environments

Dominik Birk
Ruhr-University Bochum, Horst Goertz Institute for IT Security, Bochum, Germany

(The authors thank the reviewers for their helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.)

Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldom discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore, we discuss possible solutions and possible new methodologies helping customers to perform such investigations.

I. INTRODUCTION

Although the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies' main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. For auditors, this situation does not change: questions about who accessed specific data and information cannot be answered by the customers if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensitive information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed, albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40], stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional lives, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: we discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments, including the cross-disciplinary aspects. We conclude in section V.

II. RELATED WORK

Various works have been published in the field of cloud security and privacy [9], [35], [30], focusing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around the isolation of multi-tenant platforms [12], the security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10], contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet they lack practical implementations [24], [37], [23]. Traditional computer forensics already has well-researched methods for various fields of application [4], [5], [6], [11], [13]. Also, the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20], including the notion of virtual introspection [25].
In addition, NIST has already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al. had already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments; unfortunately, no specific issues and solutions of cloud forensics were proposed, which will be done within this work.

III. TECHNICAL BACKGROUND

A. Traditional Digital Forensics

The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:

1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing, where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.

2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.

3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results, is complete, and is clear to understand.

Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real-world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.
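As a small illustration of the integrity requirement in the Securing Phase (this sketch is an addition to this translation, not part of the original paper), the snippet below streams an acquired image through SHA-256 so that a working copy can later be shown to match the original; the file paths are placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so arbitrarily large disk images can be processed."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    original = Path("evidence/original.img")      # placeholder: the acquired bitwise copy
    working = Path("evidence/working-copy.img")   # placeholder: the copy used for analysis
    assert sha256_of(original) == sha256_of(working), "working copy does not match the original"
    print("sha256:", sha256_of(original))
```

Recording such digests alongside the copies is one simple way to support the chain of custody described above.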
B. Cloud Computing

According to NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. This new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure as a Service (IaaS) model, the customer uses the virtual machine provided by the CSP to install his own system on it. The system can be used like any other physical computer with a few limitations. However, the additional customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. For the efficiency of the software development process this service model can be a driving factor. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most cases this service can be accessed through an API or a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence.

Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for one organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer over the application itself, the data pushed into the applications and also over the underlying technical infrastructure.

C. Fault Model

Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:

1) Maliciously Intended Faults: Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and constitute a constant threat to customers and CSP. In this model, a malicious CSP is also included, albeit it is assumed to be rare in real-world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk.
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.

2) Unintentional Faults: Inconsistencies in technical systems or processes in the cloud do not implicitly have to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the customer (i.e., loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.

IV. TECHNICAL ISSUES

Digital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible, since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators, who explore this de-allocated disk space on hard disks. In case the data is in motion, data is transferred from one entity to another, e.g. a typical file transfer over a network can be seen as a data-in-motion scenario. Several encapsulated protocols contain the data, each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest nor in motion but in execution. On the executing system, process information, machine instructions and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources of evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environments as well as suggest several solutions to these problems.

A. Sources and Nature of Evidence

Concerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between the different cloud service and deployment models. The virtual machine (VM), hosting in most cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between the different parties involved. The browser on the client, often acting as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently of the model used, the following three components could act as sources for potential evidential data.

1) Virtual Cloud Instance: The VM within the cloud, where, e.g., data is stored or processes are handled, contains potential evidence [2], [3]. In most cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor, and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM.
Therefore, virtual instances can still be running during analysis, which leads to the case of live investigations [41], or can be turned off, leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.

2) Network Layer: Traditional network forensics is known as the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide a variety of information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However, in practice, ordinary CSP currently do not provide any log data from the network components used by the customer's instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log data in general, which is crucial for further investigative steps. This situation gets even more complicated in the case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.

3) Client System: On the system layer of the client, it completely depends on the model used (IaaS, PaaS, SaaS) if and where potential evidence could be extracted. In most scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.

a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: in ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been the victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers make strong use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to, e.g., unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].
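Browser histories, cookies and download records of the kind listed above are typically kept in local SQLite databases, so a first triage pass can be scripted. The following minimal sketch is an addition to this translation and rests on assumptions: a Chromium-style profile whose History database exposes a urls table with url, title and visit_count columns, and a profile path that has to be adapted to the browser and operating system under investigation.

```python
import shutil
import sqlite3
import tempfile
from pathlib import Path

# Assumed profile location (Linux, Chromium); adjust for the client under investigation.
HISTORY_DB = Path.home() / ".config" / "chromium" / "Default" / "History"

def most_visited_urls(history_db: Path, limit: int = 50):
    """List the most frequently visited URLs from a copy of the History database.
    Working on a copy avoids the browser's file lock and leaves the original untouched."""
    with tempfile.TemporaryDirectory() as tmp:
        working_copy = Path(tmp) / "History"
        shutil.copy2(history_db, working_copy)
        conn = sqlite3.connect(str(working_copy))
        try:
            return conn.execute(
                "SELECT url, title, visit_count FROM urls "
                "ORDER BY visit_count DESC LIMIT ?", (limit,)
            ).fetchall()
        finally:
            conn.close()

if __name__ == "__main__":
    for url, title, visits in most_visited_urls(HISTORY_DB):
        print(f"{visits:5d}  {url}  ({title})")
```

Examining only a copy of the artifact files keeps the originals unmodified, in line with the chain-of-custody requirement from section III-A.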
B. Investigations in XaaS Environments

Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They no longer have the option of seizing physical data storage. Data and processes of the customer are dispersed over an undisclosed number of virtual instances, applications and network elements. Hence, it is in question whether preliminary findings of the computer forensic community in the field of digital forensics have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly will be taken into consideration. We also suggest potential solutions to the mentioned problems.

1) SaaS Environments: Especially in the SaaS model, the customer does not obtain any control of the underlying operating infrastructure such as the network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited user-specific application configuration settings can be controlled, contributing to the evidence which can be extracted from the client (see section IV-A3). In a lot of cases this urges the investigator to rely on high-level logs which may be provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidents.

a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it poses a huge problem specifically for SaaS-based applications: current globally acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately, in the case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information have been accessed by the adversary. For the victim, this situation can have tremendous impact: if sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP, e.g. due to storage reasons. The customer has no ability to prove otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10].

Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purposes of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information such as access, error and event logs that could improve their situation in case of an investigation (a minimal sketch of such an interface follows below).
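To make the suggested interface more concrete, the sketch below polls a purely hypothetical CSP audit-log endpoint and appends every entry, together with a SHA-256 digest, to a customer-controlled archive; the URL, token and response format are illustrative assumptions and do not describe any real provider's API.

```python
import hashlib
import json
import urllib.request

# Hypothetical endpoint and credentials; no real CSP API is implied.
AUDIT_LOG_URL = "https://api.example-csp.invalid/v1/audit-logs?since={cursor}"
API_TOKEN = "customer-supplied-token"

def fetch_audit_logs(cursor: str) -> list:
    """Fetch access, error and event log entries newer than `cursor` from the assumed CSP API."""
    req = urllib.request.Request(
        AUDIT_LOG_URL.format(cursor=cursor),
        headers={"Authorization": "Bearer " + API_TOKEN},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["entries"]

def archive_entries(entries: list, archive_path: str = "audit_archive.jsonl") -> None:
    """Append each entry plus its digest to a local archive the CSP cannot rewrite."""
    with open(archive_path, "a", encoding="utf-8") as fh:
        for entry in entries:
            canonical = json.dumps(entry, sort_keys=True)
            digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
            fh.write(json.dumps({"sha256": digest, "entry": entry}) + "\n")

if __name__ == "__main__":
    archive_entries(fetch_audit_logs(cursor="0"))
```

Keeping an independent, hashed copy of the logs gives the customer evidence material whose integrity can still be argued even if the CSP's own records later change.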
Furthermore, due to the limited ability to receive forensic information from the server and to prove the integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.

2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and, except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer theoretically obtains the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim this transfer is encrypted, but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries who can compromise the application during runtime should not be able to alter these log files afterwards.

Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer (see the sketch below). Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be monitored by push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].
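One possible reading of this suggestion is sketched below (an addition to this translation, not the paper's reference design): each log record is wrapped with a timestamp and an HMAC-SHA256 tag before it leaves the application and is then pushed to a collector under the customer's control. Key provisioning, transport security and the payload encryption mentioned above are simplified assumptions here.

```python
import hashlib
import hmac
import json
import socket
import time

# Assumption: the key is provisioned before deployment; an adversary who compromises the
# application at runtime may emit new records, but cannot rewrite records already shipped.
SIGNING_KEY = b"provisioned-secret-key"
LOG_SINK = ("logs.customer-controlled.example", 6514)   # hypothetical push-only collector

def signed_record(message: str) -> bytes:
    """Wrap a log message with a timestamp and an HMAC-SHA256 tag over the canonical body."""
    body = json.dumps({"ts": time.time(), "msg": message}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return (json.dumps({"body": body, "hmac": tag}) + "\n").encode("utf-8")

def push_log(message: str) -> None:
    """Push-only transfer: the application appends records but never reads or deletes them remotely."""
    with socket.create_connection(LOG_SINK, timeout=5) as sock:
        sock.sendall(signed_record(message))

if __name__ == "__main__":
    push_log("admin login from 198.51.100.7")   # illustrative event
```

On the collector side, the same key is used to verify each tag, so any record altered after the fact becomes detectable.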
3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial not only for recovering from an incident. Forensic investigations also gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is caused by the ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before it is transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP. The CSP controls the hypervisor, which is, e.g., responsible for enforcing hardware boundaries and routing hardware requests among different VMs. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how customer VMs communicate with the hardware and can theoretically interfere with processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore lead to the leakage of the secret key. Although this risk can be disregarded in most cases, the impact on the security of high-security environments is tremendous.

a) Snapshot Analysis: Traditional forensics expects target machines to be powered down in order to collect an image (dead virtual instance). This situation changed completely with the advent of snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned with one click, including the running system's memory. Due to the invention of snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This behavior is especially important for scenarios in which a downtime of a system is not feasible or practical due to existing SLA. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances are becoming more common, providing evidence data that …
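As a concrete illustration of the snapshot-based acquisition discussed above (the tooling is an assumption of this translation; the paper does not prescribe one), the sketch below asks the libvirt command-line client virsh to snapshot a running KVM/QEMU guest. For qcow2-backed domains such an internal snapshot of a running guest generally also preserves the memory state; the domain name is hypothetical.

```python
import datetime
import subprocess

def snapshot_vm(domain: str, tag: str = "forensic") -> str:
    """Create a snapshot of a running libvirt guest via `virsh snapshot-create-as`.
    Requires virsh to be installed and the caller to have management rights on the domain."""
    name = "{}-{:%Y%m%dT%H%M%S}".format(tag, datetime.datetime.utcnow())
    subprocess.run(
        ["virsh", "snapshot-create-as", domain, name,
         "forensic acquisition - do not delete"],   # description stored with the snapshot
        check=True,
    )
    return name

if __name__ == "__main__":
    created = snapshot_vm("customer-vm-01")   # hypothetical domain name of the affected instance
    print("snapshot", created, "created; note this identifier in the chain of custody")
```

Whether the instance is then analyzed live or powered down, the recorded snapshot name and timestamp help document the system state at acquisition time, as required for the investigation.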

Cloud Computing (Cloud-Computing): Foreign-Literature Translation

Graduation project documentation: English literature and Chinese translation. Student name:  Student ID:  School of Computer and Control Engineering  Major:  Supervisor:  June 2017

English literature: Cloud Computing

Cloud Computing at a Higher Level

In many ways, cloud computing is simply a metaphor for the Internet, the increasing movement of compute and data resources onto the Web. But there is a difference: cloud computing represents a new tipping point for the value of network computing. It delivers higher efficiency, massive scalability, and faster, easier software development. It is about new programming models, new IT infrastructure, and the enabling of new business models.

For those developers and enterprises who want to embrace cloud computing, Sun is developing critical technologies to deliver enterprise scale and systemic qualities to this new paradigm: (1) Interoperability - while most current clouds offer closed platforms and vendor lock-in, developers clamor for interoperability.

云计算外文翻译参考文献

云计算外文翻译参考文献

云计算外文翻译参考文献(文档含中英文对照即英文原文和中文翻译)原文:Technical Issues of Forensic Investigations in Cloud Computing EnvironmentsDominik BirkRuhr-University BochumHorst Goertz Institute for IT SecurityBochum, GermanyRuhr-University BochumHorst Goertz Institute for IT SecurityBochum, GermanyAbstract—Cloud Computing is arguably one of the most discussedinformation technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers(CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldomly discussed, but in this regard highly relevant open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore we discuss possible solutions and possible new methodologies helping customers to perform such investigations.I. INTRODUCTIONAlthough the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the companie’s assets. Normally,if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: The CSP obtains all the power over the environmentand thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics. proper manner which is hardly be possible due to the lackof circumstantial information. For auditors, this situation does not change: Questions who accessed specific data and information cannot be answered by the customers, if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensible information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations. 
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environment have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed albeit mentioned by the authors of [1] in 2009: ”[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts,and on acquisition logistics and legal issues related to cloud computing env ironments.” This statement is also confirmed by other authors [34], [36], [40] stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcendents peoples’ private and professional life, thus mirroring more and more of peoples’actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: We discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in section II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments including thecross-disciplinary aspects. We conclude in section V.II. RELATED WORKVarious works have been published in the field of cloud security and privacy [9], [35], [30] focussing on aspects for protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensis. Within this context, cryptographic proofs for verifying data integrity mainly in cloud storage offers have been proposed,yet lacking of practical implementations [24], [37], [23]. Traditional computer forensics has already well researched methods for various fields of application [4], [5], [6], [11], [13]. Also the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20] including the notionof virtual introspection [25]. 
In addition, the NIST already addressed Web Service Forensics [22] which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al [36] published a paper discussing the issues of incident response in cloud environments - unfortunately no specific issues and solutions of cloud forensics have been proposed which will be done within this work.III. TECHNICAL BACKGROUNDA. Traditional Digital ForensicsThe notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be proper isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps each having its own specificpurpose:1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing where you never know exactly where your data is and additionallydo not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results, it is complete and clear to understand. Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.B. 
Cloud ComputingAccording to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure asa Service (IaaS) model, the customer is using the virtual machine provided by the CSP for installing his own system on it. The system can be used like any other physical computer with a few limitations. However, the additive customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. For the efficiency of software development process this service model can be propellent. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model leading to further issues concerning security, privacy and the gathering of suitable evidences. Furthermore, two main deployment models, private and public cloud have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer about the application itself, the data pushed into the applications and also about the underlying technical infrastructure.C. Fault ModelBe it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:1) Maliciously Intended FaultsInternal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and state a constant threat to customers and CSP. In this model, also a malicious CSP is included albeit he isassumed to be rare in real world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threads and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk. 
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.2) Unintentional FaultsInconsistencies in technical systems or processes in the cloud do not have implicitly to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the costumer(i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.IV. TECHNICAL ISSUESDigital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators which explore these de-allocated disk space on harddisks. In case the data is in motion, data is transferred from one entity to another e.g. a typical file transfer over a network can be seen as a data in motion scenario. Several encapsulated protocols contain the data each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest or in motion but in execution. On the executing system, process information, machine instruction and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources for evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environmentsas well as suggest several solutions to these problems.A. Sources and Nature of EvidenceConcerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between thedifferent cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.1) Virtual Cloud Instance: The VM within the cloud, where i.e. data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both, the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM. 
Therefore, virtual instances can be still running during analysis which leads to the case of live investigations [41] or can be turned off leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.2) Network Layer: Traditional network forensics is knownas the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However in practice, ordinary CSP currently do not provide any log data from the network components used by the customer’s instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log datain general which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could beextracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: In ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers strongly make use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].B. Investigations in XaaS EnvironmentsTraditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They have no longer the option of seizing physical data storage. 
Data and processes of the customer are dispensed over an undisclosed amount of virtual instances, applications and network elements. Hence, it is in question whether preliminary findings of the computer forensic community in the field of digital forensics apparently have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly, will be taken into consideration. We also suggest potential solutions to the mentioned problems.1) SaaS Environments: Especially in the SaaS model, the customer does notobtain any control of the underlying operating infrastructure such as network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited userspecific application configuration settings can be controlled contributing to the evidences which can be extracted fromthe client (see section IV-A3). In a lot of cases this urges the investigator to rely on high-level logs which are eventually provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidences.a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it states a huge problem specifically for SaaS-based applications: Current global acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have tremendous impact: If sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP e.g. due to storage reasons. The customer has no ability to proof otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10]. Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information suchas access, error and event logs that could improve their situation in case of aninvestigation. 
Furthermore, due to the limited ability of receiving forensic information from the server and proofing integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR) in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer obtains theoretically the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim this transfer is encrypted but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However potential adversaries, which can compromise the application during runtime, should not be able to alter these log files afterwards. Suggested Solution:Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of an PaaS application by adversaries could be monitored by push-only mechanisms for log data presupposing that the needed information to detect such an attack are logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems havebeen compromised is crucial not only for recovering from an incident. Also forensic investigations gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances do provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is caused throughthe ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before itis transferred to third-party hosts mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although, IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP. 
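As a sketch of the signing idea suggested above, the snippet below shows how an application running on a PaaS or IaaS instance could attach an HMAC to each log record before it leaves the instance, so that later tampering with the stored records can be detected. The key handling, record format and transport are assumptions for illustration only; a real deployment would additionally encrypt the records and keep the key outside the monitored instance.

# Sketch: sign each log record with an HMAC before it is forwarded to a
# central logging host, so that subsequent modification of stored records
# can be detected. Key handling, record format and transport are assumptions.
import hmac
import json
import time
import hashlib

SIGNING_KEY = b"replace-with-a-key-kept-outside-the-instance"   # placeholder

def signed_record(message: str) -> dict:
    record = {"ts": time.time(), "msg": message}
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    payload = json.dumps({"ts": record["ts"], "msg": record["msg"]},
                         sort_keys=True).encode("utf-8")
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

if __name__ == "__main__":
    rec = signed_record("admin login from 203.0.113.7")
    print(verify(rec))            # True: record is intact
    rec["msg"] = "nothing happened here"
    print(verify(rec))            # False: tampering is detected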
He controls the hypervisor which is e.g. responsible for enforcing hardware boundaries and routing hardware requests among different VM. Hence, besides the security responsibilities of the hypervisor, he exerts tremendous control over how customer’s VM communicate with the hardware and theoretically can intervene executed processes on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore leading to the leakage of the secret key. Although this risk can be disregarded in most of the cases, the impact on the security of high security environments is tremendous.a) Snapshot Analysis: Traditional forensics expect target machines to be powered down to collect an image (dead virtual instance). This situation completely changed with the advent of the snapshot technology which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V.A snapshot, also referred to as the forensic image of a VM, providesa powerful tool with which a virtual instance can be clonedby one click including also the running system’s mem ory. Due to the invention of the snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis(live virtual instance). This behavior is especially important for scenarios in which a downtime of a system is not feasible or practical due to existing SLA. However the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances become more common providing evidence data that。

5G无线通信网络中英文对照外文翻译文献

5G无线通信网络中英文对照外文翻译文献(文档含英文原文和中文翻译)翻译:5G无线通信网络的蜂窝结构和关键技术摘要第四代无线通信系统已经或者即将在许多国家部署。

然而,随着无线移动设备和服务的激增,仍然有一些挑战尤其是4G所不能容纳的,例如像频谱危机和高能量消耗。

无线系统设计师们面临着满足新型无线应用对高数据速率和机动性要求的持续性增长的需求,因此他们已经开始研究被期望于2020年后就能部署的第五代无线系统。

在这篇文章里面,我们提出一个有内门和外门情景之分的潜在的蜂窝结构,并且讨论了多种可行性关于5G无线通信系统的技术,比如大量的MIMO技术,节能通信,认知的广播网络和可见光通信。

面临潜在技术的未知挑战也被讨论了。

介绍信息通信技术(ICT)创新合理的使用对世界经济的提高变得越来越重要。

无线通信网络在全球ICT战略中也许是最关键的元素,并且支撑着很多其他行业,它是世界上成长最快、最有活力的行业之一。

欧洲移动观察机构(EMO)报告称,2010年移动通信业总收入达1740亿欧元,从而超过了航空航天业和制药业。

无线技术的发展大大提高了人们在商业运作和社交活动方面通信和生活的能力。无线移动通信的显著成就体现在技术创新的快速步伐上。

从1991年二代移动通信系统(2G)的初次登场到2001年三代系统(3G)的首次起飞,无线移动网络已经实现了从一个纯粹的技术系统到一个能承载大量多媒体内容网络的转变。

4G无线系统被设计出来用来满足IMT-A技术使用IP面向所有服务的需求。

在4G系统中,先进的无线接口被用于正交频分复用技术(OFDM),多输入多输出系统(MIMO)和链路自适应技术。

4G无线网络在低移动性场景(例如游牧式/本地无线接入)下可支持高达1Gb/s的数据速率,在高移动性场景(例如移动接入)下可支持高达100Mb/s的数据速率。

LTE系统和它的延伸系统LTE-A,作为实用的4G系统已经在全球于最近期或不久的将来部署。

然而,需要移动宽带系统的用户数量每年仍在急剧增长。

云计算外文文献+翻译

云计算外文文献+翻译1. 引言云计算是一种基于互联网的计算方式,它通过共享的计算资源提供各种服务。

随着云计算的普及和应用,许多研究者对该领域进行了深入的研究。

本文将介绍一篇外文文献,探讨云计算的相关内容,并提供相应的翻译。

2. 外文文献概述文献标题:《云计算:一项调查》;作者:Antonio Fernández Anta, Chryssis Georgiou, Evangelos Kranakis;出版年份:2019年。该外文文献主要综述了云计算的发展和应用。

文中介绍了云计算的基本概念,包括云计算的特点、架构、服务模型以及云计算的挑战和前景。

3. 研究内容该研究综述了云计算技术的基本概念和相关技术。

文中首先介绍了云计算的定义和其与传统计算的比较,深入探讨了云计算的优势和不足之处。

随后,文中介绍了云计算的架构,包括云服务提供商、云服务消费者和云服务的基本组件。

在架构介绍之后,文中提供了云计算的三种服务模型:基础设施即服务(IaaS)、平台即服务(PaaS)和软件即服务(SaaS)。

每种服务模型都从定义、特点和应用案例方面进行了介绍,并为读者提供了更深入的了解。

此外,文中还讨论了云计算的挑战,包括安全性、隐私保护、性能和可靠性等方面的问题。

同时,文中也探讨了云计算的前景和未来发展方向。

4. 文献翻译《云计算:一项调查》是一篇全面介绍云计算的文献。

它详细解释了云计算的定义、架构和服务模型,并探讨了其优势、不足和挑战。

此外,该文献还对云计算的未来发展进行了预测。

对于研究云计算和相关领域的读者来说,该文献提供了一个很好的参考资源。

它可以帮助读者了解云计算的基本概念、架构和服务模型,也可以引导读者思考云计算面临的挑战和应对方法。

5. 结论。

计算机网络类嵌入式系统的网络服务器中英文翻译、外文翻译、外文文献翻译

Web Server for Embedded SystemsAfter the “everybody-in-the-Internet-wave” now obviously follows the“everything-in-the-Internet-wave”.The most coffee, vending and washingmachines are still not available about the worldwide net. However the embeddedInternet integration for remote maintenance and diagnostic as well as the so-calledM2M communication is growing with a considerable speed rate.Just the remote maintenance and diagnostic of components and systems by Webbrowsers via the Internet, or a local Intranet has a very high weight for manydevelopment projects. In numerous development departments people work oncompletely Web based configurations and services for embedded systems. Theremaining days of the classic user interface made by a small LC-display with frontpanel and a few function keys are over. Through future evolutions in the field ofthe mobile Internet, Bluetooth-based PAN s (Personal Area Network's) andthe rapidly growing M2M communication (M2M=Machine-to-Machine)a further innovating advance is to be expected.The central function unit to get access on an embedded system via Web browser isthe Web server. Such Web servers bring the desired HTML pages (HTML=HyperText Markup Language) and pictures over the worldwide Internetor a local network to the Web browser. This happens HTTP-based (HyperText Transfer Protocol). A TCP/IP protocol stack –that means it is based onsophisticated and established standards–manages the entire communication.Web server (HTTP server) and browser (HTTP client) build TCP/IP-applications. HTTP achieved a phenomenal distribution in the last years.Meanwhile millions of user around the world surf HTTP-based in the WorldWide Web. Today almost every personal computer offers the necessaryassistance for this protocol. This status is valid more and more for embeddedsystems also. The HTTP spreads up with a fast rate too.1. TCP/IP-based HTTP as Communication PlatformHTTP is a simple protocol that is based on a TCP/IP protocol stack (picture 1.A).HTTP uses TCP (Transmission Control Protocol). TCP is a relative complex andhigh-quality protocol to transfer data by the subordinate IP protocol. TCP itselfalways guarantees a safeguarded connection between two communication partnersbased on an extensive three-way-handshake procedure. As aresult the data transfer via HTTP is always protected. Due tothe extensive TCP protocol mechanisms HTTP offers only a low-gradeperformance.Figure 1: TCP/IP stack and HTTP programming modelHTTP is based on a simple client/server-concept. HTTP server and clientcommunicate via a TCP connection. As default TCP port value the port number80 will be used. The server works completely passive. He waits for a request(order) of a client. This request normally refers to the transmition of specificHTML documents. This HTML documents possibly have to be generateddynamically by CGI. As result of the requests, the server will answer with aresponse that usually contains the desired HTML documents among others(picture 1.B).GET /test.htm HTTP/1.1Accept]: image/gif, image/jpeg, */*User selling agent: Mozilla/4.0Host: 192.168.0.1Listing 1.A: HTTP GET-requestHTTP/1.1 200 OKDate: Mon, 06 Dec 1999 20:55:12 GMTServer: Apache/1.3.6 (Linux)Content-length: 82Content-type: text/html<html><head><title>Test-Seite</title></head><body>Test-SeiteThe DIL/NetPCs DNP/1110 – Using the Embedded Linux</body></html>Listing 1.B: HTTP response as result of the GET-request from listing 1.AHTTP requests normally consist of several text lines, which are transmitted to theserver by TCP. 
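As an illustration of the statement above, the following Python sketch sends a request very similar to Listing 1.A as plain text lines over a TCP connection and reads back the response. The host address is a placeholder and the snippet assumes a reachable web server on port 80.

# Sketch: send an HTTP GET request (as in Listing 1.A) as plain text lines
# over TCP and print the beginning of the server's response.
import socket

HOST = "192.168.0.1"   # placeholder web server address
request = (
    "GET /test.htm HTTP/1.1\r\n"
    "Accept: image/gif, image/jpeg, */*\r\n"
    "User-Agent: Mozilla/4.0\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"                       # the blank line terminates the request
)

with socket.create_connection((HOST, 80), timeout=5) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

# The response starts with a status line, followed by header lines,
# a blank line and then the (optional) content object.
print(response.decode("iso-8859-1")[:500])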
The listing 1.A shows an example. The first line characterizes therequest type (GET), the requested object (/test1.htm) and the used HTTP version(HTTP/1.1). In the second request line the client tells the server, which kind offiles it is able to evaluate. The third line includes information about theclient- software. The fourth and last line of the request from listing 1.A is used toinform the server about the IP address of the client. In according to the type ofrequest and the used client software there could follow some further lines. Asan end of the request a blank line is expected.The HTTP responses as request answer mostly consist of two parts. At first thereis a header of individual lines of text. Then follows a content object (optional).This content object maybe consists of some text lines –in case of a HTML file– ora binary file when a GIF or JPEG image should be transferred. The first line of theheader is especially important. It works as status or error message. If anerror occurs, only the header or a part of it will be transmitted as answer.2. Functional principle of a Web ServerSimplified a Web server can be imagined like a special kind of a file server.Picture 2.A shows an overview. The Web server receives a HTTP GET-requestfrom the Web browser. By this request, a specific file is required as answer (seestep 1 into picture 2.A). After that, the Web server tries to get access on the filesystem of the requested computer. Then it attempts to find the desired file (step 2).After the successful search the Web server read the entire file(step 3) and transmit it as an answer (HTTP response comprising of headerand content object) to the Web browser (step 4). If the Web server cannot findthe appropriate file in the file system, an error message (HTTP response whichonly contains the header) is simply be send as response to the client.Figure 2: Functional principle from Web server and browserThe web content is build by individual files. The base is build by static files withHTML pages. Within such HTML files there are references to further filesembedded –these files are typically pictures in GIF or JPEG format. However,also references to other objects, for example Java-Applets, are possible. After aWeb browser has received a HTML file of a Web server, this file will beevaluated and then searched for external references. Now the steps 1 to 4 frompicture 2.A will run again for every external reference in order to request therespective file from the corresponding Web server. Please note, that such areference consists of the name or IP address of a Web server (e.g. ""),as well as the name of the desired file (e.g. "picture1.gif"). So virtually everyreference can refer to another Web server. In other words, a HTML file could belocated on the server "ssv-embedded.de" but the required picture -which isexternal referenced by this HTML file- is located on the Web server"". Finally this (worldwide) networking of separate objects is thecause for the name World Wide Web (WWW). All files, which are required by aWeb server, are requested from a browser like the procedure shown on picture2.A. Normally these files are stored in the file system of the server. TheWebmaster has to update these files from time to time.A further elementary functionality of a Web server is the CommonGateway Interface(CGI) -we have mentioned before. Originally this technologyis made only for simple forms, which are embedded into HTML pages. 
The data,resulting from the padding of a form, will be transmitted to a Web server viaHTTP-GET or POST-request (see step 1 into picture 2.B). In such a GET- orPOST-request the name of the CGI program, which is needed for theevaluation of a form, is fundamentally included. This program has to be on theWeb server. Normally the directory "/cgi-bin" is used as storage location.As result of the GET- or POST-request the Web server starts the CGI programlocated in the subdirectory "/cgi-bin" and delivers the received data in form ofparameters (step 2). The outputs of a CGI program are guided to the Web server(step 3). Then the Web server sends them all as responses to the Web browser(step 4).3. Dynamic generated HTML PagesIn contradiction to a company Web site server, which informs people about theproduct program and services by static pages and pictures, an embeddedWeb server has to supply dynamically generated contents. The embedded Webserver will generate the dynamic pages in the moment of the first access by abrowser. How else could we check the actual temperature of a system viaInternet? Static HTML files are not interesting for an embedded Web server.The most information about the firmware version and service instructions arestored in HTML format. All other tasks are normally made via dynamic generatedHTML.There are two different technologies to generate a specific HTML page in themoment of the request: First the so-called server-side-scripting and secondthe CGI programming. At the server-side-scripting, script code is embeddedinto a HTML page. If required, this code will be carried out on the server (server-sided).For this, there are numerous script languages available. All these languages areusable inside a HTML-page. In the Linux community PHP is used mostly. Thefavourite of Microsoft is VBScript. It is also possible to insert Java directly intoHTML pages. Sun has named this technology JSP(Java Server Pages).The HTML page with the script code is statically stored in the file system of theWeb server. Before this server file is delivered to the client, a special programreplaces the entire script code with dynamic generated standard HTML. The Webbrowser will not see anything from the script language.Figure 3: Single steps of the Server-Side-ScriptingPicture 3 shows the single steps of the server-side-scripting. In step 1 the Webbrowser requests a specific HTML file via HTTP GET-request. The Web serverrecognizes the specific extension of the desired file (for example *.ASP or *.PHPinstead of *.HTM and/or *.HTML) and starts a so-called scripting engine(see step 2). This program gets the desired HTML file including the script codefrom the file system (step 3), carry out the script code and make a newHTML file without script code (step 4). The included script code will be replacedby dynamic generated HTML. This new HTML file will be read by the Webserver (step 5) and send to the Web browser (step 6). If a server-sided scripting issupposed to be used by an embedded Web server, so you haveto consider the necessary additional resources. A simple example: In orderto carry out the embedded PHP code into a HTML page, additional programmodules are necessary for the server. A scripting engine together with theembedded Web server has to be stored in the Flash memory chip of an embeddedsystem. Through that, during run time more main memory is required.4. Web Server running under LinuxOnce spoken about Web servers in connection with Linux most peopleimmediately think of Apache. 
After investigations of the Netcraft Surveythis program is the mostly used Web server worldwide. Apache is anenhancement of the legendary NCSA server. The name Apache itself hasnothing to do with Red Indians. It is a construct from "A Patchy Server" becausethe first version was put together from different code and patch files.Moreover there are numerous other Web servers - even for Linux. Most of this arestanding under the GPL (like Apache) and can be used license free. Avery extensive overview you can find at "/". EveryWeb server has his advantages and disadvantages. Some are developed forspecific functions and have very special qualities. Other distinguishes at bestthrough their reaction rate at many simultaneous requests, as wellas the variety of theirconfiguration settings. Others are designed to need minimal resources and offer very small setting possibilities, as well as only one connection to a client.The most important thing by an embedded Web server is the actual resource requirements. Sometimes embedded systems offer only minimal resources, which mostly has to be shared with Linux. Meanwhile there are numerous high- performance 32-bit-386/486-microcontroller or (Strong)ARM-based embedded systems that own just 8 Mbytes RAM and 2 Mbytes Flash-ROM (picture 4). Outgoing from this ROM (Read-only-Memory, i.e. Flash memory chips) a complete Linux, based on a 2.2- or 2.4-Kernel with TCP/IP protocol stack and Web server, will be booted. HTML pages and programs are also stored in the ROM to generate the dynamic Web pages. The space requirements of an embedded system are similar to a little bigger stamp. There it is quite understandable that there is no place for a powerful Web server like Apache.Figure 4: Embedded Web Server Module with StrongARM and LinuxBut also the capability of an Apache is not needed to visualize the counter of a photocopier or the status of a percolator by Web servers and browsers. In most cases a single Web server is quite enough. Two of such representatives are boa () and thttpd (). At first, both Web servers are used in connection with embedded systems running under Linux. The configuration settings for boa and thttpd are poor, but quite enough. By the way, the source code is available to the customer. The practicable binary files for these servers are always smaller than 80 Kbytes and can be integrated in the most embedded systems without problems. For the dynamic generation of HTML pages both servers only offer CGI (Common Gateway Interface) as enlargement. Further technologies, like server-side-includes (SSI) are not available.The great difference between an embedded Web server and Apache is, next to the limited configuration settings, the maximal possible number of simultaneous requests. High performance servers like Apache immediately make an own process for every incoming call request of a client. Inside of this process allfurther steps will then be executed. This requires a very good programming and a lot of free memory resources during run time. But, on the other hand many Web browsers can access such a Web server simultaneously. Embedded Web server like boa and thttpd work only with one single process. If two users need to get access onto a embedded Web server simultaneously, one of both have to wait a few fractions of a second. But in the environment of the embedded systems that is absolutely justifiable. In this case it is first of all a question of remote maintenance, remote configuration and similar tasks. 
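The single-process model described above can be illustrated with a small sketch: the toy server below accepts exactly one connection at a time and answers requests strictly sequentially, so a second browser simply waits until the first request has been served. It is only meant to mirror the boa/thttpd behaviour in a few lines of Python, not to reproduce either server; the document root and port are assumptions.

# Toy single-process web server in the spirit of boa/thttpd: one connection
# is handled at a time, so further clients have to wait. Illustration only.
import os
import socket

DOCROOT = "/var/www"          # assumed document root
ADDRESS = ("0.0.0.0", 8080)

def handle(conn: socket.socket) -> None:
    request = conn.recv(4096).decode("iso-8859-1")
    path = request.split(" ")[1] if " " in request else "/"
    filename = os.path.join(DOCROOT, path.lstrip("/") or "index.html")
    try:
        with open(filename, "rb") as fh:
            body = fh.read()
        header = (f"HTTP/1.0 200 OK\r\nContent-Length: {len(body)}\r\n"
                  "Content-Type: text/html\r\n\r\n").encode("ascii")
    except OSError:
        body = b"<html><body>404 Not Found</body></html>"
        header = (f"HTTP/1.0 404 Not Found\r\nContent-Length: {len(body)}\r\n"
                  "Content-Type: text/html\r\n\r\n").encode("ascii")
    conn.sendall(header + body)

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(ADDRESS)
    srv.listen(1)              # queue of one: additional clients wait
    while True:
        conn, _addr = srv.accept()
        with conn:
            handle(conn)       # strictly sequential, single process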
There are not many simultaneous requests expected.The DIL/NetPCs DNP/1110 – Using the Embedded LinuxList of FiguresFigure 1: TCP/IP stack and HTTP programming modelFigure 2: Functional principle from Web server and browserFigure 3: Single steps of the Server-Side-ScriptingFigure 4: Embedded Web Server Module with StrongARM and LinuxListingsListing 1.A: HTTP GET-requestListing 1.B: HTTP response as result of the GET-request from listing 1.A ContactSSV Embedded SystemsHeisterbergallee 72D-30453 HannoverTel. +49-(0)511-40000-0Fax. +49-(0)511-40000-40Email: sales@ist1.deWeb: www.ssv-embedded.deDocument History (Sadnp05.Doc)Revision Date Name1.00 24.05.2002FirstVersion KDWThis document is meant only for the internal application. The contents ofthis document can change any time without announcement. There is takenover no guarantee for the accuracy of the statements. Copyright ©SSV EMBEDDED SYSTEMS 2002. All rights reserved.INFORMATION PROVIDED IN THIS DOCUMENT IS PROVIDED 'ASIS' WITHOUT WARRANTY OF ANY KIND. The user assumes the entirerisk as to the accuracy and the use of this document. Some names withinthis document can be trademarks of their respective holders.北京工业大学毕业设计(译文)译文:嵌入式系统的网络服务器在“每个人都处在互联网的浪潮中”之后,现在很明显随之而来的是“每件事都处在互联网的浪潮中”。

计算机网络技术中英文对照外文翻译文献

中英文资料外文翻译网站建设技术1.介绍网络技术的发展,为今天全球性的信息交流与资源共享和交往提供了更多的途径和可能。

足不出户便可以知晓天下大事,按几下键盘或点几下鼠标可以与远在千里之外的朋友交流,网上通信、网上浏览、网上交互、网上电子商务已成为现代人们生活的一部分。

Internet 时代, 造就了人们新的工作和生活方式,其互联性、开放性和共享信息的模式,打破了传统信息传播方式的重重壁垒,为人们带来了新的机遇。

随着计算机和信息时代的到来,人类社会前进的脚步在逐渐加快。

近几年网页设计发展,快得人目不暇接。

随着网页设计技术的发展,丰富多彩的网页成为网上一道亮丽的风景线。

要想设计美观实用的网页就应该深入掌握网站建设技术。

在建立网站时,我们分析了网站建立的目的、内容、功能、结构,应用了更多的网页设计技术。

2、网站的定义2.1 如何定义网站确定网站的任务和目标,是建设网站所面临的最重要的问题。

为什么人们会来到你的网站? 你有独特的服务吗? 人们第一次到你的网站是为了什么? 他们还会再来吗? 这些问题都是定义网站时必须考虑的问题。

要定义网站,首先,必须对整个网站有一个清晰认识,弄清到底要设计什么、主要的目的与任务、如何对任务进行组织与规划。

其次,保持网站的高品质。

在众多网站的激烈竞争中,高品质的产品是长期竞争的最大优势。

一个优秀的网站应具备:(1)用户访问网站的速度要快;(2)注意反馈与更新。

及时更新网站内容、及时反馈用户的要求;(3)首页设计要合理。

首页给访问者留下的第一印象很重要,设计务必精美,以求产生良好的视觉效果。

2.2 网站的内容和功能在网站的内容方面,就是要做到新、快、全三个方面。

网站内容的类型包括静态的、动态的、功能的和事物处理的。

确定网站的内容是根据网站的性质决定的,在设计政府网站、商业网站、科普性网站、公司介绍网站、教学交流网站等的内容和风格时各有不同。

我们建立的网站同这些类型的网站性质均不相同。

中英文翻译云计算

Cloud computing currently emerges as a hot topic due to its abilities to offer flexible dynamic IT infrastructures, QoS guaranteed computing environments and configurable software services. As reported in Google trends (Figure 1), Cloud computing (blue line), which is enabled by Virtualization technology (yellow line), has already outpaced Grid computing by far [8] (red line).

Cloud computing distinguishes itself from other computing paradigms, like Grid computing [8], Global computing [7], Internet Computing [14], in the following aspects:
• User-centric interfaces. Cloud services could be accessed with user-centric interfaces:
– The Cloud interfaces do not force users to change their working habits, e.g., programming language, compiler, or operating system.
– The Cloud client which is required to be installed locally is lightweight. For example, the Nimbus Cloudkit client size is around 15 MB.
– Cloud interfaces are location independent and can be accessed by some well established interfaces like Web services framework and Internet browser.
• On-demand service provision. Computing Clouds provide resources and services for users on demand. Users can customize their computing environments later on, for example software installation, network configuration, as users usually own administrative privileges.
• QoS guaranteed offer. The computing environments provided by computing Clouds can guarantee QoS for users, e.g., hardware performance like CPU speed, I/O bandwidth and memory size.
• Autonomous System. The computing Cloud is an autonomous system and managed transparently to Cloud users. Hardware, software and data inside Clouds can be automatically reconfigured, orchestrated and consolidated to a single platform image, finally rendered to Cloud users.

信息技术发展趋势研究论文中英文外文翻译文献

信息技术发展趋势研究论文中英文外文翻译文献本文旨在通过翻译介绍几篇关于信息技术发展趋势的外文文献,以帮助读者更全面、深入地了解该领域的研究进展。

以下是几篇相关文献的简要介绍:1. 文献标题: "Emerging Trends in Information Technology"- 作者: John Smith- 发表年份: 2019本文调查了信息技术领域的新兴趋势,包括人工智能、大数据、云计算和物联网等。

通过对相关案例的分析,研究人员得出了一些关于这些趋势的结论,并探讨了它们对企业和社会的潜在影响。

2. 文献标题: "Cybersecurity Challenges in the Digital Age"- 作者: Anna Johnson- 发表年份: 2020这篇文献探讨了数字时代中信息技术领域所面临的网络安全挑战。

通过分析日益复杂的网络威胁和攻击方式,研究人员提出了一些应对策略,并讨论了如何提高组织和个人的网络安全防护能力。

3. 文献标题: "The Impact of Artificial Intelligence on Job Market"- 作者: Sarah Thompson- 发表年份: 2018这篇文献研究了人工智能对就业市场的影响。

作者通过分析行业数据和相关研究,讨论了自动化和智能化技术对各个行业和职位的潜在影响,并提出了一些建议以适应未来就业市场的变化。

以上是对几篇外文文献的简要介绍,它们涵盖了信息技术发展趋势的不同方面。

读者可以根据需求进一步查阅这些文献,以获得更深入的了解和研究。

云存储服务系统研究中英文外文文献

本科毕业设计(论文)中英文对照翻译(此文档为word格式,下载后您可任意修改编辑!)文献出处:Mehra P. The study of cloud storage service system [J]. Internet Computing, IEEE, 2016, 1(5): 10-19.原文The study of cloud storage service systemMehra PAbstractCloud storage is a new concept, which developments and extensions in a cloud computing, so to understand cloud storage is the first to knowabout cloud computing. Cloud computing is a kind of super computing model based on Internet, in a remote data center, tens of thousands of computer and server connected to a computer cloud. Therefore, cloud computing allows you to experience even 10 trillion times a second operation ability, have such a powerful computing ability to simulate nuclear explosion, forecasting and market development trend of climate change. User through a computer, laptop, mobile phone access to the data center, operation according to their needs. With the acceleration development of the concept of cloud computing, people began to looking for a new place for huge amounts of information in cloud storage. The cloud (cloud storage) emerged from a widely attention and support. Similar to the concept of cloud storage and cloud computing, it refers to the application through the cluster, grid technology or distributed file systems, and other functions, the network of a large number of various types of storage devices set up by applying the software to work together, common external provide access to data storage and business functions of a system. Keywords: cloud storage, cloud storage service system, the HDFS1 IntroductionThe rise of cloud makes the whole IT industry in a significant change, from equipment/application centered toward centered on information and this change will cause a series of changes, and affect thetechnical and business mode two levels. The biggest characteristic of the cloud is a mass, high performance/high traffic and low cost, and the biggest change is that its bring providers from sales tools to gradually according to the actual use of tools to collect fees, from selling products to selling services. Therefore, it can be said that cloud storage is not stored, but service. Cloud storage but also has the following characteristics: strong extensibility, should not be limited by the specific geographic location, based on the business component, according to the use of fees, and across different applications. The research content of this article for the study of the cloud storage service system based on HDFS, aims to build a cloud storage service system based on HDFS, solve the enterprise mass data storage problem, reduce the cost of implementing the distributed file system, promote the Hadoop technology promotion. Cloud storage is widely discussed in the present on the cloud computing concept of extension and development, to a large number of different types of storage devices in the network integration, thereby providing access to data storage and business functions. Hadoop distributed file system (HDFS) is the underlying implementation of open source cloud computing software platform Hadoop framework part, has the characteristic such as high transmission rate, high fault tolerance, can be in the form of a flow to access the data in the file system, so as to solve the access speed and security issues, achieve huge amounts of datastorage management.2 Each big cloud storage products of the company 2.1 The Amazon’s strategyAmazon is among the first to launch the cloud storage service enterprises. 
Amazon first launch a service of cloud computing is Amazon web services (Amazon web services, the AWS), the cloud computing service is composed of four core components: simple arrangement, simple storage service, elastic computing cloud and is still in the backs of the test. In August 2008, Amazon in order to enhance its efforts on the cloud storage strategy, its Internet services add "persist" function to the elastic compute cloud (ECZ).The vendor launched Elastic Block Storage (Elastic Block Storage, EBS) products, and claims that the product can through the Internet service form at the same time provide Storage and computing functions. 2.2 Nirvana and CDNetworks strategyFamous cloud storage platform providers Nirvanix and content delivery network service provider CDNetworks released a new cooperation, and strategic partnerships, to provide the industry the only cloud storage and content delivery service integration platfo rm. Use it’s located in all parts of the world 63 content distribution node, the user can store unlimited data on the Internet, and get good data protection and data security guarantee. Cooperation will bring CDNetworks in cloud storageand Nirvanix the same capacity, not only can safely store huge amounts of media content, and can rely on CDNetworks data center to deliver data anywhere in the world in real time, the two companies, said based on this partnership of cooperation, make it have better overall media delivery ability, also helps users save 80% 90% of the construction of its own storage infrastructure costs.2.3 Google's strategyThe company in this year's FO developer technical conference announced called "Google storage cloud storage services, to challenge Amazon s3 cloud storage service. Look from the function design, Google storage will refer to the Amazon s3, for existing s3 user’s switch to Google storage service. Google storage services will include RESTAPI agreement, to allow developers to download via Google account provides authentication, data backup services. In addition, Google will also to outside developers to provide network user interface and data management tools. 2.4 EMC’s strategyEMC's cloud storage infrastructure solution is a kind of management system based on strategy, the service provided can create different types of cloud storage ability, for example, it can be for not paying customers to create file two copies, and stored in different locations around the world, and for paying customers to create a backup storage on October 5, andprovides its all over the world access to the file of higher reliability and faster access. In software systems, Atm0S including data services, such as copying, data compression, data reduplication, with cheap standard x86 server to hundreds of terabytes of hard disk storage space.EMC has promised that it automatically configure the new storage and adaptive ability of a hardware failure, also allows the user to use W b manage service agreement and read. At present there are three versions, Atm0S system capacity is respectively 120 TB otb, 24, and 36 orb, All of them are based on x86 servers and support gigabit or 10 gb Ethernet connection.3 Cluster development of storage technology The rise of cloud storage is upending the existing network storage architecture. Facing the current pet bytes of mass storage requirements, the traditional SAN or NAS will exist in the expansion of the capacity and performance bottlenecks. 
Such as by its physical elements (such as the number of disk drives, the connected server performance and memory size and the number of controller), can cause a lot of functional limitations (such as: the number of file system support, snapshot or copy number, etc.).Once encountered the bottleneck of storage system, it will constantly encourage users to upgrade to a bigger storage system and add more management tools, thus increasing the cost. Cloud storage the service mode of the new storage architecture is demanded to keep very low cost, and some existinghigh-end storage devices are obviously cannot meet this need. From the perspective of the practice of Google company, they are not used in the existing cloud computing environment SAN architecture, but use, is a scalable distributed file system GFS) this is a highly efficient cluster storage technology.GFS is a scalable distributed file system, used in large, distributed, on a visit to a large amount of data applications. It runs on ordinary PC, but can provide strong fault tolerance, can give a large number of users with the overall performance of the service. Cloud storage 130] is not stored, but service. Wan and the Internet like a cloud, the cloud storage for users, not referring to a particular device, but is a by many a collection of storage devices and servers. Users use the cloud storage, not using a storage device, but use, is the entire cloud storage system with a data access service. So strictly speaking, the cloud is not stored, but a service. Cloud storage is the core of application software combined with a storage device, by applying the software to realize the change of the service to the storage device. 4 Cloud storage system analysesCompared with the traditional storage devices, cloud storage is not only hardware, but a network devices, storage devices, servers, applications, public access interface, access, and the client program such as a complex system composed of multiple parts. Parts storage device as the core, through the application software to provide access to datastorage and business services. The structure of cloud storage system model consists of four layers.(1) The storage layerStorage layer is the most basic part of the cloud storage. Storage devices can be a fiber channel storage devices, or other storage devices. Storage devices are often large number of cloud storage and distribution of many different areas, between each other through the wide area network, Internet or fiber channel network together. Storage devices is a unified storage management system, can realize the logic of storage virtualization management, more link redundancy management, as well as the hardware equipment condition monitoring and fault maintenance.(2) The basic managementBased management is the core part of the cloud storage, is also the most difficult part of the cloud storage. Based management through cluster and grid computing, distributed file system such as technology, realize the cloud storage between multiple storage devices in the work, make multiple storage devices can provide the same service, and to provide better data access performance, bigger and stronger content distribution system, 1391, data encryption technology to ensure the data in the cloud storage will not be access by unauthorized users, at the same time, through a variety of data for disaster and techniques and measurescan ensure that data is not lost in the cloud storage, ensure the security and stability of the cloud storage itself. 
(3) The application of the interface layer Application of the interface layer is the most flexible part of the cloud storage. Different cloud storage operation unit can be according to actual business types, different application service interface, with the application of different services. Such as video monitoring application platform, network hard disk reference platform, the remote data backup application platform, etc. (4) Any an authorized user can access layer through a standard utility application login interface to cloud storage system, the cloud storage service. Cloud storage operation services, cloud storage provide different type of access and the access method.译文云存储服务系统研究Mehra P摘要云存储是在云计算(cloud computing)概念上延伸和发展出来的一个新的概念,因此,要了解云存储首先要了解云计算。

(完整word版)JAVA外文文献+翻译

Java and the Internet
If Java is, in fact, yet another computer programming language, you may question why it is so important and why it is being promoted as a revolutionary step in computer programming. The answer isn't immediately obvious if you're coming from a traditional programming perspective. Although Java is very useful for solving traditional stand-alone programming problems, it is also important because it will solve programming problems on the World Wide Web.

1. Client-side programming
The Web's initial server-browser design provided for interactive content, but the interactivity was completely provided by the server. The server produced static pages for the client browser, which would simply interpret and display them.

Basic HTML contains simple mechanisms for data gathering: text-entry boxes, check boxes, radio boxes, lists and drop-down lists, as well as a button that can only be programmed to reset the data on the form or "submit" the data on the form back to the server.
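To show what happens when such a form is "submitted" back to the server, the following minimal sketch implements the server side of that round trip. It is written in Python rather than Java purely for brevity, and the form field names are assumptions; the handler simply decodes the posted fields and answers with a small HTML page.

# Minimal sketch of the server side of an HTML form submission: the browser
# posts the form fields, the handler decodes them and replies with plain HTML.
# Field names ("name", "email") are assumptions for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class FormHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode("utf-8"))
        name = fields.get("name", ["unknown"])[0]
        email = fields.get("email", [""])[0]
        body = f"<html><body>Received: {name} ({email})</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), FormHandler).serve_forever()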

英语作文云计算

英语作文云计算Cloud computing has revolutionized the way we access and manage data, offering a wide range of benefits that have transformed the landscape of technology and business. This essay will explore the concept of cloud computing, its advantages, and its potential impact on the future.Introduction to Cloud ComputingCloud computing is a model of delivering computing services that involves a network of remote servers hosted on the internet to store, manage, and process data, rather than a local server or a personal computer. The term "cloud" refers to the internet, and the services are provided on demand, allowing for scalability and flexibility.Advantages of Cloud Computing1. Cost Efficiency: One of the most significant benefits of cloud computing is cost savings. Companies can avoid the high upfront costs of purchasing hardware and software, reducing the need for on-premises IT infrastructure.2. Scalability: Cloud services can easily scale up or down to match the organization's needs. This means that as a business grows, its computing resources can expand without the needfor significant capital outlay.3. Accessibility: Data can be accessed from anywhere with an internet connection, which is particularly useful for remote teams and for employees who travel frequently.4. Reliability and Redundancy: Cloud providers typicallyoffer robust security measures and redundancy, ensuring that data is backed up and protected against loss.5. Maintenance and Updates: Cloud providers are responsible for maintaining the servers, which means that users do not have to worry about software updates or hardware maintenance.Impact on the FutureThe future of cloud computing is promising, with continued growth expected in various sectors. As more businesses migrate to the cloud, we can anticipate further advancements in:- Artificial Intelligence (AI): The cloud provides the necessary computational power to train and deploy AI models efficiently.- Internet of Things (IoT): With the proliferation of connected devices, cloud computing will play a crucial rolein managing the vast amounts of data they generate.- Big Data Analytics: Cloud platforms offer the tools and infrastructure needed to process and analyze large datasets, enabling businesses to make data-driven decisions.ConclusionCloud computing has become an integral part of modern technology infrastructure. Its ability to provide cost-effective, scalable, and accessible computing resources has made it an attractive option for businesses of all sizes. As technology continues to evolve, the role of cloud computing is set to expand, shaping the way we work and live in the digital age.This essay provides an overview of cloud computing, highlighting its key benefits and potential future impact. It is designed to give readers a clear understanding of the subject and to encourage further exploration of the topic.。

云计算技术的应用与发展趋势(英文中文双语版优质文档)

云计算技术的应用与发展趋势(英文中文双语版优质文档)With the continuous development of information technology, cloud computing technology has become an indispensable part of enterprise information construction. Cloud computing technology can help enterprises realize a series of functions such as resource sharing, data storage and processing, application development and deployment. This article will discuss from three aspects: the application of cloud computing technology, the advantages of cloud computing technology and the development trend of cloud computing technology.1. Application of Cloud Computing Technology1. Resource sharingCloud computing technology can bring together different resources to realize resource sharing. Enterprises can use cloud computing technology to share resources such as servers, storage devices, and network devices, so as to maximize the utilization of resources.2. Data storage and processingCloud computing technology can help enterprises store and process massive data. Through cloud computing technology, enterprises can store data in the cloud to realize remote access and backup of data. At the same time, cloud computing technology can also help enterprises analyze and process data and provide more accurate decision support.3. Application development and deploymentCloud computing technology can help enterprises develop and deploy applications faster and more conveniently. Through cloud computing technology, enterprises can deploy applications on the cloud to realize remote access and management of applications. At the same time, cloud computing technology can also provide a variety of development tools and development environment, which is convenient for enterprises to carry out application development.2. Advantages of cloud computing technology1. High flexibilityCloud computing technology can flexibly adjust the usage and allocation of resources according to the needs of enterprises, so as to realize the optimal utilization of resources. At the same time, cloud computing technology can also support elastic expansion and contraction, which is convenient for enterprises to cope with business peaks and valleys.2. High securityCloud computing technology can ensure the security of enterprise data through data encryption, identity authentication, access control and other means. At the same time, cloud computing technology can also provide a multi-level security protection system to prevent security risks such as hacker attacks and data leakage.3. Cost-effectiveCompared with the traditional IT construction model, the cost of cloud computing technology is lower. Through cloud computing technology, enterprises can avoid large-scale hardware investment and maintenance costs, and save enterprise R&D and operating expenses.4. Convenient managementCloud computing technology can help enterprises achieve unified resource management and monitoring. Through cloud computing technology, enterprises can centrally manage resources such as multiple servers, storage devices, and network devices, which is convenient for enterprises to carry out unified monitoring and management.5. Strong scalabilityCloud computing technology can quickly increase or decrease the usage and configuration of resources according to the needs of enterprises, so as to realize the rapid expansion and contraction of business. 
At the same time, cloud computing technology can also provide a variety of expansion methods, such as horizontal expansion, vertical expansion, etc., to facilitate enterprises to expand their business on demand.3. The development trend of cloud computing technology1. The advent of the multi-cloud eraWith the development of cloud computing technology, the multi-cloud era has arrived. Enterprises can choose different cloud platforms and deploy services on multiple clouds to achieve high availability and elastic expansion of services.2. Combination of artificial intelligence and cloud computingArtificial intelligence is one of the current hot technologies, and cloud computing technology can also provide better support for the development of artificial intelligence. Cloud computing technology can provide high-performance computing resources and storage resources, providing better conditions for the training and deployment of artificial intelligence.3. The Rise of Edge ComputingEdge computing refers to the deployment of computing resources and storage resources at the edge of the network to provide faster and more convenient computing and storage services. With the development of the Internet of Things and the popularization of 5G networks, edge computing will become an important expansion direction of cloud computing technology.4. Guarantee of security and privacyWith the widespread application of cloud computing technology, data security and privacy protection have become important issues facing cloud computing technology. In the future, cloud computing technology will pay more attention to security measures such as data encryption, identity authentication and access control to ensure the security and privacy of corporate and personal data.To sum up, cloud computing technology has become an indispensable part of enterprise information construction. Through cloud computing technology, enterprises can realize a series of functions such as resource sharing, data storage and processing, application development and deployment. At the same time, cloud computing technology also has the advantages of high flexibility, high security, high cost-effectiveness, convenient management and strong scalability. In the future, with the multi-cloud era, the combination of artificial intelligence and cloud computing, the rise of edge computing, and the protection of security and privacy, cloud computing technology will continue to enhance its importance and application value in enterprise information construction.随着信息技术的不断发展,云计算技术已经成为企业信息化建设中不可或缺的一部分。

计算机专业中英文翻译外文翻译文献翻译

英文参考文献及翻译Linux - Operating system of cybertimes Though for a lot of people , regard Linux as the main operating system to make up huge work station group, finish special effects of " Titanic " make , already can be regarded as and show talent fully. But for Linux, this only numerous news one of. Recently, the manufacturers concerned have announced that support the news of Linux to increase day by day, users' enthusiasm to Linux runs high unprecedentedly too. Then, Linux only have operating system not free more than on earth on 7 year this piece what glamour, get the favors of such numerous important software and hardware manufacturers as the masses of users and Orac le , Informix , HP , Sybase , Corel , Intel , Netscape , Dell ,etc. , OK?1.The background of Linux and characteristicLinux is a kind of " free (Free ) software ": What is called free,mean users can obtain the procedure and source code freely , and can use them freely , including revise or copy etc.. It is a result of cybertimes, numerous technical staff finish its research and development together through Inte rnet, countless user is it test and except fault , can add user expansion function that oneself make conveniently to participate in. As the most outstanding one in free software, Linux has characteristic of the following:(1)Totally follow POSLX standard, expand the network operatingsystem of supporting all AT&T and BSD Unix characteristic. Because of inheritting Unix outstanding design philosophy , and there are clean , stalwart , high-efficient and steady kernels, their all key codes are finished by Li nus Torvalds and other outstanding programmers, without any Unix code of AT&T or Berkeley, so Linu x is not Unix, but Linux and Unix are totally compatible.(2)Real many tasks, multi-user's system, the built-in networksupports, can be with such seamless links as NetWare , Windows NT , OS/2 ,Unix ,etc.. Network in various kinds of Unix it tests to be fastest in comparing and assess efficiency. Support such many kinds of files systems as FAT16 , FAT32 , NTFS , Ex t2FS , ISO9600 ,etc. at the same time .(3) Can operate it in many kinds of hardwares platform , including such processors as Alpha , SunSparc , PowerPC , MIPS ,etc., to various kinds of new-type peripheral hardwares, can from distribute on global numerous programmer there getting support rapidly too.(4) To that the hardware requires lower, can obtain very good performance on more low-grade machine , what deserves particular mention is Linux outstanding stability , permitted " year " count often its running times.2.Main application of Linux At present,Now, the application of Linux mainly includes:(1) Internet/Intranet: This is one that Linux was used most at present, it can offer and include Web server , all such Inter net services as Ftp server , Gopher server , SMTP/POP3 mail server , Proxy/Cache server , DNS server ,etc.. Linux kernel supports IPalias , PPP and IPtunneling, these functions can be used for setting up fictitious host computer , fictitious service , VPN (fictitious special-purpose network ) ,etc.. 
Operating Apache Web server on Linux mainly, the occupation rate of market in 1998 is 49%, far exceeds the sum of such several big companies as Microsoft , Netscape ,etc..(2) Because Linux has outstanding networking ability , it can be usedin calculating distributedly large-scaly, for instance cartoon making , scientific caculation , database and file server ,etc..(3) As realization that is can under low platform fullness of Unix that operate , apply at all levels teaching and research work of universities and colleges extensively, if Mexico government announce middle and primary schools in the whole country dispose Linux and offer Internet service for student already.(4) Tabletop and handling official business appliedly. Application number of people of in this respect at present not so good as Windows of Microsoft far also, reason its lie in Lin ux quantity , desk-top of application software not so good as Windows application far not merely, because the characteristic of the freedom software makes it not almost have advertisement thatsupport (though the function of Star Office is not second to MS Office at the same time, but there are actually few people knowing).3.Can Linux become a kind of major operating system?In the face of the pressure of coming from users that is strengthened day by day, more and more commercial companies transplant its application to Linux platform, comparatively important incident was as follows, in 1998 ①Compaq and HP determine to put forward user of requirement truss up Linux at their servers , IBM and Dell promise to offer customized Linux system to user too. ②Lotus announce, Notes the next edition include one special-purpose edition in Linux. ③Corel Company transplants its famous WordPerfect to on Linux, and free issue. Corel also plans to move the other figure pattern process products to Linux platform completely.④Main database producer: Sybase , Informix , Oracle , CA , IBM have already been transplanted one's own database products to on Linux, or has finished Beta edition, among them Oracle and Informix also offer technical support to their products.4.The gratifying one is, some farsighted domestic corporations have begun to try hard to change this kind of current situation already. Stone Co. not long ago is it invest a huge sum of money to claim , regard Linux as platform develop a Internet/Intranet solution, regard this as the core and launch Stone's system integration business , plan to set up nationwide Linux technical support organization at the same time , take the lead to promote the freedom software application and development in China. In addition domestic computer Company , person who win of China , devoted to Linux relevant software and hardware application of system popularize too. Is it to intensification that Linux know , will have more and more enterprises accede to the ranks that Linux will be used with domestic every enterprise to believe, more software will be planted in Linux platform. Meanwhile, the domestic university should regard Linux as the original version and upgrade already existing Unix content of courses , start with analysing the source code and revising the kernel and train a large number of senior Linux talents, improve our country's own operating system. 
Having only really grasped the operating system, the software industry of our country could be got rid of and aped sedulously at present, the passive state led by the nose by others, create conditions for revitalizing the software industry of our country fundamentally.中文翻译Linux—网络时代的操作系统虽然对许多人来说,以Linux作为主要的操作系统组成庞大的工作站群,完成了《泰坦尼克号》的特技制作,已经算是出尽了风头。

信息系统信息技术中英文对照外文翻译文献

中英文资料外文翻译文献Information Systems Outsourcing Life Cycle And Risks Analysis 1. IntroductionInformation systems outsourcing has obtained tremendous attentions in the information technology industry.Although there are a number of reasons for companies to pursuing information systems (IS)outsourcing , the most prominent motivation for IS outsourcing that revealed in the literatures was “cost saving”. Costfactor has been a major decision factors for IS outsourcing.Other than cost factor, there are other reasons for outsourcing decision.The Outsourcing Institute surveyed outsourcing end-users from their membership in 1998 and found that top 10 reasons companies outsource were:Reduce and control operating costs,improve company focus,gain access to world-class capabilities,free internal resources for other purposes, resources are not available internally, accelerate reengineering benefits, function difficult to manage/out of control,make capital funds available, share risks, and cash infusion.Within these top ten outsourcing reasons, there are three items that related to financial concerns, they are operating costs, capital funds available, and cash infusion. Since the phenomenon of wage difference exists in the outsourced countries, it is obvious that outsourcing companies would save remarkable amount of labor cost.According to Gartner, Inc.'s report, world business outsourcing services would grow from $110 billion in 2002 to $173 billion in 2007,a proximately 9.5% annual growth rate.In addition to cost saving concern, there are other factors that influence outsourcing decision, including the awareness of success and risk factors, the outsourcing risks identification and management,and the project quality management. Outsourcing activities are substantially complicated and outsourcing project usually carries a huge array of risks. Unmanaged outsourcing risks will increase total project cost, devaluatesoftware quality, delay project completion time, and finally lower the success rate of the outsourcing project.Outsourcing risks have been discovered in areas such as unexpected transition and management costs, switching costs, costly contractual amendments, disputes and litigation, service debasement, cost escalation, loss of organizational competence, hidden service costs,and so on.Most published outsourcing studies focused on organizational and managerial issues. We believe that IS outsourcing projects embrace various risks and uncertainty that may inhibit the chance of outsourcing success. In addition to service and management related risk issues, we feel that technical issues that restrain the degree of outsourcing success may have been overlooked. These technical issues are project management, software quality, and quality assessment methods that can be used to implement IS outsourcing projects.Unmanaged risks generate loss. We intend to identify the technical risks during outsourcing period, so these technical risks can be properly managed and the cost of outsourcing project can be further reduced. The main purpose of this paper is to identify the different phases of IS outsourcing life cycle, and to discuss the implications of success and risk factors, software quality and project management,and their impacts to the success of IT outsourcing.Most outsourcing initiatives involve strategic planning and management participation, therefore, the decision process is obviously broad and lengthy. 
In order to conduct a comprehensive study of outsourcing project risk analysis, we propose an IS outsourcing life cycle framework to serve as a yardstick. Each IS outsourcing phase is named and all inherited risks are identified in this life cycle framework. Furthermore, we propose to use software quality management tools and methods in order to enhance the success rate of IS outsourcing projects. ISO 9000 is a series of quality systems standards developed by the International Organization for Standardization (ISO). ISO's quality standards have been adopted by many countries as a major target for quality certification. Other ISO standards such as ISO 9001, ISO 9000-3, ISO 9004-2, and ISO 9004-4 are quality standards that can be applied to the software industry. Currently, ISO is working on ISO 31000, a risk management guidance standard. These ISO quality systems and risk management standards are generic in nature; however, they may not be sufficient for IS outsourcing practice. This paper, therefore, proposes an outsourcing life cycle framework to distinguish related quality and risk management issues during outsourcing practice. The following sections start with the needed theoretical foundations of IS outsourcing, including economic theories, outsourcing contracting theories, and risk theories. The IS outsourcing life cycle framework is then introduced, followed by a discussion of the risk implications in the pre-contract, contract, and post-contract phases. ISO standards on quality systems and risk management are discussed and compared in the next section. A conclusion and directions for future study are provided in the last section.

2. Theoretical foundations

2.1. Economic theories related to outsourcing

Although there are a number of reasons for pursuing IS outsourcing, cost saving is a main attraction that leads companies to search for outsourcing opportunities. In principle, five outsourcing-related economic theories lay the groundwork of outsourcing practice. They are: (1) production cost economics, (2) transaction cost theory, (3) resource-based theory, (4) competitive advantage, and (5) economies of scale.

Production cost economics was proposed by Williamson, who mentioned that "a firm seeks to maximize its profit also subject to its production function and market opportunities for selling outputs and buying inputs". It is clear that production cost economics identifies the phenomenon that a firm may pursue the goal of a low-cost production process. Transaction cost theory was proposed by Coase. Transaction cost theory implies that in an economy there are many economic activities that occur outside the price systems. Transaction costs in business activities are the time and expense of negotiation, and of writing and enforcing contracts between buyers and suppliers. When transaction cost is low because of lower uncertainty, companies are expected to adopt outsourcing.

The focus of resource-based theory is that "the heart of the firm centers on deployment and combination of specific inputs rather than on avoidance of opportunities". Conner suggested that firms are "seekers of costly-to-copy inputs for production and distribution". Through resource-based theory, we can infer that the outsourcing decision is to seek external resources or capabilities for meeting a firm's objectives such as cost saving and capability improvement. Porter, in his competitive forces model, proposed the concept of competitive advantage.
Besanko et al. explicated the term competitive advantage, in economic terms, as follows: "When a firm (or business unit within a multi-business firm) earns a higher rate of economic profit than the average rate of economic profit of other firms competing within the same market, the firm has a competitive advantage." The outsourcing decision, therefore, is to seek cost saving that meets the goal of competitive advantage within a firm. Economies of scale is a theoretical foundation for creating and sustaining the consulting business. Information systems (IS) and information technology (IT) consulting firms, in essence, bear the advantage of economies of scale since their average costs decrease as they offer a mass amount of specialized IS/IT services in the marketplace.

2.2. Economic implication on contracting

An outsourcing contract defines the provision of services and charges that need to be completed in a contracting period between two contracting parties. Since most IS/IT projects are large in scale, a valuable contract should list the complete set of tasks and responsibilities that each contracting party needs to perform. The study of contracting becomes essential because a complete contract setting could eliminate possible opportunistic behavior, confusion, and ambiguity between the two contracting parties. Although contracting parties intend to reach a complete contract, in the real world most contracts are incomplete. Incomplete contracts cause not only implementation difficulties but also litigation. A business relationship may easily be ruined by holding incomplete contracts. In order to reach a complete contract, the contracting parties must pay sufficient attention to removing any ambiguity, confusion, and unidentified or immeasurable conditions and terms from the contract. According to Besanko et al., incomplete contracting stems from the following three factors: bounded rationality, difficulties in specifying or measuring performance, and asymmetric information.

Bounded rationality describes human limitations on information processing, complexity handling, and rational decision-making. An incomplete contract stems from unexpected circumstances that may be ignored during contract negotiation. Most contracts consist of complex product requirements and performance measurements. In reality, it is difficult to specify a set of comprehensive metrics covering each party's rights and responsibilities. Therefore, any vague or open-ended statements in a contract will definitely result in an incomplete contract. Lastly, it is possible that each party may not have equal access to all contract-relevant information sources. This situation of asymmetric information results in an unfair negotiation and thus an incomplete contract.

2.3. Risk in outsource contracting

Risk can be identified as an undesirable event, a probability function, variance of the distribution of outcomes, or expected loss. Risk can be classified into endogenous and exogenous risks. Exogenous risks are "risks over which we have no control and which are not affected by our actions". For example, natural disasters such as earthquakes and floods are exogenous risks.
Endogenous risks are "risks that are dependent on our actions". We can infer that risks occurring during outsource contracting belong to this category. Risk exposure (RE) can be calculated as "a function of the probability of a negative outcome and the importance of the loss due to the occurrence of this outcome":

RE = Σi P(UOi) × L(UOi)    (1)

where P(UOi) is the probability of an undesirable outcome i, and L(UOi) is the loss due to the undesirable outcome i.

Software risks can also be analyzed through two characteristics: uncertainty and loss. Pressman suggested that the best way to analyze software risks is to quantify the level of uncertainty and the degree of loss associated with each kind of risk; his risk content matches Eq. (1) above. Pressman classified software risks into the following categories: project risks, technical risks, and business risks. Outsourcing risks stem from various sources. Aubert et al. adopted transaction cost theory and agency theory as the foundation for deriving undesirable events and their associated risk factors. Transaction cost theory has been discussed in Section 2.2. Agency theory focuses on the client's problem of choosing an agent (that is, a service provider) and on building and maintaining the working relationship under the restriction of information asymmetry. Various risk factors would be produced if such an agent–client relationship were to crumble. It is evident that a complete contract could eliminate the risk caused by an incomplete contract and/or possible opportunistic behavior prompted by either contracting party. Opportunistic behavior is one of the main sources of transactional risk. It occurs when a transactional partner observes a way of saving cost or removing responsibility during the contracting period and takes action to pursue that opportunity; such behavior is encouraged if the contract was not completely specified in the first place. Outsourcing risks can generate additional unexpected cost for an outsourcing project. In order to conduct a better IS outsourcing project, identifying possible risk factors and implementing a mature risk management process can make information systems outsourcing more successful than ever.
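To make Eq. (1) concrete, the short sketch below computes a risk exposure figure from a list of hypothetical undesirable outcomes. The outcome names, probabilities, and loss amounts are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of Eq. (1): RE = sum_i P(UO_i) * L(UO_i).
# The outcomes and figures below are illustrative, not taken from the paper.

def risk_exposure(undesirable_outcomes):
    """Sum the probability-weighted loss over all undesirable outcomes."""
    return sum(p * loss for p, loss in undesirable_outcomes)

# Each tuple is (probability of the undesirable outcome, loss in dollars if it occurs).
outcomes = [
    (0.20, 150_000),   # costly contractual amendment
    (0.05, 400_000),   # service debasement requiring rework
    (0.10,  80_000),   # hidden service costs
]

print(f"Estimated risk exposure: ${risk_exposure(outcomes):,.0f}")  # -> $58,000
```

The figure is simply an expected loss; comparing it across candidate vendors or contract terms is one way the quantification Pressman describes can support the outsourcing decision.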
3. Information system outsourcing life cycle

The life cycle concept is originally used to describe the period of one generation of an organism in a biological system. In essence, the term life cycle describes all activities that a subject is involved in during the period from its birth to its end. The life cycle concept has been applied to the project management area. A project life cycle, according to Schwalbe, is a collection of project phases such as concept, development, implementation, and close-out. Within these four phases, the first two center on "planning" activity and the last two focus on "delivery of the actual work" of project management.

Similarly, the concept of life cycle can be applied to information systems outsourcing analysis. The information systems outsourcing life cycle describes a sequence of activities to be performed during a company's IS outsourcing practice. Hirschheim and Dibbern once described a client-based IS outsourcing life cycle as: "It starts with the IS outsourcing decision, continues with the outsourcing relationship (life of the contract) and ends with the cancellation or end of the relationship, i.e., the end of the contract. The end of the relationship forces a new outsourcing decision." It is clear that Hirschheim and Dibbern viewed the "outsourcing relationship" as a determinant in the IS outsourcing life cycle.

The IS outsourcing life cycle starts with an outsourcing need and ends with contract completion. The life cycle restarts with the search for a new outsourcing contract if needed. An outsourcing company may stay with the same outsourcing vendor if the transaction costs remain low, and a new cycle then goes on; otherwise, a new search for an outsourcing vendor may be started. One of the main goals of seeking an outsourcing contract is cost minimization. Transaction cost theory (discussed in Section 2.1) indicates that pursuing a contract costs a company money, thus low transaction cost will be the driver of extending the IS outsourcing life cycle. The span of the IS outsourcing life cycle embraces a major portion of contracting activities. The whole IS outsourcing life cycle can be divided into three phases (see Fig. 1): the pre-contract phase, the contract phase, and the post-contract phase. The pre-contract phase includes activities before a major contract is signed, such as identifying the need for outsourcing, planning and strategic setting, and outsourcing vendor selection. The contract phase starts when an outsourcing contract is signed and lasts until the end of the contracting period. It includes activities such as the contracting process, the transitioning process, and outsourcing project execution. The post-contract phase contains those activities to be done after contract expiration, such as outsourcing project assessment and making the decision for the next outsourcing contract.

Fig. 1. The IS outsourcing life cycle

When a company intends to outsource its information systems projects to external entities, several activities are involved in the information systems outsourcing life cycle. Specifically, they are:

1. Identifying the need for outsourcing: A firm may face a harsh external environment such as stern market competition, competitors' cost saving through outsourcing, or an economic downturn that leads it to consider outsourcing IS projects. In addition to the external environment, some internal factors may also lead to outsourcing consideration. These organizational predicaments include the need for technical skills, financial constraints, investors' requests, or simply cost-saving concerns. A firm needs to carefully study its internal and external positioning before making an outsourcing decision.

2. Planning and strategic setting: If a firm identifies a need for IS outsourcing, it needs to make sure that the decision to outsource meets the company's strategic plan and objectives; the firm then needs to integrate the outsourcing plan into corporate strategy. Many tasks need to be fulfilled during the planning and strategic setting stages, including determining outsourcing goals, objectives, scope, schedule, cost, business model, and processes. Careful outsourcing planning prepares a firm for pursuing a successful outsourcing project.

3. Outsourcing vendor selection: A firm begins the vendor selection process with the creation of request for information (RFI) and request for proposal (RFP) documents. An outsourcing firm should provide sufficient information about the requirements and expectations for an outsourcing project. After receiving proposals from vendors, the company needs to select a prospective outsourcing vendor based on its strategic needs and project requirements.
4. Contracting process: A contract negotiation process begins after the company selects a probable outsourcing vendor. The contracting process is critical to the success of an outsourcing project since all aspects of the contract should be specified and covered, including fundamental, managerial, technological, pricing, financial, and legal features. In order to avoid ending up with an incomplete contract, the final contract should be reviewed by both parties' legal consultants. Most importantly, the service level agreements (SLA) must be clearly identified in the contract.

5. Transitioning process: The transitioning process starts after a company signs an outsourcing contract with a vendor. Transition management is defined as "the detailed, desk-level knowledge transfer and documentation of all relevant tasks, technologies, workflows, people, and functions". The transitioning process is a complicated phase in the IS outsourcing life cycle since it involves many essential workloads before an outsourcing project can actually be implemented. Robinson et al. characterized transition management into the following components: "employee management, communication management, knowledge management, and quality management". It is apparent that conducting the transitioning process requires capabilities in human resources, communication, knowledge transfer, and quality control.

6. Outsourcing project execution: After the transitioning process, it is time for vendor and client to execute their outsourcing project. There are four components within this "contract governance" stage: project management, relationship management, change management, and risk management. Any items listed in the contract and its service level agreements (SLAs) need to be delivered and implemented as requested. In particular, client and vendor relationships, change requests and records, and risk variables must be carefully managed and administered.

7. Outsourcing project assessment: At the end of an outsourcing project period, the vendor must deliver its final product/service for the client's approval. The outsourcing client must assess the quality of the product/service provided by the vendor and measure its level of satisfaction with it. A satisfactory assessment and a good relationship will support the continuation of the outsourcing contract.

The results of the previous activity (that is, project assessment) form the basis for determining the next outsourcing contract. A firm evaluates its satisfaction level based on predetermined outsourcing goals and contracting criteria. An outsourcing company also observes the outsourcing cost and the risks involved in the project. If a firm is satisfied with the current outsourcing vendor, it is likely that a renewable contract could start with the same vendor; otherwise, a new "pre-contract phase" would restart to search for a new outsourcing vendor. This activity leads to a new outsourcing life cycle. Fig. 1 shows two dotted arrow lines for these two alternatives: dotted arrow line 3.a. indicates the "renewable contract" path and dotted arrow line 3.b. indicates the "new contract search" path.

Each phase in the IS outsourcing life cycle is full of needed activities and processes (see Fig. 1). In order to clearly examine the dynamics of risks and outsourcing activities, the following sections provide detailed analyses. The pre-contract phase in the IS outsourcing life cycle focuses on the awareness of outsourcing success factors and related risk factors.
The contract phase in the IS outsourcing life cycle centers on the mechanisms of project management and risk management. The post-contract phase in the IS outsourcing life cycle concentrates on the need to select suitable project quality assessment methods.

4. Actions in pre-contract phase: awareness of success and risk factors

The pre-contract period is the first phase in the information systems outsourcing life cycle (see Fig. 1). In this phase, an outsourcing firm should first identify its need for IS outsourcing. After determining the need for IS outsourcing, the firm needs to carefully create an outsourcing plan, and it must align corporate strategy with that plan. In order to prepare well for corporate IS outsourcing, a firm must understand the current market situation, its competitiveness, and the economic environment. The next important task is to identify outsourcing success factors, which can serve as guidance for strategic outsourcing planning. In addition to knowing the success factors, an outsourcing firm must also recognize the possible risks involved in IS outsourcing, thus allowing it to formulate a better outsourcing strategy.

Conclusion and research directions

This paper presents a three-phased IS outsourcing life cycle and its associated risk factors that affect the success of outsourcing projects. The outsourcing life cycle is complicated and complex in nature. Outsourcing companies usually invest great effort to select suitable service vendors; however, many risks exist in the vendor selection process. Although outsourcing costs are the major reason for doing outsourcing, firms seek outsourcing success through quality assurance and risk control. This decision path is understandable since the outcome of project risks represents the amount of additional project cost. Therefore, carefully managing the project and its risk factors would save outsourcing companies a tremendous amount of money. This paper discusses various issues related to outsourcing success, risk factors, quality assessment methods, and project management techniques. Future research may touch on alternate risk estimation methodologies. For example, risk uncertainty can be used to identify the accuracy of the outsourcing risk estimation. Another possible method to estimate outsourcing risk is the Total Cost of Ownership (TCO) method. The TCO method has been used in IT management for financial portfolio analysis and investment decision making. Since the concept of risk is in essence the cost (of loss) to outsourcing clients, it thus becomes a possible research method for the outsourcing decision.

信息系统外包的生命周期和风险分析

1. 绪言
信息系统外包在信息技术工业已经获得了巨大的关注。

大数据、云计算技术与审计外文文献翻译最新译文


毕业设计附件 外文文献翻译：原文+译文

文献出处：Chaudhuri S. Big data, cloud computing technology and the audit[J]. IT Professional Magazine, 2016, 2(4): 38-51.

原文
Big data, cloud computing technology and the audit
Chaudhuri S

Abstract
At present, big data, along with the development of cloud computing technology, is having a significant impact on global economic and social life. Big data and cloud computing technology provide modern auditing with new techniques and methods; audit organizations and audit personnel should grasp the content and characteristics of big data and cloud computing technology in order to promote the further development of modern audit technology and methods.

Keywords: big data, cloud computing technology, audit, advice

1 Related concepts
1.1 Big data
The word "data" means "known" in Latin and can also be interpreted as "fact". In 2009, the concept of "big data" gradually began to spread in society. The concept truly became popular because the Obama administration announced its "Big Data Research and Development Initiative" with high profile in 2012, which marks the point at which the era of "big data" really began to enter social and economic life. "Big data", or "huge amounts of data", refers to data sets so large that current mainstream software tools cannot collect, analyze, and process them, or convert them into information usable by decision-makers, within an acceptable period of time. The International Data Corporation (IDC) describes "big data" as a new generation of architectures and technologies designed to derive value, more economically and more efficiently, from high-frequency, high-volume data of different structures and types, and uses the term to describe and define the huge amounts of data produced in the age of information explosion and to name the related technological development and innovation. Big data has four characteristics: first, the data volume is huge, jumping from the TB level to the PB level; second, the required processing speed is fundamentally different from that of traditional data mining technology; third, there are many data types, including pictures, location information, video, web logs, and other forms; fourth, the value density is low while the commercial value is high.

1.2 Cloud computing
The concept of "cloud computing" was created by large Internet companies such as Google and IBM in the practice of handling huge amounts of data. On August 9, 2006, Google CEO Eric Schmidt put forward the concept of "cloud computing" for the first time at a search engine conference. In October 2007, Google and IBM began a plan to promote cloud computing technology on university campuses in the United States; the project hoped to reduce the cost of distributed computing technology in academic research and to provide the related hardware, software, and technical support for these universities (Michael Miller, 2009). There are many definitions of "cloud computing" around the world. "Cloud computing" is an Internet-based mode of delivering and using added services, providing dynamic, easily extensible, and often virtualized resources through the Internet.
The American National Institute of Standards and Technology (NIST) defined cloud computing in 2009 as follows: "Cloud computing is a pay-per-use model that provides available, convenient, on-demand network access to a shared pool of configurable computing resources (including networks, servers, storage, applications, and services); these resources can be provisioned rapidly, with minimal management effort and little interaction with the service provider."

1.3 The relationship between big data and cloud computing
Overall, big data and cloud computing are complementary to each other. Big data mainly focuses on the actual business and on the "data" itself: it provides the technologies and methods for data collection, mining, and analysis, and emphasizes data storage capacity. Cloud computing focuses on "computing": it pays attention to the IT infrastructure, provides IT solutions, and emphasizes computing capacity and data processing ability. If there were no big data to store, then no matter how strong the cloud computing capability, it would be hard to put it to use; if there were no cloud computing capability for data processing, then no matter how rich the stored data, it could ultimately not be used in practice. From a technical point of view, big data relies on cloud computing. Massive data storage technology, massive data management technology, and the distributed parallel programming model are key technologies of cloud computing and are also the technical foundation of big data. For data to become "big", the most important factor is the technology provided by the cloud computing platform. Once the data is on the "cloud", the old segmentation of data storage is broken, data becomes easier to collect and obtain, and big data can be presented to people. In terms of focus, the emphases of big data and cloud computing differ. The emphasis of big data is on all sorts of data and on broad, deep mining of huge amounts of data to find the value in the data, forcing companies to shift from being "business-driven" to being "data-driven". The cloud mainly provides, through the Internet, extensible and widely available computing and storage resources and capabilities; its emphasis is on IT resources, processing capacity, and a variety of applications that help enterprises save IT deployment costs. Cloud computing benefits the IT department of an enterprise, while big data benefits the enterprise's business management departments.

2 Analysis of the influence of big data and cloud computing technology on the audit
2.1 Big data and cloud computing technology promote the development of the continuous audit mode
In a traditional audit, the auditor audits only after the audited business has been completed, and the audit process does not cover all data and information but only a selected part. This after-the-event and limited audit makes it difficult to evaluate the audited entity's complex production, operation, and management systems correctly and in time, and it is too slow for evaluating the authenticity and legitimacy of increasingly frequent and complex operation and management activities. With the rapid development of information technology, more and more audit organizations have begun to implement continuous auditing to solve the problem of the time lag between audit results and economic activity. However, auditors are often limited by current business conditions and information technology means: unstructured data may not be digitized, or the related detailed data cannot be obtained, so that doubtful judgments cannot be pursued further and deeper.
Big data and cloud computing technology can promote the development of the continuous audit mode and improve the use of information technology, especially for specific industries with higher "real-time" demands on business data and risk control, such as banking, securities, and insurance; continuous auditing in these industries is imminent.

2.2 Big data and cloud computing technology promote the application of the overall audit mode
The current audit mode implements sampling audits based on the evaluation of audit risk. Since it is impossible to collect and analyze all of the audited entity's economic and business data, the current audit mode mainly depends on audit sampling, inferring the whole from the part: samples are extracted from the audited material, and the overall situation of the audit object is then deduced. In the sampling audit mode, because the sample drawn is limited and many specific business activities are ignored, the auditors may fail to find and reveal major fraud at the audited entity, hiding significant audit risks. For the auditor, big data and cloud computing technology are not only available technical means; the technology and methods also make it feasible to implement an overall audit mode. Using big data and cloud computing technology to collect and analyze data across industries and across the enterprise, the random sampling method is no longer needed, and a general audit mode that collects and analyzes all the data can be used. The overall audit mode based on big data and cloud computing technology analyzes all the data related to the audit object, allows the auditor to establish an overall-audit way of thinking, and can bring revolutionary change to modern auditing. Auditors who implement the overall audit mode can avoid audit sampling risk. If all the data can be gathered, more subtle and in-depth information can be seen; deep analysis of the data from multiple perspectives can uncover details hidden in the data that are of value to the audit. At the same time, an auditor implementing the overall audit mode can find problems that the sampling audit mode cannot find.
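As a rough illustration of the overall (full-population) audit mode described above, the sketch below scans every record in a payment file and flags exceptions instead of drawing a sample. The file name, field names, and approval rule are assumptions made for the example only, not part of the original paper.

```python
# Hedged sketch of the "overall audit" idea: scan every record and flag exceptions,
# rather than drawing a sample. Assumes a payments.csv file with "amount" and
# "approved_by" columns; both the file and the rule are illustrative.
import csv

def full_population_exceptions(path, approval_limit=10_000):
    """Return every payment record that exceeds the approval limit but lacks an approver."""
    exceptions = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            amount = float(row["amount"])
            if amount > approval_limit and not row.get("approved_by"):
                exceptions.append(row)
    return exceptions

# Every transaction is examined, so sampling risk is removed for this particular test.
flagged = full_population_exceptions("payments.csv")
print(f"{len(flagged)} exceptions found in the full population")
```

The same pattern scales to cloud-hosted data sets: the rule stays the same, only the storage and compute layer changes.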
2.3 Big data and cloud computing technology promote the integrated application of audit results
At present, the auditor's results are mainly provided to the audited entity in the form of an audit report whose format is fixed, whose content is limited, and which contains little information. As big data and cloud computing technology are widely used in auditing, the auditor can, in addition to the audit report, provide the audited entity with the large amounts of information and data collected, mined, analyzed, and processed during the audit to help improve management, promote the integrated application of audit results, and improve their comprehensive application effect. First of all, by summarizing and generalizing the large amounts of data and related information obtained in the audit, the auditor can find the inner rules, common problems, and development trends of finance, business, operations, and management, and condense them into macroscopic and comprehensive audit information, providing investors and other stakeholders with attestation of the audited data, correlation analysis, and decision-making suggestions, thus promoting the improvement of the audited entity's management level. Second, by using big data and cloud computing technology, auditors can analyze and process the same problem in different categories, integrating and refining it from different angles and at different levels to satisfy needs at different levels. Third, the auditor can retain audit results intelligently: through big data and cloud computing technology, recurring problems can be standardized and fixed in the system in order to calculate or determine the developing trend of a problem and give early warning to the auditees.

3 Big data, cloud computing technology, and the application of audit evidence
In the audit process, auditors should base their audit opinion on sufficient and appropriate audit evidence and issue the audit report accordingly. However, in the big data and cloud computing environment, auditors face both the screening and testing of a huge amount of data and the challenge of collecting appropriate audit evidence. When collecting audit evidence, the traditional path of thinking is to collect evidence based on causal relationships; big data analysis, in contrast, makes more use of correlation analysis to gather and discover audit evidence. From the perspective of evidence discovery, big data technology provides unprecedented interdisciplinary, quantitative dimensions, making a large amount of relevant information available for audit recording and analysis. Big data and cloud computing technology have not changed the causal relationships between things, but the development and use of correlation in big data and cloud computing reduce the dependence of data analysis on causal logic and favor analysis based on data correlation; validating data on the basis of correlation analysis is one of the important characteristics of cloud computing technology. In the big data and cloud computing environment, the audit evidence the auditor can collect is mostly electronic evidence. Electronic evidence itself is very complex, and cloud computing technology makes it more difficult to obtain causal evidence. Auditors should therefore shift from their long-term dependence on cause and effect to using correlation to collect and discover audit evidence.

译文
大数据、云计算技术与审计
Chaudhuri S

摘要
目前，大数据伴随着云计算技术的发展，正在对全球经济社会生活产生巨大的影响。

关于云计算的英语作文


英文回答：

Cloud computing is a revolutionary technology that has transformed the way businesses and individuals store, access, and manage data. It allows users to access applications and data from any device with an internet connection, making it incredibly convenient and flexible.

One of the major benefits of cloud computing is its cost-effectiveness. Instead of investing in expensive hardware and software, businesses can simply pay for the services they use on a subscription basis. This not only reduces upfront costs but also allows for scalability as the business grows.

Another advantage of cloud computing is its reliability and security. Cloud service providers invest heavily in state-of-the-art security measures to protect data from cyber threats and ensure constant access to data. This is particularly important for businesses that rely on data for their operations.

Furthermore, cloud computing enables collaboration and remote work. With cloud-based applications and storage, teams can work together on projects from different locations, making it easier to stay connected and productive.

In addition, cloud computing has had a significant impact on the entertainment industry. Streaming services like Netflix and Spotify rely on cloud computing to deliver content to millions of users simultaneously, without any lag or interruption.

Overall, cloud computing has revolutionized the way we store and access data, and its impact can be seen across various industries.

中文回答：

云计算是一项革命性的技术，改变了企业和个人存储、访问和管理数据的方式。

云计算中英文术语

云计算中英文术语

本文档旨在提供云计算中常见的中英文术语，方便读者查阅和使用。

以下是详细的分类和解释：

1. 云计算基础 (Cloud Computing Basics)

1.1 云计算 (Cloud Computing)
云计算是一种基于互联网的计算模式，通过共享大量的计算资源，提供灵活的计算能力和存储服务。

1.2 云服务提供商 (Cloud Service Provider)
云服务提供商是指提供云计算服务的公司或组织，例如亚马逊云服务 (Amazon Web Services) 和微软 Azure。

1.3 虚拟化 (Virtualization)
虚拟化是将物理计算资源虚拟化为多个虚拟资源的技术，通过隔离和共享资源，提高资源的利用率和灵活性。

1.4 弹性计算 (Elastic Computing)
弹性计算是指根据实际需求动态调整计算资源的能力，使系统能够在负载波动时保持稳定。
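下面用一段极简的示意代码说明“根据负载动态调整资源”的基本思路，其中的阈值和数值均为假设，仅供说明，并非某一云平台的实际接口：

```python
# 弹性计算的简化示意：根据 CPU 负载动态增减实例数量（阈值与数值均为假设）。
def scale(current_instances, cpu_utilization, low=0.30, high=0.70):
    """负载高于 high 时扩容一台，低于 low 时缩容一台（至少保留一台）。"""
    if cpu_utilization > high:
        return current_instances + 1
    if cpu_utilization < low and current_instances > 1:
        return current_instances - 1
    return current_instances

print(scale(2, 0.85))  # 3 —— 负载过高，扩容
print(scale(3, 0.20))  # 2 —— 负载过低，缩容
```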

2. 云服务模型 (Cloud Service Models)

2.1 基础设施即服务 (Infrastructure as a Service, IaaS)
基础设施即服务是一种云计算模式，提供基础的计算、存储和网络资源，用户可根据需求自由配置运行环境。

2.2 平台即服务 (Platform as a Service, PaaS)
平台即服务是一种云计算模式，提供开发和运行应用程序的平台环境，用户只需关注应用程序的开发和部署，无需关心底层基础设施。

2.3 软件即服务 (Software as a Service, SaaS)
软件即服务是一种云计算模式，提供基于互联网的软件应用，用户无需安装和维护软件，只需通过浏览器访问即可使用。

3. 云计算部署模型 (Cloud Deployment Models)

3.1 公共云 (Public Cloud)
公共云是由云服务提供商提供的在公共网络上访问的共享资源，多个客户共享同一组硬件设备和软件服务。


中英文对照外文翻译文献(文档含英文原文和中文翻译)

译文：利用云技术和服务的新兴前沿：资产优化利用

摘要

投资回报最大化是所有公司关注的一个主要焦点。

信息被视为实现这一目标的手段。

此信息是用来跟踪性能和提高财务业绩主要通过优化利用公司资产。

能力和速度,这是可能的收集信息并将其分发到当前的技术该组织正在不断增加,事实上,有超过了行业的能力,接受和利用它。

今天,生产运营商被淹没在数据的结果一种改进的监控资产的能力。

智能电机保护和智能仪器和条件监控系统经常提供32多块每个设备的信息都与相关的报警。

通常运营商没有装备来理解或行动在这个信息。

生产企业需要充分利用标的物专门为这个目的,通过定位他们的工程人员区域中心。

这些工程师需要配备足够的知识能够理解和接受适当的行动来处理警报和警报通过这些智能设备。

可用的信息可以是有用的,在寻找方法增加生产,减少计划外的维护和最终减少停机时间。

然而,寻找信息在实时,或在实时获得有用的信息,而不花了显着的非生产时间,使数据有用的是一个巨大的挑战。

本文将介绍云技术，作为一种对基于状态的数据进行可视化和报告的经济方法。

然后，本文将讨论如何利用云技术，使工程资源能够以安全的方式通过网络浏览器访问现场数据。

我们将覆盖资产的方法多个云服务的优化和效益飞行员和项目。

当重工业公司在全球范围看,世界级运营实现整体设备效率(OEE)得分百分之91。

从历史上看,石油和天然气行业的滞后,这得分十分以上(Aberdeen 集团“操作风险管理”十月2011)。

OEE 由质量、可用性和效率三项得分计算得出。

在这三项中，可用性对石油和天然气行业得分的影响似乎最大。

对石油和天然气行业可用性得分根源的深入研究表明，在70%的生产损失或计划外停机案例中，旋转设备是故障的根本原因。

尽管该行业一直在与关键资产故障作斗争，但仍有一些方法可以帮助提高有效性得分，从而实现运营效率目标。
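下面给出一个按常见 OEE 定义（可用性 × 性能 × 质量）计算综合得分的极简示例；数值为假设数据，仅用于说明文中提到的约 91% 的世界级水平大致如何得出：

```python
# 简单示意：按常见定义，OEE（整体设备效率）= 可用性 × 性能（效率）× 质量。
# 下列数值为假设数据，仅用于说明计算方法。

def oee(availability, performance, quality):
    """返回以百分比表示的 OEE 得分。"""
    return availability * performance * quality * 100

# 例如：可用性 95%、性能 96%、质量 99.8%
score = oee(0.95, 0.96, 0.998)
print(f"OEE = {score:.1f}%")  # 约 91.0%，即文中提到的世界级水平
```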

在未来十年中，海上石油储量的追求将涉及复杂的提取方法和海底提取技术的广泛使用。

提取下一波的储量将需要复杂的设施,在海底,以及更传统的表面生产设施的形式。

面对日益复杂的生产系统,石油和天然气经营公司正在努力成功地经营这些复杂的设施与一个退休池的主题专业知识。

该行业的答复,这一现象是显着的使用智能设备组织的斗争,把数据转换成过程控制系统,成本很高,然后向工程师提出所有的数据,每一次发生的事件。

工程师然后花时间挖掘数据,建立数据表和创建有用的信息,从桩。

由此他们可以开始工作的问题或问题。

不幸的是,这项工作需要的经验,资产和数据。

更换行业和组织的知识是一个挑战,推动生产企业探索定位主题的专业知识,在资源的枢纽,跨多个资产。

挑战来自于2个方面。

首先,生产系统,无论是在表面上,在海底,需要复杂的生产系统和子系统的许多组件。

其结果是,操作这些设施的自动化系统是非常复杂的,往往需要运营商与200000多个标签的数据和由此产生的报警接口。

其次,将标的物专家在区域枢纽创建的安全风险,必须解决的设计自动化系统。

启用24小时的实时数据访问远程定位的主题的专业知识,需要建立在互联网技术的隧道,使监控从远程办公室,平板电脑和移动设备。

一个健全的安全策略是需要这种支持的一部分。

传统的方法继续影响停机时间。

简而言之,如果工程或维护需要建立一个电子表格,查找数据,操纵它,然后通过它来查找真相,在采取行动时损失的时间会导致停机时间的增加。

基于角色的可视化和报告,与非生产性工作的自动化配对,创造一个环境,推动合作和提高性能。

改进的风险为基础的方法,以尽量减少资产故障是一个可能的答案,以推动资产性能。

它不是完整的,除非系统的地方,使总有效率的维护,每个人在企业中扮演着一个维护的一部分,因为每个装备。

实施全面生产性维护开始与授权的集中式主题的专业知识与现场数据,并给他们的手段,以配合实时操作和维护人员。

它包括提供每一个人的工作与资产的确切信息所需的时间所需的作用,以保持资产健康。

高级维修和设施工程人员是最佳的位置,利用数据的基础上,结合历史的情况下,以推动可靠性提高资产。

协作工具驱动共享这反过来又创造了一个环境,经验不足的员工获得从老员工更好的装备的年轻工程师的知识,操作和维修人员的优化与资产为退休工人。

这种组合的数据利用率,资产性能管理和知识共享导致通过全面的经营绩效管理的优化资产。

经营绩效管理的路径是时间和速度。

创建协作区和分析和工作流程自动化系统,可以涉及重大投资。

资源来执行的系统往往是稀缺的,导致在建设的系统,将使一个企业的经营业绩管理的操作障碍。

今天,技术的存在,使工作流自动化和协作使用简单的网络工具。

简单的浏览器访问允许使用智能手机,平板电脑和个人电脑的人员合作。

问题是,企业在他们的防火墙内构建技术,还是利用云计算技术来推动合作作为付费服务?答案需要一个更广泛的数据的使用和数据策略。

要实现可靠和效率指标,利用现有的数据,石油和天然气经营企业必须制定数据策略,解决了一些问题。

是数据所需的合作也受监管或投资者的期望?如果是这样的话,一个企业应该考虑在其企业结构中持有数据,并在基础设施投资,以存储和展示它。

资本预算和它的资源可以限制的能力,以实现所需的合作,建立系统的能力,具有挑战性的可靠性和效率目标的追求。

购买应用程序作为一个基于云的服务,可以加快与一个支付的路径,作为一个模型。

云技术提供了一种车辆来操作和管理数据。

采购软件作为一种服务,使一个组织能够通过更好地利用现有的数据,更快地进入运营卓越的领域。

采购业务管理工具作为一种服务,避免使用资本预算和专门规避资金跨资产问题。

服务是为他们所使用的经营预算。

利用云计算共享操作条件数据创建一个操作改进的环境。

“私有云”利用同样的技术可以部署,如果企业有考虑,需要所有的数据,以保持其企业内(法规遵从,投资者的遵守等)在开发策略,利用数据驱动的学科之间的合作,企业需要开发的数据策略。

一些数据,例如,安全记录,测试记录和记录和记录被调节,并为这些已授权存储。

其他类型的数据有投资者的合规性存储和安全要求。

在投资者或监管数据的情况下,企业仍然希望协同工具,但可能需要在其公司的结构中的数据,再看图5,协同工具和仪表板也可在传统的资本项目,购买软件和应用程序的开发经营范围内的企业数据结构。

本文的下一部分将解决传统的基础设施的方法和云计算方法,用于在一个组织中的有效性的例子。

云技术可以在标准的服务器体系结构中使用数据和操作的数据和自动化的非生产性的工作,利用数据。

虚拟机管理程序环境允许多个应用程序运行在数据中心的传统集群技术。

应用程序存在,允许数据的积累,从不同的来源与联邦,然后利用创建基于角色的可视化和自动化的手动任务(如生产分配,以及测试验证,监管系统测试文件等)。

图6演示了一个在操作公司的防火墙中使用云技术的测试验证。

图7然后演示了一个操作符的角色视图。

技术有助于显着减少非生产性的操作,同时提出的数据,在一种方式,推动人员集中的领域,他们可以影响业务,以提高效率。

图9和图10显示了工作流自动化的一个实例，以及呈现给相关人员的仪表板，其中包含丰富的角色化信息，以帮助提高在资产上工作的个人的有效性。
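作为上文提到的人工核对任务（如井测试数据校验）自动化的一个极简示意，下面的代码按假设的容差规则对测得产量与分配产量进行对比；字段与数值均为示例假设，并非原文所述系统的具体实现：

```python
# 井测试数据自动校验的极简示意：将测得产量与分配产量对比，超出容差即转人工复核。
def validate_well_test(measured_rate, allocated_rate, tolerance=0.05):
    """若测得产量与分配产量偏差超过容差，则返回 False（需要人工复核）。"""
    if allocated_rate == 0:
        return measured_rate == 0
    deviation = abs(measured_rate - allocated_rate) / allocated_rate
    return deviation <= tolerance

print(validate_well_test(980, 1000))   # True —— 偏差 2%，自动通过
print(validate_well_test(850, 1000))   # False —— 偏差 15%，转人工处理
```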

自动化系统既需要大量的资本投资,也需要持续的运营费用,以维持基础设施建设。

在许多情况下,合作是需要的,但确保投资是困难的,因为组织障碍,跨资产的资本项目或根本原因是不存在的资源。

利用云计算的基础,应用程序可以建立在云基础设施,并提供服务。

在这种方式的合作是可能的经营预算的限制,使用一个基金,建立模型。

在大多数情况下,所需的工具的成本效益的好处是显着低于传统的系统建设的成本。

下面将以一家位于达拉斯的泵送系统制造商为例加以说明，其产品用于非常规油气井钻井中的泥浆处理系统和压裂系统。

我们将该公司称为XYZ公司。

客户XYZ 送卡车进入现场,他们提供的服务,石油和天然气企业开发油气田的土地为基础的操作。

抽水系统具有高度的局部自动化,但泵操作人员不具备作出决定时,停止抽水和执行维护。

例如,卡车需要定期保养,一些过滤器需要更换,如每八个小时。

结合需要进行定期维护与客户寻找经营者的肩膀上,偶尔的错误是与阀门定位和/或运行的卡车,直到它打破了试图取悦一个焦虑的客户。

这大大的维护成本,停机时间是一个大问题。

英文原文:AN EMERGING FRONTIER, ASSETOPTIMIZATION UTILIZING CLOUDTECHNOLOGY AND SERVICESEric Fidler Sr. Member Richard and Paes Sr. MemberAbstract –Maximizing return on investment is a major focus for all corporations. Information is seen as the means by which to do so. This information is used to track performance and to improve the financial results primarily by optimizing the use of company assets. The ability and speed with which it is possible to gather information and distribute it with current technology to the organization is constantly increasing and, in fact, has surpassed the ability of industry to accept and utilize it. Today, production operators are overwhelmed with data as a result of an improved ability to monitor assets. Intelligent motor protection and intelligent instrumentation and condition monitoring systems often provide more than 32 pieces of information per device all with related alarms. Often operators are notequipped to understand or act on this information. Production companies need to leverage subject matter expertise for this purpose by locating their engineering staff in regional hubs. These engineers need to be equipped with sufficient knowledge to be able to interpret and take appropriate action to deal with warnings and alarms generated by these intelligent devices. The information available can be useful in finding ways to increase production, reduce unplanned maintenance and ultimately reduce down time. However, finding the information in real time, or getting useful information in real time without significant non-productive time spent making the data useful is a big challenge. This paper will introduce Cloud Technology as an economical method of gaining visualization and reporting to condition based data. It will then discuss the use of Cloud Technology to empower engineering resources with live data available in a secure format accessible through web browser technology. We shall cover the approach to asset optimization and the benefits found in several Cloud Services pilots and projects.INTRODUCTIONWhen heavy industry firms are viewed on a global basis, world class operations achieve an Overall Equipment Effectiveness (OEE) Score of 91 percent. Historically, the Oil and Gas industry lags this score by ten points or more (Aberdeen Group “Operational Risk Management”October 2011). OEE is the quotient of quality, availability and efficiency scores. Of these, availability seems to impact the Oil and Gas industry score to the greatest extent. A deeper look into root causes of availability scores in Oil and Gas leads to rotating assets as the root cause of failures in 70% of instances of lost production or unplanned downtime. Given this the industry struggles with failure of critical assets, but there are ways to help drive effectiveness scores higher in order to achieve operational efficiency objectives.Pursuit of offshore reserves in the next decade will involve complex extraction methods and extensive use of subsea extraction technologies.Extraction of the next wave of reserves will require complex facilities both on the ocean floor, as well as in the form of more traditional surface production facilities. Facing the growing complexity of production systems, oil and gas operating companies are struggling to successfully operate these complex facilities with a retiring pool of subject matter expertise.The industry’s reply to this phenomena is significant use of intelligent devices each delivering useful data. 
Intelligent breakers, protective relays, measurement relays, process instrumentation and condition monitoring systems to gather useful information. With sophisticated motor controllers such as adjustable speed drives (ASD) which have literally hundreds of parameters available to be monitored, it is easy to see that the amount of information which can be captured is staggering. In fact, managing and interpretation of this information database now becomes the issue. Organizations struggle to bring the data into process control systems at great cost and then present all of the data to engineers each time an event occurs. Engineers then spend time mining the data, building spreadsheets and creating useful information out of the pile. From this they can then start to work on the problem or problems. Unfortunately, this work requires experience with the asset and with the data.Replacing industry and organizational knowledge is a challenge driving production companies to explore locating subject matter expertise in hubs where resources are leveraged across several assets. The challenge comes about in two areas. First, the production systems – both on the surface and on the sea floor –require complex production systems and subsystems with many components. As a result, the automation systems that operate these facilities are extremely complex, often requiring operators to interface with more than 200,000 tags of data and resulting alarms.Second, placing subject matter experts in regional hubs creates security risks that must be addressed in the design of automation systems. Enabling 24-hourlive-data access for remotely located subject matter expertise requires establishing tunnels in internet technology to empower monitoring from remote offices, tablet PCs and mobile devices. A sound security strategy is required as a part of this enablement.OPERATIONS PERFORMANCE MANAGEMENT FOR IMPROVED RELIABILITYTraditional methods continue to impact downtime. In short, if engineering or maintenance has to build a spreadsheet, hunt down data, manipulate it, and then sort through it to find the truth, the time lost in taking action will lead to increased downtime. Role-based visualization and reporting, paired with the automation of non-productive work, creates an environment that drives collaboration and improves performance.Improved visibility with a risk based approach to minimize asset failure is a possible answer to drive asset performance. It is not complete unless the systems put in place empower Total Productive Maintenance where each person in the enterprise plays a part in maintenance as each is equipped to do. Implementing Total Productive Maintenance begins with empowering centralized subject matter expertise with live data and giving them the means to collaborate with operations and maintenance personnel in real time. It involves providing each individual working with the asset the exact information required for their role in the time required to keep the asset healthy. Senior maintenance and facilities engineering personnel are best positioned to utilize data –condition-based combined with historical –to drive reliability improvement in the asset. Collaboration tools drive the knowledge sharing that in turn create an environment where less experience employees gain from the knowledge of the older employees better equipping the younger engineers, operations and maintenance personnel to optimally engage with the asset as older workers retire. 
This combination of data utilization, asset performance management and knowledge sharing leads to anoptimized asset through total Operations Performance Management.The path to Operations Performance Management is one of time and pace. Creating collaborative zones and systems for analytics and work flow automation can involve significant IT investment. Resources to execute the systems are often scarce leading to barriers in building the systems that would enable an enterprise to move to Performance Management of operations. Today, technology exists to empower workflow automation and collaboration using simple Web tools. Simple browser access allows personnel collaboration using smart phones, tablet PCs and personal computers. The question is, does an enterprise build the technology within their firewalls or does it take advantage of cloud computing technology to drive collaboration as a paid service?The answer requires a broader view of data usage and data strategies. To achieve reliability and efficiency targets leveraging available data, Oil and Gas operating companies must develop data strategies that address a number of considerations. Is the data required for collaboration also governed by regulatory or investor expectations? If that is the case, an enterprise should likely consider holding the data within their corporate structure and investing in infrastructure to store and present it. Capital budgets and IT resources can limit the ability to build the systems required to achieve the desired collaboration –challenging the pursuit of reliability and efficiency targets. Purchasing the applications as a cloud- based service can speed up the path to collaboration with a pay-as-you-go model.Cloud technology provides a vehicle to manipulate and manage data. Purchasing software as a service enables an organization to move faster into the realm of operational excellence through better utilization of existing data. Purchasing operations management tools as a service avoids using capital budgets and specifically circumvents issues of funding across assets. Services are paid for as they are used with operating budgets. Utilizing cloudcomputing to share operating conditional data creates an environment for operational improvement. ‘Private clouds’leveraging the same technologies can be deployed if an enterprise has considerations that require all of the data to remain within their enterprise (regulatory compliance, investor compliance, etc.)In developing strategies that utilize data to drive collaboration between disciplines, an enterprise needs to develop a data strategy. Some of the data such as, safety records, testing records and flow records are regulated and as such have mandated storage. Other types of data have investor compliance storage and security requirements. In the case of investor or regulatory data an enterprise still desires collaborative tools but likely needs to hold the data within its company structure. Looking at Figure 5 again, collaborative tools and dash boarding are also available in traditional methods of a capital project, purchased software and application development operating within the enterprise data structures. 
The next two sections of this paper will address examples of a traditional infrastructure approach and a cloud computing approach used to drive effectiveness in an organization.Cloud technology enables the manipulation of data and automation of non-productive tasks within standard server architectures owned and operated by the enterprise utilizing the data. Hypervisor environments allow multiple applications to run on traditional cluster technology in data centers. Applications exist which permit the accumulation of data from disparate sources with federation then utilized to create role based visualization and automation of manual tasks (e.g. production allocation, well test verification, regulatory system testing documentation etc.). Figure 6 demonstrates a Well Test validation accomplished utilizing cloud technology within an operating company’s firewall. Figure 7 then demonstrates the role view for an operator resulting. Technology helps to significantly reduce non-productive operations while presenting the data in a way that drives personnel to focus on areaswhere they can impact the business to improve efficiencies. Figures 9 and 10 show an example of workflow automation and the resulting dashboard presented to personnel containing role-rich information designed to help improve the effectiveness of the individuals working on the asset. Automation systems can require both significant capital investments to build as well as ongoing operating expenses to maintain the infrastructure. In many cases, collaboration is desired, but securing the investment is difficult due to organizational barriers in funding capital projects across assets or simply because the resources do not exist.Leveraging cloud computing foundations, applications can be built within the cloud infrastructure and provided as a service. Collaborating in this fashion is possible within constraints of operating budgets using a fund-as-built model. In most cases, the cost of the tools required to gain efficiency benefits are significantly lower than the cost of building traditional systems.The following example of a Dallas-based manufacturer of pumping systems used in mud-handling systems and fracturing systems for drilling of non-conventional wells will be used to illustrate. We will call the company XYZ. Customers of XYZ send trucks into the field where they provide services to oil and gas enterprises developing oil and gas fields in land- based operations. The pumping systems have a high degree of local automation, but pump operations personnel are not equipped to make decisions regarding when to stop pumping and perform maintenance. For example, the trucks require regular maintenance, and some of the filters require replacement as often as every eight hours. Combine the need for regular maintenance with customers looking over the operator’s shoulders, and occasional mistakes are made with valve positioning and/or running the truck until it breaks in an attempt to please an anxious customer. This drives significant maintenance costs, and downtime is a big problem.。
