Foreign-Language Literature for a Cloud Computing Graduation Project


An English Essay on Cloud Computing


English answer:

Cloud computing has revolutionized the way businesses and individuals store, process, and access data and applications. It offers multiple benefits, including improved flexibility, reduced costs, increased security, enhanced collaboration, and innovative services.

Flexibility is one of the key advantages of cloud computing. It allows users to access their data and applications from anywhere with an internet connection, which makes it easier for businesses to operate remotely and for individuals to work from home.

Reduced costs are another advantage. Businesses can avoid the upfront costs of purchasing and maintaining their own IT infrastructure and instead pay for cloud services on a pay-as-you-go basis.

Increased security is another key benefit. Cloud providers have invested heavily in security measures to protect their customers' data, which can make cloud computing a more secure option than storing data on-premises.

Enhanced collaboration is a further advantage. Cloud-based applications make it easier for teams to collaborate on projects, because cloud applications can be accessed from anywhere and allow users to share files and work together in real time.

Innovative services are another key benefit. Cloud providers are constantly developing new services, such as artificial intelligence, machine learning, and data analytics, which can help businesses improve their operations and make better decisions.

Overall, cloud computing offers a number of benefits that can help businesses and individuals improve their operations, reduce costs, and increase innovation.

Chinese answer: The advantages of cloud computing.

Foreign-Language Translation Template for Computer Science Graduation Projects


Huaiyin Institute of Technology Graduation Design (Thesis)
Author: **   Student ID: 10813013**
School: Computer Engineering
Major: Computer Science and Technology
Title: Expert Systems
Supervisor: (name) (professional title)
Reviewer: (name) (professional title)
(Year) (Month)

Graduation Design (Thesis) Abstract

Introduction to the system: Expert systems are computer programs derived from the branch of computer science that studies artificial intelligence (AI). The scientific goal of AI is to build computer programs that exhibit intelligent behavior. It is concerned with the concepts and methods of symbolic inference, or reasoning, and with how the knowledge used to make those inferences is represented inside a machine. Of course, the term intelligence covers many cognitive skills, including the ability to solve problems, to learn, and to understand language; AI addresses all of these. But most of the progress made to date has been in the area of problem solving.

AI programs that achieve expert-level competence in solving problems in a task area, by bringing to bear a body of knowledge about specific tasks, are called knowledge-based or expert systems. Often, the term expert system is reserved for programs whose knowledge base contains the knowledge used by human experts, in contrast to knowledge drawn from textbooks or non-experts; more often than not, though, the terms expert system and knowledge-based system are used synonymously. Taken together, they represent the most widespread type of AI application.

The area of human intellectual activity captured in an expert system is called the task domain. Domain refers to the area in which the task is performed; task refers to some goal-oriented, problem-solving activity. Typical tasks are diagnosis, planning, scheduling, configuration, and design.

Building an expert system is known as knowledge engineering, and its practitioners are called knowledge engineers. The knowledge engineer must make sure that the computer has all the knowledge needed to solve a problem. The knowledge engineer must choose one or more forms in which to represent the required knowledge as symbol patterns in the memory of the computer; that is, he (or she) must choose a knowledge representation. He must also ensure that the computer can use the knowledge efficiently, by selecting from among a handful of reasoning methods. The practice of knowledge engineering is described later. We first describe the components of expert systems.

The components of an expert system: Every expert system consists of two principal parts: the knowledge base and the reasoning, or inference, engine. The knowledge base of an expert system contains both factual and heuristic knowledge. Factual knowledge is knowledge of the task domain that is widely shared, typically found in textbooks or journals, and commonly agreed upon by those knowledgeable in the particular field. Heuristic knowledge is the less rigorous, more experiential, more judgmental knowledge of performance. In contrast to factual knowledge, heuristic knowledge is rarely discussed and is largely individualistic.
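To make the knowledge-base/inference-engine split concrete, here is a minimal sketch of a forward-chaining rule engine; the facts, rules, and diagnosis strings are purely illustrative and are not from the original text:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal illustration of the two parts described above: a knowledge base
// of if-then rules, and an inference engine that forward-chains over facts.
public class TinyExpertSystem {
    record Rule(Set<String> conditions, String conclusion) {}

    public static void main(String[] args) {
        Set<String> facts = new HashSet<>(List.of("fever", "cough"));
        List<Rule> knowledgeBase = List.of(
            new Rule(Set.of("fever", "cough"), "flu-suspected"),
            new Rule(Set.of("flu-suspected"), "recommend-rest")
        );

        // Inference engine: keep firing rules until no new fact is derived.
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : knowledgeBase) {
                if (facts.containsAll(r.conditions()) && facts.add(r.conclusion())) {
                    changed = true;
                }
            }
        }
        System.out.println(facts); // now also contains flu-suspected, recommend-rest
    }
}
```

The loop keeps applying rules until no new fact can be derived, which is the essence of forward chaining over factual and heuristic rules.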

Literature Review and Foreign-Language Source on the State of Cloud Computing Research


This document contains, for this topic: the foreign-language source text and a literature review.

Title: An exploratory study on factors affecting the adoption of cloud computing by information professionals
Author: Aharony, Noa
Journal: The Electronic Library, 33(2), 308-328.
Year: 2015

I. Foreign-Language Source

An exploratory study on factors affecting the adoption of cloud computing by information professionals

Aharony, Noa

Purpose - The purpose of this study is to explore what factors may influence information professionals to adopt new technologies, such as cloud computing, in their organizations. The objectives are as follows: to what extent does the technology acceptance model (TAM) explain information professionals' intentions towards cloud computing, and to what extent do personal characteristics, such as cognitive appraisal and openness to experience, explain information professionals' intentions to use cloud computing.

Design/methodology/approach - The research was conducted in Israel during the second semester of the 2013 academic year and encompassed two groups of information professionals: librarians and information specialists. Researchers used seven questionnaires to gather the following data: personal details, computer competence, attitudes to cloud computing, behavioural intention, openness to experience, cognitive appraisal and self-efficacy.

Findings - The current study found that the behavioural intention to use cloud computing was impacted by several of the TAM variables, personal characteristics and computer competence.

Originality/value - The study expands the scope of research about the TAM by applying it to information professionals and cloud computing, and highlights the importance of individual traits, such as cognitive appraisal, personal innovativeness, openness to experience and computer competence, when considering technology acceptance. Further, the current study proposes that if directors of information organizations assume that novel technologies may improve their organizations' functioning, they should be familiar with both the TAM and the issue of individual differences. These factors may help them choose the most appropriate workers.

Keywords: Cloud computing, TAM, Cognitive appraisal, Information professionals, Openness to experience

Introduction

One of the innovations that information technology (IT) has recently presented is the phenomenon of cloud computing. Cloud computing is the result of advancements in various technologies, including the Internet, hardware, systems management and distributed computing (Buyya et al., 2011). Armbrust et al. (2009) suggested that cloud computing is a collection of applications using hardware and software systems to deliver services to end users via the Internet. Cloud computing offers a variety of services, such as storage and different modes of use (Leavitt, 2009). Cloud computing enables organizations to deliver support applications and avoid the need to develop their own IT systems (Feuerlicht et al., 2010).

Due to the growth of cloud computing use, the question arises as to what factors may influence information professionals to adopt new technologies, such as cloud computing, in their organizations. Assuming that using new technologies may improve the functioning of information organizations, this study seeks to explore whether information professionals, who often work with technology and use it as an important vehicle in their workplace, are familiar with technological innovations and whether they are ready to use them in their workplaces.
As the phenomenon of cloud computing is relatively new, there are not many surveys that focus on it and, furthermore, no one has so far focussed on the attitudes of information professionals towards cloud computing. The research may contribute to an understanding of the variables that influence attitudes towards cloud computing and may lead to further inquiry in this field. The current study uses the well-known technology acceptance model (TAM), a theory for explaining individuals' behaviours towards technology (Davis, 1989; Venkatesh, 2000), as well as personal characteristics, such as cognitive appraisal and openness to new experiences, as theoretical bases from which we can predict factors which may influence information professionals to adopt cloud computing in their workplaces. The objectives of this study are to learn the following: the extent to which the TAM explains information professionals' attitudes towards cloud computing, and the extent to which personal characteristics, such as cognitive appraisal and openness to experience, explain the intention of information professionals to use cloud computing.

Theoretical background

Cloud computing

Researchers have divided cloud computing into three layers: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). SaaS has changed the concept of software from a product to a service. The software runs in the cloud and the user can access it via the Internet to work on an application. PaaS provides powerful tools for developers to create applications without having to deal with concerns about the infrastructure. IaaS provides complete infrastructure resources (e.g. servers, software, network equipment and storage). With IaaS, consumers do not have to purchase the latest technology, perform maintenance, upgrade software or buy software licenses (Anuar et al., 2013).

Cloud computing deployment can be divided into four types: private clouds, public clouds, community clouds and hybrid clouds (Mell and Grance, 2011). Public clouds have open access, private clouds run within organizations, community clouds contain resources that are shared with others in the community and hybrid clouds encompass two or more cloud models. Anuar et al. (2013) presented the main characteristics of cloud computing: flexible scale, which enables flexible-scale capabilities for computing; virtualization, which offers a new way of getting computing resources remotely, regardless of the location of the user or the resources; high trust, as the cloud offers more reliability to end users than relying on local resources; versatility, because cloud services can serve different sectors in various disciplines using the same cloud; and on-demand service, as end users can tailor their service needs and pay accordingly.

As cloud computing is relatively new, there are not a lot of surveys that focus on it. Several researchers conducted in-depth interviews investigating respondents' attitudes towards keeping their virtual possessions in the online world (Odom et al., 2012). Teneyuca (2011) reported on a survey of cloud computing usage trends that included IT professionals as respondents. Results revealed preferences for virtualization and cloud computing technologies. However, the major reasons for cloud computing adoption being impeded were the lack of cloud computing training (43 per cent) and security concerns (36 per cent).
Another report showed that nearly 40 per cent of Americans think that saving data to their hard drive is more secure than saving it to a cloud (Teneyuca, 2011). A further study (Ion et al., 2011) explored private users' privacy attitudes and beliefs about cloud computing in comparison with those in companies. Anuar et al. (2013) investigated cloud computing in an academic institution, claiming that cloud computing technology enhances performance within the academic institution. A study carried out in the education arena examined factors that led students to adopt cloud computing technology (Behrend et al., 2010).

Technology acceptance model

The TAM (Davis, 1989) is a socio-technical model which aims to explain user acceptance of an information system. It is based on the theory of reasoned action (TRA) (Fishbein and Ajzen, 1975), which seeks to understand how people construct behaviours. The model suggests that technology acceptance can be explained according to the individual's beliefs, attitudes and intentions (Davis, 1989). The TAM hypothesizes that one's intention is the best predictor of usage behaviour and suggests that an individual's behavioural intention to use technology is determined by two beliefs: perceived usefulness (PU) and perceived ease of use (PEOU). PU refers to the individual's perception that using a technology will improve performance, and PEOU addresses a user's perception that using a particular system will be free of effort (Davis, 1989). The current study concentrates on PEOU, as the researchers wanted to examine whether information professionals' perceptions of a new technology are affected by its simplicity and friendly interface. Earlier research mainly investigated personal behaviour towards new information systems and technology in the following settings: corporate environments (Gefen and Straub, 1997); Web shopping (Chang et al., 2002; Lin and Lu, 2000); education, particularly e-learning (Park, 2009) and m-learning (Aharony, 2014); and the library arena (Aharony, 2011; Park et al., 2009).
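The TAM relationships described above can be summarized compactly. The following regression-style formulation is an illustrative reading of the model, not the authors' exact specification:

\[
BI = \beta_0 + \beta_1\,\mathrm{PEOU} + \beta_2\,\mathrm{PU} + \varepsilon,
\qquad
\mathrm{PU} = \gamma_0 + \gamma_1\,\mathrm{PEOU} + \nu
\]

where BI is behavioural intention, and the second equation reflects the TAM assumption that ease of use also feeds into perceived usefulness.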
Personal innovativeness

A construct which may contribute to information professionals' behavioural intention to use cloud computing is personal innovativeness, a major characteristic in innovation diffusion research in general (Agarwal and Prasad, 1998; Rogers, 1983, 1995). Agarwal and Prasad (1998) coined the term "personal innovativeness in the domain of IT" (PIIT), which describes a fairly stable characteristic of the individual across situational considerations. Previous studies found that personal innovativeness is a significant determinant of PEOU, as well as of PU (Agarwal and Karahanna, 2000; Lewis et al., 2003). Several researchers have suggested that innovative people will search for intellectually or sensorially stimulating experiences (Uray and Dedeoglu, 1997).

Openness to experience

Another variable that may predict respondents' perspectives towards cloud computing is openness to experience, which addresses the tendency to search for new and challenging experiences, to think creatively and to enjoy intellectual inquiries (McCrae and Sutin, 2009). People who are highly open to experience are perceived as also open to new challenges, thoughts and emotions (McCrae and Costa, 2003). Studies have reported a positive relation between openness to experience and intelligence tests (Gignac et al., 2004). According to Weiss et al. (2012), challenging transitions may influence those who are high and those who are low in openness to experience differently. Those who are high may approach these situations with curiosity, emphasizing the new possibilities offered to them, whereas those who are low in openness may feel threatened and try to avoid such situations by adhering to predictable environments. Various researchers note that people who are high in openness to experience are motivated to resolve new situations (McCrae, 1996; Sorrentino and Roney, 1999). Furthermore, openness to experience is associated with cognitive flexibility and open-mindedness (McCrae and Costa, 1997), and negatively associated with rigidity, uncertainty and inflexibility (Hodson and Sorrentino, 1999). Thus, people who are less open to experience tend to avoid novelty and prefer certainty. Studies reveal that openness to experience declines in later years (Allemand et al., 2007; Donnellan and Lucas, 2008).

Challenge and threat

The following section focuses on the personality characteristics of challenge and threat that might affect information professionals' behavioural intention to use cloud computing. Challenge and threat are the main variables of a unidimensional, bipolar motivational state. They are the result of relative evaluations of situational demands and personal resources that are influenced by both cognitive and affective processes in motivated performance situations (Vick et al., 2008). According to Lazarus and Folkman (1984), challenge refers to the potential for growth or gain and is characterized by excitement and eagerness, while threat addresses potential harm and is characterized by anxiety, fear and anger. Situations that suggest low demands and high resources are described as challenging, while those that suggest high demands and low resources are perceived as threatening (Seginer, 2008). In general, challenge or threat can arise in situations such as delivering a speech, taking a test, competing in sports or performing with another person on a cooperative or competitive task.

The challenge appraisal suggests that, with effort, the demands of the situation can be overcome (Lazarus et al., 1980; Park and Folkman, 1997). On the other hand, threat appraisal indicates potential danger to one's well-being or self-esteem (Lazarus, 1991; Lazarus and Folkman, 1984), as well as low confidence in one's ability to cope with the threat (Bandura, 1997; Lazarus, 1991; Lazarus and Folkman, 1984). Different studies (Blascovich et al., 2002; Blascovich and Mendes, 2000; Lazarus and Folkman, 1984; Lazarus et al., 1980) have found that challenge leads to positive feelings associated with enjoyment, better performance, eagerness and anticipation of personal rewards or benefits. Several studies focusing on the threat and challenge variables have also been carried out in the library and information science environment (Aharony, 2009, 2011).

Self-efficacy

An additional variable which may influence individuals' behavioural intention to use cloud computing is self-efficacy. The concept of self-efficacy was developed within Bandura's (1997) social learning theory. Self-efficacy addresses individuals' beliefs that they possess the resources and skills needed to perform and succeed at a specific task. Therefore, individuals' previous performance and their perceptions of the relevant resources available may influence self-efficacy beliefs (Bandura, 1997).
Self-efficacy is not just an ability perception; it encompasses the motivation and effort required to complete the task, and it helps determine which activities are undertaken, the effort expended in pursuing them and persistence in the face of obstacles (Bandura, 1986, 1997). The construct of self-efficacy is made up of four principal sources of information: "mastery experience", which refers to previous experience, including success and failure; "vicarious experience", which addresses observing the performances, successes and failures of others; "social persuasion", which includes verbal persuasion from peers, colleagues and relatives; and "physiological and emotional states", from which people judge their strengths, capabilities and vulnerabilities (Bandura, 1986, 1994, 1995).

As self-efficacy is based on self-perceptions regarding different behaviours, it is considered to be situation specific. In other words, a person may exhibit high levels of self-efficacy within one domain while exhibiting low levels within another (Cassidy and Eachus, 2002). Thus, self-efficacy has generated research in various disciplines such as medicine, business, psychology and education (Kear, 2000; Lev, 1997; Schunk, 1985; Koul and Rubba, 1999). Computer self-efficacy is a sub-field of self-efficacy. It is defined as one's perceived ability to accomplish a task with the use of a computer (Compeau and Higgins, 1995). Various studies have noted that training and experience play important roles in computer self-efficacy (Compeau and Higgins, 1995; Kinzie et al., 1994; Stone and Henry, 2003). Several studies have investigated the effect of computer self-efficacy on computer training performance (Compeau and Higgins, 1995) and on IT use (Easley et al., 2003).

Hypotheses

Based on the study objectives, and assuming that PEOU, personal innovativeness, cognitive appraisal and openness to experience may predict information professionals' behavioural intention to use cloud computing, the underlying assumptions of this study are as follows:

H1. High scores in respondents' PEOU will be associated with high scores in their behavioural intention to use cloud computing.

H2. High scores in respondents' personal innovativeness will be associated with high scores in their behavioural intention to use cloud computing.

H3. Low scores in respondents' threat and high scores in respondents' challenge will be associated with high scores in their behavioural intention to use cloud computing.

H4. High scores in respondents' self-efficacy will be associated with high scores in their behavioural intention to use cloud computing.

H5. High scores in respondents' openness to experience will be associated with high scores in their behavioural intention to use cloud computing.

H6. High scores in respondents' computer competence and in social media use will be associated with high scores in their behavioural intention to use cloud computing.

Methodology

Data collection

The research was conducted in Israel during the second semester of the 2013 academic year and encompassed two groups of information professionals: librarians and information specialists. The researchers sent a message and a questionnaire to an Israeli library and information science discussion group named "safranym", which includes school, public and academic librarians, and to an Israeli information specialist group named "I-fish", which consists of information specialists who work in different organizations. Researchers explained the study's purpose and asked the members to complete the questionnaire.
These two discussion groups consist of about 700 members; 140 responses were received, a reply rate of 20 per cent.

Data analysis

Of the participants, 25 (17.9 per cent) were male and 115 (82.1 per cent) were female. Their average age was 46.3 years.

Measures

The current study is based on quantitative research. Researchers used seven questionnaires to gather the following data: personal details, computer competence, attitudes towards cloud computing, behavioural intention, openness to experience, cognitive appraisal and self-efficacy.

The personal details questionnaire had two statements. The computer competence questionnaire consisted of two statements rated on a five-point Likert scale (1 = strongest disagreement; 5 = strongest agreement). The cloud computing attitude questionnaire, based on Liu et al. (2010), was modified for this study and consisted of six statements rated on a seven-point Likert scale (1 = strongest disagreement; 7 = strongest agreement). A principal components factor analysis using Varimax rotation with Kaiser normalization was conducted and explained 82.98 per cent of the variance. The analysis revealed two distinct factors: the first related to information professionals' personal innovativeness (items 2, 3 and 5), and the second to information professionals' perceptions about cloud computing ease of use (PEOU) (items 1, 4 and 6); the values of Cronbach's alpha were 0.89 and 0.88, respectively.

The behavioural intention questionnaire, based on Liu et al. (2010), was modified for this study and consisted of three statements rated on a six-point Likert scale (1 = strongest disagreement; 6 = strongest agreement). Its Cronbach's alpha was 0.79. The openness to experience questionnaire was derived from the Big Five questionnaire (John et al., 1991) and consisted of eight statements rated on a five-point Likert scale (1 = strongest disagreement; 5 = strongest agreement); Cronbach's alpha was 0.81. The cognitive appraisal questionnaire measured information professionals' feelings of threat versus challenge when confronted with new situations. It consisted of 10 statements rated on a six-point scale (1 = fully disagree; 6 = fully agree). This questionnaire was previously used (Aharony, 2009, 2011; Yekutiel, 1990) and consisted of two factors: threat (items 1, 2, 3, 5, 7 and 8) and challenge (items 4, 6, 9 and 10). Cronbach's alpha was 0.70 for the threat factor and 0.89 for the challenge factor. The self-efficacy questionnaire was based on Askar and Umay's (2001) questionnaire and consisted of 18 statements rated on a five-point scale (1 = fully disagree; 5 = fully agree); Cronbach's alpha was 0.96.
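For reference, the Cronbach's alpha reliability coefficient reported for each scale above is the standard statistic computed from item variances; this is the textbook definition, not a formula given in the paper:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{Y_i}}{\sigma^2_X}\right)
\]

where \(k\) is the number of items in the scale, \(\sigma^2_{Y_i}\) is the variance of item \(i\) and \(\sigma^2_X\) is the variance of the total scale score; values near 1 indicate high internal consistency.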
Findings

To examine the relationships between openness to experience, cognitive appraisal (threat, challenge and self-efficacy), the TAM variables (personal innovativeness and PEOU) and behavioural intention to use cloud computing, researchers performed Pearson correlations, which are given in Table I.

Table I presents significant correlations between the research variables and the dependent variable (behavioural intention to use cloud computing). All correlations are positive, except the one between threat and behavioural intention to use cloud computing. Hence, the higher these measures, the greater the behavioural intention to use cloud computing. A significant negative correlation was found between threat and the dependent variable: the more threatened respondents are, the lower their behavioural intention to use cloud computing.

Regarding the correlations between the research variables, significant positive correlations were found between openness to experience and challenge, self-efficacy, personal innovativeness and PEOU, and a significant negative correlation was found between openness to experience and threat. That is, the more open to experience respondents are, the more challenged they feel, the higher their self-efficacy, personal innovativeness and PEOU, and the less threatened they feel. In addition, significant negative correlations were found between threat and self-efficacy, personal innovativeness and PEOU. We can conclude that the more threatened respondents are, the lower their self-efficacy and personal innovativeness, and the less they perceive cloud computing as easy to use. Significant positive correlations were also found between self-efficacy and personal innovativeness and PEOU. Thus, the higher respondents' self-efficacy, the more personally innovative they are and the more they perceive cloud computing as easy to use.

The study also examined two variables associated with computer competence: computer use and social media use. Table II presents correlations between these two variables and the other research variables. Significant, high correlations were found between the computer competence variables and openness to experience, self-efficacy, personal innovativeness, PEOU and behavioural intention to use cloud computing. Hence, the higher respondents' computer competence, the more they are open to experience, self-efficacious and personally innovative, the more they perceive cloud computing as easy to use, and the higher their behavioural intention to use cloud computing.

Researchers also examined relationships with demographic variables. To examine the relationship between age and the other research variables, the researchers performed Pearson correlations. A significant negative correlation was found between age and PEOU, r = -0.21, p < 0.05; we may assume that the younger the respondents, the more they perceive cloud computing as easy to use. To examine whether there are differences between males and females on the research variables, a MANOVA was performed; it did not reveal a significant difference between the two groups, F(7,130) = 1.88, p > 0.05.
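The Pearson coefficient used throughout these analyses is the standard sample correlation, stated here for completeness (the paper itself reports only the resulting r values):

\[
r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}
\]

with r ranging from -1 (perfect negative association) to 1 (perfect positive association).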
The researchers also conducted a hierarchical regression using behavioural intention to use cloud computing as the dependent variable. The predictors were entered in five steps: respondents' openness to experience; respondents' computer competence (computer use and social media use); cognitive appraisal (threat, challenge and self-efficacy); the TAM variables (personal innovativeness and PEOU); and interactions with the TAM variables. Entry of the first four steps was forced, while the interactions were entered according to their contribution to the explained variance of behavioural intention to use cloud computing. The regression explained 54 per cent of the variance in behavioural intention to use cloud computing. Table III presents the standardized and unstandardized coefficients of the hierarchical regression.

The first step introduced the openness variable, which contributed significantly by adding 13 per cent to the explained variance of behavioural intention to use cloud computing. The beta coefficient of the openness variable is positive; hence, the more open to experience respondents are, the higher their behavioural intention to use cloud computing. The second step introduced the two computer competence variables (computer use and social media use), which contributed 5 per cent to the explained variance. Of these two variables, only the social media variable contributed significantly, and its beta coefficient was positive: the more respondents use social media, the higher their behavioural intention to use cloud computing. Note that the Pearson analysis found significant positive correlations between both variables and behavioural intention; it seems that because of the correlation between the two variables, r = 0.33, p < 0.001, the computer use variable did not contribute to the regression.

In the third step, researchers added respondents' personal appraisal variables (threat, challenge and self-efficacy), which also contributed significantly, adding 25 per cent to the explained variance. The beta coefficients of challenge and self-efficacy were positive, while that of threat was negative. Therefore, the more respondents perceived themselves as challenged and self-efficacious, and the less they perceived themselves as threatened, the higher their behavioural intention to use cloud computing. The inclusion of this step decreased the β of the openness to experience variable and rendered it insignificant, suggesting possible mediation. Sobel tests indicated that self-efficacy mediates between openness to experience and behavioural intention (z = 4.68, p < 0.001). Hence, the more respondents are open to experience, the higher their self-efficacy and, as a result, the higher their behavioural intention to use cloud computing.

The fourth step added the TAM variables (respondents' PEOU and personal innovativeness), which also contributed significantly, adding 9 per cent to the explained variance. The beta coefficients were positive; therefore, the more respondents perceived themselves as personally innovative and cloud computing as easy to use, the higher their behavioural intention to use it. Note that in this step the β of self-efficacy decreased. Sobel tests indicated that, of the two variables, PEOU mediates between self-efficacy and behavioural intention (z = 4.77, p < 0.001). Thus, the higher respondents' self-efficacy, the higher their perceived ease of use of cloud computing and, as a result, the higher their behavioural intention to use it.

In the fifth step, researchers added the interaction of computer use × personal innovativeness. This interaction added 2 per cent to the explained variance of behavioural intention to use cloud computing and is presented in Figure 1. Figure 1 shows the correlation between personal innovativeness and behavioural intention to use cloud computing among respondents who are low and high in computer use. This correlation is higher among respondents who are low in computer use, β = 0.40, p < 0.05, than among those who are high in computer use, β = 0.04, p < 0.05. It seems that, especially among participants who are low in computer use, the higher their personal innovativeness, the higher their behavioural intention to use cloud computing.
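Two details of the analysis above can be made explicit. The Sobel statistic used for the mediation tests is the standard one (the paper reports only the resulting z values), where a and b are the estimated paths from predictor to mediator and from mediator to outcome, with standard errors SE_a and SE_b:

\[
z = \frac{ab}{\sqrt{b^2\,SE_a^2 + a^2\,SE_b^2}}
\]

And the per-step contributions to explained variance are consistent with the reported total:

\[
\Delta R^2:\; 0.13 + 0.05 + 0.25 + 0.09 + 0.02 = 0.54,
\]

matching the 54 per cent of variance the full regression explains.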
Discussion

The present research explored the extent to which the TAM and personal characteristics, such as threat and challenge, self-efficacy and openness to experience, explain information professionals' perspectives on cloud computing. Researchers divided the study hypotheses into three categories: the first (H1-H2) refers to the TAM, the second (H3-H5) to personality characteristics and the third (H6) to computer competence. All hypotheses were accepted. Regarding the first category, results show that both hypotheses were accepted: high scores in PEOU and personal innovativeness are associated with high scores in respondents' intention to adopt cloud computing. These findings can be associated with previous...

Foreign-Language Translation for a Computer Science Graduation Thesis


Appendix (English source with translation)

Rich Client Tutorial Part 1

The Rich Client Platform (RCP) is an exciting new way to build Java applications that can compete with native applications on any platform. This tutorial is designed to get you started building RCP applications quickly. It has been updated for Eclipse 3.1.2.

By Ed Burnette, SAS
July 28, 2004
Updated for 3.1.2: February 6, 2006

Introduction

Try this experiment: show Eclipse to some friends or co-workers who haven't seen it before and ask them to guess what language it is written in. Chances are, they'll guess VB, C++, or C#, because those languages are used most often for high quality client side applications. Then watch the look on their faces when you tell them it was created in Java, especially if they are Java programmers.

Because of its unique open source license, you can use the technologies that went into Eclipse to create your own commercial quality programs. Before version 3.0, this was possible but difficult, especially when you wanted to heavily customize the menus, layouts, and other user interface elements. That was because the "IDE-ness" of Eclipse was hard-wired into it. Version 3.0 introduced the Rich Client Platform (RCP), which is basically a refactoring of the fundamental parts of Eclipse's UI, allowing it to be used for non-IDE applications. Version 3.1 updated RCP with new capabilities and, most importantly, new tooling support to make it easier to create than before.

If you want to cut to the chase and look at the code for this part, you can find it in the accompanying zip file. Otherwise, let's take a look at how to construct an RCP application.

Getting started

RCP applications are based on the familiar Eclipse plug-in architecture (if it's not familiar to you, see the references section). Therefore, you'll need to create a plug-in to be your main program. Eclipse's Plug-in Development Environment (PDE) provides a number of wizards and editors that take some of the drudgery out of the process. PDE is included with the Eclipse SDK download, so that is the package you should be using. Here are the steps you should follow to get started.

First, bring up Eclipse and select File > New > Project, then expand Plug-in Development and double-click Plug-in Project to bring up the Plug-in Project wizard. On the subsequent pages, enter a Project name such as org.eclipse.ui.tutorials.rcp.part1, indicate you want a Java project, select the version of Eclipse you're targeting (at least 3.1), and enable the option to Create an OSGi bundle manifest. Then click Next >.

Beginning in Eclipse 3.1 you will get best results by using the OSGi bundle manifest. In contrast to previous versions, this is now the default.

In the next page of the wizard you can change the Plug-in ID and other parameters. Of particular importance is the question, "Would you like to create a rich client application?". Select Yes. The generated plug-in class is optional, but for this example just leave all the other options at their default values. Click Next > to continue.

If you get a dialog asking if Eclipse can switch to the Plug-in Development Perspective, click Remember my decision and select Yes (this is optional).

Starting with Eclipse 3.1, several templates have been provided to make creating an RCP application a breeze. We'll use the simplest one available and see how it works. Make sure the option to Create a plug-in using one of the templates is enabled, then select the Hello RCP template. This is RCP's equivalent of "Hello, world". Click Finish to accept all the defaults and generate the project (see Figure 1). Eclipse will open the Plug-in Manifest Editor. The Plug-in Manifest editor puts a friendly face on the various configuration files that control your RCP application.

Figure 1. The Hello World RCP project was created by a PDE wizard.
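One of those configuration files is the OSGi bundle manifest (META-INF/MANIFEST.MF). For orientation, here is a sketch of roughly what a 3.1-era PDE wizard generates for this project; the exact names and version numbers are illustrative:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Part1 Plug-in
Bundle-SymbolicName: org.eclipse.ui.tutorials.rcp.part1; singleton:=true
Bundle-Version: 1.0.0
Require-Bundle: org.eclipse.core.runtime,
 org.eclipse.ui
```

The Require-Bundle entries pull in the runtime and Workbench APIs that every RCP application needs.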
Taking it for a spin

Trying out RCP applications used to be somewhat tedious. You had to create a custom launch configuration, enter the right application name, and tweak the plug-ins that were included. Thankfully the PDE keeps track of all this now. All you have to do is click on the Launch an Eclipse Application button in the Plug-in Manifest editor's Overview page. You should see a bare-bones Workbench start up (see Figure 2).

Figure 2. By using the templates you can be up and running an RCP application in minutes.

Making it a product

In Eclipse terms, a product is everything that goes with your application, including all the other plug-ins it depends on, a command to run the application (called the native launcher), and any branding (icons, etc.) that makes your application distinctive. Although, as we've just seen, you can run an RCP application without defining a product, having one makes it a whole lot easier to run the application outside of Eclipse. This is one of the major innovations that Eclipse 3.1 brought to RCP development.

Some of the more complicated RCP templates already come with a product defined, but the Hello RCP template does not, so we'll have to make one.

The easiest way to create a product is to add a product configuration file to the project. Right click on the plug-in project and select New > Product Configuration. Then enter a file name for this new configuration file, such as part1.product. Leave the other options at their default values. Then click Finish. The Product Configuration editor will open. This editor lets you control exactly what makes up your product, including all its plug-ins and branding elements.

In the Overview page, select the New... button to create a new product extension. Type in or browse to the defining plug-in (org.eclipse.ui.tutorials.rcp.part1). Enter a Product ID such as product, and for the Product Application select org.eclipse.ui.tutorials.rcp.part1.application. Click Finish to define the product. Back in the Overview page, type in a new Product Name, for example RCP Tutorial 1.

In Eclipse 3.1.0, if you create the product before filling in the Product Name you may see an error appear in the Problems view. The error will go away when you Synchronize (see below). This is a known bug that is fixed in newer versions. Always use the latest available maintenance release for the version of Eclipse you're targeting!

Now select the Configuration tab and click Add.... Select the plug-in you just created (org.eclipse.ui.tutorials.rcp.part1) and then click on Add Required Plug-ins. Then go back to the Overview page and press Ctrl+S or File > Save to save your work.

If your application needs to reference plug-ins that cannot be determined until run time (for example the tomcat plug-in), then add them manually in the Configuration tab.

At this point you should test out the product to make sure it runs correctly. In the Testing section of the Overview page, click on Synchronize, then click on Launch the product. If all goes well, the application should start up just like before.

Plug-ins vs. features

On the Overview page you may have noticed an option that says the product configuration is based on either plug-ins or features.
The simplest kind of configuration is one based on plug-ins, so that's what this tutorial uses. If your product needs automatic update or Java Web Start support, then eventually you should convert it to use features. But take my advice and get it working without them first.

Running it outside of Eclipse

The whole point of all this is to be able to deploy and run stand-alone applications without the user having to know anything about the Java and Eclipse code being used under the covers. For a real application you may want to provide a self-contained executable generated by an install program like InstallShield or NSIS. That's really beyond the scope of this article though, so we'll do something simpler.

The Eclipse plug-in loader expects things to be in a certain layout, so we'll need to create a simplified version of the Eclipse install directory. This directory has to contain the native launcher program, config files, and all the plug-ins required by the product. Thankfully, we've given the PDE enough information that it can put all this together for us now.

In the Exporting section of the Product Configuration editor, click the link to Use the Eclipse Product export wizard. Set the root directory to something like RcpTutorial1. Then select the option to deploy into a Directory, and enter a directory path to a temporary (scratch) area such as C:\Deploy. Check the option to Include source code if you're building an open source project. Press Finish to build and export the program.

The compiler options for source and class compatibility in the Eclipse Product export wizard will override any options you have specified in your project or global preferences. As part of the export process, the plug-in code is recompiled by an Ant script using these options.

The application is now ready to run outside Eclipse. When you're done you should have a structure that looks like this in your deployment directory:

RcpTutorial1
|   .eclipseproduct
|   eclipse.exe
|   startup.jar
+---configuration
|       config.ini
+---plugins
        org.eclipse.core.commands_3.1.0.jar
        org.eclipse.core.expressions_3.1.0.jar
        org.eclipse.core.runtime_3.1.2.jar
        org.eclipse.help_3.1.0.jar
        org.eclipse.jface_3.1.1.jar
        org.eclipse.osgi_3.1.2.jar
        org.eclipse.swt.win32.win32.x86_3.1.2.jar
        org.eclipse.swt_3.1.0.jar
        org.eclipse.ui.tutorials.rcp.part1_1.0.0.jar
        org.eclipse.ui.workbench_3.1.2.jar
        org.eclipse.ui_3.1.2.jar

Note that all the plug-ins are deployed as jar files. This is the recommended format starting in Eclipse 3.1. Among other things, this saves disk space in the deployed application.

Previous versions of this tutorial recommended using a batch file or shell script to invoke your RCP program. It turns out this is a bad idea because you will not be able to fully brand your application later on. For example, you won't be able to add a splash screen. Besides, the export wizard does not support the batch file approach, so just stick with the native launcher.

Give it a try! Execute the native launcher (eclipse or eclipse.exe by default) outside Eclipse and watch the application come up. The name of the launcher is controlled by branding options in the product configuration.

Troubleshooting

Error: Launching failed because the org.eclipse.osgi plug-in is not included...

You can get this error when testing the product if you've forgotten to list the plug-ins that make up the product.
In the Product Configuration editor, select the Configuration tab, and add all your plug-ins plus all the ones they require, as instructed above.

Compatibility and migration

If you are migrating a plug-in from version 2.1 to version 3.1, there are a number of issues, covered in the on-line documentation, that you need to be aware of. If you're making the smaller step from 3.0 to 3.1, the number of differences is much smaller. See the References section for more information.

One word of advice: be careful not to duplicate any information in both plugin.xml and MANIFEST.MF. Typically this would not occur unless you are converting an older plug-in that did not use MANIFEST.MF into one that does, and even then only if you are editing the files by hand instead of going through the PDE.

Conclusion

In part 1 of this tutorial, we looked at what is necessary to create a bare-bones Rich Client application. The next part will delve into the classes created by the wizards, such as the WorkbenchAdvisor class. All the sample code for this part may be found in the accompanying zip file.

References

RCP Tutorial Part 2
RCP Tutorial Part 3
Eclipse Rich Client Platform
RCP Browser example (project org.eclipse.ui.examples.rcp.browser)
PDE Does Plug-ins
How to Internationalize your Eclipse Plug-in
Notes on the Eclipse Plug-in Architecture
Plug-in Migration Guide: Migrating to 3.1 from 3.0
Plug-in Migration Guide: Migrating to 3.0 from 2.1

Translation: Rich Client Tutorial Part 1. The Rich Client Platform (RCP) is an exciting new way to build Java applications that can compete with native applications on any platform.
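As a preview of the WorkbenchAdvisor class mentioned in the conclusion above, here is a hedged sketch of the minimal advisor a Hello RCP project relies on in Eclipse 3.1; the class and perspective names are illustrative, not the wizard's verbatim output:

```java
import org.eclipse.ui.application.WorkbenchAdvisor;

// The advisor configures the Workbench; at minimum it must name the
// perspective to show when the application window first opens.
public class ApplicationWorkbenchAdvisor extends WorkbenchAdvisor {

    // Illustrative perspective ID; it must match a perspective
    // declared in the plug-in's plugin.xml.
    private static final String PERSPECTIVE_ID =
            "org.eclipse.ui.tutorials.rcp.part1.perspective";

    public String getInitialWindowPerspectiveId() {
        return PERSPECTIVE_ID;
    }
}
```

Part 2 of the tutorial examines what this class and its companions actually do.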

Cloud Computing Foreign-Language Translation and References


Cloud Computing Foreign-Language Translation and References (this document contains the English original and its Chinese translation)

Original text:

Technical Issues of Forensic Investigations in Cloud Computing Environments

Dominik Birk
Ruhr-University Bochum
Horst Goertz Institute for IT Security
Bochum, Germany

Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldom discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore, we discuss possible solutions and possible new methodologies helping customers to perform such investigations.

I. INTRODUCTION

Although the cloud might appear attractive to small as well as to large companies, it does not come without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, a company's main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform its own investigation without dependency on third parties. In the cloud, this is no longer possible: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. (We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.) For auditors, this situation does not change: questions about who accessed specific data and information cannot be answered by the customers if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing sensitive information and data as well, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed, albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40], stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional lives, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future.

Within this work, we focus the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: we discuss the related work and the fundamental technical background information on digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments, including the cross-disciplinary aspects. We conclude in section V.

II. RELATED WORK

Various works have been published in the field of cloud security and privacy [9], [35], [30], focusing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10], contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet they lack practical implementations [24], [37], [23]. Traditional computer forensics already has well researched methods for various fields of application [4], [5], [6], [11], [13]. The aspects of forensics in virtual systems have also been addressed by several works [2], [3], [20], including the notion of virtual introspection [25].
In addition, the NIST has already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40], after Bebee et al. had already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherently strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments; unfortunately, no specific issues and solutions of cloud forensics were proposed, which will be done within this work.

III. TECHNICAL BACKGROUND

A. Traditional Digital Forensics

The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:

1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing, where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology discussed in section IV-B3 provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.

2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.

3) Finally, at the end of the examination and analysis of the data, the results of the previous phases are reprocessed in the Presentation Phase. The report created in this phase is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results and is complete and clear to understand.

Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court, it is crucial that the chain of custody is preserved.
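Preserving integrity in the Securing Phase usually means fingerprinting the acquired image so that any later modification becomes detectable and the digest can accompany the chain-of-custody record. A minimal sketch, assuming the evidence has already been acquired to a local file (the file name is illustrative):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Fingerprint an acquired evidence image with SHA-256 so later analysis
// can be verified against the state of the data at acquisition time.
public class EvidenceHash {
    public static void main(String[] args) throws Exception {
        Path image = Path.of("acquired-disk-image.dd"); // illustrative name
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(image)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                sha256.update(buf, 0, n); // stream the image, don't load it whole
            }
        }
        // Record this digest in the chain-of-custody documentation.
        System.out.println(HexFormat.of().formatHex(sha256.digest()));
    }
}
```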
B. Cloud Computing

According to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. This new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure as a Service (IaaS) model, the customer uses the virtual machine provided by the CSP to install his own system on it. The system can be used like any other physical computer, with a few limitations. However, the additional customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. This service model can boost the efficiency of the software development process. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence.

Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently of the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer over the application itself, over the data pushed into the applications and over the underlying technical infrastructure.

C. Fault Model

Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:

1) Maliciously Intended Faults

Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults, and they constitute a constant threat to customers and CSP. This model also includes a malicious CSP, albeit one assumed to be rare in real world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk.
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.2) Unintentional FaultsInconsistencies in technical systems or processes in the cloud do not have implicitly to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the costumer(i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.IV. TECHNICAL ISSUESDigital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators which explore these de-allocated disk space on harddisks. In case the data is in motion, data is transferred from one entity to another e.g. a typical file transfer over a network can be seen as a data in motion scenario. Several encapsulated protocols contain the data each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest or in motion but in execution. On the executing system, process information, machine instruction and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources for evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environmentsas well as suggest several solutions to these problems.A. Sources and Nature of EvidenceConcerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between thedifferent cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.1) Virtual Cloud Instance: The VM within the cloud, where i.e. data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both, the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM. 
Therefore, virtual instances can be still running during analysis which leads to the case of live investigations [41] or can be turned off leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.2) Network Layer: Traditional network forensics is knownas the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However in practice, ordinary CSP currently do not provide any log data from the network components used by the customer’s instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log datain general which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could beextracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: In ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers strongly make use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].B. Investigations in XaaS EnvironmentsTraditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They have no longer the option of seizing physical data storage. 
Data and processes of the customer are dispensed over an undisclosed amount of virtual instances, applications and network elements. Hence, it is in question whether preliminary findings of the computer forensic community in the field of digital forensics apparently have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly, will be taken into consideration. We also suggest potential solutions to the mentioned problems.1) SaaS Environments: Especially in the SaaS model, the customer does notobtain any control of the underlying operating infrastructure such as network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited userspecific application configuration settings can be controlled contributing to the evidences which can be extracted fromthe client (see section IV-A3). In a lot of cases this urges the investigator to rely on high-level logs which are eventually provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidences.a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it states a huge problem specifically for SaaS-based applications: Current global acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have tremendous impact: If sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP e.g. due to storage reasons. The customer has no ability to proof otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10]. Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information suchas access, error and event logs that could improve their situation in case of aninvestigation. 
Furthermore, due to the limited ability of receiving forensic information from the server and proofing integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR) in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer obtains theoretically the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim this transfer is encrypted but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However potential adversaries, which can compromise the application during runtime, should not be able to alter these log files afterwards. Suggested Solution:Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of an PaaS application by adversaries could be monitored by push-only mechanisms for log data presupposing that the needed information to detect such an attack are logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems havebeen compromised is crucial not only for recovering from an incident. Also forensic investigations gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances do provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is caused throughthe ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before itis transferred to third-party hosts mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although, IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP. 
He controls the hypervisor which is e.g. responsible for enforcing hardware boundaries and routing hardware requests among different VM. Hence, besides the security responsibilities of the hypervisor, he exerts tremendous control over how customer’s VM communicate with the hardware and theoretically can intervene executed processes on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore leading to the leakage of the secret key. Although this risk can be disregarded in most of the cases, the impact on the security of high security environments is tremendous.a) Snapshot Analysis: Traditional forensics expect target machines to be powered down to collect an image (dead virtual instance). This situation completely changed with the advent of the snapshot technology which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V.A snapshot, also referred to as the forensic image of a VM, providesa powerful tool with which a virtual instance can be clonedby one click including also the running system’s mem ory. Due to the invention of the snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis(live virtual instance). This behavior is especially important for scenarios in which a downtime of a system is not feasible or practical due to existing SLA. However the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances become more common providing evidence data that。

English References for Computer Science Graduation Projects

When it comes to references for a graduation project or thesis, you may consider the following classic works in computer science:

1. D. E. Knuth, "The Art of Computer Programming," Addison-Wesley, 1968.
2. A. Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem," Proceedings of the London Mathematical Society, 1936.
3. V. Bush, "As We May Think," The Atlantic Monthly, 1945.
4. C. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, 1948.
5. E. W. Dijkstra, "Go To Statement Considered Harmful," Communications of the ACM, 1968.
6. L. Lamport, "Time, Clocks, and the Ordering of Events in a Distributed System," Communications of the ACM, 1978.
7. T. Berners-Lee, R. Cailliau, "WorldWideWeb: Proposal for a HyperText Project," 1990.
8. S. Brin, L. Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," Computer Networks and ISDN Systems, 1998.

These works cover classic results in computer science, including algorithms, the theory of computation, distributed systems and human-computer interaction.

Translated Foreign-Language Reference on Cloud Computing
(This document contains the English original and its Chinese translation.)

Original text:

Technical Issues of Forensic Investigations in Cloud Computing Environments

Dominik Birk
Ruhr-University Bochum
Horst Goertz Institute for IT Security
Bochum, Germany

Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldom discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore, we discuss possible solutions and possible new methodologies helping customers to perform such investigations.

(We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.)

I. INTRODUCTION

Although the cloud might appear attractive to small as well as to large companies, it does not come without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of the data. Security policies, companies' main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. For auditors, this situation does not change: questions of who accessed specific data and information cannot be answered by the customers, if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensitive information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed, albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40], stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional lives, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: we discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments, including the cross-disciplinary aspects. We conclude in section V.

II. RELATED WORK

Various works have been published in the field of cloud security and privacy [9], [35], [30], focusing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10], contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet lacking practical implementations [24], [37], [23]. Traditional computer forensics has already well-researched methods for various fields of application [4], [5], [6], [11], [13]. Also the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20], including the notion of virtual introspection [25].
In addition, NIST has already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40], after Bebee et al. had already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherently strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments; unfortunately, no specific issues and solutions of cloud forensics were proposed there, which is what this work sets out to do.

III. TECHNICAL BACKGROUND

A. Traditional Digital Forensics

The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of specialized personnel and methods to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:

1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing, where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology discussed in section IV-B3 provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.

2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together to decipher the real story of what happened and to provide a deeper look into the data.

3) Finally, at the end of the examination and analysis of the data, the results of the previous phases are reprocessed in the Presentation Phase. The report created in this phase is a compilation of all the documentation and evidence from the analysis stage. The main requirements for such a report are that it contains all results and that it is complete and clear to understand.

Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real-world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court, it is crucial that the chain of custody is preserved.
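The integrity requirement of the Securing Phase can be made concrete with a short sketch. This is a minimal example, not part of the original paper: it creates a bitwise copy of a source device or file and records a SHA-256 digest, which a chain-of-custody log would carry forward; the paths are illustrative.

```python
import hashlib

def secure_copy(source: str, image: str, chunk_size: int = 1 << 20) -> str:
    """Create a bitwise copy of `source` in `image` and return a SHA-256 digest.

    Recomputing the digest over the image later shows it has not been
    altered since acquisition - the technical core of a chain of custody.
    """
    sha = hashlib.sha256()
    with open(source, "rb") as src, open(image, "wb") as dst:
        while chunk := src.read(chunk_size):
            sha.update(chunk)
            dst.write(chunk)
    return sha.hexdigest()

# Illustrative usage; the digest would be recorded in the custody log:
# digest = secure_copy("/dev/sdb", "evidence.img")
```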
B. Cloud Computing

According to NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. This definition brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used:

In the Infrastructure as a Service (IaaS) model, the customer uses a virtual machine provided by the CSP to install his own system on it. The system can be used like any other physical computer, with a few limitations. However, the additional customer power over the system comes along with additional security obligations.

Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. This service model can be a driving force for the efficiency of the software development process.

In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence.

Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for one organization, but may not provide the scalability and agility of public offers. The additional notions of community and hybrid clouds are not explicitly covered within this work. However, independently of the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer over the application itself, over the data pushed into the applications, and over the underlying technical infrastructure.

C. Fault Model

Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:

1) Maliciously Intended Faults

Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the source of these faults and constitute a constant threat to customers and the CSP. This model also includes a malicious CSP, albeit one assumed to be rare in real-world scenarios. Additionally, from a technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk.
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.

2) Unintentional Faults

Inconsistencies in technical systems or processes in the cloud do not necessarily have to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the customer (i.e., loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong interest in discovering the reasons and deploying corresponding fixes.

IV. TECHNICAL ISSUES

Digital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system, but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators, who explore de-allocated disk space on hard disks. In case the data is in motion, data is transferred from one entity to another; a typical file transfer over a network can be seen as a data-in-motion scenario. Several encapsulated protocols contain the data, each leaving specific traces on systems and network devices which can in return be used by investigators. Data can also be loaded into memory and executed as a process. In this case, the data is neither at rest nor in motion but in execution. On the executing system, process information, machine instructions and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources of evidential data in cloud environments, discuss the technical issues of digital investigations in XaaS environments and suggest several solutions to these problems.

A. Sources and Nature of Evidence

Concerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between the different cloud service and deployment models. The virtual machine (VM), hosting in most cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between the different parties involved. The browser on the client, often acting as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently of the model used, the following three components can act as sources of potential evidential data.

1) Virtual Cloud Instance: The VM within the cloud, where e.g. data is stored or processes are handled, contains potential evidence [2], [3]. In most cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor, and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM.
Therefore, virtual instances can still be running during analysis, which leads to the case of live investigations [41], or can be turned off, leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.

2) Network Layer: Traditional network forensics is known as the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several pieces of information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. In practice, however, ordinary CSP currently do not provide any log data from the network components used by the customer's instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information, and network log data in general, which is crucial for further investigative steps. This situation gets even more complicated in the case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.

3) Client System: On the system layer of the client, it completely depends on the model used (IaaS, PaaS, SaaS) if and where potential evidence could be extracted. In most scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications, which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.

a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: in ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been the victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers make heavy use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].
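As a rough illustration of the browser forensics just described, the following sketch (not from the paper) reads visited URLs from a Chromium-style History file, which is an SQLite database with a `urls` table; the schema is an assumption about the browser at hand, and the investigator should work on a copy of the file since the running browser locks the live database.

```python
import sqlite3

def visited_urls(history_copy: str, limit: int = 20):
    """Return the most visited (url, title, visit_count) rows.

    `history_copy` is a copy of a Chromium-style History file; its
    `urls` table keeps one row per address the user visited.
    """
    con = sqlite3.connect(history_copy)
    try:
        query = ("SELECT url, title, visit_count FROM urls "
                 "ORDER BY visit_count DESC LIMIT ?")
        return con.execute(query, (limit,)).fetchall()
    finally:
        con.close()
```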
B. Investigations in XaaS Environments

Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They no longer have the option of seizing physical data storage. Data and processes of the customer are dispersed over an undisclosed number of virtual instances, applications and network elements. Hence, it is questionable whether preliminary findings of the computer forensic community in the field of digital forensics have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly will be taken into consideration. We also suggest potential solutions to the mentioned problems.

1) SaaS Environments: Especially in the SaaS model, the customer does not obtain any control of the underlying operating infrastructure such as the network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited user-specific application configuration settings can be controlled, contributing to the evidence which can be extracted from the client (see section IV-A3). In a lot of cases this forces the investigator to rely on high-level logs which are eventually provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidents.

a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it poses a huge problem specifically for SaaS-based applications: current globally acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately, in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information have been accessed by the adversary. For the victim, this situation can have tremendous impact: if sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP, e.g. due to storage reasons. The customer has no ability to prove otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10].

Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information such as access, error and event logs that could improve their situation in case of an investigation.
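No real CSP exposes such a forensic interface today, so the following sketch is purely hypothetical: it only illustrates what querying the proposed access/error/event log API could look like from the customer's side. The endpoint, query parameters and token are invented for illustration.

```python
import json
import urllib.request

# Hypothetical endpoint: stands in for the forensic API a CSP could offer.
BASE = "https://csp.example.com/forensics/v1/logs"

def fetch_logs(token: str, kind: str = "access", since: str = "2011-01-01"):
    """Pull access/error/event logs from the (assumed) CSP interface."""
    req = urllib.request.Request(
        f"{BASE}?type={kind}&since={since}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # expected: a list of log records
```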
In addition to such server-side interfaces, due to the limited ability to receive forensic information from the server and to prove the integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.

2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and, except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer theoretically obtains the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim that this transfer is encrypted, but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries who can compromise the application during runtime should not be able to alter these log files afterwards.

Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be monitored by push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].

3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial not only for recovering from an incident. Forensic investigations also gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is caused by the ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before it is transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP.
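The signing-and-encrypting of log data proposed for PaaS and IaaS above might look roughly like the sketch below. It is an assumption-laden stand-in, not the paper's implementation: an HMAC takes the place of a full digital signature, the third-party `cryptography` package supplies Fernet encryption, and both keys are presumed to be provisioned before any incident occurs.

```python
import hmac
import hashlib
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def protect_entry(entry: bytes, mac_key: bytes, enc_key: bytes) -> bytes:
    """MAC a log entry, then encrypt MAC and entry for transport.

    The receiving log server, holding the same keys, decrypts and
    recomputes the MAC; a mismatch reveals tampering on the way.
    """
    tag = hmac.new(mac_key, entry, hashlib.sha256).hexdigest().encode()
    return Fernet(enc_key).encrypt(tag + b"|" + entry)

# Keys must exist before an incident, e.g.: enc_key = Fernet.generate_key()
```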
The CSP controls the hypervisor, which is e.g. responsible for enforcing hardware boundaries and routing hardware requests among different VM. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how a customer's VM communicates with the hardware and theoretically can intervene in processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM, leading to the leakage of the secret key. Although this risk can be disregarded in most cases, the impact on the security of high-security environments is tremendous.

a) Snapshot Analysis: Traditional forensics expects target machines to be powered down to collect an image (dead virtual instance). This situation completely changed with the advent of snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned with one click, including the running system's memory. Due to the invention of snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This behavior is especially important for scenarios in which a downtime of a system is not feasible or practical due to existing SLA. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances are becoming more common, providing evidence data that…
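A snapshot as described in IV-B3a can also be triggered programmatically. The following sketch assumes a QEMU/KVM host with the libvirt-python bindings installed, which is only one of the hypervisors the text names; domain and snapshot names are placeholders.

```python
import libvirt  # third-party: pip install libvirt-python

def freeze_vm(name: str, snap: str = "forensic-snap") -> None:
    """Snapshot a running domain, memory state included, without downtime."""
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(name)
        xml = f"<domainsnapshot><name>{snap}</name></domainsnapshot>"
        # For a running domain an internal snapshot also captures RAM.
        dom.snapshotCreateXML(xml, 0)
    finally:
        conn.close()
```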

Big Data Literature Review (English Version)
The Development and Tendency of Big Data

Abstract: "Big Data" is the most popular IT word after "Internet of Things" and "Cloud Computing". From the source, development, status quo and tendency of big data, we can understand every aspect of it. Big data is one of the most important technologies around the world, and every country has its own way of developing the technology.

Key words: big data; IT; technology

1 The source of big data

Although the famous futurist Toffler proposed the concept of "Big Data" as early as 1980, for a long time it did not get enough attention, because the IT industry and the use of information resources were still at a primary stage [1].

2 The development of big data

It was not until the financial crisis in 2008 that IBM (a multi-national corporation in the IT industry) proposed the concept of the "Smart City" and vigorously promoted the Internet of Things and cloud computing, so that information data grew massively while the need for the technology became very urgent. Under this condition, some American data processing companies focused on developing large-scale concurrent processing systems; "Big Data" technology then became available sooner, and the Hadoop massive-data concurrent processing system received wide attention. Since 2010, IT giants have brought their own products to the big data area. Big companies such as EMC, HP, IBM and Microsoft have all acquired other manufacturers related to big data in order to achieve technical integration [1]. Based on this, we can see how important a big data strategy is. The development of big data owes much to some big IT companies such as Google, Amazon, China Mobile and Alibaba, because they need an optimized way to store and analyze data. Besides, there are also demands from health systems, geospatial remote sensing and digital media [2].

3 The status quo of big data

Nowadays America is in the lead in big data technology and market application. The US federal government announced a "Big Data Research and Development" plan in March 2012, which involved six federal government departments - the National Science Foundation, the Health Research Institute, the Department of Energy, the Department of Defense, the Advanced Research Projects Agency and the Geological Survey - in order to improve the ability to extract information and viewpoints from big data [1]. Thus it can speed up scientific and engineering discovery, and it is a major move to push research institutions to make innovations.

The federal government's decision to raise big data development to a strategic level has had a big impact on every country. At present, many big European institutions are still at the primary stage of using big data and seriously lack big data technology; most improvements and technology of big data come from America, so Europe faces challenges in keeping pace with the development of big data. However, the financial service industry, especially investment banking in London, is one of the earliest adopters in Europe: its experiments and technology are as good as those of the giant institutions of America, and investment in big data has been maintained at a promising level. In January 2013, the British government announced that 1.89 million pounds would be invested in big data and energy-saving computing technology for earth observation and health care [3].

The Japanese government has also taken up the challenge of a big data strategy in good time.
In July 2013, Japan's communications ministry proposed a comprehensive strategy called "Energy ICT of Japan" which focused on big data application. In June 2013, the Abe cabinet formally announced the new IT strategy, "The announcement of creating the most advanced IT country". This announcement comprehensively expounded that Japan's new national IT strategy centers on developing open public data and big data from 2013 to 2020 [4].

Big data has also drawn the attention of the Chinese government. The "Guiding Opinions of the State Council on Promoting the Healthy and Orderly Development of the Internet of Things" call for accelerating core technologies including sensor networks, intelligent terminals, big data processing, intelligent analysis and service integration. In December 2012, the National Development and Reform Commission added data analysis software to its special guide, and at the beginning of 2013 the Ministry of Science and Technology announced that big data research is one of the most important contents of the "973 Program" [1]. This program requests research on the expression, measurement and semantic understanding of multi-source heterogeneous data; research on modeling theory and computational models; the improvement of hardware and software system architectures through energy-optimal distributed storage and processing; and analysis of the relationships between complexity, computability and processing efficiency [1]. Above all, this can provide a theoretical foundation for setting up a scientific system of big data.

4 The tendency of big data

4.1 Seeing the future with big data

At the beginning of 2008, by mining and analyzing user-behavior data, Alibaba found that the total number of sellers was on a slippery slope and that procurement from Europe and America was also sliding. They accurately predicted the trend of world economic trade half a year ahead and so avoided the financial crisis [2]. Document [3] cites an example showing that a cholera outbreak could be predicted one year earlier by mining and analyzing data on storms, droughts and other natural disasters [3].

4.2 Great changes and business opportunities

As the value of big data gains recognition, giants of every industry are all spending more money on the big data industry, and great changes and business opportunities follow [4].

In the hardware industry, big data faces the challenges of management, storage and real-time analysis. Big data will have an important impact on the chip and storage industries; besides, some new industries will be created because of big data [4].

In the software and service area, the urgent demand for fast data processing will bring a great boom to the data mining and business intelligence industries.

The hidden value of big data can create a lot of new companies, new products, new technologies and new projects [2].

4.3 Development directions of big data

The storage technology of big data was at first the relational database. Thanks to its canonical design, friendly query language and efficient handling of online transactions, the relational database dominated the market for a long time. However, its strict design pattern, its sacrifice of functionality to ensure consistency, and its poor extensibility have been exposed as problems in big data analysis. Then the NoSQL data storage model, and Bigtable proposed by Google, came into fashion [5].

Big data analysis technology using the MapReduce framework proposed by Google is applied to deal with large-scale concurrent batch processing. Using a file system to store unstructured data loses no functionality while gaining extensibility.
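The MapReduce idea referred to above can be reduced to a toy word-count, its canonical example: a map step emits (key, value) pairs and a reduce step aggregates them per key. The sketch below only simulates the model in plain Python; a real Hadoop job would distribute both phases across many nodes.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document: str):
    """map: emit a (word, 1) pair per word; each split runs independently."""
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    """reduce: sum the values emitted under each key."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data needs new tools", "big data big value"]
pairs = chain.from_iterable(map_phase(d) for d in docs)
print(reduce_phase(pairs))  # {'big': 3, 'data': 2, 'needs': 1, ...}
```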
Later, big data analysis platforms appeared, such as HAVEn proposed by HP and FusionInsight proposed by Huawei. Beyond doubt, this situation will continue, and new technologies and measures will come out, such as next-generation data warehouses, Hadoop distributions and so on [6].

Conclusion

In this paper we analyzed the development and tendency of big data. Based on this, we know that big data is still at a primary stage and there are many problems that need to be dealt with. But the commercial value and market value of big data set the direction of development for the information age.

[1] Li Chunwei, Development Report of China's E-Commerce Enterprises, Beijing, 2013, pp. 268-270.
[2] Li Fen, Zhu Zhixiang, Liu Shenghui, The development status and the problems of large data, Journal of Xi'an University of Posts and Telecommunications, vol. 18, pp. 102-103, Sep. 2013.
[3] Kira Radinsky, Eric Horvitz, Mining the Web to Predict Future Events, in Proceedings of the 6th ACM International Conference on Web Search and Data Mining (WSDM 2013), New York: Association for Computing Machinery, 2013, pp. 255-264.
[4] Chapman A, Allen M D, Blaustein B, It's About the Data: Provenance as a Tool for Assessing Data Fitness, in Proc. of the 4th USENIX Workshop on the Theory and Practice of Provenance, Berkeley, CA: USENIX Association, 2012: 8.
[5] Li Ruiqin, Zheng Jianguo, Big Data Research: Status quo, Problems and Tendency, Network Application, Shanghai, 1994, pp. 107-108.
[6] Meng Xiaofeng, Wang Huiju, Du Xiaoyong, Big Data Analysis: Competition and Survival of RDBMS and MapReduce, Journal of Software, 2012, 23(1): 32-45.

Computer Science B/S Architecture: Chinese-English Literature
Foreign-language translation (English):

To develop a Web application with ASP, the architecture of the Web application must first be established. Two architectures are now in frequent use: the C/S (Client/Server) architecture and the B/S (Browser/Server) architecture.

Client/Server: the two-tier C/S architecture

The Client/Server model is a sound software architecture and one of the best application patterns for networks. Technically, it is a logical concept: an application is decomposed into many different tasks which together complete the function of the entire application. On the hosts of a network, resources (hardware, software and data) are distributed unevenly. Under the client/server structure, a client machine without a resource sends a request to the server that owns it and obtains the resource; this accommodates the uneven distribution of resources in the network. With this structure, various computers can be combined to work cooperatively, each doing what it does best, realizing rightsizing and downsizing of computer systems.

A network application is mostly divided into two parts: the part that supports shared resources and functions for many users is realized by the server, while the part that faces each individual user is realized by the client. That is, the client usually carries out the foreground functions, realizing human-machine interaction through the user interface or running the user's specific application programs, while the server carries out the background functions, managing shared resources and accepting and answering the requests of outside users. A single computer can play a double role: at one moment it acts as a server, at another it becomes a client.

In client/server computing, the side that offers a service is called the server, and the side that requests a service is called the client. To be able to offer services, the server side must have certain hardware and the corresponding server software; likewise, the client side must have certain hardware and the corresponding client software. There must be a protocol between the server and the client, and the two sides communicate according to this protocol.

When the client/server model is applied to Internet services, the relation between client and server is not fixed. Some Internet nodes offer services on the one hand and obtain services from other nodes on the other; even within one dialogue, the roles may swap. In file transfer, for example, the side offering files is called the server and the side fetching files is called the client: when using the get or mget command to fetch files from another node, one's own machine is the client; when using the put or mput command to send files to another node, one's own machine can be regarded as the server.
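The request/response relation between client and server described above can be boiled down to a few lines of socket code. This is a minimal sketch, not part of the original text: the server side owns the "resource" and answers requests; host, port and message contents are placeholders.

```python
import socket
import time
from threading import Thread

def server(port: int = 9009) -> None:
    """Server side: owns the resource, accepts a request, answers it."""
    with socket.create_server(("127.0.0.1", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"result for: " + conn.recv(1024))

def client(port: int = 9009) -> bytes:
    """Client side: sends a request and waits for the server's reply."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(b"query")
        return sock.recv(1024)

Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening
print(client())  # b'result for: query'
```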
The multi-tier client/server structure. With the development of enterprise applications, a new multi-tier architecture has recently arisen. It divides the former client side in two: a client application and a server application. The client application keeps part of the original client program; the other part is transferred to the server application. The new client application is responsible for the user interface and simple business logic, while the new server application hosts the core, frequently changing business logic. The structure thus becomes a new (client application + server application)/server structure. (Figure: the multi-tier structure.) This structure solves the scalability problem of traditional client/server: it reduces the business logic on the client and lowers the client's hardware requirements. At the same time, because much of the business logic is concentrated on a single application server, the maintenance work of the application system is also concentrated, eliminating the software-distribution problem of the traditional client/server structure. This structure is called the B/S architecture.

Browser/server: the B/S structure. In essence, browser/server is also a kind of client/server structure: it is a special three-tier case of client/server, evolved from the traditional two-tier structure and applied on the Web. In a browser/server system, a user can send requests through a browser to many servers spread across the network. The B/S structure simplifies the client's work to the greatest extent: only a small amount of client software needs to be installed and configured, and the server bears more of the work; database access and application execution are completed on the server.

Under the three-tier browser/server architecture, the presentation layer, the function layer (business logic) and the data layer are cut into three relatively independent units.

The first tier, the presentation layer: the Web browser. The presentation layer contains the system's display logic and is located at the client. Its task is to send a service request through the Web browser to a Web server on the network; after the user's identity is verified, the Web server delivers the needed pages to the client by HTTP, and the client displays the received pages in the browser.

The second tier, the function layer: a Web server with application-extension programs. The function layer contains the system's transaction-processing logic and is located at the Web server end. Its task is to accept a user's request, run the corresponding extension application, connect to the database, submit data-processing requests to the database server by means such as SQL, wait for the database server to return the results, and then deliver those results back to the client.

The third tier, the data layer: the database server. The data layer contains the system's data-processing logic and is located at the database server end. Its task is to accept the Web server's requests against the database (queries, modifications, updates and so on) and submit the results of the operations to the Web server.

Careful analysis shows that the three-tier browser/server architecture splits the transaction-processing module of the two-tier client/server structure out of the client's tasks into a separately composed tier. The client no longer bears that pressure and is greatly relieved, and the load is distributed in a balanced way to the Web server; thus the original two-tier client/server structure becomes the three-tier browser/server structure. (Figure: the three-tier architecture.)

This structure not only liberates the client from its heavy burden and from continuously rising performance requirements, it also frees maintenance staff from heavy upgrade work. Since the transaction logic has been assigned to the function server, the client immediately becomes much "slimmer": it is no longer responsible for complex calculation, data access and other crucial transactions, but only for the presentation part. Maintenance staff no longer rush about between client machines, and can put their main energy into updating the programs on the function server. Between the layers of this three-tier structure, each layer is mutually independent: a change in any one layer does not affect the functions of the others. It changes the defects of the traditional two-tier client/server architecture at the foundation, and is a deep transformation of application-system architecture.
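To make the three tiers concrete, here is a minimal sketch of a function-layer component (ours, not from the original text): a Java servlet that receives the browser's HTTP request, queries the data layer over JDBC, and returns an HTML page. It assumes the javax.servlet API and a JDBC driver on the classpath; the connection string, credentials and table are hypothetical.

import java.io.IOException;
import java.io.PrintWriter;
import java.sql.*;
import javax.servlet.http.*;

// Function-layer sketch for a B/S system: the browser (presentation layer)
// sends an HTTP request; this servlet (function layer) runs the business
// logic and asks the database server (data layer) for the result via JDBC.
public class ProductListServlet extends HttpServlet {
    // Hypothetical connection string; in practice this comes from configuration.
    private static final String DB_URL = "jdbc:mysql://dbhost:3306/shop";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body><h1>Products</h1><ul>");
        try (Connection conn = DriverManager.getConnection(DB_URL, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM product")) {
            while (rs.next()) {
                out.println("<li>" + rs.getString("name") + "</li>");  // render each row
            }
        } catch (SQLException e) {
            out.println("<li>database error: " + e.getMessage() + "</li>");
        }
        out.println("</ul></body></html>");
    }
}

Note how the client stays "thin": all SQL and business logic live on the server, and the browser only renders the returned HTML.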
The contrast of the two architectures. Compared with the client/server architecture, the browser/server architecture not only has all of C/S's advantages but also unique advantages of its own:

Open standards: the standards adopted by client/server are uniform only within one department, and its applications are often special-purpose.

Lower development and maintenance costs: a client/server application must have special-purpose client software developed for it and deployed on every client machine; installation, configuration and upgrading all waste manpower and material resources. A browser/server application needs only a general-purpose browser at the client, and maintenance and upgrading are carried out at the server end, with no changes on the client, which greatly reduces the cost of development and maintenance.

Simple to use, with a friendly interface: a client/server user interface is decided by its client software, and every interface and method of use is different, so each newly deployed client/server system requires users to learn it from scratch, making it hard to use. The browser/server user interface is unified in the browser: the browser is easy to use and the interface is friendly, no other software has to be learned, and the user's learning problem is solved once and for all.

A slimmer client: the client/server client both displays and processes data, placing very high, "fat" requirements on the client machine. The browser/server client is no longer responsible for database access, complex data calculation and similar tasks; it only displays, giving full play to the server's powerful role and greatly reducing the requirements on the client, which becomes very "thin".

A flexible system: among the three parts of a client/server system, changing one module drags changes into the other modules, making the system very difficult to upgrade. The three modules of a browser/server system are relatively independent: when one changes, the others are unaffected, so the system is very easy to improve, and products from different manufacturers can be combined into a system with much better performance.

Guaranteed system safety: in a client/server system, because clients connect directly to the database server, users can very easily change the data on the server, and system safety cannot be guaranteed. A browser/server system adds a level of Web server between the client and the database server, so the two are no longer directly linked: the client cannot control the database directly, which efficiently prevents illegal intrusion.

The three-tier browser/server architecture has many advantages that the traditional client/server architecture lacks, and it combines closely with Internet/intranet technology; it is the trend of technical development, and it carries application systems into a brand-new era. For these reasons we chose the B/S architecture for the system developed here.

What are C/S and B/S? To understand how the technology developed from C/S to B/S, three questions must first be made clear.

(1) What is the C/S structure? C/S (client/server) is the familiar server-and-client architecture. Through it, the advantages of the hardware environment at both ends can be fully used, tasks are distributed reasonably between the client end and the server end, and the system's communication overhead is reduced. Most application software systems today take the two-tier client/server form. Current application software is developing toward distributed Web applications; Web applications and client/server applications can carry out the same transaction processing and share logic components across different modules, so both internal and external users can visit new and existing application systems, and new application systems can be extended through the logic in existing ones. This is the present direction of application-system development. The traditional C/S architecture adopts an open pattern, but this is openness only at the level of system development: in a specific application, both the client end and the server end still need specified software support. Because it cannot provide the genuinely open environment that users expect, C/S software must be developed in different editions for different operating systems; moreover, product renewal is very rapid, and it is nearly impossible to serve a hundred or more LAN users at the same time at a reasonable price. The cost is high and the efficiency low. The "law case statistics" management software of the Shanghai Chaolan company used by our institute is typical C/S management software.

(2) What is the B/S structure? B/S (browser/server) is the browser-and-server structure. It sprang up along with Internet technology as an improvement of, or a change to, the C/S structure. Under this structure, the user's working interface is realized through a WWW browser; very little of the transaction logic is realized at the front end (the browser), while the major transaction logic is realized at the server end, forming the so-called 3-tier structure. This greatly simplifies the client computer's load, lightens the workload and cost of system maintenance and upgrading, and reduces the user's total cost of ownership (TCO). With present technology, it is relatively easy for a LAN to establish a network application with the B/S structure, and a database application under the Internet/intranet pattern is also cheaper to own. It develops toward a single point of deployment: different people, in different places, with different access methods (LAN, WAN, Internet/intranet and so on) can visit and operate a common database; it can efficiently protect the data platform and manage access privileges, and the server database is also safe. On our institute's intranet and Internet, the "law case and official business management software" of the Beijing Dongfang Qingda company is B/S-structured management software: through a WWW browser, officers at every workstation in the LAN can carry out their work. Especially after the appearance of the cross-platform JAVA language, B/S management software has become even more convenient, quick and efficient.

(3) The mainstream technology of management software. The mainstream technology of management software, like management thinking, has gone through three periods of development. First, interface technology has passed through three distinct periods: from the DOS character interface of the last century, to the Windows graphical interface (the graphical user interface, GUI), and on to the browser interface. Second, today's browser interface on every computer is not only visual and easy to use; more importantly, all application software based on the browser platform shares the same style, so operator training demands are low and the software is highly operable and easy to recognize. Third, the platform architecture has likewise developed from the single-user systems of the past, through file/server (F/S) systems and client/server (C/S) systems, to today's browser/server (B/S) systems.

The comparison of C/S and B/S. C/S and B/S are the world's two mainstream technical configurations for developing application patterns. C/S was researched and developed earliest by the American company Borland; B/S was researched and developed by the American company Microsoft. Both technologies have now been mastered in many countries, and many domestic companies produce products with both C/S and B/S technology.
These two technologies each have their own market share and customer base. Every domestic enterprise claims that its own management software is powerful, advanced and convenient, able to lift its customer group; pundits wave their own flags and advertisements fly everywhere. As the saying goes, the benevolent see benevolence and the wise see wisdom.

The advantages and disadvantages of C/S-configured software.

(1) The application server's data load is comparatively light. The simplest C/S database application consists of two parts, the client application program and the database server program, which may be called the foreground program and the background program respectively. The machine running the database server program is also called the application server. Once the server program has started, it waits to respond to requests from client programs at any time. The client application runs on the user's own computer, the client machine; whenever any operation on the data in the database is needed, the client program seeks out the server program and sends it a request, and the server program replies according to its predefined rules and sends back the result. The application server's data load is therefore lighter.

(2) Data storage management is relatively transparent. In a database application, data storage management is carried out independently by the server program and the client application respectively, and the foreground application can follow its own rules. The data for various operations, whether known or unknown in advance, is implemented centrally in the server program: for instance, access privileges, rules against repeated serial numbers, and rules that only certain users may create orders. All of this is "transparent" to the end user working in the foreground program: users need not be interested in (and usually cannot interfere with) what happens behind the scenes, and can still complete all their work. In the client/server configuration, the foreground program is not very "thin"; the troublesome matters are delivered to the server and the network. In a C/S system, the database becomes a truly public, professionally competent repository which receives independent, specialized management.

(3) The disadvantage of the C/S configuration is high maintenance cost and large investment. First, with the C/S configuration, a proper database platform must be selected to realize the genuine "unification" of database data, so that the synchronization of data spread between two places is managed entirely by the database system. If two operators in different places must directly visit the same database and "real-time" data synchronization must be established, then a real-time communication connection must be built between the two places and the database servers at both must stay online; the network administrators must maintain and manage the clients as well as the servers. Such complex technical support and high up-front investment carry very high costs and a heavy maintenance burden. Secondly, traditional C/S-structured software must be developed in different editions for different operating systems, and because product renovation is very rapid, its high price and low efficiency no longer meet working needs. After the appearance of the cross-platform JAVA language, the B/S configuration impacted C/S even more vigorously, forming a threat and a challenge to it.

The advantages of B/S-configured software.

(1) Maintenance and upgrading are simple. Software systems are improved and upgraded more and more frequently, and products with the B/S configuration are obviously more convenient here. For a somewhat larger unit, if the system administrator must run back and forth among hundreds or even a thousand computers, the efficiency and workload can be imagined; but B/S-configured software needs only the server to be managed. All the clients are merely browsers and need no maintenance at all. No matter how large the user base is or how many branches there are, the maintenance and upgrading workload does not grow: all operations are aimed at the server. If needed, a dedicated connection to the server realizes remote maintenance, upgrading and sharing. So an ever "thinner" client and an ever "fatter" server is the mainstream direction of future informatization. In the future, software upgrading and maintenance will be easier and easier, and use will become simpler and simpler; the savings in manpower, material resources, time and cost are obvious, even astonishing. The revolutionary approach to maintenance and upgrading is a "thin" client and a "fat" server.

(2) Lower cost, more choice. Everyone knows that Windows rules nearly the whole world of desktop computers, with the browser as a standard fitting; on server operating systems, however, Windows is not in an absolutely dominant position. The current tendency is that application management software using the B/S configuration needs to be installed only on a Linux server, where security is high.
The choice of server operating system is therefore wide: whichever system is chosen, the majority of users can keep Windows as their desktop operating system without being affected. This has helped the most popular free operating system, Linux, to develop quickly; besides the operating system being free, the databases that link to it are also free, so this kind of choice is very popular. For example, many people visit the "Sina website" every day: having installed a browser is enough, and they need not know what operating system the Sina website's servers use. In fact most websites really do not use a Windows server operating system, while most user computers have Windows installed as the desktop operating system.

(3) The application server's data load is comparatively heavy. Since B/S-configured management software is installed at the server end, the network administrator only needs to manage the server. The main transaction logic of the user interface is realized entirely at the server end and reached through the WWW browser; very little transaction logic sits at the front end (the browser). All the clients have is a browser, and the network administrator need only do hardware maintenance. But the application server's data load is heavier, and once a problem such as a "server collapse" occurs, the consequences are unimaginable. Therefore many units keep a backup database server, ready for any eventuality.

Translation of the original: To develop a Web application with ASP, the architecture of the Web application must first be established.

Graduation Thesis Foreign Literature Translation

Student ID: 2009012771
Class of 2013 Undergraduate Thesis: Translation of English References

Performance and Scalability of Oracle VM Server Software Virtualization in a 64-bit Linux Environment
(Translation)

School (Department): Information Engineering
Major and year:
Student name:
Supervisor:
Co-supervisor:
Date of completion: June 2013

Performance and Scalability of Oracle VM Server Software Virtualization in a 64-bit Linux Environment
benefits; however, this has not been without its attendant problems and anomalies, such as performance tuning and erratic performance metrics, unresponsive virtualized systems, crashed virtualized servers, misconfigured virtual hosting platforms, amongst others. The focus of this research was the analysis of the performance of the Oracle VM server virtualization platform against that of the bare-metal server environment. The scalability and its support for high-volume transactions were also analyzed, using 30 and 50 active users for the performance evaluation. Swingbench and LMbench, two open-suite benchmark tools, were utilized in measuring performance. Scalability was also measured using Swingbench. Evidential results gathered from Swingbench revealed 4% and 8% overhead for 30 and 50 active users respectively in the performance evaluation of the Oracle database in a single Oracle VM. Correspondingly, performance metric

Cloud Computing: Foreign Literature with Translation

1. Introduction
Cloud computing is an Internet-based model of computing that delivers a variety of services through shared computing resources. As cloud computing has spread and found applications, many researchers have studied the field in depth. This document introduces a foreign-language paper on cloud computing and provides a corresponding translation.

2. Overview of the paper
Authors: Antonio Fernández Anta, Chryssis Georgiou, Evangelos Kranakis. Year of publication: 2019. The paper surveys the development and application of cloud computing. It introduces the basic concepts of cloud computing, including its characteristics, architecture and service models, as well as its challenges and prospects.

3. Research content
The survey covers the basic concepts and related technologies of cloud computing. It first presents the definition of cloud computing and compares it with traditional computing, examining its advantages and shortcomings in depth. It then describes the architecture of cloud computing, including cloud service providers, cloud service consumers and the basic components of cloud services. After the architecture, the paper presents the three service models of cloud computing: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Each service model is introduced in terms of its definition, characteristics and application cases, giving the reader a deeper understanding. In addition, the paper discusses the challenges of cloud computing, including security, privacy protection, performance and reliability, and explores its prospects and future directions.

4. Translation of the paper
"Cloud Computing: A Survey" is a comprehensive introduction to cloud computing. It explains in detail the definition, architecture and service models of cloud computing and discusses its advantages, shortcomings and challenges. It also offers predictions about the future development of cloud computing. For readers researching cloud computing and related fields, the paper is a good reference resource: it helps readers understand the basic concepts, architecture and service models of cloud computing, and guides them in thinking about the challenges cloud computing faces and how to respond to them.

5. Conclusion.

Hadoop English References

Hadoop is an open-source distributed computing platform. Built on the ideas of Google's MapReduce algorithm and the Google File System (GFS), it can process large-scale data sets. For anyone studying Hadoop, reading the English literature on it is essential. Some English references on Hadoop:

1. Apache Hadoop: A Framework for Running Applications on Large Clusters Built of Commodity Hardware. This paper describes the architecture of Hadoop and its components, including the Hadoop Distributed File System (HDFS) and MapReduce.
2. Hadoop MapReduce: Simplified Data Processing on Large Clusters. This paper provides an overview of the MapReduce programming model and how it can be used to process large data sets on clusters of commodity hardware.
3. Hadoop Distributed File System. This paper provides a detailed description of the Hadoop Distributed File System (HDFS), including its architecture, design goals, and implementation.
4. Hadoop Security Design. This paper describes the security features of Hadoop, including authentication, authorization, and auditing.
5. Hadoop Real World Solutions Cookbook. This book provides practical examples of how Hadoop can be used to solve real-world problems, including data processing, data warehousing, and machine learning.
6. Hadoop in Practice. This book provides practical guidance on how to use Hadoop to solve data analysis problems, including data cleaning, data modeling, and data visualization.
7. Hadoop: The Definitive Guide. This book provides a comprehensive overview of Hadoop and its components, including HDFS, MapReduce, and YARN. It also includes practical examples and best practices for using Hadoop.
8. Pro Hadoop. This book provides a deep dive into Hadoop and its ecosystem, including HDFS, MapReduce, YARN, and a variety of tools and frameworks for working with Hadoop.
9. Hadoop Operations. This book provides guidance on how to deploy, manage, and monitor Hadoop clusters in production environments, including best practices for performance tuning, troubleshooting, and security.

The references above cover every aspect of Hadoop and are very helpful for both learning and using it; a minimal MapReduce sketch follows.

Reference Foreign Literature for a Graduation-Project Data Visualization System

A graduation-project data visualization system is a comprehensive project touching several fields, so it needs references from many directions. Some potentially relevant foreign-language sources:

1. Data Visualization: A Handbook for Data Driven Design, by Isabel Meirelles. This handbook covers the fundamentals and techniques of data visualization, including data cleaning, data transformation and visual representation.
2. The Visual Display of Quantitative Information, by Edward R. Tufte. A classic book on data visualization that describes in detail how to use charts, graphics and tables to represent and present quantitative data.
3. Data Visualization: A Practical Introduction, by Jacqueline Peterson. A comprehensive guide to data visualization, from data cleaning and preparation through visual representation and interpretation.
4. Information Visualization: Perception for Design, by Collin F. Lynch. This book introduces the basic concepts and techniques of information visualization, including cognition, perception and visualization, along with some practical design techniques and tools.
5. Visualizing Data: Exploring and Explaining Data Through Tables, Charts, Maps, and More, by Andy Kirk. This book presents a range of data visualization methods and techniques, covering many kinds of charts, maps and graphics, with emphasis on the explanatory and communicative side of visualization.

In addition, one can consult academic journals and conference proceedings devoted to data visualization, such as IEEE Transactions on Visualization and Computer Graphics and the Proceedings of the IEEE Symposium on Information Visualization.

English Literature on Computing

Some English references on computing:

1. "Computer Science: The Discipline" by David Gries and Fred B. Schneider, published in 1993 in the journal Communications of the ACM.
2. "The Art of Computer Programming" by Donald E. Knuth, published in three volumes between 1968 and 1973.
3. "A Mathematical Theory of Communication" by Claude Shannon, published in 1948 in the Bell System Technical Journal.
4. "Operating Systems Design and Implementation" by Andrew S. Tanenbaum and Albert S. Woodhull, published in 1997.
5. "The Structure and Interpretation of Computer Programs" by Harold Abelson and Gerald Jay Sussman, published in 1984.
6. "Computer Networks" by Andrew S. Tanenbaum, published in 1981.
7. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, published in 1990.
8. "Foundations of Computer Science" by Alfred Aho and Jeffrey Ullman, published in 1992.
9. "Computer Architecture: A Quantitative Approach" by John L. Hennessy and David A. Patterson, first published in 1990.
10. "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig, first published in 1995.

Research on a Cloud Storage Service System: Chinese and English Foreign Literature
Undergraduate Graduation Design (Thesis): Chinese-English Translation
Source: Mehra P. The study of cloud storage service system [J]. Internet Computing, IEEE, 2016, 1(5): 10-19.

Original text:
The Study of a Cloud Storage Service System
Mehra P

Abstract
Cloud storage is a new concept that develops and extends the idea of cloud computing, so to understand cloud storage one must first understand cloud computing. Cloud computing is a kind of super-computing model based on the Internet: in remote data centers, tens of thousands of computers and servers are connected into a computing cloud. Cloud computing therefore lets you experience computing power of even ten trillion operations per second, enough to simulate nuclear explosions and to forecast climate change and market trends. Users access the data center through a computer, laptop or mobile phone and compute according to their own needs. With the accelerating development of the cloud computing concept, people began looking for a new home for massive amounts of information, and cloud storage emerged to wide attention and support. The concept of cloud storage is similar to that of cloud computing: it refers to a system that, through cluster applications, grid technology or distributed file systems, gathers a large number of storage devices of various types across a network and makes them work together through application software, jointly providing data storage and business access functions.
Keywords: cloud storage, cloud storage service system, HDFS

1 Introduction
The rise of the cloud is producing a significant change across the whole IT industry, from being centered on equipment and applications toward being centered on information, and this change affects both technology and business models. The biggest characteristics of the cloud are massive scale, high performance and high traffic, and low cost; and the biggest change is that providers move from selling tools to charging according to actual use, from selling products to selling services. It can therefore be said that cloud storage is not storage but a service. Cloud storage also has the following characteristics: strong extensibility, independence from any specific geographic location, a basis in business components, pay-per-use billing, and availability across different applications. The research content of this article is the study of a cloud storage service system based on HDFS. It aims to build such a system, solve the problem of storing an enterprise's massive data, reduce the cost of implementing a distributed file system, and promote the adoption of Hadoop technology. Cloud storage, widely discussed at present, is an extension and development of the concept of cloud computing: it integrates a large number of storage devices of different types across the network to provide data storage and business access functions. The Hadoop Distributed File System (HDFS) is the storage part of the underlying implementation of the open-source cloud computing software platform Hadoop. It offers high transmission rates and high fault tolerance and allows data in the file system to be accessed as streams, thereby addressing access speed and security and enabling the storage and management of massive data.

2 The cloud storage products of the major companies
2.1 Amazon's strategy
Amazon was among the first enterprises to launch cloud storage services. Its first cloud computing offering was Amazon Web Services (AWS), composed of four core components: Simple Queue Service, Simple Storage Service, Elastic Compute Cloud, and a fourth component that was still in beta. In August 2008, to strengthen its cloud storage strategy, Amazon added a "persistence" function to Elastic Compute Cloud (EC2): the vendor launched the Elastic Block Storage (EBS) product, claiming that it could provide storage and computing functions at the same time in the form of an Internet service.

2.2 The Nirvanix and CDNetworks strategy
The well-known cloud storage platform provider Nirvanix and the content delivery network service provider CDNetworks announced a new strategic partnership to provide the industry's only integrated platform for cloud storage and content delivery. Using its 63 content distribution nodes located around the world, users can store unlimited data on the Internet with good data protection and data security guarantees. The partnership gives CDNetworks the same capabilities as Nirvanix in cloud storage: not only can it safely store huge amounts of media content, it can rely on CDNetworks' data centers to deliver data anywhere in the world in real time. The two companies say that the partnership gives them better overall media delivery capability and helps users save 80 to 90 percent of the cost of building their own storage infrastructure.

2.3 Google's strategy
At this year's I/O developer conference, the company announced a cloud storage service called Google Storage to challenge Amazon's S3 cloud storage service. In its functional design, Google Storage follows Amazon S3, making it convenient for existing S3 users to switch to the Google Storage service. The service includes a REST API protocol and allows developers to offer data backup services with authentication through Google accounts. In addition, Google will provide outside developers with a Web user interface and data management tools.

2.4 EMC's strategy
EMC's cloud storage infrastructure solution is a policy-based management system. The services provided can create different classes of cloud storage capability: for example, it can create two copies of a file for non-paying customers and store them in different locations around the world, while creating additional backup copies for paying customers and providing them with higher reliability and faster access to their files anywhere in the world. On the software side, Atmos includes data services such as replication, data compression and data deduplication, turning cheap standard x86 servers with hard disks into hundreds of terabytes of storage space. EMC promises that the system automatically configures new storage and adapts to hardware failures, and it also allows users to manage the service and read data through Web service protocols. At present there are three versions of the Atmos system, with capacities of 120 TB, 240 TB and 360 TB respectively, all based on x86 servers and supporting gigabit or 10 Gb Ethernet connections.
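As an illustration of the kind of programmatic access these object-storage services expose, here is a minimal sketch using the AWS SDK for Java (v1-style API) to upload and retrieve an S3 object. The bucket name, key and file are hypothetical, and credentials are assumed to be configured in the environment; this is an example of the general pattern, not part of the paper.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import java.io.File;

// Minimal S3 usage sketch: store one file and read it back.
public class S3Demo {
    public static void main(String[] args) {
        // Picks up credentials and region from the environment or ~/.aws config.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        String bucket = "example-bucket";       // hypothetical bucket
        String key = "reports/2016/q1.pdf";     // hypothetical object key

        // Upload: the object becomes reachable from anywhere via the service API.
        s3.putObject(bucket, key, new File("q1.pdf"));

        // Download: stream the stored object back.
        try (S3Object obj = s3.getObject(bucket, key)) {
            System.out.println("content length = " + obj.getObjectMetadata().getContentLength());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}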
3 The development of cluster storage technology
The rise of cloud storage is upending the existing network storage architecture. Facing today's petabyte-scale mass storage requirements, traditional SAN or NAS hits bottlenecks in capacity and performance expansion. Its physical elements (such as the number of disk drives, the performance and memory of the attached servers, and the number of controllers) impose many functional limitations, such as the number of file systems supported and the number of snapshots or copies. Once a storage system hits such a bottleneck, users are constantly pushed to upgrade to a bigger storage system and add more management tools, which raises costs. The service model of cloud storage demands that the new storage architecture keep costs very low, and some existing high-end storage devices clearly cannot meet this need. Judging from the practice of Google, the company did not use a SAN architecture in its cloud computing environment, but a scalable distributed file system (GFS), a highly efficient cluster storage technology. GFS is a scalable distributed file system for large, distributed applications that access large amounts of data. It runs on ordinary PCs, yet it provides strong fault tolerance and can serve a large number of users with good overall performance. Cloud storage is not storage, but service. Like the WAN and the Internet, the "cloud" in cloud storage refers, for the user, not to a particular device but to a collection of many storage devices and servers. A user of cloud storage is not using one storage device but a data access service provided by the whole cloud storage system. So strictly speaking, cloud storage is not storage but a service, whose core is the combination of application software with storage devices: application software turns storage devices into storage services.

4 Analysis of the cloud storage system
Compared with a traditional storage device, cloud storage is not merely hardware but a complex system composed of many parts: network equipment, storage devices, servers, application software, public access interfaces, access networks and client programs. Each part takes the storage devices as its core and provides data storage and business access services through application software. The structural model of a cloud storage system consists of four layers.

(1) The storage layer. The storage layer is the most basic part of cloud storage. Storage devices may be Fibre Channel storage devices or other kinds of device. The storage devices of a cloud are often numerous and distributed across many different regions, connected to each other by wide-area networks, the Internet or Fibre Channel networks. Above the storage devices sits a unified storage management system, which realizes logical virtualization of storage, management of redundant links, and condition monitoring and fault maintenance of the hardware.

(2) The basic management layer. Basic management is the core of cloud storage and the part most difficult to realize. Through clustering, grid computing, distributed file systems and similar technologies, basic management makes the many storage devices in the cloud work together so that they can provide the same service externally, with better data access performance and a bigger, stronger content distribution system. Data encryption ensures that data in the cloud is not accessed by unauthorized users, while various data backup and disaster-recovery techniques and measures ensure that data in the cloud is not lost and that the cloud storage itself remains secure and stable.

(3) The application interface layer. The application interface layer is the most flexible part of cloud storage. Different cloud storage operators can provide different application service interfaces according to the actual type of business, delivering different application services: for example, a video surveillance platform, a network disk platform, or a remote data backup platform.

(4) The access layer. Any authorized user can log into the cloud storage system through a standard public application interface and enjoy the cloud storage service. Different cloud storage operators provide different access types and access methods.

Translation:
Research on a Cloud Storage Service System
Mehra P
Abstract: Cloud storage is a new concept extended and developed from the concept of cloud computing; therefore, to understand cloud storage one must first understand cloud computing.
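Since the paper's proposed service system is built on HDFS, the following is our own minimal sketch of the basic HDFS Java API pattern for writing a file into the distributed file system and streaming it back. The namenode address and paths are hypothetical; in a real deployment they come from the cluster's configuration files.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.nio.charset.StandardCharsets;

// Minimal HDFS client sketch: write a small file into the distributed file
// system, then read it back as a stream (HDFS is designed for streaming access).
public class HdfsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical namenode address; normally taken from core-site.xml.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);

        Path file = new Path("/demo/hello.txt");
        try (FSDataOutputStream out = fs.create(file, true /* overwrite */)) {
            out.write("hello, cloud storage\n".getBytes(StandardCharsets.UTF_8));
        }

        try (FSDataInputStream in = fs.open(file);
             BufferedReader reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());
        }

        fs.close();
    }
}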

Cloud Computing: English-for-Specialty Paper

Cloud Computing
Chen Peng
School of Information Science and Technology, YanChen Normal College, YanChen, China
Email: chenpengyls@163.com

Abstract--Cloud computing is a new computing model developed on the basis of grid computing. We introduce the development history of cloud computing and its application situation, and give a new definition. Taking Google's cloud computing techniques as an example, we summarize the key techniques used in cloud computing, such as the data storage technology (Google File System), the data management technology (BigTable), and the programming model and task scheduling model (MapReduce). We analyze the differences among cloud computing, grid computing and traditional supercomputing, and point out the prospects for the further development of cloud computing.

Key words: cloud computing; data storage; data management; programming model

I.

English Literature and Translation (Computer Science)

NET-BASED TASK MANAGEMENT SYSTEM
Hector Garcia-Molina, Jeffrey D. Ullman, Jennifer Widom

ABSTRACT
In a net-based collaborative design environment, design resources become more and more varied and complex. Besides common information management systems, design resources can be organized around design activities. A set of activities and resources linked by logical relations can form a task. A task has at least one objective and can be broken down into smaller ones, so a design project can be separated into many subtasks forming a hierarchical structure. The Task Management System (TMS) is designed to break down these tasks and assign certain resources to its task nodes. As a result of the decomposition, all design resources and activities can be managed via this system.

KEY WORDS: Collaborative Design, Task Management System (TMS), Task Decomposition, Information Management System

1 Introduction
Along with rapidly rising demand for advanced design methods, more and more design tools have appeared to support new design methods and forms. Design in a web environment with multiple partners involved requires a more powerful and efficient management system. Design partners can be located anywhere on the net within their own organizations; they may be mutually independent experts or teams of tens of employees. This article discusses a task management system (TMS) which manages design activities and resources by breaking down design objectives and re-organizing design resources in connection with the activities. Compared with common information management systems (IMS), such as product data management systems and document management systems, TMS can manage the whole design process. It has two tiers, which make its structure much more flexible. The lower tier consists of traditional common IMSs, and the upper one fulfills logical activity management by controlling a tree-like structure, allocating design resources and making decisions about how to carry out a design project. Its functioning paradigm varies between projects depending on the project's scale and purpose. As a result of this structure, TMS can separate its data model from its logic model, which brings structural optimization and efficiency improvement, especially in large-scale projects.

2 Task Management in a Net-Based Collaborative Design Environment
2.1 Evolution of the Design Environment
During a net-based collaborative design process, designers extend their working environment from a single PC desktop to the LAN, and even to the WAN. Each design partner can be a single expert or a combination of many teams across several subjects, even if they are geographically far from each other. In the net-based collaborative design environment, people at every terminal of the net can exchange information interactively with each other and send data to authorized roles via their design tools. The CoDesign Space is such an environment: it provides a set of these tools to help design partners communicate and obtain design information. CoDesign Space aims at improving the efficiency of collaborative work, making enterprises more sensitive to markets and optimizing the configuration of resources.

2.2 Management of Resources and Activities in a Net-Based Collaborative Environment
The expansion of the design environment has also raised a new problem: how to organize the resources and design activities in that environment. As the number of design partners increases, resources increase in direct proportion,
but the relations between resources increase quadratically. Organizing these resources and their relations requires an integrated management system which can recognize them and provide them to designers when they are needed.

One solution is a special information management system (IMS). An IMS can provide databases, file systems and in/out interfaces to manage a given resource. For example, there are several IMS tools in CoDesign Space, such as the Product Data Management System and the Document Management System, and each provides the special information design users want. But the structure of design activities is much more complicated than these IMSs can manage, because even a simple design project may involve different design resources such as documents, drafts and equipment. Beyond product data and documents, design activities also need organizational support within design processes. This article puts forward a new design system which attempts to integrate different resources into the related design activities: the task management system (TMS).

3 Task Breakdown Model
3.1 Basis of Task Breakdown
When people set out to accomplish a project, they usually separate it into a sequence of tasks and finish them one by one. Each design project can be regarded as an aggregate of activities, roles and data. Here we define a task as a set of activities and resources that has at least one objective. Because large tasks can be separated into small ones, if we separate a project target into several lower-level objectives, we say the project is broken down into subtasks, each objective mapping to a subtask. Obviously, if every subtask is accomplished, the project is surely finished, so TMS integrates design activities and resources by planning these tasks.

Net-based collaborative design mostly aims at product development. Project managers (PMs) assign subtasks to designers or design teams who may be located in other cities. The designers and teams execute their own tasks under the constraints defined by the PM and negotiated with each other via the collaborative design environment. The designers and teams are thus independent collaborative partners with loosely coupled relationships, driven together only by their design tasks. After the PM has finished decomposing the project, each designer or team leader who has been assigned a subtask becomes a lower-level PM of his own task, and he can do to his task what his PM did to him: break it down further and re-assign its parts.

So we put forward two rules for task breakdown in a net-based environment: loose coupling and objective-driven decomposition. Loose coupling means the least possible relationship between two tasks. When two subtasks are coupled too tightly, the need for communication between their designers increases greatly; too much communication not only wastes time and reduces efficiency, but also introduces errors, and managing the project process becomes much more difficult than usual. On the other hand, every task has its own objective. From the viewpoint of the PM of a parent task, each subtask can be a black box whose internal execution is unknown: the PM is concerned only with the results and constraints of these subtasks, and may never be concerned with what happens inside them.

3.2 Task Breakdown Method
According to the above basis, a project can be separated into several subtasks, and when this separation continues, it is finally decomposed into a task tree.
Except for the root of the tree, which is the project, all leaves and branches are subtasks. Since a design project can be separated into a task tree, all its resources can be attached to the tree according to their relationships. For example, a Small-Sized-Satellite Design (3SD) project can be broken down into two design objectives, Satellite Hardware Design (SHD) and Satellite Software Exploitation (SSE). It also has two teams, design team A and design team B, which we regard as design resources. When A is assigned to SSE and B to SHD, we break down the project as shown in Fig. 1. Other resources in a project are managed in the same way. So when we define a collaborative design project's task model, we should first state the project's targets, including functional goals, performance goals, quality goals and so on. Then we can confirm how to execute the project, and go on to break it down. The project can be separated into two or more subtasks, since there are at least two partners in a collaborative project. Alternatively, for more complex projects, we can separate the project into stepwise tasks that have time-sequence relationships, and then break down the stepwise tasks according to their phase goals.

There is another difficulty in executing a task breakdown. When a task is broken into several subtasks, it is not merely a simple summation of them; in most cases the subtasks have more complex relations. To solve this problem we use constraints, of two kinds: time sequence constraints (TSC) and logic constraints (LC). A time sequence constraint defines the time relationships among subtasks and has four types, FF, FS, SF and SS, where F means finish and S means start. If we say Ta-Tb is FS with a lag of four days, it means Tb should start no later than four days after Ta is finished. The logic constraint is more complicated: it defines logical relationships among multiple tasks. Here is an example: task TA is separated into three subtasks, Ta, Tb and Tc, with two more rules. Tb and Tc cannot be executed until Ta is finished, and Tb and Tc cannot both be executed: if Tb is executed, Tc must not be, and vice versa, depending on the result of Ta. So we say Tb and Tc have a logic constraint. After finishing the breakdown, we get a task tree as Fig. 2 illustrates; a small data-structure sketch of such a tree follows.
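The paper gives no code for its task tree, so the following is our own minimal Java sketch of the structure it describes: task nodes carrying an objective, attached resources, subtasks, and FS-style time sequence constraints. All class and field names are hypothetical.

import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the task tree described above: each task has an objective,
// attached resources, subtasks, and time-sequence constraints to other tasks.
public class Task {
    enum TscType { FF, FS, SF, SS }   // finish/start combinations

    // A time sequence constraint: e.g. FS with lagDays = 4 means "the successor
    // starts no later than 4 days after this task finishes".
    record Tsc(Task successor, TscType type, int lagDays) {}

    final String name;
    final String objective;
    final List<String> resources = new ArrayList<>();  // e.g. "design team A"
    final List<Task> subtasks = new ArrayList<>();
    final List<Tsc> constraints = new ArrayList<>();

    Task(String name, String objective) {
        this.name = name;
        this.objective = objective;
    }

    Task addSubtask(Task t) { subtasks.add(t); return t; }

    // Depth-first walk: the project is the root; leaves and branches are subtasks.
    void print(String indent) {
        System.out.println(indent + name + " -- " + objective + " " + resources);
        for (Task t : subtasks) t.print(indent + "  ");
    }

    public static void main(String[] args) {
        Task project = new Task("3SD", "Small-Sized-Satellite Design");
        Task shd = project.addSubtask(new Task("SHD", "Satellite Hardware Design"));
        Task sse = project.addSubtask(new Task("SSE", "Satellite Software Exploitation"));
        shd.resources.add("design team B");
        sse.resources.add("design team A");
        shd.constraints.add(new Tsc(sse, TscType.FS, 4));  // example FS constraint
        project.print("");
    }
}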
4 TMS Realization
4.1 TMS Structure
Following this discussion of the task tree model and the basis of task breakdown, a Task Management System (TMS) can be developed on top of CoDesign Space using the Java language, JSP technology and Microsoft SQL Server 2000. The task management system's structure is shown in Fig. 3. TMS has four main modules, namely Task Breakdown, Role Management, Statistics and Query, and Data Integration. The Task Breakdown module helps users work out the task tree. The Role Management module performs the authentication and authorization of access control. The Statistics and Query module is an extra tool for users to find more information about their tasks. The Data Integration module provides the in/out interface between TMS and its peripheral environment.

4.2 Key Points in System Realization
4.2.1 Integration with CoDesign Space
CoDesign Space is an integrated information management system which stores, shares and processes design data and provides a series of tools to support users. These tools can share all information in the database because they have a universal data model, which is defined in an XML (Extensible Markup Language) file and has a hierarchical structure. Based on this XML structure, the TMS data model definition is organized as follows.

<?xml version="1.0" encoding="UTF-8"?>
<!-- comment: common resource definitions above; the following are task definitions -->
<!ELEMENT ProductProcessResource (Prcses?, History?, AsBuiltProduct*, ItemsObj?, Changes?, ManufacturerParts?, SupplierParts?, AttachmentsObj?, Contacts?, PartLibrary?, AdditionalAttributes*)>
<!ELEMENT Prcses (Prcs+)>
<!ELEMENT Prcs (Prcses?, PrcsNotes?, PrcsArc*, Contacts?, AdditionalAttributes*, Attachments?)>
<!ELEMENT PrcsArc EMPTY>
<!ELEMENT PrcsNotes (PrcsNote*)>
<!ELEMENT PrcsNote EMPTY>

Notes: the element "Prcs" is a task node object, and "Prcses" is a task set object which contains subtask objects and belongs to a higher-level task object. One task object can have no more than one "Prcses" object; according to this definition, "Prcs" objects are organized in a tree formation. The other objects are resources, such as the task link object ("PrcsArc"), task notes ("PrcsNotes") and task documents ("Attachments"). These resources are shared in the CoDesign database.

Source: Computer Intelligence Research [J], Vol. 47, 2007: 647-703

Net-Based Task Management System
Abstract: In a networked collaborative design environment, design resources become increasingly varied and complex.

Sample Abstracts for English Literature on Computers

The full text contains three samples for the reader's reference.

Sample 1
Title: A Study on the Impact of Computers on Society
Abstract: Computers have become an integral part of modern society, with their influence pervading all aspects of human life. This study aims to explore the impact of computers on society, focusing on the social, economic, and cultural aspects. The research is based on a comprehensive review of existing literature and empirical studies that have investigated the relationship between computers and society.

The study finds that computers have revolutionized communication and information exchange, leading to a more connected and globalized world. The internet, in particular, has transformed the way people interact, work, and socialize. The rise of social media and online platforms has created new channels for communication and expression, but also raised concerns about privacy and data security.

Economically, computers have changed the nature of work and productivity, with automation and artificial intelligence increasingly taking over routine tasks. While this has led to increased efficiencies and innovation, it has also raised questions about job displacement and income inequality. The gig economy and freelance work are becoming more common, as people adapt to the changing landscape of labor.

Culturally, computers have influenced the way people consume media, create art, and express themselves. Digital technologies have democratized access to information and creative tools, but also raised issues of authenticity and copyright. The prevalence of online platforms for entertainment and social interaction has reshaped cultural practices and norms.

In conclusion, computers have had a profound impact on society, shaping the way people communicate, work, and think. While the benefits of technology are clear, it is important to consider the social and ethical implications of its widespread adoption. More research is needed to understand the long-term effects of computers on society and to ensure that technology serves the greater good.

Sample 2
Title: Writing a Research Paper on Computers
Abstract: This paper discusses the process of writing a research paper on computers. It provides a step-by-step guide on how to effectively research, organize, and write a paper on the topic of computers. The paper outlines the importance of choosing a specific research question, conducting thorough research, and citing sources properly. It also explains how to structure a research paper on computers, including the introduction, literature review, methodology, results, discussion, and conclusion sections. Additionally, the paper provides tips on how to write clearly and concisely, avoid plagiarism, and revise and edit the paper for clarity and coherence. Overall, this paper serves as a comprehensive guide for students and researchers looking to write a research paper on computers.

Sample 3
Title: A Study on Computer Science: Writing Research Papers
Abstract: Computer science is a rapidly growing field with a wide array of topics and subfields for researchers to explore. Writing research papers in computer science requires a combination of technical expertise and strong writing skills. This paper provides an overview of the key components of a research paper in computer science, along with useful tips and strategies for successful writing.

The first step in writing a research paper in computer science is to select a topic that is both interesting and relevant to current advancements in the field.
The paper should clearly define the research question or problem to be addressed, along with the objectives and methodology of the study. It is important to review existing literature on the topic to ensure that the research is original and contributes to the existing body of knowledge.

The next step is to organize the paper into logical sections, including an introduction, literature review, methodology, results, discussion, and conclusion. Each section should be clearly structured and well-written, with appropriate citations and references to support the claims made in the paper. It is important to use a clear and concise writing style, avoiding unnecessary jargon and technical terms that may confuse the reader.

In addition to the technical content of the paper, the writing style and presentation are also important factors to consider. The paper should be well-organized, with a logical flow of ideas and arguments. Charts, tables, and figures can be used to illustrate key points and data, but should be used sparingly and effectively.

Finally, the paper should be carefully proofread and edited to ensure that it is free of errors in grammar, punctuation, and spelling. It is also important to consider the formatting and citation style required by the target journal or conference. By following these guidelines and tips, researchers can improve the quality of their research papers in computer science and increase their chances of publication and impact in the field.


Cloud Computing: The Emerging Technology of Computing

Abstract: Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). The name cloud computing was inspired by the cloud symbol often used to represent the Internet in flowcharts and diagrams. It is the fifth generation of computing, after mainframe computing, personal computing, client-server computing and Web computing. This paper discusses cloud computing.

Keywords: cloud computing, IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service)

1 Introduction
A cloud service has three distinct characteristics that differentiate it from traditional hosting: it is sold on demand, typically billed by the minute or the hour; it is elastic, so a user can have as much or as little of the service as they want at any given time; and it is fully managed by the provider (the consumer needs nothing but a personal computer and Internet access). Major innovations in virtualization, the development of distributed computing, the build-out of high-speed Internet access and the economic downturn have all accelerated interest in cloud computing.

A cloud can be private or public. A public cloud sells services to anyone on the Internet (currently, Amazon Web Services is the largest public cloud provider). A private cloud is a proprietary network or data center that supplies hosted services to a limited set of people. When a service provider uses public cloud resources to create its own private cloud, the result is called a virtual private cloud. Private or public, the goal of cloud computing is to provide convenient, scalable computing resources and IT services [1].

IaaS (Infrastructure as a Service), such as Amazon's Web services, provides virtual server instances with unique IP addresses and blocks of storage according to the customer's demand. Customers use the provider's API to start, stop, access and configure their virtual servers and storage. In the enterprise, cloud computing can thus be purchased exactly as needed.
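As a hedged illustration of what "using the provider's API to start and stop virtual servers" can look like, here is a minimal Java sketch against a hypothetical IaaS REST endpoint. The URL, paths, instance ID and token are invented for the example and do not belong to any real provider.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical IaaS control sketch: start and stop a virtual server instance
// through the provider's REST API. Endpoint, instance id and token are invented.
public class IaasDemo {
    private static final String API = "https://api.example-cloud.test/v1";
    private static final String TOKEN = "dummy-token";

    static void post(HttpClient client, String path) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(API + path))
                .header("Authorization", "Bearer " + TOKEN)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(path + " -> HTTP " + resp.statusCode());
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        post(client, "/instances/i-12345/start");  // boot the instance on demand
        post(client, "/instances/i-12345/stop");   // release it when no longer needed
    }
}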

PaaS (Platform as a Service) is defined as a set of software and product development tools hosted by the provider. Developers build applications on the underlying platform over the Internet. PaaS providers may deliver this through APIs, website portals, or gateway software installed on the customer's computer. ( ) and GoogleApps are both examples of PaaS. Developers should know that there are currently no standards for cloud interoperability or cloud data portability; some vendors may not allow software created by their customers to be moved off the provider's platform.

In the SaaS (Software as a Service) cloud model, the vendor supplies the hardware facilities and the software product, and interacts with the user through a front-end portal. SaaS is a very broad market: services can range from Web-based email to inventory control and database processing. Because the service provider hosts both the application and the data, the end user is free to use the service from anywhere.

2 The advantages of cloud computing
What advantages does cloud computing have? (a) Minimized capital expenditure; (b) location and device independence; (c) better utilization and energy efficiency; (d) very high scalability; (e) high computing power.

The answer, from the vendor's perspective: it is easier for application vendors to attract new customers. (a) It provides the lowest-cost way of delivering and supporting applications; (b) commodity server and storage hardware can be used; (c) data center operating costs can be reduced; (d) in one word: economics.

4云计算的体系结构云计算架构及其实现的定义非常强调UNIX哲学,具备开发人员必须遵循的一系列规则,确保云计算将很容易地实施,并且保证应用程序对用户的优势。

虽然有很多Unix哲学的定义,规则和原则,他们都有一个共同的信念:建立一个协同工作的事物。

通过UNIX哲学,设计云计算架构的开发商必须记住的是只有一个应用程序并且至少有一个输出。

云计算可能是由不同的阶段组成,但这些阶段是以实现在线申请一致的计算为目标。

数据中心和服务器农场提出了应用程序的需求。

可以说,许多硬件可以被用来支持一个进程,但这些都应该确保该应用程序有足够的后备设备计划。

云计算的应用程序也被认为是由安全性和性能监测组成的。

通过云计算架构的正确执行,应用程序将能够为用户提供7×24小时的服务。

5 Security in Cloud Computing

Security is one of the biggest concerns of businesses in any form. Whether a business is a small brick-and-mortar shop or a multi-million-dollar online venture, security measures should be implemented: exposing a company through security flaws invites elements with malicious intent. A single security breach could cost a business millions of dollars and might single-handedly shut it down.

Proper implementation of security measures is therefore highly recommended for cloud computing [3]. The mere fact that an application is delivered over the Internet makes it vulnerable to attack [3]. Even an application available only on a local area network (LAN) can be infiltrated from the outside, so placing an application on the Internet is always a security risk. This is the unique situation of cloud computing: its implementation may require millions of dollars in infrastructure and application development, yet it still remains exposed to different types of attack.

5.1 Protecting the Users

Above everything else, cloud computing, or any type of online application, should protect its users. Developers should ensure that user-related data cannot be mishandled or extracted by just anyone. There are two main ways to provide this protection: restrictive user access and certifications.

(a) Restrictive access can range from a simple username/password challenge to complicated CAPTCHA login forms, but cloud applications should not rely on these challenges alone. IP-specific access and user time-outs are some of the additional measures that should be implemented. The difficulty in restrictive user access lies in limiting each user's access privilege: every user has to be manually assigned a security clearance to ensure that access to different files is properly restricted.
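To show how these measures fit together, here is a hedged sketch using the Flask web framework; the framework choice, the IP allow-list, the time-out length and the secret key are all assumptions for illustration, not details from the paper.

# A minimal sketch of IP-specific access plus user time-outs (Flask).
# The allow-list, time-out and secret key are illustrative assumptions.
import time
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # placeholder value

ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}  # hypothetical allow-list
SESSION_TIMEOUT_SECONDS = 15 * 60               # assumed 15-minute time-out

@app.before_request
def enforce_access_rules():
    # IP-specific access: reject clients outside the allow-list.
    if request.remote_addr not in ALLOWED_IPS:
        abort(403)
    # User time-out: expire sessions that have been idle too long.
    now = time.time()
    if now - session.get("last_seen", now) > SESSION_TIMEOUT_SECONDS:
        session.clear()
        abort(401)
    session["last_seen"] = now

@app.route("/")
def index():
    return "Authorized."

In a production system the allow-list and security clearances would live in a directory service rather than in code, which is exactly the manual-assignment burden the paper describes.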

(b) Certifications are equally important for assuring users. Developers have to open their application to security specialists, or to companies that provide security certifications. This is one way of assuring users that the application has been fully tested against different types of attack. It is often a dilemma for cloud providers, since external security checks might expose company secrets, but this has to be accepted to ensure the security of their users.

5.2 Data Security

Aside from protecting users against different types of attack, the data itself should be protected; here both hardware and software have a part to play, and again a certification is highly desirable. The hardware side of cloud computing requires its own security considerations. The location of a data center should be selected not only for its proximity to controllers and intended users, but also for its security (and even secrecy) against external threats. The data center should be protected against different weather conditions, fire and even physical attacks that might destroy it.

With regard to the hardware that supports the application, certain manual controls have to be available for increased security. Among them is a manual shutdown to prevent further access to the information: even though the data is controlled by another application, it can still be infiltrated unless the system is shut down immediately.

5.3 Recovery and Investigation

Cloud computing security should not focus on prevention alone. Ample resources should also be devoted to recovery, in case misfortune really strikes. Even before disaster happens, plans must be in place to ensure that everyone works in unison towards recovery. These plans should not address software attacks alone; external disasters such as severe weather should have their own separate recovery plans. Once everything has been recovered, the provider and the company operating the application should investigate the cause of the problem. Through investigation, the conditions that led to the event can be understood, insecurities can be discovered, and legal action can even be taken if security was breached on purpose.

Security is one of the most difficult tasks to implement in cloud computing. It requires constant vigilance against different forms of attack, not only on the application side but also in the hardware components. An attack with catastrophic effects needs only one security flaw, so keeping things secure is a standing challenge for everyone involved.

6 Challenges in Cloud Computing

The main challenge for applications in cloud computing is the number of requests the application can handle [2]. Although data centers can be provisioned for this factor, an application that is not properly written will still hit a threshold. To deal with this concern, developers use metadata to enable personalized services for their users as well as data processing: through metadata, individualized requests can be recognized and properly handled. Metadata also helps protect uptime, since requests can be slowed down during heavy transaction load if the developer chooses to do so.
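As a hedged sketch of that slowdown mechanism, the example below throttles requests per client using request metadata (a client-ID header). The header name, window, limit and delay are assumptions for the illustration, not values from the paper.

# A minimal sketch of metadata-driven request throttling.
# Header name, window, limit and delay are illustrative assumptions.
import time
from collections import defaultdict, deque

MAX_REQUESTS = 10       # allowed requests per window (assumed)
WINDOW_SECONDS = 1.0    # sliding-window length (assumed)
SLOWDOWN_SECONDS = 0.5  # delay applied when over the limit (assumed)

recent = defaultdict(deque)  # client id -> timestamps of recent requests

def handle_request(metadata: dict) -> None:
    client = metadata.get("X-Client-Id", "anonymous")  # hypothetical header
    window = recent[client]
    now = time.monotonic()
    # Discard timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        # Over the limit: slow the request down instead of rejecting it.
        time.sleep(SLOWDOWN_SECONDS)
    window.append(time.monotonic())
    # ... process the request here ...

Slowing rather than rejecting keeps the service responsive to every client while protecting overall uptime, which matches the trade-off the paper describes.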

7 Future of Cloud Computing

Cloud computing may be a relatively new concept for some businesses and consumers, but even as some businesses are only starting to adopt it and realize its advantages, industry giants are already looking forward to its next big step. The future of cloud computing should be seriously considered by businesses in every industry: the possibility of almost any industry fully adopting cloud computing is slowly starting to materialize. If a business does not consider its future in cloud computing, the challenges of cloud computing may not be addressed and its advantages may not be fully harnessed.

References
[1] Toby Velte, Anthony Velte, Robert Elsenpeter. Cloud Computing: A Practical Approach. McGraw-Hill Education, 2009.
[2] Ronald Krutz, Russell Vines. Cloud Security: A Comprehensive Guide to Secure Cloud Computing. Wiley Publishing Inc., 2010.
[3] John Rittenhouse, James Ransome. Cloud Computing: Implementation, Management, and Security. CRC Press, 2010.
