The NeQuick model
An English Essay Applying the Toulmin Model
The Toulmin Model: A Versatile Approach to Effective Argumentation

The Toulmin Model, developed by British philosopher Stephen Toulmin, is a widely recognized framework for constructing and analyzing arguments. This model provides a structured approach to developing and presenting persuasive arguments, making it a valuable tool for writers, students, and professionals across various fields.

At its core, the Toulmin Model consists of six key elements: claim, data, warrant, backing, qualifier, and rebuttal. The claim is the central assertion or conclusion that the argument aims to support. The data is the evidence or facts used to support the claim. The warrant is the logical reasoning that connects the data to the claim, explaining why the data is relevant and sufficient. The backing provides additional support or justification for the warrant, strengthening the overall argument. The qualifier acknowledges the limitations or exceptions to the claim, while the rebuttal anticipates and addresses potential counterarguments.

By incorporating these elements, the Toulmin Model encourages writers to carefully consider the structure and logic of their arguments, ensuring that they are well-reasoned, comprehensive, and persuasive.

One of the key advantages of the Toulmin Model is its versatility. It can be applied to a wide range of argumentative contexts, from academic essays and research papers to business proposals and public speeches. Regardless of the specific topic or setting, the Toulmin Model provides a consistent framework for organizing and presenting arguments in a clear and effective manner.

In the academic context, the Toulmin Model is particularly useful for crafting persuasive research papers and essays. By following this model, students can develop a strong, well-supported thesis statement, and then use the remaining elements to build a comprehensive and logical argument.
This approach not only helps students organize their thoughts and ideas but also ensures that their writing is coherent, compelling, and responsive to potential counterarguments.

Moreover, the Toulmin Model can be applied across disciplines, from the humanities and social sciences to the natural and applied sciences. For instance, in a scientific research paper, the claim might be a hypothesis or a proposed solution to a problem, the data could be the experimental results or observations, and the warrant might be the underlying scientific principles or theories that explain the relationship between the data and the claim.

In the business world, the Toulmin Model can be a valuable tool for crafting effective presentations, proposals, and negotiations. By clearly articulating the claim, supporting it with relevant data and warrants, and anticipating potential objections or concerns, business professionals can enhance the persuasiveness and credibility of their arguments.

Beyond the academic and professional realms, the Toulmin Model can also be applied to everyday argumentative situations, such as discussions with friends, family, or colleagues. By understanding and applying its principles, individuals can improve their ability to communicate ideas effectively, engage in constructive dialogue, and reach mutually satisfactory resolutions.

In conclusion, the Toulmin Model is a powerful and versatile framework for developing and presenting persuasive arguments. By incorporating the key elements of claim, data, warrant, backing, qualifier, and rebuttal, writers, speakers, and thinkers can craft arguments that are well-reasoned, comprehensive, and responsive to diverse perspectives. Whether in academic, professional, or personal contexts, the Toulmin Model offers a structured approach to effective argumentation, empowering individuals to communicate their ideas with clarity, coherence, and conviction.
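The six elements described above can be captured in a small data structure. The following is a minimal sketch; the field names, the `outline` helper, and the sample argument are illustrative inventions, not part of Toulmin's own notation:

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    claim: str                 # the central assertion the argument supports
    data: list[str]            # evidence or facts supporting the claim
    warrant: str               # reasoning that links the data to the claim
    backing: str = ""          # additional justification for the warrant
    qualifier: str = ""        # limits on the claim, e.g. "in most cases"
    rebuttals: list[str] = field(default_factory=list)  # anticipated counterarguments

    def outline(self) -> str:
        """Render the argument as a plain-text outline."""
        lines = [f"Claim: {self.claim}"]
        lines += [f"Data: {d}" for d in self.data]
        lines.append(f"Warrant: {self.warrant}")
        if self.backing:
            lines.append(f"Backing: {self.backing}")
        if self.qualifier:
            lines.append(f"Qualifier: {self.qualifier}")
        lines += [f"Rebuttal: {r}" for r in self.rebuttals]
        return "\n".join(lines)

# A hypothetical example argument:
arg = ToulminArgument(
    claim="Remote work increases productivity",
    data=["Fewer commute hours", "Survey: 70% report better focus"],
    warrant="Time saved and improved focus translate into more output",
    qualifier="for knowledge workers",
)
print(arg.outline())
```

Walking through the six fields in this order mirrors how the essay above presents them, which makes the structure easy to check against the model.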
competition-model
Defect
The main weakness of the model is its over-reliance on rather artificial interpretation tasks.
5. Conclusion
The Competition Model is a psycholinguistic theory of language acquisition and sentence processing. It has been investigated by means of experimental studies that elicited rather artificial language responses. Although the theory contributes to research on SLA, especially on input, some researchers think it is necessary to develop more natural, online ways of investigating input processing.
functions in the L2 2) what weights to attach to the use of
individual forms in the performance of specific functions.
2. Theoretical Framework
2.4 Cue Strength
Second, it emphasizes the role of transfer, automatization, and parasitism in the learning of the L2.
The Value Chain Model (English Version)
The value chain model is a strategic framework that outlines the key activities involved in a company's production process, from raw-materials acquisition to the delivery of the final product or service to the end customer. Developed by Michael Porter in 1985, the value chain model helps businesses identify opportunities for cost reduction, differentiation, and competitive advantage. The value chain is typically divided into two main categories: primary activities and support activities. Primary activities are directly related to the production and delivery of the product, such as inbound logistics, operations, outbound logistics, marketing and sales, and service. Support activities, on the other hand, provide the infrastructure and resources necessary for the primary activities to function effectively, including procurement, human resource management, technology development, and infrastructure. By analyzing each activity within the value chain, companies can pinpoint the areas of their operations where they can create the most value and improve overall efficiency. This strategic approach enables firms to increase profitability, enhance customer satisfaction, and gain a sustainable competitive advantage in the marketplace.
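Porter's split into primary and support activities can be sketched as a small lookup structure. This is a toy illustration: the activity names follow the text above, while the `classify` helper and its behavior are inventions for the example:

```python
# Porter's value chain: primary vs. support activities
# (activity names taken from the description above).
VALUE_CHAIN = {
    "primary": [
        "inbound logistics", "operations", "outbound logistics",
        "marketing and sales", "service",
    ],
    "support": [
        "procurement", "human resource management",
        "technology development", "infrastructure",
    ],
}

def classify(activity: str) -> str:
    """Return 'primary' or 'support' for a known activity, else raise KeyError."""
    for category, activities in VALUE_CHAIN.items():
        if activity in activities:
            return category
    raise KeyError(f"unknown activity: {activity}")

print(classify("operations"))   # primary
print(classify("procurement"))  # support
```

Keeping the two categories as explicit lists makes the primary/support distinction, which the paragraph above emphasizes, directly inspectable in code.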
The Monetary Models of Exchange Rate Determination
Sticky-Price Model
Assume that PPP only holds in the long run, such that
$\bar{s}_t = \bar{p}_t - \bar{p}_t^*$
where a bar over a variable indicates a long-run equilibrium value. In the long run PPP holds and thus the Flex-Price model is valid, but only in the long run.
$\bar{s}_t = (\bar{m}_t - \bar{m}_t^*) - \delta(\bar{y}_t - \bar{y}_t^*) + \gamma(\bar{i}_t - \bar{i}_t^*)$
In the short run, prices are sticky and PPP will not hold and neither will the Flex-Price model.
$m_t - p_t = \delta y_t - \gamma i_t$
where
• $m_t$ is the domestic money supply in logs
• $p_t$ is the log of the domestic price level
• $y_t$ is the log of domestic real income
• $i_t$ is the domestic nominal interest rate
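Combining this money-demand relation with its foreign counterpart and imposing PPP yields the flex-price exchange-rate equation quoted above. A sketch of the algebra, assuming the same parameters $\delta$ and $\gamma$ hold abroad:

```latex
\begin{align*}
m_t - p_t &= \delta y_t - \gamma i_t,
\qquad m_t^* - p_t^* = \delta y_t^* - \gamma i_t^* \\
\Rightarrow\quad p_t - p_t^* &= (m_t - m_t^*) - \delta (y_t - y_t^*) + \gamma (i_t - i_t^*) \\
\text{PPP } (s_t = p_t - p_t^*)\ \Rightarrow\quad
s_t &= (m_t - m_t^*) - \delta (y_t - y_t^*) + \gamma (i_t - i_t^*)
\end{align*}
```

In the sticky-price model this relation is imposed only on the long-run (barred) values, since PPP fails in the short run.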
A Tutorial on the Schumpeterian Growth Model (in English) — Hong Kong Baptist University, School of Business, lecture slides
innovators!
• James M. Utterback:
If a company focuses only on its current products rather than improving their quality, it will be swept out of the market!
• Peter Drucker:
Literature Review of Schumpeterian Model
• How does the economy grow in Schumpeterian growth theory?
• Capital-based growth theory emphasizes the idea that the accumulation of capital (both material and human capital) is the most important force driving the advancement of technology and the economy.
market.
• Creative destruction is the driving force behind the development of capitalism. Note, however, that the Schumpeterian growth theory as defined here does not require the process of innovation to be a process of creative destruction. Under the vertical-innovation framework, innovation is a process of creative destruction, so old goods are replaced by new goods. Under the horizontal-innovation framework, by contrast, both old and new goods continue to exist in the market.
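The contrast between the two frameworks can be made concrete with a toy product-market simulation. This is entirely hypothetical and not a calibrated growth model; the function name and mechanics are inventions for illustration:

```python
import random

def simulate(mode: str, steps: int = 10, seed: int = 0) -> list[str]:
    """Toy market in which one innovation arrives per step.

    'vertical' (creative destruction): each new good replaces an old one.
    'horizontal': each new good is added alongside the existing goods.
    """
    rng = random.Random(seed)
    market = ["good_0"]
    for t in range(1, steps + 1):
        new_good = f"good_{t}"
        if mode == "vertical":
            market.remove(rng.choice(market))  # an old good is destroyed
        market.append(new_good)
    return market

# Under vertical innovation the number of goods stays constant,
# while under horizontal innovation it grows with every innovation.
print(len(simulate("vertical")))    # 1
print(len(simulate("horizontal")))  # 11
```

The two final market sizes mirror the point in the bullet above: vertical innovation replaces goods, horizontal innovation accumulates them.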
Postgraduate Entrance Exam English Reading Comprehension: Original Article from The Economist
How companies use AI to set prices
The pricing of products is turning from art into science

Few American business tactics are as peculiar in a freewheeling capitalist society as the manufacturer's suggested retail price. P.H. Hanes, founder of the textile mill that would eventually become HanesBrands, came up with it in the 1920s. That allowed him to use adverts in publications across America to deter distributors from gouging buyers of his knitted undergarments.

Even today many American shopkeepers hew to manufacturers' recommended prices, as much as they would love to raise them to offset the inflationary pressures on their other costs. A growing number, though, resort to more sophisticated pricing techniques.
Business Models for the Internet of Things
International Journal of Information Management 35 (2015) 672–678

Business models for the Internet of Things

R.M. Dijkman, B. Sprenkels, T. Peeters, A. Janssen
Eindhoven University of Technology, P.O. Box 513, Eindhoven, The Netherlands; Deloitte Innovation, Orteliuslaan 982, Utrecht, The Netherlands

Article history: Received 29 May 2015; Accepted 30 July 2015; Available online 27 August 2015
Keywords: Internet of Things; Business model; New product development; Applications

Abstract: The Internet of Things is the connection – via the internet – of objects from the physical world that are equipped with sensors, actuators and communication technology. This technology is looked at by a large variety of domains, such as manufacturing, healthcare and energy, to facilitate the development of new applications and the improvement of existing applications. To also enable the commercial exploitation of these applications, new types of business models must be developed. Frameworks exist to facilitate the development of business models. These frameworks define the building blocks that a business model addresses. This paper presents a business model framework specifically for Internet of Things applications. Through a literature survey, interviews and a survey among practitioners, it identifies the building blocks that are relevant in an Internet of Things business model, types of options that can be focused on within these building blocks and the relative importance of these building blocks and types. The framework can be used by developers as a starting point for creating business models for Internet of Things applications. © 2015 Elsevier Ltd. All rights reserved.

1. Introduction

The Internet of Things (IoT) refers to the interconnection of physical objects, by equipping them with sensors, actuators and a means to
connect to the Internet. By technologically enabling this, the goal is to develop new applications and to improve existing applications. Famous examples of IoT applications include monitoring of personal health through wearables, washing machines that enable you to pay per load instead of for the machine, greenhouses that adapt their internal climate to the monitored properties of the crops that grow inside, and stables that adapt feeding and milking schedules to monitored properties of individual cows.

The IoT is currently going through a phase of rapid growth. The number of connected 'things' has increased threefold over the past five years (Digitimes, 2013) and is estimated to be 4.9 billion in 2015 (Gartner, 2014). As a consequence, organizations expect the IoT to become an important source of revenue. Cisco estimated that the global IoT market will generate $14 trillion in profit over the next decade (Bort, 2013) and Gartner (2013) predicts that the total global economic added value for the IoT market will be $1.9 trillion in 2020.

The Economist Intelligence Unit (2013) stated that the biggest incentive for businesses to move ahead with the IoT is arguably the potential financial returns from its "productisation". In other words, for the Internet of Things to be fully adopted by businesses, financial returns are key. Therefore, business models and ways to create value for IoT technology are needed. However, in spite of the expectation that the introduction of the Internet of Things will bring new revenue opportunities to which old business models will not be applicable, the question of what business models will be applicable remains (The Economist Intelligence Unit, 2013). Moreover, as our literature review in Section 2 of this paper will show, there currently exists little academic knowledge on how business
models for IoT applications differ from business models for other applications and how they should be constructed. This paper aims to fill that gap, by presenting a framework for developing business models for IoT applications. The framework is created based on a literature survey into existing business model frameworks and subsequently adapting these frameworks based on interviews in 11 companies that develop IoT applications. Finally, the relative importance of the different parts of the framework for IoT applications is determined through a survey with 300 respondents, resulting in 72 observations. By doing so, the contribution of this paper is a novel business model framework for IoT applications that is grounded both in literature and in interviews and a survey among IoT professionals.

Against this background, the rest of this paper is structured as follows. Section 2 presents existing business models for IoT applications from literature. Section 3 presents our research method, Section 4 the data analysis, Section 5 the results, Section 6 a discussion of the results and Section 7 the conclusions.

2. Existing business models for the Internet of Things

A business model is an overview of the manner in which a company does its business. "It is a description of the value a company offers to one or several segments of customers and of the architecture of the firm and its network of partners for creating, marketing, and delivering this value and relationship capital, to generate profitable and sustainable revenue streams" (Osterwalder, Pigneur, & Tucci, 2005). Business models are usually split into various components (Chesbrough & Rosenbloom, 2002; Morris, Schindehutte, & Allen, 2005). The most widely used components in the business model literature are customer segments, value propositions, channels, customer relationships, revenue streams, key
resources, key activities, key partnerships, and cost structure (Osterwalder et al., 2005). A business model framework is a tool that helps a company to develop its business models, by providing an overview of these components.

To develop a framework for business models for IoT applications, we initially searched for existing business models for IoT in the literature, with the aim of generalizing them into a framework. To conduct this search, we used the keywords "Internet of Things" AND "business model*", searching the ACM Digital Library, IEEE Xplore, Science Direct, Springer and Web of Science. With these search terms, we found only 20 papers. From these papers, we selected the ones that contained an actual business model, which left only 5 papers. Two of these papers developed their business model based on a business model framework called the Business Model Canvas (Osterwalder & Pigneur, 2010), which, in turn, is synthesized from a large number of similar frameworks (Osterwalder, 2004).

Table 1 shows the components that are covered by the various business models. These components are the partners, activities and resources that are key to produce and sell the product, the value that the product brings, the way in which the relation with the customer is built, the channel through which the product is sold, the types of customers that the product targets, the way in which costs are incurred and the way in which revenue is made. The table shows that the two models that are based on the Business Model Canvas cover all components of that framework. The models by Fan and Zhou (2011) and Liu and Jia (2010) cover a subset of these components. Li and Xu (2013) use different terminology to introduce their business model and primarily focus on the different stakeholders in developing an IoT platform and the activities that these stakeholders should perform.

When developing the business model framework for IoT applications in this paper, we take the Business Model Canvas as a starting point, because two of the five business
models for IoT applications that we found in the literature are based on the Business Model Canvas, and because the Business Model Canvas itself is based on a meta-analysis of the business model framework literature. We also apply the Business Model Canvas terminology by labelling the business model components 'building blocks'. A business model is constructed by choosing one or multiple specific 'types' for each building block. For example, 'Asset Sale' is a type of the building block 'Revenue Stream' that can be used to construct a business model.

3. Empirical research methodology

The overall goal of this research is to create a framework for developing business models for IoT applications, and the literature survey described in the previous section shows that a business model framework has building blocks that are developed by specific building block types. Therefore, we use an empirical research methodology to identify the building blocks and building block types for business models for IoT applications. Subsequently, we determine which building blocks and specific types are considered significantly more important than others for developing business models for IoT applications.

The literature survey from the previous section shows that the research area of IoT business models is relatively unexplored; five IoT business models exist in the literature and these have not been empirically validated. Therefore, we choose a sequential exploratory research design, based on the approach proposed by Teddlie and Tashakkori (2006), which is useful for exploring relationships when study variables are not known (Hanson, Creswell, Clark, Petska, & Creswell, 2005). In a sequential exploratory research approach, qualitative data are collected and analyzed first, followed by collection and analysis of quantitative data. Afterwards, the inferences of both strands are integrated in one discussion. The sequential use of two different methods increases construct validity (Greene, Caracelli, & Graham, 1989), and
ultimately leads to stronger conclusions (Teddlie & Tashakkori, 2006). In particular, we proceed as follows. First, we identify building blocks and specific building block types using literature and interviews with professionals who work on IoT business models. Subsequently, we determine the relative importance of the identified building blocks and types using the results of a survey contrasted with the results of the interviews.

4. Data and analysis

This section analyses the data that is collected to determine the building blocks and specific building block types for business models for IoT and to determine their relative importance.

4.1. Interviews

In line with the motivation for using the Business Model Canvas as a starting point for developing a framework for IoT business models, we developed an interview protocol based on the Business Model Canvas. The interview is semi-structured and aims to ask practitioners working on IoT business models about the completeness and correctness of the building blocks and types for IoT business models identified in the Business Model Canvas. The questions are based on the questionnaire developed by Osterwalder and Pigneur (2010, p. 19–42). Appendix A shows the complete interview protocol. Participants were found in the following ways:

A. Referrals from our business network
B. Referrals from IoT specialists
C. Contact details for an IoT company found on the internet
D. Referrals from prior interviewees

As a result, 11 interviews were planned, as shown in Table 2. Table 3 shows the descriptive statistics of the interviews. It shows the sector in which the company of the interviewee operates, the size of the company, the type of clients of the company, whether the IoT offering that is considered in the interview (see Table 2) is primarily a product, a service, or both, and the number of years that the company has been offering the product or service.
The interviews were transcribed, sent back to the interviewees for verification, and coded. The interviews were used to complete and adapt the types of an IoT business model, by adding, removing, splitting or merging types, or by proposing an alternative classification for a type. Appendix B shows how these changes were made. It also shows how frequently a particular type was mentioned during the interviews.

Table 1. Components covered in IoT business models. The five models compared are Sun, Yan, Lu, Bie, and Thomas (2012); Bucherer and Uckelmann (2011); Fan and Zhou (2011); Liu and Jia (2010); and Li and Xu (2013). Number of models covering each component: key partners (5), key activities (4), key resources (2), value propositions (3), customer relationships (3), channels (2), customer segments (5), cost structure (3), revenue streams (5).

Table 2. Interviews per company.

No.  Company             Sector                         IoT application       Way of finding
1    Focus Cura          Healthcare/independent living  ThuisMeetApp          A
2    Dutch Domotics      Healthcare/independent living  Zorgdomotica          A
3    Hoogendoorn         Agriculture                    iSii compact          C
4    Essent              Energy                         e-Thermostat          A
5    Bundles             Smart home                     Washbundles           D
6    Blinq Systems       Smart buildings                MapIQ                 A
7    Ambient systems     Supply chain                   Ambient supply chain  B
8    GSETrack            Transportation                 GSETrack              C
9    Prometheus          Supply chain                   Telematica            B
10   Philips             Smart home                     Philips Hue           A
11   Mieloo & Alexander  Supply chain                   ScanGreen             A

Table 3. Descriptive statistics of the interviews (number of companies).

Sector: Agriculture 1; Energy 1; Healthcare 2; Smart home 2; Smart buildings 1; Supply chain 3; Transportation 1.
Size: Micro (<10 employees) 5; Small (10–50) 2; Medium (50–250) 2; Large (>250) 2.
Clients: B2B 8; B2C 3.
Product or service: Product 4; Service 2; Both 5.
Offering age: <1 year 3; 1–5 years 6; 5–10 years 1; >10 years 1.

4.2. Survey

A survey was used to ask the opinion of IoT professionals about the importance of each building block and each type. To this end, as suggested by Dillman (2000), we asked on a Likert scale whether respondents 'strongly disagree', 'disagree', were 'neutral' about the
statement, 'agree', or 'strongly agree' that a selected building block or type should be incorporated into a business model. In addition to that, we asked demographic questions and questions about an IoT initiative or project that the respondent had been involved in, if applicable. A pilot with six participants was performed. Apart from spelling errors and minor improvements for understandability and layout, no significant changes were made. The questionnaire is included in Appendix B.

An online survey was the most appropriate option to reach out to the target group. The survey was distributed online in various IoT-focused groups on LinkedIn, Facebook and MeetUp. In addition, mailing lists, intranet posts, and e-mails in one of the authors' business networks were used to distribute the survey. Ilieva, Baron and Healey (2002) report that the average response time in online surveys is 5.59 days. To take some slack into account, the survey was kept online for two weeks.

The survey resulted in 300 responses, of which 96 were completed. All partially completed cases in which all the core questions on the building blocks were completed were added, which resulted in a total of 103 cases.

We analyzed the data for various potential problems. First, a check on double IP addresses in the cases revealed no respondents who intentionally took the survey twice. Second, the time spent to take the survey was checked for all cases. Since the tests on the survey pointed out that taking the survey appropriately took about 15 minutes, every case which was finished in under 10 minutes was not considered to be valid. Based on this criterion, 24 cases were deleted. Per building block, the standard deviations of the types were calculated (the question on the importance of each building block was also added). If a standard deviation is 0, this implies that the respondent filled out all factors within that building block with the exact same value. This could of course be honest and intentional. However, in 7 cases this happened multiple times,
which raises the suspicion that respondents filled out the questions without the intention of giving appropriate answers. Deleting those cases left 72 observations for our analysis.

Because of the way in which we distributed our survey – by sending it to a large number of internet forums and e-mail lists – and the low response rate that we assume to be associated with this, we could have a non-response bias. Therefore, we checked for differences between the answers of early responders and late responders (Armstrong & Overton, 1977). Analyzing the skewness, kurtosis and normality of the differences, and running t-tests on the means of early responders (responded in the first week) and late responders, we find no significant differences between the early and late responders. Therefore, we conclude that the risk of a non-response bias is low.

Table 4. Descriptive statistics of the survey respondents.

Gender: Male 63 (88%); Female 6 (8%); Not specified 3 (4%).
Age group: 18–24: 1 (1%); 25–34: 23 (32%); 35–54: 36 (50%); 55+: 10 (14%); Not specified 2 (3%).
Education: No degree 2 (3%); High school 5 (7%); Bachelor 6 (8%); Master 48 (67%); Doctorate 9 (13%); Not specified 2 (3%).
Expertise: Yes 49 (68%); No 23 (32%).
Country of origin: The Netherlands 32 (44%); USA 13 (18%); Germany 3 (4%); India 3 (4%); Austria 3 (4%); Other (13 countries) 18 (25%).

Table 4 shows the descriptive statistics of the respondents of the survey. 88% of the respondents are male (to compare, in 2012 in the USA the share of females enrolled in computer and information science Bachelor degrees was 18% (National Center for Education Statistics, 2012)). Furthermore, respondents are relatively young and highly educated. Although respondents originate from various countries, The Netherlands and the USA seem overrepresented. According to Hofstede (1980), the cultures of The Netherlands and the USA are quite similar, and both are very different from other cultures such as Asian and South American cultures. Therefore, we should be cautious in generalizing the conclusions of this paper to IoT
business models developed around the world.

To determine which building blocks and which building block types are considered more or most important, we use one-sample t-tests (two-tailed, α = 0.05). Due to the Likert-scale answer format, our data is not normally distributed and also not homogeneous in its variances. However, given our sample size, our sample is robust to these violations of normality (Bartlett, 1935).

5. Results

Based on the analysis described in the previous section, we first present a business model framework for IoT business models by showing the building blocks of such a business model and the identified types for the building blocks. Second, we present the relative importance of the building blocks and types.

5.1. Overall business model framework

Fig. 1 shows the business model framework for IoT applications that was derived from literature (see Section 2) and interviews with practitioners in the IoT domain (see Section 4.1). The building blocks that were identified in the first five interviews were used to adapt the types that were identified by Osterwalder and Pigneur (2010), by adding new types to the list, splitting up a type into multiple more detailed types, merging types into one more abstract type, or providing an alternative classification of types for a building block. The figure shows the building blocks that constitute an IoT business model (key partners, key activities, key resources, value propositions, customer relationships, channels, customer segments, cost structure, and revenue streams) and the possible types for each building block. A type that is marked with a gray background represents a type that was added based on the interviews.

Subsequently, we ran an exploratory factor analysis based on the survey data to explore whether unobserved factors underlie the added building block types of the most modified building blocks (key partners, key activities, key resources, and cost structure). The Kaiser–Meyer–Olkin measure of sampling adequacy ranges from miserable (0.55) to
middling (0.70), but exceeds the 'unacceptable' threshold in all cases (Kaiser, 1974). We only select factors with an eigenvalue larger than 1. Since post-estimation analyses indicate correlation between the factors, we apply oblique (promax) instead of orthogonal rotation to facilitate interpretation of these factors. In the key partners block, two factors emerge. The first factor is oriented towards transportation, and contains the types distributors, logistics, and service partners. The second factor is oriented towards upgrading towards IoT, and contains hardware producers, software developers, and data interpretation. In the key activities block, a two-factor solution emerges as well. The first factor consists of the types marketing and sales and is clearly oriented towards these activities. The second factor consists of product development and implementation and is interpreted as research, development, and engineering. The factor analyses of the building blocks key resources and cost structure each yield only one factor with an eigenvalue larger than 1, and do not result in interpretable factors. Appendix D contains the tables with factor loadings for the key partners and key activities building blocks.

5.2. Relative importance of building blocks and types

Fig. 2 shows the relative importance of the building blocks, as measured through the survey, together with their 95% confidence intervals and the average score over all building blocks. The survey respondents indicated value proposition as the most important building block for IoT business models, as it scored significantly higher (x̄ = 6.38) than all other building blocks at the <0.01 level of significance. Furthermore, the measured differences between the importance of the other building blocks are low, although channels are considered significantly less important at the <0.02 level of significance. Interviewees mainly indicated the building blocks value proposition (9 times), customer relationships (6) and key partners (5) to be among the most important in their
business models for IoT applications. Thus, the results from the interviews support the results from the survey.

Fig. 3 shows the relative importance of the specific types within the building blocks. The types are ordered according to the score that they received from the respondents. Types that score significantly more important than the average score in a building block are above or to the left of the grey area. Types that are considered significantly less important than other types are below or to the right of the grey area. All reported differences are significant at the <0.05 level, but most are even significant at the <0.01 level.

[Fig. 1. Business model framework for IoT applications. Fig. 2. Relative importance of building blocks. Fig. 3. Business model framework for IoT applications with relative importance of specific types; # <0.05 significance, * <0.02 significance, ** <0.01 significance.]

In an attempt to determine whether there are any logical clusters in the data that form coherent business models, we determined the positive correlations between all of the types. The results of this are shown in Appendix E. These results should be seen as exploratory.

6. Discussion

The survey indicated that the value proposition is the most important building block in IoT business models, which resembles the central role of this building block in the literature (Chesbrough & Rosenbloom, 2002; Morris et al., 2005). It is also in line with the result from the interviews; the interviewees indicated that value proposition was the most important building block. Besides the value proposition, the customer relationships and key partnerships are also considered to be important building blocks in IoT business models. This result is also amplified by the fact that in 4 of the 11 interviews the specific combination of these three building blocks was most
important in their business model.

Within the value proposition block, the types convenience, performance, getting the job done, comfort, and possibility for updates were indicated as most important by the survey respondents. For the first two types, the interview results agreed (respectively 10 and 7 of the 11 companies indicated the presence of the types in their business models). For the second two types, getting the job done and comfort, the interview results did not match (respectively 2 and 1 of the companies indicated the presence of the types in their business model). Strikingly, 7 of the 11 interviewed companies indicated cost reduction to be present as a type of their value proposition, but the survey did not confirm these results. Openshaw et al. (2014) argued that cost reduction in IoT business models is not bad, but it also is not enough. Businesses should extend from cost reduction models to exploring revenue models, such as additional revenues from the data being generated. This implies that, although cost reduction is not the only possible type of value proposition, it belongs to the most important value proposition types.

In the customer relationships block, a split is noticeable. The survey results indicate that IoT applications will be mostly focused on co-creation and communities, and 4 of the 11 interviewed companies pointed out that they use co-creation in their business model. For instance, Philips indicated the use of co-creation as a type of customer relationship for their IoT product 'Hue', since customers can design their own light recipes. Mieloo & Alexander indicated that co-creation is only used in the development phase of their product.
Communities were, however, indicated to be used by only 1 of the 11 companies. Another focus within customer relationships is on the self-reliance that can be achieved with IoT products: 7 of the 11 companies indicated that the relationship type self-service is present in their business model. The survey results confirm this, since the type is scored higher, though not significantly higher, than the average in the building block and significantly higher than 2 of the 5 other types. Lastly, the data generated by IoT applications enables customer involvement, including on an individual basis. For instance, conventional washer manufacturers only have after-sales customer contact when the machine breaks down. In the business model of Bundles, which offers a connected washer, customers get monthly feedback about their washing behavior. Furthermore, where customers of Essent used to see their energy usage once a year, the E-thermostat currently enables customers to monitor their use of energy daily. This finding is in line with Hui (2014): IoT business models add personalization and context through information gained over time. Access to the customer data enables quicker and more personalized customer contact.

Key partnerships is the third and last building block that is considered more important than the others in IoT business models. The survey results indicate that software & app developers, launching customers, hardware partners, and data analysis partners are the most important partnership types to shape in IoT business models.
The interview results confirm the first three, as respectively 8, 9, and 10 of the 11 interviewed companies pointed out that these types are among the key partnerships in their business models. Combined with the results of our factor analysis, these results hint that incorporating IoT products in the product portfolio is a specialization that is (partly) acquired by outsourcing. For instance, Bundles argued that it is not possible to build your solution alone and that IoT companies will have to outsource even crucial activities to partners. This corroborates Quinn's (2000) observation that innovations combining software and technology are prone to be outsourced. However, this also points to an increasing complexity of partnerships in business models for IoT applications (Hui, 2014). An additional question in the survey shows that respondents agree with the statement that the partner structure of IoT applications' business models is more complex than the partner structure of conventional business models (x̄ = 5.43). Moreover, Mieloo & Alexander stated that partnerships are becoming crucial and that more collaboration leads to long-term relations, information sharing, and joint cost reduction. Thus, understanding how others in the ecosystem make money becomes important for achieving long-term success.

As expected, the correlation results from Appendix E show that types from the same building block correspond far more often to each other than types from different building blocks. They also show logical correlations between key activities, key partners, and cost when it comes to hardware development, software development, and logistics (i.e. if hardware development is more important as an activity, it is also more important as a cost factor and in key partnerships). Zooming in on particular value propositions also provides some potentially interesting insights. For example, the value proposition 'comfort' seems to be related to outsourced software and hardware development, with the service delivered by the company itself and revenue collected through asset sale, via partner stores and wholesalers. However, these results should be seen as exploratory and require further research.

7. Conclusions

This paper presents empirical research that leads to a framework for business models for Internet of Things applications. Using a literature survey, interviews, and a survey among 300 respondents that led to 72 observations, we established the building blocks of a business model and specific types within those building blocks. Subsequently, we established the relative importance of those building blocks and types.

Our research is subject to some important limitations. First of all, although our study yields novel and insightful results for IoT business models, the downside of our wide exploratory research approach is that it lacks the finesse of a more detailed approach whereby a specific building block or industry sector is targeted. Such a study would result in more specific recommendations, albeit for a narrower target group. Moreover, the relatively low number of observations prevented us from doing a factor analysis on all building blocks and types, which could have revealed important insights into patterns of various business models. Lastly, as mentioned before, the majority of our respondents are based in the USA or the Netherlands. Our results thus should not be generalized to dissimilar cultures and economies.

Notwithstanding these limitations, however, we believe our research also makes some important contributions. For academics, this research project contributes by filling the literature gap regarding IoT business models and can serve as a starting point for future research on IoT business models. It is the first study that extensively
Accuracy Comparison of Ionospheric Models
GONG Yan, HAN Baomin (School of Civil and Architectural Engineering, Shandong University of Technology, Zibo 255049, Shandong, China)

Abstract: To better correct for ionospheric delay, the commonly used NeQuick and IRI ionospheric models were applied. Data from several randomly chosen epochs on several days were processed, and the results were compared with those of the IGS analysis centers.
The results show that the TEC values obtained with the different models differ, and so do their accuracies, with one of the models achieving higher accuracy.
Keywords: NeQuick model; IRI model; TEC

As is well known, the ionosphere is a layer of ionized atmosphere surrounding the Earth. Its electron density, stability, and thickness change constantly, and these changes are driven mainly by solar activity.
When the Sun undergoes a coronal mass ejection, millions of tons of material can be hurled into space as magnetic clouds. When these magnetic clouds reach the Earth's ionosphere, they cause large changes in its electron density, producing so-called ionospheric storms and severe space weather conditions that, in serious cases, can interrupt radio communication systems and damage Earth-orbiting satellites (such as communication satellites).
When a GPS signal propagates to the Earth or to a low-orbiting vehicle, it must pass through the ionosphere, producing a path delay (equivalently, a time delay). This ionospheric delay is a major error source in GPS positioning; in particular, since the US government announced the removal of the Selective Availability (SA) policy in May 2000, ionospheric delay has been regarded as the largest error source affecting GPS positioning accuracy.
Monitoring and forecasting ionospheric activity may therefore provide early-warning information so that valuable communication satellites can be protected in time, reveal regularities in certain solar and ionospheric phenomena, and shed light on variations of the geomagnetic field and the other geospheres and their interactions.
1 Ionospheric model methods and principles

It is difficult to establish a complete theoretical model for forecasting ionospheric activity; at present, forecasts mostly rely on statistical regularities and empirical models, and their accuracy is limited.
Long-term prediction models for ionospheric TEC fall broadly into two classes: one computes TEC from the electron density predicted by the NeQuick model, and the other computes TEC from the ionospheric profile predicted by the IRI model.
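Either way, TEC is the integral of electron density along the signal path. A minimal sketch of the vertical case, using a simple Chapman-layer profile as a stand-in for a full NeQuick/IRI profile (all parameter values below are illustrative, not taken from either model):

```python
import numpy as np

def chapman_ne(h_km, nm=1.0e12, hm=350.0, H=60.0):
    """Chapman-layer electron density (electrons/m^3) at height h_km.
    nm: peak density, hm: peak height (km), H: scale height (km)."""
    z = (h_km - hm) / H
    return nm * np.exp(0.5 * (1.0 - z - np.exp(-z)))

def vertical_tec(profile, h0_km=60.0, h1_km=2000.0, n=20000):
    """Trapezoid-integrate Ne over height; 1 TECU = 1e16 electrons/m^2."""
    h = np.linspace(h0_km, h1_km, n)
    ne = profile(h)
    dh_m = np.diff(h) * 1000.0            # step size in metres
    return float(np.sum(0.5 * (ne[1:] + ne[:-1]) * dh_m) / 1e16)

tec = vertical_tec(chapman_ne)            # a few tens of TECU here
```

Both NeQuick and IRI differ from this toy profile in how they build Ne(h), but the final step, integrating the profile to obtain TEC, has exactly this shape.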
1.1 The NeQuick model

NeQuick is a newer ionospheric model developed jointly by the Aeronomy and Radiopropagation Laboratory of the Abdus Salam International Centre for Theoretical Physics (ARPL of ICTP, Trieste, Italy) and the Institute for Geophysics, Astrophysics and Meteorology of the University of Graz (IGAM, Austria). The model has been used in ESA's EGNOS project and is recommended for adoption by single-frequency users of the Galileo system to correct ionospheric delay.
Oligopoly and Sticky Prices
• Another is the kinked demand curve
• If a firm increases price, others won't go along, so demand is very elastic for price increases; if it cuts price, rivals match the cut, so demand is much less elastic for price decreases
Models of Oligopoly Behavior
• No single general model of oligopoly behavior exists.
Oligopoly
• An oligopoly is a market structure characterized by:
– Few firms
– Either standardized or differentiated products
– Difficult entry
Comparing Contestable Market and Cartel Models
• The cartel model is appropriate for oligopolists that collude, set a monopoly price, and prevent market entry
Why Are Prices Sticky?
• When there is a kink in the demand curve, there has to be a gap in the marginal revenue curve.
• The kinked demand curve is not a theory of oligopoly but a theory of sticky prices.
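A minimal numerical sketch makes the gap concrete. The demand branches and the kink point below are hypothetical, chosen only so the two marginal revenue branches can be compared at the kink:

```python
Q_KINK, P_KINK = 10.0, 5.0   # hypothetical kink point (quantity, price)

def price(q):
    """Inverse demand: flat (elastic) branch for price increases above
    the kink, steep (inelastic) branch for price cuts below it."""
    if q <= Q_KINK:
        return P_KINK + 0.2 * (Q_KINK - q)   # p = 7 - 0.2 q
    return P_KINK - 1.0 * (q - Q_KINK)       # p = 15 - q

def mr_elastic(q):
    # R = (7 - 0.2 q) q  =>  MR = 7 - 0.4 q on the elastic branch
    return 7.0 - 0.4 * q

def mr_inelastic(q):
    # R = (15 - q) q  =>  MR = 15 - 2 q on the inelastic branch
    return 15.0 - 2.0 * q

# MR jumps from 3 down to -5 at q = 10: any marginal cost inside that
# gap leaves the profit-maximizing price unchanged (a sticky price).
gap = (mr_elastic(Q_KINK), mr_inelastic(Q_KINK))
```

Price is continuous at the kink, but marginal revenue is not; that discontinuity is exactly the "gap in the marginal revenue curve" the slide refers to.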
Definition of an Economic Model
Economic models play a crucial role in modern economics.
Decision making in a theoretical model
In a theoretical model of decision making, individuals are assumed to gather and process information in a rational and systematic manner in order to arrive at the best possible choice. This theoretical framework often involves weighing the costs and benefits of different options, considering various probabilities and potential outcomes, and evaluating the potential impact of each decision on one's goals and objectives.

One common theoretical model of decision making is known as expected utility theory, which posits that individuals make choices based on the expected value of each option, taking into account both the potential gains and losses associated with each possible outcome. This model assumes that individuals are able to accurately assess the probabilities of different outcomes and to make decisions that maximize their expected utility or satisfaction.

Another influential theoretical perspective on decision making is bounded rationality, which suggests that individuals do not always have the capacity to gather and process all relevant information when making decisions, due to cognitive limitations and time constraints. Instead, individuals use heuristics and shortcuts to simplify the decision-making process and satisfice, or choose the first option that meets their criteria, rather than exhaustively searching for the best possible choice.

Additionally, some theoretical models of decision making incorporate emotional and motivational factors, recognizing that individuals' choices are often influenced by their desires, fears, and emotional responses to different options.
This may involve considering the role of affective forecasting, or predicting one's emotional reactions to different outcomes, in decision-making processes.

Overall, theoretical models of decision making provide valuable frameworks for understanding and predicting how individuals make choices in various contexts, and serve as the foundation for further empirical research and practical applications in fields such as economics, psychology, and management.
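The contrast between expected utility maximization and satisficing can be sketched in a few lines. The options, probabilities, and utilities below are invented for illustration:

```python
# Hypothetical lottery options as (probability, utility) pairs.
options = {
    "safe":  [(1.0, 50.0)],
    "risky": [(0.5, 120.0), (0.5, 0.0)],
}

def expected_utility(outcomes):
    """Probability-weighted utility, as in expected utility theory."""
    return sum(p * u for p, u in outcomes)

def best_option(opts):
    """Fully rational chooser: maximize expected utility."""
    return max(opts, key=lambda name: expected_utility(opts[name]))

def satisfice(ordered_names, opts, aspiration):
    """Bounded-rationality heuristic: take the FIRST option whose
    expected utility meets the aspiration level, then stop searching."""
    for name in ordered_names:
        if expected_utility(opts[name]) >= aspiration:
            return name
    return None
```

With these numbers, the maximizer picks "risky" (expected utility 60 versus 50), while a satisficer scanning the options in order with an aspiration level of 40 stops at "safe" without ever evaluating "risky", which is the behavioral difference the two theories describe.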
Game Theory Models (Nicholson, Intermediate Microeconomics lecture slides)
• Each player has the ability to choose among a set of possible actions • The specific identity of the players is irrelevant
• This means that A will also choose to play music loudly • The A:L,B:L strategy choice obeys the criterion for a Nash equilibrium
• Because L is a dominant strategy for B, it is the best choice no matter what A does
• If A knows that B will follow his best strategy, then L is the best choice for A
The Prisoners’ Dilemma
• The most famous two-person game with an undesirable Nash equilibrium outcome
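The dilemma's payoff structure can be checked for Nash equilibria mechanically. The payoffs below are the usual textbook illustration (years in prison, negated so that larger is better), not numbers taken from the slides:

```python
# C = stay silent (cooperate with partner), D = confess (defect).
PAYOFF = {  # (A's move, B's move) -> (A's payoff, B's payoff)
    ("C", "C"): (-1, -1),
    ("C", "D"): (-9,  0),
    ("D", "C"): ( 0, -9),
    ("D", "D"): (-6, -6),
}

def is_nash(a, b):
    """Nash equilibrium: neither player gains by deviating unilaterally."""
    pa, pb = PAYOFF[(a, b)]
    best_a = max(PAYOFF[(x, b)][0] for x in ("C", "D"))
    best_b = max(PAYOFF[(a, y)][1] for y in ("C", "D"))
    return pa >= best_a and pb >= best_b

equilibria = [profile for profile in PAYOFF if is_nash(*profile)]
```

Only (D, D) survives the check, even though (C, C) would leave both players better off; that is precisely the "undesirable Nash equilibrium outcome" the slide refers to.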
• games in which the strategies chosen by A and B are alternate levels of a single continuous variable • games where players use mixed strategies
Existence of Nash Equilibria
The Business-Model Thinking Model
The business model thinking model is a framework that organizations use to describe, design, and test their strategies for creating value. It encompasses the components necessary for a business to thrive, including its customer segments, value propositions, channels, customer relationships, revenue streams, key activities, key resources, and partnerships. By utilizing this model, businesses can better understand their current operations, identify areas for improvement, and explore new opportunities for growth.

One of the key benefits of the business model thinking model is its ability to facilitate collaboration and communication within an organization. It provides a common language and visual representation that can be easily understood by all stakeholders, including employees, management, and investors. This shared understanding helps to align everyone towards a common goal, ensuring that all efforts are focused on creating and capturing value.
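The building blocks listed above can be represented as a small data structure, which makes "testing" a design as simple as checking which blocks are still undescribed. The field names follow the text; the example content is invented:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessModel:
    customer_segments: list = field(default_factory=list)
    value_propositions: list = field(default_factory=list)
    channels: list = field(default_factory=list)
    customer_relationships: list = field(default_factory=list)
    revenue_streams: list = field(default_factory=list)
    key_activities: list = field(default_factory=list)
    key_resources: list = field(default_factory=list)
    partnerships: list = field(default_factory=list)

    def gaps(self):
        """Blocks that are still empty, i.e. not yet designed."""
        return [name for name, value in vars(self).items() if not value]

model = BusinessModel(value_propositions=["connected washing as a service"])
```

Here `model.gaps()` immediately lists the seven blocks that still need work, which is the kind of shared, inspectable representation the text credits the framework with providing.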
Application accuracy analysis of different NeQuick ionospheric model parameters
WANG Ningbo, YUAN Yunbin, LI Zishen, LI Min, HUO Xingliang

Abstract: Galileo adopts the NeQuick model for single-frequency ionospheric delay corrections. For the standard operation of Galileo, the NeQuick model is driven by the effective ionization level parameter Az instead of a solar activity index, and the three broadcast ionospheric coefficients are determined by a second-degree polynomial fitted to the Az values estimated from globally distributed Galileo Sensor Stations (GSS). In this study, the processing strategies for the estimation of the NeQuick ionospheric coefficients are discussed and the characteristics of these coefficients are analyzed. The accuracy of the GPS broadcast Klobuchar model, the original NeQuick2 model, the fitted NeQuickC model, and the Galileo broadcast NeQuickG model is evaluated over continental and oceanic regions, in comparison with the ionospheric total electron content (TEC) provided by global ionospheric maps (GIM), GPS test stations, and the JASON-2 altimeter. The results show that NeQuickG can mitigate the ionospheric delay by 54.2%–65.8% on a global scale, and NeQuickC can correct for 71.1%–74.2% of the ionospheric delay. NeQuick2 performs at the same level as NeQuickG, which is slightly better than the GPS broadcast Klobuchar model.

Journal: Acta Geodaetica et Cartographica Sinica (测绘学报), 2017, 46(4): 421–429.
Keywords: Galileo; NeQuick model; ionospheric delay; total electron content
Affiliations: Academy of Opto-Electronics, Chinese Academy of Sciences, Beijing 100094, China; State Key Laboratory of Geodesy and Earth's Dynamics, Institute of Geodesy and Geophysics, Chinese Academy of Sciences, Wuhan 430077, Hubei, China.

The ionosphere is one of the most troublesome error sources affecting applications of global navigation satellite systems (GNSS).
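In the Galileo scheme described above, the receiver evaluates the effective ionization level from the three broadcast coefficients and its modified dip latitude μ (MODIP), Az(μ) = a0 + a1·μ + a2·μ², and the system side obtains (a0, a1, a2) by a least-squares fit over station-wise Az estimates. A sketch of both directions follows; the coefficient values are placeholders, not real broadcast data:

```python
import numpy as np

def az(mu_deg, a0, a1, a2):
    """Effective ionization level from the three broadcast coefficients."""
    return a0 + a1 * mu_deg + a2 * mu_deg ** 2

def fit_az_coeffs(mu_samples, az_samples):
    """Least-squares fit of the second-degree Az polynomial.
    np.polyfit returns the highest power first, so reorder to (a0, a1, a2)."""
    c2, c1, c0 = np.polyfit(mu_samples, az_samples, 2)
    return c0, c1, c2

# Synthetic check: recover known coefficients from noiseless samples.
mu = np.linspace(-70.0, 70.0, 25)
a_true = (80.0, 0.5, 0.01)               # placeholder values
a_est = fit_az_coeffs(mu, az(mu, *a_true))
```

The fitted Az then replaces the solar flux index as the input that drives the NeQuick electron density profile at each receiver location.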
An English Essay on the Spider-Man Model
Title: The Spectacular Spider-Man: A Heroic Model
Spider-Man, the iconic superhero created by Stan Lee and Steve Ditko, has captivated audiences worldwide with his extraordinary abilities, compelling narratives, and enduring legacy. As a model of heroism, Spider-Man embodies virtues such as courage, responsibility, and selflessness, serving as an inspiration for individuals of all ages. In this essay, we delve into the characteristics and impact of Spider-Man as a role model, exploring how his actions and ethos resonate with audiences globally.First and foremost, Spider-Man exemplifies the virtue of courage in the face of adversity. From battling supervillains to confronting personal challenges, Spider-Man consistently demonstrates unwavering bravery. Despite facing formidable foes like Green Goblin, Doctor Octopus, and Venom, he confronts danger head-on, risking his own safety to protect the citizens of New York City. Spider-Man's willingness to confront danger and stand up for what is right serves as a powerful example of courage for individuals facing their own trials and tribulations.Moreover, Spider-Man epitomizes the principle of responsibility, a central theme in his character development. After gaining his superhuman abilities from a radioactive spider bite, Peter Parker learns the valuable lesson that "with great power comes great responsibility." As Spider-Man, Peter balances his dual identity, fulfilling his duties as a superhero while navigating the challenges of everyday life. Whether it's juggling his commitments as a student, photographer, or crimefighter, Spider-Man emphasizes the importance of fulfilling one's obligations and making choices that benefit others.Additionally, Spider-Man embodies the spirit of selflessness, consistently placing the needs of others above his own. Despite grappling with personal losses and setbacks, Spider-Man remains dedicated to serving his community and protecting the innocent. 
He often prioritizes the greater good over his individual desires, illustrating the altruistic nature of true heroism. Whether it's rescuing civilians from danger or assisting fellow superheroes in times of crisis, Spider-Man's acts of selflessness inspire others to emulate his example and make a positive impact in their own communities.

Furthermore, Spider-Man's resilience in the face of adversity serves as a testament to the power of perseverance. Despite facing numerous setbacks and defeats, Spider-Man refuses to succumb to despair, always striving to overcome obstacles and emerge victorious. His ability to learn from failure, adapt to new challenges, and continue fighting against injustice resonates with audiences facing their own struggles and setbacks. Spider-Man's resilience reminds us that even in our darkest moments, there is always hope for a brighter tomorrow.

In conclusion, Spider-Man stands as a timeless model of heroism, embodying virtues such as courage, responsibility, selflessness, and resilience. Through his courageous actions, unwavering sense of responsibility, selfless dedication to others, and indomitable spirit, Spider-Man inspires individuals around the world to embrace their own inner hero and strive for excellence in all aspects of their lives. As we continue to celebrate Spider-Man's enduring legacy, let us heed the lessons of his character and work together to create a better, more compassionate world for future generations.
Space Mapping
ORIGINAL ARTICLE

Two-level refined direct optimization scheme using intermediate surrogate models for electromagnetic optimization of a switched reluctance motor

Guillaume Crevecoeur, Ahmed Abou-Elyazied Abdallh, Ivo Couckuyt, Luc Dupré, Tom Dhaene

Received: 2 February 2011 / Accepted: 21 June 2011 / Published online: 8 July 2011
© Springer-Verlag London Limited 2011
Engineering with Computers (2012) 28:199–207. DOI 10.1007/s00366-011-0239-5
Affiliations: Department of Electrical Energy, Systems and Automation, Ghent University, Sint-Pietersnieuwstraat 41, 9000 Ghent, Belgium (G. Crevecoeur, e-mail: Guillaume.Crevecoeur@ugent.be; A. A.-E. Abdallh; L. Dupré); Department of Information Technology (INTEC), Ghent University–IBBT, Sint-Pietersnieuwstraat 41, 9000 Ghent, Belgium (I. Couckuyt; T. Dhaene)

Abstract Electromagnetic optimization procedures require a large number of evaluations in numerical forward models. These computer models simulate complex problems through the use of numerical techniques, e.g. finite elements. Hence, the evaluations need a large computational time. Two-level methods such as space mapping have been developed that include a second model so as to accelerate the inverse procedures. Contrary to existing two-level methods, we propose a scheme that enables acceleration when the second model is based on the initial numerical model with coarse discretizations. This paper validates the proposed refined direct optimization method on algebraic test functions. Moreover, we applied the methodology to the geometrical optimization of the magnetic circuit of a switched reluctance motor. The obtained numerical results show the efficiency of the optimization algorithm with respect to computational time and accuracy.

Keywords Switched reluctance motor · Optimal design · Finite elements · Geometrical optimization · Surrogate models · Kriging

1 Introduction

Electromagnetic rotating machines are indispensable in industry. Specifically, switched reluctance motors (SRMs) are widely used due to their simple working principle. In order to optimally design such machines, optimal design procedures with high-fidelity computer models, e.g. finite element (FE) models, are commonly utilized [1–3]. The motor under study is a 6/4 SRM, see Fig. 1, where we aim at optimizing the geometry of the magnetic circuit of the motor so as to obtain an average torque profile that is as high as possible. In a general electromagnetic optimization framework, the electromagnetic field computations are performed by solving the classical equations of Maxwell, specifying the geometry, materials, and sources. Using efficient numerical techniques, e.g. finite element, finite difference, or boundary element methods, computer models of the electromagnetic device can be built. These computer models solve the so-called forward problems with high solution accuracy. However, they are generally CPU time-consuming, so that traditional direct optimization approaches, which strive towards the minimization of a predefined cost function iteratively by the use of the forward model, become very time demanding, difficult, and impractical.

In this perspective, so-called two-level optimization methods, e.g. space mapping [4, 5], manifold mapping [6], response and parameter mapping [7], etc., were presented. In these two-level optimization methods, the optimization procedure is accelerated by incorporating, next to the high-fidelity 'fine model', an additional low-fidelity 'coarse model'. Indeed, these two-level optimization methods were successfully applied to different electromagnetic devices. For example, the space mapping technique was applied to the efficient optimal design of electromagnetic actuators [8, 9], the optimal design of a SRM [10], a transformer [11], etc. In [8–10], the used coarse models were mostly analytical models, i.e. lumped magnetic reluctance networks, where approximations were made with respect to geometry, materials, and sources. However, the construction of such fast coarse models can also be time demanding and difficult, especially when dealing with
complex forward models. Moreover, these coarse models have to be sufficiently faster than the fine models; otherwise the optimization procedure is not remarkably accelerated [12]. Therefore, a two-level optimization method based on coarse models that are relatively easier to build is needed.

We propose a novel alternative two-level optimization scheme that enables solving optimization problems on the fly in a more efficient way when including a coarse model that is directly derived from the fine numerical model with coarse discretizations. The reason for considering such a class of coarse models in two-level procedures is that they are much easier to build than analytical models and that they enable a better approximation of the system under study, where the physics of the model are more accurately incorporated, e.g. in case geometrical details are important in the forward solution or in case the nonlinearity of the material model is important to the forward solution. For such a class of coarse models, it is better to use a numerical model.

This paper describes the proposed refined direct optimization (RDO) scheme in detail and implements the scheme within the widely used Nelder–Mead simplex (NMS) method and nonlinear least squares methods. The scheme is based on the framework presented in [13, 14] and the two-level genetic algorithm [15], which employ surrogate models. For details concerning the surrogate models, we refer to [16]. Acceleration of the optimal design is obtained by optimizing only once a surrogate model that is based on the coarse model with coarse discretizations. This surrogate model is corrected by an interpolation model calibrated with the fine model with fine discretizations. In order to validate the method, we apply it to algebraic test functions. In a next stage, we apply the method to the optimal geometrical design of a 6/4 SRM and compare the results with the space mapping technique and the traditional direct optimization technique.

2 Two-level minimization methods

Minimization methods that include, next to the initial computer model, a second model are so-called two-level minimization methods. In forward modelling problems, the initial 'fine' model is mostly based on numerical techniques such as the finite element method (FEM), finite difference method (FDM), etc. This model has a high level of accuracy that requires a large computational time. The second 'coarse' model has a lower level of fidelity and is computationally fast. We denote the fine and coarse model as f(x) and c(x), respectively, with input parameter vector x. Metamodels, denoted here as models that interpolate input–output data and which do not solve the physics of the problem, e.g. response surface models [17] and Kriging models [18, 19], can act as 'coarse' models in two-level minimization methods [14, 20–22] and are constructed by interpolating response data, obtained by evaluating the model f(x_i) for a certain set of sample points x_i, i = 1, ..., N_d in the design space, where N_d is the number of design points. When dealing with complex problems, N_d needs to be large in order to obtain a sufficiently accurate metamodel. Metamodels can be used within optimization schemes, i.e. metamodel-assisted optimization (MAO) (e.g. [23]) and surrogate-based optimization (SBO) (e.g. [24]), with or without additional evaluations of the fine model for refinement of the metamodel. The efficient global optimization (EGO) algorithm [25] is an example of a minimization method that enables refinement of the metamodel (a Kriging model) during the optimization procedure by performing additional fine model evaluations. Indeed, EGO/expected improvement provides a balance between exploration, i.e. enhancing the accuracy of the metamodel, and exploitation, i.e. refining the metamodel solely in the region of the current optimum. The main drawback when using metamodels is that it becomes difficult to determine an accurate metamodel when dealing with high-dimensional parameters and with highly
nonlinear forward models ('curse of dimensionality') [26]. The number of evaluations that need to be carried out in the fine model for building the metamodel can increase to a large extent.

A second type of coarse models can be physics-based, where assumptions are made with respect to the geometry, sources, materials, etc. Space mapping (SM) and manifold mapping mostly include such models. In the most basic methods, e.g. the aggressive space mapping (ASM) algorithm [5], P forward model evaluations are carried out as well as P minimizations of the coarse model for different objectives, where P is the total number of iterations in the two-level algorithm. The total time for minimizing the objective function equals

T_SM = P T_f + P N_c T_c    (1)

with T_f and T_c being the computational time needed for one evaluation in the fine and coarse forward model, respectively. N_c is the average number of evaluations that need to be carried out for minimizing the coarse model, given certain objective(s). If we assume that the traditional 'one-level' (1L) minimization method needs N_f ≈ N_c evaluations in the fine model, then we can calculate the total time as T_1L = N_f T_f. The acceleration of space mapping with respect to traditional minimization methods can be defined as

A_1 = T_1L / T_SM ≈ N_c T_f / (P T_f + P N_c T_c).    (2)

This acceleration depends on the ratio s = T_f / T_c, and A_1 > 1 is obtained when N_c T_f > P T_f + P N_c T_c, or

s > P N_c / (N_c − P)    (3)

where we can assume N_c ≫ P, so that s needs to be larger than the number of iterations in space mapping or manifold mapping. It is not always possible to build a sufficiently fast coarse model, e.g. when the coarse models are numerical models with coarse discretizations.

3 Refined direct optimization (RDO) scheme

3.1 Iterative scheme

As mentioned in the previous section, existing two-level schemes cannot accelerate the procedure when the coarse model is not sufficiently fast or when a large number of evaluations are needed for building a metamodel. In this paper, we carry out only one optimization of a surrogate-based
model [P = 1 in (1)] that is iteratively refined during the optimization itself (increased number of fine model evaluations). This surrogate model is based on the coarse model and tries to approximate the fine model through the use of iteratively refined metamodels. The specific feature of this scheme is that acceleration is possible even for relatively small s.

The basic idea of the RDO scheme is to alter the optimization of the cost Y, e.g. the least-squares difference between targets and simulations, of the fine model

x*_f = arg min_x Y(f(x))    (4)

to the optimization of the cost of the surrogate model s(x)

x*_s = arg min_x Y(s(x))    (5)

where we want s(x) to approximate f(x) well near x*_s, so that x*_s is close to x*_f. Here, we use metamodels for interpolating the coarse model response data to the fine model response data. The relation between coarse model response and fine model response can become less complex and less difficult to determine; in this way, N_d can be reduced. The surrogate model used in the RDO scheme has the following form:

s(x) = c(x) + e(x)    (6)

with error function e(x) that is determined using metamodels. Notice that (6) can also be of the form s(x) = c(x) e(x). In this paper, we use the Kriging metamodel, see e.g. [18], for building the error function. The surrogate model s(x) is refined during the optimization procedure by performing a limited number of fine model evaluations. For more details concerning the use of Kriging within the RDO scheme, see Sect. 3.2. Notice that, when using a coarse model that is based on the fine model with coarse discretizations, we have to be sure that the error model e(x) is not modelling the numerical noise but rather the error due to the use of coarse discretizations, i.e. physics that are not fully included in the coarse model. For example, the level of discretization near an air gap in electromagnetic devices can be modelled in a coarse way, so that the physics of the coarse model are not modelled with high fidelity near the air gap, i.e. neglecting fringing effects.

The proposed method has the
same features as the traditional direct optimization method, i.e. start value, stopping criteria, etc., where the internal parameters of the RDO method are self-tunable. The method uses a trust-region strategy for updating the surrogate model. An outline is given:

Step 1: An initial set of N_init samples is generated by an optimal maximin Latin hypercube design (LHD; [27]) around start value x^(0) within the trust region radius Δ^(0): x_i^(0) with i = 1, ..., N_init. Evaluations are then made in the coarse and fine model:

F = [f(x_1^(0)), ..., f(x_{N_init}^(0))]    (7)
C = [c(x_1^(0)), ..., c(x_{N_init}^(0))].    (8)

Step 2: Construction of the surrogate model s^(0)(x) by determining e^(0)(x) in (6) through interpolating x_i^(0) with F − C. We initialize m = 1.

Step 3: Partial run of the direct minimization method (NMS, gradient method, etc.) using surrogate model s^(m−1)(x) with start value x^(0). Updates x^(k), k = 1, ..., K are carried out, depending on the used direct optimization method. The partial run is stopped when x^(K) is near the trust region boundary or when the stopping criteria of the direct minimization method are fulfilled. x^(m) becomes x^(K).

Step 4: Determine the accuracy of the surrogate model in order to determine the new trust region Δ^(m). The accuracy of the previous surrogate model s^(m−1) depends on the fidelity of the coarse model c(x) relative to the fine model and on the accuracy of the error function e(x). The accuracy is determined as follows [13]:

ρ^(m) = [Y(f(x^(m−1))) − Y(f(x^(m)))] / [Y(s(x^(m−1))) − Y(s(x^(m)))].    (9)

On the basis of ρ^(m), we determine Δ^(m), similar to [14].

Step 5: Update of the surrogate model s^(m)(x), or error model e^(m), in the region Δ^(m). A limited number of evaluations R are carried out in the fine and coarse model so as to refine the surrogate model in the next trust region. We add the simulations to the datasets (7), (8).

Step 6: If the termination criteria of the direct optimization method are not satisfied, then go to Step 3 and set m = m + 1.

The computationally
The computationally demanding steps 1 and 4 in the two-level refined direct method can be parallelized so as to improve the acceleration of the procedure. The total time of the RDO scheme (without parallelization) is theoretically:

T_RDO = N_init (T_f + T_c) + Q (K T_c + R T_c + R T_f)    (10)

with Q the total number of iterations in the RDO scheme. Remark that we can assume QK ≈ N_c, with N_c from equation (1), i.e. the total number of iterations for minimizing the surrogate model is close to the total number of iterations for minimizing the coarse model. As long as N_f > N_init + QR, acceleration with respect to the traditional method:

A_2 = T_1L / T_RDO    (11)

is achieved when

s < (N_f - N_init - QR) / (N_init + N_c + QR)    (12)

3.2 Kriging in the RDO scheme

Kriging is a popular technique to interpolate deterministic noise-free data [20, 28]. These Gaussian-process-based surrogate models are compact and cheap to evaluate. Kriging is applied in the RDO scheme for approximating e(x) in equation (6). We describe here in a general way the working of Kriging modelling, where a Kriging model is built starting from a certain model m. Let us consider the N_Kr-dimensional base (training) set

X_B,Kr = {x_kr,1, x_kr,2, ..., x_kr,N_Kr}    (13)

and

m_B,Kr = {m(x_kr,1), m(x_kr,2), ..., m(x_kr,N_Kr)}    (14)

being the associated responses in the model m. Then the Kriging model m_Kr(x) with input vector x, also known as the best linear unbiased predictor (BLUP), can be obtained by

m_Kr(x) = M a + r(x) W^-1 (m_B,Kr - F a)    (15)

M and F are Vandermonde matrices of the test point x and the base set X_B,Kr, respectively. The coefficient vector a is determined by generalized least squares (GLS). r(x) is a 1 x N_Kr vector of correlations between the point x and the base set X_B,Kr, where the i-th element is given by

r_i(x) = w(x, x_kr,i),  i = 1, ..., N_Kr    (16)

W in (15) is an N_Kr x N_Kr correlation matrix, whose entries are given by W_i,j = w(x_kr,i, x_kr,j). In this work, the correlation function is chosen Gaussian:

w(x_i, x_j) = exp( - sum_{k=1..n} theta_k |x_i,k - x_j,k|^2 )    (17)

where x_i,k denotes the k-th component of the vector x_i and n the dimension of the input vector x. The parameters theta_k, k = 1, ..., n, of the correlation function are determined by maximum likelihood estimation (MLE). The optimization for MLE was performed using SQPLab [29]. The regression function is chosen constant, i.e. F = [1 1 ... 1]^T and M the identity matrix.

In this RDO scheme, a Kriging interpolant is built in step 2. The base set is X_B,Kr = {x_1^(0), ..., x_N^(0)}, which is interpolated against m_B,Kr = {f(x_1^(0)) - c(x_1^(0)), ..., f(x_N^(0)) - c(x_N^(0))}. In step 5 of the iterative scheme, the training set is extended with R points, yielding a more refined Kriging interpolant.

4 Optimal design of the magnetic circuit of a switched reluctance motor

The forward problem for the optimal design of the magnetic circuit of the SRM consists in determining the torque profile of the SRM for a set of geometrical variables. The SRM is excited from stator windings, which are concentric coils wound in series on diagonally opposite stator poles, see Fig. 1. The rotor is brushless with no windings. The variable geometrical parameters, as depicted in Fig. 1, are the width of the stator pole t_sp and of the rotor pole t_rp, the internal diameter of the stator yoke D_s,1 and the external diameter of the rotor yoke D_r,0. The external diameter of the stator is D_s,0 = 135 mm, the internal diameter of the rotor is D_r,1 = 25 mm and the air gap width is d = 0.25 mm. The motor can be analyzed using a magnetic equivalent circuit [30]. For accurate prediction of the behavior of the machine, i.e. correct simulation of the torque for different rotor positions, numerical methods such as the Finite Element Method (FEM) are more suitable [10, 31]. The demand for servo-type torque control requires the calculation of the instantaneous torque for each rotor angle [32].
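A minimal ordinary-Kriging predictor implementing (15)-(17) with a constant regression function can be written as follows. This is an illustrative sketch, not the SQPLab-based setup of the paper: the correlation parameters theta are fixed by hand rather than fitted by MLE, and the training data are synthetic.

```python
import numpy as np

def kriging_fit(X, y, theta):
    """Kriging predictor (15) with constant regression F = [1...1]^T and the
    Gaussian correlation (17); theta is fixed here instead of estimated by MLE."""
    n = len(X)
    d2 = (X[:, None, :] - X[None, :, :])**2
    W = np.exp(-np.tensordot(d2, theta, axes=([2], [0])))   # correlation matrix W
    ones = np.ones(n)                                       # constant regression
    Wi_y = np.linalg.solve(W, y)
    Wi_1 = np.linalg.solve(W, ones)
    a = (ones @ Wi_y) / (ones @ Wi_1)                       # GLS coefficient
    resid = np.linalg.solve(W, y - a * ones)                # W^-1 (m_B,Kr - F a)

    def predict(x):
        r = np.exp(-((X - x)**2 @ theta))                   # correlations r(x), eq. (16)
        return a + r @ resid                                # eq. (15) with M = 1
    return predict

# A 3x3 training grid and a smooth test response (both hypothetical)
X = np.array([[i, j] for i in (-1.0, 0.0, 1.0) for j in (-1.0, 0.0, 1.0)])
y = np.array([np.sin(p[0]) + 0.5 * p[1]**2 for p in X])

m_kr = kriging_fit(X, y, theta=np.array([1.0, 1.0]))
print(abs(m_kr(X[4]) - y[4]) < 1e-8)   # True: the BLUP interpolates its base set
```

The interpolation property checked on the last line is exactly what the RDO scheme relies on when the base responses are the differences f(x_i^(0)) - c(x_i^(0)): at every sample the surrogate s(x) = c(x) + e(x) then reproduces the fine model exactly.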
The electromagnetic torque can be computed through the following equation:

T_em(theta_0, I_0) = dW_co(theta, I_0)/dtheta evaluated at theta = theta_0    (18)

for a given rotor angle theta_0 and excitation current I_0. W_co is the so-called co-energy, defined as

W_co = integral of Psi di    (19)

with Psi the flux linkage and i the current, where the integration can be carried out in FEM directly through global integration over the domain of the solution, see e.g. [32]. The forward problem can be solved using the following Poisson equation:

curl(mu^-1 curl A) = J    (20)

for the vector potential A and a certain current density J, which is related to the enforced current I_0 in the windings around the two opposite stator poles. Since the currents J are oriented perpendicularly to the plane of the magnetic circuit (J = J_z being the current density in the z-direction), see Fig. 1, the magnetic induction and field are oriented in this plane. The vector potential thus has only a component perpendicular to the plane of the magnetic circuit: A = A_z. Poisson's equation (20) can in this way be reduced to the following equation in 2D:

div(mu^-1 grad A) = -J    (21)

The FE calculations depend on the geometrical parameters and on the specification of the permeability mu. For the magnetic circuit this permeability is nonlinear, and we use the following single-valued constitutive B-H relationship:

mu = (B_0 / H_0) [1 + (B / B_0)^(m-1)]^-1    (22)

which is determined by the three parameters [H_0, B_0, m] and which originates from the following equation [10, 33]:

H / H_0 = (B / B_0) + (B / B_0)^m    (23)

The fine model consists of very fine discretizations of the motor under study; the number of elements is approximately 250,000. The number of mesh elements in the coarse model is approximately 10 times smaller. The torque is calculated for the excitation current I_0 = 4 A and for 5 different rotor angles theta_0 = 25, 27.5, 30, 32.5, 35 mechanical degrees. During the optimization procedures, the average torque is maximized for a fixed value of I_0.
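Relations (22) and (23) are two forms of the same curve: dividing B by H from (23) reproduces the permeability (22) term by term. A quick numerical check, using illustrative parameter values since the fitted [H_0, B_0, m] are not listed in this excerpt:

```python
import numpy as np

# Illustrative material parameters [H0, B0, m]; placeholders, not the paper's values.
H0, B0, m = 100.0, 1.5, 9.0

def mu(B):
    """Single-valued constitutive relation (22)."""
    return (B0 / H0) / (1.0 + (B / B0)**(m - 1))

def H_of_B(B):
    """Equivalent form (23): H/H0 = (B/B0) + (B/B0)^m."""
    return H0 * ((B / B0) + (B / B0)**m)

B = np.linspace(0.1, 2.0, 50)
print(np.allclose(mu(B), B / H_of_B(B)))   # True: mu = B/H everywhere on the curve
```

For small B the bracket in (22) tends to 1, so mu approaches the constant B_0/H_0, while for B well above B_0 the (B/B_0)^(m-1) term dominates and the permeability collapses, i.e. the curve saturates.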
5 Results and discussion

5.1 RDO of algebraic test functions

In order to validate the RDO scheme, we applied the method to two different algebraic test functions and compared the results with the traditional direct optimization scheme. Firstly, the following algebraic function was minimized:

Y_1(f(x)) = f(x) = -exp(-(x_1^2 + x_2^2))    (24)

with x*_f = [0, 0]^T. The coarse model is similar to the fine model, with altered output (A_c) and input (matrix B_c) space:

c(x) = A_c f(B_c x)    (25)

with optimal value x*_c = [1, -1]^T. Figure 3 shows the values of the iterates x^(k) in the traditional NMS method for minimizing the fine and the coarse model so as to obtain x*_f and x*_c, respectively. The figure additionally shows the alternative path followed in the variable (design) space by the RDO algorithm in order to achieve convergence to x*_s = x*_f. The internal parameters for the RDO algorithm are the following: N_init = 8 and R = 4, with initial trust region D^(0) = 0.2 D*, where D* denotes the whole input space region. The total number of iterations is Q = 8. The total number of evaluations is 40 in the fine model and 110 in the coarse model.

Secondly, we applied the RDO scheme to the minimization of the two-dimensional Rosenbrock test function, with fine model:

Y_2(f(x)) = 100 (x_2 - x_1^2)^2 + (1 - x_1)^2    (26)

The coarse model is again seriously altered in input and output space. Figure 4 compares the convergence history of the traditional method (cost log(Y(f(x^(k))))) with that of the RDO method (cost log(Y(s(x^(k))))) in each k-th iteration. The internal parameters are chosen as follows: N_init = 10 and R = 5, this time with D^(0) = D*. The trust region is reduced during the minimization procedure. It can be observed from Fig. 4 that the minimization procedure follows an alternated path in the parameter space and that the iteratively refined surrogate model is minimized. Near the minimal value of the cost function, the surrogate model s^(Q)(x) is close to the fine model f(x), so that x*_s is approximately x*_f.
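The trust-region control behind Step 4 can be made concrete. The ratio (9) compares the realized fine-model decrease with the decrease predicted by the surrogate; the 0.25/0.75 thresholds and the halving/doubling factors below are the classical trust-region choices, assumed here for illustration since the exact update rule of [14] is not reproduced in the text:

```python
def trust_region_update(Yf_prev, Yf_new, Ys_prev, Ys_new, radius):
    """Agreement ratio (9) and a standard trust-region radius update (assumed rule)."""
    q = (Yf_prev - Yf_new) / (Ys_prev - Ys_new)   # eq. (9)
    if q < 0.25:        # surrogate predicted the fine-model decrease poorly
        return q, 0.5 * radius
    if q > 0.75:        # good agreement: allow a larger step next time
        return q, 2.0 * radius
    return q, radius

q, r = trust_region_update(1.00, 0.40, 1.00, 0.37, radius=0.2)
print(round(q, 3), r)   # prints: 0.952 0.4  (good agreement, the region grows)
```

A ratio near 1 means the Kriging-corrected surrogate tracked the fine model over the last partial run, so larger steps can be trusted; a small or negative ratio shrinks the region, which is the behaviour visible in the decreasing radius of Fig. 5a.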
The value of the trust region ratio D^(k)/D* for each iteration is shown in Fig. 5a, and the minimal value of the cost function in the fine model, evaluated in (7) for the first iteration and in step 5 for the next iterations, is shown in Fig. 5b. Convergence is observed after Q = 8 iterations, with 50 evaluations in the fine model and 350 in the coarse model.

5.2 RDO scheme for the optimal design of a SRM

The computational time for one forward fine model is T_f^(np) = 21.2 min and for one coarse model T_c = 8.1 min. When using preconditioning of the fine model based on the coarse model, i.e. the solution of the fine model is obtained by starting from the coarse model solution, the total computational time is T_f^(wp) = 7.9 min. The superscript np denotes that no preconditioning was performed, while wp denotes that preconditioning was performed. Preconditioning can be performed in steps 1 and 3.

Fig. 5: Minimization of Y_2 using RDO, with (a) the trust region in each iteration and (b) the minimal value of the fine model evaluations in step 5.

The cost function implemented for the optimal design maximizes the average torque over the rotor angles:

Y = - sum over theta_0 of T_em(theta_0, I_0)    (27)

with the rotor angles theta_0 specified in Sect. 4. The intermediate surrogate model (6) is modelled with e(x) being a Kriging model [28]. In order to guarantee that numerical noise between the fine and the coarse model is not interpolated, a co-Kriging model could be implemented [34]. However, the numerical simulations showed that a Kriging model was sufficient for obtaining a highly accurate intermediate surrogate model. The internal parameters taken for the RDO scheme are the following: N_init = 24, D^(0) = D*, R = 12. The minimization in step 3 is carried out through sequential quadratic programming (SQP) [35]. The identified optimal geometrical parameters x*_RDO are listed in Table 1. These optimal parameters correspond well with the optimal parameters x*_SQP obtained using SQP with the fine model only. Figure 6 depicts the minimal value of the fine model evaluated in (7) and step 5; convergence is obtained after Q = 5 iterations. We also added the trust region radius to this figure. Figure 7 shows the convergence history of the surrogate model in the
partial run of the direct optimization method (step 3 of the RDO scheme). The total number of fine model evaluations N_f and of coarse model evaluations in the RDO scheme and in the one-level direct optimization SQP method is given in Table 2. The total computational time is also given; the total time needed by RDO(np) is 1.6 times smaller. We observe that when using the preconditioned fine model evaluations (RDO(wp)), the time needed in steps 1 and 5 can be reduced, so that the total computational time for optimization is approximately 2 times smaller.

Figure 8 shows the percentual error between the fine and coarse model, ||F - C|| / ||F||, in step 1 of the RDO algorithm. This figure shows the error for 5 different rotor angles and shows that the error depends strongly on the rotor angle; this is because the discretization strongly influences the accuracy of the computer model for angles of 25-30 degrees. When we compare this with Fig. 9, which shows the percentual error between the fine and surrogate model, ||F - S|| / ||F||, in step 5 of the RDO scheme, we observe that the surrogate model has a relatively good quality. Acceleration of the RDO scheme could possibly be achieved by evaluating the fine model mainly in the region where the error of the Kriging model is large, as in the EGO algorithm.

It is difficult to provide a correct quantitative comparison between the space mapping methodology (e.g. ASM)

Table 1: Optimal parameters of the SRM machine

Parameters   t_sp (mm)   D_si (mm)   t_rp (mm)   D_re (mm)
x*_RDO       17.1        109.4       20.12       43.79
x*_SQP       17.0        109.2       19.89       44.04
Enneagram (Nine Personality Types): encyclopedia entry
The Enneagram (also sometimes called Enneagon) is a nine-pointed geometric figure. The term derives from two Greek words: ennea (nine) and grammos (something written or drawn). The introduction of the Enneagram figure is credited to G.I. Gurdjieff, who presented it in his teachings as a universal symbol displaying the fundamental cosmic laws. Gurdjieff did not disclose where the figure originally came from, beyond claiming that it was the emblem of secret societies. The Enneagram figure is now used for various purposes in a number of different teaching systems. In more recent years the figure has mostly come into prominence through its use in what is often called the Enneagram of Personality, whose fundamental concepts are attributed to Oscar Ichazo.

Enneagrams shown as sequential stellations

In geometry, an enneagram is a regular nine-sided star polygon, using the same points as the regular enneagon but connected in fixed steps. It has two forms, {9/2} and {9/4}, connecting every 2nd and every 4th point respectively. There is also a star figure, {9/3}, made from the regular enneagon points but connected as a compound of three equilateral triangles.

The modern use of the Enneagram figure is generally credited to G.I. Gurdjieff and his Fourth Way teaching tradition. His teachings concerning the figure and what it represents do not have any direct connection to the later teachings by Oscar Ichazo and others concerning ego-fixations or personality types. The enneagram figure is a circle with nine points. Inscribed within the circle is a triangle taking in points 9, 3 and 6. The inscribed figure resembling a web links the other six points in the cyclic figure 1-4-2-8-5-7.
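The {9/2} and {9/4} constructions can be generated mechanically: start at any point and repeatedly step k places around the enneagon. A small sketch (the function name is ours, not from the text):

```python
def star_polygon_path(n, k):
    """Visit the n points of a regular n-gon in steps of k (the {n/k} star polygon)."""
    path, p = [0], k % n
    while p != 0:
        path.append(p)
        p = (p + k) % n
    return path + [0]          # close the path back at the starting point

print(star_polygon_path(9, 2))   # [0, 2, 4, 6, 8, 1, 3, 5, 7, 0]: one unicursal figure
print(star_polygon_path(9, 4))   # [0, 4, 8, 3, 7, 2, 6, 1, 5, 0]: likewise for {9/4}
print(star_polygon_path(9, 3))   # [0, 3, 6, 0]: closes after a triangle
```

Steps of 2 and 4 visit all nine points before closing, which is why {9/2} and {9/4} are true star polygons, whereas a step of 3 closes after only three points, so {9/3} must be drawn as a compound of three triangles.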
The rules of the magic number 142857 can be applied to the enneagram's explanations of processes. According to Gurdjieff, the enneagram is the symbol of the "law of seven" and the "law of three" combined (the two fundamental laws which govern the universe), and therefore the enneagram can be used to describe any natural whole phenomenon, cosmos, process in life, or any other piece of knowledge.

A basic example of the possible usage of the enneagram is that it can be used to illustrate Gurdjieff's concept of the evolution of the three types of 'food' necessary for a man: ordinary food, air and impressions. Each point on the enneagram in this case would represent the stage and the possibility of further evolution of food at a certain stage in the human body. Most processes on the enneagram are represented through octaves, where the points serve as the notes; this concept is derived from Gurdjieff's idea of the law of seven. In an octave, the developing process comes to a critical point (one of the triangle points) at which help from outside is needed for it to continue rightly. This concept is best illustrated on the keys of the piano, where every white key represents an enneagram web point. The adjacent white keys which are missing a black key (half note) in between represent the enneagram web points which have a triangle point in between. For the process to pass from one of these points to the next, an external push is required.

In the enneagram a process is depicted as going right around the circle, beginning at 9 (the ending point of a previous process). The process can continue until it reaches point 3, where external aid is needed in order for it to continue. If it does not receive the 'help', the process will stop evolving and will devolve back into the form from which it evolved. The process continues until point 6, and later 9, where a similar "push" is needed.
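The "magic" referred to above is the cyclic property of 142857, the repeating period of 1/7: multiplying it by 1 through 6 merely rotates its digits. A quick check:

```python
n = 142857
# All six cyclic rotations of the digit string "142857"
rotations = {str(n)[i:] + str(n)[:i] for i in range(6)}

for k in range(1, 7):
    assert str(k * n) in rotations   # each multiple is a rotation of the digits
    print(k, k * n)
```

The loop prints 142857, 285714, 428571, 571428, 714285 and 857142, each a rotation of the original digit sequence; this is the arithmetic fact underlying the recurring 1-4-2-8-5-7 pattern on the figure.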
If the process passes point 9, the initial process will end, giving birth to a new one.

The line of development associated with the Fourth Way grew from the writings of Gurdjieff's students, principally P.D. Ouspensky, Maurice Nicoll, J.G. Bennett and Rodney Collin, who developed Gurdjieff's ideas and left their own accounts. There is an extensive bibliography devoted to the Gurdjieff-Ouspensky tradition. A Gurdjieff foundation exists which claims an authority based on a line of succession directly through Gurdjieff. The foundation preserves Gurdjieff's music and movements and continues its own work with the Enneagram figure. The enneagram as a structured process was studied by John G. Bennett and his associates. Bennett showed how it applied to something as mundane as a restaurant as well as to something as spiritual as the Beatitudes. It is currently being used to explicate the idea of self-organization in management.

The Enneagram of Personality is derived (as established in U.S. court, 970 F.2d 1067, 1075, 2nd Circuit, 1992) from partial understandings of the insights of Oscar Ichazo, the Bolivian-born founder of the Arica School (established in 1968). No evidence has appeared before Ichazo's offerings of the Enneagram figure being used with concepts such as "ego fixations" or "personality types", or indeed in any way where each point is described such that it can be viewed as a typology. All historical documentation of this kind of terminology appears only after Ichazo's original teachings. Ichazo claims that sometime in the 1950s he received insight into how certain mechanistic and repetitive thought and behavior patterns can be understood in connection with the Enneagram figure and with what he called Trialectic logic, as part of a complete and integrated model of the human psyche.
The purpose of Ichazo's teachings was to help people transcend their identification with, and the suffering caused by, their own mechanistic thought and behavior patterns.

The theory was founded upon the basic premise that all life seeks to continue and perpetuate itself, and that the human psyche must follow the same common laws of reality. From this, Ichazo defined three basic human instincts for survival (Conservation, Relations and Adaptation) and two poles of attraction to self-perpetuation (Sexual and Spiritual). With a psyche in a state of unity as a baseline prototypical model, the Fixations were defined as aberrations from this baseline, much as the DSM (Diagnostic and Statistical Manual of Mental Disorders) is an observationally based tool for recognizing personality disorders. In fact, Ichazo has related the Fixations to the DSM categories to show that Fixations are the precursor to mental illness. Each Fixation is diagnosed from the particular experience of psychological trauma a child suffers when the child's expectations are not met in each respective Instinct. Since a child is completely self-centered in its expectations, it is inevitable that the child will experience disappointment of expectation, viewed by the child through one of the three fundamental attitudes (attracted, unattracted or disinterested being the only possible attitudes), and thus experience trauma and begin to form mechanistic thought and behavior patterns in an attempt to protect itself from a recurrence of the trauma. This basic understanding of three fundamental Instincts and three possible attitudes, along with the understanding that a human being can be in a state of unity, analyzed with Trialectic logic, forms the foundation upon which the theory of Fixations is built. As such, the theory of ego Fixations has a particular foundation which can be tested.
The idea of "Personality Types", by contrast, is an invention of intuition without any particular foundation beyond the theory of ego Fixations, and as such can be interpreted to mean whatever any of the Enneagram of Personality proponents chooses it to mean. This explains why there is no specific, solid agreement among the various proponents that "Personality" is anything objective, or anything more than a proposition that obfuscates human suffering. By understanding one's Fixations and through self-observation, the hold on the mind, and the suffering caused by the Fixations, is reduced and even transcended. There was never an intention or purpose in Ichazo's original work to use this knowledge to reinforce or manipulate what is essentially a source of human suffering. Therefore almost all later interpretations of the Enneagram of Personality are viewed by Ichazo as unfounded, and therefore misguided and both psychologically and spiritually harmful (in the sense of coming to see one's process as such), in light of his original intentions. In other words, the Enneagram Movement can be considered, in most cases, to actually promote the strengthening of the basis for the personality disorders described in the DSM.

From the 1970s Ichazo's partial and misunderstood Enneagram teachings were adapted and developed by a number of others, first by the Chilean-born psychiatrist Claudio Naranjo, who was a member of a training program with Ichazo in Arica, Chile, for some months in 1969. Naranjo taught his understanding of the Enneagram of Personality to a number of his American students, including some Jesuit priests who then taught it to seminarians.

It is believed by Enneagram theorists that the points of the Enneagram figure indicate a number of ways in which nine principal ego-archetypal forms or types of human personality (also often called "Enneatypes") are psychologically connected.
These nine types are often given names that indicate some of their more distinctively typical characteristics. Such names are insufficient to capture the complexities and nuances of the types, which require study and observation to understand in depth. Some brief descriptions of the Enneatypes are as follows:

One: Reformer, Critic, Perfectionist - This type focuses on integrity. Ones can be wise, discerning and inspiring in their quest for the truth. They also tend to dissociate themselves from their flaws, or what they believe are flaws (such as negative emotions), and can become hypocritical and hyper-critical of others, seeking the illusion of virtue to hide their own vices. The One's greatest fear is to be flawed, and their ultimate goal is perfection. The corresponding "deadly sin" of Ones is Anger and their "holy idea" or essence is Holy Perfection. Under stress Ones express qualities of Fours, and when relaxed, qualities of Sevens.

Two: Helper, Giver, Caretaker - Twos, at their best, are compassionate, thoughtful and astonishingly generous; they can also be prone to passive-aggressive behavior, clinginess and manipulation. Twos want, above all, to be loved and needed, and fear being unworthy of love. The corresponding "deadly sin" of Twos is Pride and their "holy idea" or essence is Holy Will. Under stress Twos express qualities of Eights, and when relaxed, qualities of Fours.

Three: Achiever, Performer, Succeeder - Highly adaptable and changeable. Some walk the world with confidence and unstinting authenticity; others wear a series of public masks, acting the way they think will bring them approval and losing track of their true self. Threes are motivated by the need to succeed and to be seen as successful. The corresponding "deadly sin" of Threes is Deceit and their "holy idea" or essence is Holy Law.
Under stress Threes express qualities of Nines, and when relaxed, qualities of Sixes.

Four: Romantic, Individualist, Artist - Driven by a desire to understand themselves and find a place in the world, Fours often fear that they have no identity or personal significance. They embrace individualism and are often profoundly creative and intuitive. However, they have a habit of withdrawing to internalize, searching desperately inside themselves for something they never find, and creating a spiral of depression. The corresponding "deadly sin" of Fours is Envy and their "holy idea" or essence is Holy Origin. Under stress Fours express qualities of Twos, and when relaxed, qualities of Ones.

Five: Observer, Thinker, Investigator - Fives are motivated by the desire to understand the world around them, specifically in terms of facts. Believing they are only worth what they contribute, Fives have learned to withdraw, to watch with keen eyes and speak only when they can shake the world with their observations. Sometimes they do just that. However, some Fives are known to withdraw from the world, becoming reclusive hermits and fending off social contact with abrasive cynicism. Fives fear incompetency or uselessness and want to be capable and knowledgeable above all else. The corresponding "deadly sin" of Fives is Avarice and their "holy idea" or essence is Holy Omniscience. Under stress Fives express qualities of Sevens, and when relaxed, qualities of Eights.

Six: Loyalist, Devil's Advocate, Defender - Sixes long for stability above all else. They exhibit unwavering loyalty and responsibility, but once betrayed they are slow to trust again. They are prone to extreme anxiety and passive-aggressive behavior. Their greatest fear is to lack support and guidance. The corresponding "deadly sin" of Sixes is Cowardice and their "holy idea" or essence is Holy Faith and Strength. Under stress Sixes express qualities of Threes, and when relaxed, qualities of Nines.
There are two kinds of Sixes: phobic and counterphobic. Phobic Sixes have a tendency to run or hide from things they fear, while counterphobic Sixes are more likely to confront their fears.

Seven: Enthusiast, Adventurer, Materialist, Epicure - Sevens are adventurous and busy with many activities, with all the energy and enthusiasm of the Puer Aeternus. At their best they embrace life for its varied joys and wonders and truly live in the moment; at their worst they dash frantically from one new experience to another, too scared of disappointment to actually enjoy themselves. Sevens fear being unable to provide for themselves or to experience life in all of its richness. The corresponding "deadly sin" of Sevens is Gluttony and their "holy idea" or essence is Holy Wisdom. Under stress Sevens express qualities of Ones, and when relaxed, qualities of Fives.

Eight: Leader, Protector, Challenger - Eights value personal strength and desire to be powerful and in control. They concern themselves with self-preservation. They are natural leaders, who can be either friendly and charitable or dictatorially manipulative, ruthless, and willing to destroy anything in their way. Eights seek control over their own lives and destinies, and fear being harmed or controlled by others. The corresponding "deadly sin" of Eights is Lust and their "holy idea" or essence is Holy Truth. Under stress Eights express qualities of Fives, and when relaxed, qualities of Twos.

Nine: Mediator, Peacemaker, Preservationist - Nines are ruled by their empathy. At their best they are perceptive, receptive, gentle, calming and at peace with the world. On the other hand, they prefer to dissociate from conflicts; they indifferently go along with others' wishes, or simply withdraw, acting via inaction. They fear the conflict caused by their ability to simultaneously understand opposing points of view, and seek peace of mind above all else.
The corresponding "deadly sin" of Nines is Sloth and their "holy idea" or essence is Holy Love. Under stress Nines express qualities of Sixes, and when relaxed, qualities of Threes.

Whilst a person's Enneatype is determined by only one of the ego-fixations, their personality characteristics are also influenced and modified in different ways by all of the other eight fixations.

Most Enneagram teachers and theorists believe that one of the principal kinds of influence and modification comes from the two points on either side of a person's Enneatype. These two points are known as the 'Wings'. Observation seems to indicate, for example, that Ones will tend to manifest some characteristics of both Nines and Twos. Some Enneagram theorists believe that one of the Wings will always have a more dominant influence on an individual's personality, while others believe that either Wing can be dominant at any particular time, depending on the person's circumstances and development. This aspect of Enneagram theory was originally suggested by Claudio Naranjo and then further developed by some of the Jesuit teachers.

The lines of the triangle and hexagon are believed to indicate psychological dynamics between the connected points, depending on whether a person is in a more stressed or a more secure and relaxed state. The connecting points on the lines are therefore usually called the 'Stress Points' and 'Security Points'. In Don Riso's teachings these lines are also called the 'Directions of Integration' and the 'Directions of Disintegration', as he believes that the security points also indicate the 'direction' towards greater psychological wellbeing and the stress points the direction towards psychological breakdown. The more traditional understanding of the stress and security points is that when people are in a more secure or relaxed state they will tend to express aspects of the 'security' or 'integration' type associated with their main type, and aspects of the other direction when stressed.
Relaxed or secure Ones, for instance, will tend to manifest some of the more positive aspects of the Seven personality type, Ones tending to be highly self-inhibitory whereas Sevens give themselves permission to enjoy the moment. On the other hand, stressed Ones will express some of the more negative aspects of the Four personality, particularly the obsessive introspection; they also share a certain amount of self-loathing and self-inhibition. Another emerging belief about these connections between points is that people may access and express the positive and negative aspects of both points, depending on their particular circumstances.

The connecting points are often indicated on Enneagram figures by the use of arrows and are sometimes also called 'Arrow Points'. The sequence of stress points is 1-4-2-8-5-7-1 for the hexagon and 9-6-3-9 for the triangle. The security points sequence runs in the opposite direction (1-7-5-8-2-4-1 and 9-3-6-9). These sequences are found in the repeating decimals resulting from division by 7 and 3, respectively, both of those numbers being important in Gurdjieff's system (1/7 = 0.1428571...; 1/3 = 0.3333..., 2/3 = 0.6666..., 3/3 = 0.9999...).

Each type also has three main instinctual subtypes: the Self-Preservation, Sexual and Social subtypes. Because each point is different, it may be perceived as having a tendency toward one subtype or another.
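The hexagon and triangle sequences can be recovered directly by long division, matching the decimals quoted above (the helper function is ours):

```python
def repeating_digits(num, den, count):
    """First `count` decimal digits of num/den, computed by long division."""
    digits, r = [], num % den
    for _ in range(count):
        r *= 10
        digits.append(r // den)   # next decimal digit
        r %= den                  # carry the remainder forward
    return digits

print(repeating_digits(1, 7, 6))   # [1, 4, 2, 8, 5, 7]: the hexagon sequence
print(repeating_digits(1, 3, 3))   # [3, 3, 3]: division by 3 fixes the triangle points
```

Any numerator coprime to 7 yields a rotation of the same six digits (2/7 = 0.285714..., and so on), which is why the web of six points forms a single closed cycle.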
It requires keen observation and understanding to discover a person's tendency toward a particular subtype.

▪ Self-Preservation subtypes pay most attention to physical survival needs.
▪ Sexual subtypes focus most on intimacy and one-to-one relationships.
▪ Social subtypes care most about others, in groups and communities.

The characteristic "deadly sin" of each type can be described in more detail:

▪ One - Anger, as the frustration that comes from Ones working hard to do things right while the rest of the world does not care about doing things right and does not appreciate the sacrifices and efforts Ones have made.
▪ Two - Pride, as self-inflation of the ego, in the sense of Twos seeing themselves as indispensable to others and as having no needs, whilst also being needed by others.
▪ Three - Deceit, in the misrepresentation of self by marketing and presenting an image valued by others rather than presenting an authentic self.
▪ Four - Envy of someone else reminds Fours that they can never be what another person is, reawakening their sense of self-defectiveness.
▪ Five - Avarice, as the hoarding of resources in an attempt to minimize their needs in the face of a world that takes more than it gives, thus isolating Fives from the world.
▪ Six - Fear, often in the form of a generalized anxiety that cannot find an actual source of fear.
Sixes may wrongly identify a source of fear through projection, possibly seeing enemies and dangers where there are none.
▪ Seven - Gluttony, not in the sense of eating too much but, rather, of sampling everything the world has to offer (breadth) and not taking the time for richer experience (depth).
▪ Eight - Lust, in the sense of wanting more of what Eights find stimulating, to a point beyond which most people would feel overwhelmed and stop.
▪ Nine - Sloth, or laziness in discovering a personal agenda, instead choosing the less problematic strategy of just going along with other people's agendas.

Because of differences among teachers in their understanding of the personality characteristics of the nine types, and of the more theoretical aspects of Enneagram dynamics, some skeptics argue that more research needs to be done to test the Enneagram as an empirically valid typology. While some believe that current research does not support the Enneagram's validity (especially regarding the concepts of Wings and the Stress and Security Points), others believe that because of its complex and 'spiritual' nature the Enneagram typology cannot be accurately evaluated by conventional empirical methods.

Recently published research (2005) based on a type indicator questionnaire developed by Don Riso and Russ Hudson [3] claims to have demonstrated that the nine Enneagram types are "real and objective". Katherine Chernick Fauvre also claims to have statistically validated research indicating that the three Instinctual Subtypes are real and objective.

Concerning the brain, at least three different models have been proposed for identifying a basis for the Enneagram in neuroscience:

▪ Asymmetry in PFC and amygdala activity
▪ Triune brain
▪ Differential neurotransmitter activity

Concerning the first brain model, a partially finished book entitled "Personality and the Brain" was posted for free download in December 2005.
This book, written by a self-described "hacker", presents a model for linking the Enneagram to the current findings of neuroscience regarding prefrontal cortex (PFC) and amygdala asymmetry. Concerning the second brain model, The Enneagram and the Triune Brain offers a different theory on the neuroscience of the Enneagram. This article was originally published in the October 2000 issue of the Enneagram Monthly and links the Enneagram with Paul MacLean's triune brain theory. In his 1996 book The Emotional Brain: The Mysterious Underpinnings of Emotional Life (at pages 92-103 of the paperback version), neuroscientist Joseph LeDoux rejected MacLean's triune brain model to the extent that this model limits emotional functions to what MacLean called the "limbic system". LeDoux explains that emotional functions are not limited to the limbic system (e.g. areas of the neocortex also play various roles); conversely, the limbic system is not limited to emotional functions (e.g. that area also processes certain cognitive functions). If LeDoux's criticisms of the triune brain theory are correct, this would obviate this second model as a useful basis for the Enneagram in neuroscience. Concerning the third brain model, the paper The Enneagram and Brain Chemistry offers a theory that the different Enneagram types derive from different activity levels of the neurotransmitters serotonin, norepinephrine, and dopamine. Some psychologists and researchers regard the Enneagram as a pseudoscience that uses an essentially arbitrary set of personality dimensions to make its characterizations. Such critics assert that claims for the Enneagram's validity cannot be verified using the empirical scientific method as they lack falsifiability and cannot be disproven.
In this respect, the Enneagram is not considered to be any different from many other typological models, such as that of Carl Gustav Jung, on which the Myers-Briggs Type Indicator is based. The Pontifical Council for Culture and the Pontifical Council for Interreligious Dialogue of the Roman Catholic Church have also expressed concerns about the Enneagram when it is used in a religious context, because it is claimed that it "introduces an ambiguity in the doctrine and the life of the Christian faith". [4] Some critics suspect that the claims for the Enneagram's validity may be attributed to the Forer effect, the tendency for people to believe a supposedly tailored description of themselves even when the description has been worded in very broad terms.
GeNIeRate: An Interactive Generator of Diagnostic Bayesian Network Models
Pieter Kraaijeveld
THESIS submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science
Knowledge Based Systems Group, Faculty of Electrical Engineering, Mathematics and Computer Science
Graduation Committee: Dr. Drs. L.J.M. Rothkrantz, Dr. A.H.J. Oomes, Dr. K. van der Meer, Dr. Ir. Marek J. Druzdzel
June 13, 2005

Abstract

Constructing diagnostic Bayesian network models is a complex and time consuming task. In this thesis, we propose a methodology to simplify and speed up the design of very large Bayesian network models. The models produced using our methodology are based on two simplifying assumptions: (1) the structure of the model has three layers of variables and (2) the interaction among the variables can be modeled by canonical models such as the Noisy-MAX gate. The methodology is implemented in an application named GeNIeRate, which aims at supporting construction of diagnostic Bayesian network models consisting of hundreds or even thousands of variables. Preliminary qualitative evaluation of GeNIeRate shows great promise. The prediction is that GeNIeRate can reduce the model building time for an inexperienced Bayesian network model builder by 20-30%. We conducted an experiment comparing our approach to traditional techniques for building Bayesian network models by rebuilding a diagnostic Bayesian network model for liver disorders, HEPAR-II. We found that the performance of the model created with GeNIeRate is better than the performance of the original HEPAR-II.

Acknowledgements

I want to thank several people who made it possible for me to do my thesis work at the Decision Systems Laboratory (DSL) of the University of Pittsburgh. First of all, I want to thank my advisor at the University of Pittsburgh, Dr. Ir. Marek J. Druzdzel. Marek showed me how exciting research can be.
During my six months in Pittsburgh I learned a lot from him. Without Marek it would not have been possible for me to publish my first paper. Second, I want to thank Dr. Drs. L.J.M. Rothkrantz, who made it possible for me to visit the University of Pittsburgh and who has provided me with guidance and feedback in my work. Next, I also want to thank several people who helped me with my thesis work: Tomek Loboda, Doug Campbell, Adam Zagorecki, Agnieszka Oniśko, Tomek Sowinski, John Mark Agosta, Thomas Gardos, Mark Voortman, Hanna Wasyluk and, last but not least, my girlfriend Anna-Gerdien Bruna, who supported me during my stay in Pittsburgh and helped make GeNIeRate look better with her excellent graphical design skills. Furthermore, my stay at the University of Pittsburgh from October 2004 to March 2005 would not have been possible without the financial support of:
• My parents, Kees & Jetty Kraaijeveld
• Fundatie van de Vrijvrouwe van Renswoude
• Stimuleringsfonds voor Internationale Universitaire Samenwerkingsrelaties (STIR)
• Faculteitsfonds
• Universiteitenfonds Delft

Contents

1 Introduction
1.1 Context
1.2 Problem domain
1.3 Objectives and assignment
1.4 Overview of the thesis
2 Background material
2.1 Bayesian networks
2.2 Automating diagnosis
2.2.1 Diagnosis
2.2.2 Automated diagnosis
2.2.3 Diagnostic Bayesian network models
2.3 Canonical interaction models
2.3.1 The Noisy-OR gate
2.3.2 The Noisy-MAX gate
3 Related work
3.1 BATS Author
3.2 A user-friendly development tool for medical diagnosis based on Bayesian networks
3.3 The Design for Serviceability (DFS) Tool
3.4 MEDICUS
3.5 Nokia: Knowledge Acquisition Tool (KAT)
3.6 Evaluation
4 The BN3M model
4.1 Qualitative design
4.2 Quantitative design
4.3 Theoretical evaluation and methodology
5 GeNIeRate
5.1 SMILE and GeNIe
5.2 System design
5.2.1 Process
5.2.2 Application architecture
5.3 Graphical User Interface
5.3.1 GUI design
5.3.2 GeNIeRate's GUI
6 Empirical evaluation
6.1 Qualitative evaluation
6.1.1 DSL members
6.1.2 Professional consultant
6.2 Quantitative evaluation
6.2.1 The Hepar-II model
6.2.2 HEPAR-II-BN3M
6.2.3 The diagnostic performance
6.2.4 Results and Discussion
7 Conclusions and Future Research
7.1 Conclusions
7.2 Future Research
A Learning CPT parameters from a data set
B Screen shots of GeNIeRate
C GeNIeRate tutorial
D Paper version

Chapter 1 Introduction

This thesis describes my research work done at the Decision Systems Laboratory (DSL) of the School of Information Science (SIS) of the University of Pittsburgh. Basically, I developed an interactive application to speed up building large diagnostic Bayesian networks, based on a methodology with several simplifying assumptions.

1.1 Context

Diagnosis is the process of finding the root cause of a system failure given a set of system observations: symptoms, sensor readings, error codes, test results, historical findings, etc. While diagnosis applied in the medical domain is well known, it is also applied in industry, management and various other domains. Over the years researchers have tried to automate diagnosis with various techniques. These techniques help finding the cause(s) of the failure faster and, therefore, minimize the loss caused by that failure. The modeling methods include, for example, fault trees, rule bases, and probabilistic models. In the last two decades, the probabilistic models found great interest. One prominent tool for modeling diagnosis using probability theory is known as Bayesian networks, which we will use in this thesis as the modeling technique. Bayesian networks (BN) [Pearl, 1988] are acyclic directed graphs with each node representing a variable and each arc typically representing a causal relation among two variables. Although exact and approximate inference in Bayesian networks are both worst-case NP-hard [Cooper, 1990; Dagum and Luby, 1997], they still perform well for practical diagnostic models consisting of several hundreds or even
thousands of variables.

A BN consists of a qualitative and a quantitative part. The qualitative part is an acyclic directed graph reflecting the causal structure of the domain; the quantitative part represents the joint probability distribution over its variables. Every variable has a conditional probability table (CPT) representing the probabilities of each state given the states of the parent variables. If a variable does not have any parent variables in the graph, the CPT represents the prior probability distribution of the variable. A BN is able to calculate the posterior probability of an uncertain variable given some evidence obtained from related variables. This is called evidence propagation or belief updating. This property and the intuitive way BNs model complex relationships among uncertain variables make it a very suitable technique for building diagnostic models. Diagnosis is quite likely the most successful practical application of BNs.

1.2 Problem domain

While the existing diagnostic BN models perform well, the technique is still not widely used and accepted. One of the main reasons for this is that building a BN model is a laborious and time consuming task. During the model building process, both the qualitative and quantitative parts of the BN have to be designed. This can be done in three different ways: (1) learning both structure and parameters from data without human interaction, (2) consulting a domain expert to design the structure and the parameters, or (3) combining learning from data and expert knowledge. In order to learn successful diagnostic BN models from data one would need a very large data set, which is rarely available for diagnostic models. It is never available for new devices or devices that are designed for high reliability, such as, for example, airplanes, nuclear reactors, oil refineries, etc. So building a diagnostic BN model will most of the time come down to an interaction of a knowledge engineer and a domain expert. They will consult
technical manuals, test procedures, and repair databases to define the variables in that domain, determine the interactions among them, and to elicit the parameters. To give an idea of the time needed to develop a diagnostic BN model: the construction of the HEPAR-II model [Oniśko et al., 2001], used to diagnose liver disorders, took more than 300 hours, of which roughly 50 hours were spent with domain experts. The model consists of 70 variables and the numerical parameters were learned from a data set of patient cases. In another diagnostic Bayesian network, the PATHFINDER system [Heckerman et al., 1992], used for lymph node pathology, 14000 conditional probabilities had to be assessed by expert pathologists. One could imagine that a lot of errors will be made when eliciting such a big amount of probabilities, because the expert pathologists tire out. Software to build Bayesian networks is widely available. However, most of this software is developed for educational use or for users who are BN experts. Thus, for example, if a medical doctor decides to build a BN to diagnose lung cancer, the doctor will need a BN model building expert to build the model using this software. Now, as discussed above, the BN expert will help the doctor to identify the variables in the domain, create the interactions among them, and elicit the probabilities. The elicitation of the probabilities is a very difficult task. The BN expert will help the doctor to quantify the probabilities by asking questions. Most of the time a domain expert does have some feeling about the probabilities in a relative sense (X bigger/smaller than Y) but finds it hard to quantify them in an absolute manner. Most BN model building software does not provide any visual feedback on the elicited probabilities, like, for example, thickness of the arcs, to indicate the impact of the elicited probabilities. This means that when a large model is created which does not perform as expected, it is very hard to
debug the model and find the erroneous conditional probabilities.

1.3 Objectives and assignment

This being said, the main objective to be met in this thesis is to develop a methodology for building large diagnostic Bayesian network models in a fast and easy way. The methodology has to be such that a diagnostic Bayesian network model, which is complete and comprehensive for its domain, can be built by users who are not necessarily Bayesian network experts. The methodology has to be implemented in an application and has to meet some goals listed below. The application must:
• be a general purpose application, which means it can be applied to a variety of diagnostic domains.
• have an intuitive graphical user interface which helps a domain expert without knowledge of Bayesian networks to construct a diagnostic model.
To fulfill these requirements, we made the following two simplifying assumptions that will reduce the complexity of the models. The application, therefore, must:
• generate Bayesian networks which have a simple structure, consisting of only a few layers of variables.
• use special models of interaction among variables that (1) minimize the number of numerical parameters that need to be elicited from experts and (2) can be used in specialized belief updating algorithms.
With these simplifications some theoretical modeling power and precision will be sacrificed. However, our study shows that the resulting models will not be significantly less accurate in terms of diagnostic performance, while being much easier and faster to build. To summarize the objectives into an assignment, the assignment of this thesis is to:
• design a predefined Bayesian network model, which is based on the simplifying assumptions described above, and can be used as a “template” model to build very large diagnostic Bayesian networks.
• design a methodology to build very large diagnostic BN models using the designed “template”.
• implement a prototype application, based on the methodology, to support the
user, who is not necessarily a BN expert, to build a very large diagnostic BN model.
• test the ideas of the “template”, the methodology, and the prototype, and test whether the diagnostic performance of the created models suffered because of the simplifying assumptions they are based on.

1.4 Overview of the thesis

The remainder of this thesis report is structured as follows. Chapter 2 discusses some background material about Bayesian networks, diagnosis, and specialized canonical interaction models. Chapter 3 discusses related work on this topic. Chapter 4 explains our proposed “template” of diagnostic BN models, and the methodology to build them. Chapter 5 gives a complete description of our prototype application: GeNIeRate. Chapter 6 discusses a qualitative study of GeNIeRate, and a quantitative empirical study comparing our methodology to traditional model building techniques. Finally, Chapter 7 states the conclusions and discusses different ideas for future research.

Chapter 2 Background material

This chapter provides some background material; it is a survey about Bayesian networks (Section 2.1), automating diagnosis (Section 2.2), and canonical interaction models (Section 2.3).

2.1 Bayesian networks

Bayesian networks [Pearl, 1988] are acyclic directed graphs in which nodes represent random variables and arcs represent direct probabilistic dependencies among them. A Bayesian network encodes the joint probability distribution over a set of variables {X1, ..., Xn}, where n is finite, and decomposes it into a product of conditional probability distributions over each variable given its parents in the graph. In case of nodes with no parents, the prior probability is used. The joint probability distribution over {X1, ..., Xn} can be obtained by taking the product of all of these prior and conditional probability distributions:

    Pr(x1, ..., xn) = Π_{i=1}^{n} Pr(xi | Pa(xi)).    (2.1)

Figure 2.1 shows a highly simplified example Bayesian network modeling causes of a car engine failing to start. The variables in this model are: Age of the car (A), dead
Battery (B), dirty Connectors (C), Engine does not start (E) and Radio does not work (R). For the sake of simplicity, we assumed that each of these variables is binary. For example, R has two outcomes, denoted r and r̄, representing “Radio fails” and “Radio works,” respectively. A directed arc between B and E denotes the fact that whether or not the battery is dead will impact the likelihood of the engine failing to start. Similarly, an arc from A to B denotes that the age of the car influences the likelihood of having a dead battery.

[Figure 2.1: An example belief network for the engine problem]

Lack of directed arcs is also a way of expressing knowledge, notably assertions of (conditional) independence. For instance, lack of a directed arc between A and R encodes the knowledge that the age of the car does not influence the chance whether the radio of the car works or not, only indirectly through the variable dead battery B. These causal assertions can be translated into statements of conditional independence: R is independent of A given B. In mathematical notation,

    Pr(R | B) = Pr(R | B, A).    (2.2)

Similarly, the absence of the arc B → C means that whether or not the car has a dead battery will not influence the chance of having dirty connectors. These independence properties imply that

    Pr(a, b, c, r, e) = Pr(a) Pr(b | a) Pr(c | a) Pr(r | b) Pr(e | a, b, c),    (2.3)

i.e., that the joint probability distribution over the graph nodes can be factored into the product of the conditional probabilities of each node given its parents in the graph. Please note that this expression is just an instance of Equation 2.1. The assignment of values to observed variables is usually called evidence. The most important type of reasoning in a probabilistic system based on Bayesian networks is known as belief updating or evidence propagation, which amounts to computing the probability distribution over the variables of interest given the evidence. This evidence propagation makes Bayesian networks
very suitable for diagnosis. For example, in the model of Figure 2.1, the variables of interest for diagnosis could be B and C and the focus of computation could be the posterior probability distribution over B and C given the observed values of A, R, and E, i.e., Pr(b, c | a, r, e), often approximated in practice as marginal probability distributions, Pr(b | a, r, e) and Pr(c | a, r, e). Bayesian network software can be applied to calculate these posterior probabilities. Although exact and approximate inference in Bayesian networks are both worst-case NP-hard [Cooper, 1990; Dagum and Luby, 1997], the algorithms embedded in today's software are capable of very fast belief updating in models consisting of hundreds or even thousands of variables. After the belief updating, the software can make a decision or support the user in making a decision about what actions to perform given that probability.

2.2 Automating diagnosis

In Chapter 1, a short description of diagnosis was already given. This section will cover a more thorough description of diagnosis and it will give some historical background of techniques to automate the diagnostic process.
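As a concrete illustration of the posterior computation discussed for the engine example of Figure 2.1, the factorization of Equation 2.3 can be evaluated by brute-force enumeration. All CPT numbers below are made up for illustration; the thesis does not specify them.

```python
def p_a(a):          # Pr(A): prior that the car is old (hypothetical number)
    return 0.3 if a else 0.7

def p_b(b, a):       # Pr(B | A): dead battery given the car's age
    p = 0.4 if a else 0.1
    return p if b else 1.0 - p

def p_c(c, a):       # Pr(C | A): dirty connectors given the car's age
    p = 0.3 if a else 0.05
    return p if c else 1.0 - p

def p_r(r, b):       # Pr(R | B): radio fails given a dead battery
    p = 0.9 if b else 0.05
    return p if r else 1.0 - p

def p_e(e, a, b, c): # Pr(E | A, B, C): engine fails given its three parents
    p = min(0.99, 0.05 + 0.1 * a + 0.7 * b + 0.5 * c)
    return p if e else 1.0 - p

def joint(a, b, c, r, e):
    # Equation 2.3: Pr(a,b,c,r,e) = Pr(a) Pr(b|a) Pr(c|a) Pr(r|b) Pr(e|a,b,c)
    return p_a(a) * p_b(b, a) * p_c(c, a) * p_r(r, b) * p_e(e, a, b, c)

def posterior_b(a, r, e):
    # Pr(b | a, r, e): enumerate the only unobserved variable, C.
    num = sum(joint(a, True, c, r, e) for c in (False, True))
    den = sum(joint(a, b, c, r, e) for b in (False, True) for c in (False, True))
    return num / den

# Evidence: old car, radio fails, engine does not start.
belief_dead_battery = posterior_b(True, True, True)
```

With these made-up parameters, the failing radio sharply raises the posterior belief in a dead battery, which is exactly the kind of belief updating a diagnostic BN performs; real software replaces this exponential enumeration with efficient algorithms.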
2.2.1 Diagnosis

Diagnosis is a common and important problem that is performed daily in many different domains. The medical domain is probably the best known example, but diagnosis is also applied in engineering, business, and several other domains. Diagnosis (from the Greek words δια = by and γνωσις = knowledge) is the process of identifying the disorders, diseases or malfunctions of an object by considering signs, symptoms, tests, historical facts, various diagnostic procedures, or other facts which caused or are caused by the disorder. De Kleer [1990] described it a little differently. He claims that the diagnostic task is to determine why a correctly designed system is not functioning as it was intended. This task comes down to identifying what is wrong in a system given some observations. Both descriptions describe diagnosis as a process of searching for the causes of a misbehavior. After identifying that something is not working correctly in a system, the diagnostic process consists of two different steps. The first step is to acquire as much information as possible explaining the misbehavior of the system by sequentially performing different tests. The second step is, given all the gathered information, to identify which component, or which combination of components of the system, is most probable of having a faulty behavior.
After this process the (most probable) faulty components of the system can be repaired. Figure 2.2 shows this sequential process graphically.

[Figure 2.2: The diagnostic process]

In the diagnostic process the diagnostician has to make a decision when to stop the process and conclude what is wrong. This has to be done as fast and cheap as possible. Therefore, it becomes an optimization problem which is of great interest to AI research. Over the years, several ways have been developed to automate diagnosis and to support the decision making. The next section will give some historical remarks and an overview of some of the main approaches to automating diagnosis.

2.2.2 Automated diagnosis

Early research to automate the diagnosis process used Bayesian reasoning and decision theory [Ledley and Lusted, 1959] and proposed different techniques to help a medical doctor make a diagnosis. Their pioneering paper was followed by papers written by others describing a huge variety of techniques and methods to automate the (medical) diagnosis process. This interest became less at the end of the 1970s because the computers at that time were not powerful enough to compute the complex probabilistic queries within reasonable time. In the beginning of the 1980s, there was some success in automating diagnosis with expert systems using rule bases. One of the first of these systems, used to diagnose acute abdominal pain, was developed by De Dombal [1972]. This system was intended for a narrow, well-defined diagnostic problem where the clinician had to decide between a limited number of diagnoses.
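The stop-or-continue decision described above (perform tests sequentially, then conclude once one hypothesis is probable enough) can be sketched as a minimal loop. The interfaces `perform_test` and `update_beliefs` are hypothetical placeholders, not part of any system discussed here.

```python
def diagnose(candidates, tests, perform_test, update_beliefs, threshold=0.95):
    """Step 1: gather information by sequentially performing tests.
    Step 2: conclude as soon as one candidate fault is probable enough,
    so that no unnecessary (costly) tests are performed."""
    # Start from a uniform prior over the candidate faults.
    beliefs = {c: 1.0 / len(candidates) for c in candidates}
    for test in tests:
        outcome = perform_test(test)
        beliefs = update_beliefs(beliefs, test, outcome)
        best = max(beliefs, key=beliefs.get)
        if beliefs[best] >= threshold:      # stopping decision
            return best, beliefs[best]
    best = max(beliefs, key=beliefs.get)    # tests exhausted: report the
    return best, beliefs[best]              # most probable fault so far
```

Choosing *which* test to perform next (e.g. by expected cost or information gain) is the optimization problem mentioned above; this sketch simply consumes the tests in the given order.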
Another rule-based expert system is the INTERNIST-I system for internal medicine [Miller et al.]. The most valuable product of the INTERNIST-I system was its medical knowledge base. This knowledge base was later used as a basis of successor systems. One of these successor systems was the Quick Medical Reference (QMR), which is a commercialized decision support system for internists [Myers, 1987]. While the rule-based systems to automate diagnosis had some successful results, they also had some disadvantages. The main disadvantages of rule-based systems are that they have a strong domain dependent character, they are very hard and time consuming to build, and they are not practical to use. Other approaches to automate diagnosis include decision trees [Quinlan, 1986], fault trees [Madden and Nolan, 1999], multi-layer perceptrons, and probability estimation [Stensmo and Sejnowski, 1994]. Almost all of these techniques have one common main disadvantage, which is the need for a complete data set when inferring a diagnostic query. It takes a long time before such a data set is created and it is therefore almost never available for diagnostic problems. The development of Bayesian networks (BNs) (Section 2.1) used as probabilistic graphical models encouraged researchers to focus again on the probabilistic techniques to automate diagnosis. One of the first large diagnostic systems that used Bayesian networks was the QMR-DT system [Shwe et al., 1991; Middleton et al., 1991]. This system was a probabilistic reformulation of the Quick Medical Reference (QMR) described above. The QMR-DT system contains approximately 5000 variables. Other examples of diagnostic BN models are the PATHFINDER system [Heckerman et al., 1992] used for lymph node pathology, and the HEPAR-II system as mentioned before in Chapter 1. An example of a non-medical domain model is the SACSO project [Jensen et al., 2001]. This system was built to diagnose printer failures and helps users to solve their printer problems themselves without calling a help
desk or sending the printer back to the factory. Bayesian networks are able to calculate the posterior probability of a variable given some evidence from related variables (i.e., evidence propagation, see Section 2.1). This property and the intuitive way Bayesian networks model complex relationships among uncertain variables make it a very suitable technique for building diagnostic models. Therefore, we will use Bayesian networks as the modeling technique to build diagnostic models. The next section will give a more thorough description of Bayesian networks applied in diagnosis.

2.2.3 Diagnostic Bayesian network models

There are various techniques to model the graphical structure (qualitative part) of a Bayesian network when it is applied for diagnosis. The most simple one is based on the assumption that only one fault can occur at the same time and that observations are conditionally independent. The structure of this model is such that there is only one fault variable, which has a separate state for each single fault, with one or more observation variables as its children. This structure is called the Naive-Bayes model or Idiot's Bayes model [Friedman et al., 1997], since the single fault assumption is a somewhat naive assumption. The main advantage of this single fault model is that the number of conditional probabilities is low, which makes it attractive from the computational point of view. Figure 2.3 shows an example Naive-Bayes structure, with one fault variable F and three observation variables O1, O2, and O3.

[Figure 2.3: Naive-Bayes model]

It is also possible to model diagnosis supporting more than one fault. Models like this are called Multiple fault models. Multiple fault models typically have separate fault variables for every fault state. These fault variables are connected to the observation variables which are caused by them. A multiple fault structure gives a more realistic model; however, using this model will result in eliciting a lot
more conditional probabilities. For example, if we have a binary observation variable which is caused by two binary parent fault variables, the CPT already consists of 8 entries. The size of the CPT of the observation variable grows exponentially in the number of parent variables. There are various techniques available to reduce this exponential growth; these techniques are discussed in Section 2.3. Figure 2.4 gives an example of the multiple fault model with two fault variables F1 and F2 and three observation variables O1, O2, and O3.

[Figure 2.4: Multiple fault model]

The multiple fault model in the example above is also based on two independence assumptions. There are no relations among the fault variables and also no relations among the observation variables. By allowing these relations an extended multiple fault model will be created. This can make the model more accurate, but again it will increase the number of parameters that have to be elicited. Figure 2.5 shows an example of an extension of the multiple fault model presented in Figure 2.4.

[Figure 2.5: Multiple fault model with dependencies among variables of the same type]

2.3 Canonical interaction models

In a Bayesian network, every variable contains a Conditional Probability Table (CPT) representing the probabilities of each state given the states of the parent variables. If a variable does not have any parent variables in the graph, the CPT represents the prior probability distribution of the variable. Figure 2.6 shows a simple CPT of the variable E representing “Engine does not start” of the example of Figure 2.1. E has three parent variables: A, C, and B, representing the Age of the car, dirty Connectors, and dead Battery respectively. This CPT now consists of 24 entries representing all possible scenarios among the four variables. The probabilities within the table are either learned from a data set or elicited by a domain
expert.

[Figure 2.6: The CPT of the variable “Engine does not start”]

The size of a CPT grows exponentially in the number of parents of that variable. Thus, in the example above, if the variable E gets one more parent variable, the size of the CPT will grow from 24 to 48 entries. In order to gain speed in the model building process, researchers developed canonical interaction models which approximate the CPTs and require fewer parameters. These canonical interaction models are known as gates. One type of canonical interaction, widely used in Bayesian networks, is known as the Noisy-OR gate. This gate was first introduced outside the BN domain by [Good, 1961]. Later it was applied in the context of BNs [Pearl, 1986]. It became very popular among BN model builders since it reduces the growth of a CPT of a variable from exponential to linear in the number of parents. The Noisy-OR gate can only be used for binary variables; an extension for multi-valued variables is the Noisy-MAX gate [Henrion, 1989; Díez, 1993]. For the sake of simplicity, we will discuss the Noisy-OR gate in depth in Section 2.3.1 and give a shorter description of the extended Noisy-MAX gate in Section 2.3.2.

2.3.1 The Noisy-OR gate

The Noisy-OR gate models a non-deterministic interaction among n binary parent cause variables X and a binary effect variable Y. Every variable has two states: a distinguished state, which represents that the variable is in its normal working state (commonly this is absent or false), and a non-distinguished state.
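The passage above breaks off before giving the Noisy-OR parameterization. In its standard form, the effect is absent only if every cause that is present independently fails to produce it (an optional leak term covers unmodeled causes), so n link probabilities replace the 2^n rows of a full CPT. A sketch with hypothetical link probabilities:

```python
from itertools import product

def noisy_or_cpt(link_probs, leak=0.0):
    """Expand Noisy-OR parameters into a full CPT.

    link_probs[i] is the probability that cause i alone produces the effect.
    Returns {parent-state tuple: Pr(effect present)}. Only n parameters
    (plus an optional leak) are needed instead of 2**n CPT rows.
    """
    cpt = {}
    for states in product((False, True), repeat=len(link_probs)):
        # Effect stays absent only if the leak and every present cause fail.
        p_absent = 1.0 - leak
        for present, p in zip(states, link_probs):
            if present:
                p_absent *= (1.0 - p)
        cpt[states] = 1.0 - p_absent
    return cpt

# Three causes with hypothetical link probabilities and a small leak:
cpt = noisy_or_cpt([0.9, 0.6, 0.3], leak=0.05)
# With all causes absent, only the leak remains:
# cpt[(False, False, False)] == 0.05 (up to floating-point rounding)
```

Here the three-cause CPT is reconstructed from just four numbers; for the thesis's example variable E with three parents, a domain expert would likewise only need one link probability per cause instead of a full table.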
ANNALS OF GEOPHYSICS, VOL. 52, N. 3/4, June/August 2009
The NeQuick model genesis, uses and evolution
Sandro M. Radicella ARPL, The Abdus Salam ICTP, Trieste, Italy
Mailing address: Dr. Sandro M. Radicella, ARPL, The Abdus Salam ICTP, Strada Costiera 11, 34151 Trieste, Italy: e-mail: rsandro@ictp.it
scribes the topside F2 region by introducing a constant shape factor k that modifies the thickness parameter for that region and can be derived empirically by comparison with experimental vertical TEC data. The improved version of the DGR profiler calculates vertical TEC values. The modeling effort described above was part of the activities done under the scheme of COST 238 (Prediction and Retrospective Ionospheric Modeling over Europe, PRIME). The improved DGR «profiler» was adopted by the action as part of its final product. A new family of electron density models, differing in complexity and with different but related application areas, based on the DGR «profiler» concept has been developed in collaboration with the University of Graz, Austria. These models are particularly suited for the study of trans-ionospheric radiopropagation effects of interest to satellite navigation and positioning (Hochegger et al., 2000; Radicella and Leitinger, 2001). The models are:
- NeQuick – a quick-run model for transionospheric applications;
- COSTprof – a model suited for ionospheric and plasmaspheric satellite to ground applications;
model suited for assessment studies involving satellite-to-satellite propagation of radio waves. All three models give electron density as a function of height, geographic latitude, geographic longitude, solar activity (specified by the sunspot number or by the 10.7 cm solar radio flux), season (month) and time (Universal Time UT or local time LT). The models, like the original DGR, are continuous in all spatial first derivatives, which is particularly needed in applications like ray tracing and location finding. They also allow the calculation of electron density along arbitrarily chosen ray paths and the corresponding total electron content (TEC). The COSTprof model was adopted in the final product of the COST 251 action (Improved Quality of Service in Ionospheric Telecommunication Systems Planning and Operation).

1.2. The NeQuick model and its uses

Particularly successful was the development of the NeQuick model. To describe the electron density of the ionosphere above 100 km and up to the F2 layer peak this model uses a modified DGR profile formulation. A semi-Epstein layer represents the electron density distribution in the topside with a height dependent thickness parameter empirically determined. The model has been adopted by the International Telecommunication Union, Radiocommunication Sector (ITU-R) Recommendation P. 531-6, now superseded by P. 531-9 (ITU, 2007), as a suitable method for TEC modeling. The NeQuick (Fortran 77) source code is available at: http://www.itu.int/ITU-R/software/studygroups/rsg3/databanks/ionosph/. The basic inputs of the NeQuick model code are: position, time and solar flux (or sunspot number); the output is the electron concentration at the given location in space and time. In addition the NeQuick package includes specific routines to evaluate the electron density along any ray-path and the corresponding TEC by numerical integration.
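The TEC integration mentioned above can be illustrated with a toy sketch: a single Epstein layer (the analytical building block of the DGR profile family) integrated numerically along a vertical path. The peak density, peak height and thickness values below are made up for illustration and are not NeQuick coefficients; the real model sums several such layers with empirically modeled parameters.

```python
import math

def epstein_layer(h, n_max, h_max, b):
    """Electron density (el/m^3) of one Epstein layer at height h (km).
    The layer peaks at n_max when h == h_max; b is the thickness parameter."""
    e = math.exp((h - h_max) / b)
    return 4.0 * n_max * e / (1.0 + e) ** 2

def vertical_tec(ne, h0, h1, steps=10000):
    """TEC = integral of Ne along the vertical path from h0 to h1 (km),
    by the trapezoidal rule. Result in el/m^2; divide by 1e16 for TECU."""
    dh = (h1 - h0) / steps
    total = 0.5 * (ne(h0) + ne(h1))
    for i in range(1, steps):
        total += ne(h0 + i * dh)
    return total * dh * 1000.0   # km -> m

# Hypothetical single-layer ionosphere: F2 peak of 1e12 el/m^3 at 300 km,
# thickness 50 km (made-up values).
def profile(h):
    return epstein_layer(h, 1e12, 300.0, 50.0)

tec_tecu = vertical_tec(profile, 100.0, 1000.0) / 1e16
```

For these assumed values the integral comes out at roughly 20 TECU, a plausible mid-latitude daytime order of magnitude; the NeQuick routines perform the same kind of numerical integration along arbitrary slant ray paths.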
The original version of the NeQuick model has been used by the European Geostationary Navigation Overlay Service (EGNOS) of the
Abstract The genesis and evolution of the NeQuick model is reviewed from the initial ionospheric efforts made in the framework of the European COST actions on ionospheric issues to the last version of the model (NeQuick 2). Attention is given to the uses of the model particularly by the European satellite navigation and positioning systems EGNOS and GALILEO. Recent assessment studies on the performance of NeQuick 2 are also reviewed.
Key words ionospheric model – model assessment – model uses
1. Introduction The NeQuick ionospheric model is based on a model (DGR from now on) introduced by Di Giovanni and Radicella (1990). The original DGR model uses a sum of Epstein layers to reproduce the electron density distribution in the ionosphere analytically. Its formulation is such that the function and its 1st derivative are always continuous. The construction of the analytical function is based on «anchor» points related to the ionospheric characteristics routinely scaled from ionograms (foF2, M(3000)F2, foF1, foE). For this reason the DGR model is essentially a «profiler». The original DGR «profiler» can be used with experimental or modeled data of the ionospheric characteristics. An improved version of the original DGR introduced by Radicella and Zhang (1995) de-