Geomodeling and Implementation of a Geodatabase Using CASE-tools and ArcCatalog: Abstract


Geometric Sensing of Known Planar Shapes

International Journal of Robotics Research, 15(4):365-392, 1996.

Yan-Bin Jia and Michael Erdmann, The Robotics Institute and School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213-3891. March 12, 1995.
1 Introduction
Sensing a part involves determining both its shape and pose. By pose we mean the position as well as the orientation of the part. Prior to selecting a sensing method, we often will make some assumptions about the shape of the part to be sensed. The resulting sensing method is affected greatly by what is known about the shape. For instance, without making any assumptions, we might not even be able to start segmentation of the part image, whereas knowing that the shape is convex polygonal, we can employ some simple non-vision technique such as finger probing. An effective sensing method should make use of its knowledge about the part shape as much as possible to attain simplicity, efficiency, and robustness.

Parts in many assembly applications are manufactured to high precisions, so we can make the assumption that their shapes are known reasonably well in advance. Accordingly, the design of sensing strategies should be based on the geometry of parts. The task of sensing reduces to obtaining enough geometric constraints that, when combined with the part geometry, suffice to derive the part poses. Consequently, minimizing the necessary geometric constraints becomes very important for reducing the sensing complexity. In this article, we propose two approaches for sensing polygonal parts of known shape, one applicable to a continuum of possible poses and the other applicable to finitely many possible poses.

Perhaps the simplest geometric constraint on a polygon is incidence: some edge touches a fixed point, or some vertex lies on a fixed line. For instance, Figure 1a shows an 8-gon constrained by two points p1, p2 and two lines l1, l2. The question we want to ask is: Generally, how many such constraints are necessary to fix the polygon in its real position? Note that any two such incidence constraints will confine all possible positions to a locus curve which consists of a finite number of algebraic curves parameterized by the part's orientation. Three constraints, as long as not defined by collinear points or concurrent lines, will allow only a finite number of valid poses. These poses occur when different locus curves, each given by a pair of constraints, intersect at the same orientations.
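
To make the counting argument concrete, here is one way to write the point-on-edge incidence constraint (our notation, not necessarily the paper's). Let the part pose be a rotation $R(\theta)$ together with a translation $t \in \mathbb{R}^2$, and let an edge's supporting line in the body frame be $\{q : n \cdot q = d\}$ for a unit normal $n$ and offset $d$. A fixed point $p$ touching that edge must then satisfy

$$(R(\theta)\,n) \cdot (p - t) = d.$$

For each fixed orientation $\theta$ this is one linear equation in $t$; two such constraints generically pin $t$ to a single point per orientation, tracing out the locus curve as $\theta$ varies, and a third independent constraint selects the finitely many orientations at which all three agree.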

Geometric Modeling

Geometric modeling is a fundamental concept in computer graphics and design, playing a crucial role in various industries such as architecture, engineering, and entertainment. It involves creating digital representations of physical objects or environments using mathematical and computational techniques. Geometric modeling allows designers and engineers to visualize, analyze, and manipulate complex shapes and structures, leading to the development of innovative products and solutions. However, it also presents several challenges and limitations that need to be addressed to ensure its effectiveness and efficiency.

One of the key challenges in geometric modeling is the accurate representation of real-world objects and environments. This requires the use of advanced mathematical algorithms and computational methods to capture the intricate details and complexities of physical entities. For example, creating a realistic 3D model of a human face or a natural landscape involves precise measurements, surface calculations, and texture mapping to achieve a lifelike appearance. This level of accuracy is essential in industries such as animation, virtual reality, and simulation, where visual realism is critical for creating immersive experiences.

Another challenge in geometric modeling is the efficient manipulation and editing of geometric shapes. Designers and engineers often need to modify existing models or create new ones to meet specific requirements or constraints. This process can be time-consuming and labor-intensive, especially when dealing with large-scale or highly detailed models. As a result, there is a constant demand for more intuitive and user-friendly modeling tools that streamline the design process and enhance productivity. Additionally, the interoperability of geometric models across different software platforms and systems is a persistent issue that hinders seamless collaboration and data exchange.

Moreover, geometric modeling also faces challenges in terms of computational resources and performance. Generating and rendering complex 3D models requires significant computing power and memory, which can limit the scalability and accessibility of geometric modeling applications. High-resolution models with intricate geometries may strain hardware capabilities and lead to slow processing times, making it difficult for designers and engineers to work efficiently. This is particularly relevant in industries such as gaming and virtual reality, where real-time rendering and interactive simulations are essential for delivering engaging and immersive experiences.

Despite these challenges, geometric modeling continues to evolve and advance through technological innovations and research efforts. The development of advanced modeling techniques such as parametric modeling, procedural modeling, and non-uniform rational B-spline (NURBS) modeling has significantly improved the accuracy and flexibility of geometric representations. These techniques enable designers and engineers to create complex shapes and surfaces with greater precision and control, paving the way for more sophisticated and realistic virtual environments. Furthermore, the integration of geometric modeling with other disciplines such as physics-based simulation, material science, and machine learning has expanded its capabilities and applications.

This interdisciplinary approach allows for the creation of interactive and dynamic models that accurately simulate physical behaviors and interactions, leading to more realistic and immersive experiences. For example, in the field of architecture and construction, geometric modeling combined with structural analysis and environmental simulation enables the design and evaluation of sustainable and resilient buildings and infrastructure.

In conclusion, while geometric modeling presents several challenges and limitations, it remains an indispensable tool for innovation and creativity in various industries. The ongoing advancements in geometric modeling techniques and technologies continue to push the boundaries of what is possible, enabling designers and engineers to create increasingly realistic and complex digital representations of the physical world. As computational power and software capabilities continue to improve, the future of geometric modeling holds great promise for revolutionizing the way we design, visualize, and interact with the world around us.
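
Since the passage singles out NURBS, a small sketch of rational B-spline curve evaluation may make the idea concrete. This is the generic textbook Cox-de Boor formulation written in Python, not any particular CAD package's API; the quarter-circle data at the bottom are a standard illustrative example, chosen because a NURBS arc with these weights reproduces the circle exactly.

import numpy as np

def bspline_basis(i, p, u, knots):
    # Cox-de Boor recursion for the B-spline basis function N_{i,p}(u).
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        right = (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, p):
    # A NURBS curve point is a weighted rational combination of control points.
    num = np.zeros(len(ctrl[0]))
    den = 0.0
    for i in range(len(ctrl)):
        b = bspline_basis(i, p, u, knots) * weights[i]
        num += b * np.asarray(ctrl[i], dtype=float)
        den += b
    return num / den

# Quarter circle as a quadratic NURBS arc: an exact conic, which plain
# polynomial splines cannot represent.
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, np.sqrt(0.5), 1.0]
knots = [0, 0, 0, 1, 1, 1]
print(nurbs_point(0.5, ctrl, weights, knots, p=2))  # ~ (0.7071, 0.7071)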

geoscientific model development

Geoscientific model development refers to the development of models that simulate the physical, chemical, and biological processes occurring in an area of the Earth's landscape. This modelling is done with a combination of data and computer simulations to create a model of the area that can be used to make predictions regarding the future conditions of the area, as well as to assess the impact of human activities, such as land use and energy sources, on the environment.

Geoscientific models are used in a variety of applications and research, including environmental decision-making, climate forecasting, resource exploration and management, and engineering and planning. For example, geoscientific models have been used to predict the impacts of climate change on global sea levels, identify population trends in different parts of the world, develop land use plans for sustainable development, and assess potential earthquake hazards.

Developing an effective geoscientific model requires the interdisciplinary integration of data from several sources. This includes geospatial data like topography, hydrology, and climate, as well as observational data from satellites, aircraft, and other sources. The data is then used to create a 3D computer representation of the area, which is then used to simulate the various processes and interactions that affect the area.

Geoscientific modelling is constantly evolving and becoming increasingly sophisticated. As computer power and capabilities continue to improve, so too do the modelling techniques and algorithms used to create geoscientific models. This is allowing for more realistic simulations of the Earth's environment, which will improve our understanding of the impacts of human activities on the planet.

Reflection on the Practice of the FAR Transfer System in the Renewal of Historical Blocks in Taiwan: Based on the Implementation Evaluation of Dadaocheng (台湾历史街区更新中的容积转移制度实践反思:基于大稻埕的实施评价)

ZHANG Ruoxi, WANG Ruru

Developing historic blocks while keeping them protected has long been a difficult problem in urban renewal.

As a flexible regulatory mechanism for balancing resource protection and market development, floor area ratio (FAR) transfer is an important technical means of coordinating the complex and diverse problems that arise in the renewal of historic blocks.

Drawing on an analysis of how Taiwan's FAR transfer system for historic blocks was constructed and of more than 20 years of practice, this paper discusses the overall implementation effects of the system, analyzes in depth the FAR transfer practice in the renewal of the Dadaocheng historic block in Taipei, comprehensively evaluates the system's implementation, takes an objective view of its role, strengths, and shortcomings in historic block renewal, and offers reflections and suggestions that can serve as a reference for the protection and renewal of historic blocks in Mainland China.

Abstract: The development of historical blocks under protection has always been a difficult problem in urban renewal. As an elastic regulation mechanism to balance resource protection and market development, FAR transfer is an important technical means to coordinate the complex and diversified problems in the renewal of historical blocks. Based on the analysis of the construction and practice of the FAR transfer system of historical blocks in Taiwan, this paper discusses the overall implementation effects of the system, focuses on the in-depth analysis of the FAR transfer practice in the renewal of the historical blocks in Dadaocheng, Taipei, and provides experience for the protection and renewal of historical blocks in Mainland China.

Key words: historical district renewal; FAR transfer; implementation evaluation; Taiwan; Dadaocheng

Article No. 1673-8985(2022)04-0001-06; CLC classification TU984; Document code A; DOI 10.11982/j.supr.20220401

Authors: ZHANG Ruoxi, Associate Professor, School of Architecture and Civil Engineering, Xiamen University, ******************.cn; WANG Ruru, Master's student, School of Architecture and Civil Engineering, Xiamen University. Supported by the National Natural Science Foundation of China Youth Program "Research on the identification of the characteristic features of traditional streets and lanes in southern Fujian and their protection mechanism based on VR eye tracking" (No. 51808472).

Research on Landscape Planning Along County Trunk Roads in the Context of Integrated Urban-Rural Development: The Case of Guzhen County, Anhui (城乡融合发展背景下县域干线公路沿线景观风貌规划研究——以安徽固镇县为例)

Abstract: In the context of integrated urban-rural development, county trunk roads, as traffic skeletons and development corridors, are important spaces for smoothing the flow of urban-rural factors and promoting urban-rural integration.

Landscape planning along these roads can improve the trunk road landscape, ease spatial conflicts along the route, showcase local character, and support integrated urban-rural development.

Combining goal orientation with problem orientation, and drawing on an analysis of characteristics and a synthesis of practical experience, the study proposes that landscape planning along county trunk roads should include: sorting out the landscape base and characteristic elements; summarizing higher-level planning requirements and spatial demands; defining the landscape positioning and intended spatial ambience; and formulating the landscape structure and planning strategies.

Key words: urban-rural integration; county; trunk roads; landscape planning; characteristic planning; landscape improvement; control methods

Unbalanced development between urban and rural areas is a common problem that countries around the world face in the course of modernization, and a problem that must be overcome to achieve Chinese-style modernization.

Therefore, the report of the 20th National Congress of the Communist Party of China points out that integrated urban-rural development should be upheld and the flow of factors between urban and rural areas kept smooth.

The trunk roads discussed in this study include national, provincial, and county highways that pass through a county and are not separated from the towns and villages along them by hard barriers.

In the context of integrated urban-rural development, such roads are the main traffic arteries within and beyond the county and play a structuring and guiding role in county development; the space along them is an important carrier of the county's economic and social development and an important front for urban-rural integration.

Planning and upgrading the landscape of trunk roads and the space along them is therefore one of the important space-based approaches to promoting integrated urban-rural development.

1 Landscape characteristics along county trunk roads

1.1 Landscape characteristics of the road itself

1.1.1 Traffic character

① Trunk roads share the common features of ordinary roads.

These include traffic facilities such as traffic signs and pavement markings; street furniture such as bus stops and street lamps; and traffic spaces such as travel lanes and intersections.

② They also show features specific to high-grade roads.

Examples are continuous rows of street trees and shelter belts, and cross-sections characterized by wide roadways dominated by vehicular lanes.

1.1.2 Continuity

① Continuity of alignment.

The demanding requirements on design speed and design capacity dictate the continuous linear form of trunk-road space.

② Continuity of landscape.

The continuous linear space strings together a continuous sequence of landscape elements, producing a continuous perception of the landscape.

1.1.3 Uniformity

① Uniformity appears in the nationwide standardization of engineering elements such as traffic signs.

② It also appears in the regional consistency of natural landscape elements, such as street trees, under the same natural conditions.

Geometric Modeling

Geometric modeling is a crucial aspect of computer-aided design (CAD) that involves the creation of mathematical representations of physical objects. It is an essential tool for designers, engineers, and architects, allowing them to visualize and manipulate complex three-dimensional shapes. In this essay, I will discuss the importance of geometric modeling, its various applications, and the challenges associated with it.

One of the primary benefits of geometric modeling is that it allows designers to create accurate and precise models of physical objects. This is particularly important in industries such as aerospace, automotive, and manufacturing, where even the smallest errors can have significant consequences. With geometric modeling, designers can create virtual models of their products, test them for functionality, and make any necessary adjustments before the physical prototype is built. This not only saves time and money but also ensures that the final product meets the required specifications.

Geometric modeling is also essential in the field of architecture. Architects use it to create detailed 3D models of buildings, allowing them to visualize the final product and make any necessary changes before construction begins. This is particularly important in large-scale projects, where even minor design flaws can have significant consequences. With geometric modeling, architects can create accurate and detailed models of their buildings, test them for structural integrity, and make any necessary adjustments before construction begins.

Another application of geometric modeling is in the field of animation and special effects. Animators use it to create 3D models of characters and objects, allowing them to manipulate and animate them in a virtual environment. This is particularly important in the film and gaming industries, where realistic and lifelike animations are essential. With geometric modeling, animators can create detailed and realistic models of their characters and objects, giving them more control over the final product.

Despite its many benefits, geometric modeling also presents several challenges. One of the most significant challenges is the complexity of the models themselves. As the complexity of the models increases, so does the computational power required to manipulate them. This can lead to long processing times and slow performance, which can be frustrating for designers and engineers. To address this challenge, researchers are developing new algorithms and techniques that can handle more complex models more efficiently.

Another challenge associated with geometric modeling is the accuracy of the models themselves. Even small errors in the model can have significant consequences in industries such as aerospace and automotive, where precision is essential. To address this challenge, designers and engineers must ensure that their models are as accurate as possible, using techniques such as finite element analysis and computational fluid dynamics to test the models for functionality and structural integrity.

In conclusion, geometric modeling is a crucial tool for designers, engineers, and architects, allowing them to create accurate and precise models of physical objects. Its applications are widespread, ranging from aerospace and automotive to architecture and animation. However, it also presents several challenges, including the complexity of the models and the accuracy of the models themselves. To address these challenges, researchers and practitioners must continue to develop new algorithms and techniques that can handle more complex models more efficiently and ensure that their models are as accurate as possible.

Becoming a Scientist: The Role of Undergraduate Research in Students' Cognitive, Personal, and Professional Development

Becoming a Scientist: The Role of Undergraduate Research in Students' Cognitive, Personal, and Professional Development

ANNE-BARRIE HUNTER, SANDRA LAURSEN, ELAINE SEYMOUR
Ethnography & Evaluation Research, Center to Advance Research and Teaching in the Social Sciences, University of Colorado, Campus Box 580, Boulder, CO 80309, USA

Received 9 November 2005; revised 2 May 2006; accepted 2 June 2006. DOI 10.1002/sce.20173. Published online 12 October 2006 in Wiley InterScience.

ABSTRACT: In this ethnographic study of summer undergraduate research (UR) experiences at four liberal arts colleges, where faculty and students work collaboratively on a project of mutual interest in an apprenticeship of authentic science research work, analysis of the accounts of faculty and student participants yields comparative insights into the structural elements of this form of UR program and its benefits for students. Comparison of the perspectives of faculty and their students revealed considerable agreement on the nature, range, and extent of students' UR gains. Specific student gains relating to the process of "becoming a scientist" were described and illustrated by both groups. Faculty framed these gains as part of professional socialization into the sciences. In contrast, students emphasized their personal and intellectual development, with little awareness of their socialization into professional practice. Viewing study findings through the lens of social constructivist learning theories demonstrates that the characteristics of these UR programs, how faculty practice UR in these colleges, and students' outcomes (including cognitive and personal growth and the development of a professional identity) strongly exemplify many facets of these theories, particularly student-centered and situated learning as part of cognitive apprenticeship in a community of practice. © 2006 Wiley Periodicals, Inc. Sci Ed 91:36-74, 2007

Correspondence to: Anne-Barrie Hunter; e-mail: abhunter@
Contract grant sponsor: NSF-ROLE grant (#NSF PR REC-0087611): "Pilot Study to Establish the Nature and Impact of Effective Undergraduate Research Experiences on Learning, Attitudes and Career Choice."
Contract grant sponsor: Howard Hughes Medical Institute special projects grant, "Establishing the Processes and Mediating Factors that Contribute to Significant Outcomes in Undergraduate Research Experiences for both Students and Faculty: A Second Stage Study."
This paper was edited by former Editor Nancy W. Brickhouse.

INTRODUCTION

In 1998, the Boyer Commission Report challenged United States' research universities to make research-based learning the standard of students' college education. Funding agencies and organizations promoting college science education have also strongly recommended that institutions of higher education provide greater opportunities for authentic, interdisciplinary, and student-centered learning (National Research Council, 1999, 2000, 2003a, 2003b; National Science Foundation [NSF], 2000, 2003a). In line with these recommendations, tremendous resources are expended to provide undergraduates with opportunities to participate in faculty-mentored, hands-on research (e.g., the NSF-sponsored Research Experience for Undergraduates [REU] program, Howard Hughes Medical Institute Science Education Initiatives).

Notwithstanding widespread belief in the value of undergraduate research (UR) for students' education and career development, it is only recently that research and evaluation studies have produced results that begin to throw light on the benefits to students, faculty, or institutions that are generated by UR opportunities (Bauer & Bennett, 2003; Lopatto, 2004a; Russell, 2005; Seymour, Hunter, Laursen, & DeAntoni, 2004; Ward, Bennett, & Bauer, 2002; Zydney, Bennett, Shahid, & Bauer, 2002a, 2002b). Other reports focus on the effects of UR experiences on retention, persistence, and promotion of science career pathways for underrepresented groups (Adhikari & Nolan, 2002; Barlow & Villarejo, 2004; Hathaway, Nagda, & Gregerman, 2002; Nagda et al., 1998). It is encouraging to find strong convergence as to the types of gains reported by these studies (Hunter, Laursen, & Seymour, 2006). However, we note limited or no discussion of some of the stronger gains that we document, such as students' personal and professional growth (Hunter et al., 2006; Seymour et al., 2004), and significant variation in how particular gains (especially intellectual gains) are defined.

Ongoing and current debates in the academic literature concerning how learning occurs, how students develop intellectually and personally during their college years, and how communities of practice encourage these types of growth posit effective practices and the processes of students' cognitive, epistemological, and interpersonal and intrapersonal development. Although a variety of theoretical papers and research studies exploring these topics are widely published, with the exception of a short article for Project Kaleidoscope (Lopatto, 2004b), none has yet focused on intensive, summer apprentice-style UR experiences as a model to investigate the validity of these debates.[1] Findings from this research study to establish the nature and range of benefits from UR experiences in the sciences, and in particular, results from a comparative analysis of faculty and students' perceptions of gains from UR experiences, inform these theoretical discussions and bolster findings from empirical studies in different but related areas (i.e., careers research, workplace learning, graduate training) on student learning, cognitive and personal growth, the development of professional identity, and how communities of practice contribute to these processes.

[1] David Lopatto was co-P.I. on this study and conducted quantitative survey research on the basis of our qualitative findings at the same four liberal arts colleges.

This article will present findings from our faculty and first-round student data sets that manifest the concepts and theories underpinning constructivist learning, development of professional identity, and how apprentice-style UR experience operates as an effective community of practice. As these bodies of theory are central tenets of current science education reform efforts, empirical evidence that provides clearer understanding of the actual practices and outcomes of these approaches informs national science education policy concerns for institutions of higher learning to increase diversity in science, numbers of students majoring in science, technology, engineering, or mathematics (STEM) disciplines, student retention in undergraduate and graduate STEM programs and their entry into science careers, and, ultimately, the production of greater numbers of professional scientists.

To frame discussion of findings from this research, we present a brief review of theory on student learning, communities of practice, and the development of personal and professional identity germane to our data.

CONSTRUCTIVIST LEARNING, COMMUNITIES OF PRACTICE, AND IDENTITY DEVELOPMENT

Apprentice-style UR fits a theoretical model of learning advanced by constructivism, in which learning is a process of integrating new knowledge with prior knowledge such that knowledge is continually constructed and reconstructed by the individual. Vygotsky's social constructivist approach presented the notion of "the zone of proximal development," referencing the potential of students' ability to learn and problem solve beyond their current knowledge level through careful guidance from and collaboration with an adult or group of more able peers (Vygotsky, 1978). According to Green (2005), Vygotsky's learning model moved beyond theories of "staged development" (i.e., Piaget) and "led the way for educators to consider ways of working with others beyond the traditional didactic model" (p. 294). In social constructivism, learning is student centered and "situated." Situated learning, the hallmark of cultural and critical studies education theorists (Freire, 1990; Giroux, 1988; Shor, 1987), takes into account students' own ways of making meaning and frames meaning-making as a negotiated, social, and contextual process. Crucial to student-centered learning is the role of educator as a "facilitator" of learning. In constructivist pedagogy, the teacher is engaged with the student in a two-way, dialogical sharing of meaning construction based upon an activity of mutual interest.

Lave and Wenger (1991) and Wenger (1998) extended tenets of social constructivism into a model of learning built upon "communities of practice." In a community of practice, "newcomers" are socialized into the practice of the community (in this case, science research) through mutual engagement with, and direction and support from, an "old-timer." Lave and Wenger's development of the concept and practice of this model centers on students' "legitimate peripheral participation." This construct describes the process whereby a novice is slowly, but increasingly, inducted into the knowledge and skills (both overt and tacit) of a particular practice under the guidance and expertise of the master. Legitimate peripheral participation requires that students actively participate in the authentic practice of the community, as this is the process by which the novice moves from the periphery toward full membership in the community (Lave & Wenger, 1991). Similar to Lave and Wenger's communities of practice, Brown, Collins, and Duguid (1989) and Farmer, Buckmaster, and LeGrand (1992) describe "cognitive apprenticeships." A cognitive apprenticeship "starts with deliberate instruction by someone who acts as a model; it then proceeds to model-guided trials by practitioners who progressively assume more responsibility for their learning" (Farmer et al., 1992, p. 42). However, these latter authors especially emphasize the importance of students' ongoing opportunities for self-expression and reflective thinking facilitated by an "expert other" as necessary to effective legitimate peripheral participation.

Beyond gains in understanding and exercising the practical and cultural knowledge of a community of practice, Brown et al. (1989) discuss the benefits of cognitive apprenticeship in helping learners to deal capably with ambiguity and uncertainty, a trait particularly relevant to conducting science research. In their view, cognitive apprenticeship "teaches individuals how to think and act satisfactorily in practice. It transmits useful, reliable knowledge based on the consensual agreement of the practitioners, about how to deal with situations, particularly those that are ill-defined, complex and risky. It teaches 'knowledge-in-action' that is 'situated'" (quoted in Farmer et al., 1992, p. 42). Green (2005) points out that Bowden and Marton (1998, 2004) also characterize effective communities of practice as teaching skills that prepare apprentices to negotiate undefined "spaces of learning": "the 'expert other' ... does not necessarily 'know' the answers in a traditional sense, but rather is willing to support collaborative learning focused on the 'unknown future.' In other words, the 'influential other' takes learning ... to spaces where the journey itself is unknown to everyone" (p. 295). Such conceptions of communities of practice are strikingly apposite to the processes of learning and growth that we have found among UR students, particularly in their understanding of the nature of scientific knowledge and in their capacity to confront the inherent difficulties of science research.

These same issues are central to Baxter Magolda's research on young adult development. The "epistemological reflection" (ER) model developed from her research posits four categories of intellectual development from simplistic to complex thinking: from "absolute knowing" (where students understand knowledge to be certain and view it as residing in an outside authority) to "transitional knowing" (where students believe that some knowledge is less than absolute and focus on finding ways to search for truth), then to "independent knowing" (where students believe that most knowledge is less than absolute and individuals can think for themselves), and lastly to "contextual knowing" (where knowledge is shaped by the context in which it is situated and its veracity is debated according to its context) (Baxter Magolda, 2004).

In this model, epistemological development is closely tied to development of identity. The ER model of "ways of knowing" gradually shifts from an externally directed view of knowing to one that is internally directed. It is this epistemological shift that frames a student's cognitive and personal development, where knowing and sense of self shift from external sources to reliance upon one's own internal assessment of knowing and identity. This process of identity development is referred to as "self-authorship" and is supported by a constructivist-developmental pedagogy based on "validating students as knowers, situating learning in students' experience, and defining learning as mutually constructed meaning" (Baxter Magolda, 1999, p. 26). Baxter Magolda's research provides examples of pedagogical practice that support the development of self-authorship, including learning through scientific inquiry. As in other social constructivist learning models, the teacher as facilitator is crucial to students' cognitive and personal development:

Helping students make personal sense of the construction of knowledge claims and engaging students in knowledge construction from their own perspectives involves validating the students as knowers and situating learning in the students' own perspectives. Becoming socialized into the ways of knowing of the scientific community and participating in the discipline's collective knowledge creation effort involves mutually constructing meaning. (Baxter Magolda, 1999, p. 105)

Here Baxter Magolda's constructivist-developmental pedagogy converges with Lave and Wenger's communities of practice, but more clearly emphasizes students' development of identity as part of the professional socialization process.

Use of constructivist learning theory and pedagogies, including communities of practice, is plainly evident in the UR model as it is structured and practiced at the four institutions participating in this study, as we describe next. As such, the gains identified by student and faculty research advisors actively engaged in apprentice-style learning and teaching provide a means to test these theories and models and offer the opportunity to examine the processes whereby these benefits are generated, including students' development of a professional identity.

THE APPRENTICESHIP MODEL FOR UNDERGRADUATE RESEARCH

Effective UR is defined as "an inquiry or investigation conducted by an undergraduate that makes an original intellectual or creative contribution to the discipline" (NSF, 2003b, p. 9). In the "best practice" of UR, the student draws on the "mentor's expertise and resources ... and the student is encouraged to take primary responsibility for the project and to provide substantial input into its direction" (American Chemical Society's Committee on Professional Training, quoted in Wenzel, 2003, p. 1). Undergraduate research, as practiced in the four liberal arts colleges in this study, is based upon this apprenticeship model of learning: student researchers work collaboratively with faculty in conducting authentic, original research.

In these colleges, students typically underwent a competitive application process (even when a faculty member directly invited a student to participate). After sorting applications and ranking students' research preferences, faculty interviewed students to assure a good match between the student's interests and the faculty member's research and also between the faculty member and the student. Generally, once all application materials were reviewed (i.e., students' statements of interest, course transcripts, grade point averages [GPA]), faculty negotiated as a group to distribute successful applicants among the available summer research advisors. Students were paid a stipend for their full-time work with faculty for 10 weeks over summer. Depending on the amount of funding available and individual research needs, faculty research advisors supervised one or more students. Typically, a faculty research advisor worked with two students for the summer, but many worked with three or four, or even larger groups.

In most cases, student researchers were assigned to work on predetermined facets of faculty research projects: each student project was open ended, but defined, so that a student had a reasonable chance of completing it in the short time frame and of producing useful results. Faculty research advisors described the importance of choosing a project appropriate to the student's "level," taking into account their students' interests, knowledge, and abilities and aiming to stretch their capacities, but not beyond students' reach. Research advisors were often willing to integrate students' specific interests into the design of their research projects.

Faculty research advisors described the intensive nature of getting their student researchers "up and running" in the beginning weeks of the program. Orienting students to the laboratory and to the project, providing students with relevant background information and literature, and teaching them the various skills and instrumentation necessary to work effectively required adaptability to meet students at an array of preparation levels, advance planning, and a good deal of their time. Faculty engaged in directing UR discussed their role as facilitators of students' learning. In the beginning weeks of the project, faculty advisors often worked one-on-one with their students. They provided instruction, gave "mini-lectures," and explained step by step why and how processes were done in particular ways, all the time modeling how science research is done. When necessary, they closely guided students, but wherever possible, provided latitude for and encouraged students' own initiative and experimentation. As the summer progressed, faculty noted that, based on growing hands-on experience, students gained confidence (to a greater or lesser degree) in their abilities, and gradually and increasingly became self-directed and able, or even eager, to work independently.

Although most faculty research advisors described regular contact with their student researchers, most did not work side by side with their students every day. Many research advisors held a weekly meeting to review progress, discuss problems, and make sure students (and the projects) were on the right track. At points in the research work, faculty could focus on other tasks while students worked more independently, and the former were available as necessary. When students encountered problems with the research, faculty would serve as a sounding board while students described their efforts to resolve difficulties. Faculty gave suggestions for methods that students could try themselves, and when problems seemed insurmountable to students, faculty would troubleshoot with them to find a way to move the project forward.

Faculty research advisors working with two or more student researchers often used the research peer group to further their students' development. Some faculty relied on more-senior student researchers to help guide new ones. Having multiple students working in the laboratory (whether or not on the same project) also gave student researchers an extra resource to draw upon when questions arose or they needed help. In some cases, several faculty members (from the same or different departments) scheduled weekly meetings for group discussion of their research. Commonly, faculty assigned articles for students to summarize and present to the rest of the group. Toward the end of summer, weekly meetings were often devoted to students' practice of their presentations so that the research advisor and other students could provide constructive criticism. At the end of summer, with few exceptions, student researchers attended a campus-wide UR conference, where they presented posters and shared their research with peers, faculty, and institution administrators. Undergraduate research programs in these liberal arts colleges also offered a series of seminars and field trips that explored various science careers, discussed the process of choosing and applying to graduate schools, and other topics that focused on students' professional development.

We thus found that, at these four liberal arts colleges, the practice of UR embodies the principles of the apprenticeship model of learning where students engage in active, hands-on experience of doing science research in collaboration with and under the auspices of a faculty research advisor.

RESEARCH DESIGN

This qualitative study was designed to address fundamental questions about the benefits (and costs) of undergraduate engagement in faculty-mentored, authentic research undertaken outside of class work, about which the existing literature offers few findings and many untested hypotheses.[2] Longitudinal and comparative, this study explores:

• what students identify as the benefits of UR, both following the experience and in the longer term (particularly career outcomes);
• what gains faculty advisors observe in their student researchers and how their view of gains converges with or diverges from those of their students;
• the benefits and costs to faculty of their engagement in UR;
• what, if anything, is lost by students who do not participate in UR; and
• the processes by which gains to students are generated.

[2] An extensive review and discussion of the literature on UR is presented in Seymour et al. (2004).

This study was undertaken at four liberal arts colleges with a strong history of UR. All four offer UR in three core sciences (physics, chemistry, and biology) with additional programs in other STEM fields, including (at different campuses) computer science, engineering, biochemistry, mathematics, and psychology. In the apprenticeship model of UR practiced at these colleges, faculty alone directed students in research; however, in the few instances where faculty conducted research at a nearby institution, some students did have contact with post docs, graduate students, or senior laboratory technicians who assisted in the research as well.

We interviewed a cohort of (largely) "rising seniors" who were engaged in UR in summer 2000 on the four campuses (N = 76). They were interviewed for a second time shortly before their graduation in spring 2001 (N = 69), and a third time as graduates in 2003-2004 (N = 55). The faculty advisors (N = 55) working with this cohort of students were also interviewed in summer 2000, as were nine administrators with long experience of UR programs at their schools.

We also interviewed a comparison group of students (N = 62) who had not done UR. They were interviewed as graduating seniors in spring 2001, and again as graduates in 2003-2004 (N = 25). A comparison group (N = 16) of faculty who did not conduct UR in summer 2000 was also interviewed.

Interview protocols focused upon the nature, value, and career consequences of UR experiences, and the methods by which these were achieved.[3] After classifying the range of benefits claimed in the literature, we constructed a "gains" checklist to discuss with all participants "what faculty think students may gain from undergraduate research." During the interview, UR students were asked to describe the gains from their research experience (or by other means). If, toward the end of the interview, a student had not mentioned a gain identified on our "checklist," the student was queried as to whether he or she could claim to have gained the benefit and was invited to add further comment. Students also mentioned gains they had made that were not included in the list. With slight alterations in the protocol, we invited comments on the same list of possible gains from students who had not experienced UR, and solicited information about gains from other types of experience. All students were asked to expand on their answers, to highlight gains most significant to them, and to describe the sources of any benefits.

[3] The protocol is available by request to the authors via abhunter@

In the second set of interviews, the same students (nearing graduation) were asked to reflect back on their research experiences as undergraduates, and to comment on the relative importance of their research-derived gains, both for the careers they planned and for other aspects of their lives. In the final set of interviews, they were asked to offer a retrospective summary of the origins of their career plans and the role that UR and other factors had played in them, and to comment on the longer term effects of their UR experiences, especially the consequences for their career choices and progress, including their current educational or professional engagement. Again, the sources of gains cited were explored, especially gains that were identified by some students as arising from UR experiences but may also arise from other aspects of their college education.

The total of 367 interviews represents more than 13,000 pages of text data. We are currently analyzing other aspects of the data and will report findings on additional topics, including the benefits and costs to faculty of their participation in UR and longitudinal and comparative outcomes of students' career choices. This article discusses findings from a comparative analysis of all faculty and administrator interviews (N = 80), with findings from the first-round UR student interviews (N = 76), and provides empirical evidence of the role of UR experiences in encouraging the intellectual, personal, and professional development of student researchers, and how the apprenticeship model fits theoretical discussions on these topics.

METHODS OF DATA TRANSCRIPTION, CODING, AND ANALYSIS

Our methods of data collection and analysis are ethnographic, rooted in theoretical work and methodological traditions from sociology, anthropology, and social psychology (Berger & Luckman, 1967; Blumer, 1969; Garfinkel, 1967; Mead, 1934; Schutz & Luckman, 1974). Classically, qualitative studies such as ethnographies precede survey or experimental work, particularly where existing knowledge is limited, because these methods of research can uncover and explore issues that shape informants' thinking and actions. Good qualitative software computer programs are now available that allow for the multiple, overlapping, and nested coding of a large volume of text data to a high degree of complexity, thus enabling ethnographers to disentangle patterns in large data sets and to report findings using descriptive statistics. Although conditions for statistical significance are rarely met, the results from analysis of text data gathered by careful sampling and consistency in data coding can be very powerful.

Interviews took between 60 and 90 minutes. Taped interviews and focus groups were transcribed verbatim into a word-processing program and submitted to "The Ethnograph," a qualitative computer software program (Seidel, 1998). Each transcript was searched for information bearing upon the research questions. In this type of analysis, text segments referencing issues of different type are tagged by code names. Codes are not preconceived, but empirical: each new code references a discrete idea not previously raised. Interviewees also offer information in spontaneous narratives and examples, and may make several points in the same passage, each of which is separately coded. As transcripts are coded, both the codes and their associated passages are entered into "The Ethnograph," creating a data set for each interview group (eight, in this study). Code words and their definitions are concurrently collected in a codebook. Groups of codes that cluster around particular themes are assigned and grouped by "parent" codes. Because an idea that is encapsulated by a code may relate to more than one theme, code words are often assigned multiple parent codes. Thus, a branching and interconnected structure of codes and parents emerges from the text data, which, at any point in time, represents the state of the analysis.

As information is commonly embedded in speakers' accounts of their experience rather than offered in abstract statements, transcripts can be checked for internal consistency; that is, between the opinions or explanations offered by informants, their descriptions of events, and the reflections and feelings these evoke. Ongoing discussions between members of our research group continually reviewed the types of observations arising from the data sets to assess and refine category definitions and assure content validity.

The clustered codes and parents and their relationships define themes of the qualitative analysis. In addition, frequency of use can be counted for codes across a data set, and for important subsets (e.g., gender), using conservative counting conventions that are designed to avoid overestimation of the weight of particular opinions. Together, these frequencies describe the relative weighting of issues in participants' collective report. As they are drawn from targeted, intentional samples, rather than from random samples, these frequencies are not subjected to tests for statistical significance. They hypothesize the strength of particular variables and their relationships that may later be tested by random sample surveys or by other means. However, the findings in this study are unusually strong because of near-complete participation by members of each group under study.

Before presenting findings from this study, we provide an overview of the results of our comparative analysis and describe the evolution of our analysis of the student interview data as a result of emergent findings from analysis of the faculty interview data.

Research and implementation of an improved license plate recognition algorithm

I. INTRODUCTION
The license plate recognition system plays an important role in intelligent transportation systems. Automated license plate recognition is one of the major problems at the intersection of computer vision, image processing, and pattern recognition in intelligent transportation applications, and it mainly consists of four parts: license plate location, deflection correction, character segmentation, and character recognition. At present, license plate location algorithms basically fall into the following categories: methods based on image color information, on texture analysis, on edge detection, on morphology, and on genetic algorithms or neural networks [1], [2]. License plate deflection correction methods fall mainly into the following categories: methods based on geometry and texture analysis, on the Hough transform (used to detect straight lines in images), and on edge detection [2]. License plate character segmentation methods mainly include: methods based on texture and projection, on SVMs (support vector machines), on seeded region growing, and Markov-model segmentation based on prior knowledge [3], [4]. License plate character recognition methods mainly include: template matching, feature classification, and classification based on neural networks [2].
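
As an illustration of the edge-detection-plus-morphology family of location methods listed above, the sketch below (Python with OpenCV 4) finds wide, short blobs of dense vertical edges. It is a hedged, minimal sketch, not the improved algorithm this paper proposes; the thresholds, kernel size, and aspect-ratio bounds are illustrative assumptions.

import cv2
import numpy as np

def locate_plate_candidates(bgr_image):
    # Plates show dense vertical strokes; emphasize them with a horizontal gradient.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    sobel = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)
    _, binary = cv2.threshold(sobel, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Close gaps between character strokes into one plate-shaped blob.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        # Typical plates are wide and short; bounds here are rough guesses.
        if 2.0 < aspect < 6.0 and w > 60:
            candidates.append((x, y, w, h))
    return candidates

Candidate regions returned here would then feed the later stages (deflection correction, segmentation, recognition) named in the pipeline above.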

7S Management Knowledge: 30 Questions (7s管理知识30问)

1. What are the 7Ss of management? > Shared Values, Strategy, Structure, Systems, Staff, Style, Skills.
2. How do the 7Ss interact with each other? > The 7Ss are interconnected and interdependent, forming a holistic view of an organization.
3. What is the role of shared values in an organization? > Shared values guide the organization's behavior and decision-making.
4. How does strategy influence the other 6Ss? > Strategy sets the direction for the organization and aligns the other 6Ss.
5. What is the purpose of organizational structure? > Structure defines the relationships and responsibilities within the organization.
6. How do systems support organizational goals? > Systems provide the processes and procedures to achieve organizational objectives.
7. What is the importance of staff in an organization? > Staff are the human resources that carry out the organization's activities.
8. How does leadership style affect organizational performance? > Leadership style influences the way the organization is managed and employees are motivated.
9. Why are skills essential for organizational success? > Skills enable the organization to execute its strategy and achieve its goals.
10. How can the 7Ss be used to improve organizational performance? > By aligning and optimizing the 7Ss, organizations can enhance their effectiveness and efficiency.
11. What are the benefits of using the 7S framework? > The 7S framework provides a comprehensive understanding of an organization and facilitates change management.
12. How can organizations assess their 7Ss? > Organizations can use various tools and techniques to evaluate their 7Ss.
13. What are some common challenges in implementing the 7Ss? > Challenges include resistance to change, lack of alignment, and resource constraints.
14. How can organizations overcome challenges in implementing the 7Ss? > Organizations can overcome challenges through communication, stakeholder involvement, and gradual implementation.
15. What are the implications of the 7Ss for organizational culture? > The 7Ss shape and reflect the organization's culture.
16. How can the 7Ss be used to diagnose organizational problems? > By examining the alignment and effectiveness of the 7Ss, organizations can identify areas for improvement.
17. What is the relationship between the 7Ss and organizational change? > The 7Ss provide a framework for understanding and managing organizational change.
18. How can the 7Ss be used to create a sustainable organization? > By integrating environmental and social considerations into the 7Ss, organizations can promote sustainability.
19. What are some best practices for implementing the 7Ss effectively? > Best practices include stakeholder involvement, continuous monitoring, and flexibility.
20. How can organizations leverage technology to support the 7Ss? > Technology can enhance communication, collaboration, and data analysis for the 7Ss.
21. What is the role of leadership in implementing the 7Ss? > Leadership plays a crucial role in communicating, facilitating, and sustaining the 7Ss.
22. How can the 7Ss be used to promote innovation in an organization? > By fostering a culture of experimentation, risk-taking, and cross-functional collaboration, the 7Ss can support innovation.
23. What are some ethical considerations in implementing the 7Ss? > Organizations must ensure that the 7Ss are implemented in an ethical and responsible manner.
24. How can the 7Ss be used to build a learning organization? > By creating a culture of continuous learning, sharing, and knowledge management, the 7Ss can foster a learning organization.
25. What is the impact of globalization on the 7Ss? > Globalization influences the 7Ss by increasing interconnectedness, competition, and cultural diversity.
26. How can organizations use the 7Ss to adapt to a rapidly changing environment? > By aligning the 7Ss with the external environment, organizations can enhance their adaptability.
27. What are the key performance indicators for measuring the effectiveness of the 7Ss? > KPIs include employee satisfaction, customer satisfaction, financial performance, and innovation metrics.
28. How can organizations create a balanced and aligned 7Ss? > By considering the interdependencies and trade-offs between the 7Ss, organizations can achieve balance and alignment.
29. What is the future of the 7S framework? > The 7S framework continues to evolve and adapt to changes in the business landscape.
30. How can organizations leverage the 7Ss to gain a competitive advantage? > By aligning and optimizing the 7Ss, organizations can differentiate themselves and enhance their competitiveness.

Chinese answer (translated): 1. What areas does 7S management knowledge cover? > Strategy (战略).

Geomodeling_VVA Software Technical Annex (Geomodeling_VVA 软件技术附件)

Annex 1: Technical Introduction to VisualVoxAt™, a Seismic Attribute Visualization and Analysis System

1 Software overview

VisualVoxAt™ (VVA for short) is a geological and geophysical research platform developed by Geomodeling (Canada) for the study of subtle oil and gas reservoirs; it covers seismic attribute extraction and analysis, 3D visualization and interpretation, and conventional structural interpretation.

The software is full-featured, offers a complete set of attributes, and is easy to operate. It provides a variety of seismic attribute extraction and analysis tools, and by combining 3D visual interpretation with seismic attribute analysis it supports research on subtle reservoirs, making it an effective tool for identifying, describing, and evaluating such reservoirs from multiple perspectives.

VVA consists of three main parts: attribute extraction and analysis, 3D visualization, and structural interpretation.

Attribute extraction covers six categories: volume attributes, horizon-surface attributes, along-horizon attributes, interval attributes, stratal-volume (strat-cube) attributes, and filtered attributes. The analysis tools comprise seven main instruments: quantitative attribute calibration, attribute cross-plotting, attribute clustering, inherited classification, principal component analysis, spectral decomposition, and waveform correlation analysis.

These new techniques and methods represent recent achievements in applying geophysical technology to exploration and production and play an important role in the study of subtle reservoirs.

To date they have produced good results in a variety of reservoir exploration projects in China and abroad.

Seismic data volumes, horizons, faults, stratal volumes, well data, and all attribute data computed by the software can be conveniently displayed in 3D; the software also provides convenient well-placement design and 3D perspective functions.

The structural interpretation part of VVA provides convenient interactive interpretation: horizon interpretation offers multiple tracking modes and can be performed interactively in 3D space, on 2D sections, and on base maps; fault interpretation can likewise be done interactively in 3D and on 2D sections, with real-time quality control on the base map.

1.1.1 The seismic attribute visualization and analysis system VisualVoxAt™ has the following characteristics:

1) It provides a variety of seismic attribute computation methods, from surfaces and intervals to volumes.

Its seismic attributes include both structural and stratigraphic attributes, giving users a powerful means of mining effective information from seismic and geological data from multiple angles.

2) It introduces the concept of the stratal volume (strat-cube): built under the constraint of multiple horizons, its construction fully accounts for lateral and spatial variations in deposition.

Stratal slices reflect geological patterns better than time slices.

All attributes and analysis methods can be applied to the stratal volume, making the identification, description, and evaluation of subtle reservoirs more targeted.
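
The idea behind stratal slicing can be sketched in a few lines of NumPy: instead of sampling the amplitude cube at constant time, sample it along surfaces interpolated proportionally between two bounding horizons, so each slice follows the depositional geometry. This is a simplified illustration of the concept only, not VVA's actual construction, and all names and conventions below are our own.

import numpy as np

def stratal_slice(volume, top, base, frac):
    # volume: (nt, ny, nx) amplitude cube sampled in time.
    # top, base: (ny, nx) horizon times in samples, with top <= base everywhere.
    # frac: 0.0 samples along the top horizon, 1.0 along the base.
    nt = volume.shape[0]
    surf = top + frac * (base - top)          # proportional surface between horizons
    i0 = np.clip(np.floor(surf).astype(int), 0, nt - 2)
    w = surf - i0                              # fractional sample for interpolation
    iy, ix = np.indices(top.shape)
    # Linear interpolation in time along the interpolated surface.
    return (1.0 - w) * volume[i0, iy, ix] + w * volume[i0 + 1, iy, ix]

Sweeping frac from 0 to 1 yields a set of slices that track the inferred bedding between the two horizons, which is why such slices tend to reflect geological patterns better than constant-time slices.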

Geometric Modeler

Geometric Modeler Topology: How to Associate Topology With Geometry. Rules Between Topological and Geometric Objects.

Abstract: The topology describes the limitation of a geometry. Hence, topological objects are related to geometric objects within specified rules, which are detailed here.

Contents: Introduction; Representing Geometry (A CATEdgeCurve Represents CATCurves; A CATMacroPoint Represents CATPoints); The Cell Geometry Depends on What It Bounds (What Is Related To a Volume; What Is Related To an Edge; What Is Related To a Vertex); Diagram; Main Steps to Create Cells Related to Geometry; Example: Wire Creation; In Short; References.

Introduction

Topology is a building set for limiting space. Vertices bound edges, edges bound faces, and faces bound volumes. How are these topological entities mapped to geometric entities in order to limit the geometric space?

• a CATMacroPoint corresponds to the geometric support of a vertex,
• a CATEdgeCurve corresponds to the geometric support of an edge,
• a CATSurface corresponds to the geometric support of a face.
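
The vertex/edge/face to point/curve/surface mapping can be pictured with a toy data model. The sketch below is a hypothetical Python illustration of the association rules described above; it is not the CAA C++ API, and the class names merely echo the CAT* objects it names.

from dataclasses import dataclass, field
from typing import List, Tuple, Any

@dataclass
class MacroPoint:
    # Stand-in for a CATMacroPoint: aggregates coincident points (vertex support).
    points: List[Tuple[float, float, float]]

@dataclass
class EdgeCurve:
    # Stand-in for a CATEdgeCurve: aggregates coincident curves (edge support).
    curves: List[Any]

@dataclass
class Vertex:
    support: MacroPoint            # a vertex is carried by a macro point

@dataclass
class Edge:
    support: EdgeCurve             # an edge is carried by an edge curve...
    start: Vertex                  # ...and bounded by vertices
    end: Vertex

@dataclass
class Face:
    surface: Any                   # a face is carried by a surface...
    boundary: List[Edge] = field(default_factory=list)   # ...and bounded by edges

The point of the aggregate types (MacroPoint, EdgeCurve) is that several geometrically coincident points or curves can share one topological cell, which is exactly the role the documentation assigns to CATMacroPoint and CATEdgeCurve.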

Geological and geophysical aspects of the underground CO2 storage

Procedia Earth and Planetary Science 1 (2009) 7–12 /locate/procediaThe 6th International Conference on Mining Science & TechnologyGeological and geophysical aspects of the underground CO 2 storageDubi ński Józef*Central Mining Institute, 40-166 Katowice, PolandAbstractObserved impact of carbon dioxide (CO 2) emissions on the climate changes resulted in significant intensification of the research focused on the development of the technologies, which would enable CO 2 capture from the flu gases and its safe storage in the adequately selected geological formations. The member countries of the European Union (UE-27) worked out special CCS (CO 2 Capture and Storage) directive concerning industrial application of this technology. It must be emphasized, that extremely important and difficult from the technical point of view is its final stage connected with CO 2 storage process itself. The paper presents key geological problems, which may occur during above mentioned stage of CCS technology, it draws also attention on the problem of monitoring the locations selected for CO 2 storage. It points out significant role of geophysical methods for effective application in this domain.Keywords : CO 2 Injection; CO 2 Capture and Storage; CCS technology; CO 2 monitoring1. IntroductionIt is believed that the climate changes on earth, observed during last decade or so have direct impact on more frequent occurrence of the extreme phenomenon in different places of the earth. Theirs’ symptoms are: rising the sea levels, occurrence of the extreme meteorological phenomenon, glaciers’ regression but also changes in the productivity and quality of the crops and many more. They result in the concern of the society expressed in the mass media by the watchword: “intensification of the global warming”. Predominant is the opinion, that the main reason for the occurrence of above is the activity of the human being, which leads to the increase of the concentration of the gases colloquially defined as the greenhouse gases in the atmosphere [1]. Above effect is recognized through their continuously increasing emissions due to dynamic increase of the combustion of such hydrocarbons like coal, oil and natural gas. It can not be also neglected the influence of reduced sequestration of coal through the flora due to the deforesting of substantial territories of the globe and emissions of methane gas coming from the farming. In the years 1906 – 2005 average increase of the air temperature measured in the vicinity of the earth surface reached 0.74 ±0.18o C, and in Europe almost 10o C. According to the forecasts prepared by the Intergovernmental Panel on Climate Change – IPCC continuation of intensive activities of the people in XXI century can result in rising the average global temperature of the earth surface from 1.1 even up to 6.40C and intensification of above mentioned extreme phenomenon. For this reason in 1997 governments of many countries signed and ratified Kioto Protocol, aiming at reduction of greenhouse gases emissions. There is presently global political and public debate, especially intensive in the European Union, which concerns the activities driving to the reduction of the climate warming up rate [1].Another very important aspect is the analysis of the impact of above activities on the individual economies and the 187-/09/$– See front matter © 2009 Published by Elsevier B.V .doi:10.1016/j.pro .2009.09.0085220eps 4Procedia Earth and Planetary Sciencesocieties. 
There is also an opinion opposite to the above, which argues that the observed climate changes are a natural process driven by the mutual interaction of the earth surface and its atmosphere, which is warmed by solar radiation of cyclically variable intensity, and that they cannot be attributed exclusively to humans. The opponents adduce mainly evidence from geological research, proving that periodical changes of the climate were and still are a fundamental feature of the earth's climate throughout the whole history of its evolution. There is, however, no disagreement between those two groups about the fact that emissions and concentrations of greenhouse gases, and especially of CO2, have partially increased due to human activities. That is why it seems reasonable to undertake all sorts of activities aimed at reducing these emissions, based on the principles of sustainable development and the mitigation of the eventual results of the present global warming. A significant role in the development of technologies reducing the volume of emitted CO2 will be played by the technology of its capture and storage in suitably selected geological formations (CCS – CO2 Capture and Storage) [8][9]. The process must be safe for the geological environment and for the natural environment on the ground surface, which will require the application of advanced monitoring. That is why geological and geophysical aspects connected with these key processes will play a considerable role in the implementation of the CCS technology.

2. The heart of the global warming effect

The global warming effect is the phenomenon caused by the ability of the circumterrestrial atmosphere to let in the major part of short-wave solar radiation, with wavelengths of 0.1–4 μm, while stopping the long-wave radiation of the earth, with wavelengths of 4–80 μm. As a result, the earth's surface and the lower layers of its atmosphere are warmer. Research in this domain indicates that if the earth were devoid of an atmosphere, the temperature of its surface would be at the level of −18 °C, whereas presently its average temperature is +15 °C. Without this effect life on earth would not have been established and could not have evolved. The layer of atmosphere thus creates a kind of structure similar to the roof of a greenhouse, which lets visible light in and absorbs the energy going out by means of infrared radiation, keeping the heat inside. That is why the warming effect is also called the greenhouse effect. The point of the problem is that the so-called greenhouse gases accumulated in the layers of the circumterrestrial atmosphere intensify the natural warming effect, which results in an increase of the earth's temperature. There are about 30 different greenhouse gases. The most important are carbon dioxide, methane, nitrogen oxides, chlorofluorocarbons, ozone and also water vapour. Fossil energy fuels – hard and brown coal, natural gas and oil – emit different gases with different intensity in the process of their combustion, including particularly carbon dioxide. The emission is the highest during the combustion of brown and hard coal. Thus the power industry based on coal has to face the serious challenge of developing technologies that reduce the emissions of CO2 and other gaseous substances [1].

3. Role of coal as a source of energy in the global economy

Coal is one of the most important primary energy carriers in the global economy.
It takes the predominant place as a source for electricity production. The forecasts of the International Energy Agency (IEA) presented in Table 1 confirm the global increase in coal demand, which in the years 2000–2007 reached 31%.

Table 1. Global coal consumption (Mtoe)

Years    World     EU      OECD
2000     2364.3    316.2   1124.0
2001     2384.8    316.3   1114.5
2002     2437.2    314.9   1120.6
2003     2632.8    325.2   1151.5
2004     2805.5    320.1   1160.1
2005     2957.0    311.3   1170.3
2006     3090.1    318.9   1169.7
2007     3177.5    317.9   1184.3

Source: BP Statistical Review of World Energy, June 2008 [2]

The above status of coal as an energy carrier has many reasons:
- a more even geographical location of the coal resources in the world compared with other energy carriers,
- clearly bigger coal resources and, resulting from this, their higher sufficiency (globally for another 200 years),
- higher security of stable deliveries of coal fuel,
- lower cost of producing electricity from coal compared with gas and oil,
- the possibility of further increasing the economic efficiency and reducing the inconvenience of coal for the natural environment.

These reasons make coal, used for centuries as a primary source of energy, a lastingly important energy carrier in the global energy economy. Coal plays a key role today in the fuel-energy balance of countries such as China, India, the USA, Japan, the Republic of South Africa, Russia, Poland, Germany, Australia and many others. Some of these countries are the leading coal producers, others are important coal consumers. The EU countries, including Poland, are the third largest coal consumers in the world. It must be emphasized that EU coal production can fulfill only 57% of its demand. Poland is the largest European producer of hard coal, while Germany leads where brown coal is concerned.

Unfortunately coal is recognized in the EU as a "dirty fuel" from the ecological point of view, and fulfilling ever more restrictive environmental requirements is becoming a big challenge both for the mining and the energy sectors. Special attention is being paid to the reduction of CO2 emissions, CO2 being recognized as the major greenhouse gas. One of the methods to reduce its emission is the development and implementation of the CCS – CO2 Capture and Storage – technologies.

4. Characteristics of the CCS technology

The CCS technology, presently at a stage of intensive development, is meant to be an effective tool enabling the permanent and safe storage of captured carbon dioxide in deep geological formations. Its point is to separate and capture CO2 from the stream of flue gases released during different industrial processes, then to transport it and store it in appropriate geological formations [9]. The key stages of the CCS technology are presented schematically in Fig. 1.

Fig. 1. Key stages of the CCS technology

The CO2 storage locations can be depleted natural gas or oil fields, unmineable coal seams and saline aquifers of water-bearing sandstones [8]. The last-mentioned have the largest storage capacity and are recognized as the most promising environment for effective underground CO2 storage. The mechanism of underground CO2 storage benefits from the fact that the density of CO2 grows significantly with the depth of injection; below a critical depth, which is in most cases about 800 m, it becomes a supercritical fluid. It then has a much smaller volume and can more easily fill up the pore space of underground reservoirs.
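The roughly 800 m threshold can be illustrated with a back-of-the-envelope check, which is not part of the paper: assuming a hydrostatic pressure gradient and a typical continental geothermal gradient, conditions at that depth just exceed the critical point of CO2 (7.38 MPa, 31.1 °C).

# Back-of-the-envelope check (not from the paper) of why ~800 m marks the
# supercritical threshold for injected CO2. Assumes a hydrostatic pressure
# gradient (fresh water) and a typical geothermal gradient; real basins vary.
RHO_WATER = 1000.0       # kg/m^3
G = 9.81                 # m/s^2
SURFACE_T = 10.0         # degC, assumed mean surface temperature
GEOTHERMAL_GRAD = 0.03   # degC/m, typical continental gradient

CO2_CRITICAL_P = 7.38e6  # Pa (critical point of CO2: 7.38 MPa, 31.1 degC)
CO2_CRITICAL_T = 31.1    # degC

depth = 800.0  # m
pressure = RHO_WATER * G * depth                    # ~7.85e6 Pa, just above critical
temperature = SURFACE_T + GEOTHERMAL_GRAD * depth   # ~34 degC, above critical

print(f"P = {pressure/1e6:.2f} MPa (critical {CO2_CRITICAL_P/1e6} MPa)")
print(f"T = {temperature:.1f} degC (critical {CO2_CRITICAL_T} degC)")
print("supercritical:", pressure > CO2_CRITICAL_P and temperature > CO2_CRITICAL_T)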
There are four principal mechanisms which provide isolation of CO2 in deep geological formations [9]; they develop over different time horizons. The first is structural isolation, connected with the existence of a non-permeable rock overburden, which makes the migration of CO2 from the storage place impossible. The second mechanism consists in isolating CO2 by capillary forces inside the pores of the rock formation. The third mechanism is solution isolation, consisting in dissolving CO2 in the formation water. The fourth mechanism is mineral isolation, consisting in the chemical reaction of dissolved CO2 with the rock environment, which results in the creation of new mineral compounds.

5. Geological conditions for underground CO2 storage

The selection of an optimal geological structure for CO2 storage must secure both a satisfactory storage capacity and safety with reference to the geological environment underground and the natural environment on the ground surface. Also very important are economic aspects, including the type of applied technology and the distance of the CO2-emitting source from the storage place, which determines the cost of transportation, as well as legal and social aspects. From the geological point of view the principal factors to be analyzed are the geological, geothermal and hydrogeological conditions. The geological structure must fulfill several conditions concerning depth, volume, thickness of the isolating overburden, tightness of the reservoir, permeability and porosity of the rocks, which determine its storage capacity for CO2, hydrogeological connections and many others [4][8]. Locations which eliminate geological structures as places for CO2 storage are protected main underground water reservoirs, rock formations which react with CO2, and those which contain important deposits of various mineral resources.

Safety criteria for underground CO2 storage also cover the detailed examination of the potential geological structure with the aim of identifying its eventual escape paths [6]. Leakage can be caused by leaking overburden layers, by the occurrence of crack and fracture systems and faulting zones, as well as by existing potable water intakes or completed oil or gas wells. CO2 leakages from underground storage reservoirs may also happen through leaks in the injection and monitoring wells, and due to other circumstances. Figure 2 shows the main potential paths of CO2 escape from a storage place [9]. All these aspects are covered by the relevant EU Directive [5].

Fig. 2. Example of potential leakage scenarios. Source: CO2 GeoNet European Network of Excellence

6. Geophysical exploration and monitoring of CO2 storage places

The key role in the exploration of geological structures, considering their eventual suitability as CO2 storage locations, is played by geophysical methods and especially by the reflection seismic method.
Thanks to innovative techniques of transmission and presentation of the results of seismic tests, an explored structure can be properly assessed with special focus on:
- determination of the geometry of the sedimentary series of interest, especially the porous reservoir series and the sealing clay layers,
- identification of eventual faulting zones crossing the sedimentary series, which can be potential paths for CO2 leakage,
- identification of optimal reservoir facies considering their porosity and permeability.

Seismic data represent a significant value for further work connected with elaborating the model of the analyzed geological structure and for assessing its volume. Based on them, decisions concerning the location of exploratory and exploitation wells are made.

Ensuring the safe storage of CO2, i.e. the identification of its eventual leakage paths, is the key task of the operator who obtained the concession for its storage. An extremely important role is therefore played by the monitoring both of the injection installation during its operation and of the storage location together with its surroundings. Monitoring should be performed not only during the injection but also after finishing it [7]. The subject of monitoring should first of all be the surface environment, but also, periodically, the underground geological environment. The principal methods of surface monitoring are first of all geochemical methods consisting in direct measurements of the CO2 concentration in the air, in the soil and in the soil water. Information on eventual surface deformations resulting from CO2 storage can also be obtained from satellite and aerial photographs.

An extremely important role in the monitoring of CO2 storage places is played by the group of so-called indirect methods, where measurements of many physical parameters allow assessing the processes taking place in the rock environment. Among them, the dominant position is held by geophysical methods, especially seismic, electromagnetic and gravimetric methods. Images of the rock environment made by means of these methods at different times give the basis for analyzing the changes which take place in the structure of the reservoir rocks under the influence of CO2 storage. Figure 3 shows selected results of measurements made with reflection seismics, conducted since 1996 in the Norwegian gas field "Sleipner" in the North Sea, where CO2 is being stored in the porous sandstones of the Utsira geological formation [3]. As one can see, there are visible changes caused by the CO2 injection and storage, confirming the effectiveness of the process.

Fig. 3. Seismic imaging to monitor the CO2 plume at the Sleipner pilot; bright seismic reflections indicate thin layers of CO2 (panels: before injection, and after 2.35 Mt, 4.36 Mt and 5.0 Mt of injected CO2)

7. Conclusions

1. The CCS – CO2 capture and storage – technology is one of the options for the reduction of CO2 emissions. Its key stage is CO2 storage in suitable geological formations. This process requires a good examination of geological structures, the definition of the reservoir parameters of the selected structures, and an assessment of the risk connected with the CO2 storage.

2. The geological aspect of the process requires solving many specialized tasks defined in the CCS directive in order to make it economically feasible and safe for the natural environment on the ground surface, including the citizens, and for the geological environment.
3. An important role during the process of CO2 injection will be played by its monitoring, and later by locating the injected CO2 plume, which requires the application of appropriate monitoring methods enabling an up-to-date assessment of the safety and of the risk connected with an eventual leakage of CO2.

4. A significant role, both in the examination of potential geological structures for CO2 storage purposes and in their underground monitoring later on, belongs to the geophysical methods, especially considering their forecasting and technical capabilities.

5. In case of its industrial-scale application, the CCS technology will have to face new challenges in the scientific-research domain connected with its further development, and also in the domain of educating new technical specialists for the companies implementing this technology.

References

[1] A Vision for Zero Emission Fossil Fuel Power Plants. Report ETP ZEP, 2006.
[2] BP Statistical Review of World Energy, 2008.
[3] Chadwick, Recent time-lapse seismic data show no indication of leakage at the Sleipner CO2 injection site. Proceedings of the 7th International Conference on Greenhouse Gas Technologies, Vancouver, I (2005) 653-662.
[4] J. Dubiński and H.E. Solik, Uwarunkowania geologiczne dla składowania dwutlenku węgla. Uwarunkowania wdrożenia zero-emisyjnych technologii węglowych w energetyce. Praca zbiorowa pod red. M. Ściążko. Wyd. IChPW, Zabrze, 2007.
[5] Directive of the European Parliament and of the Council on the geological storage of carbon dioxide and amending Council Directive 85/337/EEC, Directives 2000/60/EC, 2001/80/EC, 2004/35/EC, 2006/12/EC, 2008/1/EC and Regulation (EC) No 1013/2006. Brussels, 2009.
[6] J. Rogut, M. Steen, G. DeSanti and J. Dubiński, Technological, Environmental and Regulatory Issues Related to CCS and UCG. Clean Coal Technology Conference "Geological Aspects of Underground Carbon Storage and Processing", 2008.
[7] R. Tarkowski, B. Uliasz-Misiak and E. Szarawarska, Monitoring podziemnego składowania CO2. Gospodarka Surowcami Mineralnymi, 2005.
[8] R. Tarkowski, Geologiczna sekwestracja CO2. Studia, Rozprawy, Monografie, 132, Wyd. IGSMiE PAN, Kraków, 2005.
[9] What does CO2 geological storage really mean? Ed. CO2 GeoNet European Network of Excellence, 2008.

Incorporating level set methods in Geographical Information Systems (GIS) for land-surface process modeling

Advances in Geosciences, 4, 17–22, 2005
SRef-ID: 1680-7359/adgeo/2005-4-17
European Geosciences Union
© 2005 Author(s). This work is licensed under a Creative Commons License.

Incorporating level set methods in Geographical Information Systems (GIS) for land-surface process modeling

D. Pullar
Geography Planning and Architecture, The University of Queensland, Brisbane QLD 4072, Australia
Correspondence to: D. Pullar (d.pullar@.au)

Received: 1 August 2004 – Revised: 1 November 2004 – Accepted: 15 November 2004 – Published: 9 August 2005

Abstract. Land-surface processes include a broad class of models that operate at a landscape scale. Current modelling approaches tend to be specialised towards one type of process, yet it is the interaction of processes that is increasingly seen as important to obtain a more integrated approach to land management. This paper presents a technique and a tool that may be applied generically to landscape processes. The technique tracks moving interfaces across landscapes for processes such as water flow, biochemical diffusion, and plant dispersal. Its theoretical development applies a Lagrangian approach to motion over a Eulerian grid space by tracking quantities across a landscape as an evolving front. An algorithm for this technique, called the level set method, is implemented in a geographical information system (GIS). It fits with a field data model in GIS and is implemented as operators in map algebra. The paper describes an implementation of the level set methods in a map algebra programming language, called MapScript, and gives example program scripts for applications in ecology and hydrology.

1 Introduction

Over the past decade there has been an explosion in the application of models to solve environmental issues. Many of these models are specific to one physical process and often require expert knowledge to use. Increasingly, generic modeling frameworks are being sought to provide analytical tools to examine and resolve complex environmental and natural resource problems. These systems consider a variety of land condition characteristics, interactions and driving physical processes. Variables accounted for include climate, topography, soils, geology, land cover, vegetation and hydro-geography (Moore et al., 1993). Physical interactions include processes for climatology, hydrology, topographic land-surface/sub-surface fluxes and biological/ecological systems (Sklar and Costanza, 1991). Progress has been made in linking model-specific systems with tools used by environmental managers, for instance geographical information systems (GIS). While this approach, commonly referred to as loose coupling, provides a practical solution, it still does not improve the scientific foundation of these models nor their integration with other models and related systems, such as decision support systems (Argent, 2003). The alternative approach is tightly coupled systems, which build functionality into a system or interface to domain libraries from which a user may build custom solutions using a macro language or program scripts. The approach supports integrated models through interface specifications which articulate the fundamental assumptions and simplifications within these models.
The problem is that there are no environmental modelling systems which are widely used by engineers and scientists that offer this level of interoperability, and the more commonly used GIS systems do not currently support space and time representations and operations suitable for modelling environmental processes (Burrough, 1998; Sui and Maggio, 1999). Providing a generic environmental modeling framework for practical environmental issues is challenging. It does not exist now, despite an overwhelming demand, because there are deep technical challenges to building integrated modeling frameworks in a scientifically rigorous manner. It is this challenge this research addresses.

1.1 Background for approach

The paper describes a generic environmental modeling language integrated with a Geographical Information System (GIS) which supports spatial-temporal operators to model physical interactions occurring in two ways: the trivial case where interactions are isolated to a location, and the more common and complex case where interactions propagate spatially across landscape surfaces. The programming language has a strong theoretical and algorithmic basis. Theoretically, it assumes a Eulerian representation of state space, but propagates quantities across landscapes using Lagrangian equations of motion. In physics, a Lagrangian view focuses on how a quantity (water volume or particle) moves through space, whereas an Eulerian view focuses on a local fixed area of space and accounts for quantities moving through it. The benefit of this approach is that an Eulerian perspective is eminently suited to representing the variation of environmental phenomena across space, but it is difficult to conceptualise solutions for the equations of motion and it has computational drawbacks (Press et al., 1992). On the other hand, the Lagrangian view is often not favoured because it requires a global solution that makes it difficult to account for local variations, but it has the advantage of solving equations of motion in an intuitive and numerically direct way. The research addresses this dilemma by adopting a novel approach from the image processing discipline that uses a Lagrangian approach over an Eulerian grid. The approach, called level set methods, provides an efficient algorithm for modeling a natural advancing front in a host of settings (Sethian, 1999).

Fig. 1. Shows (a) a propagating interface parameterised by differential equations; (b) interface fronts have variable intensity and may expand or contract based on field gradients and the driving process.
The reason the method works well compared with other approaches is that the advancing front is described by equations of motion (Lagrangian view), but computationally the front propagates over a vector field (Eulerian view). Hence, we have a very generic way to describe the motion of quantities, but can explicitly solve their advancing properties locally as propagating zones. This research adapts the technique for modeling the motion of environmental variables across time and space. Specifically, it adds new data models and operators to a geographical information system (GIS) for environmental modeling. This is considered to be a significant research imperative in spatial information science and technology (Goodchild, 2001). The main focus of this paper is to evaluate if the level set method (Sethian, 1999) can:

– provide a theoretically and empirically supportable methodology for modeling a range of integral landscape processes,
– provide an algorithmic solution that is not sensitive to process timing, and is computationally stable and efficient as compared to conventional explicit solutions to diffusive process models,
– be developed as part of a generic modelling language in GIS to express integrated models for natural resource and environmental problems.

The outline for the paper is as follows. The next section describes the theory for spatial-temporal processing using level sets. Section 3 describes how this is implemented in a map algebra programming language. Two application examples are given – an ecological and a hydrological example – to demonstrate the use of operators for computing reactive-diffusive interactions in landscapes. Section 4 summarises the contribution of this research.

2 Theory

2.1 Introduction

Level set methods (Sethian, 1999) have been applied in a large collection of applications including physics, chemistry, fluid dynamics, combustion, material science, fabrication of microelectronics, and computer vision. Level set methods compute an advancing interface using an Eulerian grid and the Lagrangian equations of motion. They are similar to cost distance modeling used in GIS (Burrough and McDonnell, 1998) in that they compute the spread of a variable across space, but the motion is based upon partial differential equations related to the physical process. The advancement of the interface is computed through time along a spatial gradient, and it may expand or contract in its extent. See Fig. 1.

2.2 Theory

The advantage of the level set method is that it models motion along a state-space gradient. Level set methods start with the equation of motion, i.e. an advancing front with velocity F is characterised by an arrival surface T(x, y). Note that F is a velocity field in a spatial sense. If F were constant this would result in an expanding series of circular fronts, but for different values in a velocity field the front will have a more contorted appearance, as shown in Fig. 1b. The motion of this interface is always normal to the interface boundary, and its progress is regulated by several factors:

F = f(L, G, I)    (1)

where L denotes local properties that determine the shape of the advancing front, G global properties related to governing forces for its motion, and I independent properties that regulate and influence the motion. If the advancing front is modeled strictly in terms of the movement of entity particles, then a straightforward velocity equation describes its motion:

|∇T| F = 1,  given T_0 = 0    (2)

where the arrival function T(x, y) is a travel cost surface, and T_0 is the initial position of the interface. Instead we use level sets to describe the interface as a complex function. The level set function φ is an evolving front consistent with the underlying viscosity solution defined by partial differential equations. This is expressed by the equation:

φ_t + F |∇φ| = 0,  given φ(x, y, t = 0)    (3)

where φ(x, y, t) is the interface function over the time period t_0..t_n, and ∇φ denotes the spatial derivatives of the viscosity equations. The Eulerian view over a spatial domain imposes a discretisation of space, i.e. the raster grid, which records changes in value z. Hence the level set function becomes φ(x, y, z, t) to describe an evolving surface over time. Further details are given in Sethian (1999) along with efficient algorithms. The next section describes the integration of the level set methods with GIS.
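Before moving to the GIS implementation, a minimal numerical sketch may help make Eq. (3) concrete. The following is not from the paper (which uses MapScript); it is a plain NumPy sketch with an illustrative grid, time step and speed field, using the first-order upwind scheme described in Sethian (1999):

# Minimal level-set front propagation after Eq. (3): phi_t + F|grad phi| = 0.
# A sketch only, not the paper's MapScript implementation; grid size, time
# step and the speed field F are illustrative assumptions.
import numpy as np

n, dx, dt = 100, 1.0, 0.4
F = np.ones((n, n))           # spatially varying speed field (here constant)

# Signed distance to a circle: negative inside, zero on the initial front.
y, x = np.mgrid[0:n, 0:n]
phi = np.sqrt((x - n/2)**2 + (y - n/2)**2) - 10.0

def upwind_gradient_norm(p, dx):
    # First-order Godunov upwind |grad phi|, valid for F >= 0.
    dxm = (p - np.roll(p, 1, axis=1)) / dx    # backward differences
    dxp = (np.roll(p, -1, axis=1) - p) / dx   # forward differences
    dym = (p - np.roll(p, 1, axis=0)) / dx
    dyp = (np.roll(p, -1, axis=0) - p) / dx
    return np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                   np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)

for step in range(50):        # evolve the front outward
    phi = phi - dt * F * upwind_gradient_norm(phi, dx)

front = np.abs(phi) < 0.5     # cells near the zero level set
print("front cells:", int(front.sum()))

Note the time step satisfies the usual CFL-type restriction F·dt/dx ≤ 1, mirroring the stability concerns discussed for the map algebra operators below.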
3 Map algebra modelling

3.1 Map algebra

Spatial models are written in a map algebra programming language. Map algebra is a function-oriented language that operates on four implicit spatial data types: point, neighbourhood, zonal and whole landscape surfaces. Surfaces are typically represented as a discrete raster where a point is a cell, a neighbourhood is a kernel centred on a cell, and zones are groups of cells. Common examples of raster data include terrain models, categorical land cover maps, and scalar temperature surfaces. Map algebra is used to program many types of landscape models ranging from land suitability models to mineral exploration in the geosciences (Burrough and McDonnell, 1998; Bonham-Carter, 1994).

The syntax for map algebra follows a mathematical style with statements expressed as equations. These equations use operators to manipulate spatial data types for points and neighbourhoods. Expressions that manipulate a raster surface may use a global operation or alternatively iterate over the cells in a raster. For instance the GRID map algebra (Gao et al., 1993) defines an iteration construct, called docell, to apply equations on a cell-by-cell basis. This is trivially performed on columns and rows in a clockwork manner. However, for environmental phenomena there are situations where the order of computations has a special significance, for instance processes that involve spreading or transport acting along environmental gradients within the landscape.

Fig. 2. Spatial processing orders for raster
Therefore special control needs to be exercised on the order of execution. Burrough (1998) describes two extra control mechanisms for diffusion and directed topology. Figure 2 shows the three principal types of processing orders:

– row scan order, governed by the clockwork lattice structure,
– spread order, governed by the spreading or scattering of a material from a more concentrated region,
– flow order, governed by advection, which is the transport of a material due to velocity.

Our implementation of map algebra, called MapScript (Pullar, 2001), includes a special iteration construct that supports these processing orders. MapScript is a lightweight language for processing raster-based GIS data using map algebra. The language parser and engine are built as a software component to interoperate with the IDRISI GIS (Eastman, 1997). MapScript is built in C++ with a class hierarchy based upon a value type. Variants for value types include numerical, boolean, template, cells, or a grid. MapScript supports combinations of these data types within equations with basic arithmetic and relational comparison operators. Algebra operations on templates typically result in an aggregate value assigned to a cell (Pullar, 2001); this is similar to the convolution integral in image algebras (Ritter et al., 1990). The language supports iteration to execute a block of statements in three ways: a) a docell construct to process a raster in row scan order, b) a dospread construct to process a raster in spread order, c) a doflow construct to process a raster in flow order.

while (time < 100)
  dospread
    pop = pop + (diffuse(kernel * pop))
    pop = pop + (r * pop * dt * (1 - (pop / K)))
  enddo
end

where the diffusive constant is stored in the kernel.

Fig. 3. Map algebra script and convolution kernel for population dispersion. The variable pop is a raster; r, K and D are constants; dt is the model time step; and the kernel is a 3×3 template. It is assumed a time step is defined and the script is run in a simulation. The first line contained in the nested cell processing construct (i.e. dospread) is the diffusive term and the second line is the population growth term.
Examples are given in subsequent sections. Process models will also involve a timing loop which may be handled as a general while(<condition>)..end construct in MapScript, where the condition expression includes a system time variable. This time variable is used in a specific fashion, along with a system time step, by certain operators – namely diffuse() and fluxflow(), described in the next section – to model diffusion and advection as a time-evolving front. The evolving front represents quantities such as vegetation growth or surface runoff.

3.2 Ecological example

This section presents an ecological example based upon plant dispersal in a landscape. The population of a species follows a controlled growth rate and at the same time spreads across landscapes. The theory of the rate of spread of an organism is given in Tilman and Kareiva (1997). The area occupied by a species grows log-linearly with time. This may be modelled by coupling a spatial diffusion term with an exponential population growth term; the combination produces the familiar reaction-diffusion model. A simple population growth model is used where the reaction term considers one population controlled by births and mortalities:

dN/dt = r · N (1 − N/K)    (4)

where N is the size of the population, r is the rate of change of population given in terms of the difference between birth and mortality rates, and K is the carrying capacity. Further discussion of population models can be found in Jørgensen and Bendoricchio (2001). The diffusive term spreads a quantity through space at a specified rate:

du/dt = D d²u/dx²    (5)

where u is the quantity, which in our case is population size, and D is the diffusive coefficient. The model is operated as a coupled computation. Over a discretized space, or raster, the diffusive term is estimated using a numerical scheme (Press et al., 1992). The distance over which diffusion takes place in time step dt is minimally constrained by the raster resolution. For a stable computational process the following condition must be satisfied:

2D dt / dx² ≤ 1    (6)

This basically states that, to account for the diffusive process, the term 2D/dx must be less than the velocity dx/dt of the advancing front. This would not be difficult to compute if D were constant, but it is problematic if D varies with landscape conditions. This problem may be overcome by progressing along a diffusive front over the discrete raster based upon distance rather than being constrained by the cell resolution. The processing and the diffusive operator are implemented in a map algebra programming language. The code fragment in Fig. 3 shows a map algebra script for a single time step of the coupled reactive-diffusion model for population growth. The operator of interest in the script shown in Fig. 3 is the diffuse operator. It is assumed that the script is run with a given time step. The operator uses a system time step which is computed to balance the effect of process errors with efficient computation. With knowledge of the time step, the iterative construct applies an appropriate distance propagation such that the condition in Eq. (6) is not violated. The level set algorithm (Sethian, 1999) is used to do this in a stable and accurate way. As a diffusive front propagates through the raster, a cost distance kernel assigns the proper time to each raster cell. The time assigned to the cell corresponds to the minimal cost it takes to reach that cell. Hence cell processing is controlled by propagating the kernel outward at a speed adaptive to the local context rather than meeting an arbitrary global constraint.
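As a cross-check of the model just described, here is a minimal sketch – again not the paper's MapScript, and with illustrative values for r, K, D, dx and dt – of one explicit implementation of the coupled reaction-diffusion computation of Eqs. (4)–(5), with dt chosen to respect the stability condition of Eq. (6):

# Sketch of one explicit scheme for the coupled reaction-diffusion model of
# Eqs. (4)-(5); r, K, D, dx are illustrative assumptions and dt is chosen
# so that 2*D*dt/dx**2 <= 1 (Eq. 6).
import numpy as np

r, K, D = 0.5, 100.0, 1.0
dx = 1.0
dt = 0.4 * dx**2 / (2 * D)    # satisfies the stability condition of Eq. (6)

pop = np.zeros((50, 50))
pop[25, 25] = 10.0            # initial colony

def laplacian(u, dx):
    # 5-point finite-difference Laplacian with reflecting (no-flux) edges
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1] +
            up[1:-1, :-2] + up[1:-1, 2:] - 4 * u) / dx**2

for step in range(500):
    pop = pop + dt * D * laplacian(pop, dx)    # diffusive term, Eq. (5)
    pop = pop + dt * r * pop * (1 - pop / K)   # logistic growth, Eq. (4)

print("occupied cells:", int((pop > 1.0).sum()))

Unlike the MapScript diffuse operator, this sketch keeps a fixed global time step; the point of the level set implementation in the paper is precisely to avoid that restriction when D varies across the landscape.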
3.3 Hydrological example

This section presents a hydrological example based upon the surface dispersal of excess rainfall across the terrain. The movement of water is described by the continuity equation:

∂h/∂t = e_t − ∇·q_t    (7)

where h is the water depth (m), e_t is the rainfall excess (m/s) and q_t is the discharge (m/hr) at time t. Discharge is assumed to have steady uniform flow conditions, and is determined by Manning's equation:

q_t = v_t h_t = (1/n) h_t^(5/3) s^(1/2)    (8)

where v_t is the flow velocity (m/s), h_t is the water depth, and s is the surface slope (m/m). An explicit method of calculation is used to compute velocity and depth over raster cells, and the equations are solved at each time step. A conservative form of a finite difference method solves for q_t in Eq. (7). To simplify the discussion we describe quasi-one-dimensional equations for the flow problem. The actual numerical computations are normally performed on an Eulerian grid (Julien et al., 1995). Finite-element approximations are made to solve the above partial differential equations for the one-dimensional case of flow along a strip of unit width. This leads to a coupled model with one term to maintain the continuity of flow and another term to compute the flow. In addition, all calculations must progress from an uphill cell to the downslope cell. This is implemented in map algebra by an iteration construct, called doflow, which processes a raster in flow order. Flow distance is measured in cell size Δx per unit length. One strip is processed during a time interval Δt (Fig. 4).

Fig. 4. Computation of the current cell (x + Δx, t + Δt)

The conservative solution for the continuity term, using a first order approximation of Eq. (7), is derived as:

h_(x+Δx, t+Δt) = h_(x+Δx, t) − (q_(x+Δx, t) − q_(x, t)) Δt/Δx    (9)

where the inflow q_(x,t) and outflow q_(x+Δx,t) are calculated in the second term using Eq. (8) as:

q_(x,t) = v_(x,t) · h_(x,t)    (10)

The calculations approximate discharge from the previous time interval. Discharge is dynamically determined within the continuity equation by the water depth. The rate of change in the state variables of Eq. (9) needs to satisfy the stability condition v·Δt/Δx ≤ 1 to maintain numerical stability. The physical interpretation of this is that a finite volume of water would flow across and out of a cell within the time step Δt. Typically the cell resolution is fixed for the raster, and adjusting the time step requires restarting the simulation cycle. Flow velocities change dramatically over the course of a storm event, and it is problematic to set an appropriate time step which is efficient and yields a stable result. The hydrological model has been implemented in a map algebra programming language (Pullar, 2003). To overcome the problem mentioned above we have added high level operators to compute the flow as an advancing front over a landscape. The time step advances this front adaptively across the landscape based upon the flow velocity. The level set algorithm (Sethian, 1999) is used to do this in a stable and accurate way. The map algebra script is given in Fig. 5.

while (time < 120)
  doflow(dem)
    fvel = 1/n * pow(depth, m) * sqrt(grade)
    depth = depth + (depth * fluxflow(fvel))
  enddo
end

Fig. 5. Map algebra script for excess rainfall flow computed over a 120 minute event. The variables depth and grade are rasters, fvel is the flow velocity, and n and m are constants in Manning's equation. It is assumed a time step is defined and the script is run in a simulation. The first line in the nested cell processing (i.e. doflow) computes the flow velocity and the second line computes the change in depth from the previous value plus any net change (inflow – outflow) due to the velocity flux across the cell.

The important operator is the fluxflow operator. It computes the advancing front for water flow across a DEM by hydrological principles, and computes the local drainage flux rate for each cell. The flux rate is used to compute the net change in a cell in terms of flow depth over an adaptive time step.
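For comparison with the MapScript version in Fig. 5, the following sketch implements the explicit quasi-one-dimensional scheme of Eqs. (7)–(10) with a fixed time step; the slope, roughness, rainfall excess and time step are illustrative assumptions, and unlike the fluxflow operator the time step here is not adaptive:

# Sketch (not the paper's MapScript) of the explicit 1-D kinematic wave of
# Eqs. (7)-(10) along a strip of unit width; all parameter values are
# illustrative assumptions and the fixed dt must keep v*dt/dx <= 1.
import numpy as np

n_cells = 100
dx = 10.0                    # m, cell size
dt = 1.0                     # s, fixed time step
n_manning = 0.05             # Manning roughness coefficient
slope = np.full(n_cells, 0.01)
rain_excess = 1e-5           # m/s of excess rainfall
depth = np.zeros(n_cells)

for step in range(7200):     # a 120 minute event
    velocity = (1.0 / n_manning) * depth**(2.0/3.0) * np.sqrt(slope)  # Eq. (8)
    q = velocity * depth                                              # Eq. (10)
    inflow = np.concatenate(([0.0], q[:-1]))   # upstream cell feeds downstream
    depth = depth + dt * (rain_excess - (q - inflow) / dx)            # Eq. (9)
    depth = np.maximum(depth, 0.0)

print(f"outlet discharge: {q[-1]:.6f} m^2/s per unit width")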
4 Conclusions

The paper has described an approach to extend the functionality of tightly coupled environmental models in GIS (Argent, 2004). A long standing criticism of GIS has been its inability to handle dynamic spatial models. Other researchers have also addressed this issue (Burrough, 1998). The contribution of this paper is to describe how level set methods are: i) an appropriate scientific basis, and ii) able to perform stable time-space computations for modelling landscape processes. The level set method provides the following benefits:

– it more directly models the motion of spatial phenomena and may handle both expanding and contracting interfaces,
– it is based upon differential equations related to the spatial dynamics of physical processes.

Despite the potential for using level set methods in GIS and land-surface process modeling, there are no commercial or research systems that use this method. Commercial systems such as GRID (Gao et al., 1993), and research systems such as PCRaster (Wesseling et al., 1996), offer flexible and powerful map algebra programming languages. But operations that involve reaction-diffusive processing are specific to one context, such as groundwater flow. We believe the level set method offers a more generic approach that allows a user to program flow and diffusive landscape processes for a variety of application contexts. We have shown that it provides an appropriate theoretical underpinning and may be efficiently implemented in a GIS. We have demonstrated its application for two landscape processes – albeit relatively simple examples – but these may be extended to deal with more complex and dynamic circumstances. The validation of improved environmental modeling tools ultimately rests in their uptake and usage by scientists and engineers. The tool may be accessed from the web site .au/projects/mapscript/ (version with enhancements available April 2005) for use with the IDRISI GIS (Eastman, 1997) and in the future with ArcGIS. It is hoped that a larger community of users will make use of the methodology and implementation for a variety of environmental modeling applications.

Edited by: P. Krause, S. Kralisch, and W. Flügel
Reviewed by: anonymous referees

References

Argent, R.: An Overview of Model Integration for Environmental Applications, Environmental Modelling and Software, 19, 219–234, 2004.
Bonham-Carter, G. F.: Geographic Information Systems for Geoscientists, Elsevier Science Inc., New York, 1994.
Burrough, P. A.: Dynamic Modelling and Geocomputation, in: Geocomputation: A Primer, edited by: Longley, P. A., et al., Wiley, England, 165–191, 1998.
Burrough, P. A. and McDonnell, R.: Principles of Geographic Information Systems, Oxford University Press, New York, 1998.
Gao, P., Zhan, C., and Menon, S.: An Overview of Cell-Based Modeling with GIS, in: Environmental Modeling with GIS, edited by: Goodchild, M. F., et al., Oxford University Press, 325–331, 1993.
Goodchild, M.: A Geographer Looks at Spatial Information Theory, in: COSIT – Spatial Information Theory, edited by: Goos, G., Hertmanis, J., and van Leeuwen, J., LNCS 2205, 1–13, 2001.
Jørgensen, S. and Bendoricchio, G.: Fundamentals of Ecological Modelling, Elsevier, New York, 2001.
Julien, P. Y., Saghafian, B., and Ogden, F.: Raster-Based Hydrologic Modelling of Spatially-Varied Surface Runoff, Water Resources Bulletin, 31(3), 523–536, 1995.
Moore, I. D., Turner, A., Wilson, J., Jenson, S., and Band, L.: GIS and Land-Surface-Subsurface Process Modeling, in: Environmental Modeling with GIS, edited by: Goodchild, M. F., et al., Oxford University Press, New York, 1993.
Press, W., Flannery, B., Teukolsky, S., and Vetterling, W.: Numerical Recipes in C: The Art of Scientific Computing, 2nd Ed., Cambridge University Press, Cambridge, 1992.
Pullar, D.: MapScript: A Map Algebra Programming Language Incorporating Neighborhood Analysis, GeoInformatica, 5(2), 145–163, 2001.
Pullar, D.: Simulation Modelling Applied To Runoff Modelling Using MapScript, Transactions in GIS, 7(2), 267–283, 2003.
Ritter, G., Wilson, J., and Davidson, J.: Image Algebra: An Overview, Computer Vision, Graphics, and Image Processing, 4, 297–331, 1990.
Sethian, J. A.: Level Set Methods and Fast Marching Methods, Cambridge University Press, Cambridge, 1999.
Sklar, F. H. and Costanza, R.: The Development of Dynamic Spatial Models for Landscape Ecology: A Review and Progress, in: Quantitative Methods in Ecology, Springer-Verlag, New York, 239–288, 1991.
Sui, D. and Maggio, R.: Integrating GIS with Hydrological Modeling: Practices, Problems, and Prospects, Computers, Environment and Urban Systems, 23(1), 33–51, 1999.
Tilman, D. and Kareiva, P.: Spatial Ecology: The Role of Space in Population Dynamics and Interspecific Interactions, Princeton University Press, Princeton, New Jersey, USA, 1997.
Wesseling, C. G., Karssenberg, D., Burrough, P. A., and van Deursen, W. P.: Integrating Dynamic Environmental Models in GIS: The Development of a Dynamic Modelling Language, Transactions in GIS, 1(1), 40–48, 1996.

001-Effect of aggregate gradation on properties of concrete

Table 1 Chemical compositions of sulphoaluminate cement /%

SiO2    Al2O3   Fe2O3   CaO     MgO   SO3    K2O    Na2O   TiO2   P2O5   Loss
11.41   27.87   2.59    43.86   1.2   9.59   0.45   0.15   1.27   0.11   1.15
1. Introduction

It is known that aggregate takes up 60%–90% of the total volume of concrete, which is the most widely used building material in the world. Concrete properties such as mechanical and durability properties are highly affected by physical properties of the aggregate such as aggregate gradation (Ioannis P., et al., 2013; Siddique, et al., 2012; W. B. Ashraf, et al., 2011; Ergul Yasar, et al., 2004; Mucteba Uysal, et al., 2004; Ronnen Levinson, et al., 2002; D. Sari, et al., 2005; Yahia A., et al., 2002). Good aggregate gradation corresponds to high bulk density. Some research results about the effect of aggregate gradation on the properties of concrete have been established; however, the related research has focused on ordinary Portland cement concrete (OPC) (Hui Zhao, et al., 2012; E. Yasar, et al., 2003; Medhat H. Shehata, et al., 2000; M. Gillot, et al., 1993; E. J. Garboczi, et al., 1993), while information about the effect of aggregate gradation on the properties of sulphoaluminate cement concrete (SACC) is less documented. Therefore, the effect of aggregate gradation on the mechanical and durability properties of SACC needs more investigation. The most well-known methods of aggregate gradation include: 1) using two different segments of aggregate (i.e. fine aggregates and coarse aggregates); 2) using total aggregate gradation, that is, combined aggregate gradation (W. B. Ashraf, et al., 2011). The latter has attracted more interest in recent years. Among these methods, the Fuller distribution, the maximum density
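The excerpt breaks off at the Fuller distribution. For context, the classical Fuller-Thompson ideal grading curve gives the cumulative percentage passing a sieve of size d as P(d) = 100·(d/D)^n, with D the maximum aggregate size and n commonly taken between 0.45 and 0.5; the sketch below (sieve sizes and D are illustrative assumptions) tabulates it:

# Sketch of the classical Fuller-Thompson ideal grading curve; the sieve
# series and the maximum aggregate size D are illustrative assumptions.
def fuller_passing(d_mm, d_max_mm, n=0.5):
    """Ideal cumulative percentage passing sieve size d for max size D."""
    return 100.0 * (d_mm / d_max_mm) ** n

sieves = [0.15, 0.3, 0.6, 1.18, 2.36, 4.75, 9.5, 19.0]  # mm
for d in sieves:
    print(f"{d:>5.2f} mm sieve: {fuller_passing(d, 19.0):5.1f} % passing")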

Geometric Modeling


Geometric modeling is essential in various fields such as engineering, architecture, animation, and computer-aided design. It is a crucial aspect of 3D modeling, enabling the creation of virtual representations of objects and environments. Geometric modeling encompasses a wide range of techniques and methods for representing and manipulating geometric shapes, surfaces, and solids. From simple wireframe models to complex parametric representations, geometric modeling plays a fundamental role in visualizing and conceptualizing objects in the virtual space. Let's delve deeper into the significance and applications of geometric modeling, exploring its impact across different domains.

In the realm of engineering, geometric modeling serves as the cornerstone for the design and analysis of mechanical components, industrial machinery, and structural systems. Engineers leverage geometric modeling to create precise and detailed representations of complex parts and assemblies, facilitating the exploration of form, fit, and function. By utilizing parametric modeling techniques, engineers can establish relationships between different geometric entities, enabling the creation of designs that are adaptable and easily modifiable. Geometric modeling also plays a pivotal role in finite element analysis (FEA) and computational fluid dynamics (CFD), providing engineers with the necessary tools to simulate and evaluate the performance of engineered systems under varying conditions.

Moreover, in the field of architecture, geometric modeling enables architects and designers to articulate their creative visions through the generation of virtual architectural models. From the initial conceptualization phase to the production of construction drawings, geometric modeling software empowers architectural professionals to explore different design options, assess spatial relationships, and communicate design intent effectively. With the advancement of parametric and algorithmic design tools, architects can generate intricate geometric forms and patterns, pushing the boundaries of architectural expression and innovation. Geometric modeling also facilitates the integration of building information modeling (BIM), fostering seamless collaboration and coordination among multidisciplinary teams involved in the design and construction process.

Furthermore, in the realm of computer graphics and animation, geometric modeling serves as the bedrock for the creation of compelling visual content in the entertainment industry and digital media. Artists and animators leverage geometric modeling techniques to sculpt and manipulate virtual characters, environments, and special effects, breathing life into imaginative worlds and narratives. With the advent of computer-aided sculpting and modeling tools, artists can craft highly detailed and expressive 3D assets, enriching the visual quality and storytelling potential of animated films, video games, and virtual simulations. Geometric modeling also underpins the development of realistic shaders, textures, and lighting effects, contributing to the immersive and captivating nature of digital experiences.

In the context of manufacturing and prototyping, geometric modeling plays a pivotal role in additive manufacturing (3D printing), enabling the translation of digital designs into physical objects with precision and accuracy.
By harnessing geometric modeling software, manufacturers can prepare and optimize 3D models for the additive manufacturing process, ensuring that the designs are manufacturable and structurally sound. Geometric modeling also facilitates the generation of support structures and slicing patterns, essential for the successful fabrication of intricate and complex geometries using additive manufacturing technologies. With the integration of geometric modeling and simulation tools, manufacturers can validate the manufacturability of designs and identify potential issues before initiating the production process, thereby reducing lead times and mitigating risks.

Moreover, in the domain of medical imaging and scientific visualization, geometric modeling plays a vital role in the reconstruction and analysis of anatomical structures, biological tissues, and physical phenomena. Medical professionals and researchers utilize geometric modeling techniques to process and interpret volumetric data from imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and 3D ultrasound. By leveraging geometric modeling and computational anatomy, medical practitioners can delve into the intricacies of human physiology, diagnose abnormalities, and plan surgical interventions with enhanced precision and insight. Geometric modeling also facilitates the simulation and visualization of physiological processes and biomedical phenomena, contributing to the advancement of medical education and research.

In conclusion, geometric modeling stands as a foundational pillar in the realms of engineering, architecture, computer graphics, manufacturing, and medical imaging, empowering professionals across diverse disciplines to visualize, design, and analyze complex geometric forms and structures. The evolution of geometric modeling software, coupled with advancements in computational power and visualization technologies, has propelled the boundaries of creativity and innovation, shaping the way we conceive, create, and interact with the digital and physical world. As we continue to push the frontiers of geometric modeling, its impact will resonate across industries, driving transformative changes in how we perceive, manipulate, and leverage geometric forms to fulfill our imaginative and functional pursuits.

fundamentals of data engineering pdf


"Fundamentals of Data Engineering" refers to the foundational principles and concepts related to the field of data engineering. Data engineering involves the design, development, and management of data architecture, infrastructure, and systems to ensure efficient and reliable data processing. Key topics within the fundamentals of data engineering may include:

Data Modeling: Understanding how to structure and represent data in a way that meets the needs of an organization. This involves designing databases, defining tables, and establishing relationships between different data entities.

Database Management Systems (DBMS): Knowledge of various types of database systems and how to manage them. This includes relational databases (like MySQL, PostgreSQL), NoSQL databases (like MongoDB, Cassandra), and other data storage technologies.

Data Processing: Techniques for processing and transforming data. This includes Extract, Transform, Load (ETL) processes, data cleaning, and data integration methods.

Data Warehousing: Designing and managing data warehouses, which are large, centralized repositories of integrated data from various sources. Data warehouses support reporting and business intelligence activities.

Big Data Technologies: Understanding and working with technologies that handle large volumes of data, such as Apache Hadoop, Apache Spark, and distributed computing frameworks.

Data Quality and Governance: Ensuring the accuracy, completeness, and reliability of data. Implementing governance practices to maintain data integrity and security.

Data Pipelines: Building and managing data pipelines for the efficient flow of data from source to destination. This involves orchestrating various data processing tasks.

Cloud Data Services: Leveraging cloud platforms for data storage, processing, and analytics. Familiarity with cloud services like AWS, Azure, or Google Cloud Platform.

Data Security and Privacy: Implementing measures to protect data from unauthorized access and ensuring compliance with data privacy regulations.

Data Analytics and Visualization: Using data for analysis and creating visualizations to communicate insights effectively. Familiarity with tools like Tableau, Power BI, or programming languages like Python and R.

Understanding the fundamentals of data engineering is crucial for professionals working in data-related roles, including data engineers, database administrators, and data scientists. It provides the groundwork for effective data management and utilization within an organization.
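To make the ETL and data pipeline topics above concrete, here is a minimal sketch using only the Python standard library; the file name, field names and cleaning rule are illustrative assumptions, not a reference implementation:

# Minimal extract -> transform -> load sketch; "users.csv", the field names
# and the cleaning rule are illustrative assumptions.
import csv
import sqlite3

def extract(path):
    # Extract: stream rows from a CSV source
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    # Transform: normalise emails and drop rows without one (data cleaning)
    for row in rows:
        email = row.get("email", "").strip().lower()
        if email:
            yield (row.get("name", "").strip(), email)

def load(records, db_path="warehouse.db"):
    # Load: append the cleaned records into a warehouse table
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, email TEXT)")
    con.executemany("INSERT INTO users VALUES (?, ?)", records)
    con.commit()
    con.close()

load(transform(extract("users.csv")))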

Geometric crossover for multiway graph partitioning

many notions of edit distance are equally natural (based on swaps, adjacent swaps, insertions, reversals, transpositions and other edit moves). This leads to a number of natural notions of geometric crossover for permutations. Geometric crossovers for permutations are intimately connected with the notion of a sorting algorithm: offspring are on the shortest path between parents, hence they are on the minimal sorting trajectory between parents using a specific edit move. Interestingly, this makes it possible to implement geometric crossovers for permutations by using traditional sorting algorithms such as bubble sort, selection sort and insertion sort. Many pre-existing recombination operators for permutations are geometric crossovers. For example, PMX (partially matched crossover) and c
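The passage above notes that geometric crossover for permutations under adjacent swaps can be implemented with bubble sort, since bubble sort performs a minimal sequence of adjacent swaps. The sketch below follows that idea as an illustration, not as the paper's own code: parent 1 is relabeled in the coordinate system of parent 2, the bubble-sort trajectory is recorded, and a random intermediate state on that minimal sorting path is decoded as the offspring.

# Geometric crossover for permutations under adjacent swaps, implemented via
# bubble sort as the passage describes; an illustrative sketch only.
import random

def geometric_crossover(p1, p2, rng=random):
    # Express p1 in the coordinate system of p2: p2 becomes the identity,
    # so bubble-sorting the relabeled p1 walks a minimal adjacent-swap
    # (Kendall tau) path from p1 to p2.
    pos = {v: i for i, v in enumerate(p2)}
    work = [pos[v] for v in p1]
    trajectory = [list(work)]
    swapped = True
    while swapped:                      # bubble sort, recording every state
        swapped = False
        for i in range(len(work) - 1):
            if work[i] > work[i + 1]:
                work[i], work[i + 1] = work[i + 1], work[i]
                trajectory.append(list(work))
                swapped = True
    coded_child = rng.choice(trajectory)  # uniform point on the sorting path
    return [p2[i] for i in coded_child]   # decode back to the original labels

print(geometric_crossover([3, 1, 4, 2, 5], [5, 4, 3, 2, 1]))

The endpoints of the trajectory decode back to the two parents, so every possible offspring lies on the shortest adjacent-swap path between them, which is exactly the geometric crossover property described above.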

Geomodeling and Implementation of a Geodatabase Using CASE-tools and ArcCatalog

Ward Verlinden and Eric Bayers
Department of Cartography, National Geographical Institute – Belgium

Abstract

"Seamless Geographic Information System of Reference" or SGISR is a development project of Belgium's National Geographical Institute. One of the major goals of this project is the design and implementation of a centralized, spatially continuous GIS to store, manage, edit and distribute topo-geographic reference data. Based on an ISO 19110-compliant Feature Catalogue, a Conceptual Data Model was developed to describe all object classes, attributes and relationships. This CDM was created with a CASE tool, using UML class diagrams. Next, the CDM was translated to Esri's geodatabase model, coupling all classes in the SGISR model to one of the basic Esri classes. Tagged values were used to store information specific to the geodatabase model. Finally, the resulting Logical Data Model was used to create a geodatabase. The model was exported to an XMI file and validated using Esri's "Semantic Checker". The "Schema Wizard" was used to create the geodatabase.

Introduction

As Belgium's NMA, the National Geographical Institute (NGI) is responsible for the production of geographical data for the whole of the Belgian territory. Based on these data, several topographical map series with scales ranging between 1:10,000 and 1:250,000 are produced.

In 2001, NGI Belgium completed the first numerically produced edition of the 1:50,000 map series. The first numerically produced edition of the 1:10,000 map series was planned to be completed in another couple of years' time. Until then the focus had been on the production of a separate vector data set for each cartographic product. Now, however, NGI Belgium was faced with the challenge of systematically updating its vector data sets.

At the same time, it was clear that the production of topographic maps was no longer the only important use of these data sets. The use of NGI vector data in GIS applications was steadily growing. This evolution called for a new way of structuring the data, which focused less on cartography and more on offering a geographically oriented model of the real world.

Another important decision was to move away from the principle of two parallel production lines, resulting in two separate data sets. It was felt that there should be one centralized process to update a single geographical database containing reference data. This reference data would then be used for several applications – cartographic and other – at different scales. This way, updating information would be collected once and used many times.

To develop these ideas into a solution for the needs of NGI Belgium, a project called Seamless Geographic Information System of Reference (SGISR) was started. Three main objectives were determined for SGISR:

1. Design of one integrated production line to collect, manage and update topo-geographic reference data.
2. Implementation of a centralized and spatially continuous GIS to store, manage, edit and distribute topo-geographic reference data.
3. Development of the necessary tools for the realization at NGI Belgium of applications with a conceptual scale between 1:10,000 and 1:50,000, based on topo-geographic reference data.

At the core of SGISR is the development of a spatially continuous geographical database to store topo-geographical reference data. This paper discusses the different phases of the data model design.
Conceptual modeling

In a first phase of the conceptual modeling process the existing data sets, used for the map series 1:10,000 and 1:50,000, were analyzed as a starting point for determining the geographic content necessary for the SGISR database. Because both data sets were originally designed for very specific cartographic applications, it was clear that modifications in terms both of content and of data structure would be necessary.

For some types of information, especially for networks which are partially underground (e.g. high tension lines), the existing data sets were not complete because the underground parts do not feature on the topographic maps. In most of these cases it was decided to incorporate the whole of the network, including any underground parts, into the new data model. A number of attributes were also added to existing elements to describe additional characteristics. Domains for existing attributes were extended to provide more detailed information. In a limited number of cases some completely new features were added to the geographical content that was already present.

Changes in data structure were mostly linked to the shift from a cartographical to a geographical point of view. The original structure of the existing data sets had been heavily influenced by the cartographical production process. While a restructuring effort a few years prior to the start of SGISR had already solved many of these problems, some issues still needed to be addressed to eliminate all purely cartographic aspects from the data model.

Based on all these content and structure related considerations a feature catalogue was compiled, defining all feature types, their attributes and relationships for the new data model. This feature catalogue was designed to comply with the international standard for feature cataloguing, ISO 19110, which, at that time, was only available as a draft version (Anon. 2001).

All mandatory and some optional elements of the ISO 19110 template were used. In addition, certain elements were added to adapt it to the specific context of NGI Belgium and the SGISR project. For purely practical reasons the feature catalogue was split into eight domains, each domain containing a part of the model that describes a specific theme, for instance the road network. A field was also added to enter a geometrical primitive for each feature type into the catalogue. The most important adaptation to the template was made to create a bilingual feature catalogue. Because of the importance of both the Dutch and French languages in Belgium, it was decided that all names and definitions of feature types, attributes and relationship types should be available in both of these languages.

The SGISR feature catalogue is stored in a small Access database. A data input form and a customized reporting tool were added to this database, both of which make use of the data model domains to enable the user to focus on one specific subset of the model at a time.

Alongside the feature catalogue some preliminary conceptual class diagrams were drawn up, using UML (Unified Modeling Language). These diagrams helped the project team understand and communicate the significance of each feature type and were instrumental in visualizing the different relationship classes that linked the feature types to one another. Based on these preliminary diagrams a complete Conceptual Data Model (CDM) was created, integrating all the information of the feature catalogue.
A CASE (Computer Aided Software Engineering) tool, Visio, was used to draw the CDM. A UML object class was created for every feature type, containing the feature type's name, the names of the attributes and the attribute data types. For attributes with specific attribute value domains, data types defining these domains were added to the model. Relationships were modeled using binary association, composition and generalization relations. While most relationships between feature types were modeled using binary associations, characterized by their name and cardinalities as defined in the feature catalogue, in a limited number of cases composition relations were used to indicate strong part-whole relationships between feature types.

Generalization relations were used extensively to move common attributes and associations from multiple descendant classes to a single ancestor class. The use of these – often abstract – ancestors helped to simplify the model and indicate common structures shared by feature types of different domains. A good example of this is the way in which railroads, roads and watercourses were all modeled as network segments which form an interconnected network (Fig. 1). Segments from the three types of networks all share the same types of relationships with bridges.

Logical modeling

The CDM was designed to be completely platform independent. In the next step of the data modeling process, a Logical Data Model (LDM) was derived from the CDM, adapting it to Esri's geodatabase model as proposed by Perencsik et al. (2004a). Esri provides a Visio template document (.vst) that contains all the basic Esri classes and interfaces. The Esri interfaces can be used to construct custom features with specific behavior; as no operations were defined for the feature types in the SGISR model, this possibility was not used and the focus was on linking the SGISR objects to the correct basic Esri classes. First the CDM was loaded into the ArcInfo UML model, organizing it into a number of different packages based on the domains present in the model.

Next, the feature types were connected to the corresponding Esri classes via generalization relations. Non-geometrical types were coupled to the Esri object class, while types with a geometrical component were coupled to the Esri feature class, inheriting its shape property and effectively identifying them as (geographical) features. For abstract types the IsAbstract property was checked to prevent them from being implemented later on as tables or feature classes in the database. Some feature types were linked to specialized Esri network features, such as the SimpleEdgeFeature and the SimpleJunctionFeature, indicating that they take part in geometric networks.

Once the SGISR model had been successfully coupled to the basic Esri classes and the geometric networks had been created, the organization of the feature types had to be adapted to the specific geodatabase model rules concerning the use of feature datasets. According to these rules, all feature classes associated with a geometric network should be grouped inside the same feature dataset, and no object class can exist within a feature dataset. To adapt the model to these rules, some of the feature classes were moved to new packages that were stereotyped as feature datasets.
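The two geodatabase rules just mentioned lend themselves to a simple automated check. The sketch below applies them to a toy in-memory description of the model; it is not part of the Esri toolchain, and all class, dataset and network names are invented.

```python
# Toy description of the model: kind of class, enclosing feature dataset
# (None for standalone classes) and geometric network membership, if any.
model = {
    "RoadSegment":    {"kind": "feature class", "dataset": "RoadNetwork", "network": "Roads"},
    "RoadJunction":   {"kind": "feature class", "dataset": "RoadNetwork", "network": "Roads"},
    "Bridge":         {"kind": "feature class", "dataset": None,          "network": None},
    "RoadStatistics": {"kind": "object class",  "dataset": "RoadNetwork", "network": None},
}

def check_feature_dataset_rules(model):
    errors = []
    # Rule 1: all feature classes of one geometric network share a feature dataset.
    datasets_per_network = {}
    for name, info in model.items():
        if info["network"] is not None:
            datasets_per_network.setdefault(info["network"], set()).add(info["dataset"])
    for network, datasets in datasets_per_network.items():
        if len(datasets) > 1 or None in datasets:
            errors.append(f"network '{network}' spans feature datasets {datasets}")
    # Rule 2: no object class (table) may live inside a feature dataset.
    for name, info in model.items():
        if info["kind"] == "object class" and info["dataset"] is not None:
            errors.append(f"object class '{name}' placed inside dataset '{info['dataset']}'")
    return errors

for problem in check_feature_dataset_rules(model):
    print("rule violation:", problem)
# rule violation: object class 'RoadStatistics' placed inside dataset 'RoadNetwork'
```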
Stereotyping a package as a feature dataset indicates that the package corresponds to a dataset that is to be created during creation of the physical database schema.

The data types of the CDM attributes were replaced by the corresponding Esri field types. The data types defining specific value domains were replaced by classes, stereotyped as CodedValueDomain for coded value domains and as RangeDomain for range domains.

Finally, a number of tagged values were added to the model to provide additional information necessary for the successful creation of the database schema later on. Tagged values provide a way to set additional properties for UML elements (Perencsik et al. 2004b). They consist of a keyword and a value which are paired to add a custom property to any UML element. Table 1 shows the tagged values which were used in the SGISR LDM.

Table 1: Tagged values used in the SGISR Logical Data Model

Tagged value name       Values                  Meaning
GeometryType            esriGeometryPoint,      Sets the geometry type for feature classes
                        esriGeometryPolygon,
                        esriGeometryPolyline
HasZ                    True/False              Indicates whether a feature class contains Z-values
Precision               Integer                 Sets the number of digits for integer or double type fields
Scale                   Integer                 Sets the number of decimal places in single and double type fields
Length                  Integer                 Sets the width of string type fields
AllowNulls              True/False              Indicates whether null values are allowed for a field
OriginClass             Class name              Defines the origin class for a relationship class
OriginPrimaryKey        Field name              Sets the primary key field of the origin class
OriginForeignKey        Field name              Sets the foreign key field of the origin class
DestinationPrimaryKey   Field name              Sets the primary key field of the destination class
DestinationForeignKey   Field name              Sets the foreign key field of the destination class

Creating the physical database schema

In the final phase of the data modeling process a physical database schema was created, based on the SGISR LDM that had been built in Visio.

First, the model stored in Visio had to be exported to an XMI (XML Metadata Interchange) file. The functionality to do this is not present in the standard Visio interface, so an add-on called Esri XMI Export needed to be installed. The installation was carried out according to the guidelines found on Esri's support website (/geodatabase/uml). Two files had to be copied to the Visio install directory. The Esri XMIExport.vsl file is provided by Esri and can be downloaded from the Esri support website; the second file, XMIExprt.dll, is a component provided by Microsoft and can be downloaded from the Microsoft download pages. Esri XMIExport.vsl was copied to "Visio install directory"\visio11\1033\ and XMIExprt.dll was copied to "Visio install directory"\visio11\DLL. To set up the add-on, some options needed to be set in Visio: in Tools/Options/Advanced/File paths, the directory containing the Esri XMIExport.vsl file was added to the add-ons file paths. Although this completed the steps outlined in the guidelines, the export tool failed to appear in the Visio Add-On menu. Only after the directory containing the XMIExprt.dll file had also been added to the add-ons file paths did the add-on become available. It was subsequently used to export the SGISR model without any further problems.

The resulting XMI file containing the exported model was then checked for errors.
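As an illustration of how the tagged values of Table 1 travel with the exported model, the sketch below lists tag/value pairs found in an XMI file using Python's standard XML parser. It assumes UML 1.x-style TaggedValue elements carrying tag and value attributes; the exact element names and structure of an Esri-exported file may differ.

```python
# Minimal sketch: print all tagged values found in an exported XMI file.
# Assumes UML 1.x-style <UML:TaggedValue tag="..." value="..."/> elements;
# the structure of a real Esri export may differ.
import xml.etree.ElementTree as ET

def list_tagged_values(xmi_path):
    tree = ET.parse(xmi_path)
    for elem in tree.iter():
        # Compare local names only, so any namespace prefix is accepted.
        if elem.tag.split("}")[-1] == "TaggedValue":
            tag, value = elem.get("tag"), elem.get("value")
            if tag is not None:
                print(f"{tag} = {value}")  # e.g. GeometryType = esriGeometryPolyline

# Usage (file name invented): list_tagged_values("sgisr_model.xml")
```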
The ArcInfo UML model template contains a macro called the Semantics Checker, which can be used to verify whether a model contained in an XMI file is completely compatible with the geodatabase model.

The macro uses a DLL file called UmlSemCheck.dll, which can be found in the ArcGIS installation folder. Before the macro could be executed, a reference had to be made to the UmlSemCheck.dll file in the Visual Basic Editor. The macro also uses a document type definition file called uml.dtd, again found in the ArcGIS installation folder, which defines the correct XMI document structure by providing a list of legal elements. For the Semantics Checker macro to work, the uml.dtd file had to be placed in the same directory as the XMI file containing the model.

A first try resulted in an error message, revealing that an error in the macro's code prevented it from successfully accessing UmlSemCheck, even though a reference to the DLL had been established. The original code of the macro consisted of these lines:

```vba
Sub Semantics_Checker()
    StartChecker
End Sub
```

It was adapted as follows to allow it to access UmlSemCheck:

```vba
Sub Semantics_Checker()
    UmlSemCheck.StartChecker
End Sub
```

The Semantics Checker was then run on the exported model. The resulting report listed a number of errors found in the model, documenting the names of the elements involved, the location of these elements within the model and a brief description of each problem. Most of these could be attributed to human errors in entering data in Visio and were easily resolved. One type of error, however, related directly to the data structure in the conceptual model. As previously stated, generalization relations were used extensively throughout the SGISR model to move shared properties and associations to common ancestors. Although the inheritance of attributes by classes from their ancestors is allowed by the Semantics Checker, the inheritance of associations is not, causing the following error message:

    A feature class that has descendants is involved in a relationship class.

As a consequence, all the inherited associations in the model had to be replaced by individual associations at the descendant objects' level, adding a certain degree of – unnecessary – complexity to the LDM.

Once all corrections were made, the model was exported and checked again. From the corrected XMI file a physical database schema was created in ArcCatalog, using the Schema Wizard tool. While this tool provides the opportunity to set a number of properties of the geodatabase, most of these had already been incorporated in the SGISR LDM using tagged values. Because of this, the interactive work during the schema creation was very limited: only the spatial references of feature datasets and standalone feature classes were added to complete the model.

Some new errors that had not been detected by the Semantics Checker appeared during the final steps of the Schema Wizard. It turned out that some of the names of feature attributes corresponded to reserved SQL keywords and had to be changed in the LDM. A more fundamental error occurred during the schema creation itself: a number of relationships were not created, and the schema creation log file reported the following error message:

    ERROR ---> Item not found in this collection.

After analysis of the location of these errors in the model, it was found that they were caused by the fact that two feature classes took part in both an association and a composition relationship class, as illustrated in Figure 2.
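To make the situation of Figure 2 concrete, the following sketch flags class pairs that take part in more than one relationship class, the configuration that triggered the error. The class names are invented; the workaround actually applied in SGISR is described next.

```python
# Toy sketch of the Figure 2 configuration: the same pair of feature classes
# participates in two relationship classes, one association and one composition.
relationship_classes = [
    {"origin": "Road", "destination": "RoadSegment", "type": "composition"},
    {"origin": "Road", "destination": "RoadSegment", "type": "association"},
]

def conflicting_pairs(rel_classes):
    """Return class pairs that appear in more than one relationship class."""
    seen = {}
    for rel in rel_classes:
        pair = (rel["origin"], rel["destination"])
        seen.setdefault(pair, []).append(rel["type"])
    return {pair: kinds for pair, kinds in seen.items() if len(kinds) > 1}

print(conflicting_pairs(relationship_classes))
# {('Road', 'RoadSegment'): ['composition', 'association']}
```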
In these cases only one of the two relationship classes was created, the other causing the error message in the log file. To prevent this and have both relationship classes implemented, the composition relationship had to be changed to a normal association.

Discussion and conclusions

Based on an analysis of the existing NGI vector data sets and the objectives of the SGISR project, a conceptual data model was created.

Having created the CDM in Visio, the SGISR project team opted to follow a proposed workflow to adapt the conceptual model to the Esri geodatabase model, export this model to an XMI file and use this file to automatically create a physical database schema in ArcCatalog.

The functionality to complete this process is not fully integrated in the standard interfaces of the software applications involved. Additional software components needed to be installed and set up for both the XMI export tool and the Semantics Checker. Unfortunately, these procedures were not very well documented; in fact, some crucial steps were missing from the guidelines found on the Esri support website. This, together with the error in the Semantics Checker macro, meant that neither of the tools functioned upon completion of the set-up.

However, once the bugs had been fixed, both the XMI export tool and the Semantics Checker were easy to use and very helpful in adapting the SGISR model to comply with all the geodatabase rules. Together with the Schema Wizard, they eliminated a lot of interactive work from the schema creation process. In the end, the CDM was successfully adapted to the geodatabase model and a physical schema for the SGISR database was created.

Two data modeling issues were encountered during the process: the flattening of inherited associations and the conflict between composition and association relationship classes. These underline the difference between a conceptual and a logical data model and illustrate the usefulness of going through both stages. While some sacrifices had to be made in adapting to the chosen software platform, the platform independent vision of the model designer is preserved in the conceptual data model.

Acknowledgements

The authors would like to thank Jean-Charles Pruvost for his help during his internship at NGI Belgium. The continuing support of all the SGISR project team members is also greatly appreciated.

References

Anonymous. 2001. Draft International Standard ISO/DIS 19110, Geographic Information – Methodology for feature cataloguing. International Organization for Standardization. 53 p.
Perencsik, A., Idolyantes, E., Booth, B., and Andrade, J. 2004a. Designing Geodatabases with Visio. Esri. 60 p.
Perencsik, A., Idolyantes, E., Booth, B., and Andrade, J. 2004b. Introduction to CASE Tools. Esri. 12 p.
