A Process-oriented Multi-representation of Gradual Changes


A Chinese Coreference Resolution Method Based on a Feature-Sorting Strategy


[Abstract] This paper studies the different features used for each type of noun phrase in Chinese coreference resolution based on machine learning; by selecting features for each noun-phrase type respectively, the method can reduce some "noise" and utilize the features effectively. Experimental results show that the method improves the performance of the coreference resolution system, with the F-measure reaching 80.72%.
1 Overview
Coreference is pervasive in every kind of natural-language expression. It means that a linguistic unit in a discourse is semantically related to a linguistic unit that appeared earlier (anaphora and zero anaphora are not discussed in this paper). The linguistic unit that does the pointing is called the anaphor, and the unit it points back to is called the antecedent. For example, in "Mary left early because she was tired", "she" is the anaphor and "Mary" is the antecedent.

Sweden: LPJ-GUESS


Group: Department of Physical Geography and Ecosystem Analysis
Principal investigators: Assoc. Prof. Almut Arneth; Assoc. Prof. Ben Smith
Contact: Almut Arneth & Ben Smith, INES, Sölvegatan 12, 223 62 Lund; *****************************.se; *******************.se
URL: http://www.nateko.lu.se/INES/Svenska/main.asp
Partner institutions: PIK, SMHI/EC-EARTH
Components: land surface: yes; atmospheric chemistry: yes

Project description: LPJ-GUESS (Smith et al., 2001; Hickler et al., 2004) is a generalized, process-based model of vegetation dynamics and biogeochemistry designed for regional to global applications. It combines features of the widely used Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJ-DGVM; Sitch et al., 2003) with those of the General Ecosystem Simulator (GUESS; Smith et al., 2001) in a single, flexible modeling framework. The models have identical representations of ecophysiological and biogeochemical processes, including the hydrological cycle updates described in Gerten et al. (2004). They differ in the level of detail with which vegetation dynamics and canopy structure are simulated: simplified, computationally efficient representations are used in the LPJ-DGVM, while GUESS uses a "gap-model" approach that is particularly suitable for continental to regional simulations. Representations of stochastic establishment, individual tree mortality, and disturbance events ensure representation of successional vegetation dynamics, which is important for vegetation response to extreme events. LPJ-GUESS models the terrestrial carbon and water cycles from days to millennia (Sitch et al. 2003; Koca et al. 2006; Morales et al. 2005, 2007) and has been shown to reproduce the CO2 fertilisation effects seen at FACE sites (Hickler et al., in press). It has been widely applied to assess impacts on the carbon cycle and vegetation based on scenarios from climate models (Gritti et al. 2006; Koca et al. 2006; Morales et al. 2007; Olesen et al. 2007). In addition, it has several unique features that are currently not available in any of the Earth System Models:

(1) A process-based description of the main biogenic volatile organic compounds (BVOC) emitted by vegetation. BVOC are crucial for air chemistry and climate models, since they contribute to the formation and destruction of tropospheric O3 (depending on the presence or absence of NOx), constrain the atmospheric lifetime of methane, and are key precursors to secondary organic aerosol formation. LPJ-GUESS is the only land surface model with a mechanistic BVOC representation that links their production to photosynthesis. It also uniquely accounts for the recently discovered direct CO2-BVOC inhibition, which has been shown to fundamentally alter future and past emissions compared to empirical BVOC algorithms that neglect this effect (Arneth et al., 2007a,b).

(2) The possibility to simulate past and present vegetation at the tree species (as well as PFT) level (Miller et al., in press; Hickler et al., 2004). This is crucial for simulations of BVOC and other reactive trace gases and allows for a much better representation of vegetation heterogeneity in regional and continental atmospheric chemistry-climate studies (Arneth et al., 2007b), an important aspect since spatial heterogeneity must be accounted for with atmospherically reactive chemical species.
(3) LPJ-GUESS accounts for deforestation by early human agriculture throughout the Holocene and its effects on the global carbon cycle and atmospheric CO2 concentration (Olofsson & Hickler 2007). We currently investigate the importance of Holocene human deforestation for BVOC and fire trace gas and aerosol emissions, how these may affect Holocene CH4 levels, and simulations of pre-industrial O3.

(4) A novel global process-based fire description, SPITFIRE (Thonicke et al., 2007), has been incorporated; it is currently used to study effects of climate change and of human vs. natural ignition on the carbon cycle and trace gas emissions in savanna ecosystems.

(5) Prognostic schemes for agricultural and forest land use that parameterise farmer and forest management decisions under changing climate and productivity. The agricultural scheme has been implemented and applied at the global scale (Bondeau et al. 2007), and the forest management scheme in a prototype form for Sweden (Koca et al. 2006).

(6) Incorporation of a permafrost module, wetland processes and methane emissions, as well as a vegetation nitrogen cycle, is in progress.

The vegetation dynamics module of LPJ-GUESS has been coupled to the land surface scheme of the Rossby Centre regional climate model RCA3 (Jones et al. 2004a,b) and is being applied to investigate biophysical feedbacks of land surface changes on climate at the regional scale in Europe. The process descriptions listed above are also applicable and available to global ESMs. (References available on request.)

Multivariate Statistical Analysis


Multivariate statistical analysis is a powerful tool for analyzing complex data sets that involve multiple variables. It provides insight into the relationships between variables and allows us to understand the underlying structure and patterns within the data.

Introduction

In a univariate analysis, we typically examine a single variable at a time, such as analyzing the distribution of income or the average temperature over time. However, real-world problems are often more complex, involving multiple variables that interact with each other. Multivariate statistical analysis addresses this complexity by considering multiple variables simultaneously. It allows us to identify how different variables are related to each other and to understand their joint behavior.

Types of Multivariate Statistical Analysis

There are various multivariate techniques, each designed to answer different research questions and address different types of data. Commonly used techniques include the following.

Principal Component Analysis (PCA)

PCA is a dimensionality reduction technique that aims to identify the most important variables, or combinations of variables, that explain the majority of the variance in the data. It transforms the original variables into a smaller set of uncorrelated variables called principal components. (A code sketch follows at the end of this section.)

1. Identify the variables and their correlations.
2. Determine the eigenvectors and eigenvalues of the correlation matrix.
3. Select the principal components based on their eigenvalues.
4. Interpret the results and analyze the contribution of each principal component.

Factor Analysis

Factor analysis is another dimensionality reduction technique; it aims to identify the underlying latent variables, or factors, that explain the covariance structure of the data. It is often used in psychology and the social sciences to understand the factors that influence human behavior.

1. Define the variables and their relationships.
2. Estimate the factor loadings, which represent the strength of the relationship between each variable and each factor.
3. Determine the number of factors to retain, based on criteria such as eigenvalues or explained variance.
4. Interpret the factors and analyze their impact on the variables.

Cluster Analysis

Cluster analysis groups similar observations or variables into clusters based on their characteristics. It helps to identify patterns and similarities within the data and can be used for segmenting customers, grouping genes, or classifying objects.

1. Select the variables to be included in the cluster analysis.
2. Choose a distance metric to measure the similarity between observations or variables.
3. Apply a clustering algorithm such as k-means or hierarchical clustering.
4. Evaluate the clusters and interpret the results.

Discriminant Analysis

Discriminant analysis identifies the variables that best discriminate between two or more groups. It is often used in marketing and biomedical research to predict group membership from a set of predictors.

1. Define the groups and the predictor variables.
2. Estimate the discriminant function coefficients.
3. Evaluate the model's performance using measures such as accuracy or a confusion matrix.
4. Interpret the discriminant function and analyze the impact of each predictor variable.

Advantages of Multivariate Statistical Analysis

Multivariate statistical analysis offers several advantages over univariate analysis:

1. Identifying hidden patterns: by considering multiple variables simultaneously, we can uncover relationships and patterns that are not visible in univariate analysis.
2. Reducing dimensionality: techniques such as PCA and factor analysis reduce the dimensionality of the data by identifying the most important variables or factors.
3. Better understanding of complex systems: multivariate analysis allows a more comprehensive understanding of complex systems by considering the interdependencies between variables.
4. Improved predictive modeling: by incorporating multiple predictors, multivariate techniques can improve the accuracy and robustness of predictive models.
5. More efficient data exploration: multivariate techniques provide a systematic, structured approach to data exploration, enabling researchers to explore and analyze large, complex data sets efficiently.

Conclusion

Multivariate statistical analysis is a valuable tool for researchers and analysts working with complex data sets. It allows a deeper understanding of the relationships between variables, uncovers hidden patterns, and provides insight into complex systems. By employing techniques such as PCA, factor analysis, cluster analysis, and discriminant analysis, researchers can make more informed decisions and develop robust models. It is essential to choose the appropriate technique based on the research question and the characteristics of the data.
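To make the PCA workflow above concrete, here is a minimal sketch in Python using NumPy and scikit-learn. The data set is synthetic and purely illustrative; standardizing the variables first makes the decomposition equivalent to working with the correlation matrix, which matches steps 1 and 2.

```python
# Minimal PCA sketch following the four steps listed above (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.column_stack([
    x1,
    x1 + 0.1 * rng.normal(size=200),  # deliberately correlated with x1
    rng.normal(size=200),
    rng.normal(size=200),
])

# Steps 1-2: standardize, then decompose; PCA on standardized data is
# equivalent to an eigenanalysis of the correlation matrix.
X_std = StandardScaler().fit_transform(X)
pca = PCA().fit(X_std)
print("eigenvalues:        ", pca.explained_variance_.round(3))
print("variance explained: ", pca.explained_variance_ratio_.round(3))

# Step 3: retain components, here by the Kaiser rule (eigenvalue > 1).
n_keep = int((pca.explained_variance_ > 1.0).sum())

# Step 4: scores and loadings show each variable's contribution.
scores = X_std @ pca.components_[:n_keep].T
print("scores shape:", scores.shape)
print("loadings:\n", pca.components_[:n_keep].round(3))
```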

An Open Vocabulary Semantic Segmentation Model SAN Integrating Multi-Scale Channel Attention


Authors: WU Ling, ZHANG Hong
Source: Modern Information Technology (现代信息科技), 2024, Issue 03
Received: 2023-11-29
Funding: Graduate education and teaching reform research project of Taiyuan Normal University (SYYJSJG-2154)
DOI: 10.19850/ki.2096-4706.2024.03.035

Abstract: With the development of vision-language models, open-vocabulary methods are widely applied to recognizing categories beyond the annotated label space.

Compared with weakly supervised and zero-shot methods, open-vocabulary methods have proven more general and effective.

The goal of this work is to improve SAN, a lightweight model for open-vocabulary segmentation, by introducing AFF, a feature fusion mechanism based on multi-scale channel attention, and by improving the dual-branch feature fusion method in the original SAN structure.
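The AFF mechanism named above is not spelled out in this summary; the sketch below assumes the common attentional feature fusion formulation, in which a multi-scale channel attention module (MS-CAM) combines global (pooled) and local (per-position) channel context and the two branches are blended by the resulting weights. Layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged PyTorch sketch of multi-scale channel attention fusion (MS-CAM + AFF).
import torch
import torch.nn as nn

class MSCAM(nn.Module):
    """Channel attention from two scales: global (pooled) and local (per-pixel)."""
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        mid = max(channels // r, 1)
        self.local_att = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels),
        )
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global channel context
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast-add the (N, C, 1, 1) global term onto the (N, C, H, W) local term.
        return torch.sigmoid(self.local_att(x) + self.global_att(x))

class AFF(nn.Module):
    """Fuse two feature branches with learned weights instead of plain addition."""
    def __init__(self, channels: int):
        super().__init__()
        self.att = MSCAM(channels)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        w = self.att(x + y)           # attention computed on the initial fusion
        return w * x + (1.0 - w) * y  # complementary weighting of the branches

# Usage: fused = AFF(256)(feat_a, feat_b) for two (N, 256, H, W) feature maps.
```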

The improved algorithm was then evaluated on multiple semantic segmentation benchmarks; the results show that model performance improves while the parameter count remains almost unchanged.

This improvement should help simplify future research on open-vocabulary semantic segmentation.

Keywords: open vocabulary; semantic segmentation; SAN; CLIP; multi-scale channel attention
CLC number: TP391.4; TP18. Document code: A. Article ID: 2096-4706(2024)03-0164-06

An Open Vocabulary Semantic Segmentation Model SAN Integrating Multi-Scale Channel Attention
WU Ling, ZHANG Hong (Taiyuan Normal University, Jinzhong 030619, China)

Abstract: With the development of visual language models, open vocabulary methods have been widely used in identifying categories outside the annotated label space. Compared with weakly supervised and zero-shot methods, the open vocabulary method has proven more versatile and effective. The goal of this study is to improve SAN, a lightweight model for open vocabulary segmentation, by introducing AFF, a feature fusion mechanism based on multi-scale channel attention, and by improving the dual-branch feature fusion method in the original SAN structure. The improved algorithm is then evaluated on multiple semantic segmentation benchmarks, and the results show that model performance improves with almost no change in the number of parameters. This improvement will help simplify future research on open vocabulary semantic segmentation.
Keywords: open vocabulary; semantic segmentation; SAN; CLIP; multi-scale channel attention

0 Introduction
Recognizing and segmenting visual elements of any category is the pursuit of image semantic segmentation.

Common Expressions in SCI Paper Abstracts


To write a good abstract, it helps to build a personal bank of sentence patterns (the vocabulary below was drawn from highly cited SCI papers).

Introduction
(1) Reviewing the research background; common words: review, summarize, present, outline, describe.
(2) Stating the purpose; common words: purpose, attempt, aim; an infinitive of purpose can also be used.
(3) Introducing the paper's key content or scope; common words: study, present, include, focus, emphasize, emphasis, attention.

Methods
(1) Describing the research or experimental process; common words: test, study, investigate, examine, experiment, discuss, consider, analyze, analysis.
(2) Stating the research or experimental method; common words: measure, estimate, calculate.
(3) Introducing applications or uses; common words: use, apply, application.

Results
(1) Presenting the results; common words: show, result, present.
(2) Stating the conclusions; common words: summary, introduce, conclude.

Discussion
(1) Stating the paper's argument and the authors' views; common words: suggest, report, present, expect, describe.
(2) Supporting the argument; common words: support, provide, indicate, identify, find, demonstrate, confirm, clarify.
(3) Making recommendations and suggestions; common words: suggest, suggestion, recommend, recommendation, propose, necessity, necessary, expect.

Example for the introduction: review (reviewing the research background)
Author(s): ROBINSON, TE; BERRIDGE, KC
Title: THE NEURAL BASIS OF DRUG CRAVING - AN INCENTIVE-SENSITIZATION THEORY OF ADDICTION
Source: BRAIN RESEARCH REVIEWS, 18 (3): 247-291 SEP-DEC 1993 (Netherlands; cited 1774 times in SCI)
"We review evidence for this view of addiction and discuss its implications for understanding the psychology and neurobiology of addiction."

Example for the introduction: summarize (reviewing the research background)
Author(s): Barnett, RM; Carone, CD (cited 1571 times)
Title: Particles and fields. 1. Review of particle physics
Source: PHYSICAL REVIEW D, 54 (1): 1-+ Part 1 JUL 1 1996 (USA)
Abstract: This biennial review summarizes much of Particle Physics. Using data from previous editions, plus 1900 new measurements from 700 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review.

Example for the introduction: attention
Example for the methods section: consider

Example for the introduction: outline (reviewing the research background)
Author(s): TIERNEY, L (cited 728 times in SCI)
Title: MARKOV-CHAINS FOR EXPLORING POSTERIOR DISTRIBUTIONS
Source: ANNALS OF STATISTICS, 22 (4): 1701-1728 DEC 1994 (USA)
Abstract: Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are available for constructing hybrid algorithms. This paper outlines some of the basic methods and strategies and discusses some related theoretical and practical issues. On the theoretical side, results from the theory of general state space Markov chains can be used to obtain convergence rates, laws of large numbers and central limit theorems for estimates obtained from Markov chain methods. These theoretical results can be used to guide the construction of more efficient algorithms. For the practical use of Markov chain methods, standard simulation methodology provides several variance reduction techniques and also gives guidance on the choice of sample size and allocation.

Example for the introduction: present (reviewing the research background)
Author(s): LYNCH, M; MILLIGAN, BG (cited 661 times in SCI)
Title: ANALYSIS OF POPULATION GENETIC-STRUCTURE WITH RAPD MARKERS
Source: MOLECULAR ECOLOGY, 3 (2): 91-99 APR 1994 (UK)
Abstract: Recent advances in the application of the polymerase chain reaction make it possible to score individuals at a large number of loci. The RAPD (random amplified polymorphic DNA) method is one such technique that has attracted widespread interest. The analysis of population structure with RAPD data is hampered by the lack of complete genotypic information resulting from dominance, since this enhances the sampling variance associated with single loci as well as induces bias in parameter estimation. We present estimators for several population-genetic parameters (gene and genotype frequencies, within- and between-population heterozygosities, degree of inbreeding and population subdivision, and degree of individual relatedness) along with expressions for their sampling variances. Although completely unbiased estimators do not appear to be possible with RAPDs, several steps are suggested that will insure that the bias in parameter estimates is negligible. To achieve the same degree of statistical power, on the order of 2 to 10 times more individuals need to be sampled per locus when dominant markers are relied upon, as compared to codominant (RFLP, isozyme) markers. Moreover, to avoid bias in parameter estimation, the marker alleles for most of these loci should be in relatively low frequency. Due to the need for pruning loci with low-frequency null alleles, more loci also need to be sampled with RAPDs than with more conventional markers, and some problems of bias cannot be completely eliminated.

Example for the introduction: describe (reviewing the research background)
Author(s): CLONINGER, CR; SVRAKIC, DM; PRZYBECK, TR (cited 926 times)
Title: A PSYCHOBIOLOGICAL MODEL OF TEMPERAMENT AND CHARACTER
Source: ARCHIVES OF GENERAL PSYCHIATRY, 50 (12): 975-990 DEC 1993 (USA)
Abstract: In this study, we describe a psychobiological model of the structure and development of personality that accounts for dimensions of both temperament and character. Previous research has confirmed four dimensions of temperament: novelty seeking, harm avoidance, reward dependence, and persistence, which are independently heritable, manifest early in life, and involve preconceptual biases in perceptual memory and habit formation. For the first time, we describe three dimensions of character that mature in adulthood and influence personal and social effectiveness by insight learning about self-concepts. Self-concepts vary according to the extent to which a person identifies the self as (1) an autonomous individual, (2) an integral part of humanity, and (3) an integral part of the universe as a whole. Each aspect of self-concept corresponds to one of three character dimensions called self-directedness, cooperativeness, and self-transcendence, respectively. We also describe the conceptual background and development of a self-report measure of these dimensions, the Temperament and Character Inventory. Data on 300 individuals from the general population support the reliability and structure of these seven personality dimensions. We discuss the implications for studies of information processing, inheritance, development, diagnosis, and treatment.

Introduction (2): stating the purpose; common words: purpose, attempt, aim.

Example for the introduction: attempt (stating the purpose)
Author(s): Donoho, DL; Johnstone, IM
Title: Adapting to unknown smoothness via wavelet shrinkage
Source: JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 90 (432): 1200-1224 DEC 1995 (USA; cited 429 times)
Abstract: We attempt to recover a function of unknown smoothness from noisy sampled data. We introduce a procedure, SureShrink, that suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: a threshold level is assigned to each dyadic resolution level by the principle of minimizing the Stein unbiased estimate of risk (Sure) for threshold estimates. The computational effort of the overall procedure is order N·log(N) as a function of the sample size N. SureShrink is smoothness adaptive: if the unknown function contains jumps, then the reconstruction (essentially) does also; if the unknown function has a smooth piece, then the reconstruction is (essentially) as smooth as the mother wavelet will allow. The procedure is in a sense optimally smoothness adaptive: it is near minimax simultaneously over a whole interval of the Besov scale; the size of this interval depends on the choice of mother wavelet. We know from a previous paper by the authors that traditional smoothing methods (kernels, splines, and orthogonal series estimates), even with optimal choices of the smoothing parameter, would be unable to perform in a near-minimax way over many spaces in the Besov scale. Examples of SureShrink are given. The advantages of the method are particularly evident when the underlying function has jump discontinuities on a smooth background.

Example for the introduction: to investigate (stating the purpose)
Author(s): OLTVAI, ZN; MILLIMAN, CL; KORSMEYER, SJ (cited 3233 times)
Title: BCL-2 HETERODIMERIZES IN-VIVO WITH A CONSERVED HOMOLOG, BAX, THAT ACCELERATES PROGRAMMED CELL-DEATH
Source: CELL, 74 (4): 609-619 AUG 27 1993
Abstract: Bcl-2 protein is able to repress a number of apoptotic death programs. To investigate the mechanism of Bcl-2's effect, we examined whether Bcl-2 interacted with other proteins. We identified an associated 21 kd protein partner, Bax, that has extensive amino acid homology with Bcl-2, focused within highly conserved domains I and II. Bax is encoded by six exons and demonstrates a complex pattern of alternative RNA splicing that predicts a 21 kd membrane (alpha) and two forms of cytosolic protein (beta and gamma). Bax homodimerizes and forms heterodimers with Bcl-2 in vivo. Overexpressed Bax accelerates apoptotic death induced by cytokine deprivation in an IL-3-dependent cell line. Overexpressed Bax also counters the death repressor activity of Bcl-2. These data suggest a model in which the ratio of Bcl-2 to Bax determines survival or death following an apoptotic stimulus.

Example for the introduction: purposes (stating the purpose)
Author(s): ROGERS, FJ; IGLESIAS, CA
Title: RADIATIVE ATOMIC ROSSELAND MEAN OPACITY TABLES
Source: ASTROPHYSICAL JOURNAL SUPPLEMENT SERIES, 79 (2): 507-568 APR 1992 (USA; cited 512 times in SCI)
Abstract: For more than two decades the astrophysics community has depended on opacity tables produced at Los Alamos. In the present work we offer new radiative Rosseland mean opacity tables calculated with the OPAL code developed independently at LLNL. We give extensive results for the recent Anders-Grevesse mixture which allow accurate interpolation in temperature, density, hydrogen mass fraction, as well as metal mass fraction. The tables are organized differently from previous work. Instead of rows and columns of constant temperature and density, we use temperature and follow tracks of constant R, where R = density/(temperature)³. The range of R and temperature are such as to cover typical stellar conditions from the interior through the envelope and the hotter atmospheres. Cool atmospheres are not considered since photoabsorption by molecules is neglected. Only radiative processes are taken into account so that electron conduction is not included. For comparison purposes we present some opacity tables for the Ross-Aller and Cox-Tabor metal abundances. Although in many regions the OPAL opacities are similar to previous work, large differences are reported. For example, factors of 2-3 opacity enhancements are found in stellar envelope conditions.

Example for the introduction: aim (stating the purpose)
Author(s): EDVARDSSON, B; ANDERSEN, J; GUSTAFSSON, B; LAMBERT, DL; NISSEN, PE; TOMKIN, J (cited 934 times)
Title: THE CHEMICAL EVOLUTION OF THE GALACTIC DISK. 1. ANALYSIS AND RESULTS
Source: ASTRONOMY AND ASTROPHYSICS, 275 (1): 101-152 AUG 1993
Abstract (structured): With the aim to provide observational constraints on the evolution of the galactic disk, we have derived abundances of O, Na, Mg, Al, Si, Ca, Ti, Fe, Ni, Y, Zr, Ba and Nd, as well as individual photometric ages, for 189 nearby field F and G disk dwarfs. The galactic orbital properties of all stars have been derived from accurate kinematic data, enabling estimates to be made of the distances from the galactic center of the stars' birthplaces. Our extensive high resolution, high S/N, spectroscopic observations of carefully selected northern and southern stars provide accurate equivalent widths of up to 86 unblended absorption lines per star between 5000 and 9000 angstrom. The abundance analysis was made with greatly improved theoretical LTE model atmospheres. Through the inclusion of a great number of iron-peak element absorption lines the model fluxes reproduce the observed UV and visual fluxes with good accuracy. A new theoretical calibration of T(eff) as a function of Stromgren b - y for solar-type dwarfs has been established. The new models and T(eff) scale are shown to yield good agreement between photometric and spectroscopic measurements of effective temperatures and surface gravities, but the photometrically derived very high overall metallicities for the most metal rich stars are not supported by the spectroscopic analysis of weak spectral lines.

Example for the introduction: aims (stating the purpose)
Author(s): PAYNE, MC; TETER, MP; ALLAN, DC; ARIAS, TA; JOANNOPOULOS, JD
Title: ITERATIVE MINIMIZATION TECHNIQUES FOR AB-INITIO TOTAL-ENERGY CALCULATIONS - MOLECULAR-DYNAMICS AND CONJUGATE GRADIENTS
Source: REVIEWS OF MODERN PHYSICS, 64 (4): 1045-1097 OCT 1992 (USA, American Physical Society; cited 2654 times in SCI)
Abstract: This article describes recent technical developments that have made the total-energy pseudopotential the most powerful ab initio quantum-mechanical modeling method presently available. In addition to presenting technical details of the pseudopotential method, the article aims to heighten awareness of the capabilities of the method in order to stimulate its application to as wide a range of problems in as many scientific disciplines as possible.

Example for the introduction: includes (introducing the paper's key content or scope)
Author(s): MARCHESINI, G; WEBBER, BR; ABBIENDI, G; KNOWLES, IG; SEYMOUR, MH; STANCO, L (cited 955 times)
Title: HERWIG 5.1 - A MONTE-CARLO EVENT GENERATOR FOR SIMULATING HADRON EMISSION REACTIONS WITH INTERFERING GLUONS
Source: COMPUTER PHYSICS COMMUNICATIONS, 67 (3): 465-508 JAN 1992 (Netherlands, Elsevier)
Abstract: HERWIG is a general-purpose particle-physics event generator, which includes the simulation of hard lepton-lepton, lepton-hadron and hadron-hadron scattering and soft hadron-hadron collisions in one package. It uses the parton-shower approach for initial-state and final-state QCD radiation, including colour coherence effects and azimuthal correlations both within and between jets. This article includes a brief review of the physics underlying HERWIG, followed by a description of the program itself. This includes details of the input and control parameters used by the program, and the output data provided by it. Sample output from a typical simulation is given and annotated.

Example for the introduction: presents (introducing the paper's key content or scope)
Author(s): IDSO, KE; IDSO, SB
Title: PLANT-RESPONSES TO ATMOSPHERIC CO2 ENRICHMENT IN THE FACE OF ENVIRONMENTAL CONSTRAINTS - A REVIEW OF THE PAST 10 YEARS RESEARCH
Source: AGRICULTURAL AND FOREST METEOROLOGY, 69 (3-4): 153-203 JUL 1994 (Netherlands, Elsevier; cited 225 times)
Abstract: This paper presents a detailed analysis of several hundred plant carbon exchange rate (CER) and dry weight (DW) responses to atmospheric CO2 enrichment determined over the past 10 years. It demonstrates that the percentage increase in plant growth produced by raising the air's CO2 content is generally not reduced by less than optimal levels of light, water or soil nutrients, nor by high temperatures, salinity or gaseous air pollution. More often than not, in fact, the data show the relative growth-enhancing effects of atmospheric CO2 enrichment to be greatest when resource limitations and environmental stresses are most severe.

Example for the introduction: emphasizing (introducing the paper's key content or scope)
Author(s): BESAG, J; GREEN, P; HIGDON, D; MENGERSEN, K (cited 296 times in SCI)
Title: BAYESIAN COMPUTATION AND STOCHASTIC-SYSTEMS
Source: STATISTICAL SCIENCE, 10 (1): 3-41 FEB 1995 (USA)
Abstract: Markov chain Monte Carlo (MCMC) methods have been used extensively in statistical physics over the last 40 years, in spatial statistics for the past 20 and in Bayesian image analysis over the last decade. In the last five years, MCMC has been introduced into significance testing, general Bayesian inference and maximum likelihood estimation. This paper presents basic methodology of MCMC, emphasizing the Bayesian paradigm, conditional probability and the intimate relationship with Markov random fields in spatial statistics. Hastings algorithms are discussed, including Gibbs, Metropolis and some other variations. Pairwise difference priors are described and are used subsequently in three Bayesian applications, in each of which there is a pronounced spatial or temporal aspect to the modeling. The examples involve logistic regression in the presence of unobserved covariates and ordinal factors; the analysis of agricultural field experiments, with adjustment for fertility gradients; and processing of low-resolution medical images obtained by a gamma camera. Additional methodological issues arise in each of these applications and in the Appendices. The paper lays particular emphasis on the calculation of posterior probabilities and concurs with others in its view that MCMC facilitates a fundamental breakthrough in applied Bayesian modeling.

Example for the introduction: focuses (introducing the paper's key content or scope)
Author(s): HUNT, KJ; SBARBARO, D; ZBIKOWSKI, R; GAWTHROP, PJ (cited 427 times in SCI)
Title: NEURAL NETWORKS FOR CONTROL-SYSTEMS - A SURVEY
Source: AUTOMATICA, 28 (6): 1083-1112 NOV 1992 (Netherlands, Elsevier)
Abstract: This paper focuses on the promise of artificial neural networks in the realm of modelling, identification and control of nonlinear systems. The basic ideas and techniques of artificial neural networks are presented in language and notation familiar to control engineers. Applications of a variety of neural network architectures in control are surveyed. We explore the links between the fields of control science and neural networks in a unified presentation and identify key areas for future research.

Example for the introduction: focus (introducing the paper's key content or scope)
Author(s): Stuiver, M; Reimer, PJ; Bard, E; Beck, JW (cited 2131 times in SCI)
Title: INTCAL98 radiocarbon age calibration, 24,000-0 cal BP
Source: RADIOCARBON, 40 (3): 1041-1083 1998 (USA)
Abstract: The focus of this paper is the conversion of radiocarbon ages to calibrated (cal) ages for the interval 24,000-0 cal BP (Before Present, 0 cal BP = AD 1950), based upon a sample set of dendrochronologically dated tree rings, uranium-thorium dated corals, and varve-counted marine sediment. The C-14 age-cal age information, produced by many laboratories, is converted to Delta(14)C profiles and calibration curves, for the atmosphere as well as the oceans. We discuss offsets in measured C-14 ages and the errors therein, regional C-14 age differences, tree-coral C-14 age comparisons and the time dependence of marine reservoir ages, and evaluate decadal vs. single-year C-14 results. Changes in oceanic deepwater circulation, especially for the 16,000-11,000 cal BP interval, are reflected in the Delta(14)C values of INTCAL98.

Example for the introduction: emphasis (introducing the paper's key content or scope)
Author(s): LEBRETON, JD; BURNHAM, KP; CLOBERT, J; ANDERSON, DR
Title: MODELING SURVIVAL AND TESTING BIOLOGICAL HYPOTHESES USING MARKED ANIMALS - A UNIFIED APPROACH WITH CASE-STUDIES
Source: ECOLOGICAL MONOGRAPHS, 62 (1): 67-118 MAR 1992 (USA)
Abstract (structured): The understanding of the dynamics of animal populations and of related ecological and evolutionary issues frequently depends on a direct analysis of life history parameters. For instance, examination of trade-offs between reproduction and survival usually rely on individually marked animals, for which the exact time of death is most often unknown, because marked individuals cannot be followed closely through time. Thus, the quantitative analysis of survival studies and experiments must be based on capture-recapture (or resighting) models which consider, besides the parameters of primary interest, recapture or resighting rates that are nuisance parameters. This paper synthesizes, using a common framework, these recent developments together with new ones, with an emphasis on flexibility in modeling, model selection, and the analysis of multiple data sets. The effects on survival and capture rates of time, age, and categorical variables characterizing the individuals (e.g., sex) can be considered, as well as interactions between such effects. This "analysis of variance" philosophy emphasizes the structure of the survival and capture process rather than the technical characteristics of any particular model. The flexible array of models encompassed in this synthesis uses a common notation. As a result of the great level of flexibility and relevance achieved, the focus is changed from fitting a particular model to model building and model selection.

Examples for the methods section
(1) Describing the research or experimental process; common words: test, study, investigate, examine, experiment, discuss, consider, analyze, analysis.
(2) Stating the research or experimental method; common words: measure, estimate, calculate.
(3) Introducing applications or uses; common words: use, apply, application.

Example for the methods section: discusses (describing the research process)
Author(s): LIANG, KY; ZEGER, SL; QAQISH, B (cited 298 times in SCI)
Title: MULTIVARIATE REGRESSION-ANALYSES FOR CATEGORICAL-DATA
Source: JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-METHODOLOGICAL, 54 (1): 3-40 1992
Abstract: It is common to observe a vector of discrete and/or continuous responses in scientific problems where the objective is to characterize the dependence of each response on explanatory variables and to account for the association between the outcomes. The response vector can comprise repeated observations on one variable, as in longitudinal studies or genetic studies of families, or can include observations for different variables. This paper discusses a class of models for the marginal expectations of each response and for pairwise associations. The marginal models are contrasted with log-linear models. Two generalized estimating equation approaches are compared for parameter estimation. The first focuses on the regression parameters; the second simultaneously estimates the regression and association parameters. The robustness and efficiency of each is discussed. The methods are illustrated with analyses of two data sets from public health research.

Example for the methods section: examines (describing the research process)
Author(s): Huo, QS; Margolese, DI; Stucky, GD (cited 643 times in SCI)
Title: Surfactant control of phases in the synthesis of mesoporous silica-based materials
Source: CHEMISTRY OF MATERIALS, 8 (5): 1147-1160 MAY 1996 (USA)
Abstract: The low-temperature formation of liquid-crystal-like arrays made up of molecular complexes formed between molecular inorganic species and amphiphilic organic molecules is a convenient approach for the synthesis of mesostructure materials. This paper examines how the molecular shapes of covalent organosilanes, quaternary ammonium surfactants, and mixed surfactants in various reaction conditions can be used to synthesize silica-based mesophase configurations, MCM-41 (2d hexagonal, p6m), MCM-48 (cubic Ia3d), MCM-50 (lamellar), SBA-1 (cubic Pm3n), SBA-2 (3d hexagonal P6(3)/mmc), and SBA-3 (hexagonal p6m from acidic synthesis media). The structural function of surfactants in mesophase formation can to a first approximation be related to that of classical surfactants in water or other solvents with parallel roles for organic additives. The effective surfactant ion pair packing parameter, g = V/alpha(0)l, remains a useful molecular structure-directing index to characterize the geometry of the mesophase products, and phase transitions may be viewed as a variation of g in the liquid-crystal-like solid phase. Solvent and cosolvent structure direction can be effectively used by varying polarity, hydrophobic/hydrophilic properties and functionalizing the surfactant molecule, for example with hydroxy group or variable charge. Surfactants and synthesis conditions can be chosen and controlled to obtain predicted silica-based mesophase products. A room-temperature synthesis of the bicontinuous cubic phase, MCM-48, is presented. A low-temperature (100 degrees C) and low-pH (7-10) treatment approach that can be used to give MCM-41 with high-quality, large pores (up to 60 Angstrom), and pore volumes as large as 1.6 cm(3)/g is described.

Example for the methods section: estimates (describing the research process)
Author(s): KESSLER, RC; MCGONAGLE, KA; ZHAO, SY; NELSON, CB; HUGHES, M; ESHLEMAN, S; WITTCHEN, HU; KENDLER, KS (cited 4350 times in SCI)
Title: LIFETIME AND 12-MONTH PREVALENCE OF DSM-III-R PSYCHIATRIC-DISORDERS IN THE UNITED-STATES - RESULTS FROM THE NATIONAL-COMORBIDITY-SURVEY
Source: ARCHIVES OF GENERAL PSYCHIATRY, 51 (1): 8-19 JAN 1994 (USA)
Abstract: Background: This study presents estimates of lifetime and 12-month prevalence of 14 DSM-III-R psychiatric disorders from the National Comorbidity Survey, the first survey to administer a structured psychiatric interview to a national probability sample in the United States. Methods: The DSM-III-R psychiatric disorders among persons aged 15 to 54 years in the noninstitutionalized civilian population of the United States were assessed with data collected by lay interviewers using a revised version of the Composite International Diagnostic Interview. Results: Nearly 50% of respondents reported at least one lifetime disorder, and close to 30% reported at least one 12-month disorder. The most common disorders were major depressive episode, alcohol dependence, social phobia, and simple phobia. More than half of all lifetime disorders occurred in the 14% of the population who had a history of three or more comorbid disorders. These highly comorbid people also included the vast majority of people with severe disorders. Less than 40% of those with a lifetime disorder had ever received professional treatment, and less than 20% of those with a recent disorder had been in treatment during the past 12 months. Consistent with previous risk factor research, it was found that women had elevated rates of affective disorders and anxiety disorders, that men had elevated rates of substance use disorders and antisocial personality disorder, and that most disorders declined with age and with higher socioeconomic status. Conclusions: The prevalence of psychiatric disorders is greater than previously thought to be the case. Furthermore, this morbidity is more highly concentrated than previously recognized in roughly one sixth of the population who have a history of three or more comorbid disorders. This suggests that the causes and consequences of high comorbidity should be the focus of research attention. The majority of people with psychiatric disorders fail to obtain professional treatment. Even among people with a lifetime history of three or more comorbid disorders, the proportion who ever obtain specialty sector mental health treatment is less than 50%. These results argue for the importance of more outreach and more research on barriers to professional help-seeking.

Example for the methods section: measure (stating the method)
Author(s): Schlegel, DJ; Finkbeiner, DP; Davis, M (cited 2972 times in SCI)
Title: Maps of dust infrared emission for use in estimation of reddening and cosmic microwave background radiation foregrounds
Source: ASTROPHYSICAL JOURNAL, 500 (2): 525-553 Part 1 JUN 20 1998 (USA)
Abstract (excerpt): The primary use of these maps is likely to be as a new estimator of Galactic extinction. To calibrate our maps, we assume a standard reddening law and use the colors of elliptical galaxies to measure the reddening per unit flux density of 100 μm emission. We find consistent calibration using the B-R color distribution of a sample of the 106 brightest cluster ellipticals, as well as a sample of 384 ellipticals with B-V and Mg line strength measurements. For the latter sample, we use the correlation of intrinsic B-V versus Mg2 index to tighten the power of the test greatly. We demonstrate that the new maps are twice as accurate as the older Burstein-Heiles reddening estimates in regions of low and moderate reddening. The maps are expected to be significantly more accurate in regions of high reddening. These dust maps will also be useful for estimating millimeter emission that contaminates cosmic microwave background radiation experiments and for estimating soft X-ray absorption. We describe how to access our maps readily for general use.

Example for the results section: application (introducing applications or uses)
Author(s): MALLAT, S; ZHONG, S (cited 508 times in SCI)
Title: CHARACTERIZATION OF SIGNALS FROM MULTISCALE EDGES
Source: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 14 (7): 710-732 JUL 1992 (USA)
Abstract: A multiscale Canny edge detection is equivalent to finding the local maxima of a wavelet transform. We study the properties of multiscale edges through the wavelet

Approaches and Methods for New Drug Development


Chemical structure parameters
Quantitatively represent the structural features of a compound:
– electronic parameters
– steric parameters
– hydrophobic parameters (lipid-water partition parameters)
– structural parameters
Obtaining the parameters
Some parameters must be measured on the compound; others can be calculated from existing data
– values for compounds not yet synthesized can be extrapolated
– allowing activity to be predicted (see the sketch below)
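As an illustration of computing such parameters directly from structure and using them for activity prediction, here is a minimal Python sketch with RDKit descriptors and an ordinary least-squares fit. The SMILES strings and activity values are hypothetical examples, not data from this text.

```python
# Hypothetical QSAR sketch: structure-derived descriptors plus a linear fit.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors

smiles = ["Oc1ccccc1", "Oc1ccccc1C", "Oc1ccccc1Cl",
          "Oc1ccc(O)cc1", "Oc1ccccc1C(=O)O"]    # hypothetical analog series
activity = np.array([1.2, 1.5, 1.8, 0.9, 1.1])  # hypothetical pIC50 values

X = np.array([
    [Crippen.MolLogP(m),    # hydrophobic (lipid-water partition) parameter
     Descriptors.TPSA(m),   # topological polar surface area (electronic proxy)
     Descriptors.MolWt(m)]  # a simple structural/bulk parameter
    for m in (Chem.MolFromSmiles(s) for s in smiles)
])

# Classical Hansch-type linear model: activity ~ w1*logP + w2*TPSA + w3*MW + b.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, activity, rcond=None)
print("fitted weights (logP, TPSA, MW, intercept):", coef.round(4))

# Once fitted, the same descriptors predict activity for an unsynthesized analog.
m_new = Chem.MolFromSmiles("Oc1ccccc1Br")  # hypothetical, not yet synthesized
x_new = [Crippen.MolLogP(m_new), Descriptors.TPSA(m_new),
         Descriptors.MolWt(m_new), 1.0]
print("predicted activity:", float(np.dot(x_new, coef)))
```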
3D-QSAR
For an enzyme with available X-ray crystallographic data, the three-dimensional structure can be displayed by computer,
and the inhibitor or substrate can be docked into the enzyme's active site.
Lead optimization
Lead optimization is the synthetic modification of a biologically active compound to fulfill all stereoelectronic, physicochemical, pharmacokinetic, and toxicologic requirements for clinical usefulness.
Improving the selectivity and efficacy of drug action
If a compound is rather toxic but has a good therapeutic effect on pathological tissues and cells,
a carrier can be introduced into the drug molecule so that the drug is transported to the target tissue or cell site;
there, enzymatic action or differences in the chemical environment decompose the prodrug and release the parent drug, achieving the therapeutic purpose.
Cyclophosphamide
It has no cytotoxic activity of its own; it acquires alkylating activity only after metabolic transformation in vivo, through activation by hepatic microsomal mixed-function oxidases.
The relationship between the two
Lead discovery
– provides the basis and new structural types for finding optimal compounds
Lead optimization
– the deepening and development of the lead compound
The two complement each other.
I. Discovery of lead compounds
1. From natural resources
2. From existing drugs
3. From active endogenous substances
4. Using combinatorial chemistry and high-throughput screening
1. Leads from natural resources
Plants, microorganisms, animals
Plant constituents as leads



RESEARCH ARTICLE

Quality by design: screening of critical variables and formulation optimization of Eudragit E nanoparticles containing dutasteride

Se-Jin Park · Gwang-Ho Choo · Sung-Joo Hwang · Min-Soo Kim

Received: 25 September 2012 / Accepted: 8 January 2013 / Published online: 28 February 2013
© The Pharmaceutical Society of Korea 2013
Arch. Pharm. Res. (2013) 36:593-601. DOI 10.1007/s12272-013-0064-z

Abstract: The study was aimed at screening, understanding, and optimizing product variability of dutasteride-loaded Eudragit E nanoparticles prepared by solvent displacement, using Plackett–Burman screening and a central composite design. The independent process and formulation factors selected included: drug loading (%), solute concentration (mg/mL), Soluplus concentration (mg/mL), injection rate (mL/min), organic solvent type (methanol or ethanol), stirring rate (rpm), and organic-to-aqueous phase volume ratio. Among these factors, solute concentration was associated with increased particle size, a broad particle size distribution, and enhanced entrapment efficiency. On the other hand, Soluplus concentration played a role in decreasing particle size, narrowing the particle size distribution, and reducing entrapment efficiency. Other formulation and process factors did not have a significant impact on nanoparticle properties, assuming they were within the limits used in this study. The optimized formulation was achieved with 20 mg/mL solute and 3.22 mg/mL Soluplus, and the observed responses were very close to the values predicted using the response surface methodology. The results clearly showed that the quality-by-design concept could be effectively applied to optimize dutasteride-loaded Eudragit E nanoparticles.

Keywords: Dutasteride · Eudragit E · Nanoparticle · Dissolution · QbD · Optimization

Affiliations: S.-J. Park, G.-H. Choo, M.-S. Kim (corresponding author, e-mail: mskim@inje.ac.kr), Department of Pharmaceutical Engineering, Inje University, Gimhae, Gyeongnam 621-749, Republic of Korea. S.-J. Hwang, Yonsei Institute of Pharmaceutical Sciences and College of Pharmacy, Yonsei University, 162-1 Songdo-dong, Yeonsu-gu, Incheon 406-840, Republic of Korea.

Introduction

pH-sensitive polymeric nanoparticles have shown promise for oral drug delivery, especially for peptide/protein drugs and poorly water-soluble medicines (Wang and Zhang 2012). Eudragit compounds are extensively used in the preparation of pH-sensitive nanoparticles. Basic butylated methacrylate copolymer (Eudragit E100) is a cationic copolymer based on dimethylaminoethyl methacrylate, butyl methacrylate, and methyl methacrylate, which can dissolve in the stomach. The use of Eudragit E nanoparticles has previously shown potential as an approach for improving the solubility, bioavailability, and performance of poorly water-soluble drugs such as andrographolide (Chellampillai and Pawar 2011), cyclosporine (Dai et al. 2004), genistein (Tang et al. 2011), meloxicam (Khachane et al. 2011), and quercetin (Wu et al. 2008). Eudragit E nanoparticles can be produced by a solvent displacement method using antisolvent nanoprecipitation, involving the addition of an organic phase containing a polymer and drug into an aqueous phase containing a stabilizer (Molpeceres et al. 1996; Quintanar-Guerrero et al. 1999). Stabilization of nanoparticle suspensions is critical to ensure their usability in various drug formulations. For stabilization, surfactants, non-ionic polymers, and amphiphilic block copolymers are usually used (Thorat and Dalvi 2012). In this study, a relatively new amphiphilic polymeric solubilizer, Soluplus, was used for the first time as a stabilizer for the preparation of Eudragit E nanoparticles. Soluplus is a graft copolymer comprising polyethylene glycol, polyvinyl acetate, and polyvinyl caprolactam. Soluplus shows excellent solubilizing properties for Biopharmaceutics Classification System (BCS) class II substances and offers the possibility of producing solid solutions by hot-melt extrusion (Linn et al. 2012). The first objective of this investigation was to fabricate Eudragit E nanoparticles by using very-low-energy methods and Soluplus to stabilize the product. The 5-α-reductase inhibitor dutasteride was used as a representative poorly water-soluble drug. It is insoluble in water, with solubility below the quantitation limit of the assay (0.038 ng/mL) (US FDA). However, it is soluble in methanol (64 mg/mL) and ethanol (44 mg/mL) (GSK). It is presently marketed as a soft gelatin capsule that contains 0.5 mg of dutasteride in a mixture of oils, such as mono- and diglycerides of caprylic/capric acid and butylated hydroxytoluene, because of its insolubility in water. To improve solubility and bioavailability, the target product profile of the intended dutasteride-loaded Eudragit E nanoparticles was (a) high drug entrapment efficiency with low and predictable variation; (b) mean particle size below 200 nm; and (c) polydispersity index of <0.200.

To achieve the above target product profile, a quality-by-design (QbD) approach was used. QbD emphasizes the systematic development of pharmaceutical products on the basis of sound scientific principles and refers to the achievement of a predictable quality with desired, predetermined specifications (Verma et al. 2009). Therefore, a very useful component of the QbD approach is an understanding of the relevant factors and their interactions in a desired set of experiments (Lee et al. 2012). Process and formulation can be understood by developing them based on multivariate analysis of designed experiments and/or historical data that identify and characterize the critical-to-quality process parameters, as well as the root causes of variability (Rahman et al. 2010). Design of experiments (DOE) is a very useful tool to identify critical process parameters and to optimize the respective process conditions (Wu et al. 2011). To understand and optimize process and formulation, many statistical DOEs are used during pharmaceutical product/process development (Cha et al.
2010; Xu et al. 2011; Lee et al. 2012). Especially, the Plackett–Burman screening design is extremely useful to explore critical process parameters and critical material attributes (Jin et al. 2008; Rahman et al. 2010; Xu et al. 2012). The response surface methodology, including statistical experimental designs such as the central composite design and the Box-Behnken design, has been commonly used for the optimization of formulation and process in the development of pharmaceutical products (Kim et al. 2007; Singh et al. 2011; Xu et al. 2012).

The aim of this study was to screen, understand, and optimize product variability resulting from critical factors affecting the characteristics of dutasteride-loaded Eudragit E nanoparticles prepared by a solvent displacement method. First, a Plackett–Burman screening design was used to identify the critical factors affecting the properties of drug-loaded Eudragit E nanoparticles. Then, a central composite design was applied to explore the optimum levels of the critical factors. Finally, further experimental tests were performed to test the accuracy of the generated statistical model.

Materials and methods

Materials

Dutasteride was obtained from Dr. Reddy's Laboratories Ltd. (purity 99.6%, India). Eudragit E100 and Soluplus were kindly supplied by BASF (BASF, Ludwigshafen, Germany). All organic solvents were HPLC-grade. All other chemicals were analytical-grade, and double-distilled water was used throughout the study.

Preparation of drug-loaded Eudragit E nanoparticles

The preparation of dutasteride-loaded Eudragit E nanoparticles was based on the solvent displacement process (Molpeceres et al. 1996). Briefly, Eudragit E100 and dutasteride were dissolved in an organic solvent (methanol or ethanol), and this organic phase was injected into an aqueous medium containing Soluplus by using a peristaltic pump (KMC-1303P2, Vision Scientific Co. Ltd., Korea) with a polyethylene tubing nozzle (i.d. 0.6 mm, o.d. 2 mm) under magnetic stirring. Later, the organic solvent was eliminated under reduced pressure by using a Rotovapor apparatus (Rotovapor R-114, Buchi, Flawil, Switzerland). The nanoparticle suspension sample was placed in a flask and vacuum was applied at 40 °C.

Plackett–Burman screening design

The Plackett–Burman screening design is extremely useful in preliminary studies in which the aim is to identify factors that can be addressed or eliminated in further investigation (Jin et al. 2008; Rahman et al. 2010). The Plackett–Burman design was employed to identify critical factors in a minimal number of runs with a good degree of accuracy. The linear equation of the model is as follows:

Y = A0 + A1X1 + A2X2 + A3X3 + A4X4 + A5X5 + A6X6 + A7X7

where Y is the response, A0 is a constant, and A1 to A7 are the coefficients of the response values. The selected independent process and formulation factors were as follows: drug loading (%) in the Eudragit E nanoparticles, solute concentration (concentration of drug and Eudragit E in the organic solvent), Soluplus concentration in the aqueous medium, injection rate of the drug solution into the aqueous medium, organic solvent type, stirring rate, and organic-to-aqueous phase volume ratio. The parameter levels were selected based on a preliminary study and on findings in the literature (Table 1).

Table 1: Variables in the Plackett–Burman design

| Independent variables | Low level | High level |
|---|---|---|
| X1: Drug loading (%) | 10 | 30 |
| X2: Solute concentration (mg/mL) | 10 | 30 |
| X3: Soluplus concentration (mg/mL) | 1 | 5 |
| X4: Injection rate (mL/min) | 1 | 5 |
| X5: Solvent type | Methanol | Ethanol |
| X6: Stirring rate (rpm) | 600 | 1200 |
| X7: Volume ratio (organic-to-aqueous phase) | 10 | 20 |

Dependent variables: Y1: mean particle size (nm); Y2: polydispersity index; Y3: entrapment efficiency (%).
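To illustrate this screening step, here is a minimal Python sketch that constructs the classic 12-run Plackett–Burman design for seven two-level coded factors and fits the linear model above by least squares. The response vector is a random placeholder, not the paper's data.

```python
# Sketch: 12-run Plackett-Burman design and least-squares fit of
# Y = A0 + A1*X1 + ... + A7*X7 on coded (-1/+1) factor levels.
import numpy as np

gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])    # N=12 generator row
rows = [np.roll(gen, i) for i in range(11)]                # 11 cyclic shifts
design = np.vstack(rows + [-np.ones(11, dtype=int)])       # plus the all-minus run
X = design[:, :7]                                          # keep 7 factor columns

y = np.random.default_rng(1).normal(120.0, 30.0, size=12)  # placeholder responses

A = np.hstack([np.ones((12, 1)), X])                       # intercept + factors
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, value in zip(["A0"] + [f"A{i}" for i in range(1, 8)], coef):
    print(f"{name} = {value:+.3f}")
```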
Solvents were selected based on the solubility of dutasteride and Eudragit E and their miscibility with water. Mean particle size (Y1), polydispersity index (Y2), and entrapment efficiency (%) (Y3) were used as dependent variables. Design-Expert software (V. 8.0, Stat-Ease Inc., Minneapolis, MN, USA) was used for the generation and evaluation of the statistical experimental design.

Central composite design

A central composite design was used to statistically optimize the critical factors and to evaluate the main, interaction, and quadratic effects of the factors on the properties of dutasteride-loaded Eudragit E nanoparticles. Solute concentration and Soluplus concentration were selected as critical factors based on the screening study results. Each factor was tested at five different levels, and five center points were included. Design-Expert software was used for the design, analysis, and plotting of the various 3D and contour graphs.

Particle size analysis

The particle size distribution, mean diameters, and polydispersity index of the drug-loaded Eudragit E nanoparticles were determined by dynamic light scattering using a laser particle size analyzer (BI-9000, Brookhaven, Upton, NY) at a scattering angle of 90° at room temperature. For each sample, measurements were performed in triplicate.

Entrapment efficiency

The entrapment efficiency (%), which corresponds to the percentage of dutasteride encapsulated within and adsorbed onto the Eudragit E nanoparticles, was determined by measuring the concentration of free dutasteride in the dispersion medium. The nanoparticle dispersion was ultracentrifuged (15,000 rpm for 2 h) to separate the nanoparticles. The supernatant was analyzed for free dutasteride by HPLC after suitable dilution with methanol. HPLC analyses of in vitro samples of dutasteride were performed using an HPLC system consisting of an LC10ADvp pump, an SIL-10A autoinjector, and an SPD-10ADvp UV detector (Shimadzu, Japan). A C18 analytical column (Inertsil ODS-3, 5 μm, 4.6 × 150 mm, GL Sciences Inc., Japan) was used at room temperature. The mobile phase was 60% acetonitrile and 40% water. The injection volume was 20 μL, and the eluent flow rate was 1.0 mL/min. The signal was monitored at 210 nm (Baek and Kim 2012). The entrapment efficiency was calculated from the following equation:

Entrapment efficiency (%) = [(weight of the feeding drug − weight of the free drug) / weight of the feeding drug] × 100

Results and discussion

Effect of factors on mean particle size (Y1)

In this study, Eudragit E nanoparticles containing dutasteride were prepared using a solvent displacement nanoprecipitation method. The experimental runs with variables and corresponding responses for the 12 formulations tested are presented in Table 2. The mean particle size ranged from 62.2 nm (PB11) to 180.6 nm (PB4), depending on the factor levels selected during preparation (Table 2). The linear model explaining the effects of the factors on the mean particle size (Y1) is:

Y1 = 115.81 − 3.09X1 + 29.39X2 − 17.01X3 + 0.29X4 + 9.77X5 + 1.21X6 + 2.73X7 (R² = 0.9271, F = 7.27, p = 0.0370)

Statistical analysis revealed that the most significant factors influencing mean particle size (Y1) were the solute concentration (X2) and the Soluplus concentration (X3) (p < 0.05), as shown in Table 3.
Factors with coefficients of greater magnitude show a large effect on the responses, as shown in Table 3. A positive or negative sign before a coefficient in the models indicates a synergistic or an antagonistic effect of the indicated factor. An increase in mean particle size was observed with either an increase in solute concentration (X2) or a decrease in Soluplus concentration (X3). All other tested factor levels showed no significant impact on mean particle size (p > 0.05).

Table 2: Plackett–Burman design randomized runs and responses

| Run | X1 | X2 | X3 | X4 | X5 | X6 | X7 | Mean particle size (nm) (Y1) | Polydispersity index (Y2) | Entrapment efficiency (%) (Y3) |
|---|---|---|---|---|---|---|---|---|---|---|
| PB1 | 10 | 30 | 5 | 5 | Methanol | 1200 | 20 | 130.5 ± 2.1 | 0.210 | 97.89 ± 0.55 |
| PB2 | 10 | 10 | 1 | 5 | Ethanol | 1200 | 10 | 119.5 ± 1.5 | 0.073 | 98.13 ± 0.43 |
| PB3 | 10 | 10 | 1 | 1 | Methanol | 600 | 10 | 89.6 ± 0.9 | 0.093 | 98.11 ± 0.36 |
| PB4 | 30 | 30 | 1 | 5 | Ethanol | 600 | 20 | 180.6 ± 2.3 | 0.210 | 99.82 ± 0.15 |
| PB5 | 10 | 30 | 1 | 1 | Methanol | 1200 | 20 | 154.4 ± 1.7 | 0.161 | 99.02 ± 0.25 |
| PB6 | 10 | 10 | 5 | 5 | Ethanol | 600 | 20 | 66.5 ± 0.6 | 0.082 | 95.04 ± 1.11 |
| PB7 | 30 | 30 | 1 | 5 | Methanol | 600 | 10 | 135.8 ± 2.1 | 0.169 | 99.85 ± 0.12 |
| PB8 | 30 | 10 | 1 | 1 | Ethanol | 1200 | 20 | 117 ± 0.9 | 0.087 | 98.85 ± 0.23 |
| PB9 | 30 | 10 | 5 | 5 | Methanol | 1200 | 10 | 63.7 ± 0.5 | 0.138 | 98.56 ± 0.19 |
| PB10 | 30 | 30 | 5 | 1 | Ethanol | 1200 | 10 | 117 ± 1.6 | 0.226 | 98.18 ± 0.61 |
| PB11 | 30 | 10 | 5 | 1 | Methanol | 600 | 20 | 62.2 ± 0.8 | 0.097 | 97.51 ± 0.55 |
| PB12 | 10 | 30 | 5 | 1 | Ethanol | 600 | 10 | 152.9 ± 3.2 | 0.242 | 98.61 ± 0.48 |

Table 3: Statistical analysis of particle size (Y1), polydispersity index (Y2), and entrapment efficiency (Y3) in the Plackett–Burman design

| Factor | Y1 coefficient | Y1 p value | Y2 coefficient | Y2 p value | Y3 coefficient | Y3 p value |
|---|---|---|---|---|---|---|
| Drug loading, A1 | −3.09 | 0.5691 | 5.50 × 10⁻³ | 0.5161 | 0.50 | 0.1158 |
| Solute concentration, A2 | 29.39 | 0.0042* | 0.054 | 0.0022* | 0.60 | 0.0738 |
| Soluplus concentration, A3 | −17.01 | 0.0271* | 0.017 | 0.0950 | −0.67 | 0.0549 |
| Injection rate, A4 | 0.29 | 0.9562 | −2.00 × 10⁻³ | 0.8086 | −0.082 | 0.7578 |
| Solvent type, A5 | 9.77 | 0.1217 | 4.33 × 10⁻³ | 0.6050 | −0.19 | 0.4805 |
| Stirring rate, A6 | 1.21 | 0.8206 | 1.67 × 10⁻⁴ | 0.9838 | 0.14 | 0.5987 |
| Volume ratio, A7 | 2.73 | 0.6141 | −7.83 × 10⁻³ | 0.3682 | −0.28 | 0.3272 |

*Significant values at p < 0.05

Effect of factors on polydispersity index (Y2)

The polydispersity index, an estimate of the width of the particle size distribution, is dimensionless and scaled between 0 and 1. Samples with a very broad size distribution have polydispersity index values of >0.7. Polydispersity index values lower than 0.200 describe a good-to-average distribution, in which case the colloidal suspension is considered homogeneous. The polydispersity index (Y2) of the nanoparticles ranged from 0.073 (PB2) to 0.226 (PB11) (Table 2). The following model was used to describe the effect of the various factors on the polydispersity index:

Y2 = 0.15 + 5.5 × 10⁻³X1 + 0.054X2 + 0.017X3 − 2.00 × 10⁻³X4 + 4.33 × 10⁻³X5 + 1.67 × 10⁻⁴X6 − 7.83 × 10⁻³X7 (R² = 0.9327, F = 7.92, p = 0.0318)

The most significant factor affecting the polydispersity index was the solute concentration (X2) used in the preparation of the nanoparticles (p < 0.05). In addition, the Soluplus concentration (X3) had a positive but non-significant effect on the polydispersity index (p > 0.05).

Effect of factors on entrapment efficiency (Y3)

As shown in Table 2, the entrapment efficiency of the nanoparticles was above 95% for all formulations and varied between 95.04% (PB6) and 99.85% (PB7). The equation describing the effect of factors on entrapment efficiency is as follows:
Y3 = 98.30 + 0.50X1 + 0.60X2 − 0.67X3 − 0.082X4 − 0.19X5 + 0.14X6 − 0.28X7  (R² = 0.8283; F = 2.76; p = 0.1720)

However, statistical analysis confirmed that this model was not significant and that the independent factors did not have a significant impact on the entrapment efficiency (p > 0.05). Nevertheless, an increase in entrapment efficiency would be expected with increased solute concentration (X2) and decreased Soluplus concentration (X3), among the various tested factors (p < 0.1).

Central composite design

After the solute and Soluplus concentrations were selected as critical factors based on the Plackett–Burman screening design, a 2-factor, 5-level central composite design was applied to explore the optimum levels of these factors (Table 4). This methodology consisted of 2 groups of design points, including 2-level factorial design points, axial or star points, and center points (Hao et al. 2012). The 2 independent factors were studied at 5 different levels, coded as −α, −1, 0, 1, and +α, to determine the main, interaction, and quadratic effects of the solute and Soluplus concentrations on the selected responses. The value of alpha (1.414) was chosen to fulfill rotatability of the design (Myers and Montgomery 2002a). The other variables were fixed at the following values: drug-loading, 10%; injection rate, 5 mL/min; stirring rate, 1200 rpm; and volume ratio, 20 (water/ethanol). As shown in Fig. 1, the mean particle size of nanoparticles prepared using methanol was comparable to that of nanoparticles prepared using ethanol, and the size difference was non-significant (p > 0.05). Here, ethanol was used as the organic solvent because its low toxicity facilitates further study. The experimental runs with formulation variables and corresponding responses for the 13 tested formulations are presented in Table 4.

Table 4. The composition and observed responses from randomized runs in the central composite design

Run    Solute conc (mg/mL)  Soluplus conc (mg/mL)  Y1 (nm)        Y2     Y3 (%)
CCD1   20.00                2.00                   166.7 ± 1.6    0.151  97.89 ± 0.78
CCD2   40.00                2.00                   773.2 ± 24.3   0.298  99.14 ± 0.15
CCD3   20.00                8.00                   80.4 ± 0.1     0.114  95.83 ± 0.95
CCD4   40.00                8.00                   352.3 ± 10.4   0.239  96.23 ± 0.78
CCD5   15.86                5.00                   67.3 ± 0.3     0.101  97.34 ± 0.45
CCD6   44.14                5.00                   687.0 ± 8.3    0.296  99.02 ± 0.23
CCD7   30.00                0.76                   501.5 ± 24.9   0.253  99.85 ± 0.02
CCD8   30.00                9.24                   129.7 ± 0.6    0.158  95.46 ± 0.75
CCD9   30.00                5.00                   161.5 ± 1.3    0.178  97.52 ± 0.72
CCD10  30.00                5.00                   165.1 ± 2.1    0.175  98.01 ± 0.49
CCD11  30.00                5.00                   165.3 ± 0.5    0.180  97.89 ± 0.34
CCD12  30.00                5.00                   160.1 ± 1.5    0.179  98.21 ± 0.32
CCD13  30.00                5.00                   158.9 ± 1.8    0.186  98.59 ± 0.41

Table 5. Statistical analysis of particle size (Y1), polydispersity index (Y2), and entrapment efficiency (Y3) in the central composite design

                                     Y1 coeff  p value    Y2 coeff     p value    Y3 coeff  p value
Solute conc, B1                      219.35    <0.0001*   0.069        <0.0001*   0.50      0.0225*
Soluplus conc, B2                    −129.13   <0.0001*   −0.028       <0.0001*   −1.40     <0.0001*
Solute conc × Soluplus conc, B3      −83.65    <0.0001*   −3.75×10⁻³   0.3065     –         –
Solute conc × Solute conc, B4        106.68    <0.0001*   9.51×10⁻³    0.0078*    –         –
Soluplus conc × Soluplus conc, B5    75.90     <0.0001*   0.013        0.0015*    –         –

* Significant values at p < 0.05

Figure 1. Effect of solvent type on mean particle size (a), polydispersity index (b), and entrapment efficiency (c) in the Plackett–Burman design.

Figure 2. Effect of the concentration of solute and Soluplus on responses using response surface plots (a, b, and c) and contour plots (d, e, and f).

The best fit for each of the responses was found
for the quadratic models of Y1 and Y2 and the linear model of Y3, respectively, because these provided the largest R² values and the smallest predicted residual sum of squares (PRESS) values. PRESS is a measure of how well the model fits the points in the design and is provided by the Design-Expert software. The models showed a statistically non-significant lack of fit, and their adequacy was confirmed with residual plot tests of the regression models for all responses (data not shown). The quadratic model generated by the design is of the form:

Y = B0 + B1X1 + B2X2 + B3X1X2 + B4X1² + B5X2²

where B0 is an intercept and B1 through B5 are the coefficients of the respective factors and their interaction terms. The following models were used to describe the effect of the factors on the tested responses (Table 5):

Mean particle size (nm) = 162.18 + 219.35 × solute conc − 129.13 × Soluplus conc − 83.65 × solute conc × Soluplus conc + 106.68 × solute conc² + 75.90 × Soluplus conc²

Polydispersity index = 0.18 + 0.069 × solute conc − 0.028 × Soluplus conc − 3.75×10⁻³ × solute conc × Soluplus conc + 9.51×10⁻³ × solute conc² + 0.013 × Soluplus conc²

Entrapment efficiency (%) = 97.77 + 0.50 × solute conc − 1.40 × Soluplus conc

Contour plots and three-dimensional response surfaces were drawn to estimate the effects of the independent variables on each response (Fig. 2). Increased mean particle size and polydispersity index were observed with an increase in solute (drug and Eudragit E) concentration, as shown in Fig. 2. The solvent displacement method relies on the rapid diffusion of the solvent into the external aqueous phase, which provokes polymer aggregation in the form of colloidal nanoparticles (Beck-Broichsitter et al. 2010). This mechanism of particle formation is comparable to the 'diffusion and stranding' process found in spontaneous emulsification (Moinard-Checot et al. 2006). These results can thus be explained by the increased viscosity of the organic phase: the higher the concentration of the solute (drug and Eudragit E) in ethanol, the lower the velocity of diffusion owing to the increasing viscosity of the solute solution. The particles formed from the turbulent flow of ethanol are therefore larger, as supported by previous reports (Galindo-Rodriguez et al. 2004; Legrand et al. 2007). On the other hand, Soluplus concentration played a role in decreasing both the particle size and the polydispersity index. This can be attributed to a decreased interfacial tension of the aqueous phase, resulting in a smaller particle size. As shown in Table 4, the entrapment efficiency of the nanoparticles ranged from 95.46% (CCD8) to 99.85% (CCD7). As expected from the screening design, the entrapment efficiency increased with increasing solute concentration and decreasing Soluplus concentration. Increased solute concentration in the organic phase results in faster solidification, which further prevents drug diffusion to the aqueous phase, whereas increased Soluplus concentration increases the solubility of dutasteride in the aqueous phase and further enhances drug diffusion into it. Nevertheless, the entrapment efficiency of the nanoparticles was above 95% for all formulations.

Table 6. Comparative levels of predicted and observed responses for the optimized formulation

Responses  Predicted values  Observed values(a): Batch 1 / Batch 2 / Batch 3 / Mean       Predicted error(b) (%)
Y1 (nm)    103.5             107.7 ± 0.5 / 113.6 ± 1.3 / 105.9 ± 1.9 / 109.1 ± 4.0        5.41
Y2         0.139             0.156 / 0.145 / 0.151 / 0.151 ± 0.0055                       8.39
Y3 (%)     98.09             98.31 ± 0.48 / 98.45 ± 0.23 / 98.59 ± 0.29 / 98.45 ± 0.14    0.37

(a) Data represent mean ± standard deviation, n = 3.
(b) Predicted error (%) = (observed value − predicted value) / predicted value × 100%.

Figure 3. Overlay plot for the effect of different variables on mean particle size, polydispersity index, and entrapment efficiency.

After analyzing the polynomial equations relating the dependent and independent variables, the optimum values of the variables were obtained by graphical and numerical analyses using the Design-Expert software, based on the desired criteria. The aim of optimization of pharmaceutical formulations is generally to determine the levels of the variables from which a robust product with high-quality characteristics may be produced. In this study, the target profile of the intended dutasteride-loaded Eudragit E nanoparticles was: (a) mean particle size < 200 nm, (b) polydispersity index < 0.200, and (c) entrapment efficiency > 95.0%. Using these criteria, the 3 variables were combined to determine an overall optimum design. Figure 3 shows an acceptable region that satisfies the requirements on these responses. This optimum region can therefore be used to construct the design space of dutasteride-loaded Eudragit E nanoparticles with high-quality characteristics. A further optimization and validation process was then undertaken using desirability criteria of maximum entrapment efficiency and minimum mean particle size and polydispersity index, based on the methodology described by Myers and Montgomery (2002b). A maximum desirability of 0.780 could be achieved, with the composition of the optimal formulation determined to be 20 mg/mL solute concentration and 3.22 mg/mL Soluplus concentration. At these levels, the predicted values of Y1, Y2, and Y3 were 103.5 nm, 0.139, and 98.09%, respectively. Three new batches of nanoparticles with the predicted levels of the formulation factors were prepared to confirm the validity of the optimization procedure. As shown in Table 6, the optimized formulation had an observed mean particle size of 109.1 ± 4.0 nm, a polydispersity index of 0.151 ± 0.0055, and an entrapment efficiency of 98.45% ± 0.14%, all in good agreement with the predicted values.

Conclusion

In this study, Plackett–Burman screening and central composite designs were shown to improve the fundamental understanding of the nanoparticle preparation process and to help identify critical formulation and process parameters that affect dutasteride-loaded Eudragit E nanoparticles. Among the critical factors, solute concentration was associated with increased particle size, a broadened particle size distribution, and enhanced entrapment efficiency. In contrast, Soluplus concentration was related to decreased particle size, a narrowed particle size distribution, and reduced entrapment efficiency. Other formulation and process factors did not have a significant impact on the properties of dutasteride-loaded Eudragit E nanoparticles within the limits used in this study. The optimized formulation was achieved with 20 mg/mL solute concentration and 3.22 mg/mL Soluplus concentration, and the observed responses of the optimized formulation were very close to the values predicted using the response surface methodology. The information obtained from this statistical design study is expected to be useful for further work. Future studies will focus on the development of a solid dosage form and the in vivo oral absorption of dutasteride-loaded Eudragit E nanoparticles in animal models.
References

Baek, I.H., and M.S. Kim. 2012. Improved supersaturation and oral absorption of dutasteride by amorphous solid dispersions. Chemical & Pharmaceutical Bulletin 60: 1468–1473.
Beck-Broichsitter, M., E. Rytting, T. Lebhardt, X. Wang, and T. Kissel. 2010. Preparation of nanoparticles by solvent displacement for drug delivery: A shift in the "ouzo region" upon drug loading. European Journal of Pharmaceutical Sciences 41: 244–253.
Cha, K.-H., N. Lee, M.-S. Kim, J.-S. Kim, H.J. Park, J. Park, W. Cho, and S.-J. Hwang. 2010. Development and optimization of a novel sustained-release tablet formulation for bupropion hydrochloride using Box-Behnken design. Journal of Pharmaceutical Investigation 40: 313–319.
Chellampillai, B., and A.P. Pawar. 2011. Improved bioavailability of orally administered andrographolide from pH-sensitive nanoparticles. European Journal of Drug Metabolism and Pharmacokinetics 35: 123–129.
Dai, J.D., T. Nagai, X.Q. Wang, T. Zhang, M. Meng, and Q. Zhang. 2004. pH-sensitive nanoparticles for improving the oral bioavailability of cyclosporine A. International Journal of Pharmaceutics 280: 229–240.
Galindo-Rodriguez, S., E. Allémann, H. Fessi, and E. Doelker. 2004. Physicochemical parameters associated with nanoparticle formation in the salting-out, emulsification-diffusion, and nanoprecipitation methods. Pharmaceutical Research 21: 1428–1439.
GlaxoSmithKline. Avodart®: Product Monograph. http://www.gsk.ca/english/docs-pdf/product-monographs/Avodart.pdf, cited 1 December 2012.
Hao, J., F. Wang, X. Wang, D. Zhang, Y. Bi, Y. Gao, X. Zhao, and Q. Zhang. 2012. Development and optimization of baicalin-loaded solid lipid nanoparticles prepared by coacervation method using central composite design. European Journal of Pharmaceutical Sciences 47: 497–505.
Jin, S.J., Y.H. Yoo, M.S. Kim, J.S. Kim, J.S. Park, and S.J. Hwang. 2008. Paroxetine hydrochloride controlled release POLYOX matrix tablets: Screening of formulation variables using Plackett–Burman screening design. Archives of Pharmacal Research 31: 399–405.
Khachane, P., A.A. Date, and M.S. Nagarsenker. 2011. Eudragit EPO nanoparticles: Application in improving therapeutic efficacy and reducing ulcerogenicity of meloxicam on oral administration. Journal of Biomedical Nanotechnology 7: 590–597.
Lee, Y.-L., M.-S. Kim, M.-Y. Park, and K. Han. 2012. Quality by design: Understanding the formulation variables and optimization of metformin hydrochloride 750 mg sustained release tablet by Box-Behnken design. Journal of Pharmaceutical Investigation 42: 213–220.

Rough Approximation of a Preference Relation by Dominance Relations

This explains our interest in rough sets theory (Pawlak, 1982, 1991), which has proved to be a useful tool for the analysis of vague descriptions of decision situations (Pawlak and Slowinski, 1994). Recall that the rough set concept is founded on the assumption that with every object of the universe of discourse some information (data, knowledge) is associated. For example, if the objects are potential projects, their technical and economic characteristics form the information (description) about the projects. Objects characterized by the same information are indiscernible (similar) in view of the available information about them. The indiscernibility relation generated in this way is the mathematical basis of rough sets theory. Any set of indiscernible objects is called an elementary set. Any subset of the universe can be expressed either precisely in terms of elementary sets or only roughly. In the latter case, the subset can be characterized by two ordinary sets, called its lower and upper approximations. The lower approximation contains the objects surely belonging to the subset considered; the upper approximation contains the objects possibly belonging to it.
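As a concrete illustration of these definitions, the following Python sketch computes the lower and upper approximations of a target set from an attribute table. The project data are invented for illustration; only the construction (elementary sets from indiscernibility, then the two approximations) follows the definitions above.

from collections import defaultdict

# Invented example: objects (projects) described by two attributes (cost, risk)
info = {
    "p1": ("low", "low"), "p2": ("low", "low"),
    "p3": ("high", "low"), "p4": ("high", "high"), "p5": ("high", "high"),
}
target = {"p1", "p3", "p4"}  # subset of interest, e.g. accepted projects

# Elementary sets: classes of the indiscernibility relation (same description)
classes = defaultdict(set)
for obj, description in info.items():
    classes[description].add(obj)

# Lower approximation: union of elementary sets fully contained in the target
lower = {o for c in classes.values() if c <= target for o in c}
# Upper approximation: union of elementary sets intersecting the target
upper = {o for c in classes.values() if c & target for o in c}

print(sorted(lower))  # ['p3'] -- objects surely in the target
print(sorted(upper))  # ['p1', 'p2', 'p3', 'p4', 'p5'] -- objects possibly in the target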

Text Categorization with Support Vector Machines

Text Categorization with Support Vector Machines: Learning with Many Relevant Features
Thorsten Joachims
Universitat Dortmund Informatik LS8, Baroper Str. 301
3 Support Vector Machines
Support vector machines are based on the Structural Risk Minimization principle [9] from computational learning theory. The idea of structural risk minimization is to find a hypothesis h for which we can guarantee the lowest true error. The true error of h is the probability that h will make an error on an unseen and randomly selected test example. An upper bound can be used to connect the true error of a hypothesis h with the error of h on the training set and the complexity of the hypothesis space H containing h, measured by its VC-dimension [9]. Support vector machines find the hypothesis h which approximately minimizes this bound on the true error by effectively and efficiently controlling the VC-dimension of H.
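A minimal text-categorization pipeline in the spirit described here can be sketched with scikit-learn. The tiny corpus and labels below are invented for illustration; a real experiment would use a standard collection such as Reuters-21578, on which this paper evaluates.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: documents with topic labels
docs = [
    "wheat prices rose on grain export news",
    "corn and wheat harvest estimates increased",
    "central bank raised interest rates today",
    "markets react to the new interest rate policy",
]
labels = ["grain", "grain", "money", "money"]

# TF-IDF features (many, mostly relevant) fed to a linear SVM, as advocated for text
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(docs, labels)

print(model.predict(["grain exports and wheat shipments expand"]))  # -> ['grain']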

Deciding Intersection of Production Rules in Regular Tree Grammars

The method is efficient and is used in XML type checking based on regular tree grammars, in which the production rules are required to be pairwise disjoint. An algorithm is described which checks the intersection between two automata built from the production rules of the grammar: it converts the content models of the production rules, which are regular expressions, into regular expression automata and then checks whether the intersection of the two automata is empty.

Keywords: XML type checking; regular tree grammar; production rules; intersection checking; regular expression automata
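The core operation described above, deciding whether two regular content models accept a common word, is the classical product-automaton emptiness check. The following Python sketch uses hypothetical hand-built DFAs rather than automata derived from actual grammar productions.

from collections import deque

def intersect_nonempty(dfa1, dfa2):
    # Each DFA is (start, accepting_set, transitions dict: (state, symbol) -> state).
    # Returns True iff the two DFAs accept a common word, i.e. the product
    # automaton reaches a pair of accepting states.
    start1, acc1, t1 = dfa1
    start2, acc2, t2 = dfa2
    seen = {(start1, start2)}
    queue = deque(seen)
    alphabet = {s for (_, s) in t1} & {s for (_, s) in t2}
    while queue:
        q1, q2 = queue.popleft()
        if q1 in acc1 and q2 in acc2:
            return True
        for s in alphabet:
            nxt = (t1.get((q1, s)), t2.get((q2, s)))
            if None not in nxt and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical content models: a(b)* and (a|b)b -- they share the word "ab"
A = (0, {1}, {(0, "a"): 1, (1, "b"): 1})
B = (0, {2}, {(0, "a"): 1, (0, "b"): 1, (1, "b"): 2})
print(intersect_nonempty(A, B))  # True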

New Gaokao English Question Analysis and Practice with Topic Extensions: Topic 02, Artificial Intelligence (Answer Key Edition)


02. Artificial Intelligence. Developing good answering habits is one of the decisive factors in succeeding in Gaokao English.

Before answering, carefully read the task requirements, the question stems, and the options, and make reasonable predictions about the answers. While answering, avoid simply following your gut; it is best to work in question order, mark the items you cannot answer or are unsure about, learn to spot the key point of each question, answer in the standard format, and write neatly. After finishing, check your work carefully, fill in any gaps, and correct mistakes.

I. Reading Comprehension

(1)

Some are concerned that AI tools are turning language learning into a weakening pursuit. More and more people are using simple, free tools, not only to decode text but also to speak. With these apps' conversation mode, you talk into a phone and a spoken translation is heard moments later; the app can also listen for another language and produce a translation in yours.

Others are less worried. Most people do not move abroad or have the kind of on-going contact with a foreign culture that requires them to put in the work to become fluent. Nor do most people learn languages for the purpose of humanising themselves or training their brains. On their holiday, they just want a beer and the spaghetti without incident.

Douglas Hofstadter, an expert in many languages, has argued that something profound (深刻的) will disappear when people talk through machines. He describes giving a broken, difficult speech in Chinese, which required a lot of work but offered a sense of satisfaction at the end.

As AI translation becomes an even more popular labor-saving tool, people can be divided into two groups. There will be those who want to stretch their minds, expose themselves to other cultures or force their thinking into new pathways. This group will still take on language study, often aided by technology. Others will look at learning a new language with a mix of admiration and puzzlement, as they might with extreme endurance (耐力) sports: "Good for you, if that's your thing, but a bit painful for my taste."

But a focus on the learner alone misses the fundamentally social nature of language. It is a bit like analysing the benefits of close relationships to heart-health but overlooking the inherent (固有的) value of those bonds themselves. When you try to ask directions in broken Japanese or ruin a joke in broken German, you are making direct contact with someone. And when you speak a language well enough to tell a story with perfect timing or put delicate differences on an argument, that connection is more profound still. The best relationships do not require a medium.

1. What are the first two paragraphs mainly about?
A. Communicating through apps is simple.
B. Apps provide a one-way interactive process.
C. Using apps becomes more and more popular.
D. AI tools weaken the needs of language learning.

2. What is Douglas' attitude to language learning?
A. Favorable.  B. Objective.  C. Doubtful.  D. Unclear.

3. What do we know about the second group mentioned in paragraph 4?
A. They are keen on foreign culture.
B. They long to join in endurance sports.
C. They find AI tools too complex to operate.
D. They lack the motivation to learn language.

4. How does the author highlight his argument in the last paragraph?
A. By providing examples.  B. By explaining concepts.  C. By stating reasons.  D. By offering advice.

Answers: 1. D  2. A  3. D  4. A
Analysis: This is an expository essay.

Semantic Attenuation of the Predicative Head in Referentialization: On the "NP + de + VP" Puzzle

(5) [Location-Action] 在家乡舞龙灯 'perform the dragon-lantern dance in the hometown' → 家乡的舞龙灯 'the hometown's dragon-lantern dance'
(6) [Instrument-Action] 用小提琴伴奏 'accompany on the violin' → 小提琴的伴奏 'the violin's accompaniment'
It can be seen that "NP + de + VP" is neither special nor unfamiliar: like the head-modifier structures formed by extracting the agent or patient, it is a product of attenuation under referentialization. Here it is "VP" that is extracted to form the head-modifier structure, with "de" serving as the structural marker of referentialization. During extraction, the predicative construction is attenuated in both form and meaning: aspect markers, some voice markers, and mood markers are deleted or filtered out, and the meaning of "VP" in head position is attenuated. Specifically: (1) V cannot take aspect particles; (2) V has no reduplicated form; (3) examples in which V takes an object are very rare, and even where possible they sound unnatural; (4) complements after V are very rare and limited to resultative complements; (5) V can be modified by attributives.
1. Review of the literature. The literature generally treats the referential construction "NP + de + VP" as a special structural form, e.g. 这本书的出版 'the publication of this book', 春天的到来 'the arrival of spring', 柠檬的酸 'the sourness of lemons'. Previous research has been constrained by the contradiction between the predicative nature of the head "VP" and the nominal nature of the structure as a whole; opinions diverge, and the question has become a well-known "puzzle". The main points of contention are: (1) Is the "NP + de + VP" structure nominal or predicative? Is the relation between "NP" and "VP" modifier-head or subject-predicate? The traditional view holds that it is an attributive-head (modifier-head) structure, following Zhu Dexi [3]140, among others.
3. Categories of predicative elements after referentialization. Referentialization was already widespread in Classical Chinese, and in self-designating structures unmarked referentialization was more common than marked referentialization. In Modern Chinese the predicate is the center of the sentence; after referentialization the information structure changes: the predicate and its related elements are extracted, and the predicative element is confined to the head position of a nominal phrase. Because of this change in syntactic status it occupies a noun position, its predicativity weakens, and its nominality strengthens. The core predicate can no longer take aspect markers or be modified by the great majority of adverbs; it can serve only as the head of a subject, object, or prepositional phrase, not as the head of a predicate or verb phrase, which is itself one of the ways referentialization attenuates form and meaning. 1. A single verb as head: a single verb in head position after referentialization occupies a noun slot, and its syntactic function shifts from describing an event to describing a thing. A verb originally denotes a dynamic action; in head position it comes to refer to the whole course of the action. (7) [N is the agent of V] 贺子珍走了 'He Zizhen left' → 贺子珍的走 'He Zizhen's leaving' (e.g., ~ provided Jiang Qing with an excellent opportunity to step in)

Biomedical Named Entity Recognition with Graph Network Based on Syntactic Dependency Parsing


Journal of Computer Applications, 2021, 41(2): 357-362. ISSN 1001-9081. Published 2021-02-10.

Biomedical named entity recognition with graph network based on syntactic dependency parsing

XU Li, LI Jianhua* (School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China; *corresponding author, e-mail: jhli@)

Abstract: The existing biomedical named entity recognition methods do not use the syntactic information in the corpus, resulting in low precision. To solve this problem, a biomedical named entity recognition model with graph network based on syntactic dependency parsing was proposed. Firstly, a Convolutional Neural Network (CNN) was used to generate character vectors, which were concatenated with word vectors and sent to a Bidirectional Long Short-Term Memory (BiLSTM) network for training. Secondly, syntactic dependency parsing was performed on the corpus sentence by sentence, and an adjacency matrix was constructed. Finally, the output of the BiLSTM and the adjacency matrix constructed by syntactic dependency parsing were sent to a Graph Convolutional Network (GCN) for training, and a graph attention mechanism was introduced to optimize the feature weights of adjacent nodes to obtain the model output. The proposed model reached F1 scores of 76.91% and 87.80% on the JNLPBA dataset and the NCBI-disease dataset respectively, which are 2.62 and 1.66 percentage points higher than those of the baseline model. Experimental results show that the proposed method can effectively improve the performance of the model on the biomedical named entity recognition task.

Keywords: biomedicine; named entity recognition; Bidirectional Long Short-Term Memory (BiLSTM) network; Graph Convolutional Network (GCN); syntactic dependency parsing; graph attention mechanism

CLC number: TP391.1. Document code: A.

Introduction: In the biomedical field, a large number of documents such as patents, journals, and reports are added every year.
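A minimal numpy sketch of the graph step described above: build a symmetric adjacency matrix (with self-loops) from a sentence's dependency heads, then apply one GCN propagation layer to token features. The head indices, feature dimension, and weights are invented placeholders; a real system would take heads from a dependency parser and token features from the BiLSTM output.

import numpy as np

def dependency_adjacency(heads):
    # heads[i] = index of token i's syntactic head (-1 for the root).
    # Returns a symmetric adjacency matrix with self-loops.
    n = len(heads)
    A = np.eye(n)
    for i, h in enumerate(heads):
        if h >= 0:
            A[i, h] = A[h, i] = 1.0
    return A

def gcn_layer(A, H, W):
    # One GCN propagation step: relu(D^-1/2 A D^-1/2 H W)
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A @ D_inv_sqrt @ H @ W)

# Invented example: 4 tokens; token 0 is the root, others attach to 0 or 2
heads = [-1, 0, 0, 2]
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))   # stand-in for BiLSTM outputs
W = rng.normal(size=(8, 8))   # trainable layer weights
print(gcn_layer(dependency_adjacency(heads), H, W).shape)  # (4, 8)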

Tsinghua University Genetic Algorithms Lecture Slides

4.1 Basic Concept of lc-MST
4.2 Genetic Algorithms Approach
4.3 GA Procedure for lc-MST
4.4 Numerical Experiments
Applications: transportation problems, telecommunication network design, distribution systems, etc.
Kershenbaum, A.: Telecommunications Network Design Algorithms, McGraw-Hill, New York, 1993.
Leaf-constrained MST
Fernandes, L. M. & L. Gouveia: Minimal spanning trees with a constraint on the number of leaves, European J. of Operational Research, vol. 104, pp. 250-261, 1998.
3.1 Concept on Degree-based Permutation GA
3.2 Genetic Algorithms Approach
3.3 Degree-based Permutation GA for dc-MST
3.4 Numerical Experiments

4. Leaf-constrained Minimum Spanning Tree
7. Minimum Spanning Tree Problem
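GA approaches to spanning-tree problems like those outlined above commonly encode candidate trees as fixed-length sequences; the classic choice is the Prüfer number, a relative of the degree-based permutation encoding named in the slides. Below is a sketch of decoding a Prüfer sequence into a spanning tree's edge list; the example sequence is arbitrary, and this is an illustration of the encoding idea rather than the slides' exact procedure.

def prufer_decode(seq, n):
    # Decode a Prüfer sequence (length n-2, entries in 0..n-1) into the
    # edge list of the unique labeled tree on n nodes it represents.
    degree = [1] * n
    for v in seq:
        degree[v] += 1           # degree of each node = occurrences in seq + 1
    edges = []
    for v in seq:
        leaf = min(u for u in range(n) if degree[u] == 1)  # smallest current leaf
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    u, w = [x for x in range(n) if degree[x] == 1]          # two nodes remain
    edges.append((u, w))
    return edges

print(prufer_decode([3, 3, 0], 5))  # [(1, 3), (2, 3), (3, 0), (0, 4)]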

Introduction by the Organisers


Mathematisches Forschungsinstitut Oberwolfach
Report No. 26/2005: Complexity Theory
Organised by Joachim von zur Gathen (Bonn), Oded Goldreich (Rehovot), Claus-Peter Schnorr (Frankfurt), Madhu Sudan (Cambridge)
June 5th – June 11th, 2005

Computational Complexity Theory is the mathematical study of resources like time, space, or randomness that are required to solve computational problems. The current workshop was focused on recent developments, and the interplay between randomness and computation played a central role in many of them. Mathematics Subject Classification (2000): 68-06, 68Q01, 68Q10, 68Q15, 68Q17, 68Q25.

Introduction by the Organisers

The workshop Complexity Theory was organised by Joachim von zur Gathen (Bonn), Oded Goldreich (Rehovot), Claus-Peter Schnorr (Frankfurt), and Madhu Sudan (Cambridge). The workshop was held on June 5th–11th, 2005, and attended by approximately 50 participants spanning a wide range of interests within the field of Computational Complexity. Sixteen talks were presented in the mornings, attended by all participants. In addition, extensive interaction took place in smaller groups. Specifically, several more specialized sessions were held in the afternoons and were typically attended by 5-20 participants, and numerous informal meetings of 2-5 participants took place at various times (including at night).

The Oberwolfach Meeting on Complexity Theory is marked by a long tradition and a continuous transformation. Originally starting with a focus on Algebraic and Boolean Complexity, the meeting has continuously evolved to cover a wide variety of areas, most of which were not even in existence at the time of the first meeting (in 1972). The format of the meetings has also been drastically changed in recent meetings so that the focus is on interactions in small specialized sessions, maintaining unity via general plenary sessions. While inviting many of the most prominent researchers in the field, the organizers try to identify and ...

[Fragments from the individual abstracts:]

On PCPs and gap amplification: sat(C) denotes the smallest fraction of constraints that every assignment to a constraint system C must leave unsatisfied. The proof starts with a constraint system C and distinguishes two cases, (i) sat(C) = 0 and (ii) sat(C') = 0. The amplification step applies as long as the initial underlying graph is sufficiently "well-structured", meaning that it is d-regular for a constant d, has self-loops, and is an expander; all of these properties are easily obtained in a preprocessing stage. The main advantage of this operation is that it does not increase the number of variables in each constraint (which stays 2 throughout). Moreover, when applied to d-regular graphs for d = O(1), it incurs only a linear blowup in the graph size (the number of edges is multiplied by d^(t-1)) and an affordable increase in the alphabet size (which goes from Σ to Σ^(d^t)). This construction uses the PCP of [9] as a starting point. (One abstract thanks Scott Aaronson for suggesting the name "unique-game-hardness" during the Oberwolfach talk. Another notes that the poly(1/δ)-source extractor was proposed before by Zuckerman [13], but his analysis relies on an unproven number-theoretic conjecture; a 2-source extractor for any constant min-entropy rate follows from a seemingly weaker number-theoretic conjecture (cf. [5, Cor. 11]).)

On metric embeddings and expansion: recent years have seen O(√log n)-approximation algorithms for a host of NP-hard optimization problems, starting with the discovery of such an algorithm for sparsest cut in [3]. These new algorithms rely on a new analysis of a family of semidefinite programs. There has also been a spate of results ruling out certain types of embeddings, most notably a paper of Khot and Vishnoi which rules out O(1)-distortion embeddings of squared-ℓ2 metrics into ℓ1. Constructions of PCPs in recent years have relied upon theorems in Fourier analysis that are also geometric in nature, and this has become clearer thanks to the results on embeddings. Yet another connection between geometry and expansion is that the above results rely upon geometric analogues of the study of expansion, namely isoperimetric problems. The simplest is the classical result that every closed set in the plane whose area is A has perimeter at least 2√(πA); theorems asserting that a set whose perimeter is close to this bound "looks like" a circle of area A are referred to as Strong Isoperimetric Theorems. Isoperimetric theorems about the n-dimensional sphere and the Boolean hypercube play an important role in the above results. (Reference: [1] S. Arora, E. Hazan, and S. Kale, O(√log n)-approximation to sparsest cut in Õ(n²) time.)

On learning parity with noise and learning with errors (LWE): given noisy random linear equations over Z2 in an unknown s ∈ Z2^n, a naive approach recovers each candidate bit correctly only with probability 1/2 + 2^(-Θ(n)); hence, to obtain the first bit with good confidence, the whole procedure has to be repeated 2^(Θ(n)) times, yielding an algorithm that uses 2^(O(n)) equations and 2^(O(n)) time. In fact, given only O(n) equations, the s' ∈ Z2^n that maximizes the number of satisfied equations is with high probability s itself, which yields a simple maximum-likelihood algorithm that requires only O(n) equations and runs in time 2^(O(n)). Blum, Kalai, and Wasserman [8] provided the first subexponential algorithm for this problem; it requires only 2^(O(n/log n)) equations and time and is currently the best known algorithm. It is based on a clever idea that allows one to find a small set S of equations whose vector parts sum to a fixed unit vector. Their algorithm was later shown to have other important applications, such as the first 2^(O(n))-time algorithm for solving the shortest vector problem in a lattice [11, 5]. An important open question is to explain the apparent difficulty in finding efficient algorithms for this learning problem; the main theorem explains this difficulty for a natural extension of the problem to higher moduli: if there exists a polynomial-time algorithm that solves LWE_{p,Ψ̄α}, then there exists a quantum algorithm that approximates the shortest vector problem (SVP) and the shortest independent vectors problem (SIVP) to within Õ(n/α) in the worst case. Here Ψ̄α is a distribution on Z_p shaped like a discrete Gaussian centered around 0 with standard deviation αp; the probability of 0 (i.e., no error) is roughly 1/(αp). A possible setting for the parameters is p = O(n²) with α inverse-polynomial in n.

On private information retrieval (PIR): various natural extensions and generalizations of the problem are discussed in the literature, including computational PIR (where privacy is obtained using cryptographic assumptions, under the assumption that the server(s) are limited to efficient computations) [5, 12, 7], symmetric PIR (where, in addition, the user learns no information about x other than the value of x_i) [8], and PIR against coalitions of t servers [6, 9, 13]. Each of the papers in this sequence achieves improvements over the previous ones in various aspects, e.g., in the dependency of the complexity on k. References cited in these fragments: [4] Breaking the O(n^(1/(2k-1))) barrier for information-theoretic private information retrieval, in Proc. of the 43rd IEEE Symp. on Foundations of Computer Science, pages 261-270, 2002. [5] B. Chor and N. Gilboa, Computationally private information retrieval, in Proc. of the 29th ACM Symp. on the Theory of Computing, pages 304-313, 1997. [6] B. Chor, O. Goldreich, E. Kushilevitz, and M. Sudan, Private information retrieval, in Proc. of the 36th IEEE Symp. on Foundations of Computer Science, pages 41-51, 1995; journal version: J. of the ACM 45:965-981, 1998. [7] C. Cachin, S. Micali, and M. Stadler, Computationally private information retrieval with polylogarithmic communication, in Advances in Cryptology, EUROCRYPT'99, LNCS 1592, pages 402-414, Springer-Verlag, 1999. [8] Y. Gertner, Y. Ishai, E. Kushilevitz, and T. Malkin, Protecting data privacy in private information retrieval schemes, in Proc. of the 30th ACM Symp. on the Theory of Computing, pages 151-160, 1998; journal version: J. of Computer and System Sciences 60(3):592-629, 2000.

On quantum tree states: it is not known whether a quantum computer restricted to tree states always has an efficient classical simulation; what can be shown is that such a computer would be simulable in Σ3^p ∩ Π3^p, the third level of the polynomial-time hierarchy. An exponential lower bound can be proved provided we restrict ourselves to linear combinations α|ψ⟩ + β|φ⟩ that are "manifestly orthogonal", meaning that for all computational basis states |x⟩ either ⟨ψ|x⟩ = 0 or ⟨φ|x⟩ = 0. By contrast, 1-D cluster states have tree size O(n⁴).

On edge-disjoint paths (EDP) and congestion minimization: for directed graphs the known upper bounds [12, 5, 13], stated in terms of the number of vertices n and edges m of the input graph, are matched by an Ω(m^(1/2-ε))-hardness due to Guruswami et al. [10], so the directed version of the problem is quite well understood. This is not the case for undirected graphs, for which the problem is still widely open. An O(√(log n / log log n))-approximation algorithm is known for both the directed and undirected versions; for the directed case, an Ω(log log n)-hardness was proved by Chuzhoy and Naor [8]. Until recently, however, no non-trivial lower bounds were known for the undirected version. The last few years have seen significant progress in understanding the hardness of undirected routing problems. In particular, Andrews [1] introduced a new approach for proving hardness of undirected routing problems and showed Ω(log^(1/2-ε) n)-hardness of the Buy-at-Bulk problem. Following [1], Andrews and Zhang [3] proved Ω(log log^(1-ε) n)-hardness of undirected Congestion Minimization. Andrews and Zhang [2] also showed that undirected EDP is Ω(log^(1/3-ε) n)-hard to approximate; this result was recently improved to Ω(log^(1/2-ε) n)-hardness by Chuzhoy and Khanna [7]. The new approach is demonstrated on the hardness of EDP via the following reduction from the Maximum Independent Set problem (MIS): for each vertex in the MIS instance, create a source-sink pair and a canonical path connecting the pair, defining the canonical paths so that whenever there is an edge between vertices u and v in the MIS instance, the two corresponding canonical paths share an edge. If a solution to the resulting EDP instance consists of canonical paths only, it can be translated into a solution of the MIS instance of the same cost, and any solution to the MIS instance naturally defines a solution to the EDP instance. The problem is that, in general, solutions of the EDP instance do not necessarily follow the canonical paths, and a solution with many non-canonical paths cannot be translated into a large-cardinality independent set. The main idea is therefore to convert the EDP instance into a random graph with "almost" high girth: make many copies of each vertex of the original EDP instance and replace each original edge by a random matching between the copies of its endpoints. In the new instance the number of non-canonical paths in any solution can be bounded; in particular, the number of long non-canonical paths is restricted by the graph capacity. An Ω(log^(1/(c+2)) n)-hardness was proved independently by [7, 4, 11]; the construction of [7] is presented in this talk. Finally, for the multicommodity-flow relaxation of EDP, the linear program is known to have a large integrality gap, but the known constructions are unnecessarily complex; a direct simple construction of such a gap is described in this talk.
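To make the LWE problem discussed above concrete, here is a small numpy sketch that generates noisy random linear equations (a, ⟨a, s⟩ + e mod p) with rounded-Gaussian noise. The parameters are toy values chosen for illustration, not the p = O(n²) regime of the theorem.

import numpy as np

rng = np.random.default_rng(1)

def lwe_samples(s, p, m, sigma):
    # Generate m LWE samples (A, b) with b = A s + e (mod p),
    # where e is rounded Gaussian noise of standard deviation sigma.
    n = s.size
    A = rng.integers(0, p, size=(m, n))
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(int)
    b = (A @ s + e) % p
    return A, b

n, p = 8, 97                     # toy dimension and modulus
s = rng.integers(0, p, size=n)   # the secret
A, b = lwe_samples(s, p, m=20, sigma=1.0)
# Without noise, recovery is linear algebra; with noise it is believed hard.
print(np.mean((A @ s) % p == b))  # fraction of equations whose noise was 0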

BClustLonG Package Documentation


Package 'BClustLonG' (October 12, 2022)
Type: Package
Title: A Dirichlet Process Mixture Model for Clustering Longitudinal Gene Expression Data
Version: 0.1.3
Author: Jiehuan Sun [aut, cre], Jose D. Herazo-Maya [aut], Naftali Kaminski [aut], Hongyu Zhao [aut], and Joshua L. Warren [aut]
Maintainer: Jiehuan Sun <*********************>
Description: Many clustering methods have been proposed, but most of them cannot work for longitudinal gene expression data. 'BClustLonG' is a package that allows us to perform clustering analysis for longitudinal gene expression data. It adopts a linear mixed-effects framework to model the trajectory of genes over time, while clustering is jointly conducted based on the regression coefficients obtained from all genes. To account for the correlations among genes and alleviate the high-dimensionality challenges, factor analysis models are adopted for the regression coefficients. The Dirichlet process prior distribution is utilized for the means of the regression coefficients to induce clustering. This package allows users to specify which variables to use for clustering (intercepts or slopes or both) and whether a factor analysis model is desired. More details about this method can be found in Jiehuan Sun, et al. (2017) <doi:10.1002/sim.7374>.
License: GPL-2
Encoding: UTF-8
LazyData: true
Depends: R (>= 3.4.0), MASS (>= 7.3-47), lme4 (>= 1.1-13), mcclust (>= 1.0)
Imports: Rcpp (>= 0.12.7)
Suggests: knitr, lattice
VignetteBuilder: knitr
LinkingTo: Rcpp, RcppArmadillo
RoxygenNote: 7.1.0
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2020-05-07 04:10:02 UTC

R topics documented: BClustLonG, calSim, data

BClustLonG — A Dirichlet process mixture model for clustering longitudinal gene expression data.

Usage:

BClustLonG(data = NULL, iter = 20000, thin = 2, savePara = FALSE,
    infoVar = c("both", "int")[1], factor = TRUE,
    hyperPara = list(v1 = 0.1, v2 = 0.1, v = 1.5, c = 1, a = 0, b = 10,
                     cd = 1, aa1 = 2, aa2 = 1, alpha0 = -1, alpha1 = -1e-04,
                     cutoff = 1e-04, h = 100))

Arguments:
data — Data list with three elements: Y (gene expression data with each column being one gene), ID, and years. (The names of the elements have to be matched exactly. See the data in the example section for more info.)
iter — Number of iterations (excluding the thinning).
thin — Number of thinnings.
savePara — Logical variable indicating if all the parameters need to be saved. Default value is FALSE, in which case only the membership indicators are saved.
infoVar — Either "both" (using both intercepts and slopes for clustering) or "int" (using only intercepts for clustering).
factor — Logical variable indicating whether a factor analysis model is wanted.
hyperPara — A list of hyperparameters with default values.

Value: returns a list with the following objects. e.mat — membership indicators from all iterations. All other parameters are only returned when savePara = TRUE.

References: Jiehuan Sun, Jose D. Herazo-Maya, Naftali Kaminski, Hongyu Zhao, and Joshua L. Warren. "A Dirichlet process mixture model for clustering longitudinal gene expression data." Statistics in Medicine 36, No. 22 (2017): 3495-3506.

Examples:

data(data)
## increase the number of iterations
## to ensure convergence of the algorithm
res = BClustLonG(data, iter = 20, thin = 2, savePara = FALSE,
                 infoVar = "both", factor = TRUE)
## discard the first 10 burn-ins in the e.mat
## and calculate the similarity matrix;
## the number of burn-ins has to be chosen s.t. the algorithm is converged.
mat = calSim(t(res$e.mat[, 11:20]))
clust = maxpear(mat)$cl  ## the clustering results
## Not run:
## if only intercepts should be included for clustering, set infoVar = "int"
res = BClustLonG(data, iter = 10, thin = 2, savePara = FALSE,
                 infoVar = "int", factor = TRUE)
## if no factor analysis model is wanted, set factor = FALSE
res = BClustLonG(data, iter = 10, thin = 2, savePara = FALSE,
                 infoVar = "int", factor = FALSE)
## End(Not run)

calSim — Function to calculate the similarity matrix based on the cluster membership indicator of each iteration.

Usage: calSim(mat)

Arguments: mat — Matrix of cluster membership indicators from all iterations.

Examples:

n = 90       ## number of subjects
iters = 200  ## number of iterations
## matrix of cluster membership indicators:
## perfect clustering with three clusters
mat = matrix(rep(1:3, each = n/3), nrow = n, ncol = iters)
sim = calSim(t(mat))

data — Simulated dataset for testing the algorithm.

Usage: data(data)

Format: An object of class list of length 3.

Examples:

data(data)
## this is the required data input format
head(data.frame(ID = data$ID, years = data$years, data$Y))

Global Optimization for Estimating a BRDF with Multiple Specular Lobes

{ckyu, yndk, slee}@sogang.ac.kr
Abstract
This paper presents a global minimization framework for estimating analytical BRDF model parameters using the techniques of convex programming and branch and bound. Traditional local minimization suffers from local minima and requires a large number of initial conditions and supervision for successful results, especially when a model is highly complex and nonlinear. We consider the Cook-Torrance model, a parametric model with Gaussian-like Beckmann distributions for specular reflectances. Instead of optimizing the multiple parameters simultaneously, we search over all possible surface roughness values based on a branch-and-bound algorithm, and reduce the estimation problem to convex minimization with known fixed surface roughness. Our algorithm guarantees globally optimal solutions. Experiments have been carried out for isotropic surfaces to validate the method using the extensive high-precision measurements from the MERL BRDF database.
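For reference, the Beckmann microfacet distribution at the heart of the Cook-Torrance specular lobe can be evaluated as below. This is a sketch of the standard formula with an assumed roughness parameter m, not code from the paper.

import numpy as np

def beckmann(cos_theta_h, m):
    # Beckmann distribution D = exp(-tan^2(theta_h)/m^2) / (pi m^2 cos^4(theta_h)),
    # where theta_h is the angle between the surface normal and the half vector
    # and m is the surface roughness.
    c2 = np.clip(cos_theta_h, 1e-6, 1.0) ** 2
    tan2 = (1.0 - c2) / c2
    return np.exp(-tan2 / m**2) / (np.pi * m**2 * c2**2)

# Smaller roughness gives a sharper, higher peak around theta_h = 0
print(beckmann(1.0, 0.1), beckmann(1.0, 0.5))  # ~31.83 and ~1.27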

Multivariate Data Analysis


Multivariate data analysis is a powerful statistical technique used to analyze data sets with multiple variables. It allows researchers to explore relationships between variables, identify patterns, and make predictions. This field of statistics encompasses a wide range of methods, each tailored to address specific research questions and data characteristics.

One commonly used method in multivariate data analysis is principal component analysis (PCA). PCA aims to reduce the dimensionality of the data by identifying a smaller set of uncorrelated variables, called principal components, that capture most of the variation in the original data. These components are linear combinations of the original variables and can be used to visualize the data in a lower-dimensional space, making it easier to identify patterns and clusters.

Another widely employed method is factor analysis. Similar to PCA, factor analysis aims to reduce the dimensionality of the data. However, it differs in its underlying assumptions and interpretation. Factor analysis assumes that the observed variables are influenced by a smaller number of unobserved, latent variables called factors. By identifying these factors and their relationships to the observed variables, researchers can gain insights into the underlying structure of the data.

Cluster analysis is another important technique used in multivariate data analysis. This method aims to group observations into clusters based on their similarity across multiple variables. Different clustering algorithms employ different distance metrics and criteria for cluster formation, allowing researchers to choose the most appropriate method for their data and research question. Cluster analysis can be used to identify groups of customers with similar purchasing patterns, classify diseases based on patient symptoms, or segment images based on pixel characteristics.

Canonical correlation analysis (CCA) is a method used to investigate the relationships between two sets of variables. It identifies pairs of linear combinations of variables, one from each set, that have the highest correlation. CCA can be applied to study the relationship between consumer attitudes and purchasing behavior, the link between gene expression and disease progression, or the association between environmental factors and species distribution.

Multivariate analysis of variance (MANOVA) is a statistical test used to compare the means of multiple groups across several dependent variables simultaneously. It is an extension of the univariate analysis of variance (ANOVA) and is particularly useful when the dependent variables are correlated. MANOVA can be applied to test the effectiveness of different treatments on multiple health outcomes, compare the performance of different car models on multiple safety features, or evaluate the impact of different educational programs on various student learning outcomes.

In conclusion, multivariate data analysis offers a rich toolkit for researchers to explore complex data sets involving multiple variables. By employing methods such as principal component analysis, factor analysis, cluster analysis, canonical correlation analysis, and multivariate analysis of variance, researchers can gain valuable insights into the relationships between variables, identify patterns, make predictions, and test hypotheses. These techniques have become indispensable in various fields, including business, healthcare, engineering, and social sciences, enabling researchers to unravel the complexities of multivariate data and make informed decisions based on data-driven evidence.
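A minimal PCA sketch via the covariance eigendecomposition, on invented data, illustrates the dimensionality reduction described above.

import numpy as np

rng = np.random.default_rng(0)
# Invented data: 200 samples of 5 correlated variables
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))

Xc = X - X.mean(axis=0)                  # center each variable
cov = np.cov(Xc, rowvar=False)           # 5x5 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # sort components by explained variance
components = eigvecs[:, order[:2]]       # keep the top 2 principal components
scores = Xc @ components                 # project the data to 2-D

explained = eigvals[order[:2]].sum() / eigvals.sum()
print(f"variance explained by 2 components: {explained:.2%}")  # close to 100% here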

Computing the Dynamic Response of Multi-Degree-of-Freedom Systems with the Newmark Method (Lecture Notes)


Implementing the Newmark-β method in MATLAB to compute the dynamic response of a multi-degree-of-freedom system

1. Basic principle of the Newmark-β method

The Newmark-β method is a step-by-step integration method. It avoids any use of superposition and therefore adapts well to nonlinear response analysis.

The Newmark-β method assumes:

{u̇}_{t+Δt} = {u̇}_t + [(1 − β){ü}_t + β{ü}_{t+Δt}]Δt    (1-1)

{u}_{t+Δt} = {u}_t + {u̇}_t Δt + [(1/2 − γ){ü}_t + γ{ü}_{t+Δt}]Δt²    (1-2)

where β and γ are parameters adjusted according to the accuracy and stability requirements of the integration.

When β = 0.5 and γ = 0.25, the scheme is the constant-average-acceleration method, in which the acceleration is taken to be constant from time t to t + Δt and equal to the average (1/2)({ü}_t + {ü}_{t+Δt}).

Research shows that the Newmark-β scheme is unconditionally stable when β ≥ 0.5 and γ ≥ 0.25(0.5 + β)².

From Eqs. (1-1) and (1-2), {ü}_{t+Δt} and {u̇}_{t+Δt} can be expressed in terms of {u}_{t+Δt} and the known quantities {u}_t, {u̇}_t, and {ü}_t:

{ü}_{t+Δt} = (1/(γΔt²))({u}_{t+Δt} − {u}_t) − (1/(γΔt)){u̇}_t − (1/(2γ) − 1){ü}_t    (1-3)

{u̇}_{t+Δt} = (β/(γΔt))({u}_{t+Δt} − {u}_t) + (1 − β/γ){u̇}_t + (1 − β/(2γ))Δt{ü}_t    (1-4)

The equation of motion at time t + Δt is:

[M]{ü}_{t+Δt} + [C]{u̇}_{t+Δt} + [K]{u}_{t+Δt} = {R}_{t+Δt}    (1-5)

Substituting Eqs. (1-3) and (1-4) into Eq. (1-5) gives an equation in {u}_{t+Δt}:

[K̄]{u}_{t+Δt} = {R̄}_{t+Δt}    (1-6)

where

[K̄] = [K] + (1/(γΔt²))[M] + (β/(γΔt))[C]

{R̄}_{t+Δt} = {R}_{t+Δt} + [M]((1/(γΔt²)){u}_t + (1/(γΔt)){u̇}_t + (1/(2γ) − 1){ü}_t) + [C]((β/(γΔt)){u}_t + (β/γ − 1){u̇}_t + (β/(2γ) − 1)Δt{ü}_t)

Solving Eq. (1-6) yields {u}_{t+Δt}; then {ü}_{t+Δt} and {u̇}_{t+Δt} follow from Eqs. (1-3) and (1-4).
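The notes call for a MATLAB implementation, which is not included in this excerpt. As an illustration, the stepping scheme of Eqs. (1-1) through (1-6) can be rendered in Python (an assumption of this edit), with a small two-degree-of-freedom usage example whose matrices and load are invented:

import numpy as np

def newmark_beta(M, C, K, R, u0, v0, dt, beta=0.5, gamma=0.25):
    # Newmark-beta time stepping for M u'' + C u' + K u = R(t).
    # Uses this document's convention: beta weights velocity (Eq. 1-1) and
    # gamma weights displacement (Eq. 1-2); beta=0.5, gamma=0.25 is the
    # unconditionally stable average-acceleration scheme.
    n, nsteps = R.shape
    u = np.zeros((n, nsteps)); v = np.zeros((n, nsteps)); a = np.zeros((n, nsteps))
    u[:, 0] = u0; v[:, 0] = v0
    a[:, 0] = np.linalg.solve(M, R[:, 0] - C @ v0 - K @ u0)  # initial acceleration
    # Integration constants from Eqs. (1-3), (1-4), (1-6)
    a0 = 1.0 / (gamma * dt**2); a1 = beta / (gamma * dt); a2 = 1.0 / (gamma * dt)
    a3 = 1.0 / (2 * gamma) - 1.0; a4 = beta / gamma - 1.0
    a5 = (beta / (2 * gamma) - 1.0) * dt
    Keff = K + a0 * M + a1 * C  # effective stiffness, Eq. (1-6)
    for i in range(nsteps - 1):
        Reff = (R[:, i + 1]
                + M @ (a0 * u[:, i] + a2 * v[:, i] + a3 * a[:, i])
                + C @ (a1 * u[:, i] + a4 * v[:, i] + a5 * a[:, i]))
        u[:, i + 1] = np.linalg.solve(Keff, Reff)
        a[:, i + 1] = a0 * (u[:, i + 1] - u[:, i]) - a2 * v[:, i] - a3 * a[:, i]  # Eq. (1-3)
        v[:, i + 1] = v[:, i] + dt * ((1 - beta) * a[:, i] + beta * a[:, i + 1])  # Eq. (1-1)
    return u, v, a

# Invented 2-DOF example: harmonic load applied to the first mass
M = np.diag([2.0, 1.0]); K = np.array([[600.0, -300.0], [-300.0, 300.0]]); C = 0.05 * K
t = np.arange(0, 2, 0.01); R = np.zeros((2, t.size)); R[0] = 50 * np.sin(10 * t)
u, v, a = newmark_beta(M, C, K, R, np.zeros(2), np.zeros(2), dt=0.01)
print(u[:, -1])  # displacements at the final time step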


GIDA, Journal of Geographic Information and Decision Analysis, 2003, Vol. 7, No. 1, pp. 1-13

A Process-oriented Multi-representation of Gradual Changes

Yanwu Yang, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China (yangyw@) and Naval Academy Research Institute, BP 600, 29240, Brest Naval, France (yang@ecole-navale.fr)
Christophe Claramunt, Naval Academy Research Institute, BP 600, 29240, Brest Naval, France (claramunt@ecole-navale.fr)

ABSTRACT: Although GIS has been applied to many application areas, there is still a need for successful integration of the temporal dimension in current spatial data models in order to deal with the complexity of urban and environmental systems. In particular, there are few spatio-temporal data models for representing gradual changes. The objective of this paper is to design, at the conceptual level, a spatio-temporal data model that represents and reasons over gradual changes. Our modeling approach considers entities, changes, evolution, events, and processes as modeling primitives in the spatial, temporal and thematic dimensions. Gradual changes are represented at different levels of abstraction while spatio-temporal predicates manipulate the modeling primitives identified. The potential of our modeling approach is illustrated by a case study oriented to the study of "dynamic field" changes in air temperatures in a region of North-East China.

KEYWORDS: Process-oriented, Multi-representation, Spatio-temporal model, Gradual changes

1. Introduction

The integration of time in Geographical Information Systems (GIS) is an important objective to explore in order to make those systems successful for real-world applications from the social and environmental to the urban sciences. Over the past years, temporal GISs have been an important and active research area for many research communities. At the conceptual level, many spatio-temporal data models have been developed, such as the snapshot model (Armstrong 1988), the space-time composite model (Langran and Chrisman 1988, Langran 1992), the triad model (Peuquet 1994, Peuquet and Qian 1996), the three-domain model (Yuan 1996a and 1996b), event-oriented models (Frank 1994, Peuquet and Duan 1995, Claramunt and Thériault 1995; 1996, Claramunt et al. 1999), and extensions to the object-oriented model (Worboys 1992; 1994).

While most of these approaches represent discrete changes in the spatial domain, many spatio-temporal phenomena require a continuous treatment of changes. In order to take into account the complexity of continuous phenomena, there is a current trend in database research to represent and manipulate gradually moving objects (Wu et al. 1998, Roddick and Spiliopoulou 1999, Roddick et al. 2000). Those advances represent significant contributions to modeling and reasoning over continuous or gradual changes at the database level. However, they should be completed by comparable facilities at the conceptual level in order to offer appropriate modeling primitives. This will provide a bridge between the conceptual and the logical database levels, that is, between the application and the database.

The research described in this paper introduces a preliminary conceptual model oriented to the modeling of gradual changes in the spatial, temporal and thematic domains.
This approach is based on an entity- and process-based model that allows for the representation and propagation of continuous changes at different levels of abstraction. Entities model either moving regions or moving lines (e.g. region boundaries). The proposal integrates spatio-temporal operators that manipulate those modeling primitives. It is illustrated by a case study applied to the modeling of air temperature changes in a region of China. The remainder of the paper is organized as follows. Section 2 presents the modeling background and related work. Section 3 introduces a conceptual model and query operators for gradual changes. Section 4 applies those concepts to a case study. Finally, Section 5 draws some conclusions.

2. Modeling background and related work

Entities, processes and events are elementary primitives necessary for the modeling of change. Each entity forms a spatial or non-spatial lifeline that begins with its birth and goes through its whole life, or part of it (similar to the term "geospatial lifeline" suggested by Hornsby and Egenhofer 2002). The life cycle of an entity along its spatial or non-spatial lifeline consists of a series of contiguous states (versions) separated by evolutions or mutations (Langran and Chrisman 1988). At the database modeling level, each state corresponds to an entity version valid for a specific instant or a given period of time. A process is a concept developed by scientists to understand and relate changes in space and time (Claramunt et al. 1997). Processes record spatio-temporal relationships, that is, dependence links, among entity states (Claramunt and Thériault 1996). The notion of process establishes a form of relationship between space and time (Langran 1992). Claramunt and Li (1999) defined a "process" as "an action that modifies a set of real-world entities" (the action is of interest), and a "composite process" as an "expression formed through processes and logical operators". Events are things that happen; they are conditions, processes, or entities that exist and can be observed (Abler et al. 1971).

Several conceptual and logical models have been proposed to represent changes, processes and events in GIS. Claramunt and Thériault (1995, 1996) proposed a model that describes primitive and composite spatio-temporal processes using an event-oriented approach, where changes and events are represented as explicit events. An event is modeled as a set of processes that transform entities (the result of the action is of interest). A taxonomy of spatio-temporal processes and changes is identified; it retains the semantics of processes and events. This model makes the difference between events that happen either at a given instant or over a period of time. Events are formalized by expressions of basic and composite processes using logical operators that explicitly define a temporal ordering on those processes. In a related effort, Hornsby and Egenhofer (1997, 2000) introduced another taxonomy of changes and processes, with an explicit description of changes based on the notion of identity. Renolen (1999) proposed a general framework in which temporal relationships between states and events are preserved explicitly and linked together in a directed acyclic graph, called a "history graph", schematized using a graphical notation.

3. Towards a representation of gradually changing phenomena

A "field" can be roughly defined as a region of space characterized by a continuously distributed physical property.
This definition is similar to the definition of "continuous phenomena" given in Yuan (2001), which stands for a distribution of single-valued variables that model a geographical phenomenon, such as air temperature, fluid pressure, and gravitational or electromagnetic force. The values of an attribute distributed in a field may also change along the time line, thus giving a form of "gradually changing phenomenon". The distribution of air temperatures can be represented as a "field" or by a spatio-temporal index, as temperature values are continuously distributed in those two dimensions.

The proposed conceptual framework builds on a clear demarcation between the spatial, temporal, and thematic domains as in (Claramunt and Thériault 1995, Yuan 1996a). This fulfills an "orthogonal principle" where the spatial, temporal, and thematic domains are independently modeled but interconnected by "domain links" (Claramunt and Thériault 1995). In this paper, we consider life and motion processes, and make a separation between thematic and spatial processes: life and stability processes, motion and more generally geometrical processes, and thematic processes. Our modeling approach is based on an object-oriented spatio-temporal model that favors the integration of complex data types and operations (Worboys 1992 and 1994, Egenhofer and Frank 1992). It has been shown elsewhere that temporal and spatio-temporal applications require complex types to support advanced modeling concepts (Fegaras and Elmasri 1998). An entity is defined as "a real-world abstraction of an existing feature"; an object means its representation in the database. ST acts as the root class of all the spatio-temporal types (elements), as well as an abstract class (an abstract class is a class which is not directly instantiable). The class ST has two subclasses, STEntity and STChange (Table 1). The former is the root class of all spatio-temporal entity types, also an abstract class; the latter is the root class of all spatio-temporal change types, also an abstract class. Spatio-temporal entity types are defined according to application and user needs. To reflect the fact that one might wish to model a given phenomenon at different levels of abstraction, spatio-temporal entities and their states, processes, and events are defined at one to many specific levels of representation in the spatial, temporal, and thematic domains. The ST class is basically specified as follows (where we make the difference between spatial, temporal, and thematic properties and relationships, and spatio-temporal operations):

ST {
  … //spatial properties and relationships
  … //temporal properties and relationships
  … //thematic properties and relationships
  … //spatio-temporal operations
}

Spatio-temporal entities and changes are modeled as either instances of classes (i.e. objects), properties of those objects in the spatial, temporal (either instant- or period-stamped), and thematic domains, or spatio-temporal operations defined within those objects. In the example below, the attributes Position and SpatialExtent denote the center of gravity and the extent of an ST entity, respectively, while TracePosition and TraceExtent denote the center of gravity and the extent of an ST change, if any (e.g. the modeling of a displacement). Let us consider the example illustrated in Figure 1(1) where the 10°C isotherm moves towards the Northwest from the 1980s ("a") to the 1990s ("A"). The position of the isotherm "a" is denoted by its center of gravity; its extent is the spatial extent occupied. The TracePosition of the process "displacement" is a polyline which starts at the position of "a" and ends at that of "A" (i.e. it denotes the displacement of the center of gravity), and its TraceExtent is the spatial extent (i.e. polygon) bounded by the respective boundaries of the two versions of the 10°C isotherm. This displacement is modeled as an event expression that gives the TracePosition and the TraceExtent of the origin and destination of a moving entity:

a:: DisplacementIso (traceposition (position1, position2), traceextent; 1980s', 1990s'; entilist (a, A))

The links between the classes STEntity and STChange (Table 1) denote some semantic dependencies; for instance, C_attrlist in STChange corresponds to a series of thematic data on the property involved, and Entilist to a list of versions of one to several entities involved in the process.

The spatial properties of the class STEntity denote the spatial position and extent occupied by this entity, including some additional spatial properties such as the main direction of a displacement (e.g. in the example above this is the case of the attribute DisplacementIso). Note that the properties of ST entities are either valued or undetermined (such as the position and spatial extent of a moving region or moving line). For instance, in the case of a displacement of a moving line, the values of the attributes TracePosition and TraceExtent may not be known at the end time of that process. In this case the attribute Direction can be used to give alternative data on this spatial displacement.

Table 1. STEntity and STChange

STIsotherm a {
  Position;
  Extent;
  Start time;
  End time;
  Type;
  … //spatio-temporal relationships
  … //spatio-temporal queries
}

DisplacementIso {
  Traceposition;
  Traceextent;
  Direction;
  C_attrlist (position1, position2);
  Start time;
  End time;
  Entilist;
  … //spatio-temporal relationships
  … //spatio-temporal queries
}

This spatio-temporal object-oriented model supports entity-based and change-based queries because it treats spatio-temporal entities and changes as different classes. A second class of operations is inspired by advances in Active Databases in the manipulation of complex patterns of composite temporal events. Several active database languages such as ODE, Snoop, SAMOS and EPL were developed to support the specification of rules that are fired when complex patterns of events are detected (Gehani et al. 1992, Giuffrida and Zaniolo 1994). We integrate EPL event-pattern operators and apply them to represent and reason over spatio-temporal processes as in (Claramunt and Thériault 1996). EPL provides some basic constructs whose main logical operators are briefly introduced. Let E1, E2, ..., En denote events or processes; an EPL expression includes the immediate sequence (E1, E2, ..., En), the immediate sequence of n instances of E written *:E, the negation !E, the conjunction (E1 & E2 & ... & En), the disjunction {E1, E2, ..., En}, and the relaxed sequence [E1, E2, ..., En].

4. Application to a case study

4.1. Modeling elements – isotherms and homogeneous regions

A dynamic field models a phenomenon which is spatially distributed and continuously changes over time. Let us consider the example of an air temperature phenomenon represented by isotherms (cf. Figure 1). In order to represent the spatial distribution of air temperature, we derive homogeneous regions from those isotherms (e.g. above 0°C and below 0°C).
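As an illustration of how the ST class hierarchy and process expressions of Section 3 could be rendered in an object model, here is a small Python sketch. The class and attribute names follow Table 1, but the implementation itself, including the example coordinates, is an assumption of this edit rather than code from the paper.

from dataclasses import dataclass
from typing import Optional

@dataclass
class STEntity:
    # A spatio-temporal entity version (e.g. one state of an isotherm)
    name: str
    position: tuple          # center of gravity (x, y)
    extent: object           # geometry of the spatial extent (None if undetermined)
    start_time: str
    end_time: str
    type: str = "isotherm"

@dataclass
class STChange:
    # A spatio-temporal change (process) linking entity versions
    kind: str                                 # e.g. "displacement", "contraction"
    entilist: list                            # versions of the entities involved
    start_time: str
    end_time: str
    trace_position: Optional[object] = None   # e.g. polyline between centers
    trace_extent: Optional[object] = None     # e.g. polygon between boundaries
    direction: Optional[str] = None           # used when the trace is undetermined

# The displacement of isotherm "a" towards "A" between the 1980s and 1990s
# (coordinates are invented for illustration)
a = STEntity("a", (118.0, 38.0), None, "1980s", "1980s")
A = STEntity("A", (116.0, 40.0), None, "1990s", "1990s")
move = STChange("displacement", [a, A], "1980s", "1990s", direction="northwest")
print(move.kind, move.direction)  # displacement northwest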
We model isotherm changes over time using the concepts of either “moving lines” or “moving regions”. This section introduces the principles behind this approach and illustrates its potential through entity- and process-based queries.

4.2. Air temperature changes as moving lines

Moving lines (e.g. isotherms) can appear, disappear, and reappear, and a moving line, in whole or in part, may move in one or several different directions, according to the modeling needs and the level of granularity chosen. One can remark that a moving line does not hold, at the database modeling level, a valued position along the timeline of a supposed displacement between two known positions (i.e. between the 1980s and the 1990s in Figure 1(1)). However, a displacement can be modeled using an event-based expression. As an example, let us consider the case given in Figure 1(1) and the evolution of the 10 °C isotherm from the 1980s to the 1990s. The spatial scale is 1:60,000,000, and the temporal granularity 10 years (note that this process is spatial only). There are three 10 °C isotherms in the 1980s (a, b, and c) and five in the 1990s (A, B, C, D, and E). These processes, at the level described in Figure 1(1), are modeled as follows, where displacement, fissiparism, contraction and appearance are elementary processes (the complete semantics of these logical expressions is given in Yang 2003).

a:: (displacement (northwest; 1980s’, 1990s’; a, A) & fissiparism (position-E, spatial extent-E; 1980s’, 1990s’; a → A, E)) &
b:: contraction (position-b, traceExtent (union(extent-b, extent-B)); 1980s’, 1990s’; b, B) &
c:: contraction (position-c, traceExtent (union(extent-c, extent-C)); 1980s’, 1990s’; c, C) &
D:: appearance (position-D, spatial extent-D; 1980s’, 1990s’; D)

Figure 1(2) shows the same phenomenon but at a lower level of abstraction, with a temporal granularity of 5 years. Compared to Figure 1(1), more detailed information on spatio-temporal entities and changes appears. For instance, a new entity F’ appears, and the 10 °C isotherms “A”, “D” and “E” take on different patterns. Changes on the isotherm “a” are modeled as follows (Figure 1(2)):

a:: displacement (northwest; 1981~1985, 1986~1990; air temperature isotherm),
a’:: displacement (northwest; 1986~1990, 1991~1995; air temperature isotherm),
A’:: displacement (northwest; 1991~1995, 1996~2000; air temperature isotherm).

Figure 1. Moving isolines: (1) the 10 °C isotherm in the 1980s and in the 1990s, and (2) the 10 °C isotherms in 1981~1985, 1986~1990, 1991~1995, 1996~2000.

4.3. Moving regions

A moving region is modeled as a whole entity, which can appear, disappear, reappear, contract, extend, deform, and move from one position to another. Using the concept of moving region, changes on air temperature regions can also be modeled. Let us consider a representation of gradual changes in air temperature over the period from the 1980s to the 1990s in the region of interest (note that those regions are computed using homogeneous areas of air temperatures). Those regions are represented at different levels of abstraction (i.e. so-called multi-representation changes as denoted in Figure 2(a) to (f)). At levelA (Figure 2(a) and (b)), let us consider the case of the regions “a” and “d”. The entity “a” contracts while “d” expands. During the same period of time, both “a” and “d” experience a displacement process (not completely determined) together with a “deformation”.
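Such composite process expressions, whether over moving lines or moving regions, might be encoded as plain data. The following is a minimal sketch assuming Python: the process names and the “&” conjunction come from the model, while the dataclass fields and the Conjunction helper are illustrative assumptions. It encodes the changes on the isotherm “a” between the 1980s and the 1990s given in Section 4.2.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Process:
    kind: str                      # "displacement", "fissiparism", ...
    start: str                     # e.g. "1980s"
    end: str                       # e.g. "1990s"
    entilist: List[str]            # entity versions involved
    direction: Optional[str] = None
    note: Optional[str] = None     # free-form spatial arguments

@dataclass
class Conjunction:
    """EPL-style '&' over processes: all members are observed."""
    members: List[Process]

# changes on the 10 °C isotherm "a" between the 1980s and the 1990s
a_changes = Conjunction([
    Process("displacement", "1980s", "1990s", ["a", "A"],
            direction="northwest"),
    Process("fissiparism", "1980s", "1990s", ["a", "A", "E"],
            note="a -> A, E: the line splits and E detaches"),
])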
LevelB (Figure 2(c), (d), (e) and (f)) gives a more precise representation of the phenomenon, with a finer spatial resolution (i.e. 1/10 of that of levelA), a temporal granularity of 5 years, and additional classes of air temperatures. Let us model changes on the entity “a” according to those two levels of abstraction:

LevelA
a:: (deformation (position-a, traceExtent (spatial extent-a, spatial extent-A); 1980s’, 1990s’; a, a’) &
displacement (position (position-a, position-A), traceExtent (union(spatial extent-a, spatial extent-A)); 1980s’, 1990s’; a, a’))

LevelB
a:: (contraction (position-a, traceExtent (spatial extent-a, spatial extent-a’); 1981~1985, 1986~1990; a, a’) &
displacement (position (position-a, position-a’), traceExtent (union(spatial extent-a, spatial extent-a’)); 1981~1985, 1986~1990; a, a’) &
deformation () &
fissiparism (position-a, position-a’, position-b’, traceExtent (union (spatial extent-a, spatial extent-a’, spatial extent-b’)); 1981~1985, 1986~1990; a → a’ + b’)),
a’:: (contraction (position-a’, traceExtent (spatial extent-a’, spatial extent-A’); 1986~1990, 1991~1995; a’, A’) &
displacement (position (position-a’, position-A’), traceExtent (union(spatial extent-a’, spatial extent-A’)); 1986~1990, 1991~1995; a’, A’) &
deformation ()) &
b’:: contraction (position-b’, traceExtent (spatial extent-b’, spatial extent-B’); 1986~1990, 1991~1995; b’, B’),
A’:: (contraction (position-A’, traceExtent (spatial extent-A’, spatial extent-A); 1991~1995, 1996~2000; A’, A) &
displacement (position (position-A’, position-A), traceExtent (union(spatial extent-A’, spatial extent-A)); 1991~1995, 1996~2000; A’, A) &
deformation () &
fissiparism (position-A’, position-A, position-G, traceExtent (union (spatial extent-A’, spatial extent-A, spatial extent-G)); 1991~1995, 1996~2000; A’ → A + G)) &
B’:: disappearance (position-B’, spatial extent-B’; 1991~1995, 1996~2000; B’, B)

Figure 2. Multiple representations of air temperature distribution and changes: (a) 1980s, (b) 1990s, (c) 1981~1985, (d) 1986~1990, (e) 1991~1995, (f) 1996~2000.

Entity-based queries

Entity-based queries involve spatial and thematic properties, possibly over time. We identify three types of entity-based queries depending on the values returned.

(1) Return-spatial queries. Let us consider the following query example: “where are the air temperature areas above 0 °C (or above 10 °C, below 10 °C) in the 1980s, north of 36°N, in the region of interest?”. This query is expressed as follows (the following queries are expressed in a basic grammatical form illustrated with some values that correspond to the semantics of the examples given):

SpatialE_QS(36°N~42°N: 111°E~123°E, 1980s, air temperature, {value | value > 0°C})

At levelA (Figure 2(a) and (b)), this query returns (i.e. the output of the query) some spatial values, that is, the homogeneous region entities “a”, “b”, “d”, but not “e” as “e” is not in the given spatial extent, and not “c” as it is below 0 °C. At levelB (Figure 2(c), (d), (e), (f)), the query returns the regions “A”, “B”, “D”, “F”.

(2) Return-thematic queries. This is illustrated by the following example: “what is the air temperature in Beijing (39°56'N: 116°17'E) in the 1990s?”. This query is specified as follows:

ThematicE_Q(39°56'N: 116°17'E, 1990s)

where the query returns a thematic value {value | value > 10°C} at levelA.

(3) Return-temporal queries. This is illustrated at levelB by the query “when is the air temperature in entity o (o’) {value | value > 15°C}?”. The query is as follows:

TemporalE_Q(spatial extent of entity o (o’), air temperature, {value | value > 15°C})

where the query returns two periods: 1991~1995 and 1996~2000.

Let us finally introduce the example of a spatio-temporal query: “when and where is the air temperature above 10 °C in the region of interest over the period from the 1980s to the 1990s?”. The query is as follows:

Spatio-temporalE_Q(28°N~42°N: 111°E~123°E, 1980s, 1990s, air temperature, {value | value > 10°C})

This query returns a list of moving regions together with their temporal values. At levelA, the result is “d” in the 1980s, and “B” and “D” in the 1990s. At levelB, the result is “d”, “l”, “k”, “m”, “o”, “i” from 1981 to 1985; “d’”, “i’”, “o’”, “p’” from 1986 to 1990; “D’”, “I’”, “K’”, “P’” from 1991 to 1995; and “D”, “H”, “I”, “J”, “N”, “Q”, “R” from 1996 to 2000.
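These query forms can be read as filters over the spatial, temporal and thematic domains. The following is a minimal sketch assuming Python: only the query names mirror the grammar above, while the record layout (a bounding box standing in for the spatial extent, a representative temperature standing in for the thematic value) and the helper functions are simplifying assumptions.

from dataclasses import dataclass
from typing import Callable, List, Tuple

BBox = Tuple[float, float, float, float]  # (lat_min, lat_max, lon_min, lon_max)

@dataclass
class RegionVersion:
    name: str          # e.g. "a", "A'"
    bbox: BBox         # crude stand-in for the spatial extent
    period: str        # e.g. "1980s", "1991~1995"
    temp: float        # representative air temperature of the region

def overlaps(b1: BBox, b2: BBox) -> bool:
    """True when the two bounding boxes intersect."""
    return not (b1[1] < b2[0] or b2[1] < b1[0]
                or b1[3] < b2[2] or b2[3] < b1[2])

def spatial_e_q(db: List[RegionVersion], window: BBox, period: str,
                pred: Callable[[float], bool]) -> List[str]:
    """Return-spatial query: which regions satisfy the thematic predicate."""
    return [r.name for r in db
            if r.period == period and overlaps(r.bbox, window)
            and pred(r.temp)]

def temporal_e_q(db: List[RegionVersion], versions: List[str],
                 pred: Callable[[float], bool]) -> List[str]:
    """Return-temporal query: when does an entity satisfy the predicate."""
    return [r.period for r in db if r.name in versions and pred(r.temp)]

db = [RegionVersion("o", (36, 42, 111, 123), "1991~1995", 16.0),
      RegionVersion("o'", (36, 42, 111, 123), "1996~2000", 17.5)]
when_warm = temporal_e_q(db, ["o", "o'"], lambda t: t > 15.0)

Evaluated against these toy versions of the entities “o” and “o’”, the temporal query returns the two periods given above.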
4.4. Change-based queries

Change-based queries model changes on either moving lines or moving regions. We illustrate this class of queries using increases or decreases in air temperature, where these changes happen, and when these changes happen (or a combination of those criteria). An example of a change-based query is as follows: “where are the places where the air temperature increases from the 1980s to the 1990s?”. The signature of this query is as follows:

C_QSTA(28°N~42°N: 111°E~123°E, 1980s, 1990s, air temperature increase, 10°C, 11°C)

where this query returns the intersection of the isotherms “a” and “a1” as illustrated in Figure 3. Another example is the retrieval of the places where the air temperature increases from {value | value < 10°C} to {value | value > 10°C} from the 1980s to the 1990s at levelA. The query is as follows:

C_QSTA(28°N~42°N: 111°E~123°E, 1980s, 1990s, air temperature increase, {value | 0°C < value < 10°C}, {value | value > 10°C})

where this query returns the intersection of the entities “a” and “D”. Finally, let us give the example of a spatio-temporal and thematic query related to the changes on some specified entities: “when does the air temperature of the entity o’ increase from {value | 10°C < value < 15°C} to {value | value > 15°C}?”. This query is expressed as:

TemporalC_Q(points in o’:: increase({value | 10°C < value < 15°C}, {value | value > 15°C}))

where this query returns “from 1986~1990 to 1991~1995”.

Figure 3. 10 °C isotherm in the 1980s and 11 °C isotherm in the 1990s
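In the same spirit, a change-based query can be sketched as a filter over change records rather than entity records. The sketch below, assuming Python, is illustrative only: the ChangeRecord layout and helper name are assumptions, and only the TemporalC_Q query name and the example values mirror the text.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class ChangeRecord:
    entity: str                # entity the change applies to, e.g. "o'"
    kind: str                  # e.g. "increase"
    before: float              # thematic value before the change (°C)
    after: float               # thematic value after the change (°C)
    periods: Tuple[str, str]   # (from-period, to-period)

def temporal_c_q(changes: List[ChangeRecord], entity: str,
                 from_pred: Callable[[float], bool],
                 to_pred: Callable[[float], bool]) -> Optional[Tuple[str, str]]:
    """When does the entity's temperature move from one class to another?"""
    for ch in changes:
        if (ch.entity == entity and ch.kind == "increase"
                and from_pred(ch.before) and to_pred(ch.after)):
            return ch.periods
    return None

changes = [ChangeRecord("o'", "increase", 12.0, 16.0,
                        ("1986~1990", "1991~1995"))]
answer = temporal_c_q(changes, "o'",
                      lambda v: 10.0 < v < 15.0, lambda v: v > 15.0)
# -> ("1986~1990", "1991~1995"), matching the example above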
4.5. Discussion

This case study provides an illustration of the potential of our modeling approach. Moving regions and moving lines give alternative representations of geographical changes. They help in the inductive observation of the distribution of air temperature changes and of the classes of process that generate those changes. Changes can be observed at different levels of abstraction, resolution and granularity in the spatial and thematic dimensions, that is, when and where air temperature increased, qualitatively (e.g. where a given change appeared) and quantitatively (e.g. the total area of the regions transformed by an air temperature increase). At a high level, it can reflect long-term (even global) climatic or environmental changes. At a low level, it can help to explore microclimates or short-term environmental changes. This gives a large range of applications, from the observation of environmental changes to the understanding of the physical mechanisms behind the phenomena observed.

The modeling approach is entity- and change-based at the conceptual level, and object-oriented at the database level. Entities are derived from continuous representations of space at different time steps, and processes from a modeling analysis of entity changes. This allows for a form of multi-representation, as the temporal granularity and the spatial resolution are redefined during those derivations. Three-domain queries then support the observation of changes; queries manipulate moving regions, moving isotherms, or even the processes that trigger those changes. Such a query language provides a new way of exploring geographical changes at the entity level and a support for the preliminary exploration of spatio-temporal patterns. These facilities might complement spatial data analysis (e.g. map algebra and overlay operations), thus enlarging the range of operations available to scientists and decision-makers. As illustrated in this case study, the proposed approach is particularly adapted to the modeling of gradual changes. Overall, the model helps to describe and reason over what and how (e.g. appearance vs. disappearance of an entity, displacement of a region or an isotherm), when (e.g. at an instant or over a period of time), and where (e.g. at a given location, along a line, over a region) a phenomenon happens in a geographical space. This information is particularly useful for environmental scientists who wish to represent and analyze geographical changes and discover the physical rules behind them.

5. Conclusion

This paper introduces a process-based approach oriented to the modeling of gradual changes. The model considers entities and changes as modeling primitives. Entities and processes, together with their properties, are described in the spatial, temporal and thematic dimensions. Processes are classified using a taxonomy of elementary processes, and composite processes are specified using logical expressions. Those logical expressions allow for a description of geographical phenomena in a given region of interest, at different levels of abstraction. The model is completed by spatial, temporal and thematic operators that allow users to interact with entities and changes and to explore patterns in all those dimensions.

The model is illustrated by a case study oriented to the observation of air temperature changes in northern and, partly, eastern China. This work constitutes a preliminary step of our research. We plan to enrich the semantics of our model by exploiting the capabilities of object-oriented modeling at both the geographical data description and manipulation levels. This should facilitate further implementation developments, such as the design of an entity-based and change-based query interface that manipulates gradually changing phenomena in the spatial, temporal and thematic dimensions.

References

Abler, R., Adams, J. S., and Gould, P. (1971) Spatial Organization: The Geographer’s View of the World. London, Englewood Cliffs: Prentice Hall.

Armstrong, M. P. (1988) Temporality in spatial databases. In GIS/LIS ‘88 Proceedings: Accessing the World, Volume II. Falls Church, VA: American Society for Photogrammetry and Remote Sensing, pp. 880-889.

Claramunt, C. and Thériault, M. (1995) Managing time in GIS: an event-oriented approach. In Clifford, J. and Tuzhilin, A. (eds.), Recent Advances on Temporal Databases, Zurich, Switzerland: Springer-Verlag, pp. 23-42.

Claramunt, C. and Thériault, M. (1996) Toward semantics for modelling spatio-temporal processes within GIS. In Kraak, M. J. and Molenaar, M. (eds.), Advances in GIS II, Delft, the Netherlands: Taylor and Francis, pp. 47-64.
Claramunt, C., Parent, C. and Thériault, M. (1997) Design patterns for spatio-temporal processes. In Spaccapietra, S. and Maryanski, F. (eds.), Searching for Semantics: Data Mining, Reverse Engineering, Chapman & Hall, pp. 415-428.

Claramunt, C., Parent, C., Spaccapietra, S. and Thériault, M. (1999) Database modelling for environmental and land use changes. In Geertman, S., Openshaw, S. and Stillwell, J. (eds.), Geographical Information and Planning: European Perspectives, Chapter 10, Springer-Verlag, pp. 181-202.

Claramunt, C. and Li, B. (1999) A multi-scale approach to the propagation of temporal
