

Data Analysis English Test: Questions and Answers

I. Multiple Choice (2 points each, 10 points total)

1. Which of the following is not a common data type in data analysis?
A. Numerical  B. Categorical  C. Textual  D. Binary

2. What is the process of transforming raw data into an understandable format called?
A. Data cleaning  B. Data transformation  C. Data mining  D. Data visualization

3. In data analysis, what does the term "variance" refer to?
A. The average of the data points  B. The spread of the data points around the mean  C. The sum of the data points  D. The highest value in the data set

4. Which statistical measure is used to determine the central tendency of a data set?
A. Mode  B. Median  C. Mean  D. All of the above

5. What is the purpose of using a correlation coefficient in data analysis?
A. To measure the strength and direction of a linear relationship between two variables  B. To calculate the mean of the data points  C. To identify outliers in the data set  D. To predict future data points

II. Fill in the Blanks (2 points each, 10 points total)

6. The process of identifying and correcting (or removing) errors and inconsistencies in data is known as ________.
7. A type of data that can be ordered or ranked is called ________ data.
8. The ________ is a statistical measure that shows the average of a data set.
9. A ________ is a graphical representation of data that uses bars to show comparisons among categories.
10. When two variables move in opposite directions, the correlation between them is ________.

III. Short Answer (5 points each, 20 points total)

11. Explain the difference between descriptive and inferential statistics.
12. What is the significance of a p-value in hypothesis testing?
13. Describe the concept of data normalization and its importance in data analysis.
14. How can data visualization help in understanding complex data sets?

IV. Calculation (10 points each, 20 points total)

15. Given a data set with the following values: 10, 12, 15, 18, 20, calculate the mean and standard deviation.
16. If a data analyst wants to compare the performance of two different marketing campaigns, what type of statistical test might they use and why?

V. Case Analysis (15 points each, 30 points total)

17. A company wants to analyze the sales data of its products over the last year. What steps should the data analyst take to prepare the data for analysis?
18. Discuss the ethical considerations a data analyst should keep in mind when handling sensitive customer data.

Answers:

I. Multiple Choice
1. D  2. B  3. B  4. D  5. A

II. Fill in the Blanks
6. Data cleaning  7. Ordinal  8. Mean  9. Bar chart  10. Negative

III. Short Answer
11. Descriptive statistics summarize and describe the features of a data set, while inferential statistics make predictions or inferences about a population based on a sample.
12. A p-value indicates the probability of observing the data, or something more extreme, if the null hypothesis is true. A small p-value suggests that the observed data is unlikely under the null hypothesis, leading to its rejection.
13. Data normalization is the process of scaling data to a common scale. It is important because it allows for meaningful comparisons between variables and can improve the performance of certain algorithms.
14. Data visualization can help in understanding complex data sets by providing a visual representation of the data, making it easier to identify patterns, trends, and outliers.

IV. Calculation
15. Mean = (10 + 12 + 15 + 18 + 20) / 5 = 75 / 5 = 15. Population standard deviation = √[Σ(xᵢ − mean)² / N] = √[(25 + 9 + 0 + 9 + 25) / 5] = √13.6 ≈ 3.69.
16. A t-test or ANOVA might be used to compare the means of the two campaigns, as these tests can determine if there is a statistically significant difference between the groups.

V. Case Analysis
17. The data analyst should first clean the data by removing any errors or inconsistencies. Then, they should transform the data into a suitable format for analysis, such as creating a time series for monthly sales. They might also normalize the data if necessary and perform exploratory data analysis to identify any patterns or trends.
18. A data analyst should ensure the confidentiality and privacy of customer data, comply with relevant data protection laws, and obtain consent where required. They should also be transparent about how the data will be used and take steps to prevent any potential misuse of the data.
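The arithmetic in question 15 is easy to check programmatically. The following minimal Python sketch (not part of the original test) computes the mean and the population standard deviation used in the answer above:

    import math

    def mean_and_population_sd(values):
        # Mean: sum of the values divided by their count.
        n = len(values)
        mean = sum(values) / n
        # Population variance: average squared deviation from the mean.
        variance = sum((x - mean) ** 2 for x in values) / n
        return mean, math.sqrt(variance)

    data = [10, 12, 15, 18, 20]
    mean, sd = mean_and_population_sd(data)
    print(f"mean = {mean}, sd = {sd:.2f}")  # mean = 15.0, sd = 3.69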

Two-Dimensional Gas of Massless Dirac Fermions in Graphene

K.S. Novoselov¹, A.K. Geim¹, S.V. Morozov², D. Jiang¹, M.I. Katsnelson³, I.V. Grigorieva¹, S.V. Dubonos², A.A. Firsov²
¹Manchester Centre for Mesoscience and Nanotechnology, University of Manchester, Manchester, M13 9PL, UK
²Institute for Microelectronics Technology, 142432, Chernogolovka, Russia
³Institute for Molecules and Materials, Radboud University of Nijmegen, Toernooiveld 1, 6525 ED Nijmegen, the Netherlands

Electronic properties of materials are commonly described by quasiparticles that behave as nonrelativistic electrons with a finite mass and obey the Schrödinger equation. Here we report a condensed matter system where electron transport is essentially governed by the Dirac equation and charge carriers mimic relativistic particles with zero mass and an effective "speed of light" c∗ ≈ 10⁶ m/s. Our studies of graphene – a single atomic layer of carbon – have revealed a variety of unusual phenomena characteristic of two-dimensional (2D) Dirac fermions. In particular, we have observed that a) the integer quantum Hall effect in graphene is anomalous in that it occurs at half-integer filling factors; b) graphene's conductivity never falls below a minimum value corresponding to the conductance quantum e²/h, even when carrier concentrations tend to zero; c) the cyclotron mass m_c of massless carriers with energy E in graphene is described by the equation E = m_c c∗²; and d) Shubnikov-de Haas oscillations in graphene exhibit a phase shift of π due to Berry's phase.

Graphene is a monolayer of carbon atoms packed into a dense honeycomb crystal structure that can be viewed as either an individual atomic plane extracted from graphite, an unrolled single-wall carbon nanotube, or a giant flat fullerene molecule. This material was not studied experimentally before and, until recently [1,2], was presumed not to exist. To obtain graphene samples, we used the original procedures described in [1], which involve micromechanical cleavage of graphite followed by identification and selection of monolayers using a combination of optical, scanning-electron and atomic-force microscopies. The selected graphene films were further processed into multi-terminal devices such as the one shown in Fig. 1, following standard microfabrication procedures [2]. Despite being only one atom thick and unprotected from the environment, our graphene devices remain stable under ambient conditions and exhibit high mobility of charge carriers. Below we focus on the physics of "ideal" (single-layer) graphene, which has a different electronic structure and exhibits properties qualitatively different from those characteristic of either ultra-thin graphite films (which are semimetals and whose material properties were studied recently [2-5]) or even of our other devices consisting of just two layers of graphene (see further).

Figure 1 shows the electric field effect [2-4] in graphene. Its conductivity σ increases linearly with increasing gate voltage Vg for both polarities, and the Hall effect changes its sign at Vg ≈ 0. This behaviour shows that substantial concentrations of electrons (holes) are induced by positive (negative) gate voltages. Away from the transition region Vg ≈ 0, the Hall coefficient RH = 1/ne varies as 1/Vg, where n is the concentration of electrons or holes and e the electron charge.
The linear dependence 1/RH ∝ Vg yields n = α·Vg with α ≈ 7.3·10¹⁰ cm⁻²/V, in agreement with the theoretical estimate n/Vg ≈ 7.2·10¹⁰ cm⁻²/V for the surface charge density induced by the field effect (see Fig. 1's caption). The agreement indicates that all the induced carriers are mobile and there are no trapped charges in graphene. From the linear dependence σ(Vg) we found carrier mobilities µ = σ/ne, which reached up to 5,000 cm²/Vs for both electrons and holes, were independent of temperature T between 10 and 100 K, and were probably still limited by defects in the parent graphite.

To characterise graphene further, we studied Shubnikov-de Haas oscillations (SdHO). Figure 2 shows examples of these oscillations for different magnetic fields B, gate voltages and temperatures. Unlike ultra-thin graphite [2], graphene exhibits only one set of SdHO for both electrons and holes. By using standard fan diagrams [2,3], we have determined the fundamental SdHO frequency BF for various Vg. The resulting dependence of BF as a function of n is plotted in Fig. 3a. Both carriers exhibit the same linear dependence BF = β·n with β ≈ 1.04·10⁻¹⁵ T·m² (±2%). Theoretically, for any 2D system β is defined only by its degeneracy f, so that BF = φ₀n/f, where φ₀ = 4.14·10⁻¹⁵ T·m² is the flux quantum. Comparison with the experiment yields f = 4, in agreement with the double-spin and double-valley degeneracy expected for graphene [6,7] (cf. caption of Fig. 2). Note, however, an anomalous feature of SdHO in graphene, which is their phase. In contrast to conventional metals, graphene's longitudinal resistance ρxx(B) exhibits maxima rather than minima at integer values of the Landau filling factor ν (Fig. 2a). Fig. 3b emphasizes this fact by comparing the phase of SdHO in graphene with that in a thin graphite film [2]. The origin of the "odd" phase is explained below.

Another unusual feature of 2D transport in graphene clearly reveals itself in the T-dependence of SdHO (Fig. 2b). Indeed, with increasing T the oscillations at high Vg (high n) decay more rapidly. One can see that the last oscillation (Vg ≈ 100 V) becomes practically invisible already at 80 K, whereas the first one (Vg < 10 V) clearly survives at 140 K and, in fact, remains notable even at room temperature. To quantify this behaviour, we measured the T-dependence of the SdHO amplitude at various gate voltages and magnetic fields. The results could be fitted accurately (Fig. 3c) by the standard expression T/sinh(2π²k_B T m_c/ħeB), which yielded m_c varying between ≈ 0.02 and 0.07 m₀ (m₀ is the free electron mass). Changes in m_c are well described by a square-root dependence m_c ∝ n^(1/2) (Fig. 3d).

To explain the observed behaviour of m_c, we refer to the semiclassical expressions BF = (ħ/2πe)S(E) and m_c = (ħ²/2π)∂S(E)/∂E, where S(E) = πk² is the area in k-space of the orbits at the Fermi energy E(k) [8]. Combining these expressions with the experimentally-found dependences m_c ∝ n^(1/2) and BF = (h/4e)n, it is straightforward to show that S must be proportional to E², which yields E ∝ k. Hence, the data in Fig. 3 unambiguously prove the linear dispersion E = ħkc∗ for both electrons and holes with a common origin at E = 0 [6,7]. Furthermore, the above equations also imply m_c = E/c∗² = ħ(πn)^(1/2)/c∗, and the best fit to our data yields c∗ ≈ 1·10⁶ m/s, in agreement with band structure calculations [6,7]. The employed semiclassical model is fully justified by a recent theory for graphene [9], which shows that the SdHO amplitude can indeed be described by the above expression T/sinh(2π²k_B T m_c/ħeB) with m_c = E/c∗².
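As a quick numerical cross-check of the relations above, the following Python sketch (not from the paper; the two density values are illustrative picks within the reported range) evaluates m_c = ħ(πn)^(1/2)/c∗ and reproduces the quoted m_c ≈ 0.02–0.07 m₀:

    import math

    HBAR = 1.054571817e-34   # reduced Planck constant, J*s
    M0 = 9.1093837015e-31    # free electron mass, kg
    C_STAR = 1.0e6           # effective "speed of light" in graphene, m/s

    def cyclotron_mass(n_cm2):
        """Cyclotron mass m_c = hbar*sqrt(pi*n)/c*, with n given in cm^-2."""
        n_m2 = n_cm2 * 1e4                   # convert cm^-2 to m^-2
        return HBAR * math.sqrt(math.pi * n_m2) / C_STAR

    for n in (1e12, 7e12):                   # illustrative densities
        print(f"n = {n:.0e} cm^-2 -> m_c/m0 = {cyclotron_mass(n) / M0:.3f}")
    # Prints roughly 0.020 and 0.054, consistent with the reported 0.02-0.07 m0.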
Note that, even though the linear spectrum of fermions in graphene (Fig. 3e) implies zero rest mass, their cyclotron mass is not zero. The unusual response of massless fermions to magnetic field is highlighted further by their behaviour in the high-field limit, where SdHO evolve into the quantum Hall effect (QHE). Figure 4 shows the Hall conductivity σxy of graphene plotted as a function of electron and hole concentrations in a constant field B. Pronounced QHE plateaux are clearly seen but, surprisingly, they do not occur in the expected sequence σxy = (4e²/h)N, where N is integer. On the contrary, the plateaux correspond to half-integer ν, so that the first plateau occurs at 2e²/h and the sequence is (4e²/h)(N + ½). Note that the transition from the lowest hole (ν = –½) to the lowest electron (ν = +½) Landau level (LL) in graphene requires the same number of carriers (∆n = 4B/φ₀ ≈ 1.2·10¹² cm⁻²) as the transition between other nearest levels (cf. distances between minima in ρxx). This results in a ladder of equidistant steps in σxy which are not interrupted when passing through zero. To emphasize this highly unusual behaviour, Fig. 4 also shows σxy for a graphite film consisting of only two graphene layers, where the sequence of plateaux returns to normal and the first plateau is at 4e²/h, as in the conventional QHE. We attribute this qualitative transition between graphene and its two-layer counterpart to the fact that fermions in the latter exhibit a finite mass near n ≈ 0 (as found experimentally; to be published elsewhere) and can no longer be described as massless Dirac particles.

The half-integer QHE in graphene has recently been suggested by two theory groups [10,11], stimulated by our work on thin graphite films [2] but unaware of the present experiment. The effect is single-particle and intimately related to subtle properties of massless Dirac fermions, in particular, to the existence of both electron- and hole-like Landau states at exactly zero energy [9-12]. The latter can be viewed as a direct consequence of the Atiyah-Singer index theorem that plays an important role in quantum field theory and the theory of superstrings [13,14]. For the case of 2D massless Dirac fermions, the theorem guarantees the existence of Landau states at E = 0 by relating the difference in the number of such states with opposite chiralities to the total flux through the system (note that the magnetic field can also be inhomogeneous).

To explain the half-integer QHE qualitatively, we invoke the formal expression [9-12] for the energy of massless relativistic fermions in quantized fields, E_N = [2eħc∗²B(N + ½ ± ½)]^(1/2). In QED, the sign ± describes two spins, whereas in the case of graphene it refers to "pseudospins". The latter have nothing to do with the real spin but are "built in" to the Dirac-like spectrum of graphene; their origin can be traced to the presence of two carbon sublattices. The above formula shows that the lowest LL (N = 0) appears at E = 0 (in agreement with the index theorem) and accommodates fermions with only one (minus) projection of the pseudospin. All other levels N ≥ 1 are occupied by fermions with both (±) pseudospins. This implies that for N = 0 the degeneracy is half of that for any other N. Alternatively, one can say that all LL have the same "compound" degeneracy but the zero-energy LL is shared equally by electrons and holes. As a result the first Hall plateau occurs at half the normal filling and, oddly, both ν = –½ and +½ correspond to the same LL (N = 0).
All other levels have normal degeneracy 4B/φ₀ and, therefore, remain shifted by the same ½ from the standard sequence. This explains the QHE at ν = N + ½ and, at the same time, the "odd" phase of SdHO (minima in ρxx correspond to plateaux in ρxy and, hence, occur at half-integer ν; see Figs. 2&3), in agreement with theory [9-12]. Note, however, that from another perspective the phase shift can be viewed as the direct manifestation of Berry's phase acquired by Dirac fermions moving in magnetic field [15,16].

Finally, we return to zero-field behaviour and discuss another feature related to graphene's relativistic-like spectrum. The spectrum implies vanishing concentrations of both carriers near the Dirac point E = 0 (Fig. 3e), which suggests that the low-T resistivity of the zero-gap semiconductor should diverge at Vg ≈ 0. However, neither of our devices showed such behaviour. On the contrary, in the transition region between holes and electrons graphene's conductivity never falls below a well-defined value, practically independent of T between 4 and 100 K. Fig. 1c plots values of the maximum resistivity ρmax(B = 0) found in 15 different devices, which within an experimental error of ≈15% all exhibit ρmax ≈ 6.5 kΩ, independent of their mobility, which varies by a factor of 10. Given the quadruple degeneracy f, it is obvious to associate ρmax with h/fe² = 6.45 kΩ, where h/e² is the resistance quantum. We emphasize that it is the resistivity (or conductivity), rather than the resistance (or conductance), which is quantized in graphene (i.e., the resistance R measured experimentally was not quantized but scaled in the usual manner as R = ρL/w with changing length L and width w of our devices). Thus, the effect is completely different from the conductance quantization observed previously in quantum transport experiments.

However surprising, the minimum conductivity is an intrinsic property of electronic systems described by the Dirac equation [17-20]. It is due to the fact that, in the presence of disorder, localization effects in such systems are strongly suppressed and emerge only at exponentially large length scales. Assuming the absence of localization, the observed minimum conductivity can be explained qualitatively by invoking Mott's argument [21] that the mean free path l of charge carriers in a metal can never be shorter than their wavelength λF. Then, σ = neµ can be re-written as σ = (e²/h)k_F l and, hence, σ cannot be smaller than ≈ e²/h for each type of carrier. This argument is known to have failed for 2D systems with a parabolic spectrum, where disorder leads to localization and eventually to insulating behaviour [17,18]. For the case of 2D Dirac fermions, no localization is expected [17-20] and, accordingly, Mott's argument can be used. Although there is a broad theoretical consensus [18-23,10,11] that a 2D gas of Dirac fermions should exhibit a minimum conductivity of about e²/h, this quantization was not expected to be accurate and most theories suggest a value of ≈ e²/πh, in disagreement with the experiment.

In conclusion, graphene exhibits electronic properties distinctive for a 2D gas of particles described by the Dirac rather than Schrödinger equation. This 2D system is not only interesting in itself but also allows one to access – in a condensed matter experiment – the subtle and rich physics of quantum electrodynamics [24-27] and provides a bench-top setting for studies of phenomena relevant to cosmology and astrophysics [27,28].
References
1. Novoselov, K.S. et al. PNAS 102, 10451 (2005).
2. Novoselov, K.S. et al. Science 306, 666 (2004); cond-mat/0505319.
3. Zhang, Y., Small, J.P., Amori, M.E.S. & Kim, P. Phys. Rev. Lett. 94, 176803 (2005).
4. Berger, C. et al. J. Phys. Chem. B 108, 19912 (2004).
5. Bunch, J.S., Yaish, Y., Brink, M., Bolotin, K. & McEuen, P.L. Nano Letters 5, 287 (2005).
6. Dresselhaus, M.S. & Dresselhaus, G. Adv. Phys. 51, 1 (2002).
7. Brandt, N.B., Chudinov, S.M. & Ponomarev, Y.G. Semimetals 1: Graphite and Its Compounds (North-Holland, Amsterdam, 1988).
8. Vonsovsky, S.V. & Katsnelson, M.I. Quantum Solid State Physics (Springer, New York, 1989).
9. Gusynin, V.P. & Sharapov, S.G. Phys. Rev. B 71, 125124 (2005).
10. Gusynin, V.P. & Sharapov, S.G. cond-mat/0506575.
11. Peres, N.M.R., Guinea, F. & Castro Neto, A.H. cond-mat/0506709.
12. Zheng, Y. & Ando, T. Phys. Rev. B 65, 245420 (2002).
13. Kaku, M. Introduction to Superstrings (Springer, New York, 1988).
14. Nakahara, M. Geometry, Topology and Physics (IOP Publishing, Bristol, 1990).
15. Mikitik, G.P. & Sharlai, Yu.V. Phys. Rev. Lett. 82, 2147 (1999).
16. Luk'yanchuk, I.A. & Kopelevich, Y. Phys. Rev. Lett. 93, 166402 (2004).
17. Abrahams, E., Anderson, P.W., Licciardello, D.C. & Ramakrishnan, T.V. Phys. Rev. Lett. 42, 673 (1979).
18. Fradkin, E. Phys. Rev. B 33, 3263 (1986).
19. Lee, P.A. Phys. Rev. Lett. 71, 1887 (1993).
20. Ziegler, K. Phys. Rev. Lett. 80, 3113 (1998).
21. Mott, N.F. & Davis, E.A. Electron Processes in Non-Crystalline Materials (Clarendon Press, Oxford, 1979).
22. Morita, Y. & Hatsugai, Y. Phys. Rev. Lett. 79, 3728 (1997).
23. Nersesyan, A.A., Tsvelik, A.M. & Wenger, F. Phys. Rev. Lett. 72, 2628 (1997).
24. Rose, M.E. Relativistic Electron Theory (John Wiley, New York, 1961).
25. Berestetskii, V.B., Lifshitz, E.M. & Pitaevskii, L.P. Relativistic Quantum Theory (Pergamon Press, Oxford, 1971).
26. Lai, D. Rev. Mod. Phys. 73, 629 (2001).
27. Fradkin, E. Field Theories of Condensed Matter Systems (Westview Press, Oxford, 1997).
28. Volovik, G.E. The Universe in a Helium Droplet (Clarendon Press, Oxford, 2003).

Acknowledgements: This research was supported by the EPSRC (UK). We are most grateful to L. Glazman, V. Falko, S. Sharapov and A. Castro Neto for helpful discussions. K.S.N. was supported by the Leverhulme Trust. S.V.M., S.V.D. and A.A.F. acknowledge support from the Russian Academy of Science and INTAS.

Figure 1. Electric field effect in graphene. a, Scanning electron microscope image of one of our experimental devices (width of the central wire is 0.2 µm). False colours are chosen to match real colours as seen in an optical microscope for larger areas of the same materials. Changes in graphene's conductivity σ (main panel) and Hall coefficient RH (b) as a function of gate voltage Vg. σ and RH were measured in magnetic fields B = 0 and 2 T, respectively. The induced carrier concentrations n are described by [2] n/Vg = ε₀ε/te, where ε₀ and ε are the permittivities of free space and SiO₂, respectively, and t ≈ 300 nm is the thickness of SiO₂ on top of the Si wafer used as a substrate. RH = 1/ne is inverted to emphasize the linear dependence n ∝ Vg. 1/RH diverges at small n because the Hall effect changes its sign around Vg = 0, indicating a transition between electrons and holes. Note that the transition region (RH ≈ 0) was often shifted from zero Vg due to chemical doping [2], but annealing of our devices in vacuum normally allowed us to eliminate the shift.
The extrapolation of the linear slopes σ(Vg) for electrons and holes results in their intersection at a value of σ indistinguishable from zero. c, Maximum values of resistivity ρ = 1/σ (circles) exhibited by devices with different mobilities µ (left y-axis). The histogram (orange background) shows the number P of devices exhibiting ρmax within 10% intervals around the average value of ≈ h/4e². Several of the devices shown were made from 2 or 3 layers of graphene, indicating that the quantized minimum conductivity is a robust effect and does not require "ideal" graphene.

Figure 2. Quantum oscillations in graphene. SdHO at constant gate voltage Vg as a function of magnetic field B (a) and at constant B as a function of Vg (b). Because µ does not change much with Vg, the constant-B measurements (at a constant ωcτ = µB) were found more informative. Panel b illustrates that SdHO in graphene are more sensitive to T at high carrier concentrations. The ∆σxx curves were obtained by subtracting a smooth (nearly linear) increase in σ with increasing Vg and are shifted for clarity. The SdHO periodicity ∆Vg in a constant B is determined by the density of states at each Landau level (α∆Vg = fB/φ₀), which for the observed periodicity of ≈15.8 V at B = 12 T yields a quadruple degeneracy. Arrows in a indicate integer ν (e.g., ν = 4 corresponds to 10.9 T) as found from the SdHO frequency BF ≈ 43.5 T. Note the absence of any significant contribution of universal conductance fluctuations (see also Fig. 1) and weak localization magnetoresistance, which are normally intrinsic for 2D materials with such high resistivity.

Figure 3. Dirac fermions of graphene. a, Dependence of BF on carrier concentration n (positive n corresponds to electrons; negative to holes). b, Examples of fan diagrams used in our analysis [2] to find BF. N is the number associated with different minima of oscillations. The lower and upper curves are for graphene (sample of Fig. 2a) and a 5-nm-thick film of graphite with a similar value of BF, respectively. Note that the curves extrapolate to different origins, namely to N = ½ and 0. In graphene, curves for all n extrapolate to N = ½ (cf. [2]). This indicates a phase shift of π with respect to the conventional Landau quantization in metals. The shift is due to Berry's phase [9,15]. c, Examples of the behaviour of the SdHO amplitude ∆ (symbols) as a function of T for m_c ≈ 0.069 and 0.023 m₀; solid curves are best fits. d, Cyclotron mass m_c of electrons and holes as a function of their concentration. Symbols are experimental data, solid curves the best fit to theory. e, Electronic spectrum of graphene, as inferred experimentally and in agreement with theory. This is the spectrum of a zero-gap 2D semiconductor that describes massless Dirac fermions with c∗ 300 times less than the speed of light.

Figure 4. Quantum Hall effect for massless Dirac fermions. Hall conductivity σxy and longitudinal resistivity ρxx of graphene as a function of their concentration at B = 14 T. σxy = (4e²/h)ν is calculated from the measured dependences of ρxy(Vg) and ρxx(Vg) as σxy = ρxy/(ρxy² + ρxx²). The behaviour of 1/ρxy is similar but exhibits a discontinuity at Vg ≈ 0, which is avoided by plotting σxy.
Inset: σxy in "two-layer graphene" where the quantization sequence is normal and occurs at integer ν. The latter shows that the half-integer QHE is exclusive to "ideal" graphene.
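The gate-capacitance estimate quoted in the Figure 1 caption, n/Vg = ε₀ε/te ≈ 7.2·10¹⁰ cm⁻²/V, can also be verified numerically. The minimal Python sketch below assumes ε ≈ 3.9 for SiO₂ (a typical textbook value; this excerpt does not state the exact number used):

    # Check of n/Vg = eps0*eps/(t*e) from the Figure 1 caption.
    EPS0 = 8.854e-12      # vacuum permittivity, F/m
    EPS_SIO2 = 3.9        # assumed relative permittivity of SiO2
    T = 300e-9            # SiO2 thickness, m
    E_CHARGE = 1.602e-19  # elementary charge, C

    alpha = EPS0 * EPS_SIO2 / (T * E_CHARGE)     # carriers per m^2 per volt
    print(f"n/Vg = {alpha * 1e-4:.2e} cm^-2/V")  # ~7.2e10, matching the text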

New Horizon College English (新视野大学英语) Fast Reading 4, 2nd Edition: After-Class Exercises with Answers

Part One

Passage 1. Summary: This passage explains the "Onion Method" and how it can be used to improve product quality and meet customer needs.

Answers:
1. What is the Onion Method? Answer: It is a method that relates to product development that incorporates customer needs.
2. What is the purpose of the method? Answer: The purpose of the method is to ensure that all customer needs are being met by the product.
3. What is the first layer of the Onion Method? Answer: The first layer is customer needs, as it is the foundation for the other layers.
4. What is the fourth layer of the Onion Method? Answer: The fourth layer is product design, as it determines how well the product will cater to customer needs.

Passage 2. Summary: This passage explains what a Value Stream Map is and how it helps companies better understand their production processes and improve efficiency.

Answers:
1. What is a Value Stream Map? Answer: It is a representation of the steps involved in a process, as well as the time it takes for each step to be completed.
2. What is the purpose of a Value Stream Map? Answer: The purpose of a Value Stream Map is to help a company identify inefficiencies in its processes and to improve productivity.
3. What is the first step in creating a Value Stream Map? Answer: The first step is to identify the product or service being produced.
4. What is the final step in creating a Value Stream Map? Answer: The final step is to implement changes based on the discoveries made during the mapping process.

Part Two

Passage 3. Summary: This passage discusses strategies that small and medium-sized enterprises can use to expand their customer base and increase sales through social media.

A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II

Kalyanmoy Deb, Associate Member, IEEE, Amrit Pratap, Sameer Agarwal, and T. Meyarivan

Abstract—Multiobjective evolutionary algorithms (EAs) that use nondominated sorting and sharing have been criticized mainly for their: 1) O(MN³) computational complexity (where M is the number of objectives and N is the population size); 2) nonelitism approach; and 3) the need for specifying a sharing parameter. In this paper, we suggest a nondominated sorting-based multiobjective EA (MOEA), called nondominated sorting genetic algorithm II (NSGA-II), which alleviates all the above three difficulties. Specifically, a fast nondominated sorting approach with O(MN²) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best (with respect to fitness and spread) N solutions. Simulation results on difficult test problems show that the proposed NSGA-II, in most problems, is able to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to the Pareto-archived evolution strategy and the strength-Pareto EA—two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multiobjective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective seven-constraint nonlinear problem, are compared with another constrained multiobjective optimizer, and much better performance of NSGA-II is observed.

Index Terms—Constraint handling, elitism, genetic algorithms, multicriterion decision making, multiobjective optimization, Pareto-optimal solutions.

Manuscript revised February 5, 2001 and September 7, 2001. The work of K. Deb was supported by the Ministry of Human Resources and Development, India, under the Research and Development Scheme. The authors are with the Kanpur Genetic Algorithms Laboratory, Indian Institute of Technology, Kanpur PIN 208016, India (e-mail: deb@iitk.ac.in). Publisher Item Identifier S 1089-778X(02)04101-2.

I. INTRODUCTION

THE PRESENCE of multiple objectives in a problem, in principle, gives rise to a set of optimal solutions (largely known as Pareto-optimal solutions), instead of a single optimal solution. In the absence of any further information, one of these Pareto-optimal solutions cannot be said to be better than the others, so it is desirable to find as many of them as possible. When a classical optimization method, which converts the multiobjective problem into a single-objective one, is used for this purpose, it has to be applied many times, hopefully finding a different solution at each simulation run.

Over the past decade, a number of multiobjective evolutionary algorithms have been suggested [1], [7], [13], [20], [26]. The primary reason for this is their ability to find multiple Pareto-optimal solutions in one single simulation run. Since evolutionary algorithms (EAs) work with a population of solutions, a simple EA can be extended to maintain a diverse set of solutions. With an emphasis for moving toward the true Pareto-optimal region, an EA can be used to find multiple Pareto-optimal solutions in one single simulation run.

The nondominated sorting genetic algorithm (NSGA) proposed in [20] was one of the first such EAs. Over the years, the main criticisms of the NSGA approach have been as follows.

1) High computational complexity of nondominated sorting: The currently-used nondominated sorting algorithm has a computational complexity of O(MN³) (where M is the number of objectives and N is the population size). This makes NSGA computationally expensive for large population sizes. This large complexity arises because of the complexity involved in the nondominated sorting procedure in every generation.

2) Lack of elitism: Recent results [25], [18] show that elitism can speed up the performance of the GA significantly, and it can also help in preventing the loss of good solutions once they are found.

3) Need for specifying the sharing parameter: Traditional mechanisms for ensuring diversity in a population, so as to get a wide variety of equivalent solutions, rely mostly on sharing, which requires a user-specified sharing parameter.

In this paper, we address all of these issues and propose NSGA-II. Section II briefly reviews a number of existing elitist MOEAs. In Section III, we describe the proposed NSGA-II algorithm in
detail. Section IV presents simulation results of NSGA-II and compares them with two other elitist MOEAs (PAES and SPEA). In Section V, we highlight the issue of parameter interactions, a matter that is important in evolutionary computation research. The next section extends NSGA-II for handling constraints and compares the results with another recently proposed constraint-handling method. Finally, we outline the conclusions of this paper.

II. ELITIST MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS

During 1993–1995, a number of different EAs were suggested to solve multiobjective optimization problems. Of them, Fonseca and Fleming's MOGA [7], Srinivas and Deb's NSGA [20], and Horn et al.'s NPGA [13] enjoyed more attention. These algorithms demonstrated the necessary additional operators for converting a simple EA to a MOEA. Two common features of all three operators were the following: i) assigning fitness to population members based on nondominated sorting and ii) preserving diversity among solutions of the same nondominated front. Although they have been shown to find multiple nondominated solutions on many test problems and a number of engineering design problems, researchers realized the need of introducing more useful operators (which have been found useful in single-objective EAs) so as to solve multiobjective optimization problems better. Particularly, the interest has been to introduce elitism to enhance the convergence properties of a MOEA. Reference [25] showed that elitism helps in achieving better convergence in MOEAs. Among the existing elitist MOEAs, Zitzler and Thiele's SPEA [26], Knowles and Corne's Pareto-archived evolution strategy (PAES) [14], and Rudolph's elitist GA [18] are well studied. We describe these approaches in brief. For details, readers are encouraged to refer to the original studies.

Zitzler and Thiele [26] suggested an elitist multicriterion EA with the concept of nondomination in their SPEA. They suggested maintaining an external population at every generation storing all nondominated solutions discovered so far beginning from the initial population. This external population participates in all genetic operations. At each generation, a combined population with the external and the current population is first constructed. All nondominated solutions in the combined population are assigned a fitness based on the number of solutions they dominate, and dominated solutions are assigned fitness worse than the worst fitness of any nondominated solution. This assignment of fitness makes sure that the search is directed toward the nondominated solutions. A deterministic clustering technique is used to ensure diversity among nondominated solutions. Although the implementation suggested in [26] is O(MN³), with proper bookkeeping the complexity of SPEA can be reduced to O(MN²).

Knowles and Corne [14] suggested a simple MOEA using a (1+1)-evolution strategy. Instead of using real parameters, binary strings were used and bitwise mutations were employed to create offspring. In their PAES, with one parent and one offspring, the offspring is compared with respect to the parent. If the offspring dominates the parent, the offspring is accepted as the next parent and the iteration continues. On the other hand, if the parent dominates the offspring, the offspring is discarded and a new mutated solution (a new offspring) is found. However, if the offspring and the parent do not dominate each other, the choice between the offspring and the parent is made by comparing them with an archive of best solutions found so far. The offspring is compared with the archive to check if it dominates any member of the archive. If it does, the offspring is accepted as the new parent and
all the dominated solutions are eliminated from the archive. If the offspring does not dominate any member of the archive, both parent and offspring are checked for their nearness with the solutions of the archive. If the offspring resides in a least crowded region in the objective space among the members of the archive, it is accepted as a parent and a copy is added to the archive. Crowding is maintained by dividing the entire search space deterministically into subspaces, where the recursive division is controlled by a depth parameter, and by updating the subspaces dynamically. A worst-case complexity of O(aMN) for N evaluations has been calculated, where a is the archive length; since the archive size is usually chosen proportional to the population size, the overall complexity of the algorithm is O(MN²). Rudolph [18] suggested, but did not simulate, a simple elitist MOEA based on a systematic comparison of individuals from parent and offspring populations.

III. ELITIST NONDOMINATED SORTING GENETIC ALGORITHM

A. Fast Nondominated Sorting Approach

In a naive approach to identifying the solutions of the first nondominated front in a population of size N, each solution can be compared with every other solution in the population to find if it is dominated. This requires O(MN) comparisons for each solution, where M is the number of objectives. When this process is continued to find all members of the first nondominated level in the population, the total complexity is O(MN²). In order to find the individuals in the next nondominated front, the solutions of the first front are discounted temporarily and the above procedure is repeated. In the worst case, the task of finding the second front also requires O(MN²) computations, and so on when there exist N fronts and there exists only one solution in each front. This requires an overall O(MN³) computations.

The fast nondominated sorting approach does better. First, for each solution p we calculate two entities: 1) the domination count n_p, the number of solutions which dominate the solution p, and 2) S_p, the set of solutions that the solution p dominates. This requires O(MN²) comparisons. All solutions in the first nondominated front have their domination count as zero. Now, for each solution p with n_p = 0, we visit each member q of its set S_p and reduce its domination count by one. In doing so, if for any member q the domination count becomes zero, we put it in a separate list Q; these members belong to the second nondominated front. The above procedure is then continued with each member of Q and the third front is identified. This process continues until all fronts are identified. For each solution p in the second or higher level of nondomination, the domination count n_p can be at most N − 1; thus, each such solution is visited at most N − 1 times before its domination count becomes zero. At this point, the solution is assigned a nondomination level and will never be visited again. Since there are at most N − 1 such solutions, the total complexity of this phase is O(N²), and the overall complexity of the procedure is O(MN²). Another way to calculate this complexity is to observe that the body of the first inner loop (for each p in F_i) is executed exactly N times, as each individual can be the member of at most one front, and the second inner loop (for each q in S_p) can be executed at most N − 1 times for each individual [each individual dominates at most N − 1 individuals and each domination check requires at most M comparisons], resulting in the overall O(MN²) complexity. The procedure is outlined below; F_1, F_2, ... denote the successive nondominated fronts:

    fast-non-dominated-sort(P)
      for each p in P
        S_p = {} and n_p = 0
        for each q in P
          if p dominates q then add q to S_p
          else if q dominates p then n_p = n_p + 1
        if n_p = 0 then p_rank = 1 and add p to F_1
      i = 1
      while F_i is not empty
        Q = {}                          (used to store the members of the next front)
        for each p in F_i
          for each q in S_p
            n_q = n_q - 1
            if n_q = 0 then q_rank = i + 1 and add q to Q
        i = i + 1
        F_i = Q
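As a concrete companion to the procedure just described, the following is a minimal Python sketch of the fast nondominated sort (an illustrative reimplementation with my own naming, not the authors' code):

    def dominates(p, q):
        """p dominates q: no worse in every objective, strictly better in at least one.
        p and q are tuples of objective values; all objectives are minimized."""
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    def fast_non_dominated_sort(objectives):
        """Return fronts as lists of indices; front 0 is the nondominated set."""
        n = len(objectives)
        S = [[] for _ in range(n)]       # S[p]: solutions dominated by p
        n_dom = [0] * n                  # n_dom[p]: count of solutions dominating p
        fronts = [[]]
        for p in range(n):
            for q in range(n):
                if dominates(objectives[p], objectives[q]):
                    S[p].append(q)
                elif dominates(objectives[q], objectives[p]):
                    n_dom[p] += 1
            if n_dom[p] == 0:
                fronts[0].append(p)      # p belongs to the first front
        i = 0
        while fronts[i]:
            next_front = []              # stores the members of the next front
            for p in fronts[i]:
                for q in S[p]:
                    n_dom[q] -= 1
                    if n_dom[q] == 0:    # q only dominated by already-ranked solutions
                        next_front.append(q)
            i += 1
            fronts.append(next_front)
        return fronts[:-1]               # drop the trailing empty front

    # Example: the first two solutions are mutually nondominated; the third is dominated.
    print(fast_non_dominated_sort([(1.0, 4.0), (2.0, 2.0), (3.0, 5.0)]))  # [[0, 1], [2]]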
B. Diversity Preservation

We mentioned earlier that, along with convergence to the Pareto-optimal set, it is also desired that an EA maintains a good spread of solutions in the obtained set of solutions. The original NSGA used the well-known sharing function approach, which has been found to maintain sustainable diversity in a population with appropriate setting of its associated parameters. The sharing function method involves a sharing parameter, which sets the extent of sharing desired in a problem. There are two difficulties with this approach: 1) the performance of the method in maintaining a spread of solutions depends largely on the chosen value of the sharing parameter; and 2) since each solution must be compared with all other solutions in the population, the overall complexity of the sharing function approach is O(N²).

In the proposed NSGA-II, we replace the sharing function approach with a crowded-comparison approach that eliminates both the above difficulties to some extent. The new approach does not require any user-defined parameter for maintaining diversity among population members. Also, the suggested approach has a better computational complexity. To describe this approach, we first define a density-estimation metric and then present the crowded-comparison operator.

1) Density Estimation: To get an estimate of the density of solutions surrounding a particular solution in the population, we calculate the average distance of two points on either side of this point along each of the objectives. This quantity serves as an estimate of the size of the cuboid formed by using the nearest neighbors as vertices (we call it the crowding distance). In Fig. 1, the crowding distance of the ith solution in its front (marked with solid circles) is the average side length of the cuboid (shown with a dashed box).

Fig. 1. Crowding-distance calculation. Points marked in filled circles are solutions of the same nondominated front.

The crowding-distance computation requires sorting the population according to each objective function value in ascending order of magnitude. Thereafter, for each objective function, the boundary solutions (solutions with the smallest and largest function values) are assigned an infinite distance value. All other intermediate solutions are assigned a distance value equal to the absolute normalized difference in the function values of two adjacent solutions. This calculation is continued with other objective functions. The overall crowding-distance value is calculated as the sum of the individual distance values corresponding to each objective. Each objective function is normalized before calculating the crowding distance. The algorithm below outlines the crowding-distance computation for all solutions in a nondominated set I; here I[i].m refers to the mth objective function value of the ith individual in I, and f_m_max and f_m_min are the maximum and minimum values of the mth objective function:

    crowding-distance-assignment(I)
      l = |I|                                    (number of solutions in I)
      for each i, set I[i].distance = 0          (initialize distance)
      for each objective m
        I = sort(I, m)                           (sort using each objective value)
        I[1].distance = I[l].distance = infinity (boundary points are always selected)
        for i = 2 to (l - 1)                     (for all other points)
          I[i].distance = I[i].distance + (I[i+1].m - I[i-1].m)/(f_m_max - f_m_min)

The complexity of this procedure is governed by the sorting algorithm. Since M independent sortings of at most N solutions (when all population members are in one front I) are involved, the above algorithm has O(MN log N) computational complexity. After all population members in the set I are assigned a distance metric, we can compare two solutions for their extent of proximity with other solutions. A solution with a smaller value of this distance measure is, in some sense, more crowded by other solutions. This is exactly what we compare in the proposed crowded-comparison operator, described below. Although Fig. 1 illustrates the crowding-distance computation for two objectives, the procedure is applicable to more than two objectives as well.

2) Crowded-Comparison Operator: The crowded-comparison operator guides the selection process at the various stages of the algorithm toward a uniformly spread-out Pareto-optimal front. Assume that every individual i in the population has two attributes: 1) nondomination rank (i_rank); 2) crowding distance (i_distance). We define the partial order: i is preferred to j if (i_rank < j_rank), or if (i_rank = j_rank) and (i_distance > j_distance). That is, between two solutions with differing nondomination ranks, we prefer the solution with the lower (better) rank. Otherwise, if both solutions belong to the same front, then we prefer the solution that is located in a lesser crowded region.

With these three new innovations—a fast nondominated sorting procedure, a fast crowded-distance estimation procedure, and a simple crowded-comparison operator—we are now ready to describe the NSGA-II algorithm.

C. Main Loop

Initially, a random parent population P_0 is created. The population is sorted based on nondomination, and binary tournament selection, recombination, and mutation operators are used to create an offspring population Q_0 of size N. Since elitism is introduced by comparing the current population with the previously found best nondominated solutions, the procedure is different after the initial generation. The tth generation proceeds as follows:

    R_t = combine(P_t, Q_t)                 (combine parent and offspring population, size 2N)
    F = fast-non-dominated-sort(R_t)        (F = (F_1, F_2, ...), all nondominated fronts of R_t)
    P_{t+1} = {} and i = 1
    while |P_{t+1}| + |F_i| <= N            (until the parent population is filled)
      crowding-distance-assignment(F_i)     (calculate crowding distance in F_i)
      P_{t+1} = P_{t+1} + F_i and i = i + 1 (include the ith nondominated front)
    sort(F_i, crowded-comparison)           (sort the last front in descending order of preference)
    P_{t+1} = P_{t+1} + F_i[1 : (N - |P_{t+1}|)]  (choose the first N - |P_{t+1}| elements of F_i)
    Q_{t+1} = make-new-pop(P_{t+1})         (selection, crossover and mutation create Q_{t+1})
    t = t + 1

First, a combined population R_t = P_t ∪ Q_t of size 2N is formed and sorted according to nondomination. Solutions belonging to the best nondominated set F_1 are the best solutions in the combined population. If the size of F_1 is smaller than N, we definitely choose all members of the set F_1 for the new population P_{t+1}; the remaining members of P_{t+1} are chosen from subsequent nondominated fronts in the order of their ranking. To choose exactly N population members, we sort the solutions of the last accepted front using the crowded-comparison operator and choose the best solutions needed to fill all population slots. The new population P_{t+1} of size N is now used for selection, crossover, and mutation to create a new population Q_{t+1} of size N. One generation therefore involves: 1) nondominated sorting, which is O(M(2N)²); 2) crowding-distance assignment, which is O(M(2N)log(2N)); and 3) sorting on the crowded-comparison operator, which is O(2N log(2N)). The overall complexity of the algorithm is O(MN²), governed by the nondominated sorting part.
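The crowding-distance assignment and the crowded-comparison order described above translate into a short sketch as well (again my own illustrative Python with hypothetical naming, assuming objective tuples and minimization):

    import math

    def crowding_distance_assignment(front):
        """front: list of tuples of objective values. Returns one distance per solution."""
        l = len(front)
        distance = [0.0] * l
        for m in range(len(front[0])):
            order = sorted(range(l), key=lambda i: front[i][m])  # indices sorted by objective m
            distance[order[0]] = distance[order[-1]] = math.inf  # boundary points always kept
            f_min, f_max = front[order[0]][m], front[order[-1]][m]
            if f_max == f_min:
                continue                                         # degenerate objective, no spread
            for k in range(1, l - 1):
                i_prev, i_next = order[k - 1], order[k + 1]
                distance[order[k]] += (front[i_next][m] - front[i_prev][m]) / (f_max - f_min)
        return distance

    def crowded_less(rank_i, dist_i, rank_j, dist_j):
        """Crowded comparison: prefer lower rank; break ties by larger crowding distance."""
        return rank_i < rank_j or (rank_i == rank_j and dist_i > dist_j)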
IV. SIMULATION RESULTS

TABLE I. TEST PROBLEMS USED IN THIS STUDY. (All objective functions are to be minimized.)

A. Test Problems

We first describe the test problems used to compare different MOEAs. Test problems are chosen from a number of significant past studies in this area. Veldhuizen [22] cited a number of test problems that have been used in the past. Of them, we choose four problems: Schaffer's study (SCH) [19], Fonseca and Fleming's study (FON) [10], Poloni's study (POL) [16], and Kursawe's study (KUR) [15]. In 1999, the first author suggested a systematic way of developing test problems for multiobjective optimization [3]. Zitzler et al. [25] followed those guidelines and suggested six test problems. We choose five of those six problems here and call them ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6. All problems have two objective functions. None of these problems have any constraint. We describe these problems in Table I. The table also shows the number of variables, their bounds, the Pareto-optimal solutions, and the nature of the Pareto-optimal front for each problem.

All approaches are run for a maximum of 25 000 function evaluations. We use single-point crossover and bitwise mutation for binary-coded GAs, and the simulated binary crossover (SBX) operator and polynomial mutation [6] for real-coded GAs. A crossover probability of 0.9 and a mutation probability of 1/n are used, where n is the number of decision variables for real-coded GAs and the string length for binary-coded GAs. For PAES, we have used a depth parameter equal to four and an archive size of 100, and for SPEA we use the nondominated solutions of the combined GA and external populations at the final generation to calculate the performance metrics used in this study. For PAES, SPEA, and binary-coded NSGA-II, we have used 30 bits to code each decision variable.

B. Performance Measures

Unlike in single-objective optimization, there are two goals in a multiobjective optimization: 1) convergence to the Pareto-optimal set and 2) maintenance of diversity in solutions of the Pareto-optimal set. These two tasks cannot be measured adequately with one performance metric. Many performance metrics have been suggested [1], [8], [24]. Here, we define two performance metrics that are more direct in evaluating each of the above two goals in a solution set obtained by a multiobjective optimization algorithm.

The first metric Υ measures the extent of convergence to a known set of Pareto-optimal solutions. First, we find a set of 500 uniformly spaced solutions on the true Pareto-optimal front. For each solution obtained with an algorithm, we compute its minimum Euclidean distance from these chosen solutions. The average of these distances is used as the first metric Υ (the convergence metric). Fig. 3 illustrates the calculation: the chosen solutions on the Pareto-optimal front are used for the calculation of the convergence metric, and solutions marked with dark circles are solutions obtained by an algorithm. The smaller the value of this metric, the better the convergence toward the Pareto-optimal front. We report the average and variance of this metric calculated for solution sets obtained in multiple runs.

Fig. 3. The convergence metric Υ.

Even when all solutions converge to the Pareto-optimal front, the above convergence metric does not have a value of zero; the metric will yield zero only when each obtained solution lies exactly on each of the chosen solutions. Although this metric alone can provide some information about the spread in obtained solutions, we define a different metric to measure the spread in solutions obtained by an algorithm directly. The second metric Δ measures the extent of spread achieved among the obtained solutions. We calculate the Euclidean distance d_i between consecutive solutions in the obtained nondominated set and the average d_bar of these distances. Thereafter, from the obtained set of nondominated solutions, we first calculate the extreme solutions (in the objective space) by fitting a curve parallel to that of the true Pareto-optimal front. Then, we use the following metric to calculate the nonuniformity in the distribution:

    Δ = (d_f + d_l + Σ_{i=1..N-1} |d_i − d_bar|) / (d_f + d_l + (N − 1)·d_bar)

where d_f and d_l are the Euclidean distances between the extreme solutions and the boundary solutions of the obtained nondominated set, and d_bar is the average of all distances d_i, i = 1, ..., N − 1, assuming that there are N solutions on the best nondominated front (with N solutions, there are N − 1 consecutive distances). The denominator is the value the numerator would take when all N solutions lie on one solution. It is interesting to note that this is not the worst-case spread of solutions possible: a large variance in d_i can make the metric exceed one. For the most widely and uniformly spread-out set of nondominated solutions, d_f and d_l would be zero and all d_i would equal d_bar, making the metric take a value of zero. For any other distribution, the value of the metric is greater than zero. For two distributions having identical values of d_f and d_l, Δ takes a higher value with worse distributions of solutions within the extreme solutions. Note that the above diversity metric can be used on any nondominated set of solutions, including one that is not Pareto-optimal. Using a triangularization technique or a Voronoi diagram approach [1] to calculate d_i, the above procedure can also be applied to problems with more than two objectives.

Fig. 4. The diversity metric Δ.
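Under the definitions just given, the diversity metric Δ can be computed as in the following minimal Python sketch (my own illustration; it assumes the obtained front is already ordered along the front):

    import math

    def diversity_metric(front, extreme_first, extreme_last):
        """Delta = (d_f + d_l + sum|d_i - d_bar|) / (d_f + d_l + (N-1)*d_bar).
        front: obtained nondominated solutions sorted along the front (objective tuples).
        extreme_first/extreme_last: the true extreme Pareto-optimal solutions."""
        d = [math.dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
        d_bar = sum(d) / len(d)
        d_f = math.dist(extreme_first, front[0])    # gap to the first extreme solution
        d_l = math.dist(extreme_last, front[-1])    # gap to the last extreme solution
        num = d_f + d_l + sum(abs(di - d_bar) for di in d)
        den = d_f + d_l + (len(front) - 1) * d_bar
        return num / den

    # A perfectly uniform spread touching both extremes gives Delta = 0.
    front = [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]
    print(diversity_metric(front, (0.0, 1.0), (1.0, 0.0)))  # 0.0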
C. Discussion of the Results

TABLE II. MEAN (FIRST ROWS) AND VARIANCE (SECOND ROWS) OF THE CONVERGENCE METRIC Υ.
TABLE III. MEAN (FIRST ROWS) AND VARIANCE (SECOND ROWS) OF THE DIVERSITY METRIC Δ.

Table II shows the mean and variance of the convergence metric Υ obtained using the four algorithms NSGA-II (real-coded), NSGA-II (binary-coded), SPEA, and PAES. NSGA-II (real-coded or binary-coded) is able to converge better in all problems except ZDT3 and ZDT6, where PAES found better convergence. In all cases with NSGA-II, the variance in ten runs is also small, except in ZDT4 with NSGA-II (binary-coded). The fixed archive strategy of PAES allows better convergence to be achieved in two out of nine problems. Table III shows the mean and variance of the diversity metric Δ.

Fig. 7. Nondominated solutions with SPEA on KUR.
Fig. 8. Nondominated solutions with NSGA-II (binary-coded) on ZDT2.

In both aspects of convergence and distribution of solutions, NSGA-II performed better than SPEA in this problem. Since SPEA could not maintain enough nondominated solutions in the final GA population, the overall number of nondominated solutions is much smaller compared to that obtained in the final population of NSGA-II. Next, we show the nondominated solutions on the problem ZDT2 in Figs. 8 and 9. This problem has a nonconvex Pareto-optimal front. We show the performance of binary-coded NSGA-II and SPEA on this function. Although convergence is not a difficulty here for either algorithm, both real- and binary-coded NSGA-II found a better spread and more solutions in the entire Pareto-optimal region than SPEA (the next-best algorithm observed for this problem).

The problem ZDT4 has a very large number of local Pareto-optimal fronts. An earlier single-objective study of this function (not reproduced here) clearly showed that a population size of at least about 500 is needed for single-objective binary-coded GAs (with tournament selection, single-point crossover, and bitwise mutation) to find the global optimum solution in more than 50% of the simulation runs. Since we have used a population of size 100, it is not expected that a multiobjective GA would find the global Pareto-optimal solutions, but NSGA-II is able to find a good spread of solutions even at a local Pareto-optimal front. Since SPEA converges poorly on this problem (see Table II), we do not show SPEA results here. Finally, Fig. 11 shows that SPEA finds a better-converged set of nondominated solutions in ZDT6 compared to any other algorithm; however, the distribution in solutions is better with real-coded NSGA-II.

D. Different Parameter Settings

In this study, we do not make any serious attempt to find the best parameter setting for NSGA-II. But in this section, we
ZDT4.These results are much better than PAES and SPEA,as shown in Table II.To demonstrate the convergence and spread of so-lutions,we plot the nondominated solutions of one of the runs after 250generations in Fig.12.The figure shows that NSGA-II is able to find solutions on the true Pareto-optimal frontwith.V .R OTATED P ROBLEMSIt has been discussed in an earlier study [3]that interactions among decision variables can introduce another level of dif-ficulty to any multiobjective optimization algorithm including EAs.In this section,we create one such problem and investi-gate the working of previously three MOEAs on the following epistatic problem:minimizewhere,but the aboveobjective functions are defined in terms of the variablevector by a fixed rotationmatrixFig.13.Obtained nondominated solutions with NSGA-II,PAES,and SPEA on the rotated problem.within the prescribed variable bounds,we discourage solutionswithresulting].This example problem demonstrates that one of the known dif-ficulties(the linkage problem[11],[12])of single-objective op-timization algorithm can also cause difficulties in a multiobjec-tive problem.However,more systematic studies are needed toamply address the linkage issue in multiobjective optimization.VI.C ONSTRAINT H ANDLINGIn the past,the first author and his students implemented apenalty-parameterless constraint-handling approach for single-objective optimization.Those studies[2],[6]have shown howa tournament selection based algorithm can be used to handleconstraints in a population approach much better than a numberof other existing constraint-handling approaches.A similar ap-proach can be introduced with the above NSGA-II for solvingconstrained multiobjective optimization problems.A.Proposed Constraint-Handling Approach(ConstrainedNSGA-II)This constraint-handling method uses the binary tournamentselection,where two solutions are picked from the populationand the better solution is chosen.In the presence of constraints,each solution can be either feasible or infeasible.Thus,theremay be at most three situations:1)both solutions are feasible;2)one is feasible and other is not;and3)both are infeasible.For single objective optimization,we used a simple rule for eachcase.Case1)Choose the solution with better objective functionvalue.Case2)Choose the feasible solution.Case3)Choose the solution with smaller overall constraintviolation.Since in no case constraints and objective function values arecompared with each other,there is no need of having any penaltyparameter,a matter that makes the proposed constraint-handlingapproach useful and attractive.In the context of multiobjective optimization,the latter twocases can be used as they are and the first case can be resolved byusing the crowded-comparison operator as before.To maintainthe modularity in the procedures of NSGA-II,we simply modifythe definition of domination between two solutions.Definition1:A solution,if any of the following conditions is true.1)Solution is not.2)Solutions are both infeasible,but solutionand dominatessolutionTABLE VC ONSTRAINED T EST P ROBLEMS U SED IN T HIS S TUDYAll objective functions are to be minimized.Three different nondominated rankings of the population arefirst performed.The first ranking is performed using-di-mensional vector.The second ranking is performedusing only the constraint violation values of all(Fig.14.Obtained nondominated solutions with NSGA-II on the constrained problemCONSTR.Fig.15.Obtained nondominated solutions with Ray-Tai-Seow’s algorithm on the constrained problem 
CONSTR.

In each case, a good spread of solutions can be maintained for a large number of generations; in fact, we obtain a reasonably good spread of solutions as early as 200 generations. Crossover and mutation probabilities are the same as before. Fig. 14 shows the obtained set of 100 nondominated solutions after 500 generations using NSGA-II. The figure shows that NSGA-II is able to uniformly maintain solutions in both parts of the Pareto-optimal region. It is important to note that in order to maintain a spread of solutions on the constraint boundary, the solutions must be modified in a particular manner dictated by the constraint function. This becomes a difficult task for any search operator. Fig. 15 shows the obtained solutions using Ray–Tai–Seow's algorithm after 500 generations. It is clear that NSGA-II performs better than Ray–Tai–Seow's algorithm in terms of converging to the true Pareto-optimal front and also in terms of maintaining a diverse population of nondominated solutions.

Next, we consider the test problem SRN. Fig. 16 shows the nondominated solutions after 500 generations using NSGA-II.

Fig. 16. Obtained nondominated solutions with NSGA-II on the constrained problem SRN.
Fig. 17. Obtained nondominated solutions with Ray–Tai–Seow's algorithm on the constrained problem SRN.

The figure shows how NSGA-II can bring a random population onto the Pareto-optimal front. Ray–Tai–Seow's algorithm is also able to come close to the front on this test problem (Fig. 17). Figs. 18 and 19 show the feasible objective space and the obtained nondominated solutions with NSGA-II and Ray–Tai–Seow's algorithm. Here, the Pareto-optimal region is discontinuous and NSGA-II does not have any difficulty in finding a wide spread of solutions over the true Pareto-optimal region. Although Ray–Tai–Seow's algorithm found a number of solutions on the Pareto-optimal front, there exist many infeasible solutions even after 500 generations. In order to demonstrate the working of Fonseca–Fleming's constraint-handling strategy, we implement it with NSGA-II and apply it on TNK. Fig. 20 shows 100 population members at the end of 500 generations with an identical parameter setting as used in Fig. 18. Both these figures demonstrate that the proposed and Fonseca–Fleming's constraint-handling strategies work well with NSGA-II.
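Definition 1 translates directly into code. The following minimal Python sketch of constrained domination is my own illustration (names are hypothetical), not the authors' implementation:

    def dominates(p, q):
        """Unconstrained Pareto domination for minimization."""
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    def constrained_dominates(obj_i, viol_i, obj_j, viol_j):
        """Definition 1: solution i constrained-dominates solution j.
        viol_* is the overall constraint violation (0 means feasible)."""
        feasible_i, feasible_j = viol_i == 0, viol_j == 0
        if feasible_i and not feasible_j:
            return True                     # condition 1: i feasible, j not
        if not feasible_i and not feasible_j:
            return viol_i < viol_j          # condition 2: smaller violation wins
        if feasible_i and feasible_j:
            return dominates(obj_i, obj_j)  # condition 3: usual domination
        return False                        # i infeasible, j feasible: no domination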

Optimization Methods: A Problem on Newton's Method with a Rank-One Matrix

Answer: The Newton-Raphson method is an iterative optimization algorithm utilized for locating the local minimum or maximum of a given function. Within the realm of optimization, the Newton-Raphson method iteratively updates the current solution by leveraging the second-derivative information of the objective function. This approach enables the method to converge towards the optimal solution at an accelerated pace compared to first-order optimization algorithms, such as the gradient descent method. Nonetheless, the Newton-Raphson method necessitates the solution of a system of linear equations involving the Hessian matrix, which denotes the second derivative of the objective function. Of particular note, when the Hessian matrix possesses a rank of one, it introduces a special case for the Newton-Raphson method: a rank-one Hessian of a multivariable function is singular, so the Newton system cannot be solved with a plain matrix inverse.
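To make the rank-one special case concrete, here is a small Python sketch (my own illustration, not part of the original exercise) of a Newton-Raphson step that falls back to the Moore-Penrose pseudoinverse when the Hessian is rank-deficient, e.g. of rank one:

    import numpy as np

    def newton_step(grad, hess, x):
        """One Newton-Raphson update x <- x - H^+ g.
        Uses the pseudoinverse so the step is defined even when H has rank one."""
        g = grad(x)
        H = hess(x)
        if np.linalg.matrix_rank(H) < len(x):   # singular Hessian, e.g. rank one
            step = np.linalg.pinv(H) @ g        # minimum-norm least-squares solution
        else:
            step = np.linalg.solve(H, g)
        return x - step

    # Example with a rank-one Hessian: f(x) = (a^T x)^2 / 2, so H = a a^T everywhere.
    a = np.array([1.0, 2.0])
    grad = lambda x: (a @ x) * a
    hess = lambda x: np.outer(a, a)
    x = np.array([3.0, -1.0])
    print(newton_step(grad, hess, x))           # [2.8 -1.4], which satisfies a^T x = 0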

stable diffusion prompt: A Story

On a distant planet, there is a city named Stable Diffusion.

Stable Diffusion is a modern and vibrant place, and its residents live together in a harmonious, tolerant society.

The city is famous for its efficient transport system. Whether by bus, light rail, or bike share, the residents of Stable Diffusion can reach their destinations quickly and conveniently. The roads are wide and clean, and well-designed traffic rules make driving quite safe. This lets residents make better use of their time and enjoy more leisure and entertainment.

Stable Diffusion also prides itself on being green. Trees and flowers are planted throughout the city, and parks and squares can be found everywhere. Residents love to stroll, exercise, and meet friends in these green spaces. The city also encourages sustainable lifestyles, such as recycling and energy conservation. These efforts have made Stable Diffusion a livable city that offers its residents a healthy and comfortable environment.

Stable Diffusion is equally known for its advanced technology. The city is full of high-tech companies and innovation centers, attracting many young people and entrepreneurs in search of opportunity. Technology also makes daily life easier: automation systems and smart home devices let residents live more comfortably and conveniently.

Stable Diffusion is a diverse city where cultures blend. Its people come from all walks of life and all backgrounds, treating one another with respect and tolerance. The city offers all kinds of restaurants, art exhibitions, and concerts, letting residents experience different cultures. This diversity makes Stable Diffusion a lively and creative place.

In short, Stable Diffusion is an advanced, green, diverse, and vibrant city. It is known for its efficient transport system, environmental friendliness, advanced technology, and cultural diversity. Its residents enjoy comfortable, livable lives and work together for the city's prosperity.

Unit 2: Principles of Correspondence

gloss translation: the translator attempts to reproduce as literally and meaningfully as possible the form and content of the original.
A translation of dynamic equivalence aims at complete naturalness of expression, and tries to relate the receptor to modes of behavior relevant within the context of his own culture.
Three basic factors in translating
(1) the nature of the message, (2) the purpose or purposes of the author
and, by proxy, of the translator, and (3) the type of audience.
• In 1937, Nida undertook studies at the Uni. of Southern California.
• In 1943, Nida received his Ph.D in linguistics from the Uni. of Michigan.
• In the early 1980s, Nida retired but kept on giving lectures in universities.
6. Principles governing translations oriented toward dynamic equivalence

Progress in structural materials for aerospace systems

J.C. Williams and E.A. Starke, Jr., Acta Materialia 51 (2003) 5775–5799. doi:10.1016/j.actamat.2003.08.023. © 2003 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
choices available to the designer are more numerous. In this paper we attempt to describe, in qualitative terms, the evolution of structural materials for application in aerospace systems and automotive products and the role of materials in creating customer value. Environmental impact also has several associated dimensions, including emissions of the product, effluent created during product manufacture, and disposal/recycling capability. A recent trend in the transportation industry is to improve customer value through the creation of products that incorporate advanced structural materials and benefit from new manufacturing technologies. The objective is to create value by improving performance, reducing ownership costs, extending the system life and reducing environmental impact. Improved performance, as determined by materials, typically translates into higher structural efficiency, resulting in reduced product weight. Structural efficiency is the combined result of materials capability and design methodology. For example, a stiffness-limited component may incorporate a higher-modulus material using the same design, a new design that increases the section modulus, or both. Honeycomb construction used in aircraft is an example of an increased-section-modulus design. The decision to introduce such a design affects material selection because not all materials can be fabricated into honeycomb. Moreover, in recent years honeycomb construction has fallen from favor because of the propensity for corrosion in the core when moisture is allowed to enter. This simple example illustrates the interaction between materials selection, manufacturability, design methods and ultimate product acceptance. Performance also must be normalized by damage tolerance because of the need for reliability. Improved reliability adds value through increased availability of the product to perform its intended function. Due to the increased number of design constraints driven by the growing number of product requirements and the greater range of structural materials now available, the designer is faced with complex choices in selecting a material to meet the requirements of a particular system. The outcome of competition between various classes of materials may also be associated with
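As a concrete reading of "stiffness-limited" (an illustration added here, not taken from the paper): for a component idealized as a beam of length L under a transverse load P, the deflection and bending stiffness scale as

    delta ∝ P L^3 / (E I),    k = P / delta ∝ E I / L^3,

where E is the elastic modulus and I the second moment of area of the cross-section. A designer can therefore raise stiffness either by choosing a higher-modulus material (larger E) or by using a deeper or sandwich-type section (larger I); honeycomb construction takes the second route by moving the face sheets far from the neutral axis.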

Unit 1 Text A: Neuron Overload and the Juggling Physician


Patients often complain that their doctors don't listen to them. While there may be a few doctors who truly turn a deaf ear, most are reasonable, empathetic people. I puzzle over why even these doctors seem to fall victim to the criticism, and I often wonder whether the root of the problem is the neuron overload that doctors experience. Sometimes I feel like a juggler, my brain tracking a thousand threads, large and small, not daring to drop a single one. If a patient springs a request on me, even a perfectly pertinent one, it can throw my fragile inner balance into disarray, like a three-ring circus running in perfect order that suddenly collapses.

One day I counted how many thoughts were churning in my head during a single routine visit, trying to work out just how many details a doctor's nimbly turning mind must handle to complete one job well.

Mrs. Osorio, 56, is my patient. She is a little overweight. Her diabetes and hypertension have been well controlled, just where they should be. Her cholesterol is on the high side, but she is not taking any medication for it. She does not get enough exercise, and her last DEXA bone-density test showed that her bones have become somewhat thin. Although she never misses an appointment, comes in on schedule, and gets her blood tests done on time, she describes her life as stressful. On the whole she is in good health; in a medical practice she would probably be described as an ordinary patient, not an overly complicated one.

Here are the thoughts that flashed through my mind over the course of the twenty-minute visit.

She had her blood tests done; that's good. Her blood sugar is a little better. Her cholesterol is not so good; I may need to consider starting a statin. Are her liver enzymes normal?

Her weight has gone up a bit. I need to talk with her about eating five servings of fruits and vegetables a day and walking thirty minutes a day.

Diabetes: how do her morning blood-sugar readings compare with her evening ones? Has she talked with the nutritionist recently? Has she seen the ophthalmologist? What about the podiatrist?

Her blood pressure is all right, but not great. Should I add another blood-pressure medication? Would more pills confuse her? How do the benefits of tighter blood-pressure control weigh against the risk that she might stop taking all her medications?

The DEXA scan shows her bones are a little thin. Should I put her on a bisphosphonate, since that can prevent osteoporosis? But then I would be adding yet another pill, and one that needs detailed instructions.

Unit 11: Questions on p. 238 and Translation on p. 248


P238. I. Topics for group discussion.

1. How is a professional paper defined? What's your understanding of a professional paper?
A professional paper is a typewritten paper in which professionals present their views and research findings on a chosen topic. It must conform to a specific format. It differs from other, non-professional essays in that it draws on library sources for facts, quotations, and the opinions of others to explain, support, or authenticate the ideas in the paper. It usually concludes with a bibliography, an alphabetical list of all sources cited.
I think a professional paper is a thorough piece of research in a specific field. It deals with the study of objective facts or problems, and the conclusion drawn should be based on relevant data, not on personal likes or dislikes. It does not matter who is conducting the experiment or investigation.

2. How are papers classified? What are the similarities and differences between/among them?
Four kinds of papers are usually assigned in universities and colleges.
Similarities: each is a professional paper, and the author's task is the same: read on a particular topic, gather information about it, and report the findings.
Differences:
1) Report paper: merely catalogs findings in a sensible sequence.
2) Research paper: draws new conclusions on the basis of obtained data or observed facts, or presents the material in the light of a new interest.
3) Course paper: written for a specific course or at the end of a term.
4) Thesis paper: takes a definite stand on an issue.

3. Linguistic features of professional papers.
(1) Formal style
A professional paper deals with the study of objective facts or problems, and the conclusion drawn should be based on relevant data, not on personal likes and dislikes. This is particularly important in any kind of scientific inquiry; it does not matter who is conducting the experiment or investigation. Being impersonal and free from emotional factors is one of the important features of professional writing. The need to be formal comes from the fact that science reflects objective facts and is free from bias and prejudice. The need for objectivity becomes a matter of special concern whenever a piece of research or an investigation touches upon human actions or attitudes. The focus of professional writing is upon the data and upon the analysis of the data. For example, instead of writing:
"I carried out an experiment to investigate the effect of light on plant growth."
it would be more conventional to say:
"An experiment was carried out to investigate the effect of light on plant growth."
Generally speaking, formal writing sets an unusually high value on objectivity, meticulousness, accuracy, and restraint. It is directed to the reader's mind and makes little effort to appeal to his emotions. Its purposes are utilitarian, and it is usually intended for readers who already have, to some degree, a special interest in the subject matter or are even experienced colleagues in the same trade. Consequently, though it places a high value on interest, it does not try to be so colorful and entertaining that it runs the risk of becoming flashy.
(2) Specialized terms
The terms in professional papers are typically specialized. Take the word "normal" as an example.
Generally, it means "正常的" (normal); but in mathematics it represents "法线" (normal line), and in the field of chemistry "当量" (equivalent). Again, consider the word "power": in electronics it is rendered as "电力" (electric power) or "电源" (power supply); in mechanics, "动力" (motive power); whereas in mathematics, "幂" (exponent). Even in the same field, the meanings of the same word may vary slightly with its collocations, for example:

filter 滤波器, 滤色器
tramp filter 干扰滤除器
amplitude filter 振幅滤波器
filter paper 滤纸
primary filter 基色滤色器

What is more, a great number of professional words and terms can only be understood by the specialists in their fields, e.g., decoder (译码器), photophor (磷光核), multi-quantum transition (多量子跃迁), Read Only Memory (只读存储器), and conversational implicatures (会话含义). Examples like these are too numerous to mention one by one.

(3) Rigid sentence structure
As we know, the function of professional papers is to reveal creative research achievements and to exchange the latest research information. The arguments in professional papers are convincing when they are presented concisely and concretely; a rigid sentence structure reflects this requirement.

(4) Formatted elements
Though there are no set rules, a complete professional paper in its finished form usually has a regular format composed of the following elements: the title, author(s), affiliation(s), abstract, keywords, introduction, body of the paper (theoretical description including calculation, inference, reasoning, conclusion, etc., or experimental description including techniques, methods, materials, results and analysis, etc.), acknowledgments, appendices, and references or bibliography.

4. Where can you search for different kinds of papers from various sources?
1) Journals: usually edited and published by learned societies or associations monthly, bi-monthly, or quarterly.
2) Acta: mainly published by institutions of higher learning.
3) Bulletins, circulars, or gazettes: mainly edited and designated for the publication of briefs of research findings, preliminary results of research programs, science news, or notices of scientific seminars and conferences.
4) Rapid communications: publications belonging to public correspondence and letter-form publications.
5) Reviews: commentary or summary articles, usually carried in specialized journals called reviews.
6) Proceedings: collections of the papers or commentaries presented at the corresponding academic conferences.
7) Dissertation Abstracts International (DAI): published monthly by University Microfilms International; it includes abstracts of doctoral dissertations submitted to UMI by 550 participating institutions in North America and throughout the world.
8) Comprehensive Dissertation Index (CDI): provides citations from 1861 onward, with international coverage of engineering and technological literature.
9) Online access to dissertations: describes searching the Comprehensive Dissertation Index database on DIALOG.

P248. II. Translation
1. Translate the following sentences, paying attention to the attributive clauses in English and the "···的···" pattern in Chinese.
(1) Late last century all the universities in the United States adopted the credit system, which benefited students a great deal.
(1) 上个世纪末,美国所有大学已实行学分制,学生们从中受益匪浅。

An Overview of Recent Progress in the Study of Distributed Multi-agent Coordination


Yongcan Cao, Member, IEEE, Wenwu Yu, Member, IEEE, Wei Ren, Member, IEEE, and Guanrong Chen, Fellow, IEEE

Abstract: This article reviews some main results and progress in distributed multi-agent coordination, focusing on papers published in major control systems and robotics journals since 2006. Distributed coordination of multiple vehicles, including unmanned aerial vehicles, unmanned ground vehicles and unmanned underwater vehicles, has been a very active research subject studied extensively by the systems and control community. The recent results in this area are categorized into several directions, such as consensus, formation control, optimization, and estimation. After the review, a short discussion section is included to summarize the existing research and to propose several promising research directions along with some open problems that are deemed important for further investigation.

Index Terms: Distributed coordination, formation control, sensor networks, multi-agent systems

I. INTRODUCTION

Control theory and practice may date back to the beginning of the last century, when the Wright brothers attempted their first test flight in 1903. Since then, control theory has gradually gained popularity, receiving more and wider attention, especially during World War II, when it was developed and applied to fire-control systems, missile navigation and guidance, as well as various electronic automation devices. In the past several decades, modern control theory was further advanced due to the booming of aerospace technology based on large-scale engineering systems.

During the rapid and sustained development of modern control theory, technology for controlling a single vehicle, albeit higher-dimensional and complex, has become relatively mature and has produced many effective tools such as PID control, adaptive control, nonlinear control, intelligent control,
and robust control methodologies. In the past two decades in particular, control of multiple vehicles has received increasing demand, spurred by the fact that many benefits can be obtained when a single complicated vehicle is replaced by multiple yet simpler vehicles. In this endeavor, two approaches are commonly adopted for controlling multiple vehicles: a centralized approach and a distributed approach. The centralized approach is based on the assumption that a central station is available and powerful enough to control a whole group of vehicles. Essentially, the centralized approach is a direct extension of the traditional single-vehicle-based control philosophy and strategy. On the contrary, the distributed approach does not require a central station for control, at the cost of becoming far more complex in structure and organization. Although both approaches are considered practical depending on the situations and conditions of the real applications, the distributed approach is believed to be more promising due to many inevitable physical constraints, such as limited resources and energy, short wireless communication ranges, narrow bandwidths, and large sizes of vehicles to manage and control. Therefore, the focus of this overview is placed on the distributed approach.

In distributed control of a group of autonomous vehicles, the main objective typically is to have the whole group of vehicles working in a cooperative fashion throughout a distributed protocol. Here, cooperative refers to a close relationship among all vehicles in the group, where information sharing plays a central role. The distributed approach has many advantages in achieving cooperative group performance, especially with low operational costs, fewer system requirements, high robustness, strong adaptivity, and flexible scalability, and has therefore been widely recognized and appreciated.

The study of distributed control of multiple vehicles was perhaps first motivated by the work in distributed computing [1], management science [2], and statistical physics [3].
In the control systems society, some pioneering works are generally referred to [4], [5], where an asynchronous agreement problem was studied for distributed decision-making problems. Thereafter, consensus algorithms were studied under various information-flow constraints [6]-[10]. Several journal special issues on related topics have been published since 2006, including the IEEE Transactions on Control Systems Technology (vol. 15, no. 4, 2007), the Proceedings of the IEEE (vol. 94, no. 4, 2007), the ASME Journal of Dynamic Systems, Measurement, and Control (vol. 129, no. 5, 2007), the SIAM Journal of Control and Optimization (vol. 48, no. 1, 2009), and the International Journal of Robust and Nonlinear Control (vol. 21, no. 12, 2011). In addition, some recent reviews and progress reports are given in the surveys [11]-[15] and the books [16]-[23], among others.

This article reviews some main results and recent progress in distributed multi-agent coordination published in major control systems and robotics journals since 2006. Due to space limitations, we refer the readers to [24] for a more complete version of the same overview. For results before 2006, the readers are referred to [11]-[14].

Specifically, this article reviews recent research results in the following directions, which are not independent and may in fact overlap to some extent:

1. Consensus and the like (synchronization, rendezvous). Consensus refers to the group behavior in which all the agents asymptotically reach a certain common agreement through a local distributed protocol, with or without a predefined common speed and orientation.
2. Distributed formation and the like (flocking). Distributed formation refers to the group behavior in which all the agents form a pre-designed geometrical configuration through local interactions, with or without a common reference.
3. Distributed optimization. This refers to algorithmic developments for the analysis and optimization of large-scale distributed systems.
4. Distributed estimation and control. This refers to distributed control design based on local estimation of the needed global information.

The rest of this article is organized as follows. In Section II, basic notions of graph theory and stochastic matrices are introduced. Sections III, IV, V, and VI describe the recent research results and progress in consensus, formation control, optimization, and estimation, respectively. Finally, the article is concluded by a short section of discussions with future perspectives.

II. PRELIMINARIES

A. Graph Theory

For a system of n connected agents, the network topology can be modeled as a directed graph G = (V, W), where V = {v_1, v_2, ..., v_n} and W ⊆ V × V are, respectively, the set of agents and the set of edges that directionally connect the agents. Specifically, the directed edge denoted by the ordered pair (v_i, v_j) means that agent j can access the state information of agent i; accordingly, agent i is a neighbor of agent j. A directed path is a sequence of directed edges of the form (v_1, v_2), (v_2, v_3), ..., with all v_i ∈ V. A directed graph has a directed spanning tree if there exists at least one agent that has a directed path to every other agent. The union of a set of directed graphs with the same set of agents, {G_i1, ..., G_im}, is the directed graph with the same set of agents whose edge set is the union of the edge sets of all the directed graphs G_ij, j = 1, ..., m. A complete directed graph is a directed graph in which each pair of distinct agents is bidirectionally connected by an edge; hence there is a directed path from any agent to any other agent in the network.
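Since the directed-spanning-tree condition recurs throughout the survey, here is a small illustrative check of it (a sketch of ours, not code from the paper), computed as a transitive closure of the one-step information-flow relation.

    import numpy as np

    def has_directed_spanning_tree(A):
        # A[i, j] > 0 means agent i receives information from agent j,
        # i.e. information flows j -> i. The graph has a directed spanning
        # tree iff some agent's information reaches every other agent.
        n = len(A)
        R = ((A > 0) | np.eye(n, dtype=bool)).astype(int)
        for _ in range(int(np.ceil(np.log2(max(n, 2)))) + 1):
            R = ((R @ R) > 0).astype(int)   # transitive closure by squaring
        # column j marks every agent reachable from agent j
        return bool((R.sum(axis=0) == n).any())

    # Information flowing around the directed cycle 0 -> 1 -> 2 -> 0:
    A = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [0, 1, 0]])
    print(has_directed_spanning_tree(A))    # True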
Two matrices are used to represent the network topology: the adjacency matrix A = [a_ij] ∈ R^{n×n}, with a_ij > 0 if (v_j, v_i) ∈ W and a_ij = 0 otherwise, and the Laplacian matrix L = [l_ij] ∈ R^{n×n}, with l_ii = \sum_{j=1}^{n} a_ij and l_ij = -a_ij for i ≠ j, which is generally asymmetric for directed graphs.

B. Stochastic Matrices

A nonnegative square matrix is called a (row) stochastic matrix if each of its rows sums to one. The product of two stochastic matrices is still a stochastic matrix. A row stochastic matrix P ∈ R^{n×n} is called indecomposable and aperiodic if lim_{k→∞} P^k = 1 y^T for some y ∈ R^n [25], where 1 is the column vector with all entries equal to 1.

III. CONSENSUS

Consider a group of n agents, each with single-integrator kinematics described by

    \dot{x}_i(t) = u_i(t),    i = 1, ..., n,    (1)

where x_i(t) and u_i(t) are, respectively, the state and the control input of the i-th agent. A typical consensus control algorithm is designed as

    u_i(t) = \sum_{j=1}^{n} a_{ij}(t) [x_j(t) - x_i(t)],    (2)

where a_ij(t) is the (i, j)-th entry of the corresponding adjacency matrix at time t. The main idea behind (2) is that each agent moves toward the weighted average of the states of its neighbors. Given the switching network pattern due to the continuous motion of the dynamic agents, the coupling coefficients a_ij(t) in (2), and hence the graph topologies, are generally time-varying. It is shown in [9], [10] that consensus is achieved if the underlying directed graph has a directed spanning tree in some joint fashion, in terms of a union of its time-varying graph topologies.

The idea behind consensus serves as a fundamental principle for the design of distributed multi-agent coordination algorithms. Therefore, investigating consensus has been a main research direction in the study of distributed multi-agent coordination. To bridge the gap between the study of consensus algorithms and the many physical properties inherent in practical systems, it is necessary and meaningful to study consensus while taking into account many practical factors, such as actuation, control, communication, computation, and vehicle dynamics, which characterize some important features of practical systems. This is the main motivation for studying consensus. In the following part of the section, an overview of the research progress in the study of consensus is given, regarding stochastic network topologies and dynamics, complex dynamical systems, delay effects, and quantization, mainly after 2006. Several milestone results prior to 2006 can be found in [2], [4]-[6], [8]-[10], [26].
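As a toy illustration of (1)-(2) (a sketch of ours; the topology, step size and initial states are arbitrary), the loop below integrates u = -Lx with forward Euler on a fixed directed graph that has a directed spanning tree, so the states converge to a common value.

    import numpy as np

    # Row i of A weights the agents that agent i can hear; this digraph
    # has a directed spanning tree (the agent with index 1 reaches all).
    A = np.array([[0., 1., 0., 0.],
                  [0., 0., 1., 0.],
                  [0., 0., 0., 1.],
                  [1., 1., 0., 0.]])
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian

    x = np.array([1.0, -2.0, 0.5, 3.0])  # initial agent states
    dt = 0.01
    for _ in range(5000):
        x = x + dt * (-L @ x)            # protocol (2), written as u = -L x

    print(x)                             # entries nearly equal: consensus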
A. Stochastic Network Topologies and Dynamics

In multi-agent systems, the network topology among all vehicles plays a crucial role in determining consensus. The objective here is to explicitly identify necessary and/or sufficient conditions on the network topology such that consensus can be achieved under properly designed algorithms.

It is often reasonable to consider the case when the network topology is deterministic under ideal communication channels. Accordingly, most research on the consensus problem has been conducted under a deterministic fixed/switching network topology; that is, the adjacency matrix A(t) is deterministic. At other times, when considering the random communication failures, random packet drops, and communication channel instabilities inherent in physical communication channels, it is necessary and important to study the consensus problem in a stochastic setting, where the network topology evolves according to some random distribution; that is, the adjacency matrix A(t) evolves stochastically.

In the deterministic setting, consensus is said to be achieved if all agents eventually reach agreement on a common state. In the stochastic setting, consensus is said to be achieved almost surely (respectively, in mean square or in probability) if all agents reach agreement on a common state almost surely (respectively, in mean square or with probability one). Note that the problem studied in the stochastic setting is slightly different from that studied in the deterministic setting, due to the different assumptions on the network topology.

Consensus over a stochastic network topology was perhaps first studied in [27], where sufficient conditions on the network topology were given to guarantee consensus with probability one for systems with single-integrator kinematics (1); the rate of convergence was also studied. Further results for consensus under a stochastic network topology were reported in [28]-[30] for systems with single-integrator kinematics [28], [29] or double-integrator dynamics [30]. Consensus for single-integrator kinematics under a stochastic network topology has been studied extensively in particular, and some general conditions for almost-sure consensus were derived [29]. Loosely speaking, almost-sure consensus for single-integrator kinematics can be achieved, i.e., x_i(t) - x_j(t) → 0 almost surely, if and only if the expectation of the network topology, namely the network topology associated with the expectation E[A(t)], has a directed spanning tree. It is worth noting that these conditions are analogous to those in [9], [10], but in the stochastic setting. In view of the special structure of the closed-loop systems arising in consensus for single-integrator kinematics, basic properties of stochastic matrices play a crucial role in the convergence analysis of the associated control algorithms. Consensus for double-integrator dynamics was studied in [30], where the switching network topology is assumed to be driven by a Bernoulli process, and it was shown that consensus can be achieved if the union of all the graphs has a directed spanning tree. Apparently, the requirement on the network topology for double-integrator dynamics is a special case of that for single-integrator kinematics, due to the different nature of the final states (constant final states for single-integrator kinematics versus possibly dynamic final states for double-integrator dynamics) caused by the substantial dynamical difference. It is still an open question whether general conditions (corresponding to specific algorithms) can be found for consensus with double-integrator dynamics.

In addition to analyzing the conditions on the network topology under which consensus can be achieved, a special type of consensus algorithm, the so-called gossip algorithm [31], [32], has been used to achieve consensus in the stochastic setting. The gossip algorithm can always guarantee consensus almost surely if the available pairwise communication channels satisfy certain conditions (such as forming a connected graph); the way the network topology switches plays no role in the consideration of consensus.
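The following toy sketch of pairwise gossip averaging is ours (not taken from [31], [32]); the uniform pair selection over a complete graph is an arbitrary simplification. Each update replaces two values by their mean, so the network average is preserved and the values agree on it almost surely.

    import numpy as np
    rng = np.random.default_rng(0)

    x = rng.normal(size=8)               # initial values of 8 agents
    target = x.mean()                    # gossip preserves this average
    pairs = [(i, j) for i in range(8) for j in range(i + 1, 8)]

    for _ in range(2000):
        i, j = pairs[rng.integers(len(pairs))]   # one random pair per tick
        x[i] = x[j] = 0.5 * (x[i] + x[j])        # local averaging step

    print(abs(x - target).max())         # near zero: agreement on the mean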
The current study of consensus over stochastic network topologies has produced interesting results regarding: (1) consensus algorithm design for various multi-agent systems, (2) conditions on the network topologies for consensus, and (3) effects of the stochastic network topologies on the convergence rate. Future research on this topic includes, but is not limited to, the following two directions: (1) when the network topology itself is stochastic, how can one determine the probability of reaching consensus almost surely? (2) compared with a deterministic network topology, what are the advantages and disadvantages of a stochastic network topology with regard to, for example, robustness and convergence rate?

As is well known, disturbances and uncertainties often exist in networked systems, for example channel noise, communication noise, and uncertainties in network parameters. In addition to the stochastic network topologies discussed above, the effects of stochastic disturbances [33], [34] and uncertainties [35] on the consensus problem also need investigation. Study has mainly been devoted to analyzing the performance of consensus algorithms subject to disturbances and to presenting conditions on the uncertainties such that consensus can be achieved. Another interesting direction in dealing with disturbances and uncertainties is to design distributed local filtering algorithms so as to save energy and improve computational efficiency. Distributed local filtering algorithms play an important role and are more effective than traditional centralized filtering algorithms for multi-agent systems. For example, in [36]-[38] distributed Kalman filters are designed to implement data fusion. In [39], by analyzing consensus and pinning control in the synchronization of complex networks, distributed consensus filtering in sensor networks is addressed. Recently, Kalman filtering over a packet-dropping network was designed through a probabilistic approach [40]. Today, it remains a challenging problem to incorporate both the dynamics of consensus and probabilistic (Kalman) filtering into a unified framework.

B. Complex Dynamical Systems

Since consensus is concerned with the behavior of a group of vehicles, it is natural to consider the system dynamics of practical vehicles in the study of the consensus problem.
Although the study of consensus under various system dynamics is motivated by the existence of complex dynamics in practical systems, it is also interesting to observe that the system dynamics play an important role in determining the final consensus state. For instance, the well-studied consensus of multi-agent systems with single-integrator kinematics often converges to a constant final value. However, consensus for double-integrator dynamics might admit a dynamic final value (i.e., a time function). These important issues motivate the study of consensus under various system dynamics.

As a direct extension of the study of the consensus problem for systems with simple dynamics, for example single-integrator kinematics or double-integrator dynamics, consensus with general linear dynamics was also studied recently [41]-[43], where research is mainly devoted to finding feedback control laws such that consensus (in terms of the output states) can be achieved for general linear systems

    \dot{x}_i = A x_i + B u_i,    y_i = C x_i,    (3)

where A, B, and C are constant matrices of compatible sizes. Apparently, the well-studied single-integrator kinematics and double-integrator dynamics are special cases of (3) for properly chosen A, B, and C.

As a further extension, consensus for complex systems has also been extensively studied. Here, the term "consensus for complex systems" is used for the study of the consensus problem when the system dynamics are nonlinear [44]-[48] or when the consensus algorithms are nonlinear [49], [50]. Examples of nonlinear system dynamics include:

• Nonlinear oscillators [45]. The dynamics are often assumed to be governed by the Kuramoto equation

    \dot{\theta}_i = \omega_i + \frac{K}{N} \sum_{j=1}^{N} \sin(\theta_j - \theta_i),

where \theta_i and \omega_i are, respectively, the phase and natural frequency of the i-th oscillator, N is the number of oscillators, and K is the coupling strength.
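Since the survey only names the Kuramoto model here, the toy simulation below is our own illustration of it (all constants arbitrary). With the coupling K well above the synchronization threshold, the phases lock and the order parameter r approaches 1.

    import numpy as np
    rng = np.random.default_rng(1)

    N, K, dt = 20, 2.0, 0.01
    omega = rng.normal(0.0, 0.5, N)         # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)  # initial phases

    for _ in range(5000):
        # d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + (K / N) * coupling)

    r = np.abs(np.exp(1j * theta).mean())   # Kuramoto order parameter
    print(r)                                # close to 1: phases have locked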
C. Delay Effects

A well-studied consensus algorithm for (1) is given in (2), where it is now assumed that time delay exists. Two types of time delays, communication delay and input delay, have been considered in the literature. Communication delay accounts for the time needed to transmit information from origin to destination. More precisely, if it takes time T_ij for agent i to receive information from agent j, the closed-loop system of (1) using (2) under a fixed network topology becomes

    \dot{x}_i(t) = \sum_{j=1}^{n} a_{ij}(t) [x_j(t - T_{ij}) - x_i(t)].    (7)

An interpretation of (7) is that, at time t, agent i receives information from agent j and uses the data x_j(t - T_{ij}) instead of x_j(t) due to the time delay. Note that agent i can get its own information instantly; input delay, in turn, can be considered as the sum of computation time and execution time. More precisely, if the input delay for agent i is given by T_i^p, then the closed-loop system of (1) using (2) becomes

    \dot{x}_i(t) = \sum_{j=1}^{n} a_{ij}(t) [x_j(t - T_i^p) - x_i(t - T_i^p)].    (8)

Clearly, (7) refers to the case when only communication delay is considered, while (8) refers to the case when only input delay is considered. It should be emphasized that both communication delay and input delay might be time-varying, and they might co-exist at the same time.

In addition to time delay, it is also important to consider packet drops in exchanging state information. Fortunately, consensus with packet drops can be considered as a special case of consensus with time delay, because re-sending packets after they were dropped amounts to nothing more than a time delay in the data transmission channels.

Thus, the main problem involved in consensus with time delay is to study the effects of time delay on the convergence and performance of consensus, referred to as consensusability [52]. Because time delay might affect the system stability, it is important to study under what conditions consensus can still be guaranteed even if time delay exists. In other words, can one find conditions on the time delay such that consensus can be achieved? For this purpose, the effect of time delay on the consensusability of (1) using (2) was investigated. When there exists only (constant) input delay, a sufficient condition on the time delay to guarantee consensus under a fixed undirected interaction graph is presented in [8]. Specifically, an upper bound for the time delay is derived under which consensus can be achieved. This is a well-expected result, because time delay normally degrades the system performance gradually but does not destroy system stability unless it exceeds a certain threshold. Further studies can be found in, e.g., [53], [54], which demonstrate that for (1) using (2), the communication delay does not affect the consensusability but the input delay does. In a similar manner, consensus with time delay was studied for systems with different dynamics, where the dynamics (1) are replaced by more complex ones, such as double-integrator dynamics [55], [56], complex networks [57], [58], rigid bodies [59], [60], and general nonlinear dynamics [61].

In summary, the existing study of consensus with time delay mainly focuses on analyzing the stability of consensus algorithms with time delay for various types of system dynamics, including linear and nonlinear dynamics. Generally speaking, consensus with time delay for systems with nonlinear dynamics is more challenging. For most consensus algorithms with time delay, the main research question is to determine an upper bound on the time delay below which the delay does not affect the consensusability. For communication delay, it is possible to achieve consensus under a relatively large delay threshold; a notable phenomenon in this case is that the final consensus state is constant. Considering both linear and nonlinear system dynamics in consensus, the main tools for stability analysis of the closed-loop systems include matrix theory [53], Lyapunov functions [57], the frequency-domain approach [54], passivity [58], and the contraction principle [62].
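As an illustration of the input-delay loop (8) (a sketch of ours, not code from the survey), the toy below simulates three agents on an undirected triangle with a uniform input delay, using a history buffer for the delayed state. For this undirected linear case the classical delay margin is pi/(2*lambda_max(L)), about 0.52 s here, so the 0.25 s delay still allows consensus.

    import numpy as np

    A = np.array([[0., 1., 1.],
                  [1., 0., 1.],
                  [1., 1., 0.]])            # undirected triangle
    L = np.diag(A.sum(axis=1)) - A          # eigenvalues 0, 3, 3

    dt, delay_steps = 0.01, 25              # input delay T_p = 0.25 s
    x = np.array([1.0, -1.0, 2.0])
    hist = [x.copy()] * (delay_steps + 1)   # stores x(t - T_p), ..., x(t)

    for _ in range(4000):
        x_delayed = hist[0]                 # x(t - T_p)
        x = x + dt * (-L @ x_delayed)       # eq. (8): all terms delayed
        hist.append(x.copy())
        hist.pop(0)

    print(x)  # delay below the margin pi/6 s, so consensus survives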
Although consensus with time delay has been studied extensively, it is often assumed that the time delay is either constant or random. However, the time delay itself might obey its own dynamics, which possibly depend on the communication distance, the total computation load, the computation capability, and so on. Therefore, it is more suitable to represent the time delay as another system variable to be considered in the study of the consensus problem. In addition, it is also important to consider time delay and other physical constraints simultaneously in the study of the consensus problem.

D. Quantization

Quantized consensus has been studied recently, with motivation from digital signal processing. Here, quantized consensus refers to consensus when the measurements are digital rather than analog; the information received by each agent is therefore not continuous and might have been truncated due to digital finite-precision constraints. Roughly speaking, for an analog signal s, a typical quantizer with an accuracy parameter δ, also referred to as the quantization step size, is described by Q(s) = q(s, δ), where Q(s) is the quantized signal and q(·,·) is the associated quantization function. For instance [63], a quantizer rounding a signal s to its nearest integer can be expressed as Q(s) = n if s ∈ [(n - 1/2)δ, (n + 1/2)δ], n ∈ Z, where Z denotes the set of integers. Note that the types of quantizers might differ between systems, hence Q(s) may differ from system to system. Due to the truncation of the received signals, consensus is now considered achieved if the maximal state difference is not larger than the accuracy level associated with the whole system. A notable feature of consensus with quantization is that the time to reach consensus is usually finite; that is, it often takes a finite period of time for all agents' states to converge to an accuracy interval. Accordingly, the main research aim is to investigate the convergence time associated with the proposed consensus algorithm.

Quantized consensus was probably first studied in [63], where a quantized gossip algorithm was proposed and its convergence was analyzed. In particular, the bound of the convergence time for a complete graph was shown to be polynomial in the network size. In [64], coding/decoding strategies were introduced into the quantized consensus algorithms, where it was shown that the convergence rate depends on the accuracy of the quantization but not on the coding/decoding schemes. In [65], quantized consensus was studied via the gossip algorithm, with both lower and upper bounds on the expected worst-case convergence time derived in terms of the principal submatrices of the Laplacian matrix. Further results regarding quantized consensus were reported in [66]-[68], where the main research was also on the convergence time of various proposed quantized consensus algorithms as well as the quantization effects on the convergence time. It is intuitively reasonable that the convergence time depends on both the quantization level and the network topology. It is then natural to ask if and how the quantization methods affect the convergence time. This is an important measure of the robustness of a quantized consensus algorithm (with respect to the quantization method).

Note that it is interesting, but also more challenging, to study consensus for general linear/nonlinear systems with quantization. Because the difference between the truncated signal and the original signal is bounded, consensus with quantization can be considered as a special case of consensus without quantization in the presence of bounded disturbances. Therefore, if consensus can be achieved for a group of vehicles in the absence of quantization, it might be intuitively correct to say that the differences among the states of all vehicles will be bounded if the quantization precision is small enough. However, it is still an open question how to rigorously describe the quantization effects on consensus with general linear/nonlinear systems.
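The following sketch is illustrative (ours, not from [63]): it implements a uniform quantizer with step size delta and runs protocol (2) on quantized states. Here Q(s) is scaled back to the signal's units, i.e., it returns n*delta rather than the integer index n; the states typically settle within about one quantization step of one another rather than agreeing exactly.

    import numpy as np

    def quantize(s, delta=0.1):
        # Uniform quantizer: maps s to the nearest multiple of delta,
        # i.e. n*delta for s in [(n - 1/2)*delta, (n + 1/2)*delta).
        return delta * np.round(np.asarray(s) / delta)

    A = np.array([[0., 1., 1.],
                  [1., 0., 1.],
                  [1., 1., 0.]])             # undirected triangle (illustrative)
    L = np.diag(A.sum(axis=1)) - A
    x = np.array([0.73, -0.41, 1.18])
    dt = 0.05
    for _ in range(400):
        x = x + dt * (-L @ quantize(x))      # protocol (2) on quantized states
    print(x)                                 # entries within roughly delta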
E. Remarks

In summary, the existing research on the consensus problem has covered a number of physical properties of practical systems and control performance analyses. However, studies of the consensus problem covering multiple physical properties and/or control performance analyses simultaneously have been largely missing; in other words, two or more of the problems discussed in the subsections above might need to be taken into consideration at the same time when studying the consensus problem. In addition, consensus algorithms normally guarantee the agreement of a team of agents on some common states without taking group formation into consideration. To reflect the many practical applications in which a group of agents is required to form some preferred geometric structure, it is desirable to consider a task-oriented formation control problem for a group of mobile agents, which motivates the study of formation control presented in the next section.

IV. FORMATION CONTROL

Compared with the consensus problem, where the final states of all agents typically reach a singleton, the final states of all agents can be more diversified under the formation control scenario. Indeed, formation control is more desirable in many practical applications, such as formation flying, cooperative transportation, sensor networks, as well as combat intelligence, surveillance, and reconnaissance. In addition, the performance of a team of agents working cooperatively often exceeds the simple integration of the performances of all individual agents. For its broad applications and advantages, formation control has been a very active research subject in the control systems community, where the aim is to form a certain geometric pattern with or without a group reference. More precisely, the main objective of formation control is to coordinate a group of agents such that they achieve some desired formation and some tasks can be finished through the collaboration of the agents. Generally speaking, formation control can be categorized according to the group reference. Formation control without a group reference, called formation producing, refers to algorithm design for a group of agents to reach some pre-desired geometric pattern in the absence of a group reference, the pattern itself being the control objective. Formation control with a group reference, called formation tracking, refers to the same task but following a predesignated group reference. Due to the existence of the group reference, formation tracking is usually much more challenging than formation producing, and control algorithms for the latter might not be useful for the former. As of today, there are still many open questions in solving the formation tracking problem.

The following part of the section reviews and discusses recent research results and progress in formation control, including formation producing and formation tracking, mainly accomplished after 2006. Several milestone results prior to 2006 can be found in [69]-[71].

A. Formation Producing

The existing work in formation control aims at analyzing the formation behavior under certain control laws, along with stability analysis.
1) Matrix Theory Approach: Due to the nature of multi-agent systems, matrix theory has been frequently used in the stability analysis of their distributed coordination. Note that the consensus input to each agent (see, e.g., (2)) is essentially a weighted average of the differences between the states of the agent's neighbors and its own. As an extension of the consensus algorithms, coupling matrices were introduced here to offset the corresponding control inputs by some angles [72], [73]. For example, given (1), the control input (2) is revised as

    u_i(t) = \sum_{j=1}^{n} a_{ij}(t) C [x_j(t) - x_i(t)],

where C is a coupling matrix of compatible size. If x_i ∈ R^3, then C can be viewed as a 3-D rotation matrix. The main idea behind the revised algorithm is that the original control input for reaching consensus is now rotated by some angle. The closed-loop system can be expressed in vector form, whose stability can be determined by studying the distribution of the eigenvalues of a certain transfer matrix. Main research work was conducted in [72], [73] to analyze the collective motions of systems with single-integrator kinematics and double-integrator dynamics, where the network topology, the damping gain, and C were shown to affect the collective motions. Analogously, the collective motions of a team of nonlinear self-propelled agents were shown to be affected by
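A small sketch of the rotated consensus input above (our own illustration; the topology, gain and angle are arbitrary). For |theta| < pi/2 the agents still meet, spiraling as they converge; at theta = pi/2 the same law produces sustained circular collective motion instead.

    import numpy as np

    theta = 1.2                                 # rotation angle, < pi/2
    C = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    n = 4
    A = np.ones((n, n)) - np.eye(n)             # complete graph (illustrative)
    X = np.random.default_rng(2).normal(size=(n, 2))  # planar positions
    dt = 0.005

    for _ in range(4000):
        U = np.zeros_like(X)
        for i in range(n):
            for j in range(n):
                U[i] += A[i, j] * (C @ (X[j] - X[i]))  # rotated consensus input
        X = X + dt * U

    print(np.ptp(X, axis=0))                    # spread shrinks toward zero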

Open University of China, English for Science and Engineering 1: Practice While You Learn


Unit 1: Future Houses

Reading 1 (Practice While You Learn)
1. According to the passage, the next big trend in U.S. real estate is _____. (Multiple choice, 1 point. Answer: B)
A. big house  B. micro apartment  C. traditional house
2. As the population keeps climbing, people in the city have to face the reality that _____. (Multiple choice, 1 point. Answer: A)
A. housing is in short supply  B. housing is very sufficient  C. housing is a luxury good
3. Why is the micro apartment so appealing? (Multiple choice, 1 point. Answer: A)
A. It meets the need of someone.  B. It's very strange.  C. It's excellent.
4. Micro apartments are very _____ in Tokyo. (Multiple choice, 1 point. Answer: A)
A. rave  B. essential  C. common
5. How do people think of the micro apartment? (Multiple choice, 1 point. Answer: C)
A. Everyone likes it very much.  B. Some people think it's humorous and fun.  C. Not everyone is in favor of the trend.

Reading 2 (Practice While You Learn)
1. After studying the passage above, try to choose the correct options.

Research on Pedestrian Leg Protection in Vehicle Collisions

Review

Zheng Wei

Abstract: This paper analyzes, from a biomechanical perspective, the injury mechanisms of the pedestrian's legs during a collision between a pedestrian and a vehicle, and builds a finite element model of the legform impactor according to the EEVC pedestrian protection test regulation. Using this numerical model, the paper carries out a study of pedestrian leg protection for a domestically produced passenger car and proposes corresponding structural improvements. The computational results show that structural improvement of the bumper can effectively mitigate the injuries a vehicle inflicts on a pedestrian's legs, and that the scheme is highly feasible.

Most tibia injuries are attributable to the bending moment induced by bumper impact: bending produces compressive stress on the struck side of the tibia and tensile stress on the opposite side, and when the stress exceeds the tolerance limit the tibia fractures. The femur and the fibula share the same injury mechanism.

Figure 1: Principal injury modes of the pedestrian lower limb.

Kajzer studied in detail the injury mechanisms of the knee joint under lateral impact and identified two principal mechanisms: shear caused by lateral translational displacement, and bending caused by angular displacement. The knee region of a pedestrian's leg is usually struck directly by the vehicle bumper, and because the femur's motion lags behind, a shear dislocation develops between the articular surfaces. This shear dislocation stretches the knee ligaments and generates a lateral compressive force between the femoral condyles and the tibial intercondylar eminence. The lateral compressive force concentrates stress on the articular contact surfaces, and when the stress exceeds its tolerance limit a lateral fracture occurs in the tibial intercondylar eminence or the femoral condyles. When the knee bends laterally, the ligaments on one side of the joint stretch under tension while the articular surface on the other side is loaded in axial compression, producing concentrated stress. When the concentrated stress exceeds the compressive strength of the bone, fracture injury also occurs, as shown in Figure 2.

Analysis of the body's bumper structure shows that the bumper as a whole resembles a simply supported beam: the outermost layer is the bumper fascia, which is bolted to the bumper frame, and the frame in turn connects to the front longitudinal rails of the body through the bumper brackets. Impact conditions at the beam supports are clearly more severe than collisions at other positions, and the L2 impact location lies precisely near a bumper bracket (a beam support).
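For reference (standard beam theory, added here as an illustration rather than taken from the study): when the bumper applies a bending moment M to a long bone idealized as a beam, the peak stress at the outer fiber is

    sigma_max = M c / I = M / Z,

where c is the distance from the neutral axis to the bone surface, I the second moment of area of the cross-section, and Z = I/c the section modulus. Fracture is predicted when sigma_max exceeds the bone's strength, and the tensile side usually fails first because cortical bone is weaker in tension than in compression.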

Graduate English Reading Course (Basic Level, 2nd Edition), Text 06 and Its Translation


Thank God It's Monday
By Jyoti Thottam

[1] As researchers in psychology, economics and organizational behavior have been gradually discovering, the experience of being happy at work looks very similar across professions. People who love their jobs feel challenged by their work but in control of it. They have bosses who make them feel appreciated and co-workers they like. They can find meaning in what they do. And they aren't just lucky. It takes real effort to reach that state.

[2] An even bigger obstacle, though (however), may be our low expectations on the job. Love, family, community (society): those are supposed (thought) to be the true sources of happiness, while work simply (only) gives us the means (tools) to enjoy them. Mihaly Csikszentmihalyi, who coined the term flow (happiness <-> ebb), which adherents (supporters) of positive psychology would use to describe the job-induced highs (high spirits / happiness), says that distinction (difference) is a false one. "Anything can be enjoyable if the elements of flow are present," he writes in his book Good Business. "Within that framework, doing a seemingly boring job can be a source of greater fulfillment (achievement) than one ever thought possible."

[3] Csikszentmihalyi encourages (urges) us to reach a state (level / status) in which work is an extension of what we naturally want to do. Immersed (absorbed) in the pleasure of work, we don't worry about its ultimate (final) reward. If that sounds out of reach, take heart. You may soon get some encouragement from the head office (headquarters). A growing (increasing) body (amount) of research is demonstrating (showing) that happy workers not only are happier in life but are also crucial (most important) to the health of a company.

[4] Thirty-five years ago, the Gallup Organization started researching why people in certain work groups, even within the same company, were so much more effective (-> efficient) than others. Donald Clifton, the Gallup researcher who pioneered that work, conducted (directed) a series of extensive interviews with highly productive teams of workers. From those interviews, Gallup developed a set of 12 statements (rules / points) designed to measure employees' overall (general) level of happiness with their work, which Gallup calls "engagement". Some of the criteria (sing. criterion) reflect the obvious requirements of any worker (Do you have what you need to do your job? Do you know what's expected of you at work?), while others reveal (show) more subtle variables (Do you have a best friend at work? Does your supervisor (boss) or someone else at work care about you as a person?). Gallup started the survey in 1998, and it now includes 5.4 million employees at 474 organizations; Gallup also does periodic random polls of workers in different countries.

[5] The polls paint a picture of a rather disaffected (discontented) U.S. work force. In the most recent poll, from September 2004, only 29% of workers said they were engaged with their work. More than half, 55%, were not engaged, and 16% were actively disengaged. Still (furthermore), those numbers are better than those (figures) in many other countries. The percentage of engaged workers in the U.S. is more than twice as large as Germany's and three times as great as Singapore's.
But neither the late-1990s boom nor the subsequent (following) bust (depression) had much impact (influence) in either direction, indicating (showing / implying) that the state of worker happiness goes much deeper than the swings (waves) of the economy.

[6] James Harter, a psychologist directing (conducting) that research at Gallup, says many companies are simply misreading what makes people happy at work. Beyond a certain minimum level, it isn't pay or benefits; it's strong relationships with co-workers and a supportive boss. "These are basic human needs in the workplace, but they're not the ones thought by managers to be very important," Harter says. Gallup has found that a strong positive response to the statement "I have a best friend at work", for example, is a powerful predictor of engagement at work and is correlated with profitability and connection with customers. "It indicates (shows) a high level of belonging," Harter says.

[7] Without it, a job that looks (seems) good on paper (in theory) can make a worker miserable. Martina Radix, 41, traded a high-pressure job as an executive assistant at a company where she liked her colleagues for a less taxing position as a clerical worker (clerk) in a law firm six years ago. She has more (free) time and flexibility but feels stifled (depressed) by her co-workers and unappreciated by her boss. "I am a misfit (mismatch) in that department," she says. "No matter how good your personal life is, if you go in to a bad atmosphere at work, it takes away from it."

[8] In fact, engagement at work is only partly a matter of individual temperament: Harter estimates that it explains only about 30% of the difference between employees who are highly engaged and those who are not. The rest of it is shaped (decided) by the hundreds of interactions that employees have every day with co-workers, supervisors and customers.

[9] The most direct fix (remedy / cure / solution), then, is to seek out (look for) a supportive (positive) workplace. Finding a life calling unlocks the door to happiness. Lissette Mendez, 33, says her job coordinating the annual book fair at Miami Dade College is the one she was born to do. "Books are an inextricable (inseparable) part of my life," she says.

[10] Even if your passion (-> passionate) does not easily translate into a profession (job -> career), you can still find happiness on the job. Numerous studies have shown correlations between meaningful work and happiness, job satisfaction and even physical health. That sense (feeling) of meaning, however, can take many different forms. Some people find it in the work itself; others take pride in (are proud of) their company's mission (task) rather than in their specific job. People can find meaning in anything.

[11] The desire for meaning is so strong that sometimes people simply (just) create it, especially to make sense (make something meaningful) of difficult or unpleasant work. In a recently completed six-year study of physicians during their surgical residency, for example, it was found that the surgeons were extremely dissatisfied in the first year, when the menial work they were assigned, like (such as) filling out endless copies of patient records, seemed pointless (meaningless). Once they started to think of (regard) the training as part of the larger process of joining an elite group of doctors, their attitude changed. They were able to reconstruct (reconsider) and make sense of their work and what they do.
By the end of year one, they had started to create (feel) some meaning.

[12] While positive psychology has mostly focused on (stressed / emphasized) the individual pursuit of happiness, a new field, positive organizational scholarship, has begun to examine the connection between happy employees and happy (successful) businesses. Instead of focusing on profitability and competition to explain success, researchers in this field are studying meaningfulness, authentic leadership and emotional competence (ability). Not the typical B-school buzzwords, but they may soon become part of the language spoken by every M.B.A.

[13] "[...] domain (field) and kind of (a little / somewhat) fringe-ish," says Thomas Wright, a professor of organizational behavior at the University of Nevada, Reno. Early hints (clues) of the importance of worker happiness were slow to be accepted (admitted / understood). A 1920s study on the topic at the Hawthorne Plant of the Western Electric Co. in Cicero, Illinois, looked at (examined) whether increased lighting, shorter workdays and other worker-friendly fixes (measures) would improve (increase) productivity. While (although) the workplace changes boosted (improved) performance, the experimenters eventually (finally) discovered (found) that the differences workers were responding to were not in the physical environment but in the social one (factor). In other words, the attention they were getting was what made them happier and more effective. This phenomenon came to be known as the Hawthorne effect. "The researchers came to realize that it was people's [...]," he says. But later studies that looked at job-satisfaction ratings were inconsistent. Broader measures (degrees) of happiness, it turns out, are better predictors [...].

[14] Making any of those changes depends on the boss, though not necessarily the CEO. So a handful of (a few) business schools are trying to create (educate) a new kind of frontline manager, based on the idea of "authentic leadership". Instead of imposing faddish (fashionable) management techniques on each supervisor, authentic leadership begins with self-awareness. Introverted bosses have to know their own style and then find strategies for managing (administering) people that feel natural to them. In other words, by figuring out (working out) their strengths (advantages), they [...].

[15] The goal (objective / purpose / aim) is not necessarily a world in which people love their work above everything else. Work, by definition, is somewhat (a little) unpleasant relative to all the other things we could be doing. That's why we still expect to get paid for doing it. But at the very least, businesses (companies or organizations) could do better just by paying attention to what their employees want and need (financially and spiritually). Then more of us could find a measure (degree) of fulfillment (achievement) in what we do. And once in a while (now and then / occasionally), we might hope to transcend (surpass) it all. It can happen on the basketball court, in front of a roaring crowd, or in a classroom, in front of just one grateful (thankful) student. (1,669 words)

ABOUT THE AUTHOR
Jyoti Thottam is a writer and a business reporter for Time magazine in New York. She was the president of the South Asian Journalists' Association from 2001 to 2002.

EXERCISES
I. Reading Comprehension
Answer the following questions or complete the following statements.
1. By the title "Thank God It's Monday", the author wanted to convey the idea that _____.
A. people love their work above everything else
B. people can find happiness in their work
C. most people have the experience of being happy at work
D. people can find meaning in whatever they do
2. According to Mihaly Csikszentmihalyi, _____.
A. love, family and community are not supposed to be the true sources of happiness
B. work simply gives us the means to enjoy the happiness we get from love, family and community
C. even a seemingly boring job can be a source of happiness for us
D. the positive psychology that is used to describe the job-induced highs is false
3. According to the research made by the Gallup Organization, what makes people happy at work?
A. Reasonable pay or benefits.
B. Positive relationships with co-workers and boss.
C. People's engagement with their work.
D. Both A and B.
4. According to the research made by the Gallup Organization, the number of engaged workers in Singapore was about _____.
A. 10%  B. 14.5%  C. 16%  D. 29%
5. Now Martina Radix _____.
A. has a high-pressure job but she has a positive relationship with her co-workers
B. has a less demanding job but she has a bad relationship at work
C. has more time and flexibility so she is satisfied with her personal life
D. is an executive assistant at a company but she feels she is a misfit in that department
6. People can find meaning in their work in the following situations EXCEPT _____.
A. if they love their job very much
B. if their work itself is very important
C. if their company's mission is very important
D. if they are paid at a minimum level
7. By the end of year one, surgical residents can find their menial work meaningful because _____.
A. in the past year, they have become accustomed to the work
B. they can stop doing such pointless jobs as filling out endless copies of patient records
C. they realize that the menial work is a necessary step toward becoming a doctor
D. they're able to build their fame if they deal with patients more often
8. What made the workers happier and more effective, according to the study at the Hawthorne Plant of the Western Electric Co. in Cicero, in the 1920s?
A. The attention paid to the workers.
B. The new worker-friendly measures.
C. The improvement of the physical environment.
D. The improvement of the social environment.
9. According to the article, which of the following statements is true?
A. The better productivity of a company depends on its CEO.
B. Authentic leaders should learn more management techniques.
C. Bosses should find strengths in both themselves and their employees.
D. The results of the studies on job-satisfaction ratings were all similar.
10. The author's purpose in writing this article is _____.
A. to make more people enjoy their work
B. for people to find fulfillment in what they do
C. to reevaluate some theories in positive psychology
D. to help businesses be more effective and productive

II. Vocabulary
A. Read the following sentences and decide which of the four choices below each sentence is closest in meaning to the underlined word.
1. I advocate a holistic recognition that biology and [...] in an inextricable manner (way).
A. complicated  B. unavoidable  C. customary  D. [...]
2. [...] love of the picturesque and sublime nature.
A. immense  B. fascinating (attractive)  C. magnificent (great / noble)  D. enchanting (attractive)
3. One important feature (property / character) of the period was the growth (development) of Buddhism. Its adherents honored the Buddha in order to be reborn in his paradise.
A. sponsors  B. [...]  C. advocators  D. advisors
4. As censorship was extremely strict in that period, little authentic news came out of the country.
disastrousC. officialD. reliable5. If a block of wood is completely immersed in water, the upward force is greater than the weight of the wood.A. dippedB. pressedC. forcedD. pushed6. According to Zhuangzi, a Daoist (道家) philosopher of the late 4th century B.C., through mystical union with the Dao the individual could transcend nature and even life and death.A. dissolveB. upraise (bring up)C. surpassD. depress (->suppress)7. As economic growth ground to a halt (stop), the local populations grew (became) more and more disaffected.A. indifferentB. resentfulC. unvaluedD. (dignity->)indignant (>angry)8. Capitalism was beset (be troubled) by cycles of "boom and bust", periods of expansion and prosperity followed by economic collapse [->collapsible] and waves of unemployment. [beheaded= killed]A. failureB. transitionC. (lose->)lossD. depression [the Great Depression]9. At that time (=then), life was nearly as taxing (burdensome) for all-black bands: black musicians were required to use kitchen entrances and service elevators (=lift), which forced them to confront the ugly realities of racial discrimination. [Hard Times]A. miserableB. hard (=difficult)C. unbearableD. harsh10. Modern and implicit (<->explicit) censorship has nothing like the power of the old system and contrary opinion is never entirely stifled.A. releasedB. arrestedC. retarded (->retardant)D. prohibited [pro-: (1)officially; (2)forward]B. Choose the best word or expression from the list given for each blank. Use each word orexpression only once and make proper changes where necessary.in control of within the framework variables it turns out on papertake away from once in a while trade... for make sense take heartattended by those who can afford (=pay for) the fees (->fare). [(1)border; (2)](now and then/ occasionally).if the expression on theof the Security Council. [city council]5. He lost his confidence after he lost the first two trails, but his coach told him to(<->lose one’s heart), so that he could win at last.his success in writing it.7. The presentation of his paper was highly praised, but that the paper was copied from the Internet. [think great/ much of sb./ think highly of sb.<->think little of sb./ look down upon sb.; Turn out: (1)The police turned out to the site of the crime; (2)The produce or product turned out;(3) It has been proved that…;]to her. [She doesn’t understand it].the meeting, and after singing and prayer she10. The early settlers copper for corn from natives. [to settle in somewhere/ ~ an argument][scorn (look down upon sb.;)]IV. ClozeThere are ten blanks in the following passage. Read the passage carefully and choose theright word or phrase from the list given below for each of the blanks. Change the form if necessary. supposed to be unless all too often which externalthoroughly that on the other hand in return ironically Although, as we have seen, people generally long (want/ desire) to leave their places of workand get home, ready (=willing) to put their hard-earned free time to good use, 1 all too often (frequently)they have no idea (=don’t know) what to do there. 2 Ironically , jobs are actually easier to enjoy than free time, because like flow activities they (work) have built-in goals, feedback, rulesencourage one (anybody) to become involved (join) in one's work, to concentrate and lose oneself (be absorbed) in it. 
Free time, 4 on the other hand, is unstructured (unorganized), and requires much greater effort to be shaped into something (meaningful) that can and especially inner discipline, help to make leisure (free time) what it is 5chance for "re-creation" . But on the whole (in general), people miss the opportunity to enjoy leisureeven more 6 thoroughly (completely)than they do with working time. It is in the improvidentthe greatest wastes of American life occur. [tourism and recreation industry]Mass leisure, mass culture, arid even high culture when only attended to (actively<->)8 external Reasons — such as the wish to display (show) one's status — are parasites of the mind. They absorb (=exhaust) psychic energy without providing substantive (considerable) strength (energy) 9 in return. They leave (=make) us more exhausted, more disheartened (depressed) than we were before. 10and free time are likely (possible) to be disappointing. Most jobs and many leisure activities —especially those involving the passive consumption of mass media — are not designed (intended) to make us happy and strong, or to make us learn to enjoy our work. [attend a meeting/ a class]IV. TranslationPut the following party into Chinese.1. Mihaly Csikszentmihalyi, who coined the term flow, which adherents of positive psychology would use to describe the job-induced highs, says that distinction is a false one. "Anything can be enjoyable if the elements of flow are present," he writes in his book Good Business. "Within that framework, doing a seemingly boring job can be a source of greater fulfillment than one ever thought possible."米哈里·奇凯因特米哈里认为这种区分是错误的。

Neuron overload and the juggling physician
Danielle Ofri

Patients often complain that their doctors don't listen. Although there are probably a few doctors who truly are tone deaf, most are reasonably empathic human beings, and I wonder why even these doctors seem prey to this criticism. I often wonder whether it is sheer neuron overload on the doctor side that leads to this problem. Sometimes it feels as though my brain is juggling so many competing details that one stray request from a patient—even one that is quite relevant—might send the delicately balanced three-ring circus tumbling down.

One day, I tried to work out how many details a doctor needs to keep spinning in her head in order to do a satisfactory job, by calculating how many thoughts I have to juggle in a typical office visit. Mrs Osorio is a 56-year-old woman in my practice. She is somewhat overweight. She has reasonably well-controlled diabetes and hypertension. Her cholesterol is on the high side but she doesn't take any medications for this. She doesn't exercise as much as she should, and her last DEXA scan showed some thinning of her bones. She describes her life as stressful, although she's been good about keeping her appointments and getting her blood tests. She's generally healthy, someone who'd probably be described as an average patient in a medical practice, not excessively complicated.

Here are the thoughts that run through my head as I proceed through our 20-min consultation.

Good thing she did her blood tests. Glucose is a little better. Cholesterol isn't great. May need to think about starting a statin. Are her liver enzymes normal?

Her weight is a little up. I need to give her my talk about five fruits and vegetables and 30 min of walking each day.

Diabetes: how do her morning sugars compare to her evening sugars? Has she spoken with the nutritionist lately? Has she been to the eye doctor? The podiatrist?

Her blood pressure is good but not great. Should I add another BP med? Will more pills be confusing? Does the benefit of possible better blood pressure control outweigh the risk of her possibly not taking all of her meds?

Her bones are a little thin on the DEXA. Should I start a bisphosphonate that might prevent osteoporosis? But now I'm piling yet another pill onto her, and one that requires detailed instructions. Maybe leave this until next time?

How are things at home? Is she experiencing just the usual stress of life, or might there be depression or anxiety disorder lurking? Is there time for the depression questionnaire?

Health maintenance: when was her last mammogram? PAP smear? Has she had a colonoscopy since she turned 50? Has she had a tetanus booster in the past 10 years? Does she qualify for a pneumonia vaccine?

Mrs Osorio interrupts my train of thought to tell me that her back has been aching for the past few months. From her perspective, this is probably the most important item in our visit, but the fact is that she's caught one of my neurons in mid-fire (the one that's thinking about her blood sugar, which is segueing into the neuron that's preparing the diet-and-exercise discussion, which is intersecting with the one that's debating about initiating a statin). My instinct is to put one hand up and keep all interruptions at bay. It's not that I don't want to hear what she has to say, but it is the sensation that I'm juggling so many thoughts, and need to resolve them all before the clock runs down, that keeps me in a moderate state of panic.
What if I drop one—what if one of my thoughts evaporates while I address another concern? I'm trying to type as fast as I can, for the very sake of not letting any thoughts escape, but every time I turn to the computer to write, I'm not making eye contact with Mrs Osorio. I don't want my patient to think that the computer is more important than she is, but I have to keep looking toward the screen to get her lab results, check her mammogram report, document the progress of her illnesses, order the tests, refill her prescriptions.

Then she pulls a form out of her bag: her insurance company needs this form for some reason or another. An innocent—and completely justified—request, but I feel that this could be the straw that breaks the camel's back, that the precarious balance of all that I'm keeping in the air will be simply unhinged. I nod, but indicate that we need to do her physical examination first. I barrel through the basics, then quickly check for any red-flag signs that might suggest that her back pain is anything more than routine muscle strain. I return to the computer to input all the information, mentally running through my checklist, anxious that nothing important slips from my brain's holding bay.

I want to do everything properly and cover all our bases, but the more effort I place into accurate and thorough documentation, the less time I have to actually interact with my patient. A glance at the clock tells me that we've gone well beyond our allotted time. I stand up and hand Mrs Osorio her prescriptions. "What about my insurance form," she asks. "It needs to be in by Friday, otherwise I might lose my coverage." I clap my hand against my forehead; I've completely forgotten about the form she'd asked about just a few minutes ago.

Studies have debunked the myth of multitasking in human beings. The concept of multitasking was developed in the computer field to explain the idea of a microprocessor doing two jobs at one time. It turns out that microprocessors are in fact linear, and actually perform only one task at a time. Our computers give the illusion of simultaneous action based on the microprocessor "scheduling" competing activities in a complicated integrated algorithm. Like microprocessors, we humans can't actually concentrate on two thoughts at the same exact time. We merely zip back and forth between them, generally losing accuracy in the process. At best, we can juggle only a handful of thoughts in this manner. The more thoughts we juggle, the less we are able to attune fully to any given thought. To me, this is a recipe for disaster. Today I only forgot an insurance company form. But what if I'd forgotten to order her mammogram, or what if I'd refilled only five of her six medicines? What if I'd forgotten to fully explain the side-effects of one of her medications? The list goes on, as does the anxiety.

At the end of the day, my mind spins as I try to remember if I've forgotten anything. Mrs Osorio had seven medical issues to consider, each of which required at least five separate thoughts: that's 35 thoughts. I saw ten patients that afternoon: that's 350. I'd supervised five residents that morning, each of whom saw four patients, each of whom generated at least ten thoughts. That's another 200 thoughts. It's not to say that we can't handle 550 thoughts in a working day, but each of these thoughts potentially carries great risk if improperly evaluated. If I do a good job juggling 98% of the time, that still leaves ten thoughts that might get lost in the process.
Any one of those lost thoughts could translate into a disastrous outcome, not to mention a possible lawsuit. Most doctors are reasonably competent, caring individuals, but the overwhelming swirl of thoughts that we must keep track of leaves many of us in a perpetual panic that something serious might slip. This is what keeps us awake at night.

There are many proposed solutions—computer-generated reminders, case managers, ancillary services. To me, the simplest one would be time. If I had an hour for each patient, I'd be a spectacular doctor. If I could let my thoughts roll linearly and singularly, rather than simultaneously and haphazardly, I wouldn't fear losing anything. I suspect that it would actually be more efficient, as my patients probably wouldn't have to return as frequently. But realistically, no one is going to hand me a golden hour for each of my patients. My choices seem to boil down to entertaining fewer thoughts, accepting decreased accuracy for each thought, giving up on thorough documentation, or having a constant headache from neuronal overload.

These are the choices that practising physicians face every day, with every patient. Mostly we rely on our clinical judgment to prioritise, accepting the trade-off that is inevitable with any compromise. We attend to the medical issues that carry the greatest weight and then have to let some of the lesser ones slide, with the hope that none of these seemingly lesser ones masks something grave.

Some computers have indeed achieved the goal of true multitasking, by virtue of having more than one microprocessor. In practice, that is like possessing an additional brain that can function independently and thus truly simultaneously. Unless the transplant field advances drastically, there is little hope for that particular deus ex machina. In some cases, having a dedicated and competent clinical partner such as a one-on-one nurse can come close to simulating a second brain, but most medical budgets don't allow for such staffing indulgence.

As it stands, it seems that we will simply have to continue this impossible mental high-wire act, juggling dozens of clinical issues in our brains, panicking about dropping a critical one. The resultant neuronal overload will continue to present a distracted air to our patients that may be interpreted as us not listening, or perhaps not caring.

When my computer becomes overloaded, it simply crashes. Usually, I reboot in a fury, angry about all my lost work. Now, however, I view my computer with a tinge of envy. It has the luxury of being able to crash, and of a reassuring, omniscient hand to press the reboot button. Physicians are permitted no such extravagance. I pull out the bottle of paracetamol tablets from my desk drawer and set about disabling the childproof cap. It's about the only thing I truly have control over.

Weakly- and Semi-Supervised Learning of a Deep Convolutional Network for Semantic Image Segmentation

George Papandreou* (Google, Inc., gpapan@)    Liang-Chieh Chen* (UCLA, lcchen@)
Kevin P. Murphy (Google, Inc., kpmurphy@)    Alan L. Yuille (UCLA, yuille@)
* The first two authors contributed equally to this work.

Abstract

Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https:///deeplab/deeplab-public.

1. Introduction

Semantic image segmentation refers to the problem of assigning a semantic label (such as "person", "car" or "dog") to every pixel in the image. Various approaches have been tried over the years, but according to the results on the challenging Pascal VOC 2012 segmentation benchmark, the best performing methods all use some kind of Deep Convolutional Neural Network (DCNN) [2, 5, 8, 14, 25, 27, 41].

In this paper, we work with the DeepLab-CRF approach of [5, 41]. This combines a DCNN with a fully connected Conditional Random Field (CRF) [19], in order to get high resolution segmentations. This model achieves state-of-art results on the challenging PASCAL VOC segmentation benchmark [13], delivering a mean intersection-over-union (IOU) score exceeding 70%.

A key bottleneck in building this class of DCNN-based segmentation models is that they typically require pixel-level annotated images during training. Acquiring such data is an expensive, time-consuming annotation effort. Weak annotations, in the form of bounding boxes (i.e., coarse object locations) or image-level labels (i.e., information about which object classes are present) are far easier to collect than detailed pixel-level annotations. We develop new methods for training DCNN image segmentation models from weak annotations, either alone or in combination with a small number of strong annotations. Extensive experiments, in which we achieve performance up to 69.0%, demonstrate the effectiveness of the proposed techniques.

According to [24], collecting bounding boxes around each class instance in the image is about 15 times faster/cheaper than labeling images at the pixel level. We demonstrate that it is possible to learn a DeepLab-CRF model delivering 62.2% IOU on the PASCAL VOC 2012 test set by training it on a simple foreground/background segmentation of the bounding box annotations.

An even cheaper form of data to collect is image-level labels, which specify the presence or absence of semantic classes, but not the object locations. Most existing approaches for training semantic segmentation models from this kind of very weak labels use multiple instance learning (MIL) techniques. However, even recent weakly-supervised methods such as [25] deliver significantly inferior results compared to their fully-supervised counterparts, only achieving 25.7%. Including additional trainable objectness [7] or segmentation [1] modules that largely increase the system complexity, [31] has improved performance to 40.6%, which still significantly lags performance of fully-supervised systems.

We develop novel online Expectation-Maximization (EM) methods for training DCNN semantic segmentation models from weakly annotated data. The proposed algorithms alternate between estimating the latent pixel labels (subject to the weak annotation constraints), and optimizing the DCNN parameters using stochastic gradient descent (SGD). When we only have access to image-level annotated training data, we achieve 39.6%, close to [31] but without relying on any external objectness or segmentation module. More importantly, our EM approach also excels in the semi-supervised scenario which is very important in practice. Having access to a small number of strongly (pixel-level) annotated images and a large number of weakly (bounding box or image-level) annotated images, the proposed algorithm can almost match the performance of the fully-supervised system. For example, having access to 2.9k pixel-level images and 9k image-level annotated images yields 68.5%, only 2% inferior to the performance of the system trained with all 12k images strongly annotated at the pixel level. Finally, we show that using additional weak or strong annotations from the MS-COCO dataset can further improve results, yielding 73.9% on the PASCAL VOC 2012 benchmark.

Contributions. In summary, our main contributions are:

1. We present EM algorithms for training with image-level or bounding box annotation, applicable to both the weakly-supervised and semi-supervised settings.

2. We show that our approach achieves excellent performance when combining a small number of pixel-level annotated images with a large number of image-level or bounding box annotated images, nearly matching the results achieved when all training images have pixel-level annotations.

3. We show that combining weak or strong annotations across datasets yields further improvements. In particular, we reach 73.9% IOU performance on PASCAL VOC 2012 by combining annotations from the PASCAL and MS-COCO datasets.

2. Related work

Training segmentation models with only image-level labels has been a challenging problem in the literature [12, 36, 37, 39]. Our work is most related to other recent DCNN models such as [30, 31], who also study the weakly supervised setting. They both develop MIL-based algorithms for the problem. In contrast, our model employs an EM algorithm, which similarly to [26] takes into account the weak labels when inferring the latent image segmentations. Moreover, [31] proposed to smooth the prediction results by region proposal algorithms, e.g., CPMC [3] and MCG [1], learned on pixel-segmented images. Neither [30, 31] cover the semi-supervised setting.

Bounding box annotations have been utilized for semantic segmentation by [38, 42], while [15, 21, 40] describe schemes exploiting both image-level labels and bounding box annotations. [4] attained human-level accuracy for car segmentation by using 3D bounding boxes. Bounding box annotations are also commonly used in interactive segmentation [22, 33]; we show that such foreground/background segmentation methods can effectively estimate object segments accurate enough for training a DCNN semantic segmentation system. Working in a setting very similar to ours, [9] employed MCG [1] (which requires training from pixel-level annotations) to infer object masks from bounding box labels during DCNN training.
3. Proposed Methods

We build on the DeepLab model for semantic image segmentation proposed in [5]. This uses a DCNN to predict the label distribution per pixel, followed by a fully-connected (dense) CRF [19] to smooth the predictions while preserving image edges. In this paper, we focus for simplicity on methods for training the DCNN parameters from weak labels, only using the CRF at test time. Additional gains can be obtained by integrated end-to-end training of the DCNN and CRF parameters [41, 6].

Notation. We denote by x the image values and y the segmentation map. In particular, y_m ∈ {0, ..., L} is the pixel label at position m ∈ {1, ..., M}, assuming that we have the background as well as L possible foreground labels and M is the number of pixels. Note that these pixel-level labels may not be visible in the training set. We encode the set of image-level labels by z, with z_l = 1 if the l-th label is present anywhere in the image, i.e., if Σ_m [y_m = l] > 0.

3.1. Pixel-level annotations

In the fully supervised case illustrated in Fig. 1 [Figure 1. DeepLab model training from fully annotated images.], the objective function is

    J(θ) = log P(y|x; θ) = Σ_{m=1}^{M} log P(y_m|x; θ),    (1)

where θ is the vector of DCNN parameters. The per-pixel label distributions are computed by

    P(y_m|x; θ) ∝ exp(f_m(y_m|x; θ)),    (2)

where f_m(y_m|x; θ) is the output of the DCNN at pixel m. We optimize J(θ) by mini-batch SGD.
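To make eqs. (1)-(2) concrete: the fully supervised objective is just a per-pixel softmax log-likelihood over the DCNN score maps. The following NumPy sketch is ours, not the authors' Caffe code; the array shapes are assumptions:

    import numpy as np

    def log_likelihood(f, y):
        """J(theta) of eq. (1) for one image.

        f: (M, L+1) array of DCNN scores f_m(l|x; theta), one row per pixel.
        y: (M,) ground-truth labels y_m in {0, ..., L} (0 = background).
        """
        log_z = np.logaddexp.reduce(f, axis=1)        # per-pixel log partition
        log_p = f[np.arange(len(y)), y] - log_z       # log P(y_m|x; theta), eq. (2)
        return log_p.sum()

In training one would maximize this (equivalently, minimize the per-pixel cross-entropy) by mini-batch SGD, as the paper states.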
full object coverage andavoid a degenerate solution of all pixels being assigned to background.The procedure is summarized in Algorithm 1and illustrated in Fig.2.EM-Adapt In this method,we assume that log P (z |y )=φ(y ,z )+(const),where φ(y ,z )takes the form of a cardi-nality potential [23,32,35].In particular,we encourage atleast a ρl portion of the image area to be assigned to classl ,if z l =1,and enforce that no pixel is assigned to classl ,if z l =0.We set the parameters ρl =ρfg ,if l >0andρ0=ρbg .Similar constraints appear in [10,20].In practice,we employ a variant of Algorithm 1.Weadaptively set the image-and class-dependent biases b l so as the prescribed proportion of the image area is assigned to the background or foreground object classes.This acts as a powerful constraint that explicitly prevents the background score from prevailing in the whole image,also promoting higher foreground object coverage.The detailed algorithm is described in the supplementary material.EM It is instructive to compare our EM-based approach with two recent Multiple Instance Learning (MIL)methods for learning semantic image segmentation models [30,31].The method in [30]defines an MIL classification objective based on the per-class spatial maximum of the lo-cal label distributions of (2),ˆP (l |x ;θ).=max m P (y m =l |x ;θ),and [31]adopts a softmax function.While this approach has worked well for image classification tasks [28,29],it is less suited for segmentation as it does not pro-mote full object coverage:The DCNN becomes tuned to focus on the most distinctive object parts (e.g .,human face)instead of capturing the whole object (e.g .,human body).ImageBbox annotationsDeep ConvolutionalNeural NetworkDenseCRFargmaxLossFigure3.DeepLab model training from bounding boxes.3.3.Bounding Box AnnotationsWe explore three alternative methods for training our segmentation model from labeled bounding boxes.Thefirst Bbox-Rect method amounts to simply consider-ing each pixel within the bounding box as positive example for the respective object class.Ambiguities are resolved by assigning pixels that belong to multiple bounding boxes to the one that has the smallest area.The bounding boxes fully surround objects but also contain background pixels that contaminate the training set with false positive examples for the respective object classes.Tofilter out these background pixels,we have also explored a second Bbox-Seg method in which we per-form automatic foreground/background segmentation.To perform this segmentation,we use the same CRF as in DeepLab.More specifically,we constrain the center area of the bounding box(α%of pixels within the box)to be fore-ground,while we constrain pixels outside the bounding box to be background.We implement this by appropriately set-ting the unary terms of the CRF.We then infer the labels for pixels in between.We cross-validate the CRF parameters to maximize segmentation accuracy in a small held-out set of fully-annotated images.This approach is similar to the grabcut method of[33].Examples of estimated segmenta-tions with the two methods are shown in Fig.4.The two methods above,illustrated in Fig.3,estimate segmentation maps from the bounding box annotation as a pre-processing step,then employ the training procedure of Sec.3.1,treating these estimated labels as ground-truth.Our third Bbox-EM-Fixed method is an EM algorithm that allows us to refine the estimated segmentation maps throughout training.The method is a variant of the EM-Fixed algorithm in Sec.3.2,in which we boost the 
[Figure 3. DeepLab model training from bounding boxes.]

3.3. Bounding Box Annotations

We explore three alternative methods for training our segmentation model from labeled bounding boxes.

The first Bbox-Rect method amounts to simply considering each pixel within the bounding box as positive example for the respective object class. Ambiguities are resolved by assigning pixels that belong to multiple bounding boxes to the one that has the smallest area. The bounding boxes fully surround objects but also contain background pixels that contaminate the training set with false positive examples for the respective object classes.

To filter out these background pixels, we have also explored a second Bbox-Seg method in which we perform automatic foreground/background segmentation. To perform this segmentation, we use the same CRF as in DeepLab. More specifically, we constrain the center area of the bounding box (α% of pixels within the box) to be foreground, while we constrain pixels outside the bounding box to be background. We implement this by appropriately setting the unary terms of the CRF. We then infer the labels for pixels in between. We cross-validate the CRF parameters to maximize segmentation accuracy in a small held-out set of fully-annotated images. This approach is similar to the grabcut method of [33]. Examples of estimated segmentations with the two methods are shown in Fig. 4 [Figure 4. Estimated segmentation from bounding box annotation: image with Bbox, ground truth, Bbox-Rect, Bbox-Seg.].

The two methods above, illustrated in Fig. 3, estimate segmentation maps from the bounding box annotation as a pre-processing step, then employ the training procedure of Sec. 3.1, treating these estimated labels as ground-truth.

Our third Bbox-EM-Fixed method is an EM algorithm that allows us to refine the estimated segmentation maps throughout training. The method is a variant of the EM-Fixed algorithm in Sec. 3.2, in which we boost the present foreground object scores only within the bounding box area.

3.4. Mixed strong and weak annotations

In practice, we often have access to a large number of weakly image-level annotated images and can only afford to procure detailed pixel-level annotations for a small fraction of these images. We handle this hybrid training scenario by combining the methods presented in the previous sections, as illustrated in Figure 5 [Figure 5. DeepLab model training on a union of full (strong labels) and image-level (weak labels) annotations.]. In SGD training of our deep CNN models, we bundle to each mini-batch a fixed proportion of strongly/weakly annotated images, and employ our EM algorithm in estimating at each iteration the latent semantic segmentations for the weakly annotated images.
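The training loop of Sec. 3.4 simply fixes the strong/weak mix per mini-batch. A schematic of that sampling, with names and batch sizes of our own choosing (kept within the paper's 20-30 image mini-batch range):

    import random

    def mixed_batches(strong_imgs, weak_imgs, n_strong=6, n_weak=14):
        """Yield mini-batches with a fixed proportion of strongly and
        weakly annotated images, as in Fig. 5 / Sec. 3.4."""
        while True:
            batch = random.sample(strong_imgs, n_strong) + \
                    random.sample(weak_imgs, n_weak)
            random.shuffle(batch)
            yield batch   # weak images get E-step segmentations before the SGD update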
4. Experimental Evaluation

4.1. Experimental Protocol

Datasets. The proposed training methods are evaluated on the PASCAL VOC 2012 segmentation benchmark [13], consisting of 20 foreground object classes and one background class. The segmentation part of the original PASCAL VOC 2012 dataset contains 1,464 (train), 1,449 (val), and 1,456 (test) images for training, validation, and test, respectively. We also use the extra annotations provided by [16], resulting in augmented sets of 10,582 (train_aug) and 12,031 (trainval_aug) images. We have also experimented with the large MS-COCO 2014 dataset [24], which contains 123,287 images in its trainval set. The MS-COCO 2014 dataset has 80 foreground object classes and one background class and is also annotated at the pixel level.

The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes. We first evaluate our proposed methods on the PASCAL VOC 2012 val set. We then report our results on the official PASCAL VOC 2012 benchmark test set (whose annotations are not released). We also compare our test set results with other competing methods.

Reproducibility. We have implemented the proposed methods by extending the excellent Caffe framework [18]. We share our source code, configuration files, and trained models that allow reproducing the results in this paper at a companion web site https:///deeplab/deeplab-public.

Weak annotations. In order to simulate the situations where only weak annotations are available and to have fair comparisons (e.g., use the same images for all settings), we generate the weak annotations from the pixel-level annotations. The image-level labels are easily generated by summarizing the pixel-level annotations, while the bounding box annotations are produced by drawing rectangles tightly containing each object instance (PASCAL VOC 2012 also provides instance-level annotations) in the dataset.

Network architectures. We have experimented with the two DCNN architectures of [5], with parameters initialized from the VGG-16 ImageNet [11] pretrained model of [34]. They differ in the receptive field of view (FOV) size. We have found that large FOV (224×224) performs best when at least some training images are annotated at the pixel level, whereas small FOV (128×128) performs better when only image-level annotations are available. In the main paper we report the results of the best architecture for each setup and defer the full comparison between the two FOVs to the supplementary material.

Training. We employ our proposed training methods to learn the DCNN component of the DeepLab-CRF model of [5]. For SGD, we use a mini-batch of 20-30 images and initial learning rate of 0.001 (0.01 for the final classifier layer), multiplying the learning rate by 0.1 after a fixed number of iterations. We use momentum of 0.9 and a weight decay of 0.0005. Fine-tuning our network on PASCAL VOC 2012 takes about 12 hours on a NVIDIA Tesla K40 GPU.

Similarly to [5], we decouple the DCNN and Dense CRF training stages and learn the CRF parameters by cross validation to maximize IOU segmentation accuracy in a held-out set of 100 Pascal val fully-annotated images. We use 10 mean-field iterations for Dense CRF inference [19]. Note that the IOU scores are typically 3-5% worse if we don't use the CRF for post-processing of the results.

4.2. Pixel-level annotations

We have first reproduced the results of [5]. Training the DeepLab-CRF model with strong pixel-level annotations on PASCAL VOC 2012, we achieve a mean IOU score of 67.6% on val and 70.3% on test; see method DeepLab-CRF-LargeFOV in [5, Table 1].
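For reference, the evaluation metric used throughout (pixel IOU averaged across the 21 classes) can be computed from a confusion matrix as below. This is our own utility sketch; the official benchmark additionally ignores "void" pixels:

    import numpy as np

    def mean_iou(pred, gt, n_classes=21):
        """Mean intersection-over-union between predicted and
        ground-truth label maps."""
        conf = np.zeros((n_classes, n_classes), dtype=np.int64)
        np.add.at(conf, (gt.ravel(), pred.ravel()), 1)       # confusion counts
        inter = np.diag(conf).astype(np.float64)
        union = conf.sum(0) + conf.sum(1) - np.diag(conf)
        return float(np.mean(inter / np.maximum(union, 1)))  # guard empty classes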
4.3. Image-level annotations

Validation results. We evaluate our proposed methods in training the DeepLab-CRF model using image-level weak annotations from the 10,582 PASCAL VOC 2012 train_aug set, generated as described in Sec. 4.1 above. We report the val performance of our two weakly-supervised EM variants described in Sec. 3.2. In the EM-Fixed variant we use b_fg = 5 and b_bg = 3 as fixed foreground and background biases. We found the results to be quite sensitive to the difference b_fg − b_bg but not very sensitive to their absolute values. In the adaptive EM-Adapt variant we constrain at least ρ_bg = 40% of the image area to be assigned to background and at least ρ_fg = 20% of the image area to be assigned to foreground (as specified by the weak label set).

We also examine using weak image-level annotations in addition to a varying number of pixel-level annotations, within the semi-supervised learning scheme of Sec. 3.4. In this Semi setting we employ strong annotations of a subset of PASCAL VOC 2012 train set and use the weak image-level labels from another non-overlapping subset of the train_aug set. We perform segmentation inference for the images that only have image-level labels by means of EM-Fixed, which we have found to perform better than EM-Adapt in the semi-supervised training setting.

The results are summarized in Table 1.

Table 1. VOC 2012 val performance for varying number of pixel-level (strong) and image-level (weak) annotations (Sec. 4.3).

    Method             #Strong   #Weak    val IOU
    EM-Fixed (Weak)       -      10,582    20.8
    EM-Adapt (Weak)       -      10,582    38.2
    EM-Fixed (Semi)      200     10,382    47.6
                         500     10,082    56.9
                         750      9,832    59.8
                       1,000      9,582    62.0
                       1,464      5,000    63.2
                       1,464      9,118    64.6
    Strong             1,464        -      62.5
                      10,582        -      67.6

We see that the EM-Adapt algorithm works much better than the EM-Fixed algorithm when we only have access to image-level annotations, 20.8% vs. 38.2% validation IOU. Using 1,464 pixel-level and 9,118 image-level annotations in the EM-Fixed semi-supervised setting significantly improves performance, yielding 64.6%. Note that image-level annotations are helpful, as training with the 1,464 pixel-level annotations alone yields only 62.5%.

Test results. In Table 2 we report our test results. We compare the proposed methods with the recent MIL-based approaches of [30, 31], which also report results obtained with image-level annotations on the VOC benchmark.

Table 2. VOC 2012 test performance for varying number of pixel-level (strong) and image-level (weak) annotations (Sec. 4.3).

    Method             #Strong   #Weak    test IOU
    MIL-FCN [30]          -       10k       25.7
    MIL-sppxl [31]        -       760k      35.8
    MIL-obj [31]        BING      760k      37.0
    MIL-seg [31]        MCG       760k      40.6
    EM-Adapt (Weak)       -       12k       39.6
    EM-Fixed (Semi)     1.4k      10k       66.2
                        2.9k       9k       68.5
    Strong [5]           12k       -        70.3

Our EM-Adapt method yields 39.6%, which improves over MIL-FCN [30] by a large 13.9% margin. As [31] shows, MIL can become more competitive if additional segmentation information is introduced: using low-level superpixels, MIL-sppxl [31] yields 35.8% and is still inferior to our EM algorithm. Only if augmented with BING [7] or MCG [1] can MIL obtain results comparable to ours (MIL-obj: 37.0%, MIL-seg: 40.6%) [31]. Note, however, that both BING and MCG have been trained with bounding box or pixel-annotated data on the PASCAL train set, and thus both MIL-obj and MIL-seg indirectly rely on bounding box or pixel-level PASCAL annotations.

The more interesting finding of this experiment is that including very few strongly annotated images in the semi-supervised setting significantly improves the performance compared to the pure weakly-supervised baseline. For example, using 2.9k pixel-level annotations along with 9k image-level annotations in the semi-supervised setting yields 68.5%. We would like to highlight that this result surpasses all techniques which are not based on the DCNN+CRF pipeline of [5] (see Table 6), even if trained with all available pixel-level annotations.

4.4. Bounding box annotations

Validation results. In this experiment, we train the DeepLab-CRF model using bounding box annotations from the train_aug set. We estimate the training set segmentations in a pre-processing step using the Bbox-Rect and Bbox-Seg methods described in Sec. 3.3. We assume that we also have access to 100 fully-annotated PASCAL VOC 2012 val images which we have used to cross-validate the value of the single Bbox-Seg parameter α (percentage of the center bounding box area constrained to be foreground). We varied α from 20% to 80%, finding that α = 20% maximizes accuracy in terms of IOU in recovering the ground truth foreground from the bounding box. We also examine the effect of combining these weak bounding box annotations with strong pixel-level annotations, using the semi-supervised learning methods of Sec. 3.4.

The results are summarized in Table 3.

Table 3. VOC 2012 val performance for varying number of pixel-level (strong) and bounding box (weak) annotations (Sec. 4.4).

    Method                 #Strong   #Box     val IOU
    Bbox-Rect (Weak)          -      10,582    52.5
    Bbox-EM-Fixed (Weak)      -      10,582    54.1
    Bbox-Seg (Weak)           -      10,582    60.6
    Bbox-Rect (Semi)        1,464     9,118    62.1
    Bbox-EM-Fixed (Semi)    1,464     9,118    64.8
    Bbox-Seg (Semi)         1,464     9,118    65.1
    Strong                  1,464       -      62.5
                           10,582       -      67.6

When using only bounding box annotations, we see that Bbox-Seg improves over Bbox-Rect by 8.1%, and gets within 7.0% of the strong pixel-level annotation result. We observe that combining 1,464 strong pixel-level annotations with weak bounding box annotations yields 65.1%, only 2.5% worse than the strong pixel-level annotation result. In the semi-supervised learning settings and 1,464 strong annotations, Semi-Bbox-EM-Fixed and Semi-Bbox-Seg perform similarly.
Test results. In Table 4 we report our test results. We compare the proposed methods with the very recent BoxSup approach of [9], which also uses bounding box annotations on the VOC 2012 segmentation benchmark.

Table 4. VOC 2012 test performance for varying number of pixel-level (strong) and bounding box (weak) annotations (Sec. 4.4).

    Method                 #Strong        #Box    test IOU
    BoxSup [9]              MCG            10k      64.6
    BoxSup [9]              1.4k (+MCG)     9k      66.2
    Bbox-Rect (Weak)          -            12k      54.2
    Bbox-Seg (Weak)           -            12k      62.2
    Bbox-Seg (Semi)         1.4k           10k      66.6
    Bbox-EM-Fixed (Semi)    1.4k           10k      66.6
    Bbox-Seg (Semi)         2.9k            9k      68.0
    Bbox-EM-Fixed (Semi)    2.9k            9k      69.0
    Strong [5]               12k            -       70.3

Comparing our alternative Bbox-Rect (54.2%) and Bbox-Seg (62.2%) methods, we see that simple foreground-background segmentation provides much better segmentation masks for DCNN training than using the raw bounding boxes. BoxSup does 2.4% better, however it employs the MCG segmentation proposal mechanism [1], which has been trained with pixel-annotated data on the PASCAL train set; it thus indirectly relies on pixel-level annotations.

When we also have access to pixel-level annotated images, our performance improves to 66.6% (1.4k strong annotations) or 69.0% (2.9k strong annotations). In this semi-supervised setting we outperform BoxSup (66.6% vs. 66.2% with 1.4k strong annotations), although we do not use MCG. Interestingly, Bbox-EM-Fixed improves over Bbox-Seg as we add more strong annotations, and it performs 1.0% better (69.0% vs. 68.0%) with 2.9k strong annotations. This shows that the E-step of our EM algorithm can estimate the object masks better than the foreground-background segmentation pre-processing step when enough pixel-level annotated images are available.

Comparing with Sec. 4.3, note that 2.9k strong + 9k image-level annotations yield 68.5% (Table 2), while 2.9k strong + 9k bounding box annotations yield 69.0% (Table 3). This finding suggests that bounding box annotations add little value over image-level annotations when a sufficient number of pixel-level annotations is also available.

4.5. Exploiting Annotations Across Datasets

Validation results. We present experiments leveraging the 81-label MS-COCO dataset as an additional source of data in learning the DeepLab model for the 21-label PASCAL VOC 2012 segmentation task. We consider three scenarios:

- Cross-Pretrain (Strong): Pre-train DeepLab on MS-COCO, then replace the top-level network weights and fine-tune on Pascal VOC 2012, using pixel-level annotation in both datasets.
- Cross-Joint (Strong): Jointly train DeepLab on Pascal VOC 2012 and MS-COCO, sharing the top-level network weights for the common classes, using pixel-level annotation in both datasets.

- Cross-Joint (Semi): Jointly train DeepLab on Pascal VOC 2012 and MS-COCO, sharing the top-level network weights for the common classes, using the pixel-level labels from PASCAL and varying the number of pixel- and image-level labels from MS-COCO.

In all cases we use strong pixel-level annotations for all 10,582 train_aug PASCAL images.

We report our results on the PASCAL VOC 2012 val in Table 5, also including for comparison our best PASCAL-only 67.6% result exploiting all 10,582 strong annotations as a baseline.

Table 5. VOC 2012 val performance using strong annotations for all 10,582 train_aug PASCAL images and a varying number of strong and weak MS-COCO annotations (Sec. 4.5).

    Method                   #Strong COCO   #Weak COCO   val IOU
    PASCAL-only                   -              -         67.6
    EM-Fixed (Semi)               -           123,287      67.7
    Cross-Joint (Semi)          5,000         118,287      70.0
    Cross-Joint (Strong)        5,000            -         68.7
    Cross-Pretrain (Strong)    123,287           -         71.0
    Cross-Joint (Strong)       123,287           -         71.7

When we employ the weak MS-COCO annotations (EM-Fixed (Semi)) we obtain 67.7% IOU, which does not improve over the PASCAL-only baseline. However, using strong labels from 5,000 MS-COCO images (4.0% of the MS-COCO dataset) and weak labels from the remaining MS-COCO images in the Cross-Joint (Semi) semi-supervised scenario yields 70.0%, a significant 2.4% boost over the baseline. This Cross-Joint (Semi) result is also 1.3% better than the 68.7% performance obtained using only the 5,000 strong and no weak annotations from MS-COCO. As expected, our best results are obtained by using all 123,287 strong MS-COCO annotations, 71.0% for Cross-Pretrain (Strong) and 71.7% for Cross-Joint (Strong). We observe that cross-dataset augmentation improves by 4.1% over the best PASCAL-only result. Using only a small portion of pixel-level annotations and a large portion of image-level annotations in the semi-supervised setting reaps about half of this benefit.

Test results. We report our PASCAL VOC 2012 test results in Table 6. We include results of other leading models from the PASCAL leaderboard. All our models have been trained with pixel-level annotated images on the PASCAL trainval_aug and the MS-COCO 2014 trainval datasets.

Table 6. VOC 2012 test performance using PASCAL and MS-COCO annotations (Sec. 4.5).

    Method                                          test IOU
    MSRA-CFM [8]                                      61.8
    FCN-8s [25]                                       62.2
    Hypercolumn [17]                                  62.6
    TTI-Zoomout-16 [27]                               64.4
    DeepLab-CRF-LargeFOV [5]                          70.3
    BoxSup (Semi, with weak COCO) [9]                 71.0
    DeepLab-CRF-LargeFOV (Multi-scale net) [5]        71.6
    Oxford TVG CRF RNN VOC [41]                       72.0
    Oxford TVG CRF RNN COCO [41]                      74.7
    Cross-Pretrain (Strong)                           72.7
    Cross-Joint (Strong)                              73.0
    Cross-Pretrain (Strong, Multi-scale net)          73.6
    Cross-Joint (Strong, Multi-scale net)             73.9

Methods based on the DCNN+CRF pipeline of DeepLab-CRF [5] are the most competitive, with performance surpassing 70%, even when only trained on PASCAL data. Leveraging the MS-COCO annotations brings about 2% improvement. Our top model yields 73.9%, using the multi-scale network architecture of [5]. Also see [41], which also uses joint PASCAL and MS-COCO training, and further improves performance (74.7%) by end-to-end learning of the DCNN and CRF parameters.

4.6. Qualitative Segmentation Results

In Fig. 6 we provide visual comparisons of the results obtained by the DeepLab-CRF model learned with some of the proposed training methods.

5. Conclusions

The paper has explored the use of weak or partial annotation in training a state of art semantic image segmentation model. Extensive experiments on the challenging PASCAL VOC 2012 dataset have shown that: (1) Using weak annotation solely at the image-level seems insufficient to train a high-quality segmentation model. (2) Using weak bounding-box annotation in conjunction with careful segmentation inference for images in the training set suffices to train a competitive model. (3) Excellent performance is obtained when combining a small number of pixel-level annotated images with a large number of weakly annotated images in a semi-supervised setting, nearly matching the results achieved when all training images have pixel-level annotations. (4) Exploiting extra weak or strong annotations from other datasets can lead to large improvements.
Acknowledgments. This work was partly supported by ARO 62250-CS and NIH 5R01EY022247-03. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research.

2022 Beijing Gaokao English Reading, Passage D: Analysis Notes

This passage concerns quantum computing and quantum computers.

To make the passage easier to follow, we first review a little background on quanta.

As Baidu puts it: the quantum hypothesis dealt a powerful blow to classical physics, pushed physics into the microscopic realm, and laid the foundations of modern physics.

To this day, however, some hypotheses of quantum mechanics still cannot be fully proved, and much about the theory remains to be studied. Since some hypotheses of quantum mechanics can't be fully proved and there is still much to study, the controversy and uncertainty surrounding quantum computers is understandable.

With money chasing the field and the media hyping it, can quantum computing live up to the promises its researchers have made?

(Original text) Quantum computers have been on my mind a lot lately. A friend has been sending me articles on how quantum computers might help solve some of the biggest challenges we face as humans. I've also had exchanges with two quantum-computing experts. One is computer scientist Chris Johnson who I see as someone who helps keep the field honest. The other is physicist Philip Taylor.

(Analysis) The author gets to know the quantum computers on his mind in two ways: first, through the articles his friend has been sending on how quantum computers might help solve some of the biggest challenges we face as humans; and second, through exchanges with two quantum-computing experts.

The Feynman Lectures on Physics, Volume 2 (English Edition)

The second volume of "The Feynman Lectures on Physics" provides a comprehensive introduction to the topics of electromagnetism and matter. Authored by Nobel laureate Richard P. Feynman, this book is known for its clear explanations and engaging writing style.

One of the key subjects covered in this volume is electricity and magnetism. Feynman starts by introducing the concept of electric charge and the fundamental laws that govern electric fields. Readers are then guided through the principles of Gauss's law, electric potential, and capacitance. The discussion of magnetism explores magnetic forces and fields, as well as the principles of electromagnetic induction.

Another important topic in the book is electromagnetic waves. Feynman explains the nature of light as an electromagnetic wave and delves into the properties of light, such as polarization and diffraction. The chapter on Maxwell's equations ties together the laws of electromagnetism and serves as a foundation for understanding modern physics.

In addition to electromagnetism, the book also covers the structure of matter. Feynman discusses the properties of solids, liquids, and gases, as well as the behavior of atoms and molecules. Readers will learn about thermal physics, including concepts such as temperature, heat, and entropy.

Throughout the book, Feynman uses a combination of text, diagrams, and examples to make complex concepts accessible to readers. His engaging storytelling style and insightful commentary add a unique perspective to the study of physics.

Overall, "The Feynman Lectures on Physics, Volume 2" is a valuable resource for students, educators, and anyone interested in the fascinating world of physics. The book's blend of theoretical rigor and practical applications makes it a must-read for anyone looking to deepen their understanding of electromagnetism and matter.
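Since the chapter on Maxwell's equations is singled out above as the piece that ties the volume together, it is worth writing the equations down for reference. In SI units and differential form they read:

    \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
    \nabla \cdot \mathbf{B} = 0, \qquad
    \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
    \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.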

Citations of Hermann Haken's Synergetics Literature

Hermann Haken is a renowned German physicist and one of the pioneers of the fields of complex systems and nonlinear dynamics. He has made many important contributions in these areas, and his work is significant for understanding the behavior of complex systems in nature.

As far as citations are concerned, Haken's research results are widely cited and applied. His book "Synergetics: An Introduction" is the classic statement of his theory of synergetics; it examines in depth questions such as self-organization and structure formation in complex systems. The book has influenced research in many fields, including physics, chemistry, biology, and engineering, so citations of it can be found throughout the literature of those fields.

In addition, Haken's results in nonlinear dynamics are widely cited as well. Concepts he proposed, such as bistability and self-organized patterns, are important for describing the behavior of nonlinear systems, so his work is also frequently cited in the relevant literature.

In short, Haken's research has had a major influence on complex systems and nonlinear dynamics; his books and papers are widely cited and play an important role in advancing research in these fields.


arXiv:math/0509146v1 [math.DG] 7 Sep 2005

NEARLY KÄHLER AND NEARLY PARALLEL G2-STRUCTURES ON SPHERES

THOMAS FRIEDRICH

Abstract. In some other context, the question was raised how many nearly Kähler structures exist on the sphere S^6 equipped with the standard Riemannian metric. In this short note, we prove that, up to isometry, there exists only one. This is a consequence of the description of the eigenspace to the eigenvalue λ = 12 of the Laplacian acting on 2-forms. A similar result concerning nearly parallel G2-structures on the round sphere S^7 holds, too.

[Date: 8th February 2008. 2000 Mathematics Subject Classification: Primary 53C25; Secondary 81T30. Key words and phrases: nearly Kähler structures, nearly parallel G2-structures. Supported by the SFB 647 "Raum, Zeit, Materie" and the SPP 1154 "Globale Differentialgeometrie" of the DFG.]

Consider the 6-dimensional sphere S^6 ⊂ R^7 equipped with its standard metric. Denote by Δ the Hodge-Laplace operator acting on 2-forms of S^6 and consider the space

    E_12 := { ω² ∈ Γ(Λ²(S^6)) : d*ω² = 0, Δ(ω²) = 12·ω² }.

This space is an SO(7)-representation. Moreover, it coincides with the full eigenspace of the Laplace operator acting on 2-forms with eigenvalue λ = 12.

Proposition 1. The SO(7)-representation E_12 is isomorphic to Λ³(R^7). More precisely, for any 2-form ω² ∈ E_12, there exists a unique algebraic 3-form A ∈ Λ³(R^7) such that ω²_x(y, z) = A(x, y, z) holds at any point x ∈ S^6 for any two tangent vectors y, z ∈ T_x(S^6).

Proof. It is easy to check that any 2-form ω² on S^6 defined by a 3-form A ∈ Λ³(R^7) as indicated satisfies the differential equations d*ω² = 0, Δ(ω²) = 12·ω². Consequently, we obtain an SO(7)-equivariant map Λ³(R^7) → E_12. Since Λ³(R^7) is an irreducible SO(7)-representation, the map is injective. On the other hand, by Frobenius reciprocity, one computes the dimension of the eigenspace of the Laplace operator acting on 2-forms to the eigenvalue λ = 12. Its dimension equals 35.

We recall some basic properties of nearly Kähler manifolds in dimension six (see the paper [1]). Let (M^6, J, g) be a nearly Kähler 6-manifold. Then it is an Einstein space with positive scalar curvature Scal > 0. The Kähler form Ω satisfies the differential equations

    d*Ω = 0, Δ(Ω) = (2/5)·Scal·Ω.

In particular, the Kähler form Ω_J of any nearly Kähler structure (S^6, J, g_can) on the standard sphere S^6 is a 2-form on S^6 satisfying the equations d*Ω_J = 0 and Δ(Ω_J) = 12·Ω_J. This observation yields the following result.

Proposition 2. The Kähler form Ω_J of any nearly Kähler structure (S^6, J, g_can) on the standard sphere is given by an algebraic 3-form A ∈ Λ³(R^7) via the formula Ω_J,x(y, z) = A(x, y, z), where x ∈ S^6 is a point in the sphere and y, z ∈ T_x(S^6) are tangent vectors.

Since the Kähler form Ω_J is a non-degenerate 2-form at any point of the sphere S^6, the 3-form A ∈ Λ³(R^7) is a non-degenerate vector cross product in the sense of Gray (see [2], [4], [5]). For purely algebraic reasons it follows that two forms of that type are equivalent under the action of the group SO(7).
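For concreteness, a 3-form of the kind appearing in Proposition 2 can be written out in coordinates. In one common convention (sign conventions differ between references, e.g. Gray [4]), the 3-form encoding the standard vector cross product on R^7 is

    A = e^{123} + e^{145} + e^{167} + e^{246} - e^{257} - e^{347} - e^{356},

where e^{ijk} denotes e^i ∧ e^j ∧ e^k for the standard dual basis e^1, ..., e^7 of R^7; its stabilizer in GL(7, R) is the exceptional group G2 ⊂ SO(7).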
Finally, we obtain the following.

Theorem 1. Let (S^6, J, g_can) be a nearly Kähler structure on the standard 6-sphere. Then the almost complex structure J is conjugated, under the action of the isometry group SO(7), to the standard nearly Kähler structure of S^6.

A similar argument applies in dimension seven, too.

Theorem 2. Let (S^7, ω, g_can) be a nearly parallel G2-structure on the standard 7-sphere. Then it is conjugated, under the action of the isometry group SO(8), to the standard nearly parallel G2-structure of S^7.

Remark. Nearly Kähler structures in dimension six and nearly parallel structures in dimension seven correspond to Riemannian Killing spinors. It is well-known that the isometry group of the spheres S^6 and S^7 acts transitively on the set of Killing spinors of length one. This observation yields a second proof of the latter Theorems (see [3] and [6]).

References
[1] B. Alexandrov, Th. Friedrich, and N. Schoemann, Almost hermitian 6-manifolds revisited, J. Geom. Phys. 53 (2005), 1-30.
[2] R. B. Brown, A. Gray, Vector cross products, Comment. Math. Helv. 42 (1967), 222-236.
[3] Th. Friedrich, I. Kath, A. Moroianu, and U. Semmelmann, On nearly parallel G2-structures, J. Geom. Phys. 23 (1997), 259-286.
[4] A. Gray, Vector cross products on manifolds, Trans. Am. Math. Soc. 141 (1969), 465-504.
[5] A. Gray, Six-dimensional almost complex manifolds defined by means of three-fold vector cross products, Tohoku Math. Journ., II. Ser., 21 (1969), 614-620.
[6] R. Grunewald, Six-dimensional Riemannian manifolds with a real Killing spinor, Ann. Glob. Anal. Geom. 8 (1990), 43-59.

friedric@mathematik.hu-berlin.de
Institut für Mathematik
Humboldt-Universität zu Berlin
Sitz: WBC Adlershof
D-10099 Berlin, Germany
