Reducing Uncertainty In Location Prediction Of Moving Objects In Road Networks


Environmental Policy Analysis, English Edition (translation practice)


In its weak versions, the precautionary principle (PP) holds that scientific uncertainty should not automatically bar adoption of measures to prohibit or otherwise regulate a potentially harmful activity; in its stronger versions, PP further asserts that uncertainty provides an affirmative justification for regulating an activity, or regulating it more stringently, than in the absence of uncertainty. Strong versions of PP hold that regulators should adopt "worst case" presumptions regarding the harms of activities posing an uncertain potential for significant harm; should prohibit such activities or require them to adopt best available technology measures; that regulatory costs should be disregarded or downplayed in such decisions; and that the proponents of such activities should bear the burden of establishing their safety in order to avoid such regulatory controls. The article also considers the relevance for regulatory decisions of uncertainties regarding the costs of regulating an activity as well as uncertainties regarding harms. It further considers the implications of the circumstance that regulatory decisions about a given environmental issue may be made sequentially over time and benefit from additional information developed in the interim between earlier and later decisions. The essay concludes that, while preventive regulation of uncertain risks is often appropriate and should incorporate precautionary elements where warranted by consideration of risk aversion or information acquisition, strong versions of PP do not provide a conceptually sound or socially desirable prescription for regulation.

I. INTRODUCTION: ENVIRONMENTAL DECISION MAKING UNDER UNCERTAINTY AND THE PRECAUTIONARY PRINCIPLE

The following are examples of regulatory decisions involving uncertain risks. In each case, consider whether a regulator should permit the potentially harmful activity to commence or continue, or, alternatively, should prohibit or otherwise regulate it, and consider the implications of sequential regulatory decision making and the opportunity to develop additional information to reduce uncertainties regarding the risks of harm posed by an activity and/or the costs of regulation.

• Whether to prohibit the sale of meat products from cattle that have received bovine growth hormone injections.
• Whether to prohibit the construction of an astronomical observatory atop Mt. Graham in Arizona, the only known habitat of the Mt. Graham red squirrel, a subspecies of the western red squirrel found only on Mt. Graham; other subspecies of the red squirrel are abundant in the western United States.
• Whether to prohibit field releases of crop plants that have been genetically modified using DNA technologies.
• Whether to adopt a National Ambient Air Quality Standard (NAAQS) to limit short-term (10-minute) exposures to elevated levels of sulfur dioxide (SO2). Several laboratory studies indicate that asthmatics exposed to higher short-term SO2 levels experience temporary airway resistance that makes breathing more difficult.
• Whether to prohibit introduction of sucralose, an artificial beverage sweetener that has been touted as safer than saccharin or aspartame, the artificial sweeteners currently in use.
• Whether to prohibit the dumping of wastes of any sort at sea.
• Whether to develop defenses against collisions with the earth by asteroids and other near-earth objects.
• Whether to eliminate the use of chlorine to treat drinking water.
• Whether to prohibit the use of glyphosate ("Roundup"), a broad-spectrum non-selective herbicide that is harmless to animals.
• Whether to prohibit or tightly regulate the conversion of rainforest to agricultural or other uses.
• Whether to immediately initiate strict limits on greenhouse gas emissions in order to limit the potential adverse effects of climate changes attributable to such emissions.

In evaluating these and other environmental regulatory decision-making issues, the extent of available knowledge regarding environmental harm that may be caused by the activity in question can be conceptually classified in three ideal-type categories:

Type 1. The harm that the activity will cause is known and determinate. If, for example, the Mt. Graham observatory is built, the Mt. Graham red squirrel population will be wiped out within 20 years.

Type 2. The harm is probabilistic in character but its probability distribution is known. For example, if the observatory is built, there is a 40% probability that the squirrel population will be wiped out within 20 years and a 60% probability that it will survive another 1000 years, at which point it will become extinct from natural "background" causes. In this situation we deal with a risk of harm, but the risk (comprising both the probability of an adverse effect occurring and the magnitude of the adverse effect if it occurs) is determinate.

Type 3. There is a risk of harm that is uncertain. Thus, the probability of harm occurring, and/or the magnitude of the harm if it occurs, is not determinate and is subject to substantial uncertainty. To take the Mt. Graham example, it may be uncertain, based on current knowledge, whether any adverse effect on the squirrels will occur. If any adverse effect does occur, its magnitude is uncertain. Thus, it may be uncertain what percentage of the population may be lost, whether any given level of loss will result in extinction of the subspecies, and when possible losses or extinction may occur. There may also be cases where the type of harm, if any, that may occur is not known.

The difference between Type 2 cases and Type 3 cases is obviously one of degree, but the distinction between the two ideal types is very useful for purposes of analysis. Very many environmental problems are Type 3 cases, characterized by uncertainty regarding risks of harm, although the nature and degree of uncertainty varies widely from case to case. These uncertainties have many potential causes, including lack of data, limitations in scientific understanding of causal relationships, medical and ecosystem complexity, and "trans-scientific" gaps in the capacities of science.1 There may also be substantial uncertainties regarding the costs of prohibiting or otherwise regulating the activity in question. In many situations, such uncertainties can be reduced by the development of additional information and knowledge, as discussed further below.

The common law traditionally awards damages only ex post for harm that has occurred and has been shown to have been caused by another's activity. It grants injunctive relief ex ante only if an activity poses an imminent and substantial likelihood of serious irreparable harm.
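The Type 2 Mt. Graham example lends itself to a small worked calculation. The sketch below (my own illustration, not from the essay) contrasts an expected-harm analysis with a worst-case presumption of the kind strong PP versions prescribe; the probabilities are the essay's hypothetical figures, while the numeric "harm scores" are assumptions made purely for illustration.

```python
# Probabilities from the essay's hypothetical Type 2 case:
P_EXTINCT_20YRS = 0.40    # squirrel population wiped out within 20 years
P_SURVIVE_1000YRS = 0.60  # survives 1000 years, then extinct from background causes

# Arbitrary harm scores (assumed for illustration, not from the essay):
HARM_EXTINCT_SOON = 100.0   # near-term extinction
HARM_EXTINCT_LATER = 10.0   # eventual extinction from natural causes

# Expected-value analysis weights each outcome by its probability.
expected_harm = (P_EXTINCT_20YRS * HARM_EXTINCT_SOON
                 + P_SURVIVE_1000YRS * HARM_EXTINCT_LATER)

# A worst-case presumption instead treats the worst outcome as decisive.
worst_case_harm = max(HARM_EXTINCT_SOON, HARM_EXTINCT_LATER)

print(f"expected harm:   {expected_harm}")    # 46.0
print(f"worst-case harm: {worst_case_harm}")  # 100.0
```

The gap between the two numbers is the analytical space the essay goes on to examine: a worst-case presumption can mandate regulation that an expected-value analysis would not.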
Many environmental risks of Types 2 and 3 would not qualify for an award of damages or prophylactic relief under this standard. In theory, ex post liability for harm caused could provide the requisite incentives for actors to manage their activities so as to appropriately prevent excessive risks of harm. In practice, however, these incentives have for a variety of reasons proven inadequate to prevent excessive environmental harm from occurring.2 Accordingly, administrative programs of preventive ex ante regulation have been widely adopted in the United States and other countries to regulate activities that pose substantial risks of environmental harm, even in cases where it is not certain that harm will actually occur. International agreements, such as the Vienna Convention for the Protection of the Ozone Layer and the Framework Convention on Climate Change, have also been adopted to address such risks. Preventive regulatory programs have been adopted not only in cases where activities have been shown to cause harm, but also in cases involving risks of harm, including cases of substantial uncertainty in the risk of harm.3 Quantitative risk analysis and cost-benefit analysis are increasingly being used in connection with the preventive approach to regulation.4 Under many preventive regulatory programs, regulators have the burden of establishing a significant risk of harm before imposing regulatory controls,5 although under some programs, such as U.S. FDA new drug and food additive approvals and EPA registration of pesticides, the applicant bears the burden of showing product safety.

In recent years, environmental advocates and many environmental law scholars, particularly in the field of international environmental law, have argued that environmental regulatory decisions and policies should follow a precautionary principle (PP).6 The focus of PP is on appropriate regulatory policy in Type 3 cases, where the risks of harm posed by an activity are characterized by substantial uncertainty. PP advocates argue for a precautionary approach to regulation in the face of such uncertainty. They often criticize prevailing preventive approaches to regulation on the grounds that they place the burden on regulators to show that an activity will cause serious harm or poses a high probability of serious harm before regulatory controls may be adopted. They argue that, given the lack of scientific capacities to predict which activities will cause serious or irreversible harms, this approach results in seriously inadequate environmental protection. They often also contend that existing preventive regulatory approaches give undue weight to costs in establishing controls.7

Various versions of PP, mostly weak ones, have been incorporated or invoked in a number of recent international environmental declarations and conventions, including the Framework Convention on Climate Change8 and the EU Maastricht treaty.9 These documents and the writings of PP advocates and of academics provide widely varying formulations of PP. It has been claimed that PP is already, or is becoming, established as a binding principle of customary international law.10 PP skeptics and critics, however, have contended that the heterogeneity of PP formulations, many of which are quite vague and indeterminate, demonstrates that there is no single or determinate PP.11 Thus, they have concluded that the precautionary principle is a "composite of several value-laden notions and loose, qualitative descriptions" and that accordingly its "operational usefulness ... is doubtful."12 They also deny that PP has been established as customary international law.13

Criticisms of PP as indeterminate and conceptually fuzzy have merit. With very few exceptions, there is a remarkable lack of analytic care or rigor regarding the substance of, and justification for, the various versions of PP by those who advocate or favor their adoption. One can, however, identify four different PP conceptions that have emerged in legal instruments, international and national governmental declarations, advocacy statements, and the academic literature, which can serve as a useful basis for analysis and evaluation. These four versions of PP are as follows:

PP1. Scientific uncertainty should not automatically preclude regulation of activities that pose a potential risk of significant harm (Non-Preclusion PP).

PP2. Regulatory controls should incorporate a margin of safety; activities should be limited below the level at which no adverse effect has been observed or predicted (Margin of Safety PP).

PP3. Activities that present an uncertain potential for significant harm should be subject to best available technology requirements to minimize the risk of harm unless the proponent of the activity shows that they present no appreciable risk of harm (BAT PP).14

PP4. Activities that present an uncertain potential for significant harm should be prohibited unless the proponent of the activity shows that it presents no appreciable risk of harm (Prohibitory PP).

What unites these different formulations is a focus on uncertainty regarding risks as the key factor guiding regulatory decisions. Some discussions of PP blur the distinction between known (Type 2) and uncertain (Type 3) risks, but the most careful commentators make clear that the precautionary principle is addressed to uncertain risks (Type 3) as such.15

PP1 and PP2 are weak versions of precautionary approaches. Unlike the strong versions, PP3 and PP4, they do not mandate regulatory action and do not make uncertainty regarding risks an affirmative justification for such regulation. Thus, PP1 is negative in character; it states that uncertainty should not preclude regulation but does not provide affirmative guidance as to when regulatory controls should be adopted or what form they should take. This is the approach that is most widely invoked in international treaties and declarations. While the exact wording may vary, this principle of non-preclusion always sets up a threshold, e.g. an uncertain risk of serious damage, and then makes the negative prescription that, once that threshold has been triggered, regulators cannot rely on this fact alone to deny regulation.
For example, the Bergen Ministerial Declaration states: "Where there are threats of serious or irreversible damage, lack of full scientific certainty should not be used as a reason for postponing measures to prevent environmental degradation."16 The Cartagena Protocol goes further by clarifying that uncertainty cannot, in and of itself, justify the decision not to regulate, nor, presumably, the alternative decision to impose regulation:

Lack of scientific certainty due to insufficient relevant scientific information and knowledge regarding the extent of the potential adverse effects of a living modified organism shall not prevent [a] Party from taking a decision, as appropriate. Lack of scientific knowledge or scientific consensus should not necessarily be interpreted as indicating a particular level of risk, an absence of risk, or an acceptable risk.17

This principle of non-preclusion rejects the common law position that harm must be shown to have occurred or be imminent before legal liabilities or controls may be imposed. It also rejects the position, often asserted by industry, that significant uncertainty about risks should preclude imposition of preventive regulatory controls. Of all the formulations of PP, this approach is the most often invoked and the most likely to be recognized as part of customary international law; it is already widely accepted that a preventive approach, under which regulatory controls are adopted to prevent or reduce risks of harm even though the magnitude or even the occurrence of harm is uncertain, is justified in at least some circumstances.18 Yet the very generality and lack of specific prescriptions of PP1 may preclude it from being recognized as a binding norm.19

PP2 likewise fails to specify when or what form of regulation should be adopted, but instructs that, whenever regulation is adopted, it should incorporate a margin of safety. Unlike PP1, PP2 is operative only after regulators have made the determination to regulate. Once this decision is made, regulators must first determine the maximum "safe" level of an activity, and then allow the activity only at some level below that maximum (the "margin of safety"). This is a common approach in U.S. environmental law. An example is the Sustainable Fisheries Act of 1996, in which the optimum allowable yield from a fishery "is prescribed on the basis of the maximum sustainable yield from the fishery, as reduced by" relevant factors including "ecological" factors.20 PP2 is consistent with (although it does not necessarily mandate) many commentators' views that PP requires regulators to allow "large margins for error" in risk assessments.21 It represents one formulation of the PP premise that "[g]iven scientific ignorance, prudent pessimism should be favoured over hazardous optimism."22 PP2 is not explicitly set forth in any international agreements and declarations, but its approach is implicit in some international agreements that require or provide for the adoption of precautionary measures.23

The weak versions of PP are fully compatible with, and are often reflected in, many well-established preventive regulatory programs that have been adopted at the domestic level by many countries and by international agreement over the past 30 years. These programs often authorize prophylactic regulation of uncertain risks in appropriate circumstances, even in the absence of a showing that harm will actually occur. In many cases, they explicitly require a margin of safety in setting regulatory standards.24 Thus the weak versions of PP do not represent or justify any basic change in the preventive approach to regulation that has generally prevailed over the past 30 years. They accordingly provide no basis for arguing that existing preventive regulatory programs are not sufficiently "precautionary" and need to be fundamentally changed in order to reflect precautionary principles.

There are, however, important differences between established programs of preventive regulation and the strong versions of PP. Weak precautionary programs generally do not make the existence of uncertainty regarding risks as such a mandatory or distinct basis for imposing regulatory controls. PP3 and PP4, on the other hand, require regulators to regulate, or regulate more stringently, activities that pose risks that are more uncertain relative to risks that are less uncertain, and thus represent a significant change in regulatory concept and result.

Under PP3, when regulators determine that there is a serious but uncertain risk, they must impose BAT measures. For example, the Second International Conference on the Protection of the North Sea calls for parties to:

[R]educ[e] polluting emissions of substances that are persistent, toxic and liable to bio-accumulate at source by the use of the best available technology and other appropriate measures. This applies especially when there is reason to assume that certain damage or harmful effects on the living resources of the sea are likely to be caused by such substances, even where there is no scientific evidence to prove a causal link between emissions and effects ("the principle of precautionary action").25

Such a prescription does not appear to allow regulators to decide what sort of regulation is required, including no regulation: if there is an uncertain risk of serious harm, BAT measures should be imposed. However, some flexibility may remain under PP3, since the intensity of BAT controls may vary depending on the magnitude of the potential risk relative to the costs of controls, in accordance with a principle of proportionality.26

PP4 imposes an even more stringent prescription upon regulators. Under this formulation, if there is an uncertain but serious risk of harm, the activity in question should not be undertaken at all until it is proven to be safe by the proponent of the activity.
Thus, the Final Declaration of the First European "Seas at Risk" Conference provides that:

The "burden of proof" is shifted from the regulator to the person or persons responsible for the potentially harmful activity, who will now have to demonstrate that their actions are not/will not cause harm to the environment. If the "worst case scenario" for a certain activity is serious enough, then even a small amount of doubt as to the safety of that activity is sufficient to stop it taking place.27

While this version of PP presumably allows regulators some latitude to determine how serious an uncertain risk must be to invite regulation, it requires prohibition of the activity once the relevant risk threshold is met.

The strong versions of PP, PP3 and PP4, are the focus of this essay. Accordingly, unqualified references to PP in the following discussion should be understood as referring to the strong versions of PP. Unlike the weak versions of PP and the preventive approach to regulation generally, they make the existence of uncertain risks of significant harm both a sufficient and a mandatory basis for imposing regulatory controls. We may term this the "uncertainty-based potential for harm" prescription for regulation. Different PP formulations incorporating this precept vary in the criteria for determining the potential-for-harm threshold that triggers the requirement of regulation, including how great the probability of harm must be, its character, and its magnitude. Some formulations, for example, stress that the probability of harm must be substantial and the harm that may eventuate must be "serious and irreversible."28 Other formulations enunciate less demanding criteria.29 In some strong PP formulations, once the applicable risk threshold is met, regulation is mandatory; regulatory compliance costs, including the social costs involved in forgoing the benefits of activities subject to regulatory prohibition or restriction, are not included as a factor to be considered in the regulatory decision.30 Some formulations explicitly allow for consideration of costs, but relegate them to a distinctly secondary role, while others introduce the principle of proportionality, tailoring the extent and character of the regulatory response to the gravity of the risk in question.31 Under PP3, for example, the costs of BAT controls might be taken into account in determining whether a given technology is "available." It might be concluded that very costly technology controls are not as a practical matter "available." Under PP4, in cases where potential risks are judged less serious or where the social benefits of the activity are high, prohibitory controls might be adopted for only a limited initial period subject to "sunset" provisions or reconsideration, or field trials may be permitted.32

Strong versions of PP also often hold that the burden of resolving uncertainty should be borne by the proponent of an activity rather than by regulators or opponents of the activity.33 Accordingly, in order to avoid or lift regulatory prohibitions or BAT requirements, the proponent of an activity bears the burden of demonstrating that it does not present a potential for significant harm. Proponents of regulation, however, bear some initial threshold burden of production and persuasion: they must establish that an activity poses risks (albeit uncertain) of harm, including a potential for significant harm. Once that threshold burden is satisfied, however, the burden shifts to the activity proponent to resolve the uncertainty and show that it does not have a potential for significant harm.34

The normative core of the strong versions of PP, which distinguishes PP-based regulation from preventive regulation generally, is the principle that uncertainty regarding risks is an affirmative justification for adopting regulatory controls, or adopting more stringent controls than would be appropriate in the case of activities posing more determinate risks. In the face of uncertainties regarding risk, PP holds that decision makers should err on the side of precaution and environmental protection and, in effect, make "worst case" presumptions about the probability and magnitude of harm that an activity poses; precisely how "worst case" is defined ("reasonable worst case," etc.) varies in different PP formulations.35

The justifications advanced by PP proponents for adopting its prescriptions center on limitations in our ability to predict which activities will cause serious, irreversible environmental harms.36 The predictive capacity of science is limited. For example, science has often been unable to predict, in a sufficiently timely fashion to support effective preventive action, the occurrence of serious environmental harms such as asbestosis, stratospheric ozone depletion, or the ecological harms caused by DDT. Thus, a regulatory policy that requires regulators to demonstrate that an activity causes harm, or even a significant risk of harm, before imposing controls will result in the occurrence of serious environmental harms. Some of these harms, such as biodiversity loss or highly disruptive changes in natural systems resulting from rapid global warming, may be irreversible and may seriously harm future generations. Accordingly, decision makers should err on the side of precaution and protection of the planet by adopting PP-based regulatory controls on activities involving uncertain risks that pose a potential for significant harm.

The PP literature provides little in the way of helpful guidance on what regulators must show in order to establish a potential for harm that triggers PP.37 While some PP proponents appear to assume that nature is inherently vulnerable and precarious rather than resilient, such a general presumption is not sufficient to show that a given activity triggers PP. The bovine growth hormone dispute suggests that a showing that a substance similar in chemical structure to the substance in question can cause harm may be sufficient.38 The Bt corn-Monarch butterfly controversy suggests that a report of a single experimental study, albeit one quite unrepresentative of field conditions, can be enough to trigger PP controls if it is sufficiently widely publicized.39

Under strong versions of PP, once the risk posed by an activity satisfies the threshold that triggers a worst-case presumption, regulators must then follow a set of relatively stringent regulatory prescriptions. They must prohibit or impose BAT requirements on the activity; shift the burden to the activity proponent to show that the activity is "safe" in order to avoid or lift these regulatory requirements; and disregard or downplay regulatory costs in implementing regulatory requirements. Thus, PP can be analyzed as containing two basic components: first, a worst-case presumption for uncertain risks that meet a triggering threshold; second, a set of regulatory decision rules that are mandatory once the presumption is triggered. These components can be analyzed separately.
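The four PP versions defined in the text can be loosely formalized as decision rules. The sketch below is my own illustrative encoding, not anything from the essay; the input flags, thresholds, and return strings are all assumptions made to show how the versions differ in what they mandate.

```python
# Illustrative formalization (my own assumption) of PP1-PP4 as decision rules.
def regulatory_response(pp_version: int, uncertain: bool,
                        potential_harm_significant: bool,
                        proponent_showed_safety: bool) -> str:
    """Return the regulatory posture implied by each PP version."""
    if pp_version == 1:
        # Non-Preclusion PP: uncertainty alone cannot bar regulation,
        # but nothing affirmative is prescribed.
        return "regulation permitted (not mandated)"
    if pp_version == 2:
        # Margin of Safety PP: operative only once regulators decide to act.
        return "if regulating, set limits below the no-adverse-effect level"
    # PP3 and PP4 are triggered by an uncertain potential for significant harm,
    # unless the proponent has carried the burden of showing safety.
    if uncertain and potential_harm_significant and not proponent_showed_safety:
        if pp_version == 3:
            return "impose best-available-technology (BAT) controls"
        if pp_version == 4:
            return "prohibit the activity"
    return "no PP-mandated controls"

print(regulatory_response(4, uncertain=True,
                          potential_harm_significant=True,
                          proponent_showed_safety=False))
# → prohibit the activity
```

The structural point the code makes explicit is the one the essay draws: PP1 and PP2 never *mandate* regulation, whereas PP3 and PP4 do once the uncertainty threshold is crossed, with relief available only if the proponent discharges the burden of proof.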

An English Essay on the Uncertainty Principle


The Uncertainty Principle

The Uncertainty Principle, also known as Heisenberg's Uncertainty Principle, is a fundamental concept in quantum mechanics which states that it is impossible to simultaneously know the exact position and momentum of a particle with absolute certainty. This principle was formulated by the German physicist Werner Heisenberg in 1927 and has since become a cornerstone of modern physics.

According to the Uncertainty Principle, the more precisely one tries to determine the position of a particle, the less precisely the momentum of that particle can be known, and vice versa. This is due to the wave-particle duality of quantum mechanics, which states that particles can exhibit both wave-like and particle-like properties depending on how they are observed or measured.

One of the implications of the Uncertainty Principle is that there are inherent limits to the precision with which certain pairs of physical properties can be measured. For example, the position and momentum of an electron cannot be known simultaneously with arbitrary accuracy. This fundamental limitation has profound implications for our understanding of the microscopic world and has led to the development of new mathematical frameworks and experimental techniques in quantum mechanics.

The Uncertainty Principle has also had a profound impact on our understanding of reality. It challenges the classical notion of a deterministic universe, in which exact predictions of future events would be possible if the initial conditions were known precisely. In the quantum world, however, the Uncertainty Principle introduces an element of inherent randomness and unpredictability.

Despite its name, the Uncertainty Principle is not a statement of our ignorance or our limitations as observers, but rather a fundamental characteristic of the quantum world. It is not a defect of measuring devices; it reflects the inherent probabilistic nature of quantum mechanics.

In conclusion, the Uncertainty Principle is a fundamental principle of quantum mechanics stating that it is impossible to simultaneously know the exact position and momentum of a particle with absolute certainty. It has profound implications for our understanding of the microscopic world and challenges the classical notion of determinism. The Uncertainty Principle highlights the inherent randomness and unpredictability of the quantum world and continues to shape our understanding of the fundamental principles of physics.
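The position-momentum trade-off described above is quantitative: Δx·Δp ≥ ħ/2. The sketch below puts a number on it; the choice of an atomic-scale confinement length (~1×10⁻¹⁰ m) is an illustrative assumption, not part of the essay.

```python
# Numerical sketch of the Heisenberg bound Δx·Δp ≥ ħ/2.
HBAR = 1.054571817e-34  # reduced Planck constant, in J·s (CODATA value)

def min_momentum_uncertainty(delta_x_m: float) -> float:
    """Smallest momentum spread (kg·m/s) allowed for a given position spread."""
    return HBAR / (2 * delta_x_m)

# Confine an electron to roughly an atomic diameter (assumed ~1e-10 m):
dp = min_momentum_uncertainty(1e-10)
print(f"minimum Δp: {dp:.3e} kg·m/s")  # ~5.3e-25 kg·m/s
```

Squeezing the position spread tighter (a smaller `delta_x_m`) makes the minimum momentum spread correspondingly larger, which is exactly the trade-off the essay describes in words.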

An English Essay on Renewable Energy


Renewable energy is a topic of great importance in today's world. It encompasses various sources of energy that can be replenished naturally and sustainably over time. Here are some key points to consider when discussing renewable energy in an English composition:

1. Definition of Renewable Energy: Renewable energy is derived from natural resources that are constantly replenished, such as sunlight, wind, rain, tides, waves, and geothermal heat.

2. Types of Renewable Energy:
- Solar Energy: Harnessed by solar panels that convert sunlight into electricity.
- Wind Energy: Generated by wind turbines that convert the kinetic energy of wind into electrical power.
- Hydropower: Produced by the movement of water in rivers or through tidal forces.
- Biomass Energy: Derived from organic materials such as wood, crops, and waste, which are burned to produce heat or electricity.
- Geothermal Energy: Extracted from the Earth's internal heat, often used for heating and electricity production.

3. Advantages of Renewable Energy:
- Environmental Benefits: Reduces greenhouse gas emissions and dependence on fossil fuels.
- Sustainability: Unlike finite resources, renewable energy sources are virtually inexhaustible.
- Economic Benefits: Can create jobs and stimulate economic growth in the energy sector.
- Energy Security: Diversifies energy sources, reducing reliance on imported fuels.

4. Challenges of Renewable Energy:
- Intermittency: Renewable sources like solar and wind are not always available, requiring energy storage solutions.
- Infrastructure: Requires significant investment in new technologies and grid systems.
- Cost: Although costs are decreasing, initial investment in renewable energy projects can be high.
- Land Use: Large-scale renewable energy projects may require significant land or water areas.

5. Technological Advancements:
- Improvements in solar panel efficiency and wind turbine design.
- Development of energy storage technologies, such as batteries and pumped hydro storage.
- Innovations in smart grid technology to better integrate renewable energy sources.

6. Government Policies and Incentives:
- Subsidies and tax incentives to encourage investment in renewable energy.
- Renewable energy targets and mandates to increase the share of renewable energy in the energy mix.
- Research and development funding to support technological advancements.

7. Future Prospects:
- The potential for renewable energy to meet a growing portion of global energy demand.
- The role of renewable energy in combating climate change and achieving sustainable development goals.

8. Conclusion: The importance of transitioning to a renewable energy-based economy for a cleaner, more sustainable future, and the need for continued research, investment, and policy support to overcome current challenges.

When writing an essay on renewable energy, it's essential to provide a balanced view, highlighting both the benefits and the challenges. Additionally, incorporating examples of how different countries or regions are adopting renewable energy can make the essay more engaging and informative.
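The intermittency point in the list above is often quantified as a capacity factor: the ratio of energy actually delivered to the maximum a plant could deliver running flat out. A minimal sketch follows; the example figures (a 3 MW wind turbine producing 9,000 MWh in a year) are assumptions chosen for illustration.

```python
# Capacity factor: delivered energy / (nameplate capacity × hours in period).
def capacity_factor(energy_mwh: float, capacity_mw: float,
                    hours: float = 8760.0) -> float:
    """Fraction of nameplate output actually delivered over the period."""
    return energy_mwh / (capacity_mw * hours)

# Assumed example: a 3 MW wind turbine delivering 9,000 MWh in one year.
cf = capacity_factor(energy_mwh=9_000, capacity_mw=3.0)
print(f"capacity factor: {cf:.2%}")  # ~34%
```

A value well below 100% is normal for wind and solar, which is why intermittent sources need either storage or complementary dispatchable generation.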

The Edge Effect

In the 1950s, the Polish biologist Yevsky proposed a new theory, the "edge effect." He argued that a species may exhibit greater genetic variation at the edge of its range, and thus give rise to more novel varieties.

Yevsky's theory was borne out by the discovery of peripheral populations. Although a species may share the same form, coloration, and behavior across most of a region, populations at the edge of the range can differ substantially. For example, frogs in a central region may be dark green, dark brown, or yellow, whereas bright red frogs may occur in the western part of the range.

Yevsky further argued that these differences are caused by a constantly changing environment. When a species migrates to a new area, it may encounter a new environment, including new predators and new food choices. To adapt to the new environment, the species needs greater genetic variation. Yevsky therefore regarded the edge effect as the most common mechanism of genetic variation during species migration. Under these conditions, peripheral populations tend to carry more genetic variation and so give rise to new species in the course of migration.

The edge effect has also been observed in other natural systems, for example in the migration of plants and insects. Research has shown that peripheral populations may have greater genetic diversity, such as diversity of color, shape, and variety in plants, and many varietal changes in insects.

The edge effect can also be applied to human society. The margins of society, such as urban suburbs, rural areas, and remote regions, often harbor greater diversity. In these areas people can innovate more easily, for example by founding new companies or developing new products, and their ideas are often more creative because they are exposed to different cultures.

The edge effect therefore plays an important role in both natural systems and human society. It not only aids the evolution of species but can also help people innovate and develop. To stay competitive in a constantly changing world, we should embrace this diversity, making full use of the potential in nature and protecting the diversity of our populations to the greatest extent.

Numerical Calculation of Transition Characteristics of Controlled Diffusion Airfoils for Heavy-Duty Gas Turbines at High Reynolds Numbers

Received: 2021-08-26. First author: WANG Runhe (b. 1997), female, master's degree candidate.

Citation format: WANG Runhe, TONG Xin, QIANG Xiaoqing, et al. Numerical calculation of controlled diffusion airfoils of transition characteristics for heavy-duty gas turbine at high Reynolds number [J]. Aeroengine, 2023, 49(5): 136-142.

WANG Runhe (1), TONG Xin (1), QIANG Xiaoqing (2, 3), DU Zhaohui (1, 3), OUYANG Hua (1, 3)
(1. School of Mechanical and Power Engineering, and 2. School of Aeronautics and Astronautics, Shanghai Jiao Tong University, Shanghai 200240, China; 3. Engineering Research Center of Gas Turbine and Civil Aero Engine, Ministry of Education, Shanghai 201306, China)

Abstract: To study the aerodynamic performance of heavy-duty gas turbine compressor blades at high Reynolds numbers, numerical calculations were performed on a controlled diffusion airfoil (CDA) by solving the Reynolds-averaged Navier-Stokes equations with the Gamma-Theta transition model. By comparing cases with and without Mach number control, the influence of high Reynolds numbers on the aerodynamic performance and transition characteristics of the CDA was analyzed. The results show that without Mach number control, at zero incidence, increasing the Reynolds number from 7×10^5 to 9×10^5 raised the total pressure loss by about 391.95%; under high-Reynolds-number conditions, as the Reynolds number increases, the blade flow loss grows, the usable incidence range narrows, and a shock appears on the suction surface that interferes with the onset of transition. With the Mach number held at Ma = 0.6, at zero incidence, increasing the Reynolds number from 8.2×10^5 to 1×10^7 reduced the total pressure loss by about 38.98%, and the suction-surface transition onset moved forward from 4.78% chord to 1.11% chord; under high-Reynolds-number conditions, the blade flow loss decreases as the Reynolds number increases and the suction-surface transition location moves upstream.
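The loss trends above are parameterized by the chord-based Reynolds number, Re = ρVc/μ. As a quick reference, the sketch below evaluates this definition; the density, velocity, chord, and viscosity values are illustrative assumptions chosen to land in the study's range, not the actual cascade conditions of the paper.

```python
# Chord-based Reynolds number: Re = rho * V * c / mu.
def reynolds_number(rho, velocity, chord, mu):
    """rho [kg/m^3], velocity [m/s], chord [m], mu [Pa*s]."""
    return rho * velocity * chord / mu

# Illustrative (assumed) inlet conditions:
Re = reynolds_number(rho=1.2, velocity=200.0, chord=0.06, mu=1.8e-5)
# 1.2 * 200 * 0.06 / 1.8e-5 = 8.0e5
```

At fixed geometry, sweeping Reynolds number in a simulation campaign amounts to varying one of these four quantities while holding the Mach number fixed, which is the "controlled Mach number" comparison the abstract describes.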

Deterministic Equivalent Noun Explanation

A deterministic equivalent noun explanation is a special kind of usage that helps readers and listeners understand a particular word or words in a passage more clearly. Its defining feature is that one word is used to stand for another so that the intended idea can be expressed more clearly. In daily life, it helps us grasp more easily what another person is saying, and it also helps us express our own ideas more efficiently.

Deterministic equivalent noun explanation is closely related to synonym explanation, but it has its own characteristics. It refers to replacing one word with another word whose meaning is the same; whether in writing or in speech, this usage is very common and is a very useful means of expression.

Its use is very broad. For example, when learning English we often find that words are related to one another; we can then use a deterministic equivalent to make this explicit, for instance replacing "desirable" with "desired", or "ingenious" with "clever", so that the meaning is conveyed more clearly.

In business communication and in literature research, deterministic equivalent noun explanation also plays an important role. If we replace an important technical term with a more easily understood everyday noun, our account becomes clearer and the audience can grasp what we mean. In addition, when carrying out literature research, deterministic equivalent noun explanations can be used to bring in more useful information. Expressing different concepts also calls for such explanations; this makes it easier for the audience to follow the speaker's ideas and lets those ideas be stated in clear, forceful sentences.

It is also widely used in academic writing. In an academic paper, for instance, the author can use a deterministic equivalent noun explanation to connect the current paper with a related piece of prior literature. This makes the argument clearer, helps readers understand the ideas being expressed, and thus makes literature research more efficient.

There is also a special kind of deterministic equivalent noun explanation, called "definitional category cognition," whose feature is that it groups a set of concepts together into a whole. This can help readers more easily understand how a concept is defined within that whole, and it can help readers link a group of concepts together, deepening their understanding of them.

Research on Higher-Order Cross Risk Aversion Behaviors and Their Financial Decision-Making Applications

A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Management: The Research on Higher-order Cross Risk Aversion Behaviors and Its Financial Decision-making

Ph.D. Candidate: Cheng Wen
Major: Management Science and Engineering
Supervisor: Prof. Xue Minggao
Huazhong University of Science and Technology, Wuhan, Hubei 430074, P.R. China. April, 2015

Declaration of Originality: I declare that this dissertation presents the research work I personally carried out under my supervisor's guidance, together with the results I obtained. To the best of my knowledge, apart from content whose sources are explicitly cited in the text, this dissertation contains no research results previously published or written by any other individual or group. All individuals and groups who contributed to the research in this dissertation are clearly acknowledged in the text. I am fully aware that I bear the legal consequences of this declaration.

Signature of the author: ____ Date: ____

Dissertation Copyright Authorization: The author of this dissertation fully understands the university's regulations on retaining and using dissertations, namely: the university has the right to retain the dissertation and to submit copies and electronic versions to relevant national departments or institutions, and to allow the dissertation to be consulted and borrowed. I authorize Huazhong University of Science and Technology to include all or part of this dissertation in relevant databases for retrieval, and to preserve and compile it by photocopying, reduced-scale printing, scanning, or other means of reproduction.

This dissertation is confidential □, and this authorization applies after declassification in the year ____.
This dissertation is not confidential □.
(Please tick "√" in one of the boxes above.) Signature of the author: ____ Signature of the supervisor: ____ Date: ____

Abstract (Doctoral Dissertation, Huazhong University of Science and Technology): To understand and explain real-world economic and financial phenomena, decision analysis under uncertainty typically studies the effect of a decision maker's risk-taking behavior on his or her economic decisions.
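As a concrete illustration of the risk-taking behavior the abstract refers to, the sketch below computes a certainty equivalent under CARA (exponential) utility, a standard construct in risk-aversion analysis. The payoff distribution, risk-aversion coefficient, and function name are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def cara_certainty_equivalent(samples, a):
    # CE solves u(CE) = E[u(X)] for CARA utility u(x) = -exp(-a*x),
    # giving CE = -(1/a) * ln(E[exp(-a*X)]).
    return -np.log(np.mean(np.exp(-a * samples))) / a

rng = np.random.default_rng(1)
x = rng.normal(1.0, 0.5, 200_000)   # risky payoff: mean 1.0, sd 0.5 (assumed)
a = 2.0                             # absolute risk aversion (assumed)
ce = cara_certainty_equivalent(x, a)
# For Gaussian payoffs the closed form is mu - a*sigma^2/2 = 1.0 - 0.25 = 0.75,
# so the Monte Carlo estimate should land near 0.75.
```

The gap between the mean payoff (1.0) and the certainty equivalent (about 0.75) is the risk premium; higher-order risk attitudes refine this picture beyond variance alone.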

A Summary of Sentence Patterns for SCI Writing

Some sentence patterns commonly used in SCI paper writing (Part I).

I. "This has been discussed in many publications" — a sentence often needed in the Introduction.

There are many possible ways to phrase it; here are several that I collected long ago:

A. Solar energy conversion by photoelectrochemical cells has been intensively investigated. (Nature 1991, 353, 737-740)
B. This was demonstrated in a number of studies that showed that composite plasmonic-metal/semiconductor photocatalysts achieved significantly higher rates in various photocatalytic reactions compared with their pure semiconductor counterparts.
C. Several excellent reviews describing these applications are available, and we do not discuss these topics.
D. Much work so far has focused on wide band gap semiconductors for water splitting for the sake of chemical stability. (DOI: 10.1038/NMAT3151)
E. Recent developments of Lewis acids and water-soluble organometallic catalysts have attracted much attention. (Chem. Rev. 2002, 102, 3641-3666)
F. An interesting approach in the use of zeolite as a water-tolerant solid acid was described by Ogawa et al. (Chem. Rev. 2002, 102, 3641-3666)
G. Considerable research efforts have been devoted to the direct transition metal-catalyzed conversion of aryl halides to aryl nitriles. (J. Org. Chem. 2000, 65, 7984-7989)
H. There are many excellent reviews in the literature dealing with the basic concepts of the photocatalytic process and the reader is referred in particular to those by Hoffmann and coworkers, Mills and coworkers, and Kamat. (Metal Oxide Catalysis, 19, p. 755)
I. Nishimiya and Tsutsumi have reported on (proposed) the influence of the Si/Al ratio of various zeolites on the acid strength, which were estimated by calorimetry using ammonia. (Chem. Rev. 2002, 102, 3641-3666)

II. Often used in Results and Discussion: "as shown in the figure".

A. GIXRD patterns in Figure 1A show the bulk structural information on as-deposited films.
B. As shown in Figure 7B, the steady-state current density decreases after cycling between 0.35 and 0.7 V, which is probably due to the dissolution of FeOx.
C. As can be seen from parts a and b of Figure 7, the reaction cycles start with the thermodynamically most favorable VOx structures. (J. Phys. Chem. C 2014, 118, 24950-24958)

"This corroborates ...":

A. This is supported by the appearance in the Ni-doped compounds of an ultraviolet-visible absorption band at 420-520 nm (see Fig. 3 inset), corresponding to an energy range of about 2.9 to 2.3 eV.
B. This is consistent with the observation from SEM-EDS. (Z. Zou et al. / Chemical Physics Letters 332 (2000) 271-277)
C. This indicates a good agreement between the observed and calculated intensities in monoclinic with space group P2/c when the O atoms are included in the model.
D. The results are in good agreement with the observed photocatalytic activity ...
E. Identical conclusions were obtained in studies where the SPR intensity and wavelength were modulated by manipulating the composition, shape, or size of plasmonic nanostructures.
F. It was also found that areas of persistent divergent surface flow coincide with regions where convection appears to be consistently suppressed even when SSTs are above 27.5 °C.

(Part II)

1. "It is worth noting that ...":

A. It must also be mentioned that the recycling of aqueous organic solvent is less desirable than that of pure organic liquid.
B. Another interesting finding is that zeolites with 10-membered ring pores showed high selectivities (>99%) to cyclohexanol, whereas those with 12-membered ring pores, such as mordenite, produced large amounts of dicyclohexyl ether. (Chem. Rev. 2002, 102, 3641-3666)
C. It should be pointed out that the nanometer-scale distribution of electrocatalyst centers on the electrode surface is also a predominant factor for high ORR electrocatalytic activity.
D. Notably, the Ru(II) and Rh(I) complexes possessing the same BINAP chirality form antipodal amino acids as the predominant products. (Angew. Chem. Int. Ed., 2002, 41: 2008-2022)
E. Given the multitude of various transformations published, it is noteworthy that only very few distinct activation methods have been identified. (Chem. Soc. Rev., 2009, 38, 2178-2189)
F. It is important to highlight that these two directing effects will lead to different enantiomers of the products even if both the "H-bond catalyst" and the catalyst acting by steric shielding have the same absolute stereochemistry. (Chem. Soc. Rev., 2009, 38, 2178-2189)
G. It is worthwhile mentioning that these PPNDs can be very stable for several months without the observations of any floating or precipitated dots, which is attributed to the electrostatic repulsions between the positively charged PPNDs resulting in electrosteric stabilization. (Adv. Mater., 2012, 24: 2037-2041)

2. "... remains a challenge":

A. There is thereby an urgent need but it is still a significant challenge to rationally design and delicately tailor the electroactive MTMOs for advanced LIBs, ECs, MOBs, and FCs. (Angew. Chem. Int. Ed. 2014, 53, 1488-1504)
B. However, systems that are sufficiently stable and efficient for practical use have not yet been realized.
C. It remains challenging to develop highly active HER catalysts based on materials that are more abundant at lower costs. (J. Am. Chem. Soc., 2011, 133, 7296-7299)
D. One of the great challenges in the twenty-first century is unquestionably energy storage. (Nature Materials 2005, 4, 366-377)

3. "As is well known":

A. It is well established (accepted) / It is known to all / It is commonly known that many characteristics of functional materials, such as composition, crystalline phase, structural and morphological features, and the sur-/interface properties between the electrode and electrolyte, would greatly influence the performance of these unique MTMOs in electrochemical energy storage/conversion applications. (Angew. Chem. Int. Ed. 2014, 53, 1488-1504)
B. It is generally accepted (believed) that for a-Fe2O3-based sensors the change in resistance is mainly caused by the adsorption and desorption of gases on the surface of the sensor structure. (Adv. Mater. 2005, 17, 582)
C. As we all know, soybean abounds with carbon, nitrogen and oxygen elements owing to the existence of sugar, proteins and lipids. (Chem. Commun., 2012, 48, 9367-9369)
D. There is no denying that their presence may mediate spin moments to align parallel without acting alone to show d0-FM. (Nanoscale, 2013, 5, 3918-3930)

(Part III)

1. "As will be mentioned below ...":

A. As will be described below (also: As we shall see below), as the Si/Al ratio increases, the surface of the zeolite becomes more hydrophobic and possesses stronger affinity for ethyl acetate and the number of acid sites decreases. (Chem. Rev. 2002, 102, 3641-3666)
B. This behavior is to be expected and will be further discussed below. (J. Am. Chem. Soc., 1955, 77, 3701-3707)
C. There are also some small deviations with respect to the flow direction, which we will discuss below. (Science, 2001, 291, 630-633)
D. Below, we will see what this implies.
E. Complete details of this case will be provided at a later time.
F. Many papers simply use "see below", for example: The observation of nanocluster spheres at the ends of the nanowires is suggestive of a VLS growth process (see below). (Science, 1998, 279, 208-211)

2. "This corroborates ...":

A. This is supported by the appearance in the Ni-doped compounds of an ultraviolet-visible absorption band at 420-520 nm (see Fig. 3 inset), corresponding to an energy range of about 2.9 to 2.3 eV.
B. This is consistent with the observation from SEM-EDS. (Chem. Phys. Lett. 2000, 332, 271-277)
C. Identical conclusions were obtained in studies where the SPR intensity and wavelength were modulated by manipulating the composition, shape, or size of plasmonic nanostructures. (Nat. Mater. 2011, DOI: 10.1038/NMAT3151)
D. In addition, the shape of the titration curve versus the PPi/1 ratio, coinciding with that obtained by fluorescent titration studies, suggested that both 2:1 and 1:1 host-to-guest complexes are formed. (J. Am. Chem. Soc. 1999, 121, 9463-9464)
E. This unusual luminescence behavior is in accord with a recent theoretical prediction; MoS2, an indirect bandgap material in its bulk form, becomes a direct bandgap semiconductor when thinned to a monolayer. (Nano Lett., 2010, 10, 1271-1275)

3. "Our work may find applications in ...":

A. Our findings suggest that the use of solar energy for photocatalytic water splitting might provide a viable source for 'clean' hydrogen fuel, once the catalytic efficiency of the semiconductor system has been improved by increasing its surface area and suitable modifications of the surface sites.
B. Along with this green and cost-effective protocol of synthesis, we expect that these novel carbon nanodots have potential applications in bioimaging and electrocatalysis. (Chem. Commun., 2012, 48, 9367-9369)
C. This system could potentially be applied as the gain medium of solid-state organic-based lasers or as a component of high value photovoltaic (PV) materials, where destructive high energy UV radiation would be converted to useful low energy NIR radiation. (Chem. Soc. Rev., 2013, 42, 29-43)
D. Since the use of graphene may enhance the photocatalytic properties of TiO2 under UV and visible-light irradiation, graphene-TiO2 composites may potentially be used to enhance the bactericidal activity. (Chem. Soc. Rev., 2012, 41, 782-796)
E. It is the first report that CQDs are both amino-functionalized and highly fluorescent, which suggests their promising applications in chemical sensing. (Carbon, 2012, 50, 2810-2815)

(Part IV)

1. "X has not yet been discovered / systematically studied":

A. However, systems that are sufficiently stable and efficient for practical use have not yet been realized.
B. Nevertheless, for conventional nanostructured MTMOs as mentioned above, some problematic disadvantages cannot be overlooked. (Angew. Chem. Int. Ed. 2014, 53, 1488-1504)
C. There are relatively few studies devoted to determination of cmc values for block copolymer micelles. (Macromolecules 1991, 24, 1033-1040)
D. This might be the reason why, despite of the great influence of the preparation on the catalytic activity of gold catalysts, no systematic study concerning the synthesis conditions has been published yet. (Applied Catalysis A: General 2002, 226, 1-13)
E. These possibilities remain to be explored.
F. Further effort is required to understand and better control the parameters dominating the particle surface passivation and resulting properties for carbon dots of brighter photoluminescence. (J. Am. Chem. Soc., 2006, 128, 7756-7757)

2. "Due to / because ...":

A. Liquid ammonia is particularly attractive as an alternative to water due to its stability in the presence of strong reducing agents such as alkali metals that are used to access lower oxidation states.
B. The unique nature of the cyanide ligand results from its ability to act both as a σ donor and a π acceptor combined with its negative charge and ambidentate nature.
C. Qdots are also excellent probes for two-photon confocal microscopy because they are characterized by a very large absorption cross section. (Science 2005, 307, 538-544)
D. As a result of the reductive strategy we used and of the strong bonding between the surface and the aryl groups, low residual currents (similar to those observed at a bare electrode) were obtained over a large window of potentials, the same as for the unmodified parent GC electrode. (J. Am. Chem. Soc. 1992, 114, 5883-5884)
E. The small Tafel slope of the defect-rich MoS2 ultrathin nanosheets is advantageous for practical applications, since it will lead to a faster increment of HER rate with increasing overpotential. (Adv. Mater., 2013, 25: 5807-5813)
F. Fluorescent carbon-based materials have drawn increasing attention in recent years owing to exceptional advantages such as high optical absorptivity, chemical stability, biocompatibility, and low toxicity. (Angew. Chem. Int. Ed., 2013, 52: 3953-3957)
G. On the basis of measurements of the heat of immersion of water on zeolites, Tsutsumi et al. claimed that the surface consists of siloxane bondings and is hydrophobic in the region of low Al content. (Chem. Rev. 2002, 102, 3641-3666)
H. Nanoparticle spatial distributions might have a large significance for catalyst stability, given that metal particle growth is a relevant deactivation mechanism for commercial catalysts.

3. "... is important":

A. The inhibition of additional nucleation during growth, in other words, the complete separation of nucleation and growth, is critical (essential, important) for the successful synthesis of monodisperse nanocrystals. (Nature Materials 2004, 3, 891-895)
B. In the current study, Cys, homocysteine (Hcy) and glutathione (GSH) were chosen as model thiol compounds since they play important (significant, vital, critical) roles in many biological processes and monitoring of these thiol compounds is of great importance for diagnosis of diseases. (Chem. Commun., 2012, 48, 1147-1149)
C. This is because according to nucleation theory, what really matters in addition to the change in temperature ΔT (or supersaturation) is the cooling rate. (Chem. Soc. Rev., 2014, 43, 2013-2026)

(Part V)

1. "In contrast / unlike":

A. On the contrary, mononuclear complexes, called single-ion magnets (SIM), have shown hysteresis loops of butterfly/phonon bottleneck type, with negligible coercivity, and therefore with much shorter relaxation times of magnetization. (Angew. Chem. Int. Ed., 2014, 53: 4413-4417)
B. In contrast, the Dy compound has significantly larger value of the transversal magnetic moment already in the ground state (ca. 10^-1 μB), therefore allowing a fast QTM. (Angew. Chem. Int. Ed., 2014, 53: 4413-4417)
C. In contrast to the structural similarity of these complexes, their magnetic behavior exhibits strong divergence. (Angew. Chem. Int. Ed., 2014, 53: 4413-4417)
D. Contrary to other conducting polymer semiconductors, carbon nitride is chemically and thermally stable and does not rely on complicated device manufacturing. (Nature Materials, 2009, 8(1): 76-80)
E. Unlike the spherical particles they are derived from that Rayleigh light-scatter in the blue, these nanoprisms exhibit scattering in the red, which could be useful in developing multicolor diagnostic labels on the basis not only of nanoparticle composition and size but also of shape. (Science 2001, 294, 1901-1903)

2. Find, elucidate, report, confirm — useful verbs include: verify, confirm, elucidate, identify, define, characterize, clarify, establish, ascertain, explain, observe, illuminate, illustrate, demonstrate, show, indicate, exhibit, present, reveal, display, manifest, suggest, propose, estimate, prove, imply, disclose, report, describe, facilitate the identification of. Examples:

A. These stacks appear as nanorods in the two-dimensional TEM images, but tilting experiments confirm that they are nanoprisms. (Science 2001, 294, 1901-1903)
B. Note that TEM shows that about 20% of the nanoprisms are truncated. (Science 2001, 294, 1901-1903)
C. Therefore, these calculations not only allow us to identify the important features in the spectrum of the nanoprisms but also the subtle relation between particle shape and the frequency of the bands that make up their spectra. (Science 2001, 294, 1901-1903)
D. We observed a decrease in intensity of the characteristic surface plasmon band in the ultraviolet-visible (UV-Vis) spectroscopy for the spherical particles at λmax = 400 nm with a concomitant growth of three new bands of λmax = 335 (weak), 470 (medium), and 670 nm (strong), respectively. (Science 2001, 294, 1901-1903)
E. In this article, we present data demonstrating that opiate and nonopiate analgesia systems can be selectively activated by different environmental manipulations and describe the neural circuitry involved. (Science 1982, 216, 1185-1192)
F. This suggests that the cobalt in CoP has a partial positive charge (δ+), while the phosphorus has a partial negative charge (δ-), implying a transfer of electron density from Co to P. (Angew. Chem., 2014, 126: 6828-6832)

3. How to point out the shortcomings of current research:

A. Although these inorganic substructures can exhibit a high density of functional groups, such as bridging OH groups, and the substructures contribute significantly to the adsorption properties of the material, surprisingly little attention has been devoted to the post-synthetic functionalization of the inorganic units within MOFs. (Chem. Eur. J., 2013, 19: 5533-5536)
B. Little is known, however, about the microstructure of this material. (Nature Materials 2013, 12, 554-561)
C. So far, very little information is available, and only in the absorber film, not in the whole operational devices. (Nano Lett., 2014, 14(2), 888-893)
D. In fact it should be noted that very little optimisation work has been carried out on these devices. (Chem. Commun., 2013, 49, 7893-7895)
E. By far the most architectures have been prepared using a solution processed perovskite material, yet a few examples have been reported that have used an evaporated perovskite layer. (Adv. Mater., 2014, 27: 1837-1841)
F. Water balance issues have been effectively addressed in PEMFC technology through a large body of work encompassing imaging, detailed water content and water balance measurements, materials optimization and modeling, but very few of these activities have been undertaken for anion exchange membrane fuel cells, primarily due to limited materials availability and device lifetime. (J. Polym. Sci. Part B: Polym. Phys., 2013, 51: 1727-1735)
G. However, none of these studies tested for Th17 memory, a recently identified T cell that specializes in controlling extracellular bacterial infections at mucosal surfaces. (PNAS, 2013, 111, 787-792)
H. However, uncertainty still remains as to the mechanism by which Li salt addition results in an extension of the cathodic reduction limit. (Energy Environ. Sci., 2014, 7, 232-250)
I. There have been a number of high profile cases where failure to identify the most stable crystal form of a drug has led to severe formulation problems in manufacture. (Chem. Soc. Rev., 2014, 43, 2080-2088)
J. However, these measurements systematically underestimate the amount of ordered material. (Nature Materials 2013, 12, 1038-1044)

(Part VI)

1. "Depends on":

a. This is an important distinction, as the overall activity of a catalyst will depend on the material properties, synthesis method, and other possible species that can be formed during activation. (Nat. Mater. 2017, 16, 225-229)
b. This quantitative partitioning was determined by growing crystals of the 1:1 host-guest complex between ExBox4+ and corannulene. (Nat. Chem. 2014, 6, 177-178)
c. They suggested that the Au particle size may be the decisive factor for achieving highly active Au catalysts. (Acc. Chem. Res., 2014, 47, 740-749)
d. Low-valent late transition-metal catalysis has become indispensable to chemical synthesis, but homogeneous high-valent transition-metal catalysis is underdeveloped, mainly owing to the reactivity of high-valent transition-metal complexes and the challenges associated with synthesizing them. (Nature 2015, 517, 449-454)
e. The polar effect is a remarkable property that enables considerably endergonic C-H abstractions that would not be possible otherwise. (Nature 2015, 525, 87-90)
f. Advances in heterogeneous catalysis must rely on the rational design of new catalysts. (Nat. Nanotechnol. 2017, 12, 100-101)
g. Likely, the origin of the chemoselectivity may be also closely related to the H-bonding with the N or O atom of the nitroso moiety; a similar H-bonding effect is known in enamine-based nitroso chemistry. (Angew. Chem. Int. Ed. 2014, 53: 4149-4153)

2. "Has great potential":

a. The quest for new methodologies to assemble complex organic molecules continues to be a great impetus to research efforts to discover or to optimize new catalytic transformations. (Nat. Chem. 2015, 7, 477-482)
b. Nanosized faujasite (FAU) crystals have great potential as catalysts or adsorbents to more efficiently process present and forthcoming synthetic and renewable feedstocks in oil refining, petrochemistry and fine chemistry. (Nat. Mater. 2015, 14, 447-451)
c. For this purpose, vibrational spectroscopy has proved promising and very useful. (Acc. Chem. Res. 2015, 48, 407-413)
d. While a detailed mechanism remains to be elucidated and there is room for improvement in the yields and selectivities, it should be remarked that chirality transfer upon trifluoromethylation of enantioenriched allylsilanes was shown. (Top. Catal. 2014, 57: 967)
e. The future looks bright for the use of PGMs as catalysts, both on laboratory and industrial scales, because the preparation of most kinds of single-atom metal catalyst is likely to be straightforward, and because characterization of such catalysts has become easier with the advent of techniques that readily discriminate single atoms from small clusters and nanoparticles. (Nature 2015, 525, 325-326)
f. The unique mesostructure of the 3D-dendritic MSNSs with mesopore channels of short length and large diameter is supposed to be the key role in immobilization of active and robust heterogeneous catalysts, and it would have more hopeful prospects in catalytic applications. (ACS Appl. Mater. Interfaces, 2015, 7, 17450-17459)
g. Visible-light photoredox catalysis offers exciting opportunities to achieve challenging carbon-carbon bond formations under mild and ecologically benign conditions. (Acc. Chem. Res., 2016, 49, 1990-1996)

3. "Therefore" — synonyms: therefore, thus, consequently, hence, accordingly, so, as a result. This one is straightforward; the main points to note are the adverbial nature of these words and their flexible use.

An Anti-Occlusion Correlation Filter Tracking Algorithm Based on Edge Detection

TANG Yi
North China University of Technology, Beijing 100144, China

CLC number: TN713; TP391.41; TG441.7. Document code: A. Article ID: 1672-3791(2024)05-0057-04

Abstract: For its convenience, tracking targets with unmanned aerial vehicles is getting more and more attention. Based on the correlation filtering algorithm, the quality of samples is optimized by edge detection, and smoothing constraints are added to the edge detection scoring step, which increases the accuracy of targets included in candidate boxes, and achieves the effects of reducing computational complexity and improving tracking robustness. Adaptive multi-feature fusion is used to enhance the feature expression capability, which improves the accuracy of target tracking. The occlusion detection mechanism and the adaptive updating learning rate are introduced to reduce the impact of occlusion on filtering templates, which improves the success rate of target tracking. Qualitative evaluation and quantitative evaluation are conducted through experiments on the OTB-2015 and UAV123 datasets, which demonstrates the superiority of the studied algorithm over other tracking algorithms.

Key words: unmanned aerial vehicle; target tracking; correlation filtering; multi-feature fusion; edge detection

In recent years, UAVs have become a hot topic, and UAVs with different uses frequently appear in the public eye.
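The paper does not spell out its occlusion-detection mechanism here, but a common cue in correlation filter trackers is the peak-to-sidelobe ratio (PSR) of the response map: a low PSR suggests the target is occluded, so the template update can be paused. The sketch below trains a minimal MOSSE-style filter and computes a PSR; the function names, the regularization value, and the exclusion window are illustrative assumptions, not the algorithm studied in the paper.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-3):
    # Closed-form correlation filter in the Fourier domain (MOSSE-style):
    # H* = (G * conj(F)) / (F * conj(F) + lambda)
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H, patch):
    # Response map: correlate the filter with a new patch.
    F = np.fft.fft2(patch)
    return np.real(np.fft.ifft2(H * F))

def psr(response, exclude=5):
    # Peak-to-sidelobe ratio; a small window around the peak is excluded
    # so that only the sidelobe statistics enter the ratio.
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    side = response[mask]
    return (peak - side.mean()) / (side.std() + 1e-8)
```

Training on a patch with a Gaussian target response and re-detecting on the same patch reproduces a sharp peak at the target center with a high PSR; under occlusion the response flattens and the PSR drops, which is the trigger for freezing the learning rate.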

Computing the Uncertainty of Geometric Primitives and Transformations

The computation of the uncertainty of geometric primitives and transformations is an important problem in computer vision. The reliable estimation of the positional error of feature points and the estimation of the uncertainty of transformation matrices are essential processing steps in many applications such as registration, alignment, and stereo. The uncertainty of various geometric parameters is often estimated by performing a statistical analysis, which has the disadvantage that the computations involved become complicated unless sufficiently simplifying, but often unrealistic assumptions are introduced for the probability density functions. To avoid extensive calculations and unrealistic assumptions, we examined the use of hard convex sets to efficiently model the uncertainty of geometric properties in digital images [1] [2] [3] [4]. Using convex sets to model the positional uncertainty of a point as uncertainty regions allows us to use the notion of an uncertainty polytope to model the uncertainty of an affine transformation [5]. The
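The idea of propagating a hard convex uncertainty set through an affine transformation can be made concrete: an affine map sends a convex polytope to the convex polytope spanned by the images of its vertices, so transforming the corner points of a point's uncertainty box is exact. The helper below is an illustrative sketch of that property, not the specific uncertainty-polytope construction of the cited work.

```python
import numpy as np

def affine_image_of_box(A, t, center, half_widths):
    # Uncertainty region of a 2-D point modeled as an axis-aligned box
    # (a simple hard convex set). The affine map x -> A x + t carries the
    # box to the convex hull of its four transformed corners, so it
    # suffices to transform the vertices.
    cx, cy = center
    hx, hy = half_widths
    corners = np.array([[cx - hx, cy - hy],
                        [cx - hx, cy + hy],
                        [cx + hx, cy - hy],
                        [cx + hx, cy + hy]])
    return corners @ np.asarray(A).T + np.asarray(t)
```

For example, scaling x by 2 and translating by (1, -1) maps the unit box around the origin to a parallelogram whose vertices bound the transformed uncertainty region, with no probability-density assumptions needed.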

VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection


VoxelNet:End-to-End Learning for Point Cloud Based3D Object DetectionYin ZhouApple Inc****************Oncel TuzelApple Inc****************AbstractAccurate detection of objects in3D point clouds is a central problem in many applications,such as autonomous navigation,housekeeping robots,and augmented/virtual re-ality.To interface a highly sparse LiDAR point cloud with a region proposal network(RPN),most existing efforts have focused on hand-crafted feature representations,for exam-ple,a bird’s eye view projection.In this work,we remove the need of manual feature engineering for3D point clouds and propose VoxelNet,a generic3D detection network that unifies feature extraction and bounding box prediction into a single stage,end-to-end trainable deep network.Specifi-cally,VoxelNet divides a point cloud into equally spaced3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly in-troduced voxel feature encoding(VFE)layer.In this way, the point cloud is encoded as a descriptive volumetric rep-resentation,which is then connected to a RPN to generate detections.Experiments on the KITTI car detection bench-mark show that VoxelNet outperforms the state-of-the-art LiDAR based3D detection methods by a large margin.Fur-thermore,our network learns an effective discriminative representation of objects with various geometries,leading to encouraging results in3D detection of pedestrians and cyclists,based on only LiDAR.1.IntroductionPoint cloud based3D object detection is an important component of a variety of real-world applications,such as autonomous navigation[11,14],housekeeping robots[26], and augmented/virtual reality[27].Compared to image-based detection,LiDAR provides reliable depth informa-tion that can be used to accurately localize objects and characterize their shapes[21,5].However,unlike im-ages,LiDAR point clouds are sparse and have highly vari-able point density,due to factors such as non-uniform sampling of 
the3D space,effective range of the sensors, occlusion,and the relative pose.To handle these chal-lenges,many approaches manually crafted featurerepresen-Figure1.V oxelNet directly operates on the raw point cloud(no need for feature engineering)and produces the3D detection re-sults using a single end-to-end trainable network.tations for point clouds that are tuned for3D object detec-tion.Several methods project point clouds into a perspec-tive view and apply image-based feature extraction tech-niques[28,15,22].Other approaches rasterize point clouds into a3D voxel grid and encode each voxel with hand-crafted features[41,9,37,38,21,5].However,these man-ual design choices introduce an information bottleneck that prevents these approaches from effectively exploiting3D shape information and the required invariances for the de-tection task.A major breakthrough in recognition[20]and detection[13]tasks on images was due to moving from hand-crafted features to machine-learned features.Recently,Qi et al.[29]proposed PointNet,an end-to-end deep neural network that learns point-wise features di-rectly from point clouds.This approach demonstrated im-pressive results on3D object recognition,3D object part segmentation,and point-wise semantic segmentation tasks.In[30],an improved version of PointNet was introduced which enabled the network to learn local structures at dif-ferent scales.To achieve satisfactory results,these two ap-proaches trained feature transformer networks on all input points(∼1k points).Since typical point clouds obtained using LiDARs contain∼100k points,training the architec-1Figure2.V oxelNet architecture.The feature learning network takes a raw point cloud as input,partitions the space into voxels,and transforms points within each voxel to a vector representation characterizing the shape information.The space is represented as a sparse 4D tensor.The convolutional middle layers processes the4D tensor to aggregate spatial context.Finally,a RPN generates the3D 
detection.tures as in[29,30]results in high computational and mem-ory requirements.Scaling up3D feature learning networks to orders of magnitude more points and to3D detection tasks are the main challenges that we address in this paper.Region proposal network(RPN)[32]is a highly opti-mized algorithm for efficient object detection[17,5,31, 24].However,this approach requires data to be dense and organized in a tensor structure(e.g.image,video)which is not the case for typical LiDAR point clouds.In this pa-per,we close the gap between point set feature learning and RPN for3D detection task.We present V oxelNet,a generic3D detection framework that simultaneously learns a discriminative feature represen-tation from point clouds and predicts accurate3D bounding boxes,in an end-to-end fashion,as shown in Figure2.We design a novel voxel feature encoding(VFE)layer,which enables inter-point interaction within a voxel,by combin-ing point-wise features with a locally aggregated feature. Stacking multiple VFE layers allows learning complex fea-tures for characterizing local3D shape information.Specif-ically,V oxelNet divides the point cloud into equally spaced 3D voxels,encodes each voxel via stacked VFE layers,and then3D convolution further aggregates local voxel features, transforming the point cloud into a high-dimensional volu-metric representation.Finally,a RPN consumes the vol-umetric representation and yields the detection result.This efficient algorithm benefits both from the sparse point struc-ture and efficient parallel processing on the voxel grid.We evaluate V oxelNet on the bird’s eye view detection and the full3D detection tasks,provided by the KITTI benchmark[11].Experimental results show that V oxelNet outperforms the state-of-the-art LiDAR based3D detection methods by a large margin.We also demonstrate that V oxel-Net achieves highly encouraging results in detecting pedes-trians and cyclists from LiDAR point cloud.1.1.Related WorkRapid development of3D sensor 
technology has motivated researchers to develop efficient representations to detect and localize objects in point clouds. Some of the earlier methods for feature representation are [39, 8, 7, 19, 40, 33, 6, 25, 1, 34, 2]. These hand-crafted features yield satisfactory results when rich and detailed 3D shape information is available. However, their inability to adapt to more complex shapes and scenes, and to learn the required invariances from data, resulted in limited success for uncontrolled scenarios such as autonomous navigation.

Given that images provide detailed texture information, many algorithms inferred the 3D bounding boxes from 2D images [4, 3, 42, 43, 44, 36]. However, the accuracy of image-based 3D detection approaches is bounded by the accuracy of the depth estimation.

Several LiDAR-based 3D object detection techniques utilize a voxel grid representation. [41, 9] encode each non-empty voxel with 6 statistical quantities that are derived from all the points contained within the voxel. [37] fuses multiple local statistics to represent each voxel. [38] computes the truncated signed distance on the voxel grid. [21] uses binary encoding for the 3D voxel grid. [5] introduces a multi-view representation for a LiDAR point cloud by computing a multi-channel feature map in the bird's eye view and the cylindrical coordinates in the frontal view. Several other studies project point clouds onto a perspective view and then use image-based feature encoding schemes [28, 15, 22].

There are also several multi-modal fusion methods that combine images and LiDAR to improve detection accuracy [10, 16, 5]. These methods provide improved performance compared to LiDAR-only 3D detection, particularly for small objects (pedestrians, cyclists) or when the objects are far, since cameras provide an order of magnitude more measurements than LiDAR. However, the need for an additional camera that is time synchronized and calibrated with the LiDAR restricts their use and makes the solution more sensitive to sensor failure modes. In this
work we focus on LiDAR-only detection.

1.2. Contributions

• We propose a novel end-to-end trainable deep architecture for point-cloud-based 3D detection, VoxelNet, that directly operates on sparse 3D points and avoids information bottlenecks introduced by manual feature engineering.
• We present an efficient method to implement VoxelNet which benefits both from the sparse point structure and efficient parallel processing on the voxel grid.
• We conduct experiments on the KITTI benchmark and show that VoxelNet produces state-of-the-art results in LiDAR-based car, pedestrian, and cyclist detection benchmarks.

2. VoxelNet

In this section we explain the architecture of VoxelNet, the loss function used for training, and an efficient algorithm to implement the network.

2.1. VoxelNet Architecture

The proposed VoxelNet consists of three functional blocks: (1) Feature learning network, (2) Convolutional middle layers, and (3) Region proposal network [32], as illustrated in Figure 2. We provide a detailed introduction of VoxelNet in the following sections.

2.1.1 Feature Learning Network

Voxel Partition Given a point cloud, we subdivide the 3D space into equally spaced voxels as shown in Figure 2. Suppose the point cloud encompasses 3D space with range D, H, W along the Z, Y, X axes respectively. We define each voxel of size v_D, v_H, and v_W accordingly. The resulting 3D voxel grid is of size D' = D/v_D, H' = H/v_H, W' = W/v_W. Here, for simplicity, we assume D, H, W are a multiple of v_D, v_H, v_W.

Grouping We group the points according to the voxel they reside in. Due to factors such as distance, occlusion, the object's relative pose, and non-uniform sampling, the LiDAR

Figure 3. Voxel feature encoding layer.

point cloud is sparse and has highly variable point density throughout the space. Therefore, after grouping, a voxel will contain a variable number of points. An illustration is shown in
Figure 2, where Voxel-1 has significantly more points than Voxel-2 and Voxel-4, while Voxel-3 contains no point.

Random Sampling Typically a high-definition LiDAR point cloud is composed of ∼100k points. Directly processing all the points not only imposes increased memory/efficiency burdens on the computing platform, but the highly variable point density throughout the space might also bias the detection. To this end, we randomly sample a fixed number, T, of points from those voxels containing more than T points. This sampling strategy has two purposes: (1) computational savings (see Section 2.3 for details); and (2) decreasing the imbalance of points between the voxels, which reduces the sampling bias and adds more variation to training.

Stacked Voxel Feature Encoding The key innovation is the chain of VFE layers. For simplicity, Figure 2 illustrates the hierarchical feature encoding process for one voxel. Without loss of generality, we use VFE Layer-1 to describe the details in the following paragraph. Figure 3 shows the architecture for VFE Layer-1.

Denote V = {p_i = [x_i, y_i, z_i, r_i]^T ∈ R^4}_{i=1...t} as a non-empty voxel containing t ≤ T LiDAR points, where p_i contains the XYZ coordinates for the i-th point and r_i is the received reflectance. We first compute the local mean as the centroid of all the points in V, denoted as (v_x, v_y, v_z). Then we augment each point p_i with the relative offset w.r.t.
the centroid and obtain the input feature set V_in = {p̂_i = [x_i, y_i, z_i, r_i, x_i − v_x, y_i − v_y, z_i − v_z]^T ∈ R^7}_{i=1...t}. Next, each p̂_i is transformed through the fully connected network (FCN) into a feature space, where we can aggregate information from the point features f_i ∈ R^m to encode the shape of the surface contained within the voxel. The FCN is composed of a linear layer, a batch normalization (BN) layer, and a rectified linear unit (ReLU) layer. After obtaining point-wise feature representations, we use element-wise MaxPooling across all f_i associated to V to get the locally aggregated feature f̃ ∈ R^m for V. Finally, we augment each f_i with f̃ to form the point-wise concatenated feature as f_i^out = [f_i^T, f̃^T]^T ∈ R^{2m}. Thus we obtain the output feature set V_out = {f_i^out}_{i=1...t}. All non-empty voxels are encoded in the same way and they share the same set of parameters in the FCN.

We use VFE-i(c_in, c_out) to represent the i-th VFE layer that transforms input features of dimension c_in into output features of dimension c_out. The linear layer learns a matrix of size c_in × (c_out/2), and the point-wise concatenation yields the output of dimension c_out.

Because the output feature combines both point-wise features and the locally aggregated feature, stacking VFE layers encodes point interactions within a voxel and enables the final feature representation to learn descriptive shape information. The voxel-wise feature is obtained by transforming the output of VFE-n into R^C via FCN and applying element-wise Maxpool, where C is the dimension of the voxel-wise feature, as shown in Figure 2.

Sparse Tensor Representation By processing only the non-empty voxels, we obtain a list of voxel features, each uniquely associated to the spatial coordinates of a particular non-empty voxel. The obtained list of voxel-wise features can be represented as a sparse 4D tensor, of size C × D' × H' × W', as shown in Figure 2. Although the point cloud contains ∼100k points, more than 90% of voxels typically are empty. Representing non-empty voxel
features as a sparse tensor greatly reduces the memory usage and computation cost during backpropagation, and it is a critical step in our efficient implementation.

2.1.2 Convolutional Middle Layers

We use ConvMD(c_in, c_out, k, s, p) to represent an M-dimensional convolution operator, where c_in and c_out are the number of input and output channels, and k, s, and p are the M-dimensional vectors corresponding to kernel size, stride size, and padding size respectively. When the size across the M dimensions is the same, we use a scalar to represent the size, e.g. k for k = (k, k, k).

Each convolutional middle layer applies 3D convolution, a BN layer, and a ReLU layer sequentially. The convolutional middle layers aggregate voxel-wise features within a progressively expanding receptive field, adding more context to the shape description. The detailed sizes of the filters in the convolutional middle layers are explained in Section 3.

2.1.3 Region Proposal Network

Recently, region proposal networks [32] have become an important building block of top-performing object detection frameworks [38, 5, 23]. In this work, we make several key modifications to the RPN architecture proposed in [32], and combine it with the feature learning network and convolutional middle layers to form an end-to-end trainable pipeline.

The input to our RPN is the feature map provided by the convolutional middle layers. The architecture of this network is illustrated in Figure 4. The network has three blocks of fully convolutional layers. The first layer of each block downsamples the feature map by half via a convolution with a stride size of 2, followed by a sequence of convolutions of stride 1 (×q means q applications of the filter).
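The VFE computation described in Section 2.1.1 can be sketched in NumPy. This is a minimal illustration, not the trained model: the weights are random, and BN and the bias are omitted for brevity.

```python
import numpy as np

def vfe_layer(points, W):
    """One voxel feature encoding (VFE) layer on the points of a single voxel.

    points: (t, c_in) array of per-point features.
    W:      (c_in, c_out/2) linear weights (BN and bias omitted for brevity).
    Returns the (t, c_out) point-wise concatenated features.
    """
    f = np.maximum(points @ W, 0.0)           # FCN: linear + ReLU
    f_agg = f.max(axis=0, keepdims=True)      # element-wise max-pool over the voxel
    # concatenate each point-wise feature with the locally aggregated feature
    return np.concatenate([f, np.repeat(f_agg, f.shape[0], axis=0)], axis=1)

# Augment raw points [x, y, z, r] with offsets from the voxel centroid -> R^7
rng = np.random.default_rng(0)
pts = rng.normal(size=(5, 4))                 # 5 points in one voxel
centroid = pts[:, :3].mean(axis=0)
pts_in = np.concatenate([pts, pts[:, :3] - centroid], axis=1)   # (5, 7)

W1 = rng.normal(size=(7, 16))                 # VFE-1(7, 32): 7 -> 16, concat -> 32
out = vfe_layer(pts_in, W1)
print(out.shape)                              # (5, 32)
```

Note that the second half of every row is identical: it is the shared locally aggregated feature, which is what lets stacked layers encode inter-point interaction within the voxel.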
After each convolution layer, BN and ReLU operations are applied. We then upsample the output of every block to a fixed size and concatenate them to construct a high resolution feature map. Finally, this feature map is mapped to the desired learning targets: (1) a probability score map and (2) a regression map.

Figure 5. Illustration of the efficient implementation.

2.2. Loss Function

Let {a_i^pos}_{i=1...N_pos} be the set of N_pos positive anchors and {a_j^neg}_{j=1...N_neg} be the set of N_neg negative anchors. We parameterize a 3D ground truth box as (x_c^g, y_c^g, z_c^g, l^g, w^g, h^g, θ^g), where x_c^g, y_c^g, z_c^g represent the center location, l^g, w^g, h^g are the length, width, and height of the box, and θ^g is the yaw rotation around the Z-axis. To retrieve the ground truth box from a matching positive anchor parameterized as (x_c^a, y_c^a, z_c^a, l^a, w^a, h^a, θ^a), we define the residual vector u* ∈ R^7 containing the 7 regression targets corresponding to the center location ∆x, ∆y, ∆z, the three dimensions ∆l, ∆w, ∆h, and the rotation ∆θ, which are computed as:

∆x = (x_c^g − x_c^a)/d^a,  ∆y = (y_c^g − y_c^a)/d^a,  ∆z = (z_c^g − z_c^a)/h^a,
∆l = log(l^g/l^a),  ∆w = log(w^g/w^a),  ∆h = log(h^g/h^a),  ∆θ = θ^g − θ^a    (1)

where d^a = sqrt((l^a)^2 + (w^a)^2) is the diagonal of the base of the anchor box. Here, we aim to directly estimate the oriented 3D box and normalize ∆x and ∆y homogeneously with the diagonal d^a, which is different from [32, 38, 22, 21, 4, 3, 5]. We define the loss function as follows:

L = α (1/N_pos) Σ_i L_cls(p_i^pos, 1) + β (1/N_neg) Σ_j L_cls(p_j^neg, 0) + (1/N_pos) Σ_i L_reg(u_i, u_i*)    (2)

where p_i^pos and p_j^neg represent the softmax output for the positive anchor a_i^pos and the negative anchor a_j^neg respectively, while u_i ∈ R^7 and u_i* ∈ R^7 are the regression output and ground truth for the positive anchor a_i^pos. The first two terms are the normalized classification loss for {a_i^pos}_{i=1...N_pos} and {a_j^neg}_{j=1...N_neg}, where L_cls stands for
the binary cross-entropy loss, and α, β are positive constants balancing the relative importance. The last term L_reg is the regression loss, where we use the SmoothL1 function [12, 32].

2.3. Efficient Implementation

GPUs are optimized for processing dense tensor structures. The problem with working directly with the point cloud is that the points are sparsely distributed across space and each voxel has a variable number of points. We devised a method that converts the point cloud into a dense tensor structure where stacked VFE operations can be processed in parallel across points and voxels.

The method is summarized in Figure 5. We initialize a K × T × 7 dimensional tensor structure to store the voxel input feature buffer, where K is the maximum number of non-empty voxels, T is the maximum number of points per voxel, and 7 is the input encoding dimension for each point. The points are randomized before processing. For each point in the point cloud, we check if the corresponding voxel already exists. This lookup operation is done efficiently in O(1) using a hash table where the voxel coordinate is used as the hash key. If the voxel is already initialized, we insert the point to the voxel location if there are less than T points; otherwise the point is ignored. If the voxel is not initialized, we initialize a new voxel, store its coordinate in the voxel coordinate buffer, and insert the point to this voxel location. The voxel input feature and coordinate buffers can be constructed via a single pass over the point list, therefore its complexity is O(n). To further improve the memory/compute efficiency it is possible to only store a limited number of voxels (K) and ignore points coming from voxels with few points.

After the voxel input buffer is constructed, the stacked VFE only involves point-level and voxel-level dense operations which can be computed on a GPU in parallel. Note that, after the concatenation operations in VFE, we reset the features corresponding to empty points to zero such that they do
not affect the computed voxel features. Finally, using the stored coordinate buffer we reorganize the computed sparse voxel-wise structures to the dense voxel grid. The following convolutional middle layers and RPN operations work on a dense voxel grid which can be efficiently implemented on a GPU.

3. Training Details

In this section, we explain the implementation details of VoxelNet and the training procedure.

3.1. Network Details

Our experimental setup is based on the LiDAR specifications of the KITTI dataset [11].

Car Detection For this task, we consider point clouds within the range of [−3, 1] × [−40, 40] × [0, 70.4] meters along the Z, Y, X axes respectively. Points that are projected outside of the image boundaries are removed [5]. We choose a voxel size of v_D = 0.4, v_H = 0.2, v_W = 0.2 meters, which leads to D' = 10, H' = 400, W' = 352. We set T = 35 as the maximum number of randomly sampled points in each non-empty voxel. We use two VFE layers, VFE-1(7, 32) and VFE-2(32, 128). The final FCN maps the VFE-2 output to R^128. Thus our feature learning net generates a sparse tensor of shape 128 × 10 × 400 × 352. To aggregate voxel-wise features, we employ three convolutional middle layers sequentially as Conv3D(128, 64, 3, (2,1,1), (1,1,1)), Conv3D(64, 64, 3, (1,1,1), (0,1,1)), and Conv3D(64, 64, 3, (2,1,1), (1,1,1)), which yields a 4D tensor of size 64 × 2 × 400 × 352. After reshaping, the input to the RPN is a feature map of size 128 × 400 × 352, where the dimensions correspond to the channel, height, and width of the 3D tensor. Figure 4 illustrates the detailed network architecture for this task. Unlike [5], we use only one anchor size, l^a = 3.9, w^a = 1.6, h^a = 1.56 meters, centered at z_c^a = −1.0 meters with two rotations, 0 and 90 degrees.
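The single-pass buffer construction of Section 2.3 can be sketched with a Python dictionary as the hash table. The voxel size matches the car-detection setting above; K and the point layout (first three columns taken as the Z, Y, X coordinates) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def build_voxel_buffers(points, voxel_size=(0.4, 0.2, 0.2), K=6000, T=35):
    """Single O(n) pass: group points into at most K voxels with <= T points each.

    points: (N, 7) array of encoded points; columns 0..2 are assumed to be
    the Z, Y, X coordinates. Returns the (K, T, 7) voxel input feature
    buffer, the voxel coordinate buffer, and the per-voxel point counts.
    """
    feature_buf = np.zeros((K, T, 7), dtype=np.float32)
    coord_buf = []                       # voxel coordinate buffer
    counts = np.zeros(K, dtype=np.int64)
    index = {}                           # hash table: voxel coordinate -> slot
    vs = np.asarray(voxel_size)
    for p in points:
        key = tuple((p[:3] // vs).astype(int))
        if key not in index:
            if len(coord_buf) == K:      # ignore points beyond K voxels
                continue
            index[key] = len(coord_buf)
            coord_buf.append(key)
        slot = index[key]
        if counts[slot] < T:             # ignore points beyond T per voxel
            feature_buf[slot, counts[slot]] = p
            counts[slot] += 1
    return feature_buf, np.array(coord_buf), counts
```

Because each point is visited once and the dictionary lookup is O(1), the whole pass is O(n), which is the property the paper relies on for efficiency.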
Our anchor matching criteria are as follows: An anchor is considered positive if it has the highest Intersection over Union (IoU) with a ground truth box, or if its IoU with a ground truth box is above 0.6 (in bird's eye view). An anchor is considered negative if its IoU with all ground truth boxes is less than 0.45. We treat anchors as don't care if they have 0.45 ≤ IoU ≤ 0.6 with any ground truth box. We set α = 1.5 and β = 1 in Eqn. 2.

Pedestrian and Cyclist Detection The input range¹ is [−3, 1] × [−20, 20] × [0, 48] meters along the Z, Y, X axes respectively. We use the same voxel size as for car detection, which yields D' = 10, H' = 200, W' = 240. We set T = 45 in order to obtain more LiDAR points for better capturing shape information. The feature learning network and convolutional middle layers are identical to the networks used in the car detection task. For the RPN, we make one modification to block 1 in Figure 4 by changing the stride size in the first 2D convolution from 2 to 1. This allows finer resolution in anchor matching, which is necessary for detecting pedestrians and cyclists. We use anchor size l^a = 0.8, w^a = 0.6, h^a = 1.73 meters centered at z_c^a = −0.6 meters with 0 and 90 degrees rotation for pedestrian detection, and anchor size l^a = 1.76, w^a = 0.6, h^a = 1.73 meters centered at z_c^a = −0.6 with 0 and 90 degrees rotation for cyclist detection. The specific anchor matching criteria are as follows: We assign an anchor as positive if it has the highest IoU with a ground truth box, or if its IoU with a ground truth box is above 0.5. An anchor is considered negative if its IoU with every ground truth box is less than 0.35. For anchors having 0.35 ≤ IoU ≤ 0.5 with any ground truth box, we treat them as don't care.

During training, we use stochastic gradient descent (SGD) with a learning rate of 0.01 for the first 150 epochs and decrease the learning rate to 0.001 for the last 10 epochs. We use a batch size of 16 point clouds.

3.2. Data Augmentation

With less than 4000 training point clouds, training our network from scratch will inevitably suffer from overfitting.
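The seven regression targets of Eq. (1) for a matched positive anchor can be computed directly; the ground-truth box below is an illustrative value, while the anchor is the car anchor from Section 3.1.

```python
import numpy as np

def box_residuals(gt, anchor):
    """u* of Eq. (1); gt and anchor are (xc, yc, zc, l, w, h, theta)."""
    xg, yg, zg, lg, wg, hg, tg = gt
    xa, ya, za, la, wa, ha, ta = anchor
    d = np.hypot(la, wa)                      # diagonal of the anchor base
    return np.array([(xg - xa) / d, (yg - ya) / d, (zg - za) / ha,
                     np.log(lg / la), np.log(wg / wa), np.log(hg / ha),
                     tg - ta])

anchor = (10.0, 5.0, -1.0, 3.9, 1.6, 1.56, 0.0)   # car anchor from Section 3.1
gt     = (10.5, 5.2, -0.9, 4.2, 1.7, 1.50, 0.1)   # illustrative ground truth
res = box_residuals(gt, anchor)
print(res.round(3))
```

Note the normalization choices the text highlights: ∆x and ∆y are both divided by the same base diagonal d^a (rather than by l^a and w^a separately), ∆z by the anchor height, and the size targets are log-ratios.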
To reduce this issue, we introduce three different forms of data augmentation. The augmented training data are generated on-the-fly without the need to be stored on disk [20].

(Footnote 1: Our empirical observation suggests that beyond this range, LiDAR returns from pedestrians and cyclists become very sparse and therefore detection results will be unreliable.)

Define the set M = {p_i = [x_i, y_i, z_i, r_i]^T ∈ R^4}_{i=1,...,N} as the whole point cloud, consisting of N points. We parameterize a 3D bounding box b_i as (x_c, y_c, z_c, l, w, h, θ), where x_c, y_c, z_c are the center locations, l, w, h are the length, width, and height, and θ is the yaw rotation around the Z-axis. We define Ω_i = {p | x ∈ [x_c − l/2, x_c + l/2], y ∈ [y_c − w/2, y_c + w/2], z ∈ [z_c − h/2, z_c + h/2], p ∈ M} as the set containing all LiDAR points within b_i, where p = [x, y, z, r] denotes a particular LiDAR point in the whole set M.

The first form of data augmentation applies perturbation independently to each ground truth 3D bounding box together with those LiDAR points within the box. Specifically, around the Z-axis we rotate b_i and the associated Ω_i with respect to (x_c, y_c, z_c) by a uniformly distributed random variable ∆θ ∈ [−π/10, +π/10]. Then we add a translation (∆x, ∆y, ∆z) to the XYZ components of b_i and to each point in Ω_i, where ∆x, ∆y, ∆z are drawn independently from a Gaussian distribution with mean zero and standard deviation 1.0. To avoid physically impossible outcomes, we perform a collision test between any two boxes after the perturbation and revert to the original if a collision is detected.
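The first augmentation form can be sketched as follows. The distributions match the ones quoted above; the collision test is omitted, and the box and points are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb_box(box, pts):
    """Rotate a GT box and its points about the box center, then translate both.

    box: (xc, yc, zc, l, w, h, theta); pts: (n, 3) XYZ points inside the box.
    """
    xc, yc, zc, l, w, h, theta = box
    dtheta = rng.uniform(-np.pi / 10, np.pi / 10)   # uniform yaw perturbation
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])                 # rotation about the Z-axis
    pts = pts.copy()
    pts[:, :2] = (pts[:, :2] - (xc, yc)) @ R.T + (xc, yc)
    shift = rng.normal(0.0, 1.0, size=3)            # N(0, 1) translation
    pts += shift
    new_box = (xc + shift[0], yc + shift[1], zc + shift[2],
               l, w, h, theta + dtheta)
    return new_box, pts
```

Since the transform is rigid, every point keeps its distance to the (moved) box center, which is a convenient sanity check when implementing this.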
Since the perturbation is applied to each ground truth box and the associated LiDAR points independently, the network is able to learn from substantially more variations than from the original training data.

Secondly, we apply global scaling to all ground truth boxes b_i and to the whole point cloud M. Specifically, we multiply the XYZ coordinates and the three dimensions of each b_i, and the XYZ coordinates of all points in M, with a random variable drawn from the uniform distribution [0.95, 1.05]. Introducing global scale augmentation improves the robustness of the network for detecting objects with various sizes and distances, as shown in image-based classification [35, 18] and detection tasks [12, 17].

Finally, we apply global rotation to all ground truth boxes b_i and to the whole point cloud M. The rotation is applied along the Z-axis and around (0, 0, 0). The global rotation offset is determined by sampling from the uniform distribution [−π/4, +π/4]. By rotating the entire point cloud, we simulate the vehicle making a turn.

4. Experiments

We evaluate VoxelNet on the KITTI 3D object detection benchmark [11], which contains 7,481 training images/point clouds and 7,518 test images/point clouds, covering three categories: Car, Pedestrian, and Cyclist. For each class, detection outcomes are evaluated based on three difficulty levels: easy, moderate, and hard, which are determined according to the object size, occlusion state, and truncation level. Since the ground truth for the test set is not available and the access to the test server is limited, we conduct comprehensive evaluation using the protocol described in [4, 3, 5] and subdivide the training data into a training set and a validation set, which results in 3,712 data samples for training and 3,769 data samples for validation. The split avoids samples from the same sequence being included in both the training and the validation set [3]. Finally, we also present the test results using the KITTI server.

For the Car category, we compare the proposed method with several top-performing algorithms, including image-based approaches: Mono3D [3] and 3DOP [4]; LiDAR-based approaches: VeloFCN [22] and 3D-FCN [21]; and a multi-modal approach, MV [5]. Mono3D [3], 3DOP [4], and MV [5] use a pre-trained model for initialization, whereas we train VoxelNet from scratch using only the LiDAR data provided in KITTI.

Table 1. Performance comparison in bird's eye view detection: average precision (in %) on the KITTI validation set. E/M/H = easy/moderate/hard.

Method               Modality     Car (E/M/H)            Pedestrian (E/M/H)     Cyclist (E/M/H)
Mono3D [3]           Mono         5.22 / 5.19 / 4.13     N/A                    N/A
3DOP [4]             Stereo       12.63 / 9.49 / 7.59    N/A                    N/A
VeloFCN [22]         LiDAR        40.14 / 32.08 / 30.47  N/A                    N/A
MV (BV+FV) [5]       LiDAR        86.18 / 77.32 / 76.33  N/A                    N/A
MV (BV+FV+RGB) [5]   LiDAR+Mono   86.55 / 78.10 / 76.67  N/A                    N/A
HC-baseline          LiDAR        88.26 / 78.42 / 77.66  58.96 / 53.79 / 51.47  63.63 / 42.75 / 41.06
VoxelNet             LiDAR        89.60 / 84.81 / 78.57  65.95 / 61.05 / 56.98  74.41 / 52.18 / 50.49

Table 2. Performance comparison in 3D detection: average precision (in %) on the KITTI validation set. E/M/H = easy/moderate/hard.

Method               Modality     Car (E/M/H)            Pedestrian (E/M/H)     Cyclist (E/M/H)
Mono3D [3]           Mono         2.53 / 2.31 / 2.31     N/A                    N/A
3DOP [4]             Stereo       6.55 / 5.07 / 4.10     N/A                    N/A
VeloFCN [22]         LiDAR        15.20 / 13.66 / 15.98  N/A                    N/A
MV (BV+FV) [5]       LiDAR        71.19 / 56.60 / 55.30  N/A                    N/A
MV (BV+FV+RGB) [5]   LiDAR+Mono   71.29 / 62.68 / 56.56  N/A                    N/A
HC-baseline          LiDAR        71.73 / 59.75 / 55.69  43.95 / 40.18 / 37.48  55.35 / 36.07 / 34.15
VoxelNet             LiDAR        81.97 / 65.46 / 62.85  57.86 / 53.42 / 48.87  67.17 / 47.65 / 45.11

To analyze the importance of end-to-end learning, we implement a strong baseline that is derived from the VoxelNet architecture but uses hand-crafted features instead of the proposed feature learning network. We call this model the hand-crafted baseline (HC-baseline). HC-baseline uses the bird's eye view features described in [5], which are computed at 0.1 m resolution. Different from [5], we increase the number of height channels from 4 to 16 to capture more detailed shape information; further increasing the
number of height channels did not lead to performance improvement. We replace the convolutional middle layers of VoxelNet with similar-size 2D convolutional layers, which are Conv2D(16, 32, 3, 1, 1), Conv2D(32, 64, 3, 2, 1), and Conv2D(64, 128, 3, 1, 1). Finally, the RPN is identical in VoxelNet and HC-baseline. The total numbers of parameters in HC-baseline and VoxelNet are very similar. We train the HC-baseline using the same training procedure and data augmentation described in Section 3.

4.1. Evaluation on KITTI Validation Set

Metrics We follow the official KITTI evaluation protocol, where the IoU threshold is 0.7 for class Car and 0.5 for classes Pedestrian and Cyclist. The IoU threshold is the same for both bird's eye view and full 3D evaluation. We compare the methods using the average precision (AP) metric.

Evaluation in Bird's Eye View The evaluation result is presented in Table 1. VoxelNet consistently outperforms all the competing approaches across all three difficulty levels. HC-baseline also achieves satisfactory performance compared to the state-of-the-art [5], which shows that our base region proposal network (RPN) is effective. For the Pedestrian and Cyclist detection tasks in bird's eye view, we compare the proposed VoxelNet with HC-baseline. VoxelNet yields substantially higher AP than the HC-baseline for these more challenging categories, which shows that end-to-end learning is essential for point-cloud based detection.

We would like to note that [21] reported 88.9%, 77.3%, and 72.7% for the easy, moderate, and hard levels respectively, but these results are obtained based on a different split of 6,000 training frames and ∼1,500 validation frames, and they are not directly comparable with the algorithms in Table 1. Therefore, we do not include these results in the table.
Evaluation in 3D Compared to the bird's eye view detection, which requires only accurate localization of objects in the 2D plane, 3D detection is a more challenging task as it requires finer localization of shapes in 3D space. Table 2 summarizes the comparison. For the class Car, VoxelNet significantly outperforms all other approaches in AP across all difficulty levels. Specifically, using only LiDAR, VoxelNet significantly outperforms the

An Adaptive-Neighborhood Spatial Error Concealment Algorithm

Authors: Zhang Rongfu; Zhou Yuanhua
Journal: Journal of Shanghai Jiao Tong University
Year (Volume), Issue: 2004, Supplement 1
Abstract: When image errors are concealed with reference to pixels in a fixed neighborhood, edge information is often lost, or shadows and spurious stripes are produced. To address this, a spatial error concealment algorithm is proposed that selects an adaptive reference pixel set according to the image features in the neighborhood of the erroneous block. An effective segmentation method is introduced that divides the neighborhood into two regions, one correlated with the erroneous block and one uncorrelated, and spatial interpolation is then performed using the pixels in the correlated region as references. Experimental results show that video images concealed by this algorithm can effectively reconstruct edge information along multiple directions while suppressing or eliminating the influence of the uncorrelated region, and the algorithm is widely applicable to concealing errors in block-coded streams such as MPEG, JPEG, and H.26x.
Pages: 4 (P30-33)
Keywords: error concealment; region segmentation; adaptive region; video signal
Affiliation: Institute of Image Communication and Information Processing, Shanghai Jiao Tong University
Language: Chinese
CLC classification: TN919.8; TN911.72
Related literature:
1. An adaptive temporal-spatial error concealment algorithm based on the AVS-M standard [J], He Shaorong; Xian Qiankun; Deng Yun; Weng Haimin
2. A spatially adaptive error concealment algorithm based on edge detection [J], Peng Qiang; Zhang Qingming; Xu Jinliang; Wang Ning
3. An adaptive spatial error concealment algorithm based on facial features [J], Yang Zhiwei; Zhang Qingming; Peng Qiang
4. A frame-level error concealment algorithm for the SVC spatial enhancement layer [J], Feng Ying; Wu Chengke
5. An improved adaptive spatial error concealment algorithm for H.264 [J], Sheng Zan; Zhang Youzhi; Zhang Lijun
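The spatial interpolation step the abstract describes can be illustrated with a minimal sketch. The adaptive segmentation itself is not reproduced here; the sketch assumes the correlated region has already been identified as a boolean mask, and uses simple inverse-distance weighting as a stand-in for the paper's interpolation.

```python
import numpy as np

def conceal_pixel(img, y, x, ref_mask):
    """Estimate a lost pixel value from reference neighbors chosen by ref_mask.

    img: 2-D luminance array; ref_mask: boolean array that is True where a
    pixel belongs to the region judged correlated with the lost block.
    Inverse-distance weighting over the reference set stands in for the
    directional spatial interpolation described in the abstract.
    """
    ys, xs = np.nonzero(ref_mask)
    d = np.hypot(ys - y, xs - x)
    w = 1.0 / np.maximum(d, 1e-6)          # nearer references weigh more
    return float(np.sum(w * img[ys, xs]) / np.sum(w))
```

Restricting the reference set to the correlated region, rather than the full fixed neighborhood, is what keeps the interpolation from averaging across an edge and producing the shadows and spurious stripes the abstract mentions.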

Evaluation of Uncertainty in the Determination of Lead Migration from Glass Baijiu Bottles by Atomic Absorption Spectrometry

ZHANG Sha, WANG Yongqing, SU Yin
(Qixian Comprehensive Inspection and Testing Center (National Glassware Product Quality Supervision and Inspection Center), Jinzhong 030900, China)

Abstract: The migration of lead from glass baijiu bottles was determined by atomic absorption spectrometry. The sources of uncertainty in the measurement process were analyzed by establishing a mathematical model, and the measurement uncertainty was evaluated. The results show that when the lead migration from the baijiu bottle is 1.35 mg·L⁻¹, the expanded uncertainty is 0.12 mg·L⁻¹ (k = 2), and the uncertainty introduced by the standard curve fitting has the greatest impact on the measurement result.

Keywords: atomic absorption spectrometry; glass baijiu bottle; lead migration; uncertainty

Baijiu is a traditional alcoholic beverage unique to China; with its long cultural history and distinctive flavor and aroma, it is popular with consumers.
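The headline numbers combine in the standard way: component standard uncertainties add in quadrature to a combined uncertainty u_c, and the expanded uncertainty is U = k·u_c. The component values below are purely illustrative, chosen only so that U comes out near the paper's 0.12 mg·L⁻¹ figure, with the standard-curve term dominant as the abstract reports.

```python
import math

# Illustrative component standard uncertainties (mg/L); the paper notes the
# standard-curve fit dominates, so it is given the largest value here.
components = {
    "standard curve fit": 0.050,
    "repeatability": 0.025,
    "volume/dilution": 0.015,
    "standard solution": 0.010,
}
u_c = math.sqrt(sum(u * u for u in components.values()))  # combined uncertainty
U = 2 * u_c                                               # expanded, k = 2
print(f"u_c = {u_c:.3f} mg/L, U = {U:.2f} mg/L")
```

The quadrature sum is why a single dominant component (here the curve fit) largely sets the final expanded uncertainty.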

Cosmology with High-redshift Galaxy Survey Neutrino Mass and Inflation

arXiv:astro-ph/0512374v3 5 Jun 2006

Cosmology with High-redshift Galaxy Survey: Neutrino Mass and Inflation

Masahiro Takada¹, Eiichiro Komatsu² and Toshifumi Futamase¹
¹Astronomical Institute, Tohoku University, Sendai 980-8578, Japan and
²Department of Astronomy, The University of Texas at Austin, Austin, TX 78712

High-z galaxy redshift surveys open up exciting possibilities for precision determinations of neutrino masses and inflationary models. The high-z surveys are more useful for cosmology than low-z ones owing to much weaker non-linearities in matter clustering, redshift-space distortion, and galaxy bias, which allows us to use the galaxy power spectrum down to the smaller spatial scales that are inaccessible by low-z surveys. We can then utilize the two-dimensional information of the linear power spectrum in angular and redshift space to measure the scale-dependent suppression of matter clustering due to neutrino free-streaming as well as the shape of the primordial power spectrum. To illustrate the capabilities of high-z surveys for constraining neutrino masses and the primordial power spectrum, we compare three future redshift surveys covering 300 square degrees at 0.5 < z < 2, 2 < z < 4, and 3.5 < z < 6.5. We find that, combined with the cosmic microwave background data expected from the Planck satellite, these surveys allow precision determination of the total neutrino mass with projected errors of σ(m_ν,tot) = 0.059, 0.043, and 0.025 eV, respectively, thus yielding a positive detection of the neutrino mass rather than an upper limit, as σ(m_ν,tot) is smaller than the lower limits to the neutrino masses implied by the neutrino oscillation experiments, by up to a factor of 4 for the highest redshift survey. The accuracies of constraining the tilt and running index of the primordial power spectrum, σ(n_s) = (3.8, 3.7, 3.0) × 10⁻³ and σ(α_s) = (5.9, 5.7, 2.4) × 10⁻³ at k₀ = 0.05 Mpc⁻¹, respectively, are smaller than the current uncertainties by more than an order of magnitude, which will allow us
to discriminate between candidate inflationary models. In particular, the error on α_s from the future highest redshift survey is not very far away from the prediction of a class of simple inflationary models driven by a massive scalar field with self-coupling, α_s = −(0.8−1.2) × 10⁻³.

PACS numbers: 95.55.Vj, 98.65.Dx, 98.80.Cq, 98.70.Vc, 98.80.Es

I. INTRODUCTION

We are living in the golden age of cosmology. Various data sets from precision measurements of temperature and polarization anisotropy in the cosmic microwave background (CMB) radiation, as well as those of matter density fluctuations in the large-scale structure of the universe mapped by galaxy redshift surveys, Lyman-α forests, and weak gravitational lensing observations, are in spectacular agreement with the concordance ΛCDM model [1, 2, 3, 4]. These results assure that the theory of cosmological linear perturbations is basically correct and can accurately describe the evolution of photons, neutrinos, baryons, and collisionless dark matter particles [5, 6, 7], for given initial perturbations generated during inflation [8, 9]. The predictions from linear perturbation theory can be compared with the precision cosmological measurements in order to derive stringent constraints on the various basic cosmological parameters. Future observations with better sensitivity and higher precision will continue to further improve our understanding of the universe.

Fluctuations in different cosmic fluids (dark matter, photons, baryons, and neutrinos) imprint characteristic features in their power spectra, owing to their interaction properties, thermal history, equation of state, and speed of sound. A remarkable example is the acoustic oscillation in the photon-baryon fluid that was generated before the decoupling epoch of photons, z ≃ 1088, which has been observed in the power spectrum of CMB temperature anisotropy [10], temperature–polarization cross correlation [11], and the distribution of galaxies [12, 13].

Yet, the latest observations have shown convincingly that we still do not understand much of the universe. The standard model of cosmology tells us that the universe has been dominated by four components. In chronological order the four components are: early dark energy (also known as "inflaton" fields), radiation, dark matter, and late-time dark energy. The striking fact is that we do not understand the precise nature of three (dark matter, and early and late-time dark energy) out of the four components; thus, understanding the nature of these three dark components has been and will continue to be one of the most important topics in cosmology in the next decades. Of these, one might be hopeful that the next generation of particle accelerators such as the Large Hadron Collider (coming on-line in 2007) would find some hints for the nature of dark matter particles. On the other hand, the nature of late-time dark energy, which was discovered by measurements of the luminosity distance out to distant Type Ia supernovae [14, 15], is a complete mystery, and many people have been trying to find a way to constrain the properties of dark energy (see, e.g., [16] for a review).

How about the early dark energy, the inflaton field, which caused the expansion of the universe to accelerate in the very early universe? We know little about the nature of the inflaton, just like we know little about the nature of late-time dark energy. The required property of inflaton fields is basically the same as that of the late-time dark energy component: both must have a large negative pressure which is less than −1/3 of their energy density.
To proceed further,however,one needs more informationfrom observations.Different inflation models make spe-cific predictions for the shape of the power spectrum[8](see also Appendix B)as well as for other statistical prop-erties[17]of primordial perturbations.Therefore,one ofthe most promising ways to constrain the physics of in-flation,hence the nature of early dark energy in the uni-verse,is to determine the shape of the primordial power spectrum accurately from observations.For example,theCMB data from the Wilkinson Microwave Anisotropy Probe[1],combined with the large-scale structure datafrom the Two-Degree Field Galaxy Redshift Survey[18], have already ruled out one of the popular inflationarymodels driven by a self-interacting massless scalarfield [19].Understanding the physics of inflation better willlikely provide an important implication for late-time dark energy.“Radiation”in the universe at around the matter-radiation equality mainly consists of photons and neu-trinos;however,neutrinos actually stop being radiationwhen their mean energy per particle roughly equals the temperature of the universe.The physics of neutrinoshas been revolutionized over the last decade by solar, atmospheric,reactor,and accelerator neutrino experi-ments having provided strong evidence forfinite neutrino masses via mixing between different neutrinoflavors,theso-called neutrino oscillations[20,21,22,23,24].These experiments are,however,only sensitive to mass squaredifferences between neutrino mass eigenstates,implying ∆m221≃7×10−5eV2and∆m232≃3×10−3eV2;thus, the most fundamental quantity of neutrinos,the abso-lute mass,has not been determined yet.Cosmologicalneutrinos that are the relic of the cosmic thermal his-tory have distinct influences on the structure formation.Their large energy density,comparable to the energy den-sity of photons before the matter-radiation equality,de-termines the expansion history of the universe.Even after the matter-radiation equality,neutrinos having 
become non-relativistic affect the structure formation by suppressing the growth of matter density fluctuations at small spatial scales owing to their large velocity dispersion [25, 26, 27, 28, 29, 30] (see Sec. II and Appendix A for more details). Therefore, the galaxy redshift surveys, combined with the CMB data, provide a powerful, albeit indirect, means of constraining the neutrino properties [31, 32, 33, 34, 35]. This approach also complements the theoretical and direct experimental efforts for understanding the neutrino physics. In fact, the cosmological constraints have placed the most stringent upper bound on the total neutrino mass, m_ν,tot ≲ 0.6 eV (2σ) [36], stronger than the direct experimental limit ≲ 2 eV [37]. In addition, the result obtained from the Liquid Scintillator Neutrino Detector (LSND) experiment, which implies anti-ν_μ to anti-ν_e oscillations with Δm^2 ≳ 0.2 eV^2 [38] in an apparent contradiction with the other neutrino oscillation experiments mentioned above, potentially suggests the need for new physics: the cosmological observations will provide independent tests of this hypothesis.

In this paper we shall study the capability of future galaxy surveys at high redshifts, combined with the CMB data, for constraining (1) the neutrino properties, more specifically the total neutrino mass, m_ν,tot, and the number of non-relativistic neutrino species, N_ν^nr, and (2) the shape of the primordial power spectrum that is parameterized in terms of the spectral tilt, n_s, and the running index, α_s, motivated by inflationary predictions (see Appendix B). For the former, we shall pay particular attention to our ability to simultaneously constrain m_ν,tot and N_ν^nr, as they will provide important clues to resolving the absolute mass scale as well as the neutrino mass hierarchy. The accuracy of determining the neutrino parameters and the power spectrum shape parameters will be derived using the Fisher information matrix formalism, including marginalization over the other cosmological parameters as well as the galaxy
bias.

Our analysis differs from the previous work on the neutrino parameters in that we fully take into account the two-dimensional nature of the galaxy power spectrum in the line-of-sight and transverse directions, while the previous work used only spherically averaged, one-dimensional power spectra. The geometrical distortion due to cosmology and the redshift space distortion due to the peculiar velocity field will cause anisotropic features in the galaxy power spectrum. These features help to lift degeneracies between cosmological parameters, substantially reducing the uncertainties in the parameter determinations. This is especially true when variations in parameters of interest cause modifications in the power spectrum shape, which is indeed the case for the neutrino parameters, tilt and running index. The usefulness of the two-dimensional power spectrum, especially for high-redshift galaxy surveys, has been carefully investigated in the context of the prospected constraints on late-time dark energy properties [39, 40, 41, 42, 43, 44, 45].

We shall show the parameter forecasts for future wide-field galaxy surveys that are already being planned or seriously under consideration: the Fiber Multiple Object Spectrograph (FMOS) on the Subaru telescope [46], its significantly expanded version, WFMOS [47], the Hobby-Eberly Telescope Dark Energy eXperiment (HETDEX) [48], and the Cosmic Inflation Probe (CIP) mission [49].
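For survey designs specified by a sky coverage and a redshift range, the raw comoving volume can be sketched in a few lines. This is a rough, self-contained illustration (not the paper's code); the values h = 0.7 and Omega_m = 0.27 are placeholder assumptions, not the paper's fiducial parameters.

```python
import math

# Comoving volume of a survey slice in a flat LambdaCDM model.
# h = 0.7 and Omega_m = 0.27 are placeholder assumptions.

C_KM_S = 299792.458  # speed of light [km/s]

def hubble(z, h=0.7, omega_m=0.27):
    """H(z) in km/s/Mpc for a flat universe: H^2 = H_0^2 [Omega_m (1+z)^3 + Omega_Lambda]."""
    return 100.0 * h * math.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))

def comoving_distance(z, n=2000, **kw):
    """D(z) = c * integral_0^z dz'/H(z') in Mpc, via the trapezoidal rule."""
    dz = z / n
    f = [1.0 / hubble(i * dz, **kw) for i in range(n + 1)]
    return C_KM_S * dz * (sum(f) - 0.5 * (f[0] + f[-1]))

def survey_volume(z_min, z_max, omega_s_deg2=300.0, **kw):
    """Comoving volume [Mpc^3] of the shell z_min < z < z_max over omega_s_deg2 of sky."""
    omega_sr = omega_s_deg2 * (math.pi / 180.0) ** 2  # sky coverage in steradians
    d_lo, d_hi = comoving_distance(z_min, **kw), comoving_distance(z_max, **kw)
    return omega_sr * (d_hi ** 3 - d_lo ** 3) / 3.0

# At fixed sky coverage, higher-redshift slices cover more comoving volume:
print(survey_volume(3.5, 6.5) > survey_volume(0.5, 2.0))  # True
```

This is one reason a high-redshift slice of the same angular footprint is competitive: it encloses more comoving volume, and hence more Fourier modes, than a low-redshift slice of comparable redshift width.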
To model these surveys, we consider three hypothetical galaxy surveys which probe the universe over different ranges of redshift: (1) 0.5 ≤ z ≤ 2, (2) 2 ≤ z ≤ 4, and (3) 3.5 ≤ z ≤ 6.5. We fix the sky coverage of each survey at Ω_s = 300 deg^2 in order to make a fair comparison between different survey designs. As we shall show below, high-redshift surveys are extremely powerful for precision cosmology because they allow us to probe the linear power spectrum down to smaller length scales than surveys at low redshifts, protecting the cosmological information against systematics due to non-linear perturbations.

We shall also study how the parameter uncertainties are affected by changes in the number density of sampled galaxies and the survey volume. The results would give us good guidance for defining the optimal survey design to achieve the desired accuracies in parameter determinations.

The structure of this paper is as follows. In Sec. II, we review the physical pictures as to how the non-relativistic (massive) neutrinos lead to scale-dependent modifications in the growth of mass clustering relative to the pure CDM model. Sec. III defines the parameterization of the primordial power spectrum motivated by inflationary predictions. In Sec. IV we describe a methodology to model the galaxy power spectrum observable from a redshift survey that includes the two-dimensional nature in the line-of-sight and transverse directions. We then present the Fisher information matrix formalism that is used to estimate the projected uncertainties in the cosmological parameter determination from statistical errors on the galaxy power spectrum measurement for a given survey. After survey parameters are defined in Sec. V, we show the parameter forecasts in Sec. VI. Finally, we present conclusions and some discussions in Sec. VII. We review the basic properties of cosmological neutrinos in Appendix A, the basic predictions from inflationary models for the shape of the primordial power spectrum in Appendix B, and the relation between the
primordial power spectrum and the observed power spectrum of matter density fluctuations in Appendix C.

In the following, we assume an adiabatic, cold dark matter (CDM) dominated cosmological model with flat geometry, which is supported by the WMAP results [1, 36], and employ the notation used in [51, 52]: the present-day density of CDM, baryons, and non-relativistic neutrinos, in units of the critical density, are denoted as Ω_c, Ω_b, and Ω_ν, respectively. The total matter density is then Ω_m = Ω_c + Ω_b + Ω_ν, and f_ν is the ratio of the massive neutrino density contribution to Ω_m: f_ν = Ω_ν/Ω_m.

II. NEUTRINO EFFECT ON STRUCTURE FORMATION

Throughout this paper we assume the standard thermal history in the early universe: there are three neutrino species with temperature equal to (4/11)^(1/3) of the photon temperature. We then assume that 0 ≤ N_ν^nr ≤ 3 species are massive and could become non-relativistic by the present epoch, and those non-relativistic neutrinos have equal masses, m_ν. As we show in Appendix A, the density parameter of the non-relativistic neutrinos is given by Ω_ν h^2 = N_ν^nr m_ν/(94.1 eV), where we have assumed 2.725 K for the CMB temperature today [50], and h is the Hubble parameter defined as H_0 = 100 h km s^-1 Mpc^-1.
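The bookkeeping above can be made concrete with a short illustrative snippet (not from the paper); the fiducial Ω_m h^2 = 0.141 used for the fraction is an assumption carried over from Eq. (1).

```python
# Omega_nu h^2 = N_nu^nr * m_nu / (94.1 eV) and f_nu = Omega_nu / Omega_m.
# The fiducial Omega_m h^2 = 0.141 is an assumed placeholder value.

def omega_nu_h2(m_nu_eV, n_nr=3):
    """Density parameter (times h^2) of N_nu^nr non-relativistic species of equal mass."""
    return n_nr * m_nu_eV / 94.1

def neutrino_fraction(m_nu_eV, n_nr, omega_m_h2=0.141):
    """f_nu = Omega_nu / Omega_m."""
    return omega_nu_h2(m_nu_eV, n_nr) / omega_m_h2

# Three species of 0.22 eV each (m_nu,tot = 0.66 eV) give f_nu close to 0.05:
print(round(neutrino_fraction(0.22, 3), 3))  # 0.05
```

Note that only the total mass N_ν^nr m_ν enters the density parameter, which is why the overall suppression of power constrains m_ν,tot while the free-streaming scale is needed to separate N_ν^nr.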
The neutrino mass fraction is thus given by

  f_ν ≡ Ω_ν/Ω_m = 0.05 (m_ν,tot/0.658 eV)(0.141/Ω_m h^2).   (1)

Non-relativistic neutrinos cannot cluster on scales below their free-streaming length; the corresponding comoving wavenumber scales as

  k_fs(z) = 0.677 (m_ν/1 eV) [Ω_m h^2/(1+z)]^(1/2) Mpc^-1.   (2)

Therefore, non-relativistic neutrinos with lighter masses suppress the growth of structure formation on larger spatial scales at a given redshift, and the free-streaming length becomes shorter at a lower redshift as neutrino velocity decreases with redshift. The most important property of the free-streaming scale is that it depends on the mass of each species, m_ν, rather than the total mass, N_ν^nr m_ν; thus, measurements of k_fs allow us to distinguish different neutrino mass hierarchy models. Fortunately, k_fs appears on the scales that are accessible by galaxy surveys: k_fs = 0.096-0.179 Mpc^-1 at z = 6-1 for m_ν = 1 eV.

On the spatial scales larger than the free-streaming length, k < k_fs, neutrinos can cluster and fall into gravitational potential wells together with CDM and baryonic matter. In this case, perturbations in all matter components (CDM, baryons and neutrinos, denoted as 'cbν' hereafter) grow at the same rate given by

  D_cbν(k, z) ∝ D(z),   k ≪ k_fs(z),   (3)

where D(z) is the usual linear growth factor (see, e.g., Eq. (4) in [53]). On the other hand, on the scales smaller than the free-streaming length, k > k_fs, perturbations in non-relativistic neutrinos are absent due to the large velocity dispersion. In this case, the gravitational potential well is supported only by CDM and baryonic matter, and the growth of matter perturbations is slowed down relative to that on the larger scales. As a result, the matter power spectrum for k > k_fs is suppressed relative to that for k < k_fs. In this limit the total matter perturbations grow at the slower rate given by

  D_cbν(k, z) ∝ (1 − f_ν)[D(z)]^(1−p),   k ≫ k_fs(z),   (4)

where p ≡ (5 − √(25 − 24 f_ν))/4.

FIG. 1: Suppression in the growth rate of total matter perturbations (CDM, baryons and non-relativistic neutrinos), D_cbν(a), due to neutrino free-streaming. (a = (1+z)^-1 is the scale factor.) Upper panel: D_cbν(a)/D_ν=0(a) for the neutrino mass fraction of f_ν = Ω_ν/Ω_m = 0.05. The number of non-relativistic neutrino species is varied from N_ν^nr = 1, 2, and 3 (from thick to thin lines), respectively. The solid, dashed, and dotted lines represent k = 0.01, 0.1, and 1 h Mpc^-1, respectively. Lower panel: D_cbν(a)/D_ν=0(a) for a smaller neutrino mass fraction, f_ν = 0.01. Note that the total mass of non-relativistic neutrinos is fixed to m_ν,tot = N_ν^nr m_ν = 0.66 eV and 0.13 eV in the upper and lower panels, respectively.

The free-streaming scale varies with redshift as in Eq. (2). It is thus expected that a galaxy survey with different redshift slices can be used to efficiently extract the neutrino parameters, N_ν^nr and m_ν.

The upper and middle panels of Figure 2 illustrate how free-streaming of non-relativistic neutrinos suppresses the amplitude of the linear matter power spectrum, P(k), at z = 4. Note that we have normalized the primordial power spectrum such that all the power spectra match at k → 0 (see §III). To illuminate the dependence of P(k) on m_ν, we fix the total mass of non-relativistic neutrinos, N_ν^nr m_ν, by f_ν = 0.05 and 0.01 in the upper and middle panels, respectively, and vary the number of non-relativistic neutrino species as N_ν^nr = 1, 2 and 3. The suppression of power is clearly seen as one goes from k < k_fs(z) to k > k_fs(z) (see Eq. [2] for the value of k_fs). The way the power is suppressed may be easily understood by the dependence of k_fs(z) on m_ν; for example, P(k) at smaller k is more suppressed for a smaller m_ν, as lighter neutrinos have longer free-streaming lengths. On very small scales, k ≫ k_fs(z) (k ≳ 1 and 0.1 Mpc^-1 for f_ν = 0.05 and 0.01, respectively), however, the amount of suppression becomes nearly independent of k, and depends only on f_ν (or the total neutrino mass, N_ν^nr m_ν) as

  ΔP/P ≈ −8 f_ν.   (5)

FIG. 2: Upper panel: Suppression of the linear power spectrum at z = 4 due to free-streaming of non-relativistic neutrinos. We fix the total mass of non-relativistic neutrinos by f_ν = Ω_ν/Ω_m = 0.05, and vary the number of non-relativistic neutrino species (which have equal masses, m_ν) as N_ν^nr = 1 (solid), 2 (dashed), and 3 (dot-dashed). The mass of individual neutrino species therefore varies as m_ν = 0.66, 0.33, and 0.22 eV, respectively (see Eq. [1]). The shaded regions represent the 1-σ measurement errors on P(k) in each k-bin, expected from a galaxy redshift survey observing galaxies at 3.5 ≤ z ≤ 4.5 (see Table I for definition of the survey). Note that the errors are for the spherically averaged power spectrum over the shell of k in each bin. Different N_ν^nr could be discriminated in this case. Middle panel: Same as in the upper panel, but for a smaller neutrino mass fraction, f_ν = 0.01. While it is not possible to discriminate between different N_ν^nr, the overall suppression on small scales is clearly seen. Lower panel: Dependences of the shape of P(k) on the other cosmological parameters.

We therefore conclude that one can extract f_ν and N_ν^nr separately from the shape of P(k), if the suppression "pattern" in different regimes of k is accurately measured from observations.

Are observations good enough? The shaded boxes in the upper and middle panels in Figure 2 represent the 1-σ measurement errors on P(k) expected from one of the fiducial galaxy surveys outlined in Sec. V. We find that P(k) will be measured with ∼1% accuracy in each k bin.
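The two scalings just discussed can be sketched in code. This is an illustrative snippet (not the paper's code): the prefactor 0.677 in k_fs is our assumption, tuned so that the range quoted in the text, k_fs = 0.096-0.179 Mpc^-1 for m_ν = 1 eV between z = 6 and z = 1, is reproduced.

```python
import math

# Free-streaming wavenumber and asymptotic small-scale suppression.
# The 0.677 prefactor is an assumed fit to the quoted k_fs range, not a derived value.

def k_free_streaming(z, m_nu_eV, omega_m_h2=0.141):
    """Free-streaming wavenumber [Mpc^-1], scaling as m_nu * sqrt(Omega_m h^2 / (1+z))."""
    return 0.677 * m_nu_eV * math.sqrt(omega_m_h2 / (1.0 + z))

def small_scale_suppression(f_nu):
    """Asymptotic fractional suppression Delta P / P for k >> k_fs (Eq. [5])."""
    return -8.0 * f_nu

print(round(k_free_streaming(6.0, 1.0), 3))  # 0.096
print(round(k_free_streaming(1.0, 1.0), 3))  # 0.18
print(small_scale_suppression(0.05))         # -0.4
```

The key point survives in the sketch: k_fs depends on the per-species mass m_ν, while the plateau suppression depends only on f_ν, i.e. on the total mass, which is what allows the two neutrino parameters to be separated.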
If other cosmological parameters were perfectly known, the total mass of non-relativistic neutrinos as small as m_ν,tot = N_ν^nr m_ν ≳ 0.001 eV would be detected at more than 2-σ. This limit is much smaller than the lower mass limit implied from the neutrino oscillation experiments, 0.06 eV. This estimate is, of course, unrealistic because a combination of other cosmological parameters could mimic the N_ν^nr or f_ν dependence of P(k). The lower panel in Figure 2 illustrates how other cosmological parameters change the shape of P(k). In the following, we shall extensively study how well future high-redshift galaxy surveys, combined with the cosmic microwave background data, can determine the mass of non-relativistic neutrinos and discriminate between different N_ν^nr, fully taking into account degeneracies between cosmological parameters.

III. SHAPE OF PRIMORDIAL POWER SPECTRUM AND INFLATIONARY MODELS

Inflation generally predicts that the primordial power spectrum of curvature perturbations is nearly scale-invariant. Different inflationary models make specific predictions for deviations of the primordial spectrum from a scale-invariant spectrum, and the deviation is often parameterized by the "tilt", n_s, and the "running index", α_s, of the primordial power spectrum. As the primordial power spectrum is nearly scale-invariant, |n_s − 1| and |α_s| are predicted to be much less than unity.

This, however, does not mean that the observed matter power spectrum is also nearly scale-invariant. In Appendix C, we derive the power spectrum of total matter perturbations that is normalized by the primordial curvature perturbation (see Eq. [C6]):

  k^3 P(k, z)/(2π^2) = δ_R^2 (2k^2/(5 H_0^2 Ω_m))^2 D_cbν^2(k, z) T^2(k) (k/k_0)^(n_s − 1 + (1/2) α_s ln(k/k_0)),   (6)

where k_0 = 0.05 Mpc^-1, δ_R^2 = 2.95 × 10^-9 A, and A is the normalization parameter given by the WMAP collaboration [1]. We adopt A = 0.871, which gives δ_R = 5.07 × 10^-5. (In the notation of [63, 64], δ_R = δ_ζ.)
The linear transfer function, T(k), describes the evolution of the matter power spectrum during the radiation era and the interaction between photons and baryons before the decoupling of photons. Note that T(k) depends only on non-inflationary parameters such as Ω_m h^2 and Ω_b/Ω_m, and is independent of n_s and α_s. Also, the effects of non-relativistic neutrinos are captured in D_cbν(k, z); thus, T(k) is independent of time after the decoupling epoch. We use the fitting function found in [51, 52] for T(k). Note that the transfer function and the growth rate are normalized such that T(k) → 1 and D_cbν/a → 1 as k → 0 during the matter era.

In Appendix B we describe generic predictions on n_s and α_s from inflationary models. For example, inflation driven by a massive, self-interacting scalar field predicts n_s = 0.94-0.96 and α_s = (0.8-1.2) × 10^-3 for a number of e-foldings of the expansion factor before the end of inflation of 50. This example shows that precision determination of n_s and α_s allows us to discriminate between candidate inflationary models (see [8] for more details).
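The inflationary part of the spectral shape can be isolated in a minimal sketch: the factor (k/k_0)^(n_s − 1 + (1/2) α_s ln(k/k_0)) about the pivot k_0 = 0.05 Mpc^-1. The transfer function T(k) and the growth factor D_cbν are deliberately omitted here; this is an illustration, not the full normalized spectrum.

```python
import math

# Scale dependence of the primordial spectrum from tilt n_s and running alpha_s.

K0 = 0.05  # pivot wavenumber [Mpc^-1]

def primordial_shape(k, n_s=0.95, alpha_s=0.0):
    """(k/K0)^(n_s - 1 + 0.5 * alpha_s * ln(k/K0)); equals 1 at the pivot scale."""
    lnk = math.log(k / K0)
    return (k / K0) ** (n_s - 1.0 + 0.5 * alpha_s * lnk)

# A scale-invariant spectrum (n_s = 1, alpha_s = 0) has no k dependence:
print(primordial_shape(0.5, n_s=1.0, alpha_s=0.0))  # 1.0
```

Because a nonzero α_s makes the effective tilt itself scale-dependent, surveys that probe a wide lever arm in k are what separate n_s from α_s.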
IV. MODELING GALAXY POWER SPECTRUM

A. Geometrical and Redshift-Space Distortion

Suppose now that we have a redshift survey of galaxies at some redshift. Galaxies are biased tracers of the underlying gravitational field, and the galaxy power spectrum measures how the clustering strength of galaxies varies as a function of 3-dimensional wavenumbers, k (or the inverse of 3-dimensional length scales).

We do not measure the length scale directly in real space; rather, we measure (1) angular positions of galaxies on the sky, and (2) radial positions of galaxies in redshift space. To convert (1) and (2) to positions in 3-dimensional space, however, one needs to assume a reference cosmological model, which might be different from the true cosmology. An incorrect mapping of observed angular and redshift positions to 3-dimensional positions produces a distortion in the measured power spectrum, known as the "geometrical distortion" [54, 55, 56]. The geometrical distortion can be described as follows. The comoving sizes of an object at redshift z in the radial, r_∥, and transverse, r_⊥, directions are computed from the extension in redshift, Δz, and the angular size, Δθ, respectively, as

  r_∥ = Δz/H(z),   r_⊥ = D_A(z) Δθ,   (7)

where D_A(z) is the comoving angular diameter distance,

  D_A(z) = ∫_0^z dz′/H(z′),   (8)

and H(z) is the Hubble parameter given by

  H^2(z) = H_0^2 [Ω_m(1+z)^3 + Ω_Λ].   (9)

Here Ω_m + Ω_Λ = 1, and Ω_Λ ≡ Λ/(3H_0^2) is the present-day density parameter of a cosmological constant, Λ. A tricky part is that H(z) and D_A(z) in Eq. (7) depend on cosmological models. It is therefore necessary to assume some fiducial cosmological model to compute the conversion factors. In the following, quantities in the fiducial cosmological model are distinguished by the subscript 'fid'. Then, the length scales in Fourier space in the radial, k_∥^fid, and transverse, k_⊥^fid, directions are estimated from the inverse of r_∥^fid and r_⊥^fid. These fiducial wavenumbers are related to the true wavenumbers by

  k_⊥ = [D_A(z)_fid/D_A(z)] k_⊥^fid,   k_∥ = [H(z)/H(z)_fid] k_∥^fid.   (10)

Therefore, any difference between the fiducial cosmological model and the true model would cause anisotropic distortions in the estimated power spectrum in (k_⊥^fid, k_∥^fid) space.

In addition, shifts in z due to peculiar velocities of galaxies distort the shape of the power spectrum along the line-of-sight direction, which is known as the "redshift space distortion" [57]. From azimuthal symmetry around the line-of-sight direction, which is valid when a distant-observer approximation holds, the linear power spectrum estimated in redshift space, P_s(k_⊥^fid, k_∥^fid), is modeled in [39] as

  P_s(k_⊥^fid, k_∥^fid) = [D_A(z)_fid^2 H(z)]/[D_A(z)^2 H(z)_fid] [1 + β(k, z) k_∥^2/(k_⊥^2 + k_∥^2)]^2 b_1^2 P(k, z),   (11)

where k = (k_⊥^2 + k_∥^2)^(1/2) and

  β(k, z) ≡ −(1/b_1) d ln D_cbν(k, z)/d ln(1+z)   (12)

is a function characterizing the linear redshift space distortion, and b_1 is a scale-independent, linear bias parameter. Note that β(k, z) depends on both redshift and wavenumber via the linear growth rate. In the infall regime, k ≪ k_fs(z), we have b_1 β(k, z) ≈ −d ln D(z)/d ln(1+z), while in the free-streaming regime, k ≫ k_fs(z), we have b_1 β(k, z) ≈ −(1−p) d ln D(z)/d ln(1+z), where p is defined below Eq. (4).

One might think that the geometrical and redshift-space distortion effects are somewhat degenerate in the measured power spectrum. This would be true only if the power spectrum were a simple power law. Fortunately, characteristic, non-power-law features in P(k) such as the broad peak from the matter-radiation equality, scale-dependent suppression of power due to baryons and non-relativistic neutrinos, the tilt and running of the primordial power spectrum, the baryonic acoustic oscillations, etc., help break degeneracies quite efficiently [39, 40, 41, 42, 43, 44, 47, 55, 56].

B. Comments on Baryonic Oscillations

In this paper, we employ the linear transfer function with baryonic oscillations smoothed out (but including non-relativistic neutrinos) [51, 52]. As extensively investigated in [39, 44, 47], the baryonic oscillations can be used as a standard ruler, thereby allowing one to precisely constrain H(z) and D_A(z) separately through the geometrical distortion effects (especially for a high-redshift survey). Therefore, our ignoring the baryonic oscillations might underestimate the true capability of redshift surveys for
constraining cosmological parameters.

We have found that the constraints on n_s and α_s from galaxy surveys improve by a factor of 2-3 when baryonic oscillations are included. This is because the baryonic oscillations basically fix the values of Ω_m, Ω_m h^2 and Ω_b h^2, lifting parameter degeneracies between Ω_m h^2, Ω_b h^2, n_s, and α_s. However, we suspect that this is a rather optimistic forecast, as we are assuming a flat universe dominated by a cosmological constant. This might be too strong a prior, and relaxing our assumptions about the geometry of the universe or the properties of dark energy will likely result in different forecasts for n_s and α_s. In this paper we try to separate the issues of a non-flat universe and/or the equation of state of dark energy from the physics of neutrinos and inflation. We do not include the baryonic oscillations in our analysis, in order to avoid too optimistic conclusions about the constraints on the neutrino parameters, n_s, and α_s.

Eventually, the full analysis including a non-flat universe, an arbitrary dark energy equation of state and its time dependence, non-relativistic neutrinos, n_s, and α_s, using all the information we have at hand including the baryonic oscillations, will be necessary. We leave it for a future publication (Takada and Komatsu, in preparation).

C. Parameter Forecast: Fisher Matrix Analysis

In order to investigate how well one can constrain the cosmological parameters for a given redshift survey design, one needs to specify the measurement uncertainties of the galaxy power spectrum. When non-linearity is weak, it is reasonable to assume that observed density perturbations obey Gaussian statistics. In this case, there are two sources of statistical errors on a power spectrum measurement: the sampling variance (due to the limited number of independent wavenumbers sampled from a finite survey volume) and the shot noise (due to the imperfect sampling of fluctuations by the finite number of galaxies). To be more specific, the statistical error is given in [58, 59] by

  ΔP_s(k_i)/P_s(k_i) = (2/N_k)^(1/2) [1 + 1/(n̄_g P_s(k_i))],
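The mode-counting error formula cited above ([58, 59]) can be sketched directly; the inputs below (number of modes, galaxy density, power amplitude) are illustrative placeholders, not the paper's survey values.

```python
import math

# Fractional error on the power spectrum in one k-bin:
# sampling variance, sqrt(2/N_k), inflated by the shot-noise factor (1 + 1/(n_g P_s)).

def fractional_power_error(n_modes, n_gal, P_s):
    """Delta P_s / P_s = sqrt(2 / N_k) * (1 + 1 / (n_gal * P_s))."""
    return math.sqrt(2.0 / n_modes) * (1.0 + 1.0 / (n_gal * P_s))

# Sample-variance-limited regime (n_gal * P_s >> 1): the error approaches sqrt(2/N_k).
print(round(fractional_power_error(20000, 1e-3, 1e4), 4))  # 0.011
```

This makes the survey-design trade-off explicit: increasing the volume raises N_k and shrinks the first factor, while increasing the galaxy number density only helps until n_g P_s exceeds a few, after which the measurement is sample-variance limited.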

TEM-8 English Reading Practice


英语专业八级考试TEM-8阅读理解练习册(1)(英语专业2012级)

UNIT 1

Text A

Every minute of every day, what ecologist (生态学家) James Carlton calls a global "conveyor belt" redistributes ocean organisms (生物). It is a planetwide biological disruption (生物的破坏) that scientists have barely begun to understand. Dr. Carlton, an oceanographer at Williams College in Williamstown, Mass., explains that, at any given moment, "There are several thousand marine species traveling ... in the ballast water of ships." These creatures move from coastal waters where they fit into the local web of life to places where some of them could tear that web apart. This is the larger dimension of the infamous (无耻的,邪恶的) invasion of fish-destroying, pipe-clogging zebra mussels (有斑马纹的贻贝).

Such voracious (贪婪的) invaders at least make their presence known. What concerns Carlton and his fellow marine ecologists is the lack of knowledge about the hundreds of alien invaders that quietly enter coastal waters around the world every day. Many of them probably just die out. Some benignly (亲切地,仁慈地), or even beneficially, join the local scene. But some will make trouble.

In one sense, this is an old story. Organisms have ridden ships for centuries. They have clung to hulls and come along with cargo. What is new is the scale and speed of the migrations made possible by the massive volume of ship-ballast water (压载水), taken in to provide ship stability, continuously moving around the world. Ships load up with ballast water and its inhabitants in the coastal waters of one port and dump the ballast in another port that may be thousands of kilometers away. A single load can run to hundreds of gallons. Some larger ships take on as much as 40 million gallons. The creatures that come along tend to be in their free-floating larval stage. When discharged (排出) in alien waters they can mature into crabs, jellyfish (水母), slugs (鼻涕虫,蛞蝓), and many other forms.

Since the problem involves coastal species, simply banning ballast dumps in coastal waters would, in theory, solve it.
Coastal organisms in ballast water that is flushed into midocean would not survive. Such a ban has worked for the North American Inland Waterway. But it would be hard to enforce it worldwide. Heating ballast water or straining it should also halt the species' spread. But before any such worldwide regulations were imposed, scientists would need a clearer view of what is going on.

The continuous shuffling (洗牌) of marine organisms has changed the biology of the sea on a global scale. It can have devastating effects, as in the case of the American comb jellyfish that recently invaded the Black Sea. It has destroyed that sea's anchovy (鳀鱼) fishery by eating anchovy eggs. It may soon spread to western and northern European waters.

The maritime nations that created the biological "conveyor belt" should support a coordinated international effort to find out what is going on and what should be done about it. (456 words)

1. According to Dr. Carlton, ocean organisms are _______.
A. being moved to new environments
B. destroying the planet
C. succumbing to the zebra mussel
D. developing alien characteristics

2. Oceanographers (海洋学家) are concerned because _________.
A. their knowledge of this phenomenon is limited
B. they believe the oceans are dying
C. they fear an invasion from outer space
D. they have identified thousands of alien webs

3. According to marine ecologists, transplanted marine species ____________.
A. may upset the ecosystems of coastal waters
B. are all compatible with one another
C. can only survive in their home waters
D. sometimes disrupt shipping lanes

4. The identified cause of the problem is _______.
A. the rapidity with which larvae mature
B. a common practice of the shipping industry
C.
a centuries-old species
D. the worldwide movement of ocean currents

5. The article suggests that a solution to the problem __________.
A. is unlikely to be identified
B. must precede further research
C. is hypothetically (假设地,假想地) easy
D. will limit global shipping

Text B

New 'Endangered' List Targets Many US Rivers

It is hard to think of a major natural resource or pollution issue in North America today that does not affect rivers. Farm chemical runoff (残渣), industrial waste, urban storm sewers, sewage treatment, mining, logging, grazing (放牧), military bases, residential and business development, hydropower (水力发电), loss of wetlands. The list goes on.

Legislation like the Clean Water Act and the Wild and Scenic Rivers Act has provided some protection, but threats continue. The Environmental Protection Agency (EPA) reported yesterday that an assessment of 642,000 miles of rivers and streams showed 34 percent in less than good condition. In a major study of the Clean Water Act, the Natural Resources Defense Council last fall reported that poison runoff impairs (损害) more than 125,000 miles of rivers. More recently, the NRDC and the Izaak Walton League warned that pollution and loss of wetlands, made worse by last year's flooding, is degrading (恶化) the Mississippi River ecosystem.

On Tuesday, the conservation group (保护组织) American Rivers issued its annual list of 10 "endangered" and 20 "threatened" rivers in 32 states, the District of Columbia, and Canada. At the top of the list is the Clarks Fork of the Yellowstone River, where Canadian mining firms plan to build a 74-acre (英亩) reservoir (水库,蓄水池) as part of a gold mine less than three miles from Yellowstone National Park. The reservoir would hold the runoff from the sulfuric acid (硫酸) used to extract gold from crushed rock.

"In the event this tailings pond failed, the impact to the greater Yellowstone ecosystem would be cataclysmic (大变动的,灾难性的) and the damage irreversible (不可逆转的)," Sen.
Max Baucus of Montana, chairman of the Environment and Public Works Committee, wrote to Noranda Minerals Inc., an owner of the "New World Mine."

Last fall, an EPA official expressed concern about the mine and its potential impact, especially the plastic-lined storage reservoir. "I am unaware of any studies evaluating how a tailings pond (尾矿池,残渣池) could be maintained to ensure its structural integrity forever," said Stephen Hoffman, chief of the EPA's Mining Waste Section. "It is my opinion that underwater disposal of tailings at New World may present a potentially significant threat to human health and the environment."

The results of an environmental-impact statement, now being drafted by the Forest Service and the Montana Department of State Lands, could determine the mine's future.

In its recent proposal to reauthorize the Clean Water Act, the Clinton administration noted "dramatically improved water quality since 1972," when the act was passed. But it also reported that 30 percent of rivers continue to be degraded, mainly by silt (泥沙) and nutrients from farm and urban runoff, combined sewer overflows, and municipal sewage (城市污水). Bottom sediments (沉积物) are contaminated (污染) in more than 1,000 waterways, the administration reported in releasing its proposal in January. Between 60 and 80 percent of riparian corridors (riverbank lands) have been degraded.

As with endangered species and their habitats in forests and deserts, the complexity of ecosystems is seen in rivers and the effects of development, beyond the obvious threats of industrial pollution, municipal waste, and in-stream diversions (改道) to slake (消除) the thirst of new communities in dry regions like the Southwest.

While there are many political hurdles (障碍) ahead, reauthorization of the Clean Water Act this year holds promise for US rivers. Rep.
Norm Mineta of California, who chairs the House Committee overseeing the bill, calls it "probably the most important environmental legislation this Congress will enact." (553 words)

6. According to the passage, the Clean Water Act ______.
A. has been ineffective
B. will definitely be renewed
C. has never been evaluated
D. was enacted some 30 years ago

7. "Endangered" rivers are _________.
A. catalogued annually
B. less polluted than "threatened" rivers
C. caused by flooding
D. adjacent to large cities

8. The "cataclysmic" event referred to in paragraph eight would be __________.
A. fortuitous (偶然的,意外的)
B. adventitious (外加的,偶然的)
C. catastrophic
D. precarious (不稳定的,危险的)

9. The owners of the New World Mine appear to be ______.
A. ecologically aware of the impact of mining
B. determined to construct a safe tailings pond
C. indifferent to the concerns voiced by the EPA
D. willing to relocate operations

10. The passage conveys the impression that _______.
A. Canadians are disinterested in natural resources
B. private and public environmental groups abound
C. river banks are eroding
D. the majority of US rivers are in poor condition

Text C

A classic series of experiments to determine the effects of overpopulation on communities of rats was reported in February of 1962 in an article in Scientific American. The experiments were conducted by a psychologist, John B. Calhoun, and his associates. In each of these experiments, an equal number of male and female adult rats were placed in an enclosure and given an adequate supply of food, water, and other necessities. The rat populations were allowed to increase. Calhoun knew from experience approximately how many rats could live in the enclosures without experiencing stress due to overcrowding. He allowed the population to increase to approximately twice this number. Then he stabilized the population by removing offspring that were not dependent on their mothers. He and his associates then carefully observed and recorded behavior in these overpopulated communities.
At the end of their experiments, Calhoun and his associates were able to conclude that overcrowding causes a breakdown in the normal social relationships among rats, a kind of social disease. The rats in the experiments did not follow the same patterns of behavior as rats would in a community without overcrowding.

The females in the rat population were the most seriously affected by the high population density: they showed deviant (异常的) maternal behavior; they did not behave as mother rats normally do. In fact, many of the pups (幼兽,幼崽), as rat babies are called, died as a result of poor maternal care. For example, mothers sometimes abandoned their pups, and, without their mothers' care, the pups died. Under normal conditions, a mother rat would not leave her pups alone to die. However, the experiments verified that in overpopulated communities, mother rats do not behave normally. Their behavior may be considered pathologically (病理上,病理学地) diseased.

The dominant males in the rat population were the least affected by overpopulation. Each of these strong males claimed an area of the enclosure as his own. Therefore, these individuals did not experience the overcrowding in the same way as the other rats did. The fact that the dominant males had adequate space in which to live may explain why they were not as seriously affected by overpopulation as the other rats. However, dominant males did behave pathologically at times. Their antisocial behavior consisted of attacks on weaker male, female, and immature rats. This deviant behavior showed that even though the dominant males had enough living space, they too were affected by the general overcrowding in the enclosure.

Non-dominant males in the experimental rat communities also exhibited deviant social behavior. Some withdrew completely; they moved very little and ate and drank at times when the other rats were sleeping in order to avoid contact with them.
Other non-dominant males were hyperactive; they were much more active than is normal, chasing other rats and fighting each other. This segment of the rat population, like all the other parts, was affected by the overpopulation.

The behavior of the non-dominant males and of the other components of the rat population has parallels in human behavior. People in densely populated areas exhibit deviant behavior similar to that of the rats in Calhoun's experiments. In large urban areas such as New York City, London, Mexico City, and Cairo, there are abandoned children. There are cruel, powerful individuals, both men and women. There are also people who withdraw and people who become hyperactive. Other forms of social pathology such as murder, rape, and robbery also frequently occur in densely populated human communities. Is the principal cause of these disorders overpopulation? Calhoun's experiments suggest that it might be. In any case, social scientists and city planners have been influenced by the results of this series of experiments.

11. Paragraph 1 is organized according to __________.
A. reasons
B. description
C. examples
D. definition

12. Calhoun stabilized the rat population _________.
A. when it was double the number that could live in the enclosure without stress
B. by removing young rats
C. at a constant number of adult rats in the enclosure
D. all of the above are correct

13. Which of the following inferences CANNOT be made from the information in Para. 1?
A. Calhoun's experiment is still considered important today.
B. Overpopulation causes pathological behavior in rat populations.
C. Stress does not occur in rat communities unless there is overcrowding.
D. Calhoun had experimented with rats before.

14. Which of the following behaviors didn't happen in this experiment?
A. All the male rats exhibited pathological behavior.
B. Mother rats abandoned their pups.
C. Female rats showed deviant maternal behavior.
D. Mother rats left their rat babies alone.

15.
The main idea of paragraph three is that __________.
A. dominant males had adequate living space
B. dominant males were not as seriously affected by overcrowding as the other rats
C. dominant males attacked weaker rats
D. the strongest males are always able to adapt to bad conditions

Text D

The first mention of slavery in the statutes of the English colonies of North America does not occur until after 1660—some forty years after the importation of the first Black people. Lest we think that slavery existed in fact before it did in law, Oscar and Mary Handlin assure us that the status of Black people down to the 1660's was that of servants. A critique of the Handlins' interpretation of why legal slavery did not appear until the 1660's suggests that assumptions about the relation between slavery and racial prejudice should be reexamined, and that explanations for the different treatment of Black slaves in North and South America should be expanded.

The Handlins explain the appearance of legal slavery by arguing that, during the 1660's, the position of White servants was improving relative to that of Black servants. Thus, the Handlins contend, Black and White servants, heretofore treated alike, each attained a different status. There are, however, important objections to this argument. First, the Handlins cannot adequately demonstrate that the White servant's position was improving during and after the 1660's; several acts of the Maryland and Virginia legislatures indicate otherwise. Another flaw in the Handlins' interpretation is their assumption that prior to the establishment of legal slavery there was no discrimination against Black people. It is true that before the 1660's Black people were rarely called slaves. But this should not overshadow evidence from the 1630's on that points to racial discrimination without using the term slavery.
Such discrimination sometimes stopped short of lifetime servitude or inherited status—the two attributes of true slavery—yet in other cases it included both. The Handlins' argument excludes the real possibility that Black people in the English colonies were never treated as the equals of White people.

This possibility has important ramifications. If from the outset Black people were discriminated against, then legal slavery should be viewed as a reflection and an extension of racial prejudice rather than, as many historians including the Handlins have argued, the cause of prejudice. In addition, the existence of discrimination before the advent of legal slavery offers a further explanation for the harsher treatment of Black slaves in North America than in South America. Freyre and Tannenbaum have rightly argued that the lack of certain traditions in North America—such as a Roman conception of slavery and a Roman Catholic emphasis on equality—explains why the treatment of Black slaves was more severe there than in the Spanish and Portuguese colonies of South America. But this cannot be the whole explanation, since it is merely negative, based only on a lack of something. A more compelling explanation is that the early and sometimes extreme racial discrimination in the English colonies helped determine the particular nature of the slavery that followed. (462 words)

16. Which of the following is the most logical inference to be drawn from the passage about the effects of "several acts of the Maryland and Virginia legislatures" (Para. 2) passed during and after the 1660's?
A. The acts negatively affected the pre-1660's position of Black as well as of White servants.
B. The acts had the effect of impairing rather than improving the position of White servants relative to what it had been before the 1660's.
C. The acts had a different effect on the position of White servants than did many of the acts passed during this time by the legislatures of other colonies.
D.
The acts, at the very least, caused the position of White servants to remain no better than it had been before the 1660's.

17. With which of the following statements regarding the status of Black people in the English colonies of North America before the 1660's would the author be LEAST likely to agree?
A. Although Black people were not legally considered to be slaves, they were often called slaves.
B. Although subject to some discrimination, Black people had a higher legal status than they did after the 1660's.
C. Although sometimes subject to lifetime servitude, Black people were not legally considered to be slaves.
D. Although often not treated the same as White people, Black people, like many White people, possessed the legal status of servants.

18. According to the passage, the Handlins have argued which of the following about the relationship between racial prejudice and the institution of legal slavery in the English colonies of North America?
A. Racial prejudice and the institution of slavery arose simultaneously.
B. Racial prejudice most often took the form of the imposition of inherited status, one of the attributes of slavery.
C. The source of racial prejudice was the institution of slavery.
D. Because of the influence of the Roman Catholic Church, racial prejudice sometimes did not result in slavery.

19. The passage suggests that the existence of a Roman conception of slavery in Spanish and Portuguese colonies had the effect of _________.
A. extending rather than causing racial prejudice in these colonies
B. hastening the legalization of slavery in these colonies
C. mitigating some of the conditions of slavery for Black people in these colonies
D. delaying the introduction of slavery into the English colonies

20. The author considers the explanation put forward by Freyre and Tannenbaum for the treatment accorded Black slaves in the English colonies of North America to be _____________.
A. ambitious but misguided
B. valid but limited
C. popular but suspect
D.
anachronistic and controversial

UNIT 2

Text A

The sea lay like an unbroken mirror all around the pine-girt, lonely shores of Orr's Island. Tall, kingly spruces wore their regal crowns of cones high in air, sparkling with diamonds of clear exuded gum; vast old hemlocks of primeval growth stood darkling in their forest shadows, their branches hung with long hoary moss; while feathery larches, turned to brilliant gold by autumn frosts, lighted up the darker shadows of the evergreens. It was one of those hazy, calm, dissolving days of Indian summer, when everything is so quiet that the faintest kiss of the wave on the beach can be heard, and white clouds seem to faint into the blue of the sky, and soft swathing bands of violet vapor make all earth look dreamy, and give to the sharp, clear-cut outlines of the northern landscape all those mysteries of light and shade which impart such tenderness to Italian scenery.

The funeral was over,---the tread of many feet, bearing the heavy burden of two broken lives, had been to the lonely graveyard, and had come back again,---each footstep lighter and more unconstrained as each one went his way from the great old tragedy of Death to the common cheerful of Life.

The solemn black clock stood swaying with its eternal "tick-tock, tick-tock," in the kitchen of the brown house on Orr's Island. There was there that sense of a stillness that can be felt,---such as settles down on a dwelling when any of its inmates have passed through its doors for the last time, to go whence they shall not return. The best room was shut up and darkened, with only so much light as could fall through a little heart-shaped hole in the window-shutter,---for except on solemn visits, or prayer-meetings or weddings, or funerals, that room formed no part of the daily family scenery.

The kitchen was clean and ample, hearth and oven on one side, and rows of old-fashioned splint-bottomed chairs against the wall.
A table scoured to snowy whiteness, and a little work-stand whereon lay the Bible, the Missionary Herald, and the Weekly Christian Mirror, before named, formed the principal furniture. One feature, however, must not be forgotten,---a great sea-chest, which had been the companion of Zephaniah through all the countries of the earth. Old, and battered, and unsightly it looked, yet report said that there was good store within which men for the most part respect more than anything else; and, indeed it proved often when a deed of grace was to be done---when a woman was suddenly made a widow in a coast gale, or a fishing-smack was run down in the fogs off the banks, leaving in some neighboring cottage a family of orphans,---in all such cases, the opening of this sea-chest was an event of good omen to the bereaved; for Zephaniah had a large heart and a large hand, and was apt to take it out full of silver dollars when once it went in. So the ark of the covenant could not have been looked on with more reverence than the neighbours usually showed to Captain Pennel's sea-chest.

1.
The author describes Orr's Island in a(n) ______ way.
A. emotionally appealing, imaginative
B. rational, logically precise
C. factually detailed, objective
D. vague, uncertain

2. According to the passage, the "best room" _____.
A. has its many windows boarded up
B. has had the furniture removed
C. is used only on formal and ceremonious occasions
D. is the busiest room in the house

3. From the description of the kitchen we can infer that the house belongs to people who _____.
A. never have guests
B. like modern appliances
C. are probably religious
D. dislike housework

4. The passage implies that _______.
A. few people attended the funeral
B. fishing is a secure vocation
C. the island is densely populated
D. the house belonged to the deceased

5. From the description of Zephaniah we can see that he _________.
A. was physically a very big man
B. preferred the lonely life of a sailor
C. always stayed at home
D. was frugal and saved a lot

Text B

Basic to any understanding of Canada in the 20 years after the Second World War is the country's impressive population growth. For every three Canadians in 1945, there were over five in 1966. In September 1966 Canada's population passed the 20 million mark. Most of this surging growth came from natural increase. The depression of the 1930s and the war had held back marriages, and the catching-up process began after 1945. The baby boom continued through the decade of the 1950s, producing a population increase of nearly fifteen percent in the five years from 1951 to 1956. This rate of increase had been exceeded only once before in Canada's history, in the decade before 1911, when the prairies were being settled. Undoubtedly, the good economic conditions of the 1950s supported a growth in the population, but the expansion also derived from a trend toward earlier marriages and an increase in the average size of families. In 1957 the Canadian birth rate stood at 28 per thousand, one of the highest in the world. After the peak year of 1957, the birth rate in Canada began to decline.
It continued falling until in 1966 it stood at the lowest level in 25 years. Partly this decline reflected the low level of births during the depression and the war, but it was also caused by changes in Canadian society. Young people were staying at school longer; more women were working; young married couples were buying automobiles or houses before starting families; rising living standards were cutting down the size of families. It appeared that Canada was once more falling in step with the trend toward smaller families that had occurred all through the Western world since the time of the Industrial Revolution. Although the growth in Canada's population had slowed down by 1966 (the increase in the first half of the 1960s was only nine percent), another large population wave was coming over the horizon. It would be composed of the children of the children who were born during the period of the high birth rate prior to 1957.

6. What does the passage mainly discuss?
A. Educational changes in Canadian society.
B. Canada during the Second World War.
C. Population trends in postwar Canada.
D. Standards of living in Canada.

7. According to the passage, when did Canada's baby boom begin?
A. In the decade after 1911.
B. After 1945.
C. During the depression of the 1930s.
D. In 1966.

8. The author suggests that in Canada during the 1950s ____________.
A. the urban population decreased rapidly
B. fewer people married
C. economic conditions were poor
D. the birth rate was very high

9. When was the birth rate in Canada at its lowest postwar level?
A. 1966.
B. 1957.
C. 1956.
D. 1951.

10. The author mentions all of the following as causes of declines in population growth after 1957 EXCEPT _________________.
A. people being better educated
B. people getting married earlier
C. better standards of living
D. couples buying houses

11. It can be inferred from the passage that before the Industrial Revolution _______________.
A. families were larger
B. population statistics were unreliable
C.
the population grew steadily
D. economic conditions were bad

Text C

I was just a boy when my father brought me to Harlem for the first time, almost 50 years ago. We stayed at the Hotel Theresa, a grand brick structure at 125th Street and Seventh Avenue. Once, in the hotel restaurant, my father pointed out Joe Louis. He even got Mr. Brown, the hotel manager, to introduce me to him, a bit punchy but still champ as far as I was concerned.

Much has changed since then. Business and real estate are booming. Some say a new renaissance is under way. Others decry what they see as outside forces running roughshod over the old Harlem. New York meant Harlem to me, and as a young man I visited it whenever I could. But many of my old haunts are gone. The Theresa shut down in 1966. National chains that once ignored Harlem now anticipate yuppie money and want pieces of this prime Manhattan real estate. So here I am on a hot August afternoon, sitting in a Starbucks that two years ago opened a block away from the Theresa, snatching at memories between sips of high-priced coffee. I am about to open up a piece of the old Harlem---the New York Amsterdam News---when a tourist…

Microeconomics: Uncertainty (Chapter Twelve)

Uncertainty is pervasive. What is uncertain in economic systems? Tomorrow's prices, future wealth, and the future availability of commodities.

Consider two states of nature: "car accident" (a) and "no car accident" (na). An accident occurs with probability πa and does not occur with probability πna, where πa + πna = 1. An accident causes a loss of $L.

Risk aversion: marginal utility declines as wealth rises, i.e. U'' < 0. [Figure: a concave utility-of-wealth curve, with U($0) = 2 and U($90) = 12 on the utility axis and EU = 7 marked at the expected wealth of $45.] Since U($45) > EU, the consumer is risk-averse: the utility of the expected value is greater than the expected utility.

State-contingent budget constraints: without insurance, Ca = m − L and Cna = m. This is the endowment bundle, plotted in the (Ca, Cna) plane.
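The risk-aversion condition on these slides, U($45) > EU when marginal utility is diminishing, can be checked numerically. The sketch below assumes a concave utility function chosen to interpolate the slide's figure values U($0) = 2 and U($90) = 12; the square-root form itself is an assumption, not from the slides:

```python
import math

def utility(wealth):
    # Assumed concave utility interpolating the slide's figure values:
    # U(0) = 2 and U(90) = 12, with U'' < 0 (diminishing marginal utility).
    return 2.0 + 10.0 * math.sqrt(wealth / 90.0)

p_accident = 0.5                   # wealth $0 with the accident, $90 without
w_accident, w_none = 0.0, 90.0

expected_wealth = p_accident * w_accident + (1 - p_accident) * w_none   # $45
expected_utility = (p_accident * utility(w_accident)
                    + (1 - p_accident) * utility(w_none))               # EU = 7

# Risk aversion: U(E[w]) > EU -- exactly the U($45) > EU condition above.
print(f"U($45) = {utility(expected_wealth):.2f} > EU = {expected_utility:.2f}")
```

With these numbers, U($45) ≈ 9.07 while EU = 7, so the utility of the expected wealth exceeds the expected utility, as the concave (U'' < 0) shape guarantees.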

METHOD FOR REDUCING CRITICAL DIMENSIONS USING MULTIPLE MASKING STEPS

Inventors: MARKS, Jeffrey; SADJADI, Reza, S., M.
Application No.: US2006002164, filed 2006-01-20
Publication No.: WO06/083592P1, published 2006-08-10
Applicants: MARKS, Jeffrey; SADJADI, Reza, S., M. (US)
Agent: LEE, Michael

Abstract: A method for forming features in an etch layer is provided. A first mask is formed over the etch layer, wherein the first mask defines a plurality of spaces with widths. A sidewall layer is formed over the first mask. Features are etched into the etch layer through the sidewall layer, wherein the features have widths that are smaller than the widths of the spaces defined by the first mask. The mask and sidewall layer are removed. An additional mask is formed over the etch layer, wherein the additional mask defines a plurality of spaces with widths. A sidewall layer is formed over the additional mask. Features are etched into the etch layer through the sidewall layer, wherein the features have widths that are smaller than the widths of the spaces defined by the first mask. The mask and sidewall layer are removed.

Cost-Sensitive Learning Based on Scaled Convex Hulls

LIU Zhenbing
(School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin 541004, China)

Abstract: Inspired by the idea of changing class distributions, a recent maximum-margin method, the scaled convex hull method, is applied to cost-sensitive learning. The method changes the distribution of the samples, and this change is achieved simply by assigning a different scale factor to each class. Experimental results demonstrate the effectiveness of the scaled convex hull method for solving cost-sensitive problems, and its solution procedure is also very simple.

Keywords: scaled convex hull; cost-sensitive; classification

Two general approaches to cost-sensitive learning can be distinguished: (1) changing the distribution of the original training samples and rebuilding the training set, then applying an existing classifier; and (2) improving existing algorithms, i.e. modifying the internal structure of a classifier model, according to the characteristics of the cost-sensitive problem, so that it can solve cost-sensitive problems directly.
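The abstract's core idea, changing the effective sample distribution by giving each class its own scale factor, can be illustrated with a toy sketch. The code below is not the paper's algorithm: it only contracts each class's points toward its centroid by a class-specific factor (the scaled-hull contraction) and shows that shrinking the cheap class harder pushes a nearest-pair decision boundary away from the costly class. All data and names are hypothetical:

```python
import itertools
import math

def shrink(points, mu):
    """Contract each sample toward the class centroid by factor mu in (0, 1]
    (the scaled-convex-hull idea: mu scales the class's hull)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [(cx + mu * (x - cx), cy + mu * (y - cy)) for x, y in points]

costly = [(0, 0), (0, 2), (2, 0), (2, 2)]   # high-misclassification-cost class
cheap = [(3, 0), (3, 2), (5, 0), (5, 2)]    # low-cost class

boundaries = {}
for mu_cheap in (1.0, 0.5):
    a_pts = shrink(costly, 1.0)             # costly class keeps its full hull
    b_pts = shrink(cheap, mu_cheap)         # cheap class is contracted
    # The nearest pair between the two reduced point sets approximates the
    # closest points of the two scaled hulls; its midpoint locates the
    # max-margin boundary along x.
    a, b = min(itertools.product(a_pts, b_pts), key=lambda ab: math.dist(*ab))
    boundaries[mu_cheap] = (a[0] + b[0]) / 2
    print(f"mu_cheap={mu_cheap}: boundary near x = {boundaries[mu_cheap]}")
```

Contracting the cheap class (mu = 0.5) moves the boundary from x = 2.5 to x = 2.75, i.e. away from the costly class, which is the geometric effect the abstract attributes to per-class scale factors.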

Reducing Uncertainty In Location Prediction Of Moving Objects In Road Networks

Hariharan Gowrisankar and Silvia Nittel
Spatial Information Science and Engineering, and NCGIA
5711 Boardman Hall, University of Maine, Orono, ME 04469, USA
{hari@, nittel@}

Abstract

Consider a database which tracks moving objects in a road network following a pre-specified route. In a city environment, the number of moving objects can be large, and the frequency of updating the objects' locations increases the load on the database management system. Hence, it is not feasible to update an object's location constantly and explicitly. Instead, the location of a moving object is stored as a dynamic attribute, e.g. a motion vector function [3]: the location value is calculated when it is accessed, and it is updated when the parameters of the function change (speed, route, etc.) or, to increase location accuracy, when a calculated location value is compared with a measured value to keep the deviation bounded.

Today, dead reckoning is used to implement the motion vector function and to determine the current and future positions of an object based on knowledge of the underlying route network, the object's pre-specified route, and the object's last position and speed [2]. However, tracking moving objects by dead-reckoning techniques inherently introduces uncertainty into location determination and prediction, since we do not really know where the object is actually located. Several cost factors are associated with determining the location of a moving object, i.e. the communication and update cost, the uncertainty cost, and the deviation cost [3]. Communication and update cost is the cost involved in communicating data across the wireless network and writing the data to the database. Deviation cost is associated with the linear distance between the object's location as stored in the database and its actual location [3].
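A minimal sketch of the motion-vector/dead-reckoning scheme described above: the database stores only the last reported fix, speed, and heading, computes the location on demand, and triggers an update when the measured (e.g. GPS) position deviates from the prediction by more than a threshold. All class and parameter names are hypothetical, not from the paper:

```python
import math

class MotionVector:
    """Dead-reckoned location of one moving object (names hypothetical).

    The database stores only (x, y, speed, heading, t0); the current
    position is computed on demand instead of being updated constantly."""

    def __init__(self, x, y, speed, heading_deg, t0):
        self.x, self.y = x, y
        self.speed = speed                      # distance units per time unit
        self.heading = math.radians(heading_deg)
        self.t0 = t0                            # time of the last reported fix

    def predict(self, t):
        """Dead-reckon the position at time t from the last fix."""
        d = self.speed * (t - self.t0)
        return (self.x + d * math.cos(self.heading),
                self.y + d * math.sin(self.heading))

def needs_update(mv, measured, t, threshold):
    """Send a location update only when the measured (e.g. GPS) position
    deviates from the dead-reckoned one by more than the linear threshold."""
    return math.dist(mv.predict(t), measured) > threshold

mv = MotionVector(0.0, 0.0, speed=1.0, heading_deg=0.0, t0=0.0)
print(mv.predict(10.0))                                    # (10.0, 0.0)
print(needs_update(mv, (10.5, 1.0), 10.0, threshold=2.0))  # False (deviation ~1.12)
print(needs_update(mv, (10.0, 3.0), 10.0, threshold=2.0))  # True  (deviation 3.0)
```

The trade-off described in the text lives in `threshold`: a small value keeps the deviation cost low but generates frequent updates (high communication cost), while a large value does the reverse.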
Accounting for deviation in all possible directions introduces a region of uncertainty within which we locate the object and, thus, an uncertainty cost. Ideally, we would like to keep the communication cost as well as the deviation and uncertainty costs low; however, there is a trade-off between communication/update cost and uncertainty cost. In this extended abstract, we propose an improved dead-reckoning policy that reduces the uncertainty in location prediction of moving objects in road networks while keeping update costs low.

Location update policies as in [3] use linear deviation as the threshold parameter, e.g. an object deviates more than a certain threshold (e.g. 2 miles) from its assumed position on the assumed route. This technique relies on the availability of only one spatial sensor, i.e. a GPS receiver, and on assumptions about speed restrictions on the road network. We assume that we additionally have information about the direction in which an object is moving. Based on this information, we propose a hybrid dead-reckoning policy that uses linear as well as angular deviation as threshold parameters: we generate a location update when the linear deviation or the angular deviation surpasses its defined threshold. In contrast to other approaches, we use the available information about angular deviation and can thus restrict the uncertainty significantly.

If d_l is the linear deviation threshold, w is the width of the road, and d_a is the angular deviation threshold, then:

Uncertainty cost without the angular constraint: U_l = π * d_l^2

Uncertainty cost with the angular constraint for road networks: U_a = 2 * d_l * w, if the actual angular deviation is less than d_a

Reduction in uncertainty cost: R_u % = ((U_l − U_a) / U_l) * 100

Hence, R_u = (1 − (2 * w) / (π * d_l)) * 100

Reducing uncertainty greatly influences a range query such as 'retrieve all taxis that will be within 2 miles of the airport in 15 minutes', assuming 70% certainty.
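The uncertainty-cost formulas above can be evaluated directly. The sketch below implements U_l = π * d_l^2 (a disc of radius d_l) and U_a = 2 * d_l * w (a road corridor of width w) and reports the percentage reduction R_u; the sample values (a 2-mile linear threshold on a 0.01-mile-wide road) are illustrative assumptions, not from the paper:

```python
import math

def uncertainty_reduction(d_l, w):
    """Percentage reduction in uncertainty area from the angular constraint,
    using the formulas in the text, assuming the actual angular deviation
    stays below the angular threshold d_a."""
    u_linear = math.pi * d_l ** 2     # U_l: disc of radius d_l
    u_angular = 2 * d_l * w           # U_a: road corridor of width w
    # Equivalent to 100 * (1 - 2*w / (pi * d_l)), matching R_u in the text.
    return 100 * (u_linear - u_angular) / u_linear

# Illustrative (assumed) values: 2-mile linear threshold, 0.01-mile-wide road.
print(f"R_u = {uncertainty_reduction(d_l=2.0, w=0.01):.2f}%")  # R_u = 99.68%
```

Because the corridor area grows linearly in d_l while the disc grows quadratically, the relative reduction approaches 100% as the linear threshold grows, which is why the angular constraint matters most for generous update thresholds.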
The target area is a circle of radius 2 miles with the airport as its center. The database predicts the future location of each object (15 minutes from now) as a point and the uncertainty as a region around the point (thus, a moving region), and checks whether at least 70% of the moving region falls within the target area. For a taxi whose object region falls 70% within the target area, the uncertainty that it may not fall there is 30% of the moving region. If we include an angular constraint, we can represent the object as a point, and its uncertainty area is a line if the angular deviation is zero. In this case, the remaining 30% uncertainty is of linear deviation, which leads to more accurate decision-making.

For future work in this area, we are interested in using the linear and angular deviation constraints at decision points in road networks to determine the location of a moving object without knowledge of a pre-specified route. The hybrid location update policy also involves a trade-off with communication cost, since more data needs to be sent to the database; but as communication gets cheaper and network bandwidth increases as specified by the International Telecommunication Union (ITU) [1], location tracking of moving objects can become more precise by employing the hybrid dead-reckoning policy.

References

[1] Ajay K. Mathur, The Speed Demon: 3G Telecommunications - A High-Level Architecture Study of 3G and UMTS, http://www-/developerworks/wireless/library/wi-speed/
[2] Bowditch - The American Practical Navigator, /bowditch/
[3] Ouri Wolfson, Sam Chamberlain, Liqin Jiang, Moving Objects Databases: Issues and Solutions, Conference "Statistical and Scientific Database Management", 1998

Acknowledgements

This work was partially supported by the National Science Foundation under NSF grant number EPS-9983432.
