A Compilation of Economist Science and Technology Articles
2018 Economist English readings for postgraduate exam preparation, Part 2
Science and technology | Air pollution: Blown away
Retired jet engines could help clear the smog that smothers big cities.
To land at Indira Gandhi Airport is to descend from clear skies to brown ones. Delhi's air is toxic. According to the World Health Organisation, India's capital has the most polluted atmosphere of all the world's big cities. The government is trying to introduce rules that will curb emissions—allowing private cars to be driven only on alternate days, for example, and enforcing better emissions standards for all vehicles. But implementing these ideas, even if that can be done successfully, will change things only slowly. A quick fix would help.
Economist articles (intensive-reading material for CET-4/6 and IELTS), 2020-08-27
The Economist, August 29th 2020 | Business

Energy utilities | Lit
NEW YORK — Businesses compete to battle California's blackouts

Depending on whom you ask, California is a leader in clean energy or a cautionary tale. Power outages in August prompted stern critiques from Republicans. "In California", Donald Trump tweeted, "Democrats have intentionally implemented rolling blackouts—forcing Americans in the dark." In addition to provoking outrage and derision, however, the episode is also likely to inspire investment.

The Golden State has long been America's main testing ground for green companies. Californians buy half of all electric cars sold in America. Theirs is the country's largest solar market. As California deals with heat waves, fires and a goal of carbon-free electricity by 2045, the need for a reliable grid is becoming ever more obvious. For years firms competed to generate clean power in California. Now a growing number are vying to store and manage it, too.

August's blackouts have many causes, including poor planning, an unexpected lack of capacity and sweltering heat in not just California but nearby states from which it sometimes imports power. Long before the outages, however, electricity operators were anxious about capacity. California's solar panels become less useful in the evening, when demand peaks. In November state regulators mandated that utilities procure an additional 3.3 gigawatts (GW) of capacity, including giant batteries that charge when energy is abundant and can sell electricity back to the grid.

Too few such projects have come online to cope with the surge in demand for air-conditioning in the scorching summer. But more are sprouting across the state. On August 19th LS Power, an electricity firm backed by private equity, unveiled a 250-megawatt (MW) storage project in San Diego, the largest of its kind in America. In July the county of Monterey said Vistra Energy, a Texan power company, could build as much as 1.2GW of storage.

The rooftop solar industry stands to benefit from a new Californian mandate that requires new homes to install panels on their roofs from this year. Sunrun, the market leader, is increasingly pairing such residential installations with batteries. In July, for instance, the company said it had won contracts with energy suppliers in the Bay Area to install 13MW of residential solar and batteries. These could supply power to residents in a blackout or feed power into the grid to help meet peak demand. Sunrun is so confident in its future that it has bid $3.2bn for Vivint Solar, its main rival.

Another way to stave off outages is to curb demand. Enel, a European power company, has contracts with local utilities to work with large commercial and industrial clients. When demand rises, Enel pays customers to reduce energy consumption, easing demand on the grid. A company called OhmConnect offers something similar for homeowners.

Even as such offerings scale up, the need for reliability means that fossil fuels will not disappear just yet. On September 1st California's regulators will vote on whether to delay the retirement of four natural-gas plants in light of the outages. The state remains intent on decarbonising its power system over the next 25 years. But progress may not move in a straight line.
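The storage business model described above, charging when energy is abundant and selling back into the evening peak, amounts to a simple dispatch rule. A minimal sketch, with invented hourly prices and battery parameters (none of these numbers are from the article):

```python
# Toy price-arbitrage dispatch for a grid battery: charge in cheap midday
# solar hours, discharge into the expensive evening ramp.

def dispatch(prices, capacity_mwh, power_mw, buy_below, sell_above):
    soc, profit = 0.0, 0.0  # state of charge (MWh) and running profit ($)
    for price in prices:    # one price per hour, $/MWh
        if price <= buy_below and soc < capacity_mwh:
            e = min(power_mw, capacity_mwh - soc)   # charge for one hour
            soc += e
            profit -= e * price
        elif price >= sell_above and soc > 0:
            e = min(power_mw, soc)                  # discharge for one hour
            soc -= e
            profit += e * price
    return profit

# Invented hourly prices: a midday solar glut, then a pricey evening peak.
prices = [40, 35, 30, 20, 15, 15, 20, 60, 120, 150, 90, 50]
print(f"${dispatch(prices, capacity_mwh=400, power_mw=100, buy_below=25, sell_above=80):,.0f}")
```

A real operator would optimise against price forecasts and degradation costs; the threshold rule is only meant to show why evening scarcity makes storage pay.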
Bribery | A closer look at greasy palms
Bribery pays—if you don't get caught

Many big companies may be struggling with depressed sales, but these are busy times for bribery-busters. Mexico is abuzz over allegations by an ex-boss of Pemex, the state oil giant, that several senior politicians received bungs from companies including Odebrecht, a Brazilian construction firm (see Americas section). The scandal is the latest in a string of graft cases to make headlines this year, starting with Airbus's record $4bn settlement in January over accusations of corruption for making illegal payments in various countries.

Corporate bribery is hardly new. In surveys, between a third and a half of companies typically claim to have lost business to rivals who won contracts by paying kickbacks. But such perceptions-based research has obvious limitations. A new study takes a more rigorous approach, and draws some striking conclusions.

Raghavendra Rau of Judge Business School at the University of Cambridge, Yan-Leung Cheung of the Education University of Hong Kong and Aris Stouraitis of Hong Kong Baptist University examined nearly 200 prominent bribery cases in 60 countries between 1975 and 2015. For the firms doing the bribing, they found, the short-term gains were juicy: every dollar of bribe translated into a $6-9 increase in excess returns, relative to the overall stockmarket. That, however, does not take account of the chances of getting caught. These have risen as enforcement of America's 43-year-old anti-bribery law, the Foreign Corrupt Practices Act (FCPA), has been stepped up and other countries have passed similar laws. The number of FCPA cases is up sharply since the financial crisis of 2007-09, according to Stanford Law School (see chart). It has dipped a bit under President Donald Trump, who has criticised the FCPA for hobbling American firms overseas, but remains well above historic levels. Total fines for FCPA violations were $14bn in 2016-19, 48 times as much as in the four years to 2007.

The authors also tested 11 hypotheses that emerged from past studies of bribery. They found support for some, for instance that firms pay larger bribes when they expect to receive larger benefits, and that the net benefits of bribing are smaller in places with more public disclosure of politicians' sources of income.

But they punctured other bits of received wisdom. Most striking, they found no link between democracy and graft. This challenges the "Tullock paradox", which holds that firms can get away with smaller bribes in democracies because politicians and officials have less of a lock on the system than those in autocratic countries, and so cannot extract as much rent. Such findings will doubtless be of interest to corruption investigators and unscrupulous executives alike.

[Chart: "Brown envelopes, big cheques" — United States, Foreign Corrupt Practices Act: enforcement actions and sanctions ($bn), 1977-2020, and number of cases by selected industry. Sources: Stanford Law School; Sullivan & Cromwell. Figures cover investigations and enforcement actions; 2020 to August.]
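The study's central trade-off, juicy short-term returns against a rising chance of getting caught, can be written as a one-line expected-value calculation. A minimal sketch: the $6-9-per-dollar excess-return figure is from the study, while the detection probability and fine multiple below are invented assumptions:

```python
def expected_gain(bribe, return_multiple, p_caught, fine_multiple):
    """Expected excess return from a bribe, net of the chance of a fine.

    return_multiple: excess return per dollar of bribe ($6-9 in the study).
    p_caught: probability enforcement catches the firm (assumed).
    fine_multiple: fine as a multiple of the bribe if caught (assumed).
    """
    upside = bribe * return_multiple
    downside = bribe * fine_multiple
    return (1 - p_caught) * upside + p_caught * (upside - downside)

bribe = 1_000_000
for p in (0.05, 0.25, 0.50):
    print(f"p(caught)={p:.0%}: expected net gain ${expected_gain(bribe, 7.5, p, 20):,.0f}")
```

At low detection probabilities the bribe "pays"; as enforcement tightens, the expected value flips negative, which is the article's caveat in miniature.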
Two selected Economist articles
You are listening to the audio edition of The Economist. From the Business section.
HTC's patent problems: Android alert
Using Google's Android software has given HTC a boost, but it may now make the Taiwanese handset-maker vulnerable to costly lawsuits.

UNTIL a few years ago HTC was pretty small and relatively obscure. But the Taiwanese company's recent growth has been remarkable. In the second quarter it sold 11m smartphones, more than doubling its revenues in the same period last year. HTC's main rivals, Nokia, Samsung and Apple, still sell around twice as many smartphones. But its rapid growth, especially on Apple's American home turf, has made it a competitor to reckon with.
Selected Economist articles
1. The world economy: a muddy stretch, or a downhill slide?

Summer has come to the world's big financial centres, but the mood there is anything but sunny. Weighed down by gloomy economic news from around the globe, stockmarkets have fallen for weeks on end. Factory output is slowing worldwide and consumers are growing more cautious. In America almost every statistic, from house prices to jobs growth, shows signs of weakness. Although the gloom eased a little earlier this week, that was only because figures such as American retail sales and Chinese industrial production were not quite as bad as feared. Globally, growth is at its lowest point since the recovery began roughly two years ago. So is the current weakness merely a muddy stretch on the road to recovery, or a sign that the global rebound is losing momentum?

The big soft patch

Judging by the causes of the stall, the pause should be temporary. First, the tsunami dealt a heavy blow to Japan's GDP, disrupted supply chains and hit global industrial output, especially in April. But even as the economic statistics recorded the plunge, more forward-looking indicators pointed to a rebound: the summer production schedules of American carmakers, for instance, imply a boost to annualised GDP growth there of at least a percentage point.

Second, the surge in oil prices early in the year sapped demand. More income has been flowing from cash-strapped oil importers to producers content to sit on the windfall, and dear fuel has dented consumer confidence, particularly in America, the biggest consumer of oil. The possibility that prices will climb again as turmoil in the Arab world deepens is also unsettling. For now, though, the pressure from rising prices is easing. America's average petrol price, while still 21% higher than at the start of the year, has begun to fall back. That should shore up consumer confidence (and spur spending).

Third, many emerging economies have been tightening monetary policy to cope with high inflation. China's consumer prices climbed by 5.5% in the year to May, and India's wholesale prices jumped by 9.1%. Seen in that light, slower growth is to some degree a welcome sign: it shows that these countries' central banks are acting, and that their efforts are starting to bite. Even in China, where worries about a hard landing run deepest, there is no sign that the government has overdone it. The bigger risk is that anxiety about a weak world economy causes the tightening to be called off too soon. With monetary conditions still extremely loose, wavering resolve would mean higher inflation and, ultimately, a far greater risk of a bust. A slowdown may be just what most emerging markets need to cool down, but it is the last thing any rich country wants right now.
Latest Economist reading selections 1
WHEN Steve Jobs unveiled the iPhone in 2007, he changed an industry. Apple's brilliant new device was a huge advance on the mobile phones that had gone before: it looked different and it worked better. The iPhone represented innovation at its finest, making it the top-selling smartphone soon after it came out and helping to turn Apple into the world's most valuable company, with a market capitalisation that now exceeds $630 billion.

Apple's achievement spawned a raft of imitators. Many smartphone manufacturers now boast touch-screens and colourful icons. Among them is Samsung, the world's biggest technology manufacturer, whose gadgets are the iPhone's nearest rivals and closest lookalikes. The competition and the similarities were close enough for Apple to sue Samsung for patent infringement in several countries, spurring the South Korean firm to counterclaim that it had been ripped off by Apple as well. On August 24th an American jury found that Samsung had infringed six patents and ordered it to pay Apple more than $1 billion in damages, one of the steepest awards yet seen in a patent case.

Some see thinly disguised protectionism in this decision. That does the jury a disservice: its members seem to have stuck to the job of working out whether patent infringements had occurred. The much bigger questions raised by this case are whether all Apple's innovations should have been granted a patent in the first place; and the degree to which technology stalwarts and start-ups alike should be able to base their designs on the breakthroughs of others.

It is useful to recall why patents exist. The system was established as a trade-off that provides a public benefit: the state agrees to grant a limited monopoly to an inventor in return for disclosing how the technology works. To qualify, an innovation must be novel, useful and non-obvious, which earns the inventor 20 years of exclusivity. "Design patents", which cover appearances and are granted after a simpler review process, are valid for 14 years.

The dispute between Apple and Samsung is less over how the devices work and more over their look and feel. At issue are features like the ability to zoom into an image with a double finger tap, pinching gestures, and the visual "rubber band" effect when you scroll to the end of a page. The case even extends to whether the device and its on-screen icons are allowed to have rounded corners. To be sure, some of these things were terrific improvements over what existed before the iPhone's arrival, but to award a monopoly right to finger gestures and rounded rectangles is to stretch the definition of "novel" and "non-obvious" to breaking-point.

A proliferation of patents harms the public in three ways. First, it means that technology companies will compete more at the courtroom than in the marketplace—precisely what seems to be happening. Second, it hampers follow-on improvements by firms that implement an existing technology but build upon it as well. Third, it fuels many of the American patent system's broader problems, such as patent trolls (speculative lawsuits by patent-holders who have no intention of actually making anything); defensive patenting (acquiring patents mainly to pre-empt the risk of litigation, which raises business costs); and "innovation gridlock" (the difficulty of combining multiple technologies to create a single new product because too many small patents are spread among too many players).

Some basic reforms would alleviate many of the problems exemplified by the iPhone lawsuit. The existing criteria for a patent should be applied with greater vigour. Specialised courts for patent disputes should be established, with technically minded judges in charge: the inflated patent-damage awards of recent years are largely the result of jury trials. And if patents are infringed, judges should favour monetary penalties over injunctions that ban the sale of offending products and thereby reduce consumer choice.

Pinch and bloom

A world of fewer but more robust patents, combined with a more efficient method of settling disputes, would not just serve the interests of the public but also help innovators like Apple. The company is rumoured to be considering an iPad with a smaller screen, a format which Samsung already sells. What if its plans were blocked by a specious patent? Apple's own early successes were founded on enhancing the best technologies that it saw, notably the graphical interface and mouse that were first invented at Xerox's Palo Alto Research Centre. "It comes down to trying to expose yourself to the best things that humans have done—and then try to bring those things into what you're doing," said Jobs in a television documentary, "Triumph of the Nerds", in 1996. "And we have always been shameless about stealing great ideas."
Selected Economist articles 1
Togetherness in Libya | America's use of force: Obama's awfully big change
Mar 31st 2011 | from the print edition

IT IS Pavlovian. As soon as a president does something new in foreign policy, the world wants to know whether he has invented a new "doctrine". The short answer in the case of Libya is that Barack Obama has not invented a new doctrine so much as repudiated an old one. What he is also doing, however, is challenging an American habit of mind.

The doctrine Mr Obama has repudiated is the one attributed to Colin Powell, the former chairman of the joint chiefs of staff and George W. Bush's transparently miserable secretary of state when America invaded Iraq in 2003. That held, among other things, that America ought to go to war only when its vital interests are threatened, when the exit strategy is clear, and when it can apply overwhelming force to ensure that its aims are achieved. Nothing could be more different from the account Mr Obama gave Americans on March 28th of his reasons for using military force in Libya. He does not believe that America's vital interests are at stake (though some "important" ones are); the exit strategy is not entirely clear (Colonel Qaddafi must go, but who knows when, and not as a direct result of American military action); and the force America is willing to apply (no boots on the ground) is strictly limited.

None of this should be a surprise. In "The Audacity of Hope", the bestseller Mr Obama wrote as a senator in 2006, he set out a theory of military intervention. Like all sovereign nations, he argued, America has the unilateral right to defend itself from attack, and to take unilateral military action to eliminate an imminent threat. But beyond matters of clear self-defence, it would "almost always" be in its interest to use force multilaterally. This would not mean giving the UN Security Council a veto over its actions, or rounding up Britain and Togo and doing as it pleased. It would mean following the example of the first President Bush in the first Gulf war—"engaging in the hard diplomatic work of obtaining most of the world's support for our actions".

The virtue of such an approach was that America had much to gain in a world that lived by rules. By upholding such rules itself, it could encourage others to do so too. A multilateral approach would also lighten America's burden at times of war. This might be "a bit of an illusion", given the modest power of most American allies. But in many future conflicts the military operation was likely to cost less than the aftermath: training police, switching the lights back on, building democracy and so forth.

The president, it now emerges, remembers exactly what he wrote. He hesitated about whether to act in Libya (just ask the French and British, who egged him on but came close to losing hope), but he was always clear about how. All the conditions he wished for in that book five years ago have come to pass. In this week's speech he ticked them methodically off: "an international mandate for action, a broad coalition prepared to join us, the support of Arab countries, and a plea for help from the Libyan people themselves. We also had the ability to stop Qaddafi's forces in their tracks without putting American troops on the ground." Under such circumstances, he said, for America to turn a blind eye to the fate of Benghazi would have been "a betrayal of who we are".

Why does this theory of intervention, and the noble sentiment attached to it, fail to qualify as a "doctrine"? Because it is too elastic to provide a guide to future action. Would America "betray" itself by turning a blind eye to atrocities under different, less favourable, circumstances? So it seems. It has, after all, done so before, in Rwanda and Darfur—and Mr Obama appears to accept that it might have to do so again when, say, an alliance would be damaged, as in Bahrain, or the job is too hot to handle, as in Syria or Iran. Also unclear is whether an American interest must also be at stake before Mr Obama invokes the moral case for action. Conveniently (for the purpose of selling this particular war), the president detects a "strategic interest" in preventing Colonel Qaddafi from chilling the wider Arab spring, so nobody knows.

In fairness, elasticity is not a sin; and Mr Obama does not claim to have invented anything he calls a "doctrine". The worst you can say about his approach is that it is merely commonsensical: decide the issues case-by-case while holding some idea of values and interests in mind. Many who say they want more consistency than this (typically by asking some variant of "What about Zimbabwe?") do so not because they really believe that foreign policy can be run by an algorithm but in order to embarrass Mr Obama in any way they can. Prize chump in the case of Libya this past fortnight has been Newt Gingrich, the Republican presidential hopeful who demanded consistency, called for intervention and turned on a dime the instant Mr Obama answered.

After you, Sarko

More significant, however, is that habit of mind. In Libya Mr Obama is challenging the assumption of global leadership America has taken for granted ever since the second world war. America has joined coalitions before, but never under a president quite so adamant that America is not in charge—even if the military burden-sharing is indeed a bit of an illusion. Most Republicans and quite a few Democrats hate this. Mr Obama's hope is that America's low profile will make the war more palatable not only to the Muslim world but also to the economy-fixated voters at home who question whether America can still afford to play its traditional leadership role. What he may soon discover is that modesty extracts a price of its own. By sharing the leadership with others, he has made his policy hostage to the limited mandate (no use of force for regime change) imposed by the United Nations and the limited means of his allies in Europe and the Middle East. It may not be a doctrine, it should not be a surprise, but nobody can deny that it is a gamble.
Selected Economist economics articles 4
Inadequate

SOMETIMES the only thing people can agree on is a mediocre idea. Ahead of the G20 meeting, some regulators are pushing to introduce dynamic provisioning for banks. Under this system, in boom years banks make provisions against profits which then sit on their balance-sheets as reserves against unspecified potential losses. In the bad years they draw down on these reserves. This smooths banks' profits over the cycle, making their capital positions "counter-cyclical". Supporters point to Spain, which uses this approach and whose lenders are in relatively good nick.

Banks should be encouraged to save more for a rainy day. But the importance of Spain's system has been oversold. Going into the credit crisis, its two big banks had an extra buffer equivalent to about 1.5% of risk-weighted assets. Banks like UBS or Citigroup have had write-offs far beyond this, equivalent to 8-15% of risk-weighted assets. Whether dynamic provisions influenced managers' behaviour is also questionable. Spain's BBVA was run using an economic-capital model that, according to its 2007 annual report, explicitly replaced the generic provision in its income statement with its "best estimate of the real risk incurred".

Accounting standard-setters, meanwhile, are not amused. They support the objective of counter-cyclical capital rules but think dynamic provisioning is a bad way to achieve this. Why not simply require banks to run with higher capital ratios, rather than go through a circuitous route by smoothing profits, which investors tend to dislike? Accountants worry their standards are being fiddled with needlessly, after a decades-long fight to have them independently set to provide accurate data to investors.

Is there a solution? If anything, the crisis shows that accounting and supervision should be further separated to break the mechanistic link between mark-to-market losses and capital. Investors should get the information they want. Supervisors should make a judgment about the likelihood of losses and set the required capital level accordingly. Warren Buffett, an astute investor, has endorsed this approach.

Sadly, bank supervision is as dysfunctional as the banks. The Basel 2 accords took five years to negotiate. Local regulators interpreted them differently and many failed to enforce them. Confidence in their integrity is now so low that many investors and some banks and regulators have abandoned Basel as their main test of capital. Given this mess, it is easy to see why policymakers might view tweaking accounting standards as an attractive short cut: with some arm-twisting, the rules can be changed quickly and are legally enforceable. But this is a matter where short cuts are not good enough.

Unsavoury spread

TEN years ago Warren Buffett and Jack Welch were among the most admired businessmen in the world. Emerging markets were seen as risky, to be avoided by the cautious. But now the credit-default swaps market indicates that Berkshire Hathaway, run by Mr Buffett, is more likely to default on its debt than Vietnam. GE Capital, the finance arm of the group formerly run by Mr Welch, is a worse credit risk than Russia and on March 12th Standard & Poor's downgraded its debt—the first time GE and its subsidiaries have lost their AAA rating in over five decades.

The contrast highlights the sorry state of the corporate-bond market. A turn-of-the-year rally was founded on hopes that spreads (the excess of corporate-bond yields over risk-free rates) more than compensated investors for the economic outlook.
That has now petered out.

The weakness has been much greater in speculative, or high-yield, bonds than in the investment-grade part of the market. This is hardly surprising. First, economic prospects are so dire that companies already in trouble will have difficulty surviving. Banks are trying to preserve their own capital and do not need to own any more toxic debt. Even if refinancing were available for endangered firms, it would be prohibitively dear. It is only a matter of time before some go under. Moody's cites 283 companies at greatest risk of default, including well-known outfits like Blockbuster, a video-rental chain, and MGM Mirage, a casino group. A year ago just 157 companies made the list. Standard & Poor's says 35 have defaulted this year, against 12 in the same period in 2008. That translates into a default rate over the past 12 months of just 3.8%.

The rate is likely to increase sharply. Charles Himmelberg, a credit strategist at Goldman Sachs, forecasts that 14% of high-yield bonds will default this year, with the same proportion going phut in 2010. Worse, creditors will get back only about 12.5 cents on the dollar. All told, Goldman thinks the combination of defaults and low recovery rates will cost bondholders 37 cents on the dollar in the next five years.

A second problem for the corporate-bond market is that optimism about the scope for an imminent end to the financial crisis has dissipated. "People have given up hope that the new [Obama] administration will be able to do anything to make things better quickly," says Willem Sels, a credit strategist at Dresdner Kleinwort.

Banks are still the subject of heightened concern. Credit Derivatives Research has devised a counterparty-risk index, based on the cost of insuring against default of 15 large banks; the index is now higher than it was after the collapse of Lehman Brothers. Jeff Rosenberg, head of credit strategy at Bank of America Securities Merrill Lynch, says investors are uncertain about the impact of government intervention in banks. Each successive rescue, from Bear Stearns to Citigroup, has affected different parts of the capital structure in different ways.

A third problem for the high-yield market is that plans for quantitative easing (purchases by the central bank of government and private-sector debt) are focused on investment-grade bonds. As well as reviving the economy, governments are concerned about protecting taxpayers' money, and so will not want to buy bonds at high risk of default. If the government is going to support the investment-grade market, investors have an incentive to steer their portfolios in that direction.

The relative strength of the investment-grade market even permitted the issuance of around $300 billion of bonds in the first two months of the year, albeit largely for companies in safe industries such as pharmaceuticals. Circumstances suited all the market participants. "Spreads were wide, which attracted investors, but absolute levels of interest rates were low, which suited issuers," says Mr Rosenberg.

Although the Dow Jones Industrial Average jumped by nearly 6% on March 10th, it is hard to see how the equity market can enjoy a sustained rebound while corporate-bond spreads are still widening. Bondholders have a prior claim on a company's assets; if they are not going to be paid in full, then shareholders will not get a look-in. However, credit investors say their market often takes its lead from equities.
If each is following the other, that hints at a worrying downward spiral.

A Plan B for global finance

In a guest article, Dani Rodrik argues for stronger national regulation, not the global sort.

THE clarion call for a global system of financial regulation can be heard everywhere. From Angela Merkel to Gordon Brown, from Jean-Claude Trichet to Ben Bernanke, from sober economists to countless newspaper editorials; everyone, it seems, is asking for it regardless of political complexion.

That is not surprising, perhaps, in light of the convulsions the world economy is going through. If we have learnt anything from the crisis it is that financial regulation and supervision need to be tightened and their scope broadened. It seems only a small step to the idea that we need much stronger global regulation as well: a global college of regulators, say; a binding code of international conduct; or even an international financial regulator.

Yet the logic of global financial regulation is flawed. The world economy will be far more stable and prosperous with a thin veneer of international co-operation superimposed on strong national regulations than with attempts to construct a bold global regulatory and supervisory framework. The risk we run is that pursuing an ambitious goal will detract us from something that is more desirable and more easily attained.

One problem with the global strategy is that it presumes we can get leading countries to surrender significant sovereignty to international agencies. It is hard to imagine that America's Congress would ever sign off on the kind of intrusive international oversight of domestic lending practices that might have prevented the subprime-mortgage meltdown, let alone avert future crises. Nor is it likely that the IMF will be allowed to turn itself into a true global lender of last resort. The far more likely outcome is that the mismatch between the reach of markets and the scope of governance will prevail, leaving global finance as unsafe as ever. That certainly was the outcome the last time we tried an international college of regulators, in the ill-fated case of the Bank of Credit and Commerce International.

A second problem is that even if the leading nations were to agree, they might end up converging on the wrong set of regulations. This is not just a hypothetical possibility. The Basel process, viewed until recently as the apogee of international financial co-operation, has been compromised by the inadequacies of the bank-capital agreements it has produced. Basel 1 ended up encouraging risky short-term borrowing, whereas Basel 2's reliance on credit ratings and banks' own models to generate risk weights for capital requirements is clearly inappropriate in light of recent experience. By neglecting the macro-prudential aspect of regulation—the possibility that individual banks may appear sound while the system as a whole is unsafe—these agreements have, if anything, magnified systemic risks. Given the risk of converging on the wrong solutions yet again, it would be better to let a variety of regulatory models flourish.

Who says one size fits all?

But the most fundamental objection to global regulation lies elsewhere. Desirable forms of financial regulation differ across countries depending on their preferences and levels of development. Financial regulation entails trade-offs along many dimensions. The more you value financial stability, the more you have to sacrifice financial innovation.
The more fine-tuned and complex the regulation, the more you need skilled regulators to implement it. The more widespread the financial-market failures, the larger the potential role of directed credit and state banks. Different nations will want to sit on different points along their "efficient frontiers". There is nothing wrong with France, say, wanting to purchase more financial stability than America—and having tighter regulations—at the price of giving up some financial innovations. Nor with Brazil giving its state-owned development bank special regulatory treatment, if the country wishes, so that it can fill in for missing long-term credit markets.

In short, global financial regulation is neither feasible, nor prudent, nor desirable. What finance needs instead are some sensible traffic rules that will allow nations (and in some cases regions) to implement their own regulations while preventing adverse spillovers. If you want an analogy, think of a General Agreement on Tariffs and Trade for world finance rather than a World Trade Organisation. The genius of the GATT regime was that it left room for governments to craft their own social and economic policies as long as they did not follow blatantly protectionist policies and did not discriminate among their trade partners.

Fortify the home front first

Similarly, a new financial order can be constructed on the back of a minimal set of international guidelines. The new arrangements would certainly involve an improved IMF with better representation and increased resources. It might also require an international financial charter with limited aims, focused on financial transparency, consultation among national regulators, and limits on jurisdictions (such as offshore centres) that export financial instability. But the responsibility for regulating leverage, setting capital standards, and supervising financial markets would rest squarely at the national level. Domestic regulators and supervisors would no longer hide behind international codes. Just as an exporter of widgets has to abide by product-safety standards in all its markets, global financial firms would have to comply with regulatory requirements that may differ across host countries.

The main challenge facing such a regime would be the incentive for regulatory arbitrage. So the rules would recognise governments' right to intervene in cross-border financial transactions—but only in so far as the intent is to prevent competition from less-strict jurisdictions from undermining domestic regulations.

Of course, like-minded countries that want to go into deeper financial integration and harmonise their regulations would be free to do so, provided (as in the GATT) they do not use this as an excuse for financial protectionism. One can imagine the euro zone eventually taking this route and opting for a common regulator. The Chiang Mai initiative in Asia may ultimately also produce a regional zone of deep integration around an Asian monetary fund. But the rest of the world would have to live with a certain amount of financial segmentation—the necessary counterpart to regulatory fragmentation.

If this leaves you worried, turn again to the Bretton Woods experience. Despite limited liberalisation, that system produced huge increases in cross-border trade and investment.
The reason is simple and remains relevant as ever: an architecture that respects national diversity does more to advance the cause of globalisation than ambitious plans that assume it away.

One crunch after another

CALLS for co-ordinated fiscal stimulus to lift the world out of recession were joined at the weekend by Larry Summers, Barack Obama's top economic adviser. Such co-ordination has been absent up to now, though that could change at the meeting of G20 leaders in London in early April. But there has been plenty of fiscal stimulus, led by America's $787 billion package, as many governments seek to offset a collapse in private demand. There are worries not only about how much these measures cost up front but their longer-term effects on government finances.

The direct costs of such packages are indeed large. The IMF reckons that for G20 countries stimulus packages will add up to 1.5% of GDP in 2009 (calculated as a weighted average using purchasing power parity). Together with the huge sums used to bail out firms in the financial sector (3.5% of GDP and counting in America, for example), these are immediate ways in which the crisis is affecting public finances across the world. But they are not the only ones.

A downturn affects government finances in other ways. Shares in most rich countries have plummeted. The MSCI developed-world index, which tracks stocks in 23 rich countries, has lost more than half its value since the beginning of 2008. Falling share prices hit government revenues as capital-gains tax takes decline. Similarly, taxes on financial-sector profits, a significant part of government revenue in many countries, have evaporated. And expenditures on automatic stabilisers such as unemployment insurance rise in a recession. All this widens budget deficits.

Direct stimulus measures also push up government deficits and debt, although the type of intervention affects how long-lasting its effects are. Most expenditure, such as infrastructure spending, is temporary (although it affects debt permanently). Revenue measures, such as tax cuts, are politically difficult to reverse. The question is whether this threatens the solvency of governments.

A paper on the state of the world's public finances issued by the IMF in the run-up to the G20 meetings takes a stab at identifying and measuring the fiscal implications of the crisis for both rich and developing countries. Its conclusions are sobering. For rich G20 countries, fiscal balances will worsen by 6% of GDP between 2007 and 2009. Government debt will come off worse. Between 2007 and 2009, the debt-to-GDP ratio of rich countries is projected to rise by 14.5 percentage points. In the medium term, the outlook is even more worrying. Government debt for the average rich country will be more than 100% of GDP by 2014, compared with 70% in 2000 and 40% in 1980.

A great deal of uncertainty surrounds these estimates because so much depends on guesswork. Economic recovery, for example, could be slower than the IMF's current projections: growth forecasts were revised down several times in 2008. Governments may also have to shoulder more burdens—private pension plans, which have been hammered by the crisis, may require government support. And the eventual cost of financial-sector bailouts will depend on how quickly, and at what level, the prices of the assets governments have taken on stabilise. Past experience suggests that there is enormous scope for variation.
Sweden had a recovery rate of 94% five years after its crisis in 1991; Japan had recovered only 1% of assets in the five years after its troubles of 1997.

The IMF points out that debt levels, while high, are not unprecedented by historical standards. But the worry is that primary fiscal deficits in four-fifths of the rich countries studied by the IMF will still be too high in 2012 to allow debt to be stabilised, or brought down to 60% of GDP (the IMF benchmark for debt levels), even though revenues will recover as countries emerge from the crisis. What this implies is that, over time, fiscal deficits will have to be trimmed. And therein lies the rub.

Most rich countries have rapidly ageing populations. Unless entitlement systems are reformed (by reducing benefits) or tax bases broadened, fiscal deficits will rise still further. Some of the IMF's ideas about how to do this will seem unpalatable: it argues that health systems, for example, will have to become less generous. But rich countries were always going to have to come to terms with the fiscal consequences of demographic pressures on existing welfare systems sooner or later. The crisis will bring this problem more urgently to the fore.
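The point about deficits being too high "to allow debt to be stabilised" follows from standard debt arithmetic: the debt-to-GDP ratio holds steady only if the primary balance offsets the gap between the interest rate and the growth rate. A minimal sketch; the 100%-of-GDP starting ratio is the article's figure, while the interest and growth rates are illustrative assumptions, not IMF numbers:

```python
# Debt dynamics: b' = b * (1 + r) / (1 + g) - pb
# where b is debt/GDP, r the nominal interest rate, g nominal GDP growth,
# and pb the primary balance (surplus positive) as a share of GDP.

def stabilising_primary_balance(b, r, g):
    """Primary balance that keeps the debt ratio b constant."""
    return b * (r - g) / (1 + g)

def debt_path(b, r, g, pb, years):
    """Evolve the debt ratio for a number of years at a fixed primary balance."""
    path = [b]
    for _ in range(years):
        b = b * (1 + r) / (1 + g) - pb
        path.append(b)
    return path

b0 = 1.00          # debt at 100% of GDP (the article's 2014 figure for rich countries)
r, g = 0.04, 0.03  # assumed nominal interest and growth rates

pb_star = stabilising_primary_balance(b0, r, g)
print(f"Primary surplus needed to hold debt steady: {pb_star:.2%} of GDP")

# With a primary deficit of 2% of GDP instead, the debt ratio keeps climbing:
print([f"{x:.0%}" for x in debt_path(b0, r, g, -0.02, 5)])
```

On these assumed numbers, a country with debt at 100% of GDP needs a primary surplus of roughly 1% of GDP just to stand still; running a 2% primary deficit instead adds about three points of GDP to the ratio every year.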
Economist close reading 24: Israel's tech empire
The scale-up nation
Israel is trying to turn its Davids into Goliaths.

ISRAEL is rightly proud of its status as a startup nation. It boasts the world's highest concentration of high-tech startups per head. Almost 1,000 new firms are launched every year. But all this entrepreneurial activity is not creating enough jobs as the population grows: the share of people employed in the high-tech sector has declined from 10.7% of the workforce in 2006-08 to 8.9% in 2013. Startups also fail to solve another problem: the country's high retail prices, which are 20% higher for basic products than in other rich countries, according to the OECD, a think-tank.
Selected articles from The Economist
Urban land: Space and the city
Poor land use in the world's greatest cities carries a huge cost.

BUY land, advised Mark Twain; they're not making it any more. In fact, land is not really scarce: the entire population of America could fit into Texas with more than an acre for each household to enjoy. What drives prices skyward is a collision between rampant demand and limited supply in the great metropolises like London, Mumbai and New York. In the past ten years real prices in Hong Kong have risen by 150%. Residential property in Mayfair, in central London, can go for as much as £55,000 ($82,000) per square metre. A square mile of Manhattan residential property costs $16.5 billion.
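To put the Manhattan figure on the same per-square-metre footing as the Mayfair one, a quick unit conversion helps. A minimal sketch (both dollar figures are from the article; the comparison is rough, since one is a square-mile aggregate and the other a prime-property price):

```python
SQ_METRES_PER_SQ_MILE = 1609.344 ** 2   # about 2.59 million square metres

manhattan_per_sq_mile = 16.5e9          # $16.5bn per square mile (article)
mayfair_per_sq_metre = 82_000           # $82,000 per square metre (article)

manhattan_per_sq_metre = manhattan_per_sq_mile / SQ_METRES_PER_SQ_MILE
print(f"Manhattan: ${manhattan_per_sq_metre:,.0f} per square metre")  # roughly $6,400
print(f"Mayfair is about {mayfair_per_sq_metre / manhattan_per_sq_metre:.0f}x dearer per square metre")
```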
Ten Economist articles
Dominant and dangerous
As America's economic supremacy fades, the primacy of the dollar looks unsustainable.

IF HEGEMONS are good for anything, it is for conferring stability on the systems they dominate. For 70 years the dollar has been the superpower of the financial and monetary system. Despite talk of the yuan's rise, the primacy of the greenback is unchallenged. As a means of payment, a store of value and a reserve asset, nothing can touch it. Yet the dollar's rule has brittle foundations, and the system it underpins is unstable. Worse, the alternative reserve currencies are flawed. A transition to a more secure order will be devilishly hard.

When the buck stops

For decades, America's economic might legitimised the dollar's claims to reign supreme. But, as our special report this week explains, a faultline has opened between America's economic clout and its financial muscle. The United States accounts for 23% of global GDP and 12% of merchandise trade. Yet about 60% of the world's output, and a similar share of the planet's people, lie within a de facto dollar zone, in which currencies are pegged to the dollar or move in some sympathy with it. American firms' share of the stock of international corporate investment has fallen from 39% in 1999 to 24% today. But Wall Street sets the rhythm of markets globally more than it ever did. American fund managers run 55% of the world's assets under management, up from 44% a decade ago.

The widening gap between America's economic and financial power creates problems for other countries, in the dollar zone and beyond. That is because the costs of dollar dominance are starting to outweigh the benefits.

First, economies must endure wild gyrations. In recent months the prospect of even a tiny rate rise in America has sucked capital from emerging markets, battering currencies and share prices. Decisions of the Federal Reserve affect offshore dollar debts and deposits worth about $9 trillion. Because some countries link their currencies to the dollar, their central banks must react to the Fed. Foreigners own 20-50% of local-currency government bonds in places like Indonesia, Malaysia, Mexico, South Africa and Turkey: they are more likely to abandon emerging markets when American rates rise.

At one time the pain from capital outflows would have been mitigated by the stronger demand—including for imports—that prompted the Fed to raise rates in the first place. However, in the past decade America's share of global merchandise imports has dropped from 16% to 13%. America is the biggest export market for only 32 countries, down from 44 in 1994; the figure for China has risen from two to 43. A system in which the Fed dispenses and the world convulses is unstable.

A second problem is the lack of a backstop for the offshore dollar system if it faces a crisis. In 2008-09 the Fed reluctantly came to the rescue, acting as a lender of last resort by offering $1 trillion of dollar liquidity to foreign banks and central banks. The sums involved in a future crisis would be far higher. The offshore dollar world is almost twice as large as it was in 2007. By the 2020s it could be as big as America's banking industry. Since 2008-09, Congress has grown wary of the Fed's emergency lending. Come the next crisis, the Fed's plans to issue vast swaplines might meet regulatory or congressional resistance.
For how long will countries be ready to tie their financial systems to America's fractious and dysfunctional politics? That question is underscored by a third worry: America increasingly uses its financial clout as a political tool. Policymakers and prosecutors use the dollar payment system to assert control not just over wayward bankers and dodgy football officials, but also errant regimes like Russia and Iran. Rival powers bridle at this vulnerability to American foreign policy.

Americans may wonder why this matters to them. They did not force any country to link its currency to the dollar or encourage foreign firms to issue dollar debt. But the dollar's outsize role does affect Americans. It brings benefits, not least cheaper borrowing. Alongside the "exorbitant privilege" of owning the reserve currency, however, there are costs. If the Fed fails to act as lender of last resort in a dollar liquidity crisis, the ensuing collapse abroad will rebound on America's economy. And even without a crisis, the dollar's dominance will present American policymakers with a dilemma. If foreigners continue to accumulate reserves, they will dominate the Treasury market by the 2030s. To satisfy growing foreign demand for safe dollar-denominated assets, America's government could issue more Treasuries—adding to its debts. Or it could leave foreigners to buy up other securities—but that might lead to asset bubbles, just as in the mortgage boom of the 2000s.

It's all about the Benjamins

Ideally America would share the burden with other currencies. Yet if the hegemony of the dollar is unstable, its would-be successors are unsuitable. The baton of financial superpower has been passed before, when America overtook Britain in 1920-45. But Britain and America were allies, which made the transfer orderly. And America came with ready-made attributes: a dynamic economy and, like Britain, political cohesiveness and the rule of law.

Compare that with today's contenders for reserve status. The euro is a currency whose very existence cannot be taken for granted. Only when the euro area has agreed on a full banking union and joint bond issuance will those doubts be fully laid to rest. As for the yuan, China's government has created the monetary equivalent of an eight-lane motorway—a vast network of currency swaps with foreign central banks—but there is no one on it. Until China opens its financial markets, the yuan will be only a bit-player. And until it embraces the rule of law, no investor will see its currency as truly safe.

All this suggests that the global monetary and financial system will not smoothly or quickly wean itself off the greenback. There are things America can do to shoulder more responsibility—for instance, by setting up bigger emergency swaplines with more central banks. More likely is a splintering of the system, as other countries choose to insulate themselves from Fed decisions by embracing capital controls. The dollar has no peers. But the system that it anchors is cracking.
Economist close reading 22: Virtual-reality technology
VR and the future of computing: Awaiting its iPhone moment
Virtual reality is a promising technology, but will not go mainstream in its current form.

IS IT vividly realistic—or is it still just vapid razzmatazz? Virtual reality (VR), a technology that flopped in the 1990s, is making a glitzy comeback. The dream of a headset that can immerse you in a detailed, realistic 3D world is now being pursued in earnest by a gaggle of startups and the giants of technology alike. Last year Facebook bought Oculus, the most prominent VR fledgling, for $2 billion. Mark Zuckerberg, Facebook's boss, says "immersive 3D content is the obvious next thing after video." Google supports VR in several of its products and is backing a secretive new company called Magic Leap. Microsoft, having missed the boat on smartphones, has developed an impressive VR system named HoloLens. Tech leaders have decided that VR could be the next big thing after the smartphone. Are they right?
Selected Economist articles
The future of the car: Clean, safe and it drives itself
Cars have already changed the way we live. They are likely to do so again.

SOME inventions, like some species, seem to make periodic leaps in progress. The car is one of them. Twenty-five years elapsed between Karl Benz beginning small-scale production of his original Motorwagen and the breakthrough, by Henry Ford and his engineers in 1913, that turned the car into the ubiquitous, mass-market item that has defined the modern urban landscape. By putting production of the Model T on moving assembly lines set into the floor of his factory in Detroit, Ford drastically cut the time needed to build it, and hence its cost. Thus began a revolution in personal mobility. Almost a billion cars now roll along the world's highways.
Economist science and technology (March-November 2014)
Artificial intelligence and psychology: The computer will see you now
A virtual shrink may sometimes be better than the real thing.

ELLIE is a psychologist, and a damned good one at that. Smile in a certain way, and she knows precisely what your smile means. Develop a nervous tic or tension in an eye, and she instantly picks up on it. She listens to what you say, processes every word, works out the meaning of your pitch, your tone, your posture, everything. She is at the top of her game but, according to a new study, her greatest asset is that she is not human.
When faced with tough or potentially embarrassing questions, people often do not tell doctors what they need to hear. Yet the researchers behind Ellie, led by Jonathan Gratch at the Institute for Creative Technologies, in Los Angeles, suspected from their years of monitoring human interactions with computers that people might be more willing to talk if presented with an avatar. To test this idea, they put 239 people in front of Ellie to have a chat with her about their lives. Half were told they would be interacting with an artificially intelligent virtual human; the others were told that Ellie was a bit like a puppet, and was having her strings pulled remotely by a person.
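The experiment is a simple two-arm randomised comparison. As a purely illustrative sketch of how such a design is typically analysed (the disclosure counts below are invented, not the study's data), a two-proportion z-test suffices:

```python
from math import sqrt

# Hypothetical outcome counts for the two framings (not the study's data):
# participants who made at least one sensitive disclosure.
disclosed_ai, n_ai = 75, 119          # told "Ellie is an AI"
disclosed_puppet, n_puppet = 55, 120  # told "Ellie is remotely operated"

p1, p2 = disclosed_ai / n_ai, disclosed_puppet / n_puppet
p_pool = (disclosed_ai + disclosed_puppet) / (n_ai + n_puppet)

# Two-proportion z-test for the difference in disclosure rates.
se = sqrt(p_pool * (1 - p_pool) * (1 / n_ai + 1 / n_puppet))
z = (p1 - p2) / se
print(f"disclosure rates: {p1:.2f} vs {p2:.2f}, z = {z:.2f}")
# |z| > 1.96 would indicate a difference unlikely to be chance at the 5% level.
```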
Economist science and technology articles (five samples)
Article 1 | The Brain Activity Map: Hard cell
An ambitious project to map the brain is in the works. Possibly too ambitious.

NEWS of what protagonists hope will be America's next big science project continues to dribble out.
A leak to the New York Times, published on February 17th, let the cat out of the bag, with a report that Barack Obama's administration is thinking of sponsoring what will be known as the Brain Activity Map. And on March 7th several of those protagonists published a manifesto for the project in Science. The purpose of BAM is to change the scale at which the brain is understood.
Economist articles
Science and technology | Three-dimensional printing: An image of the future
One of the biggest manufacturers in the world gives 3D printing a go.

ULTRASOUND scanners are used for tasks as diverse as examining unborn babies and searching for cracks in the fabric of aircraft. They work by sending out pulses of high-frequency sound and then interpreting the reflections as images. To do all this, though, you need a device called a transducer. Transducers are made from arrays of tiny piezoelectric structures that convert electrical signals into ultrasound waves by vibrating at an appropriate frequency. Their shape focuses the waves so that they penetrate the object being scanned. The waves are then reflected back from areas where there is a change in density and on their return the transducer works in reverse, producing a signal which the scanner can process into a digital image.

To make a transducer by painstakingly micro-machining a brittle block of ceramic material can take many hours of work, though. As a result, even as the size and cost of the console that controls the scanner has fallen with advances in microelectronics (some are now small enough to fit in a doctor's pocket and cost a few hundred dollars), the cost of making the probe itself remains stubbornly high—as much as ten times that of the console.

At least, it does if you use traditional "subtractive" manufacturing techniques like cutting and drilling. However GE, a large American conglomerate, is now proposing to make ultrasound transducers by "additive" manufacturing—or three-dimensional printing, as it is also known. A new laboratory at the firm's research centre in Niskayuna, New York, is taking a hard-headed look at the technique, which some see as a fad and others as the future, and working out which products might be made more efficiently by addition rather than subtraction.

Ultrasound transducers were an early pick both because of the complicated geometry needed to focus the sound waves and because ceramics are harder than metals to cut and drill accurately. But they are easy to print. The GE process for making a transducer begins by spreading onto the print table a thin layer of ceramic slurry containing a light-sensitive polymer. This layer is exposed to ultraviolet light through a mask that represents the required pattern. Wherever the light falls on the polymer it causes it to solidify, binding the particles in the slurry together. The print table is then lowered by a fraction of a millimetre and the process repeated, with a different mask if required. And so on. Once finished, the solidified shape is cleaned of residual slurry and heated in a furnace to sinter the ceramic particles together.

More work will be needed to turn the process into a production-ready system. But Prabhjot Singh, who leads the project, hopes that it will be possible to use it to make not just cheaper ultrasound probes, but also more sensitive ones that can show greater detail. Although researchers have had new transducer designs in mind for years, it has been impractical to construct them subtractively. Additive manufacturing could change that.

The new laboratory will look at other forms of additive manufacturing, too. Some 3D printers spread metal powders on the print table and sinter the pattern with lasers or electron beams, rather than using masks. Others deposit thin filaments of polymer in order to build structures up.
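As a schematic illustration of the mask-projection build loop described above (this is not GE's control software; the grid, masks and layer height are invented placeholders):

```python
import numpy as np

def build_part(masks, layer_height_mm=0.05):
    """Sketch of a mask-projection additive build: one boolean mask per layer.

    Each iteration mimics the described cycle: spread slurry, expose through
    the mask (solidifying the lit voxels), then lower the table one layer.
    """
    layers = []
    table_z = 0.0
    for mask in masks:                 # mask: 2D boolean array, True = exposed
        layers.append(mask.copy())     # exposed regions solidify and bind
        table_z -= layer_height_mm     # lower the table for the next layer
    part = np.stack(layers)            # the pre-sinter part as a voxel volume
    # Post-processing (not modelled): clean off residual slurry, then sinter.
    return part

# A toy part: ten identical ring-shaped layers on a 50x50 grid.
y, x = np.ogrid[-25:25, -25:25]
ring = (x**2 + y**2 < 20**2) & (x**2 + y**2 > 12**2)
part = build_part([ring] * 10)
print(part.shape, "voxels solidified:", int(part.sum()))
```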
GE is interested in how the technology could be used right across the firm's businesses, from aerospace to power generation and consumer products, according to Luana Iorio, head of manufacturing technologies at GE Global Research. The gains include less waste and the ability to make bespoke parts more easily. But one of the most compelling advantages is freeing designers from the constraints of traditional production. Those constraints include having to design things not in their optimal shape but to be machined, often as a series of pieces. Additive manufacturing can combine parts into a single item, so less assembly is needed. That can also save weight—a particular advantage in aerospace. These new production opportunities mean manufacturers, big and small, are about to become a lot more inventive.
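Returning to the pulse-echo principle at the top of this article: a scanner converts an echo's round-trip time into depth as depth = speed x time / 2. A worked example, assuming the conventional average speed of sound in soft tissue of about 1,540 m/s (an assumption, not a figure from the article):

```python
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, conventional average for soft tissue

def echo_depth_cm(round_trip_seconds):
    """Depth of a reflector, from the round-trip time of its echo."""
    return SPEED_OF_SOUND_TISSUE * round_trip_seconds / 2 * 100

# An echo arriving 65 microseconds after the pulse implies a reflector ~5 cm deep.
print(f"{echo_depth_cm(65e-6):.1f} cm")
```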
Climate science (The Economist, postgraduate exam reading)
Climate science: A sensitive matter
The climate may be heating up less in response to greenhouse-gas emissions than was once thought. But that does not mean the problem is going away.

OVER the past 15 years air temperatures at the Earth's surface have been flat while greenhouse-gas emissions have continued to soar. The world added roughly 100 billion tonnes of carbon to the atmosphere between 2000 and 2010. That is about a quarter of all the CO₂ put there by humanity since 1750. And yet, as James Hansen, the head of NASA's Goddard Institute for Space Studies, observes, "the five-year mean global temperature has been flat for a decade."

Temperatures fluctuate over short periods, but this lack of new warming is a surprise. Ed Hawkins, of the University of Reading, in Britain, points out that surface temperatures since 2005 are already at the low end of the range of projections derived from 20 climate models (see chart 1). If they remain flat, they will fall outside the models' range within a few years.

The mismatch between rising greenhouse-gas emissions and not-rising temperatures is among the biggest puzzles in climate science just now. It does not mean global warming is a delusion. Flat though they are, temperatures in the first decade of the 21st century remain almost 1°C above their level in the first decade of the 20th. But the puzzle does need explaining.

The mismatch might mean that—for some unexplained reason—there has been a temporary lag between more carbon dioxide and higher temperatures in 2000-10. Or it might be that the 1990s, when temperatures were rising fast, was the anomalous period. Or, as an increasing body of research is suggesting, it may be that the climate is responding to higher concentrations of carbon dioxide in ways that had not been properly understood before. This possibility, if true, could have profound significance both for climate science and for environmental and social policy.

The insensitive planet

The term scientists use to describe the way the climate reacts to changes in carbon-dioxide levels is "climate sensitivity". This is usually defined as how much hotter the Earth will get for each doubling of CO₂ concentrations. So-called equilibrium sensitivity, the commonest measure, refers to the temperature rise after allowing all feedback mechanisms to work (but without accounting for changes in vegetation and ice sheets).

Carbon dioxide itself absorbs infra-red at a consistent rate. For each doubling of CO₂ levels you get roughly 1°C of warming. A rise in concentrations from preindustrial levels of 280 parts per million (ppm) to 560ppm would thus warm the Earth by 1°C. If that were all there was to worry about, there would, as it were, be nothing to worry about. A 1°C rise could be shrugged off. But things are not that simple, for two reasons. One is that rising CO₂ levels directly influence phenomena such as the amount of water vapour (also a greenhouse gas) and clouds that amplify or diminish the temperature rise. This affects equilibrium sensitivity directly, meaning doubling carbon concentrations would produce more than a 1°C rise in temperature. The second is that other things, such as adding soot and other aerosols to the atmosphere, add to or subtract from the effect of CO₂. All serious climate scientists agree on these two lines of reasoning.
But they disagree on the size of the change that is predicted. The Intergovernmental Panel on Climate Change (IPCC), which embodies the mainstream of climate science, reckons the answer is about 3°C, plus or minus a degree or so. In its most recent assessment (in 2007), it wrote that "the equilibrium climate sensitivity…is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C and is very unlikely to be less than 1.5°C. Values higher than 4.5°C cannot be excluded." The IPCC's next assessment is due in September. A draft version was recently leaked. It gave the same range of likely outcomes and added an upper limit of sensitivity of 6°C to 7°C.
A rise of around 3°C could be extremely damaging. The IPCC's earlier assessment said such a rise could mean that more areas would be affected by drought; that up to 30% of species could be at greater risk of extinction; that most corals would face significant biodiversity losses; and that there would be likely increases of intense tropical cyclones and much higher sea levels.
New Model Army
Other recent studies, though, paint a different picture. An unpublished report by the Research Council of Norway, a government-funded body, which was compiled by a team led by Terje Berntsen of the University of Oslo, uses a different method from the IPCC's. It concludes there is a 90% probability that doubling CO₂ emissions will increase temperatures by only 1.2-2.9°C, with the most likely figure being 1.9°C. The top of the study's range is well below the IPCC's upper estimates of likely sensitivity.
This study has not been peer-reviewed; it may be unreliable. But its projections are not unique. Work by Julia Hargreaves of the Research Institute for Global Change in Yokohama, which was published in 2012, suggests a 90% chance of the actual change being in the range of 0.5-4.0°C, with a mean of 2.3°C. This is based on the way the climate behaved about 20,000 years ago, at the peak of the last ice age, a period when carbon-dioxide concentrations leapt. Nic Lewis, an independent climate scientist, got an even lower range in a study accepted for publication: 1.0-3.0°C, with a mean of 1.6°C. His calculations reanalysed work cited by the IPCC and took account of more recent temperature data. In all these calculations, the chances of climate sensitivity above 4.5°C become vanishingly small.
If such estimates were right, they would require revisions to the science of climate change and, possibly, to public policies. If, as conventional wisdom has it, global temperatures could rise by 3°C or more in response to a doubling of emissions, then the correct response would be the one to which most of the world pays lip service: rein in the warming and the greenhouse gases causing it. This is called "mitigation", in the jargon. Moreover, if there were an outside possibility of something catastrophic, such as a 6°C rise, that could justify drastic interventions. This would be similar to taking out disaster insurance. It may seem an unnecessary expense when you are forking out for the premiums, but when you need it, you really need it. Many economists, including William Nordhaus of Yale University, have made this case.
If, however, temperatures are likely to rise by only 2°C in response to a doubling of carbon emissions (and if the likelihood of a 6°C increase is trivial), the calculation might change. Perhaps the world should seek to adjust to (rather than stop) the greenhouse-gas splurge.
There is no point buying earthquake insurance if you do not live in an earthquake zone. In this case more adaptation rather than more mitigation might be the right policy at the margin. But that would be good advice only if these new estimates really were more reliable than the old ones. And different results come from different models.
One type of model—general-circulation models, or GCMs—uses a bottom-up approach. These divide the Earth and its atmosphere into a grid which generates an enormous number of calculations in order to imitate the climate system and the multiple influences upon it. The advantage of such complex models is that they are extremely detailed. Their disadvantage is that they do not respond to new temperature readings. They simulate the way the climate works over the long run, without taking account of what current observations are. Their sensitivity is based upon how accurately they describe the processes and feedbacks in the climate system.
The other type—energy-balance models—are simpler. They are top-down, treating the Earth as a single unit or as two hemispheres, and representing the whole climate with a few equations reflecting things such as changes in greenhouse gases, volcanic aerosols and global temperatures. Such models do not try to describe the complexities of the climate. That is a drawback. But they have an advantage, too: unlike the GCMs, they explicitly use temperature data to estimate the sensitivity of the climate system, so they respond to actual climate observations.
The IPCC's estimates of climate sensitivity are based partly on GCMs. Because these reflect scientists' understanding of how the climate works, and that understanding has not changed much, the models have not changed either and do not reflect the recent hiatus in rising temperatures. In contrast, the Norwegian study was based on an energy-balance model. So were earlier influential ones by Reto Knutti of the Institute for Atmospheric and Climate Science in Zurich; by Piers Forster of the University of Leeds and Jonathan Gregory of the University of Reading; by Natalia Andronova and Michael Schlesinger, both of the University of Illinois; and by Magne Aldrin of the Norwegian Computing Centre (who is also a co-author of the new Norwegian study). All these found lower climate sensitivities. The paper by Drs Forster and Gregory found a central estimate of 1.6°C for equilibrium sensitivity, with a 95% likelihood of a 1.0-4.1°C range. That by Dr Aldrin and others found a 90% likelihood of a 1.2-3.5°C range.
It might seem obvious that energy-balance models are better: do they not fit what is actually happening? Yes, but that is not the whole story. Myles Allen of Oxford University points out that energy-balance models are better at representing simple and direct climate feedback mechanisms than indirect and dynamic ones. Most greenhouse gases are straightforward: they warm the climate. The direct impact of volcanoes is also straightforward: they cool it by reflecting sunlight back. But volcanoes also change circulation patterns in the atmosphere, which can then warm the climate indirectly, partially offsetting the direct cooling. Simple energy-balance models cannot capture this indirect feedback. So they may exaggerate volcanic cooling.
This means that if, for some reason, there were factors that temporarily muffled the impact of greenhouse-gas emissions on global temperatures, the simple energy-balance models might not pick them up. They will be too responsive to passing slowdowns.
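To make the top-down approach concrete, here is a minimal sketch of a textbook zero-dimensional energy-balance model. It is a generic illustration, not the Norwegian model or any other study cited here; the heat-capacity and forcing numbers are rough standard values assumed for the example.

# A textbook zero-dimensional energy-balance model: C * dT/dt = F - lam * T.
# Illustrative only -- not any of the published models discussed above.

HEAT_CAPACITY = 8.0  # W*yr/m^2/K, a rough value for an ocean mixed layer
FORCING_2X = 3.7     # W/m^2, a standard approximation for doubled CO2

def equilibrium_warming(lam, years=200, dt=0.1):
    """Step temperature forward under constant doubled-CO2 forcing.

    lam is the feedback parameter; the analytic equilibrium is
    FORCING_2X / lam, which is this model's climate sensitivity.
    """
    temp = 0.0
    for _ in range(int(years / dt)):
        temp += dt * (FORCING_2X - lam * temp) / HEAT_CAPACITY
    return temp

# A weak-feedback value gives ~3 deg C; a stronger one gives ~1.6 deg C,
# roughly bracketing the estimates discussed in the article.
for lam in (1.23, 2.3):
    print(round(equilibrium_warming(lam), 2))

Fitting lam to observed temperatures, rather than deriving it from simulated physics, is exactly what distinguishes this family of models from the GCMs.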
In short, the different sorts of climate model measure somewhat different things.
Clouds of uncertainty
This also means the case for saying the climate is less sensitive to CO₂ emissions than previously believed cannot rest on models alone. There must be other explanations—and, as it happens, there are: individual climatic influences and feedback loops that amplify (and sometimes moderate) climate change.
Begin with aerosols, such as those from sulphates. These stop the atmosphere from warming by reflecting sunlight. Some heat it, too. But on balance aerosols offset the warming impact of carbon dioxide and other greenhouse gases. Most climate models reckon that aerosols cool the atmosphere by about 0.3-0.5°C. If that underestimated aerosols' effects, perhaps it might explain the lack of recent warming.
Yet it does not. In fact, it may actually be an overestimate. Over the past few years, measurements of aerosols have improved enormously. Detailed data from satellites and balloons suggest their cooling effect is lower (and their warming greater, where that occurs). The leaked assessment from the IPCC (which is still subject to review and revision) suggested that aerosols' estimated radiative "forcing"—their warming or cooling effect—had changed from minus 1.2 watts per square metre of the Earth's surface in the 2007 assessment to minus 0.7W/m² now: ie, less cooling.
One of the commonest and most important aerosols is soot (also known as black carbon). This warms the atmosphere because it absorbs sunlight, as black things do. The most detailed study of soot was published in January and also found more net warming than had previously been thought. It reckoned black carbon had a direct warming effect of around 1.1W/m². Though indirect effects offset some of this, the effect is still greater than an earlier estimate by the United Nations Environment Programme of 0.3-0.6W/m².
All this makes the recent period of flat temperatures even more puzzling. If aerosols are not cooling the Earth as much as was thought, then global warming ought to be gathering pace. But it is not. Something must be reining it back. One candidate is lower climate sensitivity.
A related possibility is that general-circulation climate models may be overestimating the impact of clouds (which are themselves influenced by aerosols). In all such models, clouds amplify global warming, sometimes by a lot. But as the leaked IPCC assessment says, "the cloud feedback remains the most uncertain radiative feedback in climate models." It is even possible that some clouds may dampen, not amplify, global warming—which may also help explain the hiatus in rising temperatures. If clouds have less of an effect, climate sensitivity would be lower.
So the explanation may lie in the air—but then again it may not. Perhaps it lies in the oceans. But here, too, facts get in the way. Over the past decade the long-term rise in surface seawater temperatures seems to have stalled (see chart 2), which suggests that the oceans are not absorbing as much heat from the atmosphere.
As with aerosols, this conclusion is based on better data from new measuring devices. But it applies only to the upper 700 metres of the sea. What is going on below that—particularly at depths of 2km or more—is obscure. A study in Geophysical Research Letters by Kevin Trenberth of America's National Centre for Atmospheric Research and others found that 30% of the ocean warming in the past decade has occurred in the deep ocean (below 700 metres).
The study says a substantial amount of global warming is going into the oceans, and the deep oceans are heating up in an unprecedented way. If so, that would also help explain the temperature hiatus.
Double-A minus
Lastly, there is some evidence that the natural (ie, non-man-made) variability of temperatures may be somewhat greater than the IPCC has thought. A recent paper by Ka-Kit Tung and Jiansong Zhou in the Proceedings of the National Academy of Sciences links temperature changes from 1750 to natural changes (such as sea temperatures in the Atlantic Ocean) and suggests that "the anthropogenic global-warming trends might have been overestimated by a factor of two in the second half of the 20th century." It is possible, therefore, that both the rise in temperatures in the 1990s and the flattening in the 2000s have been caused in part by natural variability.
So what does all this amount to? The scientists are cautious about interpreting their findings. As Dr Knutti puts it, "the bottom line is that there are several lines of evidence, where the observed trends are pushing down, whereas the models are pushing up, so my personal view is that the overall assessment hasn't changed much."
But given the hiatus in warming and all the new evidence, a small reduction in estimates of climate sensitivity would seem to be justified: a downwards nudge on various best estimates from 3°C to 2.5°C, perhaps; a lower ceiling (around 4.5°C), certainly. If climate scientists were credit-rating agencies, climate sensitivity would be on negative watch. But it would not yet be downgraded.
Equilibrium climate sensitivity is a benchmark in climate science. But it is a very specific measure. It attempts to describe what would happen to the climate once all the feedback mechanisms have worked through; equilibrium in this sense takes centuries—too long for most policymakers. As Gerard Roe of the University of Washington argues, even if climate sensitivity were as high as the IPCC suggests, its effects would be minuscule under any plausible discount rate because it operates over such long periods. So it is one thing to ask how climate sensitivity might be changing; a different question is to ask what the policy consequences might be.
For that, a more useful measure is the transient climate response (TCR), the temperature you reach after doubling CO₂ gradually over 70 years. Unlike the equilibrium response, the transient one can be observed directly; there is much less controversy about it. Most estimates put the TCR at about 1.5°C, with a range of 1-2°C. Isaac Held of America's National Oceanic and Atmospheric Administration recently calculated his "personal best estimate" for the TCR: 1.4°C, reflecting the new estimates for aerosols and natural variability.
That sounds reassuring: the TCR is below estimates for equilibrium climate sensitivity. But the TCR captures only some of the warming that those 70 years of emissions would eventually generate because carbon dioxide stays in the atmosphere for much longer.
As a rule of thumb, global temperatures rise by about 1.5°C for each trillion tonnes of carbon put into the atmosphere. The world has pumped out half a trillion tonnes of carbon since 1750, and temperatures have risen by 0.8°C.
At current rates, the next half-trillion tonnes will be emitted by 2045; the one after that before 2080. Since CO₂ accumulates in the atmosphere, this could increase temperatures compared with pre-industrial levels by around 2°C even with a lower sensitivity and perhaps nearer to 4°C at the top end of the estimates. Despite all the work on sensitivity, no one really knows how the climate would react if temperatures rose by as much as 4°C. Hardly reassuring.
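The rule-of-thumb arithmetic in the closing paragraphs can be checked in a few lines. A minimal sketch, using only the figures quoted above (1.5°C per trillion tonnes of carbon; half a trillion tonnes emitted since 1750; further half-trillions by around 2045 and 2080):

DEG_PER_TRILLION_TONNES = 1.5   # the article's rule of thumb

def warming(cumulative_trillion_tonnes):
    # Approximately linear in cumulative carbon, per the rule of thumb.
    return DEG_PER_TRILLION_TONNES * cumulative_trillion_tonnes

for label, tonnes in [("since 1750 (to date)", 0.5),
                      ("by ~2045", 1.0),
                      ("by ~2080", 1.5)]:
    print(f"{label}: about {warming(tonnes):.2f} deg C above pre-industrial")
# Observed warming so far (~0.8 deg C) sits a little above the 0.75 deg C
# the rule implies for the first half-trillion tonnes.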
2021年1月7日《经济学人》封面文章学习资料
Now we're talking
Voice technology is making computers less daunting and more accessible
语音技术轻而易“语”
有了语音技术,电脑不再令人敬而远之,反而更加平易近人
Any sufficiently advanced technology, noted Arthur C Clarke, a British science-fiction writer, is indistinguishable from magic. The fast-emerging technology of voice computing proves his point. Using it is just like casting a spell: say a few words into the air, and a nearby device can grant your wish.英国科幻小说作家亚瑟・克拉克(Arthur C. Clarke)曾经指出,任何科技只要先进到足够的程度,就和魔法没有区别。
迅速兴起的语音计算技术证明了他的观点。
它用起来就像是变魔法:对着空气说句话,附近的智能设备就会帮你如愿以偿。
The Amazon Echo, a voice-driven cylindrical computer that sits on a table top and answers to the name Alexa, can call up music tracks and radio stations, tell jokes, answer trivia questions and control smart appliances; even before Christmas it was already resident in about 4% of American households. Voice assistants are proliferating in smartphones, too: Apple's Siri handles over 2bn commands a week, and 20% of Google searches on Android-powered handsets in America are input by voice. Dictating e-mails and text messages now works reliably enough to be useful. Why type when you can talk?亚马逊智能音箱(Amazon Echo)是一种声控筒状台式电脑,听到“阿丽夏”(Alexa)这个名字,它就会做出反应,挑选歌曲,选择电台,讲笑话,回答各种琐碎问题,还能控制智能设备;甚至早在圣诞节到来之前,它就已经入住了4%的美国家庭。
考研英语阅读理解基本素材经济学人科技类
Passage 1
Wireless broadband
Computer chips for “open-spectrum” devices are a closed book
TELECOMMUNICATIONS used to be a closed game, from the copper and fibre that carried the messages, to the phones themselves. Now, openness reigns in the world of wires. Networks must interconnect with those of competitors, and users can plug in their own devices as they will. One result of this openness has been a lot of innovation.
Openness is coming to the wireless world, too. Cheap and powerful devices that use unlicensed and lightly regulated parts of the radio spectrum are proliferating. But there is a problem. Though the spectrum is open, the microprocessor chips that drive the devices which use it are not. The interface information—the technical data needed to write software that would allow those chips to be used in novel ways—is normally kept secret by manufacturers. The result could be a lot less innovation in the open wireless world than in the open wired one.
Take, for example, the Champaign-Urbana Community Wireless Network (CUWiN), in Illinois. This group is trying to create a so-called meshed Wi-Fi network. Wi-Fi is a wireless technology that allows broadband internet communication over a range of about 50 metres. That range could, however, be extended if the devices in an area were configured to act as “platforms” that both receive and transmit signals. Messages would then hop from one platform to another until they got to their destination. That would allow such things as neighbourhood mobile-phone companies and a plethora of radio and TV stations, and all for almost no cost. But to make such goodies work, CUWiN needs to tweak the underlying capabilities of Wi-Fi chips in special ways.
When its engineers requested the interface information from the firms that furnish the chips, however, they were often rebuffed. A few companies with low-end, older technology supplied it. But Broadcom and Atheros, the two producers of the sophisticated chips that CUWiN needs if its system is to sing properly, refused. Nor is CUWiN alone in its enforced ignorance. SeattleWireless and NYCwireless, among other groups, have similar ideas, but are similarly stymied. Christian Sandvig of the University of Illinois at Urbana-Champaign, who has been studying the brouhaha, believes regulators ought to enforce more openness.
Broadcom and Atheros say that making the interface information public would be illegal, because it could allow users to change the parameters of a chip in ways that violate the rules for using unlicensed spectrum (for example, by increasing its power or changing its operating frequency). That is a worry, but it depends on rather a conservative interpretation of the law. The current rules apply to so-called “software-defined radios” (where the ability to send and receive signals is modifiable on the chip), and do not apply directly to Wi-Fi. Also, by supplying the data, manufacturers would not be held liable if a user chose to tweak the chip in unlawful ways. And in any case, if the firms are really worried, they could release most of the interface, keeping back those features that are legally sensitive.
Nor is the interface information commercially sensitive. Engineers are not asking for the computer code that drives the interfaces, merely for the means to talk to them. And having the interface information in the public domain should eventually result in more chips being sold. So it is hard to see what the problem is beyond a dog-in-the-mangerish desire not to give anything away. Time to open it up, boys.
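The platform-to-platform hopping that CUWiN envisages is, at bottom, shortest-path routing over whichever nodes happen to be in radio range. A minimal sketch of that idea follows; the node names and topology are invented for illustration, and this is not CUWiN's actual software.

from collections import deque

# Each node lists the neighbours within Wi-Fi range (~50 metres).
mesh = {
    "alice": ["bob"],
    "bob":   ["alice", "cafe"],
    "cafe":  ["bob", "dina"],
    "dina":  ["cafe"],
}

def route(src, dst):
    """Breadth-first search: the message hops platform to platform."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in mesh[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of platforms reaches the destination

print(route("alice", "dina"))  # ['alice', 'bob', 'cafe', 'dina']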
Passage 2
Not as boring as you thought
Watching paint dry may lead to some exciting new technologies
Believe it or not, there are a small but significant number of people in this world who watch paint dry for a living. And watching paint dry, if you look closely enough, is fascinating. Honest. Plenty of researchers are enthralled by exactly how the paint comes off the brush, how the polymers within it interact in order to adhere to a surface, and what happens when the water, or other solvent, evaporates. This sort of thing reveals how the chemistry really works, and thus how to make better paint.
The excitement of watching a molecule of water lift off from the surface of a wall is, however, hampered by the fact that the only available photographs of the action are stills. It is like trying to work out how to play football from a series of time-lapse frames. But help is at hand. Andrew Humphris, chief technology officer of Infinitesima, a small firm based in Bristol, in Britain, has come up with a system that allows you to take a movie of drying paint.
The existing method of photographing molecules is more “feely” than “movie”. The camera is a device called an atomic-force microscope (AFM). This works by running the tip of a probe over the molecules in question, rather as the stylus of an old-fashioned record player runs across the surface of an LP. The bumps and grooves picked up by an AFM can be translated into a picture, but it takes between 30 seconds and a minute to build up an image. Scan much faster than that and the stylus starts to resonate, blurring the result.
But Infinitesima's VideoAFM can, according to Dr Humphris, go 1,000 times faster than a standard AFM. That is fast enough to allow videos to be taken of, for example, molecules evaporating—information of great value to the paint-making industry, to which Dr Humphris hopes to sell many of his machines. He is coy about exactly how they work, since the paper describing the details is awaiting publication in Applied Physics Letters. But the process for keeping the stylus under control seems to involve some high-powered computing and signal processing.
Infinitesima is testing the VideoAFM by looking at polymers as they crystallise. The movies resemble frost spreading across a chilly window. But the VideoAFM can do more than mere analysis. It can do synthesis as well. Just as a carelessly applied stylus can alter the surface of a record, so an AFM can alter the surface it is scanning at the molecular level, in effect writing on that surface. Such writing, if it were fast enough, could be used as a form of lithography for making devices whose components had dimensions of nanometres (billionths of a metre). Nanotechnology, as engineering at this scale is known, is all the rage, and nanotech firms could end up using the VideoAFM's descendants in their factories. In the meantime, live paint-drying action could soon be coming to a television near you.
Passage 3
Games people play
The co-operative and the selfish are equally successful at getting what they want
MANY people, it is said, regard life as a game. Increasingly, both biologists and economists are tending to agree with them.
Game theory, a branch of mathematics developed in the 1940s and 1950s by John von Neumann and John Nash, has proved a useful theoretical tool in the study of the behaviour of animals, both human and non-human.
An important part of game theory is to look for competitive strategies that are unbeatable in the context of the fact that everyone else is also looking for them. Sometimes these strategies involve co-operation, sometimes not. Sometimes the “game” will result in everybody playing the same way. Sometimes they will need to behave differently from one another.
But there has been a crucial difference in the approach taken by the two schools of researchers. When discussing the outcomes of these games, animal behaviourists speak of “evolutionarily stable strategies”, with the implication that the way they are played has been hard-wired into the participants by the processes of natural selection. Economists prefer to talk of Nash equilibria and, since economics is founded on the idea of rational human choice, the implication is that people will adjust their behaviour (whether consciously or unconsciously is slightly ambiguous) in order to maximise their gains. But a study just published in the Proceedings of the National Academy of Sciences, by Robert Kurzban of the University of Pennsylvania and Daniel Houser of George Mason University in Fairfax, Virginia, calls the economists' underlying assumption into question. This study suggests that it may be fruitful to work with the idea that human behaviour, too, can sometimes be governed by evolutionarily stable strategies.
Double or quits?
Dr Kurzban and Dr Houser were interested in the outcomes of what are known as public-goods games. In their particular case they chose a game that involved four people who had never met (and who interacted via a computer) making decisions about their own self-interest that involved assessing the behaviour of others. Each player was given a number of virtual tokens, redeemable for money at the end of the game. A player could keep some or all of these tokens. Any not kept were put into a pool, to be shared among group members. After the initial contributions had been made, the game continued for a random number of turns, with each player, in turn, being able to add to or subtract from his contribution to the pool. When the game ended, the value of the pool was doubled, and the new, doubled value was divided into four equal parts and given to the players, along with the value of any tokens they had held on to. If everybody trusts each other, therefore, they will all be able to double their money. But a sucker who puts all his money into the pool when no one else has contributed at all will end up with only half what he started with.
This is a typical example of the sort of game that economists investigating game theory revel in, and both theory and practice suggests that a player can take one of three approaches in such a game: co-operate with his opponents to maximise group benefits (but at the risk of being suckered), free-ride (ie, try to sucker co-operators) or reciprocate (ie, co-operate with those who show signs of being co-operative, but not with free-riders). Previous investigations of such strategies, though, have focused mainly on two-player games, in which strategy need be developed only in a quite simple context. The situation Dr Kurzban and Dr Houser created was a little more like real life.
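Before turning to the results, the payoff arithmetic is worth making concrete. The sketch below plays a single round using the doubling-and-splitting rule described above; the token amounts are invented examples.

def payoffs(kept, contributed):
    """Each player keeps some tokens; the pool is doubled and split four ways."""
    assert len(kept) == len(contributed) == 4
    share = 2 * sum(contributed) / 4
    return [k + share for k in kept]

# Everyone trusts: all 10 tokens into the pool, and everyone doubles their money.
print(payoffs([0, 0, 0, 0], [10, 10, 10, 10]))   # [20.0, 20.0, 20.0, 20.0]
# A lone contributor among free-riders ends up with half what he started with.
print(payoffs([10, 10, 10, 0], [0, 0, 0, 10]))   # [15.0, 15.0, 15.0, 5.0]

The second call is the sucker's payoff the passage mentions: the lone co-operator finishes with 5 tokens against a starting stake of 10.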
They wanted to see whether the behavioural types were clear-cut in the face of multiple opponents who might be playing different strategies, whether those types were stable, and whether they had the same average pay-off.
The last point is crucial to the theory of evolutionarily stable strategies. Individual strategies are not expected to be equally represented in a population. Instead, they should appear in proportions that equalise their pay-offs to those who play them. A strategy can be advantageous when rare and disadvantageous when common. The proportions in the population when all strategies are equally advantageous represent the equilibrium.
And that was what happened. The researchers were able to divide their subjects very cleanly into co-operators, free-riders and reciprocators, based on how many tokens they contributed to the pool, and how they reacted to the collective contributions of others. Of 84 participants, 81 fell unambiguously into one of the three categories. Having established who was who, they then created “bespoke” games, to test whether people changed strategy. They did not. Dr Kurzban and Dr Houser were thus able to predict the outcomes of these games quite reliably. And the three strategies did, indeed, have the same average pay-offs to the individuals who played them—though only 13% were co-operators, 20% free-riders and 63% reciprocators.
This is only a preliminary result, but it is intriguing. It suggests that people's approaches to cooperation with their fellows are, indeed, evolutionarily stable. Of course, it is a long stretch from showing equal success in a laboratory game to showing it in the mating game that determines evolutionary outcomes. But it is good to know that in this context at least, nice guys do not come last. They do just as well as the nasty guys and, indeed, as the wary majority.
Passage 4
Moon river?
The latest news from Titan
A PICTURE may be worth a thousand words. But when the picture in question is of an alien world, it is difficult to be sure what those thousand words should be. And in the case of the images that have arrived from Titan, Saturn's largest moon, that world is very alien indeed.
On January 14th Huygens, a space probe built by the European Space Agency (ESA), landed on Titan and began to deliver its precious cargo of data to anxiously waiting scientists. The most striking finding so far is a picture taken as the probe descended. It appears to show pale hills crisscrossed with drainage channels containing dark material, leading to a wide, flat darker region. The landing site itself produced less striking, but still significant images. It is flat, strewn with rounded pebbles and appears to be a dry riverbed.
On Earth, or even on Mars, drainage channels and rounded pebbles would be taken as evidence for the erosive effects of liquid water. But at -180°C, Titan is too cold for water to be liquid. It is, however, not too cold for various hydrocarbons to be so (indeed, the most likely candidates, methane and ethane, are gases at terrestrial temperatures). Many people have suggested that Titan's dark regions might be lakes made of such hydrocarbons, or of tar that is composed of hydrocarbons which are too cold to be truly liquid, but have not frozen solid. The presence of hydrocarbons in Titan's atmosphere was confirmed on the probe's journey through it. Huygens's instruments detected both methane and ethane.
But the pebbles in the picture probably are made of water—in the form of ice.
Because all of the raw images from Huygens were immediately made available to the public via the internet, amateurs have been racing ESA and its American cousin NASA to create processed, composite images. Some scientists say that a glitch led ESA to publish more data than it had originally intended, something that ESA denies. Nevertheless, a few minutes after the Huygens data were published on one website, they were mysteriously yanked off the web again.
The availability of the data, though, has led to the publication on the internet of a short movie compiled from a series of 80 still images taken of the landing site. This five-second film appears to show movement, with small white objects crossing the camera's field of vision.
ESA's scientists were quick to point out that any movement seen was likely to be an artefact that owed its existence to nothing more than the fact that the images had not been put together correctly. Whether that interpretation is correct should be clear when ESA's own “official” movie is released, which had not happened as The Economist went to press.
Nevertheless there is, privately, a debate among planetary scientists as to whether the white blobs are an artefact, or pieces of ice being carried past the lander on a thin stream of liquid hydrocarbon a few centimetres deep.
That would be exciting. A stream is a stream, whether it is made of water or hydrocarbon. At the moment, Earth is the only body known to have them. But, as Ralph Lorenz, a planetary scientist at the University of Arizona, points out, the lesson from places such as Mars—and indeed Arizona—is that features created by liquids may exist, but the processes that carved them may be transitory or long gone. It is possible that rare but violent events, rather than continuous erosion, are responsible for shaping Titan's landscape. Whether Huygens has collected enough data to tell the difference remains to be seen.
Passage 5
Greener than you thought
Genetically modified sugar beet is good for the environment
Though often conflated in the public mind, arguments against the planting of genetically modified (GM) crops fall into two distinct groups. One, which applies only to food crops, is that they might, for some as yet undemonstrated reason, be harmful to those who eat them. The other, which applies to them all, is that they might be bad for the environment.
Proponents of the technology counter that in at least some cases GM crops should actually be good for the environment. Crops that are modified to produce their own insecticides should require smaller applications of synthetic pesticides of the sort that Greens generally object to. But in the case of those modified to resist herbicides the argument is less clear-cut. If farmers do not have to worry about poisoning their own crops, environmentalists fear, they will be more gung-ho about killing the wild plants that sit at the bottom of the food chain and keep rural ecosystems going—or weeds, as they are more commonly known.
Research just published in the Proceedings of the Royal Society suggests, however, that it may be possible for all to have prizes. Get the dose and timing right and you can have a higher crop yield and a higher weed yield at the same time—and also use less herbicide.
The research was done at Broom's Barn Research Station in Suffolk, by a team led by Mike May, the head of the station's weeds group. The team was studying GM sugar beet.
This was one of the species examined in the British government's Farm-Scale Evaluations (FSEs) project, a huge, three-year-long research programme designed to assess the effects (including the environmental effects) of herbicide use on GM crops.
The results for sugar beet, which competes badly with common weed species and thus relies heavily on the application of herbicides for its success, came in for particular criticism from environmentalists when the trials concluded in 2003. They indicated that fields planted with GM beet and treated with glyphosate, the herbicide against which the modification in question protects, had fewer weeds later in the season. These produced fewer seeds and thus led to reduced food supplies for birds. Some invertebrates, particularly insects, were also adversely affected.
The Broom's Barn researchers, however, felt that this problem might be overcome by changing the way the glyphosate was applied. They tried four different treatment “regimes”, which varied the timing and method of herbicide spraying, and compared them with conventional crop management regimes such as those used in the FSEs.
The best results came from a single early-season application of glyphosate. This increased crop yields by 9% while enhancing weed-seed production up to sixteen-fold. And, as a bonus, it required 43% less herbicide than normal. Genetic modification, it seems, can be good for the environment, as well as for farmers' pockets.
Passage 6
Corpus colossal
How well does the world wide web represent human language?
LINGUISTS must often correct lay people's misunderstandings of what they do. Their job is not to be experts in “correct” grammar, ready at any moment to smack your wrist for a split infinitive. What they seek are the underlying rules of how language works in the minds and mouths of its users. In the common shorthand, linguistics is descriptive, not prescriptive. What actually sounds right and wrong to people, what they actually write and say, is the linguist's raw material.
But that raw material is surprisingly elusive. Getting people to speak naturally in a controlled study is hard. Eavesdropping is difficult, time-consuming and invasive of privacy. For these reasons, linguists often rely on a “corpus” of language, a body of recorded speech and writing, nowadays usually computerised. But traditional corpora have their disadvantages too. The British National Corpus contains 100m words, of which 10m are speech and 90m writing. But it represents only British English, and 100m words is not so many when linguists search for rare usages. Other corpora, such as the North American News Text Corpus, are bigger, but contain only formal writing and speech.
Linguists, however, are slowly coming to discover the joys of a free and searchable corpus of maybe 10 trillion words that is available to anyone with an internet connection: the world wide web. The trend, predictably enough, is prevalent on the internet itself. For example, a group of linguists write informally on a weblog called Language Log. There, they use Google to discuss the frequency of non-standard usages such as “far from” as an adverb (“He far from succeeded”), as opposed to more standard usages such as “He didn't succeed—far from it”. A search of the blog itself shows that 354 Language Log pages use the word “Google”. The blog's authors clearly rely heavily on it.
For several reasons, though, researchers are wary about using the web in more formal research.
One, as Mark Liberman, a Language Log contributor, warns colleagues, is that “there are some mean texts out there”. The web is filled with words intended to attract internet searches to gambling and pornography sites, and these can muck up linguists' results. Originally, such sites would contain these words as lists, so the makers of Google, the biggest search engine, fitted their product with a list filter that would exclude hits without a correct syntactical context. In response, as Dr Liberman notes, many offending websites have hired computational linguists to churn out syntactically correct but meaningless verbiage including common search terms. “When some sandbank over a superslots hibernates, a directness toward a progressive jackpot earns frequent flier miles” is a typical example. Such pages are not filtered by Google, and thus create noise in research data.
There are other problems as well. Search engines, unlike the tools linguists use to analyse standard corpora, do not allow searching for a particular linguistic structure, such as “[Noun phrase] far from [verb phrase]”. This requires indirect searching via samples like “He far from succeeded”. But Philip Resnik, of the University of Maryland, has created a “Linguist's Search Engine” (LSE) to overcome this. When trying to answer, for example, whether a certain kind of verb is generally used with a direct object, the LSE grabs a chunk of web pages (say a thousand, with perhaps a million words) that each include an example of the verb. The LSE then parses the sample, allowing the linguist to find examples of a given structure, such as the verb without an object. In short, the LSE allows a user to create and analyse a custom-made corpus within minutes.
The web still has its drawbacks. Most of it is in English, limiting its use for other languages (although Dr Resnik is working on a Chinese version of the LSE). And it is mostly written, not spoken, making it tougher to gauge people's spontaneous use. But since much web content is written by non-professional writers, it more clearly represents informal and spoken English than a corpus such as the North American News Text Corpus does.
Despite the problems, linguists are gradually warming to the web as a corpus for formal research. An early paper on the subject, written in 2003 by Frank Keller and Mirella Lapata, of Edinburgh and Sheffield Universities, showed that web searches for rare two-word phrases correlated well with the frequency found in traditional corpora, as well as with human judgments of whether those phrases were natural. What problems the web throws up are seemingly outweighed by the advantages of its huge size. Such evidence, along with tools such as Dr Resnik's, should convince more and more linguists to turn to the corpus on their desktop. Young scholars seem particularly keen.
The easy availability of the web also serves another purpose: to democratise the way linguists work. Allowing anyone to conduct his own impromptu linguistic research, some linguists hope, will do more to popularise their notion of studying the intricacy and charm of language as it really exists, not as killjoy prescriptivists think it should be.
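The frequency checks described in this passage are simple to sketch in miniature. The toy example below counts whole-word occurrences of a phrase in a scrap of text; real studies run against web-scale corpora, and the helper here is invented for illustration.

import re

def phrase_count(corpus, phrase):
    """Count whole-word occurrences of a phrase, ignoring case."""
    pattern = r"\b" + re.escape(phrase) + r"\b"
    return len(re.findall(pattern, corpus, flags=re.IGNORECASE))

corpus = "He far from succeeded. He didn't succeed -- far from it."
for phrase in ("far from", "far from it"):
    print(phrase, "->", phrase_count(corpus, phrase))  # 2, then 1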
2019经济学人考研英文文章阅读六十八
Privacy and technology隐私和技术How creepy is your smart speaker?你的智能音箱有多恐怖?Worries about privacy are overstated, but not entirely without merit. Your move, Alexa对隐私的担忧被夸大了,但并非完全没有道理。
你的一举一动尽在Alexa眼中“Alexa, are you recording everything you hear?” It is a question more people are asking, though Amazon’s voice assistant denies the charges. “I only record and send audio back to the Amazon cloud when you say the wake word,” she insists, before referring questioners to Amazon’s privacy policy.“Alexa,你把你听到的都录下来了吗?”尽管亚马逊的语音助手否认了这一指控,但越来越多的人在提这个问题了。
她坚称:“只有当你说出唤醒词时,我才开始录音并把音频发回到亚马逊云端。
”随后,她会向提问者介绍亚马逊的隐私政策。
Apple’s voice assistant, Siri, gives a similar answer. But as smart speakers from Amazon, Apple, Google and other technology giants proliferate (global sales more than doubled last year, to 86.2m), concerns that they might be digitally snooping have become more widespread.苹果的语音助手Siri也会给出类似的答案。
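The wake-word behaviour Alexa describes (listen locally, transmit only after the keyword) can be sketched in a few lines. This is an illustrative mock-up with invented names, not Amazon's implementation.

WAKE_WORD = "alexa"

def on_audio_frames(frames, detect, send_to_cloud):
    """Gate audio locally: stream to the cloud only after the wake word."""
    awake = False
    for frame in frames:
        if not awake:
            awake = detect(frame)      # on-device keyword spotting only
        else:
            send_to_cloud(frame)       # only post-wake audio leaves the device

# Toy stand-ins for the detector and the uploader:
frames = ["hum", "alexa", "what's", "the", "weather"]
on_audio_frames(frames,
                detect=lambda f: f == WAKE_WORD,
                send_to_cloud=lambda f: print("uploading:", f))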
Autism自闭症
Why it's not “Rain Woman”为什么它不是“雨女”
Women have fewer cognitive disorders than men do because their bodies are better at ignoring the mutations which cause them与男性相比,患有认知障碍的女性较少,因为她们自身的身体能更好地忽略导致认知障碍的基因突变
AUTISM is a strange condition. Sometimes its symptoms of “social blindness” (an inability to read or comprehend the emotions of others) occur alone. This is dubbed high-functioning autism, or Asperger's syndrome. Though their fellow men and women may regard them as a bit odd, high-functioning autists are often successful (sometimes very successful) members of society. On other occasions, though, autism manifests as part of a range of cognitive problems. Then, the condition is debilitating. What is common to those on all parts of the so-called autistic spectrum is that they are more often men than women—so much more often that one school of thought suggests autism is an extreme manifestation of what it means, mentally, to be male. Boys are four times more likely to be diagnosed with autism than girls are. For high-functioning autism, the ratio is seven to one.自闭症是一种奇怪的状态。
有时,它的“社会失明”症状(即无法读懂或理解他人情绪)会单独出现。
这被称为高功能自闭症,或亚斯伯格症候群。
虽然周围的人可能会觉得他们有点古怪,但高功能自闭症患者通常是成功的社会成员(有时非常成功)。
然而,另一些场合,自闭症表现为一系列认知问题的一部分。
这时,病症就会严重削弱患者的能力。
处于所谓“自闭症谱系”上各个部分的患者都有一个共同点:男性远多于女性——以至于有一派观点认为,自闭症是心理意义上“男性特质”的一种极端表现。
比起女孩而言,男孩有四倍的可能性被诊断为自闭症。
至于高功能自闭症,比率达到7比1。
Moreover, what is true of autism is true, to a lesser extent, of a lot of other neurological and cognitive disorders. Attention deficit hyperactivity disorder (ADHD) is diagnosed around three times more often in boys than in girls. “Intellectual disability”, a catch-all term for congenital low IQ, is 30-50% more common in boys, as is epilepsy. In fact, these disorders frequently show up in combination. For instance, children diagnosed with an autistic-spectrum disorder often also receive a diagnosis of ADHD.此外,自闭症的这一特点在较小程度上也适用于许多其他神经和认知障碍。
男孩被诊断出注意力缺陷多动障碍(ADHD)的比例大约是女孩的三倍。
“智力残疾”是一个统称先天性智商低下的术语,它在男孩中的发生率要高出30%~50%,癫痫也是如此。
事实上,这些疾病经常共同出现。
例如,被诊断为自闭症谱系障碍的儿童往往也会被诊断出ADHD。
Autism's precise causes are unclear, but genes are important. Though no mutation which, by itself, causes autism has yet been discovered, well over 100 are known that make someone with them more vulnerable to the condition.? 导致自闭症的确切原因还不知道,但是基因很重要原因。
虽然尚未发现任何单独就能导致自闭症的基因突变,但已知有超过100种突变会使携带者更容易患上这种病症。
Most of these mutations are as common in women as in men, so one explanation for the divergent incidence is that male brains are more vulnerable than female ones to equivalent levels of genetic disruption. This is called the female-protective model. The other broad explanation, social-bias theory, is that the difference is illusory. Girls are being under-diagnosed because of differences either in the ways they are assessed, or in the ways they cope with the condition, rather than because they actually have it less. Some researchers claim, for example, that girls are better able to hide their symptoms.这些突变在女性中和在男性中同样常见,因此对发病率差异的一种解释是:在同等程度的基因损伤下,男性大脑比女性大脑更容易受到影响。
这被称为女性保护模式。
另一种宽泛的解释是社会偏见理论,即认为这种差异是虚幻的。
女孩之所以诊断不足,是因为她们接受评估的方式不同,或者她们应对病症的方式不同,而不是因为她们患病确实更少。
例如,一些研究者声称,女孩能更好地隐藏这些症状。
The weaker sex弱势性别
To investigate this question, Sebastien Jacquemont of the University Hospital of Lausanne and his colleagues analysed genetic data from two groups of children with cognitive abnormalities. Those in one group, 800 strong, were specifically autistic. Those in the other, 16,000 strong, had a range of problems.为了调查这个问题,洛桑大学医院的Sebastien Jacquemont和同事分析了两组患有认知异常的儿童的基因数据。
一组有800人,均明确患有自闭症;另一组有16000人,患有一系列认知问题。
Dr Jacquemont has just published his results in the American Journal of Human Genetics. His crucial finding was that girls in both groups more often had mutations of the sort associated with abnormal neural development than boys did. This was true both for copy-number variants (CNVs, which are variations in the number of copies in a chromosome of particular sections of DNA), and single-nucleotide variants (SNVs, which are alterations to single genetic letters in the DNA message).Jacquemont医生刚刚将研究结果发表在《美国人类遗传学杂志》上。
他的关键发现是:在两组中,女孩都比男孩更常携带与神经发育异常相关的基因突变。
无论是拷贝数变异(CNVs,即染色体上DNA特定片段拷贝数目的变化)还是单核苷酸变异(SNVs,即DNA信息中单个遗传字母的改变),情况都是如此。
On the face of it, this seems compelling evidence for the female-protective model. Since all the children whose data Dr Jacquemont examined had been diagnosed with problems, if the girls had more serious mutations than the boys did, that suggests other aspects of their physiology were covering up the consequences. Females are thus, if this interpretation is correct, better protected from developing symptoms than males are. And, as further confirmation, Dr Jacquemont's findings tally with a study published three years ago, which found that CNVs in autistic girls spanned more genes (and were thus, presumably, more damaging) than those in autistic boys.从表面上看,这似乎是令人信服的女性保护模式证据。