Electric Power Systems Research
1. Introduction

To reduce the budget spent on maintenance every year, utilities need to develop optimized maintenance schedules under a limited budget. This task involves quantifying the impact of maintenance, which is challenging. Existing system-level maintenance strategies, such as the reliability-centered maintenance (RCM) and risk-based approaches reported in Refs. [1–6], require the effect of component maintenance to be considered quantitatively through models such as probabilistic maintenance models [7–10] and/or failure rate estimation models. These models depend on condition-based data and on the operating history of power system equipment such as transmission lines, transformers or circuit breakers (CBs). This paper proposes a probabilistic methodology to quantify the effect of device maintenance for circuit breakers. The proposed methodology uses CB control circuit data to define several performance indices. Fig. 1 shows the electrical representation of the CB control circuit; the data consist of several voltage and current waveforms measured across the trip coil, close coil and auxiliary contacts, captured when the CB operates (either an open or a close operation). A sample representation of these signal waveforms
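To make the idea of waveform-derived performance indices concrete, the sketch below computes two illustrative indices from a trip-coil current record: the coil pick-up time and the peak coil current. The function name, the 10% pick-up threshold and the synthetic waveform are assumptions made here for illustration; they are not indices defined by the paper.

```python
import numpy as np

def trip_coil_indices(t, i_coil, pickup_frac=0.10):
    """Illustrative performance indices from a trip-coil current waveform.

    t       : sample times, seconds
    i_coil  : trip-coil current samples, amperes
    Returns the coil pick-up time (first instant the current exceeds
    pickup_frac of its peak) and the peak coil current.
    """
    t = np.asarray(t, dtype=float)
    i_coil = np.asarray(i_coil, dtype=float)
    i_peak = float(i_coil.max())
    above = np.nonzero(i_coil >= pickup_frac * i_peak)[0]
    t_pickup = float(t[above[0]]) if above.size else float("nan")
    return {"pickup_time_s": t_pickup, "peak_current_A": i_peak}

# Purely synthetic 50 ms record standing in for a measured trip-coil trace
t = np.linspace(0.0, 0.05, 500)
i = np.clip(np.sin(2 * np.pi * 20 * t), 0.0, None) * 5.0
print(trip_coil_indices(t, i))
```

Indices of this kind, computed for every recorded operation, are the raw material from which condition indicators and maintenance-impact statistics can then be built.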
The Past, Present and Future of Wet Electrostatic Precipitators in Power Plant Applications

Richard C. Staehle and Ronald J. Triscori, The Babcock & Wilcox Company, Barberton, Ohio, U.S.A.
K. Sampath (Sam) Kumar, F.L. Smidth Airtech Inc., Houston, Texas, U.S.A.
Gary Ross, New Brunswick Power, Fredericton, New Brunswick, Canada
Ed Pasternak, AES Deepwater, Pasadena, Texas, U.S.A.

BR-1742. Presented to the Combined Power Plant Air Pollutant Control Mega Symposium, May 19-22, 2003, Washington, DC, U.S.A.

Background

Wet electrostatic precipitators (WESPs) have been commercially available since their first introduction by F.G. Cottrell in 1907. However, most of their use has been in small, industrial-type settings as opposed to electric utility power plants. In the past 20 years, this technology has been applied periodically to electric power plant sources.

In electric utility plants firing sulfur-bearing fuel, wet flue gas desulfurization (WFGD) and, in the past decade, selective catalytic reduction (SCR) technologies have been added to control sulfur dioxide and nitrogen oxides emissions. The recent start-ups of new SCR systems on coal-fired power plants have demonstrated an increase of sulfuric acid emissions due to the oxidation of a portion of SO2 across the SCR catalysts.1 Although the control of sulfuric acid mist from electric utility sources has typically not been regulated, concern and questions regarding these emissions are now being observed.

The combustion of petroleum coke or Orimulsion™ with relatively higher concentrations of vanadium (a catalyst for the oxidation of SO2 produced in combustion to SO3) produces high levels of sulfuric acid emissions, in some cases as high as at power plants firing high sulfur coal.2

In cases where wet FGD technology is utilized in conjunction with SCRs on high sulfur coals or with high vanadium petroleum fuels, the condensed sulfuric acid component in the flue gas can exceed 20 ppmvd @ 3% O2. This causes a pronounced stack opacity because of the inherent light scattering properties of the sub-micron particulates.3 Acid mist concentrations as low as 5 to 10 ppmvd have caused visible plume problems.

This paper will discuss the past application of WESP technology, recent experiences that pertain to the electric utility industry, and economic analyses of WESP application to address potential and future utility needs. The analyses compare the WESP to the alternative approach of sorbent injection to control acid mist emissions.

Past WESP Experience

As stated above, WESPs have been serving the needs of the metallurgical industry and many other applications for nearly 100 years, to control sulfuric acid and particulates. More than 1000 WESPs are in worldwide commercial operation today.4 There are several configurations of WESP designs now proven in commercial practice. WESPs are made with tubular and parallel-plate type collecting electrodes. While tubular WESPs have only a vertical gas flow orientation (upflow or downflow), the plate-type designs can have either horizontal or vertical gas flow orientation.

Materials of construction have long been a major issue in WESP design for metallurgical applications. In copper roasters, for example, the SO2 concentrations in off-gases often exceed 10% because of the high pyritic content of the ore. Environmental regulations that reduced SO2 emissions from these sources led to the commercialization of converting this rich, SO2-laden gas stream into a usable resource: sulfuric acid.
Ironically, the WESP technology also protected the process' vanadium oxidation catalyst from being "poisoned" and plugged, by reducing the particulates and SO3 prior to entering the acid plant. The WESPs became the workhorses of the industry in collecting sulfuric acid mist. WESPs also removed trace elements, such as arsenic, to enhance the quality of the sulfuric acid made in the acid plant.

In the 1940s, the typical WESP design for acid mist control commonly used corrosion-resistant lead collection surfaces (both plates and tubes), lead-lined mild steel high voltage systems, and casings comprised of skeleton steel structures with lead burned over the steel surfaces for protection from the acid gas stream. Due to the structural weakness of the lead and the operating pressures of downstream acid plants, leaks would occur, and soon the skeleton steel structure underwent corrosive attack and failed. Additionally, corrosion-resistant lead materials were shown to be susceptible to mechanical failure when used at operational temperatures above approximately 150°F. Eventually, the casing design evolved to alternatives such as fiberglass reinforced plastic to house the lead and lead-covered internals. This new design improved the life cycle of the WESPs and also minimized the need for highly skilled lead burner craftsmen. During this same timeframe, some manufacturers started using plastic and FRP collecting electrodes to further minimize the lead content of their designs. In the 1970s some manufacturers began using specialty stainless steels for any lead-covered components that remained in their designs. The main driving forces behind these evolutions were problems associated with the use of lead, including the specialized construction and maintenance labor required, reliability and maintenance/repair costs, and the ever growing concern over lead toxicity.

There has been a recent report of a toxics release at a metallurgical plant as a result of a fire within a WESP that used polypropylene collecting tubes and an FRP casing. This type of experience has raised concern in other plants using plastic WESP components. Therefore, given the above, it is anticipated that electric utility power plants may understandably show a reluctance to use lead or plastic components in their WESPs.

During the 1970s and 1980s, the successful use of alloy in wet FGD inlet sections and outlet flues provided sufficient confidence in the further use and application of alloys. Today, alloy steels including 317, 6% molybdenum and C-276 grades are used routinely in wet FGD systems.

What has emerged from these past applications is a strong experience base for designing WESPs for high efficiency control of sulfuric acid and particulates in electric utility applications. The base includes both horizontal flow and vertical flow WESPs. Both designs have been shown to achieve high efficiency collection. Site-specific questions regarding WESP layout and physical integration into the gas cleaning system will ultimately decide the best economic choice.

One of the process issues that has surfaced during WESP operation, when dealing with sulfuric acid mist, is a phenomenon known as corona suppression. Corona suppression is not new to electrostatic precipitation. Formation of SO3 vapor together with flue gas moisture creates ultra-fine particles of sulfuric acid mist. This mist can severely suppress the operating corona current in the WESP. If present, corona suppression will decrease collection efficiency in the WESP.
Factors that cause or aggravate this condition in a WESP following a wet FGD are large concentrations of fine sulfuric acid mist droplets and a high degree of water mist condensation.5

The above effect can be made even more of a problem by improper choices in the design of ESP collection and discharge electrode geometries. By choosing a discharge electrode geometry exhibiting a low corona onset voltage, and by limiting the distance between the corona electrode and the collecting electrode, corona current can be established and maintained at adequate levels in the inlet fields. This reduces the fine particulate loading exiting the inlet field, thus permitting operation with sufficient power levels in the downstream WESP fields to achieve the overall design collection efficiency. Management of corona suppression is enabled by past experience not only with WESPs but also with dry ESPs in other applications that exhibited the corona suppression problem, such as saltcake (principally sodium sulfate salts) from chemical recovery boilers and dry cement kilns (where fine particulates are found in high concentrations).6

Recent WESP Experience

AES Deepwater, Texas

AES Deepwater is a petroleum coke fired cogeneration plant located on the Houston Ship Channel in Pasadena, Texas. The plant generates approximately 155 MW of electricity.2

This plant utilizes a dry electrostatic precipitator to limit the levels of particulates and unburned carbon entering the limestone-based, gypsum-producing wet FGD system. This plant also uses a wet venturi scrubber prior to the wet FGD to remove additional particulate, HF and HCl. While particulate limits are necessary for regulatory compliance, control of unburned carbon is required to prevent contamination of the salable by-product gypsum.

The petroleum coke fuel has a high vanadium content, which results in a relatively high level of SO3 entering the wet FGD, where the gas quenching action completes the formation of sulfuric acid mist. Only about 20-30% of the mist is captured in the FGD system because of the fine particle size of the mist droplets. The inlet concentration of SO3 varies between 35 to 100 ppmvd @ 3% O2 depending on furnace operating conditions and the vanadium content of the petcoke.

AES has the oldest operating U.S. installation of a WESP in a power plant application. Table 1 shows the design information and required emission levels. The limit for total particulates, including sulfuric acid and condensables, was set at 0.005 grains/scfd, thus requiring a collection level of greater than 90% on sulfuric acid alone. Particulate control across the WESP is typically in the 95 to 97% range. This is in addition to the high efficiency collection of the dry ESP and the wet venturi scrubber ahead of the wet FGD. Such high efficiency, total particulate control across the system was necessary in 1986 to meet the stringent limits required in this non-attainment area. While the State of Texas required sulfuric acid emissions control to meet the tight particulate limits, to this day there are no federal standards for the control of sulfuric acid mist emissions from such sources.

Table 1  Specifications for the air pollution control system installed at AES Deepwater

Dry ESP
  Inlet gas flow: 634,000 ACFM at 360°F
  SCA: 376 ft2/1000 ACFM
  Particulate collection efficiency: 97%

FGD
  SO2 removal efficiency: 90%
  Pre-scrubber/quencher: Venturi type, downflow/co-current
  Tower gas velocity: 9 ft/sec
  Mist eliminator: Two-stage, chevron type
  Stack gas reheat: To 175°F with in-line steam reheater
  Calcium sulfite oxidation: Bleedstream pressure oxidation in separate towers

Wet ESP
  Gas velocity: 7.7 ft/sec
  Treatment time: 4.2 sec
  SO3, ppmv dry @ 3% O2: 30 to 100
  Particulate collection efficiency: 98.9%, including sulfuric acid mist
  Outlet loading limit: 0.005 grains/scfd, including sulfuric acid

This WESP design consists principally of a three-field, upflow system of 12 parallel modules. The collection surfaces are plate-type, fabricated of balsa wood coated with reinforced thermoset plastic. The plates are kept irrigated, for electrical conductivity and removal of collected matter, by a continuous film of water flowing down over the surfaces of the plates. A system of collection trough gutters at the bottom of the collector plates removes irrigation water and collected matter from the WESP.

This WESP system has been in successful commercial operation since 1986. Particulate limits were met, and stack opacity is generally maintained well below 10%.

The discharge electrodes and other high voltage internals were made of alloy C-276. After more than 15 years of operation these internals appear new, with no observable signs of corrosion.

In 1999, all of the original collection plates in 1 of the 12 modules were removed and replaced with new alloy collector plates made of 6% Mo stainless steel. AES wanted access to higher performance and lower maintenance cost technology should the need arise in the future. The discharge electrodes in this module were also changed out from the original round wires to higher corona forming strip-type high voltage discharge electrodes. Figure 1 shows the configuration of module B that was retrofitted with alloy collector plates. Figure 2 shows the current-voltage relationship from the inlet to the outlet fields. It is seen that corona current increases significantly from the first field to the third field, which is indicative of the effective management of corona suppression (present, though not serious) caused by the sulfuric acid mist fines. While collection performance of the WESP is not an issue, there is potential for an even higher corona producing inter-electrode geometry for this unit.

After more than three years of operation of this retrofitted module with all-alloy WESP collecting and discharge electrodes, the 6% Mo collector plates have shown no corrosion and still appear as they did when new.

Northern States Power/Xcel Energy

There are twenty four (24) WESP modules installed at this plant, twelve (12) on each of the two 750 MW units. The plant chose the WESP solution to limit its stack opacity to below 20%. The approach was to retrofit an upflow, two-field WESP inside an existing casing made available by re-positioning the particulate scrubber internals. Following an initial trial evaluation of WESP technology, the across-the-board retrofit of WESPs began in 1998 and was completed in 2001.

Xcel Energy's Sherbourne County Station burns sub-bituminous coals having about 20% CaO available in the flyash. The free CaO in this flyash acts to absorb the SO3 in the flue gas. The original high energy, combined particulate/SO2 wet scrubbers alone were unable to limit stack opacity to below 20%. There are no dry ESPs installed on these units.
By using WESPs, the stack opacity has been limited to levels of about 10%, as compared to pre-WESP levels of 40%.7 Particulate control exceeding 90% has been achieved with the one second residence time available within the WESPs' treatment zones.

Due to the high calcium content of the flyash, material scaling occurs in the bottom portion of the first field collector tubes. The first fields primarily capture the re-entrained droplets and carryover from the upstream scrubbers, thus allowing for relatively stable electrical operation in the second fields for fine particulate capture. Each of the modules gets a thorough, off-line manual high pressure washdown about once a year to remove scale. In addition, part of normal daily operation includes a water flushing of modules while the power supplies are de-energized. The alkaline nature of the flyash and the absence of sulfuric acid mist allowed the use of 304L for the WESP internals in this application. Lessons learned from this experience are: 1) the WESP is able to overcome difficult dust build-up conditions through well scheduled washdowns, and 2) high efficiency non-acid particulate collection, in addition to sulfuric acid mist collection, is an important capability to consider when evaluating WESPs for sulfuric acid collection.

Northwestern U.S. Petroleum Refinery

A petroleum coke calciner produces flue gas containing SO2 that is treated with a caustic reagent scrubber. While the scrubber is highly efficient in absorbing SO2, it cannot adequately capture the sulfuric acid mist. Since 1998, a WESP has been used to achieve a high degree of sulfuric acid capture to eliminate the visible plume. Three parallel WESP modules in a single-field, upflow configuration are utilized. Figures 3 and 4 show the excellent particulate and sulfuric acid capture across the WESP. Sulfuric acid mist emissions were limited to levels of approximately 1 ppmvd @ 7% O2, and particulate concentrations were reduced to levels well below 0.005 grains/scfd.

The material of construction of these WESPs' internals and casing is alloy 904L. The corrosion resistance has been very good, once again observably in "like new" condition upon inspection.

New Brunswick Power, Coleson Cove

In 2002, New Brunswick Power elected to install high efficiency WESPs following two new limestone-based wet FGD scrubbers at their 1050 MW Coleson Cove station. This was part of a plantwide effort to reduce the cost of electricity generation by switching to lower cost Orimulsion™ fuel while considerably reducing SOx and particulate emissions. New Brunswick Power's decision to install the WESPs was to assure control of sulfuric acid emissions below 5 ppmvd @ 3% O2, and to limit flyash particulates below 0.015 lb/MBtu. To achieve this level of control on sulfuric acid at all times, collection efficiency requirements will exceed 90%.

Coleson Cove will be the second power plant at which New Brunswick Power has installed a WESP system for the collection of acid mist following a wet FGD. A smaller, single-field WESP system went into operation at its Dalhousie plant in the year 2000, which followed that plant's conversion to Orimulsion™ firing and wet FGD installation in 1994.

Figure 5 shows a schematic of the layout of the Coleson Cove plant for each of the two wet FGD absorbers. It can be seen that the wet ESP consists of a three-field upflow design, similar to the design that has been in successful use for more than 15 years at the AES Deepwater facility.
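For a rough sense of how the treatment times and collecting areas quoted in this section translate into collection efficiency (for example, the better-than-90% capture achieved with one second of residence time at Sherco, or the 98.9% design value at AES Deepwater), the classical Deutsch-Anderson relation is often used as a first estimate. The sketch below is a minimal illustration only; the specific collecting area and effective migration velocity in the example are assumed values, not data from this paper.

```python
import math

def deutsch_anderson_efficiency(sca_m2_per_m3s, w_m_per_s):
    """Deutsch-Anderson estimate of fractional collection efficiency.

    sca_m2_per_m3s : specific collecting area, m^2 of plate per (m^3/s) of gas
    w_m_per_s      : effective migration velocity of the particles, m/s
    """
    return 1.0 - math.exp(-w_m_per_s * sca_m2_per_m3s)

# Assumed example: an SCA of 60 m^2/(m^3/s) and an effective migration velocity
# of 0.04 m/s for fine acid mist give roughly 91% collection.
print(deutsch_anderson_efficiency(60.0, 0.04))
```

In practice the effective migration velocity for sub-micron acid mist depends strongly on the corona power that can be sustained, which is why the corona-suppression management discussed earlier matters as much as the installed plate area.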
Scrubbed flue gas enters the inlet field of the WESPs after exiting the wet FGD mist eliminator. There are three electrical fields in series and four independently energized high voltage bus sections across each WESP electrical field. This conservatively sectionalized, twelve (12) bus section design allows for small sections to be de-energized during periodic water flushings while maintaining overall emissions within design levels.

Figure 2  I/V data from the AES WESP.
Figure 3  WESP data on sulfuric acid from a refinery in the northwest U.S.
Figure 4  WESP data on particulate from a refinery in the northwest U.S.

The gas exits the top of the WESP through a final mist eliminator section, which captures any re-entrained droplets that may be present during flushing cycles, and transitions directly into the stack through an outlet hood. This design simplifies and lowers the balance of plant costs which would otherwise be associated with a conventional arrangement where the WESP may be placed on a stand-alone basis, outside of the wet FGD vessel.

The Coleson Cove plant is located on a shoreline, where layout space was at a premium. Therefore, this integrated arrangement between the wet FGD absorber and the WESP made the most sense. The material of construction of the collecting plates in the inlet field will be C-276. 6% Mo stainless steel is utilized for the high voltage systems and the balance of the collection plates.

Construction of the integrated wet FGD/WESP systems will begin in spring of 2003. The systems will be operational by September 2004.

It is expected that this integrated, multi-pollutant approach to wet FGD installations will become more common in future fossil-fired power plants, and as a retrofit where multiple control of NOx, SOx, mercury and fine particulates of flyash and sulfuric acid may be required.

Comparison of WESP Versus Sorbent Injection Options

Studies have investigated the possibility of sulfuric acid emission reductions through the use of additives that allow existing air pollution control equipment to trap the resultant particulate matter. In this way, the retrofit of another piece of capital equipment, such as the WESP, could be avoided.

A pilot testing study commissioned by the Electric Power Research Institute (EPRI) at its high sulfur test center in New York more than ten years ago is still relevant for determining the effectiveness of various sorbents for the removal of SO3 and its associated plume.9 More than ten years later, a full-scale evaluation was commissioned by American Electric Power at its Gavin station to evaluate the effectiveness of control of sulfuric acid mist following the installation of retrofitted SCRs.1,10 This installation also has an existing wet FGD system for SO2 control.

A summary of the above studies is provided below and will be used as a basis to compare the economics of the alternatives:

a. To remove SO3 from flue gas in a utility boiler application, alkaline sorbents can be injected either in the upper furnace or ahead of the dry ESP.

b. Hydrated lime, ammonia and sodium bicarbonate were injected ahead of the dry ESP, while magnesium hydroxide was injected in the furnace sections.

c. Injection of hydrated lime caused a significant loss of dry ESP performance, which relies on the effect of free sulfuric acid for lowering flyash resistivity levels. Emissions were increased several fold due to increased flyash resistivity and particulate loading to the dry ESP.
Therefore, it may be necessary to consider the cost of ESP enlargement, or other means, to restore or improve the original particulate collection performance.

d. Injection of ammonia, while most efficient because of the gas/gas reaction for the absorption of SO3, affects dry ESP performance due to the creation of ultra-fine ammonium bisulfate particulates. This can reduce the inlet corona current in the dry ESP due to corona suppression. In addition, increased stickiness of the ash can create ESP performance and/or maintenance problems and also ash handling system problems.

e. There is motivation among the utilities to increase the amount of flyash utilization. Flyash utilization as a cement raw material substitute is likely to show gains in energy use, and thus contribute to the overall goal of CO2 mitigation. Both ammonia and sodium compounds in ash have resulted in ash being unsuitable for use. In several plants that are already selling flyash, this presents a double problem of loss of ash revenue and increased cost of ash disposal. (This impact will be included for analysis when evaluating sodium based sorbents.)

f. Magnesium hydroxide injection in the furnace yields results similar to hydrated lime injection ahead of the ESP.

Economic Evaluation of Alternatives

The following evaluation focuses on comparing the cost of WESP technology versus sodium bicarbonate and hydrated lime injection. The results for magnesium hydroxide and ammonia have a similar impact, the former being analogous to lime and the latter to sodium bicarbonate.

The capital cost of the wet ESP is annualized, and added to this are anticipated annual operating costs to generate the annual cost to own and operate the WESPs. A capital recovery factor of 0.1 is used to generate the annualized cost of capital. For a levelizing period of 15 to 20 years, and a difference between interest and discount rates of 6%, this factor is considered reasonable and conforms with EPRI guidelines. Operating costs for WESP power supplies, controllers, and high voltage insulator heaters are included in the evaluation.

Figure 5  Upflow WESP integrated with WFGD.

Comparison is made of the WESP annual costs to the annualized cost of sorbent injection technology. Impacts of ESP enlargements and loss of ash sale are included as appropriate.

Table 2 shows the estimated total installed cost of a WESP of the design shown for Coleson Cove, for three different levels of collection efficiency. For a 50% collection efficiency, a single-field unit will suffice. For a collection efficiency requirement of 80%, a two-field unit will be necessary. For achieving a sulfuric acid collection efficiency exceeding 90%, a three-field unit will be required.

Table 3 shows the injection rates of dry hydrated lime and sodium bicarbonate required to achieve the above mentioned SO3 collection efficiency levels. It is readily seen that the required injection rates increase rapidly as the SO3 collection efficiency requirements exceed 50%.

In the sorbent injection approach it is assumed that an injection temperature of 310°F is maintained. It is known that increasing the residence time to aid gas/solids mixing ahead of the dry ESP will reduce sorbent injection rates and associated costs. It is further assumed that this situation already exists at the plant.
Additional costs may be incurred for ductwork modifications if this situation does not exist and provisions for increased residence time are required.

The following assumptions are made in the analysis:
• A 500 MW unit is considered in this example.
• An existing dry ESP of 250 SCA (ft2/1000 ACFM) is keeping particulate limits below 0.03 lb/MBtu.
• An additional 30% increase in treatment time will be needed to restore the ESP performance after lime sorbent injection.
• Ash disposal costs are $10/ton of ash.
• 50% of the ash can be utilized at this plant at a price level of $5/ton of ash.
• Bituminous coal results in an ash content into the ESP of 10 lb/MBtu.
• The plant operates 8000 hours per year.
• Hydrated lime can be purchased for 100 US$/ton delivered.
• Sodium bicarbonate can be purchased for 250 US$/ton delivered.

Table 4 shows the annualized costs to own and operate the WESPs, the dry lime system and the sodium bicarbonate system. It is seen that if a particular plant must maintain its ash sale, or maintain its pre-injection particulate limits after sorbent injection, the WESP has a considerably lower cost to own and operate when compared to the sorbent injection techniques shown here. This is true for the levels of SO3 collection efficiency discussed above.

The fact that total particulate emissions from the WESP are several times lower than with the dry solution is not factored into this economic analysis.

The economics favor the WESP technology even more for the 80% and 90% removal cases, because the sorbent utilization efficiency decreases at higher levels of SO3 removal. Again, no credit has been applied to the WESP technology to reflect the fact that stack particulate emissions are much lower than with the sorbent technology.

An additional factor that may promote the use of WESP technology is that droplet carryover from the wet FGD is captured at high efficiency in the WESP. To the extent soluble ionic mercury is readily captured in the wet ESP, overall system improvements in mercury capture are to be expected. This mechanism is unavailable in the sorbent injection technology.

Conclusions

• WESPs have been a proven technology for the collection of sulfuric acid mist for nearly 100 years.
• The use of WESPs to limit total particulates, including sulfuric acid mist, in conjunction with a wet FGD system is becoming increasingly relevant to electric utility plants.
• WESPs integrated with wet FGD absorbers are an economical alternative, in terms of capital and operating costs and plant layout restrictions, when compared to "stand-alone" WESPs.
• In spite of the higher capital costs, WESPs are more economical to own and operate when compared with hydrated lime and sodium bicarbonate injection technologies.
• Power plants should consider the impacts of sorbent injection technologies due to the potential loss of ash sales, as well as adverse impacts on dry ESP performance.
• The additional benefits of keeping solid particulates at very low levels, and possible benefits in mercury control, will make WESPs a desirable choice when considering the available options for SO3 control.
• Requirements for future operating permits may address emissions of PM2.5, SO3 and visible plume. WESP technology addresses all of these emissions.

Table 2  Approximate total installed cost comparison of wet ESPs vs.
SO3 collection efficiency (500 MW plant)

  Number of fields    SO3 collection efficiency, %    $/kW
  1                   50                              20
  2                   80                              30
  3                   95                              40

Notes:
1. These values are based on 6% Mo internals.
2. $/kW figures are based on "greenfield" construction maximizing modular construction.
3. $/kW figures are based on the WESP component portion of a total WFGD/WESP integrated system.

Table 3  Sorbent injection rates for achieving stated levels of SO3 removal

  Sorbent type          Injection rate, lb/hr/1000 ACFM, at SO3 collection efficiency of
                        50%     80%     95%
  Hydrated lime         1.5     3       6
  Sodium bicarbonate    3       5       -

Notes:
1. Injection takes place ahead of the dry ESP at around 300 to 350°F.
2. All dry process.
3. Capital costs of the sorbent injection system are estimated at $5/kW.
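As a self-contained illustration of the annualized-cost comparison described above, the sketch below applies the stated assumptions (500 MW unit, 8000 operating hours per year, capital recovery factor of 0.1, hydrated lime at 100 US$/ton, sodium bicarbonate at 250 US$/ton, ash disposal at $10/ton, and ash sale at $5/ton for 50% of the ash) together with the Table 2 and Table 3 figures. The flue gas flow, the heat rate and the WESP operating cost are additional assumptions made here for illustration; this is not the authors' cost model, and the printed numbers are not those of Table 4.

```python
# Illustrative annualized cost comparison in the spirit of Table 4.
# Inputs marked "assumed" are NOT from the paper; the others are quoted from it.

CRF = 0.1                        # capital recovery factor (paper)
MW = 500                         # unit size, MW (paper)
HOURS = 8000                     # operating hours per year (paper)
ACFM = 1_800_000                 # assumed flue gas flow for a 500 MW unit, ACFM
HEAT_RATE = 10_000               # assumed net heat rate, Btu/kWh
ASH_LB_PER_MBTU = 10             # ash into the dry ESP, lb/MBtu (paper)

heat_input_mbtu_hr = MW * 1000 * HEAT_RATE / 1e6           # MBtu/h
ash_ton_hr = ASH_LB_PER_MBTU * heat_input_mbtu_hr / 2000   # ton/h

# WESP, three fields (~95% SO3 removal): $40/kW installed (Table 2)
wesp_annual = CRF * 40 * MW * 1000 + 0.5e6   # assumed 0.5 M$/yr O&M (power supplies etc.)

# Hydrated lime, 95% removal: 6 lb/hr per 1000 ACFM (Table 3), lime at $100/ton
lime_ton_hr = 6 * ACFM / 1000 / 2000
lime_annual = (lime_ton_hr * (100 + 10) * HOURS   # purchase plus disposal of spent sorbent
               + CRF * 5 * MW * 1000)             # $5/kW injection system, annualized
# (The cost of enlarging the dry ESP to restore its performance would be added here.)

# Sodium bicarbonate, 80% removal: 5 lb/hr per 1000 ACFM (Table 3), at $250/ton
sbc_ton_hr = 5 * ACFM / 1000 / 2000
sbc_annual = (sbc_ton_hr * (250 + 10) * HOURS
              + CRF * 5 * MW * 1000
              + 0.5 * ash_ton_hr * 5 * HOURS      # lost ash sale revenue
              + 0.5 * ash_ton_hr * 10 * HOURS)    # extra disposal of formerly sold ash

for name, cost in [("WESP (3 fields)", wesp_annual),
                   ("Hydrated lime", lime_annual),
                   ("Sodium bicarbonate", sbc_annual)]:
    print(f"{name:20s} ~${cost / 1e6:5.1f} M/yr")
```

Even with these crude inputs, the qualitative conclusion of the paper carries through: once sorbent purchase, lost ash sales and added disposal are counted, the annualized cost of the WESP is the lowest of the three options.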
Effect of shRNA on C-erbB-2 gene expression in mouse Lewis cells

曹新梅 (Cao Xinmei), 张岱权 (Zhang Daiquan), 王栩 (Wang Xu), 郑春燕 (Zheng Chunyan)
(Luzhou Medical College: Department of Immunology; Department of Traditional Chinese Medicine and Department of Hematology, Affiliated Hospital; Luzhou 646000, Sichuan, China)

Abstract

Objective: To investigate the inhibition of C-erbB-2 gene expression in mouse Lewis lung adenocarcinoma cells by shRNA expression plasmids. Methods: The plasmids pGPU6/RFP/Neo-erbB-2-mus-1944, pGPU6/RFP/Neo-erbB-2-mus-2310, pGPU6/RFP/Neo-erbB-2-mus-3669 and pGPU6/RFP/Neo-shNC were constructed, defining four transfection groups plus a blank control group. RT-PCR was used to measure C-erbB-2 mRNA levels before and after transfection, and Western blotting was used to detect the expression of the corresponding protein. Results: Transfection efficiency was highest at a DNA:Lipofectamine 2000 ratio of 1:2.5, and all subsequent experiments were transfected at this ratio. The four pGPU6/ ...

... synthesized. After the pGPU6/RFP/Neo-shNC plasmid was transfected into mouse Lewis cells with the ratio of DNA to ...
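The abstract reports RT-PCR measurements of C-erbB-2 mRNA before and after transfection but, being an abstract, does not spell out how relative expression is computed. The sketch below shows the widely used 2^-ΔΔCt relative-quantification formula as one way such knockdown data are commonly reduced; the Ct values are made up for illustration and do not come from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the 2^-ddCt method.

    ct_target       : Ct of the gene of interest in the shRNA-treated sample
    ct_ref          : Ct of the reference gene in the treated sample
    ct_target_ctrl  : Ct of the gene of interest in the control (e.g. shNC) sample
    ct_ref_ctrl     : Ct of the reference gene in the control sample
    """
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Made-up Ct values in which the target is knocked down relative to the shNC control
print(relative_expression(26.5, 18.0, 24.0, 18.1))   # ~0.16, i.e. roughly 80% knockdown
```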
Recent Advances in Robust Optimization and Robustness: An Overview

Virginie Gabrel*, Cécile Murat† and Aurélie Thiele‡

July 2012

Abstract

This paper provides an overview of developments in robust optimization and robustness published in the academic literature over the past five years.

1 Introduction

This review focuses on papers identified by Web of Science as having been published since 2007 (included), belonging to the area of Operations Research and Management Science, and having 'robust' and 'optimization' in their title. There were exactly 100 such papers as of June 20, 2012. We have completed this list by considering 726 works indexed by Web of Science that had either robustness (for 80 of them) or robust (for 646) in their title and belonged to the Operations Research and Management Science topic area. We also identified 34 PhD dissertations dated from the last five years with 'robust' in their title and belonging to the areas of operations research or management. Among those we have chosen to focus on the works with a primary focus on management science rather than system design or optimal control, which are broad fields that would deserve a review paper of their own, and papers that could be of interest to a large segment of the robust optimization research community. We feel it is important to include PhD dissertations to identify these recent graduates as the new generation trained in robust optimization and robustness analysis, whether they have remained in academia or joined industry. We have also added a few not-yet-published preprints to capture ongoing research efforts. While many additional works would have deserved inclusion, we feel that the works selected give an informative and comprehensive view of the state of robustness and robust optimization to date in the context of operations research and management science.

* Université Paris-Dauphine, LAMSADE, Place du Maréchal de Lattre de Tassigny, F-75775 Paris Cedex 16, France, gabrel@lamsade.dauphine.fr. Corresponding author.
† Université Paris-Dauphine, LAMSADE, Place du Maréchal de Lattre de Tassigny, F-75775 Paris Cedex 16, France, murat@lamsade.dauphine.fr
‡ Lehigh University, Industrial and Systems Engineering Department, 200 W Packer Ave, Bethlehem, PA 18015, USA, aurelie.thiele@

2 Theory of Robust Optimization and Robustness

2.1 Definitions and Basics

The term "robust optimization" has come to encompass several approaches to protecting the decision-maker against parameter ambiguity and stochastic uncertainty. At a high level, the manager must determine what it means for him to have a robust solution: is it a solution whose feasibility must be guaranteed for any realization of the uncertain parameters? Or whose objective value must be guaranteed? Or whose distance to optimality must be guaranteed?
The main paradigm relies on worst-case analysis: a solution is evaluated using the realization of the uncertainty that is most unfavorable. The way to compute the worst case is also open to debate: should it use a finite number of scenarios, such as historical data, or continuous, convex uncertainty sets, such as polyhedra or ellipsoids? The answers to these questions will determine the formulation and the type of the robust counterpart. Issues of over-conservatism are paramount in robust optimization, where the uncertain parameter set over which the worst case is computed should be chosen to achieve a trade-off between system performance and protection against uncertainty, i.e., neither too small nor too large.

2.2 Static Robust Optimization

In this framework, the manager must take a decision in the presence of uncertainty and no recourse action will be possible once uncertainty has been realized. It is then necessary to distinguish between two types of uncertainty: uncertainty on the feasibility of the solution and uncertainty on its objective value. Indeed, the decision maker generally has different attitudes with respect to infeasibility and sub-optimality, which justifies analyzing these two settings separately.

2.2.1 Uncertainty on feasibility

When uncertainty affects the feasibility of a solution, robust optimization seeks to obtain a solution that will be feasible for any realization taken by the unknown coefficients; however, complete protection from adverse realizations often comes at the expense of a severe deterioration in the objective. This extreme approach can be justified in some engineering applications of robustness, such as robust control theory, but is less advisable in operations research, where adverse events such as low customer demand do not produce the high-profile repercussions that engineering failures (such as a doomed satellite launch or a destroyed unmanned robot) can have. To make the robust methodology appealing to business practitioners, robust optimization thus focuses on obtaining a solution that will be feasible for any realization taken by the unknown coefficients within a smaller, "realistic" set, called the uncertainty set, which is centered around the nominal values of the uncertain parameters. The goal becomes to optimize the objective over the set of solutions that are feasible for all coefficient values in the uncertainty set. The specific choice of the set plays an important role in ensuring computational tractability of the robust problem and limiting deterioration of the objective at optimality, and must be thought through carefully by the decision maker. A large branch of robust optimization focuses on worst-case optimization over a convex uncertainty set. The reader is referred to Bertsimas et al. (2011a) and Ben-Tal and Nemirovski (2008) for comprehensive surveys of robust optimization and to Ben-Tal et al. (2009) for a book treatment of the topic.
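As a concrete illustration of the uncertainty-set approach described above (a standard textbook construction rather than a result of any one of the surveyed papers), consider a single linear constraint whose coefficients are only known to lie in intervals. Its robust counterpart can be written in closed form:

```latex
\[
\mathcal{U} \;=\; \bigl\{\, a \in \mathbb{R}^n \;:\; a_j = \bar{a}_j + \hat{a}_j\,\zeta_j,\ \ |\zeta_j| \le 1,\ \ j=1,\dots,n \,\bigr\},
\]
\[
a^{\top}x \le b \ \ \forall a \in \mathcal{U}
\;\Longleftrightarrow\;
\max_{a \in \mathcal{U}} a^{\top}x \le b
\;\Longleftrightarrow\;
\sum_{j=1}^{n} \bar{a}_j x_j \;+\; \sum_{j=1}^{n} \hat{a}_j\,|x_j| \;\le\; b,
\]
\[
\text{or, in linear form,}\quad
\sum_{j=1}^{n} \bar{a}_j x_j + \sum_{j=1}^{n} \hat{a}_j y_j \le b,
\qquad -y_j \le x_j \le y_j,\ \ j=1,\dots,n.
\]
```

Thus, for box uncertainty the robust counterpart of a linear program is again a linear program; budgeted or ellipsoidal uncertainty sets lead, by the same worst-case reasoning, to linear or second-order cone counterparts, which is the tractability property that the surveys cited above develop in full generality.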
2.2.2 Uncertainty on objective value

When uncertainty affects the optimality of a solution, robust optimization seeks to obtain a solution that performs well for any realization taken by the unknown coefficients. While a common criterion is to optimize the worst-case objective, some studies have investigated other robustness measures. Roy (2010) proposes a new robustness criterion that holds great appeal for the manager due to its simplicity of use and practical relevance. This framework, called bw-robustness, allows the decision-maker to identify a solution which guarantees an objective value, in a maximization problem, of at least w in all scenarios, and maximizes the probability of reaching a target value of b (b > w). Gabrel et al. (2011) extend this criterion from a finite set of scenarios to the case of an uncertainty set modeled using intervals. Kalai et al. (2012) suggest another criterion called lexicographic α-robustness, also defined over a finite set of scenarios for the uncertain parameters, which mitigates the primary role of the worst-case scenario in defining the solution. Thiele (2010) discusses over-conservatism in robust linear optimization with cost uncertainty. Gancarova and Todd (2012) study the loss in objective value when an inaccurate objective is optimized instead of the true one, and show that on average this loss is very small, for an arbitrary compact feasible region. In combinatorial optimization, Morrison (2010) develops a framework of robustness based on persistence (of decisions), using Dempster-Shafer theory as an evidence of robustness, and applies it to portfolio tracking and sensor placement.

2.2.3 Duality

Since duality has been shown to play a key role in the tractability of robust optimization (see for instance Bertsimas et al. (2011a)), it is natural to ask how duality and robust optimization are connected. Beck and Ben-Tal (2009) show that primal worst is equal to dual best. The relationship between robustness and duality is also explored in Gabrel and Murat (2010) when the right-hand sides of the constraints are uncertain and the uncertainty sets are represented using intervals, with a focus on establishing the relationships between linear programs with uncertain right-hand sides and linear programs with uncertain objective coefficients using duality theory. This avenue of research is further explored in Gabrel et al. (2010) and Remli (2011).

2.3 Multi-Stage Decision-Making

Most early work on robust optimization focused on static decision-making: the manager decided at once of the values taken by all decision variables and, if the problem allowed for multiple decision stages as uncertainty was realized, the stages were incorporated by re-solving the multi-stage problem as time went by and implementing only the decisions related to the current stage. As the field of static robust optimization matured, incorporating, in a tractable manner, the information revealed over time directly into the modeling framework became a major area of research.

2.3.1 Optimal and Approximate Policies

A work going in that direction is Bertsimas et al. (2010a), which establishes the optimality of policies affine in the uncertainty for one-dimensional robust optimization problems with convex state costs and linear control costs. Chen et al. (2007) also suggest a tractable approximation for a class of multistage chance-constrained linear programming problems, which converts the original formulation into a second-order cone programming problem. Chen and Zhang (2009) propose an extension of the Affinely Adjustable Robust Counterpart framework described in Ben-Tal et al. (2009) and argue that its potential is well beyond what has been in the literature so far.

2.3.2 Two stages

Because of the difficulty in incorporating multiple stages in robust optimization, many theoretical works have focused on two stages. Regarding two-stage problems, Thiele et al. (2009) presents a cutting-plane method based on Kelley's algorithm for solving convex adjustable robust optimization problems, while Terry (2009) provides in addition preliminary results on the conditioning of a robust linear program and of an equivalent second-order cone program. Assavapokee et al. (2008a) and Assavapokee et al. (2008b) develop tractable
algorithms in the case of robust two-stage problems where the worst-case regret is minimized, under interval-based uncertainty and scenario-based uncertainty, respectively, while Minoux (2011) provides complexity results for the two-stage robust linear problem with right-hand-side uncertainty.

2.4 Connection with Stochastic Optimization

An early stream in robust optimization modeled stochastic variables as uncertain parameters belonging to a known uncertainty set, to which robust optimization techniques were then applied. An advantage of this method was to yield approaches to decision-making under uncertainty that were of a level of complexity similar to that of their deterministic counterparts, and did not suffer from the curse of dimensionality that afflicts stochastic and dynamic programming. Researchers are now making renewed efforts to connect the robust optimization and stochastic optimization paradigms, for instance quantifying the performance of the robust optimization solution in the stochastic world. The topic of robust optimization in the context of uncertain probability distributions, i.e., in the stochastic framework itself, is also being revisited.

2.4.1 Bridging the Robust and Stochastic Worlds

Bertsimas and Goyal (2010) investigate the performance of static robust solutions in two-stage stochastic and adaptive optimization problems. The authors show that static robust solutions are good-quality solutions to the adaptive problem under a broad set of assumptions. They provide bounds on the ratio of the cost of the optimal static robust solution to the optimal expected cost in the stochastic problem, called the stochasticity gap, and on the ratio of the cost of the optimal static robust solution to the optimal cost in the two-stage adaptable problem, called the adaptability gap. Chen et al. (2007), mentioned earlier, also provide a robust optimization perspective to stochastic programming. Bertsimas et al. (2011a) investigate the role of geometric properties of uncertainty sets, such as symmetry, in the power of finite adaptability in multistage stochastic and adaptive optimization.

Duzgun (2012) bridges descriptions of uncertainty based on stochastic and robust optimization by considering multiple ranges for each uncertain parameter and setting the maximum number of parameters that can fall within each range. The corresponding optimization problem can be reformulated in a tractable manner using the total unimodularity of the feasible set, and allows for a finer description of uncertainty while preserving tractability. It also studies the formulations that arise in robust binary optimization with uncertain objective coefficients using the Bernstein approximation to chance constraints described in Ben-Tal et al. (2009), and shows that the robust optimization problems are deterministic problems for modified values of the coefficients. While many results bridging the robust and stochastic worlds focus on giving probabilistic guarantees for the solutions generated by the robust optimization models, Manuja (2008) proposes a formulation for robust linear programming problems that allows the decision-maker to control both the probability and the expected value of constraint violation.

Bandi and Bertsimas (2012) propose a new approach to analyze stochastic systems based on robust optimization. The key idea is to replace the Kolmogorov axioms and the concept of random variables as primitives of probability theory with uncertainty sets that are derived from some of the asymptotic implications of probability theory, like the
central limit theorem. The authors show that the performance analysis questions become highly structured optimization problems for which there exist efficient algorithms that are capable of solving problems in high dimensions. They also demonstrate that the proposed approach achieves computationally tractable methods for (a) analyzing queueing networks, (b) designing multi-item, multi-bidder auctions with budget constraints, and (c) pricing multi-dimensional options.

2.4.2 Distributionally Robust Optimization

Ben-Tal et al. (2010) consider the optimization of a worst-case expected-value criterion, where the worst case is computed over all probability distributions within a set. The contribution of the work is to define a notion of robustness that allows for different guarantees for different subsets of probability measures. The concept of distributional robustness is also explored in Goh and Sim (2010), with an emphasis on linear and piecewise-linear decision rules to reformulate the original problem in a flexible manner using expected-value terms. Xu et al. (2012) also investigate probabilistic interpretations of robust optimization.

A related area of study is worst-case optimization with partial information on the moments of distributions. In particular, Popescu (2007) analyzes robust solutions to a certain class of stochastic optimization problems, using mean-covariance information about the distributions underlying the uncertain parameters. The author connects the problem for a broad class of objective functions to a univariate mean-variance robust objective and, subsequently, to a (deterministic) parametric quadratic programming problem.

The reader is referred to Doan (2010) for a moment-based uncertainty model for stochastic optimization problems, which addresses the ambiguity of probability distributions of random parameters with a minimax decision rule, and a comparison with data-driven approaches. Distributionally robust optimization in the context of data-driven problems is the focus of Delage (2009), which uses observed data to define a "well structured" set of distributions that is guaranteed with high probability to contain the distribution from which the samples were drawn.
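To fix notation for the distributionally robust models discussed in this subsection, the generic problem optimizes against the worst distribution in an ambiguity set; a schematic moment-based version (generic notation, not that of any single paper cited here) is:

```latex
\[
\min_{x \in X} \;\; \sup_{\mathbb{P} \in \mathcal{D}} \; \mathbb{E}_{\mathbb{P}}\bigl[ f(x,\tilde{\xi}) \bigr],
\qquad
\mathcal{D} \;=\; \Bigl\{\, \mathbb{P} \;:\; \mathbb{P}\bigl(\tilde{\xi} \in \Xi\bigr) = 1,\;\;
\mathbb{E}_{\mathbb{P}}[\tilde{\xi}] = \mu,\;\;
\mathbb{E}_{\mathbb{P}}\bigl[(\tilde{\xi}-\mu)(\tilde{\xi}-\mu)^{\top}\bigr] \preceq \Sigma \,\Bigr\}.
\]
```

For ambiguity sets of this moment-and-support type, the inner supremum can typically be reformulated through conic duality, which is what underlies the tractable SDP-based approximations and data-driven constructions surveyed in the remainder of this section.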
Zymler et al. (2012a) develop tractable semidefinite programming (SDP) based approximations for distributionally robust individual and joint chance constraints, assuming that only the first- and second-order moments as well as the support of the uncertain parameters are given. Becker (2011) studies the distributionally robust optimization problem with known mean, covariance and support, and develops a decomposition method for this family of problems which recursively derives sub-policies along projected dimensions of uncertainty while providing a sequence of bounds on the value of the derived policy. Robust linear optimization using distributional information is further studied in Kang (2008).

Further, Delage and Ye (2010) investigate distributional robustness with moment uncertainty. Specifically, uncertainty affects the problem both in terms of the distribution and of its moments. The authors show that the resulting problems can be solved efficiently and prove that the solutions exhibit, with high probability, best worst-case performance over a set of distributions.

Bertsimas et al. (2010) propose a semidefinite optimization model to address minimax two-stage stochastic linear problems with risk aversion, when the distribution of the second-stage random variables belongs to a set of multivariate distributions with known first and second moments. The minimax solutions provide a natural distribution to stress-test stochastic optimization problems under distributional ambiguity. Cromvik and Patriksson (2010a) show that, under certain assumptions, global optima and stationary solutions of stochastic mathematical programs with equilibrium constraints are robust with respect to changes in the underlying probability distribution. Works such as Zhu and Fukushima (2009) and Zymler (2010) also study distributional robustness in the context of specific applications, such as portfolio management.

2.5 Connection with Risk Theory

Bertsimas and Brown (2009) describe how to connect uncertainty sets in robust linear optimization to coherent risk measures, an example of which is Conditional Value-at-Risk. In particular, the authors show the link between polyhedral uncertainty sets of a special structure and a subclass of coherent risk measures called distortion risk measures. Independently, Chen et al. (2007) present an approach for constructing uncertainty sets for robust optimization using new deviation measures that capture the asymmetry of the distributions. These deviation measures lead to improved approximations of chance constraints.

Dentcheva and Ruszczynski (2010) propose the concept of robust stochastic dominance and show its application to risk-averse optimization. They consider stochastic optimization problems where risk-aversion is expressed by a robust stochastic dominance constraint and develop necessary and sufficient conditions of optimality for such optimization problems in the convex case. In the nonconvex case, they derive necessary conditions of optimality under additional smoothness assumptions on some mappings involved in the problem.

2.6 Nonlinear Optimization

Robust nonlinear optimization remains much less widely studied to date than its linear counterpart. Bertsimas et al. (2010c) present a robust optimization approach for unconstrained non-convex problems and problems based on simulations. Such problems arise for instance in the partial differential equations literature and in engineering applications such as nanophotonic design. An appealing feature of the approach is that it does not assume any specific structure for the problem. The
case of robust nonlinear optimization with constraints is investigated in Bertsimas et al. (2010b), with an application to radiation therapy for cancer treatment. Bertsimas and Nohadani (2010) further explore robust nonconvex optimization in contexts where solutions are not known explicitly, e.g., have to be found using simulation. They present a robust simulated annealing algorithm that improves performance and robustness of the solution.

Further, Boni et al. (2008) analyze problems with uncertain conic quadratic constraints, formulating an approximate robust counterpart, and Zhang (2007) provides formulations for nonlinear programming problems that are valid in the neighborhood of the nominal parameters and robust to the first order. Hsiung et al. (2008) present tractable approximations to robust geometric programming, by using piecewise-linear convex approximations of each nonlinear constraint. Geometric programming is also investigated in Shen et al. (2008), where the robustness is injected at the level of the algorithm and seeks to avoid obtaining infeasible solutions because of the approximations used in the traditional approach.

Interval uncertainty-based robust optimization for convex and non-convex quadratic programs is considered in Li et al. (2011). Takeda et al. (2010) study robustness for uncertain convex quadratic programming problems with ellipsoidal uncertainties and propose a relaxation technique based on random sampling for robust deviation optimization problems. Lasserre (2011) considers minimax and robust models of polynomial optimization.

A special case of nonlinear problems that are linear in the decision variables but convex in the uncertainty, when the worst-case objective is to be maximized, is investigated in Kawas and Thiele (2011a). In that setting, exact and tractable robust counterparts can be derived. A special class of nonconvex robust optimization is examined in Kawas and Thiele (2011b). Robust nonconvex optimization is examined in detail in Teo (2007), which presents a method that is applicable to arbitrary objective functions by iteratively moving along descent directions and that terminates at a robust local minimum.

3 Applications of Robust Optimization

We describe below examples to which robust optimization has been applied. While an appealing feature of robust optimization is that it leads to models that can be solved using off-the-shelf software, it is worth pointing out the existence of algebraic modeling tools that facilitate the formulation and subsequent analysis of robust optimization problems on the computer (Goh and Sim, 2011).

3.1 Production, Inventory and Logistics

3.1.1 Classical logistics problems

The capacitated vehicle routing problem with demand uncertainty is studied in Sungur et al. (2008), with a more extensive treatment in Sungur (2007), and the robust traveling salesman problem with interval data in Montemanni et al. (2007). Remli and Rekik (2012) consider the problem of combinatorial auctions in transportation services when shipment volumes are uncertain and propose a two-stage robust formulation solved using a constraint generation algorithm. Zhang (2011) investigates two-stage minimax regret robust uncapacitated lot-sizing problems with demand uncertainty, in particular showing that the problem is polynomially solvable under the interval uncertain demand set.

3.1.2 Scheduling

Goren and Sabuncuoglu (2008) analyze robustness and stability measures for scheduling in a single-machine environment subject to machine breakdowns and embed them in a tabu-search-based scheduling algorithm. Mittal (2011) investigates efficient
algorithms that give optimal or near-optimal solutions for problems with non-linear objective functions, with a focus on robust scheduling and service operations. Examples considered include parallel machine scheduling problems with the makespan objective, appointment scheduling, and assortment optimization problems with logit choice models. Hazir et al. (2010) consider robust scheduling and robustness measures for the discrete time/cost trade-off problem.

3.1.3 Facility location

An important question in logistics is not only how to operate a system most efficiently but also how to design it. Baron et al. (2011) apply robust optimization to the problem of locating facilities in a network facing uncertain demand over multiple periods. They consider a multi-period fixed-charge network location problem for which they find the number of facilities, their location and capacities, the production in each period, and the allocation of demand to facilities. The authors show that different models of uncertainty lead to very different solution network topologies, with the model with box uncertainty set opening fewer, larger facilities. ? investigate a robust version of the location transportation problem with an uncertain demand using a 2-stage formulation. The resulting robust formulation is a convex (nonlinear) program, and the authors apply a cutting plane algorithm to solve the problem exactly.

Atamtürk and Zhang (2007) study the network flow and design problem under uncertainty from a complexity standpoint, with applications to lot-sizing and location-transportation problems, while Bardossy (2011) presents a dual-based local search approach for deterministic, stochastic, and robust variants of the connected facility location problem.

The robust capacity expansion problem for network flows is investigated in Ordonez and Zhao (2007), which provides tractable reformulations under a broad set of assumptions. Mudchanatongsuk et al. (2008) analyze the network design problem under transportation cost and demand uncertainty. They present a tractable approximation when each commodity only has a single origin and destination, and an efficient column generation for networks with path constraints. Atamtürk and Zhang (2007) provide complexity results for the two-stage network flow and design problem. Complexity results for the robust network flow and network design problem are also provided in Minoux (2009) and Minoux (2010). The problem of designing an uncapacitated network in the presence of link failures and a competing mode is investigated in Laporte et al. (2010) in a railway application using a game theoretic perspective.

Torres Soto (2009) also takes a comprehensive view of the facility location problem by determining not only the optimal location but also the optimal time for establishing capacitated facilities when demand and cost parameters are time varying. The models are solved using Benders' decomposition or heuristics such as local search and simulated annealing. In addition, the robust network flow problem is also analyzed in Boyko (2010), which proposes a stochastic formulation of the minimum cost flow problem aimed at finding network design and flow assignments subject to uncertain factors, such as network component disruptions/failures, when the risk measure is Conditional Value at Risk.

Nagurney and Qiang (2009) suggest a relative total cost index for the evaluation of transportation network robustness in the presence of degradable links and alternative travel behavior. Further, the problem of locating a competitive facility in the plane is studied in Blanquero et
al. (2011) with a robustness criterion. Supply chain design problems are also studied in Pan and Nagi (2010) and Poojari et al. (2008).

3.1.4 Inventory management

The topic of robust multi-stage inventory management has been investigated in detail in Bienstock and Ozbay (2008), through the computation of robust basestock levels, and in Ben-Tal et al. (2009), through an extension of the Affinely Adjustable Robust Counterpart framework to control inventories under demand uncertainty. See and Sim (2010) study a multi-period inventory control problem under ambiguous demand for which only the mean, support and some measures of deviation are known, using a factor-based model. The parameters of the replenishment policies are obtained by solving a second-order conic programming problem.

Song (2010) considers stochastic inventory control in robust supply chain systems. The work proposes an integrated approach that combines data fitting and inventory optimization in a single step (using histograms directly as the inputs for the optimization model) for the single-item multi-period periodic-review stochastic lot-sizing problem. Operation and planning issues for dynamic supply chain and transportation networks in uncertain environments are considered in Chung (2010), with examples drawn from emergency logistics planning, network design and congestion pricing problems.

3.1.5 Industry-specific applications

Ang et al. (2012) propose a robust storage assignment approach in unit-load warehouses facing variable supply and uncertain demand in a multi-period setting. The authors assume a factor-based demand model and minimize the worst-case expected total travel in the warehouse under distributional ambiguity of demand. A related problem is considered in Werners and Wuelfing (2010), which optimizes internal transports at a parcel sorting center.

Galli (2011) describes the models and algorithms that arise from implementing recoverable robust optimization to train platforming and rolling stock planning, where the concept of recoverable robustness has been defined in
Reactions and characteristics of sulfated surfaces
Catalysis Today59(2000)305–312Characterization and reactivity of pure TiO2–SO42−SCRcatalyst:influence of SO42−contentSeong Moon Jung∗,Paul GrangeUnitéde Catalyse et Chimie des Matériaux Divisés,UniversitéCatholique de Louvain,Pl.Croix du Sud2/17,B-1348Louvain-la-Neuve,BelgiumAbstractThe modifications of textural and surface properties of sulfated TiO2have been investigated by means of XRD,surface area,XPS,Raman,FT-IR,NH3-TPD,and catalytic tests in the reduction of NO by NH3.According to sulfate content,the isolatedsulfate transforms to a polynuclear sulfate type.Between SO42−=3.0and6.6wt.%,the amount of S=O species per unit area is almost constant and equal.Total number of acid sites increases with the SO42−content,while the strongest sites are maximumat1.5wt.%loading.The areal TOF(turnover frequency,mole NO converted/s m2)is also higher at SO42−=1.5wt.%and is directly related to the number of strong Lewis acid sites.Accordingly,it is suggested that the strong Lewis site generated by doping TiO2with SO42−is responsible for the higher reactivity of TiO2–SO42−at high temperature.©2000Elsevier Science B.V.All rights reserved.Keywords:Titania sulfate;SCR1.IntroductionThe elimination of nitrogen oxides emitted from the combustion process is particularly important in the reduction of the environmental problems caused by the formation of acid rain and depletion of ozone through the secondary reactions in atmosphere.Various processes have been proposed for the elim-ination of NO x through widespread application of available methods and/or via the development of new technologies[1].Amongst them,the SCR(selective catalytic reduction)is considered as the most efficient technology.The usual catalyst of SCR process con-sists of V2O5and TiO2as main components.Since this type of catalyst is used in industrial plants and satisfies the user as far as efficiency is concerned,the behavior of the different active components such as ∗Corresponding author.Fax:+32-10-47-36-49.E-mail address:jung@cata.ucl.ac.be(S.M.Jung)V,W,Mo as well as their interaction with the support have been deeply investigated.However,in the specific condition of SCR there is still some problem since severe conditions of SCR re-action induce side reactions and catalyst deactivation. For example,when the temperature of SCR reaction exceeds400◦C,a decrease of the activity and selectiv-ity of the catalyst mainly due to production of NO and NO2caused by ammonia oxidation is observed[2–3]. Accordingly,some researchers have proposed vari-ous attempts to overcome such problems on SCR cat-alysts[4–5].Amongst them,the choice of an SO42−ion as TiO2promoter is attractive since it has a high activity in high temperature reactions between400and 600◦C[6].It has been proposed that the sulfate pro-duces a strong acidic site at high temperature[7–11], and this can lead to the high reactivity of the SCR reaction without by-products.However,until now,no in-depth interpretation of the interaction between TiO2and SO42−has been0920-5861/00/$–see front matter©2000Elsevier Science B.V.All rights reserved. 
PII:S0920-5861(00)00296-0306S.M.Jung,P.Grange/Catalysis Today59(2000)305–312proposed.Furthermore,in the SCR process,the role of SO42−in pure TiO2–SO42−catalysts could not be explained on the basis of conventional SCR mecha-nisms[12].Accordingly,this work is mainly focused on the change of physicochemical properties caused by the TiO2–SO42−catalyst with variable SO42−content and then correlated with deNO x properties.2.Experiment2.1.Preparation of catalystsTi(OH)4was prepared by precipitation of a solution of titanium tetrachloride.The precipitate obtained by adding25wt.%ammonia solution was washed with hot distilled water until no chloride ion was detected and dried at100◦C for12h,then calcined at500◦C for5h.To obtain the sulfated catalyst,H2SO4was used as an initial precursor of sulfate in the catalyst prepa-ration.Titanium hydroxide obtained through precip-itation of TiCl4was dried at100◦C for24h.The calculated H2SO4solution(0.1N)was added to the titanium hydroxide.The solids were then dried for 12h at100◦C and calcined for5h at500◦C.To con-sider the decomposition of sulfate due to the thermal stability,the real loading amounts of sulfate after calcination were verified by the elemental analysis.2.2.CharacterizationThe elemental analyses(Ti,S)were performed by inductively coupled plasma-optical emission spec-troscopy(ICP-OES).The specific surface areas of the samples were mea-sured by the BET method with a Micromeritics ASAP 2000equipment using nitrogen at−196◦C.Prior to measurements,all samples were outgassed at150◦C until vacuum was reached at1×10−5Torr.The X-ray diffraction patterns were recorded using a Siemens D-5000powder diffractometer with nickel filtered Cu K␣radiation(λ=1.5404Å).The step scans were taken over the range of2θfrom5to80.The X-ray photoelectron spectra were obtained with a Surface Science Instruments SSX-100model206 spectrometer with a monochromatised Al K␣source,operating at10kV and12mA.The residual pressure inside the analysis chamber was below5×10−9Torr. 
The binding energies of O1s,Ti2p and S2p were referenced to the C1s band at284.8eV.The Raman spectra were measured with a Dilor In-strument S.A.spectrometer with the632nm line of Ar ion laser as excitation source under ambient con-ditions.The number of scans is5and the time of accumulation is5s per scan.An NH3-TPD(temperature programmed desorp-tion)spectrum was obtained by monitoring the des-orbed ammonia,after adsorbing ammonia on the catalyst at100◦C using pure ammonia,while increas-ing the temperature of the sample at a constant rate (10◦C/min)and maintaining the carrier gasflow rate (60cm3He/min).The outlet gas was passed through a 20wt.%H3BO3solution in order to check the amount of NH3.FT-IR spectra were recorded using a Brucker FT 88spectrometer.The samples were pressed into self-supporting disks,placed in an IR cell,and treated under vacuum(10−6Torr)at400◦C for2h.To obtain the spectra of NH3adsorbed on surface,after cooling to room temperature,the samples were exposed to ammonia for3min.Then,spectra were recorded after evacuation(5×10−5Torr)for30min at400◦C.2.3.Catalytic activity measurementActivity measurements were performed in a contin-uousflowfixed bed reactor operating at atmospheric pressure.GHSV(h−1)was46000.The totalflow rate was100ml/min and feed composition was:nitric ox-ide0.1vol.%;ammonia0.105vol.%;2.5vol.%oxy-gen,in helium.The inlet and outlet gas compositions were measured using a quadrupole mass spectrometer QMC311Balzers coupled to the reactor.All con-versions were measured at the steady state achieved, under our experimental conditions,after30–40min at 400◦C.3.Results and discussion3.1.Structure and morphologyThe use of titanium hydroxide as sulfated precur-sor should induce textural modifications of samplesS.M.Jung,P.Grange/Catalysis Today59(2000)305–312307 Table1Structure and textural dataSO42−loading(wt.%)BET surface area(m2/g)Pore volume(cm3/g)Pore diameter(Å)Detected phase(XRD) 0400.0661Anatase1.5580.0965Anatase3.0840.1468Anatase6.61040.1663Anataseduring the calcination,since the concentration of hy-droxides on the particles which are able to lead to theagglomeration in the particle and/or between particlesis modified by the sulfation.Table1shows structure and textural data along withthe increase of the SO42−ion contents.BET surfacearea is proportional to the amount of loaded sulfateand an increase of7m2/g wt.%SO42−is observed. 
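The dispersion numbers discussed below (Section 3.2 and Table 2) follow from simple atom counting, so they can be checked in a few lines. The sketch below is not from the paper: it combines the Table 1 BET areas with the nominal molar mass of the sulfate group, Avogadro's number, and the 7 Ti atoms/nm^2 surface density assumed in the text, and it reproduces the order of magnitude of the reported values; small differences from Table 2 are expected from rounding and the exact mass basis used there.

```python
# Minimal sketch (not from the paper): sulfate surface density and S/Ti atomic
# ratio estimated from the sulfate loading and the BET areas of Table 1,
# assuming the 7 Ti atoms/nm^2 surface density quoted in Section 3.2.

N_A = 6.022e23        # Avogadro's number, mol^-1
M_SO4 = 96.06         # g/mol, molar mass of the SO4(2-) group
TI_PER_NM2 = 7.0      # assumed surface density of Ti atoms, atoms/nm^2

bet_area_m2_per_g = {1.5: 58.0, 3.0: 84.0, 6.6: 104.0}   # wt.% SO4(2-) -> m^2/g (Table 1)

for loading, area in bet_area_m2_per_g.items():
    so4_per_g = (loading / 100.0) / M_SO4 * N_A       # SO4 groups per gram of catalyst
    so4_per_nm2 = so4_per_g / (area * 1e18)           # 1 m^2 = 1e18 nm^2
    s_to_ti = so4_per_nm2 / TI_PER_NM2                # atomic S/Ti ratio at the surface
    print(f"{loading:>3} wt.%  {so4_per_nm2:.1f} S/nm^2  S/Ti = {s_to_ti:.2f}")

# Output is roughly 1.6, 2.2, and 4.0 S/nm^2 with S/Ti of about 0.23, 0.32, and
# 0.57, i.e. only the 6.6 wt.% sample exceeds the 0.33 ceiling expected for an
# isolated tridentate (Ti-O)3S=O species.
```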
The average pore diameter of all TiO2sulfate samplesis constant.It is known that the degree of agglomer-ation at constant pore diameter can be controlled bythe concentration of the free hydroxide in the particlewithout the decrease of the number of bonds betweenparticles[13].Accordingly,this type of agglomera-tion occurring in TiO2–SO42−indicates that the freeOH bond sites of Ti(OH)4in the particle,where theagglomeration/crystallization takes place during cal-cination,are contacted by the addition of SO42−ion.Fig.1shows the relation of calculated sulfate den-sity per unit area with the S/Ti ratio measured by XPS.The S/Ti ratio is linearly proportional to the sulfateconcentration.This linearity evidences the homoge-neous distribution of sulfate in all the samples.3.2.Type of SO42−species adsorbed on TiO2From the BET surface area and the sulfate con-tents,the number of S(atoms/m2)can be calculatedand considering a surface concentration7atoms/nm2of Ti,the atomic S/Ti ratio of S/Ti can also be evalu-ated.This is shown in Table2.The atomic ratio S/Tiincreases with sulfate content and a maximum valueof0.59when SO42−=6.6wt.%is obtained.Saur et al.[14]using isotope exchange and IR analysis proposeda(Ti–O)3S=O structure under dry conditions.There-fore,based on this structure,the0.33S/Ti atomic ratiois considered as a maximum value under dry condi-tions.The S/Ti ratio up to3.0wt.%SO42−catalyst is not over the maximum value of0.33,but at higher con-centration of SO42−(6.6wt.%),although the surface area is increased according to the increase of sulfate amounts,the S/Ti ratio is0.59.This evidences that the S=O structure is not(Ti–O)3S=O at high SO42−con-centration.In other words,the sulfate type changes from tridentate species to bidentate species with SO42−content.As the sulfate loading is increased,it is proposed that S2O72−and S3O102−species may also be present[15–18].To identify the S=O species in sulfate com-plex,FT-IR is a very useful method and has been widely used.IR spectra of sulfated metal oxides nor-mally present a strong absorption band near1380–1360cm−1and a broad band around1000–1200cm−1.Fig.1.Dispersion of sulfate on the surface of TiO2.308S.M.Jung,P .Grange /Catalysis Today 59(2000)305–312Table 2Distribution of sulfate and calculated data SO 42−loading Total number of Number of %SO 42−Total number of surfaceS/Ti atomic (wt.%)SO 42−(1×1018/g)(1×1018/m 2)Ti atom (1×1018/g)[7Ti/nm 2]ratio 00028001.597.3 1.73360.233.0198.6 2.45880.336.6428.34.17280.59In all the samples,after pretreatment at 400◦C,a very intensive peak is observed at around 1370cm −1,and can be attributed to S =O stretching vibration mode.Fig.2presents the relation of the concentration of S =O per unit area with the concentration of sulfate.Up to SO 42−=3.0wt.%,the amount of S =O increases and then reaches a plateau.This shows that the S =O concentration decreases at high sulfate content.This would confirm that the generation of bidentate sulfate at 6.6wt.%is due to polynuclear sulfate at the expense of isolated sulfate.The different proportion of S =O between the iso-lated and polynuclear sulfate may be distinguished and confirmed by Raman spectroscopy.As previously reported,Raman spectroscopy is an excellent tooltoFig.2.Peak position and relative concentration of S =O obtained from FT-IR according to the sulfur density per unit area.detect the double bond according to the change of environment.It is well known that the range between 900and 1100cm −1is sensitive to the change of dou-ble bond stretching and used to analyze 
some SCR catalysts such as V 2O 5,WO 3and V 2O 5/WO 3[19].The Raman spectra for TiO 2sulfate samples,recorded under ambient conditions,are presented in Fig.3.In pure TiO 2,no peak is detected in this region.When the amount of sulfate increases up to 1.5wt.%,a peak located at 1001cm −1is observed.Assignment based on the FT-IR results and evaluation of S/Ti ratio,indicates that the peak at 1001cm −1is due to the S =O bonds of the isolated sulfate.As the amount of sulfate on the catalyst increases,the intensity of 1001cm −1is not only increased,but a new peak at 1040cm −1is also generated.It is evident that this newly created peak at increasing sulfate density is induced by the S =O bonds of the polynuclear sulfate type.Thus,the surface sulfate at 3.0and 6.6wt.%Fig.3.Raman spectra:(a)TiO 2;(b)TiO 2–SO 42−(1.5wt.%);(c)TiO 2–SO 42−(3.0wt.%);(d)TiO 2–SO 42−(6.6wt.%).S.M.Jung,P .Grange /Catalysis Today 59(2000)305–312309appears to be constituted of isolated sulfate species and polynuclear sulfated species.3.3.Acidity generation by adsorbed sulfate The S =O structure is essential for the generation of acidic sites on sulfate promoted oxide samples.The strong ability of S =O in sulfate complexes to accom-modate electrons from a basic molecule is a driv-ing force in the generation of highly acidic properties [20,21].The acidic properties generated by the inductive ef-fect of S =O bonds of the complex are strongly af-fected by the environment of the sulfate ion.Thus,it can be proposed that acid properties would be modi-fied by both the type of S =O in the sulfate complex and the coverage of sulfate on the surface.As men-tioned above,the increased loading of sulfate on TiO 2can form the polynuclear type of sulfate complex and increase the coverage of the Ti metal ion by the sulfate ion.Accordingly,in order to investigate the effect of the type of S =O bonds on the acidity,NH 3-TPD experi-ments were carried out.The results are shown in Fig.4.The amount of NH 3desorbed up to 500◦C increases according to the increase of sulfate ion.The feature of NH 3desorption of TiO 2–SO 42−is quitedifferentFig.4.NH 3-TPD spectra:(a)TiO 2;(b)TiO 2–SO 42−(1.5wt.%);(c)TiO 2–SO 42−(3.0wt.%);(d)TiO 2–SO 42−(6.6wt.%).from that of pure TiO 2.In the case of pure TiO 2,the major desorption of NH 3occurred from 100to 320◦C and a small peak due to strongly adsorbed ammonia on surface is detected at 400◦C.In the 1.5wt.%sul-fated catalyst,the amount of NH 3desorbed between 200and 300◦C is reduced and the main desorption of NH 3appears between 300and 400◦C.Therefore,it seems that the medium acid sites in TiO 2transform to strong acid sites by addition of sulfate.For the 3.0and 6.6wt.%sulfate,the feature of TPD spectra is similar to that of the 1.5wt.%sulfated catalyst.But as com-pared with the 1.5wt.%sulfated catalyst,the peak at-tributed to the strong acid site gradually shifts to 345and 321◦C,whereas the peak due to weak acid site appears at the same temperature around 200◦C.To quantitatively analyze the relation between the generation of acid sites and the sulfate loading,the amount of NH 3desorbed during NH 3-TPD was measured.Fig.5shows the relation between the number of adsorbed ammonia per unit surface area with sulfate contents per unit area.The initial increase of NH 3,after sulfate loading up to 1.5wt.%,corre-spond to the increased rate of 0.18NH 3molecules per sulfate molecule.An additional increase of sulfate leads to a decrease of the number of NH 3moleculesFig.5.The generation of acidity and acid strength in NH 
3-TPD:(᭹)from 100to 500◦C;()from 400to 500◦C.310S.M.Jung,P.Grange/Catalysis Today59(2000)305–312per sulfate molecule to0.15at3.0wt.%and0.12at 6.6wt.%,although the total amount of NH3adsorbed increases due to an increase of both sulfate contents and surface area.Besides,the proportion of strong acidic sites to total acid sites decreases to60and33% with the increase of the number of sulfate molecules per surface area.This reduction of strong acid sites is also consistent with the shift of temperature men-tioned above in the TPD experiments.Considering the results of the sulfate species and S/Ti ratio accord-ing to sulfate contents,it appears that the generation of total and strong acidity is not affected by the type of sulfate species,such as isolated and polynuclear, but by the coverage of the surface of the Ti by sul-fate ions.Accordingly,it is concluded that the free Ti ion surrounding the sulfate is responsible for the generation of strong acid sites.With respect to the kind of acid site,it has been re-ported that both Lewis and Brønsted acid sites can be generated when a sulfate ion is introduced into TiO2 [6,11].In order to investigate the effect of the type of sulfate on the generation of Brønsted and Lewis acid sites and to specify their strength,the concen-tration of both acid sites is compared using the peak areas measured by FT-IR after NH3adsorption.The Brønsted and Lewis acid sites are assigned at1430 and1605cm−1,respectively.These results are shown in Fig.6.After evacuation at400◦C,the1.5wt.%sulfate cat-alyst contains more Lewis and Brønsted sites than both the3.0and6.6wt.%sulfate catalysts.This result is consistent with that of TPR analyzed between400 and500◦C.Especially,the fact that the strong Lewis sites are strongly related to the neighbor Ti ion for sulfate concentration higher than1.5wt.%evidences the importance of sulfate coverage in the generation of strong acid sites.3.4.DeNO x activityIn SCR reaction of NO with NH3,there is no doubt that the acid function of catalyst is the main factor which controls the high activity,but the fundamental questions about the nature of active ammonia species or active sites are still not fully answered.In particular,surface acidity plays an important role in the adsorption and activation of ammonia at high temperature.Chen and Yang[6]have proposed that the SCR activity of the TiO2sulfate catalyst is di-rectly related to the amount of Brønsted acid sites,as confirmed by XPS.They detected a peak at401.7eV assigned to ammonia chemisorbed on Brønsted acid sites.However,since the adsorption of NH3was car-ried out and probably measured at room temperature in that paper,the peak of401.7eV cannot perfectly prove the role of Brønsted sites as an active site for SCR reaction at high temperature.The results only in-dicate the difference of total acidity between TiO2and TiO2sulfate.Accordingly,the influence of sulfate content and acidity on catalytic performance for SCR reaction at high temperature was studied.From these results,a type of acid site which is directly responsible for SCR reaction at high temperature is described.The SCR activities were measured under steady state conditions at400◦C and the NO conversion ob-tained over titania sulfate catalysts as a function of sulfate loading are shown in Fig.7.No considerable N2O and NO2were detected for the series of catalysts at400◦C.The NO conversion over pure TiO2is lower than20%.After sulfation,all the samples show an in-crease of conversion up to70–90%.The increase of activity is consistent 
with the amount of NH3adsorp-tion shown in Fig.6.Fig.6.Concentration of Brønsted(B)and Lewis(L)acid sites for sulfated TiO2samples evacuated at400◦C for30min,after adsorption by ammonia at room temperature.S.M.Jung,P.Grange/Catalysis Today59(2000)305–312311Fig.7.NO conversion and TOF value for the SCR reaction vs. sulfate contents.When the sulfate contents further increase in TiO2 sulfated catalyst,the NO conversion almost linearly increases,although the increase in the rate of conver-sion is smaller.But,it is not easy to detect the influence of acidity and acid site,since,considering the differ-ence of surface area,it is supposed that the difference in NO conversions is mainly affected by the increase of this parameter.Thus,for interpreting the real effect of acid sites and acidity and excluding the influence of surface area,the areal TOF value was used.Areal TOF represents the number of converted NO moles per unit time(s)and area(m2).The areal TOF values are compared with sul-fate contents and are also shown in Fig.7.The areal TOF of SO42−=1.5wt.%catalyst is supe-rior to SO42−=3.0wt.%and SO42−=6.6wt.%.TheSO42−=3.0wt.%is almost equal to SO42−=6.6wt.%. Considering the SCR mechanism,areal TOF value is correlated by the acid function,which controls the adsorption of ammonia on the catalyst surface,and the redox properties,which activate the adsorbed am-monia.In all samples based on TiO2,since the redox properties seem to be similar,it is reasonable to sup-pose that the difference of areal TOF is essentially connected to the number of acid sites. Accordingly,the higher areal TOF value in the SO42−=1.5wt.%catalyst shows that this catalyst has more effective active sites per unit area for SCR re-action than other catalysts.As can be seen in Fig.6, the trend of Lewis acid sites along with the sulfate contents is consistent with that of areal TOF,whereas there is no direct relationship with the Brønsted acid sites.This relationship would indicate that the actual active site for SCR reaction on the TiO2sulfate cata-lyst is not a Brønsted acid site but a Lewis acid site.4.ConclusionsThe addition of sulfuric acid during the preparation of TiO2,leads to an increase in the surface area and the formation of sulfate complexes at the surface.S=O concentration on the surface decreases when the surface is saturated by sulfate ions.This is due to the formation of polynuclear sulfate species.It is not clear that the polynuclear sulfate cannot extract as many electrons as isolated sulfate to generate a strong acidity.But the coverage of titania which can be evaluated by the S/Ti ratio is closely related to acidity and acid strength.Accordingly,the increase of sulfate amount per unit area induces a decrease of the real density of strong acidity per surface area.The areal TOF of NO for SCR reaction is higher in the case of SO42−=1.5wt.%.This is due to its strong acidity at high temperature.Especially,the strong Lewis sites are directly related with DeNO x activity.AcknowledgementsThis work was supported by ECSC project (7220-ED/093).References[1]P.Forzatti,L.Lietti,Heterogen.Chem.Rev.3(1996)33.[2]L.J.Alemany,L.Lietti,N.Ferlazzo,P.Forzatti,G.Busca,E.Giamello,F.Bregani,J.Catal.155(1996)117.[3]J.P.Chen,R.T.Yang,J.Catal.80(1992)135.[4]P.Wauthoz,M.Ruwet,T.Machej,P.Grange,Appl.Catal.69(1991)149.[5]J.Blanco,A.Bahamonde,E.Alvarez,P.Avila,Symposiumon Reduction of NO x and SO x from Combustion Sources, in:Proceedings of the214th National 
Meeting, Am. Chem. Soc., 1997, p. 818.
[6] J.P. Chen, R.T. Yang, J. Catal. 139 (1993) 277.
[7] K. Tanabe, M. Itoh, K. Morishige, H. Hattori, in: Proceedings of the International Symposium on Preparation of Catalysts, Brussels, 1975, p. 65.
[8] H. Hino, K. Arata, J. Chem. Soc., Chem. Commun. (1979) 1148.
[9] Y. Tsutomu, Appl. Catal. 61 (1990) 1.
[10] M. Waqif, J. Bachelier, O. Saur, J.C. Lavalley, J. Mol. Catal. 72 (1992) 127.
[11] J.R. Sohn, H.J. Jang, J. Catal. 136 (1992) 267.
[12] G. Busca, L. Lietti, G. Ramis, F. Berti, Appl. Catal. B 18 (1998) 1.
[13] J.F. Le Page, J. Cosyns, P. Courty, E. Freund, J.P. Franck, Y. Jacquin, B. Juguin, C. Marcilly, G. Martino, J. Miquel, R. Montarnal, A. Sugier, H. Van Landeghem, Applied Heterogeneous Catalysis, Editions Technip, Paris, 1987, p. 92 (Chapter 5).
[14] O. Saur, M. Bensitel, A.B. Mohammed Saad, J.C. Lavalley, J. Catal. 99 (1986) 104.
[15] M. Bensitel, O. Saur, J.C. Lavalley, B.A. Morrow, Mater. Chem. Phys. 19 (1988) 147.
[16] C. Morterra, G. Cerrato, C. Emanuel, V. Bolis, J. Catal. 142 (1993) 349.
[17] C. Morterra, G. Cerrato, V. Bolis, Catal. Today 17 (1993) 505.
[18] M. Trung Tran, N.S. Gnep, G. Szabo, M. Guisnet, Appl. Catal. 171 (1998) 207.
[19] M.A. Vuurman, I.E. Wachs, A.M. Hirt, J. Phys. Chem. 95 (24) (1991) 9928.
[20] R.J. Gillespie, E.A. Robinson, Can. J. Chem. 41 (1963) 2074.
[21] T. Yamaguchi, T. Jin, K. Tanabe, J. Phys. Chem. 90 (1986) 3148.
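Section 3.4 defines the areal TOF as the number of NO moles converted per unit time and per unit area of catalyst. A minimal sketch of that arithmetic under the feed conditions of Section 2.3 is given below; it is not from the paper, the feed is assumed to be metered near room temperature at 1 atm, and the catalyst mass, which the paper does not report, enters as an explicit illustrative parameter.

```python
# Illustrative areal TOF calculation: moles of NO converted per second and per
# m^2 of BET area. Feed conditions are those given in Section 2.3; the catalyst
# mass and the feed metering temperature are assumptions, not values from the paper.

R = 82.06            # cm^3 atm / (mol K), gas constant
T_FEED = 298.0       # K; assume the 100 ml/min feed is metered near room temperature
P_FEED = 1.0         # atm; the reactor operates at atmospheric pressure
TOTAL_FLOW = 100.0   # cm^3/min total feed, from Section 2.3
NO_FRACTION = 0.001  # 0.1 vol.% NO in the feed

def areal_tof(no_conversion, bet_area_m2_per_g, catalyst_mass_g):
    """Moles of NO converted per second and per m^2 of BET area."""
    no_feed_mol_per_s = P_FEED * TOTAL_FLOW * NO_FRACTION / (R * T_FEED) / 60.0
    return no_feed_mol_per_s * no_conversion / (catalyst_mass_g * bet_area_m2_per_g)

# Example: the 1.5 wt.% catalyst (58 m^2/g, Table 1) at about 85% NO conversion,
# assuming 0.10 g of catalyst in the bed (an illustrative value only).
print(f"{areal_tof(0.85, 58.0, 0.10):.1e} mol NO s^-1 m^-2")
```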
Engineering Failure Analysis
Cracking of aluminum cast pistons of fuel gas reciprocating compressors W.T.Riad,B.S.Hussain,H.M.Shalaby *Kuwait Institute for Scientific Research,P.O.Box 24885,13109Safat,Kuwaita r t i c l e i n f o Article history:Received 12July 2009Accepted 2September 2009Available online 16September 2009Keywords:Aluminum pistons Reciprocating compressors Fatigue cracking Piston failurea b s t r a c tSeveral aluminum cast pistons used in fuel gas reciprocating compressors suffered fromcracking during operation after a short time of service.The pistons were obtained fromtwo manufacturing sources and the failure time was different.Metallurgical investigationwas made on the failed pistons to identify the root cause of cracking.The investigationrevealed that the cracks primarily existed at the top surfaces of the pistons and joinedscrewed plugs.The investigation also showed that the cracks had originated as fatiguecracks starting from the roots of broken threads in the body of the pistons.The root causeof failure was found to be the improper screwing of the plugs which resulted in the sheardeformation of the threads and development of incipient microcracks.The difference infailure time was attributed to differences in materials properties and the amount of castingdefects.Ó2009Elsevier Ltd.All rights reserved.1.IntroductionSeveral failures were encountered in aluminum cast pistons used in fuel gas reciprocating compressors.The reciprocating compressors were used for driving an electrical motor at 4.16kV,2500HP,and 865rpm at a flow rate of 11MMCSFD.The compressors were operating in three stages.In the first stage,the suction pressure was 10psi and the discharge pressure was 43psi.The second stage has a discharge pressure of 110psi,followed by the third stage where the final discharge pressure was 270psi.In order to improve the efficiency of the compressors through raising its capacity from 29to 33MMCSFD,two compres-sors were modified from three stages to one stage.This was done at a suction pressure of 75psi and a discharge pressure of 270psi.Following these modifications,premature piston failures occurred.The pistons were obtained from two manufacturing sources.In this paper,the pistons from these two sources are termed Nos.1and 2.The frequency of failure of the pistons termed No.1was much higher than that of pistons No.2.Sometimes pistons No.1failed after only 1week of operation,while pistons No.2failed after about 3months of operation.Fig.1shows photographs of a piston that failed after only one month of operation.2.Visual observations of the as-received pistonsOne piston from each manufacturing source was taken to the laboratory for examination.The two pistons were subjected to visual examinations in their as-received condition.The dimensions and design of both pistons were the same.The dimen-sions were found to be 44cm in diameter and 29cm in length.The top and bottom surfaces of each piston contained six plugs on each surface.The diameter of each plug is 5.9cm.The plugs were threaded into the piston’s body,and located cir-1350-6307/$-see front matter Ó2009Elsevier Ltd.All rights reserved.doi:10.1016/j.engfailanal.2009.09.004*Corresponding author.Tel.:+9653980499;fax:+9653980445.E-mail address:hshalaby@.kw (H.M.Shalaby).Engineering Failure Analysis 17(2010)440–446Contents lists available at ScienceDirectEngineering Failure Analysisj o u r n a l h o m e p a g e :w w w.e l s e v i e r.c o m /l o c a t e /e n g f a i l a n a lcumferentially at fixed distance.Each plug has four small size hemispherical 
grooves that are used for tightening the plug into the piston’s body.Each of these grooves is 7mm in diameter,and half of the body of the small size groove is located in the piston’s body,while the other half is in the plug.In addition to the small size grooves in each plug,there are two larger size hemispherical grooves 9.7mm in diameter and 5mm in depth.In both pistons,cracks were present at the top surface that is facing the compressed gas.On the other hand,the bottom surfaces of the pistons were free of any damage.With regard to piston No.1,two cracks extended in the piston’s body from two sides of one plug to the neighboring plugs (Fig.2a).Thus,the cracks extended in the piston’s body but not in the plugs itself.Visual examination of the failed piston No.2revealed the existence of a single crack at the top surface.The crack extended all the way in the piston’s body from the side of one plug to the surface facing the piston’s rod (Fig.2b).This crack morphol-ogy is different from that of piston No.1,where the cracks extended in the piston’s body between plugs.In addition,the plugs in piston No.2appeared slightly recessed in the piston’s body and the plug’s grooves were not equally positioned in the piston’s body,which was not the case in piston No.1.3.Results3.1.Lube oil analysesSince the lube oil used in the pistons was suspected to play a role in the piston’s damage,lube oil analyses were con-ducted on both fresh (un-used)and used lube oil samples.The used sample was taken from a compressor under operation.The two samples were tested for flash point,water content and metal content.Table 1presents the results of flash point and water content,while Table 2shows the results of metal content.The latter results were obtained for six elements,namely Fe,Pb,Sn,Al,Cr and Cu.The results given in Table 1indicate that the fresh lube oil has good flash point and minimum water content.On the other hand,the used lube oil sustained a flame,indicating that the flash point has been reached and,accordingly,the fire pointwas Fig.1.Photographs showing:(a)top view and (b)side view of a piston that failed after one month ofoperation.Fig.2.Photographs showing:(a)cracks in top surface of piston No.1and (b)crack in top surface of piston No.2.Note that the plug is recessed in the body of piston No.2and the grooves are not equally positioned.W.T.Riad et al./Engineering Failure Analysis 17(2010)440–446441not detected.This result indicates that the used lube oil has badly deteriorated due to cracking,which suggests exposure to relatively high temperature.Table 1also indicates that the level of water content has increased in the used lube oil.Table 2shows that the fresh lube oil contains three metals,namely Fe,Pb and Sn.With respect to the used lube oil,three additional metals (Al,Cr and Cu)were detected.The iron content significantly increased in the used lube oil,while Pb slightly increased.The significant increase in the Fe content of the used lube oil and the presence of Cr and Cu are possibly due to corrosion of piston parts.3.2.Examinations of cut parts of the pistonsThe pistons were cut in the cracked areas and the cracks were opened up for visual and stereo microscopic examinations.Also,specimens were machined from both pistons for metallographic examinations using optical microscopy.The metallo-graphic preparations included wet mechanical grinding,using SiC papers down to 4000grit.This was followed by polishing using 6and 1l diamond pastes and final polishing with diluted SiO 2suspension.After 
polishing,the specimens were etched using Keller’s reagent whenever found necessary.The reagent consisted of 2ml HF,3ml HCl,5ml HNO 3and 190ml H 2O.The examinations were made before and after etching.The examinations aimed at determining the microstructures of the body of the pistons and plug and at detecting any irregularities that might have led to failure.In both pistons,when the cracked area at the surface near the plug was cut,the body of the piston was found hollow.In addition,the plugs and the body of the piston were found threaded.The crack observed at the top surface of the piston was found to have extended into the supporter underneath the surface.Furthermore,the crack was noticed to have started in the piston’s body from the thread of the body with the plug at an area underneath the cup of the groove (Fig.3a).The fracture surfaces were found gray in color with fibrous texture and had two regions (see Fig.3b).The first region next to the threaded part exhibited short flat surface of lighter gray color.The second region was larger and basically inclined with an angle of 45°.Casting defects,mostly shrinkage cavities,were seen in the second region.Detailed examinations of the second region alsoTable 1Flash point and water content analyses of lube oils.Test typeFresh Used Flash point253.1–Water content 0.00750.0115Table 2Metal content analyses of lube oils.Metal content (ppm)Fresh Used Fe0.0625.74Pb0.1 2.1Sn0.120.14AlNone 0.62CrNone 0.30Cu None2.1Fig.3.Photographs showing:(a)sections cut in the body of piston No.1and the plug;and (b)fracture surface.442W.T.Riad et al./Engineering Failure Analysis 17(2010)440–446W.T.Riad et al./Engineering Failure Analysis17(2010)440–446443 showed a group of dark convex markings pointing to the origin of cracking.These morphological features suggest the occur-rence of fatigue.Since cracking was observed to be linked to the threads,emphasis was placed on investigation the condition of the threads.Fig.4shows micrographs taken for the threads of the body of piston No.1and its plug.It is clear from the micro-graphs that the threads under the groove were squeezed and some of which were severely damaged.Stereo microscopic examination of a specimen cut at the threads of the body of piston No.1revealed that cracking started underneath the groove at broken threads(see Fig.5).Fig.6shows details of broken threads in the piston’s body,cracks that initiated at the bottom of the broken threads,and casting defects.It is clear from the micrographs that the threads in the piston’s body were chipped away,particularly at the bottom of the groove.Also,the cracks initiated at the bottom of the threads propagated in a mode mostly transgranular in nature.The microstructure of the polished surface exhibits an interdentritic eutectic network(mottled)and some interdent-ritic particles.This microstructure is typical of that resulting from casting.The alloy structure was seen to contain shrinkage cavities and pores of different sizes.However,these casting defects were more in piston No.1when compared with piston No.2.The present results indicate that cracking/chipping of the threads was the reason of the cracks developed in the pis-ton’s body.Fig.7shows that the threads of the plugs also suffered damage.The damage was mostly in the form of squeezed threads and occasionally tearing of threads.No cracks were formed at the bottom of the threads and therefore there were no cracks propagating into the body of the plug.The microstructure of the plug was not clearly resolved either in 
the as-polished or etched conditions.However,the microstructure appeared to be that of a wrought material.3.3.Chemical analyses of the piston partsChemical analyses were conducted on the body of the pistons and their plugs,using energy dispersive X-ray technique.As can be seen in Table3,the body the pistons and the plugs are made of Al alloys.It is clear that the body of both pistons con-tains silicon,while the plugs are silicon free.Furthermore,the plugs contain higher Al than the body of the pistons in addi-tion to little amounts of Fe.However,unlike the plugs of piston No.1,the plugs of piston No.2do not contain Cu or Mn.The results suggest that the plugs are possibly made by machining from wrought Al,while the body of the pistons body was cast.3.4.Mechanical properties of the pistonsTensile testing and macrohardness measurements were made on the body of the pistons.The tensile tests were made in accordance with ASTM A370,using dumbbell specimens machined from crack-free areas.The dimension of each specimenpistons.was6.0mm width,3.00thickness and35.0mm length.Table4shows the results obtained for the body of the Array Fig.4.Stereo micrographs showing the condition of the threads of:(a)the piston’s body and(b)the plug.Macrohardness measurements were also made on the plugs.These latter measurements yielded an average value of 50.6HRA for the plugs of piston No.1and 42.8HRA for the plugs of piston No.2.The present results show that although the body of piston No.2has tensile strength about the same as in piston No.1,but the body of piston No.2was somewhat harder than that of piston No.1.Furthermore,the hardness of the plugs in piston No.2was relatively close to that of its body.On the other hand,the hardness of the body of piston No.1was less than that of itsplugs.Fig.5.Stereo micrograph of the fracture surface in the body of piston No.1,showing origin offracture.Fig.6.(a)Stereo micrograph showing broken threads in the body of piston No.1and crack starting from the broken threads (arrows point to damaged threads);(b)optical micrograph showing microstructure of the body of the piston and crack that initiated at the bottom of broken thread and (c)castingdefects.Fig.7.(a)Stereo micrograph showing squeezed threads in the plug of piston No.1and (b)optical micrograph showing a broken thread in the plug and its microstructure.444W.T.Riad et al./Engineering Failure Analysis 17(2010)440–446W.T.Riad et al./Engineering Failure Analysis17(2010)440–446445Table3Chemical compositions of the body of the pistons and plugs.Location Al(%)Si(%)Cu(%)Fe(%)Mn(%)Piston No.1(Body)95.65 3.920.30––Piston No.1(Plug)98.44– 1.000.210.17 Piston No.2(Body)94.81 4.750.30––Piston No.2(Plug)99.74––0.10–Table4Mechanical properties of the body of the pistons.Part Tensile strength(MPa)Elongation at break(%)Hardness(HRA)Piston1225.12 6.7433.3Piston2224.67 5.2741.54.DiscussionThe present results clearly show that the root cause of the premature failure of the pistons is the formation of stress risers (or stress concentrations)at the roots of broken threads in the body of the pistons.The chipping and cracking of the threads in the piston’s body created incipient microcracks that propagated into the piston’s body in a corrosion fatigue mechanism. 
In effect,improper screwing of the plugs damaged the threads of both the plugs and the piston’s body.Since the bodies of the pistons are made of cast Al alloys,which are relatively more brittle than the plugs,the incipient cracks were able to prop-agate into the piston’s body.On the other hand,the wrought materials of the plugs resisted the stress concentration created by the damaged threads.The noticeable differences in the frequency and time of failures of the pistons made at different manufacturer can be attributed to differences in materials properties.It was mentioned that piston No.1contained more casting defects,partic-ularly shrinkage cavities and pores,than piston No.2.It has been established by many investigators[1–3]that the fatigue life of cast Al alloys containing defects can be one or two orders of magnitude lower than in defect-free cast components.The presence of casting defects shortens not only the fatigue crack propagation period,but also the initiation period.The body of piston No.1should have provided better resistance to cracking than that of piston No.2due to its lower hard-ness and the apparent presence of ductility.These results suggest that the higher number of failures and the shorter time to failure of piston No.1could be due to the crack initiation stage and the role of casting defects as indicated above.It seems that the higher hardness of the plugs of piston No.1and the softer body makes the threads more prone to damage than in piston No.2,where the hardness of the plugs and the body is about the same.It was mentioned that the plant experienced significant increase in the number of piston failure after the up-grading.This increase may be due to the increase in severity of operation.As indicated above,the screwing of the plugs lead to damage of the threads and formation of incipient cracks in both pistons.Thus,up-grading of the compressor may have aided crack propagation,but is not the cause of failure.The increase in severity of operation may have been the cause of corrosion of compressor parts and the presence of elements,such as Fe,Cr and Cu in the lube oil.This is in addition to cracking of the lube oil and deposition of carbon.Analysis of the used lube oil showed a significant increase in water content in addition to elements resulting from cor-rosion of piston parts.These results indicate that appreciable water condensation has occurred in addition to contamination of the lube oil by dust particles and corrosion products.It is known that aluminum components are very sensitive to con-ditions of dirt in the oil and any foreign matter such as the presence of Fe in oil.Also,the presence of heavy metals in the lubricant has a detrimental effect on the performance of compressor and can cause abrasion and wear to the metallic components of piston-compressor[4].Water contamination in lubricating oil is known to cause corrosion.The presence of water is known to form hydro sulphuric acid with sulphur present in the form of hydrogen sulphide(H2S)or other sulphur compounds[4].Sulphur ingression,in whatever form is usually corrosive to most metals and can result from seal leakage in applications of sour gas compression.5.Conclusions and recommendations-The root cause of the premature failure of the pistons is the damage that occurred to the threads of the piston’s body.However,the design modification of the compressors may have aggravated this damage.-The damage to the threads created incipient microcracks that propagated into the piston’s body during operation and 
resulted in fatigue failure.
- The differences in the frequency and time of failure of the pistons were attributed to differences in materials properties and casting defects.
- In order for such failure to be avoided, the plant was recommended to carefully screw the plugs into the piston's body and to ensure that the threads of the piston's body and plugs are damage-free. Also, the lube oil must be free of dirt and particles of corrosion products through whatever means available, such as proper selection of filters.

References
[1] Wang QG, Apelian D, Lados DA. Fatigue behavior of A356-T6 aluminum cast alloys. Part I. Effect of casting defects. J Light Metals 2001;1:73–84.
[2] Wang QG, Apelian D, Lados DA. Fatigue behavior of A356/357 aluminum cast alloys. Part II. Effect of microstructural constituents. J Light Metals 2001;1:85–97.
[3] Couper MJ, Nesson AE, Griffiths JR. Casting defects and the fatigue behaviour of an aluminium casting alloy. Fatigue Fract Eng Mater Struct 1990;13(3):213–27.
[4] Smith Edward H, editor. Mechanical engineer's reference book, 12th ed. Butterworth-Heinemann; 1994.
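The Introduction gives enough information to see how much harder each piston works after the modification: originally the gas was compressed in three stages (10 to 43, 43 to 110, and 110 to 270 psi), whereas the modified machines compress from 75 to 270 psi in a single stage. The short sketch below is not part of the paper and assumes the quoted pressures are gauge values (adding 14.7 psi to convert to absolute); the roughly 3.2 single-stage ratio versus roughly 2.2 to 2.3 per stage is consistent with the increase in severity of operation noted in the Discussion.

```python
# Rough per-stage pressure-ratio comparison before and after the compressor
# modification described in the Introduction. Treating the quoted pressures as
# gauge values and adding 14.7 psi to convert to absolute is an assumption.

ATM = 14.7  # psi, assumed atmospheric pressure for gauge-to-absolute conversion

def stage_ratios(pressures_psig):
    """Absolute pressure ratio across each consecutive compression stage."""
    p_abs = [v + ATM for v in pressures_psig]
    return [round(hi / lo, 2) for lo, hi in zip(p_abs, p_abs[1:])]

three_stage = stage_ratios([10, 43, 110, 270])   # original three-stage operation
single_stage = stage_ratios([75, 270])           # after the modification

print("original per-stage ratios:", three_stage)     # roughly 2.2 to 2.3 per stage
print("modified single-stage ratio:", single_stage)  # roughly 3.2
```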
Risk Assessment Techniques and Methods: 14. Fault Hazard Analysis
Chapter14Fault Hazard Analysis14.1INTRODUCTIONFault hazard analysis (FaHA)is an analysis technique for identifying those hazards arising from component failure modes.It is accomplished by examining the poten-tial failure modes of subsystems,assemblies,or components and determining which failure modes can form undesired states that could result in a mishap.14.2BACKGROUNDThe FaHA technique falls under the detailed design hazard analysis type (DD-HAT)analysis.The basic hazard analyses types are described in Chapter 3.The purpose of FaHA is to identify hazards through the analysis of potential failure modes in the hardware that comprises a subsystem.The FaHA is applicable to analysis of all types of systems and equipment.FaHA can be implemented on a subsystem,a system,or an integrated set of systems.The FaHA can be performed at any level from the component level through the system level.It is hardware oriented and not suited for software analysis.The FaHA is a thorough technique for evaluating potential failure modes.How-ever,it has the same limitations as the FMEA.It looks at single failures and not com-binations of failures.FaHAs generally overlook hazards that do not result entirely from failure modes,such as poor design,timing errors,and the like.The conduct of an FaHA requires a basic understanding of hazard analysis theory,failure modes,and a detailed understanding of the system under analysis.The meth-odology is similar to failure mode and effects analysis (FMEA).Although the FaHA261Hazard Analysis Techniques for System Safety ,by Clifton A.Ericson,II Copyright #2005John Wiley &Sons,Inc.262FAULT HAZARD ANALYSISis a valuable hazard analysis technique,the subsystem hazard analysis(SSHA)has replaced the FaHA.The SSHA methodology includes considering failure modes for safety implications,and thus it accomplishes the same objective as the FaHA.The FaHA technique is not recommended for general usage.Other safety analysis techniques are more cost effective for the identification of hazards and root causes, such as the SSHA.The FaHA should be used only when a rigorous analysis of all component failure modes is required.The FaHA technique is uncomplicated and easily mastered using the worksheets and instructions provided in this chapter. 14.3HISTORYThe Boeing Company developed the FaHA in1965for the Minuteman program as a variation of the FMEA technique.It was developed to allow the analyst to stop the analysis at a point where it becomes clear that a failure mode did not contribute to a hazard,whereas the FMEA requires complete evaluation of all failure modes. 
14.4 THEORY

The FaHA is a qualitative and/or quantitative analysis method. The FaHA can be used exclusively as a qualitative analysis or, if desired, expanded to a quantitative one for individual component failure modes. The FaHA requires a detailed investigation of the subsystems to determine which components can fail leading to a hazard and the resultant effects on the subsystem and its operation. The FaHA answers a series of questions:
- What can fail?
- How can it fail?
- How frequently will it fail?
- What are the effects of the failure?
- What hazards result as a consequence of failure?

The FaHA considers total functional and out-of-tolerance modes of failure. For example, a 5 percent, 5000-ohm (plus or minus 250-ohm) resistor can have as functional failure modes "failing open" or "failing short," while the out-of-tolerance modes might include "too low a resistance" or "too high a resistance."

To conduct an FaHA, it is necessary to know and understand the following system characteristics:
- Equipment mission
- Operational constraints
- Success and failure boundaries
- Realistic failure modes and their probability of occurrence

The general FaHA approach involves the following:
- Analyzing each component
- Analyzing all component failure modes
- Determining if a failure mode directly causes a hazard
- Determining the failure mode effect on the subsystem and system
- Determining if the component failure can be induced by another component

The FaHA approach utilizing a columnar form with specially selected entries provides optimum results. This approach establishes a means for systematically analyzing a system or subsystem design for the identification of hazards. In addition to identifying hazards, data in the FaHA form provides useful information for other safety analyses, such as the fault tree analysis. The purpose of the FaHA is to identify hazards existing within a subsystem due to potential hardware component failure. This is accomplished by examining the causes and effects of subsystem component failures.

14.5 METHODOLOGY

Table 14.1 lists the basic steps in the FaHA process. The FaHA methodology is demonstrated in Figure 14.1, which contains a hypothetical system, consisting of two subsystems, shown in functional block diagram format. In performing an FaHA, the idea is to break each subsystem into major components or black boxes whose failure modes can be evaluated.
The next step is to identify and evaluate all credible failure modes for each component within the black box or subsystem. For instance, in subsystem 1, component B may fail "open." The effects of this failure mode upon components A and C are determined, and also the effects at the subsystem interface with subsystem 2.
Secondary factors that could cause component B to fail open are identified. For instance, excessive heat radiated from component C may cause component B to fail open. Events "upstream" of component B that could directly command component B to fail open are identified. These types of events are usually a part of the normal sequence of planned events, except they occur at the wrong time and may not be controllable once they occur on their own. For example, a short circuit in component A may output from component A the signal that commands component B to respond in the open mode.
When the FaHA is completed, the effects of failures in subsystem 1 will terminate at the interface, and the upstream events commanding failures in subsystem 2 will begin from the interface. Hence, it is possible to determine interface hazards by comparing the "effects" of subsystem 1 with the "upstream events" of subsystem 2.
This is an indirect result of the FaHA.

Figure 14.1 Example system interface (two subsystems connected through an interface; labels include Output 3).

TABLE 14.1 FaHA Process
Step 1 - Define system: Define, scope, and bound the system. Establish indenture levels for items to be analyzed.
Step 2 - Plan FaHA: Establish FaHA goals, definitions, worksheets, schedule, and process. Define credible failures of interest for the analysis.
Step 3 - Acquire data: Acquire all of the necessary design and process data needed for the FaHA. Refine the item indenture levels for analysis. Data can include functional diagrams, schematics, and drawings for the system, subsystems, and functions. Sources for this information could include design specifications, functional block diagrams, sketches, drawings, and schematics.
Step 4 - Partition system: Divide the system under analysis into smaller logical and manageable segments, such as subsystems, units, or functional boxes.
Step 5 - Conduct FaHA: For analyses performed down to the component level, a complete component list with the specific function of each component is prepared for each module as it is to be analyzed. Perform the FaHA on each item in the identified list of components. This step is further expanded in the next section. The analysis identifies the failure mode, the immediate failure effect, the system-level failure effect, and the potential hazard and associated risk.
Step 6 - Recommend corrective action: Recommend corrective action for failure modes with unacceptable risk or criticality to the program manager for action.
Step 7 - Monitor corrective action: Review the FaHA at scheduled intervals to ensure that corrective action is being implemented.
Step 8 - Document FaHA: Document the entire FaHA process, including the worksheets. Update for new information and closure of assigned corrective actions.

14.6 WORKSHEET

The FaHA is a formal and detailed hazard analysis utilizing structure and rigor. It is desirable to perform the FaHA using a worksheet. Although the format of the analysis worksheet is not critical, a recommended FaHA format is shown in Figure 14.2. This is the form that was successfully used on the Minuteman missile weapon system program.

Figure 14.2 Recommended FaHA worksheet (header fields: Subsystem, Assembly/Unit, Analyst; columns 1-10: Component, Failure Mode, Failure Rate, System Mode, Effect on Subsystem, Secondary Causes, Upstream Command Causes, MRI, Effect on System, Remarks).

The intended content for each column is described as follows:
1. Component. This column identifies the major functional or physical hardware components within the subsystem being analyzed. The component should be identified by part number and descriptive title.
2. Failure Mode. This column identifies all credible failure modes that are possible for the identified component. This information can be obtained from the FMEA, manufacturer's data, or testing. (Note: This column matches the "primary" cause question in an FTA.)
3. Failure Rate. This column provides the failure rate or failure probability for the identified mode of failure. The source of the failure rate should also be provided for future reference.
4. Operational Mode. This column identifies the system phase or mode of operation during the indicated failure mode.
5. Effect on Subsystem. This column identifies the direct effect on the subsystem and components within the subsystem for the identified failure mode.
6. Secondary Causes. This column identifies secondary factors that may cause the component to fail. Abnormal and out-of-tolerance conditions may cause the component to fail; component tolerance levels should be provided. Also, environmental factors or common cause events may be a secondary cause for failure. (Note: This column matches the "secondary" cause question in an FTA.)
7. Upstream Command Causes. This column identifies those functions, events, or failures that directly force the component into the indicated failure mode. (Note: This column matches the "command" cause question in an FTA.)
8. Mishap Risk Index (MRI). This column provides a qualitative measure of mishap risk for the potential effect of the identified hazard, given that no mitigation techniques are applied to the hazard. Risk measures are a combination of mishap severity and probability, and the recommended values from MIL-STD-882 are shown below.
Severity: I. Catastrophic; II. Critical; III. Marginal; IV. Negligible.
Probability: A. Frequent; B. Probable; C. Occasional; D. Remote; E. Improbable.
9. Effect on System. This column identifies the direct effect on the system of the indicated component failure mode.
10. Remarks. This column provides for any additional information that may be pertinent to the analysis.

14.7 EXAMPLE

In order to demonstrate the FaHA technique, the same hypothetical small missile system from Chapter 4 on preliminary hazard list (PHL) analysis will be used. The basic preliminary component and function design information from the PHL is provided again in Figure 14.3.

Figure 14.3 Missile system component list and function list (components: missile body, warhead, engine (jet), fuel (liquid), computer, software, navigation, communications, guidance, battery; functions: storage, transportation, handling, standby, alert, launch, flight, command response, impact).

Typically, an FaHA would be performed on each of the component subsystem designs. For this FaHA example, the battery subsystem has been selected for evaluation using the FaHA technique. The battery design is shown in Figure 14.4. In this design the electrolyte is contained separately from the battery plates by a frangible membrane. When battery power is desired, the squib is fired, thereby breaking the electrolyte housing and releasing electrolyte into the battery, thus energizing the battery. The battery subsystem is comprised of the following components:
1. Case
2. Electrolyte
3. Battery plates
4. Frangible container separating electrolyte from battery plates
5. Squib that breaks open the electrolyte container

Figure 14.4 Example missile battery design (contained electrolyte, squib, battery plates and terminals).

The battery FaHA is shown in Table 14.2.

TABLE 14.2 FaHA Worksheet for Battery
Fault Hazard Analysis. Subsystem: Missile. Assembly/Unit: Battery. Page 1 of 1.
Component | Failure Mode | Failure Rate | System Mode | Effect on Subsystem | Secondary Causes | Upstream Command Causes | MRI | Effect on System | Remarks
Battery squib | Squib fails to ignite | 3.5 x 10^-5 (manuf. data) | Flight | No power output from battery | Excessive shock | No ignition command | 4C | Dud missile | Safe
Battery squib | Squib ignites inadvertently | 1.1 x 10^-9 (manuf. data) | Ground operations | Battery power is inadvertently applied | Heat; shock | Inadvertent ignition command | 2C | Unsafe system state | Further analysis required
Battery electrolyte | Electrolyte leakage | 4.1 x 10^-6 (manuf. data) | Ground operations | Corrosion; gases; fire | Excessive shock; puncture | Manufacturing defect | 2C | Unsafe system state | Further analysis required
Battery power | Premature power output | 1.0 x 10^-10 (manuf. data) | Ground operations | Power is inadvertently applied to missile electronics | None | Electrolyte leakage into battery cells | 2C | Unsafe system state | Further analysis required
Battery power | No power output | 2.2 x 10^-6 (manuf. data) | Flight | No power output to missile electronics | Battery damage | Broken cables | 4C | Dud missile | Safe
Battery case | Case leaks | 1.0 x 10^-12 (manuf. data) | Flight | No power output | Excessive shock | - | 4C | Dud missile | Safe
Battery case | Case leaks (as above) | (as above) | Ground operations | Corrosion; gases; fire | Excessive shock | - | 2C | Unsafe state | Further analysis required

The following conclusions can be derived from the FaHA worksheet contained in Table 14.2:
1. The failures with a risk level of 2C indicate that the failure mode leaves the system in an unsafe state, which will require further analysis to evaluate the unsafe state and design mitigation to reduce the risk.
2. The failures with a risk level of 4C indicate that the failure mode leaves the missile in a state without power, resulting in a dud missile (not a safety problem).

14.8 ADVANTAGES AND DISADVANTAGES

The following are advantages of the FaHA technique:
1. FaHAs are more easily and quickly performed than other techniques (e.g., FTA).
2. FaHAs can be performed with minimal training.
3. FaHAs are inexpensive.
4. FaHAs force the analyst to focus on system elements and hazards.

The following are disadvantages of the FaHA technique:
1. FaHAs focus on single failure modes and not combinations of failure modes.
2. The FaHA focuses on failure modes, overlooking other types of hazards (e.g., human errors).
3. FaHAs are not applicable to software since software has no failure modes.

14.9 COMMON MISTAKES TO AVOID

When first learning how to perform an FaHA, it is commonplace to commit some traditional errors. The following is a list of typical errors made during the conduct of an FaHA:
1. Not fully understanding the FaHA technique
2. Using the FaHA technique when another technique might be more appropriate

14.10 SUMMARY

This chapter discussed the FaHA technique. The following are basic principles that help summarize the discussion in this chapter:
1. The primary purpose of the FaHA is to identify hazards by focusing on potential hardware failure modes. Every credible single failure mode for each component is analyzed to determine if it can lead to a hazard.
2. FaHA is a qualitative and/or quantitative analysis tool.
3. The use of a functional block diagram greatly aids and simplifies the FaHA process.

BIBLIOGRAPHY

Ericson, C.A., Boeing Document D2-113072-2, System Safety Analytical Technology - Fault Hazard Analysis, 1972.
Harris, R.W., Fault Hazard Analysis, USAF-Industry System Safety Conference, Las Vegas, Feb. 1969.
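The chapter notes that a FaHA can be used qualitatively or expanded into a quantitative analysis of individual failure modes. The following Python sketch is illustrative only and is not part of the chapter: it encodes rows of the Figure 14.2 worksheet as a small data structure, applies a placeholder MIL-STD-882 screening rule (severity I or II combined with probability A, B, or C), and rolls up the failure rates of the flagged modes using two rows from the battery example of Table 14.2. The class and function names and the screening threshold are assumptions made for illustration.

```python
# Illustrative encoding of the recommended FaHA worksheet (Figure 14.2), with a
# placeholder risk screen and the kind of simple rate roll-up a quantitative
# FaHA might perform. The screening threshold and helper names are assumptions.
from dataclasses import dataclass

@dataclass
class FaHARow:
    component: str
    failure_mode: str
    failure_rate: float        # from manufacturer's data, per the Failure Rate column
    system_mode: str           # e.g. "Flight" or "Ground operations"
    effect_on_subsystem: str
    secondary_causes: str
    upstream_command_causes: str
    mri: str                   # e.g. "2C": severity II, probability C
    effect_on_system: str
    remarks: str = ""

def needs_further_analysis(row: FaHARow) -> bool:
    """Placeholder screen: severity I/II combined with probability A-C."""
    return row.mri[:-1] in {"1", "2"} and row.mri[-1] in {"A", "B", "C"}

# Two rows taken from the battery example in Table 14.2.
rows = [
    FaHARow("Battery squib", "Squib ignites inadvertently", 1.1e-9, "Ground operations",
            "Battery power is inadvertently applied", "Heat; shock",
            "Inadvertent ignition command", "2C", "Unsafe system state"),
    FaHARow("Battery electrolyte", "Electrolyte leakage", 4.1e-6, "Ground operations",
            "Corrosion; gases; fire", "Excessive shock; puncture",
            "Manufacturing defect", "2C", "Unsafe system state"),
]

flagged = [r for r in rows if needs_further_analysis(r)]
total_unsafe_rate = sum(r.failure_rate for r in flagged)
print(len(flagged), "rows flagged; combined rate of unsafe-state modes =", total_unsafe_rate)
```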
H3C S5800 Release Notes
H3C S5800_5820X-CMW520-R1810P16 Release Notes. Copyright © 2018 New H3C Technologies Co., Ltd. All rights reserved.
No part of this document may be excerpted, reproduced, or distributed in any form by any organization or individual without the prior written consent of the company.
The information in this document is subject to change without notice.
Contents
1 Version information: 1.1 Version number; 1.2 Version history; 1.3 Version compatibility matrix; 1.4 ISSU version compatibility list; 1.5 Upgrade precautions
2 Hardware feature changes (2.1-2.42): one subsection per release, covering R1810P16, R1810P13, R1810P12, R1810P10, R1810P08, R1810P06, R1810P05, R1810, R1809P11, R1809P10, R1809P09, R1809P06, R1809P05, R1809P03, R1809P01, R1808P27, R1808P25, R1808P23, R1808P22, R1808P21, R1808P17, R1808P15, R1808P13, R1808P11, R1808P08, R1808P06, R1808P02, R1807P02, F1807P01, R1805P02, R1211P08, R1211P06, R1211P03, R1211P02, R1211, F1209P01, F1209, F1208, F1207, R1206, R1110P05, and R1110P04
3 Software feature and command-line changes
4 MIB changes
5 Operation changes (5.1-5.42): one subsection per release, same version list as Section 2
6 Usage restrictions and precautions
7 Open problems and workarounds
8 Resolved problems (8.1-8.42): one subsection per release, same version list as Section 2
9 Related documentation: 9.1 List of related documents
10 Technical support
Appendix A Software and hardware features supported by this release: A.1 Hardware features; A.2 Software features
Appendix B Software upgrade guide: B.1 Overview; B.2 Software loading methods; B.3 Loading from the BootRom menu (B.3.1 Introduction to the BootRom menus; B.3.2 Loading through the console port using XModem; B.3.3 Loading through an Ethernet port using TFTP; B.3.4 Loading through an Ethernet port using FTP)
B.4 Loading from the command-line interface: B.4.1 Loading software through the USB port; B.4.2 Loading software through FTP; B.4.3 Loading software through TFTP

List of tables: Table 1 Version history; Table 2 Version compatibility matrix; Table 3 ISSU version compatibility list; Table 4 MIB file changes; Table 5 Hardware features of the S5800 series; Table 6 Hardware features of the S5820X series; Table 7 Software features of the S5800 series; Table 8 Software features of the S5820X series; Table 9 Switch software loading methods; Table 10 Basic BOOT menu; Table 11 Basic BOOT auxiliary menu; Table 12 Extended BOOT menu; Table 13 Extended BOOT auxiliary menu; Table 14 Loading system files through the console port using XModem; Table 15 BootRom upgrade selection menu; Table 16 Protocol selection and parameter setting menu; Table 17 Loading system files through an Ethernet port using TFTP; Table 18 TFTP parameter settings; Table 19 Loading system files through an Ethernet port using FTP; Table 20 FTP parameter settings

This document describes the features, usage restrictions, known problems, and workarounds of the H3C S5800_5820X-CMW520-R1810P16 release. Before loading this release, you are advised to back up the configuration file and perform internal verification to avoid potential risks.
dependent failure analysis
Introduction: In any system or process, failures can occur for various reasons. One crucial aspect of failure analysis is understanding the relationship between failures that are dependent on each other. This analysis helps identify the root cause of failure, the relationship between failure events, and their impact on the overall system. In this article, we will delve into the concept of dependent failure analysis, its importance, and its applications.

Understanding Dependent Failure Analysis: Dependent failure analysis is the examination and study of failures that are influenced or caused by other failures within a system. It involves identifying the relationships and dependencies between these failures, analyzing their effects, and determining the underlying causes. By understanding these interdependencies, organizations can take proactive measures to prevent, mitigate, or recover from such failures.

Importance of Dependent Failure Analysis:
1. Identifying Critical Failure Points: By tracing the dependencies between failures, organizations can identify critical points within a system where failures can have a profound impact. This allows them to allocate resources and implement strategies to strengthen these points and reduce the likelihood of cascading failures.
2. Improving System Reliability: Dependent failure analysis helps in improving the reliability of a system by identifying the weakest links. By addressing these vulnerabilities, organizations can minimize the impact of failures and enhance overall system performance.
3. Risk Assessment and Mitigation: By understanding how dependent failures can occur, organizations can conduct risk assessments to evaluate the probability and impact of such events. This enables them to develop robust mitigation strategies and contingency plans to minimize the potential damage caused by dependent failures.
4. Enhancing System Resilience: Dependent failure analysis helps organizations build resilient systems that can withstand and recover from failures. By identifying the dependencies and potential failure paths, organizations can design backup systems, implement redundancy measures, or develop alternative strategies to ensure continuous operation.

Applications of Dependent Failure Analysis:
1. Aerospace and Aviation: Dependent failure analysis plays a crucial role in the aerospace and aviation industry. Understanding the interdependence of failed components helps engineers design safer aircraft systems, identify critical failure modes, and develop effective maintenance and inspection programs.
2. Power Grids: Power grids are highly complex systems with multiple interdependent components. Dependent failure analysis allows operators to identify critical nodes, evaluate the impact of failures on the grid, and implement preventive measures to maintain an uninterrupted power supply.
3. Medical Devices: Medical devices, such as pacemakers or infusion pumps, need to operate reliably without failures. Dependent failure analysis helps manufacturers identify the vulnerabilities in these devices, design fail-safe mechanisms, and improve patient safety.
4. Software Systems: In software development, dependent failure analysis is crucial for identifying potential software bugs, vulnerabilities, or compatibility issues. Understanding the dependencies between modules, libraries, or APIs improves system stability and performance.

Conclusion: Dependent failure analysis is a powerful tool for understanding the relationships between failures in a system. By uncovering the dependencies and root causes, organizations can enhance system reliability, mitigate risks, and build resilient infrastructures. From aerospace to medical devices, this analysis has extensive applications in various industries. By investing in dependent failure analysis, organizations can prevent catastrophic failures, reduce downtime, and ultimately enhance safety and performance.
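Because the article repeatedly stresses that dependent failures are far more likely than an independence assumption suggests, a small numerical illustration may help. The sketch below is not from the article; the component names and probabilities are invented, and it simply contrasts the joint failure probability computed under independence with the one computed from an assumed conditional probability.

```python
# Illustrative sketch: why failure dependence matters when estimating system risk.
# The component names and probabilities below are made-up example values.

p_pump_fails = 0.01                 # P(A): primary pump fails in a given year
p_backup_fails = 0.01               # P(B): backup pump fails in a given year
p_backup_fails_given_pump = 0.30    # P(B | A): shared stressors make B far more likely once A occurs

# If the two failures were independent, losing both pumps would look extremely rare.
p_both_independent = p_pump_fails * p_backup_fails

# With the dependence modeled explicitly, the same joint event is far more likely.
p_both_dependent = p_pump_fails * p_backup_fails_given_pump

print(f"P(both fail) assuming independence: {p_both_independent:.6f}")
print(f"P(both fail) with dependence:       {p_both_dependent:.6f}")
print(f"Underestimation factor: {p_both_dependent / p_both_independent:.0f}x")
```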
Machine learning in automated text categorization
Machine Learning in Automated Text CategorizationFabrizio SebastianiConsiglio Nazionale delle Ricerche,ItalyThe automated categorization(or classification)of texts into pre-specified categories,although dating back to the early’60s,has witnessed a booming interest in the last ten years,due to the increased availability of documents in digital form and the ensuing need to organize them.In the research community the dominant approach to this problem is based on the application of machine learning techniques:a general inductive process automatically builds a classifier by learning, from a set of previously classified documents,the characteristics of one or more categories.The advantages of this approach over the knowledge engineering approach(consisting in the manual definition of a classifier by domain experts)are a very good effectiveness,considerable savings in terms of expert manpower,and straightforward portability to different domains.In this survey we look at the main approaches that have been taken towards automatic text categorization within the machine learning paradigm.We will discuss in detail issues pertaining to three different problems,namely document representation,classifier construction,and classifier evaluation. Categories and Subject Descriptors:H.3.1[Information storage and retrieval]:Content anal-ysis and indexing—Indexing methods;H.3.3[Information storage and retrieval]:Informa-tion search and retrieval—Informationfiltering;H.3.3[Information storage and retrieval]: Systems and software—Performance evaluation(efficiency and effectiveness);I.2.3[Artificial Intelligence]:Learning—InductionGeneral Terms:Algorithms,Experimentation,TheoryAdditional Key Words and Phrases:Machine learning,text categorization,text classification1.INTRODUCTIONIn the last ten years automated content-based document management tasks(col-lectively known as information retrieval–IR)have gained a prominent status in the information systemsfield,largely due to the increased availability of documents in digital form and the consequential need on the part of users to access them in flexible ways.Text categorization(TC–aka text classification,or topic spotting), the activity of labelling natural language texts with thematic categories from a predefined set,is one such task.TC has a long history,dating back to the early ’60s,but it was not until the early’90s that it became a major subfield of the infor-mation systems discipline,largely due to increased applicative interest and to the availability of more powerful hardware.Nowadays TC is used in many applicative contexts,ranging from automatic document indexing based on a controlled vocab-ulary,to documentfiltering,automated metadata generation,word sense disam-biguation,population of hierarchical catalogues of Web resources,and in general any application requiring document organization or selective and adaptive docu-ment dispatching.Although commercial TC systems(e.g.[D¨o rre et al.1999])are Address:Istituto di Elaborazione dell’Informazione,Consiglio Nazionale delle Ricerche,Area della Ricerca di Pisa,Localit`a San Cataldo,56100Pisa(Italy).E-mail:fabrizio@r.it2· F.Sebastianinot yet as widespread as commercial IR systems,experimental TC systems have achieved high levels of robustness(see e.g.[Lewis et al.1999]for a description of a sophisticated architecture for TC).Until the late’80s the most popular approach to TC,at least in the“operational”(mercial applications)community,was a knowledge engineering(KE)one, and consisted in manually defining a set of 
rules encoding expert knowledge on how to classify documents under the given categories.However,from the early’90s this approach has increasingly lost popularity,especially in the research community,in favour of the machine learning(ML)approach.In this latter approach a general inductive process automatically builds an automatic text classifier by learning,from a set of previously classified documents,the characteristics of the categories of interest.The advantages of this approach are(i)an accuracy comparable to that achieved by human experts,and(ii)a considerable savings in terms of expert manpower,since no intervention from either knowledge engineers or domain experts is needed for the construction of the classifier or for its porting to a different category set.It is the ML approach to TC that this paper concentrates on.Current-day TC is thus a discipline at the crossroads of ML and IR,and as such it shares a number of characteristics with other tasks such as information/knowledge extraction from texts and text mining[D¨o rre et al.1999;Knight1999;Pazienza 1997].There is still considerable debate on where the exact border between these disciplines lies,and the terminology is still evolving.Tentatively,we may ob-serve that“text mining”is increasingly being used to denote all the tasks that, by analysing large quantities of text and detecting usage patterns,try to extract probably useful(although only probably correct)information,and that according to this view TC is an instance of text mining.TC enjoys quite a rich literature now,but this is still fairly scattered1.Although two international journals have devoted special issues to this topic[Joachims and Sebastiani2001;Lewis and Hayes1994],there are almost no systematic treatments of the subject:there are neither textbooks nor journals entirely devoted to TC yet, and[Manning and Sch¨u tze1999]is the only chapter-length treatment of the subject. 
As a note,we should warn the reader that the term“automatic text classification”has sometimes been used in the literature to mean quite different things from the ones discussed here.Aside from(i)the automatic assignment of documents to a predefined set of categories,which is the main topic of this paper,the term has also been used to mean(ii)the automatic identification of such a set of categories (e.g.[Borko and Bernick1963]),or(iii)the automatic identification of such a set of categories and the grouping of documents under them(e.g.[Merkl1998;Papka and Allan1998;Roussinov and Chen1998]),a task usually called text clustering, or(iv)any activity of placing text items into groups,a task that has thus both TC and text clustering as particular instances[Manning and Sch¨u tze1999].This paper is organized as follows.In Section2we formally define TC and its var-ious subcases,while in Section3we review the most important tasks to which TC has been applied.Section4describes the main ideas underlying the ML approach to the automated classification of data items.Our discussion of text classification1A fully searchable bibliography on TC created and maintained by this author is available at a.de/bibliography/Ai/automated.text.categorization.htmlMachine Learning in Automated Text Categorization·3 starts in Section5by introducing text indexing,i.e.the transformation of textual documents into a form that can be interpreted by a classifier-building algorithm and by the classifier eventually built by it.Section6tackles the inductive construction of a text classifier from a training set of manually classified documents.Section7 discusses the evaluation of the indexing techniques and inductive techniques intro-duced in the previous sections.Section8concludes,discussing some of the open issues and possible avenues of further research for TC.2.TEXT CATEGORIZATION2.1A definition of text categorizationText categorization may be defined as the task of assigning a Boolean value to each pair d j,c i ∈D×C,where D is a domain of documents and C={c1,...,c|C|} is a set of pre-defined categories.A value of T assigned to d j,c i indicates a decision tofile d j under c i,while a value of F indicates a decision not tofile d j under c i.More formally,the task is to approximate the unknown target function ˘Φ:D×C→{T,F}(that describes how documents ought to be classified)by means of a functionΦ:D×C→{T,F}called the classifier(aka rule,or hypothesis,or model)such that˘ΦandΦ“coincide as much as possible”.How to precisely define and measure this degree of coincidence(that we will call effectiveness)will be discussed in detail in Section7.1.Throughout the paper we will assume that:—The categories are just symbolic labels,and no additional knowledge(either of a procedural or of a declarative nature)of their meaning is available to help in building the classifier.—No exogenous knowledge(i.e.data that might be provided for classification pur-poses by an external source)is available,and the attribution of documents to categories has to be realized solely on the basis of endogenous knowledge(i.e. 
knowledge that can be extracted from the document itself).This means that only the document text is available,while metadata such as e.g.publication date, document type,publication source,etc.are not.The effect of these assumptions is that the algorithms that we will discuss are completely general and do not depend on the availability of special-purpose re-sources that might be costly to develop or might simply be unavailable.Of course, these assumptions need not be verified in operational settings,where it is legiti-mate to use any source of information that might be available or deemed worth developing[de Buenaga Rodr´ıguez et al.1997;D´ıaz Esteban et al.1998;Junker and Abecker1997].Relying only on endogenous knowledge basically means trying to classify a document based solely on its semantics,and given that the semantics of a document is a subjective notion,it follows that the membership of a docu-ment in a category cannot be decided deterministically.This is exemplified by the well-known phenomenon of inter-indexer inconsistency[Cleverdon1984;Hamill and Zamora1980]:when two human experts decide whether to classify document d j under category c i,they may disagree,and this in fact happens with relatively high frequency.A news article on the Clinton-Lewinsky case could befiled under Politics,or under Gossip,or under both,or even under neither,depending on the subjective judgment of the classifier.The notion of“membership of a document in4· F.Sebastiania category”is in many respects similar to the IR notion of“relevance of a documentto an information need”[Saracevic1975].2.2Single-label vs.multi-label text categorizationDifferent constraints may be enforced on the categorization task,depending on the application requirements.For instance,we might need to impose that,for a given integer k,exactly k(or≤k,or≥k)elements of C must be assigned to each elementof D.The case in which exactly1category must be assigned to each documentis often called the single-label(aka non-overlapping categories)case,whereas the case in which any number of categories from0to|C|may be assigned to the same document is dubbed the multi-label(aka overlapping categories)case.A special case of single-label categorization is binary categorization,in which each documentd j must be assigned either to category c i or to its complement c i.From a theoretical point of view,the binary case(hence,the single-label case too)is more general than the multi-label case,in the sense that an algorithm for binary classification can also be used for multi-label classification:one needs only transform a problem of multi-label classification under categories{c1,...,c|C|} into|C|independent problems of binary classification under categories{c i,c i},fori=1,...,|C|.This requires,however,that categories are stochastically independentof each other,i.e.that for any two categories c ,c the value of˘Φ(d j,c )does not depend on the value of˘Φ(d j,c )and viceversa;this is usually assumed to be the case (applicative contexts in which this is not the case are discussed in Section3.5).The converse is not true:an algorithm for multi-label classification cannot be used for either binary or single-label classification.In fact,given a document d j to classify, (i)the classifier might attribute k>1categories to d j,and it might not be obvious how to choose a“most appropriate”category from them;or(ii)the classifier might attribute to d j no category at all,and it might not be obvious how to choose a “least inappropriate”category from C.In the rest of the 
paper,unless explicitly mentioned,we will be dealing with the binary case.There are various reasons for this choice:—The binary case is important in itself because important TC applications,in-cludingfiltering(see Section3.3),consist of binary classification problems(e.g. deciding whether a document is about Golf or not).In TC,most binary classifica-tion problems feature unevenly populated categories(i.e.much fewer documents are about Golf than are not)and unevenly characterized categories(e.g.what is about Golf can be characterized much better than what is not).—Solving the binary case also means solving the multi-label case,which is also representative of important TC applications,including automated indexing for Boolean systems(see Section3.1).—Most of the TC literature is couched in terms of the binary case.—Most techniques for binary classification are just special cases of existing tech-niques that deal with the more general single-label case,and are simpler to illus-trate than these latter.This ultimately means that we will view the classification problem for C={c1,...,c|C|} as consisting of|C|independent problems of classifying the documents in D un-Machine Learning in Automated Text Categorization·5 der a given category c i,for i=1,...,|C|.A classifier for c i is then a functionΦi:D→{T,F}that approximates an unknown target function˘Φi:D→{T,F}.2.3Category-pivoted vs.document-pivoted text categorizationOnce we have built a text classifier there are two different ways for using it.Givena document,we might want tofind all the categories under which it should befiled(document-pivoted categorization–DPC);alternatively,given a category,wemight want tofind all the documents that should befiled under it(category-pivotedcategorization–CPC).Quite obviously this distinction is more pragmatic thanconceptual,but is important in the sense that the sets C of categories and D ofdocuments might not always be available in their entirety right from the start.Itis also of some relevance to the choice of the method for building the classifier,assome of these methods(e.g.the k-NN method of Section6.9)allow the constructionof classifiers with a definite slant towards one or the other classification style.DPC is thus suitable when documents become available one at a time over a longspan of time,e.g.infiltering e-mail.CPC is instead suitable if it is possible that(i)a new category c|C|+1is added to an existing set C={c1,...,c|C|}after a number of documents have already been classified under C,and(ii)these documents needto be reconsidered for classification under c|C|+1(e.g.[Larkey1999]).DPC is morecommonly used than CPC,as the former situation is somehow more common thanthe latter.Although some specific techniques apply to one style and not to the other(e.g.theproportional thresholding method discussed in Section6.1applies only to CPC),this is more the exception than the rule:most of the techniques we will discussallow the construction of classifiers capable of working in either mode.2.4“Hard”categorization vs.ranking categorizationWhile a complete automation of the text categorization process requires a T or Fdecision for each pair d j,c i ,as argued in Section2.1,a partial automation of this process might have different requirements.For instance,given document d j a system might simply rank the categories in C={c1,...,c|C|}according to their estimated appropriateness to d j,without tak-ing any“hard”decision on either of them.Such a ranked list would be of great help to a human expert in charge of taking 
thefinal categorization decision,in that it would be possible for her to restrict the selection of the category(or categories)to the ones at the top of the list rather than having to examine the entire set.Alterna-tively,given category c i a system might simply rank the documents in D according to their estimated appropriateness to c i;symmetrically,for classification under c i a human expert would just examine the top-ranked documents instead than the entire document set.These two modalities are sometimes called category-ranking categorization and document-ranking categorization[Yang1999],respectively,and are the obvious counterparts of DPC and CPC.Semi-automated,“interactive”classification systems[Larkey and Croft1996]areuseful especially in critical applications in which the effectiveness of a fully au-tomated system may be expected to be significantly lower than that of a humanprofessional.This may be the case when the quality of the training data(seeSection4)is low,or when the training documents cannot be trusted to be a repre-6· F.Sebastianisentative sample of the unseen documents that are to come,so that the results ofa completely automatic classifier could not be trusted completely.In the rest of the paper,unless explicitly mentioned,we will be dealing with“hard”classification;however,many of the algorithms we will discuss naturallylend themselves to ranking categorization too(more details on this in Section6.1).3.APPLICATIONS OF TEXT CATEGORIZATIONAutomatic TC goes back at least to the early’60s,with Maron’s[1961]seminalwork on probabilistic text classification.Since then,it has been used in a numberof different applications.In this section we briefly review the most important ones;note that the borders between the different classes of applications mentioned hereare fuzzy and somehow artificial,and some of these applications might arguablybe considered special cases of others.Other applications we do not explicitly dis-cuss for reasons of space are speech categorization by means of a combination ofspeech recognition and TC[Myers et al.2000;Schapire and Singer2000],mul-timedia document categorization through the analysis of textual captions[Sableand Hatzivassiloglou2000],author identification for literary texts of unknown ordisputed authorship[Forsyth1999],language identification for texts of unknownlanguage[Cavnar and Trenkle1994],automatic identification of text genre[Kessleret al.1997],and(gasp!)automatic essay grading[Larkey1998].3.1Automatic indexing for Boolean information retrieval systemsThefirst use to which automatic text classifiers were put,and the application thatspawned most of the early research in thefield[Borko and Bernick1963;Field1975;Gray and Harley1971;Hamill and Zamora1980;Heaps1973;Hoyle1973;Maron1961],is that of automatic document indexing for IR systems relying on acontrolled dictionary,the most prominent example of which is that of Boolean sys-tems.In these latter each document is assigned one or more keywords or keyphrasesdescribing its content,where these keywords and keyphrases belong to afinite setcalled controlled dictionary and often consisting of a thematic hierarchical thesaurus(e.g.the NASA thesaurus for the aerospace discipline,or the MESH thesaurus formedicine).Usually,this assignment is done by trained human indexers,and is thusa costly activity.If the entries in the controlled vocabulary are viewed as categories,text index-ing is an instance of the TC task,and may thus be addressed by the automatictechniques described in this paper.Recalling 
Section2.2,note that this applicationmay typically require that k1≤x≤k2keywords are assigned to each document,for given k1,k2.Document-pivoted categorization is probably the best option,so thatnew documents may be classified as they become available.Various text classifiersexplicitly conceived for document indexing have been described in the literature;see e.g.[Fuhr and Knorz1984;Robertson and Harding1984;Tzeras and Hartmann1993].The issue of automatic indexing with controlled dictionaries is closely relatedto the topic of automated metadata generation.In digital libraries one is usuallyinterested in tagging documents by metadata that describe them under a varietyof aspects(e.g.creation date,document type or format,availability,etc.).Usually,some of these metadata are thematic,i.e.their role is to describe the semantics ofMachine Learning in Automated Text Categorization·7the document by means of bibliographic codes,keywords or keyphrases.The gen-eration of these metadata may thus be viewed as a problem of document indexing with controlled dictionary,and thus tackled by means of TC techniques.An exam-ple system for automated metadata generation by TC techniques is the Klarity system(.au/products/klarity.html).3.2Document organizationIndexing with a controlled vocabulary is one instance of the general problem of document base organization.In general,many other issues pertaining to document organization andfiling,be it for purposes of personal organization or structuring of a corporate document base,may be addressed by TC techniques.For instance,at the offices of a newspaper incoming“classified”ads must be,prior to publication, categorized under the categories used in the scheme adopted by the newspaper; typical categories might be Personals,Cars for Sale,Real Estate,etc.While most newspapers would handle this application manually,those dealing with a high volume of classified ads might prefer an automatic system to choose the most suitable category for a given ad.In this case a typical constraint is that exactly one category is assigned to each document.Similar applications are the organization of patents into categories for making their search easier[1999],the automaticfiling of newspaper articles under the appropriate sections(e.g.Politics,Home News, Lifestyles,etc.),or the automatic grouping of conference papers into sessions.3.3TextfilteringTextfiltering is the activity of classifying a dynamic collection of texts,i.e.a stream of incoming documents dispatched in an asynchronous way by an information pro-ducer to an information consumer[Belkin and Croft1992].A typical case is a newsfeed,where the producer is a news agency(e.g.Reuters or Associated Press) and the consumer is a newspaper[Hayes et al.1990].In this case thefiltering sys-tem should block the delivery to the consumer of the documents the consumer is likely not interested in(e.g.all news not concerning sports,in the case of a sports newspaper).Filtering can be seen as a case of single-label categorization,i.e.the classification of incoming documents in two disjoint categories,the relevant and the irrelevant.Additionally,afiltering system may also perform a further catego-rization into topical categories of the documents deemed relevant to the consumer; in the example above,all articles about sports are deemed relevant,and should be further classified according e.g.to which sport they deal with,so as to allow individual journalists specialized in individual sports to access only documents of high prospective interest for 
them.Similarly,an e-mailfilter might be trained to discard“junk”mail[Androutsopoulos et al.2000;Drucker et al.1999]and further classify non-junk mail into topical categories of interest to the user[Cohen1996].A documentfiltering system may be installed at the producer end,in which case its role is to route the information to the interested consumers only,or at the consumer end,in which case its role is to block the delivery of information deemed uninteresting to the user.In the former case the system has to build and update a “profile”for each consumer it serves[Liddy et al.1994],whereas in the latter case (which is the more common,and to which we will refer in the rest of this section) a single profile is needed.8· F.SebastianiA profile may be initially specified by the user,thereby resembling a standing IR query,and is usually updated by the system by using feedback information provided (either implicitly or explicitly)by the user on the relevance or non-relevance of the delivered messages.In the TREC community[Lewis1995c;Hull1998]this is called adaptivefiltering,while the case in which no user-specified profile is available is called either routing or batchfiltering,depending on whether documents have to be ranked in decreasing order of estimated relevance or just accepted/rejected.Batch filtering thus coincides with single-label categorization under|C|=2categories; since this latter is a completely general categorization task some authors[Hull1994; Hull et al.1996;Schapire et al.1998;Sch¨u tze et al.1995],somewhat confusingly, use the term“filtering”in place of the more appropriate term“categorization”.In information science documentfiltering has a tradition dating back to the’60s, when,addressed by systems of varying degrees of automation and dealing with the multi-consumer case discussed above,it was variously called selective dissemination of information or current awareness(see e.g.[Korfhage1997,Chapter6]).The explosion in the availability of digital information,particularly on the Internet,has boosted the importance of such systems.These are nowadays being used in many different contexts,including the creation of personalized Web newspapers,junk e-mail blocking,and the selection of Usenet news.The construction of informationfiltering systems by means of ML techniques is widely discussed in the literature:see e.g.[Amati and Crestani1999;Bruckner 1997;Diao et al.2000;Tauritz et al.2000;Tong et al.1992;Yu and Lam1998]. 3.4Word sense disambiguationWord sense disambiguation(WSD)refers to the activity offinding,given the oc-currence in a text of an ambiguous(i.e.polysemous or homonymous)word,the sense this particular word occurrence has.For instance,the English word bank may have(at least)two different senses,as in the Bank of England(afinancial institution)or the bank of river Thames(a hydraulic engineering artifact).It is thus a WSD task to decide to which of the above senses the occurrence of bank in Last week I borrowed some money from the bank refers to.WSD is very important for a number of applications,including natural language understanding, or indexing documents by word senses rather than by words for IR purposes. WSD may be seen as a categorization task(see e.g[Gale et al.1993;Hearst 1991])once we view word occurrence contexts as documents and word senses as categories.Quite obviously this is a single-label categorization case,and one in which document-pivoted categorization is most likely to be the right choice.. 
WSD is just an example of the more general issue of resolving natural lan-guage ambiguities,one of the most important problems in computational linguistics. Other instances of this problem,which may all be tackled by means of TC tech-niques along the lines discussed for WSD,are context-sensitive spelling correction, prepositional phrase attachment,part of speech tagging,and word choice selection in machine translation;see the excellent[Roth1998]for an introduction.3.5Hierarchical categorization of Web pagesAutomatic document categorization has recently aroused a lot of interest also for its possible Internet applications.One of these is automatically classifying WebMachine Learning in Automated Text Categorization·9 pages,or sites,into one or several of the categories that make up the commercialhierarchical catalogues hosted by popular Internet portals.When Web documentsare catalogued in this way,rather than addressing a generic query to a general-purpose Web search engine a searcher mayfind it easier tofirst navigate in thehierarchy of categories and then issue her search from(i.e.restrict her search to)aparticular category of interest.Automatically classifying Web pages has obvious advantages,since the manualcategorization of a large enough subset of the Web is infeasible.Unlike in theprevious applications,in this case one would typically want each category to bepopulated by a set of k1≤x≤k2documents,and would choose CPC so as to allow new categories to be added and obsolete ones to be deleted.With respect to other previously discussed TC applications,the automatic cate-gorization of Web pages has two essential peculiarities:(1)The hypertextual nature of the documents:hyperlinks constitute a rich sourceof information,as they may be understood as statements of relevance of thelinked page to the linking page.Techniques exploiting this intuition in a TCcontext have been presented in[Attardi et al.1998;Chakrabarti et al.1998b;G¨o vert et al.1999].(2)The hierarchical structure of the category set:this may be used e.g.by decom-posing the classification problem into a series of smaller classification problemscorresponding each to a branching decision at an internal node.Techniquesexploiting this intuition in a TC context have been presented in[Dumais andChen2000;Chakrabarti et al.1998a;Koller and Sahami1997;McCallum et al.1998;Ruiz and Srinivasan1999].4.THE MACHINE LEARNING APPROACH TO TEXT CATEGORIZATIONIn the’80s the approach that was most popular,at least in operational settings,forthe creation of automatic document classifiers consisted in their manual construc-tion through knowledge engineering(KE)techniques,i.e.in manually building anexpert system capable of taking categorization decisions.Such an expert systemtypically consisted of a set of manually defined rules(one per category)of type if DNF Boolean formula then category else¬ categoryto the effect that the document was classified under category iffit satisfied DNFBoolean formula ,DNF standing for“disjunctive normal form”.The most famousexample of this approach is the Construe system[Hayes et al.1990],built byCarnegie Group for the Reuters news agency.A sample rule of the type usedin Construe is illustrated in Figure1,and its effectiveness as measured on abenchmark selected in[Apt´e et al.1994]is reported in Figure2.Other examplesof this approach are[Goodman1990;Rau and Jacobs1991].The drawback of this“manual”approach to the construction of automatic classi-fiers is the existence of a knowledge acquisition bottleneck,similarly to what 
happensin expert systems.That is,rules must be manually defined by a knowledge engi-neer with the aid of a domain expert(in this case,an expert in the membership ofdocuments in the chosen set of categories).If the set of categories is updated,thenthese two trained professionals must intervene again,and if the classifier is ported。
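Section 2.2 of the survey reduces multi-label categorization under C = {c1, ..., c|C|} to |C| independent binary problems. The sketch below is an illustration added here rather than code from the survey: it shows that decomposition with scikit-learn's one-vs-rest wrapper, using an invented four-document training set, TF-IDF term weighting as a stand-in for the document indexing of Section 5, and logistic regression as a stand-in for the inductive learners of Section 6.

```python
# Minimal sketch of multi-label text categorization via per-category binary classifiers.
# The documents, categories, and labels below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

docs = [
    "the central bank raised interest rates again",
    "the striker scored twice in the cup final",
    "parliament debated the new banking regulation bill",
    "the club sacked its manager after the defeat",
]
labels = [{"finance"}, {"sport"}, {"finance", "politics"}, {"sport"}]

# Document indexing: map each text to a weighted term vector.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

# Encode the category sets as a binary indicator matrix (one column per category c_i).
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# One independent binary classifier per category, mirroring the |C| binary problems of Section 2.2.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)

test = ["the government discussed interest rates"]
pred = clf.predict(vectorizer.transform(test))
print(dict(zip(test, mlb.inverse_transform(pred))))
```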
Procedure for Handling Hospital Equipment Malfunctions
Troubleshooting Hospital Equipment Malfunctions

1. Safety First
- Ensure the safety of both the patient and the healthcare professional.
- Isolate the equipment from the power source.
- Wear appropriate personal protective equipment (PPE) if necessary.

2. Gather Information
- Note the specific equipment malfunction and any error codes displayed.
- Check the equipment's user manual or maintenance records for troubleshooting guidance.
- Interview the healthcare professional who was using the equipment at the time of the failure.

3. Initial Troubleshooting
- Verify the power supply and connections.
- Check for loose wires or connectors.
- Clean any visible debris or dust from the equipment.
- Reset the equipment if possible.

4. Advanced Troubleshooting
- If initial troubleshooting fails, refer to the equipment's maintenance manual for more detailed instructions.
- Use diagnostic tools, such as a multimeter or oscilloscope, to identify faulty components.
- Inspect the equipment's circuit boards and other internal components for damage.

5. Repair or Replacement
- Once the fault has been identified, repair the equipment using appropriate parts and techniques.
- If repair is not possible, replace the defective component or the entire piece of equipment.

6. Documentation and Reporting
- Document all troubleshooting and repair actions taken.
- Report the incident to the hospital's maintenance department and/or the manufacturer.
- Follow up with the healthcare professional who reported the malfunction to ensure the issue is resolved.

7. Preventive Maintenance
- Regularly inspect and maintain hospital equipment to minimize the risk of malfunctions.
- Conduct preventive maintenance checks as per the manufacturer's recommendations.
- Train healthcare professionals on proper equipment usage and troubleshooting techniques.
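Step 6 of the procedure requires every troubleshooting and repair action to be documented and reported. One possible way to keep that record in software is sketched below; the class name, fields, and the example entry are assumptions made for illustration and are not taken from any specific hospital maintenance system.

```python
# Illustrative sketch: a structured record for the documentation step of the troubleshooting procedure.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EquipmentIncident:
    equipment: str
    reported_by: str
    error_code: str
    actions_taken: list = field(default_factory=list)   # chronological log of troubleshooting steps
    resolved: bool = False
    opened_at: datetime = field(default_factory=datetime.now)

    def log_action(self, description: str) -> None:
        """Append a timestamped troubleshooting or repair action."""
        self.actions_taken.append(f"{datetime.now().isoformat(timespec='seconds')} {description}")

# Example usage with made-up data.
incident = EquipmentIncident(equipment="Infusion pump #12", reported_by="Ward B nurse", error_code="E-07")
incident.log_action("Isolated device from power and checked connectors")
incident.log_action("Reset device; error cleared after self-test")
incident.resolved = True
print(incident)
```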
Louis-Bar Syndrome Nursing Care (PPT courseware)
- Pressure ulcer prevention: turn the patient regularly; use an air mattress or a pressure-relieving mattress.
- Pulmonary infection prevention: encourage the patient to cough and do deep-breathing exercises; keep indoor air circulating.
- Infection prevention: keep the skin clean, change diapers regularly, and pay attention to personal hygiene.
- Muscle atrophy prevention: provide appropriate exercise and massage; maintain joint range of motion.
- Constipation prevention: encourage the patient to drink more water and eat fiber-rich foods; perform abdominal massage.
Treatment is mainly symptomatic, combined with rehabilitation training.

Etiology and symptoms
- Etiology: viral infection, such as herpes virus or varicella virus.
- Symptoms: fever, headache, muscle pain, rash, etc.
- Neurological symptoms: confusion, convulsions, coma, etc.
- Respiratory symptoms: rapid breathing, difficulty breathing, etc.
- Other symptoms: digestive and cardiovascular symptoms, etc.

Diagnosis and treatment
- Diagnostic methods: clinical presentation, electroencephalography (EEG), cerebrospinal fluid examination, etc.
- Treatments: antiviral drugs, immunomodulators, symptomatic treatment, etc.
- After exercise, have the patient relax and stretch appropriately to relieve muscle fatigue.
- Maintain an appropriate amount of exercise and avoid overexertion.
- Observe the patient's response during exercise and adjust the intensity promptly.
- Conduct regular exercise assessments and adjust the exercise plan.
Psychological care
- Listen to the patient and understand their psychological needs.
- Encourage the patient to express emotions and provide emotional support.
- Help the patient build confidence and overcome psychological barriers.
- Guide the patient in psychological adjustment and help maintain mental health.

Outcome evaluation
- Degree of improvement in the patient's symptoms
- Degree of improvement in the patient's quality of life
- Satisfaction of the patient's family
- Effectiveness of the nursing interventions
- Satisfaction of the nursing staff

Sharing of nursing experience

Nursing tips
- Maintain good personal hygiene habits
- Have regular physical examinations
- Maintain a good psychological state
- Maintain good living habits
- Undergo regular rehabilitation training
- Maintain good interpersonal relationships
Chapter 2: Molecular Structure
Shared electron pairs
Nonmetal atoms bond to other elements by sharing a pair of electrons, forming a covalent bond.
H + Cl → H–Cl (Lewis structural formula)
Limitations of the Lewis theory:
1. It cannot explain why two negatively charged electrons do not repel each other but instead pair up and become more stable;
2. It cannot explain why, in many covalent compounds, atoms whose outer shells hold fewer than 8 (or more than 8) electrons can still exist stably.
Figure: the BF3 molecule, which has a trigonal planar structure (a central B atom bonded to three F atoms).
3. sp3 hybridization
The spatial configuration of CH4 is a regular tetrahedron, with bond angles of 109.5°.
C: 2s²2p². One 2s electron is excited into the empty 2p orbital, and the 2s orbital and the three 2p orbitals then mix to form four equivalent sp3 hybrid orbitals (orbital diagrams: sp3 hybridization during the formation of CH4; the four sp3 hybrid orbitals).
II. Hybrid Orbital Theory
(3) Equivalent and non-equivalent hybridization
(1) Equivalent hybridization: all of the hybrid orbitals are identical.
(2) Non-equivalent hybridization:

When atoms approach each other, their atomic orbitals overlap; by sharing an electron pair with opposite spins, the atoms lower the energy of the system, and a covalent bond is thereby formed.
The greater the orbital overlap, the stronger the bond and the more stable the molecule.
Bonding principles:
① the electron pairing principle  ② the principle of maximum orbital overlap
I. Valence Bond Theory (VB)
① The electron pairing principle: if two atoms each carry an unpaired electron and their spins are opposite, the electrons can pair up to form a covalent bond.

The classical covalent bond theory (G. N. Lewis, 1916, USA)
1. Key points:
Atoms in covalent molecules all tend to attain a noble-gas electron configuration, thereby achieving stability of their own.
Atoms form chemical bonds by sharing electron pairs; such bonds are covalent bonds.
"–" denotes a single bond, "=" a double bond, and "≡" a triple bond; valence bond structural formulas are written, for example, as N≡N.
Lewis's contribution was to propose a new type of bond, different from the ionic bond, which explained the fact that atoms of elements with relatively small electronegativity can bond to one another.
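The electron bookkeeping behind Lewis structures and the octet rule can be illustrated with a short script. The sketch below is an addition for illustration, not part of the original slides: it counts the valence electrons of a few simple molecules and flags central atoms, such as boron in BF3, that end up with fewer than eight electrons, which is exactly the kind of case the Lewis theory struggles to explain. The small valence-electron table covers only the elements used in the examples.

```python
# Illustrative sketch: valence-electron counting for simple Lewis structures.
# Only a few main-group elements are included; the values are their standard valence electron counts.
VALENCE = {"H": 1, "B": 3, "C": 4, "N": 5, "F": 7, "Cl": 7}

def total_valence_electrons(atoms):
    """Sum the valence electrons available for a molecule given as a list of element symbols."""
    return sum(VALENCE[a] for a in atoms)

def central_atom_electron_count(n_bonds, n_lone_pairs):
    """Electrons counted around the central atom in the octet sense: 2 per bond plus 2 per lone pair."""
    return 2 * n_bonds + 2 * n_lone_pairs

molecules = {
    # name: (atom list, bonds to the central atom, lone pairs on the central atom)
    "CH4": (["C", "H", "H", "H", "H"], 4, 0),
    "BF3": (["B", "F", "F", "F"], 3, 0),
    "HCl": (["H", "Cl"], 1, 3),  # treating Cl as the "central" atom here
}

for name, (atoms, bonds, lone_pairs) in molecules.items():
    total = total_valence_electrons(atoms)
    around_central = central_atom_electron_count(bonds, lone_pairs)
    octet = "satisfies the octet rule" if around_central == 8 else f"only {around_central} electrons (octet not satisfied)"
    print(f"{name}: {total} valence electrons in total; central atom {octet}")
```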
Method to improve testing speed of memory
Patent title: Method to improve testing speed of memory
Inventors: Richard C. Blish, II; David E. Lewis
Application number: US08/992077    Filing date: 1997-12-17
Publication number: US05907561A    Publication date: 1999-05-25
Abstract: A method of testing a semiconductor memory device using a parallel march pattern method of testing. All of the memory bits in a memory device are programmed to a first logic state. All of the memory bits in selected rows are programmed to a second logic state. All of the memory bits in rows adjacent to the rows programmed to the second logic state are read to determine if the memory bits programmed to the second logic state have caused the memory bits programmed to the first logic state in the adjacent rows to change logic state. The selected rows are determined by a periodicity value, such as 4, 8, or 16. The periodicity determines the number of clock cycles needed to test the entire memory device. A periodicity of 8 requires only 8 clock cycles to test the entire memory device, regardless of the size of the memory device. The parallel march pattern method of testing can be applied by rows, by columns, or by diagonals.
Applicant: ADVANCED MICRO DEVICES, INC.    Attorney/Agent: H. Donald Nelson
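The abstract describes the test procedure informally, so a small software model can make the idea concrete. The sketch below is only an illustrative simulation written for this document, not code from the patent, and it ignores the clock-cycle parallelism that the hardware method exploits: rows selected by an assumed periodicity are written to the second logic state, and the adjacent rows are then read back to look for bits disturbed away from the first logic state.

```python
# Illustrative simulation of the row-wise parallel march disturb test described in the abstract.
# The memory model, the periodicity choice, and the fault injection are all invented for demonstration.

def march_disturb_test(memory, periodicity=8):
    """Return (row, col) positions that flipped away from the first logic state (0).

    memory: list of rows, each a list of 0/1 bits, assumed initialized to all 0s.
    Rows at indices 0, periodicity, 2*periodicity, ... are written to the second state (1);
    the rows adjacent to each written row are then read back and checked.
    """
    rows = len(memory)
    written = list(range(0, rows, periodicity))
    for r in written:
        memory[r] = [1] * len(memory[r])          # program the selected rows to the second logic state
    failures = []
    for r in written:
        for adj in (r - 1, r + 1):                # read the neighbours of each written row
            if 0 <= adj < rows and adj not in written:
                failures += [(adj, c) for c, bit in enumerate(memory[adj]) if bit != 0]
    return failures

if __name__ == "__main__":
    mem = [[0] * 16 for _ in range(32)]           # toy 32x16 memory initialized to the first logic state
    mem_faulty = [row[:] for row in mem]
    mem_faulty[9][3] = 1                          # pretend a bit next to row 8 was disturbed
    print("clean device:", march_disturb_test([row[:] for row in mem]))
    print("faulty device:", march_disturb_test(mem_faulty))
```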
Risk Factors for Sudden Cardiac Death in Non-ischemic Dilated Cardiomyopathy
… the guidelines have indeed led to a large number of unnecessary ICD implantations. Future research should combine the discovery and validation of new arrhythmia risk markers to improve the value of LVEF for risk stratification in NIDCM.

… heart disease, myocardial fibrosis is the substrate for VA, and scar interacts with normal conduction circuits to give rise to "scar-related" ventricular tachycardia. This mechanism may also contribute to ventricular tachycardia in some NIDCM patients [36]. Increasingly …
… and their relationship with SCD risk in DCM, including QRS duration, T-wave alternans, left bundle branch block (LBBB), QTc, atrial fibrillation, and others.
(Chinese Journal of Gerontology, February 2021, Vol. 41, p. 670)

4 Cardiac magnetic resonance (CMR) markers
… a strong independent predictor of adverse outcomes in these patients. DCM …
… assessment and, through the use of gadolinium contrast agents, provides … of myocardial scar formation …
… under the combined effect of edema, the extracellular space of the myocardium widens and can hold more gadolinium contrast agent, while also showing the characteristic late enhancement [32]. Its good spatial resolution and tissue specificity give it clear advantages in diagnosing various types of myocardial lesions [33]; LGE is therefore regarded as the gold standard for the non-invasive assessment of myocardial fibrosis [34]. Early meta-analyses have confirmed that LGE … non-ischemic …
Multilingualism and the Choice of a Language for Anglophone Education in Cameroon
Sino-US English Teaching, ISSN 1539-8072June 2013, Vol. 10, No. 6, 473-477Multilingualism and the Choice of a Language forAnglophone Education in CameroonPaul MbufongUniversity of Douala, Douala, CameroonThis paper seeks to do three things: (1) examine the linguistic situation of Cameroon; (2) identify the language(s)used in education; and (3) discuss whether in the light of the social linguistic evidence, the current choice oflanguage for education (English) is well motivated. The methodology employed is a socio-linguistic survey ofsome randomly selected urban centers in Cameroon for example, Buea, Bamenda, Kumba, etc.. The resultssuggest among other things: (1) that Cameroon is a highly multilingual country with over 280 home languages;and (2) that while English is the language for Anglophone education, Pidgin English is actually the predominantlanguage and the first language for most Anglophones. Based on these findings, the author proposes PidginEnglish as the choice language for early education in Anglophone Cameroon (the southwest and northwestregions). Pidgin English is the only language which expresses Cameroonian reality. It is spoken by more than70% of the population. It is the only language that is not associated with a particular tribe, religion, or with aAll Rights Reserved.specific colonial government.Keywords: multilingualism, Anglophone, PidginIntroductionRichards, J. Platt, and H. Platt (1992) defined multilingualism as “the use of three or more languages by an individual or by a group of speakers such as the inhabitants of a particular region or nation” (p. 238). Sub-SaharanAfrica is one of the most multilingual areas of the world in terms of the ratio of population to languages.Although Cameroon has been loosely referred to as a bilingual country, the reality is that Cameroonian bilingualism refers only to the use of two official languages, French and English. As a nation, Cameroon, likeseveral African countries is actually multilingual.The number of languages spoken in Cameroon is not known for certain. Several figures have been bandied about: Koenig, Chia, and Povey (1983, p. 23) suggested 123 mutually unintelligible languages. According to thelinguistic atlas of Cameroon, there are 239 indigenous languages belonging to many totally different families(Chumbow & Bobda, 1996, p. 44). Lewis (2009) put the number of indigenous languages for Cameroon at 286.What such disparate figures demonstrate is that there are many languages in Cameroon, and the disadvantages ofhaving such a diversity and multiplicity of languages within a single country are palpable.Paul Mbufong, doctor, Department of English and Foreign Languages, University of Douala.474MULTILINGUALISM AND THE CHOICE OF A LANGUAGE FOR ANGLOPHONEProblems of MultilingualismThe first is that there is no unity. Like the biblical story of “The Tower of Babel”, when a country speaks with 200 or more different voices, mutual-understanding becomes extremely difficult. This has been a majorcause of bitterness and suspicion among the different linguistic groups in the country, as it is very easy tomisinterpret what the other person has said. It has also encouraged favouritism, nepotism, tribalism, and othersocial ills because many Cameroonians in position of influence—(which positions they obtained largely becauseof their provenance, not merit)—naturally tend to favor those who can speak their language and who usuallycome from the same ethnic group(s). 
This happens at the expense and to the annoyance of other competinglinguistic groups.Only about 10% of the population speaks English or French, the two official languages. Although PE (Pidgin English) has been very useful as a language of communication between persons who do not share thesame language, the masses of our people, situated at over 70%, have no means of communicating with oneanother. Communication between the government and the people breaks down easily, thereby giving room totribalists and others who have an axe to grind, to mislead the people. It inhibits the communication of new ideasand techniques from the government to the masses thereby slowing down the economic, political, social, andcultural development of the country. Innovation in agriculture, industry, and health, which would have helped themasses of the people to combat disease, ignorance, want obscurantism and superstition, are rendered impossiblethrough the barriers created by the multiplicity of tongues in the country. Some researchers have even suggesteda correlation between multilingualism and underdevelopment.All Rights Reserved.Above all, it makes the use of indigenous languages in Cameroon education difficult, as it is not possible to use all the languages spoken in the country, nor is it easy to decide which of them to choose and which to ignorewithout triggering civil war. Owing to our mutual suspicion and jealousy, no linguistic group would like to giveup its own language in favour of another in the interest of the nation. Every linguistic group wants its ownlanguage to be chosen as the natural language, and if that is not done, then no other language must be chosen.The government’s approach to the problem of linguistic multiplicity has been to settle for foreign languages (French and English in that order). Ironically, the choice of French and English has not been without its problems,not least the perennial suspicion between francophone and Anglophone Cameroonians.The Choice of a Language for Anglophone EducationCameroon government policy prescribes English as the language for Anglophone education. The choice of English is well motivated. English is one of the official languages per the constitution. English is the language ofscience and technology. It is a passport to educational advancement and prestigious employment. It is thelanguage of commerce, law administration, and a means of national and international communication. English atthe end of the 20th century is more widely scattered, more widely spoken and written than any other language hasever been. It has become the language of the planet, the first truly global language surpassed, in numbers thoughnot in distribution only by the speakers of the many varieties of Chinese. Three-quarters of the world’s mails andtheir telexes, SMSs (short message services), and cables are in English. English is the language of technologyfrom the Silicon Valley to Shanghai. English is the language of technology from the Silicon Valley to Shanghai.MULTILINGUALISM AND THE CHOICE OF A LANGUAGE FOR ANGLOPHONE475English is the medium for 80% of the information stored in the world’s computers. Nearly half of all businessdeals are conducted in English. It is the language of sport and glamour. The official language of the Olympics andthe Miss Universe competition. English is the official voice of the air, of the sea, and of Christianity. It is theecumenical language of the world council of churches. 
Five of the largest broadcasting companies in the world(CBS (Columbia Broadcasting System), NBC (National Broadcasting Company), ABC (American BroadcastingCompany), BBC (British Broadcasting Corporation), and CNN (Cable News Network)) are English. English hasa few rivals, but no equals. Neither Spanish nor Arabic, both international languages, has the global sway ofEnglish. Germany, Japan, and China recently have, in matching the commercial and industrial vigour of theUnited States, achieved the commercial precondition of language power, but their languages have also beeninvaded by English.English is even used by three or four hundred million people for whom it is not a native language. It has become a second language in countries as diverse as India, Kenya, Nigeria and of course, Cameroon. In thesecountries, English is a vital alternative language, often unifying huge territories and diverse populations.For the above reasons and more, English should be taught and learnt in our schools.However, the importance of English as a global language should not blind us to sound pedagogic and linguistic policy: The best medium for teaching children at the initial stages of their education is their mothertongue, and it is after a firm linguistic foundation has been laid in it that there should be a change to the use ofEnglish as a medium of instruction at later stages. The importance of the first language in the education of achild especially at the early stages cannot be overemphasized. Psychologically, the proper development of the All Rights Reserved.child is closely bound with the continued use of the language he has spoken from birth, the language of hisparents, his brothers and sisters, and friends he is used to. It is the language in which he has acquired his firstexperience of life, the one in which he dreams and thinks and in which he can easily and conveniently expresshis feelings.For the vast majority of Anglophone Cameroonians that language is PE. Koenig et al.’s (1983) sociolinguistic survey of the major urban centres of Cameroon (1977-1978) led to the discovery of thefollowing percentages of children who acquired English and PE as their first languages respectively (seeTable 1).Table 1English/Pidgin English Use in Anglophone TownsEnglish (%) PE (%)Bamenda 1 22Mamfe 0 25Kumba 1 19Buea 7 26Limbe 4 31Alobwede’s (1998) survey used the principles in the 1977-1978 survey and came out with the following figures (see Table 2).476MULTILINGUALISM AND THE CHOICE OF A LANGUAGE FOR ANGLOPHONE Table 2English/Pidgin English Use in Both Anglophone and Francophone TownsEnglish (%) PE (%)Bamenda 3.5 24Mamfe 1 25Kumba 3 22Buea 13 28Limbe 9 30Douala 6 10Yaounde 8 15They contrast the geometric progression of the acquisition of English as a first language, with the arithmetical progression of the acquisition of PE as a first language. In the words of Alobwede (1998, p. 59), PEis the only language in Cameroon which expresses Cameroonian reality without provoking vertical or horizontalhostilities. Secondly, it is conveniently flexible and as such can be acquired at not cost. Finally, because of itshorizontal spread, it is the language of consensus… It is estimated that more than 70% of our population speaksPE. Sadly not only is the constitution silent on the existence of PE, active steps are taken by Cameroonian schoolsto discourage PE. A common warning, with the threat of severe punishment is “Don’t Speak Pidgin”. At theUniversity of Buea for example, one finds several billboards discouraging the use of PE. 
The government is indenial as regards its language policy vis-à-vis PE. If 70% of the population speak PE, it surely means PE is quiteAll Rights Reserved.popular and to discourage PE is to alienate 70 % of the population from that basic collective identity.To ignore this familiar language and begin to teach a foreign and unfamiliar language when children come to school is like taking them from their homes and putting them among strangers. Most of what is said they cannotunderstand. They cannot express what they want to say and become tongue-tied and inhibited. While take theBritish pupil in England for example, they have only one problem to contend with—subject content, on the otherhand, the Cameroonian pupil have two problems to cope with: (1) English; and (2) the subject content.Educationally, too, children cannot learn the most elementary facts until they have understood the foreign language in which those ideas are expressed. As language is the most powerful tool of learning, children willlearn very little until they have mastered the language of instruction. There is also considerable linguisticconfusion on the part of these children who in spite of official attempts at dissuading them confess that they thinkin PE but try to express themselves in English.Another reason why our children’s education should begin in PE is that although PE started as a contact language, it has become the language of Anglophone Cameroon culture. It is the first language for a goodmajority of our children. In fact, PE is fast assuming the status of a Creole. Language and culture are inseparable,and to separate children from their language and culture at an early stage of their education is to make them haveno regard for their culture. This does not only create a barrier between them and their less educated parents, butwhat is worse, it may cause them to despise the language of their community in our case, PE, in favour of aforeign one, English.MULTILINGUALISM AND THE CHOICE OF A LANGUAGE FOR ANGLOPHONE477ConclusionsThe success claimed for the experiment at Ife (see Afolayan, 1982) of teaching the whole of primary education through the medium of the first language Yoruba, while English is taught as a subject on the curriculumcould be instructive. A child is not likely to forget forever the language he is born into, the language of his parentsand the language of his youth, especially if he has learned to read and write it at the beginning of his schooleducation and he continues to use it as the occasion demands after school. Waudhaugh (1987) quoted a 1953UNESCO (United Nations Educational, Scientific, and Cultural Organization) report entitled “The Use ofVernacular Languages in Education” which stipulated: “The mother tongue is a person’s natural means of selfexpression, and one of his first needs is to develop his power of self expression to the full… every child shouldbegin his formal education in his mother tongue” (p. 168). This notwithstanding, we have to admit that in theCameroonian context, this would demand among other things: not being in denial about the popularity andimportance of PE, the standardization of PE, the training of teachers, and the preparation of teaching materials.ReferencesAfolayan, A. (1982). The application of Ife six year, primary project to the university primary education. Ife: University of Ife Press.Alobwede, C. (1998). Banning Pidgin English in Cameroon. English Today, 14(1).Bamgbose, A. (1970). The English language in west Africa. J. Spencer (Ed.). London: Heinemann.Chia, E. 
Chia, E. (1998). Cameroon home languages (pp. 9-32). In E. L. Koenig, E. Chia, & J. Povey (Eds.). Los Angeles: Crossroads Press.
Chumbow, S. B., & Bobda, S. (1996). The life cycle of post-imperial English in Cameroon. In J. Fishman (Ed.), Contributions to the sociology of language. New York: Mouton de Gruyter.
Koenig, E. L., Chia, E., & Povey, J. (1983). A sociolinguistic profile of urban centres in Cameroon. Los Angeles: Crossroads Press.
Lewis, M. P. (2009). Ethnologue: Languages of the world. Dallas, Texas: SIL International.
Richards, J. C., Platt, J., & Platt, H. (1992). Longman dictionary of language teaching and applied linguistics. Harlow: Longman.
Spencer, J. (Ed.). (1971). The English language in West Africa. London: Heinemann.
Waudhaugh, R. (1987). Languages in competition. Oxford: Basil Blackwell.
Academic English After-Class Answer Key: Unit 4
Unit 4 Writing a Literature Review

I. Teaching Objectives
In this unit, you will learn how to:
1. write a self-contained literature review
2. write a literature review as a part of an essay
3. cite sources by correct quotation and paraphrase
4. give the appropriate documentation to the sources you use
5. avoid different kinds of plagiarism
6. identify common knowledge
7. acquire paraphrasing skills
8. enhance language skills related to the reading and listening material presented in this unit

II. Teaching Procedures
1 Writing a literature review
Task 1
1 The four articles were published right after the Fukushima disaster in Japan and all addressed the topic of potential risks of nuclear radiation.
2 Radiation is not so terrible as expected, and human beings are exposed to different sources of radiation every day. Whether it will endanger human health or not depends on the duration and strength of radiation exposure.
3 Amber Cornelio holds a different attitude from the other three authors. He believes that radiation exposure will certainly raise the risk of getting cancer and that government officials downplay its potential danger to justify the use of nuclear power.
4 Answers may vary.
5 It seems that Texts 11, 12 and 14 provide more scientific facts about nuclear radiation than Text 13, which is more emotionally charged, using many rhetorical questions and phrases like "I am simply floored", "let officials be oblivious", "not to be outdone" and "Do not tell us about that". Hence it appears less reliable and trustworthy.
Task 2
Compared with uranium, which the production of conventional nuclear power requires, there is far more lithium in sea water, enough to support 30 million years of fusion fuel.
Task 3
1 Review the previous related studies
2 State the previous studies' limitations
3 Announce the direction for further studies
2 Writing a self-contained literature review
Task 1
1 Stigmatization, a kind of social rejection, is a big challenge to the mentally ill. They are rejected by people because of the label they carry or because their behaviors indicate that they belong to a certain labeled group.
2 To report the past studies of the topic. Studies have proved that stigmatization of the mentally ill is caused by the public's belief in myths about the dangerousness of the mentally ill, and that exposing those myths can reduce stigmatization.
3 Three articles.
4 Pescosolido and Tuch (2000) thought that a common response to the mentally ill is rejection and fear of violence. Another article concluded that rejection and fear are caused by having less contact with the mentally ill. Alexander and Link (2003) found that any type of contact with mentally ill individuals reduced perceptions of dangerousness of the target.
5 1) What are the major causes of the rejection and fear, and can they be reduced?
2) This finding is verified by Alexander and Link (2003).
Task 2
Text 11
Title: Risks of Nuclear Power
Author(s): Bernard L. Cohen
Source: .
Summary: Radiation from nuclear power is feared to have the potential of causing a cancer or some genetic diseases. This fear, however, is dismissed by Cohen after he compares artificial radiation and the radiation that occurs naturally in our environment, analyzing their respective impact on human health.
Cohen separately discusses the different sources of nuclear power risks and arrives at the following conclusions: 1) the probability of real reactor accidents, with the safety system of defense in depth, is extremely small; 2) radioactive waste, if properly handled, causes negligible damage; 3) other radiation problems, such as accidents in transportation or radon exposures in mining, are also not so threatening as they seem to be. In summary, he believes that radiation due to nuclear power will cause far fewer cancers and deaths than coal burning. (130 words)

Text 12
Title: How Radiation Threatens Health
Author(s): Nina Bai
Source: Scientific American
Summary: Nina Bai addresses the widespread concerns over the health effects of radiation exposure in the wake of the Fukushima nuclear crisis. She discusses three determinative factors: the level, type and duration of radiation exposure. First, radiation sickness usually occurs when there is an excessive dose of exposure, though the limits of radiation level differ for the general public, radiation workers, and patients going through medical radiation. Second, of the four types of ionizing radiation, gamma, X-ray, alpha, and beta, the latter two, albeit lower in energy, are more likely to cause health damage. Third, a very high single dose of radiation can be more harmful than the same dosage accumulated over time. Finally, Bai draws on the lesson of Chernobyl and concludes that radiation exposure within reasonable limits is not so fearful and that it is good to exercise caution. (136 words)

Text 13
Title: Should Nuclear Radiation Found in Domestic Milk Come as a Surprise?
Author(s): Amber Cornelio
Source: http://www.
Summary: Amber Cornelio (2011) maintains that radiation from Japan's Fukushima disaster has threatened the daily life of ordinary Americans. He challenges the government's view that radioactive materials detected in domestic milk, vegetables and rainwater will pose no public health concern. He suspects that the government is downplaying the potential dangers of radiation to justify its use of nuclear power. He believes the government has failed to do its job of protecting people. In the end, he urges the government to be more responsible and to stop building power plants on a fault line. He warns that covering up the facts is not the key to avoiding similar disasters in the future. (108 words)

Text 14
Title: Radiation and Health: The Aftershocks of Japan's Nuclear Disaster
Author(s): Susan Blumenthal
Source: http://www.
Summary: Susan Blumenthal (2011) aims to inform people about nuclear radiation with scientific facts. She starts the essay with a reference to the worldwide spread of fear in the wake of the Fukushima disaster and then explains what radiation is. The explanation is followed by a report of the different types of radioactive materials released into the air. She goes on to tell that exposure to those materials will increase the risks of some major diseases. However, she concedes that radiation is not so menacing as was assumed and that humans are exposed to naturally occurring radiation every day. Whether radiation is harmful to health or not depends on two factors: the duration and strength of the exposure. She warns that exposure to high doses of radiation can lead to acute health problems. Long-term low-dose exposure to radiation is equally fatal.
(137 words)

Task 3
The release of substantial amounts of radiation into the atmosphere from the Fukushima nuclear plant has triggered widespread concerns over the use of nuclear power and the health effects of radiation exposure. Since the Chernobyl disaster, and especially since the Fukushima nuclear crisis, many scientists and scholars have attempted to estimate the effect of nuclear radiation on human health. Cohen (2011) believes the fear that nuclear radiation will cause a cancer or other genetic diseases is unnecessary. He made a detailed analysis of the effects on human health of accidents in nuclear power plants, accidents in transporting radioactive materials and the escape of radioactive wastes from confinement systems, comparing them with the effects of coal burning. Cohen arrived at the following conclusions: nuclear radiation, if properly handled, causes negligible damage and far fewer deaths than coal burning. Cohen's view is shared by Bai (2011). Bai discussed three determinative factors: the level, type and duration of radiation exposure. She found that radiation sickness usually occurs only when there is an excessive dose of exposure. Second, of the four types of ionizing radiation, gamma, X-ray, alpha, and beta, the latter two are more likely to cause health damage. Third, a very high single dose of radiation can be more harmful than the same dosage accumulated over time. Bai concluded that radiation exposure within reasonable limits is not so fearful and that it is good to exercise caution. Blumenthal (2011) did similar research. She examined the different types of radioactive materials released into the air. She found that exposure to those materials would increase the risks of some major diseases. However, radiation is not so menacing as was assumed, as humans are exposed to naturally occurring radiation every day. She believes that whether radiation is harmful to health or not depends on two factors: the duration and strength of the exposure. Only exposure to high doses of radiation or long-term low-dose exposure could lead to acute health problems.
Contrary to the three scholars, however, Cornelio (2011) maintained that radiation from Japan's Fukushima disaster threatened the daily life of ordinary Americans. He challenges the government's view that radioactive materials detected in domestic milk, vegetables and rainwater will pose no public health concern. He suspects that the government is downplaying the potential dangers of radiation to justify its use of nuclear power. Hence he urges the government to be more responsible and to stop building power plants on a fault line.

3 Writing a literature review as a part of an essay
Task 1
1 Content-based instruction (CBI) is an alternative approach to teaching English. In such an approach, language teaching is integrated within discipline-specific content courses. The major goal is to equip students with academic literacy skills across the curriculum. CBI has gained wide acceptance in U.S. undergraduate institutions.
2 Numerous research studies demonstrate consistently that content-based second language teaching promotes both language acquisition and academic success.
3 More than 10 articles.
4 The literature on CBI has focused mainly on its most immediate effects, i.e., the outcomes of one or two semesters in which content-based instruction was provided.
Studies on the sustained or long-term benefits of content-based language instruction are scarce.
5 The writer plans to study how CBI will impact students' future performance, both in terms of academic courses and English proficiency.
Task 2
Nuclear Radiation and Its Long-Term Health Effects
There has been constant controversy over the application of nuclear power and the risks of nuclear radiation ever since the Chernobyl disaster. In particular, the release of substantial amounts of radiation into the atmosphere from Japan's Fukushima Daiichi nuclear power plant in 2011 triggered widespread fear and concern over the risks of radiation leaks, radiation exposure, and their impact on people's health. The commonsensical and intuitive response of the public is that nuclear radiation is most likely to cause a cancer or genetic diseases. Many researchers, however, assured the public that there is no substantial danger as assumed, and that nuclear power is not as fearful or menacing as it seems to be. Cohen (2011), Blumenthal (2011) and Bai (2011), for example, cited numerical evidence and resorted to scientific facts to illustrate that a certain level of nuclear radiation risk won't pose real danger if handled properly with the current technology available or by following the prescribed rules. They do admit the possibility of radiation initiating certain kinds of diseases, though. Only exposure to high doses of radiation or long-term low-dose exposure could lead to acute health problems (Bai, 2011). Nevertheless, not everyone agrees. Cornelio (2011), on the other hand, holds that nuclear radiation is most likely to threaten people's health by contaminating milk, vegetables, and rainwater.
The literature on the relationship between radiation and health has largely focused on the manageability of nuclear risks and played down the damage that nuclear radiation is likely to cause. These studies generally sidestepped the question of whether there is any solid evidence to bear out the long-term health impact of nuclear radiation. More well-grounded studies are needed on the correlation between radiation and health, and on the possible long-term health effects, in order to address the concerns of the general public. Besides, we also need to answer questions like "Why is there a disparity between the commonsensical feeling of the public and the explication offered by experts concerning nuclear radiation and health?", "Are scientists biased, using facts and statistics in their favor?" and "Is there a long-term negative health impact if one takes moderate doses of nuclear-contaminated food over a long period?"
Task 3
Answers may vary.
Task 4
Answers may vary.

4 Citation
Task 1
Order / Name and date / Quotation / Paraphrase
1 Newell and Simon (1972) √
2 Feigenbaum and Feldman (1963) √
3 Polya (1945) √
4 Minsky (1968) √
Task 2
Technology plays an ever more important role in the making of discoveries. Throughout scientific history, many discoveries have been made because of the application of more sophisticated devices and equipment. For example, Galileo's great discovery was attributed to the improvement of machinery for making telescopes. And thanks to the Deep Sea Explorer II, life forms are now known to exist in the deeper parts of the Pacific Ocean despite the great pressure, a fact which defies the previous opinion that there was no life at such extreme depth. (Jones, 2001:125)
Task 3
Human activities are chiefly responsible for climate change.
Despite the dispute as to whether global warming is caused by human activities (McGuire, 2001), carbon dioxide has been proved to be the major factor in climate change. Carbon dioxide forms a thick gas layer as it constantly builds up in the atmosphere. The gas layer is the killer of the ozone layer, the layer which protects the Earth from harmful radiation, thus causing global warming. It is documented that carbon dioxide (CO2) is emitted in a number of ways, among which the burning of fossil fuel obviously releases a great amount of CO2 into the atmosphere (Dalleva, 2007). Another way is deforestation, such as the conversion of forestland to farms, ranches, or urban use. According to Border (2011), 15 to 20% of total carbon dioxide emissions are attributed to land-use changes.

5 Documentation

6 Avoiding plagiarism
Task 1
Answers may vary.
Task 2
1 The sun rises in the east. (CK)
2 Paris is the capital of France. (CK)
3 Fudan is one of the best universities in China. (not CK)
4 Shanghai students speak better English than Sichuan students because of less accent. (not CK)
5 Chinese college students are mostly scientifically illiterate. (not CK)
6 There were 1.3 billion residents in China in 2011. (CK)
7 One can never judge a person by his appearance. (CK)
Task 3
1: a) lacks both the inside acknowledgement and the reference.
2: a) lacks the inside acknowledgement.
3: a) fails to use quotation marks when it uses the exact words of the original.
4: a) lacks both the inside acknowledgement and the reference.
5: a) lacks the inside acknowledgement.
6: a) lacks the inside acknowledgement.
7: a) does not use quotation marks when it uses the exact words of the original.
8: a) lacks the inside acknowledgement.

7 Paraphrasing
Task 1
1: a)
2: b)
Task 2
3 Instructivists hold that the "real world", external to individuals, can be represented as knowledge and determines what will be understood by individuals. This view has been shifting to a constructivist view over the past decade (Merriënboer, 1997).
4 Two components must be present in an instructional design theory. The first component (methods) describes how human learning will be supported, and the second component (situation) describes when certain methods ought to be used (Reigeluth, 1999).
5 According to Heimlich (1992), man has always had an interest in the environment both as a source of raw materials and as a refuge for the human spirit. Nowadays, the two main environmental interests are based on the concept of "a better quality of life", as well as the need to replenish the sources of raw materials. In comparison with the pre-1960s, much greater interest in the environment is currently being expressed.
6 According to Gredler (2001), the same factors apply to developing complex skills in a classroom setting as to developing complex skills in any setting. A response must be induced, then reinforced as it gets closer to the desired behavior. Reinforcers have to be scheduled carefully, and cues have to be withdrawn gradually so that the new behaviors can be transferred and maintained.
Task 3
1. Use a synonym of a word or phrase
1) They can intrude deep inside the human body where they can damage biological cells and thereby cause a cancer.
2) If radioactive material is absorbed into the body, however, it is actually the lower energy alpha and beta radiation that becomes the more dangerous.
3) I am simply shocked that officials are understating nuclear radiation levels in the United States as a result of the Fukushima disaster!
4) Let officials be forgetful; the rest of us saw it approaching.
5) On March 11, 2011, a dimension 9.0 earthquake attacked Japan, causing a destructive tsunami that tore through the coastal regions and leveled the villages in its path.
2. Change the order of information
1) How the spent fuel is dealt with determines the effects of routine releases of radioactivity from nuclear plants.
2) It is difficult to measure the effects of long-term, low-dose radiation.
3) One indication of the terrible situation in Japan is that no sensible man wants to visit there again for the next 80 to 100 years.
4) Understandably, panic among the masses is what the authorities try to avoid.
5) Burns or other symptoms of acute radiation syndrome (ARS) vary from person to person depending on the strength of radiation and the level of exposure.
3. Change from the active to the passive or vice versa
1) Our cancer risk should be eventually increased by 0.002% (one part in 50,000), thus our life expectancy reduced by less than one hour due to the radiation brought by nuclear technology.
2) 180,000 people have been evacuated by the Japanese government from within a 20-kilometer radius of the Fukushima Daiichi complex.
3) Farmers in Japan were asked by government officials to keep cows and cattle in barns as radioactive contamination of milk spread from Fukushima prefecture, north of Tokyo.
4) A sheet of paper can often block alpha and beta particle radiation as it is lower energy.
5) A broad range of acute health problems will arise only among the individuals who are exposed to high doses of radiation, such as reactor workers.
4. Change the positive into the negative and vice versa
1) Since our body cells fail to distinguish between natural radiation and radiation from the nuclear industry.
2) No number of noticeable deaths from coal burning was larger than in an air pollution incident where there were 3,500 extra deaths in one week.
3) Should any increase in radiation due to a nuclear disaster, instead of one occurring naturally, be of concern?
4) The dairy industry will not stop working closely with federal and state government agencies to ensure that we maintain a safe milk supply.
5) Almost no one will experience a broad range of acute health problems due to exposure to high doses of radiation, except for the individuals close to the source of radiation such as reactor workers.
5. Change personal nouns into impersonal nouns and vice versa
1) The attack on sex cells can cause genetic diseases in progeny.
2) Unawareness of the danger led parents to serve contaminated milk to their children.
3) Our perplexity results from the increases in diseases, obesity and erratic behavior among our malnourished populace.
4) Then again, officials tend to downplay everything, so that panic doesn't occur among the masses.
5) A person who is exposed to low-dose but long-term radiation will develop chronic health conditions including cancer.
6. Change complex sentences into simple sentences and vice versa
1) There is little likelihood, if any, that each system in this series of back-ups will fail.
2) We should not be worried at all.
3) The increase in cancer risk is too small to determine unless many exposed subjects are studied.
4) Any exposure will lead to certain damage and safety problems.
5) Despite a lot of news distraction, we still notice the dire current situation.

8 Enhancing your academic language
Reading: Text 11
1 Match the words with their definitions.
1 i  2 f  3 g  4 c  5 h  6 a  7 b  8 e  9 d  10 j
2 Complete the following expressions or sentences by using the target words listed below with the help of the Chinese in brackets. Change the form if necessary.
1 breach  2 shallow  3 implement  4 survivor(s)  5 hypothetical  6 initiate  7 potential  8 despite  9 neutralize  10 contact  11 transport  12 volume  13 penetrate  14 confirm  15 strategy  16 estimate  17 noticeable  18 generation  19 avert  20 medical  21 disperse  22 integrity  23 compensate
3 Read the sentences in the box. Pay attention to the parts in bold. Now complete the paragraph by translating the Chinese in brackets. You may refer to the expressions and the sentence patterns listed above.
is associated with nuclear energy (is connected with nuclear energy)
depends somewhat on (depends to some extent on)
take care of (to deal with)
radiation leakage takes place (radiation leakage occurs)
arises from long-time exposure to radiation (results from long exposure to radiation)
4 Translate the following sentences from Text 11 into Chinese.
1 Radiation exists naturally in our environment; the average person is struck every second by about 15,000 particles of radiation from natural sources, while an ordinary medical X-ray examination carries radiation of about 100 billion particles.
LEWIS SPACECRAFT MISSION FAILURE INVESTIGATION BOARD
FINAL REPORT
12 February 1998

LEWIS SPACECRAFT MISSION FAILURE INVESTIGATION BOARD REPORT

TABLE OF CONTENTS
EXECUTIVE SUMMARY
INTRODUCTION
BACKGROUND
  SSTI Program Description
  Request For Proposal (RFP)
  Contract Award
  Significant Contract Changes
  Spacecraft Flight Operations and Failure
  Anomaly Timeline
FACTORS DIRECTLY CONTRIBUTING TO FAILURE
  Flawed ACS Design and Simulation
  Inadequate Spacecraft Monitoring
FACTORS INDIRECTLY CONTRIBUTING TO FAILURE
  Requirements Changes without Adequate Resource Adjustment
  Cost and Schedule Pressure
  Move from Chantilly to Redondo Beach
  Inadequate Ground Station Availability for Initial Operations
  Frequent TRW Personnel Changes
  Inadequate Engineering Discipline
  Inadequate Management Discipline
IMPLICATIONS ON "FASTER, BETTER, CHEAPER"
  Balance Realistic Expectations of Faster, Better, Cheaper
  Establish Well Understood Roles and Responsibilities
  Adopt Formal Risk Management Practices
  Formalize and Implement Independent Technical Reviews
  Establish and Maintain Effective Communications
SUMMARY
APPENDICES
  APPENDIX A - Assignment Letter
  APPENDIX B - Team Membership
  APPENDIX C - Individuals Interviewed
  APPENDIX D - Meetings Conducted
  APPENDIX E - Additional Detail on the Direct Causes of the Anomaly
  APPENDIX F - Presentation Charts (Not Included)

EXECUTIVE SUMMARY
The Lewis Spacecraft Mission Failure Investigation Board was established to gather and analyze information and determine the facts as to the actual or probable cause(s) of the Lewis Spacecraft Mission Failure. The Board was also tasked to review and assess the "Faster, Better, Cheaper" Lewis spacecraft acquisition and management processes used by both NASA and the contractor in order to determine if they may have contributed to the failure. The investigation process used by the Board was to individually interview all persons believed to have had a substantial involvement in the Lewis spacecraft acquisition, development, management, launch, operations and the events that may have led to the eventual loss. These interviews were aimed at not only understanding the facts as they occurred but also at understanding the individual perceptions that may have been instrumental in the decisions and judgments as made on this Program.
The Board found that the loss of the Lewis Spacecraft was the direct result of an implementation of a technically flawed Safe Mode in the Attitude Control System. This error was made fatal to the spacecraft by the reliance on that unproven Safe Mode by the on-orbit operations team and by the failure to adequately monitor spacecraft health and safety during the critical initial mission phase.
The Board also discovered numerous other factors that contributed to the environment that allowed the direct causes to occur. While the direct causes were the most visible reasons for the failure, the Board believes that the indirect causes were also very significant contributors. Many of these factors can be attributed to a lack of a mutual understanding between the contractor and the Government as to what is meant by Faster, Better, Cheaper.
These indirect contributors are to be taken in the context of implementing a program in the Faster, Better, Cheaper mode:
• Requirement changes without adequate resource adjustment
• Cost and schedule pressures
• Program Office move
• Inadequate ground station availability for initial operations
• Frequent key personnel changes
• Inadequate engineering discipline
• Inadequate management discipline
The Board strongly endorses the concept of "Faster, Better, Cheaper" in space programs and believes that this paradigm can be successfully implemented with sound engineering and attentive, effective management. However, the role changes for Government and Industry are significant and must be acknowledged, planned for and maintained throughout the program. Since these roles are fundamental changes in how business is conducted, they must be recognized by all team members and behaviors adjusted at all levels. The Board observed an attempt during the early phase of the Lewis Program to work in a Faster, Better, Cheaper culture, but as the Program progressed the philosophy changed to business as usual, with dedicated engineers working long hours using standard processes to meet a short schedule and skipping the typical Government oversight functions.
Based on observations from the Lewis Program, the Board offers the following recommendations in order to enhance mission success in future programs performed under this new paradigm:
Balance Realistic Expectations of Faster, Better, Cheaper. Meaningful trade space must be provided along with clearly articulated priorities. Price realism at the outset is essential, and any mid-program change must be implemented with adequate adjustments in cost and schedule. This is especially important in a program that has been implemented with minimal reserves.
Establish Well Understood Roles and Responsibilities. The Government and the contractor must be clear on the mutual roles and responsibilities of all parties, including the level of reviews and what is required of each side and each participant in the Integrated Product Development Team.
Adopt Formal Risk Management Practices. Faster, Better, Cheaper methods are inherently more risk prone and must have their risks actively managed. Disciplined technical risk management must be integrated into the program during planning and must include formal methods for identifying, monitoring and mitigating risks throughout the program. Individually small, but unmitigated, risks on Lewis produced an unpredicted major effect in the aggregate.
Formalize and Implement Independent Technical Reviews. The internal Lewis reviews did not include an adequate action response and closure system and may have received inadequate attention from the contractor's functional organizations. The Government has the responsibility to ensure that competent and independent reviews are performed by the Government, the contractor, or both.
Establish and Maintain Effective Communications. A breakdown of communications and a lack of understanding contributed to wrong decisions being made on the Lewis program. For example, the decision to operate the early on-orbit mission with only a single-shift ground control crew was not clearly communicated to senior TRW or NASA management. The Board believes that, especially in a "Faster, Better, Cheaper" program, these working relationships are the key to successful program implementation.
Although this report necessarily focused on what went wrong with the Lewis Program, much also went right due to the skill, hard work, and dedication of many people. In fact, these people completely designed, constructed, assembled, integrated and tested a very complex space system within the two-year goal and probably came very close to mission success.

INTRODUCTION
The Lewis Spacecraft was procured by NASA via a 1994 contract with TRW, Inc., and launched on 23 August 1997. Contact with the spacecraft was subsequently lost on 26 August 1997. The spacecraft re-entered the atmosphere and was destroyed on 28 September 1997.
The Lewis Spacecraft Mission Failure Investigation Board was established to gather and analyze information and determine the facts as to the actual or probable cause(s) of the Lewis Spacecraft Mission Failure. All pertinent information concerning the failure, and recommended preventive measures to preclude similar failures on future missions, were to be addressed in a report to the NASA Associate Administrator for Earth Science Programs. Because of the programmatic experimental nature of the Small Satellite Technology Initiative (SSTI) Program, the Board was also tasked to review and assess the Lewis spacecraft acquisition and management processes used by both NASA and the contractor in order to determine if they may have contributed to the failure.
The investigation process used by the Board was to individually interview all persons believed to have had a substantial involvement in the Lewis spacecraft acquisition, development, management, launch, operations and the events that may have led to the eventual loss. These interviews were aimed at not only understanding the facts as they occurred but also at understanding the individual perceptions that may have been instrumental in the decisions and judgments as made on this Program.
The Board wishes to acknowledge the contributions of all of those interviewed. To a person, all were open, forthright and professional. The Board also wishes to acknowledge the Failure Review Board, chartered by TRW and chaired by Vice Admiral David Frost (Retired) to perform an independent internal investigation, for sharing their technical findings.

BACKGROUND
SSTI Program Description
The SSTI Program was intended to validate a new approach to the acquisition and management of spacecraft systems by NASA and to simultaneously produce an implementation that leverages U.S. technology investments. The stated objectives were to reduce costs and development time of space missions for science and commercial applications. Specifically, the Program was to demonstrate new small satellite design and qualification methods and proactively promote commercial technology applications while producing valued science data based on new technologies. This effort was to use a new approach of "Faster, Better, Cheaper" acquisition and management by NASA and the contractor. This provided for minimal oversight involvement by the Government in the implementation of the effort and shifted a larger responsibility role to the contractor than was standard practice at that time. The concept was to implement the Program using Integrated Product Development Teams (IPDT) that included industry, the science community, academia and the Government.
The development of the SSTI Program was initiated through a Government-sponsored workshop with industry, science community and academia participation.
The workshop, titled the Small Spacecraft Technology Workshop, was held in Pasadena, California in September 1993. This workshop included participation by NASA Headquarters, several NASA Centers, numerous industry teams and representatives from academic and research organizations. All principal participants in the Lewis Program, including NASA Headquarters Code X, the Goddard Space Flight Center, the Langley Research Center and the ultimately selected Lewis Contractor, TRW, were represented at this workshop. One of the goals of the SSTI Program was to allow teams comprised of representatives from the Government, industry and academia to work together to help develop the program. The Workshop provided a forum for Integrated Product Development Team formation to occur.

Request For Proposal (RFP)
NASA Headquarters issued a planning RFP in January 1994 with industry comments received two weeks later. The actual RFP was then revised according to the information learned and reissued in February 1994. The contractors were given an option to propose either a two-years-to-launch or a three-years-to-launch program depending upon the technology infusion methodology selected. The RFP specified severe fee penalties for cost or schedule overruns. There were no Government-directed Contract Deliverable Requirements List (CDRL) items and no Government-specified technical requirements. Additionally, there were no performance, quality assurance, or other Government standards imposed.

Contract Award
The contract for the Lewis spacecraft was awarded at a price of $57,940,026 to TRW at Redondo Beach, California on 8 June 1994. The contract was a Cost Plus Award Fee (CPAF) type that included the acquisition of the total system: the spacecraft, the ground operations system, spacecraft on-orbit operations for one year, payload, systems integration, data applications, commercialization outreach, and the launch vehicle. A source evaluation board using the NASA streamlined evaluation process and managed by NASA Headquarters evaluated this proposal. TRW won the contract on the strength of their proposed implementation, technology infusion, cost and schedule. Their proposal was incorporated into the Contract by reference, and TRW's Chantilly facility was established as the executing agent. The schedule from contract start to launch was two years.
As implemented by TRW, Lewis was a significantly complex, small-size spacecraft. The requirements were driven by the accommodations needed for the scientific payload, which included the first spaceflight version of a hyperspectral imager. The spacecraft subsystems, for the most part, had challenging performance requirements in such areas as pointing accuracy and thermal control, resulting in a relatively complex design. The proposed launch vehicle was to be the Pegasus XL built by Orbital Sciences Corporation in Chantilly, Virginia. The spacecraft was completed in two years, but launch vehicle delays caused a launch slip of over a year to August 1997.

Significant Contract Changes
The Government, with TRW concurrence, made significant contractual changes in the ongoing Lewis program that changed technical and performance requirements despite the ambitious fast-track schedule and the premise that no changes would be allowed. These were a change to increase the on-orbit life of the SSTI Lewis Spacecraft from a three-year requirement to a five-year goal and a change in the launch vehicle from a Pegasus XL to the Lockheed-Martin Launch Vehicle (LMLV).
Additionally, another contract change was made in May 1996. This change established a cost cap of $64,800,000 on the program.

Spacecraft Flight Operations and Failure
Launch. The Lewis Spacecraft was launched into a 300-km parking orbit on 23 August 1997 with nominal performance by the launch vehicle. The spacecraft acquisition timer was autonomously initiated, and the solar arrays appeared to have deployed successfully. This deployment was to have been followed by an autonomously initiated sun acquisition maneuver. The parking orbit had high atmospheric drag and was intended to be transitional. The final mission orbit of 523 km was to have been achieved during the first 30 days by using the satellite's own propulsion system.
Ground operations recorded approximately twelve anomalies during the first four days following launch. The operations team resolved all but four of these anomalies. The rest of this section describes these four anomalies, the last of which led directly to the loss of the mission.
First Anomaly: Autonomous Switch Over to the B-Side Processor. An unexpected event occurred when an autonomous switch over placed the spacecraft under the control of the B-side processor. The spacecraft was launched in the A-side configuration but when first contacted after launch, it was already under B-side control. The reason for this switch over could not be ascertained from available launch event data, but the possible causes of this unexpected condition include:
• an unrealistically short time-out flag set before launch;
• failure of the A-side processor;
• failure of gyro 1 or gyro 2 during launch;
• unforeseen interaction involving the solar array drive clocking position interlock with the sun acquisition position loop closure.
A limited set of simulation runs and a lack of launch event simulation fidelity may have contributed to the failure to anticipate this event. Additionally, any real-time telemetry that might have been available was lost because the communication transmitter "on-off" table is zeroed out whenever an autonomous processor switch over occurs. Therefore telemetry was lost until the on-off table was loaded into the B-side processor by ground command.
Second Anomaly: Solid State Recorder Failure. A second anomaly precluded a more meaningful understanding of the actual sequence of events. This anomaly was the inability to play back the solid state recorder (SSR) data taken during the launch event. Early attempts at playback were unsuccessful because incorrect command sequences were sent to the spacecraft. Later attempts using the command sequences from the Operations Manual were also unsuccessful. The attempted data playbacks were unsuccessful using both the A and the B-side of the spacecraft processor. Use of the redundant side of the SSR was never attempted because of a desire to preserve the data that had already been recorded. This data would have been lost if the SSR had been switched over to the redundant side. TRW has since demonstrated that the incorrect command sequence does not cause the engineering model (EM) SSR to lock up, and they were able to achieve data playback from the EM recorder when the incorrect command sequence was followed by the correct sequence.
The reason for this failure has not been determined, but possible causes that have been identified are listed below.
• The command sequence used from the Operations Manual may not have been correct. This possibility arises because the Operations Manual contained a different command sequence than was used for Acceptance Testing.
• The operations team assigned to troubleshoot the anomaly was not sufficiently experienced in failures of the SSR unit. A similar problem occurred frequently during testing of the SSR unit in the factory, but the acceptance test team was able to move a sequence of data pointers and then execute the read command, resulting in a successful read-out of data.
• A hardware or firmware failure may have occurred within the SSR itself.
Note that throughout the initial checkout period, only one crew, serving extended shifts, conducted all of the Lewis on-orbit operations. The entire crew was given a rest period each night, only the first seven hours of which were coincident with a period when ground station coverage was unavailable. This staffing approach will be discussed later in more detail.
Third Anomaly: Contact Lost for Two Orbits, Spacecraft Reappeared in Uncontrolled Attitude Mode. A third anomaly occurred after the spacecraft had been in the normal Attitude Control System (ACS) Earth Hold mode uneventfully for approximately 45 hours using the B-side processor hardware. When the crew started operations on the morning of 25 August, they reconfigured the spacecraft back to the A-side processor, which enabled the electrical power subsystem fault triggers, turned on the A-side propulsion catalyst bed heaters and turned on the reaction wheel electronics. Then came a period of about three hours when the next attempted station acquisition was unsuccessful. On the following available acquisition opportunity, orbit links were established, but the spacecraft was found to be not in the expected Earth Hold mode; it was instead in an uncontrolled attitude mode with its battery partially discharged. Subsequent telemetry analyses showed that the spacecraft was off-pointed from the sun about 28 degrees in pitch and yaw with its battery at a 43% depth of discharge (DOD). This is significant because this DOD implies that the battery had not received adequate charge for a substantial portion of the two orbits during which no contact was achieved. Normal spacecraft operation was then restored in the sun point thruster operated mode in which the intermediate moment of inertia axis is aligned toward the sun. This mode is the built-in spacecraft "Safe Mode" but is inherently unstable without the proper active control. On the Lewis spacecraft this mode is controlled by a single 2-axis gyro that provides no rate information about the intermediate axis that is pointed toward the sun.
After verifying that the spacecraft had been stable in the sun mode for four hours of operation with its battery fully charged and operating on the A-side, the operations crew entered an approximately nine-hour rest period and ceased operations for the day. This was done in spite of the serious nature of the "cause unknown" anomalies that had already occurred, and in spite of the fact that the control center had been unsuccessful in their numerous attempts to retrieve any of the data that was locked within the solid state recorder.
Fourth Anomaly: Spin about the Principal Axis. The fourth and final (catastrophic) anomaly occurred approximately four hours after the seven-hour planned ground station unavailability period.
The anomaly manifested itself as a spacecraft flat spin (a spin about its principal axis) that pointed the solar arrays edge-on to the sun. The start of the flat spin came in a period when the ground stations were available but the operations crew had not yet returned to work, and it therefore initially went unnoticed. By the time of the discovery of the anomaly, the battery was in a deep depth of discharge (approximately 72%). Subsequent analyses of the situation concluded that excessive thruster firings, caused by the spacecraft's autonomous attempts to control about the intermediate axis, were sensed by the spacecraft processor, which then disabled the A-side thrusters and switched control from the A-side processor to the B-side processor. Excessive thruster firings on the B-side then caused the B-side thrusters to also be disabled by the processor, leaving the spacecraft uncontrolled. The single two-axis gyro was saturated, and the spacecraft was then in free drift that resulted in rotation about the principal axis, off-pointing the solar array from the Sun.
At the next and final contact pass (in this low earth orbit the ground station contact times are on the order of about five minutes each), the depth of battery discharge was 82%. In preparing for this pass, the operations crew, working under extreme time pressure, developed what was hoped to be a recovery plan. At the start of the contact pass the B-side thrusters were enabled by ground command, and three one-second thruster pulses were commanded in an attempt to arrest the spacecraft rotation rate. As it turned out, only the first of the three commands was executed by the dying spacecraft, because the operations crew had addressed the second and third commands incorrectly. The spacecraft went out of ground station contact and was subsequently never reacquired.

FACTORS DIRECTLY CONTRIBUTING TO FAILURE
The Board believes that the loss of the Lewis Spacecraft was the result of an implementation of a technically flawed Safe Mode in the Attitude Control System. This error was made fatal to the spacecraft by the reliance on that unproven Safe Mode by the operations team and the failure to adequately monitor spacecraft health and safety during the critical initial mission phase.

Flawed ACS Design and Simulation
Flawed ACS Design. The Safe Mode was required by TRW specification to maintain the spacecraft in a safe, power-positive orientation. This mode was to drive the solar panels to a predetermined clock position, to orient the spacecraft intermediate axis (the x-axis) toward the sun and to maintain that orientation autonomously, using thruster firings without ground station intervention, for a minimum of 72 hours in the mission (523-km altitude) orbit. This was implemented using a single two-axis gyro that was unable to sense rate about the spacecraft intermediate (x) axis. Therefore, when the spacecraft tried to maintain attitude control, a small imbalance, perhaps in thruster response, caused the spacecraft to spin up around the unsensed x-axis. Because the spin was about an intermediate axis, the spin momentum started to transfer into the controlled principal axis (z-axis), causing the thrusters to fire excessively in an attempt to maintain control. The ACS processor was programmed to shut down the control system if excessive firings occurred. When both the A-side and the B-side thrusters had been shut down sequentially, the spin momentum that had built up in the intermediate (x) axis transferred into the principal (z) axis. This had the effect of rotating the spacecraft up to 90 degrees in inertial space, causing the solar arrays to be pointed nearly edge-on to the sun. The spacecraft then drained its battery at a significantly fast rate because of the power subsystem and thermal subsystem Safe Mode design.
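The instability described here is the standard rigid-body result for rotation about the intermediate axis of inertia. The relations below are general dynamics, not material drawn from the Board's analysis, and the axis numbering is generic rather than the Lewis body frame; they simply show why a residual rate about an unsensed intermediate axis grows instead of damping out.

    I_1 \dot{\omega}_1 = (I_2 - I_3)\,\omega_2\omega_3, \qquad
    I_2 \dot{\omega}_2 = (I_3 - I_1)\,\omega_3\omega_1, \qquad
    I_3 \dot{\omega}_3 = (I_1 - I_2)\,\omega_1\omega_2 .

    \text{For a nominal spin } \omega_2 = \Omega \text{ about the intermediate axis } (I_1 > I_2 > I_3),
    \text{ small perturbations obey}

    \delta\ddot{\omega}_1 = \frac{(I_1 - I_2)(I_2 - I_3)}{I_1 I_3}\,\Omega^2\,\delta\omega_1 ,
    \qquad \text{so they grow as } e^{\lambda t}, \quad
    \lambda = \Omega\sqrt{\frac{(I_1 - I_2)(I_2 - I_3)}{I_1 I_3}} > 0 .

Spin about either the maximum- or minimum-inertia axis makes that product of inertia differences negative, giving bounded oscillation rather than exponential growth. This is why aligning the intermediate x-axis toward the sun, rather than the stable principal axis as on TOMS, left the Safe Mode dependent on actively sensing exactly the rate the single two-axis gyro could not measure.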
Flawed ACS Simulation. The operations crew, relying on the ACS Safe Mode as validated by simulation, allowed the spacecraft to go untended for a 12-hour period. This reliance was ill-founded because the simulation that was used to validate the ACS Safe Mode was flawed. The ACS design heritage was initially based on the proven Total Ozone Mapping Spacecraft (TOMS) design. The expected system performance was then analyzed using tools developed for the TOMS program. In fact, the Lewis control subsystem design was significantly more complex than TOMS because the Lewis spacecraft aligned its x-axis (intermediate/unstable), rather than its z-axis (principal/stable) of inertia, toward the sun in Safe Mode. When a Lewis-modified version of the TOMS simulation was run, neither a thruster imbalance nor an initial (albeit small) spin rate about the intermediate (roll) axis was modeled. The simulation was run for about twice the 72-hour requirement and demonstrated stability under the programmed conditions. An additional factor was that the simulation was done using mission mode parameters, not the low earth transfer mode parameters that represented the condition the spacecraft was actually in at the time of these operations. The mission mode represented a more stable attitude control condition because of lower drag forces. This simulation was subsequently "validated" during a fixed-base test involving spacecraft hardware. Unfortunately, this validation test was done for only a 100-minute period and did not model a thruster-imbalance-on-orbit scenario. In the absence of the disturbance torque that would have been imparted had thruster imbalance been modeled, the fatal flaws remained undetected.

Inadequate Spacecraft Monitoring
Single Crew Operations. The contractor implemented single-crew operations as a cost-saving measure, even in the initial on-orbit operations before the spacecraft was characterized and put into the more stable mission orbit. The single-shift operation prevented a timely recognition of the final anomaly since no one was manning the operations at the time of the actual occurrence. This single-shift operation concept was developed shortly after, and as a direct result of, the emphasis put on cost control by NASA. This emphasis on cost control was indicated to TRW by the issuance of the show cause notice in March 1995. Significantly, the NASA management team did not know about the planned single-shift operations, at least the planned single shift during the first 30 days of operations, until after the fatal anomaly had occurred.
Failure to declare an emergency. The spacecraft exhibited several anomalies, any of which should have been enough justification to declare a spacecraft emergency and call up additional ground station coverage and additional people. Non-recognition of the significance of these anomalies by the operations team, especially the third anomaly, was the fatal flaw.
The first anomaly was that the spacecraft was discovered, immediately after launch, in an unexpected condition. When first observed by the ground station, the B-side processor was in control of the spacecraft.
This signified an autonomous change from the initial launch condition that had the spacecraft under A-side processor control. Furthermore, this B-side status was maintained for the next two days without requesting additional ground station coverage through the declaration of a spacecraft emergency. When operating on the B-side, the spacecraft is in single-string operation, since an autonomous fail-over back to the A-side is not a feature of this design. Therefore, had the B-side processor failed during that time, the Safe Mode, flawed though it was, would not have been autonomously enabled.
The second anomaly was discovered when the solid state recorder would not play back the data previously recorded. This included all launch data that could have been helpful in analyzing the first anomaly.
The third anomaly was discovered after a two-orbit failure to acquire the spacecraft telemetry signal. The spacecraft reappeared in an uncontrolled attitude with the battery partially discharged. The operations crew again failed to declare a spacecraft emergency and, after implementing a ground-commanded recovery and seeing that the spacecraft operated nominally in the Safe Mode for four hours, allowed the spacecraft to go untended for a 12-hour period.
The fourth and catastrophic anomaly occurred after the spacecraft had been left in Safe Mode with its intermediate axis of inertia pointed toward the sun and the roll rate not sensed. When the operations crew returned from a rest period, they discovered that the spacecraft was spinning at approximately two revolutions per minute about its principal axis of inertia with its solar arrays pointed nearly edge-on to the sun.
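To make the simulation-fidelity point concrete, the short Python sketch below propagates the torque-free Euler equations for a notional spacecraft. The inertia values, initial rates and thruster-imbalance torque are illustrative assumptions, not the actual Lewis parameters, and the script is not the Board's or TRW's analysis tool; it simply shows that a run which omits any disturbance about the intermediate axis looks quiet well beyond 100 minutes, while adding even a small unsensed roll rate and imbalance torque produces a tumble within hours.

    # Illustrative sketch only: inertias, rates and disturbance torque are assumed
    # demonstration values, not Lewis flight data, and this is not the TRW/TOMS simulation.
    import numpy as np

    I = np.array([90.0, 120.0, 60.0])   # kg*m^2; axis 0 (x) is the intermediate axis

    def euler_rates(w, torque):
        # Euler's equations for a rigid body in principal axes, with external torque.
        return np.array([
            ((I[1] - I[2]) * w[1] * w[2] + torque[0]) / I[0],
            ((I[2] - I[0]) * w[2] * w[0] + torque[1]) / I[1],
            ((I[0] - I[1]) * w[0] * w[1] + torque[2]) / I[2],
        ])

    def peak_transverse_rate(w0, torque, duration_s, dt=1.0):
        # Fixed-step RK4 propagation; returns the largest rate seen about the y and z axes.
        w = np.array(w0, dtype=float)
        peak = max(abs(w[1]), abs(w[2]))
        for _ in range(int(duration_s / dt)):
            k1 = euler_rates(w, torque)
            k2 = euler_rates(w + 0.5 * dt * k1, torque)
            k3 = euler_rates(w + 0.5 * dt * k2, torque)
            k4 = euler_rates(w + dt * k3, torque)
            w = w + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            peak = max(peak, abs(w[1]), abs(w[2]))
        return peak

    # Case A: the kind of run that was flown in validation -- no roll rate, no imbalance torque.
    clean = peak_transverse_rate([0.0, 1e-6, 1e-6], np.zeros(3), duration_s=100 * 60)
    # Case B: add a small unsensed roll rate and a tiny thruster-imbalance torque about x.
    disturbed = peak_transverse_rate([0.005, 1e-6, 1e-6], np.array([1e-4, 0.0, 0.0]),
                                     duration_s=12 * 3600)
    print(f"100-minute clean run, peak transverse rate: {clean:.2e} rad/s")
    print(f"12-hour run with disturbances, peak rate:   {disturbed:.2e} rad/s")

Under these assumed numbers the clean 100-minute case stays at its micro-radian-per-second seed level, while the disturbed 12-hour case tumbles at rates several orders of magnitude higher. That qualitative gap is the difference between what a short, disturbance-free validation test could reveal and the untended 12-hour window in which the actual flat spin developed.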