GriPhyN Collaboration
Papermaking English Vocabulary
Papermaking is an important invention of the working people of ancient China.
It is practiced in two forms, machine-made and handmade.
Below is a curated selection of papermaking English terms; happy reading.
tearing(breaking)strength:撕裂度
(stock,pulp,stuff)consistency:浆浓
filler retention:填料的留着率
filler,fillings,loading material:填料
ash content:灰分
impact tester:冲击强度测定仪
wet strength:湿强度
air permeability:透气度,透气性
air permeability tester:透气度测定仪
burst,bursting strength,pop strength:耐破度
burst factor:耐破因子
folding endurance,folding strength:耐折度
folding resistance:耐折性能
bending stiffness:弯曲挺度
smoothness:平滑度
contact angle test:(施胶度)接触角测定法
whiteness:白度
absorbability:吸收性能
opacity, opaqueness: 不透明度
diaphanometer: 不透明度测量仪
ring crush compression resistance: 环压强度
ring stiffness: 环压挺度
flat crush resistance: 平压强度(瓦楞芯纸)
bend strength: 弯曲强度
bending chip: 耐折叠纸板
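Several of the strength terms above (burst, tear) are commonly reported normalized by grammage. As a rough illustration only, not part of the glossary itself, here is a small sketch assuming the usual SI definitions: burst index is bursting strength (kPa) divided by grammage (g/m²), and tear index is tearing force (mN) divided by grammage (g/m²).

```python
# Hedged sketch: common paper test indices, assuming the standard SI
# definitions (burst index in kPa*m^2/g, tear index in mN*m^2/g).

def burst_index(bursting_strength_kpa: float, grammage_gsm: float) -> float:
    """Burst index = bursting strength (kPa) / grammage (g/m^2)."""
    return bursting_strength_kpa / grammage_gsm

def tear_index(tearing_force_mn: float, grammage_gsm: float) -> float:
    """Tear index = tearing force (mN) / grammage (g/m^2)."""
    return tearing_force_mn / grammage_gsm

if __name__ == "__main__":
    # An 80 g/m^2 sheet bursting at 240 kPa has a burst index of 3.0.
    print(burst_index(240.0, 80.0))  # 3.0
    print(tear_index(400.0, 80.0))   # 5.0
```

The numeric inputs are invented for illustration; real test values come from instruments such as the burst and tear testers named in the list.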
Interpretation of the Chinese Expert Consensus on Laparoscopic or Robot-Assisted Radical Pancreatectomy for Pancreatic Cancer (2022 Edition)

Received Date: 2023-08-03; Accepted Date: 2023-11-03
Foundation Item: Science and Technology Innovation Action Plan of Shanghai Science and Technology Commission (20Y11908600)
Corresponding Author: JIANG Chongyi, Email: *********************
Citation: CAI Z W, JIANG C Y. Interpretation of Chinese expert consensus on laparoscopic or robot-assisted radical pancreatectomy for pancreatic cancer (2022 edition)[J]. Chinese Journal of Robotic Surgery, 2024, 5(2): 299-303.
Authors: CAI Zhiwei, JIANG Chongyi (Department of General Surgery, Biliary and Pancreatic Disease Center, Huadong Hospital Affiliated to Fudan University, Shanghai 200040)
Abstract: The consensus was developed by the Minimally Invasive Diagnosis and Treatment Group of the Pancreatic Cancer Committee of the China Anti-Cancer Association together with the Pancreatic Surgery Group of the Surgery Branch of the Chinese Medical Association, drawing on pancreatic surgery experts from across China, and was first published in the Journal of Pancreatology in 2022.
Youdao Dictionary: Biology Laboratory Vocabulary (A)
</item>
<item>
<word>Mohr burette for use with pinchcock</word>
<trans>碱式滴定管</trans>
</item>
<item>
<word>Geiser burette</word>
<trans>酸式滴定管</trans>
</item>
<item>
<word>flint glass solution bottle with stopper</word>
<trans>白细口瓶</trans>
</item>
<item>
<word>brown glass solution bottle with stopper</word>
<trans>棕细口瓶</trans>
</item>
<item>
<word>beaker</word>
<trans>烧杯</trans>
</item>
<item>
<word>stainless-steel beaker</word>
<trans>不锈钢杯</trans>
</item>
<item>
<word>Buchner funnel</word>
<trans>布氏漏斗</trans>
</item>
Fun Medical English Terms
How much English medical terminology do you know? Probably not much! Below is a collection of intriguing medical English terms, compiled for your reference; we hope you find it helpful.
妙趣横生的医学英语名词 1laboratory timer 实验室用定时钟labor room 待产室labour 分娩,生产L-ACG (left atria-cardiogram) 左心房搏动图lachrymal canal dilator 泪管扩张器lachrymal catheter 泪导管lachrymal needle 泪囊针头lachrymal probe 泪囊探针lachrymal sac irrigator 泪囊冲洗器lachrymal sac retractor 泪囊牵开器lachrymal sound 泪管探子lachrymal style 泪管通针,泪管探子lachrymal syringe needle 泪管冲洗针lacrimal 泪的lacrimal syringe 泪囊注射器lacrimotome 泪管刀lacto- 乳,乳汁lactobutyrometer 乳脂计lactocrit 乳脂计lactodensimeter 乳比重计lactometer 乳比重计lactoprene 乳胶,人造橡胶lactoscope 乳酪计ladder 梯子,梯架,阶梯ladder splint 梯形夹ladle 杓子,铲lag ①滞后,延迟②外壳lambert 朗伯(亮度单位)lamina 薄片,板,层lamina chisel 板凿laminagram 体层照片,断层照片laminagraph 体层照相机,断层照相机laminagraphy 体层照相术,断层照相术laminar 层式的,板状的lamina spreader 板摊开器laminated insulator 胶质绝缘纸laminectomy retractor 椎板切除牵开器,推板拉钩laminectomy rongeur 椎板切除咬骨钳laminogram 体层照片,断层照片laminograph 体层照相机,断层照相机laminotomy knife 椎板切开刀lamp 灯泡,灯lamp bulb 灯泡lamp cap 灯头,管帽lamp-house 灯罩lamp jack 灯座,管座lamp lens 灯玻璃lamp screen 灯罩lance 尖刀,柳叶刀lance-pointed needle 矛头针lance-shaped knife 枪状刀lancet ①刺血针②柳叶刀,小刀lancet for venesection 静脉切开刀,刺络刀land 陆地,地面landmark 界标language ①代码②语言,术语language translation machine 语言翻译机lantern ①信号灯,幻灯②罩,外壳lantern slide 幻灯片lanthanum() 镧laparo- 腹,胁腹laparoscope 腹腔镜laparoscope for doublepuncture 双穿刺用腹腔镜laparoscopy 腹腔镜检查laparotome 剖腹刀laparotorny knife 剖腹刀laparotomy needle 剖腹针large capacity refrigerated centrifuge 大容量冷冻离心机large cervical retractor 大颈椎牵开器large channel therapeutilc duodeno-fiberscope 大通道治疗用纤维十二指肠镜large parison microscope 比较显微镜large fluorescence research microscope 大型荧光研究显微镜large freezing microtome 大型冷冻切片机large high frequency electrotome 大型高频电刀large intestine 大肠large rotary microtome 大型旋转式切片机large scale 大规模的,大型的large scale integrated circuit 大规模集成电路large sledge microtome 大型滑动切片large type diagnostic X-ray machine 大型诊断X光机laryngeal 喉的laryngeal applicator 喉卷棉子laryngeal applicator forceps 喉涂药钳laryngeal biopsy forceps 喉部活检钳laryngeal brush 喉刷laryngeal cotton applicator 喉头卷棉子laryngeal curet 喉刮匙laryngeal dilating catheter 喉扩张导管laryngeal dilator 喉扩张器laryngeal 
forceps 喉钳laryngeal foreign body forceps 喉头异物钳laryngeal insufflator 喉头吹入器(器儿用) laryngeal knife 喉头刀laryngeal lancet 喉柳叶刀laryngeal mirror 间接喉镜,喉镜laryngeal polypus forceps 喉部息肉钳laryngeal polypus snare 喉息肉圈套器laryngeal polypus scissors 喉息肉剪laryngeal probe 喉探针laryngeal scissors 喉剪laryngeal sound 喉探子laryngeal speculum 喉窥镜laryngectomy tube 喉切除管laryngendoscope 喉镜,检喉镜laryngo- 喉laryngogram 喉X射线(照)片laryngograph 喉动态描记器laryngometer 喉测量器laryngometry 喉测量法laryngophantom 喉模型laryngopharyngeal mirror 咽喉镜laryngopharyngeoscope 咽喉镜laryngophone 喉头微音器,喉头送话器laryngoroentgenography 喉X射线照相术laryngoscope 喉镜妙趣横生的医学英语名词 2冰袋Ice Bag药品Medicine (Drug)绷带Bandage胶带Adhesive Tape剪刀Scissors体温计Thermometer药丸Tablet, Pill舌下锭Sublingual Tablet胶囊Capsules软膏Ointment眼药Eye Medicine止咳药Cough Medicine阿司匹林Aspirin止疼药Pain Killer药方Prescription症状及名称一般症状妙趣横生的医学英语名词 3过敏Allergy健康诊断Gernral Check-upPhysical Examination检查Examination入院Admission to Hospotial退院Discharge from Hospital症状Symptom营养Nutrition病例Clinical History诊断Diagnosis治疗Treatment预防Prevention呼吸Respiration便通Bowel Movement便Stool血液Blood脉搏Pulse, Pulsation尿Urine脉搏数Pulse Rate血型Blood Type血压Blood Pressure麻醉Anesthesia全身麻醉General Anesthesia静脉麻醉Intravenous Anesthesia 脊椎麻醉Spinal Anesthesia局部麻醉Local Anesthesia手术Operation切除Resectionlie副作用Side Effect洗净Irrigation注射InjectionX光X-Ray红外线Ultra Red-Ray慢性的Chronic急性的Acute体格Build亲戚Relative遗传Heredity免疫Immunity血清Serum流行性的Epidemic潜伏期Incubation Period滤过性病毒Virus消毒Sterilization抗生素Antibiotic脑波洗肠Enema结核反映Tuberculin Reaction 华氏Fahrenheit摄氏Celsius, Centigrade药品Medicine。
The Collagen Gel Contraction Assay, in English
Let's talk about the collagen gel contraction assay. It's pretty interesting, actually. It's a way to study how cells react to their environment by measuring how much a gel shrinks over time. You just set up the gel, add the cells, and watch what happens.

The gel itself is made from collagen, a protein found in our bodies that gives skin its elasticity and strength. In this test, the gel acts as a sort of scaffold for the cells to grow on. As the cells interact with the gel, they cause it to contract, kind of like a sponge shrinking after you squeeze it.

Measuring this contraction can tell us a lot about how cells behave. For example, if the gel contracts a lot, that might mean the cells are really active and healthy. If it doesn't contract much, it could indicate something is wrong with the cells or their environment.

The test is simple to run, but it can provide valuable insights into cell biology and disease processes. So whether you're a researcher studying wound healing or a drug developer looking for new therapies, the collagen gel contraction test could be a useful tool in your toolbox. All in all, it's a neat way to see how cells behave.
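The arithmetic behind "how much the gel shrinks" is simple: contraction is usually reported as the percentage decrease in gel area from its starting value. A minimal sketch, with made-up areas and time points:

```python
# Minimal sketch of contraction-assay arithmetic: gel area is measured at
# several time points, and contraction is the percent decrease from the
# initial area. All numbers below are hypothetical.

def percent_contraction(initial_area: float, current_area: float) -> float:
    """Percent decrease in gel area relative to the starting area."""
    return 100.0 * (initial_area - current_area) / initial_area

# Hypothetical time course: gel area in mm^2 at 0, 24, and 48 hours.
timecourse = {0: 100.0, 24: 70.0, 48: 55.0}
for hours, area in timecourse.items():
    print(hours, percent_contraction(timecourse[0], area))
```

In practice the areas would come from photographs of the gel analyzed in imaging software; this sketch only shows the reduction of those measurements to a contraction percentage.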
An English Essay on the Innovative Road of Scientific Exploration
The path of scientific exploration is a journey filled with innovation and discovery. It is a road paved with curiosity, dedication, and a relentless pursuit of knowledge. This essay will delve into the essence of scientific exploration and how innovation is the driving force behind it.

The Importance of Curiosity
Curiosity is the spark that ignites the flame of scientific exploration. It is the innate desire to understand the unknown, to question the status quo, and to seek answers to the mysteries of the universe. Without curiosity, the scientific community would stagnate, and the world would be deprived of the advancements that have shaped our lives.

The Role of Innovation
Innovation is the lifeblood of scientific exploration. It is the process of translating curiosity into tangible results. Innovation in science is not just about creating new technologies or products; it is about challenging existing paradigms, developing new theories, and finding novel solutions to complex problems.

The Process of Scientific Exploration
The journey of scientific exploration begins with observation and hypothesis. Researchers observe phenomena, formulate hypotheses to explain these observations, and then design experiments to test them. This process is iterative and often involves a great deal of trial and error.

The Role of Collaboration
Scientific exploration is not a solitary endeavor. It relies heavily on collaboration among researchers, institutions, and even across disciplines. Collaboration allows for the sharing of ideas, resources, and expertise, which can lead to breakthroughs that might not have been possible in isolation.

The Impact of Technology
The advancement of technology plays a crucial role in scientific exploration. Modern tools and techniques, such as high-speed computing, advanced imaging, and data analysis software, have expanded the horizons of what is possible in research. They enable scientists to delve deeper into the unknown and to gather data at an unprecedented scale.

Overcoming Challenges
The path of scientific exploration is fraught with challenges, ranging from technical difficulties and funding constraints to ethical dilemmas and societal implications. Overcoming them requires resilience, creativity, and a commitment to the scientific method.

The Future of Scientific Exploration
As we look to the future, the potential for scientific exploration and innovation is limitless. With the ongoing development of new technologies and the increasing interconnectedness of the global scientific community, we stand on the brink of discoveries that could transform our understanding of the world and our place in it.

In conclusion, the road of scientific exploration is a dynamic and ever-evolving journey. It is driven by the human spirit's insatiable thirst for knowledge and the desire to push the boundaries of what is known. Innovation is the key that unlocks new frontiers, and it is through this process that we continue to learn, grow, and advance as a species.
Surgical Instruments in English
How many surgical instrument terms do you know? | One-page handbook · 协和八, 2016-04-07 (via 医学界外科频道). Handing out items is "check-in" and returning them is "check-out". Scrub suits are "scrubs"; the operating room is the "OR". The operating team includes the surgeon and the physician assistant; the anesthesia team includes the anesthesiologist and the nurse anesthetist; the nurses are the instrument (scrub) nurse and the circulating nurse.
1. Clamps. Native speakers mostly say "clamp" for 钳子, but "forceps" is also used (forceps is usually translated as 镊子, but can mean a clamp depending on context).
A typical catalogue description: tips with 1×2 teeth, horizontal serrations, straight (that is, a straight toothed forceps).
Nicknames for hemostatic clamps include: snaps, hemostats, stats, clamps, clips.
The smallest hemostats, mosquito clamps, are simply called "mosquitoes".
Different clamps exist for blunt versus sharp dissection and for different depths: Ochsners, Kellys (large curved hemostats), Peans, and many other names and sizes.
A few common ones: Allis clamps grasp tissue that is to be excised; an intestinal Allis clamp is a bowel clamp; Babcock clamps grasp the stomach; Glassman clamps handle stomach and bowel atraumatically; Kocher clamps (curved, toothed) grasp muscle fascia; Right Angle / Full Curve clamps grab vessels and ducts or do blunt dissection.
2. Forceps. Nicknames include "long fingers" and "pick-ups"; they are used to grasp dressings and skin.
Fine work on small vessels or nerves calls for sharp forceps.
The ring forceps used for skin prep is called a "ring forcep" (why not a "clamp"? ask a native speaker).
3. Needle holder, also called a needle driver.
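The nickname-to-instrument pairings above can be collected into a small lookup table. This is purely my own illustration, not part of the handbook; the pairings follow the text, while the helper function and its normalization are added for the sketch.

```python
# Illustrative lookup: OR slang -> standard instrument name, built from the
# pairings in the notes above. The lookup helper itself is an invented aid.

NICKNAMES = {
    "snap": "hemostatic clamp",
    "hemostat": "hemostatic clamp",
    "stat": "hemostatic clamp",
    "clip": "hemostatic clamp",
    "mosquito": "mosquito clamp (smallest hemostat)",
    "pick-ups": "forceps",
    "long fingers": "forceps",
    "needle driver": "needle holder",
}

def instrument_for(nickname: str) -> str:
    """Return the standard name for an OR nickname, or 'unknown'."""
    return NICKNAMES.get(nickname.strip().lower(), "unknown")

print(instrument_for("Mosquito"))  # mosquito clamp (smallest hemostat)
print(instrument_for("snap"))      # hemostatic clamp
```

A table like this is how a flashcard app or glossary script might store the handbook's content; the slang list here is only the subset mentioned in the text.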
Grid Architecture
Tiered computing model (summarized from the diagram):
Tier 0: CERN Computer Centre.
Tier 1: regional centres (Italy, France, Germany, FermiLab at ~4 TIPS, ...), linked at ~622 Mbits/sec.
Tier 2: institutes, ~0.25 TIPS each, linked at ~622 Mbits/sec.
Data flows into the Tier 0 centre at ~100 MBytes/sec. (1 TIPS is approximately 25,000 SpecInt95 equivalents.)
Offline processor farm: ~20 TIPS, reading at ~100 MBytes/sec.
Bulk links run at ~622 Mbits/sec, or air freight (deprecated).
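The link figures above invite a back-of-the-envelope check: how long does it take to move data over a ~622 Mbit/s tier link? The sketch below is my own illustration (sizes are hypothetical; 1 MB is taken as 10^6 bytes).

```python
# Back-of-the-envelope transfer times for the bandwidth figures in the
# tier diagram. Data sizes are illustrative, not from the document.

def transfer_seconds(size_bytes: float, rate_bits_per_s: float) -> float:
    """Seconds to move size_bytes over a link of rate_bits_per_s."""
    return size_bytes * 8 / rate_bits_per_s

MBIT = 10**6
event_bytes = 10**6        # one 1 MB raw event
dataset_bytes = 10**12     # a hypothetical 1 TB sample

# One raw event crosses a 622 Mbit/s tier link in about 13 ms.
print(transfer_seconds(event_bytes, 622 * MBIT))
# A 1 TB sample takes a few hours on the same link.
print(transfer_seconds(dataset_bytes, 622 * MBIT) / 3600)
```

Numbers like these explain the diagram's half-joking "air freight" option: for very large samples, shipping tapes can compete with wide-area links.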
Definition of basic services
How a Grid differs from the WWW in its way of working:
- Driving of servers: in the WWW, servers are driven one after another, only one at a time; in a Grid, many resources are driven concurrently.
- Client/server boundary: clear-cut in the WWW; blurred in a Grid.
- Client side: the WWW client is a browser; Grid clients are unrestricted and diverse.
- Resources: the WWW serves web pages and databases; a Grid offers computing, storage, software, instruments, and more.
Layers of a Grid
• Distributed resources
• Grid system
• Grid users
Classic Grid Architecture
Water-Based Polyurethane Adhesives, in English
When it comes to adhesives, water-based polyurethane is a real game-changer. It's easy to use, and it bonds just as strongly as traditional solvent-based adhesives. Plus, cleanup is a breeze since it's water-based.

What I love about water-based polyurethane adhesives is how environmentally friendly they are. No harsh chemicals or solvents, just water and polyurethane. It's a win-win for both the user and the planet.

Working with this adhesive is a pleasure. It spreads evenly and dries quickly, saving me a lot of time on the job. And the bond is so strong, it's almost as if the two surfaces were fused together.

One cool thing about water-based polyurethane adhesives is their versatility. They can bond a wide range of materials, from wood to metal to plastic, and they do it all with the same consistent quality.

My colleagues and I have been using this adhesive for years, and we've never had any issues with it. It's reliable, dependable, and gets the job done. We wouldn't use anything else.

So if you're looking for an adhesive that's strong, easy to use, environmentally friendly, and versatile, water-based polyurethane is definitely worth checking out. It's a great choice for any project, big or small.
Prosthetics: Common English Abbreviations
Part 1: English abbreviations used in prosthetics

Amputation-site abbreviations:
1. L: left
2. R: right
3. S: single (unilateral)
4. D: double (bilateral)

Abbreviations for amputation levels and other disability types:
1. AK: above knee (above-knee amputation)
2. BK: below knee (below-knee amputation)
3. AE: above elbow (above-elbow amputation)
4. BE: below elbow (below-elbow amputation)
5. HD: hip disarticulation
6. SD: shoulder disarticulation
7. HP: hemipelvectomy
8. para: paraplegia
9. WC: wheelchair
10. C: cast
11. B: brace (orthotic braces of various kinds)
12. Hemipelvectomy: removal of the pelvis together with the lower limb
13. Hemicorporectomy: removal of the lower half of the body (including the pelvic organs, urinary organs, and reproductive organs)

Part 2: Common English abbreviations for adhesive tape materials
- Polyethylene: PE
- Polyethylene terephthalate: PET
- Oriented polypropylene: OPP
- Ethylene-vinyl acetate: EVA
- Rubber foam products: used as packaging liner materials
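The amputation abbreviations earlier in this piece are often combined with a side code in clinical shorthand. As a purely hypothetical helper (the hyphenated "side-level" code format below is my own assumption, not from the source), such codes could be expanded programmatically:

```python
# Hypothetical expander for compact amputation codes of the assumed form
# "<side>-<level>", e.g. "R-AK". The abbreviation tables mirror the lists
# above; the code format and helper are invented for illustration.

SIDE = {
    "L": "left",
    "R": "right",
    "S": "single (unilateral)",
    "D": "double (bilateral)",
}
LEVEL = {
    "AK": "above-knee amputation",
    "BK": "below-knee amputation",
    "AE": "above-elbow amputation",
    "BE": "below-elbow amputation",
    "HD": "hip disarticulation",
    "SD": "shoulder disarticulation",
}

def expand(code: str) -> str:
    """Expand a code like 'R-AK' into a readable phrase."""
    side, level = code.upper().split("-")
    return f"{SIDE[side]} {LEVEL[level]}"

print(expand("R-AK"))  # right above-knee amputation
```

The lookup tables simply restate the glossary; only the combined code syntax is assumed.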
Poly(methyl methacrylate), abbreviated PMMA, is mainly used for mobile-phone screen protectors and comes in two versions, with and without a hardened coating. Its light transmission is excellent, with no impurities.
It can serve as an antistatic protective film; after surface hardening, its hardness can reach 4-6H or above.
Hardened PMMA sheet (raw board) is used in advertising light boxes, signs, lighting fixtures, instruments, household goods, liquid-crystal displays, and LCD light-guide plates.
Chlorinated polypropylene, abbreviated CPP, has good compatibility with resins and can be used as an adhesive for bonding plastics to resins, a coating vehicle, a paper-coating agent, a flame retardant, and a printing-ink additive.
第三篇:常用地质录井英文简写常用地质录井英文简写强烈 strng中等 mod稀少,低的 pr 湿照(直照)dir明亮的 brt白色荧光 wh dir flour 均匀的 even金色荧光 gld dir flour湿照5%黄荧光yel dir flour斑点/油斑 pch/spt(stn)黄色荧光yel dir flour斑点的 sptd暗黄色荧光 dk dir flour滴照 cut滴照快速扩散fast cut 滴照暗黄色荧光dk yel cut flour 立即instant 滴照扩散中等 mod cut 滴照乳黄色荧光 mlky yel cut flour 放射性扩散blmng 滴照扩散慢slw cut 滴照白色荧光wht cut flour 均匀扩散 strmng 滴照金色荧光 gld cut flour 滴照乳白色荧光mlky wht cut flour上部 upper无烟煤,白煤 anthr重的hvy 下部under残余物 res 逐渐 grad含 wl瓷状的 porc(porcelaneous)普见 com 条痕 strk 断面 sks(slickenside)偶见 occ煤coal 孔隙度差(好)(fr)pr por 稀有,罕见rr(rare)褐煤 lig 少量的 a few 沥青质 bit 微量 tr石英 qtz方解石 cal(callitc)高岭土 kaolin 长石 feld部分长石风化 wthrd feld i/p 黄铁矿 pyr暗色矿物 dk mnrl 白垩质 chkly 海绿石 glauc 白云母 mic/musc 燧石 cht有孔虫 foram角,棱角 ang非常圆 w rnd 隐晶crpxl 次棱角 sbang 中晶粒mxl 粗晶 cxl 次圆sbrnd结晶 xln斑点的 spec(spt)圆状 rnd 微晶 micrxl分选差/中/好 p/mod/w strd 钙质胶结 cal mtx,cmt 白云质胶结dolic mtx,cmt 泥质胶结 ary mtx,cmt高岭土胶结 kao lin mtx,cmt 凝灰质胶结 tuf mtx,cmt松散lse 团块lmp 造浆(粘性的)stky 致密的cpct/tight 块状blky 溶液(解)sol 质纯的pure 次块状 sb blky 可塑性 plas 均匀(不均匀)的 hom(netr)碎个 pch 未定形的 amorp 板状,片状 plat岩屑 lith frag 化石碎屑 foss frag 灰质 lmy 含碳物质 carb mat 生物碎屑biod frag 硅质sil 贝壳碎屑 shell frag 碳质斑点 carb spec 泥质arg 砂质 sdy 部分含粉砂 slty i/p 微含钙质 sli calc 含粉砂 slty 部分含泥质重v ary i/p 不含钙质 no calc颗粒 gr 非常粗 v crs 一般 gen 非常细 vf 砾石 pbl 最大的 max 细f 结构 tex 部分 pt i/p 中粒 m 直径 dia 局部 loc 粗 crs 尺寸 size 丰富abdt主要的 pred dom 泥质砂岩 arg sst 花岗岩 granite 次要的 mnr 碳质泥岩 carb clyst 片麻岩 gneiss 粘土岩 clyst 灰岩 limest,lst 板岩 slate 泥岩 mdst 泥灰岩 mrly 玄武岩 bas 页岩 sh 白云岩 dol 辉长岩 gabbro 粉砂岩 sltst 泥粒灰岩pkst 砂岩 sst 砾岩 cong/cgl 纯净的cln 半透明transl 微弱的wk(sh,fnt)清晰的clr 透明的transp 暗的 dk白色wht 褐色brn 红色red 灰色gy 橄榄色olv 浅色,淡色pale(pl)中灰色 med gy 蓝色 bl 浅黄色,米黄色 buff(bf)黑色 blk 绿色 gn杂色 vcol 黄色 yel 紫色 purp未固结的 uncons 硬 frm 硬 hd 易碎的,脆性 fri,brit 微硬 sli hd 非常硬 v hd 软或细腻soft,sft 中硬 mod hd 极硬 extr hd第四篇:十二月英文及简写十二月英文及简写一月January Jan.二月February Feb.三月March Mar.四月 April Apr.五月 May May六月 June Jun.七月 July Jul.八月 August Aug.九月September Sep.十月 October Oct.十一月November Nov.十二月December 
Dec.Monday星期一Tuesday星期二Wednesday星期三Thursday星期四Friday星期五Saturday星期六Sunday星期日最佳答案-由投票者1年前选出weather forecast 天气预报 AM Clouds / PM Sun=上午有云/下午后晴AM Showers=上午阵雨AM Snow Showers=上午阵雪AM T-Storms=上午雷暴雨Clear=晴朗Cloudy=多云Cloudy / Wind=阴时有风Clouds Early / Clearing Late=早多云/晚转晴Drifting Snow=飘雪Drizzle=毛毛雨Dust=灰尘Fair=晴Few Showers=短暂阵雨Few Snow Showers=短暂阵雪Few Snow Showers / Wind=短暂阵雪时有风Fog=雾Haze=薄雾Hail=冰雹Heavy Rain=大雨Heavy Rain Icy=大冰雨Heavy Snow=大雪Heavy T-Storm=强烈雷雨Isolated T-Storms=局部雷雨Light Drizzle=微雨Light Rain=小雨Light Rain Shower=小阵雨Light Rain Shower and Windy=小阵雨带风Light Rain with Thunder=小雨有雷声Light Snow=小雪Light Snow Fall=小降雪Light Snow Grains=小粒雪Light Snow Shower=小阵雪Lightening=雷电Mist=薄雾Mostly Clear=大部晴朗Mostly Cloudy=大部多云Mostly Cloudy/ Windy=多云时阴有风Mostly Sunny=晴时多云Partly Cloudy=局部多云Partly Cloudy/ Windy=多云时有风PM Rain / Wind=下午小雨时有风 PM Light Rain=下午小雨PM Showers=下午阵雨PM Snow Showers=下午阵雪PM T-Storms=下午雷雨Rain=雨Rain Shower=阵雨Rain Shower/ Windy=阵雨/有风Rain / Snow Showers=雨或阵雪Rain / Snow Showers Early=下雨/早间阵雪Rain / Wind=雨时有风Rain and Snow=雨夹雪Scattered Showers=零星阵雨Scattered Showers / Wind=零星阵雨时有风Scattered Snow Showers=零星阵雪Scattered Snow Showers / Wind=零星阵雪时有风Scattered Strong Storms=零星强烈暴风雨Scattered T-Storms=零星雷雨Showers=阵雨Showers Early=早有阵雨Showers Late=晚有阵雨Showers /Wind=阵雨时有风Showers in the Vicinity=周围有阵雨Smoke=烟雾Snow=雪Snow / Rain Icy Mix=冰雨夹雪Snow and Fog=雾夹雪Snow Shower=阵雪Snowflakes=雪花Sunny=阳光Sunny / Wind=晴时有风Sunny Day=晴天Thunder=雷鸣Thunder in the Vicinity=周围有雷雨T-Storms=雷雨T-Storms Early=早有持续雷雨T-Storms Late=晚有持续雷雨Windy=有风Windy / Snowy=有风/有雪Windy Rain=刮风下雨Wintry Mix=雨雪混合第五篇:12月英文简写1.Jan . 2.Feb . 3.Mar . 4.Apr . 5.May . 6.Jun . 7.Jul . 8.Aug . 9.Sep . 10.Oct . 11.Nov . 12.De c . 1.Jan . 2.Feb . 3.Mar . 4.Apr . 5.May . 6.Jun . 7.Jul . 8.Aug . 9.Sep . 10.Oct . 11.Nov . 12.Dec .1.Jan . 2.Feb . 3.Mar . 4.Apr . 5.May . 6.Jun . 7.Jul . 8.Aug . 9.Sep . 10.Oct . 11.Nov . 12.De c . 1.Jan . 2.Feb . 3.Mar . 4.Apr . 5.May . 6.Jun . 7.Jul . 8.Aug . 9.Sep . 10.Oct . 11.Nov . 12.Dec .1.Jan . 2.Feb . 3.Mar . 4.Apr . 
5. May  6. Jun  7. Jul  8. Aug  9. Sep  10. Oct  11. Nov  12. Dec
An English Essay on Penicillin
青霉素英语作文Penicillin: The Miraculous AntibioticPenicillin, a groundbreaking discovery in the field of medicine, has revolutionized the way we approach infectious diseases. This remarkable antibiotic has saved countless lives and transformed the landscape of modern healthcare. The story of penicillin's discovery and development is a testament to the power of scientific inquiry and the relentless pursuit of knowledge.In the early 20th century, the world was grappling with the devastating impact of infectious diseases. Bacterial infections, often life-threatening, were a leading cause of mortality, and the search for effective treatments was a pressing concern. It was in this climate that Scottish bacteriologist Alexander Fleming made his serendipitous discovery in 1928.Fleming, while conducting research on Staphylococcus bacteria, noticed that a petri dish had been contaminated by a mold. Intrigued, he observed that the mold had created a zone of inhibition around it, preventing the growth of the Staphylococcus bacteria. This observation led Fleming to the realization that the mold must beproducing a substance with antibacterial properties. After further investigation, he identified the mold as Penicillium and named the active substance it produced "penicillin."Fleming's discovery was a groundbreaking moment in the history of medicine, but it would take nearly two decades before penicillin could be mass-produced and widely used. The challenges of purifying and stabilizing the compound proved to be formidable, and it was not until the early 1940s that a team of researchers, led by Howard Florey and Ernst Chain, were able to successfully isolate and purify penicillin.The impact of penicillin cannot be overstated. During World War II, it played a crucial role in saving the lives of wounded soldiers, reducing the mortality rate from bacterial infections. 
The widespread use of penicillin in the military setting demonstrated its remarkable effectiveness and paved the way for its broader application in civilian healthcare.One of the most significant contributions of penicillin is its ability to treat a wide range of bacterial infections, including those caused by Staphylococcus, Streptococcus, and Pneumococcus. Prior to the discovery of penicillin, many of these infections were often fatal, but the antibiotic's ability to target and destroy the causative bacteria transformed the prognosis for patients.Moreover, penicillin's success sparked a renewed interest in the search for other antimicrobial compounds, leading to the development of a diverse array of antibiotics. This "golden age" of antibiotic discovery, which spanned the 1940s to the 1960s, saw the introduction of numerous life-saving drugs that continue to be essential in modern medicine.However, the widespread and sometimes indiscriminate use of antibiotics has led to the emergence of antibiotic-resistant bacteria, posing a significant challenge to healthcare providers worldwide. The rise of superbugs, bacteria that are resistant to multiple antibiotics, has become a pressing global health concern. Addressing this issue requires a multifaceted approach, including the development of new antimicrobial agents, the responsible use of existing antibiotics, and the implementation of effective infection control measures.Despite these challenges, the legacy of penicillin remains strong. It continues to be a cornerstone of modern medicine, saving millions of lives each year. 
The discovery of penicillin by Alexander Fleming, and the subsequent efforts of researchers to refine and mass-produce the drug, stands as a testament to the power of scientific inquiry and the potential for transformative breakthroughs in the field of healthcare.As we navigate the complexities of the modern medical landscape, the story of penicillin serves as a reminder of the remarkable progress that can be achieved through perseverance, collaboration, and a dedication to improving the human condition. The legacy of penicillin will undoubtedly continue to inspire and guide future generations of scientists, physicians, and healthcare professionals in their quest to combat infectious diseases and improve the well-being of people around the world.。
High-Frequency Paint and Coating Vocabulary in GRE Reading
GRE阅读中关于油漆涂料的高频词汇GRE阅读中关于油漆涂料的高频词汇下面是天道留学搜集整理的GRE阅读中关于油漆涂料的高频词汇:A----Accelerate 促进剂Active agent 活性剂Additive 添加剂Additive mixture 加色混合Adhesive 胶粘剂Adhesive solvent 胶(料)溶剂Adjacent color 类似色Advancing color 进出色Aerosol spraying 简易喷涂After image 残象Air drying 常温干燥Airless spraying 无气喷涂Alcohol stain 酒精着色剂Alert color警戒色Alkyd resin 醇酸树脂Alligatoring 漆膜龟裂Amount of spread 涂胶量Anticorrosive paint 防锈涂料Antifouling paint 防污涂料Antique finish 古式涂料Automatic spraying 自动喷涂B----Baking finish 烤漆喷涂Base boat 底漆Blistering 小泡Blushing 白化Body varnish 磨光漆Brilliant 鲜艳的Brushing 刷涂Brushing mark/streak 刷痕Bubbling 气泡Button lac 精致虫胶C----咖啡色Catalyst 催化剂,触媒,接触剂Chalking 粉化Cherry 樱桃色Chipping 剥落Chromatic color 有彩色Chromaticity 色度Chromaticity coordinates 色度坐标Chromaticity diagram色度圆Clssing 补漆Clear coating 透明涂层Clear lacquer 透明喷漆Clear paint 透明涂料Coarse particle 粗粒Coating 涂料Cobwebbing 裂痕Cocos 可可色Cold water paint 水性涂料Color blindness 色盲Color conditioning 色彩调节Color harmony 色彩调和Color in oil 片种特(调色用)Color matching 调色Color number 色号(色之编号或代号) Color paint 有色涂料Color reaction 显色反应Color reproduction 色重现Color tolerance 色容许差Compatibility 相容性Complimentary color 补色Consistency 稠厚度Contractive color 收缩色Col color 寒色,冷色Cooling agent 冷却剂Covering power 覆盖力Cracking 龟裂,裂纹Crimping 皱纹Cure 硬化Curing agent 固化剂Curing temperature 固化温度D----Dark 暗Deep 深Degumming 脱胶Dewaxed shellac 胶蜡虫胶Diluent 稀释剂,冲淡剂Dilution ratio 稀释比例Dingy 浊色Dipping 浸渍涂层Dipping treatment 变色Discoloring 变色Discord 不调和色Drier 干燥剂Dry rubbing 干磨Drying time 干燥时间Dulling 失光Dusting 粉化E----electrostatic spraying 静电涂装emulsion adhesive 乳化胶emulsion paint 乳化涂料enamel 色漆,磁漆F----Fading退色Filler 腻子,埴料,填充剂Finish code 涂料编号Finshing 涂饰Flaking 剥落Flat paint 消光涂料Flatness 消光Floor paint 地板涂料Foam glue 泡沫胶G----Gelatin 明胶,凝胶Glare 眩目Glue 胶粘剂,胶,胶料Glue and filler bond 动物胶及填料胶结Glue mixer 调胶机Glue spreader 涂胶机Gum 树胶,胶树H----Hardener 硬化剂Hide 皮胶High solid lacquer 高固体分漆Honey color 蜂蜜色I----Illuminant color 光源色J----Jelly strength 胶质强度Joint strength 胶接强度L----Lac 虫胶Lac varnish 光漆Lacquer 漆Latex 乳胶Latex paint 合成树脂乳化型涂料Leveling agent 
均化剂Light 光亮的Liquid glue 液态胶Long oil varnish 长性清漆Love formaldehyde 低甲醛M----Make up paint 调和漆Medium oil varnish 中油度清漆Methyl alcohol 甲醛Multi-color 多彩漆N----Natural clear lacquer 清漆N.C lacquer 硝化棉喷漆O----Off- color 变色的,退色的,不标准的颜色Oil paint 油性漆Oil putty 油性腻子Oil solvent 油溶剂Oil stain 油性着色剂Oil staining 油着色Oil stone 油石Oil varnish 油性清漆,上清漆Opacity 不透明度Opaque paint 不透明涂料P----Paint 涂料,油漆Paint film 涂膜Paint nozzle 涂料喷头Penetrant 渗透剂Polishing varish 擦光(亮)清漆Pre-coating 预涂Procuring 预固化Preservative 防腐剂Primer 底漆(下涂涂料)Putty 腻子Q----Quick drying paint 速干漆R----Ready mixed paint 调和漆Refined shellac 精制虫胶Resin adhesive 树脂胶Reverse coater 反向涂料器Roller brush 滚筒刷S----Sample board 样板Sand blast 喷砂。
An Essay on Gecko-Style Climbing Gloves
The gecko-style climbing gloves are a revolutionary invention that allows humans to climb walls and other vertical surfaces just like a gecko. These gloves are made with a special material that mimics the structure of a gecko's feet, enabling them to stick to surfaces without the need for any additional tools or equipment.

One of the main advantages of these gloves is their versatility. With the gecko-style climbing gloves, I can easily scale walls, climb trees, and even hang upside down from the ceiling. It's like having superpowers! For example, I remember a time when I was hiking in the mountains and came across a steep rock face. With the gecko-style climbing gloves, I was able to effortlessly climb to the top and enjoy the breathtaking view.

Another great thing about these gloves is their convenience. They are lightweight and portable, making them easy to carry around wherever I go. Whether I'm exploring the great outdoors or simply trying to reach something on a high shelf, I can always rely on my gecko-style climbing gloves to get the job done. It's like having a personal ladder that I can wear on my hands!

Furthermore, these gloves are incredibly durable. The special material used in their construction is resistant to wear and tear, ensuring that they will last for a long time. I have used my gecko-style climbing gloves on numerous occasions, and they still look and perform like new. They are definitely a worthwhile investment for anyone who loves adventure and exploration.
Campus Architecture: An English Essay with Translation
As a high school student, I have always been fascinated by the architecture of our campus. It's not just the buildings themselves that captivate me, but the stories they tell and the atmosphere they create. Our school is a blend of modern and traditional architecture, offering a unique environment for learning and growth.

The main building of our school is a grand structure that stands tall and proud. It's a symbol of our school's history and tradition. The brick facade is weathered but strong, a testament to the years it has stood. The large windows let in an abundance of natural light, creating a bright and welcoming atmosphere inside. The building houses the administrative offices, classrooms, and the library. The library, in particular, is a treasure trove of knowledge. Its high ceilings and rows of bookshelves give it an air of grandeur. The smell of old books and the quiet hum of students studying create a peaceful and focused environment.

Adjacent to the main building is the science block, a modern addition to our campus. Its sleek glass-and-steel design contrasts sharply with the older buildings. The science labs are equipped with the latest technology, providing us with the best resources for learning. The large whiteboards and interactive screens in the classrooms enhance our understanding of complex scientific concepts.

The sports complex is another significant part of our campus. The gymnasium is spacious and well-equipped, with a basketball court, a volleyball court, and a running track. The outdoor fields are perfect for soccer, track and field, and other sports. The fresh air and open space provide a great environment for physical education and team sports.

The campus also features several green spaces, including a central courtyard and a small park. These areas are perfect for relaxation and reflection. The lush greenery and blooming flowers add a touch of nature to our otherwise concrete environment. It's common to see students studying or chatting in these areas during breaks.

The school's architecture is not just functional; it is also a reflection of our values and aspirations. The blend of old and new represents our respect for tradition while embracing progress. The emphasis on natural light and open spaces shows our commitment to creating a healthy and inspiring learning environment.

In conclusion, the architecture of our campus plays a significant role in shaping our school experience. It provides us with the facilities we need for learning and growth, while also creating an atmosphere that fosters creativity and collaboration. As I walk through the campus, I feel a sense of pride and belonging. This is not just a place to study; it is a place that shapes who we are and who we will become.

Campus Architecture (translation): As a high school student, I have always been fascinated by the architecture of our school campus.
Overview of CMS virtual data needs
December 2000

1 Introduction

This document was put together to act as CMS input into the GriPhyN architecture meeting on December 20, 2000. It is a high-level overview that tries to be short rather than complete.

Structure of this document:
- Brief overview of CMS and its physics
- CMS virtual data description
- 2005 vs. current needs and activities
- References to some other documents that may be of interest

2 Brief overview of CMS and its physics

2.1 CMS

The CMS experiment is a high energy physics experiment located at CERN, that will start data taking in 2005. The CMS detector (figure 1) is one of the two general purpose detectors of the LHC accelerator. It is being designed and built, and will be used, by a world-wide collaboration, the CMS collaboration, that currently consists of some 2200 people in 145 institutes, divided over 30 countries.

In future operation, the LHC accelerator lets two bunches of particles cross each other inside the CMS detector 40.000.000 times each second. Every bunch contains a very large number of protons. In every bunch crossing in the detector, on average 20 collisions occur between two protons from opposite bunches. A bunch crossing with collisions is called an event. Figure 2 shows an example of an event. Note that the picture of the collision products is very complex, and represents a lot of information: CMS has 15 million individual detector channels. The measurements of the event, done by the detector elements in the CMS detector, are called the 'raw event data'. The size of the raw event data for a single CMS event is about 1 MB (after compression using 'zero suppression').

Figure 1: The CMS detector.

Of the 40.000.000 events in a second, some 100 are selected for storage and later analysis. This selection is done with a fast real-time filtering system. Data analysis is done interactively, by CMS physicists working all over the world. Apart from a central data processing system at CERN, there will be some 5-10 regional centres all over the world that will support data processing. Further
processing may happen locally at the physicist's home institute, maybe also on the physicist's desktop machine.

The 1 MB 'raw event' data for each event is not analyzed directly. Instead, for every stored raw event, a number of summary objects are computed. These summary objects range in size from a 100 KB 'reconstructed tracks' object to a 100 byte 'event tag' object, see section 3.1 for details. This summary data will be replicated widely, depending on needs and capacities. The original raw data will stay mostly in CERN's central robotic tape store, though some of it may be replicated too. Due to the slowness of random data access on tape robots, the access to raw data will be severely limited.

Figure 2: A CMS event (simulation).

2.2 Physics analysis

By studying the momenta, directions, and other properties of the collision products in the event, physicists can learn more about the exact nature of the particles and forces that were involved in the collision. For example, to learn more about Higgs bosons, one can study events in which a collision produced a Higgs boson that then decayed into four charged leptons. (A Higgs boson decays almost immediately after creation, so it cannot be observed directly, only its decay products can be observed.)

A Higgs boson analysis effort can therefore start with isolating the set of events in which four charged leptons were produced. Not all events in this set correspond to the decay of a Higgs boson: there are many other physics processes that also produce charged leptons. Therefore, subsequent isolation steps are needed, in which 'background' events, in which the leptons were not produced by a decaying Higgs boson, are eliminated as much as possible. Background events can be identified by looking at other observables in the event record, like the non-lepton particles that were produced, or the momenta of particles that left the collision point. Once enough background events have been eliminated, some important properties of the Higgs boson can be determined by doing a statistical
analysis on the set of events that are left. The data reduction factor in physics analysis is enormous. The final event set in the above example may contain only a few hundred events, selected from all the events that occurred in one year in the CMS detector. Much of this reduction happens in the real-time filter before any data is stored; the rest happens through the successive application of 'cut predicates' to the stored events, to isolate ever smaller subsets.

3 CMS virtual data description

3.1 Structure of event data

CMS models all its data in terms of objects. We will call the objects that represent event data, or summaries of event data, 'physics objects'. The CMS object store will contain a number of physics objects for each event, as shown in figure 3. In a GriPhyN context, one can think of each object in figure 3 as a materialized virtual data object. Among themselves, the objects for each event form a hierarchy. At higher levels in the hierarchy, the objects become smaller, and can be thought of as holding summary descriptions of the data in the objects at a lower level. By accessing the smallest summary object whenever possible, physicists can save both CPU and I/O resources.

Figure 3: Example of the physics objects present for two events. The numbers indicate object sizes. The reconstructed object sizes shown reflect the CMS estimates in [1]. The sum of the sizes of the raw data objects for an event is 1 MB; this corresponds to the 1 MB raw event data specified in [1].

At the lowest level of the hierarchy are raw data objects, which store all the detector measurements made at the occurrence of the event. Every event has about 1 MB of raw data in total, which is partitioned into objects according to some predefined scheme that follows the physical structure of the detector. Above the raw data objects are the reconstructed objects; they store interpretations of the raw data in terms of physics processes. Reconstructed objects can be created by physicists as needed, so
different events may have different types and numbers of reconstructed (materialized) objects. At the top of the hierarchy of reconstructed objects are event tag objects of some 100 bytes, which store only the most important properties of the event. Several versions of these event tag objects can exist at the same time.

3.2 Data dependencies

To interpret figure 3 in terms of virtual data, one has to make visible the way in which each of these objects was computed. This is done in figure 4: it shows the data dependencies for the objects in figure 3. Note that the 'raw' objects are not computed; they correspond to detector measurements. In figure 4, an arrow from A to B signifies that the value of B depends on A. The grey boxes represent the physics algorithms used to compute the objects: note that these all have particular versions. The lowermost grey box represents some 'calibration constants' that specify the configuration of the detector over time. Calibration is outside the scope of this text. Note that, usually, there is only one way in which a requested CMS virtual data product can be computed. This is in contrast with LIGO, where often many ways to compute a product are feasible, and where one challenge for the scheduler is to find the most efficient way.

Figure 4: Data dependencies for some of the objects in figure 3. An arrow from A to B signifies that the value of B depends on A. The grey boxes represent the physics algorithms used to compute the objects, and some 'calibration constants' used by these algorithms.

3.3 Encoding data dependency specifics

As is obvious from figure 4, the data dependencies for any physics object can become very complex.
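The dependency structure just described, with raw objects as leaves that are measured rather than computed, and reconstructed objects derived from their inputs by versioned algorithms, can be sketched as follows. The type numbers, algorithm names, and catalog layout below are hypothetical illustrations, not the actual CMS design:

```python
# Sketch of a transformation-catalog lookup for virtual physics objects.
# A physics object is identified by the tuple (event_id, type_number); raw
# objects are never computed, and every reconstructed object is derived from
# its inputs by a versioned algorithm. All names here are illustrative.

RAW_TYPES = {1, 2}  # type numbers of raw detector data (assumed)

# type_number -> (algorithm name and version, input type numbers)
TRANSFORMATION_CATALOG = {
    3: ("reconstruct-v3.1", (1, 2)),   # reconstructed tracks from raw data
    4: ("make-tag-v1.0", (3,)),        # event tag summarizing the tracks
}

def materialize(event_id, type_number, store, raw_store):
    """Return the object (event_id, type_number), computing and caching
    any missing reconstructed objects along the dependency graph."""
    key = (event_id, type_number)
    if key in store:                      # already materialized
        return store[key]
    if type_number in RAW_TYPES:          # raw data is measured, not computed
        return raw_store[key]
    algorithm, inputs = TRANSFORMATION_CATALOG[type_number]
    input_objects = [materialize(event_id, t, store, raw_store) for t in inputs]
    # Stand-in for running the real physics algorithm on its inputs:
    result = (algorithm, tuple(input_objects))
    store[key] = result                   # cache the materialized product
    return result
```

A request for a not-yet-materialized object triggers recursive materialization of its inputs, and each computed product is cached so that later requests find it already materialized.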
Note, however, that for the two events in figure 4 the dependency graphs are similar. It is possible to make a single big dependency graph, in a metadata repository, that captures all possible dependencies between the physics objects of every event. Figure 5 shows such a metadata graph for the events in figure 4.

Figure 5: Metadata graph for the events in figure 4. The numbers in the nodes representing physics objects are globally unique type numbers.

In the metadata graph, the physics objects are replaced by unique type numbers. These numbers can be used to represent the particular data dependencies and algorithms that went into the computation of any reconstructed object. A CMS virtual data addressing system could use the type numbers as keys in a transformation catalog that yields instructions on how to compute any virtual physics object.

3.4 Virtual data object identification

In CMS it is possible to uniquely identify every (virtual or real) physics object by a tuple (event ID, type number). The first part of the tuple is the (unique) identifier of the event the object belongs to; the second is the unique identifier of the corresponding location in the object dependency graph discussed above.

3.5 Importance of physics objects relative to other data products

Besides the physics objects as shown in figure 3, where each object holds data about a single event only, CMS will also work with data products that describe properties of sets of events. There will be several types of such products, with names like 'calibration data', 'tag collections', 'histograms', 'physics papers', etc. However, in terms of data volume these products will be much less significant than the physics objects. In terms of virtual data grids, it is believed that the big challenge for CMS lies in making these grids compute and deliver physics objects to 'physics analysis jobs', where these physics jobs can output relatively small data products like histograms that need not necessarily be managed by the grid.

3.6 Typical physics job

A typical job computes a function value for every event in some
set, and aggregates the function results. In contrast with LIGO, in CMS this set will generally be a very sparse subset of the events taken over a very long time interval. To compute the function value for an event, the values of one or more virtual data products for this event are needed. Generally, the same product(s) will be requested for every event: products with the same 'type numbers'. In a high energy physics experiment, there is no special correlation between the time an event collision took place and any other property of the event: events with similar properties are evenly distributed over the time sequence of all events. In physics analysis jobs, events are treated as completely independent from each other. If function results are aggregated, the aggregation function does not depend on the order in which the function results are fed to it. Thus, from a physics standpoint, the order of traversal of a job event set does not influence the job result.

3.7 CMS virtual data grid characteristics

When comparing the grid requirements of a high energy physics experiment like CMS with the requirements of LIGO and SDSS, the points which are characteristic for CMS are:
• Extremely large amounts of data.
• Large amounts of CPU power needed to derive the needed virtual data products. These two points imply that fault-tolerant facilities for the mass materialization of virtual data products on a large and very distributed system are essential.
• The baseline virtual data model does not have virtual data products that aggregate data from multiple events, so the model looks relatively simple from a scheduling standpoint.
• A requested set of data products generally corresponds to a very sparse subset of the events taken over a very long time interval.

4 2005 vs. current needs and activities

The preceding sections describe the CMS data model and data analysis needs when the experiment is running from 2005 on. This section discusses current and near-future needs and activities, and contrasts these with the 2005 needs. Currently CMS is performing
large-scale simulation efforts, in which physics events are simulated as they occur inside a simulation of the CMS detector. These simulation efforts support detector design and the design of the real-time event filtering algorithms that will be used when CMS is running. The simulation efforts are on the order of hundreds of CPU years and terabytes of data. These simulation efforts will continue, and will grow in size, up to 2005 and then throughout the lifetime of the experiment. The simulation efforts and the software R&D for CMS data management are seen as strongly intertwined and complementary activities. In addition to performing grid-related R&D in the context of several projects, CMS is also already using some grid-type software 'in production' for its simulation efforts. Examples of this are the use of Condor-managed CPU power in some large-scale simulation efforts, and the use of some Globus components by GDMP [7], a software system developed by CMS that is currently being used in production to replicate files with simulation results in the wide area. CMS simulation efforts currently still rely to a large extent on hand-coded shell and Perl scripts, and the careful manual mapping of hardware resources to tasks. As more grid technology becomes available, CMS will be actively looking to use it in its simulation efforts, both as a way to save manpower and as a means to allow for greater scalability. On the grid R&D side, the CMS simulation codes could also be used inside testbeds that evaluate still-experimental grid technologies. It thus makes sense to look more closely here at the exact properties of the CMS simulation efforts, and how these differ from those of the 2005 CMS virtual data problem. Each individual CMS simulation run can be modeled as a definition of a set of virtual data products and a request to materialize them into a set of files. Current CMS simulation runs have a batch nature, not an interactive nature. Each large run generally takes at least a few days to
plan, with several people getting involved, and then at least a few weeks to execute. At most some tens of runs will be in progress at the same time. So there is a huge contrast with the 2005 situation, where CMS data processing requirements are expected to be dominated by 'chaotic' interactive physics analysis workloads generated by hundreds of physicists working independently. Also, in contrast to the 2005 workloads, requests for the data in sparse subsets of (simulated) event datasets will be rare, if they occur at all. Simulated event sets can be, and are, generated in such a way that events likely to be requested together are created together in the same database file or set of database files. Therefore, to support simulation runs in the near future, it would be possible to use a virtual data system that works at the granularity of files, rather than the finer granularity of events or objects. Going towards 2005, the data creation and transport needs of the CMS simulation exercises are expected to become increasingly 'chaotic' and fine-grained, but the exact pace at which change will happen is currently not known. CMS currently has two distinct simulation packages. The Fortran-based CMSIM software takes care of the first steps in a full simulation chain. It produces files which are then read by the C++-based ORCA software. CMSIM uses flat files in the 'fz' format for its output; ORCA data storage is done using the Objectivity object database. CMSIM will be phased out in the next few years; it will be replaced by more up-to-date simulation codes using the C++-based next-generation GEANT4 physics simulation library. As targets for use in virtual data testbeds, CMSIM and ORCA each have their own strengths and weaknesses. In CMSIM, each simulation run produces one output file based on a set of runtime parameters. This yields a virtual data model that is almost embarrassingly simple: a data model of 'virtual output files', with each file having a runtime parameter set as its unique signature, and no dependencies
between files. CMSIM is very portable and can be run on almost any platform. Installing the CMSIM software is not a major job. The simulations involving ORCA display a much more complicated pattern of data handling, in which intermediate products appear [3], and a corresponding virtual data model would be much more complex, and more representative of the 2005 situation. The ORCA software is under rapid development, with cycles of a few months or less. ORCA is only supported on Linux and Solaris, and currently takes considerable effort and expertise to install. Work on more automated installation procedures is underway.

5 References to some other documents that may be of interest

The CMS Computing Technical Proposal [1], written in 1996, is still a good source of overview material. More recent sources are [5], which has material on CMS physics and its software requirements, and [2], which has more details about the CMS 2005 data model and expected access patterns. A short write-up on CMSIM and virtual data is [6]. More details on simulations using ORCA are in [3] and [4].

References

[1] CMS Computing Technical Proposal. CERN/LHCC 96-45, CMS collaboration, 19 December 1996.
[2] Koen Holtman. Introduction to CMS from a CS viewpoint. 21 Nov 2000. http://home.cern.ch/kholtman/introcms.ps
[3] David Stickland. The Design, Implementation and Deployment of Functional Prototype OO Reconstruction Software for CMS. The ORCA project. http://chep2000.pd.infn.it/abst/abs
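The 'typical physics job' of section 3.6, a function evaluated on one virtual data product per event over a sparse event set, with order-independent aggregation of the results, can be sketched as follows (the function and helper names are hypothetical, not part of any CMS software):

```python
# Sketch of the 'typical physics job' of section 3.6. The job evaluates a
# function on the requested virtual data product of every event in a sparse
# event set, then folds the results with an aggregation function that does
# not depend on traversal order.

def run_job(event_set, get_product, func, aggregate):
    """Apply func to one product per event and aggregate the results.
    Because aggregation is order-independent, a grid scheduler is free to
    traverse the event set in whatever order is most efficient."""
    return aggregate(func(get_product(event_id)) for event_id in event_set)
```

For example, counting the events whose product passes some cut predicate amounts to aggregating 0/1 values with `sum`; traversing the event set in a different order yields the same count, which is exactly the scheduling freedom the text describes.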
LIGO-T010166-00-E

LASER INTERFEROMETER GRAVITATIONAL WAVE OBSERVATORY - LIGO -
CALIFORNIA INSTITUTE OF TECHNOLOGY
MASSACHUSETTS INSTITUTE OF TECHNOLOGY

Document Type: Annual Report
LIGO-T010166-00-E
30 June 2001

LIGO DCC Archival Copy: GriPhyN Annual Report for 2000–2001, NSF Grant 0086044
GriPhyN Collaboration

California Institute of Technology, LIGO Laboratory - MS 18-34, Pasadena CA 91125
Phone (626) 395-212, Fax (626) 304-9834, E-mail: info@
Massachusetts Institute of Technology, LIGO Laboratory - MS 16NW-145, Cambridge, MA 01239
Phone (617) 253-4824, Fax (617) 253-7014, E-mail: info@

Final Version, June 30, 2001

Contents

1 The GriPhyN Project
  1.1 Introduction and goals
  1.2 Participants and first year funding
  1.3 Management structure
2 Activities and Findings for 2000–2001
  2.1 Creation of a Working Grid
  2.2 Meetings
  2.3 Testbeds
  2.4 Research activities
    2.4.1 Computer Science
    2.4.2 Virtual Data Toolkit
    2.4.3 ATLAS Research Activities
    2.4.4 CMS Research Activities
    2.4.5 LIGO Research Activities
    2.4.6 SDSS Research Activities
  2.5 Education and Outreach
  2.6 Connection to other efforts and the wider community
    2.6.1 Coordination with Other Data Grid Projects
    2.6.2 Network Coordination
  2.7 External Advisory Committee meeting
3 Products resulting from our work
  3.1 Publications
  3.2 Internal documents
  3.3 Web site
4 Plans for next year (2001–2002)
Appendix I: Participants in Project (asterisk marks institutional contact)
Appendix II: CMS Information Release About Large-Scale Test

1 The GriPhyN Project

1.1 Introduction and goals

GriPhyN is a collaboration of information technology (IT) researchers and experimental physicists who aim to provide the IT advances required to enable Petabyte-scale data intensive science in the 21st century.
Driving the project are unprecedented requirements for geographically dispersed extraction of complex scientific information from very large collections of measured data. To meet these requirements, which arise initially from the four physics experiments involved in this project (ATLAS, CMS, LIGO and SDSS) but will also be fundamental to science and commerce in the 21st century, the GriPhyN team will pursue IT advances centered on the creation of Petascale Virtual Data Grids (PVDG) that meet the data-intensive computational needs of a diverse community of thousands of scientists spread across the globe.

This report is being written at the start of GriPhyN's ninth month of funding. We have started substantial IT research activities in areas such as data management, query optimization, and data grid architecture, as was reported at our all-hands meeting in April and described in Section 2.3 below. We have also performed substantial early experiments, notably a large CMS computation across a Data Grid spanning Caltech, UW, and NCSA. Much initial effort has also been spent on recruiting and hiring students, postdocs, scientists and staff; writing detailed requirements documents; setting formal milestones; implementing a management structure and organizing our collaboration in line with these milestones; and interacting with other Grid projects and the four experiments which compose GriPhyN. During this time, we have had two collaboration-wide meetings, at which students and postdocs made significant contributions, as well as numerous smaller meetings for CS research groups and CS-experiment interactions. The report below discusses these topics in more detail.

1.2 Participants and first year funding

A table showing a list of participants and their affiliations is shown in Appendix I.
We have expended much effort in finding and hiring personnel, but have experienced a somewhat slower ramp-up in personnel than we expected when the proposal was submitted, even though we had assumed 50% of our total FTE count in the first year. At the time of this report (June 2001) we have hired or made offers to approximately 80% of our full complement of hires (including people who were promised as matching contributions at Florida, Boston U and Indiana). We have had a particularly difficult time finding suitable candidates for the Project Coordinator position, in spite of a large cross-disciplinary search. Nevertheless, we have identified one and possibly two candidates for the position and we hope to have a Project Coordinator hired sometime in July. On the other hand, we have identified an outstanding candidate for the Outreach Coordinator (Manuela Campanelli) and she will start in Fall 2001.

Although our hiring has gone somewhat more slowly than expected, we have incurred some additional costs for our External Advisory Committee, travel and web development. We estimate that slightly more than 20% of our funds will not be expended at the end of the first year, though it is hard to be precise more than three months before that date (12 subcontracts need to be tracked and the dates of some hires are not precise). We are setting up procedures to track our total expenses every three months to aid our planning and reporting.

1.3 Management structure

The management of the GriPhyN project has been conducted largely in accordance with the management plan presented in the proposal and in the more detailed GriPhyN Management Plan¹ prepared in November 2000. The organization chart from the latter document is reproduced below. We describe briefly the activity in each of the chart's sections.

• The Project Coordination Group is chaired by the Project Directors Avery and Foster.
It meets once or twice each week by telephone conference to review and plan the major activities of the project.
• The Applications Committee has not had formal meetings, but its members, the representatives of the four component physics collaborations, have worked actively with the CS Research Committee to define requirements and goals for integration of GriPhyN middleware into each collaboration's testbed as a function of time.
• The Virtual Data Toolkit Committee is chaired by Miron Livny. It has worked actively with the CS Research Committee to define the first release of the Toolkit as described in Section 2.4.2 below.
• The CS Research Committee, chaired by Carl Kesselman, has met as needed during the past several months to coordinate work on the four areas of CS research.
• The Technical Coordination Committee has worked only informally so far.
• The External Advisory Committee has been formed and has met to review the progress of the GriPhyN project during the general GriPhyN meeting at ISI in April. The EAC report² can be accessed at the GriPhyN web site.³
• An active search has been underway for the Project Coordinator position. We expect to fill the position by the middle of July 2001.

2 Activities and Findings for 2000–2001

2.1 Creation of a Working Grid

This past spring, a collaborative effort involving physicists and computer scientists from Caltech, the National Center for Supercomputing Applications (NCSA), and the University of Wisconsin-Madison produced a working example of the Grid in action. The details are described in Appendix II, which will appear in NCSA Magazine.

2.2 Meetings

The size, scope and interdisciplinary nature of GriPhyN require building effective communication channels between the members of the collaboration. Early on we decided to use face-to-face meetings to establish and maintain such channels. In early October we had our first general meeting at Argonne National Laboratory.⁴
The main objective of this meeting was to introduce the different groups to each other and to agree on a plan for jumpstarting the collaboration. Each of the physics groups gave an overview of the overall scientific mission of the group and the design principles and structure of their software and hardware infrastructure. Carl Kesselman presented a strawman architecture for a Virtual Data Grid and outlined a plan for the first year. A key element of this plan was a series of meetings between each of the four physics experiments and small groups of computer scientists to discuss in detail experiment-specific technical and conceptual aspects of Virtual Data Grids.

More than a dozen such meetings took place in the following six months. The format, dynamics and frequency of these meetings varied from one experiment to the other. However, they all contributed to a collection of extremely useful documents: use cases and requirements on the physics side and a Virtual Data Grid architecture document on the computer science side. Two meetings in December 2000, one only of computer scientists and one of representatives from the four experiments and the computer scientists, led to the formulation of the GriPhyN data grid reference architecture.

Our second "all-hands" meeting⁵ was held April 9-11, 2001 at USC/ISI in Marina del Rey. More than 60 members of the collaboration attended the three-day meeting, plus a few visitors from other projects and the External Advisory Committee. The first day was devoted to presentations and discussions on the requirement and architecture documents. Early results from the computer science efforts were reported on the second day. Three planning working groups took place in parallel on the morning of the third day. Each group focused on a key component of Virtual Data Grids. The afternoon was devoted to two invited talks that challenged the attendees with visionary ideas about distributed computing and data management.
Details and the contents of almost all talks and supporting reference documents⁶ can be found at the meeting web site.⁵ Another Computer Science and application group meeting is planned for August 2-3, 2001 in Chicago. The next all-hands meeting will be held the week of October 15, 2001 in California.

2.3 Testbeds

The activities for developing and deploying testbeds to test components of the GriPhyN architecture have been pursued separately by the different application groups, as discussed in the subsections of Section 2.4. The testbeds for CMS and ATLAS are integrated with the production activities of their international collaborations and are expected to draw on these international facilities for limited GriPhyN tests next year. These application testbeds will be operated independently for the near future. We are exploring the idea of conducting occasional wide-scale tests that would require two or more testbeds working coherently, but we anticipate that these kinds of tests would take place after 2002 when more tools are available.

2.4 Research activities

2.4.1 Computer Science

Computer science research during the first nine months of the GriPhyN project has focused on two main activities:
• definition of a baseline virtual data grid architecture
• research on basic virtual data grid technologies, such as scheduling, execution management, and virtual data instantiation.
We review the progress in each of these areas in the following sections.

2.4.1.1 Virtual Data Grid Architecture

Architecture development is a focused activity whose goal is to define a virtual data grid reference architecture that can guide the development of the virtual data toolkit as well as provide a framework for performing initial application experiments.
We have produced two main documents: "A Data Grid Reference Architecture" and "Representing Virtual Data: A Catalog Architecture for Location and Materialization Transparency." Contributions to these documents were made by many of the participating GriPhyN institutions.

Our initial reference architecture is based on existing data grid technologies including metadata catalogs, replica catalogs, software catalogs, and Grid resource and data management protocols. By heavily leveraging existing data grid components, we anticipate rapid construction and deployment of a virtual data grid testbed, which will enable application experiments. The figure to the left illustrates the main components of the reference data grid architecture. One key element is the specification of a set of catalogs used to determine whether virtual data has been materialized and, if not, how to locate and invoke the transformations required to generate the requested data. A significant issue yet to be resolved is how to "name" virtual data so that its materialization state can be determined. A number of alternatives are currently being investigated, including the use of application- or domain-specific naming strategies that leverage existing naming conventions within an experiment.

2.4.1.2 Research in Virtual Data Technologies

To date, research activities on various specific virtual data technology areas have been largely decoupled. The reference architecture will provide an integrating framework for many of these efforts, and will form the basis of cross-project collaboration. Specifically, during the next year, we anticipate that more mature research activities will be integrated into the overall system architecture and be deployed as part of our testbed environment for evaluation within the context of application experiments. We highlight some of these ongoing research activities below.

Agent-based scheduling.
Request management in virtual data grid environments is complicated by the complexity of the requests, the potential duration of request execution, and the dynamics of the underlying execution environment, including component failure. For this reason, request planning and execution management for virtual data grids must be very flexible, and capable of replanning request execution in the case of failure. A promising approach is to exploit intelligent agents as a technology for implementing request management. At USC/ISI, we have initiated an investigation of this approach using the Electric Elves software agents as a framework for information gathering, planning and execution monitoring. To date, we have demonstrated the integration of Grid resources into the Elves environment and have used KQML (Knowledge Query and Manipulation Language) based commands to initiate Grid operations. Our next step will be to build an agent knowledge base that performs intelligent recovery from failed data-movement operations.

Improving Query Performance. Research at UC Berkeley has been focused on developing techniques to reduce latency and improve the interactive performance of access to virtual data products. The work performed to date has focused on two promising directions: 1) query relaxation and associated caching techniques, and 2) dynamic prefix caching for large files.

Query relaxation is aimed at providing more answers to user queries faster in cases where there are no existing (or readily available) virtual data products that exactly match a given query specification. In this situation, the query relaxation module looks for cached data products that may overlap or be closely related to the requested data. The user is presented with a menu of such products and an indication of the expected time required to obtain them.
He/she may then choose to view one of these alternative products while waiting (or perhaps before asking) for the production of a high-latency data product. The ability to relax queries changes the caching problem somewhat, as now individual data products can "cover" multiple queries. Thus, caching policies to maximize query coverage have been developed and an initial simulation study has been performed.

Dynamic prefix caching aims to improve the cache hit rate by caching only the initial portion of large data items. When a request for such an item is received, the initial portion is sent to the user while the remainder of the item is retrieved from the slower location. If the prefix size is sufficiently large, then the latency of accessing the slower location can be completely hidden. This approach, however, may have the side effect of increasing the load on the slower site. Thus, a dynamic approach has been developed that can balance load vs. cache hit rate by increasing the prefix size for highly popular items. A simulation study of this technique is currently under way.

XML-based data manipulation: The virtual data grid requires mechanisms for the creation of derived data products. At the same time, the virtual data grid requires the ability to query the collections of derived data products. An emerging standard for querying and manipulating XML files is Quilt. A standardized version of Quilt, called XQuery, is being developed. Of great interest is whether XQuery can be extended to support scientific data types. This will make it possible to manipulate an XML encoded digital object directly within the same language that supports information discovery.

At SDSC, Xufei Qian, under the direction of Amarnath Gupta, added data types to XQuery, added overloaded operators to support scientific manipulations, and interfaced the system to the Extensible Scientific Interchange Language (XSIL).
XSIL is an XML markup language used to describe LIGO data, developed by Roy Williams (Caltech). The resulting system was used to query and manipulate XSIL encoded data files. In particular, array operations were added to XQuery, including accession, element summation, array summation, and subsequence commands for concatenation. The resulting prototype demonstrated that it is possible to integrate query and manipulation commands for XML encoded digital objects.

Data Management: There are a number of research activities that are investigating various aspects of location transparency, specifically with respect to creating and managing replicas of materialized data. Work at the University of Chicago by Kavitha Ranganathan has used simulation studies to examine the effect of alternative replication and caching strategies on application response time and bandwidth requirements. A simulator has been developed that represents data movement within a hierarchically organized data grid, and five different replication strategies have been implemented and compared via simulation studies. Complementary activities at Lawrence Berkeley Laboratory are using analytical and simulation studies to understand the performance of alternative caching strategies specifically targeted towards hierarchical storage managers. Additional simulation studies by Matei Ripeanu at the University of Chicago have examined TCP performance in typical replication scenarios involving parallel bulk transfer of many files. Our next step in this work will be to take the knowledge gained from the simulation studies and instantiate it in request planning strategies that can then be evaluated within the context of specific data access scenarios generated by one or more of the physics experiments.

Flexible Storage Technology: At the core of virtual data grid technology is the need for efficient and manageable physical data storage and movement.
Along these lines, at Wisconsin we are currently developing NeST (Network Storage Technology). NeST software transforms a commodity workstation or PC running a commodity operating system into an easy-to-administer, high-performance storage appliance.

To operate effectively in a Grid environment, NeST will support multiple data transfer protocols, enabling integration into local sites with specific protocol preferences (e.g., NFS). However, standard Grid lingua franca such as GridFTP will also be supported. One of the main challenges of NeST is the combination of these protocols into a single, efficient, and yet easy-to-maintain software base. We are also developing the software technology needed to allow NeST to configure itself to the vagaries of the underlying software and hardware platforms. In particular, NeST can statically choose between multiple concurrency architectures depending on which is best on the given platform, and in the future NeST will be able to select the most appropriate file-access methods for a particular storage platform.

We are currently investigating the following additional research issues within the NeST domain: how to build generic support for third-party transfers and wide-area caching, how to integrate NeST effectively into an opportunistic environment, and how to incorporate resource limitation and management into the NeST framework in a robust yet flexible manner.

Performance Monitoring and Analysis: The work at Northwestern University has focused on the development of an infrastructure, called Prophesy, to automate the process of generating analytical models. Prophesy automates the following processes: instrumenting code, recording performance data into a relational database, and developing analytical models based upon empirical data. Prophesy also accepts data generated by other tools such as FMMPI, gprof, pixie, or Rabbit.
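As a rough sketch of the workflow Prophesy automates, empirical timings can be recorded in a relational database and then fit to an analytical model; the table layout, function names, and model form below are invented for illustration and are not Prophesy's actual schema or interface:

```python
# Hedged sketch of a Prophesy-style workflow: store empirical timings in a
# relational database, then derive a simple analytical model from them.
# Schema and the linear model form are illustrative assumptions only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE perf (
    func TEXT, n INTEGER, seconds REAL, context TEXT)""")

# Record empirical data points (e.g., gathered from instrumented code or gprof),
# along with the context in which each measurement was taken.
trials = [("fft", 1024, 0.010, "linux/x86"),
          ("fft", 2048, 0.022, "linux/x86"),
          ("fft", 4096, 0.048, "linux/x86")]
db.executemany("INSERT INTO perf VALUES (?, ?, ?, ?)", trials)

# Fit seconds ~ c * n by least squares over the recorded rows:
# c = sum(n * t) / sum(n * n).
rows = db.execute("SELECT n, seconds FROM perf WHERE func = 'fft'").fetchall()
c = sum(n * t for n, t in rows) / sum(n * n for n, _ in rows)

def predict(n):
    """The derived analytical model: predicted runtime for problem size n."""
    return c * n

print(round(predict(8192), 3))  # -> 0.094
```

A real system would of course support richer model families and granularities; the sketch only shows the record-then-fit pipeline.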
The data from these tools is placed in the performance database, which records the context in which the data was collected as well as the data itself. The data analysis component uses this data to produce an analytical performance model at the level of granularity specified by the user, or to answer queries about the best implementation of a given function.

We are also developing a performance requirements document for the GriPhyN project. The goal of this document is to identify the performance monitoring and modeling requirements for the four HEP experiments and the related CS activities. Our initial focus is on the ATLAS experiment; future work entails extending the requirements to the other three HEP experiments. This document attempts to identify (1) which system components and application features require monitoring and recording of performance data, and (2) which performance models are needed and with what accuracy. Once these requirements are identified, we can leverage current work in performance tool development to identify the suite of tools needed.

2.4.2 Virtual Data Toolkit

In preparation for the release of the first version of the Virtual Data Toolkit (VDT) by the end of the year, the Globus and Condor teams focused on adapting, enhancing, and hardening existing software tools. The original timetable and scope of this version were modified to reflect the significant reduction in first-year funding allocated to VDT activities. The VDT work was guided by the requirements and timelines gathered from the experiments and by the design and implementation principles established by the GriPhyN architecture document. Catalogs play a key role in this architecture. A number of different data catalogs were prototyped, carefully evaluated, and tested. All catalogs use a common software foundation provided by the Globus Toolkit. The performance and robustness of the GridFTP component were also addressed.
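The role such catalogs play can be sketched as a mapping from logical file names to physical replicas, from which a request planner then chooses; the class and the site names below are invented for illustration and are not the Globus Toolkit's actual replica catalog API:

```python
# Hedged sketch of a replica catalog: logical file names map to one or more
# physical locations. Interface, URL schemes, and site names are illustrative.
class ReplicaCatalog:
    def __init__(self):
        self._replicas = {}   # logical name -> list of physical URLs

    def register(self, logical, physical):
        """Record one more physical replica of a logical file."""
        self._replicas.setdefault(logical, []).append(physical)

    def lookup(self, logical):
        """Return all known physical replicas of a logical file."""
        return list(self._replicas.get(logical, []))

catalog = ReplicaCatalog()
catalog.register("lfn://cms/run42/events.root",
                 "gsiftp://tier1.example.org/data/run42/events.root")
catalog.register("lfn://cms/run42/events.root",
                 "gsiftp://tier2.example.edu/cache/run42/events.root")

# A request planner would pick among these replicas, e.g., by proximity or load.
print(len(catalog.lookup("lfn://cms/run42/events.root")))  # -> 2
```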
We expect this component to be the "work horse" of any Data Grid and therefore pay special attention to its performance and robustness. The runtime support infrastructure of Condor was enhanced to fully utilize the capabilities of GridFTP and the security infrastructure of Globus. New features were added to the Directed Acyclic Graph Manager (DAGMan) of Condor and extensively tested. DAGMan provides advanced job control services and is expected to serve as the agent responsible for dynamically materializing virtual data. Early versions of VDT components were successfully used for distributed production and reconstruction of simulated events for CMS.

2.4.3 ATLAS Research Activities

ATLAS-GriPhyN activities proceeded along two fronts. First, ATLAS physicist-developers and GriPhyN computer scientists met on two occasions [7] to discuss computing methods in high-energy physics, the architecture and status of the Athena [8,9,10] analysis control framework, and virtual data concepts specific to the Athena framework. A prototype virtual data query use case was developed, and development of the associated Athena and ATLAS database software has been incorporated into the ATLAS Software and Computing Project Grid Plan [11]. Second, several more practical projects were undertaken: grid testbed development, prototype Tier 2 center selection and development, grid account registration software, and grid access software.

The testbed development proceeded throughout the year, with a kick-off workshop [12] at the University of Michigan, followed by bi-weekly meetings. Five university sites participate in the activity: Boston University, Indiana University, the University of Michigan, Oklahoma University, and the University of Texas at Arlington; three national laboratories also participate: Argonne National Laboratory, Brookhaven National Laboratory, and Lawrence Berkeley National Laboratory. Approximately 15 users are participating in this development.
Each site has contributed existing facility resources to be dedicated for this purpose, and site administrators have deployed current releases of grid toolkits including Globus and Condor. Accounts have been created and grid certificates exchanged for the current group of testbed participants. Each site has installed a complete ATLAS software environment, and work is ongoing to develop an ATLAS "kit" for simple distribution and update to future grid nodes. The primary applications to be run on the system in 2001 include fast Monte Carlo simulations executed within the Athena control framework, and full detector simulations using legacy FORTRAN codes. The infrastructure will be used during the Mock Data Challenges in 2002 and 2003 to validate the LHC Computing Model [13].

Indiana University and Boston University were selected through a competitive process as the prototype Tier 2 centers for the U.S. ATLAS-GriPhyN project. Development of the first center at Indiana University has begun. A system consisting of a 16-node (32-processor) Linux farm and 500 GB of RAID disk, with access to HPSS hierarchical storage facilities for archival data, has been deployed. The system is managed by professional IT staff in a machine room adjacent to the I2/Abilene Network Operations Center (NOC) in Indianapolis. Current releases of grid packages have been installed. A dedicated AFS server ring will be installed this August.

GriPhyN and ATLAS users will use a secure facility to support grid account request and authorization. GRIPE [14] (Grid Registration Infrastructure for Physics Experiments) was implemented by four computer science graduate students at Indiana University under the supervision of Profs. Gardner and Bramley. The system facilitates account creation across all administrative domains. GRIPE users choose trusted sponsors and grid site-specific resources and submit account requests through the system.
Site administrators interact with GRIPE to communicate with users and exchange Globus certificates. The system is being tested this summer by the testbed working group, and it is the primary mechanism for account creation at the Tier 2 centers.

A web-based portal for ATLAS users is currently under development. GRAPPA [15] (Grid Access Portal for Physics Applications) will provide ATLAS and GriPhyN users a single point of access to grid resources. The system provides a simple utility for job submission to Condor flocks and other available grid resources. The user interface will use script management tools based on the Common Component Architecture Toolkit (CCAT) software being developed by Profs. Bramley and Gannon at Indiana University. The first prototypes will be available during Summer 2001.

Summary of meetings held related to the ATLAS-GriPhyN project, with strong participation from GriPhyN personnel:
• 2nd International ATLAS Grid Workshop [16], CERN, Geneva, Switzerland, September 29-30, 2000.
• 3rd International ATLAS Grid Workshop [17], CERN, Geneva, Switzerland, January 25, 2001.