Torsion waves in metric-affine field theory
Focused Ion Beam (FIB) Sputtering
Title: A Review of Focused Ion Beam Sputtering
Authors: Mohammad Yeakub Ali, Wayne Hung and Fu Yongqi
Journal: International Journal of Precision Engineering and Manufacturing, vol. 11, no. 1, pp. 157-170
3 FIB sputtering models
Although FIB sputtering can fabricate micro-components with high precision, controlling the sputtering depth is quite difficult. When the substrate contains compositionally distinct layers, as in integrated-circuit structures on a wafer, SIMS (secondary ion mass spectrometry) can detect and identify the transition layers with a precision of about 20 nm; for a material of single composition, however, SIMS cannot be used, so endpoint detection becomes a major difficulty in FIB sputtering. Estimating the surface roughness is another key issue in FIB sputter micromachining. The following subsections discuss several models for achieving the desired sputtering depth, geometric integrity, and surface finish.
Their relatively large mass is what induces the sputtering effect. The accelerating voltage accelerates the ions towards the grounded target: the higher the accelerating voltage, the faster the ions travel.
2.1 Instruments
A basic single-beam instrument consists of a liquid-metal ion source, an ion column, a sample stage, and a vacuum chamber.
A typical focused-ion-beam microscope comprises a liquid-metal ion source with its extraction electrode, high-voltage supplies for the pre-focusing and focusing electrodes, electrical beam-alignment elements, an astigmatism-correction (stigmator) lens, scanning coils, a secondary-particle detector, a movable sample stage, a vacuum system, vibration and magnetic-field isolation, control electronics, and a computer.
The sputtering yield at a given angle of incidence varies with several factors, among them the crystallographic orientation through channelling. Along an easy channelling direction the ions initially undergo only inelastic interactions, making grazing collisions with sample atoms lying in the crystal planes, and penetrate deep into the crystal before elastic scattering sets in, so only a few atoms are sputtered from the surface. This is analogous to the effect of crystal orientation on secondary-electron yield at low energies. Channelling therefore produces steps in the sputtered surface across grain boundaries. The sputtered profile also depends on the direction and sequence in which the raster is scanned over the sample surface: for example, an annular profile milled by fast, repeated scans differs from one milled point by point with a slow scan.
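The depth-control problem described above can be made concrete with a back-of-the-envelope volume-removal estimate. The sketch below is not from the review: it assumes a uniform beam of constant current, a constant sputter yield Y (atoms per ion), and illustrative values for the beam current, milled area and atomic volume.

```python
# Hypothetical order-of-magnitude estimate of FIB milled depth (not from the
# review): removed volume = Y * (number of incident ions) * atomic volume,
# for a beam of current I rastered uniformly over an area A for a time t.
E_CHARGE = 1.602e-19  # C, charge of a singly ionized Ga+ ion

def milled_depth(beam_current, dwell_time, area, sputter_yield, atomic_volume):
    """Return the milled depth in metres for a uniformly rastered beam."""
    n_ions = beam_current * dwell_time / E_CHARGE
    removed_volume = sputter_yield * n_ions * atomic_volume
    return removed_volume / area

# Illustrative numbers only: 100 pA Ga+ beam, 60 s, 1 um x 1 um box,
# Y = 2.5 atoms/ion, atomic volume of silicon ~2.0e-29 m^3.
depth = milled_depth(100e-12, 60.0, 1e-12, 2.5, 2.0e-29)
print(f"estimated milled depth ~ {depth * 1e9:.0f} nm")
```

Such an estimate only bounds the expected depth; it does not remove the need for endpoint detection, since the actual yield varies with material, incidence angle and, as noted above, crystal orientation.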
Thermal gravitational waves
The thermal velocity of the particles is v = (3kT_c/m)^{1/2}, where m is the particle (proton) mass. For a core temperature of T_c ≈ 10^7 K, the velocity of the particles is of the order of 8 × 10^5 m/s. For a star of density 200 g/cc ≈ 2 × 10^5 kg/m^3, substituting these values we get the power of thermal gravitational waves emitted as ≈ 10^9 W, at a frequency of ≈ 10^17 Hz.
The flux of thermal gravitational waves from the Sun received at the Earth is of the order of half a watt.
2. Thermal gravitational waves from compact stars
In the case of white dwarfs, the number density is of the order of n_1 ≈ n_2 ≈ 10^37 m^-3, and the velocity corresponding to the white-dwarf core temperature of T_c ≈ 10^8 K is of the order of 2 × 10^6 m/s. The volume of the white dwarf is of the order of 4 × 10^18 m^3, and the frequency corresponding to the temperature T_c ≈ 10^8 K is of the order of 10^18 Hz. And for a white dwarf,
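As a quick numerical cross-check of the orders of magnitude quoted above (my own sketch, not part of the source), evaluating v = (3kT_c/m)^{1/2} for protons and the characteristic thermal frequency ν ≈ kT_c/h at the two core temperatures reproduces the ~10^5–10^6 m/s velocities and the ~10^17–10^18 Hz frequency scale:

```python
# Order-of-magnitude check (not from the source) of the thermal velocity
# v = sqrt(3*k*T_c/m) and the characteristic thermal frequency nu ~ k*T_c/h,
# assuming the particles are protons.
K_B = 1.380649e-23      # J/K
H_PLANCK = 6.62607e-34  # J s
M_PROTON = 1.6726e-27   # kg (assumed particle mass)

def thermal_velocity(T_c, m=M_PROTON):
    return (3.0 * K_B * T_c / m) ** 0.5

def thermal_frequency(T_c):
    return K_B * T_c / H_PLANCK

for label, T in [("solar core, 1e7 K", 1e7), ("white-dwarf core, 1e8 K", 1e8)]:
    print(f"{label}: v ~ {thermal_velocity(T):.1e} m/s, "
          f"nu ~ {thermal_frequency(T):.1e} Hz")
```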
Rapid Analysis of Kerogen by Attenuated Total Reflection-Fourier Transform Infrared Spectroscopy
Ran Jing (冉敬), Du Gu (杜谷), Pan Zhongxi (潘忠习)
(Chengdu Institute of Geology and Mineral Resources, Chengdu 610082, Sichuan, China)
Abstract: Kerogen, the oil-generating organic matter, is an important subject of petroleum-geology research. In this work kerogen samples were analysed by Fourier transform infrared spectroscopy using the attenuated total reflection (ATR) technique, and the peak shapes, peak positions and structural parameters were compared with spectra obtained by the conventional pressed-pellet method. The comparison shows that: (1) the ATR spectra exhibit spectral features similar to those of the pellet-method absorption spectra; (2) the absorption intensity of the ATR spectra is weaker than that of the pellet-method spectra; and (3) the infrared structural parameters obtained by the ATR technique and by the pellet method are comparable. Because ATR requires no sample preparation and reduces interference from moisture, it can serve as a rapid test for kerogen samples, applicable to kerogen type identification, thermal-maturity studies and source correlation.
Keywords: Fourier transform infrared spectroscopy; attenuated total reflection; kerogen
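For readers who want to reproduce the kind of "infrared structural parameter" comparison mentioned in the abstract, the sketch below is illustrative only (not the authors' code, and not necessarily their exact parameters): it integrates a baseline-corrected spectrum over typical aliphatic C-H (≈2800-3000 cm⁻¹) and aromatic C=C (≈1580-1620 cm⁻¹) bands and forms their ratio.

```python
# Illustrative sketch (not the authors' code): a simple kerogen structural
# parameter -- the ratio of integrated aliphatic C-H stretching absorbance
# (~2800-3000 cm^-1) to aromatic C=C absorbance (~1580-1620 cm^-1) -- computed
# from a baseline-corrected spectrum. Band limits are typical literature values,
# not necessarily those used in the paper.
import numpy as np

def band_area(wavenumber, absorbance, lo, hi):
    """Trapezoidal integral of absorbance over [lo, hi] cm^-1 (ascending axis)."""
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    return np.trapz(absorbance[mask], wavenumber[mask])

def aliphatic_aromatic_ratio(wavenumber, absorbance):
    aliphatic = band_area(wavenumber, absorbance, 2800.0, 3000.0)
    aromatic = band_area(wavenumber, absorbance, 1580.0, 1620.0)
    return aliphatic / aromatic

# Usage with two measured spectra (wavenumber and absorbance arrays):
# ratio_atr    = aliphatic_aromatic_ratio(wn_atr, abs_atr)
# ratio_pellet = aliphatic_aromatic_ratio(wn_pellet, abs_pellet)
```

The same function applied to an ATR spectrum and to a pellet spectrum of the same sample gives the kind of parameter comparison summarized in point (3) of the abstract.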
Permian–Triassic extinction event
The Permian–Triassic extinction event was a mass extinction that occurred at the boundary between the Permian period of the Palaeozoic era and the Triassic period of the Mesozoic era, about 251.4 million years ago [1][2].
Measured by species lost, about 70% of terrestrial vertebrate species and up to 96% of marine species disappeared [3]; the event also caused the only known mass extinction of insects.
In all, some 57% of families and 83% of genera disappeared [4][5].
After the extinction, terrestrial and marine ecosystems took several million years to recover fully, longer than after any other major extinction event [3].
It was the largest of the five major extinction events in the geological record and is therefore informally called the Great Dying [6] or the mother of all mass extinctions [7].
The course and causes of the Permian–Triassic extinction event remain under debate [8].
Depending on the study, the extinction can be divided into one [1] to three [9] phases.
The first, smaller pulse may have been due to gradual environmental change, possibly sea-level change, ocean anoxia, or the arid climate produced by the assembly of Pangaea; the later pulse(s) were rapid and severe, and proposed causes include an impact event, volcanic eruptions [10], or an abrupt sea-level change triggering the massive release of methane hydrates [11].
Contents: 1 Dating; 2 Extinction patterns (2.1 Marine organisms; 2.2 Terrestrial invertebrates; 2.3 Land plants: 2.3.1 Plant ecosystems, 2.3.2 Coal gap; 2.4 Terrestrial vertebrates; 2.5 Possible explanations of the extinction patterns); 3 Recovery of ecosystems (3.1 Changes in marine ecosystems; 3.2 Terrestrial vertebrates); 4 Causes of the extinction (4.1 Impact event; 4.2 Volcanic eruptions; 4.3 Methane hydrate gasification; 4.4 Sea-level change; 4.5 Ocean anoxia; 4.6 Hydrogen sulfide; 4.7 Formation of Pangaea; 4.8 Multiple causes); 5 Notes; 6 Further reading; 7 External links

Dating
Before the twentieth century, strata spanning the Permian–Triassic boundary had rarely been found, so it was very difficult for scientists to estimate accurately when the extinction occurred, how long it lasted, and how wide a geographic area it affected [12].
Surface wave higher-mode phase velocity measurements using a roller-coaster-type algorithm
Geophys.J.Int.(2003)155,289–307Surface wave higher-mode phase velocity measurements usinga roller-coaster-type algorithm´Eric Beucler,∗´El´e onore Stutzmann and Jean-Paul MontagnerLaboratoire de sismologie globale,IPGP,4place Jussieu,75252Paris Cedex05,France.E-mail:beucler@ipgp.jussieu.frAccepted2003May20.Received2003January6;in original form2002March14S U M M A R YIn order to solve a highly non-linear problem by introducing the smallest a priori information,we present a new inverse technique called the‘roller coaster’technique and apply it to measuresurface wave mode-branch phase velocities.The fundamental mode and thefirst six overtoneparameter vectors,defined over their own significant frequency ranges,are smoothed averagephase velocity perturbations along the great circle epicentre–station path.These measurementsexplain well both Rayleigh and Love waveforms,within a maximum period range includedbetween40and500s.The main idea of this technique is tofirst determine all possibleconfigurations of the parameter vector,imposing large-scale correlations over the model space,and secondly to explore each of them locally in order to match the short-wavelength variations.Thefinal solution which achieves the minimum misfit of all local optimizations,in the least-squares sense,is then hardly influenced by the reference model.Each mode-branch a posteriorireliability estimate turns out to be a very powerful instrument in assessing the phase velocitymeasurements.Our Rayleigh results for the Vanuatu–California path seem to agree correctlywith previous ones.Key words:inverse problem,seismic tomography,surface waves,waveform analysis.1I N T R O D U C T I O NOver the last two decades,the resolution of global tomographic models has been greatly improved,because of the increase in the amount and the quality of data,and due to more and more sophisticated data processing and inversion schemes(Woodhouse&Dziewonski1984, 1986;Montagner1986;Nataf et al.1986;Giardini et al.1987;Montagner&Tanimoto1990;Tanimoto1990;Zhang&Tanimoto1991; Su et al.1994;Li&Romanowicz1995;Romanowicz1995;Trampert&Woodhouse1995;Laske&Masters1996;Ekstr¨o m et al.1997; Grand et al.1997;van der Hilst et al.1997;Liu&Dziewonski1998;Ekstr¨o m&Dziewonski1998;Laske&Masters1998;M´e gnin& Romanowicz2000;Ritsema&van Heijst2000,among others).These models are derived from surface wave phase velocities and/or body wave traveltimes(or waveforms)and/or free-oscillation splitting measurements.Body wave studies provide high-resolution models but suffer from the inhomogeneous distribution of earthquakes and recording stations,even when considering reflected or diffracted phases.On the other hand,the surface wave fundamental mode is mainly sensitive to the physical properties of the upper mantle.So,the investigation of the transition zone on a global scale,which plays a key role in mantle convection,can only be achieved by using higher-mode surface waves.Afirst attempt at providing a global tomographic model using these waves has been proposed by Stutzmann&Montagner(1994),but with a limited amount of data.More recently,van Heijst&Woodhouse(1999)computed degree-12phase velocity maps of the fundamental mode and the fourfirst overtones for both Love and Rayleigh waves.These data have been combined with body wave traveltimes measurements and free-oscillation splitting measurements,to provide a global tomographic model with a high and uniform resolution over the whole mantle (Ritsema et al.1999;van Heijst et al.1999).The most recent S H model for the whole mantle 
was proposed by M´e gnin&Romanowicz (2000).This degree-24model results from waveform inversion of body and surface Love waves,including fundamental and higher modes and introducing cross-branch coupling.Extracting information from higher-mode surface waves is a difficult task.The simultaneous arrivals(Fig.3in Section3)and the interference between the different mode-branches make the problem very underdetermined and non-linear.To remove the non-linearity,Cara &L´e vˆe que(1987)and L´e vˆe que et al.(1991)compute the cross-correlogram between the data and monomode synthetic seismograms and ∗Now at:´Ecole Normale Sup´e rieure,24rue Lhomond,75231Paris Cedex05,France.C 2003RAS289290´E.Beucler,´E.Stutzmann and J.-P.Montagnerinvert the amplitude and the phase of thefiltered cross-correlogram.On the other hand,Nolet et al.(1986)and Nolet(1990)use an iterative inverse algorithm tofit the waveform in the time domain and increase the model complexity within the iterations.These two methods provide directly a1-D model corresponding to an average epicentre–station path.They werefirst used‘manually’,which limited the amount of data that could be processed.The exponential increase in the amount of good-quality broad-band data has made necessary the automation of most parts of the data processing and an automatic version of these methods has been proposed by Debayle(1999)for the waveform inversion technique of Cara&L´e vˆe que(1987)and by Lebedev(2000)and Lebedev&Nolet(2003)for the partition waveform inversion.Stutzmann&Montagner(1993)split the inversion into two steps;at each iteration,a least-squares optimization to measure phase velocities is followed by an inversion to determine the1-D S-wave velocity model,in order to gain insight into the factors that control the depth resolution.They retrieve the phase velocity for a set of several seismograms recorded at a single station and originating from earthquakes located in the same area in order to improve the resolution.Another approach has been followed by van Heijst&Woodhouse(1997)who proposed a mode-branch stripping technique based on monomode cross-correlation functions.Phase velocity and amplitude perturbations are determined for the most energetic mode-branch,the waveform of which is then subtracted from the seismogram in order to determine the second most energetic mode-branch phase velocity and amplitude perturbations,and so on.More recently,Y oshizawa&Kennett(2002)used the neighbourhood algorithm(Sambridge1999a,b)to explore the model space in detail and to obtain directly a1-D velocity model which achieves the minimum misfit.It is difficult to compare the efficiency of these methods because they all follow different approaches to taking account of the non-linearity of the problem.Up to now,it has only been possible to compare tomographic results obtained using these different techniques.In this paper,we introduce a new semi-automatic inverse procedure,the‘roller coaster’technique(owing to the shape of the misfit curve displayed in Fig.6b in Section3.4.1),to measure fundamental and overtone phase velocities both for Rayleigh and Love waves.This method can be applied either to a single seismogram or to a set of seismograms recorded at a single station.To deal with the non-linearity of the problem,the roller coaster technique combines the detection of all possible solutions at a large scale(which means solutions of large-wavelength variations of the parameter vector over the model space),and local least-squares inversions close to each of them,in 
order to match small variations of the model.The purpose of this article is to present an inverse procedure that introduces as little a priori information as possible in a non-linear scheme.So,even using a straightforward phase perturbation theory,we show how this algorithm detects and converges towards the best global misfit model.The roller coaster technique is applied to a path average theory but can be later adapted and used with a more realistic wave propagation theory.One issue of this study is to provide a3-D global model which does not suffer from strong a priori constraints during the inversion and which then can be used in the future as a reference model.We describe hereafter the forward problem and the non-linear inverse approach developed for solving it.An essential asset of this technique is to provide quantitative a posteriori information,in order to assess the accuracy of the phase velocity measurements.Resolution tests on both synthetic and real data are presented for Love and Rayleigh waves.2F O RWA R D P R O B L E MFollowing the normal-mode summation approach,a long-period seismogram can be modelled as the sum of the fundamental mode(n=0) and thefirst higher modes(n≥1),hereafter referred to as FM and HM,respectively.Eigenfrequencies and eigenfunctions are computed for both spheroidal and toroidal modes in a1-D reference model,PREM(Dziewonski&Anderson1981)in our case.Stoneley modes are removed,then the radial order n for the spheroidal modes corresponds to Okal’s classification(Okal1978).In the following,all possible sorts of coupling between toroidal and spheroidal mode-branches(Woodhouse1980;Lognonn´e&Romanowicz1990;Deuss&Woodhouse2001) and off-great-circle propagation effects(Woodhouse&Wong1986;Laske&Masters1996)are neglected.For a given recorded long-period seismogram,the corresponding synthetic seismogram is computed using the formalism defined by Woodhouse&Girnius(1982).In the most general case,the displacement u,corresponding of thefirst surface wave train,in the time domain, can be written asu(r,t)=12π+∞−∞nj=0A j(r,ω)exp[i j(r,ω)]exp(iωt)dω,(1)where r is the source–receiver spatial position,ωis the angular frequency and where A j and j represent the amplitude and the phase of the j th mode-branch,respectively,in the frequency domain.In the following,the recorded and the corresponding synthetic seismogram spectra (computed in PREM)are denoted by(R)and(S),respectively.In the Fourier domain,following Kanamori&Given(1981),a recorded seismogram spectrum can be written asA(R)(r,ω)expi (R)(r,ω)=nj=0B j(r,ω)expij(r,ω)−ωaCj(r,ω),(2)where a is the radius of the Earth, is the epicentral distance(in radians)and C(R)j(r,ω)is the real average phase velocity along the epicentre–station path of the j th mode-branch,which we wish to measure.The term B j(r,ω)includes source amplitude and geometrical spreading, whereas j(r,ω)corresponds to the source phase.The instrumental response is included in both terms and this expression is valid for bothRayleigh and Love waves.The phase shift due to the propagation in the real medium then resides in the term exp[−iωa /C(R)j(r,ω)].C 2003RAS,GJI,155,289–307The roller coaster technique291 Figure1.Illustration of possible2πphase jumps over the whole frequency range(dashed lines)or localized around a given frequency(dotted line).Thereference phase velocity used to compute these three curves is represented as a solid line.Considering that,tofirst order,the effect of a phase perturbation dominates over that of the amplitude perturbation(Li&Tanimoto 
1993),and writing the real slowness as a perturbation of the synthetic slowness(computed in the1-D reference model),eq.(2)becomesA(R)(r,ω)expi (R)(r,ω)=nj=0A(S)j(r,ω)expij(r,ω)−ωaC(S)j(ω)−χ,(3) whereχ=ωa1C(R)j(r,ω)−1C(S)j(r,ω).(4) Let us now denote by p j(r,ω),the dimensionless parameter vector of the j th mode-branch defined byp j(r,ω)=C(R)j(r,ω)−C(S)j(ω)Cj(ω).(5)Finally,introducing the synthetic phase (S)j(r,ω),as the sum of the source phase and the phase shift due to the propagation in the reference model,the forward problem can be expressed asd=g(p),A(R)(r,ω)expi (R)(r,ω)=nj=0A(S)j(r,ω)expi(S)j(r,ω)+ωaCj(ω)p j(r,ω).(6)For practical reasons,the results presented in this paper are computed following a forward problem expression based on phase velocity perturbation expanded to third order(eq.A5).When considering an absolute perturbation range lower than10per cent,results are,however, identical to those computed following eq.(6)(see Appendix A).Formally,eq.(6)can be summarized as a linear combination of complex cosines and sines and for this reason,a2πundetermination remains for every solution.For a given parameter p j(r,ω),it is obvious that two other solutions can be found by a2πshift such asp+j(r,ω)=p j(r,ω)+2πC(S)j(ω)ωa and p−j(r,ω)=p j(r,ω)−2πC(S)j(ω)ωa.(7) As an example of this feature,all the phase velocity curves presented in Fig.1satisfy eq.(6).This means that2πphase jumps can occur over the whole frequency range but can also be localized around a given frequency.Such an underdetermination as expressed in eq.(6)and such a non-unicity,in most cases due to the2πphase jumps,are often resolved by imposing some a priori constraints in the inversion.A contrario, the roller coaster technique explores a large range of possible solutions,with the smallest a priori as possible,before choosing the model that achieves the minimum misfit.3D E S C R I P T I O N O F T H E R O L L E R C O A S T E R T E C H N I Q U EThe method presented in this paper is a hybrid approach,combining detection of all possible large-scale solutions(which means solutions of long-wavelength configurations of the parameter vector)and local least-squares optimizations starting from each of these solutions,in order to match the short-wavelength variations of the model space.The different stages of the roller coaster technique are presented in Fig.2and described hereafter.Thefirst three stages are devoted to the reduction of the problem underdetermination,while the non-linearity and the non-unicity are taken into account in the following steps.C 2003RAS,GJI,155,289–307292´E.Beucler,´E.Stutzmann and J.-P.MontagnerStage1Stage2Stage3Stage4using least-squares2phasejumps?Stage5Stage6Figure2.Schematic diagram of the roller coaster technique.See Section3for details.3.1Selection of events,mode-branches and time windowsEvents with epicentral distances larger than55◦and shorter than135◦are selected.Thus,the FM is well separated in time from the HM(Fig.3), and thefirst and the second surface wave trains do not overlap.Since the FM signal amplitude is much larger than the HM amplitude for about 95per cent of earthquakes,each seismogram(real and synthetic)is temporally divided into two different time windows,corresponding to the FM and to the HM parts of the signal.An illustration of this amplitude discrepancy in the time domain is displayed in Fig.3(b)and when focusing on Fig.4(a),the spectrum amplitude of the whole real signal(FM+HM)is largely dominated by the FM one.Eight different pickings defining the four time windows,illustrated 
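Before walking through the stages in detail, the structure of the forward problem of eq. (6) and the 2π ambiguity of eq. (7) can be illustrated with a minimal numerical sketch. The code below is not the authors'; the amplitudes, phases, epicentral distance and reference phase velocities are placeholders chosen only to show that shifting one mode-branch parameter by 2πC_j^(S)/(ωaΔ) leaves the predicted spectrum unchanged.

```python
# Minimal sketch (not the authors' code) of the forward problem of eq. (6):
# the predicted spectrum is a sum over mode branches j of
#   A_j(w) * exp(i * [Phi_j(w) + w*a*Delta/C_j(w) * p_j(w)]),
# where p_j is the dimensionless phase-velocity perturbation. The 2*pi
# ambiguity of eq. (7) is checked numerically. All values are placeholders.
import numpy as np

A_EARTH = 6.371e6        # Earth radius (m)
DELTA = np.radians(100)  # epicentral distance (rad), within the 55-135 deg selection

def predicted_spectrum(omega, amps, phases, C_synth, p):
    """omega: (F,), amps/phases/C_synth/p: (J, F) arrays -> complex spectrum (F,)."""
    phase_term = phases + omega * A_EARTH * DELTA / C_synth * p
    return np.sum(amps * np.exp(1j * phase_term), axis=0)

# Two toy mode branches over a coarse frequency grid (40-500 s periods).
omega = 2 * np.pi / np.linspace(40.0, 500.0, 50)
amps = np.vstack([np.ones_like(omega), 0.3 * np.ones_like(omega)])
phases = np.vstack([0.2 * omega, -0.1 * omega])
C_synth = np.vstack([np.full_like(omega, 4000.0),   # m/s, placeholders
                     np.full_like(omega, 4500.0)])
p = np.vstack([0.01 * np.ones_like(omega), -0.02 * np.ones_like(omega)])

d0 = predicted_spectrum(omega, amps, phases, C_synth, p)

# Shift branch 0 by the 2*pi-equivalent perturbation of eq. (7):
p_shifted = p.copy()
p_shifted[0] += 2 * np.pi * C_synth[0] / (omega * A_EARTH * DELTA)
d1 = predicted_spectrum(omega, amps, phases, C_synth, p_shifted)
print(np.allclose(d0, d1))   # True: the data cannot distinguish the two models
```

This degeneracy is what rules out a single gradient descent from the reference model and motivates the large-scale exploration described next.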
in Fig.3(a),are computed using synthetic mode-branch wave trains and are checked manually.For this reason,this method is not completely automated,but this picking step is necessary to assess the data quality and the consistency between recorded and synthetic seismograms.In Appendix B,we show that the phase velocity measurements are not significantly affected by a small change in the time window dimensions.An advantage of this temporal truncation is that,whatever the amplitude of the FM,the HM part of the seismograms can always be treated.Hence,the forward problem is now split into two equations,corresponding to the FM and to the HM parts,respectively.A(R) FM (r,ω)expi (R)FM(r,ω)=A(S)0(r,ω)expi(S)0(r,ω)+ωaC(ω)p0(r,ω)(8)andA(R) HM (r,ω)expi (R)HM(r,ω)=6j=1A(S)j(r,ω)expi(S)j(r,ω)+ωaC(S)j(ω)p j(r,ω).(9)Seismograms(real and synthetic)are bandpassfiltered between40and500s.In this frequency range,only thefirst six overtone phase velocities can be efficiently retrieved.Tests on synthetic seismograms(up to n=15)with various depths and source parameters have shown that the HM for n≥7have negligible amplitudes in the selected time and frequency windows.C 2003RAS,GJI,155,289–307The roller coaster technique293Figure3.(a)Real vertical seismogram(solid line)and its corresponding synthetic computed in PREM(dotted line).The earthquake underlying this waveform occurred on1993September4in Afghanistan(36◦N,70◦E,depth of190km)and was recorded at the CAN GEOSCOPE station(Australia).The epicentral distance is estimated at around11340km.Both waveforms are divided into two time windows corresponding to the higher modes(T1–T2,T5–T6)and to the fundamental mode(T3–T4,T7–T8).(b)The contribution of each synthetic monomode shows the large-amplitude discrepancy and time delay between the fundamental mode and the overtones.The different symbols refer to the spectra displayed in Fig.4.3.2Clustering the eventsFollowing eq.(8),a single seismogram is sufficient to measure the FM phase velocity,whereas for the HM(eq.9)the problem is still highly underdetermined since the different HM group velocities are very close.This can be avoided by a reduction of the number of independent parameters considering mathematical relations between different mode-branch phase velocities.The consequence of such an approach is to impose a strong a priori knowledge on the model space,which may be physically unjustified.Another way to reduce this underdetermination is to increase the amount of independent data while keeping the parameter space dimension constant.Therefore,all sufficiently close events are clustered into small areas,and each individual ray path belonging to the same box is considered to give equivalent results as a common ray path.This latter approach was followed by Stutzmann&Montagner(1993),but with5×5deg2boxes independently of epicentral distance and azimuth values,due to the limited number of data.Here,in order to prevent any bias induced by the clustering of events too far away from one to another,and to be consistent with the smallest wavelength,boxes are computed with a maximum aperture angle of2◦and4◦in the transverse and longitudinal directions,respectively(Fig.5),with respect to the great circle path.The boxes are computed in order to take into account as many different depths and source mechanisms as possible.The FM phase velocity inversion is performed for each path between a station and a box,whereas the HM phase velocities are only measured for the boxes including three or more events.Since only the sixfirst 
mode-branches spectra are inverted,the maximum number of events per box is set to eight.The use of different events implies average phase velocity measurements along the common ray paths which can be unsuitable for short epicentral distances,but increases the accuracy of the results for the epicentral distances considered.C 2003RAS,GJI,155,289–307294´E.Beucler,´E.Stutzmann and J.-P.MontagnerFigure4.(a)The normalized amplitude spectra of the whole real waveform(solid line)displayed in Fig.3(a).The real FM part of the signal(truncated between T3and T4)is represented as a dotted line and the real HM part(between T1and T2)as a dashed line.(b).The solid line corresponds to the normalized spectrum amplitude of the real signal truncated between T3and T4(Fig.3a).The corresponding synthetic FM is represented as a dotted line and only the frequency range represented by the white circles is selected as being significant.(c)Selection of HM inversion frequency ranges using synthetic significant amplitudes.The solid line corresponds to the real HM signal,picked between T1and T2(Fig.3a).For each mode-branch(dotted lines),only the frequency ranges defined by the symbols(according to Fig.3b)are retained for the inversion.(d)Close up of the sixth synthetic overtone,in order to visualize the presence of lobes and the weak contribution frequency range in the spectrum amplitude.The stars delimit the selected frequency range.3.3Determination of the model space dimensionReal and synthetic amplitude spectra are normalized in order to minimize the effects due to the imprecision of source parameters and of instrumental response determination.As presented in Fig.4,a synthetic mode-branch spectrum is frequently composed by several lobes due to the source mechanism.Between each lobe and also near the frequency range edges due to the bandpassfilter,the amplitude strongly decreases down to zero,and therefore phase velocities are absolutely not constrained at these frequencies.It is around these frequencies that possible local2πphase jumps may occur(Fig.1).Then,we decide to reduce the model space dimension in order to take into account only well-constrained points.For each spectrum,the selection of significant amplitudes,with a thresholdfixed to10per cent of the mean maximum spectra amplitude,defines the inverted frequency range.In the case of several lobes in a synthetic mode-branch amplitude spectrum,only the most energetic one is selected as shown in Figs4(c)and(d).For a given mode-branch,the simultaneous use of different earthquakes implies a discrimination criterion based upon a mean amplitude spectrum of all spectra,which tends to increase the dimensions of the significant frequency range.The normalization and this selection of each mode-branch significant amplitudes is also a way to include surface wave radiation pattern information in the procedure.Changes in source parameters can result in changes in the positions of the lobes in the mode-branch amplitude spectra over the whole frequency range(40–500s).In the future,it will be essential to include these possible biases in the scheme and then to simultaneously invert moment tensor,location and depth.C 2003RAS,GJI,155,289–307The roller coaster technique295Figure5.Geographical distribution of inversion boxes for the SSB GEOSCOPE station case.The enlarged area is defined by the bold square in the inset (South America).Black stars denote epicentres and hatched grey boxes join each inversion group.Each common ray path(grey lines)starts from the barycentre (circles)of 
all events belonging to the same box.The maximum number of seismograms per box isfixed at eight.3.4Exploration of the model space at very large scaleThe main idea of this stage is to test a large number of phase velocity large-scale perturbations with the view of selecting several starting vectors for local inversions(see Section3.5).The high non-linearity of the problem is mainly due to the possible2πphase jumps.And,even though the previous stage(see Section3.3)prevents the shifts inside a given mode-branch phase velocity curve,2πphase jumps over the whole selected frequency range are still possible.For this reason a classical gradient least-squares optimization(Tarantola&Valette1982a)is inadequate.In a highly non-linear problem,a least-squares inversion only converges towards the best misfit model that is closest to the starting model and the number of iterations cannot change this feature.On the other hand,a complete exploration of all possible configurations in the parameter space is still incompatible with a short computation time procedure.Therefore,an exploration of the model space is performed at very large scale,in order to detect all possible models that globally explain the data set well.3.4.1Fundamental mode caseWhen considering a single mode-branch,the number of parameter vector components is rather small.The FM large-scale exploration can then be more detailed than in the HM case.Considering that,at low frequencies,data are correctly explained by the1-D reference model,the C 2003RAS,GJI,155,289–307296´E.Beucler,´E.Stutzmann and J.-P.MontagnerabFigure6.(a)Five examples of the FM parameter vector configurations during the exploration of the model space at large scale corresponding toαvalues equal to−5,−,0,+2.5and+5per cent.The selected points for which the phase velocity is measured(see Section3.3)are ordered into parameter vector components according to increasing frequency values.Thefirst indices then correspond to the low-frequency components(LF)and the last ones to the high-frequency(HF) components.Varying the exploration factorα,different perturbation shapes are then modelled and the misfit between data and the image of the corresponding vector is measured(represented in thefigure below).(b)The misfit in the FM case,symbolized by+,is the expression of the difference between data and the image of the tested model(referred to as pα)through the g function(eq.8).Theαvalues are expressed as a percentage with respect to the PREM.As an example,thefive stars correspond to the misfit values of thefive models represented in thefigure above.The circles represent the bestαvalues and the corresponding vectors are then considered as possible starting models for the next stage.dimensionless phase velocity perturbation(referred to as pα)can be modelled as shown in thefive examples displayed in Fig.6(a).Basically, the low-frequency component perturbations are smaller than the high-frequency ones.However,if such an assumption cannot be made,the simplest way to explore the model space is then byfixing an equalαperturbation value for all the components.The main idea is to impose strong correlations between all the components in order to estimate how high the non-linearity is.Varyingαenables one to compute different parameter vectors and solving eq.(8)to measure the distance between data and the image of a given model through the g function,integrated over the whole selected frequency range.Considering that only small perturbations can be retrieved,the exploration range is limited 
between−5and+5per cent,using an increment step of0.1per cent.The result of such an exploration is displayed in Fig.6(b)and clearly illustrates the high non-linearity and non-unicity of the problem.In a weakly non-linear problem,the misfit curve(referred to as||d−g(pα)||)should exhibit only one minimum.This would indicate that,whatever the value of the starting model,a gradient algorithm always converges towards the samefinal model,the solution is then unique.In our case,Fig.6(b)shows that,when choosing the reference model(i.e.α=0per cent)as the starting model,a gradient least-squares optimization converges to the nearest best-fitting solution(corresponding to the third circle),and could never reach the global best-fitting model(in this example representedC 2003RAS,GJI,155,289–307The roller coaster technique 297by the fourth circle).Therefore,in order not to a priori limit the inversion result around a given model,all minima of the mis fit curve (Fig.6b)are detected and the corresponding vectors are considered as possible starting models for local optimizations (see Section 3.5).3.4.2Higher-mode caseThe introduction of several mode-branches simultaneously is much more dif ficult to treat and it becomes rapidly infeasible to explore the model space as accurately as performed for the FM.However,a similar approach is followed.In order to preserve a low computation time procedure,the increment step of αis fixed at 1per cent.The different parameter vectors are computed as previously explained in Section3.4.1(the shape of each mode-branch subvector is the same as the examples displayed in Fig.6a).In order to take into account any possible in fluence of one mode-branch on another,all combinations are tested systematically.Three different explorations of the model space are performed within three different research ranges:[−4.5to +1.5per cent],[−3to +3per cent]and [−1.5to +4.5per cent].For each of them,76possibilities of the parameter vector are modelled and the mis fit between data and the image of the tested vector through the g function is computed.This approach is almost equivalent to performing a complete exploration in the range [−4.5to +4.5per cent],using a step of 0.5per cent,but less time consuming.Finally,all mis fit curve minima are detected and,according to a state of null information concerning relations between each mode-branch phase velocities,all the corresponding vectors are retained as possible starting models.Thus,any association between each starting model subvectors is allowed.3.5Matching the short-wavelength variations of the modelIn this section,algorithms,notation and comments are identical for both FM and HM.Only the main ideas of the least-squares criterion are outlined.A complete description of this approach is given by Tarantola &Valette (1982a,b)and by Tarantola (1987).Some typical features related to the frequency/period duality are also detailed.3.5.1The gradient least-squares algorithmThe main assumption which leads us to use such an optimization is to consider that starting from the large-scale parameter vector (see Section 3.4),the non-linearity of the problem is largely reduced.Hence,to infer the model space from the data space,a gradient least-squares algorithm is performed (Tarantola &Valette 1982a).The expression of the model (or parameter)at the k th iteration is given by p k =p 0+C p ·G T k −1· C d +G k −1·C p ·G T k −1−1· d −g (p k −1)+G k −1·(p k −1−p 0) ,(10)where C p and C d are the a priori covariance operators on parameters and data,respectively,p 0the 
starting model,and where G k −1=∂g (p k −1)/∂p k −1is the matrix of partial derivatives of the g function established in eqs (8)and (9).The indices related to p are now expressing the iteration rank and no longer the mode-branch radial order.De fining the k th image of the mis fit function byS (p k )=12[g (p k )−d ]T ·C −1d ·[g (p k )−d ]+(p k −p 0)T ·C −1p ·(p k −p 0) ,(11)the maximum-likelihood point is de fined by the minimum of S (p ).Minimizing the mis fit function is then equivalent to finding the best compromise between decreasing the distance between the data vector and the image of the parameter vector through the g function,in the data space on one hand (first part of eq.11),and not increasing the distance between the starting and the k th model on the other hand (second part of eq.11),following the covariances de fined in the a priori operators on the data and the parameters.3.5.2A priori data covariance operatorThe a priori covariance operator on data,referred to as C d ,includes data errors and also all effects that cannot be modelled by the g function de fined in eq.(8)and (9).The only way to really measure each data error and then to compute realistic covariances in the data space,would be to obtain exactly the corresponding seismogram in which the signal due to the seismic event is removed.Hence,errors over the data space are impossible to determine correctly.In order to introduce as little a priori information as possible,the C d matrix is computed with a constant value of 0.04(including data and theory uncertainties)for the diagonal elements and zero for the off-diagonal elements.In other words,this choice means that the phase velocity perturbations are expected to explain at least 80per cent of the recorded signal.3.5.3A priori parameter covariance operatorIn the model space,the a priori covariance operator on parameters,referred to as C p ,controls possible variations between the model vector components for a given iteration k (eq.10),and also between the starting and the k th model (eq.11).Considering that the phase velocity perturbation between two adjoining components (which are ordered according to increasing frequency values)of a given mode-branch do not vary too rapidly,C p is a non-diagonal matrix.This a priori information reduces the number of independent components and then induces smoothed phase velocity perturbation curves.A typical behaviour of our problem resides in the way the parameter space is discretized.In the matrix domain,the distance between two adjoining components is always the same,whereas,as the model space is not evenly spaced C 2003RAS,GJI ,155,289–307。
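To summarize Sections 3.4 and 3.5 in compact form, the sketch below (my paraphrase, not the authors' code) implements the two stages for a generic forward function g: a coarse scan over strongly correlated perturbations p_α, keeping every local minimum of the misfit curve of Fig. 6(b), followed by the iterated least-squares update of eq. (10) (Tarantola & Valette 1982a) started from each of those minima. The data vector d, forward function g, its Jacobian and the covariance operators C_d and C_p are placeholders to be supplied.

```python
# Sketch (not the authors' code) of the two stages of the roller coaster
# technique: (1) a coarse scan over strongly correlated phase-velocity
# perturbations p_alpha = alpha * shape, keeping every local minimum of the
# misfit curve, and (2) the iterated least-squares update of eq. (10)
# (Tarantola & Valette 1982a) started from each of those minima.
import numpy as np

def misfit(d, g, p, Cd_inv):
    r = g(p) - d
    return float(r @ Cd_inv @ r)

def coarse_scan(d, g, shape, Cd_inv, alphas):
    """Return the alpha values that are local minima of the misfit curve."""
    vals = [misfit(d, g, a * shape, Cd_inv) for a in alphas]
    return [alphas[i] for i in range(1, len(vals) - 1)
            if vals[i] < vals[i - 1] and vals[i] < vals[i + 1]]

def local_lsq(d, g, jac, p0, Cd, Cp, n_iter=10):
    """Iterated least-squares update of eq. (10), starting from p0."""
    p = p0.copy()
    for _ in range(n_iter):
        G = jac(p)                                   # partial derivatives of g
        gain = Cp @ G.T @ np.linalg.inv(Cd + G @ Cp @ G.T)
        p = p0 + gain @ (d - g(p) + G @ (p - p0))
    return p

# Usage outline: 'shape' encodes the long-wavelength perturbation pattern of
# Fig. 6(a); every local minimum of the scan becomes a starting model, each is
# refined with local_lsq, and the refined model with the lowest misfit is kept.
# alphas = np.arange(-0.05, 0.05 + 1e-9, 0.001)      # -5% to +5%, 0.1% steps
# starts = coarse_scan(d, g, shape, np.linalg.inv(Cd), alphas)
# best = min((local_lsq(d, g, jac, a * shape, Cd, Cp) for a in starts),
#            key=lambda p: misfit(d, g, p, np.linalg.inv(Cd)))
```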
Pattern formation in weakly damped Faraday waves
The generation of standing waves on the free surface of a fluid layer that is oscillated vertically has been known since the work of Faraday (1831). Recently, there has been renewed experimental interest in Faraday waves as an example of a pattern-forming system. Reasons include the ease of experimentation due to short characteristic time scales (of the order of 10^-2 seconds), and the ability to reach very large aspect ratios (the ratio of lateral size of the system to the characteristic wavelength of the pattern) of the order of 10^2. By varying the form of the driving force and by using fluids of different viscosities, a number of interesting phenomena have been observed, including the emergence of standing wave patterns of different symmetries near onset (Christiansen, Alstrøm & Levinsen 1992; Fauve et al. 1992; Edwards & Fauve 1993, 1994; Müller 1993), secondary instabilities of these patterns when the amplitude of the periodic driving force is increased (Ezerskii, Korotin & Rabinovich 1985; Daudet et al. 1995), and spatiotemporal chaotic states at even larger amplitudes of the driving force (Tufillaro, Ramshankar & Gollub 1989; Gollub & Ramshankar 1991; Bosch & van de Water 1993; Bridger et al. 1993; Kudrolli & Gollub 1996). A numerical linear stability analysis of the Faraday wave problem has been carried out by Kumar & Tuckerman (1994) for a laterally infinite fluid layer of arbitrary viscosity.
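As a concrete illustration of the characteristic scales mentioned above (my own sketch, not taken from the cited papers), the idealized inviscid picture selects the pattern wavenumber through the subharmonic resonance ω(k) = Ω/2, with ω given by the gravity-capillary dispersion relation ω² = tanh(kh)(gk + σk³/ρ). The water-like parameters below are placeholders, and finite viscosity (handled by the Kumar & Tuckerman analysis) shifts the onset and wavenumber somewhat.

```python
# Illustrative sketch (not from the cited papers): for ideal (inviscid) Faraday
# waves the standing-wave pattern responds subharmonically, so the selected
# wavenumber k satisfies omega(k) = Omega_drive / 2 with the gravity-capillary
# dispersion relation omega^2 = tanh(k h) * (g k + sigma k^3 / rho).
# Fluid parameters below are placeholders (water-like values).
from scipy.optimize import brentq
import numpy as np

G = 9.81          # m/s^2
RHO = 1000.0      # kg/m^3
SIGMA = 0.072     # N/m, surface tension
DEPTH = 0.01      # m, layer depth

def omega(k):
    return np.sqrt(np.tanh(k * DEPTH) * (G * k + SIGMA * k**3 / RHO))

def faraday_wavenumber(drive_freq_hz):
    """Wavenumber whose natural frequency is half the driving frequency."""
    target = np.pi * drive_freq_hz          # Omega/2 in rad/s
    return brentq(lambda k: omega(k) - target, 1.0, 1e5)

k = faraday_wavenumber(60.0)                # 60 Hz drive -> 30 Hz response
print(f"pattern wavelength ~ {2 * np.pi / k * 1000:.1f} mm")
```

With a 60 Hz drive and these water-like values the selected wavelength is a few millimetres and the response period is ~3 × 10^-2 s, consistent with the time scales quoted above.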
Attenuated total reflection infrared spectroscopy (English term)
Attenuated total reflection infrared spectroscopy is known in English as Attenuated Total Reflection Infrared Spectroscopy (ATR-IR Spectroscopy).
ATR-IR Spectroscopy is a technique used to analyze the molecular composition of samples by measuring the absorption of infrared light. It is based on the principle of total internal reflection: an infrared beam is passed through a high-refractive-index crystal and strikes the crystal–sample interface at an angle greater than the critical angle. The beam is totally reflected back into the crystal, while an evanescent wave penetrates a short distance into the sample and is partially absorbed. The attenuated reflected beam is then measured and analyzed to determine the molecular structure and functional groups present in the sample. ATR-IR Spectroscopy offers several advantages over traditional transmission infrared spectroscopy. It requires minimal sample preparation, as the sample can be analyzed directly without complex preparation techniques. It also allows for analysis of a wide range of sample types, including liquids, solids, and even gases. Additionally, ATR-IR Spectroscopy provides high sensitivity and signal-to-noise ratio, making it suitable for both qualitative and quantitative analysis. This technique finds applications in various fields such as pharmaceuticals, polymers, food and beverage, forensics, and environmental analysis. It can be used to identify unknown compounds, analyze chemical reactions, determine the purity of a substance, and monitor changes in molecular structure. In conclusion, ATR-IR Spectroscopy, or Attenuated Total Reflection Infrared Spectroscopy, is a powerful analytical technique used to investigate the molecular composition of samples. Its advantages include minimal sample preparation, versatility in sample types, and high sensitivity.
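The sampling depth that makes ATR nearly preparation-free, and that also keeps ATR bands weaker than transmission bands, is set by the penetration depth of the evanescent wave. The sketch below is illustrative, not from the text above; the crystal and sample refractive indices and the 45° incidence angle are assumed typical values.

```python
# Sketch (illustrative, not from the source text): penetration depth of the
# ATR evanescent wave, d_p = lambda / (2*pi*n1*sqrt(sin(theta)^2 - (n2/n1)^2)),
# where n1 is the ATR crystal index, n2 the sample index and theta the angle
# of incidence. Values below are typical assumptions (ZnSe-like crystal,
# organic sample), not measurements.
import math

def penetration_depth(wavenumber_cm, n1, n2, theta_deg):
    """Return d_p in micrometres for a given IR wavenumber (cm^-1)."""
    wavelength_um = 1e4 / wavenumber_cm
    s = math.sin(math.radians(theta_deg))
    if s <= n2 / n1:
        raise ValueError("angle below the critical angle: no total reflection")
    return wavelength_um / (2 * math.pi * n1 * math.sqrt(s**2 - (n2 / n1)**2))

# e.g. 1600 cm^-1 band, ZnSe crystal (n1 ~ 2.4), organic sample (n2 ~ 1.5), 45 deg:
print(f"d_p ~ {penetration_depth(1600.0, 2.4, 1.5, 45.0):.2f} um")
```

With these assumed values the penetration depth comes out of the order of 1 µm at 1600 cm⁻¹, which is why ATR absorbance is typically weaker than that of a transmission measurement through a thicker sample path.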
The Influence of the Magnetic Field and Ultraviolet Waves to the Liquid Fillers for Liquid Lense
Journal of Electrical Engineering 5 (2017) 349-351doi: 10.17265/2328-2223/2017.06.009The Influence of the Magnetic Field and Ultraviolet Waves to the Liquid Fillers for Liquid LenseLiena Yu. VergunMolecular Physics Department, Faculty of Physics, Taras Shevchenko National University of Kyiv, UkraineAbstract: The question about properties of liquid fillers for liquid lenses under actions of magnetic field and ultraviolet waves are investigated. The liquid model system such as glycerol is used. The coefficients transmission and coefficients of temperatures increasing are experimentally determined. It is shown that the presence of constant magnetic field and UV-waves stabiliz formation of a wall layer. So the vibrational and rotary motions of molecules and its clusters are stabilized in liquid fillers. It is assumed that the chromatic aberration can be associated with the formation and redistribution of particles in the surface layer at the interface between the solid and liquid phases.Key words: Glycerol, magnetic field, ultraviolet waves.1. IntroductionIt is known liquid lenses are used in optoelectronic systems telescopes [1, 2]. The operating conditions of such devices connected with the interaction of UV radiation and constant magnetic field. For increasing of the quality of liquid lenses pay attention to the properties of liquid filler for exclusion of chromatic aberration [3, 4]. The determination of action UV-waves and constant magnetic field in model system is proposed for possible reasons of aberration. It is known glycerol is the simplest composition of liquid filler for liquid lenses [5]. The appearance of feature molecular structure in glycerol under the influence of a magnetic field and UV-radiation is demonstrated in work [6].The nature of the transmission of light waves after constant magnetic field action and action of UV-waves is experimentally determined. Time of action these external factors is corresponded to relaxation time in the system [7]. The transmission coefficients of glycerol after action constant magnetic field and UF-waves are obtained.Corresponding author: Liena Yu. Vergun, Ph.D., research fields: magnetic field, defects in crystal like structures.The value of a magnetic field was 22.4 mT [6]. In view of Ref. [8] it has been suggested that the occurrence of chromatic aberration is due to the relaxation processes in the boundary layer of fluid components. To exclude the impact of these processeson the image quality the author offered to hold preliminary processing liquid fillers under UV-waves and constant magnetic field.2. Materials and MethodThe type of glycerol with GOST technical standards 6259-75 is used. The scheme of laboratory bench is represented in Fig. 1a (where 1—working cell of KFK2MP, 2—glass cuvette, 3—glycerol, 4—UF emitter, 5—Helmholtz coils, 6—electronic scoreboard). The view of working cell of KFK2MP (photoelectric colorimeter) with Helmholtz coils is represented in Fig. 1b.3. Results and DiscussionThe dependences of the transmission coefficient τ(λ) are shown in Fig. 2. In this figure by number “1”—the experimental datas of glyserol without action of the external factors, by number “2”—the experimental data for glyserol after action UV-waves(time of actionAll Rights Reserved.The Influence of the Magnetic Field and Ultraviolet Waves to the Liquid Fillers for Liquid Lense350(a) (b)Fig. 1 The scheme (a) and view (b) of laboratory bench.Fig. 2 The dependence τ(λ).Fig. 
3 The dependence φ(t).
15 minutes), by number "3"—the experimental data of glycerol after the action of a constant magnetic field (time of action 15 minutes), and by number "4"—the experimental data after the action of UV-waves and a constant magnetic field (time of action 15 minutes). As shown in Fig. 2, the joint action of UV waves and a constant magnetic field decreases the difference between the transmission coefficients over the wavelength range from 380 nm (violet) to 750 nm (red). The results obtained indicate a decrease in the probability of the appearance of chromatic aberration.

According to Ref. [8], the phenomenon of chromatic aberration can be associated with the formation and redistribution of particles in the surface layer at the interface between the solid and liquid phases. The redistribution of particles and their clusters occurs because their displacement velocities differ when the action of external factors changes [9, 10]. An experiment was conducted for this purpose. A cuvette with the substance (glycerol) was placed in a heat-insulating container. A temperature sensor was located in the centre of the cuvette and its readings were recorded at one-minute intervals. Fig. 3 shows the dependence of the dimensionless coefficient φ (the coefficient of temperature increase) on time t. The coefficient φ was calculated according to the formula φ = T_i/T_0, where T_0 is the temperature measured at the initial moment and T_i is the temperature in the following intervals of time. The following symbols are used in the graph: in Fig. 3, number "1" denotes the experimental data of glycerol without the action of the external factors, number "2" the experimental data of glycerol after the action of UV-waves (time of action 15 minutes), number "3" the experimental data of glycerol after the action of the constant magnetic field (time of action 15 minutes), and number "4" the experimental data after the action of UV-waves and the constant magnetic field (time of action 15 minutes). The symbol "*" denotes the experimental data of the standard substance (Al2O3).

As can be seen from Fig. 3, the smallest temperature increase is observed when glycerol is treated with the constant magnetic field and UV irradiation together. The experimental data obtained correspond to the experimental results shown in Fig. 2. In our opinion, the treatment with UV irradiation and a magnetic field leads to a uniform distribution of molecules both in the surface layer near the wall and in the entire volume.

4. Conclusions
Experimental studies of the optical and thermophysical properties of a model system of a liquid component for liquid lenses (glycerin) under the influence of UV irradiation and a magnetic field were conducted. It is shown that the presence of a constant magnetic field and UV-waves stabilizes the formation of a wall layer, so the vibrational and rotary motions of molecules and their clusters are stabilized in liquid fillers.

References
[1] Gibson, B. K. 1991. "Liquid Mirror Telescope History." Journal of the Royal Astronomical Society of Canada 85 (4): 158-71.
[2] Berge, B. 2005. "Liquid Lense Technology: Principle of Electrowetting Based Lenses and Applications to Imaging." IEEE, 227-30.
[3] Stallinga, S., Vrehen, J., Wals, J., Stapert, H., and Verstegen, E. 2000. "Liquid Crystal Aberration Compensation Devices." Proceedings of SPIE 48: 50-9.
[4] Vergun, L. Y. 2015.
"The Using of Magnetic Field for the Improvement of Properties of Liquid Fillers in a Liquid Lense." In Abstracts XXII Galyna Puchkovska International School-Seminar Spectroscopy of Molecules and Crystals, 155.
[5] Calo, E., and Khutoryanskiy, V. V. 2015. "Biomedical Application of Hydrogels: A Review of Patents and Commercial Products." European Polymer Journal 65: 252-67.
[6] Pelizzeti, E., and Schiavello, M. 1991. Photochemical Conversion and Storage of Solar Energy. Springer Science & Business Media.
[7] Kaneoke, Y., Furuse, M., and Yoshida, K. 1989. "Transfer Index of MR Relaxation Enhancer: A Quantitative Evaluation of MR Contrast Enhancement." AJNR Am J Neuroradiol 10 (2): 329-33.
[8] Angell, C. A., and Ngai, K. L. 2000. "Relaxation in Glass-Forming Liquids and Amorphous Solids." Journal of Applied Physics 88 (6): 3113-57.
[9] Root, L. J., and Stillinger, F. H. 1989. "Short-Range Order in Glycerol: A Molecular Dynamics Study." J. Chem. Phys. 90 (2): 1200-08.
[10] Milner, A. A., Korobenko, A., Floß, J., Averbukh, I. Sh., and Milner, V. 2015. "Magneto-Optical Properties of Paramagnetic Superrotors." Phys. Rev. Lett. 115 (3): 033005.
Dielectric-fibre surface waveguides for optical frequencies (translated excerpt)
Dielectric-fibre surface waveguides for optical frequencies. K. C. Kao (高锟) and G. A. Hockham. Keywords: optical fibre, waveguide. Abstract: A dielectric fibre with a refractive index higher than its surrounding region is a form of dielectric waveguide which represents a possible medium for guided transmission at optical frequencies.
The particular structure discussed in the paper is one of circular cross-section.
For an optical waveguide intended for communication, the choice of propagation mode is governed mainly by the loss characteristics and the information capacity.
The paper discusses dielectric loss, bending loss and radiation loss, as well as mode stability, dispersion and power handling in relation to information capacity; physical-realization aspects are also considered, including experimental investigations at both optical and microwave wavelengths.
List of principal symbols:
J_n = Bessel function of the first kind of order n
K_n = modified Bessel function of the second kind of order n
β = 2π/λ_g, the phase coefficient of the waveguide
J'_n = first derivative of J_n
K'_n = first derivative of K_n
h_i = attenuation coefficient or radiation wavenumber
ε_i = relative permittivity
k_0 = free-space propagation coefficient
a = fibre radius
γ = longitudinal propagation coefficient
k = Boltzmann's constant
T = absolute temperature, K
β_c = isothermal compressibility
λ = wavelength
n = refractive index
H_υ^(i) = Hankel function of the i-th kind of order υ
H'_υ = derivative of H_υ
υ = azimuthal propagation coefficient = υ_1 − jυ_2
L = modulation period
The subscript n is an integer and the subscript m denotes the m-th root of J_n = 0.
1. Introduction
A dielectric fibre with a refractive index higher than its surrounding region is a form of dielectric waveguide which represents a medium for the directed transmission of energy at optical frequencies.
Such a structure guides the electromagnetic wave along a definite boundary between regions of different refractive index, with the associated field partly inside and partly outside the fibre.
The external field is evanescent in the direction normal to the direction of propagation, decaying approximately exponentially to zero at infinity.
A structure of this kind is often referred to as an open waveguide, propagating in a surface-wave mode.
The structure discussed below is a particular dielectric-fibre waveguide of circular cross-section.
2. The dielectric-fibre waveguide
A dielectric fibre of circular cross-section can support all the H_0m modes, the E_0m modes, and the hybrid HE_nm modes.
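A standard quantity used when selecting among these modes is the normalized frequency V. The sketch below is mine, not from the paper (whose analysis works with the full characteristic equation in terms of J_n and K_n), and the core/cladding indices and radius are illustrative: when V is below the first zero of J_0 (≈2.405), only the HE_11 mode propagates.

```python
# Sketch (not from the paper): the normalized frequency
#   V = (2*pi*a / lambda) * sqrt(n1^2 - n2^2)
# of a circular dielectric fibre, and the standard single-mode criterion
# V < 2.405 (the first zero of J0), below which only HE11 propagates.
# Core/cladding indices and radius below are illustrative values.
from scipy.special import jn_zeros
import math

def normalized_frequency(core_radius, wavelength, n_core, n_clad):
    return 2 * math.pi * core_radius / wavelength * math.sqrt(n_core**2 - n_clad**2)

first_zero_J0 = jn_zeros(0, 1)[0]          # ~2.405
V = normalized_frequency(core_radius=2e-6, wavelength=1e-6,
                         n_core=1.50, n_clad=1.49)
print(f"V = {V:.2f}, single-mode: {V < first_zero_J0}")
```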
Laser Ranging to the Moon, Mars and Beyond
a r X i v :g r -q c /0411082v 1 16 N o v 2004Laser Ranging to the Moon,Mars and BeyondSlava G.Turyshev,James G.Williams,Michael Shao,John D.AndersonJet Propulsion Laboratory,California Institute of Technology,4800Oak Grove Drive,Pasadena,CA 91109,USAKenneth L.Nordtvedt,Jr.Northwest Analysis,118Sourdough Ridge Road,Bozeman,MT 59715USA Thomas W.Murphy,Jr.Physics Department,University of California,San Diego 9500Gilman Dr.,La Jolla,CA 92093USA Abstract Current and future optical technologies will aid exploration of the Moon and Mars while advancing fundamental physics research in the solar system.Technologies and possible improvements in the laser-enabled tests of various physical phenomena are considered along with a space architecture that could be the cornerstone for robotic and human exploration of the solar system.In particular,accurate ranging to the Moon and Mars would not only lead to construction of a new space communication infrastructure enabling an improved navigational accuracy,but will also provide a significant improvement in several tests of gravitational theory:the equivalence principle,geodetic precession,PPN parameters βand γ,and possible variation of the gravitational constant G .Other tests would become possible with an optical architecture that would allow proceeding from meter to centimeter to millimeter range accuracies on interplanetary distances.This paper discusses the current state and the future improvements in the tests of relativistic gravity with Lunar Laser Ranging (LLR).We also consider precision gravitational tests with the future laser rangingto Mars and discuss optical design of the proposed Laser Astrometric Test of Relativity (LATOR)mission.We emphasize that already existing capabilities can offer significant improvements not only in the tests of fundamental physics,but may also establish the infrastructure for space exploration in the near future.Looking to future exploration,what characteristics are desired for the next generation of ranging devices,what is the optimal architecture that would benefit both space exploration and fundamental physics,and what fundamental questions can be investigated?We try to answer these questions.1IntroductionThe recent progress in fundamental physics research was enabled by significant advancements in many technological areas with one of the examples being the continuing development of the NASA Deep Space Network –critical infrastructure for precision navigation and communication in space.A demonstration of such a progress is the recent Cassini solar conjunction experiment[8,6]that was possible only because of the use of Ka-band(∼33.4GHz)spacecraft radio-tracking capabilities.The experiment was part of the ancillary science program–a by-product of this new radio-tracking technology.Becasue of a much higher data rate transmission and, thus,larger data volume delivered from large distances the higher communication frequency was a very important mission capability.The higher frequencies are also less affected by the dispersion in the solar plasma,thus allowing a more extensive coverage,when depp space navigation is concerned.There is still a possibility of moving to even higher radio-frequencies, say to∼60GHz,however,this would put us closer to the limit that the Earth’s atmosphere imposes on signal transmission.Beyond these frequencies radio communication with distant spacecraft will be inefficient.The next step is switching to optical communication.Lasers—with their spatial coherence,narrow spectral emission,high 
power,and well-defined spatial modes—are highly useful for many space applications.While in free-space,optical laser communication(lasercomm)would have an advantage as opposed to the conventional radio-communication sercomm would provide not only significantly higher data rates(on the order of a few Gbps),it would also allow a more precise navigation and attitude control.The latter is of great importance for manned missions in accord the“Moon,Mars and Beyond”Space Exploration Initiative.In fact,precision navigation,attitude control,landing,resource location, 3-dimensional imaging,surface scanning,formationflying and many other areas are thought only in terms of laser-enabled technologies.Here we investigate how a near-future free-space optical communication architecture might benefit progress in gravitational and fundamental physics experiments performed in the solar system.This paper focuses on current and future optical technologies and methods that will advance fundamental physics research in the context of solar system exploration.There are many activities that focused on the design on an optical transceiver system which will work at the distance comparable to that between the Earth and Mars,and test it on the Moon.This paper summarizes required capabilities for such a system.In particular,we discuss how accurate laser ranging to the neighboring celestial bodies,the Moon and Mars,would not only lead to construction of a new space communication infrastructure with much improved navigational accuracy,it will also provide a significant improvement in several tests of gravitational theory. Looking to future exploration,we address the characteristics that are desired for the next generation of ranging devices;we will focus on optimal architecture that would benefit both space exploration and fundamental physics,and discuss the questions of critical importance that can be investigated.This paper is organized as follows:Section2discusses the current state and future per-formance expected with the LLR technology.Section3addresses the possibility of improving tests of gravitational theories with laser ranging to Mars.Section4addresses the next logical step—interplanetary laser ranging.We discuss the mission proposal for the Laser Astrometric Test of Relativity(LATOR).We present a design for its optical receiver system.Section5 addresses a proposal for new multi-purpose space architecture based on optical communica-tion.We present a preliminary design and discuss implications of this new proposal for tests of fundamental physics.We close with a summary and recommendations.2LLR Contribution to Fundamental PhysicsDuring more than35years of its existence lunar laser ranging has become a critical technique available for precision tests of gravitational theory.The20th century progress in three seem-ingly unrelated areas of human exploration–quantum optics,astronomy,and human spaceexploration,led to the construction of this unique interplanetary instrument to conduct very precise tests of fundamental physics.In this section we will discuss the current state in LLR tests of relativistic gravity and explore what could be possible in the near future.2.1Motivation for Precision Tests of GravityThe nature of gravity is fundamental to our understanding of the structure and evolution of the universe.This importance motivates various precision tests of gravity both in laboratories and in space.Most of the experimental underpinning for theoretical gravitation has come from experiments conducted in the solar 
system.Einstein’s general theory of relativity(GR)began its empirical success in1915by explaining the anomalous perihelion precession of Mercury’s orbit,using no adjustable theoretical parameters.Eddington’s observations of the gravitational deflection of light during a solar eclipse in1919confirmed the doubling of the deflection angles predicted by GR as compared to Newtonian and Equivalence Principle(EP)arguments.Follow-ing these beginnings,the general theory of relativity has been verified at ever-higher accuracy. Thus,microwave ranging to the Viking landers on Mars yielded an accuracy of∼0.2%from the gravitational time-delay tests of GR[48,44,49,50].Recent spacecraft and planetary mi-crowave radar observations reached an accuracy of∼0.15%[4,5].The astrometric observations of the deflection of quasar positions with respect to the Sun performed with Very-Long Base-line Interferometry(VLBI)improved the accuracy of the tests of gravity to∼0.045%[45,51]. Lunar Laser Ranging(LLR),the continuing legacy of the Apollo program,has provided ver-ification of GR improving an accuracy to∼0.011%via precision measurements of the lunar orbit[62,63,30,31,32,35,24,36,4,68].The recent time-delay experiments with the Cassini spacecraft at a solar conjunction have tested gravity to a remarkable accuracy of0.0023%[8] in measuring deflection of microwaves by solar gravity.Thus,almost ninety years after general relativity was born,Einstein’s theory has survived every test.This rare longevity and the absence of any adjustable parameters,does not mean that this theory is absolutely correct,but it serves to motivate more sensitive tests searching for its expected violation.The solar conjunction experiments with the Cassini spacecraft have dramatically improved the accuracy in the solar system tests of GR[8].The reported accuracy of2.3×10−5in measuring the Eddington parameterγ,opens a new realm for gravitational tests,especially those motivated by the on-going progress in scalar-tensor theories of gravity.1 In particular,scalar-tensor extensions of gravity that are consistent with present cosmological models[15,16,17,18,19,20,39]predict deviations of this parameter from its GR value of unity at levels of10−5to10−7.Furthermore,the continuing inability to unify gravity with the other forces indicates that GR should be violated at some level.The Cassini result together with these theoretical predictions motivate new searches for possible GR violations;they also provide a robust theoretical paradigm and constructive guidance for experiments that would push beyond the present experimental accuracy for parameterized post-Newtonian(PPN)parameters(for details on the PPN formalism see[60]).Thus,in addition to experiments that probe the GR prediction for the curvature of the gravityfield(given by parameterγ),any experiment pushingthe accuracy in measuring the degree of non-linearity of gravity superposition(given by anotherEddington parameterβ)will also be of great interest.This is a powerful motive for tests ofgravitational physics phenomena at improved accuracies.Analyses of laser ranges to the Moon have provided increasingly stringent limits on anyviolation of the Equivalence Principle(EP);they also enabled very accurate measurements fora number of relativistic gravity parameters.2.2LLR History and Scientific BackgroundLLR has a distinguished history[24,9]dating back to the placement of a retroreflector array onthe lunar surface by the Apollo11astronauts.Additional reflectors were left by the Apollo14and 
Apollo15astronauts,and two French-built reflector arrays were placed on the Moon by theSoviet Luna17and Luna21missions.Figure1shows the weighted RMS residual for each year.Early accuracies using the McDonald Observatory’s2.7m telescope hovered around25cm. Equipment improvements decreased the ranging uncertainty to∼15cm later in the1970s.In1985the2.7m ranging system was replaced with the McDonald Laser Ranging System(MLRS).In the1980s ranges were also received from Haleakala Observatory on the island of Maui in theHawaiian chain and the Observatoire de la Cote d’Azur(OCA)in France.Haleakala ceasedoperations in1990.A sequence of technical improvements decreased the range uncertainty tothe current∼2cm.The2.7m telescope had a greater light gathering capability than thenewer smaller aperture systems,but the newer systemsfired more frequently and had a muchimproved range accuracy.The new systems do not distinguish returning photons against thebright background near full Moon,which the2.7m telescope could do,though there are somemodern eclipse observations.The lasers currently used in the ranging operate at10Hz,with a pulse width of about200 psec;each pulse contains∼1018photons.Under favorable observing conditions a single reflectedphoton is detected once every few seconds.For data processing,the ranges represented by thereturned photons are statistically combined into normal points,each normal point comprisingup to∼100photons.There are15553normal points are collected until March2004.Themeasured round-trip travel times∆t are two way,but in this paper equivalent ranges in lengthunits are c∆t/2.The conversion between time and length(for distance,residuals,and dataaccuracy)uses1nsec=15cm.The ranges of the early1970s had accuracies of approximately25cm.By1976the accuracies of the ranges had improved to about15cm.Accuracies improvedfurther in the mid-1980s;by1987they were4cm,and the present accuracies are∼2cm.One immediate result of lunar ranging was the great improvement in the accuracy of the lunarephemeris[62]and lunar science[67].LLR measures the range from an observatory on the Earth to a retroreflector on the Moon. 
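The time–length conversion quoted above (two-way travel time ∆t, equivalent one-way range c∆t/2, 1 nsec ≈ 15 cm) is easy to sanity-check; the short Python sketch below does so with an illustrative round-trip time, not actual normal-point data.

```python
# Sanity check of the LLR time/length conversion: one-way range r = c * dt / 2.
C = 299_792_458.0  # speed of light, m/s

def one_way_range_m(round_trip_time_s: float) -> float:
    """One-way Earth-Moon range implied by a two-way travel time."""
    return C * round_trip_time_s / 2.0

# An illustrative 2.56 s round trip corresponds to ~384,000 km
# (the mean Earth-Moon distance is about 385,000 km).
print(one_way_range_m(2.56) / 1e3)      # ~383,700 km

# 1 ns of timing uncertainty maps onto ~15 cm of one-way range, as stated above.
print(one_way_range_m(1e-9) * 100)      # ~15 cm
```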
For the Earth and Moon orbiting the Sun,the scale of relativistic effects is set by the ratio(GM/rc2)≃v2/c2∼10−8.The center-to-center distance of the Moon from the Earth,with mean value385,000km,is variable due to such things as eccentricity,the attraction of the Sun,planets,and the Earth’s bulge,and relativistic corrections.In addition to the lunar orbit,therange from an observatory on the Earth to a retroreflector on the Moon depends on the positionin space of the ranging observatory and the targeted lunar retroreflector.Thus,orientation ofthe rotation axes and the rotation angles of both bodies are important with tidal distortions,plate motion,and relativistic transformations also coming into play.To extract the gravitationalphysics information of interest it is necessary to accurately model a variety of effects[68].For a general review of LLR see[24].A comprehensive paper on tests of gravitationalphysics is[62].A recent test of the EP is in[4]and other GR tests are in[64].An overviewFigure1:Historical accuracy of LLR data from1970to2004.of the LLR gravitational physics tests is given by Nordtvedt[37].Reviews of various tests of relativity,including the contribution by LLR,are given in[58,60].Our recent paper describes the model improvements needed to achieve mm-level accuracy for LLR[66].The most recent LLR results are given in[68].2.3Tests of Relativistic Gravity with LLRLLR offers very accurate laser ranging(weighted rms currently∼2cm or∼5×10−11in frac-tional accuracy)to retroreflectors on the Moon.Analysis of these very precise data contributes to many areas of fundamental and gravitational physics.Thus,these high-precision studies of the Earth-Moon-Sun system provide the most sensitive tests of several key properties of weak-field gravity,including Einstein’s Strong Equivalence Principle(SEP)on which general relativity rests(in fact,LLR is the only current test of the SEP).LLR data yielded the strongest limits to date on variability of the gravitational constant(the way gravity is affected by the expansion of the universe),and the best measurement of the de Sitter precession rate.In this Section we discuss these tests in more details.2.3.1Tests of the Equivalence PrincipleThe Equivalence Principle,the exact correspondence of gravitational and inertial masses,is a central assumption of general relativity and a unique feature of gravitation.EP tests can therefore be viewed in two contexts:tests of the foundations of general relativity,or as searches for new physics.As emphasized by Damour[12,13],almost all extensions to the standard modelof particle physics(with best known extension offered by string theory)generically predict newforces that would show up as apparent violations of the EP.The weak form the EP(the WEP)states that the gravitational properties of strong and electro-weak interactions obey the EP.In this case the relevant test-body differences are their fractional nuclear-binding differences,their neutron-to-proton ratios,their atomic charges,etc. General relativity,as well as other metric theories of gravity,predict that the WEP is exact. However,extensions of the Standard Model of Particle Physics that contain new macroscopic-range quantumfields predict quantum exchange forces that will generically violate the WEP because they couple to generalized‘charges’rather than to mass/energy as does gravity[17,18]. 
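The scale factor quoted at the start of this passage, (GM/rc²) ≃ v²/c² ∼ 10⁻⁸ for the Earth–Moon system orbiting the Sun, can be verified in a couple of lines; the constants below are standard solar-system values assumed for the check, not numbers taken from the paper.

```python
# Order-of-magnitude check: GM_sun/(r c^2) ~ (v_orb/c)^2 ~ 1e-8 for the Earth-Moon
# system in orbit around the Sun (constants are standard assumed values).
GM_SUN = 1.327e20    # m^3/s^2, heliocentric gravitational parameter
R_AU   = 1.496e11    # m, Earth-Sun distance
V_ORB  = 2.98e4      # m/s, Earth's orbital speed
C      = 2.998e8     # m/s, speed of light

print(GM_SUN / (R_AU * C**2))   # ~1.0e-8
print((V_ORB / C) ** 2)         # ~1.0e-8
```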
WEP tests can be conducted with laboratory or astronomical bodies,because the relevant differences are in the test-body compositions.Easily the most precise tests of the EP are made by simply comparing the free fall accelerations,a1and a2,of different test bodies.For the case when the self-gravity of the test bodies is negligible and for a uniform external gravityfield, with the bodies at the same distance from the source of the gravity,the expression for the Equivalence Principle takes the most elegant form:∆a= M G M I 2(1)(a1+a2)where M G and M I represent gravitational and inertial masses of each body.The sensitivity of the EP test is determined by the precision of the differential acceleration measurement divided by the degree to which the test bodies differ(position).The strong form of the EP(the SEP)extends the principle to cover the gravitational properties of gravitational energy itself.In other words it is an assumption about the way that gravity begets gravity,i.e.about the non-linear property of gravitation.Although general relativity assumes that the SEP is exact,alternate metric theories of gravity such as those involving scalarfields,and other extensions of gravity theory,typically violate the SEP[30,31, 32,35].For the SEP case,the relevant test body differences are the fractional contributions to their masses by gravitational self-energy.Because of the extreme weakness of gravity,SEP test bodies that differ significantly must have astronomical sizes.Currently the Earth-Moon-Sun system provides the best arena for testing the SEP.The development of the parameterized post-Newtonian formalism[31,56,57],allows one to describe within the common framework the motion of celestial bodies in external gravitational fields within a wide class of metric theories of gravity.Over the last35years,the PPN formalism has become a useful framework for testing the SEP for extended bodies.In that formalism,the ratio of passive gravitational to inertial mass to thefirst order is given by[30,31]:M GMc2 ,(2) whereηis the SEP violation parameter(discussed below),M is the mass of a body and E is its gravitational binding or self-energy:E2Mc2 V B d3x d3yρB(x)ρB(y)EMc2 E=−4.64×10−10andwhere the subscripts E and m denote the Earth and Moon,respectively.The relatively small size bodies used in the laboratory experiments possess a negligible amount of gravitational self-energy and therefore such experiments indicate nothing about the equality of gravitational self-energy contributions to the inertial and passive gravitational masses of the bodies [30].TotesttheSEP onemustutilize planet-sizedextendedbodiesinwhichcase theratioEq.(3)is considerably higher.Dynamics of the three-body Sun-Earth-Moon system in the solar system barycentric inertial frame was used to search for the effect of a possible violation of the Equivalence Principle.In this frame,the quasi-Newtonian acceleration of the Moon (m )with respect to the Earth (E ),a =a m −a E ,is calculated to be:a =−µ∗rM I m µS r SEr 3Sm + M G M I m µS r SEr 3+µS r SEr 3Sm +η E Mc 2 m µS r SEMc 2 E − E n 2−(n −n ′)2n ′2a ′cos[(n −n ′)t +D 0].(8)Here,n denotes the sidereal mean motion of the Moon around the Earth,n ′the sidereal mean motion of the Earth around the Sun,and a ′denotes the radius of the orbit of the Earth around the Sun (assumed circular).The argument D =(n −n ′)t +D 0with near synodic period is the mean longitude of the Moon minus the mean longitude of the Sun and is zero at new Moon.(For a more precise derivation of the lunar range perturbation 
due to the SEP violation acceleration term in Eq.(6)consult [62].)Any anomalous radial perturbation will be proportional to cos D .Expressed in terms ofη,the radial perturbation in Eq.(8)isδr∼13ηcos D meters [38,21,22].This effect,generalized to all similar three body situations,the“SEP-polarization effect.”LLR investigates the SEP by looking for a displacement of the lunar orbit along the direction to the Sun.The equivalence principle can be split into two parts:the weak equivalence principle tests the sensitivity to composition and the strong equivalence principle checks the dependence on mass.There are laboratory investigations of the weak equivalence principle(at University of Washington)which are about as accurate as LLR[7,1].LLR is the dominant test of the strong equivalence principle.The most accurate test of the SEP violation effect is presently provided by LLR[61,48,23],and also in[24,62,63,4].Recent analysis of LLR data test the EP of∆(M G/M I)EP=(−1.0±1.4)×10−13[68].This result corresponds to a test of the SEP of∆(M G/M I)SEP=(−2.0±2.0)×10−13with the SEP violation parameter η=4β−γ−3found to beη=(4.4±4.5)×10−ing the recent Cassini result for the PPN parameterγ,PPN parameterβis determined at the level ofβ−1=(1.2±1.1)×10−4.2.3.2Other Tests of Gravity with LLRLLR data yielded the strongest limits to date on variability of the gravitational constant(the way gravity is affected by the expansion of the universe),the best measurement of the de Sitter precession rate,and is relied upon to generate accurate astronomical ephemerides.The possibility of a time variation of the gravitational constant,G,wasfirst considered by Dirac in1938on the basis of his large number hypothesis,and later developed by Brans and Dicke in their theory of gravitation(for more details consult[59,60]).Variation might be related to the expansion of the Universe,in which case˙G/G=σH0,where H0is the Hubble constant, andσis a dimensionless parameter whose value depends on both the gravitational constant and the cosmological model considered.Revival of interest in Brans-Dicke-like theories,with a variable G,was partially motivated by the appearance of superstring theories where G is considered to be a dynamical quantity[26].Two limits on a change of G come from LLR and planetary ranging.This is the second most important gravitational physics result that LLR provides.GR does not predict a changing G,but some other theories do,thus testing for this effect is important.The current LLR ˙G/G=(4±9)×10−13yr−1is the most accurate limit published[68].The˙G/G uncertaintyis83times smaller than the inverse age of the universe,t0=13.4Gyr with the value for Hubble constant H0=72km/sec/Mpc from the WMAP data[52].The uncertainty for˙G/G is improving rapidly because its sensitivity depends on the square of the data span.This fact puts LLR,with its more then35years of history,in a clear advantage as opposed to other experiments.LLR has also provided the only accurate determination of the geodetic precession.Ref.[68]reports a test of geodetic precession,which expressed as a relative deviation from GR,is K gp=−0.0019±0.0064.The GP-B satellite should provide improved accuracy over this value, if that mission is successfully completed.LLR also has the capability of determining PPNβandγdirectly from the point-mass orbit perturbations.A future possibility is detection of the solar J2from LLR data combined with the planetary ranging data.Also possible are dark matter tests,looking for any departure from the inverse square law of gravity,and 
checking for a variation of the speed of light.The accurate LLR data has been able to quickly eliminate several suggested alterations of physical laws.The precisely measured lunar motion is a reality that any proposed laws of attraction and motion must satisfy.The above investigations are important to gravitational physics.The future LLR data will improve the above investigations.Thus,future LLR data of current accuracy would con-tinue to shrink the uncertainty of˙G because of the quadratic dependence on data span.The equivalence principle results would improve more slowly.To make a big improvement in the equivalence principle uncertainty requires improved range accuracy,and that is the motivation for constructing the APOLLO ranging facility in New Mexico.2.4Future LLR Data and APOLLO facilityIt is essential that acquisition of the new LLR data will continue in the future.Accuracies∼2cm are now achieved,and further very useful improvement is expected.Inclusion of improved data into LLR analyses would allow a correspondingly more precise determination of the gravitational physics parameters under study.LLR has remained a viable experiment with fresh results over35years because the data accuracies have improved by an order of magnitude(see Figure1).There are prospects for future LLR station that would provide another order of magnitude improvement.The Apache Point Observatory Lunar Laser-ranging Operation(APOLLO)is a new LLR effort designed to achieve mm range precision and corresponding order-of-magnitude gains in measurements of fundamental physics parameters.For thefirst time in the LLR history,using a3.5m telescope the APOLLO facility will push LLR into a new regime of multiple photon returns with each pulse,enabling millimeter range precision to be achieved[29,66].The anticipated mm-level range accuracy,expected from APOLLO,has a potential to test the EP with a sensitivity approaching10−14.This accuracy would yield sensitivity for parameterβat the level of∼5×10−5and measurements of the relative change in the gravitational constant,˙G/G, would be∼0.1%the inverse age of the universe.The overwhelming advantage APOLLO has over current LLR operations is a3.5m astro-nomical quality telescope at a good site.The site in southern New Mexico offers high altitude (2780m)and very good atmospheric“seeing”and image quality,with a median image resolu-tion of1.1arcseconds.Both the image sharpness and large aperture conspire to deliver more photons onto the lunar retroreflector and receive more of the photons returning from the re-flectors,pared to current operations that receive,on average,fewer than0.01 photons per pulse,APOLLO should be well into the multi-photon regime,with perhaps5–10 return photons per pulse.With this signal rate,APOLLO will be efficient atfinding and track-ing the lunar return,yielding hundreds of times more photons in an observation than current√operations deliver.In addition to the significant reduction in statistical error(useful).These new reflectors on the Moon(and later on Mars)can offer significant navigational accuracy for many space vehicles on their approach to the lunar surface or during theirflight around the Moon,but they also will contribute significantly to fundamental physics research.The future of lunar ranging might take two forms,namely passive retroreflectors and active transponders.The advantages of new installations of passive retroreflector arrays are their long life and simplicity.The disadvantages are the weak returned signal and the spread of the 
reflected pulse arising from lunar librations(apparent changes in orientation of up to10 degrees).Insofar as the photon timing error budget is dominated by the libration-induced pulse spread—as is the case in modern lunar ranging—the laser and timing system parameters do√not influence the net measurement uncertainty,which simply scales as1/3Laser Ranging to MarsThere are three different experiments that can be done with accurate ranges to Mars:a test of the SEP(similar to LLR),a solar conjunction experiment measuring the deflection of light in the solar gravity,similar to the Cassini experiment,and a search for temporal variation in the gravitational constant G.The Earth-Mars-Sun-Jupiter system allows for a sensitive test of the SEP which is qualitatively different from that provided by LLR[3].Furthermore,the outcome of these ranging experiments has the potential to improve the values of the two relativistic parameters—a combination of PPN parametersη(via test of SEP)and a direct observation of the PPN parameterγ(via Shapiro time delay or solar conjunction experiments).(This is quite different compared to LLR,as the small variation of Shapiro time delay prohibits very accurate independent determination of the parameterγ).The Earth-Mars range would also provide for a very accurate test of˙G/G.This section qualitatively addresses the near-term possibility of laser ranging to Mars and addresses the above three effects.3.1Planetary Test of the SEP with Ranging to MarsEarth-Mars ranging data can provide a useful estimate of the SEP parameterηgiven by Eq.(7). It was demonstrated in[3]that if future Mars missions provide ranging measurements with an accuracy ofσcentimeters,after ten years of ranging the expected accuracy for the SEP parameterηmay be of orderσ×10−6.These ranging measurements will also provide the most accurate determination of the mass of Jupiter,independent of the SEP effect test.It has been observed previously that a measurement of the Sun’s gravitational to inertial mass ratio can be performed using the Sun-Jupiter-Mars or Sun-Jupiter-Earth system[33,47,3]. The question we would like to answer here is how accurately can we do the SEP test given the accurate ranging to Mars?We emphasize that the Sun-Mars-Earth-Jupiter system,though governed basically by the same equations of motion as Sun-Earth-Moon system,is significantly different physically.For a given value of SEP parameterηthe polarization effects on the Earth and Mars orbits are almost two orders of magnitude larger than on the lunar orbit.Below we examine the SEP effect on the Earth-Mars range,which has been measured as part of the Mariner9and Viking missions with ranging accuracy∼7m[48,44,41,43].The main motivation for our analysis is the near-future Mars missions that should yield ranging data, accurate to∼1cm.This accuracy would bring additional capabilities for the precision tests of fundamental and gravitational physics.3.1.1Analytical Background for a Planetary SEP TestThe dynamics of the four-body Sun-Mars-Earth-Jupiter system in the Solar system barycentric inertial frame were considered.The quasi-Newtonian acceleration of the Earth(E)with respect to the Sun(S),a SE=a E−a S,is straightforwardly calculated to be:a SE=−µ∗SE·r SE MI Eb=M,Jµb r bS r3bE + M G M I E b=M,Jµb r bS。
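The numbered Equivalence Principle relations referred to in Sections 2.3.1 and 3.1.1 did not survive the text extraction above; for readability they are restated below in their standard form, as used in the LLR literature cited in the text. The Earth self-energy value is the one quoted earlier; everything else is a reconstruction of the standard expressions rather than material recovered from the damaged passages.

```latex
% (1) Differential free-fall acceleration of two test bodies 1 and 2:
\begin{equation}
  \frac{\Delta a}{a}
  = \frac{2\,(a_1 - a_2)}{a_1 + a_2}
  = \left(\frac{M_G}{M_I}\right)_{\!1} - \left(\frac{M_G}{M_I}\right)_{\!2}.
\end{equation}
% (2) SEP parameterization of the gravitational-to-inertial mass ratio,
%     with \eta = 4\beta - \gamma - 3 the SEP violation parameter:
\begin{equation}
  \frac{M_G}{M_I} = 1 + \eta\,\frac{E}{Mc^{2}} .
\end{equation}
% (3) E is the gravitational self-energy of the body; for the Earth the text
%     quotes (E/Mc^2)_E = -4.64e-10:
\begin{equation}
  \frac{E}{Mc^{2}}
  = -\frac{G}{2Mc^{2}}\int_{V_B}\!\!\int_{V_B} d^{3}x\,d^{3}y\,
    \frac{\rho_B(\mathbf{x})\,\rho_B(\mathbf{y})}{|\mathbf{x}-\mathbf{y}|},
  \qquad
  \left(\frac{E}{Mc^{2}}\right)_{\!E} \simeq -4.64\times10^{-10}.
\end{equation}
```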
Nanomaterials as fluorescent probes for biosensing
REVIEWNanomaterials in fluorescence-based biosensingWenwan ZhongReceived:18November2008/Revised:29December2008/Accepted:21January2009/Published online:17February2009 #The Author(s)2009.This article is published with open access at Abstract Fluorescence-based detection is the most commonmethod utilized in biosensing because of its high sensitivity,simplicity,and diversity.In the era of nanotechnology,nanomaterials are starting to replace traditional organic dyesas detection labels because they offer superior opticalproperties,such as brighter fluorescence,wider selections ofexcitation and emission wavelengths,higher photostability,etc.Their size-or shape-controllable optical characteristicsalso facilitate the selection of diverse probes for higher assaythroughput.Furthermore,the nanostructure can provide asolid support for sensing assays with multiple probe moleculesattached to each nanostructure,simplifying assay design andincreasing the labeling ratio for higher sensitivity.The currentreview summarizes the applications of nanomaterials—including quantum dots,metal nanoparticles,and silicananoparticles—in biosensing using detection techniques suchas fluorescence,fluorescence resonance energy transfer (FRET),fluorescence lifetime measurement,and multiphoton microscopy.The advantages nanomaterials bring to the field of biosensing are discussed.The review also points out the importance of analytical separations in the preparation of nanomaterials with fine optical and physical properties for biosensing.In conclusion,nanotechnology provides a great opportunity to analytical chemists to develop better sensing strategies,but also relies on modern analytical techniques to pave its way to practical applications.Keywords Nanomaterials.Quantum dots.Gold nanoparticle.Silica nanoparticle.Fluorescence.FRET. 
Biosensing OverviewSensitive detection of target analytes present at trace levels in biological samples often requires the labeling of reporter molecules with fluorescent dyes,because fluorescence detection is by far the dominant detection method in the field of sensing technology,due to its simplicity,the convenience of transducing the optical signal,the avail-ability of organic dyes with diverse spectral properties,and the rapid advances made in optical imaging.However,it can be difficult to obtain a low detection limit in fluo-rescence detection due to the limited extinction coefficients or quantum yields of organic dyes and the low dye-to-reporter molecule labeling ratio.The recent explosion of nanotechnology,leading to the development of materials with submicron-sized dimensions and unique optical properties,has opened up new horizons for fluorescence detection.Nanomaterials can be made from both inorganic and organic materials and are less than100nm in lengthAnal Bioanal Chem(2009)394:47–59 DOI10.1007/s00216-009-2643-xW.Zhong(*)Department of Chemistry,University of California, Riverside,CA92521,USAe-mail:wenwan.zhong@Wenwan Zhonghas been Assistant Professor ofChemistry at the University ofCalifornia,Riverside,since July2006.She received the Pilot In-terdisciplinary Research Awardfrom the Institute for IntegrativeGenome Biology of UC,River-side.Her current research in-terests are:developing novelanalytical strategies for utilizingnanomaterials in biosensing;studying nanotoxicity using mi-croscale separation techniques like capillary electrophoresis;and developing field-flow fractionation-based methods for purification and analysis of large protein complexes.along at least one dimension.This small size scale leads to large surface areas and unique size-related optical proper-ties.For example,the quantum confinement effects that occur in nanometer-sized semiconductors widen their band gap and generate well-defined energy levels at the band edges,causing a blueshift in the threshold absorption wavelength with decreasing particle size and inducing luminescence that is strictly correlated to particle size. Therefore,the position of the absorption as well as the luminescence peaks can be fine-tuned by controlling the particle size and the size distribution during synthesis, generating a large group of“fluorophores”with diverse optical properties.Another example is the collective oscillation of free electrons on the surfaces of noble metal particles when their sizes drop below the electron mean free path,which gives rise to intense absorption in the visible to near-UV region as well as a significant enhancement of the luminescence of the fluorophore nearby.On the other hand, they can be conjugated to the reporter molecules as optical tags,like organic dyes,due to their ultrafine physical sizes. Hence,this review focuses on the applications of nano-materials,including semiconductor nanocrystals,noble metal nanoparticles,silica nanoparticles,and carbon nano-tubes,in the field of fluorescence-based sensing. 
Fluorescent nanoparticleQuantum dotsSince their discovery,quantum dots(QDs)have become more and more important as fluorescence labels in biosensing and imaging[1–5].They are semiconductor nanocrystals composed of atoms of elements from groups II to VI(e.g.,Cd,Zn,Se,Te)or III–V(e.g.,In,P,As)in the periodic table[3,5].The quantum confinement effect arising from their very small(<10nm)dimensions results in wide UV-visible absorption spectra,narrow emission bands,and optical properties that can be tuned by size, composition,and shape[1,5].These features come with high flexibility in the selection of excitation wavelength as well as minimal overlap in the emission spectra from multiple QDs,making them excellent labels for high-throughput screening.Additionally,choosing excitation wavelengths far from the emission wavelengths can eliminate background pared to organic dyes,QDs have similar quantum yields but extinction coefficients that are10–50times larger,and much-reduced photobleaching rates[2].The overall effect is that QDs have10∼20times brighter fluorescence and∼100–200 times better photostability[2].Because QDs are intrinsically fluorescent,they can be employed as the reporter molecules for biomolecule detection.For example,QD-based western blot detection kits are able to detect as low as20pg protein per lane[6, 7].In comparison with colorimetric or chemiluminescent detection,which lead to detection limits of around hundreds of picograms of protein per lane,the QD-based protocol was found to require the same measuring time,be more sensitive,sustain a longer storage time after staining with minimal loss of signal,and deliver better image quality [11].Innovative sensing formats have also been developed to utilize the special properties of QDs.A strategy for the smart targeting of protein was reported by Genin et al., which involved linking QDs to an organic dye of CrAsH [8].Since the interaction between CrAsH and cysteine causes a significant increase in CrAsH’s fluorescence, nanohybrids of CrAsH-QD could serve as a probe to locate Cys-tagged proteins and subsequently trace them for more than150s,taking advantage of the high resistance of QDs to photobleaching[8].On the other hand,Soman et al. profited from the much larger size of QDs than organic dyes when designing a protein detection scheme that offers subpicomolar sensitivity[9].In this scheme,QD-Ab conjugates rapidly aggregate in the presence of antigens, resulting in colloidal structures that are1–2orders of magnitude larger in size than the constituents[9].These structures are detected by light scattering in a flow cytometer[9].The detection of various antigens using QDs with different emission properties is possible with this straightforward agglomeration-based detection strategy. Due to their bright intensity and high photostability,they also have a wide range of applications in bioimaging.For instance,QDs can be employed in fluorescence in situ hybridization(FISH)and offer higher detection sensitivity than organic dyes.QD FISH detected the expression of mRNA in neurons within the midbrain region of mouse at ×4magnification,which could only be done at×20 magnification when Alexa Fluor488was used[10]. 
Streptavidin-coated quantum dots bound to biotinylated peptides were produced in vivo after infection of the target bacterium by an engineered host-specific bacteriophage [11].The system detected the specific bacterium at a concentration of ten bacterial cells per milliliter using flow cytometry or fluorescence microscopy[11].The optical properties of QDs—a wide absorption spectrum,narrow emission peak,and high photostability—provide great advantages in high-throughput analysis[12–15].Multi-plexed detection of Bacillus anthracis was performed on a fiber-optic microarray platform using five types of QDs and the organic dyes Cy3and Cy5.This eight-reporter system provides a fourfold throughput enhancement over standard two-color assays[14].Even though QDs are a very promising replacement for traditional organic dyes in labeling biomolecules for bioassays and bioimaging,their surface properties need to48W.Zhongbe improved for better aqueous solubility and functionality, their stability should be enhanced,and their nonspecific binding to biomolecules needs to be decreased.Various methods have been developed for these purposes.Ligand exchange is a method commonly used to replace the hydrophobic capping molecules with bifunctional linker molecules for both enhanced solubility in water and to generate functional groups for chemical conjugation with biomolecules on the surface[2,5].Another approach is to cover the hydrophobic surface groups through interactions with amphiphilic molecules like octylamine-modified poly-acrylic acid[2,5].This approach does not alter the surface and optical properties of QDs,because the original hydrophobic layer outside the core/shell CdSe/ZnS is intact. Polyethylene glycol(PEG)is another useful molecule for surface modification of QDs,because it is not only a good adapter for a variety of functional end-groups,such as biotin,amino,and carboxyl groups,but it can also help to improve the stability of QDs and decrease nonspecific binding[16].It has been proven that QDs encapsulated in oligomerized PEG-phospholipid micelles are stable for weeks in pure water(no change in precipitation and fluorescence was observed),and over90%of the fluores-cence was retained after5,000min when the oligomeric PEG-phospholipid micelle QDs were dispersed in acetate, phosphate,and borate buffers with low salt contents.In contrast,the fluorescence of monomeric PEG-phospholipid micelle QDs dropped drastically to<50%or even30% under the same conditions[17].Surface oxidation and pH value during storage are two prominent factors that influence QD stability.For extended stability in basic buffers,dihydrolipoic acid(DHLA)could be attached to the QD surface via a PEG linkage[18].The hydroxyl-coated QDs prepared by Kairdolf et al.showed stability under both basic and acidic conditions,with minimized nonspecific binding[19].All of these surface modification routes strengthen the compatibility of QDs with bioassays and should be continued in order to further enhance their applicability in the field of biological science.Toxicity is another issue that needs to be solved before QDs can be widely applied in biomedical studies performed in vivo,although it may not be a big problem in biosensing performed in vitro.Fluorescent silica nanoparticlesAnother type of fluorescent nanomaterial which has been extensively tested as a labeling reagent in the detection of pathogens,nucleic acids,and proteins is silica nanoparticles doped with organic dyes[20–25].This type of nanomaterial has the following advantages 
in biosensing:(1)silicon is abundant and nontoxic;(2)the high surface-to-volume ratio of the nanoparticles facilitates their binding to biomole-cules;(3)the inclusion of a large number of fluorescent dye molecules inside each nanoparticle results in excellent photostability due to the ability of the silica matrix to shield from molecular oxygen,and the inclusion also dramatically increases the dye-to-biomolecule labeling ratio,leading to high signal amplification factors during detection;and(4)silica is relatively inert in chemical reactions,but still allows surface modification with well-established chemistries[20,21].Compared to QDs, fluorescent silica nanoparticles have a wider size range, spanning from a few to hundreds of nanometers;they require less strict size control,and exhibit better water solubility[20,21].However,problems related to particle aggregation and nonspecific binding on the silica surface have been observed and will need to be solved before the full potential of silica nanoparticles in biosensing can be realized[20].Studies have shown that a ratio of inert to active functional groups on the surfaces of silica nano-particles that results in a high zeta potential is critical to maintaining a well-dispersed nanoparticle suspension and reducing nonspecific binding[26].Other than organic fluorophores,silica particles can also encapsulate quantum dots.The encapsulation not only retains the unique optical properties of QDs,but it also eliminates the aqueous solubility and modification problems associated with QDs and reduces the toxicity of QDs by preventing the leakage of heavy metal ions into the environment[27].Furthermore,magnetic nanoparticles can be co-embedded with QDs to allow handy manipu-lation of the particles using a magnetic field.Such multi-functional particles could be employed in live cell imaging [27–30].Since toxicity is always a big concern when using cadmium-containing QDs in biomedical research,efforts have been expended to generate silicon QDs(SiQDs)that are far less toxic than group II–VI QDs.However,the indirect bandgap character of silicon results in the extreme-ly low light emission efficiency.Recently,silicon quantum dots with photoluminescence quantum yields of over60% have been demonstrated for organically capped silicon nanocrystals,with emission in the near-infrared range[31, 32].Other big challenges in making biocompatible SiQDs include the instability of their photoluminescence due to their fast oxidation rate in aqueous environments,and the difficulties involved in attaching hydrophilic molecules to the SiQD surface[33–36].Optimal surface functionaliza-tion,such as capping the surface of the SiQD with NH2, SH,OH,acrylic acid,and alkyl groups,has been sought to produce water-dispersible SiQDs while maintaining spec-tral and colloidal stability[33–36].Highly stable aqueous suspensions of SiQDs encapsulated in phospholipid micelles were prepared and applied as luminescent labels for pancreatic cancer cells[35].However,applications ofNanomaterials in fluorescence-based biosensing49SiQDs in biosensing and biomedical imaging are still in their infancy because the mechanism responsible for the visible photoluminescence(PL)of SiQDs and the relation-ship of PL to surface functionalization are not yet clear [34].Moreover,a comprehensive comparison of the optical properties of organic dyes or Cd-based QDs with SiQDs has not been made,which makes it impossible to assess the analytical performance of SiQDs in biosensing.Metallic nanomaterials for 
fluorescence enhancement It has been known for a long time that metallic nano-structures can enhance the fluorescence of nearby fluo-rophores[37–39].Interactions between the dipole moments of the fluorophores and the surface plasmon field of the metal can increase the incident light field,which results in enhanced local fluorescence intensity and rate of excitation. Such interactions also boost the radiative decay rates, leading to an improved quantum yield and a reduced lifetime of the fluorophore[37–39].It has been estimated that the local electric field and the radiative rate could be increased by factors of140and1,000,respectively,near a silver particle[38].In addition,the shorter lifetime comes with the advantage of higher photon flux and increased photostability[37,38].The combined effect is the amplification of the total number of detected photons per fluorophore by a factor of105,significantly enhancing the detectability of the fluorophore[38].Surfaces on which silver islands or silver particles have been deposited are common platforms in bioassays utilizing MEF as the reporting system.Enhancements of the detection signal ranging from ten-to fortyfold have been observed on silver island film(SiF)-coated glass surfaces in comparison with the bare glass[40,41].An approximately thirtyfold increase in the fluorescent intensity of indocya-nine green was observed on a silver colloid-coated planar surface[42].The distance from the fluorophore to the metal surface is a critical factor in successful fluorescence amplification,because a distance that is too short can lead to quenching of the dye[43].Such a distance dependence of the transfer of electronic energy from a donor plane of molecules to an acceptor plane was modeled by Aroca et al. in the early1990s[44].The ideal range is50–200Åfrom the metal surfaces,which makes conjugation of dye-labeled molecules on the silver surface necessary for MEF[38,43]. 
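The claim that metal-enhanced fluorescence simultaneously raises the quantum yield and shortens the lifetime follows from adding a metal-induced radiative rate to the intrinsic one. The relations below are the standard radiative-decay-engineering expressions, restated here as background rather than taken from the cited references, with Γ the free-space radiative rate, Γ_m the metal-induced radiative rate, and k_nr the non-radiative rate.

```latex
% Modified quantum yield and lifetime of a fluorophore near a metal nanostructure:
\begin{equation}
  Q_m = \frac{\Gamma + \Gamma_m}{\Gamma + \Gamma_m + k_{nr}},
  \qquad
  \tau_m = \frac{1}{\Gamma + \Gamma_m + k_{nr}} .
\end{equation}
% As \Gamma_m grows, Q_m approaches unity while \tau_m falls, which is why MEF
% yields both brighter emission and improved photostability (the fluorophore
% spends less time in the excited state per emitted photon).
```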
The thickness of the silver coating also plays an important role in MEF.A study conducted by Zhang et al.found that the fluorophore was quenched on a silver film of2nm, enhanced on a film thicker than5nm,and reached saturation at20nm[45].Anisotropic silver nanostructures have been constructed for MEF as well because theoretical calculations indicated that the surface of a spheroid with an aspect ratio of1.75could lead to a greater reduction in the radiative decay rate than that of a sphere or a more elongated spheroid[43,46].A fiftyfold fluorescence enhancement was observed with high loading of nanorods on the surface,and triangular silver plates led to an enhancement factor of16[46].Metals other than silver have been tested for their effects on MEF.For instance,the use of a gold nanostructure coupled to CdSe/ZnS nanocrystals for MEF was demon-strated with precise control and high spatial selectivity over the fluorescence enhancement process[47].While silver or gold nanostructures achieve MEF in the visible to near-infrared region,aluminum nanoparticles deposited on SiO2 substrate could work in the ultraviolet–blue spectral region, which broadens the application range of MEF[48].Copper particulate films were also found to generate a moderate enhancement effect[49].MEF has been employed to increase the sensitivity in the detection of DNA,RNA,and proteins in the microarray format[50–52].A twenty-eight-fold fluorescence enhance-ment was observed for the near-infrared/infrared dye Cy5, but only a fourfold enhancement was obtained with Cy3, probably due to the stronger scattering component of the extinction spectrum of Cy5(Fig.1)[52].Sensitive detection of a484-mer RNA with a detection limit of25 fmol(4ng)has been demonstrated with MEF and a detection probe consisting of a TAMRA-labeled DNA oligo [53].Detections of a few nanograms of RNA in RNA capture assays had only been achieved previously with enzymatic signal amplification via alkaline phosphatase or linear mRNA amplification during cDNA synthesis,which are more complicated and time-consuming processes than the MEF-based method[54,55].Proof-of-principle detec-tion of proteins has been demonstrated using model systems like avidin–biotin and antigen(myoglobin)–antibody[51]. Moreover,silver island film and MEF have the potential to increase the sensitivities of bioassays performed on cell membranes.Cells were simply cast onto glass slides covered with silver islands and dried before measurement [56].Because the fluorescence signals of the fluorophores bound on the cell membranes were enhanced dramatically by the SiF supports while the intensity level of the intracellular fluorophores was not changed,the MEF helped to distinguish membrane-bound signals from those inside the cells[56].In order to spatially and kinetically accelerate the binding of biomolecules onto the surface,microwave heating was employed to enable ultrafast detection by MEF [57,58].The microwave-based approach also facilitated the release of genomic materials from bacteria spores.Extrac-tion and detection of DNA materials from less than1000 Bacillus anthracis spores have been achieved within one minute by the microwave-assisted MEF technique[59]. 
Without MEF,Taqman®real-time PCR was needed to50W.Zhongdetect the same DNA materials from 100unprocessed bacterial spores within 2h [60].The use of biomolecule-conjugated silver nanoparticles allows MEF to be applied to solution-based assays as well [61].Solution-based sensing offers faster reaction kinetics and requires simpler experimental apparatus.Silica-coated silver spheres conjugated with Cy3through streptavidin –biotin binding resulted in three-to five-fold fluorescence enhancements [61].Fluorescent core –shell Ag@SiO 2nano-balls were also synthesized by the same group.The silver nanoball was coated with the fluorophore-doped silica shell,and a twentyfold increase in fluorescence was observed with this structure compared to nanobubbles without a silver core (Fig.2)[62].Fluorescence resonance energy transfer (FRET)For assays studying biomolecule interactions and confor-mational changes,FRET is a better technique than simple fluorescence because it is very sensitive to nanoscale changes in distance between molecules [63].Traditional FRET pairs are organic dyes,and modern nanotechnology produces materials like single metal nanoparticles and ionic nanocrystals that can be used as FRET donors or acceptors,offering better FRET effects and more flexible sensing platforms in bioanalysis [63].Metal nanoparticles (NP)Gold nanoparticles are excellent FRET-based quenchers because their plasmon resonance in the visible range makes them strong absorbers and scatterers,with large extinction coefficients of around 105cm −1M −1.Additional superior optical properties include stable signal intensities and photobleaching resistance [63].Gold NP is exceptionally attractive in bioassays due to the ability to finely control particle sizes,and the extreme ease with which biomole-cules containing exposed thiol groups can be attached to the gold surface through gold –sulfur bonds.The gold-sulfur bond also facilitates the attachment of other functional groups such as carboxyl and amine groups via sulfur-containing ligands with special terminal groups.The most typical application of gold NP in FRET-based assays is the labeling of molecular beacons [63].The ends of the hairpin structure are conjugated to Au NP and organic fluorophore,respectively.When the molecular beacon opens up its stem upon binding to its complementary strand in the loop,the fluorophore is released from the Au NP and increased fluorescence intensity is observed.A hundredfoldincreaseFig.1Comparison of the fluorescence on glass and on SiF.A Cy5fluorescence on glass (filled circles )and SIF (empty circles ).B Cy3fluorescence on glass (filled circles )and SIF (empty circles ).C Plots of the intensity enhancement factor versus spotting concentration.Cy5is shown in red ,and Cy3is shown in green .D A Cy5scan for cohybridization with complementary Cy5-and Cy3-labeled targets on glass and SIF substrates.The intensity bar shown lower right is a linear scale from 0counts (black )to 34,000counts (white )[52]Nanomaterials in fluorescence-based biosensing 51in sensitivity was obtained with Au NP compared with traditional dye combinations [64,65].Bridge molecules can also be employed to bring the fluorophores into the proximity of Au NP .For example,β-cyclodextrin (CD)attached to the Au NP surface formed inclusion complexes with the fluorescein molecules which was then quenched [66].This can be utilized to detect cholesterol,because the replacement of fluorescein at the binding site by cholesterol frees the fluorescein from the NP.The fluorescence 
intensity of the system increased in proportion to the cholesterol concentra-tion [66].The small particle size of Au NP makes them excellent in vivo imaging reagents as well.Au NP modified with FAM-labeled single-stranded DNA was used to image intracellular hydroxyl radicals;here,the DNA strand was cleaved by HO·and released the quenched fluorophore [67].Silver nanoparticles can be excellent acceptors in FRET too,or they can act as enhancers for FRET pairs.It has been found that hybridization of the donor-labeled oligo-nucleotide with the acceptor-labeled complimentary strand on a silver particle surface led to enhanced FRET efficiency with increased Förster distance (from 8.3to 13nm)and a 21-fold faster FRET rate constant [68].Furthermore,the efficiency of FRET between Cy5and Cy5.5on the surfaces of silver particles was increased when the particle size was increased from 15nm to 80nm and when the distance of the donor –acceptor pair from the particle surface was increased from 2nm to 10nm [69].Therefore,as in the case of MEF,silver particles or a silver-decorated surface can act as the supporting material for FRET-based sensing to enhance assay performance.Quantum dotsOther than being directly used as fluorescent labels,QDs have also been widely recognized recently as energy donors in FRET for bioanalysis.Their broad absorption and narrow emission spectra allow single-wavelength excitation of multiple donors and can avoid crosstalk with acceptor fluorophores.In addition,the spectral overlap between donors and acceptors can be adjusted by tuning the particle size.Moreover,QDs are nanostructures that can be coupled to multiple acceptor fluorophores for higher efficiency in energy transfer and can act as the solid support for biomolecules for imaging purposes or to simplify assay st but not least,the energy transfer between QDs and molecular dyes can be approximately described by the Förster mechanism,and so accurate measurements of donor –acceptor distances can be deduced using the same FRET theory as for organic dyes when QDs are the donors and the organic dyes or QDs are the acceptors [70].However,because of the broad absorption spectra and the long excitation lifetime of QDs,they are not suitable being energy acceptors for short-lived molecular fluorophores [71].FRET-based analyses of nucleic acids,proteins,and other biological molecules have been reviewed extensively by Algar et al.Applications of QDs as the donors in FRET for DNA point mutation analysis,detection of pathogenic DNA,construction of molecular beacons with increased photostability,and immunoassays were covered in that review and will not be discussed here [71].Some new applications of QDs as FRET probes included detectionofFig.2Fluorescence emission intensities of Eu-TDPA-doped Ag@SiO 2and Rh800-doped Ag@SiO 2,as well as those of the corresponding fluorescent nanobubbles,Eu-TDPA-doped SiO 2and Rh800-doped SiO 2(top ).The bottom of the figure presents scanning confocal images of (A )Alexa 647Ag@SiO 2,(B )Alexa 647@SiO 2,and (C )zoomed-in version of panel (B ).TDPA ,tris(dibenzoyl-methane)mono(5-aminophenanthroline)europium;Rh ,rhodamine [62]52W.Zhongthe actions of biological enzymes such as protease,polymerase,and nuclease,or even visualization of the pH change in solution,as demonstrated by Suzuki et al.[72].Multiplexed detection of trypsin and deoxyribonuclease was also demonstrated [72].On the other hand,QD-mediated FRET can contribute greatly to the process of drug discovery and development.For example,it can be 
employed to image the cargo-unpacking process that occurs inside cells during drug delivery.The plasmid DNA is labeled by QDs and the polymeric gene carriers,such as chitosan and polyethyleneimine,are labeled with Cy5[73].The dissociation of plasmid DNA from the polymeric carriers after entering the cells releases the fluorescence of Cy5,as visualized by confocal microscopy [73].In anotherexample,QD-based FRET was employed to quantify RNA –peptide interactions that could be applied in the screening of libraries of small-molecule drugs (Fig.3)[74].The important HIV-1regulatory protein Rev was labeled with Cy5,and the Rev responsive element on HIV RNA (RRE RNA)was bound to the 605QD via the biotin –strapavidin interaction [74].Association of Rev with the RRE RNA permitted the excitation of Cy5at 444nm,a wavelength outside of the excitation range of Cy5,which would then decrease upon the binding of a small-molecule inhibitor like proflavin [74].Using the 605QD as the FRET donor not only eliminated the interference from the intrinsic fluorescence of the inhibitor (proflavin),the emission spectrum of which significantly overlaps with theabsorp-Fig.4Chemical structure of the QD –Con A –β-CD –Au NP nanobiosensor,and schematic illustration of its FRET-based operating principles [76]Fig.3A Conceptual scheme for a single-QD-based nanosen-sor for evaluating Rev peptide –RRE interactions and theinhibitory efficacy of proflavin based on FRET between 605QD and Cy5.B Histograms of the measured FRET efficiency for 605QD/RRE –Rev peptide/Cy5complexes as a function of increasing Rev peptide-to-RRE ratio.The solid curves represent the fit of the experimental data to a Gaussian function.C The variation in the number of Cy5burst counts with increasing Rev peptide-to-RRE ratio [74]Nanomaterials in fluorescence-based biosensing 53。
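Because the review notes above that QD–dye energy transfer is approximately Förster-like, the usual efficiency–distance relation E = 1/[1 + (r/R₀)⁶] applies. The sketch below evaluates it for the two Förster distances quoted for the silver-enhanced pair (8.3 nm free versus 13 nm on the particle surface); the donor–acceptor separations are illustrative values chosen here, not data from the cited work.

```python
# Forster (FRET) efficiency versus donor-acceptor distance r for a given Forster
# distance R0:  E = 1 / (1 + (r / R0)**6).
def fret_efficiency(r_nm: float, r0_nm: float) -> float:
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

R0_FREE   = 8.3    # nm, Forster distance quoted for the free donor-acceptor pair
R0_SILVER = 13.0   # nm, Forster distance quoted on the silver particle surface

for r in (6.0, 10.0, 14.0):        # illustrative separations
    print(f"r = {r:4.1f} nm:  E_free = {fret_efficiency(r, R0_FREE):.2f}, "
          f"E_Ag = {fret_efficiency(r, R0_SILVER):.2f}")
# At r = 10 nm the free pair transfers only ~25% of the excitation energy,
# while the silver-enhanced pair still transfers ~83%.
```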
J-STD-035 acoustic scanning (acoustic microscopy)
JOINT INDUSTRY STANDARDAcoustic Microscopy for Non-HermeticEncapsulatedElectronicComponents IPC/JEDEC J-STD-035APRIL1999Supersedes IPC-SM-786 Supersedes IPC-TM-650,2.6.22Notice EIA/JEDEC and IPC Standards and Publications are designed to serve thepublic interest through eliminating misunderstandings between manufacturersand purchasers,facilitating interchangeability and improvement of products,and assisting the purchaser in selecting and obtaining with minimum delaythe proper product for his particular need.Existence of such Standards andPublications shall not in any respect preclude any member or nonmember ofEIA/JEDEC or IPC from manufacturing or selling products not conformingto such Standards and Publications,nor shall the existence of such Standardsand Publications preclude their voluntary use by those other than EIA/JEDECand IPC members,whether the standard is to be used either domestically orinternationally.Recommended Standards and Publications are adopted by EIA/JEDEC andIPC without regard to whether their adoption may involve patents on articles,materials,or processes.By such action,EIA/JEDEC and IPC do not assumeany liability to any patent owner,nor do they assume any obligation whateverto parties adopting the Recommended Standard or ers are alsowholly responsible for protecting themselves against all claims of liabilities forpatent infringement.The material in this joint standard was developed by the EIA/JEDEC JC-14.1Committee on Reliability Test Methods for Packaged Devices and the IPCPlastic Chip Carrier Cracking Task Group(B-10a)The J-STD-035supersedes IPC-TM-650,Test Method2.6.22.For Technical Information Contact:Electronic Industries Alliance/ JEDEC(Joint Electron Device Engineering Council)2500Wilson Boulevard Arlington,V A22201Phone(703)907-7560Fax(703)907-7501IPC2215Sanders Road Northbrook,IL60062-6135 Phone(847)509-9700Fax(847)509-9798Please use the Standard Improvement Form shown at the end of thisdocument.©Copyright1999.The Electronic Industries Alliance,Arlington,Virginia,and IPC,Northbrook,Illinois.All rights reserved under both international and Pan-American copyright conventions.Any copying,scanning or other reproduction of these materials without the prior written consent of the copyright holder is strictly prohibited and constitutes infringement under the Copyright Law of the United States.IPC/JEDEC J-STD-035Acoustic Microscopyfor Non-Hermetic EncapsulatedElectronicComponentsA joint standard developed by the EIA/JEDEC JC-14.1Committee on Reliability Test Methods for Packaged Devices and the B-10a Plastic Chip Carrier Cracking Task Group of IPCUsers of this standard are encouraged to participate in the development of future revisions.Contact:EIA/JEDEC Engineering Department 2500Wilson Boulevard Arlington,V A22201 Phone(703)907-7500 Fax(703)907-7501IPC2215Sanders Road Northbrook,IL60062-6135 Phone(847)509-9700Fax(847)509-9798ASSOCIATION CONNECTINGELECTRONICS INDUSTRIESAcknowledgmentMembers of the Joint IPC-EIA/JEDEC Moisture Classification Task Group have worked to develop this document.We would like to thank them for their dedication to this effort.Any Standard involving a complex technology draws material from a vast number of sources.While the principal members of the Joint Moisture Classification Working Group are shown below,it is not possible to include all of those who assisted in the evolution of this Standard.To each of them,the mem-bers of the EIA/JEDEC and IPC extend their gratitude.IPC Packaged Electronic Components Committee ChairmanMartin 
FreedmanAMP,Inc.IPC Plastic Chip Carrier Cracking Task Group,B-10a ChairmanSteven MartellSonoscan,Inc.EIA/JEDEC JC14.1CommitteeChairmanJack McCullenIntel Corp.EIA/JEDEC JC14ChairmanNick LycoudesMotorolaJoint Working Group MembersCharlie Baker,TIChristopher Brigham,Hi/FnRalph Carbone,Hewlett Packard Co. Don Denton,TIMatt Dotty,AmkorMichele J.DiFranza,The Mitre Corp. Leo Feinstein,Allegro Microsystems Inc.Barry Fernelius,Hewlett Packard Co. Chris Fortunko,National Institute of StandardsRobert J.Gregory,CAE Electronics, Inc.Curtis Grosskopf,IBM Corp.Bill Guthrie,IBM Corp.Phil Johnson,Philips Semiconductors Nick Lycoudes,MotorolaSteven R.Martell,Sonoscan Inc. Jack McCullen,Intel Corp.Tom Moore,TIDavid Nicol,Lucent Technologies Inc.Pramod Patel,Advanced Micro Devices Inc.Ramon R.Reglos,XilinxCorazon Reglos,AdaptecGerald Servais,Delphi Delco Electronics SystemsRichard Shook,Lucent Technologies Inc.E.Lon Smith,Lucent Technologies Inc.Randy Walberg,NationalSemiconductor Corp.Charlie Wu,AdaptecEdward Masami Aoki,HewlettPackard LaboratoriesFonda B.Wu,Raytheon Systems Co.Richard W.Boerdner,EJE ResearchVictor J.Brzozowski,NorthropGrumman ES&SDMacushla Chen,Wus Printed CircuitCo.Ltd.Jeffrey C.Colish,Northrop GrummanCorp.Samuel J.Croce,Litton AeroProducts DivisionDerek D-Andrade,Surface MountTechnology CentreRao B.Dayaneni,Hewlett PackardLaboratoriesRodney Dehne,OEM WorldwideJames F.Maguire,Boeing Defense&Space GroupKim Finch,Boeing Defense&SpaceGroupAlelie Funcell,Xilinx Inc.Constantino J.Gonzalez,ACMEMunir Haq,Advanced Micro DevicesInc.Larry A.Hargreaves,DC.ScientificInc.John T.Hoback,Amoco ChemicalCo.Terence Kern,Axiom Electronics Inc.Connie M.Korth,K-Byte/HibbingManufacturingGabriele Marcantonio,NORTELCharles Martin,Hewlett PackardLaboratoriesRichard W.Max,Alcatel NetworkSystems Inc.Patrick McCluskey,University ofMarylandJames H.Moffitt,Moffitt ConsultingServicesRobert Mulligan,Motorola Inc.James E.Mumby,CibaJohn Northrup,Lockheed MartinCorp.Dominique K.Numakura,LitchfieldPrecision ComponentsNitin B.Parekh,Unisys Corp.Bella Poborets,Lucent TechnologiesInc.D.Elaine Pope,Intel Corp.Ray Prasad,Ray Prasad ConsultancyGroupAlbert Puah,Adaptec Inc.William Sepp,Technic Inc.Ralph W.Taylor,Lockheed MartinCorp.Ed R.Tidwell,DSC CommunicationsCorp.Nick Virmani,Naval Research LabKen Warren,Corlund ElectronicsCorp.Yulia B.Zaks,Lucent TechnologiesInc.IPC/JEDEC J-STD-035April1999 iiTable of Contents1SCOPE (1)2DEFINITIONS (1)2.1A-mode (1)2.2B-mode (1)2.3Back-Side Substrate View Area (1)2.4C-mode (1)2.5Through Transmission Mode (2)2.6Die Attach View Area (2)2.7Die Surface View Area (2)2.8Focal Length(FL) (2)2.9Focus Plane (2)2.10Leadframe(L/F)View Area (2)2.11Reflective Acoustic Microscope (2)2.12Through Transmission Acoustic Microscope (2)2.13Time-of-Flight(TOF) (3)2.14Top-Side Die Attach Substrate View Area (3)3APPARATUS (3)3.1Reflective Acoustic Microscope System (3)3.2Through Transmission AcousticMicroscope System (4)4PROCEDURE (4)4.1Equipment Setup (4)4.2Perform Acoustic Scans..........................................4Appendix A Acoustic Microscopy Defect CheckSheet (6)Appendix B Potential Image Pitfalls (9)Appendix C Some Limitations of AcousticMicroscopy (10)Appendix D Reference Procedure for PresentingApplicable Scanned Data (11)FiguresFigure1Example of A-mode Display (1)Figure2Example of B-mode Display (1)Figure3Example of C-mode Display (2)Figure4Example of Through Transmission Display (2)Figure5Diagram of a Reflective Acoustic MicroscopeSystem (3)Figure6Diagram of a Through Transmission AcousticMicroscope 
System (3)April1999IPC/JEDEC J-STD-035iiiIPC/JEDEC J-STD-035April1999This Page Intentionally Left BlankivApril1999IPC/JEDEC J-STD-035 Acoustic Microscopy for Non-Hermetic EncapsulatedElectronic Components1SCOPEThis test method defines the procedures for performing acoustic microscopy on non-hermetic encapsulated electronic com-ponents.This method provides users with an acoustic microscopy processflow for detecting defects non-destructively in plastic packages while achieving reproducibility.2DEFINITIONS2.1A-mode Acoustic data collected at the smallest X-Y-Z region defined by the limitations of the given acoustic micro-scope.An A-mode display contains amplitude and phase/polarity information as a function of time offlight at a single point in the X-Y plane.See Figure1-Example of A-mode Display.IPC-035-1 Figure1Example of A-mode Display2.2B-mode Acoustic data collected along an X-Z or Y-Z plane versus depth using a reflective acoustic microscope.A B-mode scan contains amplitude and phase/polarity information as a function of time offlight at each point along the scan line.A B-mode scan furnishes a two-dimensional(cross-sectional)description along a scan line(X or Y).See Figure2-Example of B-mode Display.IPC-035-2 Figure2Example of B-mode Display(bottom half of picture on left)2.3Back-Side Substrate View Area(Refer to Appendix A,Type IV)The interface between the encapsulant and the back of the substrate within the outer edges of the substrate surface.2.4C-mode Acoustic data collected in an X-Y plane at depth(Z)using a reflective acoustic microscope.A C-mode scan contains amplitude and phase/polarity information at each point in the scan plane.A C-mode scan furnishes a two-dimensional(area)image of echoes arising from reflections at a particular depth(Z).See Figure3-Example of C-mode Display.1IPC/JEDEC J-STD-035April1999IPC-035-3 Figure3Example of C-mode Display2.5Through Transmission Mode Acoustic data collected in an X-Y plane throughout the depth(Z)using a through trans-mission acoustic microscope.A Through Transmission mode scan contains only amplitude information at each point in the scan plane.A Through Transmission scan furnishes a two-dimensional(area)image of transmitted ultrasound through the complete thickness/depth(Z)of the sample/component.See Figure4-Example of Through Transmission Display.IPC-035-4 Figure4Example of Through Transmission Display2.6Die Attach View Area(Refer to Appendix A,Type II)The interface between the die and the die attach adhesive and/or the die attach adhesive and the die attach substrate.2.7Die Surface View Area(Refer to Appendix A,Type I)The interface between the encapsulant and the active side of the die.2.8Focal Length(FL)The distance in water at which a transducer’s spot size is at a minimum.2.9Focus Plane The X-Y plane at a depth(Z),which the amplitude of the acoustic signal is maximized.2.10Leadframe(L/F)View Area(Refer to Appendix A,Type V)The imaged area which extends from the outer L/F edges of the package to the L/F‘‘tips’’(wedge bond/stitch bond region of the innermost portion of the L/F.)2.11Reflective Acoustic Microscope An acoustic microscope that uses one transducer as both the pulser and receiver. 
(This is also known as a pulse/echo system.) See Figure 5 - Diagram of a Reflective Acoustic Microscope System.

2.12 Through Transmission Acoustic Microscope An acoustic microscope that transmits ultrasound completely through the sample from a sending transducer to a receiver on the opposite side. See Figure 6 - Diagram of a Through Transmission Acoustic Microscope System.

3 APPARATUS

3.1 Reflective Acoustic Microscope System (see Figure 5) comprised of:

3.1.6 A broad band acoustic transducer with a center frequency in the range of 10 to 200 MHz for subsurface imaging.

3.2 Through Transmission Acoustic Microscope System (see Figure 6) comprised of:

3.2.1 Items 3.1.1 to 3.1.6 above

3.2.2 Ultrasonic pulser (can be a pulser/receiver as in 3.1.1)

3.2.3 Separate receiving transducer or ultrasonic detection system

3.3 Reference packages or standards, including packages with delamination and packages without delamination, for use during equipment setup.

3.4 Sample holder for pre-positioning samples. The holder should keep the samples from moving during the scan and maintain planarity.

4 PROCEDURE
This procedure is generic to all acoustic microscopes. For operational details related to this procedure that apply to a specific model of acoustic microscope, consult the manufacturer's operational manual.

4.1 Equipment Setup

4.1.1 Select the transducer with the highest useable ultrasonic frequency, subject to the limitations imposed by the media thickness and acoustic characteristics, package configuration, and transducer availability, to analyze the interfaces of interest. The transducer selected should have a low enough frequency to provide a clear signal from the interface of interest, and a high enough frequency to delineate that interface. Note: Through transmission mode may require a lower frequency and/or longer focal length than reflective mode. Through transmission is effective for the initial inspection of components to determine if defects are present.

4.1.2 Verify setup with the reference packages or standards (see 3.3 above) and settings that are appropriate for the transducer chosen in 4.1.1 to ensure that the critical parameters at the interface of interest correlate to the reference standard utilized.

4.1.3 Place units in the sample holder in the coupling medium such that the upper surface of each unit is parallel with the scanning plane of the acoustic transducer. Sweep air bubbles away from the unit surface and from the bottom of the transducer head.

4.1.4 At a fixed distance (Z), align the transducer and/or stage for the maximum reflected amplitude from the top surface of the sample. The transducer must be perpendicular to the sample surface.

4.1.5 Focus by maximizing the amplitude, in the A-mode display, of the reflection from the interface designated for imaging. This is done by adjusting the Z-axis distance between the transducer and the sample.

4.2 Perform Acoustic Scans

4.2.1 Inspect the acoustic image(s) for any anomalies, verify that the anomaly is a package defect or an artifact of the imaging process, and record the results. (See Appendix A for an example of a check sheet that may be used.) To determine if an anomaly is a package defect or an artifact of the imaging process it is recommended to analyze the A-mode display at the location of the anomaly.

4.2.2 Consider potential pitfalls in image interpretation listed in, but not limited to, Appendix B and some of the limitations of acoustic microscopy listed in, but not limited to, Appendix C. If necessary, make adjustments to the equipment setup to optimize the results and rescan.
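The focusing step in 4.1.5 can be sketched in code. The block below is illustrative only and is not part of the standard; the scanner object and its methods are hypothetical placeholders for whatever instrument API is actually in use, and the gate indices simply select the echo from the interface of interest in the A-scan record.

    # Illustrative sketch of 4.1.5: focus by stepping Z and maximizing the gated
    # A-scan amplitude. `scanner`, `move_z` and `acquire_ascan` are assumed,
    # hypothetical instrument calls, not a real vendor API.
    import numpy as np

    def focus_on_interface(scanner, z_range_mm, gate_start, gate_end, step_mm=0.01):
        """Return the Z position (mm) that maximizes the gated A-scan amplitude."""
        best_z, best_amp = None, -np.inf
        for z in np.arange(z_range_mm[0], z_range_mm[1] + step_mm, step_mm):
            scanner.move_z(z)                              # assumed motion command
            ascan = np.asarray(scanner.acquire_ascan())    # assumed single-point A-mode record
            amp = np.abs(ascan[gate_start:gate_end]).max()
            if amp > best_amp:
                best_z, best_amp = z, amp
        scanner.move_z(best_z)
        return best_z, best_amp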
4.2.3 Evaluate the acoustic images using the failure criteria specified in other appropriate documents, such as J-STD-020.

4.2.4 Record the images and the final instrument setup parameters for documentation purposes. An example checklist is shown in Appendix D.

Appendix A  Acoustic Microscopy Defect Check Sheet

CIRCUIT SIDE SCAN
Image File Name/Path
Delamination
(Type I) Die Circuit Surface/Encapsulant - Number Affected; Average %; Location: Corner / Edge / Center
(Type II) Die/Die Attach - Number Affected; Average %; Location: Corner / Edge / Center
(Type III) Encapsulant/Substrate - Number Affected; Average %; Location: Corner / Edge / Center
(Type V) Interconnect tip - Number Affected; Average %
Interconnect - Number Affected; Max. % Length
(Type VI) Intra-Laminate - Number Affected; Average %; Location: Corner / Edge / Center
Comments:
Cracks
Are cracks present: Yes / No
If yes: Do any cracks intersect: bond wire / ball bond / wedge bond / tab bump / tab lead
Does crack extend from lead finger to any other internal feature: Yes / No
Does crack extend more than two-thirds the distance from any internal feature to the external surface of the package: Yes / No
Additional verification required: Yes / No
Comments:
Mold Compound Voids
Are voids present: Yes / No
If yes: Approx. size; Location (if multiple voids, use comment section)
Do any voids intersect: bond wire / ball bond / wedge bond / tab bump / tab lead
Additional verification required: Yes / No
Comments:

NON-CIRCUIT SIDE SCAN
Image File Name/Path
Delamination
(Type IV) Encapsulant/Substrate - Number Affected; Average %; Location: Corner / Edge / Center
(Type II) Substrate/Die Attach - Number Affected; Average %; Location: Corner / Edge / Center
(Type V) Interconnect - Number Affected; Max. % Length; Location: Corner / Edge / Center
(Type VI) Intra-Laminate - Number Affected; Average %; Location: Corner / Edge / Center
(Type VII) Heat Spreader - Number Affected; Average %; Location: Corner / Edge / Center
Additional verification required: Yes / No
Comments:
Cracks
Are cracks present: Yes / No
If yes: Does crack extend more than two-thirds the distance from any internal feature to the external surface of the package: Yes / No
Additional verification required: Yes / No
Comments:
Mold Compound Voids
Are voids present: Yes / No
If yes: Approx. size; Location (if multiple voids, use comment section)
Additional verification required: Yes / No
Comments:

Appendix B  Potential Image Pitfalls

Unexplained loss of front surface signal. Causes/comments: gain setting too low; symbolization on package surface; ejector pin knockouts; pin 1 and other mold marks; dust, air bubbles, fingerprints, residue; scratches, scribe marks, pencil marks; cambered package edge.
Unexplained loss of subsurface signal. Causes/comments: gain setting too low; transducer frequency too high; acoustically absorbent (rubbery) filler; large mold compound voids; porosity/high concentration of small voids; angled cracks in package; "dark line boundary" (phase cancellation); burned molding compound (ESD/EOS damage).
False or spotty indication of delamination. Causes/comments: low acoustic impedance coating (polyimide, gel); focus error; incorrect delamination gate setup; multilayer interference effects.
False indication of adhesion. Causes/comments: gain set too high (saturation); incorrect delamination gate setup; focus error; overlap of front surface and subsurface echoes (transducer frequency too low); fluid filling delamination areas.
Apparent voiding around die edge. Causes/comments: reflection from wire loops; incorrect setting of void gate.
Graded intensity. Causes/comments: die tilt or lead frame deformation; sample tilt.
Appendix C  Some Limitations of Acoustic Microscopy

Acoustic microscopy is an analytical technique that provides a non-destructive method for examining plastic encapsulated components for the existence of delaminations, cracks, and voids. This technique has limitations that include the following:

Limitation: Acoustic microscopy has difficulty in finding small defects if the package is too thick. Reason: the ultrasonic signal becomes more attenuated as a function of two factors, the depth into the package and the transducer frequency. The greater the depth, the greater the attenuation. Similarly, the higher the transducer frequency, the greater the attenuation as a function of depth.
Limitation: There are limitations on the Z-axis (axial) resolution. Reason: this is a function of the transducer frequency. The higher the transducer frequency, the better the resolution; however, the higher frequency signal becomes attenuated more quickly as a function of depth.
Limitation: There are limitations on the X-Y (lateral) resolution. Reason: the X-Y (lateral) resolution is a function of a number of different variables, including transducer characteristics (frequency, element diameter, and focal length), absorption and scattering of acoustic waves as a function of the sample material, and the electromechanical properties of the X-Y stage.
Limitation: Irregularly shaped packages are difficult to analyze. Reason: the technique requires some kind of flat reference surface. Typically, the upper surface of the package or the die surface can be used as references. In some packages, cambered package edges can cause difficulty in analyzing defects near the edges and below their surfaces.
Limitation: Edge effect. Reason: the edges cause difficulty in analyzing defects near the edge of any internal features.

Appendix D  Reference Procedure for Presenting Applicable Scanned Data

Most of the settings described may be captured as a default for the particular supplier/product with specific changes recorded on a sample or lot basis.

Setup Configuration (Digital Setup File Name and Contents)
Calibration Procedure and Calibration/Reference Standards used
Transducer: manufacturer; model; center frequency; serial number; element diameter; focal length in water
Scan Setup: scan area (X-Y dimensions); scan step size (horizontal, vertical); displayed resolution (horizontal, vertical); scan speed
Pulser/Receiver Settings: gain; bandwidth; pulse energy; repetition rate; receiver attenuation; damping; filter; echo amplitude
Pulse Analyzer Settings: front surface gate delay relative to trigger pulse; subsurface gate (if used); high pass filter; detection threshold for positive oscillation, negative oscillation; A/D settings; sampling rate; offset setting
Per Sample Settings: sample orientation (top or bottom (flipped) view and location of pin 1 or some other distinguishing characteristic); focus (point, depth, interface); reference plane; non-default parameters; sample identification information to uniquely distinguish it from others in the same group

Reference Procedure for Presenting Scanned Data
Image file types and names
Gray scale and color image legend definitions
Significance of colors
Indications or definition of delamination
Image dimensions
Depth scale of TOF
Deviation from true aspect ratio
Image type: A-mode, B-mode, C-mode, TOF, Through Transmission
A-mode waveforms should be provided for points of interest, such as delaminated areas. In addition, an A-mode image should be provided for a bonded area as a control.
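The frequency trade-off described in Appendix C can be put into rough numbers. The sketch below uses the common textbook approximations that axial resolution is about half a wavelength (v/2f) and that attenuation grows roughly linearly with frequency and path length; the sound speed and attenuation coefficient are assumed, order-of-magnitude values, not figures taken from this standard.

    # Rough numbers for the Appendix C trade-off: higher frequency improves axial
    # resolution but increases attenuation with depth. The constants below are
    # illustrative assumptions for a filled epoxy mold compound.
    def axial_resolution_mm(freq_mhz, sound_speed_m_s=3000.0):
        wavelength_mm = sound_speed_m_s / (freq_mhz * 1e6) * 1e3
        return wavelength_mm / 2.0

    def round_trip_loss_db(freq_mhz, depth_mm, atten_db_per_mm_per_mhz=0.1):
        return atten_db_per_mm_per_mhz * freq_mhz * 2.0 * depth_mm

    for f in (15, 30, 50, 100):
        print(f"{f} MHz: resolution ~ {axial_resolution_mm(f):.3f} mm, "
              f"round-trip loss at 1 mm depth ~ {round_trip_loss_db(f, 1.0):.1f} dB")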
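One convenient way to keep the Appendix D setup parameters with each scan is to store them as a structured record next to the image files. The field list below follows Appendix D; the record format, field names and all of the example values are this sketch's own assumptions, not requirements of J-STD-035.

    # Minimal sketch: capture the Appendix D settings as JSON alongside the images.
    # "ExampleCo", "XD-50" and the numeric values are placeholder examples.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ScanRecord:
        transducer_manufacturer: str
        transducer_model: str
        center_frequency_mhz: float
        element_diameter_mm: float
        focal_length_in_water_mm: float
        scan_area_mm: tuple          # (X, Y)
        scan_step_mm: tuple          # (horizontal, vertical)
        gain_db: float
        front_surface_gate_us: float
        subsurface_gate_us: float
        sampling_rate_mhz: float
        sample_orientation: str      # e.g. "top view, pin 1 upper left"
        focus: str                   # point, depth, or interface
        image_files: list

    record = ScanRecord("ExampleCo", "XD-50", 50.0, 6.3, 12.7,
                        (30.0, 30.0), (0.05, 0.05), 32.0, 0.2, 1.4, 500.0,
                        "top view, pin 1 upper left", "die attach interface",
                        ["lot42_unit07_cmode.tif"])
    with open("lot42_unit07_scan.json", "w") as fh:
        json.dump(asdict(record), fh, indent=2)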
Einstein-Rosen, On Gravitational Waves (1937)

ON GRAVITATIONAL WAVES
By A. Einstein and N. Rosen

We consider that the g_{\mu\nu} are replaced by the expressions

    g_{\mu\nu} = \delta_{\mu\nu} + \gamma_{\mu\nu} ,    (2)

where

    \delta_{\mu\nu} = 1  if  \mu = \nu ,
    \delta_{\mu\nu} = 0  if  \mu \neq \nu ,

provided we take the time coordinate imaginary, as was done by Minkowski. It is assumed that the \gamma_{\mu\nu} are small, i.e. that the gravitational field is weak. In the equations the \gamma_{\mu\nu} and their derivatives will occur in various powers. If the \gamma_{\mu\nu} are everywhere sufficiently small compared to unity, one obtains a first-approximation solution of the equations by neglecting in (1) the higher powers of the \gamma_{\mu\nu} (and their derivatives) compared with the lower ones. If one introduces further the \bar{\gamma}_{\mu\nu} instead of the \gamma_{\mu\nu} by the relations
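The excerpt breaks off just before the relations are displayed. The following is a reconstruction in standard weak-field notation rather than a verbatim quotation of the 1937 text: the barred quantities are the trace-reversed perturbations, and with the usual coordinate condition the first-approximation field equations reduce to a wave equation,

    \bar{\gamma}_{\mu\nu} = \gamma_{\mu\nu} - \tfrac{1}{2}\,\delta_{\mu\nu}\,\gamma , \qquad \gamma \equiv \gamma_{\sigma\sigma} ,

    \frac{\partial \bar{\gamma}_{\mu\nu}}{\partial x_{\nu}} = 0 , \qquad \sum_{\sigma} \frac{\partial^{2} \bar{\gamma}_{\mu\nu}}{\partial x_{\sigma}^{2}} = 0 ,

where, with the imaginary time coordinate used above, the last equation is the wave equation for each component \bar{\gamma}_{\mu\nu}.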
Optical Waves in Layered Media

Introduction to Optical Waves in Layered Media

1. Publication. Optical Waves in Layered Media was published by John Wiley & Sons (USA) in 1998; the present volume is the 2005 reprint, 406 pages in all.

2. The author. Pochi Yeh is currently a professor in the Department of Electrical and Computer Engineering at the University of California at Santa Barbara and a jointly appointed chair professor at National Chiao Tung University. He studied physics at National Taiwan University from 1967 to 1971 (BS), and physics at the California Institute of Technology for his MS (1973-1975) and PhD (1973-1977). After graduating, and until 1990, he was executive manager of the optical information department at the Rockwell International Science Center, and from 1985 to 1990 he also served as chief scientist of the center. Since 1987 he has been an adjunct professor at the Institute of Electro-Optical Engineering of National Chiao Tung University, and in 1990 he was appointed professor in the Department of Electrical and Computer Engineering at the University of California, Santa Barbara. His honors include Fellow of the Optical Society of America, Fellow of the IEEE, the Rockwell Science Center Leonardo da Vinci Award (Engineer of the Year, 1985), and the Rudolf Kingslake Medal and Prize of SPIE. Besides this book, his principal works include Optical Waves in Crystals (Wiley, 1984); Introduction to Photorefractive Nonlinear Optics (Wiley, 1993); Optics of Liquid Crystal Displays (Wiley, 1999); and Photonics: Optical Electronics for Modern Communications (Oxford University Press, 2006).
Variation of microfibril angle during culm wood formation in moso bamboo established by two forestation methods

Journal of Zhejiang Forestry College, 2010, 27(2): 217-222

YANG Shu-min, JIANG Ze-hui, REN Hai-qing, FEI Ben-hua (Research Institute of Wood Industry, Chinese Academy of Forestry, Beijing 100091, China)

Abstract: The microfibril angle (MFA) of moso bamboo (Phyllostachys pubescens) culms established by two forestation methods, seedling planting and rhizome burying, was measured by X-ray diffraction at three culm ages for each method. For every age examined, the radial variation of the secondary-cell-wall MFA showed a decreasing or fluctuating trend, with significant differences from the outer (bamboo green) to the inner (bamboo yellow) side of the culm wall. The maximum MFA was 12.05° for the buried-rhizome bamboo and 10.97° for the seedling bamboo, the minimum values were 7.67° and 8.24° respectively, and in both cases the range of variation was less than 5°; the mean MFA values, 9.41° and 9.71°, differed little. MFA did not change in a consistent pattern with increasing culm age for either method, although the effect of culm age on MFA was significant. Longitudinally, MFA showed no clear trend from the bottom to the top of the culm; the mean MFA of the buried-rhizome bamboo was 9.64°, 9.25° and 9.34° at the bottom, middle and top, smaller than the corresponding seedling values of 9.73°, 9.82° and 9.58°, and the differences were significant. The temporal and spatial variation of MFA in bamboo obtained by the two forestation methods is discussed in order to provide a scientific basis for the rational and effective exploitation of bamboo resources. [4 figures, 3 tables, 22 references]

Key words: forest engineering; Phyllostachys pubescens; method of forestation; microfibril angle; X-ray diffraction
CLC number: S781; Document code: A; Article ID: 1000-5692(2010)02-0217-06

Variation of microfibril angle in developmental Phyllostachys pubescens culms by two forestation methods

YANG Shu-min, JIANG Ze-hui, REN Hai-qing, FEI Ben-hua (Research Institute of Wood Industry, The Chinese Academy of Forestry, Beijing 100091, China)

Abstract: To provide a scientific basis for reasonable exploitation of bamboo resources, temporal and spatial variation patterns of the microfibril angle (MFA) in Phyllostachys pubescens (moso bamboo) were studied. Buried rhizomes (R) and seedlings (S) aged 30, 54, and 78 months were measured and analyzed by X-ray diffraction estimation. Results showed that, over the distance from bark to pith, the radial variation of the secondary-cell-wall MFA for the three ages either decreased or showed no pattern. The maximum MFA(R) was 12.05° and MFA(S) was 10.97°, with minima of 7.67° [MFA(R)] and 8.24° [MFA(S)], a range of less than 5°. There was a significant effect of age on MFA (P < 0.05) but no regular pattern. No pattern of longitudinal variation of MFA with culm height was found; the buried-rhizome bamboo averaged 9.64° at the base, 9.25° at the middle, and 9.34° at the top, significantly (P < 0.05) less than the seedling bamboo values of 9.73° at the base, 9.82° at the middle, and 9.58° at the top. [Ch, 4 fig. 3 tab. 22 ref.]

Key words: forest engineering; Phyllostachys pubescens; method of forestation; microfibril angle; X-ray diffraction

The microfibril angle (MFA) is the angle between the orientation of the microfibrils in the S2 layer of the secondary cell wall and the long axis of the cell; equivalently, it is the angle between the helical winding of the cellulose chains in the cell wall and the fibre axis [1-2].
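One common way to obtain MFA from X-ray diffraction data of the kind used above is to fit the azimuthal intensity profile of the cellulose (002) arc and apply Cave's empirical 0.6T rule. The sketch below uses synthetic data and assumes a Gaussian profile; it is not the authors' own processing pipeline, and the exact definition of T varies between implementations (here T is taken as the width at which the tangents through the inflection points reach the baseline, which for a Gaussian is 4 sigma).

    # Generic sketch: estimate MFA from an azimuthal (002) intensity profile via
    # a Gaussian fit and Cave's 0.6T rule. Synthetic data, assumed conventions.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(phi, amplitude, center, sigma, baseline):
        return amplitude * np.exp(-0.5 * ((phi - center) / sigma) ** 2) + baseline

    phi = np.arange(-45.0, 45.0, 0.5)          # azimuthal angle, degrees
    rng = np.random.default_rng(0)
    intensity = gaussian(phi, 1000.0, 0.0, 4.0, 50.0) + rng.normal(0.0, 10.0, phi.size)

    popt, _ = curve_fit(gaussian, phi, intensity, p0=(800.0, 0.0, 5.0, 0.0))
    sigma = abs(popt[2])
    T = 4.0 * sigma                             # tangents at +/- sigma hit the baseline at +/- 2 sigma
    mfa = 0.6 * T
    print(f"sigma = {sigma:.2f} deg, T = {T:.2f} deg, MFA ~ {mfa:.1f} deg")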
Gravitational singularity
A gravitational singularity or spacetime singularity is a location where the quantities that are used to measure the gravitational field become infinite in a way that does not depend on the coordinate system. These quantities are the scalar invariant curvatures of spacetime, which include a measure of the density of matter. For the purposes of proving the Penrose–Hawking singularity theorems, a spacetime with a singularity is defined to be one that contains geodesics that cannot be extended in a smooth manner.[1] The end of such a geodesic is considered to be the singularity. This is a different definition, useful for proving theorems. The two most important types of spacetime singularities are curvature singularities and conical singularities.[2] Singularities can also be divided according to whether or not they are covered by an event horizon, the event horizon being the boundary of a black hole.
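A standard concrete example, added here for illustration: for the Schwarzschild solution the curvature invariant (Kretschmann scalar)

    K = R_{abcd}\,R^{abcd} = \frac{48\,G^{2}M^{2}}{c^{4}\,r^{6}} = \frac{12\,r_{s}^{2}}{r^{6}}, \qquad r_{s} = \frac{2GM}{c^{2}},

remains finite at the event horizon r = r_s but diverges as r → 0 in every coordinate system, which is why r = 0 is a genuine curvature singularity while the horizon is only a coordinate singularity.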
Fluorescence resonance energy transfer techniques in biophysics

Fluorescence resonance energy transfer (FRET), typically measured by fluorescence lifetime imaging microscopy (FLIM), is a biophysical technique used to study biomolecular interactions and molecular structure.

The technique is based on the phenomenon of resonance energy transfer and allows the distance between molecules to be measured, thereby revealing their interactions.

FLIM is widely used in studies of protein interactions, molecular aggregation, and enzyme catalytic activity, and it plays an important role in biomedical research and disease diagnosis.

What is fluorescence resonance energy transfer? FRET is a non-radiative energy transfer process that occurs during the interaction between two fluorescent molecules.

When fluorescent molecule A is excited, it can pass energy on to molecules in its surroundings.

If another fluorescent molecule B is nearby and the two are sufficiently close (typically a few nanometres up to just over ten nanometres), the energy released by A can be transferred to B, exciting B to emit fluorescence.

This process is called fluorescence resonance energy transfer, usually abbreviated FRET.

FRET rests on two principles: (1) the excitation and emission spectra of a fluorophore are fixed, so its colour and intensity can be identified from the emission spectrum; and (2) the energy transfer efficiency between a pair of fluorophores depends strongly on their separation and relative orientation.

This means that measuring FRET can help us determine the distance and relative orientation between two fluorescent proteins.
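A minimal sketch of the distance dependence just described, using Förster theory: the transfer efficiency is E = 1 / (1 + (r/R0)^6), where R0 is the Förster radius of the donor-acceptor pair. The R0 value below is an assumed, typical figure of a few nanometres, not one taken from the text.

    # Förster distance dependence: efficiency vs distance, and distance from
    # a measured efficiency. R0 = 5 nm is an illustrative assumption.
    def fret_efficiency(r_nm, r0_nm=5.0):
        return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

    def distance_from_efficiency(e, r0_nm=5.0):
        return r0_nm * (1.0 / e - 1.0) ** (1.0 / 6.0)

    for r in (2.5, 5.0, 7.5, 10.0):
        print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r):.3f}")
    print("E = 0.5 corresponds to r =", distance_from_efficiency(0.5), "nm")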
Principle and applications of FLIM. The FLIM approach used here is a biophysical technique built on the FRET principle.

It can be used to quantitatively measure distances and relative positions between proteins, nucleic acids, and other biomolecules, thereby revealing their interactions.

FLIM uses the lifetime of the fluorescence signal (that is, how long the fluorescence persists) to identify the location and depth of fluorescent-protein interactions.

Specifically, FLIM determines the distance between molecules by measuring the fluorescence lifetime of the fluorophores.

A longer lifetime generally indicates that the molecules are farther apart, whereas a shorter lifetime indicates that they are close together.
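The lifetime-to-distance mapping can be sketched as follows: the transfer efficiency follows from the donor lifetime measured with (tau_DA) and without (tau_D) the acceptor, E = 1 - tau_DA/tau_D, and the Förster relation then gives the distance. The lifetimes and R0 below are illustrative values, not measurements from the text.

    # FLIM-FRET sketch: efficiency from donor lifetimes, then distance from the
    # Förster relation. All numbers are assumed examples.
    def efficiency_from_lifetimes(tau_da_ns, tau_d_ns):
        return 1.0 - tau_da_ns / tau_d_ns

    def distance_from_lifetimes(tau_da_ns, tau_d_ns, r0_nm=5.0):
        e = efficiency_from_lifetimes(tau_da_ns, tau_d_ns)
        return r0_nm * (1.0 / e - 1.0) ** (1.0 / 6.0)

    tau_d, tau_da = 2.5, 1.5        # ns: donor alone vs donor with acceptor present
    print("E =", efficiency_from_lifetimes(tau_da, tau_d))        # 0.4
    print("r =", distance_from_lifetimes(tau_da, tau_d), "nm")    # about 5.35 nm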
FLIM is widely applied in biological studies of protein interactions, molecular aggregation, and enzyme catalytic activity.

For example, in protein-interaction studies FLIM can help researchers determine where proteins bind and what the complexes are composed of, and it can reveal the kinetics of binding and dissociation.

In studies of enzyme catalytic activity, FLIM can provide unique insight into the catalytic mechanism and the local environment of the enzyme.
Inversion of wind velocity and backscatter coefficient from a Fizeau-interferometer-based Doppler lidar

Laser & Infrared, Vol. 37, No. 12, December 2007
CLC number: TN958.98; Document code: A

Inversion of Wind and Backscatter Coefficients from a Fizeau Interferometer Based Doppler Lidar

(2. Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Hefei 230031, China)

Abstract: A nonlinear least-squares inversion method is described that permits the recovery of the wind velocity and the aerosol-to-molecular backscatter ratio without requiring system parameters such as the telescope field of view and the laser beam overlap. Signals simulated by a Monte Carlo technique are used to retrieve the wind velocity and the aerosol-to-molecular ratio. The results show that the relative measurement errors are below 0.2%.
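The least-squares retrieval described in the abstract can be illustrated with a toy model. The sketch below is not the instrument model of the paper: the fringe shape, channel count, spectral dispersion and noise model are all assumptions, and the aerosol/molecular amplitude ratio stands in for the backscatter-coefficient ratio. It simulates a Doppler-shifted Fizeau fringe, adds Poisson noise, and retrieves the shift (hence the radial wind) and the ratio with scipy.

    # Toy forward model + nonlinear least-squares retrieval (illustrative only).
    import numpy as np
    from scipy.optimize import least_squares

    wavelength = 355e-9                 # m, assumed transmitter wavelength
    channels = np.arange(64)            # detector channels across the Fizeau fringe
    mhz_per_channel = 25.0              # assumed spectral dispersion

    def fringe(params, ch):
        """Narrow aerosol peak plus broad molecular background, both Doppler shifted."""
        shift_mhz, r_am, amplitude = params
        center = 32 + shift_mhz / mhz_per_channel
        aerosol = np.exp(-0.5 * ((ch - center) / 1.5) ** 2)     # narrow return
        molecular = np.exp(-0.5 * ((ch - center) / 12.0) ** 2)  # Rayleigh-broadened return
        return amplitude * (r_am * aerosol + molecular)

    rng = np.random.default_rng(1)
    true = (56.3, 2.0, 400.0)           # 56.3 MHz ~ 10 m/s radial wind at 355 nm
    data = rng.poisson(fringe(true, channels)).astype(float)

    fit = least_squares(lambda p: fringe(p, channels) - data,
                        x0=(0.0, 1.0, data.max()),
                        bounds=([-500, 0, 0], [500, 50, 1e6]))
    shift_mhz, r_am, _ = fit.x
    wind = 0.5 * wavelength * shift_mhz * 1e6                   # v = lambda * df / 2
    print(f"retrieved wind {wind:.2f} m/s, aerosol/molecular ratio {r_am:.2f}")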
Methods for depositing ultra thin low resistivity tungsten film for small critical dimension contacts and interconnects

Patent title: Methods for depositing ultra thin low resistivity tungsten film for small critical dimension contacts and interconnects
Inventors: Feng Chen, Raashina Humayun, Michal Danek, Anand Chandrashekar
Application No.: US12755259; Filing date: 2010-04-06
Publication No.: US8623733B2; Publication date: 2014-01-07
Patent content provided by the Intellectual Property Publishing House.
Abstract: Provided are methods of void-free tungsten fill of high aspect ratio features. According to various embodiments, the methods involve a reduced temperature chemical vapor deposition (CVD) process to fill the features with tungsten. In certain embodiments, the process temperature is maintained at less than about 350°C during the chemical vapor deposition to fill the feature. The reduced-temperature CVD tungsten fill provides improved tungsten fill in high aspect ratio features and improved barriers to fluorine migration into underlying layers, while achieving thin-film resistivity similar to standard CVD fill. Also provided are methods of depositing thin tungsten films having low resistivity. According to various embodiments, the methods involve performing a reduced temperature low resistivity treatment on a deposited nucleation layer prior to depositing a tungsten bulk layer, and/or depositing a bulk layer via a reduced temperature CVD process followed by a high temperature CVD process.
Applicants: Feng Chen, Raashina Humayun, Michal Danek, Anand Chandrashekar
Addresses: Milpitas, CA, US; Fremont, CA, US; Cupertino, CA, US; Sunnyvale, CA, US
Nationality: US
Agent: Weaver Austin Villeneuve & Sampson LLP
Torsion waves in metric–affine field theory

Alastair D King and Dmitri Vassiliev
Department of Mathematical Sciences, University of Bath, Bath BA2 7AY, UK

PACS numbers: 04.50.+h, 03.65.Pm

Submitted to: Class. Quantum Grav.

1. Main results

We consider spacetime to be a real oriented 4-manifold M equipped with a nondegenerate symmetric metric g and an affine connection Γ. The 10 independent components of the metric tensor g_{µν} and the 64 connection coefficients Γ^λ_{µν} are the unknowns, as is the manifold M itself. This approach is known as metric–affine field theory. Its origins lie in the works of authors such as É. Cartan, A. S. Eddington, A. Einstein, T. Levi-Civita, E. Schrödinger and H. Weyl; see, for example, Appendix II in [1], or [2]. Reviews of the more recent work in this area can be found in [3, 4, 5, 6].

The Yang–Mills action for the affine connection is

    S_{\mathrm{YM}} := \int R^{\kappa}{}_{\lambda\mu\nu}\, R^{\lambda}{}_{\kappa}{}^{\mu\nu} ,    (1)

where R is the Riemann curvature tensor (14). Variation of (1) with respect to the metric g and the connection Γ produces Euler–Lagrange equations which we, for the time being, will write symbolically as

    \partial S_{\mathrm{YM}} / \partial g = 0 ,    (2)

    \partial S_{\mathrm{YM}} / \partial \Gamma = 0 .    (3)

Equation (3) is the Yang–Mills equation for the affine connection. Equation (2) does not have an established name; we will call it the complementary Yang–Mills equation. Our initial objective is the study of the combined system (2), (3). This is a system of 74 real non-linear partial differential equations with 74 real unknowns.

In order to state our results we will require the Maxwell equation

    \delta\, du = 0 ,    (4)

as well as the polarized Maxwell equation

    {*\,du} = \alpha i\, du , \qquad \alpha = \pm 1 ;    (5)

here u is the unknown vector function. In calling (5) the polarized Maxwell equation we are motivated by the fact that any solution of (5) is a solution of (4). We call a solution u of the Maxwell equation (4) non-trivial if du ≢ 0.

If the metric is given and the connection is known to be metric compatible then the connection coefficients are uniquely determined by torsion (13) or contortion (16). The choice of torsion or contortion for the purpose of describing a metric compatible connection is purely a matter of convenience as the two are expressed one via the other in accordance with formulae (17).

We define Minkowski space M^4 as a real 4-manifold with a global coordinate system (x^0, x^1, x^2, x^3) and metric

    g_{\mu\nu} = \mathrm{diag}(+1, -1, -1, -1) .

Our definition of M^4 specifies the manifold M and the metric g, but does not specify the connection Γ. Our first result is

Theorem 1. Let u be a complex-valued vector function on M^4 which is a non-trivial plane wave solution of the polarized Maxwell equation (5), let L ≠ 0 be a constant complex antisymmetric tensor satisfying *L = α̃ iL, α̃ = ±1, and let Γ be the metric compatible connection corresponding to contortion

    K^{\lambda}{}_{\mu\nu} = \mathrm{Re}(u_{\mu} L^{\lambda}{}_{\nu}) .    (7)

Then the spacetime {M^4, Γ} is a solution of the system of equations (2), (3).

Remark 1. In abstract Yang–Mills theory it is not customary to consider the equation (2) because there is no guarantee that this would lead to physically meaningful results. As an illustration let us examine the Maxwell equation (4) for real-valued vector functions on a Lorentzian manifold, which is the simplest example of a Yang–Mills equation. Straightforward calculations show that it does not have non-trivial solutions which are stationary points of the Maxwell action with respect to the variation of the metric.

It is easy to see that the connections from Theorem 1 are not flat, i.e., R ≢ 0. The non-trivial plane wave solutions of (5) can, of course, be written down explicitly: up to a proper Lorentz transformation they are

    u(x) = w\, e^{-ik\cdot x} ,    (8)

where

    w_{\mu} = C\,(0, 1, -\alpha i, 0) , \qquad k_{\mu} = \beta\,(1, 0, 0, 1) ,    (9)

β = ±1, and C is an arbitrary positive constant (amplitude).

Let us now introduce an additional equation into our model:

    \mathrm{Ric} = 0 ,    (10)

where Ric is the Ricci curvature tensor. This is the Einstein equation describing the absence of sources of gravitation.

Remark 2. If the connection is that of Levi-Civita then (10) implies (3). In the general case equations (3) and (10) are independent.

The question we are about to address is whether there are any spacetimes which simultaneously satisfy the Yang–Mills equation (3), the complementary Yang–Mills equation (2), and the Einstein equation (10). More specifically, we are interested in spacetimes whose connections are not flat and not Levi-Civita connections. The following theorem provides an affirmative answer to the above question.

Theorem 2. A spacetime from Theorem 1 satisfies equation (10) if and only if L is proportional to (du)|_{x=0}, in which case torsion equals contortion up to a natural reordering of indices:

    T_{\lambda\mu\nu} = K_{\mu\lambda\nu} .    (11)
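The plane wave of (8)-(9) can be checked symbolically. The sketch below is not from the paper: it verifies with sympy that u_µ = w_µ e^{-ik·x}, with w = C(0, 1, -αi, 0) and k = β(1, 0, 0, 1), solves the Maxwell equation (4) and that du is (anti-)self-dual as in (5). The sign produced by the Hodge star depends on the orientation convention (here ε_{0123} = +1), so the script reports the sign it finds rather than asserting the paper's α.

    # sympy check of the plane wave solution (8)-(9); conventions are assumptions.
    import sympy as sp

    coords = sp.symbols('x0 x1 x2 x3', real=True)
    alpha, beta, C = 1, 1, 1                       # representative parameter choice
    g = sp.diag(1, -1, -1, -1)                     # Minkowski metric, signature (+,-,-,-)
    ginv = g.inv()

    w = [0, C, -alpha*sp.I*C, 0]                   # polarisation covector w_mu
    k = [beta, 0, 0, beta]                         # null covector k_mu
    phase = sp.exp(-sp.I*sum(k[m]*coords[m] for m in range(4)))
    u = [w[m]*phase for m in range(4)]

    # F = du, i.e. F_{mu nu} = d_mu u_nu - d_nu u_mu
    F = sp.Matrix(4, 4, lambda m, n: sp.diff(u[n], coords[m]) - sp.diff(u[m], coords[n]))
    Fup = ginv * F * ginv                          # F with both indices raised

    # Maxwell equation (4): delta du = 0  <=>  d_mu F^{mu nu} = 0
    maxwell = [sp.simplify(sum(sp.diff(Fup[m, n], coords[m]) for m in range(4)))
               for n in range(4)]
    print('delta du = 0 :', all(e == 0 for e in maxwell))

    # Hodge dual: (*F)_{mu nu} = (1/2) eps_{mu nu rho sigma} F^{rho sigma}
    starF = sp.Matrix(4, 4, lambda m, n: sp.Rational(1, 2)*sum(
        sp.LeviCivita(m, n, r, s)*Fup[r, s] for r in range(4) for s in range(4)))

    for lam in (1, -1):
        if all(sp.simplify(e) == 0 for e in (starF - lam*sp.I*F)):
            print('*du =', lam, '* i * du   (polarized Maxwell equation holds)')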