TRAJECTORY AND OPTICAL PARAMETERS IN A NON-LINEAR STRAY FIELD
Optical properties of self-assembled quantum wires for application in infra-red detection
Liang-Xin Li, Sophia Sun, and Yia-Chung Chang
Department of Physics and Materials Research Laboratory, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801
(February 1, 2008)

Abstract

We present theoretical studies of optical properties of Ga1-xInxAs self-assembled quantum wires (QWR's) made of short-period superlattices with strain-induced lateral ordering. Valence-band anisotropy, band mixing, and effects due to local strain distribution at the atomistic level are all taken into account. Using realistic material parameters which are experimentally feasible, we perform simulations of the absorption spectra for both inter-subband and inter-band transitions (including the excitonic effect) of this material. It is shown that the self-assembled QWR's have favorable optical properties for application in infra-red detection with normal incidence. The wavelength of detection ranges from 10 µm to 20 µm with the length of QWR period varying from 150 Å to 300 Å.

I. INTRODUCTION

Quantum-well infra-red photodetectors (QWIP's) have been extensively studied in recent years. The main mechanism used in QWIP's is the inter-subband optical transition, because the wavelengths for these transitions in typical III-V quantum wells can be tailored to match the desired operating wavelength (1-20 µm) for infra-red (IR) detection. Due to their narrow-band absorption, QWIP's are complementary to the traditional HgCdTe detectors, which utilize the inter-band absorption and are therefore applicable only for broad-band absorption. The main drawback of QWIP's is the lack of normal-incidence capability, unless some processing is made to create diffraction gratings on the surface, which tends to reduce the responsivity of the material to the incident radiation.
Because electrons in quantum wells have translational invariance (within the effective-mass model) in the plane normal to the growth axis, the electron inter-subband transition for normal-incident radiation is zero (or very small even if the coupling with other bands is considered). One way to break the translational invariance is to introduce a surface diffraction grating, as commonly adopted in many QWIP's fabricated today. A better (and less expensive) way to break the in-plane translational invariance is to utilize the strain-induced lateral modulation provided in self-assembled nano-structure materials. These nano-structures include quantum dots and quantum wires. Because the lateral modulation is formed via self-assembly, the fabrication of this type of material will be much more efficient once the optimized growth parameters are known. Hence, it will be cost effective to use them for device fabrication.

Self-assembled III-V QWR's grown via the strain-induced lateral-layer ordering (SILO) process have attracted a great deal of attention recently. [21-23] The self-assembly process occurs during the growth of short-period superlattices (SPS) [e.g. (GaAs)2/(InAs)2.25] along the [001] direction on an InP substrate. The excess fractional InAs layer leads to stripe-like islands during the initial MBE growth. [4] The presence of stripes combined with strain leads to natural phase separation as additional layers of GaAs or InAs are deposited, and the structure becomes laterally modulated in terms of In/Ga composition. A self-assembled QWR heterostructure can then be created by sandwiching the laterally modulated layer between barrier materials such as Al0.24Ga0.24In0.52As (quaternary), Al0.48In0.52As (ternary), or InP (binary). [4-6] It was found that different barrier materials can lead to different degrees of lateral composition modulation, and the period of lateral modulation ranges from 100 Å to 300 Å depending on the growth time and temperature.

In this paper, we explore the usefulness of InGaAs quantum wires (QWR's) grown by the strain-induced lateral ordering (SILO) process for IR detection. Our theoretical modeling includes the effects of realistic band structures and microscopic strain distributions by combining the effective bond-orbital model (EBOM) with the valence-force-field (VFF) model. One of the major parameters for IR detectors is the absorption quantum efficiency, which is directly related to the absorption coefficient by η = 1 − e^(−αl), where α is the absorption coefficient and l is the sample length. Thus, to have a realistic assessment of the material for device application, we need to perform detailed calculations of the absorption coefficient, taking into account the excitonic and band-structure effects. Both inter-subband and inter-band transitions are examined systematically for a number of structure parameters (within the experimentally feasible range) chosen to give the desired effect for IR detection. It is found that the wavelengths for the inter-subband transitions of InGaAs self-assembled QWR's range from 10 to 20 µm, while the inter-band transitions are around 1.5 µm. Thus, the material provides simultaneous IR detection at two contrasting wavelengths, something desirable for application in multi-colored IR video cameras.

Several structure models with varying degrees of alloy mixing for lateral modulation are considered. For the inter-band absorption, the excitonic effect is important, since it gives rise to a large shift in transition energy and a substantial enhancement of the absorption spectrum. To study the excitonic effect on the
absorption spectrum for both discrete and continuum states, we use a large set of basis functions with a finite-mesh sampling in k-space and diagonalize the exciton Hamiltonian directly. Emphasis is put on the analysis of the line shapes of the various peak structures arising from discrete excitonic states of one pair of subbands coupled with the excitonic (discrete and continuum) states associated with other pairs of subbands. We find that the excitonic effect enhances the first absorption peak by around 1.5 times and shifts the peak position by 20-30 meV.

II. THEORETICAL MODEL

The QWR structures considered here consist of 8 pairs of (GaAs)2(InAs)2.25 short-period superlattices (SPS) sandwiched between Al0.24Ga0.24In0.52As barriers. The SPS structure prior to strain-induced lateral ordering (SILO) is depicted in Fig. 1. With lateral ordering, the structure is modeled by a periodic modulation of alloy composition in the layers with a fractional monolayer of (In or Ga) in the SPS structure. In layers 7 and 9 (counting from the bottom as layer 1), we have

x_In(y') =
  x_m [1 − sin(π y'/2b)]/2             for y' < b,
  0                                    for b < y' < L/2 − b,
  x_m {1 + sin[π(y' − L/2)/2b]}/2      for L/2 − b < y' < L/2 + b,
  x_m                                  for L/2 + b < y' < L − b,
  x_m {1 − sin[π(y' − L)/2b]}/2        for y' > L − b,          (1)

where x_m is the maximum In composition in the layer, 2b denotes the width of the lateral composition grading, and L is the period of the lateral modulation in the [110] direction. The experimentally feasible range of L is between 100 Å and 300 Å. The length L is controlled by the growth time and temperature. In layers 3 and 13, we have

x_In(y') =
  0                                    for 0 < y' < 5L/8 − b,
  x_m {1 + sin[π(y' − 5L/8)/2b]}/2     for 5L/8 − b < y' < 5L/8 + b,
  x_m                                  for 5L/8 + b < y' < 7L/8 − b,
  x_m {1 − sin[π(y' − 7L/8)/2b]}/2     for 7L/8 − b < y' < 7L/8 + b,
  0                                    for 7L/8 + b < y' < L.    (2)

A similar equation for x_Ga in layers 5 and 11 can be deduced from the above. By varying the parameters x_m and b, we can obtain different degrees of lateral alloy mixing. Typically x_m is between 0.6 and 1, and b is between zero and 15 a[110] ≈ 62 Å.
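As a quick check of the composition profile defined by Eqs. (1) and (2), a short script can evaluate x_In across one lateral period. This is only a sketch; the parameter values are the illustrative ones quoted above (x_m = 1, b = 8 a[110], L = 50 a[110]).

```python
import numpy as np

def x_in_layer7(yp, xm, b, L):
    """In composition of Eq. (1) (layers 7 and 9) for 0 <= y' <= L."""
    if yp < b:
        return xm * (1 - np.sin(np.pi * yp / (2 * b))) / 2
    if yp < L / 2 - b:
        return 0.0
    if yp < L / 2 + b:
        return xm * (1 + np.sin(np.pi * (yp - L / 2) / (2 * b))) / 2
    if yp < L - b:
        return xm
    return xm * (1 - np.sin(np.pi * (yp - L) / (2 * b))) / 2

# Illustrative parameters: moderate alloy mixing and an L = 50 a[110] (~200 A) period.
a110 = 62.0 / 15.0              # Angstrom, from "15 a[110] ~ 62 A" quoted above
xm, b, L = 1.0, 8 * a110, 50 * a110
y = np.linspace(0.0, L, 401)
profile = np.array([x_in_layer7(v, xm, b, L) for v in y])
print(profile.min(), profile.max())  # ranges from 0 (Ga-rich) to xm (In-rich)
```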
A VFF model [13-15] is used to find the equilibrium atomic positions in the self-assembled QWR structure by minimizing the lattice energy. The strain tensor at each atomic (In or Ga) site is then obtained by calculating the local distortion of the chemical bonds. Once the microscopic strain distribution in the model structure is determined, the energy levels and wave functions of the self-assembled quantum wires are calculated within the effective bond-orbital model (EBOM). A detailed description of this method can be found in Refs. 24-26. The EBOM used here is a tight-binding-like model which includes two s-like conduction bands (including spin) and four valence bands with total angular momentum J = 3/2 (due to spin-orbit coupling of the p-like orbitals with the spinor). Thus, the present model is comparable to the six-band k·p model as adopted in Ref. ?

To minimize the computing effort, we express the electron and hole states of the quantum wire structure in terms of eigenstates of a quantum well structure with different in-plane wave vectors. The quantum well consists of 8 pairs of (GaAs)2(InAs)2 short-period superlattice (SPS) plus two InAs monolayers (one inserted after the second pair of SPS and the other after the sixth pair of SPS), so that the total In/Ga composition ratio is consistent with the (GaAs)2(InAs)2.25 SPS. The whole stack of SPS's is then sandwiched between two slabs of Al0.24Ga0.24In0.52As barriers. Let us denote the quantum well eigenstates as |n,k1,k2>_QW, where n labels the subband, k1 denotes the wave vector along the wire ([1-10]) direction, and k2 labels the wave vector in the [110] direction, which is perpendicular to the wire and the growth axis. Expanding the quantum well states in terms of bond orbitals, we have

|n,k1,k2>_QW = (1/L) Σ_{α,R} f_{n,k1,k2}(α,R_z) exp(i k2 R2) exp(i k1 R1) |u_α(R)>,

where L is the sample length along the wire axis, f_{n,k1,k2}(α,R_z) is the eigenvector of the quantum well Hamiltonian, and u_α(R) denotes an α-like bond orbital state at site R (α = 1,...,6 for the two s-like conduction-band and four J = 3/2 valence-band orbitals). Here R runs over all lattice sites within the SPS layer (well region) and the AlGaInAs layer (barrier region). We then diagonalize the Hamiltonian for the quantum wire (QWR) within a basis which consists of the quantum well states with k2's separated by reciprocal lattice vectors g_m = m(2π/a[110]), with m an integer. Namely,

|i,k1,k2> = Σ_{n,m} C_{i,k1}(n, k2 + g_m) |n, k1, k2 + g_m>_QW,

where C_{i,k1}(n, k2 + g_m) is the eigenvector of the quantum-wire Hamiltonian matrix for the i-th QWR subband at wave vector (k1,k2). In terms of the bond orbitals, we can rewrite the QWR states as

|i,k1,k2> = Σ_{α,R} F_{i,k1,k2}(α,R) |u_α(R)>,

where

F_{i,k1,k2}(α,R) = (1/L) Σ_{n,m} C_{i,k1}(n, k2 + g_m) f_{n,k1,k2+g_m}(α,R_z) exp[i(k2 + g_m) R2] exp(i k1 R1)

is the QWR envelope function. For the laterally confined states, the dispersion of the bands versus k2 is negligible; thus, the k2 dependence can be ignored. The absorption coefficient for inter-subband transitions between subbands i and j is given by

α_ij(ħω) = [4π²e²ħ / (n_r c m_0² ω V)] Σ_k |<i,k| ε̂·p |j,k>|² (f_i − f_j) δ(E_j(k) − E_i(k) − ħω),    (3)

where n_r is the refractive index of the QWR, V is the volume of the QWR sample restricted to the SPS region, and f_i (f_j) is the Fermi-Dirac distribution function for subband i (j). The optical matrix elements between QWR subband states are related to those between bond orbitals by

<i,k1,k2| ε̂·p |j,k1,k2> = Σ_{α,α',τ} F*_{i,k1,k2}(α,R) F_{j,k1,k2}(α',R) <u_α(R)| ε̂·p |u_α'(R + τ)>,

where τ runs over the on-site or the 12 nearest-neighbor sites of the fcc lattice. The optical matrix elements between bond orbitals are related to the band parameters by requiring the optical matrix elements between bulk states near the zone center to be identical to those obtained in k·p theory [28]. We obtain [27]

<u_s(R)| p_α |u_α'(R + τ)> = (ħ/2a)(E_p/E_g − m_0/m*_e) τ_α,  α = x, y, z,

where τ_α is the α-th component of the lattice vector τ in units of a/2, E_p is the inter-band optical matrix element as defined in Ref. 28, and m*_e is the electron effective mass.

Next, we study the inter-band transitions. For this case, the excitonic effect is important. Here we are only interested in the absorption spectrum near the band edge due to the laterally confined states. Thus, the dispersion in the k2 direction can be ignored. The exciton states with zero center-of-mass momentum can then be written as linear combinations of products of electron and hole states associated with the same k1 (wave vector along the wire direction). We write the electron-hole product state for the i-th conduction subband and j-th valence subband as

|i,j;k1>_ex = |i,k1>|j,k1> ≡ Σ_{α,β,R_e,R_h} F_{i,k1}(α,R_e) G_{j,k1}(β,R_h) |u(α,R_e)>|u(β,R_h)>.

The matrix elements of the exciton Hamiltonian within this basis are given by

<i,j;k1| H_ex |i',j';k'1> = [E_i(k1) δ_{i,i'} − E_j(k1) δ_{j,j'}] − Σ_{R_e,R_h} F*_{ii'}(R_e) v(R_e − R_h) G_{jj'}(R_h),    (4)

where v(R_e − R_h) = e²/(4π ε_0 ε |R_e − R_h|) is the Coulomb interaction screened by the static dielectric constant ε, and

F_{ii'}(R_e) = Σ_α F*_{i,k1}(α,R_e) F_{i',k1}(α,R_e)

describes the charge density matrix for the electrons. Similarly,

G_{jj'}(R_h) = Σ_β G*_{j,k1}(β,R_h) G_{j',k1}(β,R_h)

describes the charge density matrix for the holes. In Eq. (4), we have adopted the approximation

<u(α,R_e)| <u(β,R_h)| v |u(α',R'_e)> |u(β',R'_h)> ≈ v(R_e − R_h) δ_{α,α'} δ_{β,β'} δ_{R_e,R'_e} δ_{R_h,R'_h},

since the Coulomb potential is a smooth function over the distance of a lattice constant, except at the origin, and the bond orbitals are orthonormal to each other. At the origin (R_e = R_h), the potential is singular, and we replace it by an empirical constant which is adjusted
so as to give the same exciton binding energy as obtained in the effective-mass theory for a bulk system. The results are actually insensitive to this on-site Coulomb potential parameter, since the Bohr radius of the exciton is much larger than the lattice constant. After the diagonalization, we obtain the excitonic states as linear combinations of the electron-hole product states, and the inter-band absorption coefficient is computed according to

α_ex(ħω) = [4π²e²ħ / (n_r c m_0² ω V)] Σ_i |<G| ε̂·p |i>_ex|² δ(E_i − ħω),

where |i>_ex denotes the i-th exciton state with energy E_i; in evaluating the optical matrix elements, the bulk value m_0 E_p/2 of the squared inter-band momentum matrix element is needed. In order to obtain a smooth absorption spectrum, we replace the δ function in the absorption formula by a Lorentzian function with a half-width Γ,

δ(E_i − E) ≈ Γ / {π[(E_i − E)² + Γ²]},    (7)

where Γ is the energy width due to inhomogeneous broadening, which is taken to be 0.01 eV (??).

III. RESULTS AND DISCUSSIONS

We have performed calculations of inter-subband and inter-band absorption spectra for the QWR structure depicted in Fig. 1 with varying degrees of alloy mixing and different lengths of period (L) of the lateral modulation. We find that the inter-subband absorption spectra are sensitive to the length of period (L), but rather insensitive to the degree of alloy mixing. Thus, we only present results for the case with moderate alloy mixing, which is characterized by the parameters b = 33 Å and x_m = 1.0. In all the calculations, the bottom-layer atoms of the QWR's are bound to the InP substrate, while the upper-layer atoms and the GaAs capping-layer atoms are allowed to move freely. This structure corresponds to the unclamped structure described in Ref. 10. For different period lengths L of the QWR's, the strain distribution profiles are qualitatively similar to those shown in Ref. 10. As L decreases, the hydrostatic strain in the In-rich region (i.e., the right half of the QWR unit cell) increases, while it decreases in the Ga-rich region. The biaxial strain shows the opposite trend with L. The variation of the hydrostatic and biaxial strains with decreasing QWR period is reflected in the potential profiles: the difference between the conduction-band and valence-band edges increases, as can be seen in Fig. 2. It can be easily understood that the shear strains increase when L is reduced. The potential profiles due to strain-induced lateral ordering seen by an electron in the two QWR structures considered here (L = 50 a[110] and L = 40 a[110]) are shown in Fig. 2. (more discussions...)

The conduction subband structures of the self-assembled QWR's with alloy mixing (x_m = 1.0 and b = 8 a[110]) for L = 50 a[110] and L = 40 a[110] are shown in Fig. 3. All subbands are grouped in pairs with a weak spin splitting (not resolved on the scale shown). For L = 50 a[110], the lowest three pairs of subbands are nearly dispersionless along the k2 direction, indicating the effect of strong lateral confinement. The inter-subband transition between the first two pairs gives rise to the dominant IR response at a photon energy around 60 meV. For L = 40 a[110], only the lowest pair of subbands (CB1) is laterally confined (with a weak k2 dispersion). The higher subbands correspond to laterally unconfined states (but remain confined along the growth axis), and they have large dispersion versus k2. We find that three pairs of subbands (CB2-CB4) are closely spaced in energy (within 5 meV?). (State origin of degeneracy??)

The valence subband structures of the self-assembled QWR's with alloy mixing (x_m = 1.0 and b = 8 a[110]) for L = 50 a[110] and L = 40 a[110] are shown in Fig. 4. (more discussions??)

A. Inter-subband absorption

The inter-subband absorption spectrum is the most relevant quantity in determining the usefulness of self-assembled QWR's for application in IR detection.
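The spectra discussed below are obtained by broadening each discrete transition with the Lorentzian of Eq. (7). A minimal sketch of this step (the transition energy used is a peak position quoted below; the weight stands in for the squared optical matrix element and is an arbitrary placeholder):

```python
import numpy as np

def lorentzian(e, e_i, gamma=0.01):
    """Lorentzian replacement for the delta function, Eq. (7); gamma is the half-width in eV."""
    return gamma / (np.pi * ((e_i - e) ** 2 + gamma ** 2))

def broadened_spectrum(transition_energies, weights, e_grid, gamma=0.01):
    """Smooth spectrum: a weighted sum of Lorentzians, one per discrete transition."""
    spec = np.zeros_like(e_grid)
    for e_i, w in zip(transition_energies, weights):
        spec += w * lorentzian(e_grid, e_i, gamma)
    return spec

# Single illustrative transition at 65 meV (the L = 72 a[110] peak position quoted below).
e_grid = np.linspace(0.0, 0.2, 2001)
spec = broadened_spectrum([0.065], [1.0], e_grid)
print(e_grid[spec.argmax()])  # peaks at 0.065 eV, with full width 2*gamma = 0.02 eV
```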
Fig. 5 shows the calculated inter-subband absorption spectra of the self-assembled QWR structure (as depicted in Fig. 1) for three different lengths of period: L = 72, 50, and 40 a[110] (approximately 300 Å, 200 Å, and 160 Å, respectively). In the calculation, we assume that these QWR structures are n-type doped with a linear carrier density around 1.65×10^6 cm^-1 (which corresponds to a Fermi level around 25 meV above the conduction band minimum). For comparison purposes, we show results for the polarization vector along both the [110] (solid curves) and [001] (dashed curves) directions. The results for [1-10] polarization are zero due to the strict translational invariance imposed in our model calculation.

The peak positions for the inter-subband transition with normal incidence (with [110] polarization) are around 65 meV, 75 meV, and 110 meV for the three cases considered here. All of these are within the desirable range for IR detection. As expected, the transition energy increases as the length of period decreases, due to the increased degree of lateral confinement. However, the transition energy will saturate at around 110 meV as we further reduce the length of period, since the bound-to-continuum transition is already reached at L = 40 a[110]. The absorption strengths for the first two cases (L = 72 a[110] and L = 50 a[110]) are reasonably strong (around 400 cm^-1 and 200 cm^-1, respectively). They both correspond to bound-to-bound transitions. In contrast, the absorption strength for the third case is somewhat weaker (around 50 cm^-1), since it corresponds to a bound-to-continuum transition. For comparison, the absorption strength for typical III-V QWIP's is around ??

The inter-subband absorption for the [001] polarization is peaked around ?? meV. The excited state involved in this transition is a quantum-confined state due to the Al0.24Ga0.24In0.52As barriers. Thus, it has the same physical origin as the inter-subband transition used in typical QWIP structures. Although this peak is not useful for IR detection with normal incidence, it can be used for second-color detection if one puts a diffraction grating on the surface, as is typically done in the fabrication of QWIP's.
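These peak absorption coefficients can be translated into the detection quantum efficiency η = 1 − e^(−αl) quoted in the introduction. The sketch below does this for an assumed effective optical path l; the 10 µm value is purely illustrative, since the actual path depends on the device geometry (e.g., multiple passes or a waveguide).

```python
import numpy as np

# Peak inter-subband absorption coefficients quoted above for the three periods (cm^-1).
alpha = {"L = 72 a[110]": 400.0, "L = 50 a[110]": 200.0, "L = 40 a[110]": 50.0}

l_cm = 10e-4  # assumed effective optical path of 10 um, purely for illustration

for case, a in alpha.items():
    eta = 1.0 - np.exp(-a * l_cm)
    print(f"{case}: eta = {eta:.3f}")  # ~0.33, ~0.18, ~0.05
```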
B. Inter-band absorption

The inter-band optical transitions are important for the characterization of self-assembled QWR's, since they are readily observable via photoluminescence (PL) or optical transmission experiments. For IR-detector application, they offer another absorption peak at mid-IR wavelengths, which can be used together with the inter-subband transitions occurring at far-IR wavelengths for multi-colored detection. Thus, to understand the full capability of the self-assembled QWR material, we also need to analyze the inter-band absorption.

Fig. 6 shows the squared optical matrix elements versus k2 for the two self-assembled QWR's considered in the previous section (with L = 50 and 40 a[110]). For the case with L = 50 a[110], the optical matrix elements for both [110] and [1-10] polarizations are strong, with a polarization ratio P[1-10]/P[110] around 2. This is similar to the case with L = 72 a[110] as reported in Ref. xx. For the case with L = 40 a[110], the optical matrix elements for both [110] and [1-10] polarizations are weak. This is due to the fact that the electrons and holes are laterally confined in different regions of the QWR, as already indicated in the potential profile shown in Fig. 2(b). Thus, the inter-band absorption for this case will be uninteresting.

Fig. 7 shows the inter-band absorption spectra for SILO QWR's with L = 72 and 50 a[110], including the excitonic effects. The PL properties of the L = 72 a[110] structure with alloy mixing characterized by x_m = 0.1 and b = 8 a[110] have been studied in our previous paper. The QWR structure has a gap around 0.74 eV with a PL polarization ratio (P[1-10]/P[110]) around 3.1. The absorption coefficient for this structure has a peak strength around 250 cm^-1. The binding energy of the ground-state exciton labeled 1-1 (derived primarily from the top valence subband and the lowest conduction subband) is around 20 meV. Thus, the peak position in the absorption spectrum shifts from 0.76 eV (without the excitonic effect) to 0.74 eV (with the excitonic effect). The excitonic effect also enhances the peak strength from 200 cm^-1 to 250 cm^-1. The other peak structures (labeled 2-2, 2-3, ... etc.) are derived primarily from transitions between the lower valence subbands and the higher conduction subbands.

For the QWR structure with L = 50 a[110], we obtain a similar absorption spectrum with a peak strength around 400 cm^-1 (??). The exciton binding energy is around 40?? meV, and the excitonic enhancement factor of the first peak is around 1.15 (??), higher than in the case with L = 72 a[110]. This indicates that the case with L = 50 a[110] has stronger lateral confinement for electrons and holes, which leads to a larger exciton binding energy and a stronger exciton oscillator strength (due to the larger probability that the electron and hole appear at the same position). The secondary peaks due to excitonic states derived from higher subbands are also substantially stronger than their counterparts in the L = 72 a[110] case.

IV. SUMMARY AND DISCUSSIONS

We have studied the inter-subband and inter-band absorption spectra of self-assembled InGaAs quantum wires for consideration in IR-detector applications. Detailed band structures, microscopic strain distributions, and excitonic effects have all been taken into account. A number of realistic structures grown via the strain-induced lateral ordering process are examined. We find that self-assembled InGaAs quantum wires are good candidates for multi-colored IR detector materials. They offer two groups of strong IR absorption peaks: one in the far-IR range with wavelengths covering 10-20 µm (via the inter-subband transition), the other in the mid-IR range with wavelengths centered around 1.5 µm (via the inter-band transition). Due to the strain-induced lateral modulation, the inter-subband transition is strong for normally incident light with polarization along the
direction of lateral modulation ([110]). This gives the self-assembled InGaAs quantum wires a distinct advantage over quantum well systems for application in IR detection. The inter-subband absorption is found to be sensitive to the length of period (L) of the lateral modulation, with the absorption peak position varying from 60 meV to 110 meV as the length of period is reduced from 300 Å to 160 Å. However, further reduction in the length of period does not shift the absorption peak very much, as the excited states become laterally unconfined. For the inter-band transition, we find that the excitonic effect enhances the absorption peak strength by about 10-20% and shifts the peak position by about 20-40 meV for the structures considered. The reduction in the period length (L) leads to stronger lateral confinement, hence larger exciton binding and stronger absorption strength. This work is thus intended to provide realistic guidance for the growth of such IR detectors and to offer useful physical insight to both theorists and experimentalists. In conclusion, we have demonstrated that self-assembled quantum wires are promising IR-detector materials, and we have provided theoretical modeling of the optical characteristics of realistic QWR structures, which can be used to guide future fabrication of quantum wire infrared detectors.

REFERENCES

1. A. R. Adams, Electron. Lett. 22, 249 (1986).
2. A. C. Gossard, P. M. Petroff, W. Weigman, R. Dingle, and A. Savage, Appl. Phys. Lett. 29, 323 (1976); E. E. Mendez, L. L. Chang, C. A. Chang, L. F. Alexander, and L. Esaki, Surf. Sci. 142, 215 (1984).
3. Y. C. Chang and J. N. Schulmann, Appl. Phys. Lett. 43, 536 (1983); Phys. Rev. B 31, 2069 (1985).
4. G. D. Sanders and Y. C. Chang, Phys. Rev. B 31, 6892 (1985); 32, 4282 (1985); 35, 1300 (1987).
5. R. B. Zhu and K. Huang, Phys. Rev. B 36, 8102 (1987).
6. R. B. Zhu, Phys. Rev. B 37, 4689 (1988).
7. Hanyou Chu and Y. C. Chang, Phys. Rev. B 39, 10861 (1989).
8. S. T. Chou, K. Y. Cheng, L. J. Chou, and K. C. Hsieh, Appl. Phys. Lett. 17, 2220 (1995); J. Appl. Phys. 78, 6270 (1995); J. Vac. Sci. Tech. B 13, 650 (1995); K. Y. Cheng, K. C. Hsieh, and J. N. Baillargeon, Appl. Phys. Lett. 60, 2892 (1992).
9. L. X. Li and Y. C. Chang, J. Appl. Phys. 84, 6162 (2000).
10. L. X. Li, S. Sun, and Y. C. Chang, J. Appl. Phys., 2001 (in print).
11. Y. Miyake, H. Hirayama, K. Kudo, S. Tamura, S. Arai, M. Asada, Y. Miyamoto, and Y. Suematsu, J. Quantum Electron. QE-29, 2123-2131 (1993).
12. E. Kapon, S. Simhony, J. P. Harbison, L. T. Florez, and P. Worland, Appl. Phys. Lett. 56, 1825-1827 (1990).
13. K. Uomi, M. Mishima, and N. Chinone, Appl. Phys. Lett. 51, 78-80 (1987).
14. Y. Arakawa and A. Yariv, J. Quantum Electron. QE-22, 1887-1899 (1986).
15. D. E. Wohlert, S. T. Chou, A. C. Chen, K. Y. Cheng, and K. C. Hsieh, Appl. Phys. Lett. 17, 2386 (1996).
16. D. E. Wohlert and K. Y. Cheng, Appl. Phys. Lett. 76, 2249 (2000).
17. D. E. Wohlert and K. Y. Cheng, private communications.
18. Y. Tang, H. T. Lin, D. H. Rich, P. Colter, and S. M. Vernon, Phys. Rev. B 53, R10501 (1996).
19. Y. Zhang and A. Mascarenhas, Phys. Rev. B 57, 12245 (1998).
20. L. X. Li and Y. C. Chang, J. Appl. Phys. 84, 6162 (1998).
21. S. T. Chou, K. Y. Cheng, L. J. Chou, and K. C. Hsieh, Appl. Phys. Lett. 17, 2220 (1995); J. Appl. Phys. 78, 6270 (1995); J. Vac. Sci. Tech. B 13, 650 (1995); K. Y. Cheng, K. C. Hsieh, and J. N.
Baillargeon, Appl. Phys. Lett. 60, 2892 (1992).
22. D. E. Wohlert, S. T. Chou, A. C. Chen, K. Y. Cheng, and K. C. Hsieh, Appl. Phys. Lett. 17, 2386 (1996).
23. D. E. Wohlert and K. Y. Cheng, Appl. Phys. Lett. 76, 2247 (2000).
24. Y. C. Chang, Phys. Rev. B 37, 8215 (1988).
25. J. W. Matthews and A. E. Blakeslee, J. Cryst. Growth 27, 18 (1974).
26. G. C. Osbourn, Phys. Rev. B 27, 5126 (1983).
27. D. S. Citrin and Y. C. Chang, Phys. Rev. B 43, 11703 (1991).
28. E. O. Kane, J. Phys. Chem. Solids 1, 82 (1956).

Figure Captions

Fig. 1. Schematic sketch of the unit cell of the self-assembled quantum wire for the model structure considered. Each unit cell consists of 8 pairs of (2/2.25) GaAs/InAs short-period superlattices (SPS). In this structure, four pairs of (2/2.25) SPS (or 17 diatomic layers) form a period, and the period is repeated twice in the unit cell. Filled and open circles indicate Ga and In rows (each row extends infinitely along the [1-10] direction).

Fig. 2. Conduction-band and valence-band edges for the self-assembled QWR structure depicted in Fig. 1 for (a) L = 50 a[110] and (b) L = 40 a[110]. Dashed: without alloy mixing. Solid: with alloy mixing described by x_m = 1.0 and b = 8 a[110].

Fig. 3. Conduction subband structure of the self-assembled QWR for (a) L = 50 a[110] and (b) L = 40 a[110] with x_m = 1.0 and b = 8 a[110].

Fig. 4. Valence subband structure of the self-assembled QWR for (a) L = 50 a[110] and (b) L = 40 a[110] with x_m = 1.0 and b = 8 a[110].

Fig. 5. Inter-subband absorption spectra of the self-assembled QWR for (a) L = 72 a[110], (b) L = 50 a[110], and (c) L = 40 a[110] with x_m = 1.0 and b = 8 a[110]. Solid: [110] polarization; dashed: [001] polarization.

Fig. 6. Squared inter-band optical matrix elements versus k1 of the self-assembled QWR's for (a) L = 50 a[110] and (b) L = 40 a[110] with x_m = 1.0 and b = 8 a[110].

Fig. 7. Inter-band absorption spectra of the self-assembled QWR's for (a) L = 72 a[110] and (b) L = 50 a[110] with x_m = 1.0 and b = 8 a[110]. Solid: [110] polarization with excitonic effect. Dotted: [1-10] polarization with excitonic effect. Dashed: [110] polarization without excitonic effect.
Olympus EZ Shot 3 Plus 25 G EUS Needle with Enhanced Maneuverability for Uncompromised Access to Any Lesion
Olympus Introduces EZ Shot 3 Plus 25 G EUS Needle with Enhanced Maneuverability for Uncompromised Access to Any Lesion, Consistent Performance to Potentially Reduce Procedural Costs and Procedure Time

Entire EZ Shot 3 Plus Line-Up Now Cleared for Fine Needle Biopsy

On September 14, 2018, Olympus, a global technology leader in designing and delivering innovative solutions for medical and surgical procedures, among other core businesses, announced the FDA clearance of its EZ Shot 3 Plus 25 G needle as well as an expanded indication for the EZ Shot 3 Plus product line-up for both fine needle aspiration (FNA) and fine needle biopsy (FNB). The uncompromised access, enhanced puncturability, predictable trajectory and distinct echogenicity of the EZ Shot 3 Plus line, combined with the new 25 G offering and expanded indication for FNA and FNB, can drive improved staging of disease and the potential to more easily connect patients to precision medicine options.

The EZ Shot 3 Plus 19 G, 22 G and new 25 G have been designed to be used with an Olympus endoscopic ultrasound (EUS) system for ultrasonically guided FNA and FNB of submucosal and extramural lesions within the gastrointestinal tract (i.e., pancreatic masses, mediastinal masses, perirectal masses and lymph nodes). FNB is an important advantage for physicians because it has been reported that the larger tissue samples it provides may enable more precise diagnosis and rapid cytodiagnosis, a technique for on-the-spot pathological diagnosis of tissues collected during surgery.

The EZ Shot 3 Plus 25 G, together with the EZ Shot 3 Plus 19 G and 22 G needles, completes the EZ Shot 3 Plus line of needles. EZ Shot 3 Plus benefits include:

• Tissue architecture: Unique Menghini needle tip design features sharp, continuous cutting edges to cleanly cut tissue specimens while preserving cellular architecture.
• Potential time and cost savings, increased efficiency: Uncompromised, accommodating access to lesions, even through difficult scope positions, is made possible by a combination of needle material and multi-layer coiled sheath. Smooth needle passage and responsiveness to handle motion are achieved by way of the multi-layer coiled sheath and needle flexibility.
• More precise targeting of samples: Flexible needle design, smooth cutting edge, and distinct echogenicity (for a distinct hyperechoic appearance on ultrasound) combine for precise access, clean cuts and visualization of the target lesion.

“I have been very impressed with the maneuverability of the EZ Shot 3 Plus line,” said Dr. Allan P. Weston, MD, FACG, Digestive Health Center of the Four States. “While the needle competes with others in its ability to obtain excellent tissue samples for proper diagnosis, it exceeds predecessors, due to its flexible sheath, in its ability to pass readily through the echoendoscope channel and access the target lesions without requiring adjustments of the up/down knob, right/left knob or scope tip position, thereby ultimately reducing procedure duration times and enhancing efficiency.”

“A needle is judged by the ability for the cytologist or pathologist to make a diagnosis based on the cells or tissues collected. We listened to our customers' challenges in access, sample volume, the ease of visualizing the needle on ultrasound and the needle's ability to retain its shape after multiple passes,” said Kurt Heine, Group Vice President of the Endoscopy Division at Olympus America Inc.
“We are proud to have addressed these combined challenges, producing a solution that fits squarely into the value-based paradigm of cost reduction with improved patient outcomes and satisfaction. The indication of not just FNA but now FNB is an important added tool in the fight against disease.”

Photo Caption: Olympus announced the FDA clearance of its EZ Shot 3 Plus 25 G needle and expanded indication for the EZ Shot 3 Plus line-up for both fine needle aspiration (FNA) and fine needle biopsy (FNB). The uncompromised access, enhanced puncturability, predictable trajectory and distinct echogenicity of the EZ Shot 3 Plus line, combined with the new 25 G offering and expanded indication for FNA and FNB, can drive improved staging of disease and the potential to more easily connect patients to precision medicine options.

Times Square Caption: Olympus 25 G needle & full EZ Shot 3 Plus line have FDA indication for FNB, FNA

# # #

About Olympus Medical Systems Group

Olympus is a global technology leader, crafting innovative optical and digital solutions in medical technologies; life sciences; industrial solutions; and cameras and audio products. Throughout our nearly 100-year history, Olympus has focused on being true to society and making people's lives healthier, safer and more fulfilling. Our Medical Business works with health care professionals to combine our innovative capabilities in medical technology, therapeutic intervention, and precision manufacturing with their skills to deliver diagnostic, therapeutic and minimally invasive procedures to improve clinical outcomes, reduce overall costs and enhance quality of life for patients.
An invisibility cloak fabricated from anisotropic calcite: macroscopic invisibility cloaking of visible light
Macroscopic invisibility cloaking of visible light

Xianzhong Chen1, Yu Luo2, Jingjing Zhang3, Kyle Jiang4, John B. Pendry2 & Shuang Zhang1

Nature Communications | Received 15 Sep 2010 | Accepted 4 Jan 2011 | Published 1 Feb 2011 | DOI: 10.1038/ncomms1176

A region of height H2, filled with an isotropic material of permittivity ε and permeability µ (µ = 1; blue region in Fig. 1a), is mapped to a quadrilateral region in the physical space with anisotropic electromagnetic properties ε′ and µ′ (brown region in Fig. 1b). Thus, the cloaked region is defined by the small grey triangle of height H1 and half-width d. Mathematically, the transformation is defined by a linear coordinate map between the two regions, so the required ε′ and µ′ are spatially uniform.
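For concreteness, the following is a sketch of the kind of linear map used for such triangular carpet-cloak geometries, written in the notation above. It is an illustration of the standard construction (taking the virtual region to be a triangle of height H2 with base half-width d resting on the ground plane), not a quotation of the article's own equation. For |x| ≤ d,

\[ x' = x, \qquad y' = \frac{H_2 - H_1}{H_2}\, y + H_1\left(1 - \frac{|x|}{d}\right), \qquad z' = z, \]

which maps the ground plane y = 0 onto the bump surface of height H1 while leaving the outer boundary of the triangle fixed. Because the map is linear within each half of the structure, the resulting ε′ and µ′ are homogeneous there, which is what allows an implementation with a natural birefringent crystal.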
University training material on optics
Historical Overview
Discover the milestones and key discoveries that have shaped the field of optics throughout history.
Modern Applications
Learn about the wide range of modern applications of optics, from telecommunications to imaging technology.
Endoscopy
Learn how fiber optics enables minimally invasive procedures and allows doctors to visualize internal organs.
Polarization of Light
Polarizing Filters
Future Trends in Optics
Quantum Optics
Explore the emerging field of quantum optics and its revolutionary applications in computing, communication, and cryptography.
Diagnostic Imaging

Discover how optics plays a vital role in medical imaging techniques like X-rays, CT scans, and MRI.

Laser Surgery

Explore the use of lasers in various surgical procedures, including laser eye surgery and skin treatments.
Computer Vision and Pattern Recognition
Tyzx DeepSea High Speed Stereo Vision System

John Iselin Woodfill, Gaile Gordon, Ron Buck
Tyzx, Inc., 3885 Bohannon Drive, Menlo Park, CA 94025

Abstract

This paper describes the DeepSea Stereo Vision System, which makes the use of high speed 3D images practical in many application domains. This system is based on the DeepSea processor, an ASIC which computes absolute depth based on simultaneously captured left and right images with high frame rates, low latency, and low power. The chip is capable of running at 200 frames per second with 512x480 images, with only 13 scan lines of latency between data input and first depth output. The DeepSea Stereo Vision System includes a stereo camera, onboard image rectification, and an interface to a general purpose processor over a PCI bus. We conclude by describing several applications implemented with the DeepSea system, including person tracking, obstacle detection for autonomous navigation, and gesture recognition.

In Proceedings of the IEEE Computer Society Workshop on Real Time 3-D Sensors and Their Use, Conference on Computer Vision and Pattern Recognition, (Washington, D.C.), June 2004.

1. Introduction

Many image processing applications require, or are greatly simplified by, the availability of 3D data. This rich data source provides direct absolute measurements of the scene. Object segmentation is simplified because discontinuities in depth measurements generally coincide with object borders. Simple transforms of the 3D data can also provide alternative virtual viewpoints of the data, simplifying analysis for some applications.

Stereo depth computation, in particular, has many advantages over other 3D sensing methods. First, stereo is a passive sensing method. Active sensors, which rely on the projection of some signal into the scene, often pose high power requirements or safety issues under certain operating conditions. They are also detectable, an issue in security or defense applications. Second, stereo sensing provides a color or monochrome image which is exactly (inherently) registered to the depth image. This image is valuable in image analysis, either using traditional 2D methods or novel methods that combine color and depth image data. Third, the operating range and Z resolution of stereo sensors are flexible because they are simple functions of lens field-of-view, lens separation, and image size. Almost any operating parameters are possible with an appropriate camera configuration, without requiring any changes to the underlying stereo computation engine. Fourth, stereo sensors have no moving parts, an advantage for reliability.

High frame rate and low latency are critical factors for many applications which must provide quick decisions based on events in the scene. Tracking moving objects from frame to frame is simpler at higher frame rates because relative motion is smaller, creating less tracking ambiguity. In autonomous navigation applications, vehicle speed is limited by the speed of the sensors used to detect moving obstacles. A vehicle traveling at 60 mph covers 88 ft in a second. An effective navigation system must monitor the vehicle path for new obstacles many times during this 88 feet to avoid collisions. It is also critical to capture 3D descriptions of potential obstacles to evaluate their location and trajectory relative to the vehicle path and whether their size represents a threat to the vehicle. In safety applications such as airbag deployment, the 3D position of vehicle occupants must be understood to determine whether an airbag can be safely deployed, a decision that
must be made within tens of milliseconds.

Computing depth from two images is a computationally intensive task. It involves finding, for every pixel in the left image, the corresponding pixel in the right image. The correct corresponding pixel is defined as the pixel representing the same physical point in the scene. The distance between two corresponding pixels in image coordinates is called the disparity and is inversely proportional to distance. In other words, the nearer a point is to the sensor, the more it will appear to shift between left and right views. In dense stereo depth computation, finding a pixel's corresponding pixel in the other image requires searching a range of pixels for a match. As image size, and therefore pixel density, increases, the number of pixel locations searched must increase to retain the same operating range. Therefore, for an NxN image, the stereo computation scales approximately as O(N^3). Fortunately, the search at every pixel can be effectively parallelized.

Tyzx has developed a patented architecture for stereo depth computation and implemented it in an ASIC called the DeepSea Processor. This chip enables the computation of 3D images with very high frame rates (up to 200 fps for 512x480 images) and low power requirements (1 watt), properties that are critical in many applications. We describe the DeepSea Processor in Section 2. The chip is the basis for a stereo vision system, which is described in Section 3. We then describe several applications that have been implemented based on this stereo vision system in Section 4, including person tracking, obstacle detection for autonomous navigation, and gesture recognition.

2. DeepSea Processor

The design of the DeepSea ASIC is based on a highly parallel, pipelined architecture [6, 7] that implements the Census stereo algorithm [8]. As the input pixels enter the chip, the Census transform is computed at each pixel based on the local neighborhood, resulting in a stream of Census bit vectors. At every pixel a summed Hamming distance is used to compare the Census vectors around the pixel to those at 52 locations in the other image. These comparisons are pipelined, with 52 comparisons occurring simultaneously.
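As a rough software illustration of the matching principle that the ASIC implements in hardware, the sketch below computes a Census transform and a brute-force Hamming-distance disparity search. The window size is an illustrative choice; cost aggregation over a support window, subpixel interpolation, the left/right check, and proper border handling are all omitted for brevity.

```python
import numpy as np

def census_transform(img, win=7):
    """Census transform: one bit per neighbor, set where the neighbor is brighter
    than the window center (win x win window, center excluded)."""
    h, w = img.shape
    r = win // 2
    codes = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # wraps at borders
            codes = (codes << np.uint64(1)) | (neighbor > img).astype(np.uint64)
    return codes

def disparity_map(left, right, max_disp=52):
    """Per-pixel winner-take-all disparity from Hamming distances between Census codes.
    Assumes rectified images, so the search is restricted to the same scan line."""
    cl, cr = census_transform(left), census_transform(right)
    h, w = left.shape
    best_cost = np.full((h, w), np.iinfo(np.int32).max, dtype=np.int32)
    best_disp = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        shifted = np.roll(cr, d, axis=1)  # right-image pixel (x - d) aligned with left pixel x
        hamming = np.array([bin(v).count("1") for v in (cl ^ shifted).ravel()],
                           dtype=np.int32).reshape(h, w)
        better = hamming < best_cost
        best_cost[better] = hamming[better]
        best_disp[better] = d
    return best_disp

# Usage: left and right are rectified 2-D grayscale numpy arrays of equal shape.
# disp = disparity_map(left, right)
```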
The best match (shortest summed Hamming distance) is located with five bits of subpixel precision. The DeepSea Processor converts the resulting pixel disparity to metric distance measurements using the stereo camera's calibration parameters and the depth units specified by the user.

Under specific imaging conditions, the search for correspondence can be restricted to a single scan line rather than a full 2D window. This simplification is possible, in the absence of lens distortion, when the imagers are coplanar, their optical axes are parallel, and corresponding scan lines are co-linear. The DeepSea processor requires rectified imagery (see Section 3) to satisfy these criteria with real-world cameras.

The DeepSea Processor also evaluates a number of "interest operators" and validity checks which are taken into account to determine the confidence of a measurement. One example is the left/right check. A correct measurement should have the same disparity whether the search is initiated in the left image or the right image. Different results indicate an invalid measurement. This check is expensive in software, but easily performed in the DeepSea Processor.

2.1. Advantages of the Census Transform

One problem that makes determining stereo correspondence difficult is that the left and right images come from distinct imagers and viewpoints, and hence corresponding regions in the two images may have differing absolute intensities resulting from distinct internal gains and biases, as well as distinct viewing angles.

The DeepSea Processor uses the Census transform as its essential building block to compare two pixel neighborhoods. The Census transform represents a local image neighborhood in terms of its relative intensity structure. Figure 1 shows that pixels that are darker than the center are represented as 0's, whereas pixels brighter than the center are represented by 1's. The bit vector is output in row-major order. Comparisons between two such Census vectors are computed as their Hamming distance.

Figure 1: Census transform: pixels darker than the center are 0's in the bit vector, pixels brighter than the center are 1's (X = pixel to match, 1 = brighter than X, 0 = same or darker than X).

Because the Census transform is based on the relative intensity structure of each image, it is invariant to gain and bias in the imagers. This makes the stereo matching robust enough that the left and right imagers can, for example, run independent exposure control without impacting the range quality. This is a key advantage for practical deployment. Independent evaluations of stereo correlation methods [5, 2, 1] have found that Census performs better than classic correlation approaches such as normalized cross correlation (NCC), sum of absolute differences (SAD), and sum of squared differences (SSD).

2.2. DeepSea Processor Performance

The figure of merit used to evaluate the speed of stereo vision systems is Pixel Disparities per Second (PDS). This is the total number of pixel-to-pixel comparisons made per second. It is computed from the area of the image, the width of the disparity search window in pixels, and the frame rate.
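For example, plugging in the DeepSea specifications quoted below (512x480 pixels, a 52-pixel disparity search, 200 fps):

\[ \mathrm{PDS} = 512 \times 480 \times 52 \times 200\ \mathrm{fps} \approx 2.6 \times 10^{9}. \]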
The DeepSea Processor is capable of 2.6 billion PDS. This is faster than any other stereo vision system we are aware of by an order of magnitude or more. Additional performance details are summarized in Figure 2.

Figure 2: DeepSea Processor specifications: image size 512x2048 (10 bit); 52 disparities; 5 bits subpixel; 16 bit; 200 fps (512x480); 1 watt.

Figure 3: DeepSea Board.

3. DeepSea Stereo System

The DeepSea Development System is a stereo vision system that employs the DeepSea Processor and is used to develop new stereo vision applications. The DeepSea Stereo Vision System consists of a PCI board which hosts the DeepSea Processor, a stereo camera, and a software API that allows the user to interact with the board in C++ from a host PC. Color and range images are transferred to the host PC's memory via DMA.

The DeepSea board (shown in Figure 3) performs communication with a stereo camera, onboard image rectification, and interfaces to a general purpose processor over the PCI bus. Since the host PC does not perform image rectification and stereo correlation, it is available for running the user's application code.

Figure 4: Tyzx Stereo Camera Family: 5 cm, 22 cm, and 33 cm baselines.

The DeepSea stereo cameras are self-contained stereo cameras designed to work in conjunction with the DeepSea Board. DeepSea stereo cameras connect directly to the board using a high-speed stereo LVDS link. By directly connecting to the DeepSea system, latency is reduced and the host's PCI bus and memory are not burdened with frame-rate raw image data. Tyzx has developed a family of stereo cameras which includes 5 cm, 22 cm, and 33 cm lens separations (baselines), as shown in Figure 4. A variety of standard CMOS imagers are used based on application requirements for resolution, speed, color, and shutter type. Each camera is calibrated to define basic imager and lens parameters such as lens distortion and the exact relationship between the imagers. These calibration parameters are used by the system to rectify the images. After this transformation the images appear distortion free, with co-planar image planes and corresponding scan lines aligned.

The frame rate of any given system configuration will vary based on the capabilities of the imager. Common configurations include:

Omnivision: image sizes 400x300 to 512x512, frame rates of 30 fps to 60 fps, color
National: image sizes 320x240 to 512x480, frame rate of 30 fps, high dynamic range
Micron: image sizes 320x240 to 512x480, frame rates of 47 fps to 90 fps, freeze-frame shutter

4. Applications of Tyzx Stereo Sensors

Tracking people in the context of security systems is one application that is ideal for fast 3D sensors. The Tyzx distributed 3D person tracking system was the first application built based on the Tyzx stereo system. The fast frame rates simplify the matching of each person's location from frame to frame. The direct measurements of the 3D location of each person create more robust results than systems based on 2D images alone. We also use a novel background modeling technique that makes full use of color and depth data [3, 4], which contributes to the robustness of tracking in the context of changing lighting. The fact that each stereo camera is already calibrated to produce absolute 3D measurements greatly simplifies the process of registering the cameras to each other and the world during installation. An example of tracking results is shown in Figure 5. The right image shows a plan view of the location of all the people in a large room; on the left these tracked locations are shown overlaid on the left view from each of four
networked stereo cameras.

Fast 3D data is also critical for obstacle detection in autonomous navigation. Scanning laser based sensors have been the de facto standard in this role for some time. However, stereo sensors now present a real alternative because of faster frame rates, full image format, and their passive nature. Figure 6 shows a Tyzx DeepSea stereo system mounted on the Carnegie Mellon Red Team's Sandstorm autonomous vehicle in the recent DARPA Grand Challenge race.

Figure 5: Tracking people in a large space based on a network of four Tyzx integrated stereo sensors.

Figure 6: CMU's Sandstorm autonomous vehicle. The Tyzx DeepSea stereo sensor is mounted in the white dome.

Sensing and reacting to a user's gestures is a valuable control method for many applications such as interactive computer games and tele-operation of industrial or consumer products. Any interactive application is very sensitive to the time required to sense and understand the user's motions. If too much time passes between the motion of the user and the reaction of the system, the application will seem sluggish and unresponsive. Consider for example the control of a drawing program. The most common interface used for this task is a mouse, which typically reports its position 100 to 125 times per second. In Figure 7 we show an example in which a user controls a drawing application with her fingertip instead of a mouse. The 3D position of the finger tip is computed from a stereo range image and a greyscale image. The finger position is tracked relative to the plane of the table top. When the finger tip approaches the surface of the table, the virtual mouse button is depressed. Motion of the finger tip along the table is considered a drag. The finger moving away from the surface is interpreted as the release of the button. In this application, a narrow baseline stereo camera is used to achieve an operating range of 2 to 3 feet from the sensor with millimeter-scale 3D spatial accuracy.

Figure 7: Using 3D tracking of the finger tip as a virtual mouse to control a drawing application. Inset shows the left image view.

Figure 8: Tyzx Integrated Stereo System. Self-contained stereo camera, processor, and general purpose CPU.

5. Future Directions and Conclusions

Tyzx high speed stereo systems make it practical to bring high speed 3D data into many applications. Integration of the DeepSea Board with a general purpose processor creates a smart 3D sensing platform, reducing footprint, cost, and power, and increasing deployability. Figure 8 shows a prototype stand-alone stereo system incorporating a stereo camera, DeepSea Board, and a general purpose CPU. This device requires only power and ethernet connections for deployment. The processing is performed close to the image sensor, including both the computation of 3D image data and the application specific processing of these 3D images. Only the low bandwidth results, e.g. object location coordinates or dimensions, are sent over the network. Even higher performance levels and further reductions in footprint and power are planned. In the future we envision a powerful networked 3D sensing platform no larger than the stereo camera itself.

Acknowledgments

Tyzx thanks SAIC for providing the Tyzx stereo system to Carnegie Mellon for their Sandstorm Autonomous Vehicle.
References

[1] J. Banks, M. Bennamoun, and P. Corke, "Non-parametric techniques for fast and robust stereo matching," in Proceedings of IEEE TENCON, Brisbane, Australia, December 1997.
[2] S. Gautama, S. Lacroix, and M. Devy, "Evaluation of Stereo Matching Algorithms for Occupant Detection," in Proceedings of the International Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, pages 177-184, Sept. 1999, Corfu, Greece.
[3] G. Gordon, T. Darrell, M. Harville, J. Woodfill, "Background estimation and removal based on range and color," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (Fort Collins, CO), June 1999.
[4] M. Harville, G. Gordon, J. Woodfill, "Foreground Segmentation Using Adaptive Mixture Models in Color and Depth," Proceedings of the IEEE Workshop on Detection and Recognition of Events in Video, (Vancouver, Canada), July 2001.
[5] Heiko Hirschmüller, "Improvements in Real-Time Correlation-Based Stereo Vision," Proceedings of the IEEE Workshop on Stereo and Multi-Baseline Vision, pp. 141-148, December 2001, Kauai, Hawaii.
[6] J. Woodfill, B. Von Herzen, "Real-Time Stereo Vision on the PARTS Reconfigurable Computer," Proceedings of the IEEE Symposium on Field-Programmable Custom Computing Machines, Napa, pp. 242-250, April 1997.
[7] Woodfill, Baker, Von Herzen, Alkire, "Data processing system and method," U.S. Patent number 6,456,737.
[8] R. Zabih, J. Woodfill, "Non-parametric Local Transforms for Computing Visual Correspondence," Third European Conference on Computer Vision, (Stockholm, Sweden), May 1994.
Large aperture telescope for advanced lidar system
Large aperture telescope for advanced lidar system

Francesca Simonetti, Alessandro Zuccaro Marchi, Lisa Gambicorti, Vojko Bratina, Piero Mazzinghi
National Institute of Optics - National Research Council (INO-CNR), Largo Enrico Fermi 6, 50125 Firenze, Italy
E-mail: francesca.simonetti@ino.it

Abstract. The final optical design for a space-borne light detection and ranging (lidar) mission is presented, in response to the European Space Agency "Advanced lidar concepts" proposal for use of a differential absorption lidar system to measure the water vapor distribution in the atmosphere at 935.5 nm. The telescope adopts a double afocal concept (i.e., four reflections with two mirrors) using a lightweight and large aperture primary mirror. It is derived from a feasibility study that compares several different optical configurations, taking into account parameters such as cost, dimensions, complexity, and technological feasibility. The final telescope optical design is described in detail, highlighting a trade-off with other solutions and its optical tolerances. [DOI: 10.1117/1.3461976]

Subject terms: reflective optical system; large aperture telescope; space optics; lidar.

Paper 090508RR received Jul. 9, 2009; revised manuscript received Mar. 30, 2010; accepted for publication May 17, 2010; published online Jul. 14, 2010.

1 Introduction

The European Space Agency (ESA) call for "Advanced Lidar Concepts" has been met with a proposal in the form of a feasibility study for the development of advanced technologies of a light detection and ranging (lidar) system, devoted mainly to the implementation of telescopes with large aperture and active control, in order to retrieve a sufficient signal from the laser transmitter.

The main advantages of using lidar systems from space are their high spatial, temporal, and spectral resolution. Successful space-based lidar missions have already generated detailed profilometry of clouds and aerosols over Earth and exquisite topographical maps.

Large collection area receivers are needed for space-based lidars: with laser power transmission inversely proportional to the aperture size, a large aperture receiver telescope allows us to decrease the necessary laser power.
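As a simple illustration of this scaling (the 1 m2 reference aperture is arbitrary, chosen only for comparison): if a 1 m2 receiver requires laser power P_0 for a given detected signal, the 7 m2 effective collecting area targeted in this study requires only

\[ P \simeq P_0 \times \frac{1\ \mathrm{m^2}}{7\ \mathrm{m^2}} \approx 0.14\, P_0 . \]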
Unfortunately, these systems are expensive to produce and must be suitable for launch into small fairings, typical of small satellites. Furthermore, the mirrors must guarantee structural integrity during launch and operations, and therefore a stiff and lightweight structure must be realized. The resulting concept for the primary mirror is a segmented geometry with a mass/area density goal < 16 kg/m2: a central portion, locked to the bus of the satellite, surrounded by a set of petals stored during launch and to be opened before the operational phase in space. In addition, the materials selected must also have good thermal characteristics in order to minimize the distortion of the mirror shape.

In this context, this paper presents a description of preliminary candidate designs of the optical configuration for the lidar receiver subsystem. After an analysis of the ESA requirements, a brief description of the evolution of the design is given, analyzing the telescope concepts together with the auxiliary optics for the filter and wavefront sensor stages. Some of the requirements, such as those for the filter, have been achieved even for the early stages of the layout. The overall constraints, such as dimensions, complexity, and technological feasibility, have been considered and optimized step by step, up to the final configuration, which combines the optical performance with the satisfaction of all the requirements. A trade-off analysis shows how this design (the "baseline") has been chosen, for which a particular effort in optical tolerancing has been dedicated. As will be described, the baseline shows near diffraction-limited optical performance and an interesting combination of the other constraints. For this reason, the technology proposed in this paper, and generically for this ESA project, can potentially be extended even further, beyond the specifications of the present project and toward a more generic usage, for instance in large space telescopes for astronomical applications needing high-performance imaging. The complete characterization of the whole structure has been accomplished, leaving the description of the developed technologies to other papers [1].

2 Requirements Analysis

The definition of the optical design presented here has been driven by the scientific requirements for the analysis of the water vapor distribution in the atmosphere from space-based lidar instrumentation (Table 1). All the requirements in terms of input aperture, field of view (FOV), output angle, and image size can be achieved simultaneously for water vapor emission, because they are compliant with the optical invariant. Indeed, the Helmholtz invariant [2] requires D_i · α_i ≤ D_out · α_out, where α_i is the FOV (corresponding to 0.0033 deg), and D_out and α_out are respectively the beam diameter and the acceptance angle at the filter stage; by considering the requirement on the effective aperture area, D_i has been taken as 4000 mm in order to include the effect of the petals (i.e., each single petal has 1 m2 area, and the central hexagonal portion is 2 m2).

During operation after launch and deployment, the λ/3 wavefront error requirement is achieved by assigning different values of wavefront error to every optical component preceding the filter, depending on the dimensions and types of optics. For instance, assigning the surface error contributions M1 = λ/8, M2 = λ/10, and M3 = λ/20, the overall wavefront error W satisfies the system specification W = λ/3.
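Both statements can be checked with quick arithmetic. For the invariant, using the values above and from Table 1,

\[ D_i\,\alpha_i = 4000\ \mathrm{mm} \times 57.6\ \mu\mathrm{rad} \approx 230\ \mathrm{mm\cdot mrad} \;\le\; D_{out}\,\alpha_{out} = 100\ \mathrm{mm} \times 6\ \mathrm{mrad} = 600\ \mathrm{mm\cdot mrad}. \]

For the error budget, assuming (as is conventional, though not stated explicitly here) that each mirror surface error contributes twice its value to the reflected wavefront and that independent contributions add in quadrature,

\[ W \simeq \sqrt{\left(\tfrac{\lambda}{4}\right)^2 + \left(\tfrac{\lambda}{5}\right)^2 + \left(\tfrac{\lambda}{10}\right)^2} = \lambda\sqrt{0.1125} \approx \frac{\lambda}{3}. \]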
The maximum allowed error is achievable for the primary mirror also thanks to the use of active control through suitable linear actuators [3] (1 mm stroke and 5 µm accuracy). This has been the topic of further studies on the overall space mission, including the deployment issue of the petals.

3 Selection of Optical System

A preliminary study considered two different lidar optical architectures: a multipupil system, i.e., an array of telescopes realizing the required clear aperture as the sum of the clear pupil diameters of each single telescope, and a single-pupil system [4-6], formed by one suitable single telescope. All the cited optical configurations have been analytically studied at first order and successively optimized at third order. The final optimizations and ray tracing simulations were performed with ZEMAX EE software (ZEMAX Development Corporation).

3.1 Multipupil System

This system is formed by a set of seven subsystems, each having 1/7 of the whole aperture. Therefore, all the small telescopes behave equally, directing collimated light into the filters. The filter stage can be thought of as the combination of one filter for each small telescope or, alternatively, as a single filter for the whole telescope array. The layouts are shown in Fig. 1, where in the second configuration the signal is brought by fibers. The optical configuration of the single telescope is an f/1 afocal Galilean design, formed by two confocal (one positive and one negative) parabolic mirrors. The telescope has the function of a beam reducer, from a 1000 mm entrance beam diameter to the required exit diameter (Fig. 2).

Table 1. Optical requirements for water vapor emission.
Wavelength: 935.5 nm
Field of view (FOV): 115 µrad
Beam diameter at filter stage: < 100 mm
Acceptance angle at filter stage: ±6 mrad
Collecting efficiency: > 95%
Transmission up to filter stage: > 80%
Obscuration: < 5%
Effective aperture area: > 7 m2
Wavefront error at filter stage: < λ/3
F/number at wavefront sensor: > 30

Fig. 1 Multipupil system: one filter stage per telescope (left); a single filter stage for the whole array (right).

Fig. 2 (a) On-axis afocal multipupil system layout and (b) spot diagram on filter stage, on-axis and maximum field, for one subsystem.

In this preliminary design stage, the solutions are compliant with the technical requirements of the mission, being practically diffraction limited and free of any geometrical aberration within the requested FOV and wavelength range. Although the one-filter case is more convenient for weight, tolerance issues, and costs, the case of separate filters for each telescope seems preferable, since the single-filter solution would need a nontrivial optical recombination stage.

3.2 Single-Pupil System

For the preliminary study, some optical single-pupil configurations with an effective collecting area corresponding to 7 m2 were proposed to satisfy the optical requirements: off-axis afocal and on-axis focal/afocal designs.

The off-axis afocal telescope is formed by three aspheric off-axis mirrors, meaning no obscuration but a higher realization risk and cost (Fig. 3). Still, the filter specifications (Table 1) are not completely fulfilled, even with a very small FOV, although this configuration could have been attractive in terms of the whole system of deployment, vehicle integration, and low sensitivity to mechanical tolerances.

The on-axis focal telescope is made of two aspheric mirrors, a beamsplitter, and a suitable set of lenses located before the telescope focal planes: one beam crosses a positive lens, which collimates the beam to the desired dimensions at the filter stage, and the other beam crosses a negative aspheric lens, which
As for the multipupil system, the single-pupil system can also be thought of as on-axis afocal, with a total collecting area of 7 m². In this case, the telescope is formed by two parabolic mirrors, which give an afocal image directly on the filter, reducing the collimated beam dimension from the primary mirror down to the filter size, as described earlier. This system satisfies the filter requirements. Being on-axis, these configurations are probably simpler to build and control than the off-axis one, and the optical, mechanical, and thermal tolerances can be identified more easily.

3.3 Single- versus Multipupil System

It is quite clear that the multipupil configuration is complex from the point of view of holding and aligning all the telescopes together with the filters, and the use of fibers, in the case of common relay optics up to the filter stage, can degrade the performance of the overall system. Moreover, this solution has high costs, volume, and mass, and general feasibility risks in space. Therefore, the development of the study shall be addressed to the single-pupil case, and the second part of this study is devoted to the analysis of particular single-pupil, on-axis afocal and focal configurations.

4 Candidate Optical Configurations

Different candidate on-axis optical solutions have been identified and are presented here: a focal telescope, an afocal Galilean telescope, a double afocal telescope (four mirrors), and a double afocal telescope (two mirrors). The figures relative to the proposed configurations show both the 2-D and the 3-D case, where the division into petals is evident. In the frame of the ESA project, in order to fulfill all the requirements of Table 1, the telescope must be coupled with some auxiliary optics. In particular, the auxiliary lens was chosen for the designs to obtain f/# ≥ 30 on a Pyramid wavefront sensor [7], since the smaller the angular size of the impinging light, the higher the measurement accuracy of the sensor.

4.1 Focal Telescope

The first studied design is the on-axis focal telescope. A possible design considers an f/10 telescope configuration followed by a negative lens that gives the f/30 condition at the wavefront sensor with diffraction-limited quality. After the beamsplitting, the collimating lens is conveniently located in order to achieve the suitable beam diameter at the filter stage (Fig. 4). The telescope is formed by a 4-m-diameter primary and a 0.3-m secondary aspheric mirror, with about 3 m between them. The negative lens must be aspheric for aberration reasons; the other optical path is folded to reduce the dimensions of the auxiliary optics, so that the overall longitudinal length is about 3.7 m.

Fig. 3 Off-axis afocal telescope layout: (a) general and (b) detail.

4.2 Afocal Galilean Telescope

The telescope is formed by two parabolic mirrors (reflective-Galilean type [8], also called Mersenne [8,9]), with a 4 m diameter primary, a 0.25 m secondary, and 3 m between them. It is followed by an aspheric lens that converges the beam toward a beamsplitter cube (Fig. 5), which in turn splits the light into two parts: one is focused onto the wavefront sensor, and the other is sent to a collimating spherical lens (through a folding mirror and an angular field diaphragm) and finally to the filter stage with the proper image quality (Table 1). The aspheric lens, beamsplitter cube, folding mirror, diaphragm, and spherical collimating lens form the auxiliary-optics subsystem, the two lenses being modeled in a refractive-Keplerian design [10]. The first diopter is made aspheric to partially correct the spherical aberrations of the system up to the needed optical quality.
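The f/10 to f/30 conversion by the negative lens in the focal design of Sec. 4.1 can be illustrated with the thin-lens (Barlow) relation; the lens-to-focus distance below is an assumed value, so the resulting focal length only indicates the scaling and is not the paper's actual prescription.

```python
# Thin-lens (Barlow-type) sketch: a negative lens placed a distance d inside the
# telescope focus multiplies the effective focal ratio by m, where the new image
# forms at m*d behind the lens and 1/(m*d) = 1/f_lens + 1/d (virtual object at d).
f_number_in = 10.0     # telescope alone (Sec. 4.1)
f_number_out = 30.0    # required at the wavefront sensor
m = f_number_out / f_number_in          # needed magnification = 3

d = 200.0              # mm, lens-to-original-focus distance (assumed value)
f_lens = d / (1.0 / m - 1.0)            # solve 1/(m*d) = 1/f_lens + 1/d
print(f"required lens focal length: {f_lens:.0f} mm")   # -300 mm (a negative lens)
print(f"new back focal distance   : {m * d:.0f} mm")    # 600 mm behind the lens
```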
Because of the design and the small field of view, astigmatism, field curvature, and distortion are almost zero, and the coma contribution is not corrected, but it does not influence the required performance. A disadvantage of this configuration is the overall size of the back optics: about 2 m long and 6 m wide. So designed, the telescope performance itself is not enough to obtain a collimated beam with the proper <100 mm diameter. This goal has been accomplished with the two lenses.

Fig. 4 On-axis focal telescope with auxiliary optics layout. (a) 2-D; (b) 3-D.
Fig. 5 Afocal telescope with auxiliary optics layout. (a) 2-D; (b) 3-D.

However, the dimensions of the auxiliary-optics system depend on the working conditions of the first lens: already an f/15 lens, which is out of the required working conditions of the wavefront sensor, makes the optical system cumbersome (Fig. 5). To maintain a compact instrumentation box, some other options may be possible, for instance folding the beams or considering a smaller secondary mirror (which could reach the f/# working condition), but this element would then become more critical.

4.3 Double Afocal Telescope (Four Mirrors)

To reduce the auxiliary-optics dimensions, another option for the telescope was configured: a "double afocal" telescope (four mirrors) [11]. This layout gathers two afocal telescopes (the first made of two parabolic mirrors, the second of two spherical ones) and a beamsplitter that divides the beam into two parts, one going to the filter stage (with image quality satisfying the requirements) and the other converging to the wavefront-sensor stage by means of a spherical lens (Fig. 6). The longitudinal length of the telescope is 3 m, the maximum aperture is 4 m, and the auxiliary optics is very compact thanks to three folding mirrors. The image quality is diffraction-limited (within an 80 µm Airy disk diameter) over the whole field of view (Fig. 7), while the Strehl ratios are 67% on-axis and 75% at maximum FOV (Fig. 8). The image quality at the filter stage also satisfies the requirements of Table 1. This solution collects the benefits of having many degrees of freedom to deal with, in the form of many independent optical surfaces. Indeed, the performances are achieved quite well. In addition, only one spherical lens is needed for focusing at the wavefront-sensor stage.

4.4 Double Afocal Telescope (Two Mirrors)

The idea is to reduce the number of mirrors (and therefore costs and weight) while keeping good performance at the filter stage: the telescope becomes a combination of only two parabolic mirrors with four reflections (two per mirror), another version of the double afocal telescope (Fig. 9), with a 1.5% obscuration area (see Table 1). Complexity, costs, mass, and the overall criticalities are consequently reduced. Like the previous configuration, after the afocal system a beamsplitter sends half of the beam to the filter stage and the other half to the wavefront-sensor stage, where it converges by means of a spherical lens. The auxiliary optics is very compact, as before. The innovation lies in the simplified afocal system: the exit beam diameter is <50 mm (from a 4000 mm entrance diameter, using just two parabolic mirrors), and even at maximum field the beam falls within the allowable filter-stage dimensions and acceptance angle (Table 2). In addition, f/# = 35 at the wavefront sensor is achieved through a simple spherical lens. This configuration gives good image quality: diffraction-limited over the whole field of view (80 µm Airy disk diameter) on the wavefront-sensor stage; image quality at the filter stage within the requirements (Fig. 10); and a 99.8% Strehl ratio on-axis and 94% at maximum FOV (Fig. 11).
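A back-of-the-envelope check of the two-mirror double afocal numbers, using only the afocal compression ratio and the Lagrange invariant, is sketched below (Python); it is an illustration, not the ray-trace result quoted in the text.

```python
# Afocal compression check for the two-mirror, four-reflection design.
d_in = 4000.0        # mm, entrance beam diameter
d_out = 50.0         # mm, exit beam diameter quoted at the filter stage
m_angular = d_in / d_out                 # angular magnification of an afocal system
half_fov_in = 57.5e-6                    # rad, half of the 115 urad FOV
half_fov_out = half_fov_in * m_angular   # field angle after compression

f_number_ws = 35.0                       # achieved at the wavefront sensor
f_lens = f_number_ws * d_out             # focal length of the focusing spherical lens

print(f"angular magnification : {m_angular:.0f}x")            # 80x
print(f"field at filter stage : {half_fov_out*1e3:.1f} mrad")  # 4.6 mrad < 6 mrad
print(f"focusing-lens focal length for f/{f_number_ws:.0f}: {f_lens:.0f} mm")
```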
5 Trade-Off and Baseline Solution

A summary of the proposed designs is discussed here, highlighting in particular the methods and the values of the driving parameters used for the trade-off activity: filter specifications, complexity (i.e., the number of optical surfaces), size (meaning length and volume), estimated costs (for optics and mechanics, considering surface dimensions and spherical/aspheric shape), wavefront-sensor specifications, and criticality (estimating the risks of technological realization). In order to select the baseline, a trade-off table has been compiled (Table 3) by attributing scores to the previously mentioned parameters (Table 2) using the criteria described here.

Fig. 6 Two afocal telescopes with auxiliary optics layout. (a) 2-D; (b) 3-D.

For the filter and wavefront-sensor specifications, the ± scores are associated with an optical configuration satisfying or not satisfying them, respectively. For complexity, the + score is given when the telescope mirrors are two and − when they are four. Likewise, for size, − is given for the maximum volume and + for the minimum volume of the whole system. Similar criteria are used for cost and criticality.

Notwithstanding the compliance with the requirements of the filter stage for all the designs (they are practically diffraction-limited within the requested FOV and wavelength), the afocal Galilean configuration hardly satisfies the wavefront-sensor specifications with acceptable auxiliary-optics dimensions, while cost and manufacturability are also driven by the presence of an aspheric lens as large as the secondary mirror. The presented design with the focal telescope, instead, satisfies all the requisites. However, the negative lens, needed to achieve the required f/# and the diffraction limit, is sensitive in terms of manufacturing and positioning [12] in the optical path with respect to the collimating lens of the other path. While the focal case may need aspheric (not parabolic) mirrors, which can be a critical issue, for instance, for the optical (interferometric) tests, for the two other systems with afocal telescopes the requirements of the filter stage and wavefront sensor are achieved by using only spherical and parabolic mirrors. In addition, from a 4000 mm entrance beam diameter, a 100 mm exit beam diameter and an f/# > 30 on the wavefront sensor (see Table 1) have been reached more easily with the afocal-telescope systems, whose size is more compact than for the focal one. Furthermore, the lens positioning in the collimated beam of the afocal cases is more relaxed and could anyway be compensated by refocusing the detection system. Regarding the two double afocal telescopes (with four and two mirrors), a comparison of their tolerances (see Sec. 6) shows that in both cases the error budget is comparable.
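The trade-off of this section is essentially a bookkeeping exercise; the sketch below (Python) reproduces it by mapping the +/0/− marks of Table 2 to +1/0/−1 and summing. The numeric mapping and the equal weighting of all parameters are assumptions made here for illustration; the authors do not state a weighting.

```python
# Trade-off bookkeeping: map the +/0/- marks of Table 2 to numbers and sum.
scores = {
    "afocal Galilean":          {"filter": +1, "complexity": +1, "size": -1,
                                 "mass": 0, "cost": -1, "wfs": -1, "criticality": -1},
    "focal":                    {"filter": +1, "complexity": +1, "size": -1,
                                 "mass": 0, "cost": -1, "wfs": +1, "criticality": 0},
    "double afocal, 4 mirrors": {"filter": +1, "complexity": -1, "size": +1,
                                 "mass": -1, "cost": 0, "wfs": +1, "criticality": 0},
    "double afocal, 2 mirrors": {"filter": +1, "complexity": +1, "size": +1,
                                 "mass": +1, "cost": +1, "wfs": +1, "criticality": +1},
}
for name, marks in scores.items():
    print(f"{name:26s} total = {sum(marks.values()):+d}")
# The two-mirror double afocal telescope scores highest, matching the baseline choice.
```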
Tolerance issues being similar, an additional mechanical support structure for the fourth mirror would yield more obstruction (i.e., energy loss). On the other hand, having only two mirrors, the alignment maintenance is easier and the overall optomechanical complexity, in terms of cost and mass, is lower. Therefore, the best solution is a double afocal telescope with two mirrors and four reflections; for these reasons, it is the baseline.

6 Tolerance Analysis

Optical tolerance analysis is needed to evaluate the effects due to any error source on the real optical elements. It involves four fundamental processes:

• The first step is the first-order analysis, consisting of establishing the tolerance on the main parameters, such as the focal lengths of the single elements simulated as thin lenses, their interdistances, and shifts/tilts with respect to the reference axis.
• The second step is a sensitivity analysis of system performance, i.e., an evaluation of the sensitivity for each parameter. This analysis allows us to tackle the issue of each element's weight on the overall performance (image quality, alignment, etc.).
• Third, an error budget [13] is needed for the determination of the maximum error for all the parameters while keeping the performance specifications within acceptable cost goals. In practice, the adopted criteria are necessary for an appropriate distribution of the maximum acceptable errors (tolerances), whose values are smaller where the relative sensitivity is larger. This part influences many aspects of a project, from top-level performance specifications and cost targets to assembly and alignment procedures.
• The last procedure is a check of the tolerance budget, using a Monte Carlo analysis (via the ZEMAX software) that randomly selects errors up to the maximum tolerances, thus obtaining statistics of the values within the performance specifications (a schematic example of this step is sketched below).

Fig. 7 Spot diagrams (a) on the filter stage and (b) on the wavefront sensor. On-axis and maximum field.
Fig. 8 PSF on the wavefront sensor at maximum field.

Optical tolerance analysis was performed for the baseline and for the four-mirror configuration. It can generally be stated that for both analyses the secondary mirror is the most sensitive element, as the error-budget analysis highlights, where the maximum acceptable errors for all the elements have been reported. In particular, the tolerance-budget results for the two designs are given in Tables 4 and 5, with an indication of the tolerance that a parameter relative to an optical element must have in order to satisfy the most stringent requirement (i.e., the one on the wavefront sensor). In order to maintain a diffraction-limited image on the wavefront sensor, for the baseline and for the four-mirror configuration, respectively, the maximum acceptable error on the curvature radii is about ±20 µm and ±17 µm, while on the distances between mirrors it is ±10 µm and ±8 µm. In both cases, for decenters and tilts, the secondary-mirror tolerances are tight, while for the lens the maximum allowable movements are more relaxed. The structural stability alone is not able to ensure the required level of precision given by the tolerance studies.
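The Monte Carlo step in the last bullet is performed in ZEMAX in the paper, but its logic can be illustrated with a toy model (Python, NumPy). Every sensitivity coefficient below is a made-up placeholder rather than a value from the tolerance analysis; only the procedure of drawing random perturbations within the tolerances and accumulating statistics of a performance metric mirrors the described step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy error budget: parameter -> (tolerance, sensitivity of the wavefront error
# in waves per unit of that parameter). All sensitivities are placeholders.
budget = {
    "secondary radius (mm)":   (0.020, 5.0),
    "mirror spacing (mm)":     (0.010, 8.0),
    "secondary decenter (mm)": (0.008, 10.0),
    "secondary tilt (deg)":    (0.0014, 30.0),
    "lens decenter (mm)":      (5.0, 0.01),
}

n_trials = 20000
wfe = np.zeros(n_trials)
for tol, sens in budget.values():
    # uniform perturbation within +/- tolerance for each Monte Carlo trial
    wfe += (sens * rng.uniform(-tol, tol, n_trials)) ** 2
wfe = np.sqrt(wfe)                      # RSS of the individual contributions

spec = 1.0 / 3.0                        # lambda/3 requirement at the filter stage
print(f"median WFE          : {np.median(wfe):.3f} waves")
print(f"95th percentile     : {np.percentile(wfe, 95):.3f} waves")
print(f"fraction within spec: {(wfe < spec).mean():.1%}")
```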
While the active control of the primary mirror also manages its curvature tolerances, the support structure of the secondary mirror can be equipped with a suitable device that works on the micrometric mirror interdistances, tilts, and decenters, while a possible movement of the detection system may recover defocus.

Fig. 9 Afocal telescope (two mirrors, four reflections) with auxiliary optics layout. (a) 2-D; (b) 3-D.

Table 2 Trade-off of candidate optical configurations.
  Parameter                        | Afocal Galilean | Focal   | Double afocal (4 mirrors) | Double afocal (2 mirrors)
  Filter specifications            | +               | +       | +                         | +
  Complexity                       | +               | +       | −                         | +
  Longitudinal size                | −               | −       | +                         | +
  Mass                             | 0               | 0       | −                         | +
  Cost                             | −               | −       | 0                         | +
  Wavefront sensor specifications  | −               | +       | +                         | +
  Criticality                      | −               | 0       | 0                         | +
  Final valuation                  | −               | +       | +                         | +++

7 Conclusions

This research has been a good opportunity to develop, in the framework of the given optomechanical requirements for the "Advanced Lidar Concepts" ESA call, an innovative, lightweight, large-aperture space-borne system, compliant with the proposed mission in terms of optical quality in a relatively easy manner. The single-pupil system has been proposed in four configurations, differing from one another in complexity, weight, manufacturability issues, tolerances of the elements, and overall volume. The comparison of the performances has been studied and is listed here, the trade-off showing that the double afocal telescope with two mirrors is currently the best solution.

Table 3 Main parameters for candidate optical configurations.
  Parameter                        | Afocal Galilean | Focal    | Double afocal (4 mirrors) | Double afocal (2 mirrors)
  Filter specifications            | 70 mm           | 70 mm    | 45 mm                     | 45 mm
  Complexity                       | 2               | 2        | 4                         | 2
  Longitudinal size                | 5 m             | 3.7 m    | 3 m                       | 3 m
  Wavefront sensor specifications  | f/# = 15        | f/# = 30 | f/# = 32                  | f/# = 35

Table 4 Tolerance error budget for the four-mirror configuration, with third-order (left) and first-order (right) layouts.
  Surface             | Radius (mm) ± TOL    | Thickness (mm) ± TOL | Decenters (mm) | Tilts xy (deg)
  1: Primary mirror   | 6400 (−0.015/+0.02)  | −3000 (−0.01/+0.007) | —              | —
  2: Secondary mirror | 400 (−0.02/+0.015)   | 3050 (10)            | ±0.006         | ±0.002
  3: Tertiary mirror  | 5000 (10)            | 40.793 (—)           | ±1             | ±0.08
  4: Fourth mirror    | 1000 (10)            | 900 (10)             | ±1             | ±0.4
  7: Lens             | 1026.5 (10)          | —                    | —              | 0.0167

Fig. 10 Spot diagrams (a) on the filter stage and (b) on the wavefront sensor. On-axis and maximum field.
Fig. 11 PSF on the wavefront sensor at maximum field.

It has been stated that small misalignments and/or variations of the nominal values of some parameters can be accepted in terms of tolerances. The system introduced here is somewhat "frozen." However, it may happen that such complex optics, especially the primary lightweight segmented mirror with movable petals, needs adjustments that exceed the calculated tolerances. A solution based on active movement of the mirror has been investigated in depth during the overall feasibility study for the lidar system. Since the overall results for the baseline show near diffraction-limited performance, the proposed optical configuration can potentially be extended even beyond the specifications of the present project, for instance for new-generation, high-performance imaging space telescopes for astronomical applications, by using large-aperture, lightweight, deployable, and actively controlled primary mirrors.

Acknowledgments

The authors want to thank Dr. Joao Pereira do Carmo of ESA/ESTEC for his support and all the ALC collaboration for the important contributions at this design phase of the ESA-ALC project.

References
1. A. Zuccaro Marchi, L. Gambicorti, F. Simonetti, P. Salinari, F. Lisi, A. Bursi, M. Oliver, and D. Gallieni, "A technology demonstrator for development of ultra-lightweight, large aperture, deployable telescope for space applications," in Proc. 7th Int. Conf. Space Optics, CNES, Toulouse, France (2008).
2. M. Born and E. Wolf, "Elements of the theory of diffraction," Chapter 8 in Principles of Optics, M. Born and E. Wolf, Eds., pp. 370–458, Cambridge University Press, New York (1980).
3. P. Mazzinghi, V. Bratina, L. Gambicorti, F. Simonetti, A. Zuccaro Marchi, D. Ferruzzi, P. Salinari, F. Lisi, M. Oliver, A. Bursi, and J. Pereira do Carmo, "An ultra-lightweight, large aperture, deployable telescope for advanced lidar applications," in Proc. 6th Int. Conf. Space Optics, ESTEC, Noordwijk, The Netherlands (2006).
4. Y. Y. Gu, C. S. Gardner, P. A. Castleberg, G. C. Papen, and M. C. Kelley, "Validation of the Lidar In-Space Technology Experiment: stratospheric temperature and aerosol measurements," Appl. Opt. 36(21), 5148–5157 (1997).
5. D. M. Winker, R. H. Couch, and M. P. McCormick, "An overview of LITE: NASA's Lidar In-space Technology Experiment," Proc. IEEE 84, 164–180 (1996); available at /n_overview.html#inst_design.
6. E. V. Browell, S. Ismail, and W. B. Grant, "Differential absorption lidar (DIAL) measurements from air and space," Appl. Phys. B 67, 399–410 (1998).
7. S. Esposito and A. Riccardi, "Pyramid wavefront sensor behavior in partial correction adaptive optic systems," Astron. Astrophys. 369, L9–L12 (2001).
8. D. Korsch, "Third-order correction of two-mirror systems," Chapter 8 in Reflective Optics, pp. 151–205, Academic Press, San Diego (1991).
9. R. N. Wilson, "Historical introduction," Chapter 1 in Reflecting Telescope Optics I, pp. 1–15, Springer, New York (1996).
10. W. B. Wetherell, "Afocal systems," Chapter 2 in Handbook of Optics, 2nd ed., Vol. 2, M. Bass, Ed., pp. 2.1–2.23, McGraw-Hill, New York (1995).
11. J. M. Sasian, "Flat field, anastigmatic, four-mirror optical system for large telescopes," Opt. Eng. 26(12), 1197–1199 (1987).
12. W. B. Wetherell, "The calculation of image quality," Chapter 6 in Applied Optics and Optical Engineering, Vol. VIII, R. R. Shannon and J. C. Wyant, Eds., pp. 171–315, Academic Press, London (1980).
13. R. H. Ginsberg, "Outline of tolerancing (from performance specification to tolerance drawings)," Opt. Eng. 20(2), 175–180 (1981).

Francesca Simonetti graduated in optics (specializing in space optics) from the University of Florence (Italy) in 2004. She has performed commissioned work on optical design for the National Institute of Applied Optics (CNR-INOA, Florence) and Galileo Avionica SpA, sponsored by the Italian Space Agency (ASI), within the CIA ("Hyperspectral Advanced Camera") project (supervisor Dr. Andrea Romoli). Since 2004, she has been working at INOA as a contractor, where she has been involved in optical designs for various systems devoted to space applications. In 2007, she also attended an 8-month stage at Galileo Avionica (supervisor Dr. Romoli) for a feasibility study of a space telescope working in pushbroom mode for Earth observation with a hyperspectral camera. Since 2008, she has been working at CNR-INOA on the "Solar Thermal at High Rendering" project with Regione Toscana, developing a high-efficiency solar concentrator with adaptive optical techniques adopted in the most recent astronomical telescopes.

Alessandro Zuccaro Marchi graduated in physics from the University of Padova in 1999 and received a PhD in physics from the University of Trieste in 2003. Between 2000 and 2004, he spent 24 months at the University of Alabama in Huntsville and at the NASA Marshall Space Flight Center to work on the optical configuration of a space telescope for the detection of
ultra-high-energy cosmic rays. Since 2004, he has been working (as a postdoctorate and then as a researcher) in the Aerospace Optics Group of the National Institute of Applied Optics (CNR-INOA, Florence, Italy), involved in optical testing and design for space projects in fluid science (the FSL Lab on the ISS/Columbus), astrophysics (the EUSO telescope for ultra-high-energy cosmic rays), and the development of new technologies, in collaboration with various Italian and international institutions.

Lisa Gambicorti graduated in physics (specialization in astrophysics) in 2003 from the University of Firenze, with supervisors Dr. Mazzinghi (CNR-INOA, National Institute of Applied Optics) and Dr. Pace (Firenze University). In 2007, she received a PhD in astronomy, working on wide-field optical systems and on the design and testing of optical adapter prototypes for ultra-high-energy cosmic-ray detection. She then took a postdoctorate position at CNR-INOA, mainly to design the spectropolarimeter for the space telescope World Space Observatory, with the collaboration of Galileo Avionica, and she collaborates with the Department of Astronomy and Space Science of Firenze University and the INFN National Laboratories of Frascati. Since 2008, she has held a postdoctorate position at CNR-INOA for the project "Solar Thermal at High Rendering," with Regione Toscana, developing a high-efficiency solar concentrator based on adaptive optical techniques adopted in the most recent astronomical telescopes.

Table 5 Tolerance error budget for the baseline, with third-order (left) and first-order (right) layouts.
  Surface             | Radius (mm) ± TOL     | Thickness (mm) ± TOL  | Decenters (mm) | Tilts xy (deg)
  1: Primary mirror   | 6400 (−0.018/+0.024)  | −2865 (−0.012/+0.009) | —              | —
  2: Secondary mirror | 670 (−0.024/+0.018)   | 3100 (10)             | ±0.008         | ±0.0014
  6: Lens             | 684.365 (±10)         | —                     | —              | ±5
Vehicle speed identification based on videos recorded by a vehicle-mounted video recorder
DOI: 10.16638/ki.1671-7988.2021.07.061

Vehicle speed identification based on videos recorded by a vehicle-mounted video recorder

Dong Haocun1, Nie Zhongguo2 (1. Shenyang Ligong University, Shenyang, Liaoning 110159; 2. Shenyang Jiashi Judicial Identification Office, Shenyang, Liaoning 110023)

Abstract: This paper discusses the basic principle of identifying, from images recorded by a vehicle-mounted video recorder, the speed of a target vehicle involved in a road traffic accident, and proposes an algorithm that estimates the distance travelled by the target vehicle based on the cross-ratio invariance principle of projective geometry.
This algorithm avoids the error caused by the target vehicle's trajectory not being perpendicular to the optical axis of the video recorder's lens, thereby improving the accuracy of the speed identification of the target vehicle.
Finally, based on a real case, the method, procedure, and main points of attention for identifying vehicle speed from images recorded by a vehicle-mounted video recorder are discussed, which can serve as a reference for evaluating the accuracy and scientific validity of this identification method.
Keywords: road traffic accident; vehicle speed identification; vehicle-mounted video images; photogrammetry; cross-ratio
CLC number: U491.3   Document code: A   Article ID: 1671-7988(2021)07-195-04

Vehicle speed identification based on the videos recorded by mobile video recorder

Dong Haocun1, Nie Zhongguo2 (1. Shenyang Ligong University, Liaoning Shenyang 110159; 2. Shenyang Jiashi Judicial Identification Office, Liaoning Shenyang 110023)

Abstract: In this paper, we discuss the basic principle of speed identification of a target vehicle in a road traffic accident based on the videos recorded by a mobile video recorder, and further propose an algorithm for estimating the driving distance of the target vehicle according to the principle of the projective invariant (cross-ratio) in projective geometry. The algorithm can avoid the error caused by non-perpendicularity between the movement trajectory of the target vehicle and the optical axis of the video recorder's lens, so as to improve the accuracy of the target-vehicle speed identification. In addition, we apply the proposed algorithm to a real case to discuss the method, procedure, and main points of attention in identifying the speed of a target vehicle involved in a traffic accident using a mobile video recorder, which can be a reference for evaluating the accuracy and scientific validity of the proposed identification method.

Keywords: Road traffic accident; Vehicle speed identification; Video recorded by mobile video recorder; Photogrammetry; Cross-ratio
CLC NO.: U491.3   Document Code: A   Article ID: 1671-7988(2021)07-195-04

Introduction

One of the key items in the forensic identification of road traffic accidents is the identification of the speed of the target vehicle at the instant of the accident, which is an important basis for determining the nature of the accident, analyzing its cause, and assigning responsibility.
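To make the cross-ratio principle mentioned in the abstract concrete: for four collinear points the cross-ratio is preserved under perspective projection, so three road marks with known separations plus the measured pixel positions of those marks and of the vehicle determine the vehicle's position along the road, and speed follows from the positions in two frames. The sketch below (Python) is an illustrative reconstruction with hypothetical numbers, not the authors' implementation.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (A,B;C,D) of four collinear coordinates."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def locate_by_cross_ratio(xA, xB, xC, uA, uB, uC, uD):
    """Recover the road coordinate of point D from its image coordinate uD,
    using three reference points with known road coordinates xA, xB, xC
    and their image coordinates uA, uB, uC (all on one line)."""
    k = cross_ratio(uA, uB, uC, uD)          # invariant under perspective projection
    p, q = xC - xA, xC - xB
    return (p * xB - k * q * xA) / (p - k * q)

# Hypothetical numbers: road marks at 0 m, 3 m, 9 m, and the pixel columns where
# the marks and the target vehicle appear in two frames of the recording.
xA, xB, xC = 0.0, 3.0, 9.0
uA, uB, uC = 100.0, 300.0, 550.0
u_frame1, u_frame2 = 700.0, 785.7            # target position in the two frames
x1 = locate_by_cross_ratio(xA, xB, xC, uA, uB, uC, u_frame1)   # ~15 m
x2 = locate_by_cross_ratio(xA, xB, xC, uA, uB, uC, u_frame2)   # ~20 m
dt = 5 / 25.0                                # 5 frames at an assumed 25 frames/s
print(f"distance: {x2 - x1:.2f} m, speed: {(x2 - x1)/dt*3.6:.1f} km/h")
```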
Optical measurement technology (English)
Optical measurement technology is a non-contact method of measuring various physical quantities, such as distance, displacement, shape, and vibration, using light. This technology employs a variety of optical principles and techniques, including lasers, interferometry, and photogrammetry, to accurately capture and analyze objects or phenomena. Optical measurement technology has a wide range of applications across different industries, including aerospace, automotive, medical, and manufacturing, where precise and reliable measurements are essential for quality control, research, and development. Additionally, the non-contact nature of optical measurement technology makes it particularly suitable for measuring delicate or sensitive materials and components without causing any damage.
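As a small illustration of the interferometric branch mentioned above, a displacement can be recovered simply by counting fringes: in a double-pass (Michelson-type) arrangement each fringe corresponds to half a wavelength of mirror motion. The snippet below is a generic sketch with assumed numbers, not tied to any specific instrument.

```python
# Fringe counting in a double-pass (Michelson-type) interferometer:
# the mirror displacement is N * lambda / 2 for N counted fringes.
wavelength_nm = 632.8          # He-Ne laser wavelength, a common choice (assumed)
fringes_counted = 1580         # assumed fringe count
displacement_um = fringes_counted * wavelength_nm / 2 / 1000.0
print(f"displacement = {displacement_um:.2f} um")   # ~499.91 um
```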
Optically pure enantiomers (English)
## Enantiomers and Optical Purity

In the realm of chemistry, chirality refers to the property of a molecule that lacks mirror symmetry, much like our left and right hands. Chiral molecules exist in two distinct forms known as enantiomers, which are mirror images of each other but cannot be superimposed. Enantiomers are like two non-identical twins, sharing the same molecular formula and connectivity but differing in their spatial arrangement.

Optical purity, a crucial concept in stereochemistry, quantifies the enantiomeric excess of a chiral compound. It measures the proportion of one enantiomer relative to the other in a mixture. A mixture containing equal amounts of both enantiomers is considered racemic and has an optical purity of 0%. Conversely, a mixture containing only one enantiomer is optically pure and has an optical purity of 100%.

### Separation of Enantiomers

The separation of enantiomers is a challenging yet essential task in many fields, including pharmaceuticals, agrochemicals, and fragrances. Various techniques can be employed to achieve this, including:

Chiral chromatography: This technique utilizes a chiral stationary phase that interacts differently with the two enantiomers, allowing for their separation.

Chiral resolution: This involves converting a racemic mixture into a pair of diastereomers, which can then be separated by conventional methods.

Enzymatic resolution: Enzymes, being chiral themselves, can selectively catalyze reactions with one enantiomer over the other, leading to the formation of optically pure products.

### Optical Purity Measurement

Optical purity can be determined using various methods, such as:

Polarimetry: This technique measures the rotation of plane-polarized light as it passes through a chiral sample. The magnitude and direction of rotation depend on the enantiomeric composition of the sample.

NMR spectroscopy: Chiral solvents or chiral shift reagents can be used in NMR spectroscopy to differentiate between enantiomers based on their different chemical shifts.

Chromatographic methods: Chiral chromatography or capillary electrophoresis can be used to separate enantiomers and determine their relative abundance.

### Significance of Optical Purity

Optical purity is of paramount importance in several areas:

Pharmacology: Many drugs are chiral, and their enantiomers can have different pharmacological properties, including efficacy, toxicity, and metabolism. Enantiopure drugs offer advantages in terms of safety and effectiveness.

Agrochemicals: Herbicides and pesticides can be chiral, and their enantiomers may differ in their selectivity and environmental impact. Optical purity ensures the targeted control of pests and weeds.

Fragrances and flavors: The fragrance and flavor of chiral compounds can depend on their enantiomeric composition.
Optical purity control allows for the creation of specific scents and tastes.

### Applications of Chiral Compounds

Chiral compounds find widespread applications in various industries:

Pharmaceuticals: Enantiopure drugs include ibuprofen, naproxen, and thalidomide.

Agrochemicals: Herbicides such as glyphosate and pesticides like cypermethrin are chiral.

Fragrances and flavors: Enantiopure compounds like menthol, camphor, and limonene contribute to the distinctive scents and tastes of products.

Materials science: Chiral polymers, liquid crystals, and self-assembling systems have unique properties and applications in optics, electronics, and nanotechnology.

### Conclusion

The concept of enantiomers and optical purity is crucial for understanding the stereochemistry of chiral compounds. The ability to separate enantiomers and determine their optical purity is essential in numerous fields, including pharmaceuticals, agrochemicals, and fragrances. The significance of optical purity lies in its implications for the safety, efficacy, and properties of chiral compounds in various applications.
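Since optical purity is quoted throughout this section, here is a minimal numeric sketch of how it comes out of a polarimetric measurement: the observed specific rotation divided by that of the pure enantiomer. The rotation values are hypothetical, and equating optical purity with enantiomeric excess assumes ideal (linear) behavior.

```python
# Optical purity from polarimetry: op = [alpha]_observed / [alpha]_pure * 100%.
alpha_observed = +9.2     # deg, specific rotation of the sample (hypothetical)
alpha_pure = +23.1        # deg, specific rotation of the pure enantiomer (hypothetical)

optical_purity = alpha_observed / alpha_pure * 100.0          # ~39.8 %
frac_major = (1 + optical_purity / 100.0) / 2                 # mole fraction, major enantiomer
frac_minor = 1 - frac_major
print(f"optical purity / ee : {optical_purity:.1f} %")
print(f"composition         : {frac_major:.1%} major, {frac_minor:.1%} minor")
```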
Professional English for optics: 50 sentences for translation
1. The group's activities in this area have concentrated on the mechanical effects of angular momentum on a dielectric and on the quantum properties of orbital angular momentum.
2. Experimental realizations of entanglement have been restricted to two-state quantum systems. In this experiment, entanglement exploits the orbital angular momentum of photons, which corresponds to states of the electromagnetic field with phase singularities (doughnut modes).
3. Laguerre-Gaussian modes with an index l carry an orbital angular momentum of lħ per photon for linearly polarized light, which is distinct from the angular momentum of the photons associated with their polarization.
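A small numeric companion to sentence 3 above: a Laguerre-Gaussian photon of azimuthal index l carries an orbital angular momentum of lħ, and the beam's phase winds as exp(ilφ) around the axis. The sketch below (Python, NumPy) simply evaluates both statements; it is illustrative and not part of the original sentence list.

```python
import numpy as np

hbar = 1.054571817e-34        # J*s
l = 3                         # azimuthal (topological) index of the LG mode
print(f"OAM per photon: {l * hbar:.3e} J*s")   # l * hbar

# The phase of an LG_l mode winds l times around the axis: phi(x, y) = l * atan2(y, x)
n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
phase = np.angle((x + 1j * y) ** l)            # l*atan2(y, x), wrapped to (-pi, pi]
print("phase range:", phase.min(), "to", phase.max())
```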
The HERO Sensor
Anatomy of a HERO: an introduction to the HERO optical sensor

HERO Optical Sensor

The HERO architecture sets a new precedent in optical gaming sensor design. It has been designed from the ground up to optimize performance and efficiency, and it sets a new standard for optical gaming sensors.
In the history of gaming mouse sensor development, there have been many different iterations of a few core architectures, many designed with Logitech. Most sensors focus on either maximizing performance or maximizing efficiency, as the two goals have historically been mutually exclusive: if you max out performance, it usually takes a lot more power, and if you max out efficiency, you sacrifice performance. With HERO, Logitech has created an entirely new architecture and operating philosophy that, for the first time, delivers ultimate performance and efficiency together.
In the history of gaming mouse sensor development, there have been many different iterations of a few core architectures, many designed with Logitech. Most sensors focus on either maximizing performance or maximizing efficiency as the two values are historically mutually exclusive. If you max out performance, it usually takes a lot more power, and if you max out efficiency you sacrifice performance. With HERO, Logitech has created an entirely new architecture and operating philosophy that delivers ultimate performance AND efficiency for the first time together.在先前的游戏鼠标传感器的开发中,一个设计能迭代产生很多产品,其中不少是由罗技设计的。
Control of matter defects using optical phase singularities
DOI: 10.1117/2.1201008.003079

Ivan Smalyukh, Paul Ackerman, Rahul P. Trivedi, and Taewoo Lee

Reversible, controlled creation of patterns of topological defects in liquid crystals is achieved using focused beams with optical phase singularities.

For many centuries, the development of optical instruments and technologies has relied on the use of defect-free monocrystals. The notion that defects (also known as singularities) can be useful for photonic applications is relatively new but already broadly accepted [1-3]. Defects are typically points or lines at which the orientational or translational order of solid or liquid crystals is disrupted. Spontaneously occurring defects are often not desirable, since they can degrade the performance of various electro-optic and display devices. On the other hand, the dynamics of defects in metals permits easy plastic deformation, which is of pivotal importance for modern technology and our everyday life. Optical phase singularities, in which the phase of light behaves discontinuously, enrich the properties of laser beams and find numerous applications, including imaging, enhanced laser trapping, and telecommunications. Defects in photonic crystals and photonic-crystal fibers allow for unprecedented control of the flow of light, similar to the control of electric current in electronic circuits [1-3]. Although many other important electro-optic, photonic, and all-optical applications of defects are possible, robust means for their control and generation in materials using low-intensity light are lacking. In liquid crystals, defects typically appear as a result of temperature quenching, symmetry-breaking phase transitions, and mechanical stresses [4]. These liquid-crystal defects can introduce well-defined spatial patterns of the molecular director (the optical axis for uniaxial liquid crystals) and corresponding refractive-index patterns.
However, they commonly annihilate to minimize the elastic free energy [4] and have never been controlled or used for applications in a reliable way.

Noncontact control of structural organization in matter using light and, in turn, control of light by ordered materials are fascinating research themes that have revolutionized modern technologies, scientific instruments, and consumer devices. One of its most important goals is the development of means for control and patterning of defects in ordered materials and in the optical phase of laser beams [5,6]. Our work shows how laser beams with optical phase singularities can be used to control topological singularities in ordered, liquid-crystalline materials [6], potentially enabling a number of new applications.

Figure 1. Focusing of Laguerre-Gaussian laser beams of different topological charge into a confinement-unwound, vertically aligned, chiral liquid crystal. (a) Schematic of a vertical cross section of a cell with the uniformly aligned liquid crystal. (b) and (c) Vertical cross sections of cells overlaid with the patterns of laser-light intensity of tightly focused Laguerre-Gaussian beams with topological charge l = 0 and ±5, respectively. The insets in (b) and (c) show the corresponding intensity distributions in the lateral plane of the IR beam.

Figure 2. Optical generation of arbitrary spatial patterns of Torons. (a) Schematic visualization of the optical generation of a Toron using a laser beam with an optical phase singularity. (b) Different types of Toron structures formed by focusing Laguerre-Gaussian laser beams of different topological charge [2]. The largest Toron defect structure is approximately 15 µm in diameter. (c) Optically generated periodic 2D pattern of Torons with a dislocation. Each Toron is 10 µm in diameter. (d) Characters obtained by Toron generation, each approximately 5 µm in diameter. (e) Square-periodic pattern of Torons generated in a liquid-crystal sample. The Torons are each 5 µm in diameter.

We employed a computer-controlled, phase-only spatial light modulator (SLM, Boulder Nonlinear Systems) to generate holograms and convert an IR Gaussian beam into doughnut-shaped Laguerre-Gaussian laser beams of different topological charge (defining the number of twists that the phase of the light makes in one wavelength) [6]. We then focused the beams into the bulk of the untwisted chiral liquid crystal confined between thin glass plates: see Figure 1(a)-(c). These chiral liquid crystals have a strong preference for molecular twisting, but they can be untwisted by external fields and confinement, as shown in Figure 1(a). Using focused Gaussian and Laguerre-Gaussian vortex laser beams with different optical phase singularities [see Figure 1(b) and (c)], we generated topological liquid-crystal defect architectures containing both point and line singularities: see Figure 2(a) [6]. The defects are bound to each other by twisted interdefect regions, forming stable or metastable 3D configurations. In chiral nematic liquid crystals that are confined into sandwich-like cells with vertical boundary conditions, these laser-generated topological defects embed the localized 3D twist into the uniform background of the director field. They are untwisted because
resultant Torons are com-prised of topological point-and ring-shaped defects of opposite topological charge,such that the overall charge is conserved:see Figure 2(a)and (b).6We also find that vortex laser beams of power 10–100mW with screw-dislocation defects in the optical phase allow for control of the topological defects and internal configurations of Torons at a desired spatial location,enabling formation of desired long-term-stable defect superstructures.We show three examples of such superstructures containing Torons of dif-ferent kinds,a square-periodic lattice with a dislocation in Figure 2(c),a structure in the form of the characters ‘SPIE’in Figure 2(d),and a regular periodic pattern:see Figure 2(e).Us-ing both single-beam steering and holographic laser-intensity patterning,7the periodic crystal lattices of Torons can be generated and tailored by tuning their periodicity,reorienting their crystallographic axes,introducing dislocation defects in the periodic patterns,etc.These periodic lattices can be dynamically modified,erased,and then recreated,depending on the need of the relevant application.Periodicity of these optically induced structures depends on the equilibrium pitch of the chiral ne-matic liquid crystal,and it can be tuned from several hundreds of nanometers to hundreds of microns by varying the pitch and using different structure-generation schemes.The key advantage of our approach is the robustness with which the periodic patterns of liquid-crystal defects can be generated and switched between multiple distinct states.6The unprecedented control over organization of the defects offers promise for a wide range of applications,such as optical-data storage,light/voltage-controlled information displays,and tunable photonic crystals.Our preliminary results show that they can be used as efficient optically reconfigurable diffrac-tion gratings.The structural multistability as well as low-voltage and low-laser-power switching may lead to powerless and low-power multimodal operation of electro-optic,all-optical,and information display devices.Our future work will be directed at realizing these applications,so that the optically con-trolled defects in liquid crystals may find use in controlling the properties of light.Continued on next pageThis work was supported by the Renewable and Sustainable Energy Initiative and Innovation Initiative Seed Grant Programs of the University of Colorado at Boulder,the International Institute for Complex Adaptive Matter,and by National Science Foundation (NSF)grants (DMR-0820579,DMR-0844115,DMR-0645461,and DMR-0847782).Paul Ackerman was supported by the NSF-funded JILA-Physics Research Experience for Undergraduates program at the University of Colorado.Author InformationIvan SmalyukhUniversity of Colorado at Boulder Boulder,COIvan Smalyukh is an assistant professor of physics,a founding fellow of the Renewable and Sustainable Energy Institute,and a senior investigator of the Liquid Crystal Materials Research Center.His research interests are at the interface of op-tics/photonics,soft-condensed-matter physics,nanoscience,and renewable energy.Paul Ackerman,Rahul P .Trivedi,and Taewoo Lee Department of PhysicsUniversity of Colorado at Boulder Boulder,COPaul Ackerman is an undergraduate research assistant in Smalyukh’s research group.His research interests include liquid crystals,laser trapping and manipulation,and tunable diffrac-tion gratings.Rahul Trivedi is a PhD student and a research assistant in Smalyukh’s group.His research interests 
This work was supported by the Renewable and Sustainable Energy Initiative and Innovation Initiative Seed Grant Programs of the University of Colorado at Boulder, the International Institute for Complex Adaptive Matter, and by National Science Foundation (NSF) grants (DMR-0820579, DMR-0844115, DMR-0645461, and DMR-0847782). Paul Ackerman was supported by the NSF-funded JILA-Physics Research Experience for Undergraduates program at the University of Colorado.

Author Information

Ivan Smalyukh
University of Colorado at Boulder, Boulder, CO
Ivan Smalyukh is an assistant professor of physics, a founding fellow of the Renewable and Sustainable Energy Institute, and a senior investigator of the Liquid Crystal Materials Research Center. His research interests are at the interface of optics/photonics, soft-condensed-matter physics, nanoscience, and renewable energy.

Paul Ackerman, Rahul P. Trivedi, and Taewoo Lee
Department of Physics, University of Colorado at Boulder, Boulder, CO
Paul Ackerman is an undergraduate research assistant in Smalyukh's research group. His research interests include liquid crystals, laser trapping and manipulation, and tunable diffraction gratings. Rahul Trivedi is a PhD student and a research assistant in Smalyukh's group. His research interests include laser trapping and manipulation, nonlinear optical microscopy, liquid-crystal defects, optical generation of topological defects and structures, and electro-optics of liquid crystals. Taewoo Lee is a postdoctoral research associate. His research interests focus on the development of novel optical-imaging techniques, such as coherent anti-Stokes Raman-scattering polarizing microscopy, multiphoton-excitation fluorescence microscopy, second-harmonic-generation polarizing microscopy, and imaging of liquid crystals, polymers, colloids, and biomolecular materials.

References
1. M. Qi, E. Lidorikis, P. T. Rakich, J. D. Joannopoulos, E. P. Ippen, and H. I. Smith, "A three-dimensional optical photonic crystal with designed point defects," Nature 429, 538-541 (2004).
2. C. H. Sun and P. Jiang, "Acclaimed defects," Nat. Photon. 2, 9-11 (2008).
3. K. Ishizaki and S. Noda, "Manipulation of photons at the surface of three-dimensional photonic crystals," Nature 460, 367-370 (2009).
4. P. M. Chaikin and T. C. Lubensky, Principles of Condensed Matter Physics, Cambridge Univ. Press (2000).
5. J. Leach, M. R. Dennis, J. Courtial, and M. J. Padgett, "Laser beams: knotted threads of darkness," Nature 432, 165 (2004).
6. I. I. Smalyukh, Y. Lansac, N. Clark, and R. Trivedi, "Three-dimensional structure and multistable optical switching of Triple Twist Toron quasiparticles in anisotropic fluids," Nat. Mater. 9, 139-145 (2010).
7. S. Anand, R. P. Trivedi, G. Stockdale, and I. I. Smalyukh, "Non-contact optical control of multiple particles and defects using holographic optical trapping with phase-only liquid crystal spatial light modulator," Proc. SPIE 7232, 723208 (2009).

© 2010 SPIE
Balancing interpixel cross talk and detector noiseto optimize areal density in holographic storage systems Marı´a-P.Bernal,Geoffrey W.Burr,Hans Coufal,and Manuel QuintanillaWe investigate the effects of interpixel cross talk and detector noise on the areal storage density ofholographic data storage.A numerical simulation is used to obtain the bit-error rate͑BER͒as a functionof hologram aperture,pixelfill factors,and additive Gaussian intensity noise.We consider the effect ofinterpixel cross talk at an output pixel from all possible configurations of its12closest-neighbor pixels.Experimental verification of this simulation procedure is shown for severalfill-factor combinations.Thesimulation results show that areal density is maximized when the aperture coincides with the zero orderof the spatial light modulator͑SLM͒͑Nyquist sampling condition͒and the CCDfill factor is large.Additional numerical analysis includingfinite SLM contrast andfixed-pattern noise show that,if thefixed-pattern noise reaches6%of the mean signal level,the SLM contrast has to be larger than6:1tomaintain high areal density.We also investigate the improvement of areal density when error-pronepixel combinations are forbidden by using coding schemes.A trade-off between an increase in arealdensity and the redundancy of a coding scheme that avoids isolated-ON pixels occurs at a code rate ofapproximately83%.©1998Optical Society of AmericaOCIS codes:210.2860,040.1520,050.1220,070.2580,070.2590,050.1960.1.IntroductionDigital holographic data storage has become the fo-cus of study of many researchers in the past fewyears1–8because of its potential use in storage withfast parallel access and high storage density.Thetechnique consists of storing a large number of digitalpages in a thick photosensitive medium as superim-posed gratings produced by the interference betweencoherent object and reference laser beams.One of the most widely used configurations in ho-lographic data storage is the Fourier transform con-figuration͑a4f system͒,as shown in Fig. 
1.Information to be stored is encoded with aprogrammable-pixel device,a spatial light modulator ͑SLM͒,located in the front focal plane of lens L1.A collimated and expanded laser beam͑the objectbeam͒is transmitted through the SLM and focusedwith lens L1in the photosensitive medium,which for our purposes can be simplified to a square aperture of area D2.The storage material is placed at or near this Fourier transform plane to maximize areal den-sity.If the SLM pixels could be made extremely small͑ϳ5m͒,an image-plane geometry could be-come attractive.9A hologram is written when a second coherent laser beam͑the reference beam͒intersects with the object beam and their interference fringes are recorded as a diffraction grating in the medium.The information-bearing object beam can then be reconstructed by illumination of the stored diffraction grating with the reference beam.By use of a second lens L2to per-form a second Fourier transformation,the digital in-formation can be retrieved by a CCD camera in parallel.The SLM and the CCD camera are typi-cally pixelated:Each pixel on the SLM has a corre-sponding pixel on the CCD camera.The high areal densities needed to make digital holographic data storage a feasible technology are achieved by the superimposition of multiple holo-grams within the same region of storage material͑a stack of holograms͒.However,the diffraction effi-ciency of each hologram scales as1over the square of the number of overlapping exposures.Therefore it is crucial to minimize the exposure area of each hologram with a small aperture.An aperture also allows reuse of the same set of reference angles–wavelengths in neighboring storage locations withoutM.-P.Bernal,G.W.Burr,and H.Coufal are with the IBMAlmaden Research Center,650Harry Road,San Jose,California91520-6099.M.Quintanilla is with the Departamento de Fı´sicaAplicada,Facultad de Ciencias,Universidad de Zaragoza,50009Zaragoza,Spain.Received24December1997;revised manuscript received29April1998.0003-6935͞98͞235377-09$15.00͞0©1998Optical Society of America10August1998͞Vol.37,No.23͞APPLIED OPTICS5377creating interstack cross talk.However,the small aperture acts to spatially low-pass filter the data-bearing object beam.High-spatial-frequency com-ponents of the data pattern displayed on the SLM will not be recorded by the holographic medium.As a result,the light propagated from a SLM pixel is spread to the neighboring pixels of the intended tar-get CCD pixel.Depending on the pattern of the nearby pixels displayed on the SLM,the retrieved data might no longer be able to be decoded correctly.10–11This source of deterministic errors is called interpixel cross talk.For a modest amount of interpixel cross talk,al-though there could be no decoding errors,the sepa-ration between the brightest OFF pixel and the darkest ON pixel is reduced ͓the signal-to-noise ratio ͑SNR ͒is decreased ͔.Even a small amount of addi-tional random noise ͑from the detector electronics,for instance ͒will begin to cause decoding errors.In general,a holographic system that can tolerate more random noise can afford to reduce its signal levels and thus superimpose more holograms.So part of the SNR budget is used to tolerate interpixel cross talk and increase density by minimization of the stack area,and part of the SNR budget is used to tolerate random noise and increase density by an increase in the number of holograms per stack.In this study we use a numerical algorithm to ob-tain a set of design parameters that produces a digital holographic data-storage system 
with the optimal areal density for a given target bit-error rate ͑BER ͒.We account for several of the important noise sources present in a practical holographic system:deter-ministic sources of signal variation,such as interpixel cross talk,fixed-pattern noise,and limited SLM con-trast,and random noise sources,such as detector noise.The effects of detector alignment and optical aberrations are not included in this ing these simulations,we can model any linear fill factor in the SLM and in the CCD camera,as well as various spatial cutoff frequencies in the Fourier transform plane.In Section 2we describe the numerical algorithm to evaluate the BER as a function of the SLM and the CCD fill factors,the aperture in the Fourier plane,and the relative amount of additive noise.Simula-tions at several fill-factor combinations are compared with experimental BER measurements to validate the approach.Simulated BER maps for the fullrange of fill-factor choices indicate that each aperture has its own best SLM and CCD fill-factor combina-tion.By relating the amount of random noise to the number of holograms that can be stored,we deter-mine the areal density as a function of the target BER.We show that density is maximized by push-ing the system toward the Nyquist sampling condi-tion and that large SLM and CCD fill factors are appropriate for systems dominated by detector noise.The effects of finite SLM contrast and fixed-pattern noise are then included,and the possible benefits of low-pass modulation coding are evaluated.2.Review of the MethodOur numerical algorithm considers the system model described in Fig.1.Lenses L 1and L 2form a 4f system of unity magnification that images the SLM exactly onto the CCD detector array.We assume a SLM with pixels of a linear dimension ⌫and a linear fill factor g SLM ,a CCD camera also with pixels of an identical linear dimension ⌫but a linear fill factor g CCD ,and a square aperture of area D 2located in the common focal plane of lenses L 1and L 2.We assume that the system has a space-invariant impulse re-sponse ͑point-spread function ͒that is due solely to the aperture.The possible effects of magnification,focus,and registration errors and of lens aberrations are not included.For uniform plane-wave illumination and a linear fill factor g SLM ,the transmission-field amplitude of a single SLM pixel centered at the optical axis is given byU 0͑x ,y ,z ϭ0͒ϭrectͩx g SLM ⌫ͪrectͩy g SLM ⌫ͪ.(1)By use of the Rayleigh–Sommerfeld diffraction theo-ry,12the electric-field amplitude in the Fourier trans-form plane of lens L 1is the two-dimensional ͑2-D ͒spatial Fourier transform of Eq.͑1͒:Uc ͑x ,y ,z ϭ2f ͒ϭg SLM 2⌫2i f sincͩg SLM ⌫x f ͪsinc ͩg SLM ⌫yfͪ,(2)where is the laser wavelength and f is the focallength of lenses L 1and L 2.If these lenses are as-sumed to be diffraction limited and of infinite extent,the range of frequency components that are stored in the medium is limited by the square aperture of lin-ear dimension D ͑located in the back focal plane of L 1͒,which can be expressed mathematically asP ͑x ,y ͒ϭrectͩx D ͪrect ͩy Dͪ.(3)The electric-field amplitude at the CCD can be ob-tained directly by the application of the scalar dif-fraction theory again:U d ͑x ,y ,z ϭ4f ͒ϭϪg SLM 2⌫2I ͑x ͒I ͑y ͒,(4)Fig.1.Schematic of a 4f configuration used for holographic data storage.5378APPLIED OPTICS ͞Vol.37,No.23͞10August 1998whereI ͑v ͒ϭ͐Ϫ␣ϩ␣sinc ͑⌫g SLM s ͒exp ͑Ϫi 2vs ͒d s ,␣ϵD 2f.(5)So U d ͑x ,y ,z ϭ4f ͒is the pixel-spread function:the field distribution along the x –y direction in 
the CCD plane resulting from a single SLM pixel of linear fill factor g SLM centered at x ϭy ϭ0and an aperture of linear width D .By use of the convolution theorem,this can be described as the convolution of the point-spread function ͓the Fourier transform of Eq.͑3͔͒with the original pixel shape ͓Eq.͑1͔͒.There is a particular choice of aperture D that will become im-portant in the simulation:D ϭf ͞⌫ϵD N .If the aperture is thought of as a low-pass filter of band-width D N ͞2and the pixel spacing at the CCD camera as a sampling at the frequency f ͞⌫,then the aper-ture D N corresponds to the Nyquist sampling condi-tion.With temporal signals this condition is usually met if one chooses to sample at twice the low-pass filter bandwidth.In this context the sampling rate is fixed by the CCD pixel spacing,and it is the low-pass filter of aperture D N that is chosen.In describ-ing our simulation results,we describe aperture sizes in terms of the ratio D ͞D N .This allows the results to be independent of the particular choice of ,f ,and ⌫up to the point at which an absolute areal density is evaluated.In our simulation this pixel-spread function is eval-uated by a 2-D fast Fourier transform,a 2-D low-pass operation,and a second 2-D fast Fourier transform for one pixel.The input SLM pixel is represented as a grid of 51ϫ51subpixels centered at x ϭy ϭ0.A large number of subpixels ensures the accuracy of the algorithm 11and increases the number of fill factors that can be simulated.This ON pixel is surrounded by 10OFF pixels ͑51ϫ51subpixels each ͒to increase the resolution with which the aperture can be speci-fied.The limit on the total number of input subpix-els is the memory requirement of the fast Fourier transform.After the space-invariant pixel-spread function has been evaluated for a particular SLM fill factor g SLM and aperture D ,the next step is to use linear superposition to synthesize the response at the CCD to an arbitrary input pattern of neighboring pixels on the SLM.In previous studies of interpixel cross talk 10,11only the influence of the four closest neighboring SLM pix-els was considered.In this paper we investigate the interpixel cross talk when the 12nearest neighboring SLM pixels are taken into account.A schematic of our procedure is shown in Fig.2.As the field at the CCD plane for a single SLM pixel is known ͓Eq.͑4͔͒,evaluation of the field at the target CCD ͓pixel ͑0,0͔͒is simple.For each of the 13pixels under consideration the electric-field distribution for the pixel-spread func-tion is translated by the appropriate pixel increment,multiplied by the corresponding SLM brightness value,and summed at the subpixel level.This results in the total amplitude distribution over the 51ϫ51subpixels of the central CCD pixel.Integrating the output intensity over the square subpixel area defined by the CCD linear fill factor g CCD gives the signal seen by the CCD pixel at that particular fill factor.It is important to note that we are normalizing the integrated intensity over a single pixel-spread func-tion to 1,independently of the aperture or the SLM fill factor.That is,reduction in the CCD pixel signal value can come from a low CCD fill factor but not from having a small aperture or a low SLM fill factor.In practice,this implies that a small aperture or a low SLM fill factor is counteracted by longer hologram exposures,so the same diffraction efficiency is reached in the end.When the M ͞#of the system ͑which is the constant of proportionality between the diffraction efficiency and the 
number of holograms squared ͒is independent of the aperture ͑transmis-sion geometry but not 90°geometry ͒,13the result is a loss of recording rate but not of dynamic range.Mathematically the superposition procedure can be expressed as follows:The brightness of the ͑i ,j ͒th SLM pixel is written as c ij .Initially,we as-sume a SLM contrast of infinity,so c ij takes only the value 0or 1.The amplitude-field distribution at ͑x ,y ,z ϭ4f ͒in the CCD plane is given by U T ͑x ,y ͒ϭc 02U d ͑x ,y ϩ2⌫͒ϩc Ϫ20U d ͑x Ϫ2⌫,y ͒ϩ͚i ϭϪ11͚j ϭϪ11c ijU d͑x Ϫi ⌫,y Ϫj ⌫͒ϩc 20U d ͑x ϩ2⌫,y ͒ϩc 0Ϫ2U d ͑x ,y Ϫ2⌫͒.(6)Fig.2.Thirteen-pixel pattern used to study interpixel cross talk.The coherent contribution of these 12nearest neighbors is evalu-ated over the central pixel.10August 1998͞Vol.37,No.23͞APPLIED OPTICS5379The signal received by the target CCD pixel with a linear fill factor g CCD is obtained asI T ϭ͚x m ϭϪg CCD ⌫͞2ϩg CCD ⌫͞2͚y m ϭϪg CCD ⌫͞2ϩg CCD ⌫͞2͉U T ͑x m ,y m ,z ϭ4f ͉͒2.(7)The quantum efficiency for the detection of photons in the CCD and other scaling factors are omitted.The value of I T in Eq.͑7͒is the received signal for a particular 13-pixel pattern combination at the SLM described by the c ij values of Eq.͑6͒.The next step is to evaluate the intensity of all the 213equally likely combinations and build a discrete histogram.The 213possible combinations are separated into two classes according to the state of the central ͑de-sired ͒SLM pixel,and occurrences are accumulated in a finite number of brightness bins.Figure 3shows an example of a discrete histogram for the case of both SLM and CCD fill factors equaling 100%and at the Nyquist aperture.The bin size was 1͞1000of the normalized intensity.The histogram shows de-terministic variations coming from interpixel cross talk.If the aperture were made large,the histo-gram would coalesce to two delta functions,located at 0and 1.However,the histograms shown in Fig.3are not representations of a probability density func-tion:More samples will not reveal previously hid-den structure at the tails of the distribution.If the two intensity distributions do not overlap,they will never overlap.However,ON and OFF intensity dis-tributions that are far away from each other seem more desirable than those that are close but do not overlap.The reason for this is that all systems are subject to random fluctuations,either quantum ͑shot noise ͒or thermal ͑Johnson noise ͒.In practical holographic systems the noise associated with detector electronics tends to overshadow the shot noise.This noise source can be modeled with Gaussian statistics.14Thus,to represent the statistical fluctuation from random noise,we convolve each intensity bin in the discrete histogram with a Gaussian distribution of standard deviation d .The standard deviation d isexpressed as a percentage of the incoming signal level ͑that is,normalized to 1.0on our histogram’s x axis ͒.Now we have real probability density functions for both distributions and can derive the BER.If there are N 0bins corresponding to the histogram of a target OFF pixel,we call ͕w i ,0͖i ϭ1,N 0the number of counts in the bin at the intensity value ͕i ,0͖i ϭ1,N 0.The variables N 1,͕w i ,1͖i ϭ1,N 1,and ͕i ,1͖i ϭ1,N 1are de-fined analogously for the ON distribution.So we have a set of shifted and scaled equal-variance Gaus-sians,with ij describing the shifting and w ij the scaling.If the total number of counts in each distribution is d 0and d 1,i.e.,d 0ϭ¥i ϭ1N 0w i ,0and d 1ϭ¥i ϭ1N 1w i ,1,the BER with a threshold of intensity ⌰is 
the BER with a threshold of intensity Θ is given by

BER = (1/4) [ (1/d_0) Σ_{i=1}^{N_0} w_{i,0} erfc( (Θ − μ_{i,0}) / (√2 σ_d) ) + (1/d_1) Σ_{i=1}^{N_1} w_{i,1} erfc( (μ_{i,1} − Θ) / (√2 σ_d) ) ],   (8)

where erfc is the complementary error function [15]. For given ON and OFF distributions the BER depends on the selected threshold intensity Θ. To obtain the minimum BER for each situation, we evaluated Eq. (8) repeatedly until the best value of Θ was found.

In summary, this numerical procedure evaluates the BER, given four dimensionless input parameters:

1. A linear SLM fill factor g_SLM, expressed as a fraction of Γ.
2. An aperture size D, expressed as a fraction of D_N.
3. A linear CCD fill factor g_CCD, expressed as a fraction of Γ.
4. Additive detector noise σ_d, expressed as a fraction of the signal strength in an ideal system (no cross talk and a 100% fill factor).

3. Experimental Validation

To verify the results of our simulations, we measured the BER as a function of aperture size experimentally for some specific SLM–CCD fill factors in a photorefractive information-storage material (PRISM) tester [6]. The SLM was a chrome-on-glass mask with 100% and 50% linear fill factors, and the CCD camera had a 100% fill factor. The results are shown in Fig. 4, with the BER shown as a function of the aperture (in units of the Nyquist aperture). Note that, with a focal length of 89 mm, a pixel spacing of 18 μm, and a 514.5-nm wavelength, the Nyquist aperture corresponds to 2.54 mm in real units. Figure 4(a) corresponds to the case of both SLM and CCD fill factors equaling 100%, and in Fig. 4(b) the linear SLM fill factor is equal to 50%. In both situations a value of σ_d could be found at which simulation and experiment showed excellent agreement. When the SLM and the CCD fill factors were both 100%, the BER increased monotonically as the aperture size decreased.

Fig. 3. Example of a discrete histogram for SLM and CCD fill factors of 100% and the Nyquist aperture.

However, if the SLM fill factor was smaller than 100%, the BER initially increased, then passed through a local minimum in the BER at 1.08 times the Nyquist aperture, and finally increased rapidly for smaller apertures. This is because the Fourier transform consists of a set of multiple orders at regular spacing, each one containing the complete information of the data page and weighted by the sinc pattern of one pixel. At the Nyquist aperture all the orders except the zero order are blocked, whereas at apertures larger than the Nyquist aperture there exists a competition between the zero order of the SLM and incomplete copies of the various ±1 orders [Fig. 4(b)].

4. Bit-Error Rate Maps and Areal Density

Figure 4 provides evidence that our numerical procedure provides an accurate representation of real holographic systems and shows that there is a difference in the behavior of the BER as the SLM–CCD fill factors vary. To investigate this difference further, we computed the BER for a small set of apertures but varied the SLM and the CCD fill factors over a large number of possible values. A constant amount of detector noise relative to the signal level was assumed. The results are shown in Fig. 5. Contour plots of the BER as a function of the SLM and the CCD linear fill factors are shown for two different apertures: the Nyquist aperture and twice the Nyquist aperture. For small BER values the contour lines are not smooth because of the inaccuracy of the numerical algorithm. Detector noise of 5% was assumed (that is, the standard deviation of the additive Gaussian noise was 5% of the no-cross-talk ON signal level).
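Each point of the BER maps described here requires minimizing Eq. (8) over the threshold Θ. A minimal sketch of that evaluation is given below; the plain grid scan over Θ is simply one way of "evaluating Eq. (8) repeatedly" as stated above, not necessarily the search the authors used.

```python
import numpy as np
from scipy.special import erfc

def min_ber(hist_off, hist_on, sigma_d, bins=1000):
    """Minimum BER over the threshold, following the structure of Eq. (8).

    hist_off / hist_on are the raw (un-smoothed) cross-talk histograms;
    bin i corresponds to the normalized intensity mu_i = i / bins.
    """
    mu = np.arange(len(hist_off)) / bins        # bin-center intensities
    d0, d1 = hist_off.sum(), hist_on.sum()

    best = 1.0
    for theta in np.linspace(0.0, 1.0, 2001):   # scan the decision threshold
        p_off = np.sum(hist_off * erfc((theta - mu) / (np.sqrt(2) * sigma_d))) / d0
        p_on = np.sum(hist_on * erfc((mu - theta) / (np.sqrt(2) * sigma_d))) / d1
        best = min(best, 0.25 * (p_off + p_on))
    return best
```

Because the Gaussian smoothing enters Eq. (8) analytically through erfc, the raw histograms can be used directly; no explicit convolution is needed at this step.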
Two interesting effects can be seen from the results. As the aperture increases, the location of the minimum BER moves from small to large SLM fill factors, whereas large CCD fill factors are always advantageous. At the Nyquist aperture, the minimum BER was approximately 10⁻⁶, whereas at twice the Nyquist aperture it was of the order of 10⁻²⁰. Apparently, having a large aperture tends to be beneficial because the system can tolerate more additive noise before reaching the target BER. For a practical system this target BER is dictated by the error-correction coding and tends to be approximately 10⁻³–10⁻⁴. Depending on one's frame of reference, more additive noise can mean either increased noise and constant signal strength (like our model) or decreased signal strength and a constant noise floor (like the physical holographic system). If the aperture is large, more detector noise (or equivalently, lower signal levels) could be tolerated, and therefore more holograms could be stored. Therefore on the basis of Fig. 5 it is necessary to quantify these concepts to understand the trade-off between the number of holograms and the aperture size in terms of areal density.

To obtain a value of the areal density (the number of stored bits per area) for a given BER target, we returned to the numerical method described in Section 2. We reiterate our assumptions: (a) The location of the aperture is exactly at the Fourier plane. (b) Only detector and interpixel cross-talk noise are included. Noise sources such as interpage or interstack noise are not considered in this numerical study. (c) No optical aberrations are taken into account in the optical system (the point-spread function is assumed to be space invariant).

Fig. 4. Experimental validation of the proposed algorithm. Experiments on the BER versus the aperture were performed in the PRISM tester for two situations: (a) SLM and CCD fill factors of 100%. (b) SLM linear fill factor of 50% and CCD fill factor of 100%.

To solve for the areal density, it is assumed that we stop recording holograms when

n_d = σ_d n_s,   (9)

where n_d is the number of detector-noise electrons per pixel, n_s is the number of signal electrons detected by a CCD pixel, and σ_d is the relative standard deviation of the detector noise at which the BER hits the target level (of the order of 1%–10%). The procedure is to select a set of fill factors and an aperture and increase σ_d until our numerical simulation indicates that the target BER has been reached. Given a detector-noise specification (say, 100 noise electrons), we can solve for the minimum number of signal electrons we must have. To relate this to readout power and integration time, we write the number of signal electrons per ON pixel as

n_s = (P_ref / hν) η_hologram η_opt η_e t_int (1 / N_ON),   (10)

where N_ON is the number of ON pixels in the SLM, which, assuming half of the total number of pixels N_p is ON, can be replaced by N_p/2. The term P_ref/hν is the number of photons per second in the reference beam, η_hologram is the diffraction efficiency of the holograms, η_opt is the efficiency of the optical system, which includes all the optical losses between the media and the camera (but not the dead spaces from the CCD pixel fill factors), η_e is the camera quantum efficiency, and t_int is the integration time of the camera. In addition, the diffraction efficiency of the holograms can be expressed in terms of the M/# of the holographic system and the number of stored holograms M [13] as

η_hologram = (M/# / M)².   (11)
Equation (10) tells us how many signal electrons we must have to overcome both the detector noise and the interpixel cross talk. Equations (10) and (11) indicate how P_ref, the M/#, and the number of holograms M influence the number of signal electrons. Combining Eqs. (9)–(11), we derive the number of holograms M in terms of σ_d, η_e, M/#, P_ref, and t_int:

M ≈ M/# [ 2 (σ_d / n_d) (P_ref / hν) η_e η_opt t_int / N_p ]^{1/2}.   (12)

As expected, a higher M/#, more readout power, or more integration time means more holograms can be stored. The effect of a smaller aperture arises through a smaller σ_d value from the simulation. In effect, a larger portion of the SNR budget goes to interpixel cross talk, leaving less for detector noise and thus reducing M. However, a smaller aperture may be better in terms of areal density:

𝒟 = (number of pixels per hologram) × (number of holograms) / (aperture area).   (13)

Fig. 5. BER as a function of the CCD and the SLM linear fill factors for (a) the Nyquist aperture and (b) twice the Nyquist aperture. The detector-noise level is 5% of the incoming signal level (before losses derived from diffraction and CCD dead space).

Substituting Eq. (12) into Eq. (13) yields the areal density in the form

𝒟 = (M/# / D²) [ 2 N_p (P_ref / hν) (σ_d / n_d) η_opt η_e t_int ]^{1/2}.   (14)

To verify that Eq. (14) has validity, we can try to predict the areal density achieved in the DEMON (demonstration platform) system [8]. In this system 1200 holograms were superimposed, with 46,000 user bits/hologram. Although a large aperture was used in the demonstration, recent results show that an identical performance could be expected with an aperture as small as 5 mm. This is an achieved areal density of 2.2 bits/μm². By putting the parameters for this demonstration into Eq. (14), i.e., M/# ≈ 0.2, N_p = 320 × 240, P_ref ≈ 200 mW, σ_d ≈ 0.033, n_d ≈ 100 e⁻, η_opt ≈ 0.4, η_e ≈ 0.3, t_int ≈ 0.016 s, and D ≈ 5 mm, we obtain 1.9 bits/μm². So there is some support for using Eq. (14) to predict absolute areal densities.

In Fig. 6 the areal density achievable at a BER of 10⁻⁴ is shown as a function of the aperture (in units of the Nyquist aperture) for three different SLM–CCD design parameters. For curve (a) each data point has its own unique SLM–CCD fill-factor pair that gives the minimum BER for that aperture, as obtained from BER maps like those in Fig. 5. The other two curves show the density with a fixed pair of fill factors: large SLM and CCD fill factors (90% and 87%, respectively) for curve (b) and small SLM and CCD fill factors for curve (c). None of the curves shown in Fig. 6 includes any effects from aberrations, interpage cross talk, or interstack cross talk, and they all use the following holographic-system parameters: M/# = 0.94, P_ref = 200 mW, N_p = 10⁶, n_d = 115 e⁻, λ = 514.5 nm, t_int = 1 ms, BER = 10⁻⁴, and D = 2 mm. The maximum areal density is achieved when the aperture is slightly larger (1.08×) than the Nyquist aperture, the SLM has a 40% linear fill factor, and the detector array has an 88% linear fill factor, and it corresponds to a number of holograms equal to 400 (by use of the parameters described above). However, if large SLM and CCD fill factors are used [curve (b)], Fig. 6 shows that the best areal density (also located at 1.08 times the Nyquist aperture) is just 10% lower than the maximum of curve (a). When the CCD fill factor is small, as shown by curve (c), the maximum areal density is almost 50% lower than for the case depicted by curve (a). In addition, for the case represented by curve (c) there are aperture sizes for which it is not possible to attain the target BER of 10⁻⁴.
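As a rough cross-check of Eq. (14), the DEMON numbers quoted above can be plugged in directly. The short script below is our own sketch (h and c are the usual physical constants); it lands within a few percent of the 1.9 bits/μm² value quoted above.

```python
import numpy as np

# Plug the DEMON-platform parameters quoted in the text into Eq. (14).
h, c, lam = 6.626e-34, 3.0e8, 514.5e-9   # Planck constant, speed of light, wavelength
M_number = 0.2                           # M/#
N_p = 320 * 240                          # SLM pixels
P_ref = 0.2                              # reference-beam power, W
sigma_d = 0.033                          # relative detector noise at the target BER
n_d = 100.0                              # detector-noise electrons per pixel
eta_opt, eta_e = 0.4, 0.3                # optical and quantum efficiencies
t_int = 0.016                            # integration time, s
D = 5e-3                                 # aperture, m

photons_per_s = P_ref * lam / (h * c)    # P_ref / (h nu)
density = M_number * np.sqrt(
    2 * N_p * photons_per_s * (sigma_d / n_d) * eta_opt * eta_e * t_int
) / D**2

print(density * 1e-12, "bits per square micrometre")   # ~1.8-1.9, as in the text
```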
As expected, for apertures smaller than the Nyquist aperture not all the SLM information is transmitted through the aperture, and a low BER cannot be achieved even for a few holograms.

In summary, our simulations show that areal density is maximized by use of the Nyquist aperture and large CCD fill factors, but it is reasonably independent of the SLM fill factors. Given this latter ambiguity, it is desirable to have large SLM fill factors to shorten the recording exposures and improve the optical efficiency in the object beam.

5. Spatial Light Modulator Contrast and Fixed-Pattern Noise

All the simulations up to this point assumed that the SLM has infinite contrast, that is, the intensity of the OFF pixels is zero at the SLM. In a real holographic device the OFF pixels always transmit some light. For example, the Epson liquid-crystal SLM in the DEMON platform [8] has an ON–OFF contrast ratio of 25:1. Therefore it is important to include the effects of finite SLM contrast on the areal density. In addition, we introduce a second deterministic noise source, fixed-pattern noise, which comes from spatial variations in the ON level across the SLM [16]. Thus identical pixel patterns can have different intensities in the CCD plane, depending on their locations within the SLM page. This is a problem because the last step in most detection schemes is the thresholding of a small block of pixels by a common threshold [8,17]. This threshold can be calculated explicitly or it can be an implicit by-product of the modulation decoder. Spatial variations within the pixel block will tend to broaden the ON and the OFF distributions seen by the threshold and thus increase the BER. To include fixed-pattern noise in our system model, we convolve the bins of the discrete histograms corresponding to a target ON pixel with a Gaussian with a larger standard deviation than before. Because fixed-pattern noise is a truly deterministic pattern noise, this convolution can be done only if global thresholding is assumed. For local thresholding the

Fig. 6. Areal density as a function of the aperture for (a) the best combination of fill factors (obtained with BER maps such as those shown in Fig. 5), (b) a SLM linear fill factor of 90% and a CCD linear fill factor of 87%, and (c) a SLM linear fill factor of 60% and a CCD linear fill factor of 40% (corresponding to the DEMON system [8]).
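One possible sketch of the fixed-pattern-noise broadening described above is given below. The combined width √(σ_d² + σ_fp²) and the column-normalized Gaussian kernel are our own modelling choices; finite SLM contrast would enter separately, for instance by giving OFF pixels a nonzero amplitude in Eq. (6) (roughly √(1/25) for a 25:1 intensity contrast), which is our reading rather than something the excerpt spells out.

```python
import numpy as np

def apply_page_noise(hist_on, sigma_d, sigma_fp, bins=1000):
    """Broaden the ON histogram with detector noise plus fixed-pattern noise.

    sigma_fp is the extra, page-position-dependent spread of the ON level;
    folding it in this way only makes sense under global thresholding,
    as noted in the text above.
    """
    sigma = np.hypot(sigma_d, sigma_fp)          # combined Gaussian width
    mu = np.arange(len(hist_on)) / bins          # bin-center intensities
    diff = mu[:, None] - mu[None, :]             # evaluation grid minus bin center
    kernel = np.exp(-0.5 * (diff / sigma) ** 2)
    kernel /= kernel.sum(axis=0, keepdims=True)  # column-normalized Gaussians
    return kernel @ hist_on                      # smoothed ON density
```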
Optimal control of atom transport for quantum gates in optical lattices
a r X i v :0803.0183v 3 [q u a n t -p h ] 29 M a y 2008Optimal control of atom transport for quantum gates in optical latticesG.De Chiara,1,2T.Calarco,1,3M.Anderlini,4S.Montangero,5P.J.Lee,6B.L.Brown,6W.D.Phillips,6and J.V.Porto 61ECT*,BEC-CNR-INFM &Universit`a di Trento,38050Povo (TN),Italy2Grup d’Optica,Departament de Fisica,Universitat Autonoma de Barcelona,08193Bellaterra,Spain3Institute for Quantum Information Processing,University of Ulm,D-89069Ulm,Germany4INFN,Sezione di Firenze,via Sansone 1,I-50019Sesto Fiorentino (FI),Italy 5NEST-CNR-INFM &Scuola Normale Superiore,P.zza dei Cavalieri 756126Pisa Italy 6National Institute of Standards and Technology,Gaithersburg,Maryland 20899,USA(Dated:May 29,2008)By means of optimal control techniques we model and optimize the manipulation of the external quantum state (center-of-mass motion)of atoms trapped in adjustable optical potentials.We con-sider in detail the cases of both non interacting and interacting atoms moving between neighboring sites in a lattice of a double-well optical potentials.Such a lattice can perform interaction-mediated entanglement of atom pairs and can realize two-qubit quantum gates.The optimized control se-quences for the optical potential allow transport faster and with significantly larger fidelity than is possible with processes based on adiabatic transport.PACS numbers:03.67.-a,34.50.-s,I.INTRODUCTIONQuantum degenerate gases,such as Bose-Einstein con-densates (BECs)[1]or cold Fermi gases [2],trapped in optical lattices,provide a flexible platform for investigat-ing condensed matter models and quantum phase tran-sitions [3].It has been proposed to use these systems as quantum simulators of solid state systems [4]and for implementing quantum information processing (QIP)[5,6,7].Experiments on neutral atoms have shown some of the ingredients needed for QIP:the preparation of a Mott insulator state with just one particle per well,which is used as the initial state of a quantum register [3],single-qubit rotation [8],and controlled motion of atoms so as to effect entangling interactions [8,9].A general requirement of QIP is accurate control of a quantum system.Often this includes control of degrees of freedom other than the qubit or computational basis,for example the center of mass motion of an ion or atom for which the spin (internal state)represents the qubit.One approach to achieving such accurate control is adiabatic manipulation of the relevant Hamiltonian.Unfortunately adiabaticity limits the speed of operations.One way to overcome this difficulty is to use optimal control methods [7,10].Here we show that such techniques could improve the speed and fidelity of transport of atoms in an optical lattice.Recent experiments [9,11,12]have shown that quan-tum gates could be implemented in controllable opti-cal potentials by adjusting the overlap between atoms trapped in neighboring sites of an optical lattice.High fidelity of this dynamic process could be achieved by adia-batically changing the trapping potential.This,however,limits the overall gate speed to be much lower than the trapping frequency [7,13].Here we present a detailed nu-merical analysis of the transport process used to effect a two-qubit quantum gate in [12],which is performed withthe controllable double-well optical potential described in [14]and find that it gives an accurate description of the evolution measured in the experiment.Then we ap-ply optimal control theory to the transport process of the atoms,both with and without interactions,to show how 
to increase the speed of the gate.The success of this method in this specific problem demonstrates the promise of optimal control for coherent manipulation of a diverse class of quantum systems.II.THE EXPERIMENTA two-qubit quantum gate with neutral atoms can be realized in optical lattices through a controlled interaction-induced evolution of the wavefunction that depends on the states of the two atoms [5,6].Because atoms in their electronic ground states generally have short-range interactions,in order to use these contact interactions to produce entanglement,the atomic wave-functions must be made to overlap.Once the interaction has taken place for a fixed time,the two atoms can be separated again thus finishing the gate.In this paper we consider the control of such motion in a specific setup;however our theory can be applied to more general sys-tems.A.The Double-Well LatticeNeutral 87Rb atoms are loaded into the sites of a 3D optical lattice obtained by superimposing a 2D optical lattice of double-wells [14]in the horizontal plane and an independent 1D optical lattice in the vertical direc-tion.The horizontal lattice has a unit cell that can be dynamically transformed between single-well and double-well configurations.The horizontal potential experienced2 by the atoms is[15]:V(x,y)=−V0 cos2 β2 (cos ky+cos(kx−θ))2 (1)where x and y are the spatial coordinates,k=2π/λis the laser wave-vector andλis the laser wavelength.The potential(1)depends on three parameters:(i)the strength V0of the potential wells; (ii)the ratio tan β√3 ent vibrational levels appear spatially separated,allowingus to measure the amount of population in each vibra-tional state.The comparison between these measurements and thetheoretical model requires an accurate determination of the evolution of the parameters V0,βandθcharacteriz-ing the optical lattice during the experimental sequences. The parameter V0,which corresponds to the depth of the potential when it is set in theλ/2configuration,is measured by pulsing theλ/2-lattice and observing the resulting momentum distribution in time offlight[19]. The parametersβandθ,which determine the shape of the double-well potential and are controlled using two electro-optic modulators(EOMs),are determined from measurements of the polarization of the laser beams af-ter the EOMs as a function of their respective control voltages[14].We perform two series of experimental sequences in or-der to study the properties of the atomic transport as a function of the duration of the process and of the energy tilt between the two potential wells during the merge.In afirst series of measurements the sequence involves con-verting the lattice from the double-well to the single-well configuration by changingβ,rotating the polarization of the input light using a linear ramp,while leaving constant the light intensity and the setting of the electro-optic modulator EOMθdedicated to the control of the phase shiftθ.This sequence is repeated for different durations of the linear ramp from T=0.01ms to1.01ms.In a sec-ond series of measurements we consider the dependence of the transport on the tilt of the double-well potential during the merge.We perform the lattice transforma-tion using always the same duration of T=0.5ms,the same intensity of the light beam and the same ramp for changing the polarization angleβ,while the the setting of EOMθis kept constant in time during a sequence.We then repeat the sequence for different settings of EOMθ. 
The time dependence for the three lattice parameters V0,βandθfor measurements of thefirst series and of the second series are shown in Fig.2a and Fig.2b,respec-tively.Fig.2b shows the evolution of the parameterθfor two different settings of EOMθ.The potential parame-ters are determined using our calibration of the lattice setup,taking into account effects such as different losses on the optical elements for different polarizations of the lattice beams and the dependence of the optical potential on the local polarization of the light[8].These effects are responsible for the change of the potential depth V0and of the angleθduring the sequence despite the fact that both the intensity of the light and the settings of EOMθare not actively changed.III.THEORETICAL MODELHere we describe the theoretical methods that we im-plement for investigating the dynamics in the system de-scribed above,starting with the case of non-interacting00.20.40.60.81t/T100120140V0/h(kHz)00.20.40.60.81t/T0.20.40.6β/π00.20.40.60.81t/T100120140V0/h(kHz)00.20.40.60.81t/T0.20.40.6β/π00.20.40.60.81t/T-0.5-0.4-0.3-0.2-0.1θ/π00.20.40.60.81t/T-0.5-0.4-0.3-0.2-0.1θ/πa bFIG.2:Two possible sequences a(left)and b(right)employed to shift the atoms from a double-to a single-well configura-tion are shown in the left and right part of the panel.For each sequence we show the time-dependence of V0,β,θfor a sequence duration T.For sequence b we show the time de-pendence ofθfor two settings of EOMθ:−0.42π(solid)and −0.48π(dashed).particles.Then,we consider the experimental realization of the merging of adjacent lattice sites into a single site shown in Fig.1and we compare the results obtained in the experiment with the expectations based on our theo-retical model.This stage represents a useful benchmark to evaluate the reliability of the numerical model as well as for gaining insight into the details of the optical po-tential experienced by the atoms.Finally,we present the technique for optimizing the transport sequence,and we show how we can achieve a significantly higherfidelity at fixed operation time for the atomic motion than by using smooth sequences based on adiabatic evolution.A.Theoretical FrameworkWe consider the1D problem restricted to the x axis by assuming that the optical potential can be separated along the three spatial directions,allowing us to express the atomic wavefunctions as a product of three indepen-dent terms.We consider the harmonic approximation of the potential in the y and z directions,having trap fre-quenciesνy andνz respectively,that can be calculated as shown in[20]and we assume that along y and z the atoms always occupy the lowest vibrational state.This restriction does not put limitations in studying dynamic processes involving low energy states of the double-well potential since it can be chosen to have non-degenerate vibration frequencies along all three directions,with the4lowest frequency always along x .We calculate the eigen-states of the system along the x direction by solving the eigenvalue equation usingthefinite differencemethod[21].For the time evolution we consider the integration of the time dependent Schr¨o dinger equation using the Crank-Nicolson method [22].This method has the ad-vantage of being unconditionally stable and the error in the results scale quadratically with the number of space-time grid points in which the Schr¨o dinger equation is solved.The relative error of the data presented is al-ways less than 10−3.In Appendix A we present a more detailed description of our numerical 
methods.B.Comparison to experimental resultsIn this section we present the theoretical analysis of the transport processes described above and we discuss the agreement between the model and the experimental mea-surements.We start by considering the time evolution of the Hamiltonian spectrum during the two sequences a and b shown in Fig.3.0.20.40.60.81t/T-250-200-150-100E /h (k H z )0.20.40.60.81t/T-250-200-150-100E /h (k H z )a b FIG.3:Instantaneous spectrum of the 1D Hamiltonian for sequences a and b for EOM θ:−0.42π.At time t =0the spectrum is made of nearly degener-ate doublets of almost equally spaced pairs of harmonic oscillator states,while at time t =T the levels are similar to those of a single harmonic oscillator 1.vibrational excitation along the x direction).Additional energy levels are present in the 2D spectrum,associated with states with vibrational excitation along y .However,those states can be neglected for studying the dynamical process considered here since their energy is always higher than the three lowest states of the 1D spectrum.2Both in the experiments and in the simulations the evolution of the atom initially in the right well,i.e.in state |ψR ,shows a weaker dependence on the properties of the sequence and is less instructive.For instance,in the simulations for T =0.5ms thepopulation in the ground state f R 0is of order of 99%for a broad range of parameters.3We do not consider variations in V 0and βdue to the small dependence of the transport process on those parameters within the range associated with the accuracy of our calibrations.5f 0LT (ms)f 1LT (ms)f 2LT (ms)FIG.4:Population of the first three eigenstates of the Hamil-tonian,ground (top),first excited (middle),second excited (bottom),at the end of sequence a as a function of the se-quence duration T .The experimental data (symbols)are compared to the four sets of numerical data (lines)obtained for θ/π=−0.454+∆θa /π,with ∆θa /π=0(dot-dash),-0.02(dash),-0.03(solid)and -0.04(dot),while -0.454is the nominal value of θ/πexpected from the calibrations.of the atom starting in the left site of the double well.The transport into the first excited state has a maximum theoretical fidelity of 0.95for θb /π=-0.474.Less nega-tive values of θb ,i.e.increasing tilts,lead to a decrease of fidelity due to the increase in the fraction of population ending in the second excited state.Values of θb /πcloser to -0.5,i.e.more symmetric configurations of the double well,lead to decrease of fidelity associated with larger fractions of population ending in the ground state.Also in this case theexperimental data and the theoretical model are in satisfactory agreement.For these data the deviation between theory and experiment is more sen-sitive to the value of the phase shift θb .We find best agreement by shifting the value determined from the cal- f nLθb /πFIG.5:Population (overlap absolute squared)of the first three eigenstates of the Hamiltonian at the end of sequence b as a function of θb .The duration of the sequence is fixed to T =0.5ms.The experimental data (symbols)are in good agreement with the numerical data (lines).In this graph the x axis for the experimental has been shifted by an offset of -0.015with respect to the initial calibration.ibration by an offset ∆θb /π=-0.015,which reduces the rms deviation from 0.4to 0.15.The axis for the exper-imental data in Fig.5has been corrected by the offset ∆θb .Thus,while showing the reliability of the model in describing the dynamic process,the comparison between theoretical and 
experimental results also allows one to re-fine the calibration of the parameters characterizing the optical potential.Finally we find that adding an offset of ∆θ/π=-0.016to the θcalibration brings the data from both sequences to a good agreement with the theory and reduces the rms deviation from 0.19to 0.11.This is three times larger than the uncertainty of the offset from our EOM calibration but is still consistent with measurements of the lattice tilt from [16].This discrepancy might be ex-plained by the birefringence in the vacuum cell windows,which is not accounted for in our model.Inclusion of this offset should improve both the predictivity of the model and the experimental optimization of the collisional gate based on the numerical technique described below.IV.OPTIMIZED TRANSPORTIn this section we employ optimal control theory to obtain fast and high-fidelity gates.Our aim is to find a temporal dependence of the control parameters V 0(t ),β(t ),θ(t )that improves the fidelity even for a shorter se-quence duration,when the adiabatic sequences presented above yield a poor fidelity.Quantum optimal control techniques have been successfully employed in a variety of fields:molecular dynamics [10,23,24],dynamics of ultracold atoms in optical lattices [25,26,27],imple-mentation of quantum gates [7,28].We use the Krotov algorithm [29]as the optimization procedure.The objective is to find the optimal shapes of the control parameter sequences that maximize the over-lap (fidelity)between the evolved initial wave functionand a target wave function.The initial and target wave functions are fixed a priori.The algorithm works also for more than one particle.The method consists in it-eratively modifying the shape of the control parameters according to a “steepest descent method”in the space of functions (for more details see [7]).The method requires evolving each particle’s wave function and an auxiliary wave function backward and forward in time according to the Schr¨o dinger equations.In our simulations we use the Crank-Nicolson scheme to realize this step as described in Appendix A.A.Non-interacting caseWe optimize the gate for T =0.15ms 4choosing as a starting point for the optimization a sequence similar to Fig.2b where θis for simplicity taken constant to the final value θb /π=−0.474.Without optimization the fi-delities for the atom initially in the left and right well are f L 1=0.57and f R 0=0.69,respectively.The infidelities are shown in Fig.6as a function of the number of opti-mization steps:the algorithm of optimization is proven to yield a monotonic increase in fidelity [10],however it does not guarantee to reach its 100%value.The results for the two atoms give a fidelity above 98.7%.optimization step1 - f nαFIG.6:Infidelities (1−f αn )for the atom initially in the left (α=L,n =1,squares)and in the right well (α=R,n =0,plus)as a function of the optimization step.The resulting optimized parameter sequences are shown in Fig.7and compared to the original sequence without optimization.We find that the optimized se-quence for the potential depth V 0differs negligibly from0.02 0.04 0.06 0.08 0.1 0.12 0.14-0.4-0.20.2 0.4t (m s )x/λ+0.250.02 0.04 0.06 0.08 0.1 0.12 0.14-0.4-0.20.2 0.4t (m s )x/λ+0.250.02 0.04 0.06 0.08 0.1 0.12 0.14-0.4-0.20.2 0.4t (m s )x/λ+0.250.05 0.10.151230 0.2 0.4 0.6 0.8 1p (t)t (ms)nn0.02 0.04 0.06 0.08 0.1 0.12 0.14-0.4-0.20.2 0.4t (m s )x/λ+0.250.02 0.04 0.06 0.08 0.1 0.12 0.14-0.4-0.20.2 0.4t (m s )x/λ+0.250.02 0.04 0.06 0.08 0.1 0.12 0.14-0.4-0.20.2 0.4t (m s 
)x/λ+0.250.05 0.10.151230 0.2 0.4 0.6 0.8 1t (ms)np n (t)FIG.8:(Color online)Comparison between the evolution of the atoms with and without optimal control.Top (left to right):non optimized case,absolute square value of the wave functions as a function of time (atoms initially in the left and right well respectively);1D trapping potential as a function of time;projections p n (t )at time t of the state initially in the left well onto the instantaneous eigenstates |φn (t ) with n =0(blue solid),n =1(red dashed),n =2(green dotted),n =3(magenta dot-dashed).Bottom:analogous plots for the optimized case.g 1D =2a s h√√f R 0F non optimized0.570.220.990.98interaction optimized0.980.97TABLE I:Results of our numerical simulations for three dif-ferent sets of control parameters:the non optimized case Fig.2b ;the transport optimized case Fig.7where the optimal control algorithm is used without taking into account interac-tions;the interaction optimized case where the optimal con-trol algorithm is used taking into account interactions.Thequantities shown are:the single-particle populations f R0and f L1calculated without interactions,the two-particle fidelities F and F int calculated without and with interactions.ble I we summarize our results for T =0.15ms obtained with three different sequences:first,the non optimized sequence Fig.2b ;second,the transport optimized case Fig.7where we used the optimal control algorithm to op-timize the single-particle populations not taking into ac-count interactions;third,the interaction optimized case8a-0.350.35-0.35 0 0.35x 2/λ + 0.25x 1/λ + 0.25b-0.350.35-0.35 0 0.35x 2/λ + 0.25x 1/λ + 0.25c-0.350.35-0.35 0 0.35x 2/λ + 0.25x 1/λ + 0.25d-0.350.35-0.35 0 0.35x 2/λ + 0.25x 1/λ + 0.25FIG.9:(Color online)Absolute square values of the relevant symmetric wave-functions in the coordinates of the two atoms:a Initial wave function in the state ˛˛˛˜Ψin E with one atom in the left well and one atom in the right well;b wave function ofthe target state ˛˛˛˜Φtg E .c evolved wave-function using the non-optimized sequences of Fig.2b giving a fidelity F int =0.22(for T =150µs);d evolved wave-function using the optimized sequences of Fig.7giving a fidelity F int =0.93.where we apply the optimal control algorithm using as the initial guess the transport optimized sequence Fig.7and then optimizing including the interactions in the evo-lution.The resulting wave-functions for the non optimized and transport optimized sequences are compared in Fig.9c-d .Without optimal control the two-particle fidelity with and without interactions is F =F int =0.22while with(non-interacting)optimization we obtain F ≃f R 0f L1=0.98and F int =0.93.This shows that interactions spoil slightly the efficiency of the transport process as one might expect.Optimal control can subsequently be applied while including interactions in the optimiza-tion,producing a control sequence with a fidelity of F int =0.97.Another consideration is the experimental bandwidth available for feedback control.The optimized control waveforms Fig.7were obtained with no restriction on the frequency response of the control,and typically have frequency components on the order of a few times the lattice vibrational spacings (see Fig.10),rger than the bandwidth of our control electronics.Clearly,using a filtered version of these waveforms will lead to lower control fidelity and it will be important to increase the experimental bandwidth of the control electronics (cur-rently about 50kHz).In addition,it may be useful to develop an 
optimization sequence that includes the lim-ited control bandwidth,although it is likely that frequen-cies on the order of the vibrational spacing will always be needed.10-410-310-210-110N o r m a l i z e d F F T m a g .7891023456789100234567891000Frequency (kHz)FIG.10:The normalized Fourier transform magnitudes |˜β(f )|(solid)and |˜θ(f )|(dashed)of the optimized control sequences β(t )and θ(t )shown in Fig.7.The spectra are normalized to the value at the fundamental frequency 1/T =6.67kHz.V.CONCLUSIONSWe have presented a detailed,numerical analysis of the transport process of neutral atoms in a time dependent optical lattice.We show how to improve the fidelity of the transport process for T =0.15ms from F int =0.22,us-ing simple adiabatic switching,to F int =0.97,using op-timal control theory.We expect better results for longer control times.We analyze the effect of atom-atom inter-actions on the transport process and we show that the optimal control parameter sequences found in the non-interacting case still work when including interaction.We obtained the same transformation as in the case of the adiabatic transport with a better fidelity and in a time shorter by more than a factor of three,which represents a relevant improvement in terms of scalability of the num-ber of gates that can be performed before the system decoheres due to the coupling to its environment.This technique can be easily adapted to other similar trans-port processes and also extended to atoms in different magnetic states,which can allow the implementation of a fast,high-fidelity quantum gate in a real optical lat-tice setup with the qubits encoded in the atomic internal states [12].In the future,it would be interesting to study the possibility of including the effect of errors in the op-timization procedure and thus investigate in more details the robustness and noise-resilience of the optimal control technique.AcknowledgmentsThis work was supported by the European Commis-sion,through projects SCALA (FET/QIPC),EMALI (MRTN-CT-2006-035369)and QOQIP,by the National Science Foundation through a grant for the Institute for Theoretical Atomic,Molecular and Optical Physics at9 Harvard University and Smithsonian Astrophysical Ob-servatory,and by DTO.SM acknowledges support fromIST-EUROSQIP and the Quantum Information programof“Centro De Giorgi”of Scuola Normale Superiore.Thecomputations have been performed on the HPC facilityof the Department of Physics,University of Trento.Wethank J.Sebby-Strabley for experimental support.APPENDIX A:NUMERICAL METHODIn our numerical simulationswe employafinitediffer-ence method(see for example[21])that consists in dis-cretizing the coordinate representation in a homogeneousn points mesh in the interval[X1;X2]:x k−x k−1=dx,x0=X1,x n=X2.The number dx=(X2−X1)/n isthe lattice spacing.In this discretized representation theeigenvalue equation becomes:V(x k,0)−ǫδ2x2[H(x k,t n)ψ(x k,t n)++H(x k,t n+1)ψ(x k,t n+1)](A4)This method is of the second order in time and spaceand it is unconditionally stable.The price for all theseadvantages is that a tridiagonal set of linear equationsmust be solved to getψ(t n+1)as shown in Eq.(A4).We used common Fortran routines to solve the linearequations problem[32].We solve a2D time dependent Schr¨o dinger equationin the two coordinates of the atoms by making use ofthe extension in two dimensions of the Crank-Nicolsonmethod called the Peaceman-Rachford method[21].Thisis an implicit method and the integration proceeds in twosteps:first the initial wave-function is 
integrated in timeconsidering only one direction in the coordinate space,then from the intermediate wave-function we obtain thefinal wave-function by integrating in the other direction.This method is an example of alternate direction implicitschemes.In our simulations we used n=(X2−X1)/dx=103and n T=T/dt=5·103that assures convergence of theresults with a relative error which is less than10−3.[1]For a review,see for example:I.Bloch,J.Phys.B38,S629(2005).[2]G.Modugno,F.Ferlaino,R.Heidemann,G.Roati,M.Inguscio,Phys.Rev.A68,011601(R)(2003).[3]M.Greiner,O.Mandel,T.Esslinger,T.W.H¨a nsch,andI.Bloch,Nature London415,39(2002).[4]J.J.Garc`ıa-Ripoll,M.A.Martin-Delgado,and J.I.CiracPhys.Rev.Lett.93,250405(2004).[5]G.K.Brennen,C.M.Caves,P.S.Jessen,and I.H.Deutsch,Phys.Rev.Lett.82,1060(1999).[6]D.Jaksch,H.-J.Briegel,J.I.Cirac,C.W.Gardiner,andP.Zoller,Phys.Rev.Lett.82,1975(1999).10[7]T.Calarco,U.Dorner,P.Julienne,C.Williams,and P.Zoller,Phys.Rev.A70,012306(2004).[8]P.J.Lee,M.Anderlini,B.L.Brown,J.Sebby-Strabley,W.D.Phillips and J.V.Porto,Phys.Rev.Lett.99, 020402(2007).[9]O.Mandel,M.Greiner, A.Widera,T.Rom,T.W.H¨a nsch and I.Bloch,Nature425,937(2003).[10]A.P.Peirce,M.A.Dahleh,and H.Rabitz,Phys.Rev.A37,4950(1988).R.Kosloff,S.A.Rice,P.Gaspard,S.Tersigni,and D.J.Tannor,Chem.Phys.139,201(1989).[11]S.Trotzky,P.Cheinet,S.Folling,M.Feld,U.Schnor-rberger,A.M.Rey,A.Polkovnikov,E.A.Demler,M.D.Lukin,and I.Bloch,Science319,295(2008).[12]M.Anderlini,P.J.Lee,B.L.Brown,J.Sebby-Strabley,W.D.Phillips and J.V.Porto,Nature448452–456 (2007).[13]J.J.Garc`ıa-Ripoll,P.Zoller,and J.I.Cirac,Phys.Rev.Lett.91,157901(2003).[14]J.Sebby-Strabley,M.Anderlini,P.S.Jessen,and J.V.Porto,Phys.Rev.A73,033605(2006).[15]M.Anderlini,J.Sebby-Strabley,J.Kruse,J.V.Porto,and W.D.Phillips,J.Phys.B39,S199(2006).[16]J.Sebby-Strabley,B.L.Brown,M.Anderlini,P.J.Lee,W.D.Phillips,J.V.Porto,and P.R.Johnson,Phys.Rev.Lett.98,200405(2007).[17]A.Kastberg,W.D.Phillips,S.L.Rolston,R.J.C.Spreeuw,and P.S.Jessen,Phys.Rev.Lett.74,1542 (1995).[18]M.Greiner,I.Bloch,O.Mandel,T.W.H¨a nsch,and T.Esslinger,Phys.Rev.Lett.87,160405(2001).[19]Yu.B.Ovchinnikov,J.H.M¨u ller,M.R.Doery,E.J.D.Vredenbregt,K.Helmerson,S.L.Rolston,and W.D.Phillips,Phys.Rev.Lett83,284(1999).[20]I.B.Spielman,P.R.Johnson,J.H.Huckans,C.D.Fer-tig,S.L.Rolston,W.D.Phillips,and J.V.Porto Phys.Rev.A73,020702(R)(2006).[21]J.W.Thomas:“Numerical Partial Differential Equa-tions”,vol.1,Springer,New York(1995).[22]J.Crank and P.Nicolson,Proc.Cambridge Philos.Soc.43,50(1947).[23]S.A.Rice and M.Zhao:“Optical control of Moleculardynamics”,J.Wiley,New York(2000).[24]M.Shapiro and P.Brumer:“Principles of the quan-tum control of molecular processes”,J.Wiley,New York (2003).[25]S.E.Sklarz and D.J.Tannor,Phys.Rev.A66,053619(2002).[26]B.Vaucher,S.R.Clark,U.Dorner,D.Jaksch,New J.Phys.9221(2007).[27]O.Romero-Isart and J.J.Garc`ıa-Ripoll,Phys.Rev.A76,052304(2007).[28]S.Montangero,T.Calarco and R.Fazio,Phys.Rev.Lett.99,170501(2007).[29]V.F.Krotov:“Global methods in optimal control the-ory”,M.Dekker Inc.,New York(1996).[30]M.Olshanii,Phys.Rev.Lett.81,938(1998).[31]G.L.G.Sleijpen and H. A.van der Vorst,SIAM Review42,267(2000),we used the rou-tine written by G.L.G.Sleijpen and coworkers: http://www.math.uu.nl/people/sleijpen/[32]E.Anderson et al.:“LAPACK User’s guide”,SIAM,Philadelphia(1999).。
TRAJECTORY AND OPTICAL PARAMETERS IN A NON-LINEAR STRAY FIELD
New magnetic measurements on operational CPS magnet working points were performed in 1992, including measurements of the central field, the end and lateral stray fields, and the field in the junction between the two half-units [2]. The measured magnet unit was a radially defocusing open half-unit (D half-unit) followed by a radially focusing closed half-unit (F half-unit), with the yoke oriented towards the centre of the ring. Measurements were carried out in a Cartesian frame with steps of 20 mm along the z-axis (aligned along the two magnet targets and counted positively in the proton direction) and steps of 10 mm along the x-axis (with its origin at the middle of the central magnet junction and counted positively towards the outside of the ring). The field map covers -70 mm ≤ x ≤ 310 mm and -2.55 m ≤ z ≤ 2.73 m (the magnet length is 4.26 m).
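For later processing (shimming correction, polynomial fitting, tracking) the discrete map has to be turned into something that can be evaluated at arbitrary (x, z). A possible sketch is given below, assuming the map is stored as a plain text array of the vertical field component on the grid above; the file name and layout are hypothetical, not taken from the measurement report.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Grid of the measured map described above (values in metres).
x_grid = np.arange(-0.070, 0.310 + 1e-9, 0.010)   # 10 mm steps across the gap
z_grid = np.arange(-2.55, 2.73 + 1e-9, 0.020)     # 20 mm steps along the unit

def load_field_map(path):
    """Return an interpolable B_y(x, z) from a measured map stored as a text
    array of shape (len(x_grid), len(z_grid)); the layout is a hypothetical
    choice and may differ from the actual measurement files."""
    b_map = np.loadtxt(path)
    assert b_map.shape == (x_grid.size, z_grid.size)
    return RegularGridInterpolator((x_grid, z_grid), b_map,
                                   bounds_error=False, fill_value=0.0)

# usage (hypothetical file name): b_y = load_field_map("ps16_map.dat"); b_y((0.120, 1.85))
```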
D. Manglunki, M. Martini, CERN, Geneva, Switzerland
I. Kirsten, Heidelberg University, Heidelberg, Germany

ABSTRACT
Fig. 1: CPS magnet unit number 16 including the 26 GeV/c extraction pipe.
1. INTRODUCTION
The proton beams extracted from the CPS experience the strongly non-linear stray field of the magnet unit downstream of the ejection septum. Previous calculations modelled the multipole components of each half-magnet beyond the sextupole order as thin elements. The multipole coefficients were derived from field measurements at a reference azimuthal magnet location using one-dimensional transverse fitting. The approach considered in this paper consists of integrating the equations of motion for a single proton travelling through a measured discrete field map converted into bi-dimensional polynomials. The results yield the ejection trajectory and the transfer matrices through the field map. This study considers the ejection of a 26 GeV/c proton beam that will be used to fill the LHC [1]. The same approach could similarly be used to handle other ejection settings (e.g. the 24 GeV/c proton slow extraction) and the 1 GeV proton injection into the CPS.
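To make the integration scheme outlined above concrete, the sketch below integrates the planar equations of motion of a single 26 GeV/c proton through a vertical field B_y(x,z) represented by a bi-dimensional polynomial, and estimates the first-order horizontal transfer matrix by finite differences of neighbouring trajectories. The polynomial coefficients, the initial conditions and the purely horizontal (planar) form of the equations of motion are illustrative assumptions; they do not reproduce the measured CPS field map.

```python
import numpy as np
from scipy.integrate import solve_ivp

BRHO = 3.3356 * 26.0  # magnetic rigidity [T*m] for a 26 GeV/c proton (Brho = 3.3356 * p[GeV/c])

# Illustrative bi-dimensional polynomial for the vertical field B_y(x, z):
# B_y = sum_{i,j} C[i, j] * x**i * z**j  (coefficients invented for this example)
C = np.zeros((3, 3))
C[0, 0] = 1.2    # dipole-like term [T]
C[1, 0] = 5.0    # gradient-like term [T/m]
C[2, 0] = 40.0   # sextupole-like term [T/m^2]

def b_y(x, z):
    """Evaluate the polynomial field map at (x, z)."""
    return np.polynomial.polynomial.polyval2d(x, z, C)

def rhs(z, y):
    """Planar equations of motion with z as independent variable.
    y = (x, x');  x'' = -(1 + x'^2)^(3/2) * B_y(x, z) / Brho."""
    x, xp = y
    return [xp, -(1.0 + xp**2) ** 1.5 * b_y(x, z) / BRHO]

def track(x0, xp0, z0=-2.55, z1=2.73):
    """Integrate one trajectory across the field map region."""
    sol = solve_ivp(rhs, (z0, z1), [x0, xp0], max_step=0.01, rtol=1e-9, atol=1e-12)
    return np.array([sol.y[0, -1], sol.y[1, -1]])

# Ejection-like initial conditions (illustrative values only).
print("exit (x, x'):", track(0.10, 0.029))

# First-order horizontal transfer matrix from finite differences of perturbed trajectories.
eps = 1e-5
M = np.empty((2, 2))
for j, (dx, dxp) in enumerate([(eps, 0.0), (0.0, eps)]):
    M[:, j] = (track(0.10 + dx, 0.029 + dxp) - track(0.10 - dx, 0.029 - dxp)) / (2.0 * eps)
print("transfer matrix through the map:\n", M)
```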
2. FITTING OF FIELD MEASUREMENTS
2.1 Measured field map on a test magnet unit
The CPS lattice consists of ten super-periods, each made of ten combined-function magnets, eight 1.0 m straight sections and two 2.4 m straight sections. Each magnet is composed of two half-units with gradients of opposite sign, separated by a central junction. The half-units are made of five blocks lined up on the central orbit, with small gaps in between.
New magnetic measurements on operational CPS magnet working points were performed in 1992, including measurements of the central field, the end and lateral stray fields, and the field in the junction between the two half-units [2]. The measured magnet unit was a radially defocusing open half-unit (D half-unit) followed by a radially focusing closed half-unit (F half-unit), with the yoke oriented towards the centre of the ring. Measurements were carried out in a Cartesian frame with steps of 20 mm along the z-axis (aligned along the two magnet targets and counted positively in the proton direction) and steps of 10 mm along the x-axis (with origin at the middle of the central magnet junction and counted positively towards the outside of the ring). The field map range is -70 mm ≤ x ≤ 310 mm and -2.55 m ≤ z ≤ 2.73 m (the magnet length is about 4.2 m).
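The sketch below illustrates one way a discrete map on such a grid can be converted into bi-dimensional polynomials, as discussed in the next subsection: a least-squares fit of the gridded field values against a two-dimensional polynomial basis. The grid spacing and ranges follow the description above, but the field values and the polynomial degrees are placeholders rather than the actual measurement data.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Measurement grid: 10 mm steps in x, 20 mm steps in z (ranges as quoted in the text).
x = np.arange(-0.070, 0.310 + 1e-9, 0.010)   # [m]
z = np.arange(-2.55, 2.73 + 1e-9, 0.020)     # [m]
X, Z = np.meshgrid(x, z, indexing="ij")

# Placeholder for the measured vertical field on the grid; in practice this array
# would be filled from the measurement files.
B_meas = 1.2 + 5.0 * X + 40.0 * X**2 * np.exp(-Z**2)

def fit_poly2d(X, Z, B, deg_x, deg_z):
    """Least-squares fit of B(x,z) ~ sum_{i,j} c[i,j] x^i z^j on a grid."""
    V = P.polyvander2d(X.ravel(), Z.ravel(), [deg_x, deg_z])
    coeffs, *_ = np.linalg.lstsq(V, B.ravel(), rcond=None)
    return coeffs.reshape(deg_x + 1, deg_z + 1)

Cfit = fit_poly2d(X, Z, B_meas, deg_x=6, deg_z=8)   # degrees chosen for illustration only
B_fit = P.polyval2d(X, Z, Cfit)
print("rms residual [T]:", np.sqrt(np.mean((B_fit - B_meas) ** 2)))
```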
2.2 Polynomial form of the discrete field map
The necessity to extract a 26 GeV/c proton beam in a 2.4 m straight section with a small deflection angle (≅29 mrad) imposes that the downstream half-unit adjacent to the ejection septum be open, so that the extraction pipe can be fitted across the magnet aperture. The ejection trajectory in this region remains close to the central orbit, and the aberrations in the magnetic field are therefore kept at a reasonable level. When traversing the subsequent F half-unit, the ejection trajectory moves away from the central orbit and the field aberrations become strongly non-linear: the beam experiences a field gradient with a reverse sign, yielding large horizontal betatron function values at the magnet end. The non-linear aberrations were reduced by shimming the F half-unit. Straight parallel shims have been mounted at different radial positions on the five blocks to shape a constant magnetic field over the ejected beam width [3]. Magnetic measurements have been done on a laboratory magnet unit in the absence of shims, so the measured field map has to be corrected to account for the shimming effect. Field calculations have been carried out on the five blocks equipped with shims using the two-dimensional Poisson program [4] with appropriate meshing of the field region [5]. Polynomial fittings up to degree twenty-five in x (in the range -0.1 m ≤ x ≤ 0.5 m) of the Poisson output have been carried out to get a functional form of the computed field (see Fig. 2). Hence, a correcting field map function w(x,z) may be defined as

    w(x,z) = f_i(x)   if z is in block i of the F half-unit,
             1        otherwise,

where f_i(x) is the ratio of the computed field function on shimmed block i to the corresponding function derived without shim insertion. Multiplying the measured field map values by w(x,z) yields the field map relevant for the stray field in the presence of shims, which is important for good extraction modelling.
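A minimal sketch of how the correcting function w(x,z) could be applied to the measured map is shown below. The block limits along z and the ratio functions f_i(x) are placeholder values standing in for the Poisson results; only the piecewise definition of w(x,z) and the point-by-point multiplication follow the text.

```python
import numpy as np

# Hypothetical z-extents of the five F half-unit blocks [m] (placeholder values).
BLOCK_EDGES = [(0.10, 0.90), (0.95, 1.75), (1.80, 2.20), (2.25, 2.45), (2.50, 2.70)]

# f_i(x): ratio of the computed field on shimmed block i to the field computed without
# shims, represented here by illustrative low-order polynomials in x.
F_RATIO = [np.poly1d([0.02 * i, 1.0]) for i in range(1, 6)]   # f_i(x) = 1 + 0.02*i*x

def w(x, z):
    """Correcting function: f_i(x) inside block i of the F half-unit, 1 elsewhere."""
    for (z_lo, z_hi), f in zip(BLOCK_EDGES, F_RATIO):
        if z_lo <= z <= z_hi:
            return f(x)
    return 1.0

def apply_shim_correction(x_grid, z_grid, b_measured):
    """Multiply the measured (shim-free) field map by w(x,z) point by point."""
    b_corr = np.empty_like(b_measured)
    for i, xi in enumerate(x_grid):
        for j, zj in enumerate(z_grid):
            b_corr[i, j] = w(xi, zj) * b_measured[i, j]
    return b_corr
```

In practice the block limits and the ratio functions f_i(x) would be taken from the Poisson calculations with and without shims described above, and the corrected map would then be fitted with the bi-dimensional polynomials used for trajectory integration.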