crystal_growing_presentation_nov2006


Dust Processing in Disks around T Tauri Stars

arXiv:astro-ph/0605415 v1 17 May 2006 (revised version)

Dust Processing in Disks around T Tauri Stars

B. Sargent¹, W. J. Forrest¹, P. D'Alessio², A. Li³, J. Najita⁴, D. M. Watson¹, N. Calvet⁵, E. Furlan⁶, J. D. Green¹, K. H. Kim¹, G. C. Sloan⁶, C. H. Chen⁴,⁷, L. Hartmann⁵, and J. R. Houck⁶

ABSTRACT

The 8–14 µm emission spectra of 12 T Tauri stars in the Taurus/Auriga dark clouds and in the TW Hydrae association, obtained with the Infrared Spectrograph (IRS) on board Spitzer, are analyzed. Assuming the 10 µm features originate from silicate grains in the optically thin surface layers of T Tauri disks, the 8–14 µm dust emissivity for each object is derived from its Spitzer spectrum. The emissivities are fit with the opacities of laboratory analogs of cosmic dust. The fits include small nonspherical grains of amorphous silicates (pyroxene and olivine), crystalline silicates (forsterite and pyroxene), and quartz, together with large fluffy amorphous silicate grains. A wide range in the fraction of crystalline silicate grains, as well as of large silicate grains, is found among these stars. The dust in the transitional-disk objects CoKu Tau/4, GM Aur, and DM Tau has the simplest form of silicates, with almost no hint of crystalline components and modest amounts of large grains. This indicates that the dust grains in these objects have been modified little from their origin in the interstellar medium. Other stars show various amounts of crystalline silicates, similar to the wide dispersion in the degree of crystallinity reported for Herbig Ae/Be stars of mass < 2.5 M⊙. Late spectral type, low-mass stars can have significant fractions of crystalline silicate grains. Higher quartz mass fractions often accompany low amorphous-olivine-to-amorphous-pyroxene ratios. It is also found that lower contrast of the 10 µm feature accompanies greater crystallinity.

Subject headings: circumstellar matter, infrared: stars, stars: pre-main-sequence

1. Introduction

It has long been known that T Tauri stars (TTSs) emit infrared (IR) radiation in excess of their stellar photosphere (e.g., Mendoza 1966). Cohen (1973) speculated that silicate dust in orbit around these stars was responsible for this excess emission. With observations from the Infrared Astronomical Satellite (IRAS), it was shown that the 12–100 µm IR excess emission from these young stars could arise from dusty accretion disks (Rucinski 1985). Many different models for this disk emission have been proposed. Both Adams et al. (1987) and Kenyon & Hartmann (1995) construct disk models including both accretion and reprocessing of stellar radiation. In order to explain how disk reprocessing can be responsible for the IR excesses of most TTSs, Kenyon & Hartmann (1987, henceforth KH87) proposed that disks around TTSs are flared, in that the scale height of the disk increases more than linearly with distance from the central star. A flared disk intercepts a larger solid angle of radiation emitted from the star than a flat or nonflared disk, leading to more reprocessing of starlight. KH87 suggest that the surface of such a flared disk would become hotter than the midplane due to radiative transfer effects. The disk material is optically thick to λ ∼ 1 µm stellar radiation, so the starlight is absorbed in the highest layers of the disk. At λ ∼ 10 µm, characteristic of the reprocessed radiation from the top disk layers at R ∼ 1 AU in the disk, the disk has less optical depth, and the reprocessed radiation diffuses into the interior parts of the disk and heats those regions. For the small accretion rates typical of TTSs, the disk atmosphere heats to a higher temperature than the layers in the disk underneath, and the vertical temperature inversion produces silicate features in emission (Calvet et al. 1992).

Dorschner (2003) summarizes how, based on the IR spectroscopic observations of Gillett et al. (1968), it came to be established that the 10 µm emission (or absorption) feature, the broad emission (or absorption) feature from 8 to 13 µm seen in a number of astronomical objects, is due to the Si–O stretching modes in silicate grains. Spectrophotometric observations by Forrest & Soifer (1976) and Forrest et al. (1976) at wavelengths longer than 16 µm provided further support for the silicate hypothesis. They found an 18.5 µm peak in the Trapezium emission and an 18.5 µm maximum in the absorption from the BN-KL source; the pairing of the 18.5 µm feature (the broad emission or absorption feature from 16 to 23 µm) with the 10 µm features in the Trapezium and the BN-KL source confirmed the silicate hypothesis. There have been many studies of the silicate features of TTSs, both ground-based (Cohen & Witteborn 1985; Honda et al. 2003; Kessler-Silacci et al. 2005) and space-based (Natta et al. 2000; Kessler-Silacci et al. 2006). However, ground-based spectroscopic observations are limited by the Earth's atmosphere at mid-IR wavelengths in both wavelength coverage and sensitivity. Space-based missions, such as the Infrared Space Observatory (ISO; Kessler et al. 1996), do not suffer these limitations. The Spitzer Space Telescope (Werner et al. 2004) offers greater sensitivity than previous space-based missions. Here we focus on studying the 10 µm silicate features of TTSs with the Infrared Spectrograph (IRS; Houck et al. 2004) on board Spitzer.

It is generally believed that the disks and planetary systems of young stellar objects (YSOs) form from material from the ISM. Spitzer IRS spectra of objects with Class I spectral energy distributions (SEDs) (Watson et al. 2004), objects believed to be young protostars still surrounded by collapsing envelope material from their parent cloud of gas and dust, show smooth, featureless 10 µm absorption profiles, indicating amorphous silicates. Forrest et al. (2004) presented the spectrum of CoKu Tau/4, a T Tauri star with a 5–30 µm spectrum well modeled by D'Alessio et al. (2005) by an accretion disk nearly devoid of small dust grains within ∼10 AU. Unlike the complex 10 µm emission features of many Herbig Ae/Be stars indicative of thermally processed silicates (Bouwman et al. 2001; van Boekel et al. 2005), the 10 µm emission feature of CoKu Tau/4 is smooth, relatively narrow, and featureless, as are the silicate absorption profiles of the ISM (e.g., Kemper et al. 2004) and Class I YSOs (Watson et al. 2004). Smooth, narrow, and featureless profiles indicate amorphous silicates. Other objects, such as FN Tau (Forrest et al. 2004), have significant crystallinity of silicate dust in their disks, evident in the structure in their 10 µm emission features; others still, such as GG Tau A, have larger grains, as shown by the greater width of the 10 µm emission feature. In the following, we use optical constants and opacities of various materials to model the 10 µm features of our objects. We model the dust emission of the six objects whose spectra are presented by Forrest et al. (2004), TW Hya and Hen 3-600A by Uchida et al. (2004), V410 Anon 13 by Furlan et al. (2005a), and GM Aur and DM Tau by Calvet et al. (2005); we also present and model the 5–14 µm spectrum of GG Tau B. Stellar properties for our TTS sample are given in Table 1. In §2, we describe our data reduction techniques.
In §3, we detail how we derive and fit an emissivity for each object, and in §4 we describe the fit to the derived emissivity of each object. We discuss our fits in §5 and summarize our findings in §6.

2. Data Reduction

2.1. Observations

The present 12 TTSs were observed with the IRS on board Spitzer over three observing campaigns from 2004 January 4 to 2004 March 5. All objects were observed with both orders of the Short-Low (SL) module (R ∼ 60–120; second order [SL2], ∆λ = 0.06 µm [5.2–7.5 µm]; first order [SL1], ∆λ = 0.12 µm [7.5–14 µm]). Fainter objects were observed with the Long-Low (LL) module (R ∼ 60–120; second order [LL2], ∆λ = 0.17 µm [14–21.3 µm]; first order [LL1], ∆λ = 0.34 µm [19.5–38 µm]), while brighter objects were observed with the Short-High (SH; R ∼ 600, 9.9–19.6 µm) and Long-High (LH; R ∼ 600, 18.7–37.2 µm) modules (the LH spectra are not used here).

The brightest objects were observed in mapping mode, in which for one module one data collection event (DCE; sampling of the spectral signal from the target) was executed for each position of a 2 × 3 (spatial direction × spectral direction) raster centered on the coordinates of the target. For details on how the 2 × 3 maps were obtained, see the description of spectral mapping mode by Watson et al. (2004). From mapping-mode observations, we derive our spectra from the two positions in the 2 × 3 map for which the flux levels in the raw extracted spectra are highest. All other objects were observed in staring mode, which always immediately followed single high-precision Pointing Calibration and Reference Sensor (PCRS) peak-up observations. For details on IRS staring-mode operation and PCRS observations, see Houck et al. (2004). For staring mode, the expected flux density of the target determined the number of DCEs executed at one pointing of the telescope; for faint objects, multiple DCEs were obtained at one pointing of the telescope and averaged together.

From the position of each target's point-spread function (PSF) in the cross-dispersion direction in the two-dimensional data, we conclude that mispointing in the cross-dispersion direction in SL, in mapping mode and in staring mode, is usually less than 0.″9 (half a pixel). By comparing the absolute flux levels of the spectra obtained from each of the three positions in a 1 × 3 subsection (the three positions are colinear and offset from each other in the dispersion direction of the slit) of the 2 × 3 raster, the pointing of the telescope in the dispersion direction in mapping mode could be determined. The mapping-mode dispersion-direction mispointing is usually less than half a pixel in SL (0.″9). Pointing in the dispersion direction cannot be quantified very easily for staring-mode observations because the telescope is not moved in the dispersion direction in this mode. To account for mispointing in the cross-dispersion direction, we use data that have been divided by the flatfield from the S11 pipeline. Mispointing in the dispersion direction primarily causes small photometric error; this will not affect the emissivities derived here.

2.2. Extraction of Spectra

The spectra were reduced using the Spectral Modeling, Analysis, and Reduction Tool (SMART; Higdon et al. 2004). From basic calibrated data (BCD; flat-fielded, stray-light-corrected, dark-current-subtracted) S11.0.2 products from the Spitzer Science Center IRS data calibration pipeline, permanently bad (NaN) pixels were fixed in our two-dimensional spectral data. The corrected pixel value was a linear interpolation of the nonbad pixels of the set of four nearest neighboring pixels (up, down, left, and right of the pixel in question).
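The bad-pixel repair described above, in which each permanently bad (NaN) pixel is replaced using its valid up/down/left/right neighbors, can be sketched as follows. This is an illustrative helper, not the SMART implementation; taking the mean of the available 4-connected neighbors is one simple realization of the interpolation described.

```python
import numpy as np

def fix_bad_pixels(image):
    """Replace each NaN pixel with the mean of its non-NaN
    4-connected neighbors (up, down, left, right)."""
    fixed = image.copy()
    bad_rows, bad_cols = np.where(np.isnan(image))
    for r, c in zip(bad_rows, bad_cols):
        neighbors = []
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            # stay inside the detector array, skip bad neighbors
            if 0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]:
                v = image[rr, cc]
                if not np.isnan(v):
                    neighbors.append(v)
        if neighbors:
            fixed[r, c] = np.mean(neighbors)
    return fixed
```

Neighbors are always read from the original array, so the repair does not propagate corrected values into adjacent bad pixels.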
Unresolved lines of [Ne II] and molecular hydrogen, seen in the spectra of other objects, provided the wavelength calibration; all spectra presented in this paper are wavelength-corrected, and these wavelengths are estimated to be accurate to ±0.02 µm. All DCEs taken at the same pointing of the telescope (same module/order/nod position) were averaged together. Because one order records the spectrum of sky ∼1′–3′ away from the target whose spectrum is being recorded in the module's other order, sky subtraction in low-resolution spectra obtained in staring mode is accomplished by subtracting the average spectrum from one spectral order of a given module from that in which the spectrum is located in the same nod position of the other order of the same module. For FM Tau, the SSC pipeline introduced artifacts in the off-order images in SL2. In this case, we used the same-order, different-nod DCE to subtract the sky.

The low-resolution sky-subtracted spectra were then extracted using variable-width column extraction in order to account for the linear increase of the size of the object's PSF with wavelength. From the shortest to the longest wavelengths of each module, respectively, the extraction region width varied from 3.2 to 4.9 pixels in SL2, 3.6 to 4.4 pixels in the SL bonus order (a short fragment of first-order light from 7.5 to 8.4 µm recorded simultaneously with SL2), 2.7 to 5.4 pixels in SL1, 3.2 to 4.9 pixels in LL2, 3.8 to 4.3 pixels in the LL bonus order (a short fragment of first-order light from 19.4 to 20.9 µm recorded simultaneously with LL2), and 2.5 to 5.5 pixels in LL1. For SH the sky is not subtracted, as the SH slit is only 5 pixels long, and no separate sky observations were acquired. The extraction region at each wavelength for SH was the entire 5-pixel-long slit. Since the roughly square SH pixels are ∼2.″2 wide and SL pixels are ∼1.″8 wide, the SH extraction region covered more solid angle at every wavelength than SL, and as spectra from SH typically gave lower flux than SL over the wavelength range of overlap of SH with SL (∼10–14 µm), the sky levels for the SH observations are estimated to be much lower than the flux density of the point sources.

The spectra are calibrated using a relative spectral response function (RSRF), which gives flux density, Fν, at each wavelength based on the signal detected at that wavelength. The RSRFs were derived by dividing the template spectrum of a calibrator star by the result of extraction of the calibrator's spectrum in SMART for each nod of each order of each module. For both orders of SL and for both orders of LL, a spectral template of α Lacertae (A1 V; M. Cohen 2004, private communication) of higher spectral resolution than the templates described by Cohen et al. (2003) was used, and a spectral template for ξ Dra (K2 III) was used for SH (Cohen et al. 2003). As with the science targets, SH observations of the calibrator source ξ Dra were not sky-subtracted. The science target raw extractions were then multiplied by the RSRFs corresponding to the same nod, order, and module. Typically, good flux agreement at wavelength regions of order overlap within the same module was found. As the spectra obtained for a given source at the telescope's two nod positions are independent measurements of the object's spectrum, close agreement between the two nod positions was expected; this was the case for all sources except FN Tau.

2.3. Remaining Technical Problems

FN Tau was observed in mapping mode, and from the extracted flux levels of all observations in the 2 × 3 mapping raster, we determined that the central 1 × 2 pair both suffer mispointing of differing amounts. The more mispointed DCE was mispointed in the dispersion direction by 0.″7, while the less mispointed DCE was mispointed in this direction by 0.″3; both DCEs were mispointed in the cross-dispersion direction by between 0.″5 and 0.″7. The effect of this differential mispointing shows up most prominently in the derived spectrum of the more mispointed observation of the first order of SL; there is a mismatch of flux level of about 10% over the entire order compared to the flux level of the spectrum obtained from the less mispointed DCE. To correct for this, the first order of the spectrum from the more mispointed mapping position was multiplied by 1.1 to match the less mispointed position. Except for the first order of SL for FN Tau, the derived spectra are the mean, at each order of each module, of the spectra from the two independent nod or map positions. For the first order of SL for FN Tau, the reported spectrum is that from only the less mispointed map position.

Error bars are derived for each of the spectra, and the error bar at a given wavelength is equal to half the difference between the flux (at that wavelength) from the two nod (or map) positions used to derive the mean spectrum. For SL first order of FN Tau, the spectrum from the less mispointed map position and the corrected spectrum (previously described) from the more mispointed map position were used to derive its error bars. Any error bar with relative uncertainty <1% is attributed to the low number (2) of measurements at that wavelength, and that error bar is set to 1% of the flux.

There are some mismatches in flux between SL and SH, and between SL and LL. Comparing the SL spectra of nonvariable sources to available photometry, absolute spectrophotometric accuracy is estimated to be better than 10% in SL. Therefore, small mismatches in flux levels between SL, SH, and LL are corrected by scaling the entire longer-wavelength module to match the flux in SL, as we trust the photometric levels of SL. SH was multiplied by factors between 1.04 and 1.11 to match SL; LL for CY Tau was multiplied by 0.95 to match its SL spectrum. In order to account for off-order leaks in the filters which define the orders of each of the modules, the ends of each order of every module are truncated to guarantee the spectral purity of our spectra.

The spectra of all objects in the sample excluding GG Tau B have been previously published: CoKu Tau/4, FM Tau, IP Tau, GG Tau A, FN Tau, and CY Tau in Forrest et al. (2004); TW Hya and Hen 3-600A in Uchida et al. (2004); V410 Anon 13 in Furlan et al. (2005a); and GM Aur and DM Tau in Calvet et al. (2005). For all previously published spectra except the two by Calvet et al. (2005), wavelengths from 8 to 14 µm were too long by 0.05 µm; as described previously, this wavelength problem was corrected before further analysis. The correction has moved the 9.4 µm feature in FN Tau noted by Forrest et al. (2004) closer to 9.3 µm. In Figure 1, the spectrum obtained of the GG Tau B binary system is shown; in §4 we discuss the origin of the IR excess for this pair.

3. Analysis

3.1. Correction for Extinction

For all of the objects in the sample except V410 Anon 13, no correction for extinction is applied, in order not to introduce artifacts of overcorrection for extinction. See Table 1 for the assumed visual extinction A_V for each of the objects in our sample. No extinction correction is applied for any object having A_V less than 1.4; this includes all objects in the sample except CoKu Tau/4, GG Tau A, and V410 Anon 13. As described by D'Alessio et al. (2005), optical spectra of CoKu Tau/4 indicate time-dependent reddening to the star. This suggests that the source of the extinction to CoKu Tau/4 may be local to the object. However, the precise time and space dependence of extinction to CoKu Tau/4 is unknown.
For this reason, no extinction correction is applied for CoKu Tau/4. White et al. (1999) estimate A_V of ∼3.2 toward GG Tau Ab (the GG Tau system is a hierarchical quadruple, with the northern pair, GG Tau A, being binary and separated by 0.″25; this is described in greater detail in §4), while A_V ≈ 0.72 toward GG Tau Aa. Consequently, no extinction correction is applied for GG Tau A, as it is believed the less extinguished GG Tau Aa component dominates the IRS spectrum.

For V410 Anon 13, an extinction correction is applied, assuming A_V ≈ 5.8 along with Furlan et al. (2005a). Furlan et al. present a disk model for this object to fit its IRS spectrum assuming an inclination i = 70°. Because of the large implied disk inclination, at least part of the extinction to V410 Anon 13 could be due to dust in the flared disk atmosphere at large disk radii lying in the sightline from the star and inner disk regions to the observer. Furlan et al. (2005a) also found that when i in the model is changed from 70° to 60°, the peak of the flux in the 10 µm feature increases by ∼20%. Because the emergent disk spectrum in the model greatly depends on its inclination i, the effect of the extinction correction for this object is discussed in §4.12 when describing its dust model fit. Because dust in the outermost reaches of YSO disks is expected to be little altered from its origin in the ISM (see the discussion in §5 below), the composition of any dust providing local extinction is assumed to be approximately the same as that in the ISM between V410 Anon 13 and Earth. In the ISM, the ratio of the visual extinction to the optical depth at the 9.7 µm peak of the silicate absorption feature (A_V/∆τ_9.7) varies by as much as a factor of 2 to 3 (see Draine 2003). To convert from A_V to the 9.7 µm extinction, we take A_V/∆τ_9.7 ≈ 18, typical of the local diffuse ISM (see Draine 2003). For simplicity, we assume that the composition of the material responsible for the extinction does not change over the sightline from the target to Earth.

3.2. Derivation of Emissivity

The spectral excess for each of the objects in the sample is interpreted as arising from a disk surrounding one or more central star(s), beginning in most cases at a few stellar radii away from the central star(s) and extending as far away as a few hundred AU. We call a disk a "transitional disk" if it is optically thick to mid-IR wavelengths over some range of radii and optically thin elsewhere. CoKu Tau/4 (D'Alessio et al. 2005), DM Tau (Calvet et al. 2005), GM Aur (Calvet et al. 2005), TW Hya (Calvet et al. 2002), and Hen 3-600A (Uchida et al. 2004) have been shown to be transitional disks through spectral modeling. The spectrum of CoKu Tau/4 is photospheric at wavelengths shortward of 8 µm but has a large IR excess seen in its IRS spectrum longward of that wavelength; correspondingly, it has an optically thick disk at radii greater than 10 AU with less than ∼0.0007 lunar masses of small silicate dust grains inside that radius (D'Alessio et al. 2005). DM Tau has an IRS spectrum similar to that of CoKu Tau/4 and is modeled similarly by Calvet et al. (2005), but with the radius of transition between the optically thick disk and the (very) optically thin inner regions at 3 AU. GM Aur, TW Hya, and Hen 3-600A, also with large excess above the photosphere longward of 8 µm, are not photospheric shortward of 8 µm. This excess indicates an optically thin inner disk region. The IR disk emission for each of the transitional disks is isolated by subtracting an appropriate stellar photosphere represented by a blackbody. For CoKu Tau/4 and DM Tau, the blackbody is fit to the 5–8 µm IRS spectral data, while for GM Aur, TW Hya, and Hen 3-600A the Rayleigh-Jeans tail of the blackbody is fit to that of the stellar photosphere model by Calvet et al. (2005) and Uchida et al. (2004), respectively. This isolated disk emission for each object should, shortward of ∼20 µm, be due in large part to emission from the optically thin regions of the disk. For CoKu Tau/4 and DM Tau, this emission is mostly due to the optically thin regions of each object's wall. For GM Aur, TW Hya, and Hen 3-600A, this emission is mostly due to the optically thin inner disk regions (Calvet et al. 2005; Uchida et al. 2004).

We refer to a "full disk" if the disk is optically thick to mid-IR wavelengths throughout and extends from the dust-sublimation radius from the central star. Following the reasoning of Forrest et al. (2004), FM Tau, IP Tau, GG Tau A, GG Tau B, FN Tau, V410 Anon 13, and CY Tau are identified as having full disks based on their 5–8 µm spectra. Each has a continuum from 5 to 8 µm characterized by a spectral slope shallower than the Rayleigh-Jeans tail from a naked stellar photosphere. In addition, the 5 to 8 µm flux exceeds that from the stellar photosphere alone (modeled by fitting a stellar blackbody to the near-IR photometry) by factors >2. Following the discussion by Forrest et al. (2004), most of the 5–8 µm emission from full disks originates from optically thick inner disk regions, while most of the emission in the dust features above the continuum longward of 8 µm is due to emission from dust suspended in the optically thin disk atmosphere. Therefore, a power-law continuum is fit to the <8 µm region of each "full-disk" spectrum and subtracted from the spectrum to isolate the optically thin disk atmospheric emission.

Dust grains suspended in the optically thin atmosphere of a flared disk are directly exposed to stellar radiation, which heats the grains above the temperature of the disk's photosphere. The grains then reemit the absorbed energy according to their temperature; this emission gives rise to the distinctive dust features seen in the spectra beyond 8 µm. The emission features are much narrower than a Planck function, which indicates structure in the dust emissivity. As explained in Calvet et al. (1992), a radiatively heated disk with a modest accretion rate has a thermal inversion. The upper layers of the disk, the optically thin disk atmosphere, are hotter than the lower layers, which are optically thick. This gives rise to spectral emission features characteristic of the dust in the disk atmosphere. In modeling the SEDs of Classical T Tauri Stars (CTTSs), both Calvet et al. (1992) and D'Alessio et al. (2001) compute the temperature of the atmosphere of each annulus of disk material as a function of vertical optical depth. In such models, it is assumed that all dust grains at a given height in an annulus are at the same temperature, independent of grain composition and grain size. It is similarly assumed here that all grains in any sufficiently small volume in a disk are at the same temperature, independent of grain composition and grain size. We aim for a simple model in order to determine the composition of the part of the disk giving rise to the optically thin dust emission. Bouwman et al. (2001) and van Boekel et al. (2005) also model dust emission by assuming a single temperature for all dust components.
Optically thin emission from dust over the range of radii (and therefore temperatures) that contributes most to the 8–20 µm range is represented by emission from optically thin dust at a single, "average" temperature, T. It is assumed that the monochromatic flux of this optically thin emission over the short range from 8 to 14 µm is given by

F_ν = Ω_d τ_ν B_ν(T) = ε_ν B_ν(T),    (1)

where F_ν is either the photosphere-subtracted residuals (for transitional disks) or the power-law-continuum-subtracted residuals (for full disks); ε_ν is referred to as the emissivity; Ω_d is the solid angle of the region of optically thin emission; and τ_ν is the frequency-dependent optical depth of dust. For all objects except GG Tau B and GM Aur, T is found by assuming a long-to-short-wavelength emissivity ratio for the dust, ε_l/ε_s, with "l" meaning long wavelengths (∼20 µm) and "s" meaning short wavelengths (∼10 µm), and solving for T in the equation

F_ν(λ_l) / F_ν(λ_s) = ε(λ_l) B_ν(λ_l, T) / [ε(λ_s) B_ν(λ_s, T)].    (2)

For GG Tau B, where no long-wavelength data exist, a dust temperature of 252 K is assumed, the same temperature as for GG Tau A. For reasons discussed in §4.4, the dust temperature is set to T = 310 K for GM Aur. For all other objects observed with SL and LL, excluding DM Tau, we take the 20 µm-to-10 µm flux ratios; for the objects observed in SL, SH, and LH, we decrease the wavelengths in the ratio to 19.3 and 9.65 µm, as SH does not extend to 20.0 µm. In a single-temperature dust model, the same temperature will be computed regardless of the wavelengths used to determine the flux ratio, due to the properties of the Planck function. The wavelengths were set to 9.5 and 19.0 µm for DM Tau as a test to determine whether the derived temperature depended much on the exact choice of wavelengths used to determine the flux ratio. When the wavelengths were changed to 10 and 20 µm for DM Tau, the computed temperature changed from 160 to 158 K; however, this did not require any change to the DM Tau dust model. A similar test was performed on FN Tau by changing the long and short wavelength fluxes used for its dust temperature determination from 10 and 20 µm to 9.5 and 19 µm. This increased the dust temperature from 208 to 209 K; as with DM Tau, no change to the FN Tau dust model was required.

Using this temperature, T, the photosphere- or continuum-subtracted residuals were divided by B_ν(T) to give the emissivity, which is proportional to the mass-weighted sum of opacities as follows:

ε(λ) ∝ Σ_j m_j κ_j(λ) = σ(λ),    (3)

where m_j is the mass fraction of dust component j, κ_j(λ) is the wavelength-dependent opacity (cm² g⁻¹) of dust component j, and σ(λ) is the wavelength-dependent cross section of the dust mixture model. Both ε(λ) and σ(λ) are normalized to unity at their peak in the 8–14 µm range.

To determine the uncertainties in the emissivities, the corresponding spectral error bars, obtained as described in §2.3 from spectra obtained at two nod positions, are divided by B_ν(T), the result of which is then divided by the same normalization constant used to derive the corresponding emissivity. The derived emissivities are believed to be valid immediately longward of 8 µm, where the 10 µm feature rises above the extrapolation of the <8 µm continuum, as no drastic change of the slope of the continuum from the optically thick components of the disks at wavelengths between 8 and 14 µm is expected. However, assuming one dust temperature for a wide range of wavelengths in a spectrum of a circumstellar disk is unrealistic. In addition, for wavelengths longward of ∼14 µm, the slope of the continuum from optically thick emission is not well determined by extrapolation from the <8 µm continuum. The power-law-continuum- or photosphere-subtracted residual flux at ∼20 µm, attributed as described previously to optically thin emission, is therefore uncertain, leading to uncertainty in the derived dust temperature. For this reason, we do not attempt to fit the 18 µm and longer wavelength features here.

The emissivity is fit by finding the optimal set of mass fractions, m_j, such that the normalized model dust cross section fits the normalized emissivity as well as possible. The fitting method is iterated until the assumed ε_l/ε_s, used to compute the grain temperature and therefore derive the emissivity, equals the ε_l/ε_s derived from the fit emissivities; ε_l/ε_s and other details of the derivation of emissivities are reported for each of the opacity models in Table 2. Also listed in Table 2 is β_9.9, the ratio of the continuum-subtracted residual flux at 9.9 µm to the 9.9 µm continuum of the full disks, which gives the contrast of the silicate emission feature to the optically thick continuum in the original spectrum.

3.3. Disk Model for IP Tau

As a test of these simple dust models, a disk model following the methods of D'Alessio et al. (1998, 1999, 2001) was computed, using stellar parameters from Table 1 and the mass accretion rate from Hartmann et al. (1998). First, opacities similar to those generated from
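The two numerical steps in this derivation, solving eq. (2) for T given a flux ratio and an assumed ε_l/ε_s, and fitting eq. (3) with nonnegative mass fractions, can be sketched as follows. This is an illustrative sketch, not the authors' code: the bisection solver and the nonnegative least-squares fitter are stand-ins for whatever method the paper actually used, and the opacity arrays are placeholders for laboratory opacities.

```python
import math

import numpy as np
from scipy.optimize import nnls

H, C, K = 6.62607e-27, 2.99792458e10, 1.380649e-16  # cgs: erg s, cm/s, erg/K

def planck_nu(wavelength_um, temp):
    """Planck function B_nu(T) at a wavelength given in microns (cgs units)."""
    nu = C / (wavelength_um * 1e-4)
    return (2.0 * H * nu**3 / C**2) / math.expm1(H * nu / (K * temp))

def dust_temperature(flux_ratio, eps_ratio, lam_s=10.0, lam_l=20.0):
    """Solve eq. (2), F(l)/F(s) = (eps_l/eps_s) B_nu(l,T)/B_nu(s,T), for T.
    The Planck ratio B_nu(20 um)/B_nu(10 um) falls monotonically with T,
    so a simple bisection on T suffices."""
    target = flux_ratio / eps_ratio  # required Planck-function ratio
    lo, hi = 10.0, 2000.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if planck_nu(lam_l, mid) / planck_nu(lam_s, mid) > target:
            lo = mid  # ratio too high means T is too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

def fit_emissivity(emissivity, opacities):
    """Fit eq. (3), eps(lam) ~ sum_j m_j kappa_j(lam) with m_j >= 0,
    by nonnegative least squares; return normalized mass fractions."""
    A = np.column_stack(opacities)          # shape (n_wavelengths, n_components)
    m, _ = nnls(A, np.asarray(emissivity))  # best-fit nonnegative weights
    return m / m.sum()
```

With ε_l/ε_s = 1, `dust_temperature` reduces to a color temperature from the Planck ratio; the paper's iteration would rerun it with the ε_l/ε_s implied by the fitted mixture until the assumed and derived ratios agree.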

Crystal growth process video

Successive crystallizations purify the compound.
Always use recrystallized material when setting up a crystal growing attempt.
Solubility Profile
Solubility
Nucleation – fewer nucleation sites are better. Too many nucleation sites (e.g., dust, hairs) lower the average crystal size
Mechanics – mechanical disturbances are bad.
Peak positions for true atomic positions
Limiting the Resolution of the Data
Limited qmax = 11.54° Resolution = 2.5 Å
Set up simultaneous crystal growing experiments
Factors Affecting Crystallization
Solvent – moderate solubility is best. Supersaturation leads to sudden precipitation and smaller crystal size
positions are still resolvable
Limiting the Resolution of the Data
Limited qmax = 14.48° Resolution = 2.0 Å
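The quoted limits follow from Bragg's law, d_min = λ / (2 sin θ_max). Both slides' numbers (θ_max = 11.54° giving 2.5 Å, and 14.48° giving 2.0 Å) are consistent with an X-ray wavelength near 1.0 Å; the slides do not state λ, so that value is an inference, not a quoted parameter.

```python
import math

def resolution_limit(theta_max_deg, wavelength=1.0):
    """Minimum resolvable d-spacing from Bragg's law, d = lambda / (2 sin theta_max).
    wavelength is in Angstroms; 1.0 A is assumed here to match the slides."""
    return wavelength / (2.0 * math.sin(math.radians(theta_max_deg)))
```

Limiting the maximum scattering angle collected therefore directly coarsens the resolution of the data, which is why the slides pair each θ_max with a resolution figure.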

Application of polycarbonate materials under UVC irradiation

Application of polycarbonate materials under UVC irradiation. Published: 2022-11-03T02:42:10.084Z. Source: Science and Technology, 2022, Issue 13 (July). Authors: Zeng Jianguang¹, Hu Qixin². 1. Dongguan Huike New Material Co., Ltd., Dongguan 523622; 2. Guangzhou Runfeng Technology Co., Ltd., Guangzhou 511430. Abstract: In applications of polycarbonate, the effect of UVC irradiation on the material shows up directly as a color change. Light stabilizers, UV absorbers, and antioxidants were used to modify polycarbonate, and the color difference before and after irradiation was used to compare the UVC tolerance of the materials. The experimental results show that a light stabilizer, UV absorber, or antioxidant used alone can each improve the UVC tolerance of polycarbonate to some extent; used in combination, the additives act synergistically, increasing the material's UVC tolerance, delaying its aging, and extending the service life of the polycarbonate.

Keywords: UV irradiation; UVC; accelerated aging testing; polycarbonate; color difference. CLC number: TQ322.3; Document code: A

Application of polycarbonate materials to UVC irradiation
Zeng Jianguang¹, Hu Qixin²
1. Dongguan Huike New Material Co., Ltd., Dongguan, 523622
2. Guangzhou Runfeng Technology Co., Ltd., Guangzhou, 511430
Abstract: In applications of polycarbonate exposed to UVC, the influence on the material appears directly as a color change. Light stabilizers, UV absorbers, and antioxidants were used to modify the polycarbonate, and the color difference before and after irradiation was used to assess its UVC tolerance. The experimental results show that a light stabilizer, UV absorber, or antioxidant used alone can each improve the UVC resistance of the material; compounded together, they increase the polycarbonate's tolerance of UVC, delay aging of the material, and improve its service life.
Key words: UV exposure; UVC; accelerated aging test; polycarbonate; chromatic aberration
Author bio: Zeng Jianguang (b. 1988), male, from Meizhou, Guangdong; bachelor's degree; works on polycarbonate and its alloy modification. E-mail: ********************
Polycarbonate is one of the five major engineering plastics. Its molecular chain contains carbonate ester groups and sterically rigid benzene rings, which make it an amorphous, transparent engineering plastic with excellent impact strength, creep resistance, high light transmittance, and good heat resistance, and it is therefore used in many fields. With the spread of COVID-19, many industries, such as medical devices, automotive interiors, consumer electronics, and smart-home products, have been looking for new ways to disinfect their products.

Becoming a Scientist: The Role of Undergraduate Research in Students' Cognitive, Personal,


Becoming a Scientist: The Role of Undergraduate Research in Students' Cognitive, Personal, and Professional Development

ANNE-BARRIE HUNTER, SANDRA LAURSEN, ELAINE SEYMOUR
Ethnography & Evaluation Research, Center to Advance Research and Teaching in the Social Sciences, University of Colorado, Campus Box 580, Boulder, CO 80309, USA

Received 9 November 2005; revised 2 May 2006; accepted 2 June 2006
DOI 10.1002/sce.20173
Published online 12 October 2006 in Wiley InterScience.

ABSTRACT: In this ethnographic study of summer undergraduate research (UR) experiences at four liberal arts colleges, where faculty and students work collaboratively on a project of mutual interest in an apprenticeship of authentic science research work, analysis of the accounts of faculty and student participants yields comparative insights into the structural elements of this form of UR program and its benefits for students. Comparison of the perspectives of faculty and their students revealed considerable agreement on the nature, range, and extent of students' UR gains. Specific student gains relating to the process of "becoming a scientist" were described and illustrated by both groups. Faculty framed these gains as part of professional socialization into the sciences. In contrast, students emphasized their personal and intellectual development, with little awareness of their socialization into professional practice. Viewing study findings through the lens of social constructivist learning theories demonstrates that the characteristics of these UR programs, how faculty practice UR in these colleges, and students' outcomes, including cognitive and personal growth and the development of a professional identity, strongly exemplify many facets of these theories, particularly student-centered and situated learning as part of cognitive apprenticeship in a community of practice. © 2006 Wiley Periodicals, Inc. Sci Ed 91:36-74, 2007

Correspondence to: Anne-Barrie Hunter; e-mail: abhunter@
Contract grant sponsor: NSF-ROLE grant (#NSF PR REC-0087611): "Pilot Study to Establish the Nature and Impact of Effective Undergraduate Research Experiences on Learning, Attitudes and Career Choice." Contract grant sponsor: Howard Hughes Medical Institute special projects grant, "Establishing the Processes and Mediating Factors that Contribute to Significant Outcomes in Undergraduate Research Experiences for both Students and Faculty: A Second Stage Study." This paper was edited by former Editor Nancy W. Brickhouse.

INTRODUCTION

In 1998, the Boyer Commission Report challenged United States' research universities to make research-based learning the standard of students' college education. Funding agencies and organizations promoting college science education have also strongly recommended that institutions of higher education provide greater opportunities for authentic, interdisciplinary, and student-centered learning (National Research Council, 1999, 2000, 2003a, 2003b; National Science Foundation [NSF], 2000, 2003a). In line with these recommendations, tremendous resources are expended to provide undergraduates with opportunities to participate in faculty-mentored, hands-on research (e.g., the NSF-sponsored Research Experience for Undergraduates [REU] program, Howard Hughes Medical Institute Science Education Initiatives).

Notwithstanding widespread belief in the value of undergraduate research (UR) for students' education and career development, it is only recently that research and evaluation studies have produced results that begin to throw light on the benefits to students, faculty, or institutions that are generated by UR opportunities (Bauer & Bennett, 2003; Lopatto, 2004a; Russell, 2005; Seymour, Hunter, Laursen, & DeAntoni, 2004; Ward, Bennett, & Bauer, 2002; Zydney, Bennett, Shahid, & Bauer, 2002a, 2002b). Other reports focus on the effects of UR experiences on retention, persistence, and promotion of science career pathways for underrepresented groups (Adhikari & Nolan, 2002; Barlow & Villarejo, 2004; Hathaway, Nagda, & Gregerman, 2002; Nagda et al., 1998). It is encouraging to find
strong convergence as to the types of gains reported by these studies (Hunter, Laursen, & Seymour, 2006). However, we note limited or no discussion of some of the stronger gains that we document, such as students' personal and professional growth (Hunter et al., 2006; Seymour et al., 2004), and significant variation in how particular gains (especially intellectual gains) are defined.

Ongoing and current debates in the academic literature concerning how learning occurs, how students develop intellectually and personally during their college years, and how communities of practice encourage these types of growth posit effective practices and the processes of students' cognitive, epistemological, and interpersonal and intrapersonal development. Although a variety of theoretical papers and research studies exploring these topics are widely published, with the exception of a short article for Project Kaleidoscope (Lopatto, 2004b), none has yet focused on intensive, summer apprentice-style UR experiences as a model to investigate the validity of these debates.[1] Findings from this research study to establish the nature and range of benefits from UR experiences in the sciences, and in particular, results from a comparative analysis of faculty and students' perceptions of gains from UR experiences, inform these theoretical discussions and bolster findings from empirical studies in different but related areas (i.e., careers research, workplace learning, graduate training) on student learning, cognitive and personal growth, the development of professional identity, and how communities of practice contribute to these processes.

This article will present findings from our faculty and first-round student data sets that manifest the concepts and theories underpinning constructivist learning, development of professional identity, and how apprentice-style UR experience operates as an effective community of practice. As these bodies of theory are central tenets of current science education reform efforts, empirical evidence that provides clearer understanding of the actual practices and outcomes of these approaches informs national science education policy concerns for institutions of higher learning to increase diversity in science, numbers of students majoring in science, technology, engineering, or mathematics (STEM) disciplines, student retention in undergraduate and graduate STEM programs and their entry into science careers, and, ultimately, the production of greater numbers of professional scientists. To frame discussion of findings from this research, we present a brief review of theory on student learning, communities of practice, and the development of personal and professional identity germane to our data.

[1] David Lopatto was co-P.I. on this study and conducted quantitative survey research on the basis of our qualitative findings at the same four liberal arts colleges.

CONSTRUCTIVIST LEARNING, COMMUNITIES OF PRACTICE, AND IDENTITY DEVELOPMENT

Apprentice-style UR fits a theoretical model of learning advanced by constructivism, in which learning is a process of integrating new knowledge with prior knowledge such that knowledge is continually constructed and reconstructed by the individual. Vygotsky's social constructivist approach presented the notion of "the zone of proximal development," referencing the potential of students' ability to learn and problem solve beyond their current knowledge level through careful guidance from and collaboration with an adult or group of more able peers (Vygotsky, 1978). According to Green (2005), Vygotsky's learning model moved beyond
theories of "staged development" (i.e., Piaget) and "led the way for educators to consider ways of working with others beyond the traditional didactic model" (p. 294). In social constructivism, learning is student centered and "situated." Situated learning, the hallmark of cultural and critical studies education theorists (Freire, 1990; Giroux, 1988; Shor, 1987), takes into account students' own ways of making meaning and frames meaning-making as a negotiated, social, and contextual process. Crucial to student-centered learning is the role of educator as a "facilitator" of learning. In constructivist pedagogy, the teacher is engaged with the student in a two-way, dialogical sharing of meaning construction based upon an activity of mutual interest.

Lave and Wenger (1991) and Wenger (1998) extended tenets of social constructivism into a model of learning built upon "communities of practice." In a community of practice, "newcomers" are socialized into the practice of the community (in this case, science research) through mutual engagement with, and direction and support from, an "old-timer." Lave and Wenger's development of the concept and practice of this model centers on students' "legitimate peripheral participation." This construct describes the process whereby a novice is slowly, but increasingly, inducted into the knowledge and skills (both overt and tacit) of a particular practice under the guidance and expertise of the master. Legitimate peripheral participation requires that students actively participate in the authentic practice of the community, as this is the process by which the novice moves from the periphery toward full membership in the community (Lave & Wenger, 1991). Similar to Lave and Wenger's communities of practice, Brown, Collins, and Duguid (1989) and Farmer, Buckmaster, and LeGrand (1992) describe "cognitive apprenticeships." A cognitive apprenticeship "starts with deliberate instruction by someone who acts as a model; it then proceeds to model-guided trials by practitioners who progressively assume more

responsibility for their learning" (Farmer et al., 1992, p. 42). However, these latter authors especially emphasize the importance of students' ongoing opportunities for self-expression and reflective thinking facilitated by an "expert other" as necessary to effective legitimate peripheral participation.

Beyond gains in understanding and exercising the practical and cultural knowledge of a community of practice, Brown et al. (1989) discuss the benefits of cognitive apprenticeship in helping learners to deal capably with ambiguity and uncertainty, a trait particularly relevant to conducting science research. In their view, cognitive apprenticeship "teaches individuals how to think and act satisfactorily in practice. It transmits useful, reliable knowledge based on the consensual agreement of the practitioners, about how to deal with situations, particularly those that are ill-defined, complex and risky. It teaches 'knowledge-in-action' that is 'situated'" (quoted in Farmer et al., 1992, p. 42). Green (2005) points out that Bowden and Marton (1998, 2004) also characterize effective communities of practice as teaching skills that prepare apprentices to negotiate undefined "spaces of learning": "the 'expert other'...does not necessarily 'know' the answers in a traditional sense, but rather is willing to support collaborative learning focused on the 'unknown future.' In other words, the 'influential other' takes learning...to spaces where the journey itself is unknown to everyone" (p. 295). Such conceptions of communities of practice are strikingly apposite to the processes of learning and growth that we have found among UR students, particularly in their understanding of the nature of scientific knowledge and in their capacity to confront the inherent difficulties of science research. These same issues are central to Baxter Magolda's research on young adult development.
The "epistemological reflection" (ER) model developed from her research posits four categories of intellectual development from simplistic to complex thinking: from "absolute knowing" (where students understand knowledge to be certain and view it as residing in an outside authority) to "transitional knowing" (where students believe that some knowledge is less than absolute and focus on finding ways to search for truth), then to "independent knowing" (where students believe that most knowledge is less than absolute and individuals can think for themselves), and lastly to "contextual knowing" (where knowledge is shaped by the context in which it is situated and its veracity is debated according to its context) (Baxter Magolda, 2004).

In this model, epistemological development is closely tied to development of identity. The ER model of "ways of knowing" gradually shifts from an externally directed view of knowing to one that is internally directed. It is this epistemological shift that frames a student's cognitive and personal development, where knowing and sense of self shift from external sources to reliance upon one's own internal assessment of knowing and identity.

This process of identity development is referred to as "self-authorship" and is supported by a constructivist-developmental pedagogy based on "validating students as knowers, situating learning in students' experience, and defining learning as mutually constructed meaning" (Baxter Magolda, 1999, p. 26). Baxter Magolda's research provides examples of pedagogical practice that support the development of self-authorship, including learning through scientific inquiry. As in other social constructivist learning models, the teacher as facilitator is crucial to students' cognitive and personal development:

Helping students make personal sense of the construction of knowledge claims and engaging students in knowledge construction from their own perspectives involves validating the students as knowers and situating learning in the students' own perspectives. Becoming socialized into the ways of knowing of the scientific community and participating in the discipline's collective knowledge creation effort involves mutually constructing meaning. (Baxter Magolda, 1999, p. 105)

Here Baxter Magolda's constructivist-developmental pedagogy converges with Lave and Wenger's communities of practice, but more clearly emphasizes students' development of identity as part of the professional socialization process. Use of constructivist learning theory and pedagogies, including communities of practice, are plainly evident in the UR model as it is structured and practiced at the four institutions participating in this study, as we describe next. As such, the gains identified by student and faculty research advisors actively engaged in apprentice-style learning and teaching provide a means to test these theories and models and offer the opportunity to examine the processes whereby these benefits are generated, including students' development of a professional identity.

THE APPRENTICESHIP MODEL FOR UNDERGRADUATE RESEARCH

Effective UR is defined as "an inquiry or investigation
conducted by an undergraduate that makes an original intellectual or creative contribution to the discipline" (NSF, 2003b, p. 9). In the "best practice" of UR, the student draws on the "mentor's expertise and resources...and the student is encouraged to take primary responsibility for the project and to provide substantial input into its direction" (American Chemical Society's Committee on Professional Training, quoted in Wenzel, 2003, p. 1). Undergraduate research, as practiced in the four liberal arts colleges in this study, is based upon this apprenticeship model of learning: student researchers work collaboratively with faculty in conducting authentic, original research.

In these colleges, students typically underwent a competitive application process (even when a faculty member directly invited a student to participate). After sorting applications, and ranking students' research preferences, faculty interviewed students to assure a good match between the student's interests and the faculty member's research and also between the faculty member and the student. Generally, once all application materials were reviewed (i.e., students' statements of interest, course transcripts, grade point averages [GPA]), faculty negotiated as a group to distribute successful applicants among the available summer research advisors. Students were paid a stipend for their full-time work with faculty for 10 weeks over summer. Depending on the amount of funding available and individual research needs, faculty research advisors supervised one or more students. Typically, a faculty research advisor worked with two students for the summer, but many worked with three or four, or even larger groups.

In most cases, student researchers were assigned to work on predetermined facets of faculty research projects: each student project was open ended, but defined, so that a student had a reasonable chance of completing it in the short time frame and of producing useful results. Faculty research advisors described the importance of choosing a project appropriate to the student's "level," taking into account their students' interests, knowledge, and abilities and aiming to stretch their capacities, but not beyond students' reach. Research advisors were often willing to integrate students' specific interests into the design of their research projects.

Faculty research advisors described the intensive nature of getting their student researchers "up and running" in the beginning weeks of the program. Orienting students to the laboratory and to the project, providing students with relevant background information and literature, and teaching them the various skills and instrumentation necessary to work effectively required adaptability to meet students at an array of preparation levels, advance planning, and a good deal of their time. Faculty engaged in directing UR discussed their role as facilitators of students' learning. In the beginning weeks of the project, faculty advisors often worked one-on-one with their students. They provided instruction, gave "mini-lectures," and explained step by step why and how processes were done in particular ways, all the time modeling how science research is done. When necessary, they closely guided students, but wherever possible, provided latitude for and encouraged students' own initiative and experimentation. As the summer progressed, faculty noted that, based on growing hands-on experience, students gained confidence (to a greater or lesser degree) in their abilities, and gradually and increasingly became self-directed and able, or even eager, to work independently.

Although most faculty research advisors described regular contact with their student researchers, most did not work side by side with their students every day. Many research advisors held a weekly meeting to review progress, discuss problems, and make sure students (and the projects) were on the right track. At points in the research work, faculty could focus on other tasks while students
worked more independently, and the former were available as necessary. When students encountered problems with the research, faculty would serve as a sounding board while students described their efforts to resolve difficulties. Faculty gave suggestions for methods that students could try themselves, and when problems seemed insurmountable to students, faculty would troubleshoot with them to find a way to move the project forward.

Faculty research advisors working with two or more student researchers often used the research peer group to further their students' development. Some faculty relied on more-senior student researchers to help guide new ones. Having multiple students working in the laboratory (whether or not on the same project) also gave student researchers an extra resource to draw upon when questions arose or they needed help. In some cases, several faculty members (from the same or different departments) scheduled weekly meetings for group discussion of their research. Commonly, faculty assigned articles for students to summarize and present to the rest of the group. Toward the end of summer, weekly meetings were often devoted to students' practice of their presentations so that the research advisor and other students could provide constructive criticism. At the end of summer, with few exceptions, student researchers attended a campus-wide UR conference, where they presented posters and shared their research with peers, faculty, and institution administrators. Undergraduate research programs in these liberal arts colleges also offered a series of seminars and field trips that explored various science careers, discussed the process of choosing and applying to graduate schools, and other topics that focused on students' professional development.

We thus found that, at these four liberal arts colleges, the practice of UR embodies the principles of the apprenticeship model of learning where students engage in active, hands-on experience of doing science research in collaboration with and under the auspices of a faculty research advisor.

RESEARCH DESIGN

This qualitative study was designed to address fundamental questions about the benefits (and costs) of undergraduate engagement in faculty-mentored, authentic research undertaken outside of class work, about which the existing literature offers few findings and many untested hypotheses.[2] Longitudinal and comparative, this study explores:

• what students identify as the benefits of UR, both following the experience and in the longer term (particularly career outcomes);
• what gains faculty advisors observe in their student researchers and how their view of gains converges with or diverges from those of their students;
• the benefits and costs to faculty of their engagement in UR;
• what, if anything, is lost by students who do not participate in UR; and
• the processes by which gains to students are generated.

This study was undertaken at four liberal arts colleges with a strong history of UR. All four offer UR in three core sciences (physics, chemistry, and biology), with additional programs in other STEM fields, including (at different campuses) computer science, engineering, biochemistry, mathematics, and psychology. In the apprenticeship model of UR practiced at these colleges, faculty alone directed students in research; however, in the few instances where faculty conducted research at a nearby institution, some students did have contact with post docs, graduate students, or senior laboratory technicians who assisted in the research as well.

[2] An extensive review and discussion of the literature on UR is presented in Seymour et al. (2004).

We interviewed a cohort of (largely) "rising seniors" who were engaged in UR in summer 2000 on the four campuses (N = 76). They were interviewed for a second time shortly before their graduation in spring 2001 (N = 69), and a third time as graduates in 2003-2004 (N = 55). The faculty advisors (N = 55) working with this cohort of students were also interviewed in summer 2000, as were nine administrators with long experience of UR programs at their schools.

We also interviewed a comparison group of students (N = 62) who had not done UR. They were interviewed as graduating seniors in spring 2001, and again as graduates in 2003-2004 (N = 25). A comparison group (N = 16) of faculty who did not conduct UR in summer 2000 was also interviewed.

Interview protocols focused upon the nature, value, and career consequences of UR experiences, and the methods by which these were achieved.[3] After classifying the range of benefits claimed in the literature, we constructed a "gains" checklist to discuss with all participants "what faculty think students may gain from undergraduate research." During the interview, UR students were asked to describe the gains from their research experience (or by other means). If, toward the end of the interview, a student had not mentioned a gain identified on our "checklist," the student was queried as to whether he or she could claim to have gained the benefit and was invited to add further comment. Students also mentioned gains they had made that were not included in the list.

With slight alterations in the protocol, we invited comments on the same list of possible gains from students who had not experienced UR, and solicited information about gains from other types of experience. All students were asked to expand on their answers, to highlight gains most significant to them, and to describe the sources of any benefits.

In the second set of interviews, the same students (nearing graduation) were asked to reflect back on their research experiences as undergraduates, and to comment on the relative importance of their research-derived gains, both for the careers they planned and for other aspects of their lives. In the final set of interviews, they were asked to offer a retrospective summary of the origins of their career plans and the role that UR and other factors had played in them, and to comment on the longer term effects of their UR experiences, especially the consequences for their career choices and progress, including their current educational or professional engagement. Again, the sources of gains cited were explored, especially gains that were identified by some students as arising from UR experiences but may also arise from other aspects of their college education.

The total of 367 interviews represents more than 13,000 pages of text data. We are currently analyzing other aspects of the data and will report findings on additional topics, including the benefits and costs to faculty of their participation in UR and longitudinal and comparative outcomes of students' career choices. This article discusses findings from a comparative analysis of all faculty and administrator interviews (N = 80), with findings from the first-round UR student interviews (N = 76), and provides empirical evidence of the role of UR experiences in encouraging the intellectual, personal, and professional development of student researchers, and how the apprenticeship model fits theoretical discussions on these topics.

[3] The protocol is available by request to the authors via abhunter@.
METHODS OF DATA TRANSCRIPTION, CODING, AND ANALYSIS

Our methods of data collection and analysis are ethnographic, rooted in theoretical work and methodological traditions from sociology, anthropology, and social psychology (Berger & Luckmann, 1967; Blumer, 1969; Garfinkel, 1967; Mead, 1934; Schutz & Luckmann, 1974). Classically, qualitative studies such as ethnographies precede survey or experimental work, particularly where existing knowledge is limited, because these methods of research can uncover and explore issues that shape informants' thinking and actions. Good qualitative software computer programs are now available that allow for the multiple, overlapping, and nested coding of a large volume of text data to a high degree of complexity, thus enabling ethnographers to disentangle patterns in large data sets and to report findings using descriptive statistics. Although conditions for statistical significance are rarely met, the results from analysis of text data gathered by careful sampling and consistency in data coding can be very powerful.

Interviews took between 60 and 90 minutes. Taped interviews and focus groups were transcribed verbatim into a word-processing program and submitted to "The Ethnograph," a qualitative computer software program (Seidel, 1998). Each transcript was searched for information bearing upon the research questions. In this type of analysis, text segments referencing issues of different type are tagged by code names. Codes are not preconceived, but empirical: each new code references a discrete idea not previously raised. Interviewees also offer information in spontaneous narratives and examples, and may make several points in the same passage, each of which is separately coded. As transcripts are coded, both the codes and their associated passages are entered into "The Ethnograph," creating a data set for each interview group (eight, in this study).

Code words and their definitions are concurrently collected in a codebook. Groups of codes that cluster around particular themes are assigned and grouped by "parent" codes. Because an idea that is encapsulated by a code may relate to more than one theme, code words are often assigned multiple parent codes. Thus, a branching and interconnected structure of codes and parents emerges from the text data, which, at any point in time, represents the state of the analysis. As information is commonly embedded in speakers' accounts of their experience rather than offered in abstract statements, transcripts can be checked for internal consistency; that is, between the opinions or explanations offered by informants, their descriptions of events, and the reflections and feelings these evoke. Ongoing discussions between members of our research group continually reviewed the types of observations arising from the data sets to assess and refine category definitions and assure content validity.

The clustered codes and parents and their relationships define themes of the qualitative analysis. In addition, frequency of use can be counted for codes across a data set, and for important subsets (e.g., gender), using conservative counting conventions that are designed to avoid overestimation of the weight of particular opinions. Together, these frequencies describe the relative weighting of issues in participants' collective report. As they are drawn from targeted, intentional samples, rather than from random samples, these frequencies are not subjected to tests for statistical significance. They hypothesize the strength of particular variables and their relationships that may later be tested by random sample surveys or by other means. However, the findings in this study are unusually strong because of near-complete participation by members of each group under study.

Before presenting findings from this study, we provide an overview of the results of our comparative analysis and describe the evolution of our analysis of the student interview data as a result of emergent findings from analysis of the faculty interview data.

ADAPTIVE LASSO FOR SPARSE HIGH-DIMENSIONAL REGRESSION MODELS


We study the asymptotic properties of the adaptive Lasso estimators in sparse, high-dimensional, linear regression models when the number of covariates may increase with the sample size. We consider variable selection using the adaptive Lasso, where the L1 norms in the penalty are re-weighted by data-dependent weights. We show that, if a reasonable initial estimator is available, then under appropriate conditions, the adaptive Lasso correctly selects covariates with nonzero coefficients with probability converging to one, and that the estimators of nonzero coefficients have the same asymptotic distribution that they would have if the zero coefficients were known in advance. Thus, the adaptive Lasso has an oracle property in the sense of Fan and Li (2001) and Fan and Peng (2004). In addition, under a partial orthogonality condition in which the covariates with zero coefficients are weakly correlated with the covariates with nonzero coefficients, marginal regression can be used to obtain the initial estimator. With this initial estimator, the adaptive Lasso has the oracle property even when the number of covariates is much larger than the sample size.
Key words and phrases: penalized regression, high-dimensional data, variable selection, asymptotic normality, oracle property, zero-consistency.
Short title: Adaptive Lasso.
AMS 2000 subject classification: primary 62J05, 62J07; secondary 62E20, 60F05.
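The two-step procedure the abstract describes can be sketched in code. This is a minimal illustration, not the authors' implementation: it uses ordinary least squares as the initial estimator (the abstract discusses marginal regression for the case with more covariates than observations), forms weights w_j = 1/|β̂_j|^γ, and solves the weighted problem by rescaling columns and running a basic coordinate-descent Lasso. The data, penalty level, and helper names are all made up for the example.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """Plain Lasso via coordinate descent on (1/2n)||y - Xb||^2 + alpha * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual excluding feature j
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, alpha) / col_sq[j]
    return b

def adaptive_lasso(X, y, alpha, gamma=1.0):
    """Adaptive Lasso: data-dependent weights from an OLS initial fit, then a
    weighted L1 penalty implemented by rescaling the columns of X."""
    b_init, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = 1.0 / np.maximum(np.abs(b_init), 1e-8) ** gamma   # heavier penalty on small initial coefs
    b_weighted = lasso_cd(X / w, y, alpha)                # column j scaled by 1/w_j
    return b_weighted / w                                 # map back to the original scale

# Synthetic sparse regression: only coefficients 0, 1, and 4 are truly nonzero.
rng = np.random.default_rng(0)
n, p = 100, 8
X = rng.normal(size=(n, p))
beta = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
y = X @ beta + 0.5 * rng.normal(size=n)
b_hat = adaptive_lasso(X, y, alpha=0.1)
print(np.round(b_hat, 2))
```

The rescaling trick works because X @ b = (X / w) @ (w * b), so an ordinary Lasso on the rescaled design penalizes each coefficient in proportion to its weight; covariates with small initial estimates receive large weights and are driven exactly to zero, which is the selection behavior behind the oracle property.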

History of Thermal Infrared Sensing


History of infrared detectorsA.ROGALSKI*Institute of Applied Physics, Military University of Technology, 2 Kaliskiego Str.,00–908 Warsaw, PolandThis paper overviews the history of infrared detector materials starting with Herschel’s experiment with thermometer on February11th,1800.Infrared detectors are in general used to detect,image,and measure patterns of the thermal heat radia−tion which all objects emit.At the beginning,their development was connected with thermal detectors,such as ther−mocouples and bolometers,which are still used today and which are generally sensitive to all infrared wavelengths and op−erate at room temperature.The second kind of detectors,called the photon detectors,was mainly developed during the20th Century to improve sensitivity and response time.These detectors have been extensively developed since the1940’s.Lead sulphide(PbS)was the first practical IR detector with sensitivity to infrared wavelengths up to~3μm.After World War II infrared detector technology development was and continues to be primarily driven by military applications.Discovery of variable band gap HgCdTe ternary alloy by Lawson and co−workers in1959opened a new area in IR detector technology and has provided an unprecedented degree of freedom in infrared detector design.Many of these advances were transferred to IR astronomy from Departments of Defence ter on civilian applications of infrared technology are frequently called“dual−use technology applications.”One should point out the growing utilisation of IR technologies in the civilian sphere based on the use of new materials and technologies,as well as the noticeable price decrease in these high cost tech−nologies.In the last four decades different types of detectors are combined with electronic readouts to make detector focal plane arrays(FPAs).Development in FPA technology has revolutionized infrared imaging.Progress in integrated circuit design and fabrication techniques has resulted in continued rapid growth in 
the size and performance of these solid state arrays.

Keywords: thermal and photon detectors, lead salt detectors, HgCdTe detectors, microbolometers, focal plane arrays.

OPTO-ELECTRONICS REVIEW 20(3), 279–308, DOI: 10.2478/s11772-012-0037-7
*e-mail: rogan@.pl

Contents
1. Introduction
2. Historical perspective
3. Classification of infrared detectors
3.1. Photon detectors
3.2. Thermal detectors
4. Post-War activity
5. HgCdTe era
6. Alternative material systems
6.1. InSb and InGaAs
6.2. GaAs/AlGaAs quantum well superlattices
6.3. InAs/GaInSb strained layer superlattices
6.4. Hg-based alternatives to HgCdTe
7. New revolution in thermal detectors
8. Focal plane arrays – revolution in imaging systems
8.1. Cooled FPAs
8.2. Uncooled FPAs
8.3. Readiness level of LWIR detector technologies
9. Summary
References

1. Introduction

Looking back over the past 1000 years, we notice that infrared radiation (IR) itself was unknown until 212 years ago, when Herschel's experiment with a thermometer and prism was first reported. Frederick William Herschel (1738–1822) was born in Hanover, Germany, but emigrated to Britain at age 19, where he became well known as both a musician and an astronomer. Herschel became most famous for the discovery of Uranus in 1781 (the first new planet found since antiquity), in addition to two of its major moons, Titania and Oberon. He also discovered two moons of Saturn and infrared radiation. Herschel is also known for the twenty-four symphonies that he composed.

W. Herschel made another milestone discovery – the discovery of infrared light on February 11th, 1800. He studied the spectrum of sunlight with a prism [see Fig. 1 in Ref. 1], measuring the temperature of each colour. The detector consisted of liquid in a glass thermometer with a specially blackened bulb to absorb radiation. Herschel built a crude monochromator that used a thermometer as a detector, so that he could measure the distribution of energy in sunlight, and found that the highest temperature was just beyond the red – what we now call the infrared ('below the red', from the Latin 'infra' – below) – see Fig. 1(b) [2]. In April 1800 he reported it to the Royal Society as dark heat (Ref. 1, pp. 288–290):

Here the thermometer No. 1 rose 7 degrees, in 10 minutes, by an exposure to the full red coloured rays. I drew back the stand, till the centre of the ball of No. 1 was just at the vanishing of the red colour, so that half its ball was within, and half without, the visible rays of the sun. And here the thermometer […] in 16 minutes, […] degrees, when its centre was […] inch out of the rays of the sun. […] as […] had a rising of 9 degrees, and here the difference is almost too trifling to suppose, that latter situation of the thermometer was much beyond the maximum of the heating power; while, at the same time, the experiment sufficiently indicates, that the place inquired after need not be looked for at a greater distance.

Making further experiments on what Herschel called the 'calorific rays' that existed beyond the red part of the spectrum, he found that they were reflected, refracted, absorbed and transmitted just like visible light [1,3,4].

The early history of IR was reviewed about 50 years ago in three well-known monographs [5–7]. Much historical information can also be found in four papers published by Barr [3,4,8,9] and in a more recently published monograph [10]. Table 1 summarises the historical development of infrared physics and technology [11,12].

2. Historical perspective

For thirty years following Herschel's discovery, very little progress was made beyond establishing that the infrared radiation obeyed the simplest laws of optics. Slow progress in the study of infrared was caused by the lack of sensitive and accurate detectors – the experimenters were handicapped by the ordinary thermometer. However, towards the second decade of the 19th century, Thomas Johann Seebeck began to examine the junction behaviour of electrically conductive materials. In 1821 he discovered that a small electric current will flow in a closed circuit of two dissimilar metallic conductors when their junctions are kept at different 
temperatures [13]. During that time, most physicists thought that radiant heat and light were different phenomena, and the discovery of Seebeck indirectly contributed to a revival of the debate on the nature of heat. The small output voltage of Seebeck's junctions, a few μV/K, prevented the measurement of very small temperature differences. In 1829, L. Nobili made the first thermocouple, an improved electrical thermometer based on Seebeck's thermoelectric effect. Four years later, M. Melloni introduced the idea of connecting several bismuth-copper thermocouples in series, generating a higher and, therefore, measurable output voltage. It was at least 40 times more sensitive than the best thermometer available and could detect the heat from a person at a distance of 30 ft [8]. The output voltage of such a thermopile structure increases linearly with the number of connected thermocouples. An example of a thermopile prototype invented by Nobili is shown in Fig. 2(a). It consists of twelve large bismuth and antimony elements. The elements were placed upright in a brass ring secured to an adjustable support, and were screened by a wooden disk with a 15-mm central aperture. An incomplete version of the Nobili-Melloni thermopile, originally fitted with brass cone-shaped tubes to collect radiant heat, is shown in Fig. 2(b). This instrument was much more sensitive than the thermometers previously used and became the most widely used detector of IR radiation for the next half century.

The third member of the trio, Langley's bolometer, appeared in 1880 [7]. Samuel Pierpont Langley (1834–1906) used two thin ribbons of platinum foil connected so as to form two arms of a Wheatstone bridge (see Fig. 3) [15]. This instrument enabled him to study solar irradiance far into its infrared region and to measure the intensity of solar radiation at various wavelengths [9,16,17]. The bolometer's sensitivity was much greater than that of contemporary thermopiles, which had improved little since their use by Melloni. Langley continued to develop his bolometer for the next 20 years (400 times more sensitive than his first efforts). His latest bolometer could detect the heat from a cow at a distance of a quarter of a mile [9].

Fig. 1. Herschel's first experiment: A, B – the small stand; 1, 2, 3 – the thermometers upon it; C, D – the prism at the window; E – the spectrum thrown upon the table, so as to bring the last quarter of an inch of the red colour upon the stand (after Ref. 1). Inside: Sir Frederick William Herschel (1738–1822) measures infrared light from the sun – artist's impression (after Ref. 2).

Fig. 2. The Nobili-Melloni thermopiles: (a) thermopile prototype invented by Nobili (ca. 1829), (b) incomplete version of the Nobili-Melloni thermopile (ca. 1831). Museo Galileo – Institute and Museum of the History of Science, Piazza dei Giudici 1, 50122 Florence, Italy (after Ref. 14).

Table 1. Milestones in the development of infrared physics and technology (updated after Refs. 11 and 12)

1800 – Discovery of the existence of thermal radiation in the invisible beyond the red by W. HERSCHEL
1821 – Discovery of the thermoelectric effects using an antimony-copper pair by T.J. SEEBECK
1830 – Thermal element for thermal radiation measurement by L. NOBILI
1833 – Thermopile consisting of 10 in-line Sb-Bi thermal pairs by L. NOBILI and M. MELLONI
1834 – Discovery of the PELTIER effect on a current-fed pair of two different conductors by J.C. PELTIER
1835 – Formulation of the hypothesis that light and electromagnetic radiation are of the same nature by A.M. AMPERE
1839 – Solar absorption spectrum of the atmosphere and the role of water vapour by M. MELLONI
1840 – Discovery of the three atmospheric windows by J. HERSCHEL (son of W. HERSCHEL)
1857 – Harmonization of the three thermoelectric effects (SEEBECK, PELTIER, THOMSON) by W. THOMSON (Lord KELVIN)
1859 – Relationship between absorption and emission by G. KIRCHHOFF
1864 – Theory of electromagnetic radiation by J.C. MAXWELL
1873 – Discovery of photoconductive effect in selenium by W. SMITH
1876 – Discovery of photovoltaic effect in selenium (photopiles) by W.G. ADAMS and A.E. DAY
1879 – Empirical relationship between radiation intensity and temperature of a blackbody by J. STEFAN
1880 – Study of absorption characteristics of the atmosphere through a Pt bolometer resistance by S.P. LANGLEY
1883 – Study of transmission characteristics of IR-transparent materials by M. MELLONI
1884 – Thermodynamic derivation of the STEFAN law by L. BOLTZMANN
1887 – Observation of photoelectric effect in the ultraviolet by H. HERTZ
1890 – J. ELSTER and H. GEITEL constructed a photoemissive detector consisting of an alkali-metal cathode
1894, 1900 – Derivation of the wavelength relation of blackbody radiation by J.W. RAYLEIGH and W. WIEN
1900 – Discovery of quantum properties of light by M. PLANCK
1903 – Temperature measurements of stars and planets using IR radiometry and spectrometry by W.W. COBLENTZ
1905 – A. EINSTEIN established the theory of photoelectricity
1911 – R. ROSLING made the first television image tube on the principle of cathode ray tubes constructed by F. Braun in 1897
1914 – Application of bolometers for the remote exploration of people and aircraft (a man at 200 m and a plane at 1000 m)
1917 – T.W. CASE developed the first infrared photoconductor from a substance composed of thallium and sulphur
1923 – W. SCHOTTKY established the theory of dry rectifiers
1925 – V.K. ZWORYKIN made a television image tube (kinescope); then, between 1925 and 1933, the first electronic camera with the aid of a converter tube (iconoscope)
1928 – Proposal of the idea of the electro-optical converter (including the multistage one) by G. HOLST, J.H. DE BOER, M.C. TEVES, and C.F. VEENEMANS
1929 – L.R. KOLLER made a converter tube with a photocathode (Ag/O/Cs) sensitive in the near infrared
1930 – IR direction finders based on PbS quantum detectors in the wavelength range 1.5–3.0 μm for military applications (GUDDEN, GÖRLICH and KUTSCHER); increased range in World War II to 30 km for ships and 7 km for tanks (3–5 μm)
1934 – First IR image converter
1939 – Development of the first IR display unit in the United States (Sniperscope, Snooperscope)
1941 – R.S. OHL observed the photovoltaic effect shown by a p-n junction in silicon
1942 – G. EASTMAN (Kodak) offered the first film sensitive to the infrared
1947 – Pneumatically acting, high-detectivity radiation detector by M.J.E. GOLAY
1954 – First imaging cameras based on thermopiles (exposure time of 20 min per image) and on bolometers (4 min)
1955 – Mass production start of IR seeker heads for IR-guided rockets in the US (PbS and PbTe detectors, later InSb detectors for Sidewinder rockets)
1957 – Discovery of HgCdTe ternary alloy as infrared detector material by W.D. LAWSON, S. NELSON, and A.S. YOUNG
1961 – Discovery of extrinsic Ge:Hg and its application (linear array) in the first LWIR FLIR systems
1965 – Mass production start of IR cameras for civil applications in Sweden (single-element sensors with optomechanical scanner: AGA Thermografiesystem 660)
1970 – Discovery of charge-coupled device (CCD) by W.S. BOYLE and G.E. SMITH
1970 – Production start of IR sensor arrays (monolithic Si arrays: R.A. SOREF 1968; IR-CCD: 1970; SCHOTTKY diode arrays: F.D. SHEPHERD and A.C. YANG 1973; IR-CMOS: 1980; SPRITE: T. ELLIOTT 1981)
1975 – Launch of national programmes for making spatially high-resolution observation systems in the infrared from multielement detectors integrated in a mini cooler (so-called first-generation systems): common module (CM) in the United States, thermal imaging common module (TICM) in Great Britain, système modulaire thermique (SMT) in France
1975 – First In-bump hybrid infrared focal plane array
1977 – Discovery of the broken-gap type-II InAs/GaSb superlattices by G.A. SAI-HALASZ, R. TSU, and L. ESAKI
1980 – Development and production of second-generation systems [cameras fitted with hybrid HgCdTe(InSb)/Si(readout) FPAs]. First demonstration of two-colour back-to-back SWIR GaInAsP detector by J.C. CAMPBELL, A.G. DENTAI, T.P. LEE, and C.A. BURRUS
1985 – Development and mass production of cameras fitted with Schottky diode FPAs (platinum silicide)
1990 – Development and production of quantum well infrared photoconductor (QWIP) hybrid second-generation systems
1995 – Production start of IR cameras with uncooled FPAs (focal plane arrays; microbolometer-based and pyroelectric)
2000 – Development and production of third-generation infrared systems

From the above it follows that at the beginning the development of IR detectors was connected with thermal detectors. The first photon effect, the photoconductive effect, was discovered by Smith in 1873 when he experimented with selenium as an insulator for submarine cables [18]. This discovery provided a fertile field of investigation for several decades, though most of the efforts were of doubtful quality.
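Langley's bolometer, described above, balanced two platinum strips in a Wheatstone bridge: radiation absorbed by one strip raises its temperature, changes its resistance, and unbalances the bridge. A minimal numeric sketch of this readout principle (all values are illustrative, not Langley's actual figures):

```python
# Illustrative Wheatstone-bridge bolometer readout (hypothetical values,
# chosen only to show the orders of magnitude involved).

def heated_resistance(r0, alpha, d_t):
    """Resistance of a platinum strip heated by d_t kelvin.
    alpha ~ 3.9e-3 1/K is the temperature coefficient of platinum."""
    return r0 * (1.0 + alpha * d_t)

def bridge_output(v_bias, r_active, r_ref):
    """Output of a Wheatstone bridge with equal fixed arms (r_ref)
    and one radiation-heated platinum strip (r_active)."""
    return v_bias * (r_active / (r_active + r_ref) - 0.5)

r0 = 100.0      # ohm, strip resistance at ambient temperature
alpha = 3.9e-3  # 1/K, platinum temperature coefficient
d_t = 1e-4      # K, tiny temperature rise from absorbed IR
v_bias = 1.0    # V, bridge excitation

r_hot = heated_resistance(r0, alpha, d_t)
v_out = bridge_output(v_bias, r_hot, r0)
print(f"dR = {r_hot - r0:.2e} ohm -> bridge signal = {v_out:.2e} V")
```

The sub-microvolt signal for a 0.1 mK temperature rise illustrates why a sensitive galvanometer was an essential part of the instrument.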
By 1927, over 1500 articles and 100 patents had been listed on photosensitive selenium [19]. It should be mentioned that the literature of the early 1900s shows increasing interest in the application of infrared as a solution to numerous problems [7]. A special contribution of William Coblentz (1873–1962) to infrared radiometry and spectroscopy is marked by a huge bibliography containing hundreds of scientific publications, talks, and abstracts to his credit [20,21]. In 1915, W. Coblentz at the US National Bureau of Standards developed thermopile detectors, which he used to measure the infrared radiation from 110 stars. However, the low sensitivity of early infrared instruments prevented the detection of other near-IR sources. Work in infrared astronomy remained at a low level until breakthroughs in the development of new, sensitive infrared detectors were achieved in the late 1950s.

The principle of photoemission was first demonstrated in 1887, when Hertz discovered that negatively charged particles were emitted from a conductor if it was irradiated with ultraviolet [22]. Further studies revealed that this effect could be produced with visible radiation using an alkali metal electrode [23].

The rectifying properties of a semiconductor-metal contact were discovered by Ferdinand Braun in 1874 [24], when he probed a naturally occurring lead sulphide (galena) crystal with the point of a thin metal wire and noted that current flowed freely in one direction only. Next, Jagadis Chandra Bose demonstrated the use of a galena-metal point contact to detect millimetre electromagnetic waves. In 1901 he filed a U.S. patent for a point-contact semiconductor rectifier for detecting radio signals [25]. This type of contact, called the cat's whisker detector (sometimes also the crystal detector), played a serious role in the initial phase of radio development. However, this contact was not used in a radiation detector for the next several decades. Although crystal rectifiers allowed simple radio sets to be fabricated, by the mid-1920s the 
predictable performance of vacuum tubes had replaced them in most radio applications.

The period between World Wars I and II was marked by the development of photon detectors and image converters and by the emergence of infrared spectroscopy as one of the key analytical techniques available to chemists. The image converter, developed on the eve of World War II, was of tremendous interest to the military because it enabled man to see in the dark.

The first IR photoconductor was developed by Theodore W. Case in 1917 [26]. He discovered that a substance composed of thallium and sulphur (Tl2S) exhibited photoconductivity. Supported by the US Army between 1917 and 1918, Case adapted these relatively unreliable detectors for use as sensors in an infrared signalling device [27]. The prototype signalling system, consisting of a 60-inch diameter searchlight as the source of radiation and a thallous sulphide detector at the focus of a 24-inch diameter paraboloid mirror, sent messages 18 miles through what was described as 'smoky atmosphere' in 1917. However, instability of resistance in the presence of light or polarizing voltage, loss of responsivity due to over-exposure to light, high noise, sluggish response and lack of reproducibility seemed to be inherent weaknesses. Work was discontinued in 1918; communication by the detection of infrared radiation appeared distinctly unpromising. Later, Case found that the addition of oxygen greatly enhanced the response [28].

The idea of the electro-optical converter, including the multistage one, was proposed by Holst et al. in 1928 [29].
The first attempt to make the converter was not successful. A working tube, consisting of a photocathode in close proximity to a fluorescent screen, was made by the authors in 1934 at the Philips firm.

In about 1930, the appearance of the Cs-O-Ag phototube, with stable characteristics, to a great extent discouraged further development of photoconductive cells until about 1940. The Cs-O-Ag photocathode (also called S-1), elaborated by Koller and Campbell [30], had a quantum efficiency two orders of magnitude above anything previously studied, and consequently a new era in photoemissive devices was inaugurated [31]. In the same year, the Japanese scientists S. Asao and M. Suzuki reported a method for enhancing the sensitivity of silver in the S-1 photocathode [32]. Consisting of a layer of caesium on oxidized silver, S-1 is sensitive, with useful response in the near infrared, out to approximately 1.2 μm, and in the visible and ultraviolet region, down to 0.3 μm.

Fig. 3. Langley's bolometer (a), composed of two sets of thin platinum strips (b), a Wheatstone bridge, a battery, and a galvanometer measuring electrical current (after Refs. 15 and 16).

Probably the most significant IR development in the United States during the 1930s was the Radio Corporation of America (RCA) IR image tube. During World War II, near-IR (NIR) cathodes were coupled to visible phosphors to provide a NIR image converter. With the establishment of the National Defence Research Committee, the development of this tube was accelerated. In 1942, the tube went into production as the RCA 1P25 image converter (see Fig. 4). This was one of the tubes used during World War II as a part of the "Snooperscope" and "Sniperscope," which were used for night observation with infrared sources of illumination. Since then, various photocathodes have been developed, including bialkali photocathodes for the visible region, multialkali photocathodes with high sensitivity extending to the infrared region, and alkali halide photocathodes intended for ultraviolet 
detection.

The early concepts of image intensification were not basically different from those of today. However, the early devices suffered from two major deficiencies: poor photocathodes and poor optics. Later development of both cathode and coupling technologies changed the image intensifier into a much more useful device. The concept of image intensification by cascading stages was suggested independently by a number of workers. In Great Britain, the work was directed toward proximity-focused tubes, while in the United States and in Germany toward electrostatically focused tubes. A history of night vision imaging devices is given by Biberman and Sendall in the monograph Electro-Optical Imaging: System Performance and Modelling, SPIE Press, 2000 [10]. Biberman's monograph describes the basic trends of infrared optoelectronics development in the USA, Great Britain, France, and Germany. Seven years later, Ponomarenko and Filachev complemented this monograph with the book Infrared Techniques and Electro-Optics in Russia: A History 1946–2006, SPIE Press, about the achievements of IR techniques and electro-optics in the former USSR and Russia [33].

In the early 1930s, interest in improved detectors began in Germany [27,34,35]. In 1933, Edgar W. Kutzscher at the University of Berlin discovered that lead sulphide (from natural galena found in Sardinia) was photoconductive and had response to about 3 μm. B. Gudden at the University of Prague used evaporation techniques to develop sensitive PbS films. Work directed by Kutzscher, initially at the University of Berlin and later at the Electroacustic Company in Kiel, dealt primarily with the chemical deposition approach to film formation. This work ultimately led to the fabrication of the most sensitive German detectors. These works were, of course, done under great secrecy, and the results were not generally known until after 1945. Lead sulphide photoconductors were brought to the manufacturing stage of development in Germany in about 1943. Lead sulphide was the first practical infrared 
detector deployed in a variety of applications during the war. The most notable was the Kiel IV, an airborne IR system that had excellent range and which was produced at Carl Zeiss in Jena under the direction of Werner K. Weihe [6].

In 1941, Robert J. Cashman improved the technology of thallous sulphide detectors, which led to successful production [36,37]. Cashman, after success with thallous sulphide detectors, concentrated his efforts on lead sulphide detectors, which were first produced in the United States at Northwestern University in 1944. After World War II, Cashman found that other semiconductors of the lead salt family (PbSe and PbTe) showed promise as infrared detectors [38]. The early detector cells manufactured by Cashman are shown in Fig. 5.

Fig. 4. The original 1P25 image converter tube developed by RCA (a). This device measures 115×38 mm overall and has 7 pins. Its operation is indicated by the schematic drawing (b).

After 1945, the wide-ranging German trajectory of research was essentially the direction continued in the USA, Great Britain and the Soviet Union under military sponsorship after the war [27,39]. Kutzscher's facilities were captured by the Russians, thus providing the basis for early Soviet detector development. From 1946, detector technology was rapidly disseminated to firms such as Mullard Ltd. in Southampton, UK, as part of war reparations, and sometimes was accompanied by the valuable tacit knowledge of technical experts. E.W. Kutzscher, for example, was flown to Britain from Kiel after the war, and subsequently had an important influence on American developments when he joined Lockheed Aircraft Co. in Burbank, California, as a research scientist.

Although the fabrication methods developed for lead salt photoconductors were usually not completely understood, their properties are well established, and reproducibility could only be achieved by following well-tried recipes. Unlike most other semiconductor IR detectors, lead salt photoconductive materials are used in the form of 
polycrystalline films approximately 1 μm thick, with individual crystallites ranging in size from approximately 0.1–1.0 μm. They are usually prepared by chemical deposition using empirical recipes, which generally yields better uniformity of response and more stable results than the evaporative methods. In order to obtain high-performance detectors, lead chalcogenide films need to be sensitized by oxidation. The oxidation may be carried out by using additives in the deposition bath, by post-deposition heat treatment in the presence of oxygen, or by chemical oxidation of the film. The effect of the oxidant is to introduce sensitizing centres and additional states into the bandgap and thereby increase the lifetime of the photoexcited holes in the p-type material.

3. Classification of infrared detectors

Observing the history of the development of IR detector technology after World War II, many materials have been investigated. A simple theorem, after Norton [40], can be stated: "All physical phenomena in the range of about 0.1–1 eV will be proposed for IR detectors." Among these effects are: thermoelectric power (thermocouples), change in electrical conductivity (bolometers), gas expansion (Golay cell), pyroelectricity (pyroelectric detectors), photon drag, Josephson effect (Josephson junctions, SQUIDs), internal emission (PtSi Schottky barriers), fundamental absorption (intrinsic photodetectors), impurity absorption (extrinsic photodetectors), low-dimensional solids [superlattice (SL), quantum well (QW) and quantum dot (QD) detectors], different types of phase transitions, etc.

Figure 6 gives approximate dates of significant development efforts for the materials mentioned. The years during World War II saw the origins of modern IR detector technology. Recent success in applying infrared technology to remote sensing problems has been made possible by the successful development of high-performance infrared detectors over the last six decades. Photon IR technology combined with semiconductor material 
science, photolithography technology developed for integrated circuits, and the impetus of Cold War military preparedness have propelled extraordinary advances in IR capabilities within a short time period during the last century [41].

The majority of optical detectors can be classified in two broad categories: photon detectors (also called quantum detectors) and thermal detectors.

3.1. Photon detectors

In photon detectors the radiation is absorbed within the material by interaction with electrons, either bound to lattice atoms or to impurity atoms, or with free electrons. The observed electrical output signal results from the changed electronic energy distribution. Photon detectors show a selective wavelength dependence of response per unit incident radiation power (see Fig. 8). They exhibit both a good signal-to-noise performance and a very fast response. But to achieve this, photon IR detectors require cryogenic cooling. This is necessary to prevent the thermal

Fig. 5. Cashman's detector cells: (a) Tl2S cell (ca. 1943): a grid of two intermeshing comb-like sets of conducting paths was first provided, and next the Tl2S was evaporated over the grid structure; (b) PbS cell (ca. 1945): the PbS layer was evaporated on the wall of the tube, on which electrical leads had been drawn with aquadag (after Ref. 38).
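The selective wavelength dependence mentioned above has a simple ideal form: a photon detector counts photons, so its current responsivity rises linearly with wavelength, R(λ) = ηqλ/(hc), up to the cutoff wavelength set by the material's energy gap, then drops to zero; a thermal detector, by contrast, responds to absorbed power regardless of wavelength. A minimal sketch of the ideal photon-detector curve, assuming unit quantum efficiency and a PbS-like ~3 μm cutoff taken from the text:

```python
# Ideal spectral responsivity of a photon detector:
# R(lambda) = eta * q * lambda / (h * c) for lambda <= cutoff, else 0.

Q = 1.602e-19   # C, electron charge
H = 6.626e-34   # J*s, Planck constant
C = 2.998e8     # m/s, speed of light

def photon_responsivity(wavelength_um, cutoff_um, eta=1.0):
    """Current responsivity in A/W of an ideal photon detector."""
    if wavelength_um > cutoff_um:
        return 0.0
    return eta * Q * wavelength_um * 1e-6 / (H * C)

# Responsivity rises linearly toward the cutoff, then vanishes.
for lam in (1.0, 2.0, 3.0, 3.5):
    print(f"{lam:.1f} um -> {photon_responsivity(lam, cutoff_um=3.0):.2f} A/W")
```

A handy check on the constants: at λ = 1.24 μm an ideal detector with η = 1 gives very nearly 1 A/W, since hc/q ≈ 1.24 eV·μm.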


Paper Title
3D Face Recognition based on Geodesic Distances

Authors
Shalini Gupta, Department of Electrical and Computer Engineering, The University of Texas at Austin, 1 University Station C0800, Austin, TX 78712, +1.512.471.8660, +1.512.471.0616 (fax), shalinig@

Mia K. Markey, Department of Biomedical Engineering, The University of Texas at Austin, 1 University Station C0800, Austin, TX 78712, +1.512.471.8660, +1.512.471.0616 (fax), mia.markey@

Jake Aggarwal, Department of Electrical and Computer Engineering, The University of Texas at Austin, 1 University Station C0803, Austin, TX 78712, +1.512.471.1369, +1.512.471.5532 (fax), aggarwaljk@

Alan C. Bovik, Department of Electrical and Computer Engineering, The University of Texas at Austin, 1 University Station C0803, Austin, TX 78712, +1.512.471.5370, +1.512.471.1225 (fax), bovik@

Presentation Preference
Oral Presentation or Poster Presentation

Principal Author's Biography
Shalini Gupta received a BE degree in Electronics and Electrical Communication Engineering from Punjab Engineering College, India. She received an MS degree in Electrical and Computer Engineering from the University of Texas at Austin, where she is currently a PhD student. During her masters, she developed techniques for computer-aided diagnosis of breast cancer. She is currently investigating techniques for 3D human face recognition.

Keywords
Geodesic distances, three-dimensional face recognition, range image, biometrics

Extended Abstract

Problem Statement: Automated human identification is required in applications such as access control, passenger screening, passport control, surveillance, criminal justice and human-computer interaction. Face recognition is one of the most widely investigated biometric techniques for human identification. Face recognition systems require less user co-operation than systems based on other biometrics (e.g., fingerprints and iris). Although considerable progress has been made on face recognition systems based on two-dimensional (2D) intensity images, they are inadequate for robust face recognition. 
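Several of the 3D approaches reviewed below rely on geodesic (on-surface) distances between fiducial points; the paper itself computes them with the fast marching method [13]. As an illustrative stand-in, a geodesic distance across a range image can be approximated by running Dijkstra's algorithm on the pixel grid, with edge weights equal to the 3D distance between neighbouring surface points. The height map, landmarks, and function name here are hypothetical toy data, not the paper's:

```python
import heapq
import math

def geodesic_distance(z, src, dst, pixel_pitch=1.0):
    """Approximate geodesic distance across a height map `z` (list of
    lists of surface heights) between pixels src and dst, via Dijkstra
    over 8-connected neighbours with 3D Euclidean edge weights.
    Illustrative stand-in for the fast marching method of Ref. [13]."""
    rows, cols = len(z), len(z[0])
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    step = math.sqrt((dr * pixel_pitch) ** 2 +
                                     (dc * pixel_pitch) ** 2 +
                                     (z[nr][nc] - z[r][c]) ** 2)
                    nd = d + step
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Toy 3x3 "nose bump": the on-surface (geodesic) path between the two
# corner pixels is longer than their straight-line Euclidean distance.
bump = [[0.0, 0.0, 0.0],
        [0.0, 2.0, 0.0],
        [0.0, 0.0, 0.0]]
g = geodesic_distance(bump, (0, 0), (2, 2))
e = math.sqrt(2 ** 2 + 2 ** 2)
print(g, e)  # geodesic >= Euclidean
```

The grid approximation overestimates true geodesics (paths are restricted to the 8 grid directions), which is precisely the bias fast marching was designed to remove.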
Their performance is reported to decrease significantly with varying facial pose and illumination conditions [1]. Three-dimensional face recognition systems are less sensitive to changes in ambient illumination conditions than 2D systems [2]. Three-dimensional face models can also be rigidly transformed to a canonical pose. Hence, considerable research attention is now being directed toward developing 3D face recognition systems.

Review of Previous Work: Techniques employed for 3D face recognition include those based upon the global appearance of face range images, surface matching, and local facial geometric features. Techniques based on the global appearance of face range images are straightforward extensions of statistical learning techniques that were successful to a degree with 2D face images. They involve statistical learning of the 3D face space through an ensemble of range images. A popular 3D face recognition technique is based on principal component analysis (PCA) [3] and is often taken as the baseline for assessing the performance of other algorithms [4]. While appearance-based techniques have met with a degree of success, it is intuitively less obvious exactly what discriminatory information about faces they encode. Furthermore, since they employ information from large range image regions, their recognition performance is affected by changes in facial pose, expression, occlusions, and holes.

Techniques based on surface matching use iterative procedures to rigidly align two face surfaces as closely as possible [5]. A metric quantifies the difference between the two face surfaces after alignment, and this is employed for recognition. The computational load of such techniques can be considerable, especially when searching large 3D face databases. Their performance is also affected by changes in facial expression.

For techniques based on local geometric facial features, characteristics of localized regions of the face surface, and their relationships to others, are quantified and employed as 
features. Some local geometric features that have been used previously for face recognition include surface curvatures, Euclidean distances and angles between fiducial points on the face [6,7,8], point signatures [9], and shape variations of facial sub-regions [10]. Techniques based on local features require an additional step of localization and segmentation of specific regions of the face. A pragmatic issue affecting the success of these techniques is the choice of local regions and fiducial points. Ideally, the choice of such regions should be based on an understanding of the variability of different parts of the face within and between individuals. Three-dimensional face recognition techniques based on local features have been shown to be robust, to a degree, to varying facial expression [9].

Recently, methods for expression-invariant 3D face recognition have been proposed [11]. They are based on the assumption that different facial expressions can be regarded as isometric deformations of the face surface. These deformations preserve intrinsic properties of the surface, one of which is the geodesic distance between a pair of points on the surface. Based on these ideas, we present a preliminary study aimed at investigating the effectiveness of using geodesic distances between all pairs of 25 fiducial points on the face as features for face recognition. To the best of our knowledge, this is the first study of its kind. Another contribution of this study is that instead of choosing a random set of points on the face surface, we considered facial landmarks relevant to measuring anthropometric facial proportions employed widely in facial plastic surgery and art [12]. The performance of the proposed face recognition algorithm was compared against other established algorithms.

Proposed Approach: Three-dimensional face models for the study were acquired by an MU-2 stereo imaging system by 3Q Technologies Ltd. (Atlanta, GA). The system simultaneously acquires both shape and texture information. The data set 
contained 1128 head models of 105 subjects. It was partitioned into a gallery set containing one image each of the 105 subjects with a neutral expression. The probe set contained another 663 images of the gallery subjects with a neutral or an arbitrary expression. The probe set had a variable number of images per subject (1–55).

Models were rigidly aligned to frontal orientation and range images were constructed. They were median filtered and interpolated to remove holes. Twenty-five fiducial points, as depicted in Figure 1, were manually located on each face. Three face recognition algorithms were implemented. The first employed 300 geodesic distances (between all pairs of fiducial points) as features for recognition. The fast marching algorithm for front propagation was employed to calculate the geodesic distance between pairs of points [13]. The second algorithm employed 300 Euclidean distances between all pairs of fiducial points as features. The normalized L1 norm, where each dimension was divided by its variance, was used as the metric for matching faces with both the Euclidean distance and geodesic distance features.

The third 3D face recognition algorithm implemented was based on PCA. For this algorithm, a subsection of each face range image of size 354 pixels, enclosing the main facial features, was employed. The gallery and probe sets employed to test the performance of this algorithm were the same as those used in the first and second algorithms. Additionally, a separate set of 360 range images of 12 subjects (30 images per subject) was used to train the PCA classifier. Face range images were projected onto 42 eigenvectors accounting for 99% of the variance in the data. Again, the L1 norm was employed for matching faces in the 42-dimensional PCA subspace.

Verification performance of all algorithms was evaluated using the receiver operating characteristic (ROC) methodology, from which the equal error rates (EER) were noted. Identification performance was evaluated by means of the cumulative match characteristic curves (CMC) and the 
rank 1 recognition rates (RR) were observed. The performance of each technique for the entire probe set, for neutral probes only, and for expressive probes only was evaluated separately.

Experimental Results: Table 1 presents the equal error rates for verification performance and the rank 1 recognition rates for identification performance of the three face recognition algorithms. Figure 2(a) presents ROC curves of the three systems for neutral expression probes only. Figure 2(b) presents the CMC curves for the three systems for neutral expression probes only. It is evident that the two algorithms based on Euclidean or geodesic distances between anthropometric facial landmarks (EER ~5%, RR ~89%) performed substantially better than the baseline PCA algorithm (EER = 16.5%, RR = 69.7%). The algorithms based on geodesic distance features performed on a par with the algorithm based on Euclidean distance features. Both were effective, to a degree, at recognizing 3D faces. In this study the performance of the proposed algorithm based on geodesic distances between anthropometric facial landmarks decreased when probes with arbitrary facial expressions were matched against a gallery of neutral expression 3D faces. This suggests that geodesic distances between pairs of landmarks on a face may not be preserved when the facial expression changes. This contradicts Bronstein et al.'s assumption that facial expressions are isometric deformations of facial surfaces [11].

In conclusion, geodesic distances between anthropometric landmarks were observed to be effective features for recognizing 3D faces; however, they were not more effective than Euclidean distances between the same landmarks. The 3D face recognition algorithm based on geodesic distance features was affected by changes in facial expression. In the future, we plan to investigate methods for reducing the dimensionality of the proposed algorithm and to identify the more discriminatory geodesic distance features.

Acknowledgments: The authors would like to
gratefully acknowledge Advanced Digital Imaging Research, LLC (Houston, TX) for providing support in terms of funding and 3D face data for the study.

Figures and Tables:

Figure 1: The figures show the 25 anthropometric landmarks that were considered on a color and range image of a human face.

Figure 2: This figure presents 2(a) the verification performance in terms of an ROC curve and 2(b) the cumulative match characteristic curves for the identification performance of the three face recognition algorithms with the neutral expression probes only.

Method    | EER (%)             | Rank 1 RR (%)
          | N-N  | N-E  | N-All | N-N  | N-E  | N-All
GEODESIC  | 2.7  | 8.5  | 5.6   | 93.1 | 81.4 | 89.9
EUCLIDEAN | 2.2  | 6.7  | 4.1   | 92.9 | 78.1 | 88.8
PCA       | 18.1 | 13.4 | 16.5  | 70.2 | 68.3 | 69.7

Table 1: Verification and identification performance statistics for the face recognition systems based on PCA, Euclidean distances and geodesic distances. N-N represents performance of a system for the neutral probes only, N-E for the expressive probes only and N-All for all probes.

References
[1] P. J. Phillips, P. Grother, R. J. Micheals, D. M. Blackburn, E. Tabassi, and J. M. Bone. FRVT 2002: Overview and summary. Available at , March 2003.
[2] E. P. Kukula, S. J. Elliott, R. Waupotitsch, and B. Pesenti. Effects of illumination changes on the performance of Geometrix FaceVision 3D FRS. In Security Technology, 2004. 38th Annual 2004 International Carnahan Conference on, pages 331-337, 2004.
[3] K. I. Chang, K. W. Bowyer, and P. J. Flynn. An evaluation of multimodal 2D+3D face biometrics. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 27(4):619-624, 2005.
[4] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, and W. Worek. Preliminary face recognition grand challenge results. In Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on, pages 15-24, 2006.
[5] Xiaoguang Lu, A. K. Jain, and D. Colbry. Matching 2.5D face scans to 3D models. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 28(1):31-43, 2006.
[6] G. G. Gordon. Face recognition based on depth and curvature features. In Computer Vision and Pattern
Recognition, 1992. Proceedings CVPR '92., 1992 IEEE Computer Society Conference on, pages 808-810, 1992.
[7] A. B. Moreno, A. Sanchez, J. Fco, V. Fco, and J. Diaz. Face recognition using 3D surface-extracted descriptors. In Irish Machine Vision and Image Processing Conference (IMVIP 2003), September 2003.
[8] Y. Lee, H. Song, U. Yang, H. Shin, and K. Sohn. Local feature based 3D face recognition. In Audio- and Video-based Biometric Person Authentication, 2005 International Conference on, LNCS, volume 3546, pages 909-918, 2005.
[9] Yingjie Wang, Chin-Seng Chua, and Yeong-Khing Ho. Facial feature detection and face recognition from 2D and 3D images. Pattern Recognition Letters, 23(10):1191-1202, 2002.
[10] Chenghua Xu, Yunhong Wang, Tieniu Tan, and Long Quan. Automatic 3D face recognition combining global geometric features with local shape variation information. In Automatic Face and Gesture Recognition, 2004. Proceedings. Sixth IEEE International Conference on, pages 308-313, 2004.
[11] A. M. Bronstein, M. M. Bronstein, and R. Kimmel. Three-dimensional face recognition. International Journal of Computer Vision, 64(1):5-30, 2005.
[12] L. Farkas. Anthropometric Facial Proportions in Medicine. Thomas Books, 1987.
[13] R. Kimmel and J. A. Sethian. Computing geodesic paths on manifolds. Proceedings of the National Academy of Sciences, USA, 95:8431-8435, 1998.
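To make the distance-feature pipeline above concrete, here is a small illustrative sketch (not the authors' code): geodesic distances are approximated with Dijkstra shortest paths on a weighted graph over surface points, a crude stand-in for the fast marching solver of [13], and gallery matching uses the variance-normalized L1 metric described in the paper. The graph, landmark indices, and gallery data are invented for illustration.

```python
import heapq
import numpy as np

def dijkstra(adj, src):
    """Shortest-path lengths from src over a weighted adjacency dict."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def pairwise_features(adj, landmarks):
    """Vector of approximate geodesic distances between all landmark pairs."""
    feats = []
    for i, a in enumerate(landmarks):
        dist = dijkstra(adj, a)
        for b in landmarks[i + 1:]:
            feats.append(dist[b])
    return np.array(feats)

def match(probe, gallery):
    """Nearest gallery identity under the variance-normalized L1 metric."""
    names = list(gallery)
    feats = np.array([gallery[n] for n in names])
    var = feats.var(axis=0) + 1e-12  # per-dimension variance, guard zero
    scores = [np.sum(np.abs(probe - g) / var) for g in feats]
    return names[int(np.argmin(scores))]

# Toy "surface": a 4-node path graph 0-1-2-3 with unit edge weights.
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0)]}
print(pairwise_features(adj, [0, 2, 3]))  # -> [2. 3. 1.]
```

With 25 landmarks this yields the paper's 300-dimensional feature vector (25 choose 2); fast marching on the triangulated surface would replace the graph approximation in a faithful implementation.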

Lecture3-06

The Model of Animal Cells

1. Models of membrane structure
Boundary between two glial cells: the plasma membrane (Cell 1 / Cell 2).
- J. D. Robertson (1959): TEM showing the trilaminar appearance of the plasma membrane; unit membrane model.
- S. J. Singer and G. Nicolson (1972): fluid-mosaic model.
- K. Simons et al. (1997): lipid rafts model; functional rafts on cell membranes. Nature 387:569-572.
Membrane proteins and lipids can be confined to a specific domain. Lipid rafts are rich in cholesterol and sphingolipids.

2. The chemical composition of membranes
Composition of biomembranes: the phospholipid molecule. Four major phospholipids in mammalian plasma membranes. Glycolipids are found on the surface of all plasma membranes. Some functions of phospholipids in cell signaling; phospholipase cleavage sites.
Liposomes: phospholipid vesicles. Applications: gene transfer; as a carrier.
Membrane protein attachment via lipids: palmitic acid (a C16 saturated fatty acid); geranylgeranyl group.
Many membrane proteins are glycosylated on the non-cytosolic side of the membrane: N-linked glycosylation; O-linked glycosylation.
Detergents disrupt the lipid bilayer and solubilize membrane proteins.
Various ways to restrict the movement of specific membrane proteins: plasma membrane proteins in red blood cells. The cortical region of the cytosol consists of a complicated cytoskeletal network rich in actin filaments, as illustrated in the red blood cell. The surface of a lymphocyte: the glycocalyx (cell coat) is formed by carbohydrates projecting from membrane lipids and proteins.

3.
Characteristics of biomembranes
Fluorescence Recovery After Photobleaching (FRAP) is used to demonstrate the lateral diffusion of membrane lipids. FLIP: Fluorescence Loss In Photobleaching. The mobility of membrane proteins can be demonstrated experimentally by the mixing of membrane proteins that occurs when two cells are tagged with different fluorescent labels and then induced to fuse. The plasma membrane contains lipid rafts that are enriched in sphingolipids, cholesterol and some proteins. Membrane proteins and lipids can be confined to a specific domain. Lipid rafts are signaling centers?

4. An overview of membrane functions
A. The plasma membrane defines the boundaries of the cell and organelles.
B. Compartmentalization: membranes form continuous sheets that enclose intracellular compartments.
C. Transporting solutes: membrane proteins facilitate the movement of substances between compartments.
D. Responding to external signals: membrane receptors transduce signals from outside the cell in response to specific ligands.
E. Intercellular interaction: membranes mediate recognition and interaction between adjacent cells by cell-to-cell communication and junctions.
F. Locus for biochemical activities: membranes provide a scaffold that organizes enzymes for effective interaction.
G. Energy transduction: membranes transduce photosynthetic energy, convert chemical energy to ATP, and store energy in ion and solute gradients.

Integrating cells to tissues.
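As a quantitative aside (not part of the lecture slides), FRAP recoveries are commonly summarized by fitting the post-bleach signal to a single-exponential model F(t) = F0 + A(1 - exp(-t/tau)); tau and the derived half-time quantify lipid or protein mobility, and the plateau gives the mobile fraction. A minimal sketch with made-up numbers:

```python
import numpy as np

def frap_recovery(t, f0, a, tau):
    """Single-exponential FRAP model: F(t) = f0 + a * (1 - exp(-t / tau))."""
    return f0 + a * (1.0 - np.exp(-t / tau))

def fit_tau(t, f, plateau):
    """Estimate tau by linearizing: log(plateau - F) = log(a) - t / tau."""
    slope, _ = np.polyfit(t, np.log(plateau - f), 1)
    return -1.0 / slope

# Synthetic, noiseless recovery: bleached down to 0.2, recovers to 0.8, tau = 8 s.
t = np.linspace(0.0, 50.0, 200)
f = frap_recovery(t, f0=0.2, a=0.6, tau=8.0)

tau_hat = fit_tau(t, f, plateau=0.8)
half_time = tau_hat * np.log(2.0)              # time to reach half the recovery
mobile_fraction = (0.8 - 0.2) / (1.0 - 0.2)    # assuming pre-bleach signal = 1
print(round(tau_hat, 3), round(half_time, 3), round(mobile_fraction, 3))  # -> 8.0 5.545 0.75
```

Real FRAP data are noisy, so a nonlinear least-squares fit is normally preferred over this log-linearization; the sketch only shows the shape of the analysis.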

CurriculumVita-Chiao-Yao(Joe)She-CAS

Curriculum Vita - Chiao-Yao (Joe) SheEducation:1957 - B.S., Taiwan University, Taipei, Taiwan1961 - M.S., North Dakota State University, Fargo, North Dakota1964 - Ph.D., Stanford University, Stanford, CaliforniaExperience:1975-Present Professor of Physics, Colorado State University, Fort Collins, CO1968- 1971 Assistant Professor of Physics, Colorado State University1971- 1975 Associate Professor of Physics, Colorado State University1964 - 1968 Assistant Professor of Electrical Engineering, The University of Minnesota Honors, Memberships and Services:Fellow of the Optical Society of AmericaMember of APS, AGU1976 Research Publication Award, Naval Research Laboratory, Washington, D.C.1978 President of the Rocky Mountain Section of OSA.1987 Burlington Northern Faculty Achievement Award, Colorado State University 1988-1989 Golden Screw (Teaching) Award, Colorado State University1988, 1995 On NSF Review Panels for Research Initiation Awards, Lightwave Technology Program, and Optical Science and Engineering, respectively1992-1994 Member, AMS Committee on Laser Studies of the Atmosphere1993-1995 Member, CLEO/IQEC Program Committee1994-1997 Member, Arecibo Users and Scientific Advisory Committee, N.A.I.C.1989-2005 Fellow, Coop. Institute for Research in the Atmosphere, Colorado State University1997-2000 Member, NSF CEDAR Science Steering Committee2000-2001 Fulbright Research Award, Norway2003NSF/CEDAR Workshop – CEDAR Lecture Prize2003AGU Editor’s Citation – Outstanding Reviewer for Geophysics Research Letters 2005 Included in the 60th Diamond Edition of Marquis Who’s Who in AmericaBook Chapter and EditingWilliam B. Grant, Edward V. Browell, Robert T. Menzies, Kenneth Sassen and Chiao-Yao She (Editors), Selected Papers on Laser Applications in Remote Sensing, SPIE Milestone Series, MS 141 (1997).Research Interest and Current SupportThe research interests of Prof. She have been broad and often interdisciplinary. 
He has developed new laser measurement techniques for solving basic as well as applied problems. For the past 25 years, he has made a series of innovations in two high-spectral-resolution lidars: Rayleigh-Mie lidar and narrowband sodium lidar for atmospheric temperature and wind measurements, respectively in the lower and upper atmosphere. The narrowband sodium lidar, now capable of measuring mesopause region (80-110km in altitude) temperature and wind on 24-hour continuous basis, has enjoyed considerable success with continued NSF funding for upper atmospheric research since 1989. Its technology and innovations were and are beingduplicated by National and International Middle and Upper Atmospheric Research Facilities. In the past ten years, Prof. She’s research has been supported by NSF, NASA and AFOSR.Research PublicationsProfessor She has co-authored about 170 papers in refereed journals. A selected list since 1998:109. She, C. Y. and U. von Zahn, The concept of two-level mesopause: Support through new lidar observation, J.Geophys. Res., 103, 5855 - 5863, 1998.110. She, C. Y., S. W. Thiel and D. A. Krueger, Observed episodic warming at 86 and 100 km between 1990 and 1997: Effects of Mount Pinatubo eruption, Geophys. Res. Lett., 25, 497 - 500, 1998.114. She, C. Y., and R. P. Lowe, Seasonal temperature variations in the mesopause region at mid-latitude: comparison of lidar and hydroxyl rotational temperatures using WINDII/UARD OH height profiles, J. Atmo.Solar-Terr. Physics, 60, 1573-1583, 1998.119. She, C. Y., S. S. Chen, Z. L. Hu, J. Sherman, J. D. Vance, V. Vasoli, M. A. White, J. R. Yu, and D. A.Krueger, Eight-year climatology of nocturnal temperature and sodium density in the mesopause region (80 to 105 km) over Fort Collins, CO (41o N, 105o W), Geophys. Res. Lett., 27, 3289 - 3292, 2000.120. She, Chiao-Yao, Spectral structure of laser light scattering revisited: bandwidths of nonresonant scattering lidar, Appl. Opt. 40, 4875-4884, 2001.124. She, C. 
Y., Joe D. Vance, B. P. Williams, D. A. Krueger, H. Moosuller, D. Gibson-Wilde, and D. C. Fritts, Lidar studies of atmospheric dynamics near polar mesopause, EOS, Transactions, American Geophysical Union, 83 (27), P.289 and P.293, 2002.125. She, C. Y., Songsheng Chen, B. P. Williams, Zhilin Hu, David A. Krueger and M. E. Hagan, Tides in the mesopause region over Fort Collins, CO (41o N, 105o W) based on lidar temperature observations coveringfull diurnal cycles, Jour. Geophys. Research, 107, N. 0, 10.1029/2001JD001189, 2002.134. She, C. Y., Jim Sherman, Tao Yuan, B. P. Williams, Kam Arnold, T. D.Kawahara, Tao Li, LiFang Xu, J. D.Vance and David A. Krueger, The first 80-hour continuous lidar campaign for simultaneous observation of mesopause region temperature and wind, Geophys. Res. Lett.30, 6, 52, 10.1029/2002GL016412, 2003.136. She, C. Y., and D. A. Krueger, Impact of natural variability in the 11-year mesopause region temp erature observation over Fort Collins, CO (41N, 105W), Adv. Space Phys. 34, 330-336, 2004.137. Beig, G.; Keckhut, P.; Lowe, R. P.; Roble, R. G.; Mlynczak, M. G.; Scheer, J.; Fomichev, V. I.; Offermann,D.; French, W. J. R.; Shepherd, M. G.; Semenov, A. I.; Remsberg,E. E.; She, C. Y.; Lübken,F. J.; Bremer, J.;Clemesha, B. R.; Stegman, J.; Sigernes, F.; Fadnavis, S. (2003), Review of mesospheric temperature trends,Rev. Geophys., Vol. 41, No. 4, 1015, 10.1029/2002RG000121.138.Chiao-Yao She, Initial full-diurnal-cycle mesopause region lidar observations: Diurnal-means and tidal perturbations of temperature and winds over Fort Collins, CO (41N, 105W), PSMOS 2002, J. Atmo. Solar-Terr. Phys. 66, 663-674, 2004.139. She, C. Y.,Tao Li, Biff P. Williams, Tao Yuan and R. H. Picard (2004), Concurrent OH imager and sodium temperature/wind lidar observation of a mesopause region undular bore event over Fort Collins/Platteville, CO, J. Geophys. Res. 109, D22107, doi:10.1029/2004JD004742.140. She, C.Y., T. Li, R. C. Collins, T. Yuan, B. P. Williams, T. D. 
Kawahara, J. D. Vance, P. Acott, D. A. Krueger,H.-L. Liu, and M. E. Hagan (2004), Tidal perturbations and variability in the mesopause region over FortCollins, CO (41N, 105W): Continuous multi-day temperature and wind lidar observations, Geophys. Res. Lett., 31, L24111, doi:10.1029/2004GL021165.142. Fritts, D. C., B. P. Williams, C. Y. She, J. D. Vance, r. Rapp, F.-J. L¨ubken, A. F. J. Schmidlin, A. M¨ullemann R. A. Goldberg (2004) Observations of extreme temperature and wind gradients near the summer mesopause during the MaCWAVE/MIDAS rocket campaign, Geophys. Res. Lett., 31, L24S06, doi:10.1029/2003GL019389.144. Tao Li, C. Y. She, Bifford P. Williams, Tao Yuan, Richard L. Collins, Lois M. Kieffaber and Alan W.Peterson (2005), Concurrent OH imager and sodium temperature/wind lidar observation of localized ripples over Northern Colorado, J. Geophys. Res. 110, D13110, doi:10.1029/2004JD004885146. Chiao-Yao She (2005), On atmospheric lidar performance comparison: from power aperture to power–aperture–mixing ratio–scattering cross-section, Modern Optics, 52, 2723-2729, DOI:10.1080/09500340500352618.147. She, C. Y., B. P. Williams, P. Hoffmann, R. Latteck, G. Baumgarten, J. D. Vance, J. Fiedler, P. Acott, D. C.Fritts, F.-J. Luebken (2006): Observation of anti-correlation between sodium atoms and PMSE/NLC in summer mesopause at ALOMAR, Norway (69N, 12E), J. Atmos. Solar-Terres. Phys. 68, 93-101.148. Yuan, T., C. Y. She, M. E. Hagan, B. P. Williams, T. Li, K. Arnold, T. D. Kawahara, P. E. Acott, J. D. Vance,D. A. Krueger and R. G. Roble (2006), Seasonal variation of diurnal perturbations in mesopause-regiontemperature, zonal, and meridional winds above Fort Collins, CO (40.6°N, 105°W), J. Geophys. Res. 111, D06103, doi:10.1029/2004JD005486.149. Xu, Jiyao, C. Y. She, Wei Yuan, C. J. Mertens, Marty Mlynzack, J. R. Russell (2006), Comparison between the temperature measurements by TIMED/SABER and Lidar in the midlatitude, J. Geophys. 
Res., 111, A10S09, doi:10.1029/2005JA011439.150. Xu, J., A. K. Smith, R. L. Collins, and C.-Y. She (2006), Signature of an overturning gravity wave in the mesospheric sodium layer: Comparison of a nonlinear photochemical-dynamical model and lidar observations, J. Geophys. Res., 111, D17301, doi:10.1029/2005JD006749.151. Sherman, J. P., and C.-Y. She (2006), Seasonal variation of mesopause region wind shears, convective and dynamic instabilities above Fort Collins, CO: A statistical study, J. Atmo. Solar-Terr. Physics, 68, 1061-1074. 154. Davis, D. S., P. Hickson, G. Harriot and C. Y. She (2006), Temporal variability of the telluric sodium layer, Optics Lett. 31, 3369-3371.155. She, Chiao-Yao, J. D. Vance, T. D. Kawahara, B. P. Williams, and Q. Wu (2007), A proposed all-solid-state transportable narrowband sodium lidar for mesopause region temperature and horizontal wind measurements, Canadian Journal of Physics, 85, 111 – 118.156. Gumbeli, J., Z. Y. Fan, T. Waldemarsson, J. Stegman, G. Witts, E. J. Llewellyn, C.-Y. She and J. M. C. Plane (2007), Retrieval of global mesospheric sodium densities from the Odin satellite, Geophys. Res. Lett.,. 34, L04813, doi:10.1029/2006GL028687.157. Li, T., C.-Y. She, H.-L. Liu, and M. T. Montgomery (2007), Evidence of a gravity wave breaking event and the estimation of the wave characteristics from sodium lidar observation over Fort Collins, CO (41_N, 105_W) , Geophys. Res. Lett.,. 34, L05815, doi:10.1029/2006GL028988.158. She, C.-Y., J. Yue and Z.-A. Yan, J. W. Hair, J.-J. Guo, S.-H. Wu and Z.-S. Liu (2007), Direct-detection Doppler wind measurements with a Cabannes-Mie lidar: A. Comparison between iodine vapor filter and Fabry-Perot interferometer methods, Applied Optics 46, 4434-4443.159. She, C.-Y., J. Yue and Z.-A. Yan, J. W. Hair, J.-J. Guo, S.-H. Wu and Z.-S. Liu (2007), Direct-detection Doppler wind measurements with a Cabannes-Mie lidar:B. 
Impact of aerosol variation on iodine vapor filter methods, Applied Optics 46, 4444-4454.161. Tao Li, C.-Y. She, Scott E. Palo, Qian Wu, Han-Li Liu, and Murry L. Salby (2008), Coordinated Lidar and TIMED observations of the quasi-two-day wave during August 2002-2004 and possible quasi-biennial oscillation influence, Advanced Space Research 41, 1462-1470.162. Liu, H.-L., T. Li, C.-Y. She, J. Oberheide, Q. Wu, M. E. Hagan, J. Xu, R. G. Roble, M. G. Mlynczak, and J. M.Russell III (2007), Comparative study of short term diurnal tidal variability, J. Geophys. Res.(In press).163. She, Chiao-Yao, and David A. Krueger (2007), Laser-Induced Fluorescence: Spectroscopy in the Sky, Optics & Photonic News (OPN), 18(9), 35-41.164. Li, T., C. -Y. She, H.-L. Liu, T. Leblanc, and I. S. McDermid, Sodium lidar observed strong inertia-gravity wave activities in the mesopause region over Fort Collins, CO (41°N, 105°W) J. Geophys.Res. 112, D22104, doi:10.1029/2007JD008681.165. Yuan T., C.-Y. She and D. A. Krueger, F. Sassi, R. Garcia, R. Roble, H.-L. Liu, and H. Schmidt (2008), Climatology of mesopause region temperature, zonal wind and meridional wind over Fort Collins, CO (41ºN, 105ºW) and comparison with model simulations, J. Geophys. Res. 113, D03105, doi:10.1029/2007JD008697. 166. Liguo Su, R. J. Coolins, D. A. Krueger and C.-Y. She, Statistical Analysis of Sodium Doppler Wind-Temperature Lidar Measurements of Vertical Heat Flux, J. Atm. Oceanic Tech. (In press).167. Yuan, T., C. Y. She, Hauke Schmidt, David A. Krueger, Steven Reising, Seasonal variations of semidiurnal tidal-period perturbations in mesopause region temperature, zonal and meridional winds above Fort Collins, CO(40.6°N, 105°W), J. Geophys. (In oress).168. Yue, J., S. L. Vadas2, C-Y She, et al. (2008), A study of OH imager observed concentric gravity waves near Fort Collins on May 11, 2004, Geophys. Res. Lett. (submitted).169. Vadas, S. L., J. Yue, C.-Y. She and P. 
Stamus (2008), The effects of winds on concentric rings of gravity waves from a thunderstorm near Fort Collins in May 2004, J. Geophys. Res., (submitted).

Summary of Remote Sensing Image Scene Classification

AI and Recognition Technology

Summary of Remote Sensing Image Scene Classification
QIAN Yuan-yuan, LIU Jin-feng* (School of Information Engineering, Ningxia University, Yinchuan 750021, China)

Abstract: With the progress of science and technology, the application demand for remote sensing image scenes increases gradually; such scenes are widely used in urban supervision, resource exploration, natural disaster detection and other fields. As a basic image processing task that has attracted much attention, many scholars have proposed various methods in recent years to classify the scenes of remote sensing images. According to whether labels take part in remote sensing scene classification, this paper introduces recent research methods from the three aspects of supervised classification, unsupervised classification and semi-supervised classification. Then, combined with the features of remote sensing images, the advantages and disadvantages of these three methods are analyzed, and the differences between them and their performance on data sets are compared. Finally, the problems and challenges of remote sensing image scene classification methods are summarized and prospected.

Key words: remote sensing image scene classification; supervised classification; unsupervised classification; semi-supervised classification
CLC number: TP391    Document code: A    Article ID: 1009-3044(2021)15-0187-00

Remote sensing image scene classification means classifying an input remote sensing scene image with some algorithm and judging which category the image belongs to.
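To make the supervised branch concrete, the sketch below shows a deliberately minimal nearest-centroid classifier over precomputed scene feature vectors; a real system would replace the toy 2-D features with descriptors such as CNN embeddings. All data here are invented for illustration.

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute one mean feature vector (centroid) per scene class."""
    labels = np.array(labels)
    return {c: features[labels == c].mean(axis=0) for c in sorted(set(labels))}

def classify(feature, centroids):
    """Assign a scene to the class whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

# Toy 2-D "features" standing in for real descriptors of labeled scene images.
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
y = ["urban", "urban", "forest", "forest"]

centroids = fit_centroids(X, y)
print(classify(np.array([0.15, 0.85]), centroids))  # -> urban
```

Unsupervised variants would cluster the same feature vectors without labels (e.g. k-means), and semi-supervised variants would propagate the few available labels to the unlabeled pool.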

Single defect centres in diamond-A review

phys. stat. sol. (a) 203, No. 13, 3207-3225 (2006) / DOI 10.1002/pssa.200671403
© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

Review Article

Single defect centres in diamond: A review

F. Jelezko and J. Wrachtrup*
3. Physikalisches Institut, Universität Stuttgart, 70550 Stuttgart, Germany
Received 9 February 2006, revised 28 July 2006, accepted 9 August 2006
Published online 11 October 2006
PACS 03.67.Pp, 71.55.-r, 76.30.Mi, 76.70.-r

The nitrogen vacancy and some nickel related defects in diamond can be observed as single quantum systems in diamond by their fluorescence. The fabrication of single colour centres occurs via generation of vacancies or, in the case of the nitrogen vacancy (NV) centre, via controlled nitrogen implantation. The NV centre shows an electron paramagnetic ground and optically excited state. As a result electron and nuclear magnetic resonance can be carried out on single defects. Due to the localized nature of the electron spin wavefunction, hyperfine coupling to nuclei more than one lattice constant away from the defect is dominated by dipolar interaction. As a consequence the coupling to close nuclei leads to a splitting in the spectrum which allows for optically detected electron nuclear double resonance. The contribution discusses the physics of the NV and other defect centres from the perspective of single defect centre spectroscopy.

1 Introduction

The ever increasing demand in computational power and data transmission rates has inspired researchers to investigate fundamentally new ways to process and communicate information. Among others, physicists explored the usefulness of "non-classical", i.e. quantum mechanical, systems in the world of information processing. Spectacular achievements like Shor's discovery of the quantum factoring algorithm [1] or the development of quantum secure data communication gave birth to the field of quantum information processing (QIP) [2].
After an initial period in which the physical nature of information was explored [3], along with how information processing can be carried out by unitary transformations in quantum mechanics, researchers looked out for systems which might be of use as hardware in QIP. From the very beginning it became clear that the restrictions on the hardware of choice are severe, in particular for solid state systems. Hence in the recent past scientists working in the development of nanostructured materials and quantum physics have cooperated on different solid-state systems to define quantum mechanical two-level systems, make them robust against decoherence and addressable as individual units. While the feasibility of QIP remains to be shown, this endeavour will deepen our understanding of quantum mechanics and also marks a new area in material science which now also has reached diamonds as a potential host material. The usefulness of diamond is based on two properties. First, defects in diamond are often characterized by low electron phonon coupling, mostly due to the low density of phonon states, i.e. the high Debye temperature of the material [4]. Secondly, colour centres in diamond are usually found to be very stable, even under ambient conditions. This makes them unique among all optically active solid-state systems.

* Corresponding author: e-mail: wrachtrup@physik.uni-stuttgart.de

The main goal of QIP is the flexible generation of quantum states from individual two-level systems (qubits). The state of the individual qubits should be changed coherently and the interaction strength among them should be controllable. At the same time, those systems which are discussed for data communication must be optically active, which means that they should show a high oscillator strength for an electric dipole transition between their ground and some optically excited state.
Individual ions or ion strings have been applied with great success. Here, currently up to eight ions in a string have been cooled to their ground state, addressed and manipulated individually [5]. Owing to careful construction of the ion trap, decoherence is reduced to a minimum [6]. Landmark experiments, like teleportation of quantum states among ions [7, 8], and first quantum algorithms have been shown in these systems [9, 10]. In solid state physics different types of hardware are discussed for QIP. Because dephasing is fast in most situations in solids, only specific systems allow for controlled generation of a quantum state with preservation of phase coherence for a sufficient time. Currently three systems are under discussion. Superconducting systems are realized as either flux or charge quantized individual units [11]. Their strength lies in the long coherence times and the meanwhile well established control of quantum states. Major progress has been achieved with quantum dots as individual quantum systems. Initially the electronic ground as well as excited states (exciton ground state) were used as the definition of qubits [12]. Meanwhile the spin of individual electrons, either in a single quantum dot or in coupled GaAs quantum dots, has been subject to control experiments [13-15]. Because of the presence of paramagnetic nuclear spins, the electron spin is subject to decoherence or a static inhomogeneous frequency distribution. Hence, a further direction of research are Si or SiGe quantum dots, where practically no paramagnetic nuclear spins play a significant role. The third system under investigation is phosphorus impurities in silicon [16]. Phosphorus implanted in Si is an electron paramagnetic impurity with a nuclear spin I = 1/2. The coherence times are known to be long at low temperature. The electron or nuclear spins form a well controllable two-level system. Addressing of individual spins is planned via magnetic field gradients.
Major obstacles with respect to nanostructuring of the system have been overcome, while the readout of single spins based on spin-to-charge conversion with consecutive detection of the charge state has not been successful yet.

2 Colour centres in diamond

There are more than 100 luminescent defects in diamond. A significant fraction has been analysed in detail such that their charge and spin state is known under equilibrium conditions [17]. For this review nitrogen related defects are of particular importance. They are most abundant in diamond since nitrogen is a prominent impurity in the material. Nitrogen is a defect which either exists as a single substitutional impurity or in aggregated form. The single substitutional nitrogen has an infrared local mode of vibration at 1344 cm^-1. The centre is at a C3v symmetry site. It is a deep electron donor, probably 1.7 eV below the conduction band edge. There is an EPR signal associated with this defect, called P1, which identifies it to be an electron paramagnetic system with an S = 1/2 ground state [17]. Nitrogen aggregates are, most commonly, pairs of neighbouring substitutional atoms, the A aggregates, and groups of four around a vacancy, the B aggregate. All three forms of nitrogen impurities have distinct infrared spectra.

[Fig. 1 Schematic representation of the nitrogen vacancy (NV) centre structure.]
[Fig. 2 axes: Fluorescence Intensity (Cts) vs. Wavelength (nm); curves at T = 300 K and T = 1.8 K; ZPL at 637.2 nm.]

Another defect often found in nitrogen rich type Ib diamond samples after irradiation damage is the nitrogen vacancy defect centre, see Fig. 1. This defect gives rise to a strong absorption at 1.945 eV (637 nm) [18]. At low temperature the absorption is marked by a narrow optical resonance line (zero phonon line) followed by prominent vibronic side bands, see Fig. 2.
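The quoted energy-wavelength pair for the NV zero phonon line can be cross-checked with the standard photon relation lambda [nm] = hc / E ≈ 1239.84 / E [eV]; a one-function sketch (not from the review):

```python
# Photon energy <-> wavelength conversion: lambda [nm] = h*c / E ≈ 1239.84 / E [eV].
HC_EV_NM = 1239.84198  # h*c in eV*nm

def ev_to_nm(energy_ev: float) -> float:
    """Convert a photon energy in eV to its vacuum wavelength in nm."""
    return HC_EV_NM / energy_ev

# The NV zero-phonon line: 1.945 eV corresponds to ~637 nm, matching the text.
print(round(ev_to_nm(1.945), 1))  # -> 637.5
```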
Electron spin resonance measurements have indicated that the defect has an electron paramagnetic ground state with electron spin angular momentum S = 1 [19]. The zero field splitting parameters were found to be D = 2.88 GHz and E = 0, indicating a C3v symmetry of the electron spin wavefunction. From measurements of the hyperfine coupling constant to the nitrogen nuclear spin and carbon spins in the first coordination shell it was concluded that roughly 70% of the unpaired electron spin density is found at the three nearest neighbour carbon atoms, whereas the spin density at the nitrogen is only 2%. Obviously the electrons spend most of their time at the three carbons next to the vacancy. To explain the triplet ground state mostly a six electron model is invoked, which requires the defect to be negatively charged, i.e. to be NV- [20]. Hole burning experiments and the high radiative recombination rate (lifetime roughly 11 ns [21], quantum yield 0.7) indicate that the optically excited state is also a spin triplet. The widths of the spectral holes burned into the inhomogeneous absorption profile were found to be on the order of 50 MHz [22, 23]. Detailed investigations of the excited state dephasing and hole burning have caused speculation as to whether the excited state is subject to a Jahn-Teller splitting [24, 25]. From group theoretical arguments it is concluded that the ground state is 3A and the excited state is of 3E symmetry. In the C3v group this state thus comprises two degenerate substates 3Ex,y with an orthogonal polarization of the optical transition. Photon echo experiments have been interpreted in terms of a Jahn-Teller splitting of 40 cm^-1 among these two states with fast relaxation among them [24]. However, no further experimental evidence is found to support this conclusion. Hole burning experiments showed two mechanisms for spectral hole burning: a permanent one and a transient mechanism with a time scale on the order of ms [23].
This is either interpreted as a spin relaxation mechanism in the ground state or a metastable state in the optical excitation-emission cycle. Indeed it proved difficult to find evidence for this metastable state, and its number and energetic position relative to the triplet ground and excited states are still subject of debate. Meanwhile it seems to be clear that at least one singlet state is placed between the two triplet states. As a working hypothesis it should be assumed throughout this article that the optical excitation emission cycle is described by three electronic levels.

[Fig. 2 Fluorescence emission spectra of single NV centres at room temperature and LHe temperature. Excitation wavelength was 514 nm.]

3 Optical excitation and spin polarization

Given the fact that the NV centre has an electron spin triplet ground state with an optically allowed transition to a 3E spin triplet state, one might wonder about the influence of optical excitation on the electron spin properties of the defect. Indeed in initial experiments no electron spin resonance (EPR) signal of the defect was detected except when subject to irradiation in a wavelength range between 450 and 637 nm [19]. Later on it became clear that in fact there is an EPR signal even in the absence of light, yet the signal strength is considerably enhanced upon illumination [26]. EPR lines showed either absorptive or emissive line shapes depending on the spectral position. This indicates that only specific spin sub-levels are affected by optical excitation [27]. In general an S = 1 electron spin system is described by a spin Hamiltonian of the following form: H = g_e β B_0 · S + S · D · S. Here g_e is the electronic g-factor (g = 2.0028 ± 0.0003), B_0 is the external magnetic field and D is the zero field splitting tensor.
This tensor comprises the anisotropic dipolar interaction of the two electron spins forming the triplet state, averaged over their wavefunction. The tensor is traceless and thus characterized by two parameters, D and E, as already mentioned above. The zero field splitting causes a lifting of the degeneracy of the spin sublevels m_s = ±1, 0 even in the absence of an external magnetic field. Those zero field spin wavefunctions T_x,y,z do not diagonalize the full high-field Hamiltonian H but are related to those functions by

T_x = (1/√2)(T_–1 − T_+1) = (1/√2)(|ββ⟩ − |αα⟩),
T_y = (i/√2)(T_–1 + T_+1) = (i/√2)(|ββ⟩ + |αα⟩),
T_z = T_0 = (1/√2)(|αβ⟩ + |βα⟩).

The expectation value ⟨T_x,y,z|S_z|T_x,y,z⟩ is zero for all three wavefunctions. Hence there is no magnetization in zero external field. There are different ways to account for the spin polarization process in an excitation scheme involving spin triplets. To first order, optical excitation is a spin state conserving process. However, spin–orbit (LS) coupling might allow for a spin state change in the course of optical excitation. Cross relaxation processes on the other hand might cause a strong spin polarization, as is observed in the optical excitation of various systems, e.g. GaAs. However, optical spectroscopy and in particular hole burning data gave little evidence for non spin conserving excitation processes in the NV centre. In two-laser hole burning experiments the data have been interpreted by assuming different zero field splitting parameters in ground and excited state (D_exc ≈ 2 GHz, E_exc ≈ 0.8 GHz) with an otherwise spin state preserving optical excitation process [28]. Indeed this is confirmed by later attempts to generate ground state spin coherence via a Raman process [29], which only proves possible when ground state spin sublevels are brought close to anticrossing by an external magnetic field. Another spin polarising mechanism involves a further electronic state in the optical excitation and emission cycle [30, 31].
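The zero-field part of the spin Hamiltonian and the T_x,y,z eigenstates discussed above can be verified numerically. A minimal sketch with numpy (D = 2.88 GHz and E = 0 are the values from the text; the tiny symmetry-breaking E used at the end to single out the T_x, T_y basis is an assumption of the sketch, not a measured quantity):

```python
import numpy as np

# Spin-1 operators (hbar = 1) in the basis |+1>, |0>, |-1>
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

D, E = 2.88e9, 0.0  # zero-field splitting parameters in Hz (from the text)

# B = 0 part of the spin Hamiltonian: H/h = D (Sz^2 - S(S+1)/3) + E (Sx^2 - Sy^2)
H = D * (Sz @ Sz - (2.0 / 3.0) * np.eye(3)) + E * (Sx @ Sx - Sy @ Sy)
evals = np.sort(np.linalg.eigvalsh(H))
# m_s = 0 lies 2D/3 below the degenerate m_s = +/-1 pair at D/3: the splitting
# between them is exactly D = 2.88 GHz even without an external field
print((evals[1] - evals[0]) / 1e9, "GHz")

# A tiny (hypothetical) E term lifts the +/-1 degeneracy and selects the
# T_x, T_y combinations; all three zero-field eigenstates then carry <S_z> = 0,
# i.e. no magnetization in zero external field
H_eps = H + 1e3 * (Sx @ Sx - Sy @ Sy)
w, v = np.linalg.eigh(H_eps)
sz_expect = [float(np.real(v[:, i].conj() @ Sz @ v[:, i])) for i in range(3)]
print(sz_expect)
```

Diagonalizing rather than reading off the diagonal makes the check independent of the chosen basis, which is why the vanishing ⟨S_z⟩ survives the symmetry-breaking E.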
Though being weak, LS coupling might be strong enough to induce intersystem crossing to states with different spin symmetry, e.g. a singlet state. Indeed the relative position of the 1A singlet state with respect to the two triplet states has been subject of intense debate. Intersystem crossing is driven by LS induced mixing of singlet character into triplet states. Due to the lack of any emission from the 1A state or noticeable absorption to other states, no direct evidence for this state is at hand up to now. However, the kinetics of photon emission from single NV centres strongly suggests the presence of a metastable state in the excitation–emission cycle. As described below, the intersystem crossing rates from the excited triplet state to the singlet state are found to be drastically different for the spin sublevels, whereas the relaxation to the 3A state might not depend on the spin substate. This provides the required optical-excitation-dependent relaxation mechanism. Bulk as well as single centre experiments show that predominantly the m_s = 0 (T_z) sublevel of the spin ground state is populated. The polarization in this state is on the order of 80% or higher [27].

phys. stat. sol. (a) 203, No. 13 (2006) 3211 © 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

4 Spin properties of the NV centre

Because of its paramagnetic spin ground and excited states the NV centre has been the target of numerous investigations regarding its magneto-optical properties. Pioneering work has been carried out in the groups of Manson [32–36], Glasbeek [37–39] and Rand [26, 40, 41]. The hyperfine and fine structure splitting of the NV ground state has been used to measure the Autler–Townes splitting induced by a strong pump field in a three level system.
Level anticrossing among the m_s = 0 and m_s = –1 sublevels allows for an accurate measurement of the hyperfine coupling constant for the nitrogen nucleus, yielding an axially symmetric hyperfine coupling tensor with A_|| = 2.3 MHz and A_⊥ = 2.1 MHz [42, 43]. The quadrupole coupling constant is P = 5.04 MHz. Because of its convenient access to various transitions in the optical, microwave and radiofrequency domains, the NV centre has been used as a model system to study the interaction between matter and radiation in the linear and non-linear regime. An interesting set of experiments concerns electromagnetically induced transparency in a Λ-type level scheme. The action of a strong pump pulse on one transition in this energy level scheme renders the system transparent for radiation resonant with another transition. Experiments have been carried out in the microwave frequency domain [44] as well as for optical transitions among the 3A ground state and the 3E excited state [29]. Here two electron spin sublevels are brought into near level anticrossing such that an effective three level system is generated with one excited state spin sublevel and two allowed optical transitions. A 17% increase in transmission is detected for a suitably tuned probe beam.

While relatively much work has been done on vacancy and nitrogen related impurities, comparatively little is known about defects comprising heavy elements. For many years it was difficult to incorporate heavy elements as impurities into the diamond lattice. Only six elements have been identified as bonding to the diamond lattice, namely nitrogen, boron, nickel, silicon, hydrogen and cobalt. Attempts to use ion implantation techniques for incorporation of transition metal ions were unsuccessful. This might be due to the large size of the ions and the small lattice parameters of diamond, together with the metastability of the diamond lattice at ambient pressure.
Recent developments in crystal growth and thin film technology have made it possible to incorporate various dopants into the diamond lattice during growth. This has enabled studies of nickel defects [45, 46]. Depending on the annealing conditions, Ni can form clusters with various vacancies and nitrogen atoms in nearest neighbour sites. Different Ni related centres are listed with NE as a prefix and numbers to identify individual entities.

Fig. 3 a) Three level scheme describing the optical excitation and emission cycle of single NV centres. 3A and 3E are the triplet ground and excited state. 1A is a metastable singlet state. No information is at hand presently about the number and relative position of singlet levels. The arrows and k_ij denote the rates of transition among the various states. b) More detailed energy level scheme differentiating between triplet sublevels in the 3A and 3E state.

The structure and chemical composition of defects have mostly been identified by EPR on the basis of the hyperfine coupling to nitrogen nuclei [46]. A particularly rich hyperfine structure has been identified for the NE8 centre. Analysis of the angular dependence of the EPR spectrum for the NE8 centre showed that this centre has electronic spin S = 1/2 and a g-value typical of a d-ion with a more than half-filled d-shell. The NE8 centre has been found not only in HPHT synthetic diamonds but also in natural diamonds which contain the nickel–nitrogen centres NE1 to NE3 [46]. The structure of the centre is shown in Fig. 4. It comprises 4 substitutional nitrogen atoms and an interstitial Ni impurity. The EPR signature of the system has been correlated to an optical zero phonon transition at around 794 nm.
The relative integral intensity of the zero phonon line and the vibronic side band at room temperature is 0.7 (Debye–Waller factor) [47]. The fluorescence emission statistics of single NE8 emitters shows a decay to a yet unidentified metastable state with a rate of 6 MHz.

5 Single defect centre experiments

Experiments on single quantum systems in solids have brought about a considerable improvement in the understanding of the dynamics and energetic structure of the respective materials. In addition, a number of quantum optical phenomena, especially where light–matter coupling is concerned, have been investigated. As opposed to atomic systems, on which experiments on single quantum systems are well established, similar experiments with impurity atoms in solids remain challenging. Single quantum systems in solids usually interact strongly with their environment. This has technical as well as physical consequences. First of all, single solid state quantum systems are embedded in an environment which, for example, scatters excitation light. Given a diffraction limited focal volume, the number of matrix atoms usually exceeds that of the quantum systems by 10^6–10^8. This puts an upper limit on the impurity content of the matrix or on the efficiency of inelastic scattering processes, like e.g. Raman scattering from the matrix. Various systems like single hydrocarbon molecules, proteins, quantum dots and defect centres have been analysed [48]. Except for some experiments on surface enhanced Raman scattering, the technique usually relies on fluorescence emission. In this technique an excitation laser in resonance with a strongly allowed optical transition of the system is used to populate the optically excited state (e.g. the 3E state for the NV centre), see Fig. 3a. Depending on the fluorescence emission quantum yield, the system either decays via fluorescence emission or non-radiatively, e.g. via inter-system-crossing to a metastable state (1A in the case of the NV).

Fig. 4 Structure of the NE8 centre.

The maximum number of photons emitted is reached when the optical transition is saturated. In this case the maximum fluorescence intensity is given as

I_max = Φ_F k_21 k_31 / (2 k_31 + k_23).

Here k_31 is the relaxation rate from the metastable to the ground state, k_21 is the decay rate of the optically excited state, k_23 is the decay rate to the metastable state and Φ_F marks the fluorescence quantum yield. For the NV centre I_max is about 10^7 photons/s. I_max critically depends on a number of parameters. First of all, the fluorescence quantum yield limits the maximum emission. A good example to illustrate this is the GR1 centre, the neutral vacancy defect in diamond. The overall lifetime of the excited state for this defect is 1 ns at room temperature. However, the radiative lifetime is on the order of 100 ns. Hence Φ_F is on the order of 0.01. Given the usual values for k_21 and k_31, this yields an I_max which is too low to allow for detecting single GR1 centres with current technology. Figure 5 shows the saturation curve of a single NV defect. Indeed the maximum observable emission rate from the NV centre is around 10^5 photons/s, which corresponds well to the value estimated above if we assume a detection efficiency of 0.01. Single NV centres can be observed by standard confocal fluorescence microscopy in type Ib diamond. In confocal microscopy a laser beam is focussed onto a diffraction limited spot in the diamond sample and the fluorescence is collected from that spot. Hence the focal probe volume is diffraction limited, with a volume of roughly 1 µm^3. In order to be able to detect single centres it is thus important to control the density of defects.
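The saturated count rate can be put into numbers. A quick sketch: Φ_F = 0.7, the ~11 ns lifetime, and the ~1% detection efficiency are quoted in the text; the saturation formula is used in the reconstructed form I_max = Φ_F k_21 k_31 / (2 k_31 + k_23) (the printed equation is garbled), and the ISC and deshelving rates are illustrative assumptions:

```python
# Saturated three-level emission rate, reconstructed form:
#   I_max = phi_F * k21 * k31 / (2 * k31 + k23)
# phi_F and the ~11 ns excited-state lifetime are quoted in the text;
# the ISC (k23) and deshelving (k31) rates below are illustrative assumptions.
phi_F = 0.7           # fluorescence quantum yield (from the text)
k21 = 1.0 / 11e-9     # excited-state decay rate, 11 ns lifetime (from the text)
k23 = 1.0e7           # ISC rate into the metastable singlet (assumed)
k31 = 3.3e6           # relaxation rate metastable -> ground state (assumed)

I_max = phi_F * k21 * k31 / (2 * k31 + k23)
detected = 0.01 * I_max   # ~1% detection efficiency, as assumed in the text
print(f"emitted ~ {I_max:.1e} photons/s, detected ~ {detected:.1e} counts/s")
```

With these rates the emitted rate comes out on the order of 10^7 photons/s and the detected rate on the order of 10^5 counts/s, consistent with the values the text quotes for the NV centre.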
For the NV centre this is done by varying the number of vacancies created in the sample, e.g. by choosing an appropriate dose of electron irradiation. Hence the number of NV centres depends on the number of vacancies created and the number of nitrogen atoms in the sample. Figure 7 shows an image of a diamond sample where the number of defects in the sample is low enough to detect the fluorescence from single colour centres [49]. As expected, the image shows diffraction limited spots. From the image alone it cannot be concluded whether the fluorescence stems from a single quantum system or from aggregates of defects. To determine the number of independent emitters in the focal volume, the emission statistics of the NV centre fluorescence can be used [50–52]. The fluorescence photon number statistics of a single quantum mechanical two-level system deviates from a classical Poissonian distribution. If one records the fluorescence intensity autocorrelation function g^(2)(τ) = ⟨I(t) I(t+τ)⟩/⟨I(t)⟩^2 for short times τ, one finds g^(2)(0) = 0 if the emission stems from a single defect centre (see Fig. 6). This is due to the fact that the defect has to be excited first before it can emit a single photon. Hence a single defect never emits two fluorescence photons simultaneously, in contrast to the case when a number of independent emitters are excited at random. If one adopts the three level scheme from Fig. 3a, rate equations for the temporal changes of the populations of the three levels can be set up. The equations are solved by

g^(2)(τ) = 1 − (1 + K) e^{k_1 τ} + K e^{k_2 τ},

Fig. 5 Saturation curve of the fluorescence intensity of a single NV centre at T = 300 K. Excitation wavelength is 514 nm. The power is measured at the objective entrance.
with rates k_{1,2} = −P/2 ± (P^2/4 − Q)^{1/2}, where P = k_21 + k_12 + k_23 + k_31 and Q = k_31(k_21 + k_12) + k_23(k_31 + k_12), and with K = k_1(k_2 + k_31)/(k_31(k_2 − k_1)). This function reproduces the dip in the correlation function g^(2)(τ) for τ → 0 shown in Fig. 6, which indicates that the light detected originates from a single NV. The slope of the curve around τ = 0 is determined by the pumping rate of the laser k_12 and the decay rate k_21. For larger times τ a decay of the correlation function becomes visible. This decay marks the ISC process from the excited triplet 3E to the metastable singlet state 1A. Besides the spin quantum jumps detected at low temperature, the photon statistics measurements are the best indication for the detection of single centres. It should be noted that the radiative decay time depends on the refractive index of the surrounding medium as 1/n_medium. Because n_medium of diamond is 2.4, the decay time should increase significantly if the refractive index of the surrounding is reduced. This is indeed observed for NV centres in diamond nanocrystals [51].

Fig. 7 Confocal fluorescence image of various diamond samples with different electron irradiation dosages.

Fig. 6 Fluorescence intensity autocorrelation function of a single NV defect at room temperature.

It should also be noted that, owing to their stability, single defect centres in diamond are prime candidates for single photon sources under ambient conditions. Such sources are important for linear optics quantum computing and quantum cryptography. Indeed, quantum key distribution has been successful with fluorescence emission from single defect centres [53]. A major figure of merit for single photon sources is the signal to background ratio, given (e.g.) by the amplitude of the correlation function at τ = 0.
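The three-level photon statistics from Fig. 3a can be assembled numerically. A sketch under stated assumptions: the rate values are illustrative, not from the text, and since the printed expression for K did not survive extraction cleanly, K is derived here from the same three-level rate equations rather than quoted:

```python
import numpy as np

# Three-level g2 model: g2(tau) = 1 - (1+K) e^{k1 tau} + K e^{k2 tau}
# with k_{1,2} = -P/2 -/+ sqrt(P^2/4 - Q).
# Rate values are illustrative assumptions (1/s): pump, decay, ISC, deshelving.
k12, k21, k23, k31 = 5e7, 9e7, 1e7, 3e6

P = k21 + k12 + k23 + k31
Q = k31 * (k21 + k12) + k23 * (k31 + k12)
root = np.sqrt(P**2 / 4 - Q)
k1, k2 = -P / 2 - root, -P / 2 + root    # fast (antibunching dip), slow (bunching)
K = k1 * (k2 + k31) / (k31 * (k2 - k1))  # derived from the same rate equations

def g2(tau):
    return 1 - (1 + K) * np.exp(k1 * tau) + K * np.exp(k2 * tau)

print(g2(0.0))       # ~0: a single centre never emits two photons simultaneously
print(g2(1e-4))      # -> 1: Poissonian limit at long delays
print(g2(1e-7) > 1)  # metastable-state bunching shoulder at intermediate delays
```

The construction guarantees g^(2)(0) = 0 identically, since 1 − (1 + K) + K = 0 for any K; the intermediate-delay shoulder above 1 is the signature of shelving in the metastable state described in the text.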
This ratio should be as high as possible to ensure that a single bit of information is encoded in a single photon only. The NV centre has a broad emission range which does not allow efficient filtering of background signals. This is in sharp contrast to the NE8 defect, which shows a very narrow, only 1.2 nm wide spectrum. As a consequence the NE8 emission can be filtered out efficiently [47]. The correlation function resembles the one from the NV centre. Indeed the photophysical parameters of the NV and NE8 are similar, yet under comparable experimental conditions the NE8 shows an order of magnitude improvement in signal-to-background ratio because of the narrower emission range.

Besides applications in single photon generation, photon statistical measurements also allow one to draw conclusions on photoionization and photochromism of single defects. Most notably, the NV centre is speculated to exist in two charge forms, the negatively charged NV– with zero phonon absorption at 637 nm and the neutral form NV0 with absorption around 575 nm [20, 54]. Although evidence existed that both absorption lines stem from the same defect, no direct charge interconversion had been shown in bulk experiments. The best example of a spectroscopically resolved charge transfer in diamond is the vacancy, which exists in two stable charge states. In order to observe the charge transfer from NV– to NV0, photon statistical measurements similar to the ones described have been carried out, except for a splitting of photons depending on the emission wavelength [55]. This two channel set-up allows detection of the NV0 emission in one detector arm and the NV– emission in the other. Figure 8 shows the experimental result. For delay time τ = 0, g^(2)(τ) shows a dip, indicating the sub-Poissonian statistics of the light emitted.
Fig. 8 Fluorescence cross correlation function between the NV0 and NV– emission of a single defect.

Co-experience: user experience as interaction

KATJA BATTARBEE* and ILPO KOSKINEN
University of Art and Design Helsinki, School of Design, Industrial Design, Hämeentie 135 C, 00560 Helsinki, Finland
(Received 26 January 2004; in final form 10 May 2004)

User experience is becoming a key term in the world of interactive product design. The term itself lacks proper theoretical definition and is used in many different, even contradictory, ways. This paper reviews various existing approaches to understanding user experience and describes three main approaches and their differences. A missing perspective is noted in all three: their focus is on only the individual having the experience and neglects the kinds of experiences that are created together with others. To address this, a new elaboration called co-experience is presented. It builds on an existing approach but borrows from symbolic interactionism to create a more inclusive interactionist framework for thinking about user experiences. Data from a study on mobile multimedia messaging are used to illustrate and discuss the framework.

Keywords: User experience; Social interaction; Mobile communication; Multimedia messaging

1. Introduction

Usability experts know that while usability is important, it is not enough on its own to guarantee a product's success with customers. While helping people take advantage of a product's functionality, usability also needs to pave the road for enjoyable experiences. Usability techniques can be used to improve a given solution, but they do not reveal whether a different solution might deliver better and more enjoyable experiences. Consequently, designers have begun to apply hedonistic psychology (Jordan 2000, Hassenzahl 2003) and to design for user experience. For example, Jordan takes a hedonistic perspective by proposing that pleasure with products is the sum of sociopleasure, ideopleasure, physiopleasure and psychopleasure. He defines pleasure with products as 'the emotional, hedonic and practical benefits associated with products' (Jordan 2000, p. 12). Hassenzahl
(2003) shows that satisfaction, a part of usability, is the sum of pragmatic and hedonic quality. However, as Desmet (2002) notes, the problem with focusing on pleasure is that it ignores the unpleasant emotional experiences related to product use. Perhaps to overcome this deficiency, user experience has become the new buzzword in design (for example, see Shedroff 2001, Garrett 2003, Kuniavsky 2003). User experience is subjective and holistic. It has both utilitarian and emotional aspects, which change over time (Rhea 1992). In this paper, we deal with what we see as a major problem in the user experience literature, which is its implicit individualistic bias. We refer to the mostly missing social quality of experience with the term 'co-experience', and propose an interactionist perspective for studying co-experience. We show that with this concept, we are able to pay attention to things that are not addressed by existing theories of user experience. We illustrate this perspective by showing how people communicate emotions with each other via mobile multimedia technology.

*Corresponding author. Email: kbattar@uiah.fi
CoDesign, Vol. 1, No. 1, March 2005, 5–18. ISSN 1571-0882 print / ISSN 1743-3755 online © 2005 Taylor & Francis Ltd. DOI: 10.1080/15710880412331289917

2. Three approaches to user experience

Currently there are three main approaches to applying and interpreting user experience in design. These are the measuring approach, the empathic approach, and the pragmatist approach. The role of emotional experiences is important in all three, although, as they stem from different disciplines, they treat emotions differently. The measuring approach is mainly used in development and testing. It builds on the notion that experiences can be measured via emotional reactions. Thus, the approach is narrow: the definition only includes those aspects of user experience that can be measured and, through measuring, understood and improved. There are several alternative orientations within the
approach. The first builds on the idea that people experience things as reactions in their bodies. People's bodies react to situations chemically and electrically, and experience this reaction in terms of emotions. As these reactions are often fleeting and sometimes difficult to verbalise, tools for monitoring such reactions, such as facial expressions or changes in galvanic skin response, can be recorded in order to understand when and where people get frustrated (Picard 1997). A second orientation is based on subjective reports (e.g. Jordan 2000). For instance, Desmet (2002) has developed a testing tool to elicit emotional responses to products such as cars. His tool, PrEmo, uses animated cartoon characters to describe 14 different emotional responses. By selecting all that apply, the user creates an emotional profile. Universal evaluation criteria for user experience do not exist, though some have been proposed for interaction design (Alben 1996). Rather, the 'soft and emotional experiences' need to be translated into 'experience goals' relevant to each project and included in the testing of products and prototypes (Teague and Whitney 2002). The empathic approach also claims that experience is emotional in nature but that the kinds of experiences that products elicit should be connected to the needs, dreams and motivations of individuals (Dandavate et al. 1996, Black 1998). Designing for user experience begins with creating a rich, empathic understanding of the users' desired experiences and only then designing concepts and products to support them. The term 'design empathy' has been in use since the late 1990s to describe the role of the designer/researcher (Leonard and Rayport 1997, Segal and Fulton Suri 1997, Koskinen et al.
2003). Design empathy makes use not only of the emotions of the users, but also those of the designers. In order to become not merely informed but also inspired, designers must both observe and feel for the users (Mäkelä and Fulton Suri 2001, Kankainen 2002). The methods used in empathic approaches aim to provide an understanding of users' experiences with qualitative methods; they also assist users in constructing, for designers, descriptions of their experiences, dreams, expectations and life context (Dandavate et al. 1996). Typically, these methods combine visual and textual data, self-documentation and projective tasks, several of which are used in parallel. This approach aims to inspire designers rather than produce testable hypotheses through measurement and conceptual elaboration. The pragmatist approach borrows much of its perspective from pragmatist philosophy (see Dewey 1934). Recently, Forlizzi and Ford (2000) presented a model of user experience in interaction. This model is theoretical in nature, and shows that experiences are momentary constructions that grow from the interaction between people and their environment. In their terminology, experience fluctuates between the states of cognition, subconsciousness and storytelling, depending on our actions and encounters in the world. Experience is something that happens all the time: subconscious experiences are fluent, automatic and fully learned; cognitive experiences require effort, focus and concentration.
Some of these experiences form meaningful chunks and become demarcated as 'an experience': something meaningful that has a beginning and an end. Through stories, they may be elaborated into 'meta-experiences' that are names for collections of individual experiences. Even more recently, Wright et al. (2003) focused on what is common to all experience, describing four strands (the compositional, sensory, emotional and spatio-temporal strands) which together form experience. They also describe sense-making processes such as anticipating, interpreting and recounting. These three approaches propose divergent methodologies for studying user experience, but imply different things. The measuring approach focuses on emotional responses, the empathic approach on user-centred concept design, while the pragmatist approach links action to meaning. The measuring approach is useful in development and evaluation, but is more difficult to apply at the fuzzy front end of design (Cagan and Vogel 2002). The pragmatist approach concentrates on the embodied nature of experience and interaction. The first two approaches, the measuring and the empathic, share one main problem. Both see emotions as driving forces of human conduct, an assumption contested by more situated views of interaction (Blumer 1986, p. 7; about plans, see Dourish 2002, pp. 70–73). Of user experience approaches, only the pragmatist perspective really accounts for the situated unity of action, emotion and thought in the individual in a theoretical way.
The pragmatist perspective is broader than the others in its scope; in fact, other models can be seen as its special cases. However, all these approaches are individualistic, thus missing a crucially important aspect of human experience. People as individuals depend on others for all that makes them truly human. Experiencing happens in the same social context; therefore, it is necessary to account for this context and its effect on experience.

3. Co-experience: elaborating the pragmatist perspective

We use the term 'co-experience' to describe experiences with products in terms of how the meanings of individual experiences emerge and change as they become part of social interaction. To explore co-experience more deeply, we expand the pragmatist model of user experience in interaction (Forlizzi and Ford 2000) and address the mention of meaning in more detail by building on three classic principles of symbolic interactionism. First, people act towards things through the meanings they have for them. Second, meanings arise from interaction with one's fellows. Third, meanings are handled in, and modified through, an interpretive process used by the person in dealing with things he encounters (Blumer 1986, pp. 2–6). These are the classic statements of symbolic interactionism, a sociological tradition that builds on the pragmatist philosophy of John Dewey, William James, and George Herbert Mead (see Joas 1997). This perspective adds social interaction to the pragmatist model, maintaining that people come to define situations through an interpretive process in which they take into account the non-symbolic gestures and interpretations of others. The improved interactionist model for co-experience uses these meanings to explain how experiences migrate between the different levels of Forlizzi and Ford's model (for an elaboration, see Forlizzi and Battarbee 2004), from the centre of attention to the periphery or into stories and acts of personalisation and back again. Such migrations happen in
at least three general ways.

- Lifting up experiences. Often subconscious experience migrates to become 'an experience' through a social process. People constantly lift things from the stream of events in everyday life and communicate them to others. For example, a person may describe something that has happened to them, evaluating it as meaningful enough to be told to others.
- Reciprocating experiences. Quite often, once it has been lifted up in this way, recipients acknowledge and respond to experience. For example, they may reciprocate by telling about their own, similar experiences, or simply offer a sympathetic response (Mauss 1980, Licoppe and Heurtin 2001, Koskinen et al. 2002, Ch. 7, Taylor and Harper 2002). In doing so, they show that the experience (as well as the person sharing it) is meaningful for them. This can be shown in various ways, for example, by appreciating the experience, or by taking sides with it. Experiences can be maintained, supported and elaborated socially. Memories of relevant experiences may be retold in this way as well.
- Rejecting and ignoring experiences. Finally, experiences brought to the attention of others may also be rejected or downgraded by others. For example, something that is important for one person may be too familiar, uninteresting or even offensive for others. They may indicate this in various ways to soften the rejection, for example through humor or teasing, or with varying degrees of topic change, direct response or inaction.

Similarly, people often elaborate 'meta-experiences' together (see Forlizzi and Ford 2000). In this paper we do not focus specifically on meta-experience for two reasons.
First, the pragmatist model of Forlizzi and Ford already accounts for it. When people compare experiences, often collected over several years, they come to find similarities and differences, and classify them in stories. Ultimately, some stories may become key symbols of their identities (see Orr's 1996 analysis of technicians' 'war stories'). Also, stories provide one of the main mechanisms for reconstructing memories (Neisser 1981, Orr 1996). Second, we see storytelling as just another form of social interaction. It is significant when sharing experiences verbally, but not necessarily the dominant form for digital media. Although storytelling has well-studied forms and traits, it nevertheless is included in the more general approach of symbolic interactionism, thus making it a special case of the more general argument for all social interaction. The following example (figure 1) illustrates the strength of this framework. The figure is a mobile multimedia message (MMS): a photo, audio and text message sent from one mobile phone to another during a pilot study in Finland in 2002 (the pilot study and further details of the messaging are described in Section 4). The story behind this MMS is how Thomas, a father, lifts up a significant experience: the toddler Mikey's evening tantrum. Jani, a friend, reciprocates by saying that his experiences in babysitting Mikey have been similar, and Thomas should consider getting him a soccer ball of his own. Jani's comment could be taken as a rejection, suggesting a disinterest in Mikey and his temper. In a subsequent reply (shown in the figure) Thomas reinstated the importance of the event, and furthermore, turned it into an opportunity to tease Jani. His reply contained a good audio sample of the howling and a picture of the boy, red in the face and tears streaming down his cheeks, and suggested similarities between Mikey and Jani. However, Jani's softened rejection was successful: there were no more reports on Mikey crying after that. As this example shows, people may use technology
to share meaningful experiences, to sympathise with them, to suggest that they are not particularly significant, or even to reject them by denying their significance. These experiences would not occur to a user alone; identities, roles and emotions are resources for interpreting and continuing interaction (Blumer 1986). For instance, in the example of figure 1, Thomas and Jani do more than share an experience: they actively interpret it, relate to it, reinterpret it and, in so doing, constitute a line of action and come to define their mutual relationship for a brief moment. The other recipients of the MMS remain more or less neutral bystanders. The interactionist perspective on co-experience claims that experience is a social phenomenon and needs to be understood as such. Also, it claims that bodily and psychological responses to external phenomena do not necessarily lead to predictable emotional reactions, because of an interpretive social process in between (see Shott 1979). Thus, relying solely on emotion as an index of experience leads us astray. For these same reasons, empathising with individuals does not explain co-experience. Empathy is necessary, but the focus must first be on interaction. When people act together, they come to create unpredictable situations where they must respond to each other's actions creatively. In the lifecycle of an experience (cf. Rhea 1992), we need to pay attention to co-experience, not just to individual aspects of experience. This is the crux of the symbolic interactionist perspective on user experience.

4. Data and methods

We illustrate our argument with data from Mobile Multimedia, a multimedia messaging pilot study organised with Radiolinja, a Finnish telecommunications operator. In Mobile Multimedia several groups of friends exchanged multimedia messages with each other for about five weeks in the summer of 2002. Each participant was given an MMS phone (either a Nokia 7650 with an integrated camera or a SonyEricsson T68i with a plug-in camera); the service was free of
charge (see Koskinen 2003). Out of the Mobile Multimedia pilot, three mixed-gender groups of 7, 11 and 7 members were selected for a detailed study to explore in more detail gender difference, terminal types and the city–countryside axis. The qualitative study focused on the messaging of these groups. During the pilot, the three groups sent over 4000 messages, which were analysed quantitatively; two samples of the messages were also analysed qualitatively. The messages are published here with permission; the names of people and places have been changed.

Figure 1. A little boy's bad mood.
Co-experience: user experience as interaction — K. Battarbee and I. Koskinen

The data reveal how people themselves construct messages, and how others respond to them. Even though there is no access to what people did when they received the messages, we can see their virtual responses: exactly the same content of text, image and audio as was received by the participant (see Battarbee 2003, Koskinen 2003, Kurvinen 2003 and references therein). The study of co-experience is the study of social interaction between several people who lift up something from their experience to the centre of social interaction for at least a turn or more. Since the focus is on how people give meanings to things, and how they understand them, the study setting needs to be naturalistic, i.e. it must happen in the real world rather than in a controlled setting such as a laboratory (Glaser and Strauss 1967, Blumer 1986). Designers need to explore how interaction proceeds and aim to describe its forms before trying to explain it in terms of such structural issues as roles or identities.
Rather, inference proceeds inductively (Seale 1999). Roles and identities may be made relevant in interaction, but they are resources people can use rather than features that explain co-experience. In this paper, we aim to indicate the value of the concept by showing that experience has features that cannot be studied adequately with existing concepts of user experience. Here, we aim to illustrate co-experience as a sensitising concept (Blumer 1968), rather than trying to provide a comprehensive analysis of the varieties of co-experience.

5. Lifting up experiences into the focus of social interaction

From the symbolic interactionist standpoint proposed in this paper, the key feature of experience is symbolisation: what people select from experience to be shared with others. People communicate with each other for a variety of reasons, ranging from practical to emotional. In so doing, they place the things they communicate at the focal point of shared attention. In presenting things as 'an experience', they invite others to join in. However, these communications remain open to negotiation, something that may or may not be picked up by others and made into something more meaningful than merely the scenic background of experience. As an example of an ordinary message that illustrates this argument, we may take the simple pleasures of eating, drinking and socialising (see figure 2). This message is part of a sequence of holiday reports between two groups of friends: the 'land lovers' and the 'sailors'. Susse and her friends choose to describe their evening sentiments with a multimedia puzzle. The audio explains the picture and the text suggests that the key element is in fact still missing and remains to be imagined: the smell of hot pizza.
Susse may have tried to convey a realistic sense of what the experience of hot pizza is, but she is also acknowledging that this is impossible, with the smell (and the pizza itself) missing. However, she seems to trust that, with the names of the ingredients, the 'sailors' will get the idea and share their sentiments as she has shared theirs.

Figure 2. A pleasant evening.

Sometimes experiences belong to larger themes and can be called scalable (Forlizzi and Battarbee 2004). For example, an eagerly awaited holiday trip to Paris is a complex experience that may last for weeks and contain many larger and smaller, sometimes contradictory, elements. Documenting such experiences requires more than one message, as in the case of the following monologue. Markku and his friends are driving to a weekend rock festival. Their first message (figure 3) describes the mood inside the van. The second message (figure 4) reports that they are still on their way, but something unexpected has happened: they were caught in a speed trap and fined. When experiencing strong emotions, the process of symbolisation requires more effort. The description of the experience has to take into account the responses of others, such as anger, fear, disappointment, ridicule or sympathy, and explore which interpretations are desirable and which are to be avoided. What is offered here for common attention is laughing at the experience and making fun of it, with only a side reference to the actual event and the emotional experience of being caught by the police and receiving a fine.

Figure 3. Driving to the rock festival.
Figure 4. Reporting on the speeding ticket incident.

In principle, almost any detail of ordinary life can be meaningful enough to send. In MMSs, people document food, drink, children, pets and spouses (see Koskinen et al. 2002, Lehtonen et al. 2003). In addition, people report events such as rock festival trips and events in summer homes as well as moods, socially
significant things and emotionally relevant experiences. The reason for sending an image and audio is its topic rather than its artistic quality. The literature on experience tends to emphasise and focus on experiences that are emotionally strong and that stand out as memorable. However, the content in the Mobile Multimedia project focuses predominantly on small, everyday and mundane matters, suggesting that in social interaction, the strength of emotions does not correlate with the satisfaction of communicating and sharing them.

6. Reciprocating experience in social interaction

People do not merely compose multimedia messages, they also acknowledge them in replies. In responding, recipients pick up the gist of the message and fit their response to it. Typically, they show that they either share the experience or empathise with the sender on a more general level, as is suggested in theories of gift-exchange (Mauss 1980) applied to mobile communications (Licoppe and Heurtin 2001, Taylor and Harper 2002). Parents share pictures of their babies, expecting others to mirror their delight, but even in more ordinary cases, the expected response is a positive, reinforcing one. Of course, recipients may not always produce a proper response, and this may prompt problems in subsequent interactions. For example, the sender may become embarrassed or hurt, and may even lose face (Gross and Stone 1964, Goffman 1967, pp. 5–45). Between the need to maintain social interaction and support others, and the need to look out for personal gain and be selfish, the more likely people are to meet again, the more they will try to keep the interaction going and help everyone maintain face. Among socially connected people, this results in an in-built tendency to reciprocate experiences in human interaction, and in multimedia messaging. Most responses follow this logic. Sometimes people start with a parody, as in figure 5.
Replies to such messages (figure 6) are usually not explicit congratulations. Risto, however, makes a point of saying how much he enjoyed it. However, to really mean this, he needs to respond with a similarly overdone picture, a reflection of the first one. Pleased with his message, Risto reuses the picture and shares it with other friends as well, this time with a new text (figure 7). The response to Risto's message does not merely share the holiday mood, but also copies the response format almost perfectly (figure 8). People may also align with negative experiences, as in the following example, in which two young women share a mood. First, Maria lets Liisa know that she is experiencing something 'typical', which seems neither exciting nor fun. Liisa sympathises and reciprocates the experience, sharing her own interpretation of what a 'typical' experience is like (figures 9 and 10). This example demonstrates the power of the visual in communication. Compared to emotions, moods are lower in intensity and last longer. Because moods are not focused on any particular object, objects do not describe moods very well. Here, the focus is on the face. The MMS phones were often used for literal self-documentation (taking a picture of one's own face at arm's length), although collaboration was also frequent. Through this exchange, Liisa and Maria indicate that they know each other and have shared similar experiences before: how else could they talk about 'this' being 'typical'?
The closeness is also expressed by the framing of the picture. Whether Liisa's response is sincere or a parody is hard to say. Maybe the interpretation is intentionally left for the recipient to decide, and to remain open for future interactions.

Figures 5–8. A staged picture prompts staged responses.
Figures 9, 10. Exchanging pictures of mood.

7. Rejecting and ignoring experiences in social interaction

For a number of reasons, experiences that are offered to the common awareness may also be rejected, downplayed or made fun of. A certain banality is almost built into MMS use, which focuses on mundane experiences rather than, say, key rituals of life or experiences with fine art. Banality may go overboard and lose the recipient's interest; sometimes, the report may stretch the bounds of what is morally acceptable, for example by being sexually explicit (see Kurvinen 2003). Recipients, then, may have many different reasons to interrupt or redirect the messaging, even when it may be difficult to do so without insulting the sender. How can they accomplish such actions without causing the sender to lose face? The first thing to notice is that rejection may be active or passive: communication always offers multiple alternative possibilities for interpretation, and choosing one option may negate others. In the following sequence, Thomas offers a significant experience (getting engaged/married) for others to respond to (figures 11–13). Predictably, he receives several congratulations and pictures of happy faces. However, Jani did not notice the engagement message until 25 hours later, and takes a different course of action. In his response, he teases Thomas indirectly for losing his freedom, proclaiming that he himself has no intention of getting 'snatched', and thus inverts the value of Thomas's experience.
In response, Thomas defends his case by returning the tease and peppering it with an insult. The communication between Thomas and Jani is a clever play on the possibilities of multimedia, as the joke is largely a visual play on the theme of hands.

Figures 11–13. Two teases.

Generally, a positive experience like that sent by Thomas calls for an aligning response. Responses rejecting the intended value of such messages normally incorporate accounts and disclaimers that soften the impact of the rejection. Typical examples of such accounts and disclaimers are humour, excuses, justifications and hedges (Scott and Lyman 1968, Hewitt and Stokes 1975). With these devices, the communication channel is kept open despite the interactional problems posed by the rejection. This was also the case in the messaging around figure 1, in which Jani indirectly indicated to Thomas that Mikey's tantrums were no longer a welcome topic. By advising Thomas to buy a ball for Mikey, Jani softened the message by suggesting that maybe Mikey had good reason to be upset, i.e. not having a soccer ball of his own. However, the tactic failed, and Thomas countered by comparing Jani with the baby (humorously, of course, but the comparison still turned his reply into a tease). No matter how nice, such rejections may still insult the original sender, or at least give them an opportunity to behave as if they were insulted.

8. Conclusions and discussion

In this paper, we have introduced the notion of co-experience and presented it as an elaboration of Forlizzi and Ford's (2000) model of user experience in interaction. Our claim is based on a simple observation: people create, elaborate and evaluate experiences together with other people, and products may be involved as the subject, object or means of these interactions. Social processes are particularly significant in explaining how experiences migrate from the subconscious into something more meaningful, or lose that status. The concept of co-experience builds on the understanding that experiences
are individual, but they are not only that. Social interaction is to the experiences of the individual what a sudden jolt is to a jar of nitroglycerine: it makes things happen. We claim that neglecting co-experience in user experience leads to a limited understanding of user experience, and a similarly limited understanding of design possibilities. The concept of co-experience enriches design in several ways.

- Co-experience extends the previous understanding of user experience by showing that user experiences are created together and are thus different from the user experiences people have alone.
- It suggests an interactionist methodology for studying user experience. It is important to see what the content is, what people do, or, in the case of the Mobile Multimedia project, what is in their messages. This alone, however, is not enough to make sense of co-experience. It is also necessary to study the interactions between people with and without technologies, and to put the messages into context.
- Co-experience opens new possibilities in design for user experience by focusing on the role of technology in human action (parallel ideas can be found in the concept of embodied interaction; see Dourish 2001). Co-experience focuses on how people make distinctions and meanings, carry on conversations, share stories and do things together. By understanding these interactions, opportunities for co-experience can be designed into the interactions of products and services.

To put this into design terms: user experiences can only be understood in context. New technologies are adopted in social interactions where the norms for behaviour (and product use) are gradually developed and accepted. These rules are never absolute or complete. For example, instead of merely responding to a suggestion, people may turn their response into a mock tease. There is therefore little point in creating an interface


Leading Edge Review

Chromatin: Receiver and Quarterback for Cellular Signals

David G. Johnson1,2,3 and Sharon Y.R. Dent1,2,3,*
1Department of Molecular Carcinogenesis
2Center for Cancer Epigenetics
The University of Texas MD Anderson Cancer Center, Science Park, Smithville, TX 78957, USA
3The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, TX 77030, USA
*Correspondence: sroth@
DOI: 10.1016/j.cell.2013.01.017

Signal transduction pathways converge upon sequence-specific DNA binding factors to reprogram gene expression. Transcription factors, in turn, team up with chromatin-modifying activities. However, chromatin is not simply an endpoint for signaling pathways. Histone modifications relay signals to other proteins to trigger more immediate responses than can be achieved through altered gene transcription, which might be especially important to time-urgent processes such as the execution of cell-cycle checkpoints, chromosome segregation, or exit from mitosis. In addition, histone-modifying enzymes often have multiple nonhistone substrates, and coordination of activity toward different targets might direct signals both to and from chromatin.

Introduction

Signal transduction classically involves coordinated cascades of protein phosphorylation or dephosphorylation, which in turn alter protein conformation, protein-protein interactions, subcellular protein locations, or protein stability. In many cases, these pathways begin at the cell surface and extend into the nucleus, where they alter the interactions of transcription factors and chromatin-modifying enzymes with the chromatin template. In some cases, signaling promotes such interactions, whereas in others, factors are ejected from chromatin in response to incoming signals. Several such pathways have been defined that control developmental fate decisions or responses to physiological or environmental changes (for examples, see Fisher and Fisher, 2011; Long, 2012; Valenta et al., 2012). In these cases, the ultimate endpoint of the signal is
often considered to be a modification of chromatin structure that modulates DNA accessibility to control gene expression. The architecture of chromatin can be altered by a variety of mechanisms, including posttranslational modification of histones, alterations in nucleosome locations, and exchange of canonical histones for histone variants. Histone modifications have at least three nonmutually exclusive effects on chromatin packing (Butler et al., 2012; Suganuma and Workman, 2011). First, modifications such as acetylation or phosphorylation can alter DNA:histone and histone:histone interactions. Second, histone acetylation, methylation, and ubiquitylation can create binding sites for specific protein motifs, thereby directly promoting or inhibiting interactions of regulatory factors with chromatin (Smith and Shilatifard, 2010; Yun et al., 2011). Bromodomains, for example, promote interactions with acetyl-lysines within histones. PHD domains, Tudor domains, and chromodomains can selectively bind particular methylated lysines (Kme). At least one Tudor domain (TDRD3) serves as a reader for methylarginine (Rme) residues (Yang et al., 2010). In contrast, other domains, such as the PHD finger in BHC80 (Lan et al., 2007), are repelled by lysine methylation. Such regulation is enhanced by combining domains to create multivalent "readers" of histone modification patterns (Ruthenburg et al., 2007). The combination of PHD and bromodomains in the TRIM24 protein, for example, creates a motif that specifically recognizes histone H3K23 acetylation in the absence of H3K4 methylation (Tsai et al., 2010). Third, histone modifications also affect the chromatin landscape by influencing the occurrence of other modifications at nearby sites (Lee et al., 2010). Methylation of H3R2, for example, inhibits methylation of H3K4, but not vice versa (Hyllus et al., 2007; Iberg et al., 2008). Such modification "crosstalk" can result either from direct effects of a pre-existing modification on the ability of a second histone-modifying enzyme to
recognize its substrate site or from indirect effects on substrate recognition through the recruitment of "reader proteins" that mask nearby modification sites. Binding of the chromodomain in the HP1 protein to H3K9me blocks subsequent phosphorylation of S10 by Aurora kinases, for example (Fischle et al., 2003).

The Power of Crosstalk

Histone modification crosstalk can also occur in trans between sites on two different histones. The most studied example of such crosstalk is the requirement of H2B monoubiquitylation for methylation of H3K4 (Shilatifard, 2006). In yeast, the Bre1 E3 ligase ubiquitylates H2BK123 and works together with the Paf1 complex to recruit the Set1 H3K4 methyltransferase complex, often referred to as COMPASS, to gene promoters (Lee et al., 2010). Bre1-mediated H2B ubiquitylation also stimulates H3K79 methylation by the Dot1 methyltransferase (Nakanishi et al., 2009; Ng et al., 2002). Each of these histone modifications is widely associated with actively transcribed genes and can regulate multiple steps during transcription (Laribee et al., 2007; Mohan et al., 2010; Wyce et al., 2007). These crosstalk events are conserved, at least in part, in mammalian systems (Kim et al., 2009; Zhou et al., 2011).

[Cell 152, February 14, 2013 ©2013 Elsevier Inc.]

Though H2B ubiquitylation is observed in the bodies of all actively transcribed genes, knockdown of the mammalian homolog of Bre1, ring finger protein 20 (RNF20), affects the expression of only a small subset of genes (Shema et al., 2008). Interestingly, RNF20 depletion not only led to the repression of some genes, but also caused the upregulation of others. Genes negatively regulated by RNF20 and H2B ubiquitylation include several proto-oncogenes, such as c-MYC and c-FOS, as well as other positive regulators of cell proliferation. On the other hand, depletion of RNF20 and reduction in H2B ubiquitylation reduced the expression of the p53 tumor suppressor gene and impaired the activation of p53 in response to DNA damage.
Consistent with these selective changes in gene expression, RNF20 depletion elicited a number of phenotypes associated with oncogenic transformation. The suggestion that RNF20 may function as a tumor suppressor is further supported by the finding of decreased levels of RNF20 and H3K79 methylation in testicular seminomas (Chernikova et al., 2012) and the observation that the RNF20 promoter is hypermethylated in some breast cancers (Shema et al., 2008). A more concrete link between these histone modifications and human cancer comes from leukemias bearing translocations of the mixed lineage leukemia (MLL) gene. MLL is an H3K4 methyltransferase related to the yeast Set1 protein found in the COMPASS complex. A number of different gene partners are found to be translocated to the MLL locus, and this invariably creates an MLL fusion protein that lacks H3K4 methyltransferase activity. Interestingly, many of the translocation partners are part of a "superelongation complex" that stimulates progress of the polymerase through gene bodies (Mohan et al., 2010; Smith et al., 2011). Data suggest that at least some of these oncogenic MLL fusion proteins alter the expression of select target genes, such as HOXA, by increasing H3K79 methylation (Okada et al., 2005). Knockdown of Dot1 reduced H3K79 methylation at these targets and inhibited oncogenic transformation by MLL fusion proteins. These examples demonstrate how deregulation of crosstalk among different histone modifications can contribute to diseases such as cancer.

Not Just for Histones

Just as in histones, modifications in nonhistone proteins are subject to regulatory crosstalk and serve as platforms for binding of "reader" proteins. For example, a yeast kinetochore protein, Dam1, is methylated at K233 by the Set1 methyltransferase, an ortholog of mammalian MLL proteins (Zhang et al., 2005). The functions of Dam1, like those of other kinetochore proteins, are highly regulated by Aurora-kinase-mediated phosphorylation (Lampson and Cheeseman, 2011). At least some of these
phosphorylation events are inhibited by prior methylation of Dam1, creating a phosphomethyl switch that impacts chromosome segregation (Zhang et al., 2005). Another, more complicated example of a phosphomethyl regulatory cassette occurs in the RelA subunit of NF-κB (Levy et al., 2011). RelA is monomethylated by SETD6 at K310, and this modification inhibits RelA functions in transcriptional activation through recruitment of another methyltransferase, G9a-like protein (GLP). GLP binds to K310me1 in RelA and induces a repressive histone modification, H3K9me, in RelA target genes. Phosphorylation of the adjacent S311 in RelA, however, blocks GLP association with RelA and instead promotes the recruitment of CREB-binding protein (CBP) to activate transcription of NF-κB targets (Duran et al., 2003) (Figure 1A). These two examples in yeast and in mammalian cells likely foreshadow the discovery of many additional regulatory "switches" created by modification crosstalk. The p53 tumor suppressor is a prime candidate for such regulation, as it harbors several diverse modifications. Moreover, many kinase consensus sites contain arginine or lysine residues, providing a high potential for phosphomethyl, phosphoacetyl, or phosphoubiquitin switches (Rust and Thompson, 2011). The induction of H3K9me by recruitment of GLP via a methylation event in RelA illustrates how a signaling pathway, in this case mediated by NF-κB, can transduce a signal to chromatin.
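The switch logic described above can be captured in a short toy model. This is purely an illustrative sketch of the published observations (Levy et al., 2011; Duran et al., 2003); the function name, the boolean encoding of the two marks, and the returned state labels are our own assumptions, not part of either study.

```python
# Toy model of the RelA phosphomethyl switch. The two inputs encode whether
# RelA carries K310 monomethylation (by SETD6) and/or S311 phosphorylation.
# Function name and state labels are illustrative, not from the source papers.

def rela_target_gene_state(k310_me1: bool, s311_ph: bool) -> str:
    """Predict the transcriptional state of an NF-κB target gene
    from two modifications on the RelA subunit."""
    if s311_ph:
        # S311 phosphorylation blocks GLP binding and instead promotes
        # CBP recruitment, histone acetylation, and gene activation.
        return "active (CBP recruited, histones acetylated)"
    if k310_me1:
        # K310me1 recruits GLP, which deposits repressive H3K9me
        # at NF-κB target genes.
        return "repressed (GLP bound, H3K9me deposited)"
    # Neither mark present: the switch is not engaged in this simple model.
    return "default (switch not engaged)"
```

Note that the model makes S311 phosphorylation dominant over K310 methylation, mirroring the description that phosphorylation of the adjacent serine overrides the GLP-mediated repressive branch.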
However, signaling can also occur in the other direction; that is, a histone modification can affect the modification state of a nonhistone protein. Methylation of Dam1, for example, requires ubiquitylation of histone H2B (Latham et al., 2011). Most likely, H2Bub recruits the Set1 complex to centromeric nucleosomes, positioning it for methylation of Dam1 at the kinetochore. Thus, transregulation of posttranslational modifications can occur both between histones (such as H2Bub and H3K4me) and between histones and nonhistones (such as H2Bub and Dam1-K233me), providing a platform for bidirectional signaling from chromatin.

Figure 1. Regulation of RelA/NF-κB by a Phosphomethyl Switch and in Response to DNA Damage
(A) Methylation of RelA at lysine 310 (K310) by SETD6 creates a binding site for GLP, which in turn methylates H3K9 at NF-κB target genes to inhibit transcription. Phosphorylation of RelA at serine 311 (S311) by PKCζ blocks binding of GLP to RelA (Levy et al., 2011) and, along with other RelA modifications not shown, promotes its interaction with CBP, leading to histone acetylation and activation of NF-κB target genes (Duran et al., 2003).
(B) Phosphorylation of NEMO by ATM in response to a DSB promotes its export from the nucleus. In the cytoplasm, NEMO activates the IKK complex, leading to IκB phosphorylation and degradation and NF-κB (RelA-p50) translocation to the nucleus, where it can activate transcription as shown in (A).
Note that some ATM may translocate with NEMO to the cytoplasm and participate in IKK activation.

Signaling to and from Chromatin in Response to DNA Damage

Signaling to and from chromatin impacts other important cellular processes as well. DNA repair involves coordination among the repair machinery, chromatin modifications, and cell-cycle checkpoint signaling. At the apex of the DNA damage response are three kinases related to the PI3 kinase family: ataxia telangiectasia mutated (ATM), ATM and Rad3-related protein (ATR), and DNA-PK (Jackson and Bartek, 2009; Lovejoy and Cortez, 2009). DNA-PK is activated when its regulatory subunit Ku70/80 binds to the end of a DNA double-strand break (DSB). ATR activation involves recognition of single-stranded DNA coated with replication protein A (RPA) by the ATR-interacting protein, ATRIP, as well as direct interaction with topoisomerase IIβ-binding protein (TopBP1) (Burrows and Elledge, 2008). Like DNA-PK, ATM is also activated in response to DSBs, but rather than recognizing broken DNA ends, ATM appears to be activated in response to large-scale changes in chromatin structure caused by a DSB (Bakkenist and Kastan, 2003). How alterations in chromatin structure are signaled to ATM is at present unclear. One of the earliest events in the DNA damage response is the phosphorylation of a variant of histone H2A, H2AX, by ATM, DNA-PK, and/or ATR (Rogakou et al., 1998). Phosphorylated H2AX (γH2AX) provides a mediator of DNA damage signaling directed by these kinases, and this modification is found in flanking chromatin regions as far as one megabase from a DNA DSB.
This phosphorylation event creates a binding motif for the mediator of DNA damage checkpoint (MDC1) protein, which in turn recruits other proteins, such as Nijmegen breakage syndrome 1 (NBS1) and RNF8, to sites of DSBs through additional phospho-specific interactions (Chapman and Jackson, 2008; Kolas et al., 2007; Stucki and Jackson, 2006). NBS1 is part of the MRN complex that also contains Mre11 and Rad50 and is involved in DNA end processing for both the homologous recombination and nonhomologous end-joining pathways of DSB repair (Zha et al., 2009). In addition, NBS1 functions as a cofactor for ATM by stimulating its kinase activity and recruiting ATM to sites of DSBs, where many of its substrates are located (Lovejoy and Cortez, 2009; Zha et al., 2009). ATM also phosphorylates effector proteins that only transiently localize to DSBs. One of these proteins is the checkpoint 2 (Chk2) kinase, which can be activated by ATM-mediated phosphorylation at sites of damage but then spreads throughout the nucleus to phosphorylate and regulate additional proteins as part of the DNA damage response (Bekker-Jensen et al., 2006). ATM also phosphorylates transcription factors, such as p53 and E2F1, to regulate the expression of numerous genes involved in the cellular response to DSBs (Banin et al., 1998; Biswas and Johnson, 2012; Canman et al., 1998; Lin et al., 2001). These events again illustrate that signals to chromatin, in this case resulting in H2AX phosphorylation, can be relayed to other proteins both on and off of the chromatin-DNA template. Bidirectional signaling is illustrated even further by another branch of the ATM-mediated DNA damage response that involves activation of NF-κB. NF-κB is normally sequestered in an inactive state in the cytoplasm through its association with IκB. Following ATM activation by a DNA DSB, ATM phosphorylates NF-κB essential modulator (NEMO) in the nucleus (Wu et al., 2006), which promotes additional modifications to NEMO and export from the nucleus to the cytoplasm. Once in the
cytoplasm, NEMO participates in the activation of the canonical inhibitor of NF-κB (IκB) kinase (IKK) complex that targets IκB for degradation, leading to NF-κB activation. NF-κB then translocates to the nucleus, where it regulates the expression of genes that are important for cell survival following DNA damage. In this case, a change in chromatin structure caused by a DSB initiates a signal that travels to the cytoplasm and back to the nucleus to activate transcription of NF-κB target genes by modifying chromatin structure (Figure 1).

Multiple Roles for H2B Ubiquitylation

In addition to phosphorylation of H2A/H2AX, a number of other histone modifications are induced at sites of DSBs in yeast and mammalian cells. One such modification is H2Bub, the same mark involved in regulating transcription as described above. As with transcription, the Bre1 ubiquitin ligase (RNF20-RNF40 in mammalian cells) is responsible for H2Bub at sites of DNA damage (Game and Chernikova, 2009; Moyal et al., 2011; Nakamura et al., 2011). Moreover, H2Bub is required for and promotes H3K4 and H3K79 methylation at sites of damage, similar to its role at actively transcribed genes. These histone modifications are important for altering chromatin structure to allow access to repair factors involved in DNA end resection and processing (Moyal et al., 2011; Nakamura et al., 2011).
Moreover, H2Bub and H3K79me are not only required for DNA repair but are also important for activating the Rad53 kinase and for imposing subsequent cell-cycle checkpoints (Giannattasio et al., 2005). Blocking H2B ubiquitylation or H3K79 methylation in response to DSBs inhibits Rad53 activation and impairs the G1 and intra-S phase checkpoints. Bre1-mediated H2B ubiquitylation and subsequent methylation of H3K4 by Set1 and H3K79 by Dot1 are also involved in regulating mitotic exit in yeast. The Cdc14 phosphatase controls mitotic exit by dephosphorylating mitotic cyclins and their substrates during anaphase (D'Amours and Amon, 2004). Prior to anaphase, Cdc14 is sequestered on nucleolar chromatin through interaction with its inhibitor, the Cfi1/Net1 protein. Two pathways, Cdc fourteen early anaphase release (FEAR) and mitotic exit network (MEN), control the release of Cdc14 from ribosomal DNA (rDNA) in the nucleolus. Upon inactivation of the MEN pathway, H2B ubiquitylation and methylation of H3K4 and H3K79 are necessary for FEAR-pathway-mediated release of Cdc14 from the nucleolus (Hwang and Madhani, 2009). It appears that alteration of rDNA chromatin structure induced by these modifications is important for this process. Thus, depending on its chromosomal location, H2Bub can regulate gene transcription, DNA repair and checkpoint signaling, mitotic exit, and chromosome segregation (Figure 2).
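The upstream role of H2Bub in these pathways can be summarized as a small dependency model. This is a hedged, illustrative sketch only: the dictionary, function, and mark labels are our own notational assumptions for the relationships reviewed above (H2Bub licensing H3K4me, H3K79me, and Dam1-K233me), not a formalism from the cited studies.

```python
# Toy dependency model of H2Bub-dependent trans-crosstalk in yeast:
# Bre1-mediated H2B monoubiquitylation acts upstream of Set1-mediated
# H3K4/Dam1 methylation and Dot1-mediated H3K79 methylation.
# Mark names and the licensing map are illustrative assumptions.

UPSTREAM = {
    # mark present -> marks whose deposition it licenses
    "H2Bub": {"H3K4me", "H3K79me", "Dam1-K233me"},
}

def attainable_marks(present: set) -> set:
    """Return the set of marks reachable given the marks already present."""
    reachable = set(present)
    for mark, licensed in UPSTREAM.items():
        if mark in reachable:
            # A present upstream mark licenses its downstream methylations.
            reachable |= licensed
    return reachable
```

In this model, blocking H2B ubiquitylation removes the license for all three downstream methylation events, which mirrors the observation that loss of H2Bub impairs H3K4/H3K79 methylation and Dam1 methylation alike.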
The ability of this modification to affect methylation of both histone (H3K4 and H3K79) and nonhistone proteins in trans highlights its potential to serve as a nexus of signals coming into and emanating from chromatin.

Unanswered Questions

The roles of H2B ubiquitylation and H3K4 and H3K79 methylation in regulating nontranscriptional processes are well established in yeast. An unanswered question is whether these histone modifications regulate similar cellular processes in humans. If so, then defects in chromatin signaling, independent of transcription, could contribute to diseases associated with alterations in histone-modifying enzymes. At present, studies aimed at understanding the oncogenic properties of MLL fusion proteins have focused on their abilities to regulate transcription. Likewise, the putative tumor suppressor function of RNF20 is assumed to be due to selective regulation of certain genes (Shema et al., 2008). However, it is possible that defects in the DNA damage response or chromosomal segregation might contribute to the oncogenic properties of MLL fusion proteins or participate in the transformed phenotype associated with depletion of RNF20. Indeed, RNF20 was recently shown to localize to sites of DNA DSBs to promote repair and maintain genome stability, a function that is apparently independent of transcriptional regulation. The importance of chromatin organization and reorganization for the regulation of gene expression and other DNA-templated processes cannot be argued. Defining how such changes are triggered by incoming signals is clearly important for understanding how cells respond to changes in their environment, developmental cues, or insults to genomic integrity. However, emerging studies indicate that chromatin is not simply an obstacle to gene transcription or DNA repair. Rather, it is an active participant in these processes that can provide real-time signals to facilitate, amplify, or terminate cellular responses. Given
the regulatory potential of modification crosstalk within histones and between histone and nonhistone proteins, coupled with ongoing definitions of vast networks of protein methylation, acetylation, and ubiquitylation events, our current view of signaling pathways as ''one-way streets'' that dead-end at chromatin is likely soon to be converted into a view of chromatin as an information hub that directs multilayered and multidirectional regulatory networks. Defining these networks will not only provide a greater understanding of biological processes, but will also provide entirely new game plans for combating complex human diseases that result from inappropriate signal transduction.

ACKNOWLEDGMENTS

We thank Becky Brooks for preparation of the manuscript, Chris Brown for graphics, and Mark Bedford and Boyko Atanassov for suggestions and insightful comments. This research is supported, in part, by grants from the National Institutes of Health (CA079648 to D.G.J. and GM096472 and GM067718 to S.R.D.) and through MD Anderson's Cancer Center Support Grant (CA016672).

REFERENCES

Bakkenist, C.J., and Kastan, M.B. (2003). DNA damage activates ATM through intermolecular autophosphorylation and dimer dissociation. Nature 421, 499–506.

Banin, S., Moyal, L., Shieh, S., Taya, Y., Anderson, C.W., Chessa, L., Smorodinsky, N.I., Prives, C., Reiss, Y., Shiloh, Y., and Ziv, Y. (1998). Enhanced phosphorylation of p53 by ATM in response to DNA damage. Science 281, 1674–1677.

Bekker-Jensen, S., Lukas, C., Kitagawa, R., Melander, F., Kastan, M.B., Bartek, J., and Lukas, J. (2006). Spatial organization of the mammalian genome surveillance machinery in response to DNA strand breaks. J. Cell Biol. 173, 195–206.

Biswas, A.K., and Johnson, D.G. (2012). Transcriptional and nontranscriptional functions of E2F1 in response to DNA damage. Cancer Res. 72, 13–17. Published online December 16, 2011. 10.1158/0008-5472.CAN-11-2196.

Burrows, A.E., and Elledge, S.J. (2008). How ATR turns on: TopBP1 goes on ATRIP with ATR. Genes Dev. 22, 1416–1421.

Butler, J.S., Koutelou, E., Schibler, A.C., and Dent, S.Y. (2012). Histone-modifying enzymes: regulators of developmental decisions and drivers of human disease. Epigenomics 4, 163–177.

Canman, C.E., Lim, D.S., Cimprich, K.A., Taya, Y., Tamai, K., Sakaguchi, K., Appella, E., Kastan, M.B., and Siliciano, J.D. (1998). Activation of the ATM kinase by ionizing radiation and phosphorylation of p53. Science 281, 1677–1679.

Chapman, J.R., and Jackson, S.P. (2008). Phospho-dependent interactions between NBS1 and MDC1 mediate chromatin retention of the MRN complex at sites of DNA damage. EMBO Rep. 9, 795–801.

Chernikova, S.B., Razorenova, O.V., Higgins, J.P., Sishc, B.J., Nicolau, M., Dorth, J.A., Chernikova, D.A., Kwok, S., Brooks, J.D., Bailey, S.M., et al. (2012). Deficiency in mammalian histone H2B ubiquitin ligase Bre1 (Rnf20/Rnf40) leads to replication stress and chromosomal instability. Cancer Res. 72, 2111–2119.

D'Amours, D., and Amon, A. (2004). At the interface between signaling and executing anaphase—Cdc14 and the FEAR network. Genes Dev. 18, 2581–2595.

Duran, A., Diaz-Meco, M.T., and Moscat, J. (2003). Essential role of RelA Ser311 phosphorylation by zetaPKC in NF-kappaB transcriptional activation. EMBO J. 22, 3910–3918.

Fischle, W., Wang, Y., and Allis, C.D. (2003). Binary switches and modification cassettes in histone biology and beyond. Nature 425, 475–479.

Fisher, C.L., and Fisher, A.G. (2011). Chromatin states in pluripotent, differentiated, and reprogrammed cells. Curr. Opin. Genet. Dev. 21, 140–146.

Game, J.C., and Chernikova, S.B. (2009). The role of RAD6 in recombinational repair, checkpoints and meiosis via histone modification. DNA Repair (Amst.) 8, 470–482.

Giannattasio, M., Lazzaro, F., Plevani, P., and Muzi-Falconi, M. (2005). The DNA damage checkpoint response requires histone H2B ubiquitination by Rad6-Bre1 and H3 methylation by Dot1. J. Biol. Chem. 280, 9879–9886.

Hwang, W.W., and Madhani, H.D. (2009). Nonredundant requirement for multiple histone modifications for the early anaphase release of the mitotic exit regulator Cdc14 from nucleolar chromatin. PLoS Genet. 5, e1000588.

Hyllus, D., Stein, C., Schnabel, K., Schiltz, E., Imhof, A., Dou, Y., Hsieh, J., and Bauer, U.M. (2007). PRMT6-mediated methylation of R2 in histone H3 antagonizes H3K4 trimethylation. Genes Dev. 21, 3369–3380.

Iberg, A.N., Espejo, A., Cheng, D., Kim, D., Michaud-Levesque, J., Richard, S., and Bedford, M.T. (2008). Arginine methylation of the histone H3 tail impedes effector binding. J. Biol. Chem. 283, 3006–3010.

Jackson, S.P., and Bartek, J. (2009). The DNA-damage response in human biology and disease. Nature 461, 1071–1078.

Kim, J., Guermah, M., McGinty, R.K., Lee, J.S., Tang, Z., Milne, T.A., Shilatifard, A., Muir, T.W., and Roeder, R.G. (2009). RAD6-mediated transcription-coupled H2B ubiquitylation directly stimulates H3K4 methylation in human cells. Cell 137, 459–471.

Kolas, N.K., Chapman, J.R., Nakada, S., Ylanko, J., Chahwan, R., Sweeney, F.D., Panier, S., Mendez, M., Wildenhain, J., Thomson, T.M., et al. (2007). Orchestration of the DNA-damage response by the RNF8 ubiquitin ligase. Science 318, 1637–1640.

Lampson, M.A., and Cheeseman, I.M. (2011). Sensing centromere tension: Aurora B and the regulation of kinetochore function. Trends Cell Biol. 21, 133–140.

Lan, F., Collins, R.E., De Cegli, R., Alpatov, R., Horton, J.R., Shi, X., Gozani, O., Cheng, X., and Shi, Y. (2007). Recognition of unmethylated histone H3 lysine 4 links BHC80 to LSD1-mediated gene repression. Nature 448, 718–722.

Laribee, R.N., Fuchs, S.M., and Strahl, B.D. (2007). H2B ubiquitylation in transcriptional control: a FACT-finding mission. Genes Dev. 21, 737–743.

Latham, J.A., Chosed, R.J., Wang, S., and Dent, S.Y. (2011). Chromatin signaling to kinetochores: transregulation of Dam1 methylation by histone H2B ubiquitination. Cell 146, 709–719.

Lee, J.S., Smith, E., and Shilatifard, A. (2010). The language of histone crosstalk. Cell 142, 682–685.

Levy, D., Kuo, A.J., Chang, Y., Schaefer, U., Kitson, C., Cheung, P., Espejo, A., Zee, B.M., Liu, C.L., Tangsombatvisit, S., et al. (2011). Lysine methylation of the NF-κB subunit RelA by SETD6 couples activity of the histone methyltransferase GLP at chromatin to tonic repression of NF-κB signaling. Nat. Immunol. 12, 29–36.

Lin, W.C., Lin, F.T., and Nevins, J.R. (2001). Selective induction of E2F1 in response to DNA damage, mediated by ATM-dependent phosphorylation. Genes Dev. 15, 1833–1844.

Long, F. (2012). Building strong bones: molecular regulation of the osteoblast lineage. Nat. Rev. Mol. Cell Biol. 13, 27–38.

Lovejoy, C.A., and Cortez, D. (2009). Common mechanisms of PIKK regulation. DNA Repair (Amst.) 8, 1004–1008.

Mohan, M., Lin, C., Guest, E., and Shilatifard, A. (2010). Licensed to elongate: a molecular mechanism for MLL-based leukaemogenesis. Nat. Rev. Cancer 10, 721–728.

Moyal, L., Lerenthal, Y., Gana-Weisz, M., Mass, G., So, S., Wang, S.Y., Eppink, B., Chung, Y.M., Shalev, G., Shema, E., et al. (2011). Requirement of ATM-dependent monoubiquitylation of histone H2B for timely repair of DNA double-strand breaks. Mol. Cell 41, 529–542.

Nakamura, K., Kato, A., Kobayashi, J., Yanagihara, H., Sakamoto, S., Oliveira, D.V., Shimada, M., Tauchi, H., Suzuki, H., Tashiro, S., et al. (2011). Regulation of homologous recombination by RNF20-dependent H2B ubiquitination. Mol. Cell 41, 515–528.

Nakanishi, S., Lee, J.S., Gardner, K.E., Gardner, J.M., Takahashi, Y.H., Chandrasekharan, M.B., Sun, Z.W., Osley, M.A., Strahl, B.D., Jaspersen, S.L., and Shilatifard, A. (2009). Histone H2BK123 monoubiquitination is the critical determinant for H3K4 and H3K79 trimethylation by COMPASS and Dot1. J. Cell Biol. 186, 371–377.

Ng, H.H., Xu, R.M., Zhang, Y., and Struhl, K. (2002). Ubiquitination of histone H2B by Rad6 is required for efficient Dot1-mediated methylation of histone H3 lysine 79. J. Biol. Chem. 277, 34655–34657.

Okada, Y., Feng, Q., Lin, Y., Jiang, Q., Li, Y., Coffield, V.M., Su, L., Xu, G., and Zhang, Y. (2005). hDOT1L links histone methylation to leukemogenesis. Cell 121, 167–178.

Rogakou, E.P., Pilch, D.R., Orr, A.H., Ivanova, V.S., and Bonner, W.M. (1998). DNA double-stranded breaks induce histone H2AX phosphorylation on serine 139. J. Biol. Chem. 273, 5858–5868.

Rust, H.L., and Thompson, P.R. (2011). Kinase consensus sequences: a breeding ground for crosstalk. ACS Chem. Biol. 6, 881–892.

Ruthenburg, A.J., Allis, C.D., and Wysocka, J. (2007). Methylation of lysine 4 on histone H3: intricacy of writing and reading a single epigenetic mark. Mol. Cell 25, 15–30.

Shema, E., Tirosh, I., Aylon, Y., Huang, J., Ye, C., Moskovits, N., Raver-Shapira, N., Minsky, N., Pirngruber, J., Tarcic, G., et al. (2008). The histone H2B-specific ubiquitin ligase RNF20/hBRE1 acts as a putative tumor suppressor through selective regulation of gene expression. Genes Dev. 22, 2664–2676.

Shilatifard, A. (2006). Chromatin modifications by methylation and ubiquitination: implications in the regulation of gene expression. Annu. Rev. Biochem. 75, 243–269.

Smith, E., and Shilatifard, A. (2010). The chromatin signaling pathway: diverse mechanisms of recruitment of histone-modifying enzymes and varied biological outcomes. Mol. Cell 40, 689–701.

Smith, E., Lin, C., and Shilatifard, A. (2011). The super elongation complex (SEC) and MLL in development and disease. Genes Dev. 25, 661–672.

Stucki, M., and Jackson, S.P. (2006). γH2AX and MDC1: anchoring the DNA-damage-response machinery to broken chromosomes. DNA Repair (Amst.) 5, 534–543.

Suganuma, T., and Workman, J.L. (2011). Signals and combinatorial functions of histone modifications. Annu. Rev. Biochem. 80, 473–499.

Tsai, W.W., Wang, Z., Yiu, T.T., Akdemir, K.C., Xia, W., Winter, S., Tsai, C.Y., Shi, X., Schwarzer, D., Plunkett, W., et al. (2010). TRIM24 links a non-canonical histone signature to breast cancer. Nature 468, 927–932.

Valenta, T., Hausmann, G., and Basler, K. (2012). The many faces and functions of β-catenin. EMBO J. 31, 2714–2736.

Wu, Z.H., Shi, Y., Tibbetts, R.S., and Miyamoto, S. (2006). Molecular linkage between the kinase ATM and NF-kappaB signaling in response to genotoxic stimuli. Science 311, 1141–1146.

Wyce, A., Xiao, T., Whelan, K.A., Kosman, C., Walter, W., Eick, D., Hughes, T.R., Krogan, N.J., Strahl, B.D., and Berger, S.L. (2007). H2B ubiquitylation acts as a barrier to Ctk1 nucleosomal recruitment prior to removal by Ubp8 within a SAGA-related complex. Mol. Cell 27, 275–288.

Yang, Y., Lu, Y., Espejo, A., Wu, J., Xu, W., Liang, S., and Bedford, M.T. (2010). TDRD3 is an effector molecule for arginine-methylated histone marks. Mol. Cell 40, 1016–1023.

Yun, M., Wu, J., Workman, J.L., and Li, B. (2011). Readers of histone modifications. Cell Res. 21, 564–578.

Zha, S., Boboila, C., and Alt, F.W. (2009). Mre11: roles in DNA repair beyond homologous recombination. Nat. Struct. Mol. Biol. 16, 798–800.

Zhang, K., Lin, W., Latham, J.A., Riefler, G.M., Schumacher, J.M., Chan, C., Tatchell, K., Hawke, D.H., Kobayashi, R., and Dent, S.Y. (2005). The Set1 methyltransferase opposes Ipl1 aurora kinase functions in chromosome segregation. Cell 122, 723–734.

Zhou, V.W., Goren, A., and Bernstein, B.E. (2011). Charting histone modifications and the functional organization of mammalian genomes. Nat. Rev. Genet. 12, 7–18.

Figure 2. H2Bub Passes Signals to Different Receivers. In yeast, Bre1-mediated ubiquitylation of H2B promotes H3K4 and Dam1 methylation by Set1 and H3K79 methylation by Dot1. Depending on its location, H2Bub can participate in the regulation of transcription, chromosome segregation, cell-cycle checkpoints, and mitotic exit.

An optimal hydrodynamic model for the normal Type IIP supernova 1999em


arXiv:astro-ph/0609642v1 23 Sep 2006
Astronomy & Astrophysics manuscript no. SN99em © ESO 2008
February 5, 2008

An optimal hydrodynamic model for the normal Type IIP supernova 1999em

Victor P. Utrobin 1,2

1 Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85741 Garching, Germany
2 Institute of Theoretical and Experimental Physics, B. Cheremushkinskaya St. 25, 117218 Moscow, Russia

Received 21 July 2006 / accepted 22 August 2006

ABSTRACT

Context. There is still no consensus about the progenitor masses of Type IIP supernovae.
Aims. We study the normal Type IIP SN 1999em in detail and compare it to the peculiar Type IIP SN 1987A.
Methods. We computed hydrodynamic and time-dependent atmosphere models, interpreting both the photometric and the spectroscopic observations simultaneously.
Results. The bolometric light curve of SN 1999em and the spectral evolution of its Hα line are consistent with a presupernova radius of 500 ± 200 R⊙, an ejecta mass of 19.0 ± 1.2 M⊙, an explosion energy of (1.3 ± 0.1)×10⁵¹ erg, and a radioactive ⁵⁶Ni mass of 0.036 ± 0.009 M⊙. A mutual mixing of hydrogen-rich and helium-rich matter in the inner layers of the ejecta guarantees a good fit of the calculated light curve to that observed. Based on the hydrodynamic models in the vicinity of the optimal model, we derive approximate relationships between the basic physical and observed parameters. The hydrodynamic and atmosphere models of SN 1999em are inconsistent with the short distance of 7.85 Mpc to the host galaxy.
Conclusions. We find that the hydrogen recombination in the atmosphere of the normal Type IIP SN 1999em, as well as most likely of other Type IIP supernovae at the photospheric epoch, is essentially a time-dependent phenomenon. It is also shown that in normal Type IIP supernovae the homologous expansion of the ejecta in the atmosphere takes place starting from nearly the third day after the supernova explosion. A comparison of SN 1999em with SN 1987A reveals two results that are very important for supernova theory. First, the comparability of the helium core masses and the explosion energies implies a unique explosion mechanism for these core-collapse supernovae. Second, the optimal model for SN 1999em is characterized by a weaker ⁵⁶Ni mixing, up to ≈660 km s⁻¹, compared to the moderate ⁵⁶Ni mixing up to ∼3000 km s⁻¹ in SN 1987A, hydrogen being mixed deeply downward to ∼650 km s⁻¹.

Key words. stars: supernovae: individual: SN 1999em – stars: supernovae: individual: SN 1987A – stars: supernovae: Type IIP supernovae

1. Introduction

The supernova (SN) 1999em was discovered by the Lick Observatory Supernova Search on October 29.44 UT in the nearly face-on SBc galaxy NGC 1637 (Li 1999). Detected shortly after the explosion at an unfiltered magnitude of ∼13.5, SN 1999em was bright enough to be observed well both photometrically and spectroscopically for more than 500 days (Hamuy et al. 2001; Leonard et al. 2002; Elmhamdi et al. 2003). SN 1999em was the first Type II-plateau supernova (SN IIP) detected at both X-ray and radio wavelengths, being the least radio luminous and one of the least X-ray luminous SNe (Pooley et al. 2002). The X-ray data indicated an interaction between the SN ejecta and a pre-SN wind with a low mass-loss rate of ∼2×10⁻⁶ M⊙ yr⁻¹. Leonard et al. (2001) presented the first spectropolarimetry of an SN IIP, based on the optical observations of SN 1999em during ∼160 days after the SN discovery, with the weak continuum polarization increasing from p ≈ 0.2% on day […] obtained the distance to SN 1999em of 12.5 ± 1.8 Mpc, in good agreement with the Cepheid distance scale. Finally, studying various ingredients entering the original EPM, Dessart & Hillier (2006) improved this method and also achieved better agreement with the Cepheid distance, with an estimate of 11.5 ± 1.0 Mpc.

Starting the investigation of SN 1999em, a normal SN IIP, it is impossible to pass by the well-known and well-studied SN 1987A in the Large Magellanic Cloud (LMC), a peculiar SN IIP. Recent progress in reproducing the bolometric light curve observed in SN 1987A
with a modern hydrodynamic model (Utrobin 2004), and in modelling the Hα profile and the Ba II 6142 Å line in SN 1987A at the photospheric phase with a time-dependent approach for the SN atmosphere (Utrobin & Chugai 2002, 2005), makes this experience very instructive for other SNe IIP. First, it turned out that a good agreement between the hydrodynamic models and the photometric observations of SN 1987A did not guarantee a correct description of this phenomenon as a whole. Second, the strength of the Hα line and its profile provided hard constraints on the hydrodynamic models. The most important lesson from this study of SN 1987A is that we have to take both the photometric and the spectroscopic observations into account to obtain an adequate hydrodynamic model (Utrobin 2005). Now such an approach should be applied to SN 1999em.

Here we present a comprehensive hydrodynamic study of SN 1999em complemented by the atmosphere model with time-dependent kinetics and energy balance. A brief description of the hydrodynamic model and the atmosphere model based on it is given in Sect. 2. We begin the study of SN 1999em with the construction of the optimal hydrodynamic model (Sect. 3). We then turn to the question of the distance to SN 1999em, short or long, at which the hydrodynamic model and the corresponding atmosphere model are consistent with the photometric and Hα line observations (Sect. 4). The time development of the optimal hydrodynamic model is presented in Sect. 5, and its general regularities in Sect. 6. The basic relationships between the physical and observed parameters for SNe IIP similar to the SN 1999em event are obtained in Sect. 7, while in Sect. 8 we address the comparison of SN 1999em with SN 1987A. In Sect. 9 we discuss our results from the theoretical and observational points of view. Finally, in Sect. 10 we summarize the results obtained.

We adopt here a distance of 11.7 Mpc (Leonard et al. 2003), a recession velocity of SN 1999em of 800 km s⁻¹ (Leonard et al. 2002), an explosion date of JD 2451476.77, and a total extinction A_V = 0.31 (Baron et al. 2000; Hamuy et al. 2001; Elmhamdi et al. 2003).

2. Supernova modelling and input physics

Keeping in mind the importance of a hydrodynamic study and of atmosphere modelling, including the time-dependent kinetics and energy balance, for the interpretation of the SN 1987A phenomenon, we use this approach to investigate SN 1999em.

A hydrodynamic model is computed in terms of radiation hydrodynamics in the one-group approximation, taking into account non-LTE effects on the average opacities and the thermal emissivity, effects of nonthermal ionization, and a contribution of lines to opacity, as in the case of SN 1987A (Utrobin 2004). Note that the bolometric luminosity of an SN is calculated by including the retardation and limb-darkening effects.

The atmosphere model includes the time-dependent ionization and excitation kinetics of hydrogen and other elements, the time-dependent kinetics of molecular hydrogen, and the time-dependent energy balance (Utrobin & Chugai 2005). The density distribution, chemical composition, radius of the photosphere, and effective temperature are provided by the corresponding hydrodynamic model. The resulting time-dependent structure of the atmosphere is then used to calculate synthetic spectra at selected epochs. The spectra are modelled by the Monte Carlo technique, assuming that the photosphere diffusively reflects the incident photons and that the line scattering is generally non-conservative and is described in terms of the line scattering albedo. Thomson scattering on free electrons and Rayleigh scattering on neutral hydrogen are also taken into account.

3. Optimal hydrodynamic model

Elmhamdi et al. (2003) constructed the "UBVRI" bolometric light curve of SN 1999em from the corresponding photometric data. To account for a possible contribution of the missing infrared bands, we add a value of 0.19 dex, taken from Elmhamdi et al. (2003), to the "UBVRI" luminosity and adopt the resultant light curve as the bolometric light curve of SN 1999em. Our aim
is to find an adequate hydrodynamic model that reproduces the photometric and spectroscopic observations of SN 1999em. To fit the bolometric light curve of SN 1999em, various hydrodynamic models were explored. The bolometric light curve is fitted by adjusting the pre-SN radius R_0, the ejecta mass M_env, and the explosion energy E, along with the density distribution in the pre-SN model and its chemical composition in the transition region between the hydrogen-rich envelope and the metal/helium core. The radioactive ⁵⁶Ni mass is reliably measured by the light-curve tail after the plateau phase. As the pre-SN model of SN 1999em, we consider non-evolutionary models similar to those of SN 1987A (Utrobin 2004), but for the outer layers we assume the standard solar composition with a mass fraction of hydrogen X = 0.735, helium Y = 0.248, and a metallicity Z = 0.017 (Grevesse & Sauval 1998), taking the normal spiral nature of the host galaxy NGC 1637 into account. The best version of such a fitting was obtained with the optimal model, which is characterized by the following basic parameters: a pre-SN radius of 500 R⊙, an ejecta mass of 19 M⊙, and an explosion energy of 1.3×10⁵¹ erg (model D11 in Table 1).

Table 1. Hydrodynamic models for distances of 7.85 and 11.7 Mpc. Columns: Model, R_0 (R⊙), M_env (M⊙), E (10⁵¹ erg), M_Ni (10⁻² M⊙), X, Z.

The density profile of the pre-SN model, consisting of a central white-dwarf-like core and an outer envelope of the size of a red supergiant, is shown in Fig. 1. In the calculations, the 1.58 M⊙ central core is removed from the computational mass grid and assumed to collapse to a neutron star, while the rest of the star is ejected by the SN explosion, which is modelled by a piston action near the edge of the central core. The pre-SN model has a heterogeneous chemical composition containing a 5.6 M⊙ helium core and an 11.9 M⊙ outer shell of the solar chemical composition (Fig. 2). Note that helium cores of up to about 8 M⊙ are consistent with the observations (Sect. 6.2). There is no sharp boundary between the hydrogen-rich and helium-rich layers in the ejecta of the optimal model. Hydrogen-rich material is mixed into the central region, and helium-rich material, in turn, is mixed outwards. It is evident that such a distribution of hydrogen and helium implies a strong mixing at the helium/hydrogen composition interface. The fact that the radioactive ⁵⁶Ni is confined to the innermost ejected layers suggests its weak mixing during the SN explosion.

Fig. 1. Density distribution with respect to interior mass (a) and radius (b) for the pre-SN model D11. The central core of 1.58 M⊙ is omitted.

Fig. 2. The mass fraction of hydrogen (solid line), helium (long dashed line), heavy elements (short dashed line), and radioactive ⁵⁶Ni (dotted line) in the ejecta of model D11.

In Fig. 3 we show the very good match between the bolometric light curve calculated for the optimal hydrodynamic model and the one observed for SN 1999em (Elmhamdi et al. 2003). Note that hereafter t_obs is the time in the observer's frame of reference. The model agrees well with the observed tail of the bolometric light curve for a total ⁵⁶Ni mass of 0.036 M⊙, the bulk of the radioactive ⁵⁶Ni being mixed in the velocity range ≤660 km s⁻¹ (Fig. 10) in order to reproduce the observed transition from the plateau to the tail.

Fig. 3. Comparison of the calculated bolometric light curve of model D11 (solid line) with the bolometric data of SN 1999em obtained by Elmhamdi et al. (2003) (open circles).

4. Hα profile: evidence against the short distance

It is quite clear that any well-observed SN should be described by a unique hydrodynamic model in combination with the atmosphere model based on it. The experience gained during the study of SN 1987A showed that such a combination of models has to fit not only the photometric but also the spectroscopic observations. We believe that adequate hydrodynamic and atmosphere models of SN 1999em are able to distinguish between the short and long distances. A distance of 7.85 Mpc, the average
value of the EPM distance estimates, is taken as the short distance, and the Cepheid distance of 11.7 Mpc is taken as the long distance.

As demonstrated above, we constructed the optimal hydrodynamic model for the long distance (model D11 in Table 1). The model reproduces the observed bolometric light curve of SN 1999em very well (Fig. 4a). In Fig. 4b the calculated expansion velocity at the photosphere level, the photospheric velocity, is compared with the radial velocities at maximum absorption of different spectral lines measured by Hamuy et al. (2001) and Leonard et al. (2002). The photospheric velocity of model D11 is consistent with the observed points, at least for the first 60 days. To verify the hydrodynamic model by matching the constraints from the spectral observations of SN 1999em, we examined the Hα profile on days 26.24 and 52.14 computed in the time-dependent approach.

Fig. 4. Hydrodynamic model D11 and the Hα line for the distance of 11.7 Mpc. Panel (a): the calculated bolometric light curve (solid line) compared with the observations of SN 1999em obtained by Elmhamdi et al. (2003) (open circles). Panel (b): the calculated photospheric velocity (solid line) and the radial velocities at maximum absorption of spectral lines measured by Hamuy et al. (2001) and Leonard et al. (2002) (open circles). Panel (c): Hα profiles, computed with the time-dependent approach (thick solid line) and with the steady-state model (dotted line), overplotted on the observed profile on day 26.24, as obtained by Leonard et al. (2002) (thin solid line). Panel (d): the same as panel (c) but for day 52.14.

In the time-dependent atmosphere model for SN 1987A, we considered two extreme cases to allow for the uncertainty of our approximation in describing the ultraviolet radiation field: (i) the photospheric brightness is black-body with the effective temperature (model A); (ii) the photospheric brightness corresponds to the observed spectrum (model B) (Utrobin & Chugai 2005). For SN 1999em the photospheric brightness in model B is black-body with the effective temperature and the corresponding brightness reduction taken from the SN 1987A observations.

The time-dependent approach with model B satisfactorily reproduces the strength of the Hα emission component on day 26.24, with some emission deficit near the maximum in comparison with what is observed (Fig. 4c). This deficit is not significant because model A, calculated for this phase and not plotted in Fig. 4c for the sake of clarity, gives an emission component with a relative flux of 3.1 at the maximum, which is much higher than the observed one, so the real situation is somewhere between these two cases and is closer to model B. Note that in SN 1987A the Hα emission component demonstrated the same behavior in the early phase (Utrobin & Chugai 2005). In contrast, the Hα absorption component calculated in model B is stronger than that observed on day 26.24 in SN 1999em. This discrepancy is presumably related to a poor description of the ultraviolet radiation at frequencies between the Balmer and Lyman edges. This radiation interacts with numerous metal lines and controls the populations of the hydrogen levels. It is very important that both the emission and absorption components of the Hα line calculated in model B extend over the whole range of the observed radial velocities from −15000 km s⁻¹ to 15000 km s⁻¹ (Fig. 4c). Unfortunately, the calculated absorption component runs above the observed one in the radial-velocity range between −15000 km s⁻¹ and −12500 km s⁻¹.

Fig. 5. Hydrodynamic model D07 and the Hα line for the distance of 7.85 Mpc. See the Fig. 4 legend for details.

On day 52.14 the emission component of the Hα line computed with the time-dependent approach with model B fits the observed one fairly well, while the absorption component is still stronger than the observed one (Fig. 4d). Thus, we may state that the above hydrodynamic and atmosphere models are in good agreement with the photometric and spectroscopic
observations of SN 1999em.

Now let us pay attention to the time-dependent effects in hydrogen lines of SN 1999em, a normal SN IIP. The time-dependent approach with model B reproduces the strength of the Hα line on day 26.24 and day 52.14 fairly well (Figs. 4c and 4d). In contrast, a steady-state model B demonstrates an extremely weak Hα line on days 26.24 and 52.14. This reflects the fact that the steady-state ionization is significantly lower than in the time-dependent model, while the electron temperature is too low for the collisional excitation of hydrogen. The radioactive ⁵⁶Ni is mixed too weakly to affect the ionization and excitation of hydrogen and other elements in the atmosphere at the plateau phase. Thus, it is possible to conclude that the hydrogen recombination in the atmosphere of SN 1999em during the whole plateau phase is essentially a time-dependent phenomenon.

Fig. 6. Evolution of a normal SN IIP illustrated by the bolometric light curve of the optimal model. Lettered dots mark the specific times in the evolution. The end time of the shock breakout phase, t1, is shown in Fig. 8a. The cross notes the moment of complete hydrogen recombination. The numbers indicate the total optical depth of the ejecta at the corresponding times.

In turn, the short distance results in a hydrodynamic model with the following parameters: a pre-SN radius of 375 R⊙, an ejecta mass of 16 M⊙, and an explosion energy of 6.86×10⁵⁰ erg (model D07 in Table 1). In this model the central core of 1.4 M⊙ is assumed to collapse to a neutron star. The hydrodynamic model fits the observed bolometric light curve of SN 1999em but for a ⁵⁶Ni mass of 0.0162 M⊙, most of the ⁵⁶Ni being mixed in the velocity range ≤580 km s⁻¹ (Fig. 5a). The photospheric velocity curve runs well below the observed points (Fig. 5b), and it might be expected that the spectral lines in this model would be narrower than those observed. Indeed, on days 26.24 and 52.14 both the emission and absorption
components of the Hα line computed with the time-dependent approach with model B are significantly narrower than those observed in SN 1999em (Figs. 5c and 5d). The hydrodynamic model D07 agrees fairly well with the observed bolometric light curve, but the relevant atmosphere model fails to reproduce the Hα profile observed in SN 1999em. We thus conclude that the short distance of 7.85 Mpc should be discarded.

5. Development of the optimal model

Although the major issues of light-curve theory were recognized a long time ago (Grassberg et al. 1971; Falk & Arnett 1977), it is useful to consider an SN outburst in a more detailed approach. This study of SN 1999em provides a good opportunity to examine the time development of a normal SN IIP. Figures 6 and 8a show the following stages in the observer time scale: a shock breakout (t_obs ≤ t1), an adiabatic cooling phase (t1 < t_obs ≤ t2), a phase of cooling and recombination wave (CRW) (t2 < t_obs ≤ t3), a phase of radiative diffusion cooling (t3 < t_obs ≤ t4), an exhaustion of radiation energy (t4 < t_obs ≤ t5), a plateau tail (t5 < t_obs ≤ t6), and a radioactive tail (t_obs > t6). In addition to the above stages there are two specific points: a transition from acceleration of the envelope matter to a homologous expansion, and a moment of complete hydrogen recombination. In the optimal model D11, the characteristic moments are t1 ≈ 0.93 days, t2 ≈ 18 days, t3 ≈ 94 days, t4 ≈ 116 days, t5 ≈ 124 days, and t6 ≈ 150 days, and the complete hydrogen recombination occurs at t_H ≈ 111.3 days. Below we consider the basic stages in the time development of the optimal model.

Fig. 7. Propagation of the shock through the ejected envelope of model D11. Velocity profiles with respect to interior mass are plotted at t = 0.0065, 0.0558, 0.1199, 0.2416, 0.6234, 0.8937 days, and t ≥ 10 days after the SN explosion.

5.1. Shock breakout

The explosion of the star is assumed to be triggered by a piston action near the edge of the central core immediately after epoch zero, t = 0. From here on, t is the time in
the comoving frame of reference. This energy release generates a strong shock wave that propagates towards the stellar surface. In moving out of the center, the shock wave heats the matter and accelerates it to velocities increasing outward and exceeding the local escape velocity. From t = 0.0065 days to t = 0.2416 days, the shock wave, propagating outward from the compact dense core of the pre-SN (Fig. 1), is attenuated slightly due to the spherical divergence (Fig. 7). It then reaches the outermost layers with a sharp decline in density, and after t = 0.6234 days it gains strength and accelerates due to the effect of hydrodynamic cumulation. Only a small portion of the star undergoes this acceleration and acquires a high velocity. For example, the layers with velocities exceeding 5000 km s⁻¹ have a mass of ≈0.478 M⊙.

Fig. 8. Shock breakout in model D11. The arrow indicates the moment of 0.8803 days when the velocity at the stellar surface reaches the escape velocity. Panel (a): the calculated bolometric light curve and the lettered dot marking the end time of the shock breakout phase. Panel (b): the calculated photospheric radius (solid line), the effective temperature (dashed line), and the color temperature (dotted line) as a function of the observer's time.

By day 0.8651 the shock wave reaches the stellar surface and then begins to heat the external layers, so that the color temperature jumps to 3.84×10⁵ K at day 0.8823, the effective temperature increases up to 1.76×10⁵ K at day 0.8943, and the bolometric luminosity rises accordingly up to 6.46×10⁴⁴ erg s⁻¹ at day 0.8943 (Fig. 8). Note that the maximum of the color temperature coincides closely with the moment of 0.8803 days when the velocity at the stellar surface reaches the escape velocity.

A very rapid rise of the bolometric luminosity to maximum, starting at day 0.8651, instantly changes the growth of the photospheric radius into its reduction because of the intense radiative losses of energy in the outermost layers (Fig. 8b). At day 0.8691 these layers begin to move outward, and the additional cooling by adiabatic expansion makes the reduction of the photospheric radius more noticeable. At the same time the envelope expansion is favorable to photon diffusion, decreasing the characteristic diffusion time. The photon diffusion eventually dominates, stops this reduction at day 0.8802, and then, along with the envelope expansion, blows the photospheric radius outward.

When the velocity at the stellar surface exceeds the escape velocity, the outside layers of the star begin to cool rapidly, the color temperature begins to decrease from its maximum value, and both the effective temperature and the luminosity decrease somewhat later. A narrow peak in the bolometric luminosity forms as a result (Fig. 8a). The peak has a width of about 0.02 days at the half level of the luminosity maximum. Most of its radiation is emitted in an ultraviolet flash. The total number of ionizing photons above 13.598 eV for the whole outburst is 2.768×10⁵⁸. During the first 1.234 days the number of ionizing photons is 90% of the total number, and the radiated energy adds up to 1.59×10⁴⁸ erg. We conditionally define the transition time between the shock breakout and the adiabatic cooling phase, t1, as the time at which the bolometric luminosity drops by one dex from its maximum value, with the understanding that the adiabatic cooling becomes essential soon after the onset of the envelope expansion. In the optimal model this transition time is nearly 0.93 days.

Scattering processes are fundamentally nonlocal and result in the color temperature exceeding the effective one when they are dominant (Mihalas 1978; Sobolev 1980). During the shock breakout, the adiabatic cooling phase, and the CRW phase, scattering processes dominate those of true absorption near the photospheric level in the ejecta of the optimal model and, as a consequence, lead to a color temperature that is considerably higher than the effective temperature. Figure
8b shows that the maximum of the color temperature coincides very closely in time with the local minimum of the photo-spheric radius at day 0.8802and comes before the maximum of the e ffective temperature and the bolometric luminosity.5.2.Adiabatic cooling phaseDuring the shock passage throughout the star,the gas is heated up to about 105K and,as a consequence,is totally ionized.Both the internal gas energy and the radiation energy increase in the SN envelope.After the shock breakout,a dominant pro-cess in the subphotospheric,optically thick layers of the ex-pelled envelope is the cooling by adiabatic expansion.The adi-abatic losses drastically reduce the stored energy and com-pletely determine the evolution of the bolometric luminosity during the adiabatic cooling phase (Figs.6and 8a).Such be-havior of the luminosity lasts till the gas and radiation temper-atures drop to the critical value in the subphotospheric layers,and a recombination of hydrogen,the most abundant element in the ejected envelope,starts.In these layers hydrogen becomes partially ionized by around day 18.For the adiabatic cooling phase,the color and e ffective temperatures drop rapidly from 1.35×105K and 9.32×104K,respectively,at day 0.93to 6650K and 6560K at day 18.Their ratio also reduces as the contribution of scattering processes to the opacity decreases.In an optically thick medium,the strong shock wave prop-agates almost adiabatically throughout the stellar matter.When the shock wave emerges on the pre-SN surface,the adiabatic regime is broken and transforms into the isothermal regime.This transformation takes place during the adiabatic cooling phase and gives rise to a thin,dense shell at the outer edge of the ejected envelope.The dense shell is arising from day 0.945to day 1.022,reaching the density contrast of ∼210at a veloc-ity of ∼12300km c −1.The formation of the dense shell starts in the optically thick medium at the optical depth of ∼8,and ends in semi-transparent medium at the 
optical depth of ∼1.By day 18the shell accelerates to a velocity of ∼13400km c −1re-ducing the density contrast to a value of ∼2.5.The matter in the dense shell is subject to the Rayleigh-Taylor instability,which may result in the strong mixing of matter (Falk &Arnett 1977).The latter,in turn,may prevent the thin shell-like structure from forming in the outer layers of the envelope.V .P.Utrobin:Type IIP supernova 1999em7Fig.9.Time dependence of the velocity of di fferent mass shells (solid line )in the ejected envelope of model D11and the photospheric ve-locity (dashed line ).Full and open circles indicate moments when a velocity of the mass shell reaches 99%and 99.9%,respectively,of its terminal velocity.The arrow marks the mass shell that has undergone an additional acceleration due to the expansion opacity.5.3.Homologous expansionThe SN explosion causes an acceleration of the envelope mat-ter.At late times,when the acceleration of the ejecta be-comes negligible,the envelope matter expands homologously.Evidently,there is a transition from the acceleration to the ho-mologous expansion.This transition for di fferent mass shells occurs at di fferent times:the deeper the layer,the later the tran-sition (Fig.9).Thus,the transition time of the whole expelled envelope is determined by the deepest layer.If we consider physical processes in an SN atmosphere,the relevant transi-tion time is given by the photospheric location.It is clear that the transition time for the SN atmosphere is much shorter than for the whole envelope.For instance,the atmosphere layers ex-ceed 99%and 99.9%of their terminal velocities starting from nearly day 2.8and day 11,respectively (Fig.9).It is interesting that a monotonic increase in the transition time with embedding inward the ejecta is broken at the 99.9%level of the terminal velocity.There is a prominent feature at a velocity of ∼14500km c −1marked in Fig.9.As we show later in Sect.6.5,it is the result of an additional acceleration due to 
the resonance scattering in numerous metal lines.Homologous expansion is characterized by a density dis-tribution frozen in velocity space and scaled in time as ∝t −3.Such a density profile for the optimal model is shown at t =50days in Fig.10.In the inner region of the ejected envelope there is a dense shell with a density contrast of ∼20produced by the 56Ni bubble at a velocity of ∼660km c −1.The density dis-tribution above the 56Ni bubble shell is nearly uniform up to ∼2000km c −1.The outer layers with velocities in therangesFig.10.The density and the 56Ni mass fraction as a function of the velocity for model D11at t =50days.Dot-dash lines are the density distribution fits ρ∝v −7and ρ∝v −9.5.3000–6900km s −1and 6900–13000km s −1may be fitted by an e ffective index n =−∂ln ρ/∂ln r of 7and 9.5,respectively,as seen from the density distribution with respect to the expan-sion velocity (Fig.10).At a velocity of ∼13000km c −1,there is another dense shell with a density contrast of ∼3originated during the adiabatic cooling phase.Note that both dense shells slightly change their density profiles with time.For example,the evolution of the 56Ni bubble shell is clearly demonstrated by Fig.11a.5.4.Cooling and recombination waveAs the envelope expands,cooling by radiation –a cooling and recombination wave –occurs and completely dominates the lu-minosity of the SN by about day 18.From this time to nearly day 94the bolometric luminosity is mainly determined by properties of the CRW (for details see Grassberg &Nadyozhin 1976;Imshennik &Nadyozhin 1989).A main property of the CRW consists in generating virtually the entire energy flux car-ried away by radiation within its front.This radiation flux ex-hausts the thermal and recombination energy of cooling matter by means of recombination.At the inner edge of the CRW,the radiation flux is negligibly small and matter is ionized,but at the outer edge the flux is equal to the luminosity of the star,and matter completely 
recombines.During the CRW phase the evolution of the density in the ejecta illustrates the homologous expansion (Fig.11a).The be-havior of the gas and radiation temperatures reflects two impor-tant facts (Fig.11b).First,in the subphotospheric layers in an optically thick medium,under conditions very close to LTE,the radiation temperature is virtually equal to the gas temperature.Second,in the transparent layers,beyond the photosphere,the。


- Purify your compound (using conventional crystallization and/or other purification steps)
- Consider the empirically established physical properties of your compound – sensitivities, thermal stability, etc.

- Presence of benzene can help crystal growth
Limiting the Resolution of the Data

- Limited θmax = 8.21°; resolution = 3.5 Å
- Metal electron density is “bleeding” into the traces of the ligand's electron density

Limiting the Resolution of the Data

- Limited θmax = 25.0°; resolution = 0.84 Å
- Peaks are beginning to flatten out
- Atomic positions are still easily resolvable
- IUCr recommended minimum resolution
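The θmax → resolution pairs quoted on these slides follow Bragg's law, d_min = λ / (2 sin θmax). A minimal sketch, assuming Mo Kα radiation (λ = 0.71073 Å, a common choice for small-molecule work; the wavelength is not stated on the slides):

```python
import math

MO_K_ALPHA = 0.71073  # Mo K-alpha wavelength in angstroms (assumed, not stated on the slides)

def resolution(theta_max_deg: float, wavelength: float = MO_K_ALPHA) -> float:
    """Bragg's law: d_min = lambda / (2 * sin(theta_max))."""
    return wavelength / (2.0 * math.sin(math.radians(theta_max_deg)))

# Reproduces the high-angle pairs quoted on the slides:
print(f"theta_max = 25.00 deg -> d = {resolution(25.0):.2f} A")   # 0.84 A
print(f"theta_max = 32.35 deg -> d = {resolution(32.35):.2f} A")  # 0.66 A
```

Under this assumed wavelength the 25.0° → 0.84 Å and 32.35° → 0.66 Å pairs check out exactly; it is a quick way to see why pushing the detector to higher θmax buys finer resolution.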
Limiting the Resolution of the Data

- Limited θmax = 7.18°; resolution = 4.0 Å
- Metal position is only barely above background
- No trace of ligand

Purify Your Compound

- Impure samples do not recrystallize as well as pure samples
- Recrystallization minimizes the presence of foreign insoluble material, which would otherwise increase the number of nucleating sites
The Right Attitude toward Crystal Growing for X-ray Analysis

- Growing X-ray quality crystals requires care and attention to detail
- Don't treat crystal growing in an offhand or casual way (forget what you learned in undergraduate organic chemistry lab!)

Effect of Limiting the Resolution of the Data

- Electron density map using all available data (θmax = 32.35°); resolution = 0.66 Å
- All atomic positions are easily resolved

A “good” crystal:
- is 0.1-0.4 mm in at least 2 of its dimensions
- exhibits a high degree of internal order, as evidenced by the presence of an X-ray diffraction pattern
Solubility Profile for Compound X

[Bar chart: relative solubility of Compound X (scale 0-9) in CH3CN, C6H6, THF, acetone, EtOH, MeOH, CH2Cl2, and Et2O]
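A solubility profile like this one can drive solvent selection directly. A sketch with hypothetical 0-9 scores (illustrative only; the actual chart values are not recoverable from this text): moderately solubilizing solvents are kept for growth, near-insoluble ones become antisolvent candidates.

```python
# Hypothetical 0-9 solubility scores for "Compound X" -- illustrative values,
# NOT read from the actual chart.
solubility = {
    "CH3CN": 8, "C6H6": 5, "THF": 7, "Acetone": 6,
    "EtOH": 3, "MeOH": 4, "CH2Cl2": 7, "Et2O": 1,
}

# Moderate solubility (avoiding supersaturation) is best for slow crystal growth;
# near-insoluble solvents are candidate antisolvents for vapor diffusion or layering.
growth_solvents = sorted(s for s, v in solubility.items() if 3 <= v <= 6)
antisolvents = sorted(s for s, v in solubility.items() if v <= 2)

print("growth solvents:", growth_solvents)       # ['Acetone', 'C6H6', 'EtOH', 'MeOH']
print("antisolvent candidates:", antisolvents)   # ['Et2O']
```

The thresholds (3-6 for growth, ≤2 for antisolvent) are arbitrary cutoffs on this hypothetical scale; the point is that the chart, once tabulated, tells you which pairings to try first.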
Use CLEAN Glassware

- Clean glassware should sheet water uniformly
- Use a KOH/EtOH bath or aqua regia to initially clean glassware, then rinse
- Follow with soap & water washing
- Follow with an acetone or MeOH rinse
Limiting the Resolution of the Data

- Limited θmax = 9.59°; resolution = 3.0 Å
- Only metal position is discernible
- Ligand is completely “washed out”

- Successive crystallizations purify the compound
- Always use recrystallized material when setting up a crystal growing attempt
Solubility Profile
- Very often, but not always, shows regular faces and edges
Why do you need good crystals anyway?

- Quality of sample is characterized by the maximum diffraction angle (θ), also expressed as “resolution” (Å)
- The larger the maximum diffraction angle, the better the resolution and the greater the number of data (which is necessary to adequately model the structure)
- Discerning individual atomic positions requires data resolution better than the bond lengths between the atoms (e.g. C-C = 1.54 Å)
Solvent Considerations

- Moderate solubility is best (avoid supersaturation)
- Like dissolves like
- Hydrogen bonding can help or hinder crystallization. Experiment!
- Oven drying
Parallel Crystal Growing Attempts

- Combine knowledge of the solubility profile with crystal growing techniques
- Set up simultaneous crystal growing experiments
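The parallel set-ups can be enumerated as a simple grid over solvent/antisolvent pairs and growth techniques. A sketch with hypothetical picks (substitute solvents and techniques from your own solubility profile):

```python
from itertools import product

# Hypothetical picks -- substitute entries from your own solubility profile.
good_solvents = ["THF", "CH2Cl2"]        # moderate solubility
antisolvents = ["Et2O", "hexanes"]       # very low solubility
techniques = ["slow evaporation", "vapor diffusion", "layering"]

# One vial per (solvent, antisolvent, technique) combination, run simultaneously.
attempts = list(product(good_solvents, antisolvents, techniques))
for i, (solvent, anti, technique) in enumerate(attempts, start=1):
    print(f"vial {i:2d}: {technique} with {solvent} / {anti}")
print(f"{len(attempts)} simultaneous attempts")  # 12
```

Running all combinations at once is cheap in material per vial and, per the slides' advice, maximizes the chance that at least one condition yields X-ray quality crystals.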
Factors Affecting Crystallization

- Solvent: moderate solubility is best. Supersaturation leads to sudden precipitation and smaller crystal size
- Nucleation: fewer nucleation sites are better. Too many nucleation sites (e.g. dust, hairs) lower the average crystal size
- Mechanics: mechanical disturbances are bad
- Time: slow crystallization is better than fast; faster crystallization gives a higher chance of lower-quality crystals
General Approach to Growing X-ray Quality Crystals

- Treat it like its own miniature research project
- Don't try to skimp on the amount of material when growing crystals
- Develop a solubility profile of your compound
- Use CLEAN glassware as crystal growing vessels
- Set up crystal growing attempts in parallel utilizing different conditions
Limiting the Resolution of the Data

- Limited θmax = 11.54°; resolution = 2.5 Å
- Metal position still discernible
- Individual atomic positions for ligand have been lost
- Loss of chemically useful information