$\pi$-$K$ scattering lengths at finite temperature in the Nambu--Jona-Lasinio model


White-light supercontinuum generation in normally dispersive optical fiber using original multi-wavelength pumping system


Received 15 July 2004; revised 31 August 2004; accepted 3 September 2004
© 2004 Optical Society of America
OCIS codes: (190.4370) Nonlinear optics, fibers; (190.1900) Diagnostic applications of nonlinear optics
White-light supercontinuum generation in normally dispersive optical fiber using original multi-wavelength pumping system
Pierre-Alain Champert, Vincent Couderc, Philippe Leproux, Sébastien Février, Vincent Tombelaine, Laurent Labonté, Philippe Roy and Claude

Thresholded and Optimized Histogram Equalization for contrast enhancement of images


Thresholded and Optimized Histogram Equalization for contrast enhancement of images

P. Shanmugavadivu (a,*), K. Balasubramanian (b)
(a) Department of Computer Science and Applications, Gandhigram Rural Institute – Deemed University, Dindigul, Tamil Nadu, India
(b) Department of Computer Applications, PSNA College of Engineering and Technology, Dindigul, Tamil Nadu, India
* Corresponding author. Tel.: +91 9443736780. E-mail addresses: psvadivu67@ (P. Shanmugavadivu), ksbala75@ (K. Balasubramanian).

Article history: Available online 20 July 2013

Abstract
A novel technique, Thresholded and Optimized Histogram Equalization (TOHE), is presented in this paper for enhancing the contrast of an input image while preserving its essential details. The central idea is to first segment the input image histogram into two using Otsu's threshold, based on which a set of weighing constraints is formulated. Depending on the histogram pattern of the input image, a decision is made whether to apply those constraints to one of the sub-histograms or to both. The two sub-histograms are then equalized independently and their union produces the contrast-enhanced output image. While formulating the weighing constraints, Particle Swarm Optimization (PSO) is employed to find the optimal constraints that maximize the degree of contrast enhancement. The technique is shown to have an edge over contemporary methods in terms of Entropy and Contrast Improvement Index.
© 2013 Elsevier Ltd. All rights reserved.

1. Introduction
Contrast enhancement techniques are used in image and video processing to achieve better visual interpretation. In general, histogram equalization based contrast enhancement is attained through the redistribution of the intensity values of an input image, and histogram modification is the underlying strategy in most contrast enhancement techniques. Histogram equalization (HE) is one of the most widely used techniques, owing to its simplicity and effectiveness [1]. HE techniques use the linear cumulative histogram of an input image and distribute its pixel values over its dynamic intensity range. HE-based enhancement finds applications in medical image processing, speech recognition, texture synthesis, satellite image processing, etc.

HE methods can be categorized as global and local. Global HE methods improve image quality by normalizing the distribution of intensities over the dynamic range using the histogram of the entire image. Histogram equalization itself is the classic example of this approach [1]: the intensity distribution is manipulated through its Cumulative Distribution Function (CDF) so that the resultant image has an approximately linear distribution of intensities. Because HE modifies the mean of the original image, it tends to introduce a washed-out effect in the output image.

Local Histogram Equalization (LHE) methods use the histogram statistics of the neighbourhood pixels of an image for equalization. These techniques usually divide the original image into several non-overlapping sub-blocks, perform histogram equalization on each sub-block, and merge the sub-blocks using bilinear interpolation. The major drawback of these methods is the introduction of a checkerboard effect which appears near the boundaries of the sub-blocks.
Histogram Specification (HS) is another enhancement method in which the expected output histogram can be controlled by specifying the desired output histogram [1]. However, specifying the output histogram pattern is not a simple task, as it varies from image to image.

In this paper, a novel HE-based method, Thresholded and Optimized Histogram Equalization (TOHE), is proposed. It uses Otsu's method to perform histogram thresholding; with respect to Otsu's threshold, a set of weighing constraints is formulated and applied to the sub-histograms. The weighing constraints are optimized using Particle Swarm Optimization (PSO), a population-based optimization technique. TOHE is shown to exhibit better brightness preservation and contrast enhancement.

In Section 2, the traditional HE and a few contemporary HE-based methods are described. Section 3 presents the principle of the proposed technique, TOHE, the image enhancement measures and the PSO algorithm. Section 4 discusses the results and Section 5 gives the conclusion.

2. Review of histogram equalization methods
The traditional histogram equalization technique [1] is described below. Consider an input image F(i, j) with a total of n pixels in the gray-level range [X_0, X_{N-1}]. The probability density function P(r_k) for the level r_k is

  P(r_k) = n_k / n,   k = 0, 1, ..., N-1,   (1)

where n_k is the frequency of occurrence of the level r_k in the input image and n is the total number of pixels. A plot of n_k against r_k is the histogram of the image F. Based on Eq. (1), the cumulative density function is

  C(r_k) = sum_{i=0}^{k} P(r_i).   (2)

HE maps the image onto the entire dynamic range [X_0, X_{N-1}] using the cumulative density function:

  f(X) = X_0 + (X_{N-1} - X_0) C(X).   (3)

Thus, HE flattens the histogram of an image and results in a significant change in the brightness. A sketch of this baseline mapping is given below.
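As a concrete illustration of Eqs. (1)-(3), the following MATLAB sketch implements the baseline global HE mapping. The function name and the assumption of an 8-bit grayscale input are illustrative and not part of the original paper.

% Classical global histogram equalization, following Eqs. (1)-(3).
% Assumes an 8-bit grayscale image F with values in 0..255.
function G = global_he(F)
    F  = double(F);
    N  = 256;                              % gray-level range [X0, X_{N-1}]
    n  = numel(F);
    nk = accumarray(F(:) + 1, 1, [N 1]);   % histogram: frequency n_k of each level r_k
    P  = nk / n;                           % PDF, Eq. (1)
    C  = cumsum(P);                        % CDF, Eq. (2)
    X0 = 0;  XN1 = N - 1;
    map = X0 + (XN1 - X0) .* C;            % HE transfer function, Eq. (3)
    G  = reshape(round(map(F(:) + 1)), size(F));  % apply the mapping pixel-wise
end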
A HE-based brightness preservation method known as Brightness Preserving Bi-Histogram Equalization (BBHE) was proposed by Kim [2]. BBHE first segments the histogram of the input image into two at its mean, one part ranging from the minimum gray level to the mean and the other from the mean to the maximum, and then equalizes the two histograms independently. BBHE has been shown to preserve the original brightness to a certain extent [3]. Wan, Chen and Zhang (1999) proposed equal-area Dualistic Sub-Image Histogram Equalization (DSIHE), an extension of BBHE [4] that differs only in the segmentation step: the input image is segmented at the median instead of the mean. This method is suitable only for images whose intensity distribution is not uniform, and its brightness-preserving ability is not significant. Minimum Mean Brightness Error Bi-Histogram Equalization (MMBEBHE), another extension of BBHE proposed by Chen and Ramli (2003), performs the histogram separation at the threshold level that yields the minimum difference between input and output mean, called the Absolute Mean Brightness Error (AMBE) [5]. This technique is also not free from undesirable effects.

Recursive Mean-Separate Histogram Equalization (RMSHE), proposed by Chen and Ramli (2003), partitions the histogram of the given image recursively [6]. Each segment is equalized independently and the union of all segments gives the contrast-enhanced output. This technique has been shown to be a better method among the recursive partitioning approaches. Sim, Tso and Tan (2007) proposed a similar method called Recursive Sub-Image Histogram Equalization (RSIHE) [7]. It has the same characteristics as RMSHE except that it separates the histogram at the gray level whose cumulative probability density equals 0.5, whereas RMSHE uses mean separation; RSIHE is shown to have an edge over RMSHE. However, the recursive procedure increases the computational complexity, and the resultant image becomes very similar to the original input image as the recursion level increases. Moreover, finding a generic optimal level of recursion applicable to all types of images remains a challenge for all of these methods.

A fast and effective method for video and image contrast enhancement, known as Weighted Thresholded HE (WTHE), was proposed in [8]. It provides an adaptive mechanism to control the enhancement process, offering adaptivity to different images and ease of control, which are difficult to achieve in GHE-based enhancement methods. In this method, the probability density function of an image is modified by weighing and thresholding prior to HE, and a mean adjustment factor is then added to normalize the luminance changes. Two more weighing techniques, Weight Clustering HE (WCHE) [9] and Recursively Separated and Weighted HE (RSWHE) [10], were developed in 2008; both use different weighing principles and have their own merits. Ibrahim and Kong proposed Sub-Regions Histogram Equalization (SRHE) [11], which partitions the input image based on its Gaussian-filtered, smoothed intensity values and outputs a sharpened image. Zuo, Chen and Sui recently developed Range Limited Bi-Histogram Equalization (RLBHE) [12], in which the input histogram is divided into two independent sub-histograms by a threshold that minimizes the intra-class variance; the range of the equalized image is then chosen to yield the minimum absolute mean brightness error between the original image and the equalized one.

3. Thresholded and Optimized Histogram Equalization (TOHE)
The proposed TOHE accomplishes contrast enhancement of input images in three distinct phases:
Phase I: segmentation of the input image's histogram based on Otsu's threshold.
Phase II: development of weighing constraints with respect to the threshold.
Phase III: optimization of the constraints using Particle Swarm Optimization (PSO).

3.1. Segmentation of the input image's histogram based on Otsu's threshold
Thresholding is an ideal method for image segmentation. A threshold divides the input image's histogram into two parts: the lower gray levels of the object and the higher gray levels of the background. The target region and the background can then be equalized separately so that the contrast of both is effectively improved. From a pattern recognition perspective, the optimal threshold should best separate the target class from the background class; this performance is characterized by the intra-class variance. Otsu's method [13] performs the thresholding automatically based on the histogram shape of the image. It assumes the image to be thresholded contains two classes of pixels (e.g., foreground and background) and calculates the optimum threshold separating them by exhaustively searching for the threshold that maximizes the between-class variance

  sigma^2(t) = W_L (E(X_L) - E(X))^2 + W_U (E(X_U) - E(X))^2,   (4)

where E(X_L) and E(X_U) are the average brightness of the two sub-images thresholded by t, E(X) is the mean brightness of the whole image, and W_L and W_U are the cumulative probabilities of class occurrence,

  W_L = sum_{i=0}^{t} p_i,   (5)
  W_U = sum_{i=t+1}^{N-1} p_i.   (6)

For bi-level thresholding, the optimal threshold t* is chosen so as to maximize the between-class variance:

  t* = argmax_{0 <= t < N-1} sigma^2(t).   (7)
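A minimal MATLAB sketch of the exhaustive search of Eqs. (4)-(7) follows; it assumes the image PDF of Eq. (1) is supplied as a vector, and the function name is illustrative only.

% Otsu's threshold by exhaustive search over t, maximizing the
% between-class variance of Eq. (4). P is the image PDF from Eq. (1).
function tstar = otsu_threshold(P)
    P = P(:);
    N = numel(P);                        % number of gray levels
    levels = (0:N-1)';
    EX = sum(levels .* P);               % mean brightness of the whole image, E(X)
    best = -Inf;  tstar = 0;
    for t = 0:N-2
        WL = sum(P(1:t+1));              % Eq. (5)
        WU = sum(P(t+2:N));              % Eq. (6)
        if WL == 0 || WU == 0, continue; end
        EXL = sum(levels(1:t+1) .* P(1:t+1)) / WL;    % class mean E(X_L)
        EXU = sum(levels(t+2:N) .* P(t+2:N)) / WU;    % class mean E(X_U)
        sigma2 = WL*(EXL - EX)^2 + WU*(EXU - EX)^2;   % Eq. (4)
        if sigma2 > best, best = sigma2; tstar = t; end
    end
end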
3.2. Development of weighing constraints with respect to the threshold
The input image F(i, j) is segmented into two sub-images, F_L(i, j) and F_U(i, j), using the optimal threshold t* obtained by Otsu's method. The Probability Density Functions (PDFs) P_L(r_k) and P_U(r_k) of the lower and upper sub-images are then computed, together with their mean PDFs m_L and m_U.

3.2.1. Constraint for the lower sub-image
The PDF of the lower sub-image is modified using the transformation function T(.) with the following constraints:

  P_LC(r_k) = T(P_L(r_k)) =
    alpha                                   if P_L(r_k) > alpha,
    ((P_L(r_k) - beta)/alpha)^a * alpha     if beta <= P_L(r_k) <= alpha,
    0                                       if P_L(r_k) < beta,   (8)

where alpha = b * max(P_L(r_k)) with 0.1 < b < 1.0, beta = 0.0001, and a is the power factor, 0.1 < a < 1.0. The mean PDF of the constrained lower sub-image, m_LC, is calculated, the mean error m_eL = m_LC - m_L is found, and m_eL is added to P_LC(r_k). Finally, the Cumulative Distribution Function C_L(F_L(i, j)) is computed using P_LC(r_k) and the HE procedure is applied:

  F'_L(i, j) = X_0 + (t* - X_0) * C_L(F_L(i, j)).   (9)

The original PDFs of the lower sub-image are thus clamped to the upper threshold alpha and the lower threshold beta. PDF values higher than alpha are normalized to alpha, based on the maximum probability level of this sub-image; beta is fixed as low as possible (0.0001), since the contribution of intensities with such small PDFs is negligible in controlling the enhancement process.

3.2.2. Constraint for the upper sub-image
Analogously, the following constraint is applied to the PDF of the upper sub-image:

  P_UC(r_k) = T(P_U(r_k)) =
    delta                                   if P_U(r_k) > delta,
    ((P_U(r_k) - phi)/delta)^c * delta      if phi <= P_U(r_k) <= delta,
    phi                                     if P_U(r_k) < phi,   (10)

where delta = d * max(P_U(r_k)) with 0.1 < d < 1.0, phi = mean(P_U(r_k)) and c is the power factor, 0.1 < c < 1.0. The mean PDF of the constrained upper sub-image, m_UC, and the mean error m_eU = m_UC - m_U are calculated, and m_eU is added to P_UC(r_k). The CDF C_U(F_U(i, j)) is computed using P_UC(r_k) and the HE procedure is applied:

  F'_U(i, j) = (t* + 1) + (X_{N-1} - (t* + 1)) * C_U(F_U(i, j)).   (11)

The PDFs of the upper sub-image are clamped with respect to the upper threshold delta and the lower threshold phi. The value of phi is fixed at the mean PDF of the upper sub-image so that the mean of the output image does not degrade much with respect to the input image. PDF values higher than delta are normalized to delta based on the maximum probability level of this sub-image.

The upper-limit clamping procedure in both sub-images avoids the dominance of high probabilities when allocating the output dynamic range. The values of the controlling parameters b and d are fixed in the range 0.1 to 1.0; when those values go beyond this limit, the image gets over-enhanced. The parameters a and c are power factors that control the degree of enhancement. When a and c are less than 1.0, the less probable levels in the corresponding sub-images are reallocated towards the more probable levels, so the important visual details of the entire input image are preserved. As these power factors gradually approach 1.0, the proposed method tends to behave like traditional HE. When they are greater than 1.0, more weight is shifted to the high-probability levels and TOHE yields an even stronger effect than traditional HE; this results in over-enhancement, yet it is still useful in specific applications where the levels with high probabilities (e.g., the background) need to be enhanced with extra strength.
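The clamping transform for the lower sub-image can be sketched in MATLAB as below. This is an illustrative reconstruction of Eq. (8) (the upper sub-image transform of Eq. (10) is analogous, with phi as the lower clamp); the function name is hypothetical and the sign of the mean-error adjustment is chosen so that the sub-histogram mean is preserved, as intended in Section 3.2.1.

% Weighing constraint of Eq. (8) applied to the lower sub-histogram PL.
% b: clamp scaling factor (0.1 < b < 1.0), a: power factor (0.1 < a < 1.0).
function PLC = clamp_pdf(PL, b, a)
    alphaC = b * max(PL);                  % upper clamp, alpha = b * max(P_L(r_k))
    betaC  = 1e-4;                         % lower clamp, beta = 0.0001
    PLC = zeros(size(PL));
    mid = (PL >= betaC) & (PL <= alphaC);
    PLC(PL > alphaC) = alphaC;                                   % clamp dominant probabilities
    PLC(mid) = ((PL(mid) - betaC) ./ alphaC).^a .* alphaC;       % power-law reweighting, Eq. (8)
    % entries with PL < betaC stay 0 (their contribution is negligible)
    PLC = PLC + (mean(PL) - mean(PLC));    % mean-error adjustment (sign chosen to preserve the mean)
end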
The union of F'_L(i, j) and F'_U(i, j) produces the contrast-enhanced output image:

  F_O = F'_L(i, j) U F'_U(i, j).   (12)

3.3. Optimizing the constraints using Particle Swarm Optimization (PSO)
Four major parameters, a, b, c and d, are identified in the proposed TOHE algorithm. Their optimal values are found using Particle Swarm Optimization (PSO), in which an objective function is defined that maximizes the contrast of the input image. Several measures, such as Discrete Entropy (DE) [14] and the Contrast Improvement Index (CII) [15], are used to quantify the degree of contrast enhancement, and any one of them can be taken as the objective function. In this paper, DE is found to provide a better trade-off than CII; hence DE is selected as the objective function while CII serves as a supporting measure to evaluate the degree of contrast enhancement.

3.3.1. Discrete Entropy
Discrete Entropy E(X) is used to measure the richness of information in an image after enhancement. It is defined as

  E(X) = - sum_{k=0}^{255} p(X_k) log2(p(X_k)).   (13)

When the entropy of the enhanced image is close to that of the original input image, the details of the input image are considered to be preserved in the output image.

3.3.2. Contrast Improvement Index
CII is a quantitative measure of image contrast enhancement defined as

  CII = C_p / C_o,   (14)

where C_p and C_o are the contrast values of the processed and original images, respectively. The contrast C of an image is defined as

  C = (f - b) / (f + b),   (15)

where f and b are the mean gray levels of the foreground and background of the image. Higher CII values signify greater contrast improvement in the enhanced image.
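A hedged MATLAB sketch of the two measures is given below, assuming 8-bit images and a caller-supplied foreground mask for the contrast of Eq. (15); the function names are illustrative.

% Discrete Entropy (Eq. 13) and Contrast Improvement Index (Eqs. 14-15).
function [DE, CII] = enhancement_measures(orig, proc, fg_mask)
    DE  = entropy_of(proc);                          % richness of information, Eq. (13)
    Cp  = contrast_of(proc, fg_mask);
    Co  = contrast_of(orig, fg_mask);
    CII = Cp / Co;                                   % Eq. (14)
end

function E = entropy_of(img)
    p = accumarray(double(img(:)) + 1, 1, [256 1]) / numel(img);
    p = p(p > 0);                                    % skip empty bins before log2
    E = -sum(p .* log2(p));                          % Eq. (13)
end

function C = contrast_of(img, fg_mask)
    f = mean(double(img(fg_mask)));                  % mean foreground gray level
    b = mean(double(img(~fg_mask)));                 % mean background gray level
    C = (f - b) / (f + b);                           % Eq. (15)
end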
3.3.3. Particle Swarm Optimization
Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique developed by Eberhart and Kennedy in 1995, inspired by the social behaviour of bird flocking [16]. It uses a number of agents (particles) that constitute a swarm moving around the search space looking for the best solution. Each particle keeps track of the coordinates in the solution space associated with the best solution (fitness) it has achieved so far; this value is called the personal best (pbest). Another best value tracked by the PSO is the best value obtained so far by any particle in the neighbourhood, called the global best (gbest). The basic concept of PSO lies in accelerating each particle towards its pbest and the gbest locations with a random weighted acceleration at each time step, as shown in Fig. 1, where S_k is the current searching point, S_{k+1} the modified searching point, V_k the current velocity, V_{k+1} the modified velocity, and V_pbest and V_gbest the velocities based on pbest and gbest.

Each particle modifies its position using its current position, its current velocity, the distance between the current position and pbest, and the distance between the current position and gbest. The velocity of each particle is updated as

  V_i^{k+1} = W * V_i^k + c1 * rand() * (pbest_i - s_i^k) + c2 * rand() * (gbest - s_i^k),   (16)

where V_i^k is the velocity of agent i at iteration k (usually in the range 0.1-0.9), c1 and c2 are learning factors in the range 0-4, rand() is a uniformly distributed random number between 0 and 1, s_i^k is the current position of agent i at iteration k, pbest_i is the personal best of agent i and gbest is the global best of the group. The inertia weight W is set in the range 0.1-0.9 and is computed as

  W = W_Max - (W_Max - W_Min) * iteration / Iteration_Max.   (17)

Using the modified velocity, the particle's position is updated as

  S_i^{k+1} = S_i^k + V_i^{k+1}.   (18)

The optimal values of a, b, c and d are found using the following procedure.

PROCEDURE Optimize_Param IS
  Input: Image X(i, j)
  Output: Optimal values of a, b, c and d
BEGIN
  Step 1: Initialize particles with random position and velocity vectors
  Step 2: Loop until maximum generation
    Step 2.1: Loop until the particles exhaust
      Step 2.1.1: Evaluate the difference p between the Discrete Entropy values of the original and TOHEed image
      Step 2.1.2: If p < pbest, then pbest = p
      Step 2.1.3: GOTO Step 2.1
    Step 2.2: Set the best of the pbests as gbest and record the values of a, b, c and d
    Step 2.3: Update particle velocities using Eq. (16) and positions using Eq. (18)
  Step 3: GOTO Step 2
  Step 4: Stop, giving gbest, the optimal solution with the optimal a, b, c and d values
END

A similar procedure is developed for the two-parameter case, in which the constraints are applied only to the lower (a and b) or only to the upper (c and d) sub-image. The decision to use the four-parameter or the two-parameter procedure depends on the nature of the input image and the need at hand.
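As a summary of Phase III, one run of the PSO loop of Eqs. (16)-(18) and the Optimize_Param procedure can be sketched in MATLAB as follows. fitness() is a hypothetical placeholder for the entropy-difference objective of Step 2.1.1, and the clamping of positions to (0.1, 1.0) follows the parameter ranges quoted in Section 3.2; all names are illustrative.

% PSO over the four TOHE parameters (a, b, c, d), following Eqs. (16)-(18).
numParticles = 20;  iter_max = 50;  dim = 4;
c1 = 2;  c2 = 2;  Wmax = 0.9;  Wmin = 0.1;
S = 0.1 + 0.9*rand(numParticles, dim);        % candidate (a, b, c, d) positions
V = zeros(numParticles, dim);
pbest = S;  pbest_val = inf(numParticles, 1);
gbest = S(1,:);
for iter = 1:iter_max
    W = Wmax - (Wmax - Wmin) * iter / iter_max;            % inertia weight, Eq. (17)
    for i = 1:numParticles
        p = fitness(S(i,:));                               % |DE(original) - DE(TOHEed)|, placeholder
        if p < pbest_val(i), pbest_val(i) = p; pbest(i,:) = S(i,:); end
    end
    [~, k] = min(pbest_val);  gbest = pbest(k,:);          % best of the pbests
    for i = 1:numParticles
        V(i,:) = W*V(i,:) + c1*rand()*(pbest(i,:) - S(i,:)) ...
                          + c2*rand()*(gbest - S(i,:));    % velocity update, Eq. (16)
        S(i,:) = S(i,:) + V(i,:);                          % position update, Eq. (18)
        S(i,:) = min(max(S(i,:), 0.1), 1.0);               % keep (a,b,c,d) in the stated ranges
    end
end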
These procedures were experimented on three different images, namely Elaine, Natural and Pirate. The Elaine image shown in Fig. 2(a) is a light image, to which both PSO procedures were applied; Fig. 2(b)-(d) are the TOHEed images with the constraints applied to the upper sub-image only, the lower sub-image only, and both sub-images, respectively.

Fig. 1. Modification of a searching point by PSO.
Fig. 2. (a) Elaine (original), (b) constraints applied to upper sub-image only, (c) constraints applied to lower sub-image only, (d) constraints applied to both sub-images.
Fig. 3. (a) Natural (original), (b) constraints applied to upper sub-image only, (c) constraints applied to lower sub-image only, (d) constraints applied to both sub-images.

Fig. 2(b) and (d) are almost equal to the original, whereas Fig. 2(c) is clearly an enhanced image in terms of visual perception. A natural dark image is given in Fig. 3(a); Fig. 3(b)-(d) are the results of the proposed technique with the constraints applied to the upper sub-image only, the lower sub-image only, and both sub-images, respectively. Fig. 3(b) is a better-enhanced image than Fig. 3(c) and (d). Similarly, Fig. 4 shows the Pirate image and its TOHEed versions; among Fig. 4(b)-(d), Fig. 4(d) exhibits the best contrast-enhanced result.

It is apparent from this experimentation that, when the given image is a light image, it is enough to apply the optimal constraints to the lower sub-image only. For a dark image such as the Natural image in Fig. 3(a), applying the optimal constraints only to the upper sub-image improves the contrast effectively. When the input image has an evenly distributed histogram, like the Pirate image (Fig. 4(a)), it is better to apply the optimal constraints to both sub-images. Once the application of constraints to the sub-image(s) is over, both sub-images are equalized independently and their union produces the contrast-enhanced output image.

The performance of the proposed technique is decided by two parameters, the number of generations and the number of particles, which strongly influence the execution time. The computational complexity of the proposed PSO algorithm is of order n^2, since two nested iterations are involved: one over generations and the other over particles.

4. Results and discussion
The performance of the proposed method, TOHE, was tested on standard images such as Aircraft, Einstein, Airport, F16, Copter, Girl, Truck and Pirate, which were subjected to the PSO-based TOHE process with 50 generations. Figs. 5 and 6 show the PSO-based optimum entropy search for the Aircraft and Truck images. Both graphs demonstrate the effective implementation of the optimal search mechanism, since the best entropy value of each generation revolves around the global best entropy without much deviation. The global best (optimal) entropy values for the Aircraft and Truck images are obtained in the 11th and 27th generations, respectively.

To compare the performance of TOHE, the same images were enhanced with the contemporary enhancement techniques BBHE, HS, RMSHE, RLBHE, SRHE and WTHE. The performance of all these methods is measured qualitatively by human visual perception and quantitatively using Discrete Entropy and CII. The qualitative performance of TOHE (constraints applied to both sub-images) and of the contemporary methods is compared using the Aircraft and Truck images, given in Figs. 7 and 8, respectively; the enhanced images obtained with HS, RMSHE, RLBHE, SRHE and WTHE are shown in Figs. 7(b)-(h) and 8(b)-(h).

Fig. 4. (a) Pirate (original), (b) constraints applied to upper sub-image only, (c) constraints applied to lower sub-image only, (d) constraints applied to both sub-images.
Fig. 5. PSO search for Aircraft image.
Fig. 6. PSO search for Truck image.
Fig. 7. Enhancement of Aircraft image.

In Fig. 7(b)-(h), the encircled portions in the Aircraft image clearly show brightness degradation and over-enhancement; the same abrupt brightness change and over-enhancement can be noticed in the Truck images (Fig. 8(b)-(h)) as well. Although there is not much brightness change in the results of RMSHE (Figs. 7(e) and 8(e)), these images remain almost identical to the original even at recursion level 2, and the marked portions are not found to be enhanced. Figs. 7(i) and 8(i) are the visual results of TOHE, which are better than those of the other HE techniques and are free from over-enhancement. TOHE is found to produce comparatively better results for the other test images too.

The histogram patterns of the original, HEed and TOHEed images of Aircraft and Truck are shown in Figs. 9 and 10, respectively. In Figs. 9(b) and 10(b), the abrupt changes in brightness as well as contrast are evident because of the uncontrolled redistribution of the histograms, whereas Figs. 9(c) and 10(c) exhibit the controlled distribution, which results in the expected contrast enhancement and brightness preservation.

The proposed technique can also be applied effectively to video frame images. Usually, global HE methods introduce a white-saturation effect while enhancing input video frames; this is overcome by TOHE, which can therefore be suitably applied for video frame enhancement.
Fig. 11(a) shows the video frame of a mechanic. This frame enhanced by HE and by TOHE is presented in Fig. 11(b) and (c), respectively. It is evident from Fig. 11(b) that HE adds excessive white saturation, which is effectively avoided in Fig. 11(c).

Most HE-based enhancement techniques invariably introduce a mean shift in the output image due to the redistribution of intensity values during intensity normalization. The proposed technique TOHE, however, is shown to preserve the mean of the input image. Fig. 12 compares the mean values of the original, HEed and TOHEed test images: HE drastically changes the original mean of the input image, which always results in brightness degradation, whereas the controlled procedure adopted by TOHE gives mean values close to the original ones. This characteristic of TOHE preserves the brightness of the input image.

Fig. 8. Enhancement of Truck image.
Fig. 9. Histogram patterns of Aircraft image.

Further, the qualities of the test images enhanced by the above-mentioned techniques are measured in terms of DE and CII and are given in Tables 1 and 2, respectively. In Table 1, TOHE produces DE values that are very close to those of the originals, confirming the retention of the original image details in the enhanced images. The relatively higher CII values of TOHE compared with the other HE methods in Table 2 are another quantitative proof of the merits of TOHE as a better technique for contrast enhancement.

Generalized Berezin quantization, Bergman metrics and fuzzy Laplacians

TCDMATH 08-04
arXiv:0804.4555v2 [hep-th] 9 Sep 2008
Generalized Berezin quantization, Bergman metrics and fuzzy Laplacians
Calin Iuliu Lazaroiu, Daniel McNamee and Christian Sämann
Trinity College Dublin, Dublin 2, Ireland; calin, danmc, saemann@maths.tcd.ie
Abstract: We study extended Berezin and Berezin-Toeplitz quantization for compact Kähler manifolds, two related quantization procedures which provide a general framework for approaching the construction of fuzzy compact Kähler geometries. Using this framework, we show that a particular version of generalized Berezin quantization, which we baptize "Berezin-Bergman quantization", reproduces recent proposals for the construction of fuzzy Kähler spaces. We also discuss how fuzzy Laplacians can be defined in our general framework and study a few explicit examples. Finally, we use this approach to propose a general explicit definition of fuzzy scalar field theory on compact Kähler manifolds. Keywords: Non-Commutative Geometry, Differential and Algebraic Geometry.

MATLAB convolution smoothing


In MATLAB, you can use the convolution function conv2 to smooth two-dimensional data.

Convolution is a linear operation that can be used to suppress or enhance features of the original data.

First, create a two-dimensional data matrix, for example a 100x100 matrix Z generated with the peaks function.

Then add random noise to the data and plot the noisy contours.

Next, define a 3x3 kernel K and smooth the noisy data with the conv2 function.

The smoothed data can then be plotted as contours.

Similarly, you can define kernels of other sizes, such as a 5x5 kernel, and smooth with conv2.

In conv2, the 'same' option makes the output the same size as the input.
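A minimal MATLAB sketch of the workflow described above; the use of averaging kernels and the noise amplitude are illustrative assumptions, since the note does not specify them.

% Noisy peaks data smoothed with conv2 using 3x3 and 5x5 averaging kernels.
Z  = peaks(100);                        % 100x100 test surface
Zn = Z + 0.5*randn(size(Z));            % add random noise
figure; contour(Zn); title('noisy');    % noisy contour plot

K3 = ones(3)/9;                         % 3x3 averaging kernel
Z3 = conv2(Zn, K3, 'same');             % 'same' keeps the output size equal to the input
figure; contour(Z3); title('3x3 smoothed');

K5 = ones(5)/25;                        % 5x5 averaging kernel
Z5 = conv2(Zn, K5, 'same');
figure; contour(Z5); title('5x5 smoothed');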

Study of $\chi_{c1}$ and $\chi_{c2}$ Meson Production in B Meson Decays


a rXiv:h ep-e x /944v12Sep2CLNS 00/1691CLEO 00-18Study of χc 1and χc 2meson production in B meson decays CLEO Collaboration (February 7,2008)Abstract Using a sample of 9.7×106BS.Chen,1J.Fast,1J.W.Hinson,1J.Lee,ler,1E.I.Shibata,1I.P.J.Shipsey,1 V.Pavlunin,1D.Cronin-Hennessy,2A.L.Lyon,2E.H.Thorndike,2V.Savinov,3T.E.Coan,4V.Fadeyev,4Y.S.Gao,4Y.Maravin,4I.Narsky,4R.Stroynowski,4J.Ye,4 T.Wlodek,4M.Artuso,5R.Ayad,5C.Boulahouache,5K.Bukin,5E.Dambasuren,5 S.Karamov,5G.Majumder,5G.C.Moneti,5R.Mountain,5S.Schuh,5T.Skwarnicki,5 S.Stone,5J.C.Wang,5A.Wolf,5J.Wu,5S.Kopp,6M.Kostin,6A.H.Mahmood,7S.E.Csorna,8I.Danko,8K.W.McLean,8Z.Xu,8R.Godang,9G.Bonvicini,10D.Cinabro,10M.Dubrovin,10S.McGee,10G.J.Zhou,10E.Lipeles,11S.P.Pappas,11M.Schmidtler,11A.Shapiro,11W.M.Sun,11A.J.Weinstein,11F.W¨u rthwein,11,∗D.E.Jaffe,12G.Masek,12H.P.Paar,12E.M.Potter,12S.Prell,12D.M.Asner,13A.Eppich,13T.S.Hill,13R.J.Morrison,13R.A.Briere,14G.P.Chen,14A.Gritsan,15 J.P.Alexander,16R.Baker,16C.Bebek,16B.E.Berger,16K.Berkelman,16F.Blanc,16 V.Boisvert,16D.G.Cassel,16P.S.Drell,16J.E.Duboscq,16K.M.Ecklund,16R.Ehrlich,16A.D.Foland,16P.Gaidarev,16L.Gibbons,16B.Gittelman,16S.W.Gray,16D.L.Hartill,16B.K.Heltsley,16P.I.Hopman,16L.Hsu,16C.D.Jones,16D.L.Kreinick,16M.Lohner,16A.Magerkurth,16T.O.Meyer,16N.B.Mistry,16E.Nordberg,16M.Palmer,16J.R.Patterson,16D.Peterson,16D.Riley,16A.Romano,16J.G.Thayer,16D.Urner,16B.Valant-Spaight,16G.Viehhauser,16A.Warburton,16P.Avery,17C.Prescott,17A.I.Rubiera,17H.Stoeck,17J.Yelton,17G.Brandenburg,18A.Ershov,18D.Y.-J.Kim,18R.Wilson,18H.Yamamoto,19T.Bergfeld,20B.I.Eisenstein,20J.Ernst,20G.E.Gladding,20G.D.Gollin,20R.M.Hans,20E.Johnson,20I.Karliner,20M.A.Marsh,20C.Plager,20C.Sedlack,20M.Selen,20J.J.Thaler,20J.Williams,20K.W.Edwards,21 R.Janicek,22P.M.Patel,22A.J.Sadoff,23R.Ammar,24A.Bean,24D.Besson,24X.Zhao,24 S.Anderson,25V.V.Frolov,25Y.Kubota,25S.J.Lee,25R.Mahapatra,25J.J.O’Neill,25 R.Poling,25T.Riehle,25A.Smith,25C.J.Stepaniak,25J.Urheim,25S.Ahmed,26 M.S.Alam,26S.B.Athar,26L.Jian,26L.Ling,26M.Saleem,26S.Timm,26F.Wappler,26A.Anastassov,27E.Eckhart,27K.K.Gan,27C.Gwon,27T.Hart,27K.Honscheid,27D.Hufnagel,27H.Kagan,27R.Kass,27T.K.Pedlar,27H.Schwarthoff,27J.B.Thayer,27E.von Toerne,27M.M.Zoeller,27S.J.Richichi,28H.Severini,28P.Skubic,28andA.Undrus281Purdue University,West Lafayette,Indiana479072University of Rochester,Rochester,New York146273Stanford Linear Accelerator Center,Stanford University,Stanford,California943094Southern Methodist University,Dallas,Texas752755Syracuse University,Syracuse,New York132446University of Texas,Austin,TX787127University of Texas-Pan American,Edinburg,TX785398Vanderbilt University,Nashville,Tennessee372359Virginia Polytechnic Institute and State University,Blacksburg,Virginia2406110Wayne State University,Detroit,Michigan4820211California Institute of Technology,Pasadena,California9112512University of California,San Diego,La Jolla,California92093 13University of California,Santa Barbara,California9310614Carnegie Mellon University,Pittsburgh,Pennsylvania15213 15University of Colorado,Boulder,Colorado80309-039016Cornell University,Ithaca,New York1485317University of Florida,Gainesville,Florida32611 18Harvard University,Cambridge,Massachusetts0213819University of Hawaii at Manoa,Honolulu,Hawaii9682220University of Illinois,Urbana-Champaign,Illinois6180121Carleton University,Ottawa,Ontario,Canada K1S5B6 and the Institute of Particle Physics,Canada 22McGill University,Montr´e al,Qu´e bec,Canada H3A2T8 and the Institute of Particle Physics,Canada23Ithaca College,Ithaca,New York1485024University 
of Kansas,Lawrence,Kansas66045 25University of Minnesota,Minneapolis,Minnesota5545526State University of New York at Albany,Albany,New York12222 27Ohio State University,Columbus,Ohio4321028University of Oklahoma,Norman,Oklahoma73019The recent measurements of charmonium production in various high-energy physics reac-tions have brought welcome surprises and challenged our understanding both of heavy-quark production and of quarkonium bound state formation.The CDF and D0measurements[1] of a large production rate for charmonium at high transverse momenta(P T)were in sharp disagreement with the then-standard color-singlet model.The development of the NRQCD factorization framework[2]has put the calculations of the inclusive charmonium production on a rigorous footing.The high-P T charmonium production rate at the Tevatron is now well understood in this formalism.The recent CDF measurement of charmonium polarization[3], however,appears to disagree with the NRQCD prediction.The older color-evaporation model accommodates both the high-P T charmonium production rate and polarization mea-surements at the Tevatron[4].Inclusive B decays to charmonia offer another means by which theoretical predictions may be confronted with experimental data.The color-singlet contribution,for example,is thought to be[5]a factor of5–10below the observed inclusive J/ψproduction rate[6].A measurement of theχc2-to-χc1production ratio in B decays provides an especially clean test of charmonium production models.The V−A current c pair in a2S+1L J=3P2state,therefore the decay B→χc2X is forbidden at leading order inαs in the color-singlet model[7].The importance of the color-octet mechanism forχc production in B decays was recognized[8]even before the development of the NRQCD framework[2].While the NRQCD calculations cannot yet produce sharp quantitative predictions for theχc2-to-χc1production ratio in B decays[5],we can consider two limiting cases.If the color-octet mechanism dominates in B→χcJ X decays,then theχc2-to-χc1production ratio should be 5:3because the color-octet contribution is proportional to2J+1.In contrast,if the color-singlet contribution dominates,thenχc2production should be strongly suppressed relative toχc1production.The color-evaporation model predicts the ratio to be5:3[9].Our data were collected at the Cornell Electron Storage Ring(CESR)with two configu-rations of the CLEO detector called CLEO II[10]and CLEO II.V[11].The components of the CLEO detector most relevant to this analysis are the charged particle tracking system, the CsI electromagnetic calorimeter,the time-of-flight system,and the muon chambers.In CLEO II the momenta of charged particles are measured in a tracking system consisting of a6-layer straw tube chamber,a10-layer precision drift chamber,and a51-layer main drift chamber,all operating inside a1.5T solenoidal magnet.The main drift chamber also pro-vides a measurement of the specific ionization,dE/dx,used for particle identification.For CLEO II.V,the straw tube chamber was replaced with a3-layer silicon vertex detector,and the gas in the main drift chamber was changed from an argon-ethane to a helium-propane mixture.The muon chambers consist of proportional counters placed at increasing depths in the steel absorber.We use9.2fb−1of e+e−data taken at theΥ(4S)resonance and4.6fb−1taken60MeV below theΥ(4S)resonance(off-Υ(4S)sample).Two thirds of the data were collected with the CLEO II.V detector.The simulated event samples used in this analysis were generated with a GEANT-based[12]simulation 
of the CLEO detector response and were processed in a manner similar to the data.We reconstruct theχc1,2radiative decays to J/ψ.The branching fractions for theχc1,2→J/ψγdecays are,respectively,(27.3±1.6)%and(13.5±1.1)%,whereas the branching fraction for theχc0→J/ψγdecay is only(0.66±0.18)%[14].In addition,theχc0production ratein B decays is expected to be smaller than theχc1,2rates[5,8].We therefore do not attempt to measureχc0production in this analysis.The J/ψreconstruction procedure is described in Ref.[13]and summarized here.We reconstruct both J/ψ→µ+µ−and J/ψ→e+e−decays,recovering the bremsstrahlung photons for the J/ψ→e+e−mode.We use the normalized invariant mass for the J/ψcandidate selection(Fig.1of Ref.[13]).For example,the normalized J/ψ→µ+µ−mass is defined as[M(µ+µ−)−M J/ψ]/σ(M),where M J/ψis the world average value of the J/ψmass[14]andσ(M)is the expected mass resolution for that particularµ+µ−combination calculated from track four-momentum covariance matrices.We require the normalized mass to be between−6and+3for the J/ψ→e+e−candidates and between−4and+3for the J/ψ→µ+µ−candidates.The momentum of the J/ψcandidates is required to be less than 2GeV/c,which is slightly above the maximal J/ψmomentum in B decays.Photon candidates forχc1,2→J/ψγreconstruction must be detected in the central angular region of the calorimeter(|cosθγ|<0.71),where our detector has the best energy resolution.Most of the photons inΥ(4S)→BB events.All the polynomial coefficients are allowed tofloat in thefit.Theχc1andχc2signal shapes arefit with templates extracted from Monte Carlo simulation;only the template normalizations are free in thefit.Theχc1 andχc2signal yields in theΥ(4S)data are N ON(χc1)=672±47(stat)and N ON(χc2)= 83±37(stat).Theχc1andχc2yields in off-Υ(4S)data are both consistent with zero: N OFF(χc1)=4±7(stat)and N OFF(χc2)=1±7(stat).Subtracting the contributions from non-BB pairs,and the daughter branching fractions.The reconstruction efficiencies,determined from simulation, are(25.7±0.2)%forχc1and(26.6±0.2)%forχc2,where the uncertainties are due to the size of our B→χc1,2X simulation samples.For the calculation of the rates for the decays B→χc1,2(direct)X,we make an assumption that the only other source ofχc1,2production in B decays is the decay chain B→ψ(2S)X withψ(2S)→χc1,2γ.The95%confidenceintervals are calculated using the Feldman-Cousins approach[16].The resulting branching fractions are listed in Table I.Taking into account correlations between the uncertainties,we obtain the branching ratioΓ[B→χc2(direct)X]/Γ[B→χc1(direct)X]=0.18±0.13±0.04; the95%CL upper limit on the ratio is0.44.TABLE I.Branching fractions for inclusive B decays toχc1andχc2.Branching fraction Measured value95%CL interval(×10−3)(×10−3)B and non-BB pairs,tracking efficiency,photon detection efficiency,lepton detection efficiency, and model-dependence and statistical uncertainty of the B→χc1,2X simulation.Theχc1,2 polarization affects the photon energy spectrum.We define the helicity angleθh to be the angle between theγdirection inχc frame and theχc direction in the B frame.We assume aflat cosθh distribution in our simulation.The systematic uncertainty associated with this assumption is estimated by comparing the reconstruction efficiencies in the Monte Carlo samples with I(θh)∝sin2θh and I(θh)∝cos2θh angular distributions.Parity is conserved in the decaysχc1,2→J/ψγ,so the helicity angle distribution contains only even powers of cosθh.Another source of uncertainty is our modeling of the X system in the B→χc1,2X 
simulation.Photon detection efficiency depends on the assumed model through theχc mo-mentum spectrum and theπ0multiplicity of thefinal state.In our simulation,we assume that X is either a single K or one of the higher K resonances;we also include the decay chainB→ψ(2S)X withψ(2S)→χc1,2γ.To estimate the systematic uncertainty,we compare the χc→J/ψγdetection efficiency extracted using this sample with the efficiency in the sample where we assume that X is either a K±or K0S→π+π−.Assumed branching fractions.—This category includes the uncertainties on the exter-nal branching fractions.We use the following values of the daughter branching fractions: B(J/ψ→ℓ+ℓ−)=(5.894±0.086)%[15],B(χc1→J/ψγ)=(27.3±1.6)%[14],and B(χc2→J/ψγ)=(13.5±1.1)%[14].In the calculation of B[B→χc1,2(direct)X],we also assume the following values:B(B→ψ(2S)X)=(3.5±0.5)×10−3[14],B(ψ(2S)→χc1γ)=(8.7±0.8)%[14],and B(ψ(2S)→χc2γ)=(7.8±0.8)%[14].TABLE II.Systematic uncertainties on B(B→χc1,2X).Source of relative uncertainty in% systematic uncertainty B(B→χc1X)B(B→χc2X)B)2.02.0 Tracking efficiency2.02.0 Lepton identification4.24.2 Photonfinding2.52.5 Monte Carlo statistics0.70.7 Model for X in B→χc1,2X3.33.3 Polarization ofχc1,21.01.0 Assumed branching fractionsB(χc1,2→J/ψγ)5.98.1B(J/ψ→ℓ+ℓ−)1.51.5B(B→ψ(2S)X)a1.15.5B(ψ(2S)→χc1,2γ)a0.74.0a Contributes only to uncertainty on B[B→χc1,2(direct)X].In the second part of this work,called the B-reconstruction analysis,we employ the B-reconstruction technique similar to the one developed for the b→sγrate measurement[17]. We still extractχc1andχc2signal yields from afit to M(J/ψγ)−M(J/ψ)distribution, but we select only those J/ψγcombinations that reconstruct to a B→J/ψγX s decay. This B-reconstruction technique is used to suppress backgrounds and allows us to probe the composition of the X s system accompanyingχc1,2mesons.We extract the branching ratio R(χc2/χc1)≡Γ[B→χc2(direct)X s]/Γ[B→χc1(direct)X s]for the following three X s configurations:1.Sample A.—X s is reconstructed as a kaon(K+or K0S→π+π−)with0to4pions,one of which can be aπ0.We consider21possible X s modes as well as the charge conjugates of these modes.2.Sample B.—X s is reconstructed as a single kaon or K∗(892).A Kπcombination isa K∗candidate if|M(Kπ)−M K∗|<75MeV/c2,where M K∗is the world averageK∗(892)mass[14].3.Sample C.—X s is reconstructed as a kaon with1to4pions,but not as a K∗(892)candidate(|M(Kπ)−M K∗|>200MeV/c2).Thus samples B and C are subsets of A.To an excellent approximation,sample A is a sum of B and C.With sample A,we try to reconstruct as many B→J/ψγX s decays as possible.Dividing sample A into subsamples B and C,we also probe the dynamics of the B→χc1,2X s decays.If the dominant production mechanisms forχc1andχc2are different, color-singlet mechanism forχc1and color-octet forχc2,then it is natural to expect thatχc2, in comparison withχc1,is more often accompanied by multi-body X s states rather than a single K or K∗.Thus the measuredχc2-to-χc1production ratio might be quite different for samples B and C.We require that the charged kaon and pion candidates have,if available,dE/dx and time-of-flight measurements that lie within3σof the expected values.The dE/dx measurement is required for kaons,but used only if available for pions.The time-of-flight measurement is used only if available.The K0S→π+π−candidates are selected from pairs of tracks forming displaced vertices.We require the absolute value of the normalized K0S→π+π−mass to be less than4and perform afit constraining the mass of each K0S candidate to the world 
average value[14].Photon candidates forπ0→γγdecays are required to have an energy of at least30MeV in the central region and at least50MeV in the endcap region(0.71<|cosθγ|<0.95)of the calorimeter.We require the absolute value of the normalizedπ0→γγmass to be less than3and perform afit constraining the mass of each π0candidate to the world average value[14].The J/ψfour-momentum used in B→J/ψγX s reconstruction is obtained by performing afit constraining the J/ψcandidate mass to the world average value[14].The B candidates are selected by means of two observables.Thefirst observable is the difference between the energy of the B candidate and the beam energy,∆E≡E(B)−E beam. The average∆E resolution varies from12to17MeV depending on the B-reconstruction mode.The second observable is the beam-constrained B mass,M(B)≡B continuum events.Wefinally subtract theψ(2S)→χc1,2γfeeddown to obtain the rates for directχc1,2production in B decays.For all three X s configurations,we observe a strongχc1signal yet no statistically significant signal for directχc2production(Table III). To calculate the branching ratio R(χc2/χc1),we multiply the ratio of the feeddown-corrected χc1,2yields by the reconstruction efficiency ratio E(χc1)/E(χc2)and by the branching ratioΓ(χc1→J/ψγ)/Γ(χc2→J/ψγ).The efficiency of the B-reconstruction depends on the composition of the X s system.We assume that the X s system composition is the same for χc1andχc2production.From our simulation we determine E(χc1)/E(χc2)≃0.93for all three X s configurations.The resultingχc2-to-χc1production ratios are listed in Table III.TABLE III.Results for each of the three X s configurations used in B→J/ψγX s reconstruc-tion.Theχc1andχc2event yields with associated statistical uncertainties are listed in lines1and 2.Line3contains the significance of the B→χc2(direct)X s signal with statistical and systematic uncertainties taken into account.Lines4and5contain the measured value and95%confidence interval for the branching ratio R(χc2/χc1)≡Γ[B→χc2(direct)X s]/Γ[B→χc1(direct)X s],deter-mined with an assumption that the X s system composition is the same forχc1andχc2production.Sample A Sample B Sample CB),tracking,photonfinding,and lepton identification.E(χc2)/E(χc1).—We assume that the X s system in B→χc1,2X s is the same forχc1 andχc2.We do not assign any uncertainty for this assumption.The remaining sources of uncertainty are theχc1,2polarization and the statistics of the B→χc1,2X s simulation samples.B(χc1,2→J/ψγ).—Our measurement depends on the ratioΓ(χc1→J/ψγ)/Γ(χc2→J/ψγ)and its uncertainty.TABLE IV.The absolute systematic uncertainties on the branching ratio R(χc2/χc1)for each of the three X s configurations used in B→J/ψγX s reconstruction.uncertainty on R(χc2/χc1)Source of uncertainty Sample A Sample B Sample CAdded in quadrature0.090.050.14 In conclusion,we have measured the branching fractions for inclusive B decays to the χc1andχc2charmonia states.Our measurements are consistent with and supersede the previous CLEO results[6].We have also studied B→χc1,2X s decays,reconstructing X s as a kaon and up to four pions.In this way,we have measured the branching ratio Γ[B→χc2(direct)X s]/Γ[B→χc1(direct)X s]for three X s configurations.In all the cases, we observe strongχc1signal yet no statistically significant signal forχc2production.Our measurement of theχc2-to-χc1production ratio in B decays is consistent with the predic-tion of the color-singlet model[7]and disagrees with the color-evaporation model[9].In the NRQCD framework,our measurement 
suggests that the color-octet mechanism does not dominate in B→χc X decays.We gratefully acknowledge the effort of the CESR staffin providing us with excellent luminosity and running conditions.This work was supported by the National Science Foun-dation,the U.S.Department of Energy,the Research Corporation,the Natural Sciences and Engineering Research Council of Canada,the A.P.Sloan Foundation,the Swiss National Sci-ence Foundation,the Texas Advanced Research Program,and the Alexander von Humboldt Stiftung.REFERENCES[1]CDF Collaboration,F.Abe et al.,Phys.Rev.Lett.69,3704(1992);79,572(1997);79,578(1997);D0Collaboration,S.Abachi et al.,Phys.Lett.B370,239(1996).[2]G.T.Bodwin,E.Braaten,and G.P.Lepage,Phys.Rev.D51,1125(1995).[3]CDF Collaboration,T.Affolder et al.,Report No.FERMILAB-PUB-00-090-E,hep-ex/0004027(submitted to Phys.Rev.Lett.).[4]J.F.Amundson et al.,Phys.Lett.B390,323(1997)[5]M.Beneke,F.Maltoni,and I.Z.Rothstein,Phys.Rev.D59,054003(1999).[6]CLEO Collaboration,R.Balest et al.,Phys.Rev.D52,2661(1995).[7]J.H.K¨u hn,S.Nussinov,and R.R¨u ckl,Z.Phys.C5,117(1980);J.H.K¨u hn andR.R¨u ckl,Phys.Lett.135B,477(1984);Phys.Lett.B258,499(1991).[8]G.T.Bodwin et al.,Phys.Rev.D46,3703(1992).[9]G.A.Schuler,Eur.Phys.J.C8,273(1999).[10]CLEO Collaboration,Y.Kubota et al.,Nucl.Instrum.Meth.Phys.Res.A320,66(1992).[11]T.S.Hill,Nucl.Instrum.Meth.Phys.Res.A418,32(1998).[12]CERN Program Library Long Writeup W5013(1993).[13]CLEO Collaboration,P.Avery et al.,Phys.Rev.D62,051101(2000)[14]Particle Data Group,D.E.Groom et al.,Eur.Phys.J.C15,1(2000).[15]BES Collaboration,J.Z.Bai et al.,Phys.Rev.D58,092006(1998).[16]G.J.Feldman and R.D.Cousins,Phys.Rev.D57,3873(1998).[17]CLEO Collaboration,M.S.Alam et al.,Phys.Rev.Lett.74,2885(1995).( d )( c )( b )( a )3001002010204055045035025090070050030004055045035025010015020001009007005003005050055045035025010020030040050012016020024040800012040800550450350250C a n d i d a t e s / (4 M e V /c 2)C a n d i d a t e s / (8 M e V /c 2)C a n d i d a t e s / (8 M e V /c 2)IM(J /)M(J /) (MeV/c 2)2030800-017C a n d i d a t e s / (8 M e V /c 2)FIG.1.The M (J/ψγ)−M (J/ψ)distribution in the Υ(4S )data (points with error bars).Plot (a)is for inclusive J/ψγcombinations,whereas plots (b),(c),and (d)are for those J/ψγcombinations that reconstruct to a B →J/ψγX s decay with the X s composition corresponding to samples A ,B ,and C described in the text.The fit function is shown by a solid line with the background component represented by a dashed line.The insets show the background-subtracted distributions with the χc 1and χc 2fit components represented by a solid line.。

Finite Density of States in a Mixed State of $d_{x^2-y^2}+id_{xy}$ Superconductor


a r X i v :c o n d -m a t /9809095v 4 [c o n d -m a t .s u p r -c o n ] 13 A p r 1999Finite Density of States in a Mixed State of d x 2−y 2+id xy SuperconductorW.Mao (1)and A.V.Balatsky (2)(1)Department of Physics,Boston College,Chestnut Hill,MA 02167,USA(2)T-Div and MST-Div,Los Alamos National Laboratory,Los Alamos,NM 87545,USA(February 1,2008)We have calculated the density of states of quasiparticles in a d x 2−y 2+id xy superconductor,and show that in the mixed state the quasiparticle spectrum remains gapless because of the Doppler shift by superflow.It was found that if the d xy order gap ∆1∝√H in accord with experimental data at lowest temperatures.Thisis an appended version of the paper published in Phys.Rev.B 59,6024,(1999).We now also discuss the disorder effects and analyze the H log H crossover at small fields.We argue that H log H regime is present and disorder effect is dominant as the field-induced seconary gap is small at small fields.Based on the experiments recently carried out by sev-eral groups [1,2],it has been found that the longitudinal thermal conductivity κxx of unconventional superconduc-tor,such as Bi 2Sr 2CaCu 2O 8+δ,displays quite strange behavior in the mixed state.With temperature from 5K to 20K,it decreases as the applied magnetic field along c-axis increases.At some critical field value H k (T ),ther-mal conductivity becomes insensitive to the increase of magnetic field and develops a plateau.In order to explain this phenomena,it has been proposed that the d x 2−y 2or-der parameter is suppressed at H k due to the opening of a d x 2−y 2+id xy gap on the whole Fermi surface,thus sup-pressing quasiparticle contribution to thermal transport.On the other hand,more recent measurements at low-est temperatures [2],say 0.1K,show thermal conductivity increases as field rises κxx ∝√H if ∆1∝√H .Here we considerd x 2−y 2+id xy order parameter,thus the quasiparticle spectrum is fully gapped,E (k )=(2π)3d 2r δ[E (k ,r )+me v F ·v s ],(1)where v s =(¯h /2m e )( φ/r).The main contribution of the DOS comes from the vicinity of the d x 2−y 2gap nodes,the gap function has the general form around the node ∆0(k )≈ γ(k z )·[k −k n (k z )],where γ(k z )||e z ×k n .It is convenient to introduce new momentum variables k x =(k −k n )· k n , k y =(k −k n )· γn ,where k n is the unit vector in the direction of the gap nodes in the a −b plane, γn the unit vector along the direction of γ.Then we can write Eq.(1)asN deloc (0)=4×1v (k z )d 2rδ[π2v F γdk zdθξR Hrdrk c 2dxδ(x +∆12by the variable y ,we can furtherwrite above equation asN (0)=2In order to find the analytical form we first express the delta function as an integral form,δ(y +k F v s cosθ)=∞−∞dλexp (iλy −iλk F v s cosθ),and integrate over θ,then over λ.We find from Eq.(4),N (0)=2dk z k F 2v s 2−∆12Θ(k F 2v s 2−∆12)=dk z 2∆1r 02−r 2≈dk z∆1r 02−R H 2+r 20arcsinR H 2ξ ∆−1, ∆=∆1/∆0,R H is the intervortexdistance,and ξis the coherent length for pairing.We al-ready know that R H ≈ξR H−1r 0)∼√√H at small fields.If one uses the form of ∆1=¯h vH3¯h ev 2H c 2H )=KN FH c 2(1−4∆0H prefactor in N (0)by about 20%.The DOS with N (E =0,H )∼√H is consistent with quasiparticle transport and “Volovik effect”in the density of states.They also pointed out that even at lowest temperature data are consistent with the gapped d x 2−y 2+id xy phase provided ∆1is small enough.0.01.02.03.04.05.06.0H(Tesla)0.05.010.015.020.025.0T (K )d x 2−y 2N(0)~H1/2d x 2−y 2+id xy N(0)~H1/2FIG.1.Two regions of the phase diagram are shown.The line is the the boundary between 
gapped and gapless states,given by k B T c =0.52¯h v¯h c .In both phases the density of states N (0,H )∝√H .Fig.1demonstrates that the superconductor has two phases [4]as temperature and applied magnetic field change,one with “pure”d x 2−y 2order parameter,the other with d x 2−y 2+id xy order parameter.According to our calculation,in both phases,the DOS of quasiparticle at E =0is proportional to square root of applied field.We also note the implication of our calculation for the specific heat.Recent measurements of the specific heat C (T,H )in the mixed state of YBCO superconductors by Moler et.al.[6]indicate that the low temperature density of states indeed scales as N (E =0,H )∼√H in accordance with the work by Laughlin,one can still be consistent with experimental data at lowest tem-perature.It shows that we cannot rule out the possibility that a second superconducting phase appears in the mag-netic field at low temperatures.Note Added (March,1999):In the paper by C.Kubert and P.J.Hirschfeld [7],the density of states was calcu-lated in a d x 2−y 2superconductor in the dirty limit,and they found a H ln H behavior at the lowest field instead of√ing,and found that the appearance of the induced gap ∆1does not change the HlogH behavior in the lowest field.The details of calculation are described in the appendix.We are grateful to K.Bedell,M.Franz,R.Movshovich,M.Salkola,J.Sauls and P.W¨o lfle for useful discussions.This work was supported by DoE.W.M.acknowledges the LANL support for the visit.1.Appendix AWe assume the impurity gives an imaginary term γ0in the self energy of one particle Green function.Thus the density of states can be calculated by following equation:N (E =0,H )=1πR 2H d 3kd 2r γ0ǫ2k +|∆|2)2+γ20=1(v ·k F +R 2H γv Frdrdθdk zY ∆1ydy γv Fdk z1ξγ0cosθx+∆12]+1x +Y )2+γ20x+∆1)2+γ20},where E H =ǫ2k +|∆|2.It should be noticed that γ0is the imaginaryself energy term from impurity,while γis the gradient of ∆0with respect to k yIn above equation,we can get the HlogH behavior when ∆1equals to zero.We calculated the integral nu-merically,including both the effect of induced d xy gap and impurity,and show them in Fig.2.In our calcula-tion weused the result,γ=0.61√[1]K.Krishana,N.P.Ong,Q.Li,G.D.Gu,N.Koshizuka,Science,Vol.277,83(1997)[2]Herve Aubin,Kamran Behnia,Shuuichi Ooi,TsuyoshiTamegai,cond-mat/9807037[3]G.E.Volovik,JETP Lett.,Vol.58,469(1993)[4]ughlin,Phys.Rev.Lett.80,5188(1998)[5]P.Hirschfeld and P.W¨o lfle,Bull.Am.Phys.Soc.,Vol.43,817(1998).March 98.[6]K.Moler,et.al.,Phys.Rev.Lett.73,5136,(1994);Phys.Rev.B55,3954(1998).[7]C.K¨u bert and P.J.Hirschfeld,Solid State Communica-tions,Vol.105,No.7,459(1998).[8]R.A.Fisher,et.al.,Physica,C252,1995,237.3。

Delamination cracking in advanced aluminum–lithium alloys – Experimental and computational studies

Delamination cracking in advanced aluminum–lithium alloys –Experimental and computational studiesS.Kalyanam a ,A.J.Beaudoin b,*,R.H.Dodds Jr.a ,F.Barlat caDepartment of Civil and Environmental Engineering,University of Illinois,Urbana,IL 61801,USAb Department of Mechanical and Industrial Engineering,University of Illinois,Urbana,IL 61801,USAc Graduate Institute of Ferrous Technology,Pohang University of Science and Technology,San 31Hyoja-dong,Nam-gu,Pohang,Gyeongbuk 790-784,Republic of Korea a r t i c l e i n f o Article history:Received 17September 2008Received in revised form 12June 2009Accepted 17June 2009Available online 30June 2009Keywords:Aluminum–lithium (Al–Li)Delamination fracture Small-scale yielding 3-D finite element analysis Stress and deformation fields Yld2004-18p modela b s t r a c tDelamination cracking in advanced aluminum–lithium (Al–Li)alloys plays a dominant rolein the fracture process.With the introduction of these materials into components of aero-space structures,a quantitative understanding of the interplay between delaminationcracking and macroscopic fracture must be established as a precursor to reliable designand defect assessment.Delamination cracking represents a complex fracture mechanismwith the formation of transverse cracks initially on the order of the grain size.In this work,interrupted fracture toughness tests of C(T)specimens,followed by incremental polishing,reveal the locations,sizes and shapes of delamination cracks and extensions of the primarymacrocrack.These observations suggest that delamination crack sizes scale with loading ofthe primary crack front expressed in terms of J =r ing a 3-D,small-scale yielding frame-work for Mode I loading,a companion finite element study quantifies the effects of pre-scribed delamination cracks on local loading along the macroscopic (primary)crack andahead of the delamination cracks.An isotropic hardening model with an anisotropic yieldsurface describes the constitutive behavior for the 2099-T87Al–Li alloy plate examined inthis study.The computational results characterize the plastic zone size,the variation oflocal J ahead of the macrocrack front and the stress state that serves to drive growth ofthe macrocrack and delamination crack.The computational studies provide new,quantita-tive insights on the observed increase in toughness that has been observed during fractureexperiments caused by delamination cracks that divide the primary crack front.Ó2009Published by Elsevier Ltd.1.IntroductionThe demands for lower weight,higher strength and improved fracture toughness in aerospace applications have pro-moted the continued development of Al–Li alloys.The addition of lithium as an alloying element significantly alters the mechanical properties and density of traditional alloys developed for these same purposes (e :g :2219-T8).Each 1%weight of Li alloyed with Al reduces the density by 3%and increases the Young’s modulus by 6%compared to pure Al.Second gen-eration Al–Li alloys,including 2090,2091,8090,all contain Li in 2%or larger concentrations.Although these alloys have the desired lower density,higher modulus and improved fatigue resistance,they also exhibit lower ductility and fracture tough-ness in the short-transverse (S -T )direction.Further,the occurrence of delamination cracks along grain boundaries in Al–Li alloys has been demonstrated through experimental studies by Rao et al.[1],Rao and Ritchie [2–4],Xu et al.[5],Sohn et al.0013-7944/$-see front matter Ó2009Published by Elsevier 
Ltd.doi:10.1016/j.engfracmech.2009.06.010*Corresponding author.E-mail address:abeaudoi@ (A.J.Beaudoin).Engineering Fracture Mechanics 76(2009)2174–2191Contents lists available at ScienceDirectEngineering Fracture Mechanicsj o u r n a l h o m e p a g e :w w w.elsevier.c om /loc ate/engfracmechS.Kalyanam et al./Engineering Fracture Mechanics76(2009)2174–21912175 [6],Kim et al.[7]and Csontos and Starke[8].The complex nature of the interaction of delamination cracks with the mac-rocrack during fracture and fatigue tests is well recognized and has been discussed by McKeighan and Hilberry[9],Kumai and Higo[10]and Takahashi et al.[11].Guo et al.[12]examined similar delamination phenomena with C(T)specimens of X70steel used in pipeline construction.Multiple delaminations developed as specimen thickness(i:e:wall thickness)in-creased;an approximate analysis provided through-thickness stresses to help quantify the cause and effects of delaminations.The development of delamination cracks intrinsic to the fracture process represents a fundamental challenge in structural design.Current fracture mechanics testing and assessment procedures,e.g.,ASTM E399-08[13]and E1820-08[14],for struc-tural metals do not reflect the potential impact of delamination cracking events on the measured toughness values used in residual strength predictions of structural components.In Al–Li alloys,the extent of delamination cracking varies with the composition,test temperature,sample location and the microstructure produced by the thermo-mechanical processing.The newer2099-T87Al–Li alloy plate examined in this study has the desirable lower density and exhibits high strength and frac-ture toughness(in some orientations),and yet it has a strong propensity for delamination cracking in the S-T plane.As discussed by Rao et al.[1],Rao and Ritchie[4]and by Csontos and Starke[8],delamination cracking originates from the unrecrystallized anisotropic grains created by the thermo-mechanical processing.In comparison to the void growth and coa-lescence fracture process exhibited by conventional Al–Mg alloys,delaminations represent a more complex fracture mech-anism involving transverse splitting that occurs at a length-scale on the order of the grain size(e:g:1–1.5mm)in the2099-T87Al–Li alloy plate.Csontos and Starke[8]find that the existence of such large grains promotes planar slip leading to the formation of intense shear bands that impinge on grain boundaries causing intergranular fracture.More recently,detailed investigations using X-ray diffraction and electron backscatter diffraction(EBSD)measurements of the grain microstructure near the delamination cracks found during the post-test analysis of specimens tested under fatigue loading have led McDon-ald et al.[15]to conclude that shear bands impinging on the grain boundaries lead to the formation of delamination cracks along the grain boundaries in the2099-T87Al–Li alloy mellar TiAl alloys[16]also develop delamination cracking similar to that considered in the present work.The combinations of different orientations of the macrocrack and delamination cracks have been classified as crack ar-rester,crack divider and crack delamination configurations.Fig.1depicts the conventional ASTM notation for rolled plates and fracture specimens[17],while Fig.2illustrates the variety of delamination cracking phenomena observed in Al–Li alloys from fracture toughness tests.Depending on the loading and material directions,the delamination cracks have been found to either enhance or reduce the 
toughness measured for the pre-existing macrocrack.Crack‘‘divider”delaminations reduce sig-nificantly the stress triaxiality on each remaining ligament leading to increased toughness(i.e.,nearer to plane stress rather than plane strain behavior).Rao and Ritchie[1–3]demonstrate through fracture testing the enhanced toughness of2090-T8 caused by the increased number and extent of divider delaminations at cryogenic temperatures.Crack arrester delamina-tions stop advancement of the primary crack and lead to crack turning,even in globally Mode I geometries and loading.Prop-agation of a macrocrack in the delamination plane(ST plane)contributes to a lower fracture toughness in comparison to the propagation of a macrocrack in the L or LT planes of the material.The three different fracture behaviors exhibited by these Al–Li alloys preclude the straightforward application of conventional fracture mechanics approaches to quantify the material toughness for residual strength predictions.Fracture experiments with C(T)specimens from a2099-T87Al–Li alloy plate performed by Lambert et al.[18]in the T-L (crack divider)orientation reveal more pronounced delamination cracks at cryogenic temperature(À195°C,LN2)compared to room temperature(RT).Fracture tests with multiple C(T)specimens were interrupted at different reduced fractions of maximum load which led to smaller extensions of the macrocrack and smaller delamination cracks.Specimens from these interrupted tests were then incrementally polished in an elaborate,step-by-step process with corresponding fractographic documentation of the surfaces,see Chen and Shah[19].Assemblages of micrographs obtained in the step polishing of these specimens enable characterization of the shape and size of the delamination zones that develop along the front of the(primary)macrocrack under increased loading.This detailed documentation of the delamination crack sizes at increasing load levels provides essential information to guide the present modeling efforts.This paper describes:(1)key features of the experimental study using interrupted tests on C(T)specimens,and (2)the deformation and strain–stress fields near the macrocrack front with and without a delamination crack to gain insight into the local crack driving rmation obtained from the experimental study makes possible the simulations that prede-fine the in-plane shapes and sizes of realistic (discrete)delamination cracks located at the specimen mid-thickness using the framework of a 3-D,small-scale yielding (SSY)finite element model.The analyses provide quantitative descriptions of the local crack-front loading (J -local)on the two remaining,essentially parallel ligaments created by the divider delamination and the local stress fields on those ligaments.The local state of stress may be seen to drive subsequent fracture/delamination processes under increased load;the values of stress are linked to the single,global K J value remotely specified in a 3-D SSY model.The analyses also support interpretation of the experimental results that indicate the existence of a scaling relationship for the length of the delamination zone,the plastic zone size and the stress fields in the vicinity of the macrocrack front.2.Delamination cracking in C(T)specimensLambert et al.[18]conducted fracture tests with C(T)specimens of the 2099-T87plate in the T -L orientation at À195°C following the procedures of ASTM E1820.The specimen dimensions are all nominally:W ¼31:8mm ;B ¼9:5mm,with side-grooves providing B net ¼7:4mm.At maximum load in the 
J —D a tests,the specimens experience $0:63mm of extension along the macroscopic crack front.Polishing of the fractured specimens readily demonstrated the presence of delamination cracking,see Chen and Shah [19].Fig.3shows the ST plane with multiple delamination cracks that extend in the L direction.The testing methodology first determined a complete J —D a curve that reflects all delamination events and extension of the primary crack as they occured.The loading process in subsequent fracture tests was terminated at various levels below the maximum load (deformation)reached in the J —D a test.Post-test fractography on cross-sections in the ST plane revealed the existence and growth of ‘‘divider”delamination cracks as shown schematically in Fig.2.Fig.4shows the J —D a records from the unloading compliance procedure with the scale focused on the reduced loading levels for the three specimens with interrupted tests to detect growth of the delamination cracks.Several valid points per the E1820protocol are obtained for test specimen 1,with the 0.2mm offset line providing an estimate of J Ic ¼24kJ =m 2ðK Jc ¼47:5MPa ffiffiffiffiffim p Þ.Etched, 20XSpecimen Thickness, B = 9.5 mm, B net = 7.4 mmMid-thickness region used tomeasure delamination heightin incremental polishing study D h = 1.1 mmZ (ST)Y (LT)X (L)Delamination cracksdelamination cracks observed following the test at À195°C of an ASTM E1820standard,side-grooved,C(T)specimen (T -L orientation)location of a Al–Li 2099-T87plate.Delamination cracks lie in weak planes normal to the thickness (ST)direction.2176S.Kalyanam et al./Engineering Fracture Mechanics 76(2009)2174–2191Specimens from the interrupted tests were ground,polished and etched at finite increments along the L direction.Optical micrographs of the etched cross-sections reveal the through-thickness locations and heights ðD h Þof the delaminations.The process of incremental polishing in the L direction for each specimen continued until no further indications of delamination cracking were observed ahead of the macrocrack front.The arranged sequence of micrographs obtained from the polishing study enable extraction of key,quantitative measures of the delamination crack shape and size corresponding to each load level examined.Fig.5shows a subset of micrographs for polished surfaces from specimen 1with the test interrupted at the highest load considered ðJ ¼24kJ =m 2;K J ¼47:5MPa ffiffiffiffiffim p Þ.Table 1summarizes:J -values obtained using E1820procedures in the experiments,and the equivalent K J -values for reference where K J ¼ffiffiffiffiffiEJ p ;extension of the (primary)macrocrack at each load ðD a macro Þ;total length of the delamination zones ðD L Þ;and the lengths of the delamination zone ahead of the cur-rent location of the macrocrack front ðD l Þas defined in Fig.6(where the current location of the macrocrack front reflects extension beyond the initial fatigue crack front).Fig.7shows the measured sizes and shapes of the delamination cracks on the center plane at each of the three increasing levels of applied load.The shapes and sizes of each delamination crack are shown relative to the location of the final fatigue crack position on the center plane for each specimen.At each of the three load levels considered,the primary crack front extends forward during the J —D a test.The polishing process confirmed the amount of primary crack extension estimated in the J —D a test records.The delamination cracks also extend forward,and normal to the plane of the macro 
(fatigue)crack,achieving the sizes and shapes shown with the absolute scales used in Fig.7.The measured lengths of the delamination zone listed in Table 1immediately suggest a scaling model that links the ex-tent of the delamination zone ðD l Þ,ahead of the current position of the macrocrack,with yield stress ðr 0Þand the load mea-sure ðJ Þhaving the formD l ¼CJr 0;ð1Þwhere in the absence of a delamination crack,the crack front plastic zone size and crack tip opening displacement (CTOD)both scale with the quantity J =r 0in SSY.Table 1indicates the value of the parameter C varies between 24and 30for this 2099-T87Al–Li alloy plate.The impacts of the discrete microstructure of the material coupled with the incremental procedures of the interrupted testing–polishing study leave open several interesting questions.The present results,for example,do not resolve the issue of continuous or discontinuous growth of the delamination cracks with loading,and with extension of the primary crack.The order in which delaminations form across the thickness is also not determined.However,Fig.5(g,f and e)suggests that the largest ðD h Þdelaminations over thickness form first at the leading edge of the macrocrack.Moreover,the initialS.Kalyanam et al./Engineering Fracture Mechanics 76(2009)2174–21912177delaminations apparently formed and extended by some amount during the fatigue pre-cracking at D K $10MPa ffiffiffiffiffim p .Fig.5(a and b)shows clearly the roughness of the fatigue crack front (normal to the plane of the notch cut)over the thickness due possibly to micro-delaminations.This also illustrates the challenges in defining the end of the fatigue crack and initial stages of macrocrack extension during the fracture test.The non-zero delamination sizes indicated in Fig.7behind ðx <0Þthe final location of the fatigue crack front apparently measure the increased roughness (Fig.5a and b)as the fatigue crack extended forward from the machined notch.3.Model of C(T)specimen with delamination3.1.3D small scale yielding frameworkThe combination of moderate yield stress and moderate-to-low fracture toughness for this Al–Li alloys leads to small plastic-zone sizes at the onset of stable crack extension,compared to the in-plane dimensions of structural components and standard fracture test specimens.Under such SSY conditions,a 3-D boundary layer framework of the type illustrated in Fig.8efficiently models the crack front constraint conditions.The thickness ðB Þprovides the only physical length-scale of the computational model.The remote boundary of the SSY model is loaded by plane-stress displacements of an increasing d=2.28 mm d=3.56 mm d=3.91 mm d=4.27 mm d=4.75 mm d=5.71 mm d=5.82 mm mm 1.0 mm delaminations in a C(T)specimen interrupted during Table 1Measured sizes of macrocrack (extension on center plane)and the divider delamination cracks at mid-thickness under increasing remote load in the tested C(T)specimens.The J (measured)to K J conversion uses the plane-stress form for SSY conditions:K 3¼ffiffiffiffiffiEJ p .Note:These experimental J -values and measured delamination crack sizes do not exactly match the three computational configurations.J ðkJ =m 2ÞK J ðMPa ffiffiffiffiffim p ÞD a macro ðmm ÞD L (mm)D l (mm)D l =ðJ =r 0Þ9.6300.050.740.4824.611.432.70.08 1.020.6628.52447.50.25 2.29 1.4529.72178S.Kalyanam et al./Engineering Fracture Mechanics 76(2009)2174–2191S.Kalyanam et al./Engineering Fracture Mechanics76(2009)2174–21912179K I and,optionally,a T-stress,field.For SSY conditions,the T-stress(positive or 
negative)approximates very well the geom-etry effects of different fracture test specimens and structural components,especially for large,thin panels with limited yielding at fracture as the crack extends stably for some distance.Although the SSY model enables the independent control of K I and T-stressfields,actual specimen geometries have a fixed,proportional evolution of K I and T-stress under increased far-field loading[20].The evolution of local J-values and strain–stressfields along the3-D crack front scales in a unique manner with thickness variations relative to the global plastic zone size,r p=B,which characterizes the non-dimensional crack front loading.The studies of Nakamura and Parks[21]first suggested the existence of such scaling for isotropic plasticity with monotonic loading.Roychowdhury and Dodds[22–24] demonstrate the existence of r p=B scaling for computational models with cyclic loading,cyclic plasticity and discrete crack growth to simulate Mode I fatigue crack growth.The existence of such scaling relationships has practical significance:the normalized results for a singlefinite element computation using theflow properties of an Al–Li alloy quantify the response for an entire family of real structural components(of the same material)with varying thickness and loading levels.For the 3-D SSY conditions described above,a linear-elastic,plane-stress Mode I field encloses the crack front plastic zone.At each location along the macrocrack front and extending into the plastic zone,a complex 3-D displacement–strain–stress field exists that transitions gradually to a plane-stress field at distances from the macrocrack front comparable to the thick-ness,B .The first two terms of the Williams [25]solution transmit the effects of remote loading through the surrounding lin-ear-elastic material to the macrocrack front region,r ij ¼K I ffiffiffiffiffiffiffiffiffi2r p f ij ðh ÞþT d 1i d 1j ;ð2Þwhere f ij ðh Þdefine the angular variations of in-plane stress components.The constant T represents an in-plane tension (or compression)stress parallel to the direction of Mode I crack extension.The out-of-plane stresses,r 3j ,vanish due to the exis-tence of plane stress conditions remote from the crack front.For fracture specimens and structural components,the T -stress varies linearly with K I through a non-dimensional constant,often denoted b ,that depends on the geometry and loading (ten-sion,bending,thermal,etc.).The present work considers the case of b ¼0and leaves the investigation of non-zero T -stress effects on delamination cracks to future work.3.2.Finite element modelThe SSY model consists of an edge crack and a large region of material enclosing the crack front (Fig.8).The specific model constructed for analysis has thickness B ¼9:5mm to match the tested C(T)specimens.The boundary of the domain lies at a radius ¼100B ,such that the plastic zone for the macrocrack front at maximum load remains well-confined within a linear-elastic region and has negligible interaction with the boundary.Twofold symmetry of the 3-D Mode I configuration allows modeling of only one-quarter of the domain.The models have eight-node isoparametric brick elements with a standard B formulation to preclude mesh locking for incompressible plastic deformation.A finite strain formulation captures blunting effects at the macrocrack front –delaminations initiate within the region of material affected by the blunting crack front.A very small,initial crack front radius included in the model enhances convergence of 
the finite strain plasticity compu-tations.An initial radius of 1:5l m leads to self-similar crack-front fields before the far-field loading reaches levels that ini-tiate delamination cracks in the 2099-T87material.Experience shows that these models sustain load levels causing the deformed crack tip radius to exceed 15–20times the undeformed radius before severe element distortions prevent further loading.Preliminary analyses using ten,uniform layers of elements defined over the half-thickness indicated the need for additional refinement to capture steep gradients in the stress fields and in the local J -values near the delamination crack (at mid-thickness)and near the (traction-free)outside surface of the specimen.Fig.8shows a typical mesh adopted for Mode I Symmetry PlanecSymmetry Plane Root Radius (1.5Delamination CrackB /2(x 50)RZ (ST)X (L)Y (LT)M a c r o c r a c k g r o w t h mesh for 3-D small-scale yielding analyses.(a)Overall dimension scaled 50Âfor clarity;(b)transition macrocrack front to outer circular domain;(c)divider crack introduced on the center plane of the SSY shows root mesh near the macrocrack front.Size element ahead of the macrocrack front is L e =B root radius is 1:5l m.Direction of macrocrack growth:delamination crack lies in the X —Y ðZ ¼0Þplane 2180S.Kalyanam et al./Engineering Fracture Mechanics 76(2009)2174–2191computations with a prescribed delamination crack that has sixteen layers of elements through the half-thickness.The layer thicknesses starting from the mid-planeðz¼0Þare:0:005B;0:005B;0:008B;0:008B;0:016B;0:032B;0:039B;0:137B and the same progression starting at0:005B from the outside surfaceðz¼0:5BÞ.Post-test fractography of the C(T)specimens and the investigation leading to characterization of the delamination cracks on the center plane provide the quantitative guidelines for introduction of the delamination cracks into the computational models.Fig.9shows the idealized delamination crack shape on the center plane of the SSY model that matches approxi-mately the measured delamination cracks shown in Fig.7.The SSY models have maximum applied loads and corresponding delamination crack sizes as summarized in the table included in Fig.9.The SSY model and tested C(T)specimens evolved to have slightly different delamination crack sizes/shapes and load levels during continued refinement of the fractography.In the C(T)specimens,the macrocrack extends forward(þx in thefigures)as the remote loading increases.The delami-nation crack also grows ahead of the advancing macrocrack and normal(Æy here)to the plane of the macrocrackðx—zÞ.How-ever,our modeling scheme introduces the delamination crack into each of the threefinite element meshes at the outset, prior to any loading.The boundary loading on the SSY model is then increased gradually to reach the values listed in Fig.9.The location of the macrocrack front in each of the threefinite element models is set along x¼y¼0.The model is completed by defining the geometric parameters D L;D l and D h to approximate the measured delamination cracks.In the tested C(T)specimens,the location of the macrocrack front reflects the saw notch and the initial fatigue crack,plus the load-ing(J)dependent,ductile crack extensionða¼a0þD a fþD a macroÞ(see also Fig.7).The analyses impose monotonically increasing remote loads with no growth of the macrocrack and no growth of the delamination crack.Consequently,the current results reflect no history effects of stable crack growth on the computed mac-rocrack frontfields and on the local 
J(and K J)values.The work here focuses on localfields along the macrocrack front and in the vicinity of the delamination crack.Conse-quently,the smallest element sizes in the vicinity of the macrocrack are set to L e=B¼0:001,based on convergence studies and the experience gained from earlierfinite strain,3-D simulations performed on C(T)specimens.The quarter-symmetric models for L e=B¼0:001contain typically88,250nodes and80,950elements.They have2500of the smallest elements de-fined in the vicinity of the delamination crack and the macrocrack front to provide substantial refinement in the delamina-tion crack region even for the lowest load level analyzedðJ¼9:75kJ=m2Þ.3.3.Loading and boundary conditionsLoading of the model occurs through displacements imposed on the remote cylindrical boundary at R by in-plane com-ponentsðu;vÞthat follow the linear elastic,plane-stress(Mode I)field for the specified K I-values,u x¼K I2GffiffiffiffiffiffiffiR2pscosh2jÀ1þ2sin2h2;ð3Þu y¼K I2GffiffiffiffiffiffiffiR2pssinh2jÀ1þ2cos2h2;ð4ÞlDLD2hDMacrocrack frontGrowth of Macrocrack Front1.272.0347.5240.561.033.011.60.510.7630.29.75()MPa mJK()2kJ mJ()mmFELD()mmFElD1.070.560.51()mmFEhDS.Kalyanam et al./Engineering Fracture Mechanics76(2009)2174–21912181where u x is the displacement parallel to the crack(in the x direction),u y is the displacement normal to the crack plane(in they direction),j¼ð3ÀmÞ=ð1þmÞ;m is Poisson’s ratio,G is the shear modulus,R is the radius of the SSY model domain,and K I is the specified stress-intensity factor.These displacements remain uniform at each through-thicknessðz P0Þnode locationon the outer boundary at.Symmetry conditionsðv¼0Þare imposed aheadðx P0Þof the macrocrack front on the y¼0 plane.All nodes on the z¼0plane have symmetry conditions w¼0enforced except those nodes lying inside the area ofthe delamination crack,which have unrestrained w displacements.The computations increase remote boundary displacements monotonically from zero to those corresponding to the max-imum specified K I-values using up to400variably sized increments.The use of smaller D K I increments early and late in the loading process facilitates the numerical convergence as plastic deformation develops along the macrocrack front and around the delamination crack.The load increment sizes vary to reach J-values of9.75,11.6and24kJ=m2comparable to the experimental J-values of9.6,11.4and24kJ=m2at which the J—D a tests were suspended,specimens polished and delam-ination crack sizes determined.3.4.Material constitutive modelThe thermo-mechanical processing of Al–Li alloys in the form of rolled plate leads to the development of a strong crys-tallographic texture and concomitant anisotropy.To reflect the anisotropy,computational models typically employ polycrys-tal plasticity approaches that track rotations of each grain or conventional continuum plasticity with complex yield surfaces. 
The current analyses adopt advanced continuum models to describe material anisotropy on plastic deformation.Several families of yield functions have been proposed beyond the simple isotropic von Mises[26]and orthotropic Hill [27]forms.Depending on the complexity built into the anisotropic plasticity models,they capture the plastic deformation accurately over moderate strain ranges.Barlat et al.[28]provide a recent review of these phenomenological plasticity mod-els.For3-D modeling,Barlat et al.[29]developed approaches based on linear transformations of the stress tensors and com-bining principal values of transformed stress tensors with an isotropic yield function.Two such yield functions are termed Yld2004-18p(has18material dependent parameters)and Yld2004-13p(has13material dependent parameters).The larger set of parameters(coefficients that define the anisotropic plastic behavior)in these yield functions appear to provide a more accurate description of the material anisotropy in3-D.Barlat et al.[29]demonstrate the applicability of these yield functions through simulations conducted to predict the measured deformations in sheet materials of6111-T4and2090-T3Al alloys.The current work adopts the Yld2004-18p model for3-Dfinite element analyses of delamination cracks in the SSY frame-work.The anisotropic yield function Yld2004-18p may be used for general stress states,and has the form/¼X1;3i;j~S0iÀ~S00ja¼4 r a;ð5Þwhere e S0i and e S00i are the principal values of two tensors~s0and~s00,and r is theflow stress.Based on crystal plasticity compu-tations,the exponent‘‘a”is typically6for BCC materials and8for FCC materials.The tensors~s0and~s00arise from linear trans-formations of the stress deviator s,i.e.,~s0¼C0s;~s00¼C00s;ð6Þwhere C0and C00are matrices containing the18anisotropy coefficients.Details of the anisotropic behavior captured by Yld2004-18p may be excessive for some applications.In certain cases,for instance,the prediction of ears after cup drawing in the beverage can manufacturing industry,these details are quite significant[31].Here,post-yield hardening follows a sim-ple isotropic expansion of the yield surface with a Voce model representing the uniaxial hardening characteristics.Material data to support using this constitutive model derive from tension and compression tests conducted atÀ195°C on the2099-T87Al–Li plate.Conventional(round)tensile tests on specimens taken from the T/4thickness location in the L direction provide these average values:Young’s modulus of94GPa,yield stressðr0Þof490MPa,and Poisson’s ratio m¼0:33. 
Additional tests were performed on specimens cut from the T/4thickness location along different orientations.Tensile yield stresses varied as:L(490MPa),LT(470MPa)and L45LT(440MPa).The ST direction has a compressive yield stress of 450MPa(no tensile tests).Selected samples of round-bar specimens extracted from the L-direction also had cross-section diameter ratios measured during tensile tests to define r-values,r¼D LT=D ST.This ratio varied from0.995for a longitudinal strainð LÞof0.01to r¼0:96at L¼0:09.Flow stress and r-values from the tension and compression tests were compared withflow stress anisotropy predictions of Taylor–Bishop–Hill(TBH)polycrystal analyses.The combination of these mechanical tests,texture analysis and metallographic observations enabled characterization of the anisotropic material behavior for the2099-T87Al–Li alloy atÀ195°C.The corresponding anisotropic coefficients for the Yld2004-18p model follow from the calculation procedure outlined in Barlat et al.[29].Table2lists the18computed coef-ficients.Table3provides the Voce strain hardening coefficients for the tensile stress–strain curves measured atÀ195°C.putational codeThefinite element computations reported here are performed with the fracture mechanics research code,WARP3D[30]. The global solution procedure uses an implicit,incremental-iterative strategy with Newton iterations to achieve equilibrium 2182S.Kalyanam et al./Engineering Fracture Mechanics76(2009)2174–2191。

Status on the Searches of Neutrino Magnetic Moment at the Kuo-Sheng Power Reactor

AS-TEXONO/02-05 February 7, 2008Status on the Searches of Neutrino MagneticarXiv:hep-ex/0209003v1 1 Sep 2002Moment at the Kuo-Sheng Power ReactorHenry Tsz-King Wong1 ( on behalf of the TEXONO2 Collaboration ) Institute of Physics, Academia Sinica, Taipei 11529, TaiwanAbstractThe TEXONO collaboration has been built up among scientists from Taiwan and China to pursue an experimental program in neutrino and astro-particle physics. The flagship efforts have been the study of low energy neutrino physics at the Kuo-Sheng Power Reactor Plant in Taiwan. The Reactor Laboratory is equipped with flexiblydesigned shieldings, cosmic veto systems, electronics and data acquisition systems which can function with different detector schemes. Data are taken during the Reactor Period June-01 till April-02 with a high purity germanium detector and 46 kg of CsI(Tl) crystal scintillator array operating in parallel. A threshold of 5 keV has been achieved for the germanium detector, and the background level comparable to those of Dark Matter experiments underground is achieved. Based on 62/46 days of analyzed Reactor ON/OFF data, a preliminary result of (µν¯e /10−10 µB )2 = −1.1 ± 2.5 can be derived for neutrino magnetic moment µν¯e . Sensitivity region on neutrino radiative decay lifetime is inferred. The complete data set would include 180/60 days of ON/OFF data.( Contributed Paper to the International Conference on High Energy Physics, 2002 )1 2Email: htwong@.tw Taiwan EXperiment On NeutrinO : Home Page at .tw/∼texono/1Introduction and HistoryThe TEXONO Collaboration has been built up since 1997 to initiate and pursue an experimental program in Neutrino and Astroparticle Physics [1]. The Collaboration comprises more than 40 research scientists from major institutes/universities in Taiwan (Academia Sinica† , Chung-Kuo Institute of Technology, Institute of Nuclear Energy Research, National Taiwan University, National Tsing Hua University, and Kuo-Sheng Nuclear Power Station), China (Institute of High Energy Physics† , Institute of Atomic Energy† , Institute of Radiation Protection, Nanjing University, Tsing Hua University) and the United States (University of Maryland), with AS, IHEP and IAE (with † ) being the leading groups. It is the first research collaboration of this size and magnitude among major research institutes from Taiwan and China. The research program [1] emphasizes on the the unexplored and unexploited theme of adopting detectors with high-Z nuclei, such as solid state device and scintillating crystals, for low-energy low-background experiments in Neutrino and Astroparticle Physics[2]. The “Flagship” program [3] is a reactor neutrino experiment to study low energy neutrino properties and interactions at the Kuo-Sheng (KS) Neutrino Laboratory. It is the first particle physics experiment performed in Taiwan. In parallel to the reactor experiment, various R&D efforts coherent with the theme are initiated and pursued. This article focuses on the magnetic moment data taking and analysis with a germanium detector during the Reactor Period June 2001 till April 2002.2Electromagnetic Properties of the NeutrinosThe strong and positive evidence of neutrino oscillations implies the existence of neutrino masses and mixings [4, 5], the physical origin, structures and experimental consequences of which are still not thoroughly known and understood. 
Experimental studies on the neutrino properties and interactions which may reveal some of these fundamental questions and/or constrain certain classes of models are therefore of interests. The coupling of neutrino with the photons are consequences of non-zero neutrino masses. Two of the manifestations of the finite electromagnetic form factors for neutrino interactions [6, 7] are neutrino magnetic moments and radiative decays. The searches of neutrino magnetic moments are performed in experiments on neutrino-electron scatterings [8]: νl1 + e− → νl2 + e− . (1) The experimental observable is the kinetic energy of the recoil electrons(T). The differential cross section for the magnetic scattering (MS) channel can be parametrized by the neutrino magnetic moment (µl ), often expressed in units of the Bohr magneton(µB ). Its dependence on neutrino energy Eν is given by [6]: (2 παem µl 2 1 − T/Eν dσ [ )MS = ]. dT m2 T e(2)The process can be due to diagonal and transition magnetic moments, for the cases where l1 = l2 and l1 = l2 , respectively. The MS interactions involve a flip of spin and therefore do not have interference with the Standard Model (SM) cross-section. The µl term has a 1/T dependence and hence dominates at low electron recoil energy over the SM process. The quantity µl is an effective parameter which, in the case of νe and large mixings ¯ between the mass eigenstates, can be expressed as [9]: µ2 = ek|jUej µjk|2(3)where U is the mixing matrix and µjk are the fundamental constants that characterize the couplings between the mass eigenstates νj and νk with the electromagnetic field. The ν-γ couplings probed by ν-e scatterings is the same as that giving rise to the neutrino radiative decays [10]: νj → νk + γ (4) between νj and νk . The decay rate Γjk is related to µjk by: Γjk = µ2 jk 4π (∆m2 )3 jk 3 mj (5)where mj is the mass of νj and ∆m2 = m2 − m2 . The total decay rate is given by jk j k Γν e =k|jUej Γjk |2(6)Reactor neutrinos provide a sensitive probe for “laboratory” searches of µe , taking advantages of the high νe flux, low Eν and the better experimental control via the reactor ¯ ON/OFF comparison. Neutrino-electron scatterings were first observed in pioneering experiment [11] at Savannah River. However, a reanalysis of the data by Ref [6] with improved input parameters on the reactor neutrino spectra and sin2 θW gave a positivesignature consistent with the interpretation of a finite magnetic moment at (2 − 4) × 10−10 µB . Other results came from the Kurtchatov [12] and Rovno [13] experiments which quoted limits of µν¯e less than 2.4 × 10−10 µB and 1.9 × 10−10 µB at 90% confidence level (CL), respectively. However, the lack of experimental details and discussions in the published work make it difficult to assess the robustness of the results. Theoretically, a minimally-extended Standard Model would give rise to µν ’s for Dirac neutrinos too small to be of interest [14]. However, there are models [4] which can produce large µν by incorporating new features like right-handed currents and transition moments. Neutrino flavor conversion induced by resonant or non-resonant spin-flip transitions in the Sun via its transition magnetic moments has been considered as solution to the Solar Neutrino Problem [15]. Stimulated by the new results from the Super-Kamiokande and SNO experiments, recent detailed work [16] suggested that this scenario is also in excellent agreement with the existing solar neutrino data. 
Alternatively, the measured solar neutrino ν⊙ -e spectra has been used to set limits of µ⊙ < 1.5 × 10−10 µB at 90% ν CL for the “effective” ν⊙ magnetic moment [9], which is in general different from that of a pure νl state derived in laboratory experiments. In addition, there are astrophysical arguments [4] from nucleosynthesis, stellar cooling and SN1987a which placed limits of the range µe < 10−12 − 10−13 µB . However, care should be taken in their interpretations due to the model dependence and assumptions on the neutrino properties implicit in the derivations. These discussions show that further laboratory experiments to put the current limits on more solid grounds and to improve on the sensitivities are of interest. Beside the KuoSheng (KS) experiment [1] reported in this article, there is another on-going experiment MUNU [17] at the Bugey reactor using a time projection chamber with CF4 gas.3Kuo-Sheng Reactor Neutrino LaboratoryThe “Kuo-Sheng Reactor Neutrino Laboratory” is located at a distance of 28 m from the core #1 of the Kuo-Sheng Nuclear Power Station at the northern shore of Taiwan [3]. A schematic view is depicted in Figure 1. A multi-purpose “inner target” detector space of 100 cm×80 cm×75 cm is enclosed by 4π passive shielding materials and cosmic-ray veto scintillator panels, the schematic layout of which is shown in Figure 2. The shieldings provide attenuation to the ambient neutron and gamma background, and are made up of, inside out, 5 cm of OFHC copper,Kuo-sheng Nuclear Power Station : Reactor BuildingReactor Pressure VesselPrimary ContainmentDrywellAuxiliary BuildingReactor CoreExperiment SiteSuppression PoolFigure 1: Schematic side view, not drawn to scale, of the Kuo-sheng Nuclear Power Station Reactor Building, indicating the experimental site. The reactor core-detector distance is about 28 m. 25 cm of boron-loaded polyethylene, 5 cm of steel and 15 cm of lead. Different detectors can be placed in the inner space for the different scientific goals. The detectors are read out by a versatile electronics and data acquisition systems [18] running on a VME-PCI bus. Signals are recorded by Flash Analog-to-Digital-Convertor (FADC) modules with 20 MHz clock and 8-bit resolution. The readout allows full recording of all the relevant pulse shape and timing information for as long as several ms after the initial trigger. The reactor laboratory is connected via telephone line to the home-base laboratory at AS, where remote access and monitoring are performed regularly. Data are stored and accessed in a multi-disks array with a total of 600 Gbyte memory via IDE-bus in PCs. It is recognized recently [19] that the low energy part of the reactor neutrino spectra is not well modeled or experimentally checked. Consequently, the uncertainties induced in the SM νe -e cross-sections can limit the sensitivities of magnetic moment searches at the ¯ domain where they are comparable or larger than the MS interactions. Therefore, experiments intended for measure Standard Model cross sections with reactor neutrinos should focus on higher energies (T>1.5 MeV) while µν searches should base on measurements with T<100 keV.Shielding Design [Only One out of Six Sides Shown]Inner Target Volume 100(W) x 80(D) x 75(H) cm 3 Copper : 5cm Boron-loaded Polyethylene : 25 cm Stainless Steel Frame : 5 cm Lead : 15 cmVeto Plastic Scintillator : 3 cmFigure 2: Schematic layout of the inner target space, passive shieldings and cosmic-ray veto panels. The coverage is 4π but only one face is shown. 
Accordingly, data were taken for Period I Reactor ON/OFF from June 2001 till April 2002 with these strategies. Two detector systems are running in parallel using the same data acquisition system but independent triggers: (a) an Ultra Low Background High Purity Germanium (ULB-HPGe), with a fiducial mass of 1.06 kg, and (b) 46 kg of CsI(Tl) crystal scintillators. Preliminary results of the HPGe system is presented in the following sections. The performance of the CsI(Tl) modules is discussed elsewhere [1, 20].4Low Background Germanium DetectorThe low threshold and excellent energy resolution of the germanium detector make it optimal for µν searches [1, 21]. The set-up of the KS-Ge experiment is schematically shown in Figure 3. It is a coaxial germanium detector with an active target mass of 1.06 kg. The lithium-diffused outer electrode is 0.7 mm thick. The end-cap cryostat, also 0.7 mm thick, is made of OFHC copper. Both of these features provide total suppression to ambient γ-background below 60 keV, such that events below this energy are either due to internal activity or external high energy γ’s via Compton scattering. The HPGe was surrounded by an anti-Compton (AC) detector system made up of two components: (1) an NaI(Tl) well-detector of thickness 5 cm that fit onto the end-cap cryostat, the innerKuo-Sheng Experiment : HPGe DetectorOFHCPMT OFHC NaI Ge Det OFHC CsIRadon Purge Plastic BagPbFigure 3: Schematic drawings of the ULB-HPGe detector with its anti-Compton scintillators, passive shieldings and radon purge system. wall of which is also made of OFHC copper, and (2) a 4 cm thick CsI(Tl) detector at the bottom. Both AC detectors were read out by photo-multipliers (PMTs) with low-activity glass. The assembly was surrounded by 3.7 cm of OFHC copper inner shielding. Another 10 cm of lead provided additions shieldings on the side of the liquid nitrogen dewar and pre-amplifier electronics. The inner shieldings and detectors were covered by a plastic bag connected to the exhaust line of the dewar, serving as a purge for the radioactive radon gas. The HPGe pre-amplifier and AC PMT signals were distributed via 10 m cables to two spectroscopy amplifiers at the same 4 µs shaping time but with different gain factors. A very loose amplitude threshold on the SA output provided the on-line trigger, ensuring all the events down to the electronics noise edge were recorded. The SA output, as wellas the PMT signals from the AC detectors, were recorded by the FADC for a duration of 25 µs after the trigger. The discriminator output of the Veto PMTs were also recorded. A random trigger was provided by an external clock once per 10 s for sampling the background level and evaluating the various efficiency factors. The DAQ system remains active for 2 ms after a trigger to record possible time-correlated signatures. The activities in any part of the detector systems within 10 µs prior to the trigger were also recorded. The typical data taking rate for the HPGe sub-system was about 1 Hz. The accurately recorded DAQ dead time was about 10-20 ms per event.5ResultsThe measured spectra, after cuts of cosmic and anti-Compton vetos, during 60/46 days of reactor ON/OFF data taking are displayed in Figure 4. The background level of 1 keV−1 kg−1 day−1 and a detector threshold of 5 keV, comparable to those of underground Dark Matter experiment, are achieved. Additional cuts based on pulse shape and timing information are expected to further reduce the background level and the threshold. 
Several lines can be identified: Ga X-rays at 10.37 keV and 73 Ge∗ at 66.7 keV from internal cosmic-induced activities, and 234 Th at 63.3 keV and 92.6 keV due to residual ambient radioactivity from the 238 U series in the vicinity of the target. The ON and OFF spectra differ in two significant features. The excess of OFF over ON below the noise edge of 5 keV is due to instabilities in the trigger threshold and do not affect the analysis at higher energies. The excess at the Ga X-ray peaks originates from the long-lived isotopes (68 Ge and 71 Ge with half-lives of 271 and 11.4 days, respectively) activated by cosmic-rays prior to installation. It has been checked that the time evolution of the peak intensity can be fit to two exponentials consistent with the two known half-lives. The reactor neutrino spectra was evaluated from reactor operation data using the standard prescriptions on from fission νe [6] together with a low energy contribution due ¯ 238 to neutron capture on U [22]. The total flux is 5.6 × 1012 cm−2 s−1 . The electron recoil spectra from SM and MS interactions can then be calculated. Background in the energy range of 12 to 60 keV are due to Compton scatterings of higher energy γ’s. Accordingly, the Reactor OFF data are fitted to a smooth function φOFF = α1 eα2 E + α3 + α4 E. (7)A χ2 /dof of 40/46 is obtained indicating this background description is valid. A one-10 210110-110-2020406080100120Figure 4: The energy spectra after the anti-Compton and cosmic veto cuts for 60/46 live time days of Reactor ON/OFF data taking. The OFF data set is multiplied by 10 for display purposes.0.8 0.6 0.4 0.2 0 -0.2 -0.4 -0.6 -0.8 10 20 30 40 50 60Figure 5: The residual spectrum of the Reactor ON recoil data subtracting φOFF . The best-fit 1σ region is also shown. parameter fit is then performed to the Reactor ON data for φOFF + φSM + k2 φMS , (8)where φSM and φMS are the expected recoil spectra due to weak interactions and MS at µν¯e = 10−10 µB , respectively. A best-fit value of k2 = −1.1 ± 2.5 (9)at a χ2 /dof of 55/49 is obtained. The residual plot and the best-fit 1σ region is depicted in Figure 5. A total of 180/60 days of Reactor ON/OFF are taken for this reactor period. A limit will be derived when the data analysis is improved, complete and finalized. It is expected that the data would give world-level sensitivities in the searches of νe magnetic moments. ¯ ¯ Depicted in Figure 6 is the summary of all the results in µν¯e searches with reactor νe versus the achieved threshold. The dotted lines are the σµ /σSM ratio at a particular (T, µν¯e ). The KS experiment operated at a much lower threshold of 12 keV compared to previous and current measurements. 
The large σµ /σSM ratio at low energy implies that effects due to the uncertainties in the SM cross-sections can be neglected such that the limits derived are more robust.0.511.522.533.544.5110102103Figure 6:Summary of the results in the searches of neutrino magnetic moments with reactor neutrinos.The dotted line is the ratio between the cross-sections due to magnetic moments and Standard Model weak interactions.Indirect bounds on the neutrino radiative decay rate are inferred using Eq.5and displayed in Figure 7.The sensitivity region of µν=10−10µB is shown for illustration purpose.Superimposed are the limits from the previous direct searches of excess γ’s from reactor neutrinos [23]and from the supernova SN1987a [24].Also shown is the sensitivity level of proposed simulated conversion experiments at accelerator [25].It can be seen that ν-e scatterings give much more stringent bounds than the direct searches.6Summary &OutlookWith the strong evidence of physics beyond the Standard Model revealed in the neu-trino sector,neutrino physics and astrophysics remains a central subject in experimental particle physics in the coming decade and beyond.There are room for ground-breaking technical innovations -as well as potentials for surprises in the scientific results.A Taiwan,China and U.S.A.collaboration has been built up with the goal of es-tablishing a experimental program in neutrino and astro-particle physics.It is the first generation collaborative efforts in large-scale basic research between scientists from Tai--5510152025303540-4-3.5-3-2.5-2-1.5-1-0.500.51Figure 7:Summary of the results in the bounds in radiative decay lifetime of the neutrino.See text for explanationswan and China.The flagship effort is to perform the first-ever particle physics experiment in Taiwan at the Kuo-Sheng Reactor Plant.From the Period I data taking,we expect to achieve world-level sensitivities in neutrino magnetic moments and radiative lifetime studies.A wide spectrum of R&D projects are being pursued.New ideas for future directions are being explored.7AcknowledgmentsThe author is grateful to the scientific members,technical staffand industrial partners of TEXONO Collaboration,as well as the concerned colleagues for the invaluable contri-butions which “make it happen”in such a short period of time.Funding are provided by the National Science Council,Taiwan and the National Science Foundation,China,as well as from the operational funds of the collaborating institutes.References[1]C.Y.Chang,S.C.Lee and H.T.Wong,Nucl.Phys.B(Procs.Suppl.)66,419(1998);H.T.Wong and J.Li,Mod.Phys.Lett.A15,2011(2000);H.T.Wong and J.Li,hep-ex/0201001(2002).[2]H.T.Wong et al.,Astropart.Phys.14,141(2000).[3]H.B.Li et al.,TEXONO Coll.,Nucl.Instrum.Methods A459,93(2001).[4]See the respective sections in D.E.Groom et al.,Particle Data Group,Eur.Phys.J.C15(2000),for details and a complete list of references.[5]For the latest results,see these Proceedings,ICHEP2002.[6]P.Vogel and J.Engel,Phys.Rev.D39,3378(1989).[7]J.F.Nieves,Phys.Rev.D26,3152(1982).[8]B.Kayser et al.,Phys.Rev.D20,87(1979).[9]J.F.Beacom and P.Vogel,Phys.Rev.Lett.83,5222(1999).[10]G.G.Raffelt,Phys.Rev.D39,2066(1989).[11]F.Reines,H.S.Gurr and H.W.Sobel,Phys.Rev.Lett.37,315(1976).[12]G.S.Vidyakin et al,JETP Lett.55,206(1992).[13]A.I.Derbin et al.,JETP Lett.57,769(1993).[14]B.W.Lee and R.E.Shrock,Phys.Rev.D16,1444(1977);W.Marciano and A.I.Sanda,Phys.Lett.B67,303(1977).[15]M.B.Voloshin,M.I.Vysotskii and L.B.Okun,Sov.Phys.JETP64,446(1986);E.K.Akhmedov,Phys.Lett.B213,64(1988);C.Lim 
and W.J.Marciano,Phys.Rev.D 37,1368(1988).[16]J.Pulido and E.K.Akhmedov,Astropart.Phys.13,227(2000);E.K.Akhmedov andJ.Pulido,Phys.Lett.B485,178(2000);O.G.Miranda et al.,Nucl.Phys.B595, 360(2001);O.G.Miranda et al.,Phys.Lett.B521,299(2001);[17]C.Amsler et al.,Nucl.Instrum.Methods A396,115(1997);C.Broggini,Nucl.Phys.B(Procs.Suppl.)91,105(2001).[18]i et al.,TEXONO Coll.,Nucl.Instrum.Methods A465,550(2001).[19]H.B.Li and H.T.Wong,J.Phys.G,in press(2002).[20]Y.Liu et al.,TEXONO Coll.,Nucl.Instrum.Methods482,125(2002).[21]A.G.Beda,E.V.Demidova,and A.S.Starostin,Nucl.Phys.A663,819(2000);A.S.Starostin and A.G.Beda,Phys.Atom.Nucl.631297(2000).[22]V.I.Kopeikin,L.A.Mikaelyan,and V.V.Sinev,Phys.Atomic Nuclei60,172(1997).[23]L.Oberauer,F.von Feilitzsch and R.L.M¨o ssbauer,Phys.Lett.B198,113(1987);J.Bouchez et al.,Phys.Lett.B207,217(1988).[24]E.L.Chupp,W.T.Vestrand,and C.Reppin,Phys.Rev.Lett.62,505(1989).[25]S.Matsuki and K.Yamamoto,Phys.Lett.B289,194(1992);M.C.Gonzalez-Garcia,F.Vannucci and J.Castromonte,Phys.Lett.B373,153(1996).。

Python scatter parameters explained

A scatter plot is a common chart for visualizing multivariate data, and a good command of its parameters helps us analyze and present data more precisely and effectively.

This article introduces the usage and characteristics of the scatter parameters in Python, to help readers build sound visualization habits.

Python's scatter is a commonly used method for drawing scatter plots; it can be used to visualize the relationship between two variables.

It accepts a variety of parameters, and these parameters control the appearance and behavior of the resulting plot.

This article describes the Python scatter parameters in detail, including the meaning, usage, and range of valid values of each parameter, so that users can better understand and control the effect of scatter.

1. The coordinate parameters. The coordinate arguments tell scatter which sample data to plot.

In matplotlib's scatter they are passed as two equal-length sequences, x and y, rather than as a single list of pairs; together they define the position of each point.

Each point is therefore described by an (x, y) pair, where x is the horizontal coordinate and y the vertical coordinate; both may be numeric or categorical (string) values.
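A minimal sketch of how the positions are passed to matplotlib's scatter; the data values here are made up purely for illustration.

```python
import matplotlib.pyplot as plt

# x holds the horizontal coordinates, y the vertical ones; the i-th point is (x[i], y[i]).
x = [1, 2, 3, 4, 5]
y = [3, 1, 4, 1, 5]

plt.scatter(x, y)
plt.show()
```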

2. The s parameter. The s parameter controls the size of the scatter points.

It can be a single number that sets one size for all points, or a sequence of numbers that sets the size of each point individually.

Its values are integers or floats; the larger the value, the larger the marker.
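A short illustration of both forms of s (a single size versus one size per point); note that matplotlib interprets the value as the marker area in points squared, and the data are again invented for the example.

```python
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
y = [3, 1, 4, 1, 5]

# One number: every marker gets the same size.
plt.scatter(x, y, s=40)

# One number per point: sizes grow from left to right (area in points**2).
plt.scatter(x, [v + 1 for v in y], s=[20, 60, 120, 200, 300])

plt.show()
```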

3. The c parameter. The c parameter controls the color of the scatter points.

It can be a single color (such as 'blue'), a set of numeric values to be mapped onto a color gradient, or a list of colors defining the color of each point.

When a gradient is used, the user chooses a color map (cmap) that assigns different colors to different numeric values; the vmin and vmax parameters set the lower and upper ends of the color scale, and hence of the colorbar.
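A minimal sketch of color mapping with c, cmap, vmin and vmax; the colormap name 'viridis' and the value range 0 to 100 are arbitrary choices for this example.

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(10)
y = np.random.rand(10)
values = np.linspace(0, 100, 10)   # numeric values to be mapped to colors

# cmap turns each value into a color; vmin/vmax pin the ends of the color scale.
sc = plt.scatter(x, y, c=values, cmap='viridis', vmin=0, vmax=100)
plt.colorbar(sc)                   # the colorbar spans exactly vmin..vmax
plt.show()
```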

4. The marker parameter. The marker parameter controls the shape of the points.

In matplotlib a single scatter call uses one marker style for all of its points; to mix several shapes in one figure, issue one scatter call per shape.

scatter can draw different point shapes according to the user's choice, for example the point marker '.', the circle 'o', the upward triangle '^', and the downward (inverted) triangle 'v'.
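A small sketch of the per-call marker style; splitting the data into two calls, as below, is just one way to combine shapes in a single figure.

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.random.rand(20)
y = np.random.rand(20)

# Each call uses exactly one marker style for all of its points.
plt.scatter(x[:10], y[:10], marker='o', label='circles')
plt.scatter(x[10:], y[10:], marker='^', label='triangles')
plt.legend()
plt.show()
```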

5. The edgecolors parameter. The edgecolors parameter controls the edge (outline) color of the markers.
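To round off the parameter tour, here is a small sketch combining edgecolors with the previously discussed s and c; the specific colors are arbitrary.

```python
import matplotlib.pyplot as plt
import numpy as np

x = np.random.rand(30)
y = np.random.rand(30)

# Face color via c, outline color via edgecolors, outline width via linewidths.
plt.scatter(x, y, s=150, c='lightblue', edgecolors='darkblue', linewidths=1.5)
plt.show()
```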

The scatterplot function in R

Contents: 1. Overview of scatter plots; 2. Scatter-plot functions in R; 3. How to use the scatterplot function; 4. Scatter-plot parameters and their functions; 5. A worked example.

1. Overview of scatter plots. A scatter plot is a chart used to show the relationship between two variables, usually two continuous variables.

In statistics and data analysis, the scatter plot is a commonly used visualization tool; it lets us observe directly the correlation, trend, and dispersion between variables.

2. Scatter-plot functions in R. R is a powerful language for data processing and statistical analysis, and it provides rich data-visualization functionality.

In R, we can create a scatter plot with a scatter-plot function (scatterplot).

3. How to use the scatterplot function. In R, a scatter plot can be created with the following syntax:

```R
scatterplot(x, y, main = "Scatter plot", xlab = "X-axis label", ylab = "Y-axis label", ...)
```

Here x and y are the two variables plotted on the horizontal and vertical axes of the scatter plot, respectively.

The main, xlab, and ylab arguments set the chart title and the axis labels.

The remaining arguments adjust the style, colors, and other aspects of the plot.

4. Scatter-plot parameters and their functions. Some commonly used parameters of the scatter-plot function and what they do:

- pch: the point shape, such as a filled circle, a cross, or an open circle.
- col: the point color, given as a color name or a color code.
- cex: the point size, as a magnification factor.
- main: the chart title.
- xlab: the X-axis label.
- ylab: the Y-axis label.
- grid: whether to draw grid lines (TRUE to show them, FALSE to hide them).
- legend: whether to draw a legend (TRUE to show it, FALSE to hide it).
- asp: the aspect ratio of the plot; the default is 1.
- xy.axis: the type of the X and Y axes, e.g. "scatter" (scatter plot) or "plot" (line plot).

Usage of the Python scatter function

The scatter function in Python is part of the Matplotlib library and is used to draw scatter plots. A scatter plot is a chart type for showing the relationship between two variables, in which every point represents one data record and its position is determined by the values of the two variables.

The basic signature of the scatter function is as follows:

```python
matplotlib.pyplot.scatter(x, y, s=None, c=None, marker=None, cmap=None, norm=None,
                          vmin=None, vmax=None, alpha=None, linewidths=None,
                          edgecolors=None, *, plotnonfinite=False, data=None, **kwargs)
```

Here x and y are two arrays giving the x and y coordinates of every point in the plot. s is optional and sets the size of the points. c is optional and sets their color. marker is optional and sets the point shape. cmap is optional and selects the color map. norm is optional and normalizes the values fed to the color map. vmin and vmax are optional and set the minimum and maximum of the color mapping. alpha is optional and sets the transparency of the points. linewidths is optional and sets the width of the marker edges. edgecolors is optional and sets the color of the marker edges.

Below is a simple example that demonstrates how to draw a scatter plot with the scatter function:

```python
import matplotlib.pyplot as plt
import numpy as np

# Generate random data
x = np.random.rand(50)
y = np.random.rand(50)
colors = np.random.rand(50)
sizes = 1000 * np.random.rand(50)

# Draw the scatter plot
plt.scatter(x, y, c=colors, s=sizes, alpha=0.5)

# Show the figure
plt.show()
```

In this example, we first use the NumPy library to generate 50 random numbers for the x and y coordinates, and another 50 random numbers each for the colors and the sizes.

Python scatter: the diagonal line, marking variance, and the correlation coefficient

Python scatter: diagonal line, variance markers and correlation coefficient

Four sample essays are provided for reference; the first one follows.

Sample 1: Python is a popular programming language that is widely used in data analysis, machine learning and scientific computing. For data visualization, Python offers a rich set of libraries and tools that help us understand and analyze data. Among them, the scatter plot is a common visualization method that reveals patterns and trends in the data by showing the relationship between variables.

In this article we focus on how to use scatter plots in Python to display the diagonal line, mark the variance, and show the correlation coefficient. These three quantities are commonly used in data analysis: they help us assess the relationship between variables and the characteristics of the data. Combined with a scatter plot, they let us inspect the distribution of the data more intuitively and thus make more accurate analyses and predictions.

Let us first introduce the scatter plot itself. A scatter plot is normally used to show the relationship between two variables by drawing the data points in a coordinate system. In Python we can draw one with the matplotlib library:

```python
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
y = [2, 3, 4, 5, 6]

plt.scatter(x, y)
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Scatterplot')
plt.show()
```

This code produces a simple scatter plot in which the x axis carries the values of variable x and the y axis the values of variable y. By looking at the distribution of the points we can make a first judgement about the relationship between the variables: positively correlated, negatively correlated or unrelated.

Next we show how to add the diagonal line, variance markers and the correlation coefficient to a scatter plot.

First, the diagonal. The diagonal is the straight line x = y and represents the relationship of a variable with itself. Adding this line to a scatter plot makes it easy to see whether the points lie along the diagonal, and therefore whether the two variables are intrinsically related.
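The article breaks off here; a small sketch of how the three quantities it names could be added to a matplotlib scatter plot — the data, variable names and layout are illustrative only:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.8 * x + rng.normal(scale=0.5, size=100)

fig, ax = plt.subplots()
ax.scatter(x, y, alpha=0.6)

# diagonal reference line x = y
lims = [min(x.min(), y.min()), max(x.max(), y.max())]
ax.plot(lims, lims, 'k--', label='x = y')

# variance of each variable and Pearson correlation coefficient
var_x, var_y = np.var(x), np.var(y)
r = np.corrcoef(x, y)[0, 1]
ax.set_title(f'var(x)={var_x:.2f}, var(y)={var_y:.2f}, r={r:.2f}')

ax.legend()
plt.show()
```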

CityEngine random point function: scatter syntax

Summary

    scatter(domain, nPoints, distributionType) { operations }
    scatter(domain, nPoints, gaussian, scatterMean, scatterStddev) { operations }

Parameters
- domain (selstr): where the random points are generated; three options: surface, volume, scope.
- nPoints (float): the number of points to generate.
- distributionType (selstr): the type of point distribution; two options: uniform, gaussian.
- scatterMean (selstr): optional; the center of the point cloud relative to the current scope; seven options: center, front, back, left, right, top, bottom; defaults to center.
- scatterStddev (float): optional; the standard deviation; defaults to 0.16.

Notes
- volume can only be used as the domain when the geometry is a closed shape; otherwise surface is used instead.
- Each generated point has a scope of size 0.
- Each generated point contains only a single vertex; the insert function i() can be used to place a model at it.

Example 1: generate random points on the surface with a uniform distribution

    attr height = 10
    Lot -->
        extrude(height)
        scatter(surface, 1000, uniform) { A. }

Examples 2–6 use the same rule and only change the scatter call, covering the remaining domain/distribution combinations:

    scatter(volume, 1000, uniform) { A. }
    scatter(surface, 1000, gaussian) { A. }
    scatter(volume, 1000, gaussian) { A. }
    scatter(scope, 1000, uniform) { A. }
    scatter(scope, 1000, gaussian) { A. }

Example 7: generate random points in the scope and set the scatterMean parameter (gaussian distribution)

    scatter(scope, 20, gaussian, front, 0.9) { A. }    # scatterMean = front
    scatter(scope, 20, gaussian, left, 0.9) { A. }     # scatterMean = left
    scatter(scope, 20, gaussian, bottom, 0.9) { A. }   # scatterMean = bottom
    scatter(scope, 20, gaussian, right, 0.9) { A. }    # scatterMean = right
    scatter(scope, 20, gaussian, back, 0.9) { A. }     # scatterMean = back
    scatter(scope, 20, gaussian, top, 0.9) { A. }      # scatterMean = top

Example 8: setting the standard deviation (the larger the standard deviation, the more spread out the points)

    scatter(scope, 20, gaussian, top, 0.2) { A. }      # scatterStddev = 0.2
    scatter(scope, 20, gaussian, top, 2) { A. }        # scatterStddev = 2

On the Distribution of Extrinsic L-values in Gray-mapped 16-QAM

analytical approach over the simulation-based one lies in the speed-up of any analysis relying on the knowledge of the PDF. This paper is organized as follows. The model of the BICM-ID transmission is introduced in Section II along with the mathematical notation. The expressions for the PDF of the extrinsic L-values are shown in Section III, while the detailed derivations are left for the Appendix. In Section IV, the derived expressions are contrasted with histograms obtained from extensive numerical simulations, and we provide a simple application example calculating the EXIT functions of the demapper. The conclusions are drawn in Section V.

II. SYSTEM MODEL

Consider the baseband model of BICM transmission shown in Fig. 1, where the bits, denoted by y(l), taken from the output of the binary channel encoder are interleaved, yielding y(l'). These are gathered in codewords of length m, y(n) = [y(nm), ..., y(nm + m − 1)] = [y_0(n), ..., y_{m−1}(n)], and mapped into symbols s(n) = µ{y(n)} ∈ X, where l, l' and n denote discrete times defined for bits, interleaved bits, and symbols, respectively. The symbols a_k are taken from a set X = {a_0, ..., a_{M−1}}, where M = 2^m. Here we consider 16-QAM, so each symbol may be represented as a_j = ℜ{a_j} + ıℑ{a_j}, where ı = √−1, and ℜ{·} and ℑ{·} are, respectively, the real and imaginary part of the symbol, with ℜ{a_j} ∈ X^I and ℑ{a_j} ∈ X^Q, where X^I = X^Q = {−3∆, −∆, +∆, +3∆}. The constellation X is zero-mean and normalized to unit energy, so ∆ = 1/√10.

We make the realistic assumption that the bits y(l') are equiprobable, Pr{y(l') = 1} = 1/2, and that, thanks to the perfect (infinite-depth) interleaving, they can be modeled as independent random variables. The channel output is given by r̃(n) = h(n)s(n) + η(n), where η(n) is an additive white Gaussian noise (AWGN) with variance N0 (its real and imaginary parts are independent, each with variance N0/2) and h(n) is the complex channel gain. In this work we focus our attention on AWGN channels, so h(n) = 1; consequently the signal-to-noise ratio (SNR) is given by γ = 1/N0. In the following, the dependence on time n is omitted to simplify notation. Assuming perfect channel knowledge, the extrinsic L-values L_k for the k-th bit of the codeword
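The fragment stops before the L-value expressions themselves. As a purely numerical illustration (not the paper's derivation), the unit-energy 16-QAM constellation and the exact demapper L-values for equiprobable bits could be computed as follows; the specific Gray labeling, the function name and the chosen N0 are assumptions made for this sketch:

```python
import itertools
import numpy as np

# 4-level Gray-coded PAM axis with Delta = 1/sqrt(10): bits (b0, b1) -> level
DELTA = 1 / np.sqrt(10)
gray_pam = {(0, 0): -3 * DELTA, (0, 1): -1 * DELTA,
            (1, 1): +1 * DELTA, (1, 0): +3 * DELTA}

# 16-QAM symbols: first two bits -> real part, last two bits -> imaginary part
labels = list(itertools.product([0, 1], repeat=4))
symbols = {b: gray_pam[b[:2]] + 1j * gray_pam[b[2:]] for b in labels}

def extrinsic_llrs(r, N0):
    """Exact L-values log p(r | b_k = 1) / p(r | b_k = 0) for equiprobable bits."""
    metrics = {b: np.exp(-abs(r - a) ** 2 / N0) for b, a in symbols.items()}
    llrs = []
    for k in range(4):
        num = sum(m for b, m in metrics.items() if b[k] == 1)
        den = sum(m for b, m in metrics.items() if b[k] == 0)
        llrs.append(np.log(num / den))
    return llrs

# example: transmit the symbol labeled (1, 0, 0, 1) over an AWGN channel with gamma = 1/N0
N0 = 0.2
tx = symbols[(1, 0, 0, 1)]
r = tx + np.sqrt(N0 / 2) * (np.random.randn() + 1j * np.random.randn())
print(extrinsic_llrs(r, N0))
```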

Explanation of DMIS statements

Sample program:

    F(PT2)=FEAT/POINT,CART,146.307037,0.000000,9.925466,0.000000,$
        -1.000000,0.000000
    MEAS/POINT,F(PT2),1
    PTMEAS/CART,146.307037,0.000000,9.925466,0.000000,-1.000000,0.000000
    ENDMES

F(PT2)=FEAT/POINT,... — FEAT/POINT defines a point feature. The standard DMIS format is F(label)=FEAT/POINT,CART[POL],X,Y,Z,I,J,K or FA(label)=FEAT/POINT,CART[POL],X,Y,Z,I,J,K. label is the name of the point; F(label) defines a nominal (theoretical) point feature, FA(label) an actual one. CART means the data are given in Cartesian coordinates, POL in polar coordinates. X,Y,Z are the coordinates of the point in the current coordinate system and current units, and I,J,K is the normal vector of the point.

MEAS/POINT,F(PT2),1 — MEAS measures a feature. The standard format is MEAS/<feature type>,F(label1),n, or for points MEAS/POINT,[COMP],[AXDIR][DME][POL][SPH][VEC,i,j,k][FEAT,[F(label2)][FA(label2)][G(label3)]],F(label1),n. The feature type can be ARC (arc), CIRCLE (circle), CONE (cone), CONRADSEGMNT (cone segment), CPARLN (slot), CYLNDR (cylinder), CYLRADSEGMNT (cylinder segment), EDGEPT (edge point), ELLIPS (ellipse), ELONGCYL (elongated cylinder), GCURVE (curve), GSURF (surface), LINE (line), OBJECT (object), PARPLN (parallel-plane slot), PLANE (plane), RCTNGL (prism), REVSURF (surface of revolution), SPHERE (sphere), SPHRADSEGMNT (spherical segment), SYMPLN (symmetry plane), TORUS (torus), TORRADSEGMNT (torus segment) or POINT (point). F(label1) is the name of the nominal feature to be measured and n is the number of measurement points. COMP applies probe compensation, whose mode is given by the following parameter: AXDIR compensates along the nearest coordinate axis of the current coordinate system, DME uses the DME system algorithm, POL compensates radially about the current origin in the working plane, SPH compensates radially about the current origin, VEC,i,j,k compensates along the vector (i,j,k), and FEAT compensates radially towards the centroid of a given feature; F(label2), FA(label2) and G(label3) name the nominal feature, actual feature or geometry data that define the compensation direction. Note: every MEAS statement must be closed by a corresponding ENDMES statement; between MEAS and ENDMES, several PTMEAS and similar statements perform the actual measurement.

PTMEAS/CART,... — PTMEAS performs a single point measurement. The standard format is PTMEAS/CART,x,y,z[POL,r,a,h],[i,j,k]. CART,x,y,z are the Cartesian coordinates of the point measurement, POL,r,a,h the polar coordinates, and i,j,k the direction vector of the measurement.

ENDMES — marks the end of a "CALIB...ENDMES", "MEAS...ENDMES" or "RMEAS...ENDMES" block. The standard format is ENDMES.

DMISMN/'Created by[爱科腾瑞公司-080106]on星期三,三月15,2006',4.0 — DMISMN sets the identification of a DMIS input program. The standard format is DMISMN/'text',version, where 'text' is the identification string and version is the DMIS version number, consisting of a major and a minor part (XX.x).

UNITS/MM,ANGDEC — UNITS sets the units. The standard format is UNITS/MM[CM][METER][INCH][FEET],ANGDEC[ANGDMS][ANGRAD],[TEMPF][TEMPC]. MM, CM, METER, INCH and FEET set the length unit to millimetres, centimetres, metres, inches or feet; ANGDEC, ANGDMS and ANGRAD set the angle unit to decimal degrees, degrees-minutes-seconds or radians; TEMPF and TEMPC set the temperature unit to Fahrenheit or Celsius.

WKPLAN/XYPLAN — WKPLAN sets the working plane. The standard format is WKPLAN/XYPLAN[YZPLAN][ZXPLAN]; XYPLAN, YZPLAN and ZXPLAN select the XY, YZ or ZX plane of the current working coordinate system.

PRCOMP/ON — PRCOMP switches automatic probe compensation on or off. The standard format is PRCOMP/ON[OFF].

TECOMP/MACH,ON — TECOMP sets temperature compensation. The standard formats are TECOMP/MACH,ON[OFF], TECOMP/PART,ON,[DA(label)][OFFSET,xoff,yoff,zoff],tmpexp,ALL[[tmpexpunc],'tempsns'] and TECOMP/PART,OFF. MACH compensates the machine, PART the part; ON and OFF switch the compensation on or off. DA(label) is the coordinate system used as the thermal datum; OFFSET,xoff,yoff,zoff gives the offsets from the current coordinate origin in X, Y and Z; tmpexp is the thermal expansion coefficient of the part; ALL uses all part temperature sensors; tmpexpunc is the uncertainty of the expansion coefficient; 'tempsns' is the name of a part temperature sensor.

FLY/OFF — FLY enables fly mode or switches it off. The standard format is FLY/radius[OFF]; radius is the maximum sphere radius used in fly mode and OFF disables fly mode.

MODE/MAN — MODE sets the mode in which the machine executes the program. The standard format is MODE/MAN[PROG,MAN][AUTO,MAN[PROG,MAN]]. MAN means the machine is controlled manually while measuring or moving; PROG means the machine uses the programmed intermediate moves when executing MEAS, GOTARG and similar statements; AUTO means the machine uses its own algorithm for those moves.

SNSET/CLRSRF,15.000000 — SNSET specifies and activates sensor settings. The standard formats are

    SNSET/VA(label1)[VF(label2)][VL(label3),intnsty][VW(label4)][FOCUSY][FOCUSN][SCALEX,n][SCALEY,n][MINCON,level][APPRCH,dist1][RETRCT,dist1][SEARCH,dist1]
    SNSET/CLRSRF[DEPTH],[dist2][OFF][F(label5),[dist3]][FA(label6),[dist3]][DAT(x),[dist3]]

VA(label1) is a previously defined machine algorithm, VF(label2) a previously defined video probe filter, VL(label3) a previously defined video probe light, and VW(label4) a previously defined video probe window. FOCUSY switches auto-focus off and FOCUSN switches it on. SCALEX,n and SCALEY,n set the image scale factor n in X and Y. MINCON,level sets the minimum confidence level. APPRCH,dist1, RETRCT,dist1 and SEARCH,dist1 set the probe approach, retract and search distances. CLRSRF sets the clearance between the probe and the feature, DEPTH the depth to which the probe enters the measured feature, and OFF disables the CLRSRF or DEPTH option. F(label5) and FA(label6) are the nominal and actual features used as the clearance or depth plane, and DAT(x) is the coordinate (datum) data used for that purpose.

RECALL/D(MCS) — RECALL retrieves data saved with a SAVE statement. The standard format is RECALL/D(label2)[DA(label1)][S(label3)][SA(label4)][FA(label5)][RT(label6)],[DID(label7)]. D(label2) is the working coordinate system to recall, which becomes the active coordinate system; DA(label1) is the actual working coordinate system to recall and activate; S(label3) is the sensor to recall; SA(label4) the actual sensor; FA(label5) the actual feature; RT(label6) the rotary table. DID(label7) is the device that stores the data to recall; if it is not given, the data are read from the machine's default storage device.

SNSLCT selects the sensor used for measuring. GEOALG sets the fitting algorithm for a feature type. ENDFIL marks the end of a program or module. WKPLAN/XYPLAN sets the working plane (see above). GOHOME moves the probe back to its home position; the standard format is GOHOME. DMESW controls the process of sending data in a data or machine input file.

The scatterplot function in R

The scatterplot function is an R function for drawing scatter plots. The scatter plot is a common chart in data visualization: it shows the relationship between two variables and can encode a third variable through visual attributes such as color or size.

Drawing a scatter plot with scatterplot is straightforward: you only need to specify the variables for the x and y axes. A simple example:

```R
library(car)
scatterplot(Sepal.Width ~ Sepal.Length, data = iris)
```

This example uses the Sepal.Width and Sepal.Length variables of the iris data set, with Sepal.Width on the y axis and Sepal.Length on the x axis. We can see that there is some positive correlation between the two variables.

Beyond the basic scatter plot, scatterplot supports a number of advanced features, such as adding a regression line, plotting grouped data and adding labels. Some examples:

1. Adding a regression line

```R
scatterplot(Sepal.Width ~ Sepal.Length, data = iris, smoother = "loess", span = 0.5)
```

This example adds a regression (smoothing) line to the previous plot; the line indicates the trend between the two variables.

2. Plotting grouped data

```R
scatterplot(Sepal.Width ~ Sepal.Length | Species, data = iris, legend = TRUE)
```

This example groups the data by flower species and draws three differently colored sets of points. The x and y variables are the same for every group, but the colors differ, which shows how several groups of data can be displayed in a single figure.

3. Adding labels

```R
scatterplot(Sepal.Width ~ Sepal.Length, data = iris, labels = rownames(iris))
```

This example adds a label to each data point of the scatter plot.

Detailed explanation of the scatter function

The scatter function in PyTorch places or modifies elements of a tensor. Its main purpose is to write the data of a source tensor (src) into another tensor (input) at the positions given by an index tensor (index), along a chosen dimension (dim). In other words, scatter lets us place the values of src into input at the locations named by index.

The function is defined as:

```python
torch.scatter(input, dim, index, src)
```

Parameters:
- input: the tensor that receives the data.
- dim: the dimension along which to index.
- index: the indices of the elements to scatter.
- src: the source of the scattered elements; it can be a scalar or a tensor.

As an example, we can create two tensors and use scatter to write the values of the source tensor into the first tensor at the given indices (note that index and src must have matching shapes):

```python
import torch

# tensor that receives the data
tensor1 = torch.tensor([[1, 2, 3],
                        [4, 5, 6]])

# source values and the column indices (along dim=1) they are written to
src = torch.tensor([[10, 20, 30],
                    [40, 50, 60]])
index = torch.tensor([[2, 0, 1],
                      [1, 2, 0]])

# out-of-place scatter along dimension 1
result = tensor1.scatter(1, index, src)
print(result)
```

Output:

```
tensor([[20, 30, 10],
        [60, 40, 50]])
```

In this example the values of src are placed into a copy of tensor1 at the positions given by index: along dim=1, result[i][index[i][j]] = src[i][j]. Positions not referenced by index keep their original values from the input tensor.

In summary, scatter is a practical PyTorch tool for placing or modifying elements in a tensor. In practice it appears in many scenarios, for example writing feature values into a weight matrix, or modifying the elements of a one-dimensional array according to an index.
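A common practical use, not taken from the article above, is building one-hot encodings with the in-place variant scatter_; a small sketch:

```python
import torch

labels = torch.tensor([0, 2, 1, 3])          # class indices for 4 samples
num_classes = 4

one_hot = torch.zeros(labels.size(0), num_classes)
# write 1.0 into column `label` of each row
one_hot.scatter_(1, labels.unsqueeze(1), 1.0)
print(one_hot)
# tensor([[1., 0., 0., 0.],
#         [0., 0., 1., 0.],
#         [0., 1., 0., 0.],
#         [0., 0., 0., 1.]])
```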


HD-TVP-97/10
arXiv:hep-ph/9804286v1  14 Apr 1998

π-K scattering lengths at finite temperature in the Nambu–Jona-Lasinio model

P. Piwnicki, S. P. Klevansky and P. Rehberg
$K_0^*(1430)$. Both the Pauli-Villars and O(3) regularization procedures are used to evaluate the T = 0 values of the l = 0 scattering lengths $a_0^{3/2}$ and $a_0^{1/2}$. The finite temperature dependence is studied. We find that the variation in the t channel in the calculation of $a_0^{3/2}$ leads to a change in $a_0^{3/2}$ of a factor of about two over the temperature range of T = 150 MeV.

PACS: 11.30.Rd; 13.75.Lb; 14.40.Aq; 11.10.Wx.
Keywords: finite temperature scattering lengths; Nambu–Jona-Lasinio model

I. INTRODUCTION
The pseudoscalar octet (π, K, η) forms the set of lowest mass hadronic states, and these are thus particles that play a central role in heavy-ion collisions, since they occur copiously. In particular the pion, with lowest mass and being an isospin triplet, has a central role, while the kaonic sector, occurring as isospin doublets, gives us information on strangeness production. Since this octet consists of the Goldstone bosons associated with spontaneous chiral symmetry breaking, their role in nature is determined almost exclusively by this feature. Conversely, our understanding of chiral symmetry breaking is augmented by studying the properties of such mesons, and this can be extended to finite temperatures and baryon density. The purpose of this paper is to examine elastic πK → πK scattering, in particular at T = 0 and at finite temperatures. Expressions for the transition amplitudes and hence the scattering lengths at T = 0 are derived, and then the temperature dependence of the scattering lengths is investigated. This is of interest (a) in its own right, although at present it is unclear how one would access finite temperature scattering lengths experimentally, and (b) as it provides a first step in the calculation of the elastic πK → πK cross-sections as a function of the temperature that are required as input for the collision dynamics in a simulation of heavy-ion collisions using a chiral Lagrangian. At T = 0, πK scattering has been studied within the framework of chiral perturbation theory (CHPT) [1–3]. It is not a priori clear to what extent a chiral expansion is appropriate for properties involving the strange quark, in view of its mass. Yet the results of even the early calculations of the scattering lengths¹ [1] fall within the expected measured range [5–7], although one should note that the actual experimental values are rather poorly known. An elaboration of this approach that includes intermediate resonances [2], in particular the exchange of the vector K* in the u-channel as well as of a ρ-meson in the t-channel, alters the actual numerical values by 2% at most.

In this paper, we wish to examine the πK scattering lengths at finite temperature. The Nambu–Jona-Lasinio (NJL) model [8,9] gives a good description of these deeply bound mesons at T = 0. In obtaining such a description, the mean field or Hartree approximation to the self-energy, together with the random phase approximation for the mesonic fields, has been used. It has been recognized that these approximations taken together constitute an expansion in the inverse number of colors 1/Nc within the model [10,11] and are essential in order to preserve the chiral symmetry of the underlying Lagrangian. For T ≠ 0, the dependence of some of the pseudoscalar and scalar mesons as a function of temperature has qualitatively been validated by lattice gauge calculations [12,13]. The results of Ref. [13] display a remarkable qualitative agreement with the finite temperature behavior of the NJL model (pole) masses. This gives one some confidence in the application of this model in this form to finite temperature, although it is speculative. In addition, it can be shown that the current algebra results, or alternatively the results of chiral perturbation theory to lowest order, can be obtained from this model on making a suitable chiral expansion [14,15]. It is then a simple issue to extend calculations to finite temperature and examine the consequences thereof. There is no necessity to enforce the chiral limit, since this is a model study. We thus examine the three flavor NJL model. Contributions to the scattering amplitude are organized according to the expansion in 1/Nc. To lowest order within this scheme,

¹ There is an early Russian publication on this subject that we also draw the reader's attention to.