A Possible Intrinsic Fluence-Duration Power-Law Relation in Gamma-ray Bursts

合集下载

2013_合作频谱感知的软判决融合方法

2013_合作频谱感知的软判决融合方法

Efficient Soft Decision Fusion Rule in Cooperative Spectrum SensingWeijia Han,Member,IEEE,Jiandong Li,Senior Member,IEEE,Zan Li,Jiangbo Si,and Yan ZhangAbstract—In cognitive radio(CR),the soft decision fusion(SDF) rule plays a critical role in cooperative spectrum sensing(CSS). However,the computational cost on obtaining efficient SDF rule becomes infeasible even with a small number of cooperative users. In this paper,the efficiency of SDF rule in inhomogeneous back-ground is studied from the perspective of quantization theory. We formulate the calculation of sensing performance including the probabilities of detection and false alarm when regarding both i)the quantization impact and ii)the inhomogeneous back-ground,and then conclude a condition under which the sensing performance can be calculated by the fast Fourier transform (FFT).Based on this condition,two novel quantization schemes with two optimization methods are proposed to guarantee both the quantizer and decision threshold of SDF rule can be obtained efficiently,at the same time,the SDF can achieve high sensing performance.Index Terms—Cognitive radio,cooperative spectrum sensing, inhomogeneous background,optimization,quantization,soft decision fusion.I.I NTRODUCTIONI N cognitive radio(CR),spectrum sensing is employedto explore the scarce spectrum resource.As a promising method,the cooperative spectrum sensing(CSS)has been widely studied since it offers the temporal and spatial diversity to gain high sensing performance.A.Related WorksThis paper focuses on the fusion rule in the centralized CSS where the fusion center makes afinal sensing decision based on the reports from each involved cooperative user.Generally, such centralized CSS schemes are categorized into two fash-ions:i)the cooperative users pre-process the observation to gen-erate their measurements(or test statistics).Based on the re-ported measurements,the fusion center makes thefinal sensingManuscript received July21,2012;revised November03,2012and January 25,2013;accepted January25,2013.Date of publication February07,2013; date of current version March21,2013.The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Shuguang Cui.This work was supported by the National Science Fund for 973Program(2009CB320404),Major National Science and Technology Projects(2010ZX03006-002-04),National Natural Science Foundation of China by Grant No.61072070,(61231008)(IRT0852),(60902032),Distin-guished Young Scholars(60725105),National Key Lab.Fund(ISN02080001, ISN1101002),111Project(B08038),Doctoral Programs Foundation of the Ministry of Education(20110203110011),(2012JZ8002),(K5051201008),and 107103.The authors are with the Broadband Wireless Communications Lab-oratory and State Key Laboratory(ISN),Information Science Institute, Xidian University,Xi’an,Shaanxi710071,China(e-mail:alfret@gmail. com;jdli@;zanli@;jbsi@; yanzhang@).Digital Object Identifier10.1109/TSP.2013.2245659judgment[1]–[10];ii)the fusion center processes the total re-ceived samples forwarded from each cooperative user to make afinal sensing decision[10]–[12].This fashion ii)requires a large quantity of overhead as cooperative users feeding back their collected samples.Hence,the gain from the cooperation can be exhausted by the overhead of communication.Thus,the fashion i)attracts wider attention.When considering the current CSS schemes with the fashion i),there are two types of fusion rules:the hard decision fusion rule[1]–[7]and the soft decision fusion(SDF)rule[7]–[10]. Essentially,the hard decision fusion rule is a special case of the SDF rule.However,since the performance of SDF rule is hard to be tractable[13],the amount of the related literatures is comparably small in CR.In common,the current CSS studies including the hard decision fusion rule and SDF rule[1]–[10], [13]assume the received signal-to-noise ratio(SNR)is approx-imately the same at each cooperative user.This assumption simplifies the calculation of thefinal sensing performance including the detection probability and false alarm probability. However,when considering the channel shadowing effect, it cannot handle the practical inhomogeneous cases where the SNR varies among cooperative users.In this paper,the efficiency of SDF rule is studied when considering both the limit quantization levels and the inhomogeneous background. Theoretically,the spectrum sensing issue is similar as the bi-nary detection problem widely studied in sensor networks.A comprehensive survey is presented in[14]–[22]and the refer-ences therein.In sensor networks,the related studies mainly concentrate on the minimization of the total error probability and the derivation of the optimality condition.Contrastingly, the fusion rule of CR usually focuses on how to control the false alarm and detection probabilities efficiently,as well as how to achieve high sensing performance with low overhead effily,the efficient SDF rule is one rule can ob-tain both the proper quantizer and the optimal sensing decision threshold efficiently.The SDF rule consists of the quantization and the data fu-sion.In practice,the detectors have a certain receiver operating characteristic(ROC)for a range of interesting SNR.In addi-tion,the CR standardization may impose an explicit sensing constraint on the false alarm and detection probabilities[23]. Hence,the quantization scheme determines the ROC of CSS; while,the decision threshold in fusion process is to control the final false alarm probability and detection probability satisfying the sensing constraint.In this paper,we study the efficient SDF rule which ensures both the quantizer and the decision threshold can be optimized efficiently.For the optimization of quantization,[19]shows the optimal quantizer of the Lloyd-Max quantization scheme,and[24] presents an approach on acquiring the optimalfinal decision1053-587X/$31.00©2013IEEEthreshold.However,the optimal quantizer of[19]is based on an assumption that the test statistic follows a specified distri-bution.And the proposed optimization method in[19]only ensures the optimized quantizer converges to a local optimum [20].In addition,besides of[19],the current quantizers with the related optimization methods are based on quantizing the test statistic in log-likelihood ratio(LLR)or likelihood ratio (LR)domain for the binary hypothesis testing[14]–[19],[21], [22].But,the probability density function(pdf)of test statistic is not available in LLR domain in some applications[11], which require the quantization scheme can be performed in other domain.In contrast to[14]–[22],this paper studies the quantizer and the related optimization methods which can suit for the various distributions of test statistic and do not just hold in LLR domain.The decision threshold plays a key role in SDF rule,since it is used to balance the tradeoff between the probability of false alarm and the probability of detection under given quantizer. The optimization of decision threshold needs to evaluate the ROC determined by the quantizer.[24]shows that a major dif-ficulty normally encountered in SDF rule is the high computa-tional cost associated with evaluating the false alarm and de-tection probabilities.Additionally,[24]proves the computation of the decision threshold of SDF becomes infeasible even with a small number of cooperative users.However,this issue has not been widely concerned in current studies.Hence,it cannot be found a quantization scheme can be efficiently optimized and evaluated simultaneously.[24]proposes an approximation method to mitigate this difficulty but it is not proper in the in-homogeneous background scenario,since it requires every co-operative user to perform a complex transformation.Evidently, for the efficient SDF rule,it is necessary to design a quantiza-tion scheme which has low computational cost on determining the decision threshold and quantizer.It is a major work in this paper.B.ContributionsThis paper studies the efficient SDF rule,when considering three major issues in CSS:the quantization impact,the inhomo-geneous background,and the high computational cost on deci-sion threshold and quantizer optimization.Actually,when comparing with the homogeneous back-ground,inhomogeneous scenario is a practical model since the wireless signal experiences the shadowing effect.Briefly,the homogeneous background considers the test statistics of spec-trum sensing follow an identical distribution among cooperative users(in spatial domain);contrastingly,the inhomogeneous background regards the distributions of test statistics are generalized in spatial domain.Hence,the inhomogeneous background is universal but hard to study.This paper will employ the inhomogeneous background in the system model, which ensures the derived results suit for broader applications. To analyze the quantization impact,we study the calcula-tion of cooperative sensing performance(including coopera-tive false alarm and detection probabilities)from the aspect of quantization theory.When comparing with the related results in[18],[21],[22],the derived expression shows that,besides of the quantization thresholds,the value of quantization levels determines the performance of CSS.This outcome means the efficiency of SDF rule in inhomogeneous background can be studied by employing the existing results in the quantization theory.Furthermore,this paper does not set afixed number of quantization levels nor specifies the type of detector during the derivation of expression.Consequently,the obtained expres-sions present the following merits:i)they are adequate for an arbitrary number of quantization levels;ii)they are universal for the different types of detectors;iii)they can be employed to analyze the impact of quantization error in various quantization schemes such as the Lloyd-Max quantization scheme,uniform quantization scheme,etc.Through studying the impact of quantization levels on the de-rived expressions,it is found that:when the quantization levels satisfy a certain condition,the FFT can be employed to simplify the derived expressions and reduce the corresponding computa-tional cost on evaluating the ROC of CSS.Based on thisfinding, two quantization schemes are proposed to ensure the sensing de-cision threshold can be obtained with low computational cost in our efficient SDF rule.On the other hand,in order to ac-quire the optimal quantizer,this paper designs a global opti-mization method and a distributed optimization method to op-timize the proposed quantization schemes efficiently.Addition-ally,the proposed quantization schemes and the related opti-mization methods do not demand for quantizing the test statistic in LLR domain.The simulation results show the proposed quan-tization schemes achieve evidently high sensing performance with low computational cost when the corresponding optimiza-tion methods are not performed in LLR domain. Consequently,by employing the proposed quantization schemes and the designed optimization methods,the SDF rule can efficiently achieve high sensing performance with low computational cost on both performance evaluation and optimization.C.Paper StructureThe rest of this paper is organized as follows:Section II shows the system model.In sequence,Section III presents the fundamental study on calculating the CSS performance. In Section IV,two new quantization schemes and related optimization methods are proposed.Followed by conclusion, the numerical studies are indicated in Section V.II.S ENSING M ODELLet denote the null hypothesis that the PU is idle,and denote the alternative hypothesis that the PU is busy.The sensing problem is formulated as a binary hypothesis testing issue that:cooperative SUs identify the actual state between and.Consider cooperative secondary users(SUs)are randomly located in spatial domain.As shown in Fig.1,the cooperative SUs distributively sense the common PU transmission in par-allel,and then fuse their observations to obtain a global decision. Because of the channel shadowing/fading effects,the received signal denoted by at each SU has different distribution.Let denote the number of quantization levels,express the test statistic in the-th SU,and be a bit sequence denoting a quantization region to which the test belongs.At the beginning of CSS,each SU pre-processes the received signalHAN et al.:EFFICIENT SOFT DECISION FUSION RULE IN COOPERATIVE SPECTRUM SENSING1933Fig.1.Illustration of CSS in CR.FC is short for fusion center.denotes the pre-processing operation,represents the quantization and means thefinal sensingto generate the related test statistic.1The SUs use quan-tization thresholds to quantize the test statistic and generate a report which represents the quantized result.Then SUs feed back the report to the fusion center.After collecting the total re-ports,the fusion center utilizes them to fuse afinal test statistic. At the end,the fusion center makes a binary sensing decision by comparing thefinal test statistic with a decision threshold. III.CSS P ERFORMANCE IN I NHOMOGENEOUS B ACKGROUNDIn inhomogeneous scenario,every cooperative SU has the different pdf of the test statistic.This situation baffles the difficulty of calculating thefinal sensing performance.In this section,the calculation of sensing performance is formulated to show the relationship between the configuration of SDF rule and the sensing performance.A.CodebookFor a-level quantization scheme,there are two types of al-location for the quantization levels:Type I).each cooperative SU has a distinct configuration of quantization levels;Type II). each cooperative SU has an identical configuration of quanti-zation levels.For example,,the -th quantization level of-th SU may not be equal to the-th quantization level of-th SU in Type I);while,the-th quan-tization levels of-th SU and-th SU have the same value in Type II).Briefly,for Type I),SUs have different codebooks expressing the one-to-one mapping between the quantization level and the code which represents the report;contrastingly, for Type II),SUs have identical codebook.Thus,assuming fu-sion center knows every codebook,it is evident that:for fusion center,Type II)requires less prior knowledge on the received report rather than Type I);but Type I)can provide more infor-mation on the test statistic.This paper focuses on studying the generalized-level quantization regardless of the overhead and requirement involved in these two types of codebook.1In statistical hypothesis testing,the hypothesis test is determined by a test statistic,which is a function of the sample[25].This paper does not specify a certain type of detector but employ the general expression of probability density function(pdf).Therefore,the results of this paper can suit for uni-versal detectors.B.Formulation of CSS PerformanceIn a-level quantization scheme,the procedure of generatinga report at a SU is formulated by(1) where means the-th quantized threshold in-th SU;;;and.2In the fusion center,the strategy of thefinal sensing is given by[11]ifif(2)where denotes the quantization level of the reported quan-tization in user;is the assigned weight;means the fused quantized test statistic and;is the decision threshold and.Actually,can be treated as alevel after weight Hence,to sim-plify the expression in sequential derivation,we directly use instead of.Then(2)is simplified to(3).ifif(3)According to[6],the cooperative false alarm probability(de-noted by)and the cooperative detection probability(denoted by)the similar form.Hence,we usewhere,,when,andwhen throughout paper.For a quantization level,the corresponding probability mass function(pmf)is.(4)where denotes the pdf of under.Since we have pre-defined and,it is concludedand.Given a certain value decision threshold,the sensing performance is expressed by(5)Evidently,both the cooperative false alarm probability and co-operative detection probability are the complementary cumula-tive distribution function(ccdf)of under.Hence,the ter-minology ccdf is employed in sequential discussion.It is emphasized that the type of test statistic is not speci-fied in our study,though it commonly is the log-likelihood ratio (LLR)in the binary hypothesis testing.Sometimes the pdf of the LLR is not available in practical applications,which causes the 2For a-level quantization scheme,there are quantized thresholds ex-cluding and.1934IEEE TRANSACTIONS ON SIGNAL PROCESSING,VOL.61,NO.8,APRIL15,2013outcomes based on the LLR are no longer workable.However, our solutions are robust to this issue.According to Appendix A,we have the following proposition to show the ccdf of when quantizing in inhomogeneous background.Proposition1:When holds, the ccdf of for is by(6) where denotes the indicator function expressed bywhenothers(7)Compared with the related ccdf in[18],[21],[22],Propo-sition1shows the form of ccdf is not only determined by the quantization thresholds but also impacted by the quantization levels,which will be discussed detailedly in next subsection.C.Related DiscussionBased on Proposition1,this subsection studies the relation-ship between the combined quantized test statistic and the related pmf in detail.This investigation shows the computation of ccdf is determined by the quantizer including both the quan-tization thresholds and the quantization levels.For clear expression,let,,,,and,where.Since the number of realizations of is,let be a matrix of which the rows represent every realization of user’s reports. For example,for an arbitrary row of,there are entries representing the users’reports respectively.In addition,let express the entry at the-th row and-th column of .It is considered is available,because it can be obtained effortlessly3.When the quantization levels have beenfixed,a realization of the related fused test statistic is expressed by(8)where means the index of,is a-by-1column vector,and in denotes the index.Obviously,the total 3As an instance,an algorithm for obtaining is given as follows,,where denotes the modulo operation.realizations of the combined test statistic are given by.Then, there exists the following equation,(9) where denotes a column vector including every entry ofin ascending order;is the permutation matrix.Thus,we can obtain which is used in subsequent study.For example,con-sider a CSS case where,,,, ,.We have,,,.exist and.Since and known,is available.On the other hand,for,the pmf related to is(10) .By and,is given by,(11) .means the probability that occurs.Thus,we arrive at the following corollary.Corollary1:When holds,the ccdf of for is given by(12) where is given by(11),and.Proof:For any two realizations of reports,it is possible that their combined quantized statistic have same value.In other words,may contain entries which have the identical value.For the entries with the same value in,the corresponding index is betweenand. Thus,when,the pmf is(13) .Based on the pmf(13),the related ccdf(12)is obtained.Con-sequently,Corollary1is proved.Based on Corollary1,the following Corollary2is obtained directly.Corollary2:When,is a decreasing function of;when,is a non-increasing function of .Proof:The fused test statistic is a discrete random vari-able,and its domain including every realization of hasfinite elements.Hence,when,(12)shows,which is the ccdf of,is a decreasing function of.When,the value of is continuous between any two discrete numbers in set.Thus,under the condition of,is a step function of and it is a non-increasing function of. Corollary2implies that,based on the monotonicity of ,the false alarm and detection probabilities can be eligibly controlled by changing the decision threshold.IfHAN et al.:EFFICIENT SOFT DECISION FUSION RULE IN COOPERATIVE SPECTRUM SENSING1935can be calculated efficiently,its monotonicity ensures the optimal decision threshold can also be obtained efficiently. According to Corollary1,it is clear that the quantization levels affect not only the size of but also the order of entries in .Evidently,the pmf in terms of the combined test statistic is impacted by the quantization levels.Therefore,given a certain set of quantization thresholds,the ccdf of will change as long as the quantization levels changing.This is the major difference between our result and the related ccdf in[18],[21],[22].Based on Corollary1,we obtain a key result given by Corollary3and prove it in Appendix B.Corollary3:When is an integerand,the ccdf of is given by(14) where,,and(15) where denotes the convolution operation;and the size ofis-by-1.When the quantization levels meet the condition in Corollary 3,the expression and calculation complexity for the false alarm probability and detection probability can be simplified and reduced efficiently by employing the FFT.D.ConclusionsIn this section,Proposition1provides a feasible and reason-able approach to evaluate the performance of CSS in inhomo-geneous scenario from the perspective of quantization theory. This outcome has three major merits:i)the cooperative detec-tion and false alarm probabilities in inhomogeneous scenario become traceable by the derived expressions;ii)the quantiza-tion theory shows an easy way to understand the relationship be-tween the quantizer and thefinal sensing performance;iii)there have been a large number of literatures on quantization schemes which can be employed directly.These three merits facilitate to have an insight into the calculation and optimization of quanti-zation schemes.For example,Corollary3proves the computa-tion of cooperative false alarm and detection probabilities in[9] is not reasonable,since the quantization levels of[9]does not meet the condition of Corollary3.The decision threshold is designed to ensure thefinal false alarm and detection probabilities can be balanced under a cer-tain decision criterion(e.g.,NP-criterion).According to Propo-sition1,Corollary1,and Corollary2,the optimal decision threshold is available once the ccdf of the combined test statistic is obtained.However,the current quantization schemes have not concerned about the computational cost on the ccdf,re-sulting in a large quantity of overhead[24].To obtain the op-timal decision threshold efficiently,this paper will propose two novel quantization schemes based on Corollary3.IV.P ROPOSED Q UANTIZATION S CHEMES According to the derived Corollary3,the FFT method can efficiently reduce the computational cost on the ccdf of the combined test statistic.This section proposes two FFT based quantization schemes,which are named as Pseudo-uniform quantization scheme and weighted Pseudo-uniform quanti-zation scheme respectively.Additionally,two related efficient optimization methods are designed to achieve high sensing performance in CR scenario.In order to clarify the subsequent explanation and discussion,it is considered each entry of has different value4.A.Continuity AnalysisSince every fused test statistic is belong to,the value of is discrete.Thus,when,the ccdfis discrete as increasing.Evidently,for,,can be obtained by linear combination formulated bywhere[26].On the other hand,for,so thatholds.Then,ensures. Consequently,when,there is(16) Notice that and can be considered as the probabilities that fusion center employs and to make sensing decision respectively.In when deciding thefinal judgment,the fusion center randomly prefers with probability or with probability as thefinal decision by,can be considered as a continuous monotonic increasing of ,.This outcome means,in ROC curve(as shown in Section V),the discrete sensing performance becomes continuous by introducing.B.Objective FormulationCurrently,the minimum mean-squared error(MMSE)based quantization schemes are widely discussed in SDF rule.[27], [28]derived the relationship between the quantization levelsand the quantization thresholds when the mean squared quantized error achieves the minimum.Based on this re-lationship,[19],[20]proposed an optimization method for Lloyd-Max quantization scheme.According to[19],[20],[27], [28],the optimization method in[19],[20]no longer suits for the quantization scheme which has to meet the condition of Corollary3.Additionally,to achieve high sensing performance, the MMSE based quantization schemes should be performed in LLR domain.This section will propose two novel quantization schemes which do not require to know the pdf of test statistic in LLR domain.The proposed quantization schemes demand that the monotonicity of test statistic does not change when the test statistic is projected onto the LLR domain.Normally,this requirement is satisfied in CR sensing case.In contrast to the MMSE based quantization schemes,the op-timization objective in this paper is to maximize the area under the ROC curve(AUC)as shown in Fig.2.The AUC has been widely used in machine learning research to evaluate the perfor-mance of a classifier[29].Additionally,[30]explains the AUC reflects the detection capability in spectrum sensing.Thus,our 4This consideration can be guaranteed by the following three pro-cedures:1)let denote a column vector including every element in set and the entries of are in ascending order;2)useto obtain corresponding to;3)let and replace and.The new and in procedure3)guarantees the1936IEEE TRANSACTIONS ON SIGNAL PROCESSING,VOL.61,NO.8,APRIL15,2013Fig.2.Illustration of the proposed optimization in ROC curve manner.The black-curve and dot-curve denote the original and optimized ROC curves re-spectively.The optimized AUC is enclosed by the line,the line and the ROC curve.optimization objective is reasonable.A merit of using AUC is that this new objective is irrespective of the domain where the quantization is performed.Before introducing other merits,the new optimization objective is formulated as[31](17) where,;;is a including every discretein ascending order and it can be obtained by;.According to Section III, is impacted by the quantized thresholds and quantization levels.Thus,is effected by and.Since the pmf for every realization of the combined quanti-zation level is given by,there is the following expression for the right hand side of,(18) where let.By(4),(18),(17)is transformed into as follows,(19) where means the sum of each entry in an arbitrary row of is equal to1.Notice that the optimal can be obtained by the optimal directly.Thus,once the optimal is known,it can be declared the optimal is acquired.From (19),the optimal quantizer including and can be obtained. Next,a quantization scheme is proposed based on(19).C.Pseudo-Uniform Quantization Scheme With Optimization According to Corollary3,the calculation of will be very efficient when the quantization levels meet the condition of Corollary3.Thus,we propose a quantization scheme whose quantizer is set as follows:1)Quantization levels:let the quantization levels be(22)where is an integer,denotes an initial quantization level for the CSS,and denotes the quantization level spacing.In(22),is treated as the initial quanti-zation level for user-.2)Quantization thresholds:employ the quantization thresh-olds calculated by the optimal in(19).Here the optimal is obtained when employing the quantization levels(22)in(19).Obviously,the quantization levels in(22)satisfy the condi-tion of Corollary3.According to(22),given,the quantization levels are an arithmetic sequence.This is the the scheme is named as Pseudo-uni-form quantization scheme.Afterwards,we will show how to determine,,,and efficiently... ....(20)(21)HAN et al.:EFFICIENT SOFT DECISION FUSION RULE IN COOPERATIVE SPECTRUM SENSING1937According to(8)–(11),(22)can be simplified to(23).(23) where and are normalized to1and0respectively.Thus, based on(23),the optimization of Pseudo-uniform quantization scheme is programmed by(24) where denotes the integer set.1)Centralized Optimization Method:According to Corol-lary1and Corollary3,the maximum value of is not im-pacted by when holds.Thus,by setting, ,the optimization is given by(25) To obtain the optimal efficiently,Proposition2is derived to show the major merit of the formulated objective function. Proposition2:Given,is a quasi-convex function of.Proof:First,we prove it is a necessary condition that is constant.According to(8)–(11),determines and in turn. Hence,impacts on the structure of expression.This is the reason for letting be constant.According to(10)(19),the monotonicity of can be obtained by studying the following function,(26) For,one-to-one maps to,and bothand are the pmf for at each SU.Then,for, increases as increasing.Thus,there exists re-sulting in.Consequently,Proposition2are proved. According to Proposition2,(25)is a quasi-convex program-ming with linear constraints.The optimal in(25)can be ob-tained efficiently by the convex optimization theory[32].Once the optimal is known,it is easy to get the related quantization thresholds associating with the related pmf’s.On the other hand,the quantization levels are normalized to for each user in the proposed Pseudo-quantizationthe quantization levels are and.Obviously,another merit of the proposed scheme is that the configuration of quantization levels is simple and it only demands for the codebook Type II)in Section III-A.2)Distributed Optimization Method:To enhance the ef-ficiency of optimization,a two-step optimization method is proposed.According to the principle of optimization method in[19],[20],it is concluded that:when each cooperative user achieves the MMSE of quantization,then i)the optimal quantization levels are independent between any two users,and ii)the optimal quantizer is irrespective of the number of users .It means if the cooperation of any two users can show high sensing performance,then the cooperation of total users also present high sensing performance.Based on these conclusions,we can randomly select a cooperative user as a reference,and then optimize the quantizer one by one.This optimization method is formulated by(27).(27) where in Step i)is obtained in Step1).Evidently,when every cooperative user knows the pdf of test statistic in user-1, (27)can be performed distributively.In(27),to complete the optimization,the Step i)should be performed at least times,which is similar as the optimization procedures in[19], [20].In this distributed optimization method,there are vari-ables in each step,and Proposition2also suits for(27).On the other hand,the quantization levels areand.Thus,the quantizer is optimized efficiently.For the proposed two optimization methods,what is even better is these methods can be performed in either signal do-main or LLR domain.D.Extension StudyWhen comparing with the current MMSE based optimizations [19],[20],[27],[28],the formulated optimization programming (19)has an extinct merit that:it provides an approach to max-imize the sensing performance in an interesting region of(, ).For example,sometimes CSS needs to achieve the detection probability within low false alarm probability region. However,the MMSE based optimization cannot handle this requirement but the proposed optimization objective(19)can. The objective function in(19)can be expressed by(20). To improve the sensing performance in a certain region,it is better to assign different weights to the terms of(20).In(20), the probabilities reflect the region of in ROC picture from low to high.Apparently,if the large is assigned to the low region of,the optimized results will improve the probability of detection within this region. Under the weight assignment,the objective function is ex-pressed by(28) where denote the assigned weights.Based on (28),we propose a heuristic strategy on the weight assignment given by(21)and name it as the weighted Pseudo-uniform quantization scheme.In(21),is set for achieving the high detection probability within the low false alarm proba-bility region;;.Notice that Proposition2still holds for expressed by (21).Therefore,when the expression of in(19)is replaced by(21),the proposed two optimization methods still hold for the weighted Pseudo-uniform quantization scheme.Hence, the related quantizer and decision threshold can be optimized efficiently.。

USP-1092-溶出度试验的开发和验证(中英文对照版)

USP-1092-溶出度试验的开发和验证(中英文对照版)

(1092)溶出度试验的开发和验证【中英文对照版】INTRODUCTION前言Purpose目的The Dissolution Procedure: Developmentand Validation <1092> provides a comprehensive approach covering items to considerfor developing and validating dissolution procedures and the accompanyinganalytical procedures. It addresses the use of automation throughout the testand provides guidance and criteria for validation. It also addresses thetreatment of the data generated and the interpretation of acceptance criteriafor immediate- and modified-release solid oral dosage forms.溶出实验:开发和验证(1092)指导原则提供了在溶出度方法开发和验证过程中以及采用相应分析方法时需要考虑的因素。

本指导原则贯穿溶出度实验的全部过程,并对方法提供了指导和验证标准。

同时它还涉及对普通制剂和缓释制剂所生成的数据和接受标准进行说明。

Scope范围Chapter <1092> addresses the development andvalidation of dissolution procedures, with a focus on solid oral dosage forms.Many of the concepts presented, however, may be applicable to other dosageforms and routes of administration. Generalrecommendations are given with theunderstanding that modifications of the apparatus and procedures as given inUSP general chapters need to be justified.<1092>章节讨论了溶出度实验的开发和验证,重点是口服固体制剂。

英语作文给建议的句子

英语作文给建议的句子

英语作文给建议的句子英文回答:1. Consider the long-term implications. Before making a decision, take the time to think about how it will affect you in the future. What are the potential benefits and risks? What are the possible consequences, both intended and unintended? By considering the long-term implications, you can make a more informed decision that is more likely to lead to a positive outcome.2. Seek advice from trusted sources. Talking to people you trust can help you get different perspectives on your situation. They can provide you with valuable insights and advice that you may not have considered on your own. When seeking advice, it is important to be open-minded and willing to listen to different opinions.3. Weigh the pros and cons. Once you have gathered information and advice, it is important to weigh the prosand cons of each option. Consider the potential benefitsand risks of each option, as well as your own values and priorities. By carefully weighing the pros and cons, youcan make a decision that is right for you.4. Make a decision and be confident in it. After you have considered all of the factors involved, it is time to make a decision. Be confident in your decision and don't second-guess yourself. Once you have made a decision, stick to it and see it through.5. Be open to feedback and adjust your course as needed. Even after you have made a decision, it is important to be open to feedback from others. Be willing to adjust your course if necessary, based on new information or feedback. This flexibility will help you stay on track and reach your goals.中文回答:1. 考虑长远影响。

USP401225药典的验证中英文对照

USP401225药典的验证中英文对照

VALIDATION OF COMPENDIAL PROCEDURES药典方法的验证Test procedures for assessment of the quality levels of pharmaceutical articles are subject to various requirements. According to Section 501 of the Federal Food, Drug, and Cosmetic Act, assays and specifications in monographs of the United States Pharmacopeia and the National Formulary constitute legal standards. The Current Good Manufacturing Practice regulations [21 CFR 211.194(a)] require that test methods, which are used for assessing compliance of pharmaceutical articles with established specifications, must meet proper standards of accuracy and reliability. Also, according to these regulations [21 CFR 211.194(a)(2)], users of analytical methods described in USP NF are not required to validate the accuracy and reliability of these methods, but merely verify their suitability under actual conditions of use. Recognizing the legal status of USP and NF standards, it is essential, therefore, that proposals for adoption of new or revised compendial analytical procedures be supported by sufficient laboratory data to document their validity.用于评估药品质量的检验方法需要满足不同的要求。

Space VLBI Observations Show $T_b 10^{12} K$ in the Quasar NRAO 530

Space VLBI Observations Show $T_b  10^{12} K$ in the Quasar NRAO 530

a r X i v :a s t r o -p h /9809220v 1 16 S e p 1998Space VLBI Observations Show Tb >1012K in the QuasarNRAO 530Geoffrey C.BowerMax Planck Institut f¨u r Radioastronomie,Auf dem H¨u gel 69,D 53121Bonn Germany Donald C.Backer Astronomy Department &Radio Astronomy Laboratory,University of California,Berkeley,CA 94720ABSTRACT We present here space-based VLBI observations with VSOP and a southern hemisphere ground array of the gamma-ray blazar NRAO 530at 1.6GHz and 5GHz.The brightness temperature of the core at 1.6GHz is 5×1011K.The size is near the minimum observable value in the direction of NRAO 530due to interstellar scattering.The 5GHz data show a single component with a brightness temperature of ∼3×1012K,significantly in excess of the inverse Compton limit and of the equipartition brightness temperature limit (Readhead 1994).This is strong evidence for relativistic motion in a jet requiring model-dependent Doppler boosting factors in the range 6to 60.We show that a simple homogeneous sphere probably does not model the emission region accurately.We favor instead an inhomogeneous jet model with a Doppler boosting factor of 15.Subject headings:radiation mechanisms:non-thermal —galaxies:active —galaxies:jets —techniques:interferometric1.IntroductionCompact extragalactic radio sources typically exhibit brightness temperatures in the range of 1010to 1011K (Kellermann,Vermeulen,Zensus &Cohen 1998).These brightness temperatures are near the limit of what can be measured by ground-based very long baseline interferometry (VLBI)and are near to the limits imposed by intrinsic physical processes (Readhead 1994).Historically,the brightness temperature limit has been attributed to the inverse Compton (IC)catastrophe,a process in which the high energy electrons thatproduce the radio synchrotron photons are rapidly cooled by scattering the same photons to higher energies(e.g.,Kellermann&Pauliny-Toth1969).This process is considered to be a likely source for the high energy gamma-rays that are identified with very compact radio sources(von Montigny et al.1995).Brightness temperatures in excess of the IC limit have been interpreted as the effect of Doppler boosting in a relativistic jet beamed towards the observer with a Doppler boosting factorδ∼10.Readhead(1994)has shown that the actual distribution of interferometrically-measured brightness temperatures does not correspond to the distribution expected by a sample limited by IC scattering.Instead,the distribution indicates that brightness temperatures are governed by an unspecified mechanism that maintains equipartition of energy between magneticfields and particles in a synchrotron emitting region which sets a brightness temperature limit that is∼3to10times lower than the IC limit.Brightness temperatures in excess of this limit are again due to Doppler boosting.Ground-based VLBI measurements of brightness temperatures have a limit of∼5.6×1010(1+z)S K,where S is theflux density in Jy and z is the cosmological redshift. With techniques of super-resolution,lower limits to brightness temperatures have been inferred that are greater than1012K(e.g.,Moellenbrock et al.1996).VLBI observations with an orbiting antenna permit the detection of fully resolved components with brightness temperatures greater than1012K(Linfield et al.1989).Observations with the VLBI Space Observatory Programme(VSOP)have baselines greater than two Earth diameters, quadrupling the maximum detectable brightness temperature(Hirabayashi1998).We present here observations with VSOP of the blazar NRAO530at1.6GHz and5 GHz.NRAO530(J1733-1304)recently underwent a bright millimeter and radio wavelength flare that appears to be correlated with the creation of a new component in the jet and with an increase in gamma-ray activitity(Bower et al.1997).NRAO530is a m pg≈18.5 mag QSO(Welch and Spinrad1973)with a redshift z=0.902(Junkkarinen1984).Itis a gamma-ray active blazar with a bolometric luminosity of0.9×1048h−2ergs s−1, assuming isotropic emission(Nolan et al.1996).Throughout this paper we assume q0=0.5, H0=100h km s−1Mpc−1and h=0.7,which implies an angular to linear scale conversion of6.0pc mas−1and a luminosity distance of4.4Gpc.In Section2,we present the observations and data reduction.In Section3,we discuss the implications of the high brightness temperature that we detect in NRAO530.2.Observations and Data Reduction2.1.Observations,Correlation and Fringe-FittingNRAO530was observed by VSOP and an array of ground radio telescopes on8and9 September1997.Observations in left-circular polarization at1.6GHz and5GHz were done in two separate5hour orbits of HALCA,the VSOP satellite.The Usuda tracking station was used during the observations.System temperatures were measured during adjacent tracking passes.The ground radio telescopes(GRTs)consisted of Usuda(Japan),Seshan (China),Mopra and AT(Australia).The ground-based observations were approximately8 hours in duration.Figure1displays the(u,v)coverage for the5GHz experiment.The data were recorded in the S2format with two intermediate frequency(IF)bands each with a bandwidth of16MHz.Data were correlated in March1998at Penticton (Carlson et al.1998).The space-ground and ground-ground baselines were accumulated with periods of0.1and2s,respectively.The data were binned in128and256frequency channels at1.6GHz and5GHz,respectively.Initial steps in data reduction were performed with the15April1998version of AIPS. Fringefitting was done in two steps.In thefirst step,singleband delays and rates were determined for the GRTs alone.In the second step,the previous GRT solutions were applied and new singleband delay and rate solutions for VSOP alone were found.Fringe fitting simultaneously GRTs with VSOP produced bad solutions that did not correctly eliminate all single band delays.Strong fringes from VSOP to Usuda were consistently detected.Solution intervals of 60s were employed at both frequencies.At1.6GHz,fringe amplitudes to VSOP were0.1 to1Jy.At5GHz,fringe amplitudes to VSOP were1to3Jy.Residual fringe rates varied smoothly between0and90mHz at1.6GHz and between-50and400mHz at5GHz. The maximum residual fringe rate corresponds to a coherence loss of∼3%in an0.1s integration.Measured decorrelation over60s at the time of maximum residual rate on the VSOP-Usuda baseline was5.6%at5GHz.Residual fringe delays varied smoothly between 150and550ns at1.6GHz and between0and400ns at5GHz.2.2.Calibration,Imaging and Model-FittingWe show in Table1the system temperature and gain information used for each antenna.System temperatures for HALCA were measured during tracking passes adjacent to the observations and varied by less than10%over an entire orbit.Gain information for HALCA was taken from calibration observations of Cygnus A during the period21October to4November1997.Data were averaged to5minutes before imaging and self-calibration.Imaging and modeling were performed with DIFMAP(Shepherd,Pearson&Taylor1994).The solutions converged quickly andfits to all baselines were good.Amplitude scaling factors determined through self-calibration are also included in Table1.Only for the case of Mopra at5GHz were the corrections greater than20%.Wefit models to the self-calibrated visibility data. (Table2).We show in Figure2the visibility amplitude at5GHz as a function of(u,v) distance along with circular Gaussian models that bracket the best-fit model.We have confidence in the5GHz amplitude self-calibration.First,we note thatfits to the1.6GHz and5GHz data prior to amplitude self-calibration were consistent with those for the self-calibrated data.Second,without input of the zero-baselineflux the amplitude self-calibration reduced theflux on the Mopra-AT baseline(b≈2Mλ)from9.86±0.1to 7.48±0.06Jy.This latter value is consistent with a single-dish measurement made in the same epoch(see below).Theflux missing between this short baseline and the longer baselines(∼3.8Jy)must be contained in structures with an angular size in the range8to 100mas(2<b<25Mλ).VLBA imaging at8.4GHz shows multiple structures with∼2 Jy with a FWHM of b≈25Mλ(Bower et al.1998).If theflux and the size are proportional to wavelength,then the missingflux is accounted for.2.3.Contemporaneous SpectrumThe UMRAO reports observations of NRAO530within two weeks of the VSOP observations(Aller&Aller,/obs/radiotel/umrao.html). Theyfindfluxes of7.34±0.13,8.49±0.11,and7.49±0.06Jy at4.8,8.0and14.5GHz. Observations with the BIMA millimeter interferometer give aflux of∼5Jy at86GHz in the same epoch.We assume for the remainder of the paper that the spectrum has a self-absorption turnover at8GHz with aflux of8.5Jy and a high frequency spectral index ofα=−0.2for S∝να.3.Discussionparison with Other ObservationsWe tabulate the brightness temperatures of each of the components in Table2using the expressionT b=1.41×109K(1+z) S mas2 −1 λwhere z is the cosmological redshift of the source,S is the peakflux density,σ1andσ2 are the FWHM sizes of the component in major and minor axes andλis the observed wavelength.This and all subsequent brightness temperatures are given in the rest frame of the host galaxy.The5GHz brightness temperature is the highest ever measured for NRAO530and is among the highest measured interferometrically.Previous space VLBI observations of NRAO530with the TDRSS satellite at15GHz found a T b=9×1011K(Linfield et al. 1990).Previous3millimeter VLBI observations found a brightness temperature of4×1011 K at the peak of the millimeterflare(Bower et al.1997).Contemporaneous measurements with the VLBA at22and43GHz give brightness temperatures of4×1011and2×1011K, respectively(Bower et al.1998).Given the limits to ground-based brightness temperatures, these values also represent lower limits to the brightness temperature.These observations also indicate that the structure on sub-mas scales has two components.The1.6GHz and5GHz brightness temperatures are,therefore,lower limits to the actual brightness temperature.3.2.Extrinsic Effects on the Brightness TemperatureNRAO530is at a low galactic latitude in the direction of the Galactic Center(l=12◦,b=+11◦)and,hence,may be affected by significant interstellar scattering.The expected scattering size scales asλ2.2(Taylor&Cordes1993).This is marginally consistent with the power law index of the measured core sizes,1.5±0.2.The expected scattering sizes are1.8mas at1.6GHz and0.16mas at5GHz.Since the scattering sizes and the measured sizes are comparable and there is a large uncertainty in the scattering sizes,we cannot reliably remove their effects.We conclude that the measured sizes at1.6GHz and5GHz are near the minimum observable at thisflux density and that the brightness temperatures are lower limits to the intrinsic brightness temperature.3.3.Intrinsic Causes for Excess Brightness TemperatureWe can compare the observed brightness temperatures to the limits imposed by intrinsic physical processes.We considerfirst processes associated with a homogenous sphere as discussed by Readhead(1994),the IC catastrophe and an unspecified mechanism that maintains equipartition of ter,we will consider the limits imposed byan inhomogeneous jet in energy equipartition.The IC catastrophe imposes a limit ofT b≈5×1011K.The equipartition requirement imposes a limit of T b≈0.5×1011K. These values are quite robust;they depend only weakly on the spectral parameters of the component.The5GHz component significantly exceeds both of these limits.Doppler factors of 6and60are necessary to accomodate the limits,respectively.We estimated previously through component proper motion studies at millimeter wavelengths thatβapp≈7h−1 (Bower et al.1997)and have since refined this value through more extensive monitoring toβapp≈4h−1(Bower et al.1998).We also estimated previously through synchrotron self-Compton arguments thatδ>11and through gamma-ray opacity arguments that δ>2.3.If we require that the apparent velocity is4h−1c,wefindθ=8◦.9andγ=4.4for δ=6andθ=0◦.1andγ=30forδ=60;θis the angle between the jet and the line of sight andγis the bulk Lorentz factor.A Lorentz factor of30is at the limit of what can exist under standard jet and accretion disk parameters.Melia and K¨o nigl(1989)calculate that thermal radiation from an accretion disk produces a radiative drag on a jet that leads to terminal Lorentz factors on the order of10.Further,objects with Lorentz factors this large must be extremely rare since measured apparent velocities appear to have an upper limit of∼10(Vermeulen et al. 1994).We consider it more likely that the true Doppler boosting factor is∼6and that the limiting physical process in this component is the IC catastrophe.If this is true,then the component is significantly far from the equipartition energy. Following Readhead,we calculate that the total energy exceeds the equipartition energy value by a factor of∼103and the magneticfield is less than the equipartition value by a factor of∼102.The particle energy density exceeds thefield energy density by a factor of∼108.These conditions are far from what one expects under the typical magnetized shock-in-jet scenario for the acceleration of relativistic electrons.We now consider a third intrinsic explanation,an inhomogeneous jet in energy equipartition.In this case the brightness temperature limit is3×1011δ5/6K(Blandford& K¨o nigl1979).This limit is higher because of the special geometry of the inhomogeneous jet which leads to a more peaked distribution of theflux than in the case of the homogeneous sphere.The implied Doppler boosting factor is then∼15andθ≈2.5andγ≈9.The expected size for such a jet at5GHz is∼0.5mas,in excellent agreement with our observations.The implied magneticfield strength at the radius of maximum brightness temperature is0.01G and the total jet power is7×1045erg s−1.We have securely measured a brightness temperature in excess of1012K in the gamma-ray blazar NRAO530through observations with the VSOP orbiting radio telescope.This isstrong evidence for beamed relativistic motion in a jet.We have considered several intrinsic causes for this extreme brightness temperature.We favor the inhomogeneous jet model of Blandford&K¨o nigl which produces a reasonable Doppler boosting factor while maintaining energy equipartition.The equipartition brightness temperature limit of Readhead does not apply in this situation due to the extreme Doppler boosting factor required.Although this limit appears to apply in many other sources,variability in NRAO530may allow departures from this equipartition limit on timescales of a few years or more.If the brightness temperature is limited by the IC catastrophe in a homogeneous sphere,then the implied Doppler boosting factors are more reasonable.However,the departures from energy equipartition are difficult to understand.On the other hand,this scenario is favorable because it provides a link between the millimeter and centimeter wavelength variability and the gamma-ray activity observed in NRAO530.One can speculate that blazars detected by EGRET are those in which the equipartition brightness temperature limit is briefly superceded by the IC catastrophe limit.Space VLBI observations with VSOP and with future missions such as ARISE(Ulvestad&Linfield1998)will be necessary to probe the physical limits on high brightness temperature sources.This research has made use of data from the University of Michigan Radio Astronomy Observatory which is supported by the National Science Foundation and by funds from the University of Michigan.We gratefully acknowledge the VSOP Project,which is led by the Japanese Institute of Space and Astronautical Science in cooperation with many organizations and radio telescopes around the world.REFERENCESBlandford,R.D.&K¨o nigl,A.,1979,ApJ,232,34.Bower,G.C.,Backer,D.C.,Wright,M.,Forster,J.R.,Aller,H.D.&Aller,M.F.,1997,ApJ, 484,118.Bower,G.C.et al.,1998,in preparation.Carlson,B.,Dewdney,P.E.,Perachenko,W.T.,Burgess,T.,Casorso,R.,&Cannon,W.H., 1998,in preparation.Hirabayashi,H.,1998,IAU164,J.A.Zensus,G.B.Taylor,&J.M.Wrobel,eds.,ASP Conf., 144,11.Junkkarinen,V.,1984,PASP,96,539.Kellermann,K.I.&Pauliny-Toth,I.I.K.,1969,ApJ,193,43.Kellermann,K.I.,Vermeulen,R.C.,Zensus,J.A.&Cohen,M.H.,1998,AJ,115,1295. Linfield,R.P.et al.,1989,ApJ,336,1105.Linfield,R.P.et al.,1990,ApJ,358,350.Marscher,A.P.,1983,ApJ,264,296.Melia,F.,&K¨o nigl,A.,1989,ApJ,340,162.Moellenbrock,G.A.,et al.,1996,AJ,111,2174.Nolan,P.L.et al.,1996,ApJ,459,100.Readhead,A.C.S.,1994,ApJ,426,51.Shepherd,M.C.,Pearson,T.J.,&Taylor,G.B.,1994,BAAS,26,987.Taylor,J.H.&Cordes,J.M.,1993,ApJ,411,674.Ulvestad,J.S.&Linfield,R.P.,1998,IAU164,J.A.Zensus,G.B.Taylor,&J.M.Wrobel, eds.,ASP Conf.,144,397.Vermeulen,R.C.,&Cohen,M.H.,1994,ApJ,430,467.von Montigny,C.,et al.,1995a,ApJ,440,525.Welch,W.J.and Spinrad,H.,1973,PASP,85,456.Fig.1.—The(u,v)coverage at5GHz.Triangles indicate space-ground baselines and crosses indicate ground-ground baselines.Note that the VSOP-AT and VSOP-Mopra baselines overlap and provide the greatest resolution.Fig.2.—Visibility amplitude as a function of(u,v)distance at5GHz.The solid lines are for a zero-baselineflux of3.66Jy and circular Gaussian sizes of0.47and0.28mas,which correspond to the major and minor axis FWHMs,respectively.Table1.Amplitude Calibration ParametersStation T sys Gain Scaling Factor(K)(K/Jy)1.6GHz5GHz 1.6GHz5GHz 1.6GHz5GHzTable2.Modelfits to componentsFlux rθσ1σ2φT b (Jy)(mas)(deg)(mas)(mas)(deg)(1011K) 1.720.000.0 2.41 1.4625.25 0.840.587.0 4.46 2.3686.70.8 0.92 3.5316.2 3.21 1.8653.02 3.660.000.000.470.2823.030。

药品国际注册相关英语词汇

药品国际注册相关英语词汇

1. toxicity [tɔk'sisəti]n. [毒物] 毒性2. content ['kɔntent]n. 内容,目录;满足;容量adj. 满意的vt. 使满足n. (content)人名;(法)孔唐3. substance ['sʌbstəns]n. 物质;实质;资产;主旨4. strictly ['striktli]adv. 严格地;完全地;确实地5. regulate ['reɡjuleit]vt. 调节,规定;控制;校准;有系统的管理6. monograph ['mɔnəɡrɑ:f, -ɡræf]n. 专题著作,专题论文vt. 写关于…的专著7. specified ['spesifaid]v. 指定;详细说明(specify的过去分词)adj. 规定的;详细说明的8. detectedv. 发现(detect的过去分词);检测到;侦测到adj. 检测到的9. annex [ə'neks, 'æneks]n. 附加物;附属建筑物vt. 附加;获得;并吞10. validation [,væli'deiʃən]n. 确认;批准;生效11. degradation [,deɡrə'deiʃən]n. 退化;降格,降级;堕落;降解12. in-house ['in'haus]adj. 内部的adv. 内部地13. residual solvent剩余溶剂残留溶剂14. characterisation [,kærəktərai'zeiʃən, -ri'z-]n. (英)特性描述;性格化(等于characterization)15. elemental [,eli'mentəl]元素的16. spectrum ['spektrəm]n. 光谱;频谱;范围;余象17. infrared [,infrə'red]n. 红外线adj. 红外线的18. crossing over spectrum交叉谱,交换图谱19. COSY ['kəuzi]化学位移相关谱二维化学位移相关谱相关谱20. enlargement [in'lɑ:dʒmənt]n. 放大;放大的照片;增补物21. X-ray ['eks,rei]n. 射线;射线照片adj. x光的;与x射线有关的vt. 用x光线检查vi. 使用x光22. diffraction [di'frækʃən]n. (光,声等的)衍射,绕射23. chromatogram ['krəumətəɡræm]n. [分化] 色谱图;[分化] 色层分离谱24. comply with照做,遵守25. concentration [,kɔnsən'treiʃən]n. 浓度26. soluble ['sɒljʊb(ə)l]adj. [化学] 可溶的,可溶解的;可解决的27. residue on ignition炽灼残渣28. enantiomeric purity对映体纯度29. assay [ə'sei]n.含量测定30. injection volume进样量注入体积进样体积31. overhead tank压力罐;[化工] 高位槽,高位罐32. Deviation [diːvɪ'eɪʃ(ə)n]n. 偏差;误差;背离33. appendix [ə'pendɪks]n. 附录;阑尾;附加物34. Discharge the aqueous layer分掉水层35. agitator ['ædʒɪteɪtə]n. 搅拌器36. Adjust [ə'dʒʌst]vt. 调整,使…适合;校准vi. 调整,校准;适应37. crystallize ['kristə,laiz]vt. 使结晶;明确;使具体化;做成蜜饯vi. 结晶,形成结晶;明确;具体化38. as per proportion 2:1按2:1比例39. nomenclature [nə(ʊ)'meŋklətʃə; 'nəʊmən,kleɪtʃə]n. 命名法;术语40. molecular formulan. [化学] 分子式41. elucidation [ɪ,l(j)uːsɪ'deɪʃ(ə)n]n. 结构鉴定42. accordance [ə'kɔːd(ə)ns]n. 一致;和谐43. active substance活性物质;放射性物质;有效物质44. microbial [maɪ'krəʊbɪəl]adj. 微生物的;由细菌引起的45. spectrophotometry [,spektrəufəu'tɔmitri]n. [分化][光] 分光光度法;[分化] 分光光度测定法46. desiccator ['desɪkeɪtə]n. 干燥器;干燥剂;干燥工47. potassium bromide[无化] 溴化钾48. platinum crucible白金坩埚;铂坩埚49. mobile phase(色谱分析的)[分化] 流动相50. diluent ['dɪljʊənt]n. 稀释液;冲淡剂adj. 稀释的;冲淡的51. racemic [rə'siːmɪk; rə'semɪk]adj. 外消旋的;消旋酸的52. respectively [rɪ'spektɪvlɪ]adv. 分别地;各自地,独自地53. relative retention time相对保留时间54. resolution [rezə'luːʃ(ə)n]n. [物] 分辨率;决议;解决;决心55. RSDabbr. 无线电科学部(radio science division)56. consecutive [kən'sekjʊtɪv]adj. 连贯的;连续不断的57. average ['æv(ə)rɪdʒ]n. 平均;平均数;海损adj. 平均的;普通的vt. 算出…的平均数;将…平均分配;使…平衡vi. 平均为;呈中间色58. polypropylene plasticpolypropylene plastic: pp塑料|聚丙烯塑料PP-R(polypropylene) plastic: PP-R(聚丙烯)塑料pp-r (polypropylene) plastic: pp-r聚丙烯塑料59. vail [veɪl]?60. gradient ['greɪdɪənt]n. [数][物] 梯度;坡度;倾斜度adj. 倾斜的;步行的61. stock solution[分化] 储备溶液;原液;贮备溶液62. correction factor[数][分化] 校正因子;校正系数63. retention time[分化] 保留时间;[电子] 保持时间64. split [splɪt]n. 劈开;裂缝adj. 劈开的vt. 分离;使分离;劈开;离开;分解vi. 离开;被劈开;断绝关系65. head space[化学] 液面上空间;顶部空间;灌头顶隙66. LOOP [luːp]n. 回路67. flask [flɑːsk]n. [分化] 烧瓶;长颈瓶,细颈瓶;酒瓶,携带瓶68. bacterial [bæk'tɪərɪəl]adj. [微] 细菌的69. mould [məʊld]n. 模具;霉vt. 浇铸;用泥土覆盖vi. 发霉70. yeast [jiːst]n. 酵母;泡沫;酵母片;引起骚动因素71. filter cartridge滤芯;滤筒72. agar ['eɪgɑː]n. 琼脂(一种植物胶)73. perspective [pə'spektɪv]n. 观点;远景;透视图adj. 透视的74. disclaimer [dɪs'kleɪmə]n. 不承诺,免责声明;放弃,拒绝75. outline ['aʊtlaɪn]n. 轮廓;大纲;概要;略图vt. 概述;略述;描画…轮廓76. observation [ɒbzə'veɪʃ(ə)n]n. 观察;监视;观察报告77. regulatory ['reɡjulətəri]adj. 管理的;控制的;调整的78. formally ['fɔːməlɪ]adv. 正式地;形式上79. policy ['pɒləsɪ]n. 政策,方针;保险单80. guideline ['gaɪdlaɪn]n. 指导方针参考81. efficacy ['efɪkəsɪ]n. 功效,效力82. Developed by自行研制83. amendment [ə'men(d)m(ə)nt]n. 修正案;改善;改正84. be granted被授予85. provision [prə'vɪʒ(ə)n]n. 规定;条款;准备;[经] 供应品vt. 供给…食物及必需品86. assessment [ə'sesmənt]n. 评定;估价87. chronic ['krɒnɪk]adj. 慢性的;长期的;习惯性的n. (chronic)人名;(英)克罗尼克88. duration [djʊ'reɪʃ(ə)n]n. 持续,持续的时间,期间[语音学]音长,音延89. paediatric [,piːdɪ'ætrɪk]adj. [儿科] 儿科的;儿科学的90. category ['kætɪg(ə)rɪ]n. 种类,分类;[数] 范畴91. derived from来源于92. pharmacogenetics [,fɑːməkəʊdʒɪ'netɪks]n. 遗传药理学;药物基因学93. genomic [dʒiː'nəʊmɪk]adj. 基因组的;染色体的94. genomics [dʒə'nəumiks]n. 基因组学;基因体学95. expedite ['ekspɪdaɪt]vt. 加快;促进;发出adj. 畅通的;迅速的;方便的96. transmission [trænz'mɪʃ(ə)n; trɑːnz-; -ns-]n. 传动装置,[机] 变速器;传递;传送;播送97. implement ['ɪmplɪm(ə)nt]n. 工具,器具;手段vt. 实施,执行;实现,使生效98. implementation [ɪmplɪmen'teɪʃ(ə)n]n. [计] 实现;履行;安装启用99. pharmacovigilance药物警戒;药物安全监视100. dose-response ['dəusri,spɔns]n. 剂量反应;剂量效应101. ethnic ['eθnɪk]adj. 种族的;人种的102. addendum [ə'dendəm]n. 附录,附件;补遗;附加物103. geriatric [,dʒerɪ'ætrɪk]n. 老年病人;衰老老人adj. 老人的;老年医学的104. statistical [stə'tɪstɪk(ə)l]adj. 统计的;统计学的105. pediatric [,pi:dɪ'ætrɪk]adj. 小儿科的106. investigation [ɪn,vestɪ'geɪʃ(ə)n]n. 调查;调查研究107. therapeutic [,θerə'pjuːtɪk]n. 治疗剂;治疗学家adj. 治疗的;治疗学的;有益于健康的108. antihypertensive drugAntihypertensive drug: 抗高血压药|抗高血压药物|降压药central antihypertensive drug: 中枢性降压药centrally acting antihypertensive drug: 中枢降压药109. qualify ['kwɒlɪfaɪ]vi. 取得资格,有资格vt. 限制;使具有资格;证明…合格110. qualification [,kwɒlɪfɪ'keɪʃ(ə)n]n. 资格;条件;限制;赋予资格111. biomarker [,baiə'mɑ:kə]n. 生物标志物;生物标记;生物指标112. format ['fɔːmæt]n. 格式;版式;开本vt. 使格式化;规定…的格式vi. 设计版式113. submission [səb'mɪʃ(ə)n]n. 投降;提交(物);服从;(向法官提出的)意见;谦恭114. multi-regional跨地区的,跨区域的115. character ['kærəktə]n. 性格,品质;特性;角色;[计] 字符vt. 印,刻;使具有特征116. characteristic [kærəktə'rɪstɪk]adj. 典型的;特有的;表示特性的n. 特征;特性;特色117. headspace vial顶空瓶118. sonication [,sɒnɪ'keɪʃən]n. 声波降解法119. pipette [pɪ'pet]n. 移液管;吸移管vt. 用移液器吸取120. slightly soluble微溶121. freely soluble易溶解的122. practically insoluble几乎不溶的123. in-process control中间过程控制124. potassium [pə'tæsɪəm]n. [化学] 钾125. carbonate ['kɑːbəneɪt]n. 碳酸盐vt. 使充满二氧化碳;使变成碳酸盐126. moisten ['mɒɪs(ə)n]vt. 弄湿;使…湿润vi. 潮湿;变潮湿127. hydrochloric acid[无化] 盐酸128. erlenmeyer flask ['ə:lən,maiə]锥形烧瓶;爱伦美氏烧瓶129. stopper ['stɒpə]n. 塞子;阻塞物;制止者;妨碍物vt. 用塞子塞住130. Carbon Dioxide–Free Wate无CO2水131. volatile ['vɒlətaɪl]n. 挥发物;有翅的动物adj. [化学] 挥发性的;不稳定的;爆炸性的;反覆无常的n. (volatile)人名;(意)沃拉蒂莱132. flammable ['flæməb(ə)l]n. 易燃物adj. 易燃的;可燃的;可燃性的133. aqueous ['eɪkwɪəs]adj. 水的,水般的134. nonvolatile matter不挥发物135. evaporate [ɪ'væpəreɪt]vt. 使……蒸发;使……脱水;使……消失vi. 蒸发,挥发;消失,失踪136. lustrous ['lʌstrəs]adj. 有光泽的;光辉的137. neutral ['njuːtr(ə)l]n. 中立国;中立者;非彩色;齿轮的空档adj. 中立的,中性的;中立国的;非彩色的138. turbidness ['tə:bidnis]n. 浓密;混浊139. constant ['kɒnst(ə)nt]n. [数] 常数;恒量adj. 不变的;恒定的;经常的n. (constant)人名;(德)康斯坦特140. thoroughly [ˈθʌrəli]adv. 彻底地,完全地141. vinegary fiavourvinegary fiavour: 醋味142. nasil[动] 鼻嗅143. ignite [ɪg'naɪt]vt. 点燃;使燃烧;使激动vi. 点火;燃烧144. platinum wire[材] 铂丝145. platinum filament铂丝146. equivalent [ɪ'kwɪv(ə)l(ə)nt]n. 等价物,相等物adj. 等价的,相等的;同意义的147. coliform ['kɒlɪfɔːm]n. 大肠菌(等于coliform bacillus)adj. 像大肠菌的;筛状的148. organoleptic test感官试验感官检验149. acidity or alkalinityAcidity or Alkalinity: 酸碱度|酸度或碱度|或碱的酸碱性150. immersed [ɪ'mɜːst]adj. 浸入的;专注的v. 浸(immerse的过去式和过去分词);沉湎于151. deionized water[化学] 去离子水152. vertical axis[力] 垂直轴;纵轴153. vice versa [,vaisi'və:sə]反之亦然154. volumetric solutionn. 滴定液;标定溶液155. area normalization method面积归一化法156. sterilization [,sterəlaɪ'zeɪʃən]n. 消毒,杀菌157. sucker ['sʌkə]n. 吸管;乳儿;易受骗的人vt. 从……除去吸根vi. 成为吸根;长出根出条158. heatproof ['hi:t'pru:f]adj. 抗热的,耐热的vt. 使……隔热;使……耐热159. nitrate ['naɪtreɪt]n. 硝酸盐vt. 用硝酸处理160. ammonia-free waterAmmonia-Free Water: 不含氨的水161. membrane filtration methodmembrane filtration method: 薄膜过滤法|微孔滤膜法|滤膜法filter membrane filtration method: 滤膜过滤法centrifuging membrane filtration method: 离心薄膜过滤法162. declaration [deklə'reɪʃ(ə)n]n. (纳税品等的)申报;宣布;公告;申诉书申明163. genetically modified转基因的164. organism ['ɔːg(ə)nɪz(ə)m]n. 有机体;生物体;微生物165. genetically modified organism基因改造生物166. In accordance with依照;与…一致167. Melting range熔距;熔化范围;熔点区间168. Specific rotation[光] 旋光率;比旋度169. sodium hydroxiden. [无化] 氢氧化钠170. acetate ['æsɪteɪt]n. [有化] 醋酸盐;醋酸纤维素及其制成的产品171. Gradually ['grædʒʊlɪ; 'grædjʊəlɪ] adv. 逐步地;渐渐地172. tawny ['tɔːnɪ]n. 黄褐色;茶色adj. 黄褐色的;茶色的173. precipitate [prɪ'sɪpɪteɪt]n. [化学] 沉淀物vt. 使沉淀;促成;猛抛;使陷入adj. 突如其来的;猛地落下的;急促的vi. [化学] 沉淀;猛地落下;冷凝成为雨或雪等174. yellowish-brownn. 黄棕175. on standingon standing: 静置时|一经静置|静置以后176. on shaking振摇后177. be concordant with与……一致178. phosphorous pentoxide五氧化二磷179. under reduced pressureunder reduced pressure: 负压|在减压下Distillation under reduced pressure: 减压蒸馏concentrated under reduced pressure: 减压浓缩180. Residue on ignition炽灼残渣181. butyl acetate[有化] 乙酸丁酯;醋酸丁酯182. Pyrogensn. 致热源;焦精(pyrogen的复数)183. sonicate ['sɒnɪkeɪt]n. 对……进行声处理184. chloride ['klɔːraɪd]n. 氯化物185. sodium chloride[无化] 氯化钠,食盐186. hydroxide [haɪ'drɒksaɪd]n. [无化] 氢氧化物;羟化物187. parenteral administration肠胃外投药 ,注射给药188. rapidly ['ræpɪdlɪ]adv. 迅速地;很快地;立即189. persist [pə'sɪst]vi. 存留,坚持;持续,固执vt. 坚持说,反复说190. persist for持续到191. equivalent to等于,相当于;与…等值192. storage ['stɔːrɪdʒ]n. 存储;仓库;贮藏所193. preserve [prɪ'zɜːv]n. 保护区;禁猎地;加工成的食品vt. 保存;保护;维持;腌;禁猎194. hermetically [hə:'metikəli]adv. 密封地,不透气地;炼金术地195. granules [grænju:ls]n. 粒斑,颗粒(granule的复数);颗粒剂196. spray [spreɪ]n. 喷雾;喷雾器;水沫vt. 喷射vi. 喷197. crystalline ['krɪst(ə)laɪn]adj. 透明的;水晶般的;水晶制的198. crystalline powdercrystalline powder: 晶体粉末|结晶性粉末|结晶粉末199. acetonitrile [ə,siːtə(ʊ)'naɪtraɪl; ,æsɪtəʊ-] n. [有化] 乙腈;氰化甲烷200. methanol ['meθənɒl]n. [有化] 甲醇(methyl alcohol)201. be identical with与…相同/一致202. principal peak主峰203. neutral solution[化学] 中性溶液204. liberate ['lɪbəreɪt]vt. 解放;放出;释放可用于化学反应中某种气味的产生205. mineral ['mɪn(ə)r(ə)l]n. 矿物;(英)矿泉水;无机物;苏打水(常用复数表示)adj. 矿物的;矿质的206. mineral acid矿物酸;[无化] 无机酸207. acetatesn. [有化] 醋酸盐,[有化] 乙酸盐;醋酸纤维素208. aluminium [æl(j)ʊ'mɪnɪəm]n. 铝adj. 铝的209. aluminium saltsaluminium salts: 铝盐|铝系絮凝剂aluminium and iron salts: 铝盐和铁盐210. ammonium [ə'məʊnɪəm]n. [无化] 铵;氨盐基211. ammonium salts【无机化学】铵盐212. antimony ['æntɪmənɪ]n. [化学] 锑(符号sb)213. barium ['beərɪəm]n. [化学] 钡(一种化学元素)214. barium salts215. benzoates ['benzəʊeɪt]n. 苯酸盐;安息香酸盐216. bismuth ['bɪzməθ]n. [化学] 铋217. bismuth salts铋盐218. borate ['bɔːreɪt]n. [无化][有化] 硼酸盐vt. 用硼酸处理;使与硼酸混合219. bromide ['brəʊmaɪd]n. [无化] 溴化物;庸俗的人;陈词滥调220. calcium ['kælsɪəm]n. [化学] 钙221. bicarbonate [baɪ'kɑːbəneɪt; -nət]n. 碳酸氢盐;重碳酸盐;酸式碳酸盐222. citrate ['sɪtreɪt]n. 柠檬酸盐223. copper ['kɒpə]n. 铜;铜币;警察adj. 铜的vt. 镀铜于n. (copper)人名;(英)科珀224. ferric ['ferɪk]adj. 铁的;[无化] 三价铁的;含铁的n. (ferric)人名;(法)费里克225. ferrous ['ferəs]adj. [化学] 亚铁的;铁的,含铁的226. iodide ['aɪədaɪd]n. [无化] 碘化物227. lactates [læk'teɪt]vi. 分泌乳汁;喂奶n. [有化] 乳酸盐228. lithium ['lɪθɪəm]n. 锂(符号li)229. magnesium [mæg'niːzɪəm]n. [化学] 镁230. malonylurea [,mæləniljuə'riə] n. 丙二酰脲;巴比土酸231. mercuric [mɜː'kjʊərɪk]adj. [无化] 汞的;水银的232. mercurous ['mɜːkjʊrəs]adj. 水银的;[无化] 亚汞的;含水银的233. organic fluorinated compounds含氟有机化合物234. fluorinate ['flʊərɪneɪt; 'flɔː-] vt. 使与氟素化合235. fluorine ['flʊəriːn; 'flɔː-] n. [化学] 氟236. phosphate ['fɒsfeɪt]n. 磷酸盐;皮膜化成237. primary aromatic amines初级芳香胺(primary aromatic amine的复数)238. aromatic amines芳香胺;芳香族碳氢基氨239. aromatic [ærə'mætɪk]n. 芳香植物;芳香剂adj. 芳香的,芬芳的;芳香族的240. amines [ə'mi:ns]n. 胺类;有机胺类(amine的复数)241. salicylate [sə'lɪsɪlət]n. [有化] 水杨酸盐242. silver ['sɪlvə]n. 银;银器;银币;银质奖章;餐具;银灰色adj. 银的;含银的;有银色光泽的;口才流利的;第二十五周年的婚姻vt. 镀银;使有银色光泽vi. 变成银色n. (silver)人名;(法)西尔韦;(英、德、芬、瑞典)西尔弗243. silver saltsn. 银盐244. sodium ['səʊdɪəm]n. [化学] 钠(11号元素,符号 na)245. stannous ['stænəs]adj. 锡的;含锡的;含二价锡的246. stannous saltstannous salt: 亚锡盐|亚锡酸盐organic stannous salt: 有机亚锡盐247. sulfate ['sʌlfeɪt]n. [无化] 硫酸盐vt. 使成硫酸盐;用硫酸处理;使在上形成硫酸铅沉淀vi. 硫酸盐化248. sulfite ['sʌlfaɪt]n. [无化] 亚硫酸盐(等于sulphite)249. bisulfite [,baɪ'sʌlfaɪt]n. [无化] 重亚硫酸盐;酸性亚硫酸盐250. tartrate ['tɑːtreɪt]n. [有化] 酒石酸盐251. tropane托品烷252. alkaloide生物碱253. tropane alkaloidstropane alkaloids: 烷类生物碱|烷生物碱|莨菪烷类生物碱254. zinc [zɪŋk]n. 锌vt. 镀锌于…;涂锌于…;用锌处理255. feed Port进料口256. orifice ['ɒrɪfɪs]n. [机] 孔口257. inlet ['ɪnlet]n. 入口,进口;插入物258. potassium dihydrogen phosphate[肥料] 磷酸二氢钾259. dihydrogen二氢260. phosphoric [fɒs'fɒrɪk]adj. 磷的,含磷的261. phosphoric acid磷酸262. full scale全尺寸;原大的;完全的;全刻度的;满量程263. attenuation [ə,tenjʊ'eɪʃən]n. [物] 衰减;变薄;稀释264. disregard [dɪsrɪ'gɑːd]n. 忽视;不尊重vt. 忽视;不理;漠视;不顾265. vacuum ['vækjʊəm]n. 真空;空间;真空吸尘器adj. 真空的;利用真空的;产生真空的vt. 用真空吸尘器清扫266. bonded silica gelbonded silica gel: 键合硅胶bonded silica gel adsorption: 吸附stearyl bonded silica gel: 十八烷基键合硅胶267. particle size[岩] 粒度;颗粒大小268. octadecylsilaneoctadecylsilane: 十八烷基硅烷n-octadecylsilane: 正十八烷基硅烷octadecylsilane chemically bonded silica: 十八烷基硅烷键合硅胶269. silane ['sɪleɪn]n. [无化][电子] 硅烷;矽烷270. pack with塞进;挤进;塞满了东西;某地方挤满了人271. neutralize ['nju:trəlaiz]vt. 抵销;使…中和;使…无效;使…中立vi. 中和;中立化;变无效272. with reference to关于,根据(等于in reference to)273. external [ɪk'stɜːn(ə)l; ek-]n. 外部;外观;外面adj. 外部的;表面的;[药] 外用的;外国的;外面的274. external standard method[分化] 外标法275. capsule ['kæpsjuːl; -sjʊl]n. 胶囊;[植] 蒴果;太空舱;小容器adj. 压缩的;概要的vt. 压缩;简述276. apart from远离,除…之外;且不说;缺少277. other than除了;不同于278. reflux condenser[化工] 回流冷凝器;回龄凝器279. sachet ['sæʃeɪ]n. 香囊;小袋280. instrument ['ɪnstrʊm(ə)nt]n. 仪器;工具;乐器;手段;器械281. EDQMEDQM: 欧洲药品质量管理局(European Directorate For Quality Medicines) 282. compression [kəm'preʃ(ə)n]n. 压缩,浓缩;压榨,压迫283. extract [ˈekstrækt]n. 汁;摘录;榨出物;选粹vt. 提取;取出;摘录;榨取284. certificate [sə'tɪfɪkət]n. 证书;执照,文凭vt. 发给证明书;以证书形式授权给…;用证书批准285. supersede [,suːpə'siːd; ,sjuː-] vt. 取代,代替;紧接着……而到来vi. 推迟行动286. subsequent ['sʌbsɪkw(ə)nt]adj. 后来的,随后的287. carton ['kɑːt(ə)n]n. 纸板箱;靶心白点vt. 用盒包装vi. 制作纸箱n. (carton)人名;(英、西)卡顿;(法)卡尔东288. aluminium foil铝箔(作包装材料);锡箔纸289. dossier ['dɒsɪə; -ɪeɪ; -jeɪ]n. 档案,卷宗;病历表册n. (dossier)人名;(法)多西耶290. in accordance with依照;与…一致291. render ['rendə]n. 打底;交纳;粉刷vt. 致使;提出;实施;着色;以…回报vi. 给予补偿n. (render)人名;(英、德)伦德尔292. void [vɒɪd]n. 空虚;空间;空隙adj. 空的;无效的;无人的vt. 使无效;排放n. (void)人名;(俄)沃伊德293. grant [grɑːnt]n. 拨款;[法] 授予物vt. 授予;允许;承认vi. 同意n. (grant)人名;(瑞典、葡、西、俄、罗、英、塞、德、意)格兰特;(法)格朗294. establish [ɪ'stæblɪʃ; e-]vi. 植物定植vt. 建立;创办;安置295. notification [,nəʊtɪfɪ'keɪʃn]n. 通知;通告;[法] 告示296. manufacturer [,mænjʊ'fæktʃ(ə)rə(r)]n. 制造商;[经] 厂商297. performing partyperforming party: 履约方|参与履约方298. responsible party责任方299. consultedv. 请教,咨询(consult过去式)300. release [rɪ'liːs]n. 释放;发布;让与vt. 释放;发射;让与;允许发表301. respective [rɪ'spektɪv]adj. 分别的,各自的302. territory ['terɪt(ə)rɪ]n. 领土,领域;范围;地域;版图303. generate ['dʒenəreɪt]vt. 使形成;发生;生殖304. refrigeration [rɪ,frɪdʒə'reɪʃən]n. 制冷;冷藏;[热] 冷却305. Equivalent [ɪ'kwɪv(ə)l(ə)nt]n. 等价物,相等物adj. 等价的,相等的;同意义的306. capillary [kə'pɪlərɪ]n. 毛细管adj. 毛细管的;毛状的307. Vaporizer ['veɪpəraɪzə]n. 汽化器;喷雾器;蒸馏器308. general noticesgeneral notices: 凡例|相应的总要求General Notices and Accelerated Revision: 凡例修订Notices of General Meetings: 股东大会召开公告309. general principles通则;总则310. abbreviation [əbriːvɪ'eɪʃ(ə)n]缩写,缩写词311. enact [ɪ'nækt; e-]vt. 制定法律;颁布;扮演;发生312. promulgate ['prɒm(ə)lgeɪt]vt. 公布;传播;发表313. enforce [ɪn'fɔːs; en-]vt. 实施,执行;强迫,强制314. enforcement [en'fɔːsm(ə)nt]n. 执行,实施;强制315. issue for enforcementissue for enforcement: 颁行316. national standard[标准] 国家标准317. quote [kwəʊt]n. 引用vi. 报价;引用;引证vt. 报价;引述;举证318. compendium [kəm'pendɪəm]n. 纲要;概略319. denote [dɪ'nəʊt]vt. 表示,指示320. in addition to除…之外 (同besides)321. so as to以便;以致322. serve as担任…,充当…;起…的作用323. interpretation [ɪntɜːprɪ'teɪʃ(ə)n] n. 解释;翻译;演出324. obviate ['ɒbvɪeɪt]vt. 排除;避免;消除325. replication [replɪ'keɪʃ(ə)n]n. 复制;回答;反响326. replica ['replɪkə]n. 复制品,复制物327. adopt [ə'dɒpt]vi. 采取;过继vt. 采取;接受;收养;正式通过328. adopt inadopt in: 采用adopt in time: 及时采取329. admit [əd'mɪt]vi. 承认;容许vt. 承认;准许进入;可容纳330. cite [saɪt]vt. 引用;传讯;想起;表彰331. be specified for针对……而言332. medicament [mɪ'dɪkəm(ə)nt; 'medɪk-] n. 药剂;医药vt. 用药物治疗333. violate ['vaɪəleɪt]vt. 违反;侵犯,妨碍;亵渎334. in spite尽管335. airtight container密封容器336. humidity [hjʊ'mɪdɪtɪ]n. [气象] 湿度;湿气337. illumination [ɪ,ljuːmɪ'neɪʃən] n. 照明;[光] 照度;启发;灯饰(需用复数);阐明338. Oxidative ['ɒksɪdeɪtɪv]adj. [化学] 氧化的339. Commitment [kə'mɪtm(ə)nt]n. 承诺,保证;委托;承担义务;献身340. apparatus [ˌæpəˈreɪtəs]n. 装置,设备;仪器;器官341. monitor ['mɒnɪtə]n. 监视器;监听器;监控器;显示屏;班长vt. 监控342. editorial [edɪ'tɔːrɪəl]n. 社论adj. 编辑的;社论的343. editorial board编辑委员;编辑部344. preface ['prefəs]n. 前言;引语vt. 为…加序言;以…开始vi. 作序345. designation [dezɪg'neɪʃ(ə)n]n. 指定;名称;指示;选派头衔,职位346. production capacity生产能力;生产力347. expansion [ɪk'spænʃ(ə)n; ek-]n. 膨胀;阐述;扩张物348. expiration date[贸易] 截止日期349. initiate [ɪ'nɪʃɪeɪt]n. 开始;新加入者,接受初步知识者vt. 开始,创始;发起;使初步了解adj. 新加入的;接受初步知识的350. monotherapy单药治疗单一疗法351. genotype ['dʒenətaɪp; 'dʒiːn-]n. 基因型;遗传型352. dosage regimen给药方案353. interferon [,ɪntə'fɪərɒn]n. [生化][药] 干扰素354. peginterferon聚乙二醇干扰素α-2a355. prescribe [prɪ'skraɪb]vt. 规定;开处方vi. 规定;开药方356. infection [ɪn'fekʃ(ə)n]n. 感染;传染;影响;传染病357. ineligible [ɪn'elɪdʒɪb(ə)l]n. 无被选资格的人adj. 不合格的;不适任的;无被选资格的358. interferon-based therapy基于干扰素治疗359. assessment of the potential benefits and risks 潜在利益与风险评估360. hepar ['hi:pɑ:]n. [解剖] 肝361. liver ['lɪvə]n. 肝脏;生活者,居民362. hepatic [hɪ'pætɪk]adj. 肝的;肝脏色的;治肝病的363. hepatocyte ['hepətəʊsaɪt; he'pætə(ʊ)-] n. [细胞] 肝细胞364. hepatocellular [,hepətəu'seljulə]adj. 肝细胞的365. carcinoma [,kɑːsɪ'nəʊmə]n. [肿瘤] 癌366. hepatocellular carcinoma肝细胞癌;肝细胞性肝癌367. renal ['riːn(ə)l]adj. [解剖] 肾脏的,[解剖] 肾的368. kidney ['kɪdnɪ]n. [解剖] 肾脏;腰子;个性n. (kidney)人名;(英)基德尼369. severe [sɪ'vɪə]adj. 严峻的;严厉的;剧烈的;苛刻的370. drug interactions药物相互作用371. convert [kən'vɜːt]n. 皈依者;改变宗教信仰者vt. 使转变;转换…;使…改变信仰vi. 转变,变换;皈依;改变信仰n. (convert)人名;(法)孔韦尔372. predominant [prɪ'dɒmɪnənt]adj. 主要的;卓越的;支配的;有力的;有影响的373. circulate ['sɜːkjʊleɪt]vt. 使循环;使流通;使传播vi. 传播,流传;循环;流通374. metabolite [mɪ'tæbəlaɪt]n. [生化] 代谢物375. account for对…负有责任;对…做出解释;说明……的原因;导致;(比例)占376. substrate ['sʌbstreɪt]n. 基质;基片;底层(等于substratum);酶作用物377. breast cancer乳腺癌378. resistance [rɪ'zɪst(ə)ns]耐受性,抗性n. 阻力;电阻;抵抗;反抗;抵抗力379. inducer [ɪn'djuːsə]n. [遗] 诱导物;引诱者;导流片;电感器380. coadministration同时服用381. inhibit [ɪn'hɪbɪt]vt. 抑制;禁止382. alteration [ɔːltə'reɪʃ(ə)n; 'ɒl-]n. 修改,改变;变更383. comment ['kɒment]n. 评论;意见;批评vi. 发表评论;发表意见vt. 为…作评语n. (comment)人名;(德)科门特;(法)科芒384. clinical comment临床评价385. impairment [ɪm'peəm(ə)nt]n. 损伤,损害386. renal impairment肾损害387. undergo [ʌndə'gəʊ]vt. 经历,经受;忍受388. as professionally prescribed遵医嘱389. cold symptomsCold Symptoms: 感冒症状alleviate cold symptoms: 缓解感冒症状common cold symptoms: 感冒症状390. pregnant ['pregnənt]adj. 怀孕的;富有意义的391. breastfeeding ['brest,fi:diŋ]n. 母乳哺育v. 用母乳喂养(breastfeed的现在分词)392. consult [kən'sʌlt]vi. 请教;商议;当顾问vt. 查阅;商量;向…请教393. preservatives [prɪ'zɝvətɪv]n. 防腐的(preservative的复数);[助剂] 防腐剂;保存剂394. artificial [ɑːtɪ'fɪʃ(ə)l]adj. 人造的;仿造的;虚伪的;非原产地的;武断的395. flavour ['fleɪvə]n. 香味;滋味vt. 给……调味;给……增添风趣396. sweetenersn. 甜味剂(sweetener的复数形式)397. artificial flavoursArtificial flavours: 香味No artificial flavours: 不含人工香料|不含野生香料|无添加香料Artificial Colours Or Flavours: 人工色素香料398. wound [wuːnd]n. 创伤,伤口vt. 使受伤vi. 受伤,伤害399. expiry [ɪk'spaɪrɪ; ek-]n. 满期,逾期;呼气;终结400. expiring date到期日;有效期限;失效日期401. regardless of不顾,不管402. diameter [daɪ'æmɪtə]n. 直径403. anhydrous [æn'haɪdrəs]adj. 无水的404. hygroscopic [haɪgrə(ʊ)'skɒpɪk] adj. 吸湿的;湿度计的;易潮湿的405. multiply ['mʌltɪplaɪ]adj. 多层的;多样的vt. 乘;使增加;使繁殖;使相乘vi. 乘;繁殖;增加adv. 多样地;复合地406. pharmaceutical excipients药用辅料407. excipient [ek'sɪpɪənt]n. [药] 赋形剂408. vehicles ['viɪkl]赋形剂n. [车辆] 车辆(vehicle的复数形式);交通工具409. hypochlorite [,haɪpəʊ'klɔːraɪt] n. [无化] 次氯酸盐;低氧化氯410. thiosulfate [,θaɪəʊ'sʌlfeɪt] n. [无化] 硫代硫酸盐411. sodium thiosulfate[无化] 硫代硫酸钠412. quench [kwen(t)ʃ]急冷vt. 熄灭,[机] 淬火;解渴;结束;冷浸vi. 熄灭;平息413. dehydration [,diːhaɪ'dreɪʃən] n. 脱水,干燥414. filter ['fɪltə]n. 滤波器;[化工] 过滤器;筛选;滤光器vt. 过滤;渗透;用过滤法除去vi. 滤过;渗入;慢慢传开n. (filter)人名;(德)菲尔特415. dimethyl sulfoxide二甲亚砜416. centrifuge ['sentrɪfjuːdʒ]n. 离心机;[机][化工] 离心分离机vt. 用离心机分离;使…受离心作用417. saturated ['sætʃəreɪtɪd]v. 使渗透,使饱和(saturate的过去式)adj. 饱和的;渗透的;深颜色的418. brine [braɪn]n. 卤水;盐水;海水n. (brine)人名;(阿拉伯)布里内;(英)布赖恩vt. 用浓盐水处理(或浸泡)419. sampling ['sɑːmplɪŋ]n. 取样;抽样v. 取样;抽样(sample的ing形式)420. evacuate [ɪ'vækjʊeɪt]vt. 疏散,撤退;排泄vi. 疏散;撤退;排泄421. evacuate air排出空气422. juge endpoint判断终点423. developing solvent[分化] 展开剂;显影溶剂424. catalyst ['kæt(ə)lɪst]n. [物化] 催化剂;刺激因素425. milligram ['miliɡræm]n. 毫克426. litre升427. millilitre ['mili,li:tə]n. [计量] 毫升428. methylamine [miː'θaɪləmiːn] n. [有化] 甲胺429. conical ['kɒnɪk(ə)l]adj. 圆锥的;圆锥形的430. conical flask锥形烧瓶;锥形瓶431. methyl ['miːθaɪl; 'meθ-; -θɪl] n. [有化] 甲基;木精432. sulphuric [sʌl'fjʊərɪk]adj. 硫磺的;含多量硫磺的433. odor ['əudə]n. 气味;名声n. (odor)人名;(匈)欧多尔434. evaporation [ɪ,væpə'reɪʃən] n. 蒸发;消失435. evaporation pan[分化] 蒸发皿436. oven ['ʌv(ə)n]n. 炉,灶;烤炉,烤箱(也可用作蒸发皿)437. turbid ['tɜːbɪd]adj. 浑浊的;混乱的;雾重的438. filtrate ['fɪltreɪt]n. [化学] 滤液vt. 过滤;筛选vi. 过滤439. phenolphthalein [,fiːnɒl'(f)θæliːn; -'(f)θeɪl-] n. [试剂] 酚酞(一种测试碱性的试剂,可作刺激性泻剂)440. and vice versa反之亦然;反过来也一样441. hydrolysis [haɪ'drɒlɪsɪs]n. 水解作用442. prospectivelyadv. 盼望中;可能;潜在;预期前瞻性443. administration route给药途径444. topical ['tɒpɪk(ə)l]adj. 局部的;论题的;时事问题的;局部地区的445. eliminate [ɪ'lɪmɪneɪt]vt. 消除;排除446. attain [ə'teɪn]n. 成就vt. 达到,实现;获得;到达vi. 达到;获得;到达447. raw material[材] 原料448. reassessment [,ri:ə'sesmənt]n. 重新评估;[经] 重新估价;重新考虑449. as well as也;和…一样;不但…而且450. conventional [kən'venʃ(ə)n(ə)l]adj. 符合习俗的,传统的;常见的;惯例的451. conventional testsconventional tests: 常规试验方法|常规试验conventional indoor tests: 室内常规试验conventional tests of physical properties: 常规452. viscosity [vɪ'skɒsɪtɪ]n. [物] 粘性,[物] 粘度453. sterility [stə'rɪlɪtɪ]n.无菌; [泌尿] 不育;[妇产] 不孕;不毛;内容贫乏454. pyrogen ['paɪrədʒ(ə)n]n. 热原质;发热源455. endotoxin ['endəʊ,tɒksɪn]n. [病理] 内毒素456. bacterial endotoxin细菌内毒素457. adjacent [ə'dʒeɪs(ə)nt]adj. 邻近的,毗连的458. gelatinous [dʒə'lætinəs]adj. 凝胶状的,胶状的459. vapour ['veɪpə]n. 蒸气(等于vapor);水蒸气460. litmus ['lɪtməs]n. [试剂] 石蕊461. red litmus paper红色石蕊纸462. filter paper滤纸(尤制定量滤纸)463. nonluminous [nɔn'lju:minəs]adj. 无光的;不发光的464. nonluminous flame无色火焰465. charring ['tʃɑ:riŋ]n. 炭化v. 烧焦(char的ing形式)466. sublimate ['sʌblɪmeɪt]n. 升华物vt. 使升华;使高尚vi. 升华;纯化adj. 纯净化的;理想化的;高尚的467. turmeric ['tɜːmərɪk]n. 姜黄;姜黄根粉末468. turmeric paper[分化] 姜黄试纸;姜黄纸469. curdy ['kɜːdɪ]adj. 凝结了的;成凝乳状的470. pale [peɪl]n. 前哨;栅栏;范围adj. 苍白的;无力的;暗淡的vt. 使失色;使变苍白;用栅栏围vi. 失色;变苍白;变得暗淡n. (pale)人名;(塞)帕莱471. pale yellowadj. 淡黄色,浅黄色472. violet ['vaɪələt]n. 紫罗兰;堇菜;羞怯的人adj. 紫色的;紫罗兰色的n. (violet)人名;(西)比奥莱特;(法)维奥莱;(印、匈、英)维奥莱特473. portion ['pɔːʃ(ə)n]n. 部分;一份;命运vt. 分配;给…嫁妆474. emission [ɪ'mɪʃ(ə)n]n. (光、热等的)发射,散发;喷射;发行n. (emission)人名;(英)埃米申475. calibrate ['kælɪbreɪt]vt. 校正;调整;测定口径476. coefficient [,kəʊɪ'fɪʃ(ə)nt]n. [数] 系数;率;协同因素adj. 合作的;共同作用的477. equation [ɪ'kweɪʒ(ə)n]n. 方程式,等式;相等;[化学] 反应式478. mechanism ['mek(ə)nɪz(ə)m]n. 机制;原理,途径;进程;机械装置;技巧479. exclusion [ɪk'skluːʒ(ə)n; ek-]n. 排除;排斥;驱逐;被排除在外的事物480. affinity [ə'fɪnɪtɪ]n. 密切关系;吸引力;姻亲关系;类同481. adsorbantadsorbant: 吸附剂Adsorbant affinity: 吸着力Broken adsorbant: 意即已被破碎的482. eluted洗脱483. successively [sək'sesivli]adv. 相继地;接连着地484. distribution [dɪstrɪ'bjuːʃ(ə)n]n. 分布;分配485. stationary ['steɪʃ(ə)n(ə)rɪ]n. 不动的人;驻军adj. 固定的;静止的;定居的;常备军的486. stationary phase[物化][分化] 固定相;稳定期487. steering ['stiəriŋ]n. 操纵;指导;掌舵v. 驾驶;掌舵(steer的ing形式)488. steering committee指导委员会。

State Recovery Attacks on Pseudorandom Generators

State Recovery Attacks on Pseudorandom Generators

Appears in WEWoRC2005-Western European Workshop on Research in Cryptology,Lecture Notes in Informatics(LNI)P-74(2005)53-63.Gesellschaft f¨ur Informatik.State Recovery Attacks on Pseudorandom GeneratorsAndrey Sidorenko and Berry SchoenmakersEindhoven University of TechnologyP.O.Box513,5600MB Eindhoven,The Netherlandsa.sidorenko@tue.nl berry@win.tue.nlAbstract:State recovery attacks comprise an important class of attacks on pseudo-random generators.In this paper we analyze resistance of pseudorandom generatorsagainst these attacks in terms of concrete security.We show that security of the Blum-Micali pseudorandom generator against state recovery attacks is tightly related to thesecurity of the corresponding one-way function.1IntroductionOne of the most fundamental issues in modern cryptology is randomness.Ran-domness is used for probabilistic encryption and digital signature algorithms.Noncesand session keys in cryptographic protocols must be random.However in practice truerandomness is hard(if not impossible)to achieve.A possible solution to this problemis proposed by the theory of cryptographically secure pseudorandom generators thatstarts with a seminal paper of Yao[Ya82].A pseudorandom generator is a deterministic algorithm that,given a truly randombinary sequence of length n,outputs a binary sequence of length M>n that looksrandom for all efficient applications.The input to the generator is called the seed andthe output is called the pseudorandom sequence.Security of a pseudorandom gen-erator is a characteristic that shows how hard it is to tell the difference between thepseudorandom sequences and truly random sequences.A pseudorandom generator issaid to be provably secure if distinguishing these two classes of sequences is as diffi-cult as solving a well-known and supposedly hard(typically number-theoretic)prob-lem[Ya82].For instance,the Blum-Blum-Shub pseudorandom generator[BBS86]is secure under the assumption that factoring large Blum integers is a difficult prob-lem.For the pseudorandom generator proposed by Kaliski[Ka88]the correspondingassumption is intractability of the elliptic curve discrete logarithm.In order to prove security of a pseudorandom generator one has to show that an algorithm D that distinguishes the pseudorandom sequences from truly random se-quences can be reduced to an algorithm S that solves the corresponding difficult prob-lem.The security reduction is said to be tight if running time and success probabilityof these two algorithms are about the same[BR96].The pseudorandom generators mentioned above are based on the Blum-Micali (BM)construction[BM84].The BM pseudorandom generator is proved to be asymp-totically secure(see e.g.[Go01]).Namely,it is shown that there exists a polynomial-time reduction from the distinguisher D to the solver S.In fact asymptotic securityimplies that no polynomial-time distinguishing attack exists.However this asymptoticstatement says little about the security of the pseudorandom generator in practice fora particular seed length and against adversaries investing a specific amount of effort.In practice it is important to know the concrete seed length that guarantees a certain level of security.We need for concrete analysis a reduction that is as efficient aspossible.A tight reduction gives rise to a small modulus,which in turn ensures that thepseudorandom generator is efficient.To our knowledge,there exists no pseudorandomgenerator with tight security reduction.The reductions[Al88,BM84,FS00,Ka88,VV84]are polynomial-time but they are not tight.Moreover the reduction proposed by Fischlin et al.[FS00]for the Blum-Blum-Shub generator is shown to be optimal. It implies that no tight security reduction is possible for this pseudorandom generator. Inefficiency of security reductions explains the major disadvantage of provably secure pseudorandom generators,namely they are slow.A way to improve tightness of the security reductions is to weaken Yao’s defin-ition of security for pseudorandom generators.The fact that there exists no efficient distinguishing attack on a pseudorandom generator means that every bit of the pseudo-random sequence is unpredictable.Is it necessary to demand such a strong property of pseudorandom generators?Our work raises an interesting open problem:is it possible to weaken the definition of security for pseudorandom generators in a reasonable way so that there exists a pseudorandom generator with tight security reduction?In this paper we analyze security of pseudorandom generators against a subclass of distinguishing attacks,namely state recovery attacks.A state recovery attack A on a pseudorandom generator is an algorithm that,given a pseudorandom sequence, recovers the seed.We show that in case of the BM pseudorandom generators there exists a tight security reduction from the state recovery attack A to the solver S.The rest of the paper is organized as follows.In Section2we recall the definition of security for pseudorandom generators proposed by Yao[Ya82].We consider state recovery attacks versus distinguishing attacks in Section2.2.In Section2.3we give a simple example of an efficient state recovery attack.Section3contains the main result of the paper.In this section we prove robustness of BM generators against state recov-ery attacks.Two important examples,namely the Blum-Blum-Shub generator and the Kaliski generator,are discussed in Sections3.1and3.2respectively.In Section4we discuss the importance of tight security reductions.We consider the BBS generator as an example.Section5concludes the paper.2Security of Pseudorandom Generators2.1Distinguishing AttacksWe refer to a sequence of independent uniformly distributed bits as a truly random binary rmally speaking,a pseudorandom generator is a deterministic algorithm that,given a truly random binary sequence of length n,outputs a binary sequence of length M>n that looks random.Let G be some pseudorandom generator that outputs binary sequences of length M.Let S M be the set of the output sequences.Throughout this paper the notation ”∈R”indicates the process of selecting an element at random and uniformly over the corresponding set.The running time of a probabilistic algorithm is defined to be the maximum of the expected number of steps needed to produce an output,maximized over all inputs;the expected number is averaged over all coinflips made by the algo-rithm[Kn97].The following definitions are due to[Ya82].D EFINITION2.1Let D:{0,1}M→{0,1}be a probabilistic algorithm that runs in time T.Let >0.D is called a(T, )-distinguishing attack on pseudorandom generator G if|Pr[D(s)=1|s∈R S M]−Pr[D(s)=1|s∈R{0,1}M]|≥ ,where the probability is also taken over the internal coinflips of D.D EFINITION2.2A pseudorandom generator is(T, )-secure if there exists no(T, )-distinguishing attack on this pseudorandom generator.Note that the original definition of cryptographically secure pseudorandom gener-ator[Ya82]is given in the asymptotic sense.It states that a pseudorandom generator is secure if for any polynomial p no(p(n),1/p(n))-distinguishing attack exists.How-ever,for the concrete security analysis it is important to consider(T, )-distinguishing attacks for exact values of T and .The concrete version of the definition of crypto-graphically secure pseudorandom generatorfirst appears in[Kn97].In general the distinguishing attack D does not have to determine the seed in order to distinguish a pseudorandom sequence from a random sequence.For example,if the pseudorandom sequence is unbalanced D can just output the majority bit of the sequence.2.2State Recovery AttacksLet G be a pseudorandom generator that outputs binary sequences of length M. Let S M be the set of the output sequences of G.D EFINITION2.3Suppose there exists an algorithm A:{0,1}M→{0,1}n that, given s∈R S M,outputs x∈{0,1}n such that G(x)=s with probability and outputs∅with probability1− .Here the probability is taken over all choices of s and over internal coinflips of A.Assume that A(s)=∅for s/∈S M.Let T be the running time of A.Then A is called a(T, )-state recovery attack on pseudorandom generator G.Actually a state recovery attack interprets a pseudorandom generator as a one-way function{0,1}n→{0,1}M.The more secure the one-way function the more time the state recovery attack takes.The following lemma implies that the class of state recovery attacks is a subclass of distinguishing attacks.Lemma2.4Any(T, )-state recovery attack is a(T, − 2n−M)-distinguishing at-tack.P ROOF.The state recovery attack A can be transformed into a distinguishing attack D as follows.On input s∈{0,1}m set D(s)=0if A(s)=∅,otherwise set D(s)=1.Then Pr[D(s)=1|s∈R S M]= and Pr[D(s)=1|s∈R{0,1}M]= 2n−M,so D is a(T, − 2n−M)-distinguishing attack.Pseudorandom generator is a central building block of an important cryptographic primitive,namely additive stream cipher.The pseudorandom generators used for the practical stream ciphers are fast but not provably secure.Therefore new attacks on the stream ciphers are published frequently.Ekdahl[Ek03]proposes an efficient state recovery attack against the Bluetooth standard and against the stream cipher A5/1, which is used in GSM standard for mobile telephones.A series of papers[Kn98, Ma05]analyze state recovery attacks on RC4Keystream Generator.The main result of this paper is that the security of the Blum-Micali pseudorandom generators[BM84]against state recovery attacks is tightly related to the security of the corresponding one-way function.Thus security against state recovery attacks is guaranteed for much smaller seed lengths.The new reduction is discussed in Section 3.Before presenting the main result,we discuss a simple example of an efficient state recovery attack for linear congruential generators.2.3Example of an Efficient State Recovery AttackIt is well-known that for any LFSR-based pseudorandom generator the initial state can be found using the Berlekamp-Massey algorithm[Ma69].Similar problems–but maybe less well-known–concern the popular linear congruential generators.Even when all the details of the generator are kept secret,a state recovery attack is quite easy to implement.The linear congruential generator works as follows.Let A be a positive integer of bit length a for some a>0.Let j,k,s1∈Z A.The seed of the linear congruential generator(LCG)is a quadruple(A,j,k,s1).The length of the seed n=4a.The pseudorandom sequence s={s1,s2,...,s B}is generated as follows:s i=js i−1+k mod A,i=2,3,...,B.The pseudorandom sequence is a sequence of integers s i∈Z A,i=1,2,...,B.s1 is both a component of the seed and thefirst element of the pseudorandom sequence. The length of the pseudorandom sequence is M=aB.Although the LCG works well for a number of Monte Carlo problems it is not suitable for cryptographic purposes.The reason is that it does not resist a state recovery attack.Marsaglia[Ma68]shows that the LCG has a defect that cannot be removed by adjusting the ly,if l-tuples(s1,s2,...,s l),(s2,s3,...,s l+1),...,l≥3, produced by the LCG are viewed as points in the unit cube of n dimensions,then all the points will be found to lie in a relatively small number of parallel hyperplanes. This is the intuition behind the following lemma proposed by Marsaglia[Ma68]. Lemma2.5For all i,i=1,2,...,B−3,∆i is a multiple of A,where∆i=det 24s i s i+11s i+1s i+21s i+2s i+3135.Suppose an adversary knows a fewfirst s i’s.Greatest common divisor of several ∆i’s gives the value of A.Then it can compute j and k by solving the following system of linear equationsjs1+k=s2mod Ajs2+k=s3mod AAs a result the seed is revealed.The above argument shows that there exists an efficient state recovery attack on the LCG.To run this attack,the adversary does not have to know many output values s i.In practice5values s1,...,s5are enough.3The Blum-Micali ConstructionA family of cryptographically strong pseudorandom generators is proposed by Blum and Micali[BM84].Let f be a one-way permutation defined over some do-main D.Let b be a hard-core predicate for f.The seed x1of the pseudorandom generator is a uniformly distributed element of D.The pseudorandom sequence(the BM sequence)s∈{0,1}M is generated as follows:s i=b(x i),x i+1=f(x i)for i=1,...,M.If an adversary does not know the seed it cannot distinguish the BM sequence from a random sequence in polynomial-time with non-negligible advantage[BM84].There-fore due to Lemma2.4no polynomial-time state recovery attack on the BM generator is feasible.It means that,as the seed length increases,no polynomial-time adversary can retrieve the seed of the pseudorandom generator.However this asymptotic state-ment says little about the security of the pseudorandom generator in practice for a particular seed length and against adversaries investing a specific amount of effort.The security of the BM generator is proved by reduction(see e.g.[Go01]).It is shown that the problem of distinguishing BM sequences from random sequences can be reduced to inverting f.The reduction proposed is polynomial time but it is not tight.In this paper we prove the impossibility of state recovery attacks on BM generators in a different way.We show that there exists a tight reduction between state recovery attacks on a BM generator and inversion algorithms for the corresponding one-way function.Theorem3.1Suppose the Blum-Micali pseudorandom generator is vulnerable against a(T A, )-state recovery attack A.Then there exists a probabilistic algorithm S that, given y∈R D,calculates x=f−1(y)in time T S=2T A+cM with probability , where c is the computational complexity of f.The probability is taken over all choices of y and over internal coinflips of S.P ROOF.On input y∈D,algorithm Sfirst constructs M bits of the BM sequence with seed x and then uses A to retrieve x.In total there are three steps.1.Generate s2,...,s M:s2=b(y),s3=b(f(y)),...,s M=b(f M−2(y)).2.Calculate z i=A(i,s2,s3,...,s M)for i=0,1.3.If f(z i)=y for one of the values i∈{0,1}set x=z i.Since the complexity of Step1is cM the running time of algorithm S is T S=2T A+ cM.The success probability of S is .In practice cM T A.Hence T S and T A differ only by a factor of2.Therefore the above reduction is tight.This reduction is called linear-preserving in terms of [Ha99].We analyze two members of the Blum-Micali family:the Blum-Blum-Shub(BBS) generator[BBS86],and the generator based on the elliptic curve discrete logarithm [Ka88].3.1The BBS Pseudorandom GeneratorLet N be a Blum integer,that is N=pq,where p,q are prime,p≡q≡3mod4. Denote by n the bit length of N.Let E N(x)=x2mod N for x∈Z N.E N is referred to as the Rabin function.Let QR N be the set of quadratic residues modulo N.It is known that E N permutes QR N.The seed x1of the BBS generator is a random quadratic residue modulo N.The pseudorandom sequence s∈{0,1}M is generated as follows:s i=x i mod2,x i+1=E N(x i)for i=1,...,M.The BBS generator is a BM generator with D=QR N,f(x)=E N(x),b(x)= x mod2.The following statement shows that a state recovery attack on the BBS generator can be efficiently reduced to a factoring algorithm.Corollary3.2Suppose the BBS pseudorandom generator is vulnerable against a(T A, )-state recovery attack A.Then there exists a probabilistic algorithm F that factors the modulus N in expected time T F=2 −1T A+O( −1n2M).P ROOF.The proof uses a well-known fact that calculating square roots modulo N is equivalent to factoring N.Let x∈R Z N\QR N and y=E N(x).According to Theorem3.1there exists an algorithm S that,given y,calculates z∈QR N such that y=E N(z)in time T S=2T A+O(n2M)(the complexity of squaring modulo N is O(n2),where n is the length of N)with probability .The probability is taken over all choices of x and over internal coinflips of S.The following two possibilities each have probability1/2:1.z=+x mod p and z=−x mod q,2.z=−x mod p and z=+x mod q,In Case1gcd(z−x,N)=p and in Case2gcd(z−x,N)=q.Thus calculation of gcd(z−x,N)yields the factorization of N.The complexity of the gcd evaluation is negligible in comparison with O(n2M).Algorithm F now works as follows.Select x∈R Z N and use S to determine z. If S succeeds calculate gcd(z−x,N).If S fails select x∈R Z N\QR N once again etc.On average F repeats the above procedure −1times.Therefore T F= −1T S= 2 −1T A+O( −1n2M).Alexi et al.[Al88],U.Vazirani and V.Vazirani[VV84]show that the BBS pseudo-random generator remains asymptotically secure if one extracts thefirst few least sig-nificant bits of x i per iteration.Although the BBS generator with several output bits per iteration is beyond the Blum-Micali construction security of this generator against state recovery attacks follows from a straightforward generalization of Theorem3.1. Corollary3.3Suppose the BBS pseudorandom generator with j output bits per it-eration is vulnerable against a(T A, )-state recovery attack A.Then there exists a probabilistic algorithm F that factors the modulus N in expected time T F = 2j −1T A+O( −1n2M).P ROOF.The proof of this statement is similar to the proof of Corollary3.2.When proving Corollary3.2we use Theorem3.1to construct algorithm S that inverts the Rabin function in time T S=2T A+O(n2M)with probability .On Step2(see the proof of Theorem3.1)S runs algorithm A twice.In case of the BBS generator with j output bits per iteration the inversion algorithm S on Step2has to run A for2j times and therefore T S =2j T A+O(n2M).Similarly to the proof of Corollary3.2 T F =2 −1T S =2j −1T A+O( −1n2M).3.2The Pseudorandom Generator Based on the Elliptic Curve Discrete LogarithmA pseudorandom generator based on the elliptic curve discrete logarithm(ECDL generator)is proposed by Kaliski[Ka88].Let p be a prime,p≡2mod3.Consider a curve E(F p)that consists of points (x,y)∈F p×F p such thaty2=x3+c,where c∈F∗p.Kaliski[Ka88]shows that the points of E(F p)together with a point at infinity O form a cyclic additive group of order p+1.Let Q be a generator of this group.The security of the pseudorandom generator[Ka88]is based on the assumption that solving the discrete log problem over E(F p)(given points Q,Y,findα∈Z p+1 such that Y=αQ)is infeasible.Letφ(P)=y,if P=(x,y); p,if P=O.The ECDL generator is a BM generator with D=E(F p),f(P)=φ(P)Q,b(P)=1,ifφ(P)≥(p+1)/2; 0,otherwise.The seed P1of the ECDL generator is a random point on the curve.Corollary3.4Suppose the ECDL pseudorandom generator is vulnerable against a (T A, )-state recovery attack A.Then there exist a probabilistic algorithm L that, given Y∈E(F p),calculatesα∈Z p+1such that Y=αQ in expected time T L= 2 −1T A+O( −1(log p)3M).P ROOF.The proof is based on Theorem3.1and on random-self-reducibility of the discrete logarithm problem.The details follow.According to Theorem3.1there exists an algorithm S that on input Z∈R E(F p) calculates X=f−1(Z)in time T S=2T A+O((log p)3M)with probability .Let Y∈E(F p).Algorithm L works as follows.Select b∈R Z∗p+e S to determine X=f−1(bY).If S succeeds calculate a=b−1mod(p+1)and α=aφ(X)(in this case bY=f(X)=φ(X)Q and thus Y=aφ(X)Q).If S fails select b∈R Z∗p+1once again etc.On average L repeats the above procedure −1 times.Therefore T L= −1T S=2 −1T A+O( −1(log p)3M). 4Concrete Result for the BBS generatorConsider the BBS pseudorandom generator with1output bit per iteration(see Section3.1).Suppose our goal is to generate a pseudorandom sequence of length M=107such that no distinguishing attack D can distinguish the sequence from truly random sequence in time T D=280with advantage =0.01(we use the same choice of parameters as in[FS00,Ge05,Kn97]).The question is what bit length n of the modulus N guarantees this level of security.The security of the BBS generator relies on the intractability of factoring large Blum integers.The fastest general-purpose factoring algorithm today is the Number Field Sieve.According to[LV01]on heuristic grounds the Number Field Sieve is ex-pected to require time proportional toγexp((1.9229+o(1))(ln N)1/3(ln ln N)2/3) for a constantγ.Following[LV01]we make an assumption that the o(1)-term can be treated as zero.Let T F(n)be the number of clock cycles needed to factor an n-bit integer.ThenT F(n)≈γexp(1.9229(n ln2)1/3(ln(n ln2))2/3).We use one clock cycle as a unit of time.Experience from available data points sug-gests that T F(512)≈3·1017[LV01].Thereforeγ≈2.8·10−3andT F(n)≈2.8·10−3·exp(1.9229(n ln2)1/3(ln(n ln2))2/3).A SSUMPTION4.1No algorithm can factor a randomly chosen n-bit Blum integer faster than in T F(n)clock cycles.The strongest security proof for of the BBS generator is proposed by Fischlin and Schnorr[FS00].Lemma4.2(Fischlin and Schnorr)Under Assumption4.1,no(T D, )-distinguishing attack on the BBS generator exists ifT D≤T F(n)6n(log2n) −2M2−27n −2M2log2(8n −1M).We observe that T F T D.Hence the security reduction is not tight.Lemma4.2implies that for M=107no(280,0.01)-distinguishing attack on the BBS exists if n≥6800.On the other hand due to Corollary3.2the security against (280,0.01)-state recovery attacks is guaranteed for n≥1100.Since the security reduction in case of the state recovery attacks is tighter than in case of the distin-guishing attacks the length of the modulus that guarantees security against the state recovery attacks is significantly smaller than the one that guarantees security against distinguishing attacks.5ConclusionProvable security is a substantial and desirable property of cryptographic prim-itives.To our knowledge,there exists no provably secure pseudorandom generator with a tight security reduction.For this reason the provably secure pseudorandom generators are slow.Often it makes them impractical.One of the important examples of provably secure pseudorandom generators is the Blum-Micali family.They are known to be asymptotically secure,shown using polynomial time reductions which are far from tight.Moreover,in case of the BBS generator the reduction cannot be tighter[FS00].In this paper we show that there exists a tight security reduction for a subclass of attacks,namely for state recovery attacks.A way to improve tightness of the security reductions(and thus to make the pseudo-random generators more efficient)is to weaken Yao’s definition of security for pseudo-random generators.The fact that there exists no efficient distinguishing attack on a pseudorandom generator means that every bit of the pseudorandom sequence is unpre-dictable.Is it necessary to demand such a strong property of pseudorandom genera-tors?For example,suppose a pseudorandom generator is used for generating1024-bit RSA keys.In this case leakage of a few bits of the secret key does not affect the secu-rity of the scheme,since the Number Field Sieve can not benefit from this information. Wanders[Wa87]points out that a small deviation from Golumb’s randomness postu-lates results in even smaller information leak per bit(namely,the information leak is quadratic in the deviation).Our work raises an interesting open problem:is it possible to weaken the definition of security for pseudorandom generators in a reasonable way so that there exists a pseudorandom generator with tight security reduction?References[Al88]W.Alexi,B.Chor,O.Goldeich,and C.P.Schnorr.RSA and Rabin func-tions:Certain Parts are as Hard as the Whole.SIAM Journal on Computing,pages194–209,1988.[BBS86]L.Blum,M.Blum,and M.Shub.A Simple Unpredictable Pseudo-Random Number Generator.SIAM Journal on Computing,pages364–383,1986. [BM84]M.Blum and S.Micali.How to Generate Cryptographically Strong Se-quences of Pseudo-Random Bits.SIAM Journal on Computing,13(4):850–864,1984.[BR96]M.Bellare and P.Rogaway.The Exact Security of Digital Signatures–How to Sign with RSA and Rabin.In Advances in Cryptology–Eurocrypt1996,volume1070of Lecture Notes in Computer Science,pages399–416,Berlin,1996.Springer-Verlag.[Ek03]P.Ekdahl.On LFSR based Stream Ciphers–Analysis and Design.PhD thesis,Lund University,Faculty of Technology,Lund,Sweden,2003. [FS00]R.Fischlin and C.P.Schnorr.Stronger Security Proofs for RSA and Rabin Bits.Journal of Cryptology,13(2):221–244,2000.[Ge05]R.Gennaro.An Improved Pseudo-random Generator Based on the Discrete Logarithm Problem.Journal of Cryptology,18(2):91–110,2005.[Go01]O.Goldreich.Foundations of Cryprography.Basic Tools.Cambridge Uni-versity Press,Cambridge,United Kingdom,2001.[Ha99]J.Hastad,R.Impagliazzo,L.A.Levin,and M.Luby.Construction of a pseudo-random generator from any one-way function.SIAM Journal onComputing,28:1364–1396,1999.[Ka88] B.S.Kaliski.Elliptic Curves and Cryptography:A Pseudorandom Bit Generator and Other Tools.PhD thesis,MIT,Cambridge,MA,USA,1988. [Kn98]L.R.Knudsen et al.Analysis Methods for(Alleged)RC4.In Advances in Cryptology–Asiacrypt1998,volume1514of Lecture Notes in ComputerScience,pages327–341,Berlin,1998.Springer-Verlag.[Kn97] D.E.Knuth.Seminumerical Algorithms,volume3.Addison-Wesley, Reading,MA,USA,third edition,1997.[LV01] A.K.Lenstra and E.R.Verheul.Selecting Cryptographic Key Sizes.Jour-nal of Cryptology,14(4):255–293,2001.[Ma05]I.Mantin.Predicting and Distinguishing Attacks on RC4Keystream Gen-erator.In Advances in Cryptology–Eurocrypt2005,volume3494of Lec-ture Notes in Computer Science,pages491–506,Berlin,2005.Springer-Verlag.[Ma68]G.Marsaglia.Random Numbers Fall Mainly in the Planes.Proceedings of the National Academy of Sciences,61(1):25–28,1968.[Ma69]J.L.Massey.Shift-Register Synthesis and BCH Decoding.IEEE Transac-tions on Information Theory,15:122–127,1969.[VV84]U.V.Vazirani and V.V.Vazirani.Efficient and Secure Pseudo-Random Number Generation.In IEEE Symposium on Foundations of ComputingScience,pages458–463,1984.[Wa87]H.Wanders.Two Topics in Secrecy.Master’s thesis,Eindhoven Univer-sity of Technology,Department of Mathematics and Computing Science,Eindhoven,The Netherlands,1987.[Ya82] A.C.Yao.Theory and Application of Trapdoor Functions.In IEEE Sym-posium on Foundations of Computer Science,pages80–91,1982.。

How to Manage the New Generation of Agile Earth Observation Satellites

How to Manage the New Generation of Agile Earth Observation Satellites

How to Managethe New Generation of Agile Earth Observation Satellites?Michel Lemaître,Gérard Verfaillie,Frank JouhaudONERA,Centre de ToulouseJean-Michel Lachiver,Nicolas BatailleCNES,Centre de ToulouseApril19,2000AbstractIn this paper,we address the problem of managing the new generation of agile Earth Ob-serving Satellites.Whereas non-agile satellites such as Spot have only one degree of freedomfor acquiring images,the new generation satellites have three,giving opportunities for a moreefficient scheduling of observations.A counterpart of this advantage is that the scheduling ofobservations is made much more difficult,due to the larger search space for potential solutions.Two quite different methods have been investigated in order to solve the scheduling of agile EarthObserving Satellites:a method based on the existing Constraint Programming framework,and aLocal Search method,based on insertion/removal of images in a sequence.1IntroductionThe new generation of Earth Observing Satellites(EOS),like those studied in the French PLEIADES project,will be Agile Earth Observing Satellites(AEOS).This means that they will benefit from new manoeuvrability capabilities:the observing instrument(camera)isfixed on the satellite,and the whole satellite can move on the three axes(roll,pitch and yaw),allowing manoeuvrability for image acquisitions as well as for transitions between image acquisitions.This must be opposed to the way the current generation of EOS(like Spot,[BLV99])acquire images:they have only one degree of freedom,provided by a mobile mirror in front of each instrument.The mirror,operated on the roll axis,is only moved during the transition between acquisitions,and remainsfixed during an image acquisition.The images are acquired thanks to the movement of the satellite on its track.The advantages provided by the new agility capabilities are the following:there is a potentially infinite number of ways of acquiring a given objective or area,since the azimuth and the start time of the image acquisition are now free(within given limits).Consequently,much more opportunities are available for acquiring a set of demanded images,giving rise to a potentially better efficiency of the whole system.On the other hand,the scheduling of an AEOS is much more difficult.On a non-agile EOS,the start time of any candidate image acquisition isfixed.This means that the order between any candidate image acquisition cannot change,and that compatibilities between images can be pre-computed.On the contrary with an AEOS,the start times are free,giving rise to much more scheduling opportunities.In this paper,we report a preliminary study on the management of these AEOS.First,we de-scribe the general AEOS scheduling problem,and how it can be casted in a sequence of AEOS Track Scheduling Problems.We will present two quite different approaches to the resolution of the Track Scheduling Problem.Then we will compare the results obtained with these two approaches,and state our conclusions.2Scheduling an Agile Earth Observation Satellite2.1The General AEOS Scheduling ProblemThe input of the general AEOS scheduling problem is the current set of observation demands from the clients.This set evolves continuously,with the addition of new coming demands,and with the withdraw of demands which have been(possibly partially)satisfied.The observation demands,which can be polygons or circles to be imaged on the Earth surface,are analyzed and transformed into a set of rectangular images to be acquired each in one shot,covering the demands.Each demand has its own validity time period,outside of which the demand has no utility.Note that,due to the large number of demands concerning some areas,all demands cannot be satisfied in general.So we will have to make choices between them.Hence,in essence,the general AEOS scheduling problem is a continuous one:at each period of time(say,a day or an orbit revolution),we must analyze the current set if demands and select from the corresponding set of candidate images,a feasible sequence of images to be acquired during the period.With each demand is associated a weight reflecting its importance.The selection of images to be acquired must optimize the expected sum of the weights of the satisfied demands over a given time horizon(say,one month).Note that an acquired image is not necessarily a satisfying image,due to possible bad weather conditions(an image can be selected several times,until it becomes acquired with satisfying conditions).The general AEOS scheduling problem is difficult to state precisely and to solve,because we must take into account uncertainties from:incoming future demandsthe weather forecasts.It has been shown[VBMEB99]that these two sources of uncertainty can be taken into account by modifying the weight of demands.This allows the general problem to be casted into a sequence of simpler Track Scheduling Problem instances.2.2The AEOS Track Scheduling ProblemLet us name a track,a part of the satellite trajectory over a continuously enlighten area above the Earth surface1,associated with the period of time during which the satellite is on the track.The idea is to schedule the satellite track by track.The input of a Track Scheduling Problem instance will be a set of candidate images that could be acquired on a given track(visibility constraints satisfied).Each image is associated with a weight, derived from the weight of the corresponding demand.Each image is a rectangular area on the Earthsurface.It can be acquired using two opposite directions,thanks to the agility of the satellite.So,with each candidate image is associated two possible photographs.Only one photograph out of both has to be acquired to acquire the corresponding image.The problem is to select,from the set of candidate photographs,a feasible sequence of pho-tographs,maximizing a quality criterion which is the sum of the weights of the selected photographs2.“Feasible”means that the sequence must respect temporal constraints.In order to state these con-straints more precisely,we need some formal notations.For each candidate photograph,we can compute from the given track data::its earliest start time:its latest start time:its duration.We can also compute,for each possible pair of photographs,a quantity which is the minimal duration for a transition from the end of acquisition to the beginning of acquisition3. Given these quantities,the temporal constraints can be stated as follows:to each selected photograph of the sequence,we must be able to associate a start time such thatto each pair of successive photographs in the sequence,we must haveIt can be shown that the AEOS Track Scheduling Problem is NP-hard,which means that in prac-tice,any algorithm solving this problem to optimality will need a computation time growing exponen-tially with the size of the instance to be solved.In other words,this problem is highly combinatorial.Actually,some additional operational constraints must be taken into account in our real-word problem:some images having priority must be absolutely selectedsome images have to be processed twice,with different time windows,because they are issued from of a stereo-scopic demandwe would like to favor the ending of acquisition of the most possible images from a given demand,before beginning the acquisition of images from another new one.This last constraint will be taken into account by a modification of the criteria to be optimized: instead of maximizing the sum of the weights of the selected images,we will maximize the sum of partial gains obtained on each concerned demand.The partial gain obtained on a demand is a concave function of the acquired surface,instead of a linear one(seefigure1).101GP(x)x0.10.40.70.4Figure 1:Example of a partial gain function .The partial gain resulting from the acquisition of a fraction of the total surface of a demand of weight is .3Solving the AEOS Track Scheduling Problem by Constraint Program-mingConstraint Programming is a technique for modeling and solving combinatorial problems.See for example [Mac85,JM94,HS96].It is now reaching a mature state.We have tried to use this existing framework and its associated algorithms,in order to solve our Track Scheduling Problem.We choose the OPL Studio framework [Van00],because of its elegance and simplicity.With OPL (Optimization Programming Language ),one describes a model of a problem :data,decision variables,constraints and optimization criterion.A solver is associated to the language.It searches through the search space described by a model,trying to find optimal solutions,in an ordered and systematic way.The underlying algorithms are said to be complete ,which means that they offer the guarantee to find an optimal solution,provided they are given enough resolution time.The constraint programming approach is very flexible.It allowed us to take into account all operational constraints.Nevertheless,some care must be taken in the model building,so that it can be solved in practice.In our problem,in order to keep a practicable size for the search space,a key idea is to solve the problem using several resolutions :in each resolution,we search for an optimal sequence of a fixed length.Several resolutions are launched with different lengths,in order to find an overall optimal sequence.The solver must be guided in the search space,in order to find good solutions as quickly as possible.OPL provides some ways to do it.In particular,we can state which decision variables have to be instantiated first.This helps the solver a lot.The above mentioned techniques are not sufficient to allow for solving models in a reasonable amount of time.A way to control the combinatorial explosion is to cut down the search space by adding constraints.These additional constraints are justified by some heuristics.In our case,the following heuristics have been used to add constraints to the model :for large instances,the candidate images are limited to those having a chance to be part of a good solution :we eliminate images having very small weightsin a good solution,the order in which images appear in the sequence,more or less obeys the order in which images areflown over by the satellite;this means that we can constrain each candidate image to appear only in a given sub-sequence of the whole sequence(for example,an image which belongs to the middle of the track is constrained to belongs to the middle of the sequence if it is selected)following the same idea,an optimal sequence will seldom include a“backtrack”,that is a tran-sition between two images going in the opposite direction of the satellite,even if the agility of the satellite permits it;so we add a constraint,not forbidding backtracks,but limiting them in rangetaking advantage of the multiple resolution process explained above,successive resolutions are conducted in increasing order of the sequence length,and we force the optimal sequence computed in a resolution to be a subset of the sequence solution of the next resolution.The use of additional constraints of course removes the guarantee offinding optimal solutions. But associated with a clever way to limit the computation time at each step,the enriched model allows for solving most instances of the problem,in a reasonable amount of time,with good quality results. Note that other authors[HP99]have been more successful with complete methods,due perhaps to more limited search spaces.4Solving the AEOS Track Scheduling Problem by Local SearchLocal search(see for example[AL97])is a well-known technique for solving highly combinatorial problems,including EOS planning and scheduling problems[WS00].When faced with a very large search space,as can be found in our application,local search is often a valuable alternative.It in-cludes methods like Descent Search,Tabu Search,Simulated Annealing,genetic algorithms...These methods are in essence incomplete,meaning that there can be no proof of optimality of the solutions generated.We have adapted the principles of local search to the AEOS Track Scheduling Problem in the following ways.A current feasible solution sequence of photographs is maintained.A better solution is searched for in the neighborhood.A neighboring solution is obtained by insertion or removal of a photograph in the current sequence.It will become the new current solution if it is feasible and if it improves the criterion.The key points in local search algorithms are:the use of heuristics in order to guide the search towards good regions of the search spacea well tuned level of randomness:too much randomness prevents the heuristics to be efficientlyfollowed,and not enough randomness tends to restrict the search to a deterministic walk.The following tunings seem to give good results in our application:insertion or removal is decided randomly but non uniformly,according to a probability which evolves during the search:a successful insertion increases the probability of future insertions, and conversely,a failure in an insertion decreases this probabilitya new photograph to be inserted is randomly but non uniformly selected,among those currentlyrejected,with a probability proportional to the weight of the photographFigure2:A quality profile obtained with the local search method for the resolution of a large instance of the AEOS Track Scheduling Problem(instance id=3:25_22,684candidate photographs).Average, standard deviation and maximum value of the quality criterion,obtained after a given CPU elapsed time,over100different executions.the place where to insert a new photograph is uniformly randomly chosen.An irritating drawback of local search methods is their non-determinism:each execution results in a different solution.To get an idea of the performance of a local search algorithm,it is common practice to compute a quality profile,such as the one presentedfigure2.Thisfigure gives the average, standard deviation and maximum value of the quality criterion obtained after a given CPU elapsed time,over100different resolutions of a large instance.In our case,it can be seen that a steady average and maximum quality is obtained after1minute of CPU time.For this instance,average values are within15%of the maximum,and standard deviations are rather small.5Comparison of methodsWe have compared the performances of both presented methods,by running them on7representative instances of the AEOS Track Scheduling Problem.Results are presentedfigure3.It shows that the best performances are obtained by the local search method.But the methods have to be compared also on other grounds.Constraint Programming is the easiest method to bring into play.A greatflexibility is permitted in the way models can be built. Potentially any kind of constraints can be expressed.Robustness is also a quality of the approach, allowing the model to evolve easily by local changes.These qualities are particularly useful in real-word applications.On the other hand,mastering the combinatorial explosion can be a challenging task.Models must be necessarily tuned for an efficient search.Some support tools are provided forhelp,but they can be difficult to use.instance id C.P.L.S.max501809002:13_1112411904901825904456002:26_9643877659213310684549003:25_22149257298056294145500[HP99]S.A.Harrison,M.E.Price.Task scheduling for satellite based imagery.Dans Proceed-ings of the Eighteenth Workshop of the UK Planning and Scheduling Special InterestGroup,pages64–78,University of Salford,UK,December1999.[HS96]P.Van Hentenryck,V.Saraswat.Strategic directions in constraint programming.ACM Computing Surveys,1996.[JM94]J.Jaffar,M.Maher.Constraint logic programming:A survey.Journal of Logic Pro-gramming,1994.[Mac85] A.K.Mackworth.The complexity of some polynomial network consistency algorithms for constraint satisfaction problems.Artificial Intelligence,25:65–74,1985.[Van00]P.VanHentenryck.ILOG OPL3.0,Optimization Programming Language,Reference Manual.ILOG,Janvier2000.[VBMEB99]G.Verfaillie,E.Bensana,C.Michelon-Edery,N.Bataille.Dealing with Uncertainty when Managing an Earth Observation Satellite.Dans Proc.of the5th InternationalSymposium on Artificial Intelligence,Robotics,and Automation for Space(i-SAIRAS-99),pages205–207,Noordwijk,The Netherlands,1999.[WS00]W.J.Wolfe,S.E.Sorensen.Three scheduling algorithms applied to the earth observing systems domain.Management Science,46(1):148–168,January2000.。

cochrane纳入的RCT文献质量评价风险偏倚评价工具中英文对照版

cochrane纳入的RCT文献质量评价风险偏倚评价工具中英文对照版
没有足够的信息判断随机序列的产生存在高风险或低风 险
分配隐藏
分配前不充足的分配隐藏导致选择偏倚
低风险判断标准
参与者以及纳入参与者的研究者因以下掩盖分配的方法 或相当的方法,事先不了解分配情况
•中心分配(包括电话,网络,药房控制随机)
•相同外形的顺序编号的药物容器;
•顺序编号、不透明、密封的信封
高风险判断标准
参与者以及纳入参与者的研究者可能事先知道分配,因
而引入选择偏倚,譬如基于如下方法的分配:
•使用摊开的随机分配表(如随机序列清单)
•分发信封但没有合适的安全保障(如透明、非密 封、非顺序编号)
•交替或循环
•出生日期
•病历号
•其它明确的非隐藏过程
风险未知
没有足够信息判断为低风险或高风险。通常因分配隐藏 的方法未描述或描述不充分。例如描述为使用信封分配, 但为描述信封是否透明?密封?顺序编号?
outcome assessors.
Attrition bias.
Incomplete outcome dataAssessments should be made for each main outcome (or class of outcomes).?
Describe the completeness of outcome data for each main outcome, including attrition and exclusions from the analysis. State whether attrition and exclusions were reported, the numbers in each intervention group (compared with total randomized participants), reasons for attrition/exclusions where reported, and any re-inclusions in analyses performed by the review authors.

pcr技术2 (4)

pcr技术2 (4)

SOLUTION-BASED METHOD FOR GENOMIC DNA PURIFICATION FROM WHOLE BLOOD
Finally, the genomic DNA is concentrated and desalted by
isopropanol precipitation (异丙
PCR.
DNA ISOLATION
Although the methods described here reproducibly liberate the DNA from the cells and tissues, prior handling and age of the specimen may affect the average length of the resulting DNA fragments.
Genomic DNA isolation
The purification of genomic DNA for downstream use in PCR can be achieved in a number of ways. Here we present a solution-based method for the isolation of genomic DNA from 10 ml whole blood. A four-step process using the Wizard Genomic DNA purification Kit.
It would be unreasonable to expect nucleic acid isolated from a formalin-fixed, paraffin-embedded tissue section to serve as template for products longer than 800 bp.

SI Redox-triggered changes in the self-assembly of a ferrocene–peptide conjugate

SI Redox-triggered changes in the self-assembly of a ferrocene–peptide conjugate

Electronic Supplementary Material (ESI) for Chemical Communications This journal is © The Royal Society of Chemistry 2014
Unlike amides, plots of alpha protons were crossed to each other presumably suggesting a change in conformation of peptides occurred with temperature. It can be note that the NMR peaks are enough sharp even at lower temperature where most of the molecules remain in hydrogen bonded and they are dynamic in nature as they do not form gel in absence of ultrasound.
Fig. S7 VT-NMR of Fc-peptide 1 in Tolene D8 at a concentration of 8 mg/0.5 mL: ultrasound induced gel phase material, on heating starting from 25C (a) stacked spectra and (b) Plots of chemical shift for three amide protons obtained from spectra; square, circle and triangle represent for Phe(2), Phe(3) and Val(1) respectively. Explanation: Intitialy, at 20/30 C gel phase material exhibited broad peaks for amide protons as expected because most of the gelator molecules remain in aggregated (polymeric) state and only a few non-aggregated molecules present in gel phase which contribute to NMR signal. With increasing temperatue up to gel melting point, aggreagated (polymeric) gelators become more soluble/dynamic presumably by trasfering into lower aggregated or low molecular weight polymeric state and they 5

Sejnowski Influence of ionic conductances on spike timing reliability of cortical neurons f

Sejnowski Influence of ionic conductances on spike timing reliability of cortical neurons f

Influence of Ionic Conductances on Spike Timing Reliability of Cortical Neurons for Suprathreshold Rhythmic InputsSusanne Schreiber,1,5Jean-Marc Fellous,2Paul Tiesinga,1,3and Terrence J.Sejnowski1,2,41Sloan-Swartz Center for Theoretical Neurobiology,2Howard Hughes Medical Institute,and Computational Neurobiology Lab,Salk Institute,La Jolla,California92037;3Department of Physics and Astronomy,University of North Carolina,Chapel Hill,North Carolina 27599;4Department of Biology,University of California San Diego,La Jolla,California92037;and5Institute for Theoretical Biology, Humboldt-University Berlin,D-10115Berlin,GermanySubmitted9June2003;accepted infinal form15September2003Schreiber,Susanne,Jean-Marc Fellous,Paul Tiesinga,and Ter-rence J.Sejnowski.Influence of ionic conductances on spike timing reliability of cortical neurons for suprathreshold rhythmic inputs.J Neurophysiol91:194–205,2004.First published September24, 2003;10.1152/jn.00556.2003.Spike timing reliability of neuronal responses depends on the frequency content of the input.We inves-tigate how intrinsic properties of cortical neurons affect spike timing reliability in response to rhythmic inputs of suprathreshold mean. Analyzing reliability of conductance-based cortical model neurons on the basis of a correlation measure,we show two aspects of how ionic conductances influence spike timing reliability.First,they set the preferred frequency for spike timing reliability,which in accordance with the resonance effect of spike timing reliability is well approxi-mated by thefiring rate of a neuron in response to the DC component in the input.We demonstrate that a slow potassium current can modulate the spike timing frequency preference over a broad range of frequencies.This result is confirmed experimentally by dynamic-clamp recordings from rat prefrontal cortical neurons in vitro.Second, we provide evidence that ionic conductances also influence spike timing beyond changes in preferred frequency.Cells with the same DCfiring rate exhibit more reliable spike timing at the preferred frequency and its harmonics if the slow potassium current is larger and its kinetics are faster,whereas a larger persistent sodium current impairs reliability.We predict that potassium channels are an efficient target for neuromodulators that can tune spike timing reliability to a given rhythmic input.I N T R O D U C T I O NIntrinsic neuronal properties,such as their biochemistry,the distribution of ion channels,and cell morphology contribute to the electrical responses of cells(see e.g.Goldman et al.2001; Magee2002;Mainen and Sejnowski1996;Marder et al.1996; Turrigiano et al.1994).In this study we explore the influence of ionic conductances on the reliability of the timing of spikes of cortical cells.Robustness of spike timing to physiological noise is the prerequisite for a spike timing–based code,and has recently been investigated(Beierholm et al.2001;Brette and Guigon2003;Fellous et al.2001;Fricker and Miles2000; Gutkin et al.2003;Mainen and Sejnowski1995;Reinagel and Reid2002;Tiesinga et al.2002).It has been found experimen-tally that different types of neurons are tuned to different stimuli with respect to spike timing reliability.For example, cortical interneurons show maximum reliability in response to higher-frequency sinusoidal stimuli,whereas pyramidal cells respond more reliably to lower-frequency sinusoidal inputs (Fellous et al.2001).An important difference between those types of neurons is the composition of their ion channels. Taking into account that effective numbers of ion channels can be adjusted on short time scales through neuromodulation, changes in ion channels may also provide a useful way for a neuron to dynamically maximize spike timing reliability ac-cording to the properties of the input.Spike timing reliability is enhanced with increasing stimulus amplitude(Mainen and Sejnowski1995).In the intermediate amplitude regime,the frequency content of the stimulus is an important factor determining reliability(Fellous et al.2001; Haas and White2002;Hunter and Milton2003;Hunter et al. 1998;Jensen1998;Nowak et al.1997;Tiesinga2002).Spike timing reliability of a neuron is maximal for those stimuli that contain frequencies matching the intrinsic frequency of a neu-ron(Hunter et al.1998).The intrinsic(or preferred)frequency is given by thefiring rate of a neuron in response to the DC component of the stimulus.Because of the relation to the DC firing rate of a neuron,both the DC value(whether the stimulus mean or additional synaptic input)and the conductances of a cell can be expected to influence the spike timing frequency preference.The former was recently shown by Hunter and Milton(2003).The influence of conductances(rather than injected current)on spike timing reliability through changes in the neuronal activity according to the resonance effect is the focus of thefirst part of this paper,see RESULTS(Influence of conductances on the frequency preference).In this part we specifically seek to understand which ionic conductances of cortical neurons can mediate changes of the preferred frequency(with respect to spike timing reliability) over a broad range of frequencies.Reliability is assessed on the basis of the robustness of spike timing to noise(of amplitude smaller than the stimulus amplitude).Injecting sinusoidal cur-rents on top of a DC current into conductance-based model neurons,we confirm that spike timing reliability is frequency-dependent as predicted by the resonance effect.We show that reliability can be regulated at the level of ion channel popula-tions,and identify the slow potassium channels as powerful to influence the preferred frequency.Our simulations support that the influence of ion channels on spike timing reliability also holds for more realistic rhythmic stimulus waveforms.Dy-namic-clamp experiments in slices of rat prefrontal cortexAddress for reprint requests and other correspondence:T.J.Sejnowski, Computational Neurobiology Laboratory,The Salk Institute,10010N.Torrey Pines Road,La Jolla,CA92037(E-mail:terry@).The costs of publication of this article were defrayed in part by the payment of page charges.The article must therefore be hereby marked‘‘advertisement’’in accordance with18U.S.C.Section1734solely to indicate this fact.J Neurophysiol91:194–205,2004.First published September24,2003;10.1152/jn.00556.2003.con firm the theoretical prediction that slow potassium channels can mediate a change in spike timing reliability,dependent on the frequency of the input.In the second part of the RESULTS section (In fluence of con-ductances on spike timing reliability at the preferred fre-quency )we explore the in fluence of ion channels beyond changes in preferred frequency attributed to the resonance effect.Different neurons may have the same preferred fre-quency (i.e.,the same DC firing rate)but different composition of ion channels.We analyze the in fluence of slow potassium channels and persistent sodium channels on spike timing reli-ability for neurons with the same preferred frequency.We find that both channel types signi ficantly in fluence spike timing reliability.Slow potassium channels increase reliability,whereas persistent sodium channels lower it.M E T H O D SModel cellsThe single-compartment conductance-based model neurons were implemented in NEURON.In the basic implementation,the neurons contained fast sodium channels (Na),delayed-recti fier potassium channels (K dr ),leak channels (leak),slow potassium channels (K s ),and persistent sodium channels (Na P ).The time resolution of the numerical simulation was 0.1ms.The kinetic parameters of the 5basic channel types and reversal potentials were taken from a model of a cortical pyramidal cell Golomb and Amitai (1997),apart from the reversal potential of the leak channels,which was set to Ϫ80mV (to avoid spikes in the absence of input and noise).The conductances of the cell we will refer to as the reference cell were (in mS /cm 2):g Na ϭ24,g Kdr ϭ3,g leak ϭ0.02,g Ks ϭ1,and g NaP ϭ0.07(Golomb and Amitai 1997).Its input resistance was 186M ⍀.The slow potassium conductance represented potassium channels with an activation time on the order of several tens to hundreds of milliseconds (here 75ms).In the model it is responsible for a spike frequency adaptation to a current step,which is experimentally observed in cortical pyramidal neurons (Connors and Gutnick 1990;McCormick et al.1985).We also investigated cells where the K s channels were replaced by mus-carinic potassium channels (K M )and by calcium-dependent potassium channels (K Ca ).The muscarinic channel K M was a slow noninacti-vating potassium channel with Hodgkin –Huxley style kinetics (Barkai et al.1994;Storm 1990).The calcium-dependent conductance K Ca was based on first-order kinetics and was responsible for a slow afterhyperpolarization (Tanabe et al.1998).This channel was acti-vated by intracellular calcium and did not depend on voltage.Because of the dependency of K Ca on calcium,we also inserted an L-type calcium channel as well as a simple Ca-ATPase pump and internal buffering of calcium.For the parameters of these additional currents see APPENDIX .Kinetic parameters of all channels used were set to 36°C.Stimulus waveformsThe stimuli used to characterize spike timing reliability of individ-ual cells consisted of 2components.The first component was a constant depolarizing current I DC ,which was the same for all model cells (apart from the simulations designed to study of the in fluence of the DC),and which also remained fixed throughout experimental recording of a cell.The second component was a sine wave with frequency fi ͑t ͒ϭC sin ͓2␲f t ͔ϩI DCThe amplitude of the sine wave C was always smaller than I DC .Examples of stimuli are shown in Fig.1,B and C .To characterize spike timing reliability in model cells,we applied a set of such stimuli with 70different frequencies (1–70Hz in 1-Hz increments)and 3different amplitudes of the sine wave component (C ϭ0.05,0.1,and 0.15nA).I DC ϭ0.3nA in all cases.For each frequency and amplitude combination,spikes for n ϭ20repeated trials of the same stimulus (duration 2.0s)were recorded.Reliability was calculated based on spiking responses 500ms after the onset of the stimulus,discarding the initial transient.To simulate intrinsic noise,we also injected a different random zero-mean noise of small amplitude [SD ␴ϭ20pA]on each individual trial.For the reference cell the noise resulted in voltage fluctuations of about 1.3mV SD at rest.The noise was generated from a Gaussian distribution and filtered with an alpha function with a time constant of ␶ϭ3ms.Although overall reliability systematically decreased with the size of the noise,neither the frequency content of the noise nor the absolute size of the noise (in that range)signi ficantly changed the results.Spike times were determined as the time when the voltage crossed Ϫ20mV from below.The input resistance for the model cells was estimated by application of a depolarizing DC current step suf ficiently large to depolarize the cell by Ն10mV.Model neurons were also tested with a stimulus where power was distributed around one dominant frequency.These more realistic stimuli were constructed to have a peak in the power spectrum in either the theta-or the gamma-frequency range.These waveforms mimic theta-and gamma-type inputs and were created by inverse Fourier transform of the power spectrum (with random phases).For the theta-rich wave,the power spectrum consisted of a large peak at 8Hz (Gaussian,␴ϭ1Hz)and a small peak at 50Hz (Gaussian,␴ϭ6Hz).For gamma-rich waves,the power spectrum had a large peak at 30or 50Hz (with ␴ϭ3and 6Hz,respectively),and a small peak at 8Hz (␴ϭ1Hz).These waveforms were first normalized to have a root-mean-square (RMS)value of 1and were then used with different scaling factors (yielding different RMS values).The DC component was added after scaling.These stimuli were presented for 10s and when evaluating reliability,the first 500ms after stimulus onset were discarded.The reliability measureSpike timing reliability was calculated from the neuronal responses to repeated presentations of the same stimulus.For the model studies this implied the same initial conditions,but different noise for each trial.Reliability was quanti fied by a correlation-based measure,which relies on the structure of individual trials and does not require the de finition of a priori events.For a more detailed discussion of the method see Schreiber et al.(2003).The spike trains obtained from N repeated presentations of the same stimulus were smoothed with a Gaussian filter of width 2␴t ,and then pairwise correlated.The nor-malized value of the correlation was averaged over all pairs.The correlation measure R corr ,based on the smoothed spike trains,s ជi (i ϭ1,...,N ),isR corr ϭ2N ͑N Ϫ1͒͸i ϭ1N͸j ϭi ϩ1Ns ជi ⅐s ជj ͉s ជi ͉͉s ជj ͉The normalization guarantees that R corr ʦ[0;1].R corr ϭ1indicates the highest reliability and R corr ϭ0the lowest.For all model cell studies,␴t ϭ1.8ms and for the experimental data ␴t ϭ3ms.The value of ␴t for model cells was chosen such that,given the noise level,reliability values R corr exploited the possible range of its values [0;1],allowing for better discrimination between reliable and unreliable spike timing.The experimental data proved more noisy and therefore a larger ␴t was chosen to yield a good distinction between reliable and unreliable states.All evaluation of model and experimental data (beyond obtaining spike times)was performed in Matlab.195INFLUENCE OF IONIC CONDUCTANCES ON SPIKE TIMINGFiring rate analysisFor the firing rate analysis,the full parameter space of Na,Na P ,K dr ,K s ,and leak conductances was analyzed.DC firing rates were ob-tained for all possible parameter combinations within the parameter space of the 5conductances considered (see APPENDIX ).The maximum change in firing rate achievable by one ion channel type was charac-terized (for each combination of the other 4conductances)as the difference between the maximum and minimum (nonzero)firing rates achievable by variation of the ion channel conductance of interest,keeping the other 4conductances fixed.If a cell never fired despite variation in one conductance,it was excluded from the parameter space (Ͻ5%of the total 4-dimensional conductance space for any channel type tested).The distribution of maximum changes in firing rate achievable by variation of the density of one ion channel type over all combinations of the other 4densities is presented in the paper.Experimental protocolsCoronal slices of rat prelimbic and infra limbic areas of prefrontal cortex were obtained from 2-to 4-wk-old Sprague-Dawley rats.Rats were anesthetized with Iso flurane (Abbott Laboratories,North Chi-cago,IL)and decapitated.Brains were removed and cut into 350-␮m-thick slices using standard techniques.Patch-clamp was performed under visual control at 30–32°C.In most experiments Lucifer yellow (RBI,0.4%)or Biocytin (Sigma,0.5%)was added to the internal solution.In all experiments,synaptic transmission was blocked by D -2-amino-5-phosphonovaleric acid (D-APV;50␮M),6,7-dinitroqui-noxaline-2,3,dione (DNQX;10␮M),and biccuculine methiodide (Bicc;20␮M).All drugs were obtained from RBI or Sigma,freshly prepared in arti ficial cerebrospinal fluid,and bath applied.Whole cellpatch-clamp recordings were achieved using glass electrodes (4–10M ⍀)containing (in mM):KMeSO 4,140;Hepes,10;NaCl,4;EGTA,0.1;MgATP,4;MgGTP,0.3;phosphocreatine,14.Data were ac-quired in current-clamp mode using an Axoclamp 2A ampli fier (Axon Instruments,Foster City,CA).Data were acquired using 2computers.The first computer was used for standard data acquisition and current injection.Programs were written using Labview 6.1(National Instrument,Austin,TX)and data were acquired with a PCI16E1data acquisition board (National In-strument).Data acquisition rate was either 10or 20kHz.The second computer was dedicated to dynamic clamp.Programs were written using either a Labview RT 5.1(National Instrument)or a Dapview (Microstar Laboratory,Bellevue,WA)frontend and a C language backend.Dynamic clamp (Hughes et al.1998;Jaeger and Bower 1999;Sharp et al.1993)was implemented using a DAP5216a board (Microstar Laboratory)at a rate of 10kHz.A dynamic clamp was achieved by implementing a rapid (0.1-ms)acquisition/injection loop in current-clamp mode.All experiments were carried in accordance with animal protocols approved by the N.I.H.Stimuli consisted of sine waves of 30different frequencies (1–30Hz)presented for 2s.Only one amplitude was tested.No additional noise was injected.The first 500ms were discarded for analysis of reliability.R E S U L T SSpike timing reliability of conductance-based model neu-rons was characterized using a sine wave stimulation protocol for model cells with different amounts of sodium,potassium,and leak conductances.The voltage response of thereferenceFIG .1.Reliability analysis.A :voltage response of the reference cell to a current step (I DC ϭ0.3nA).B and C :examples of stimuli (f ϭ9Hz,C ϭ0.05nA;f ϭ11Hz,C ϭ0.05nA,respectively).D and E :rastergrams of the spiking responses to the stimuli presented above.Reliability in D is low (R corr ϭ0.10);reliability in E is higher (R corr ϭ0.64).F :reliability as a function of frequency f and amplitude C ,of the sine component in the input (Arnold plot,in contrast to all following data calculated with 0.25-Hz resolution,based on 50trials each).Tongue-shaped regions of increased reliability are visible.Strongest tongue marks the resonant (or preferred)frequency of a cell.Rastergrams underlying reliability at positions D and E are those shown in panels D and E .G :DC firing rate (I DC ϭ0.3nA)vs.the preferred frequency for all model cells derived from the reference cell.DC firing rate is a good predictor of the preferred frequency.196SCHREIBER,FELLOUS,TIESINGA,AND SEJNOWSKImodel cell(see METHODS)to stimulation with a DC step current (I DCϭ0.3nA)is shown in Fig.1A.Model cells were stimulated with a set of sine waves on top of afixed DC.Reliability values for each individual stimulus and cell,based on correlation of responses to repeated presen-tation of a stimulus each with an independent realization of the noise,were derived as a function of the frequency f and the amplitude of the sine component C.Figure1,B–E show examples of2stimuli used and responses to those stimuli obtained from the reference cell.Figure1F shows the complete set of reliability values as a function of frequency and ampli-tude of the sine component of the input.Distinct,tongue-shaped regions of high reliability,so-called Arnold tongues(Beierholm et al.2001),arising from the res-onance effect of spike timing reliability,are visible.Figure1F also shows that the degree of reliability depended on the power of the input at the resonant frequency of a neuron.The higher the amplitude at the resonant frequency,the more pronounced was the reliability.At high amplitudes,frequencies close to the resonant frequency also showed enhanced reliability.The Ar-nold tongues were approximately vertical,so that the fre-quency of maximum reliability showed only a weak depen-dency on the amplitude of the sine component.The difference in input frequency for maximal reliability,as the amplitude C varied from0.05to0.15nA,was usuallyϽ2Hz.In most examples presented in this study,the strongest resonance was found at a1:1locking to the stimulus,where one spike per cycle of the sine wave was elicited.Additional regions of enhanced reliability could be observed at harmonics of the main resonant frequency(1:2,1:3,and1:4phase locking,in order of decreasing strength),and at the1st subharmonic(2:1 phase locking).The location of the strongest Arnold tongue in frequency space revealed the preferred frequency of a neuron,which was well approximated by thefiring rate of the neuron in response to the DC component alone.Figure1G shows a strong corre-lation between the preferred frequency(i.e.,position of the strongest Arnold tongue on the frequency axis determined by the frequency of highest reliability for a given amplitude,C) and the DCfiring rate of a cell for a wide range of conductance values in the model(see APPENDIX).In all(but2)cases the resonant frequency was close to the DCfiring ually,the resonant frequency at the lowest amplitude of the sine compo-nent was closest to the DCfiring frequency.For the2outliers the highest value of reliability was achieved at the subhar-monic,or the1st harmonic of the DCfiring frequency.The importance of the DCfiring rate in generating phase-locked firing patterns was previously emphasized(see e.g.Coombes and Bressloff1999;Hunter et al.1998;Keener et al.1981; Knight1972;Rescigno et al.1970).The resonant frequency is referred to as preferred frequency throughout the paper.Influence of conductances on the frequency preference Because ionic conductances are known to influence neuronal activity levels,we investigated the ability of ion channels to modulate the preferred frequency in thefirst part of this study. SIMULATION RESULTS FOR A CORTICAL SINGLE-COMPARTMENT MODEL CELL.We started from the model of a cortical neuron (the reference cell).First,we varied one channel density at a time,keeping the densities of the other channelsfixed.The Arnold plots of cells whose leak density and slow potassiumdensity were varied respectively are shown in Fig.2.Examplespike shapes(at DC stimulation)are shown next to the Arnoldplots.All cells showed a pronounced resonance—that is,a pro-nounced preferred frequency.For variation of the leak chan-nels,the preferred frequency shifted to slightly lower frequen-cies with increasing density of leak channels.In contrast tovariation in leak channels,variation of K s conductance showeda large shift in preferred frequency(see Fig.2B).We alsoexplored changes in preferred frequency induced by the otherconductances(i.e.,Na,K dr,and Na P).Preferred frequenciesyielding maximum reliability(at Cϭ0.1nA)as a function ofnormalized channel density are shown in Fig.3for all chan-nels,including leak and K s.Because each channel type operated in a different range ofdensities,some of which differed by orders of magnitude,wenormalized(for parameter range criteria see APPENDIX)thedensities to the range[0;1]for each channel type,respectively.For Na,K dr,and Na P large changes in densities were necessaryto shift the preferred frequency.The overall observed changefor these channel types was in the range of5to15Hz.Thusstarting from the reference cell,only variation in the K s densitycould shift the preferred frequency by several tens of Hertz,fromϽ10toϾ60Hz.In all cases studied,for a given channel density the reliabil-ity at the preferred frequency was also higher than it wouldhave been at this stimulus frequency for most other values ofchannel parably high values were achieved onlyfor channel densities where the frequency at the1st harmonicor the subharmonic Arnold tongue coincided with the stimulusfrequency.We also analyzed the influence of2other potassium chan-nels with slower kinetics on frequency preference of the ref-erence cell—a muscarinic potassium channel K M and a cal-cium-dependent potassium channel K Ca(for details see APPEN-DIX).For both cases,we substituted K s by the new potassium conductance,K M or K Ca,respectively.The results of the Ar-nold plot analysis are shown in Fig.3B.For both channel types,an increase of their conductance shifted the preferred fre-quency over a broad range of frequencies.The lowest achiev-able frequency at a given DC depended on the time constant ofthe slow potassium conductance.If2or more slow potassiumconductances were present at high densities,the broad tuningeffect was diminished and eventually suppressed at high con-ductance levels(data not shown).Figure3C presents the pre-ferred frequency as a function of K s conductance for different ␶Ks.The slower the kinetics of the K s channel,the lower the minimum achievable frequency and the broader the frequency range accessible through variation of the slow potassium con-ductance.For completeness we analyzed all combinations of Na,Na P,K dr,K s,and leak conductances.In this case,we relied on theDCfiring rate as an estimate of the preferred frequency.Thedistribution of maximum changes infiring rate(i.e.,preferredfrequency)achievable by variation of the density of one ionchannel type over all combinations of the other4densities ispresented in Fig.4,which shows one curve for each ionchannel type.For a more detailed description of this analysissee METHODS.Variation of K s had a significant effect on thefiring frequency in almost all parameter regimes.Its influence197INFLUENCE OF IONIC CONDUCTANCES ON SPIKE TIMINGwas weakest when another potassium channel,K dr in this case,was present at high density.The mean change achieved with K s was around 20Hz.The mean change achieved by the other ion channels was Ͻ10Hz.The analysis also showed that,in principle,all ion channel types could achieve firing rate changes of Ն20Hz.Within the parameter space investigated,this was true for only a minority of values of the other 4conductances.Figure 4B shows 4examples of parameter regimes where these channels signi fi-cantly changed the preferred frequency.For example,this occurred for K dr when K s was not present or present only in small amounts.Na P could cause a large frequency shift when both potassium conductances,K dr and K s ,were low.Na was potent in changing the frequency when both potassium con-ductances and Na P were low.Its in fluence in these cases weakened further with a higher density of leak channels.Leak channel variation also gave rise to higher frequency shifts when both potassium conductances were low and thesodiumFIG .2.In fluence of leak and slow potassium conductances on spike timing reliability.A :right column of left panel shows Arnold reliability plots for 7different model cells,systematically varying in the amount of leak channels present (0,0.005,0.01,0.015,0.02,0.03,and 0.04mS/cm 2,top to bottom ).Left column :spikes of the corresponding cells in response to pure DC stimulation without intrinsic noise.Input resistance changed signi ficantly with leak conductance over several hundreds of M ⍀.B :Arnold plots and spikes in response to DC stimulation for 7different model cells with increasing amounts of K s (0.05,0.15,0.3,0.6,1.0,1.5,and 2.0mS/cm 2,top to bottom ).Input resistance changed from about 230to 150M ⍀.For both panels the 3rd plot from the bottom (*)represented the reference cell (as in Fig.1F).FIG .3.Dependency of preferred frequency of the reference cell on individual channel densities.A :preferred frequency as a function of normalized channel density (see text for de finition),for 5different conductances.Variation in K s achieves the broadest shift in preferred frequency.B :preferred frequency for variation in a muscarinic potassium channel (K M )and a calcium-dependent potassium channel (K Ca )as a function of normalized channel density (based on sine wave reliability analysis).K M and K Ca ,respectively,replaced K s in the reference cell.C :DC firing rate (an estimate of the preferred frequency)for K s channels of different time constants (␶Ks )as a function of K s peak conductance.Densities are not normalized in this panel.Lowest achievable frequency (at I DC ϭ0.3nA)depended on ␶Ks .198SCHREIBER,FELLOUS,TIESINGA,AND SEJNOWSKIconductances were not too large.In general,higher densities of leak channels tended to lower the minimum achievable fre-quency.To illustrate that regulation of ionic conductances on spiketiming reliability frequency preference would allow a cell to dynamically adjust its spike timing reliability,the effect of a temporary increase in K s conductance on spike timing reliabil-ity is presented in Fig.5.The conductance step was chosen such that the preferred frequency of the cell after the conduc-tance increase matched the stimulus frequency.Spike timing reliability during elevation of the K s conductance was signif-icantly enhanced.RELIABILITY OF INPUTS WITH MORE THAN ONE FREQUENCY.Many biologically relevant periodic inputs,such as inputs to neurons that participate in rhythms,exhibit a broad distribution of frequencies in their power spectrum.We therefore stimu-lated model neurons with quasi-random stimuli whose power spectrum contained 2peaks,one in the theta-range (about 8Hz)and one in the gamma range (30–70Hz).The 3rhythm-like stimuli tested are depicted in Fig.6.The reliability of a response to one of those stimuli depended on the amount of K s present in the neuron.For the theta-dominated input (Fig.6A )cells with higher K s conductances responded more reliably,whereas cells with lower K s conduc-tance (therefore tuned to higher frequencies)responded with lower reliability.For the gamma-dominated input,only cells with an optimally low K s conductance achieved a high reli-ability.A high K s conductance made the cell more unreliable.For all stimuli,the cell with preferred frequency (adjusted by K s )closest to the dominant frequency in the input yielded the highest spike timing reliability (as illustrated by the lower panels in Fig.6).Interestingly,the second (smaller)peak in the power spectra of the inputs was also re flected by a small increase of reliability at corresponding densities of K s .Not surprisingly,reliability also tended to increase with the vari-ance (or RMS value),of the stimuli,across all stimuli and cells.EXPERIMENTAL RESULTS.To test the effects of slow potassiumchannels on preferred frequency physiologically,we per-formed patch-clamp recordings in slices of rat prefrontal cor-tex.We used the dynamic-clamp technique,which allows time-dependent currents to be injected that experimentally simulate conductances through on-line feedback.Thus we were able to arti ficially introduce K s currents (with the same dynamics as the K s reference channel used in the modelsim-FIG .4.In fluence of parameter variation on the preferred frequency.A :normalized distribution of frequency shifts (maximum changes in firing rate)achievable by one ion channel type (measured over all combinations of the other 4channel types).Cells in conductance space that did not fire were discarded.Different curves correspond to different ion channel types.B :4examples of cells where Na,Na P ,K leak ,and K dr could mediate large changes in preferred frequency (for parameters see APPENDIX ).Circles and solid lines indicate the preferred frequency derived with the sine wave protocol (C ϭ0.05nA);crosses indicate DC firingrate.FIG .5.Dynamic changes in spike timing reliability attributed to conductance steps.A :superimposed voltage traces (n ϭ20)in response to a sine wave (f ϭ9Hz,C ϭ0.05nA),which is shown in D .K s conductance was temporarily increased,as indicated in B .C :rastergram of the responses.Parameters of the cell were those of the reference cell;g Ks values were 0.9and 1.4mS/cm 2;noise SD ␴ϭ0.03nA.Reliability (here estimated with ␴t ϭ3ms)changed from 0.18to 0.57at the conductance step and back to 0.17.199INFLUENCE OF IONIC CONDUCTANCES ON SPIKE TIMING。

fluorescence intensities单位 -回复

fluorescence intensities单位 -回复

fluorescence intensities单位-回复Fluorescence Intensities: Understanding the UnitIntroduction:Fluorescence is a fascinating optical phenomenon wherein molecules absorb light of a specific wavelength and re-emit it at a longer wavelength. This characteristic property has revolutionized various fields, ranging from biomedical research to environmental monitoring. When discussing fluorescence, one frequently encounters the term "fluorescence intensities." In this article, we will delve into the concept of fluorescence intensities, explore their significance across different applications, and discuss the units associated with their measurement.Section 1: What are fluorescence intensities?Fluorescence intensity refers to the measure of the emitted light from a fluorescent sample. It represents the strength or magnitude of the fluorescent signal resulting from the excitation of the sample. In simpler terms, fluorescence intensities provide quantitative information about the amount of light emitted by a fluorescentsubstance. Higher fluorescence intensities indicate a stronger emission signal, while lower intensities suggest a weaker signal.Section 2: Significance of fluorescence intensities in research2.1 Biological and biomedical applications:Fluorescence intensities play a crucial role in biological and biomedical research. Researchers utilize fluorescent dyes or fluorescently labeled biomolecules to study various cellular processes, such as protein-protein interactions, gene expression, and intracellular localization. By measuring the fluorescence intensities emitted by these molecules, scientists can gain insights into the dynamic behavior and functional aspects of cellular components.2.2 Environmental monitoring:In environmental monitoring, fluorescence intensities serve as a valuable tool for measuring contaminants and pollutants. Certain substances, such as polycyclic aromatic hydrocarbons (PAHs) or heavy metal ions, exhibit intrinsic fluorescence properties.Monitoring their fluorescence intensities enables scientists to detect and quantify their presence in the environment accurately. This information is crucial for assessing the impact of pollutants on ecosystems and human health.Section 3: Units for measuring fluorescence intensities3.1 Photon Counts:One commonly used unit for measuring fluorescence intensities is the photon count. It represents the number of photons emitted by a fluorescent substance. Photon counts are expressed as counts per second (cps) or counts per millisecond (cpms), depending on the sensitivity of the detection system used. Higher photon counts indicate a stronger fluorescence signal.3.2 Relative fluorescence units (RFU):RFU is another unit often used in fluorescence measurements, particularly in plate readers and fluorimeters. Relative fluorescence units represent the ratio of the sample's fluorescence intensity to a reference standard. RFU values provide a relative measure offluorescence and are especially useful when comparing multiple samples or analyzing changes in fluorescence over time.3.3 Arbitrary Units:Arbitrary units are a versatile unit for fluorescence intensities, primarily used when comparing fluorescent signals from different samples within the same experiment. These units lack a standardized reference standard, but they provide a relative measure of fluorescence intensity. Arbitrary units are commonly seen in fluorescence microscopy experiments or when analyzing fluorescent images.Section 4: Experimental considerations and limitationsWhile fluorescence intensities offer valuable insights, it is essential to consider certain factors that can affect their measurement. Experimental parameters such as excitation power, integration time, detection sensitivity, and photobleaching can influence the recorded intensities. Additionally, differences in fluorescence quantum yield, dye concentration, and detection systems can lead to variations in fluorescence intensities between samples orexperiments. Understanding these limitations is crucial for obtaining accurate and reproducible fluorescence measurements.Conclusion:Fluorescence intensities serve as a vital quantitative parameter in various scientific disciplines. They enable researchers to gain insights into cellular processes, assess environmental pollutants, and monitor biological systems. Measured using units such as photon counts, relative fluorescence units, or arbitrary units, fluorescence intensities provide researchers with valuable information for data analysis and interpretation. By understanding the concept of fluorescence intensities and the units associated with their measurement, scientists can harness the full potential of fluorescence technology and further advance their research.。

TOPSAR Terrain Observation by Progressive Scans

TOPSAR Terrain Observation by Progressive Scans

T
Fig. 1.
Geometry of a typical three-subswath high-resolution ScanSAR.
Fig. 2. ScanSAR acquisition geometry. The contribution of three targets has been represented to highlight the azimuth nonstationarities in both amplitudes and spectra.
Abstract—In this paper, a novel (according to the authors’ knowledge) type of scanning synthetic aperture radar (ScanSAR) that solves the problems of scalloping and azimuthvarying ambiguities is introduced. The technique employs a very simple counterrotation of the radar beam in the opposite direction to a SPOT: hence, the name terrain observation with progressive scan (TOPS). After a short summary of the characteristics of the ScanSAR technique and its problems, TOPSAR, which is the technique of design, the limits, and a focusing technique are introduced. A synthetic example based on a possible future system follows. Index Terms—Array signal processing, interferometry, scanning antennas, synthetic aperture radar (SAR).

6Stochastic Volatility Models with Transaction Time Risk

6Stochastic Volatility Models with Transaction Time Risk

Stochastic Volatility Models with Transaction Time RiskEric Renault∗and Bas J.M.Werker†Universit´e de Montr´e al and Tilburg UniversityDecember4,2002AbstractWe consider possible instantaneous causality between transaction times and transaction prices in afinancial market in a structural setting.Although a large part of the current litera-ture neglects this possible instantaneous causality,we provide moment conditions that identifythese effects both statistically and economically.Based on ultra-high frequency data for IBM,wefind that about two-thirds of its volatility can be attributed to instantaneous durations.From an empirical point of view,wefind that transaction times indeed cause transaction pricesand that failure to take this into account may lead to erroneous inference.Keywords:Causality,Continuous time models,Ultra high frequency data,Transaction prices,Transaction times.1IntroductionEngle(2000)defines“ultra-high frequency”data as those provided by the measurement of economic (financial)variables when all transactions are recorded.He argues that there is no higher frequency data available to econometricians.In this framework,transaction data are described by two random variables:thefirst is the time of the transaction and the second is a vector(marks)observed at the time of the transaction.Following Engle(2000),let t i be the time at which the i-th trade occurs and let∆t i+1=t i+1−t i be the duration between the i+1-th and i-th trade.The so-called marks describe the actual event(trade)that occurs at time t i and consist of a k-vector y i at this time. Engle(2000)states that“the relevant economic questions can all be determined”from the densities:p(y i+1,∆t i+1|G i)=p(∆t i+1|G i)p(y i+1|∆t i+1,G i)(1.1)which decomposes the joint conditional density of(y i+1,∆t i+1)given the natural past in discrete time,i.e.,given G i=σ(y j,∆t j:j≤i).The focus of interest in the present paper is the economic interpretation of the occurrence of the current duration∆t i+1in the function p(y i+1|∆t i+1,G i).As Engle(2000),we want in particular to “admit the possibility that variations in∆t and variations in(the volatility)σcould be related to the same news events”.Basically,we want to further address the issue of interest in Dufour and Engle(2000),which“lies in providing empirical evidence of the relevance of time in the process of price adjustment to information”.Our contribution is to stress that the influence of durations on prices,i.e.,the occurrence of∆t i+1in p(y i+1|∆t i+1,G i),is twofold and should be split,in an identifiable way,into a temporal aggregation effect(compare Meddahi,Renault,and Werker,1998)∗Universit´e de Montr´e al,Montr´e al,CIRANO,and CIREQ†Finance Group and Econometrics Group,CentER,Tilburg University,Tilburg,The Netherlands.and an informational effect.Since both effects have different repercussions for risk measurement and management,this separate identification has important consequences.We have shown in a previous paper (Meddahi,Renault,and Werker,1998),that,even if the time sequence ∆t i ,i =1,...,n ,were purely deterministic,the current duration ∆t i would explicitly appear in the model p (y i |∆t i ,G i )of the price dynamics,simply through a “time-to-build”effect in volatility fluctuations.This dependence is caused by two effects.On the one hand,the application of a standard discrete time volatility model must consider the “volatility per unit of time”,as in Engle (2000)in the context of GARCH modelling.On the other hand,the volatility clustering effect is likely to be erased by longer durations and therefore the model of volatility persistence must be conformable to temporal aggregation formulas (see,e.g.,Drost and Werker,1996,Ghysels and Jasiak,1998,or Grammig and Wellner,2002,for proposals to apply the Drost and Nijman,1993,formulas of temporal aggregation of weak GARCH processes).The exact formulas taking into account both are rigorously derived in Meddahi,Renault,and Werker (1998)using the Meddahi and Renault (2003)formulas for temporal aggregation of continuous time linear autoregressive volatility dynamics.Without the continuous time paradigm,the application of temporal aggregation formulas with random times has to be justified by resorting to something like a latent “normal duration GARCH process”(Grammig and Wellner,2002)whose structural foundations are far to be clear.But the most interesting economic issue,as put forward by Dufour and Engle (2000),has nothing to do with the aforementioned deterministic effects of irregular time sampling.In fact,the issue is to see the time between trades as a measure of trading activity which could affect price behavior.This is the reason why the economic interpretation of the information content of time durations,in models of price and trade dynamics,is better founded by identifying a structural continuous time model.Actually,only such a continuous time model will be able to disentangle what we have called the time-to-build effect from the genuine information effect.Typically,this structural model specifies the joint probability distribution of the price process S t over some reference period[0,T ]as well as a sequence of stopping times t i ,i =1,...,n ,over the same period.The marginal probability distribution of the price process provides,for any (fixed and deterministic)time interval h ,the density function p h (S t i +h |G i )of the conditional probability distribution of S t i +h given the natural past G i .For sake of expositional simplicity,this probability distribution will be assumed to be time-invariant (independent of t ).Then,the economic issue of interest is the validity of the equality:p ∆t i +1 log S t i +1S t i G i =p log S t i +1S t i∆t i +1,G i .(1.2)When this equality is fulfilled and under the additional assumption that the marginal process de-scribing transaction times does not contain information about the structural parameters in the price dynamics,transaction times contain no genuine information regarding these asset price dynamics and there is no cost when these transaction times are considered to be deterministic.But if,on the contrary,some instantaneous causality relationship between durations and asset prices leads to a violation of equality (1.2),the incremental information content of ∆t i +1about S t i +1given the past G i is crucial in several respects.First,one cannot perform meaningful statistical inference about the probability distribution of the price process without taking into account the probability distribution of durations.Typically,when plugging into a likelihood function based on the densities p ∆t i +1 log S t i +1/S t i the observed values S t i as if the times t i at which the trades occur were deterministic,one would introduce some kind of selection bias which may be significant.A¨ıt-Sahalia and Mykland (2003)extensively document this “cost of ignoring randomness”.They also document what they term the “cost of randomness”,that is the cost of not observing the randomly-spaced sampling intervals and still recognizing their relevance by integrating them out in the computation of the likelihood function.They conclude that “empirical researchers using randomly spaced data should pay as much attention,if not more,to sampling randomness as they do to sampling discreteness”.Besides statistical inference issues,the randomness of duration betweentrades is also of foremost importance for risk management.When equation (1.2)is violated,one cannot compute the volatility at time t i −1of the asset return log S t i +1/S t i as if the duration ∆t i were deterministic or even conditionally independent (given the past)of the asset return.Our main contribution is a decomposition of the volatility measurement in a standard component (where the randomness of in the duration is just integrated out)and an additional component which has a direct interpretation as transaction time risk.The interest of this decomposition is to provide a framework for the joint modelling of volatility and inter-transaction duration processes.As stressed by Dufour and Engle (2000),this may give useful insights in the dynamic behavior of market liquidity and thus could be used to design optimal trading and timing strategies.The focus of interest in the present paper is more to state a set of moment conditions that allows one to assess the statistical and economic significance of the aforementioned instantaneous causality relationship.For the purpose of statistical inference about the continuous time price process,this gives an important semiparametric specification test.For the purpose of risk management,this gives insights in the measurement and hedging of liquidity risk.A byproduct of our framework is the possibility to fruitfully revisit the conclusions of some models previously proposed in discrete time for irregularly spaced financial data.Starting from the seminal Engle and Russell (1998)autoregressive conditional duration (ACD)model,Ghysels and Jasiak (1998)have proposed the ACD-GARCH model to jointly model the volatility and inter-transaction duration processes.This joint modelling issue has since been studied in more detail by several authors,including Engle (2000)and Grammig and Wellner (2002).A crucial issue for all these papers (see also Dufour and Engle,2000)is the treatment of causality relationships between asset prices’volatility and durations between trades.Both Engle (2000)and Dufour and Engle (2000)maintain as “a simplifying operative assumption”that durations are not Granger caused by prices.This allows them to estimate a simple ACD model where durations are forecasted only from their own past (Engle,2000)and to compute univariate impulse response functions for durations (Dufour and Engle,2000).However,Dufour and Engle (2000)provide some convincing empirical evidence to show that large price changes,in either direction,tend to be followed by shorter wait times.This seems to give support to the significance of a causality relationship from prices (through their volatility)towards duration consistent with the Easley and O’Hara (1992)microstructure model:long durations are likely to indicate an absence of news received by the traders,and thus to be associated with low volatility of returns.A¨ıt-Sahalia and Mykland (2003)find this negative causality relationship from the absolute value of returns towards the duration so striking that they go even further by considering that durations should be forecasted from past returns rather than from their own past.Although our focus in this paper is on instantaneous causality relations between durations and transaction prices,we do not exclude,nor impose,Granger type causality relations in either direction.The incremental information of the current duration ∆t i +1in the function p (y i +1|∆t i +1,G i )of (1.1),in excess of the deterministic time-to-build effect is typically neglected in the current lit-erature.The ACD-GARCH model as proposed by Ghysels and Jasiak (1998)or Grammig and Wellner (2002)use the temporal aggregation formulas for weak GARCH processes as derived by Drost and Nijman (1993)with time-varying aggregation period (expected duration).This setup does not allow for a parameter taking into account instantaneous causality between durations and transaction prices.For example,the volatility equation of Grammig and Wellner (2002),which just takes into account the temporal aggregation effect in a “normal duration GARCH process”,implic-itly assumes that this “normal”regime is not influenced by unexpected durations.In spite of its name (“interdependent duration-volatility model”)the model of Grammig and Wellner (2002)can-not capture any instantaneous causality relationship between volatility and duration since both the volatility equation and the duration equation are only about conditionally expected squared returns and expected durations given the past.This is the reason why the only discrete time model which can be compared with the discrete time moment restrictions that we derive from our continuous time structural model is the one of Engle (2000).In this model,according to (1.1),the conditionalexpectation of squared returns is computed given not only the past but also given the current dura-tion.This leads Engle(2000)to draw two sets of conclusions,concerning the instantaneous causality relationship between duration and volatility and the Granger causality relationship from durations to volatility,respectively.Regarding thefirst issue,hefinds that“the reciprocal of duration is significantly positive in all volatility specifications supporting the Easley and O’Hara formulation in which no trade is interpreted as no news so that volatility is reduced”.He also introduces the concept of a“surprise in duration”as the duration divided by the expected duration andfinds also for this surprise the same negative instantaneous causality between duration and volatility.The Granger causality is also found to play in the same negative direction.However,it is important to note that the instantaneous causality and the Granger causality relationships may play in opposite directions,making the test of an Easley and O’Hara(1992)kind of model(informed trading induces a higher trading intensity)against the model of Admati and Pfeiderer(1988),according to which liquidity traders could generate a causality in opposite direction,more ambiguous.Wefind that our continuous time structural model is useful for disentangling more precisely the two causality effects. Actually,it allows us to test without ambiguity the significance and the sign of an instantaneous causality relationship between duration and volatility.It also precisely decomposes,in the spirit of Engle(2000),the different impacts of the various causality effects for volatility forecasting.The paper is organized as follows.In Section2,we present our general continuous time framework for joint modelling of transaction times and prices.We allow in our semiparametric framework for some causality property from transaction times towards asset returns that is in line with parametric models considered by A¨ıt-Sahalia and Mykland(2003)and Duffie and Glynn(2001).In Section3, we discuss the informational content of transaction durations by making the difference between the volatility conditional only to the past and the volatility expected given the past and the current duration explicit.In Section4,we consider more explicit volatility and duration dynamics to get testable moment conditions.Following Engle(2000),we stress the role of surprises in durations. The explicit moment conditions and related inference issues are detailed in Section5,while a short empirical application about IBM stock is presented in Section6.Of course,the specific model choice made in Section4and the specific stock considered in Section6prevent us to draw general conclusions about theoretical microstructure debates.But we argue in Section7that the general framework proposed in this paper is sufficiently versatile to accommodate other statistical model specifications and other types of data as well.2A general framework for modelling transaction times and pricesWe introduce our framework for the analysis of continuous time price processes observed at random transaction times.This framework allows us to identify separately the marginal price process,the marginal process for the transaction times,and the interaction between both.An often-used approach,see,e.g.,Engle(2000),is to model the marginal distribution of transac-tion times and the conditional distribution of transaction prices given the transaction times.This, clearly,requires a priori information on the form of the conditional distribution of returns given (future)transaction times.We feel that it is more natural to model the marginal process for trans-action prices,as the majority of the empiricalfinance literature so far deals with this marginal price processes.We show that,given the(marginal)distributions of transaction times and prices,we can model possible causality relations between both using a simple(conditional)regression coefficient. This regression coefficient is sufficient to derivefirst and second-order observable moment conditions. In Section3we will use the results of the present section to identify the noncausality assumptions made in previous papers.We want to stress that not all previous papers assume noncausality of transaction times to transaction prices(e.g.,Engle,2000,and Duffie and Glynn,2001).However, we think that the present paper is thefirst to explicitly address the question of(non)causality in astructural way and does not rely on ad hoc reduced form specifications.The basis of our model is thefiltration that generates the information accumulation in the market.Following the majority of the literature,we suppose that this information structure is exogenously given and that it satisfies the so-called’usual conditions’(see,e.g.,Protter,1995,p.3).Assumption A The informationflow in the market is described by thefiltration(F t)t≥0that is supposed to satisfy the usual conditions.All stochastic processes that appear in the sequel of this paper are assumed to be adapted to the filtration(F t).Note that thefiltration(F t)is generally not completely observed by the econometri-cian.The econometrician’s information,as described in the introduction,is denoted by(G i),with ireferring to the i-th transaction.We assume that G i⊂F ti ,where t i denote the transaction timesto be introduced in Assumption C.Consider afinancial asset with price at time t given by S t.The evolution of the price S t is supposed to be given by S0=1andd log S t=µt d t+σt−d L t,t≥0.(2.1) In this specification,(µt)and(σt)are arbitrary predictable processes and(L t)is a L´e vy process. In particular,we do not assume that the processes are continuous or Markovian.Clearly,in order to derive moment conditions,we need some assumptions on the existence of moments.We assume the following.Assumption B The innovation process(L t)is assumed to be a locally square-integrable local martingale,with respect to(F t),whose compensated quadratic variation is time,i.e.d L,L t=d t. The drift process(µt)and the volatility process(σt)are assumed to be predictable with respect to thefiltration(F t)and sufficiently regular so that fourth conditional moments of the process log S t exist.For any stopping time T,with respect to thefiltration(F t),we write E T for the conditional expectation operator given theσ-field F T(Protter,1995,p.5).Moreover,we defineνT(u)=E TµT+u,(2.2)ξT(u)=E Tσ2T+u.(2.3) We denote by N T andΞT the primitives ofνT andξT,respectively,with the normalization that N T(0)=ΞT(0)=0.The functions N T andΞT will be essential in the sequel.Note that Assumption B implies that(S t)is a semimartingale adapted to thefiltration(F t).In fact, it has to be,since it is well-known that ruling out arbitrage possibilities(in the appropriate way)in continuous time,implies that the price processes are semimartingales(Delbaen and Schachermayer, 1997).The assumption that d L,L t=d t is a normalization that identifiesσt as the volatility process.Assuming that L is continuous would,by L´e vy’s characterization theorem(Protter,1995, p.79),imply that L is a Brownian motion.A Brownian motion for L is the only way to exclude jumps in S.The conditional predictorsνT andξT will appear in the moment conditions that can be used to test for transaction time exogeneity and to estimate parameters in models where transaction times are not necessarily exogenous with respect to the price process.In Section4,we consider some possible choices.In a simply specification,the stochastic volatility is assumed to be integrated of order one.Also in models where the conditional volatility exhibits linear mean-reversion,the predictorsνT andξT(and their primitives N T andΞT)are analytically known.However,we will not pursue this possibility in the present paper.Clearly,for more general volatility specifications (say,in terms of diffusion processes),the predictorsνT andξT can be approximated using simulation techniques,but this may be extremely time-consuming.The process(S t)is not observed in continuous time by the econometrician.If it would be, the inference problems that follow become extremely different and in some sense degenerated.We assume that S t is only observed at some particular(random)transaction times t1,t2,....Assumption C The dates t1,t2,...form an increasing sequence of stopping times with respect to thefiltration(F t).We denote the transaction durations by∆t i+1=t i+1−t i.Finally,F tidenotes the distribution function of the conditional distribution of∆t i+1given F ti.The stopping time assumption merely states that,at time t,all transactions up to time t have been observed.For notational convenience we define t0=0.Under(2.1),returns on the asset S,as they are observed over the interval(t i,t i+1],are given byR ti+1=logS ti+1S ti=∆ti+1µti+ud u+εti+1,i=0,1,2,...,(2.4)whereεti+1=∆ti+1σti+u−d L ti+u,i=0,1,2,....(2.5)Note thatεti+1is a martingale stopped at time∆t i+1,so that,under the assumptions stated,Doob’soptional sampling theorem(Protter,1995,p.10)implies that the(εti+1)form a martingale differencesequence,i.e.E ti εti+1=0,i=0,1,2,....(2.6)The following proposition relates conditional expectations and variances of observed returns to thepredictors N ti (u)andΞti,to the distribution function of the transaction durations F ti,and to someregression coefficients which we will callβµti andβσtiand which we will formally define below.Proposition2.1Under Assumptions A-C we have the following observable moment conditions:E ti R ti+1=∞N ti(u)d F ti(u)+∞βµti(u)F ti(u)(1−F ti(u))d u,(2.7)Var ti {R ti+1}=E tiε2ti+1=∞Ξti(u)d F ti(u)+∞βσti(u)F ti(u)(1−F ti(u))d u,(2.8)whereβµti (u)andβσti(u)are(conditional)regression coefficients(given F ti),i.e.βµti(u)=Cov ti{µti+u,I(0,∆ti+1](u)}Var ti{I(0,∆ti+1](u)},(2.9)βσti(u)=Cov ti{σ2ti+u,I(0,∆ti+1](u)}Var ti{I(0,∆ti+1](u)}(2.10)andαµti (u)=νti(u)−βµti(u)(1−F ti(u)),(2.11)ασti (u)=ξti(u)−βσti(u)(1−F ti(u)).(2.12)Also,we haveE tiµti+u|I(0,∆ti+1](u)=αµti(u)+βµti(u)I(0,∆ti+1](u),(2.13)E tiσ2ti+u|I(0,∆ti+1](u)=ασti(u)+βσti(u)I(0,∆ti+1](u).(2.14)Proposition2.1gives a decomposition for both thefirst and second moment of observed high-frequency asset returns.Throughout the paper,we will focus in all our discussion on the volatility of asset returns as measured in(2.8).From Proposition2.1,it is clear that all remarks also apply to thefirst moment modulo some obvious changes.It is also worth noting that the result of Proposition2.1remains valid if t i and t i+1are replaced by the minimum of a family of stopping times.These may,e.g.,correspond to transactions in other assets.We will not pursue this multivariate extension in this paper.As a special case, Proposition2.1yields the well known moment conditions based on non-random time intervals.In that case,the times t i are deterministic and the corresponding regression coefficientsβµandβσvanish.But,when transaction times are random,they may convey some relevant information about the risk borne at time t i through a non-zero coefficientβσ.A general discussion of this informational content of transaction times is provided in Section3before focusing in Section4on a more specific statistical framework to identify it.3Informational content of transaction durationsThe volatility decomposition(2.8)allows us to characterize the triple role of the current value∆t i+1of the duration between subsequent trades in the measurement of Var ti {R ti+1}.Roughly speakingthese three roles are:1.What we have called in the introduction the time-to-build effect which is nothing but thedeterministic effect of irregular sampling.When the duration∆t i+1is random,one has to integrate out this random variable in order to define an average risk,but this has nothing to do with causality effects.2.Thefiltering effect due to stochastic volatility.Our model is a stochastic volatility one.Theinformation F ti that defines the conditioning in the risk measurement Var ti{R ti+1}does con-tain the current latent valueσti of the spot volatility process.Then,if one wants to specifya GARCH type model that characterizes the dynamics of the conditional variance given thesmaller information set defined only from the past observations of the asset price,one has to reproject the above conditional variance on this smaller information set.Then,if the current value∆t i+1of the transaction duration is added,as,e.g.,in Engle(2000),to this smaller information set,it may have an informational content,just as way to betterfilter the past values of the volatility process.This informational content may occur even when the regression coefficientβσis zero.This would be akin to some indirect Granger causality effect from dura-tions to prices through volatility(see,e.g.,Renault,Sekkat,and Szafarz,1998)and does not correspond to the instantaneous causality relationship between duration and volatility that is the focus of the present paper.3.The instantaneous causality effect between the duration and the volatility is encapsulated inthe second part of the right-hand side of(2.8)when the regression coefficientβσis non zero.It is typically this effect that may capture“the possibility that variations in durations and variations in the volatility could be related to the same news events”.Besides its relevance for microstructure theory,this effect is also important for risk measurement.Typically,neglecting it whenβσis non zero would amount to overlooking a liquidity component of the risk borne by an investor who wonders at time t i how risky the investment in this asset is over the next period.The main advantage of the continuous time framework used in this paper is to allow one to clearly disentangle the afore described three roles of durations in volatility measurement.Let us now discuss more explicitly each of them.Effect1:The time-to-build effectThis effect is encapsulated in thefirst term of the right-hand side of the decomposition(2.8).This term can be seen as an expected integrated volatility imposing non-causality between transaction times and prices.To be more precise,note that∞0Ξti(u)d F ti(u)=E⊗ti∆t i+1σ2ti+ud u.(3.1)Here,⊗indicates that the expectation is taken with respect to the product measure of themarginal(yet conditional on F ti )distributions of∆t i+1and(σti+u:u≤0),i.e.,the measureignoring possible instantaneous causality relations.By application of Fubini’s theorem,it can thenbe seen as the expectation with respect to the marginal distribution F ti of∆t i+1of the commonexpected integrated volatility as computed for a deterministic duration∆t i+1:V(t i,t i+1)= ∆ti+1E ti{σ2ti+u}d u(3.2)To get explicit expressions,let us assume that we know some deterministic functions a(·)and b(·)such that we haveE ti {σ2ti+u}=a(u)σ2ti+b(u).(3.3)Note that the linear volatility prediction formula3.3is conformable to the linear autoregressive volatility model put forward in Meddahi and Renault(2003).In that case,there is a positive coefficientκof mean reversion such that we havea(u)=exp(−κu)(3.4)b(u)=σ2(1−exp(−κu))whereσ2denotes the unconditional variance.Formula(3.3)together with(3.4)is,for instance, implied by a square-root or Ornstein-Uhlenbeck like model of volatility(Barndorff-Nielsen and Shephard,2002).We will also consider,in Section4,the even simpler case of a martingale volatility model(see Hansen,1995,and references therein)that is known to be empirically relevant for very high frequency data:a(u)=1and b(u)=0.(3.5) While(3.4)generalizes the GARCH(1,1)model to a stochastic volatility setting,(3.5)extends the IGARCH(1,1)model.All these models correspond to ARMA(1,1)dynamics for squared innovations of returns(see Meddahi and Renault,2003).The log-normal stochastic volatility model(Harvey, Ruiz,and Shephard,1994)is also conformable to(3.3)with vanishing b(u).In any case,if we denote by A and B,respectively,the primitive functions of a and b normalized to zero for u=0,we deduce from(3.3)V(t i,t i+1)=A(∆t i+1)σ2ti+B(∆t i+1).(3.6) For instance,in the case(3.4),we get immediatelyA(u)=1−exp(−κu)κand B(u)=−σ2A(u)(3.7)Formulas(3.6)and(3.7)basically correspond to the formulas used by Ghysels and Jasiak(1998) or Grammig and Wellner(2002),formula(5)p.374,when one focuses only on the volatility persis-tence parameter A(∆t i+1)that is the sum of the two GARCH(1,1)coefficients.As already stressed, the occurrence of the current duration(∆t i+1)in these formulas has nothing to do with any causal。

The Nature of the UVX-Ray Absorber in PG 2302+029

The Nature of the UVX-Ray Absorber in PG 2302+029

a r X i v :a s t r o -p h /0302555v 1 26 F eb 2003Draft version February 2,2008Preprint typeset using L A T E X style emulateapj v.04/03/99THE NATURE OF THE UV/X-RAY ABSORBER IN PG 2302+029Bassem M.Sabra &Fred HamannDepartment of Astronomy,University of Florida,Gainesville,FL 32611Buell T.JannuziNational Optical Astronomy Observatory,950North Cherry Ave.,Tuscon,AZ 85719Ian M.GeorgeJoint Center for Astrophysics,Department of Physics,University of Maryland,Baltimore County,1000Hilltop Circle,Baltimore,MD 21250Laboratory for High Energy Astrophysics,Code 662,NASA/Goddard Space Flight Center,Greenbelt,MD20771Joseph C.ShieldsDepartment of Physics &Astronomy,Ohio University,Athens,OH 45701Accepted for Publication in the Astrophysical JournalABSTRACTWe present Chandra X-ray observations of the radio-quiet QSO PG 2302+029.This quasar has a rare system of ultra-high velocity (−56,000km s −1)UV absorption lines that form in an outflow from the active nucleus (Jannuzi et al.2003).The Chandra data indicate that soft X-ray absorption is also present.We perform a joint UV and X-ray analysis,using photoionization calculations,to detemine the nature of the absorbing gas.The UV and X-ray datasets were not obtained simultaneously.Nonetheless,our analysis suggests that the X-ray absorption occurs at high velocities in the same general region as the UV absorber.There are not enough constraints to rule out multi-zone models.In fact,the distinct broad and narrow UV line profiles clearly indicate that multiple zones are present.Our preferred estimates of the ionization and total column density in the X-ray absorber (log U =1.6,N H =1022.4cm −2)over predict the O VI λλ1032,1038absorption unless the X-ray absorber is also outflowing at ∼56,000km s −1,but they over predict the Ne VIII λλ770,780absorption at all velocities.If we assume that the X-ray absorbing gas is outflowing at the same velocity of the UV-absorbing wind and that the wind is radiatively accelerated,then the outflow must be launched at a radius of ≤1015cm from the central continuum source.The smallness of this radius casts doubts on the assumption of radiative acceleration.Subject headings:galaxies:active—quasars:absorption lines—quasars:individual(PG 2302+029)—X-rays:galaxies1.INTRODUCTIONX-ray absorption in quasars provides a powerful tool to study QSO environments.The signatures of this absorp-tion are typically the suppression of soft X-rays and /or the presence of absorption edges from O VII and O VIII near 0.8keV (Reynolds 1997;George et al.1998).The strength of these features depends mainly on the degree of ionization and total column density.The relationship of the X-ray absorber to other components of the quasar environs,especially the UV absorbing gas,is not well un-derstood.A natural question is whether these features originate in the same gas.The best-known intrinsic UV absorption lines in QSOs are the broad absorption lines (BALs),but recent work has shown that some of the observed narrow absorption lines (NALs)and “mini-BALs”are also intrinsic to QSO environments (Barlow,Hamann &Sargent 1997;Hamann et al.1997a and 1997b).These features can have velocity shifts comparable to the BALs (up to −51,000km s −1in one confirmed case;Hamann et al.1997b)but the narrow line widths (from <100km s −1to a few thousand km s −1)require outflows with much smaller line-of-sight velocity dispersions.BALs,mini-BALs,and intrinsic NALs are each rare in QSO spectra,but the outflows that cause them might be common if,as expected,the gas covers asmall fraction of the sky as seen from the central source.Among Seyfert 1galaxies,50%show intrinsic UV absorp-tion (Crenshaw et al.1999),and there is also a one-to-one correlation between the detection of X-ray absorption in Seyfert 1galaxies and the appearance of intrinsic UV ab-sorption lines (Crenshaw et al.1999).In QSOs,X-ray absorption is much rarer (Laor et al.1997).Brandt,Laor,&Wills (2000)showed that QSOs with BALs are all significantly weaker soft X-ray sources than comparable-luminosity QSOs without BALs.Posi-tive X-ray detections of BALQSOs (e.g.,Mathur,Elvis,&Singh 1995;Green et al.2001;Sabra &Hamann 2001;Gallagher et al.2002)reveal large absorbing columns ofN H ∼>1023cm −2.QSOs with weaker intrinsic UV ab-sorption lines,e.g.,the NALs and mini-BALs,appear to have systematically weaker X-ray absorption (Brandt et al.2000).However,more work is needed to characterize the X-ray absorption in quasars with different types/strengths of UV absorption.Studies of X-ray absorbers in quasars,especially in relation to the UV absorption lines,will have profound impact on our knowledge of quasar wind prop-erties,such as the acceleration mechanism,outflow geom-etry,wind launch radius,and mass loss rate (e.g.,Mathur et al.1995;Murray,Chiang,&Grossman 1995;Hamann 1998;Sabra &Hamann 2001).For example,it is difficult to radiatively accelerate an outflow with a large column12density,unless it is launched from the inner most regions of the accretion disk.In this paper we will discuss Chandra X-ray observations of the QSO PG2302+029.This object shows a system of intrinsic UV absorption lines that have a velocity shift of −56,000km s−1(Jannuzi et al.1996)with respect to the systemic velocity of the QSO(z ems=1.044).The system consists of NALs(F W HM≈330km s−1)at z=0.7016 and a“mini-BAL”system(F W HM≈3300km s−1)at z=0.695(Jannuzi et al.1996;1998).It is also character-ized by being an X-ray faint source with an HEAO-1A-2flux upper limit at2keV of4×10−12erg s−1cm−2(Della Ceca et al.1990).Our aim is to determine the proper-ties of the X-ray spectrum,search for signs of absorption, and define the relationship between the UV and X-ray ab-sorbing gas.We also discuss the location of the absorber, and the potential implications of high-velocity X-ray ab-sorption for the wind dynamics.If the X-ray and UV absorbers are the same,then the−56,000km s−1velocity shift of the UV absorption lines in PG2302+029is poten-tially resolvable with the Advanced Imaging Spectrometer (ACIS),given sharp features and adequate signal-to-noise ratio.2.OBSERVATIONS AND DATA REDUCTIONWe observed PG2302+029with Chandra using ACIS on7January2000.We used the most recent(2Novem-ber2000)re-processed data released by the Chandra X-ray Center for our analysis.Nofiltering for high background or bad aspect times was required because the light curves did not show anyflare-ups in the count rate and the aspect solution,the pattern by which the telescope was dithered to distribute the incoming photons over different pixels to minimize pixel-to-pixel variation,did not have any out-lying points.We performed data extraction and calibra-tion using version1.4of the Chandra Interactive Analysis of Observations(CIAO).We created the response matrix and ancillary responsefiles using calibration data provided when the chip temperature during observations was−120◦C.We extracted the source counts from a circular region of radius of5′′while the background region was an annu-lus with radii between10′′and20′′,both centered on the position of PG2302+029.The position of PG2302+029 (α(J2000)=23h04m44s,δ(J2000)=+03◦11′46′′),as de-termined from the ACIS image,coincides to within∼1′′with the optical position of the source reported in Schnei-der et al.(1992).We obtained a total of391±21counts in an exposure time of48ksec across the observed energy range∼0.4−4.0keV.3.ANALYSIS AND RESULTSWe bin the spectrum to have at least30counts/bin and use XSPEC(Arnaud1996)to perform the analysis.The low count rate indicates that X-ray absorption may be present.We,therefore,considerfits to the data that in-clude an X-ray continuum attentuated through an ionized X-ray absorber.Neutral X-ray absorption can be ruled out because the UV spectra do not contain any low-ionization metal lines that can be identified with the NAL and/or mini-BAL systems discussed in this paper(Jannuzi et al. 1996;Jannuzi et al.2003).We model the ionized ab-sorbers using the photoionization code CLOUDY(Ferland et al.1998)assuming solar abundances.We use this code to generate grids of absorbed continua in the following way:The incident continuum is a piece-wise powerlaw,fν∝ν−α,(Zheng et al.1997;Laor et al.1997;Telfer et al.2002)and is displayed in Figure1. The far-UV part of the spectrum is characterized by the2-point spectral indexαox,which relates theflux densities at 2500˚A and2keV:αox=0.384log(fν(2500˚A)/fν(2keV)). We use the B magnitude of16.3for PG2302+029to anchor the spectral energy distribution(SED)at2500˚A, taking into account the appropriate k−correction(Green 1996).Consequently,the X-rayflux density is specified throughαox.We stepαox from1.6to2.2in increments of0.2.This effectively gives us4different unabsorbed spectra representing4possible intrinsic SEDs from the quasar’s central engine,leading to4different intrinsic X-ray luminosities(cf.Figure1).Each SED is then attenu-ated through an ionized absorber.The amount of absorp-tion largely depends on the intrinsic total hydrogen col-umn density,N H(cm−2),and the ionization parameter, U,defined as the ratio of the density of hydrogen ioniz-ing photons to that of hydrogen particles(H0+H+).For each of our4SEDs,we create a grid of attenuated continua by calculating ionized absorbers on a grid of U and N H. Therefore,for every SED,there will be a grid of absorbed X-ray spectra identified by the(U,N H)combinations of the absorber through which they were transmitted. XSPEC incorporates aχ2minimization scheme that al-lows us tofind the bestfitting attenuated continuum to the X-ray spectrum of PG2302+029.We note that all our X-rayfits include attenuation by a Galactic columndensity of N Gal.H=5×1020cm−2(Lockman&Savage 1995)and the presence of an emission line with a Gaus-sian profile at∼3keV(probably Fe Kαat the redshift of the QSO).The significance of the Fe Kαdetection is clearly very low,given just391counts(see Figure2),but we include it in ourfits because the line is plausibly present and it improves theχ2results.We adjust the properties of the best Fe Kαfit—normalization(1.7×10−6photon s−1 cm−2keV−1),width(0.3keV),and energy(2.9keV)—by hand afterfinding the bestfit for the continuum in other parts of the spectrum.Our scheme also allows us to place the intrinsic absorber at any redshift we want.Naturally, we choose to place it either at the systemic redshift of the QSO,z XAbs=z QSO≈1,or at the redshift of the UV absorption lines,z Xabs=z UV abs≈0.7.For a specific absorber redshift and for each of our4 grids corresponding to the4intrinsic SEDs,wefit the data,with U and N H as free parameters,and note the value of the lowest reducedχ2,χ2ν,possible for that par-ticular SED,and henceαox,and z.Upon comparison be-tween the8χ2νvalues,wefind that the lowestχ2ν≈1,was that for the SED with intrinsicαox=2.0,corresponding to a far-UV/X-ray slope ofαEUV=2.4,for both z=0.7 and1.0.The rest of the SEDs are rejected at the95% confidence level,regardless of the amount of intrinsic ab-sorption.The need for intrinsic absorption is illustrated in Figure 2,where we show the X-ray spectrum of PG2302+029 together with best possiblefits,for the continuum with αox=2.0,with no intrinsic absorption(upper panel),and with an ionized absorber at z=0.7(lower panel).Clearly,3 thefit that includes intrinsic ionized absorption is muchbetter than the one that does not.In particular,the un-absorbed spectrumfits the data well at high energies,butit substantially over predicts the counts in soft X-rays.AnX-ray absorber naturally lowers the soft X-rayflux to themeasured values.Figure3shows overplots of the67%,90%,and99%confidence contours fromfitting the data for the casewhere the X-ray absorber is at the emission redshift,z XAbs=z QSO(solid contours),and where the X-ray ab-sorber matches the UV absorber redshift,z XAbs=z UV abs(dotted contours).For the z XAbs=z QSO case wefindwhilethat N H=1022.8±0.2cm−2and log U=2.2+0.1−0.2for the z XAbs=z UV abs,N H=1022.4±0.1cm−2andlog U=1.6±0.3,where all errors are at the90%con-fidence level.Theχ2νin both cases is∼1,and thereforethefits do not constrain the redshift of thte X-ray ab-sorber.Note that these results were derived assuming afixed X-ray powerlaw index ofαx=0.9.We did not ex-plore other X-ray spectral slopes because the data qualityis not sufficient to constrain additional free parameters.4.DISCUSSION4.1.Intrinsic X-ray BrightnessOurfinding in Section3that the intrinsicαox is2.0forPG2302+029deserves more attention.It has been knownthat there is a correlation between the optical luminosityat2500˚A of a quasar and its intrinsicαox(e.g.,Yuan et al.1998and references therein).The relation shows a widescatter in the distribution.The averageαox for radio quietQSOs,such as PG2302+029(Kellerman et al.1989),is1.7(Yuan et al.1998;Vignali,Brandt,&Schneider2003).PG2302+029,withαox=2.0and Lν(2500˚A)≈1031.8ergs−1Hz−1,is still consistent with the scatter in the Lν−αoxdistribution discussed in Yuan et al.(1998),although it is∼102.0−1.74given an equivalent width,we calculated the resulting ionic column density,the one shown in Figure4.We assumed that the mini-BAL is due to a single transition.The os-cillator strength of this transition is the sum of the two oscillator strengths of the doublet.The rest wavelength of the single transition is the oscillator-strength-weighted av-erage of the rest wavelengths of the doublet lines.A more appropriate method would be to use the prescription pre-sented in Junkkarinen,Burbidge,&Smith(1983)where an effective optical depth is calculated at every point across the line profile taking into account the combined contri-butions of the lines in the doublet.Upon experimentation with the STIS C IV mini-BALs,we found that the differ-ence between the two methods is∼30%,within the mea-surements errors involved and making minimal difference in Figure4.While weak low-ionization UV absorption lines are seen at the redshift of the quasar and may be of a different na-ture than the NALs and mini-BALs we are discussing here (Jannuzi et al.1998),high-ionization UV absorption lines are absent at this redshift.Moreover,there is no Lyman break,at any redshift,in the UV spectrum,and hence the total hydrogen column density in a low-ionization/neutral absorbing component is not high enough to lead to any observable effects on the X-ray spectrum.Our bestfit to the X-ray absorber at z Xabs=z QSO over predicts the OVI line absorption at this redshift.In particular,the pre-dicted OVI column density at this redshift,1015.6cm−2, would lead to an equivalent width of∼9˚A and∼3˚A in the observed frame if the lines have a Doppler param-eter of b≈2000km s−1and b≈200km s−1,like the high-velocity mini-BALs and NALs,respectively.These features should be detectable at∼>7σin the FOS and STIS spectra.Their absence suggests that the X-ray absorber is not at the quasar systemic velocity.Note that these results do not depend on our assumption of solar metallicity,as long as the relative metal abundances are roughly in their solar ratios.The X-ray absorption is dominated by metal ions,and we use ourfits to that to predict the strength of the metal lines in the UV.The relative abundance of hydrogen,therefore,is not a significant factor.Another important constraint on the UV–X-ray rela-tionship comes from the Ne VIII column densities(Figure 5).The Ne VIII lines are at770and780˚A,within the spectral coverage of the STIS observations but not the FOS.The fact that they are high-ionization lines implies that they are good tracers of the X-ray gas.Our best fits to the X-ray data predict Ne VIII column densities of∼1016.5cm−2if z Xabs=z QSO and∼1017.2cm−2if z Xabs=z UV abs.For Doppler parameters appropriate for the mini-BALs or NALs,b≈2000km s−1or b≈200km s−1,respectively,wefind that the Ne VIII column densities predicted by the X-rays should produce easily measurable UV lines(∼>40σ),corresponding to observed frame equiv-alent widths of∼40˚A or∼3˚A respectively.However, there is no Ne VIII absorption detected at either redshift. There are several possible explanations for the appar-ent discrepancies.First,the absorber overall is complex and time variable.The complexity is evident from the distinct UV kinematic components,e.g.,the NALs and mini-BALs.Also,the column densities derived from the UV lines do not define an isolated location in the log U versus log N H plane(Fig.4),suggesting that the NAL and mini-BAL regions both have multiple zones(with differ-ent values of U and N H).Third,as we have noted above, the NALs and mini-BALs both varied between the1994 and1998HST observations(Jannuzi2002),and neither of those measurements was simultaneous with the X-ray data obtained in2000.Finally,the true uncertainties in U and N H of the X-ray absorber are likely to be larger than shown in Figure3.Those results are based onfits that fixed the underlying continuum shape.Letting both the continuum shape and absorber properties vary in thefit would clearly lead to more uncertain results−although, without pursuing that option,it is not clear if the the pre-dicted Ne VIII absorption could be as low as the upper limit from the UV spectrum.Another complication is that the UV lines may be af-fected by partial coverage.If the absorbing gas does not fully cover the background light source,as we have as-sumed above,then there can be unabsorbed lightfilling in the bottoms of the UV line troughs and the column densities inferred from the lines will be only lower limits. Studies of other sources show that the coverage fraction can differ between ions and vary with velocity(across the line profiles)in the same ion(Hamann et al.1997,Bar-low&Sargent1997,Barlow,Sargent&Hamann1997). Our column density estimates all assume100%coverage. We can determine from the doublet ratios of the NALs that those lines are not optically thick absorption masked by partial coverage,i.e.,their derived column densities should be reasonable(section4.2).Moreover,we have no diagnostic of partial coverage for the mini-BALs.We must therefore keep in mind that their derived column densities are,strictly speaking,only lower limits.Similarily,the line strengths predicted from the X-ray column densities,e.g. for O VI and Ne VIII above,are lower limits because of the assumption of100%coverage in the X-rayfits.In summary,the strengths of most of the UV lines are consistent with the X-ray measurements,if the X-ray and UV absorbers are outflowing at the same speed.The ab-sence of high-ionization absorption lines at the quasar’s redshift argues for the UV and X-ray absorbers occuring in the same general region at high velocity.However,the X-ray observations overpredict the high-ionization UV lines of Ne VIII at all velocities.Simultaneous UV and X-ray observations are needed to probe the UV–X-ray relation-ship further.4.3.Wind DynamicsIf we assume the X-ray absorbing gas is outflowing with the UV absorber,then we can use the outflow velocity de-termined from the UV lines together with the X-ray mea-sured total hydrogen column density to test the viability of radiative acceleration of the wind.Hamann(1998)found that the terminal velocity(v terminal)of a radiatively accel-erated wind is related(cf.equation3of Hamann1998)to the total luminosity of the quasar,the mass of its central blackhole,the total column density of the wind(N H),the radius at which it is launched(R launch),and the fraction (f L)of incident continuum energy absorbed or scattered by the wind along an average line of sight.For BALs,Hamann(1998)estimated that f L could be on the order of a few tenths.However,the narrower and shallower lines in PG2302+029will intercept less con-5tinuumflux.Moreover,lower column densities compared to BALflows imply that reprocessing overall is less ef-ficient.We,therefore,assume that f L≤0.1for our present analysis.We also adopt representative values of the blackhole mass,108M⊙,and luminosity,1046erg s−1, which is the Eddington luminosity associated with that mass.We assume that v terminal=56,000km s−1and N H=2.34×1022cm−2.These values are the outflow velocity of the UV lines(Jannuzi et al.1996)and the column density we determined from the X-rays if the ab-sorber is at z=0.7(see end of section3).Substituting all these numbers into equation(3)of Hamann(1998),wefind that the launch radius is≤1015cm≈100R Schwarzchild. Therefore,if the wind is radiatively accelerated,it must be launched very near to the black hole.The mass loss rate implied by this radius is≤0.1Q M⊙yr−1,where Q is the global covering factorΩ/4π(Hamann &Brandt2002,in preparation).The density of theflow at this launch radius should be n H∼>1011cm−3,if it is photoionized with log U=1.6appropriate for the X-ray absorber(see Hamann&Brandt2002for explicit equa-tions).4.4.Geometry and Physical ModelsThe small launch radius required for a high-velocity X-ray absorber may be problematic for models of the out-flow in PG2302+029.For comparison,this maximum launch radius is much smaller than the nominal radius of the broad emission line region for a quasar of the same luminosity(R BLR≈2×1018cm,based on reverberation studies,e.g.,Kaspi et al.2000).On the other hand,the size of the X-ray continuum source is∼1014cm(e.g., Peterson1993),an order of magnitude smaller than the launch radius derived above.Given the small launch ra-dius,it seems likely that the X-ray absorber is either not radiatively accelerated,or not outflowing at the same high speed as the UV lines.In either case,the absence of high-ionization UV absorp-tion near the systemic velocity places another important constraint on wind models.If it is a steady-stateflow, then the acceleration must occur in a region that does not intersect our sightline to the continuum source.Models of outflows that lead to such UV lines have been discussed by Murray et al.(1995),Murray&Chiang(1995),and Elvis (2000).In these scenarios,the wind is initially perpen-dicular to the accretion disk.As itflows farther from the disk,the radiation pressure accelerating it bends andflares in the radial direction.Oblique lines of sight then could pass through the bent part of the wind,thus explaining the absence of zero-velocity absorption.The models mentioned in the previous paragraph dif-fer in subtle ways.Murray et al.(1995)described a case in which the X-ray absorber is at rest.Its function is to shield the UV gas from soft X-rays and prevent it from be-coming highly ionized,in which case resonant line driving would not be effective.Another variant of this scheme is the possibility of a self-shielding wind(Murray&Chiang 1995;Elvis2000).The X-ray and UV absorption arise in the same outflowing gas.The high column density of the wind requires a small launch radius,although probably larger than the radius we derive in section4.3.5.CONCLUSIONSWe presented Chandra X-ray observations of the radio-quiet QSO PG2302+029and demonstrated the presence of soft X-ray absorption in its spectrum.Older UV spec-tra of this quasar have been used to identify the presence of rarely observed ultra-high velocity(−56,000km s−1) absorption lines consistent with this quasar containing a remarkable outflow from its active nucleus(Jannuzi et al. 1996,Jannuzi et al.2003).Using photoionization models and the combined X-ray and UV data sets we have investi-gated the possible physical properties of the gas producing the X-ray and UV absorption.We suggest that the X-ray absorption also occurs at high velocities in the same gen-eral region as the UV absorber.There are not enough con-straints to rule out multi-zone models.Multi-zone mod-els are required if the distinct broad and narrow UV line profiles are both intrinsic to the QSO.The properties of the X-ray and UV absorption,as inferred from the data, are consistent with each other,if the X-ray absorbing gas is outflowing with the UV absorber.However,the X-ray data over predict the strength of the high-ionization UV lines of Ne VIII at all velocities.If we assume the X-ray absorbing gas is in an outflow with the same velocity as the gas producing the UV-absorbing wind and that such winds are radiatively accelerated,then the outflow must be started at a radius of≤1015cm from the central source of the radiation.Acknowledgements:We wish to acknowledge support through Chandra grants GO0-1123X and GO0-1157X. BTJ acknowledges support from the National Science Foundation through their support of the National Op-tical Astronomy Observatory,which is operated by the Association of Universities for Research in Astronomy, Inc.(A.U.R.A.)under cooperative agreement with the National Science Foundation and from NASA through a grant to proposal GO-07348.01-A from the Space Tele-scope Science Institute,which is operated by A.U.R.A., Inc.,under NASA contract NAS5-26555REFERENCESArnaud,K. A.1996,Astronomical Data Analysis Software and Systems V,eds.Jacoby G.and Barnes J.,p17,ASP Conf.Series volume101Barlow,T.A.,&Sargent,W.L.W.1997,AJ,113,136Barlow,T.A.,Hamann,F.,&Sargent,W.L.W.1997,ASP Conf.Ser.,128,13Brandt,W.N.,Laor,A.,&Wills,B.J.2000,ApJ,528,637 Crenshaw, D.M.,Kraemer,S. B.,Boggess, A.,Maran,S.P., Mushotzky,R.F.,&Wu,Chi-Chao1999,ApJ,516,750Della Ceca,R.Palumbo,G.G.C.,Peris,M.,Boldt,E.A.,Marshall,E.E.,&de Zotti,G.1990,ApJS,72,47Elvis,M.2000,ApJ,545,63Ferland,G.,Korista,K.T.,Verner,D.A.,Ferguson,J.W.,Kingdon, J.B.,&Verner,E.M.1998,PASP,110,761Gallagher,S.C.,Brandt,W.N.,Chartas,G.,&Garmire,G.P.2002, ApJ,567,37George,I.A.,Turner,T.J.,Netzer,H.,Nandra,K.,Mushotzky,R.F.,&Yaqoob,T.1998,ApJS,114,73Green,P.J.1996,ApJ,467,61Green,P.J.,Aldcroft,T.L.,Mathur,S.,Wilkes,B.J,&Elvis,M.2001,ApJ,558,109Hamann,F.1997,ApJS,109,279Hamann,F.,et al.1997a,ApJ,478,80Hamann,F.,et al.1997b,ASP Conf.Ser.,128,19Hamann,F.1998,ApJ,500,7986Hamann,F.,&Brandt,W.N.2002,in prep.Jannuzi,B.,T.,et al.1996,ApJ,470,11Jannuzi,B.,T.,et al.1998,ApJS,118,1Jannuzi,B.,T.2002,in Extragalactic Gas at Low Redshift,eds.Mulchey J.S.and Stocke J.T.,p13,ASP Conf.Series volume 254Jannuzi,B.,T.,et al.2003,in prep.Junkkarinen,V.T.,Burbidge,E.M.,&Smith,H.E.1983,ApJ,265, 51Kellerman,K.I.,Sramek,R.,Schmidt,M.,Shaffer,D.B.,&Green, R.1989,AJ,98,1195Kaspi,S.,Smith,P.S.,Netzer,H.,Maoz,D.,Jannuzi,B.T.,& Giveon,U.2000,ApJ,533,631Laor,A.,Fiore,F.,Elvis,M.,Wilkes,B.J.,&McDowell,J.C.1997, ApJ,477,93Lockman,F.J.,&Savage,B.D.1995,ApJS,97,1Mathur,S.,Elvis,M.,&Singh,K.1995,ApJ,455,L9Murray,N.,&Chiang,J.1995,ApJ,454,L101Murray,N.,Chiang,J.,Grossman,J.A.,&Voit,G.M.1995,ApJ, 451,L498Peterson,B.M.1997,An Introduction to Active Galactic Nuclei (Cambridge:Cambridge University Press)Reynolds,C.S.1997,MNRAS,286,513Sabra,B.M.,&Hamann,F.2001,ApJ,563,555Schneider,D.P.,et al.1992,PASP,104,678Spitzer,L.,Jr.1978,Physical Processes in the Interstellar Medium (New York:Wiley)Telfer,Zheng,W.,Kriss,G.,&Davidsen,A.F.2002,ApJ,565,773 Vignali,C.,Brandt,W.N.,&Schneider,D.P.2003,AJ,125,433 Yuan,W.,Siebert,J.,&Brinkmann,W.,1998,A&A,334,498 Zheng,W.,Kriss,G.A.,Telfer,R.C.,Grimes,J.P.,&Davidsen,A.F.1997,ApJ,475,4697Fig. 1.—Incident piece-wise powerlaw continua used in generating the grid of ionized absorbers.The slopes are indicated(fν∝ν−α). The dotted,dashed,long-dashed,and dash dotted lines correspond toαox=1.6,1.8,2.0,and2.2,respectively.8Fig.2.—X-ray spectrum of PG2302+029.Upper Panel:with no intrinsic absorption.Lower Panel:ionized absorber at z≈0.7.In both casesαox=2.09Fig.3.—Confidence contours,67%,90%,and99%,levels for X-rayfits.Solid-line contours are for z Xabs=z QSO,while dashed-line ones are for z XAbs=z UV abs.。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

a r X i v :a s t r o -p h /0007438v 2 20 M a r 2001Revised,ApJ subm 3/15/01(7/27/00)A POSSIBLE INTRINSIC FLUENCE-DURATIONPOWER-LAW RELATION IN GAMMA-RAY BURSTSL.G.Bal´a zs 1,P.M´e sz´a ros 2,3,Z.Bagoly 4,I.Horv´a th 5&A.M´e sz´a ros 61Konkoly Observatory,Budapest,Box 67,H-1525,Hungary;balazs@ogyalla.konkoly.hu 2Dept.of Astronomy &Astrophysics,Pennsylvania State University,525Davey Lab.University Park,PA 16802,USA;nnp@ 3Institute of Astronomy,University of Cambridge,Madingley Road,Cambridge CB30HA,England,U.K.4Laboratory for Information Technology,E¨o tv¨o s University,P´a zm´a ny P´e ter s´e t´a ny 1/A,H-1518,Hungary;bagoly@ludens.elte.hu 5Department of Physics,ZMNE BJKMFK,Budapest,Box 12,H-1456,Hungary;hoi@bjkmf.hu 6Department of Astronomy,Charles University,18000Prague 8,V Holeˇs oviˇc k´a ch 2,Czech Republic;meszaros@mbox.cesnet.cz ABSTRACT We argue that the distributions of both the intrinsic fluence and the intrinsic duration of the γ-ray emission in gamma-ray bursts from the BATSE sample are well represented by log-normal distributions,in which the intrinsic dispersion is much larger than the cosmological time dilatation and redshift effects.We perform separate bivariate log-normal distribution fits to the BATSE short and long burst samples.The bivariate log-normal behavior results in an ellipsoidal distribution,whose major axis determines an overall statistical relation between the fluence and the duration.Weshow that this fit provides evidence for a power-law dependence between the fluenceand the duration,with a statistically significant different index for the long and shortgroups.We discuss possible biases which might affect this result,and argue that theeffect is probably real.This may provide a potentially useful constraint for models oflong and short bursts.Subject headings:gamma-rays:bursts –methods:statistical –methods:data analysis1.IntroductionThe simplest grouping of gamma-ray bursts (GRBs),which is still lacking a clear physical interpretation,is given by their well-known bimodal duration distribution.This divides burstsinto long(T∼>2s)and short(T∼<2s)duration groups(Kouveliotou et al.1993),defined through some specific duration definition such as T90,T50or similar.The bursts measured with the BATSE instrument on the Compton Gamma-Ray Observatory are usually characterized by9observational quantities,i.e.2durations,4fluences and3peakfluxes(Meegan et al.1996,Paciesas et al. 1999,Meegan et al.2000a).In a previous paper(Bagoly et al.1998),we used the principal components analysis(PCA)technique to show that these9quantities can be reduced to only two significant independent variables,or principal components(PCs).These PCs can be interpreted as principal vectors,which are made up of some subset of the original observational quantities. The most important PC is made up essentially by the durations and thefluences,while the second,weaker PC is largely made up of the peakfluxes.This simple observational fact,that the dominant principal component consists mainly of the durations and thefluences,may be of consequence for the physical modeling of the burst mechanism.In this paper we investigate in greater depth the nature of this principal component decomposition,and in particular,we analyze quantitatively the relationship between thefluences and durations implied by thefirst PC.In our previous PCA treatment of the BATSE Catalog Paciesas et al.1999we used logarithmic variables,since these are useful for dealing with the wide dynamic ranges involved.Since the logarithms of the durations and thefluences can be explained by only one quantity(thefirst PC),one might suspect the existence of only one physical variable responsible for both of these observed quantities.The PCA assumes a linear relationship between the observed quantities and the PC variables.The fact that the logarithmic durations andfluences can be adequately described by only one PC implies a proportionality between them and,consequently,a power law relation between the observed durations andfluences.We analyze the distribution of the observedfluences and durations of the long and the short bursts,and we present arguments indicating that the intrinsic durations andfluences are well represented by log-normal distributions.The implied bivariate log-normal distribution represents an ellipsoid in these two variables,whose major axis inclinations are statistically different for the long and the short bursts.An analysis of the possible biases and complications is made,leading to the conclusion that the relationship between the durations andfluences appears to be intrinsic, and may thus be related to the physical properties of the sources themselves.We calculate the exponent in the power-laws for the two types of bursts,andfind that for the short bursts the energyfluence is roughly proportional to the intrinsic duration,while for the long ones thefluences are roughly proportional to the square of intrinsic durations.The possible implications for GRB models are briefly discussed.2.Analysis of the Duration DistributionOur GRB sample is selected from the current BATSE Gamma-Ray Burst Catalog according to two criteria,namely,that they have both measured T90durations andfluences(for the definition of these quantities see Meegan et al.2000a,henceforth referred to as the Catalog).The Catalogin itsfinal version lists2041bursts for which a value of T90is given.Thefluences are given in four different energy channels,F1,F2,F3,F4,whose energy bands correspond to[25,50]keV,[50,100] keV,[100,300]keV and>300keV.The“total”fluence is defined as F tot=F1+F2+F3+F4, and we restrict our sample to include only those GRBs which have F i>0values in at least the channels F1,F2,F3.Concerning the fourth channel,whose energy band is>300keV,if we had required F4>0as well this would have reduced the number of eligible GRBs by≃20%. Hence,we decided to accept also the bursts with F4=0,rather than deleting them from the sample.(With this choice we also keep in the sample the no-high-energy(NHE)subgroup defined by Pendleton et al.1997.)Our choice of F≡F tot,instead of some other quantity as the main variable,is motivated by two arguments.First,as discussed in Bagoly et al.1998,F tot is the main constituent of one of the two PCs which represent the data embodied in the BATSE catalog,and hence it can be considered as a primary quantity,rather than some other combination or subset of its constituents.Second,Petrosian and collaborators in a series of articles(Efron&Petrosian 1992,Petrosian&Lee1996,Lee&Petrosian1996,Lee&Petrosian1997)have also argued for the use of thefluence as the primary quantity instead of,e.g.,the peakfling therefore these two cuts,we are left with N=1929GRBs,all of which have defined T90and F tot,as well as peak fluxes P256.This is the sample that we study in this paper.The distribution of the logarithm of the observed T90displays two prominent peaks1,which is interpreted as reflecting the existence of two groups of GRBs(Kouveliotou et al.1993).This bimodal distribution can be wellfitted by means of two Gaussian distributions(e.g.,Horv´a th 1998),indicating that both the long and the short bursts are individually wellfitted by pure Gaussian distributions in the logarithmic durations.The fact that the distribution of the BATSE T90quantities within a group is log-normal is of interest,since we can show that this property may be extended to the intrinsic durations as well. Let us denote the observed duration of a GRB with T90(which may be subject to cosmological time dilatation),and denote with t90the duration which would be measured by a comoving observer,i.e.the intrinsic duration.One has thenT90=t90f(z)(1) where z is the redshift,and f(z)measures the time dilatation.For the concrete form of f(z) one can take f(z)=(1+z)k,where k=1or k=0.6,depending on whether energy stretching is included or not(see Fenimore&Bloom1995and M´e sz´a ros&M´e sz´a ros1996).If energy stretching is included,for different photon frequenciesνthe t90depends on these frequencies as t90(ν)=t90(νo)(ν/νo)−0.4∝ν−0.4,whereνo is an arbitrary frequency in the measured range(i.e.for higher frequencies the intrinsic duration is shorter).The observed duration atνissimply(1+z)times the intrinsic duration atν×(1+z).Thus,T90(ν)=t90(ν(1+z))(1+z) =t90(νo)(ν(1+z)/νo)−0.4(1+z)=t90(ν)(1+z)0.6.Hence,when stretching is included,f(z)=(1+z)0.6is used.Taking the logarithms of both sides of equation(1)one obtains the logarithmic duration as a sum of two independent stochastic variables.According to a theorem of Cram´e r1937(see also R´e nyi1962),if a variableζwhich has a Gaussian distribution is given by the sum of two independent variables,e.g.ζ=ξ+η,then bothξandηhave Gaussian distributions.Therefore, the Gaussian distribution of log T90(confirmed for the long and short groups separately,Horv´a th 1998)implies that the same type of distribution exists for the variables log t90and log f(z). However,unless the space-time geometry has a very particular structure,the distribution of log f(z)cannot be Gaussian.This means that the Gaussian nature of the distribution of log T90 must be dominated by the distribution of log t90,and therefore the latter must then necessarily have a Gaussian distribution.This holds for both duration groups separately.This also implies that the cosmological time dilatation should not affect significantly the observed distribution of T90,which therefore is not expected to differ statistically from that of t90.(We note that several other authors,e.g.Wijers&Paczy´n ski1994,Norris et al.1994,Norris et al.1995,have already suggested that the distribution of T90reflects predominantly the distribution of t90.) One can check the above statement quantitatively by calculating the standard deviationof f(z),using the available observed redshifts of GRB optical afterglows.The number of the latter is,however,relatively modest,and so far they have been obtained only for long bursts. There are currently upwards of18GRBs with known redshifts(Norris&Marani2000,Bloom et al.2001,Bloom et al.2001b).The calculated standard deviation isσlog f(z)=0.14,assuminglog f(z)=log(1+z).Comparing the varianceσ2log f(z)with that of the group of long burstdurations(see Table1,which givesσlog T90=0.37),one infers that the standard deviation of log f(z),or log(1+z),can explain only about(0.14/0.37)2≃14%of the total variance of the logarithmic durations.(If f(z)=(1+z)0.6,then the standard deviation of log f(z)can only explain an even smaller amount than14%,becauseσlog f(z)=0.6×0.14.)This comparison gives support to the conclusion obtained by applying Cram´e r’s theorem to the long duration group.3.Distribution of the Energy FluencesThe observed totalfluence F tot can be expressed asF tot=(1+z)E totusual relation between the luminosity andflux is given by a similar equation without the extra (1+z)term in the numerator.Here this extra term is needed because both the left-hand-side is integrated over observer-frame time an the right-hand-side is integrated over time at the source).Assuming as the null hypothesis that the log F tot of the short bursts has a Gaussian distribution,for the sample of447bursts with T90<2s,aχ2test with26degrees of freedom gives an excellentfit withχ2=20.17.Accepting the hypothesis of a Gaussian distribution within this group,one can apply again Cramer’s theorem similarly to what was done for the logarithm of durations.This leads to the conclusion that either both the distribution of log c(z)and the distribution of log E tot are Gaussian,or else the variance of one of these quantities is negligible compared to the other,which then must be mainly responsible for the Gaussian behavior.The above argument,however,should be taken with some caution.As shown in Bagoly et al. 1998,the stochastic variable corresponding to the duration is independent from that of the peak flux.This means that afixed level of detection,given by the peakfluxes,does not have significant influence on the shape of the detected distribution of the durations(e.g.Efron&Petrosian 1992,Wijers&Paczy´n ski1994,Norris et al.1994,Norris et al.1995,Petrosian&Lee1996, Lee&Petrosian1996,Lee&Petrosian1997).In the case of thefluences,however,a detection threshold in the peakfluxes induces a bias on the true distribution,since they are stochastically not independent.Therefore the log-normal distribution recognized in the data does not necessarily imply the same behavior for the true distribution offluences occurring at the detector.A further complication arises from the differences of the spectral distribution among GRBs.A discussion of these problems can be found in a series of papers published by Petrosian and collaborators(Efron &Petrosian1992,Petrosian&Lee1996,Lee&Petrosian1996,Lee&Petrosian1997,Lloyd& Petrosian1999).Despite these difficulties,there are substantial reasons to argue that the observed distribution offluences is dominated by the intrinsic distribution.This assumption can be tested by comparing the variance of thefluences with those obtained for c(z)considering the GRBS that have measured z.We will return to this problem in more detail in§5dealing with the effect of possible observational biases.A Gaussian behavior of log c(z)can almost certainly be excluded.One can do this on the basis of the current observed distribution of redshifts(e.g.Bloom et al.2001,Bloom et al.2001b), or on the basis offits of the number vs.peakflux distributions(e.g.Fenimore&Bloom1995, Ulmer&Wijers1995,Horv´a th et al.1996,Reichart&M´e sz´a ros1997).In suchfits,using a number density n(z)∝(1+z)D with D≃(3−5),onefinds no evidence for the stopping of this increase with increasing z(up to z≃(5−20)).Hence,it would be contrived to deduce from this result that the distribution of log c(z)is normal.In order to do this,one would need several ad hoc assumptions.First,the increasing of number density would need to stop around some unknown high z.This was studied,e.g.by M´e sz´a ros&M´e sz´a ros1995,Horv´a th et al.1996,M´e sz´a ros& M´e sz´a ros1996,and no such effect was found.Second,even if this were the case,above this z the decrease of n(z)should mimic the behavior of a log-normal distribution for c(z),without any obvious justification.Third,below this z one must again have a log-normal behavior for c(z),incontradiction with the various number vs.peakfluxfits.Fourth,this behavior should occur for any subclass separately.Hence,the assumption of log-normal distribution of c(z)appears highly improbable,and this holds for both groups of GRBs.Thus,for the short bursts the variance of log c(z)must be negligible compared to the variance of log E tot.The latter possibility means that the observed distribution of the logarithmicfluences is essentially intrinsic,and therefore log E tot should have a Gaussian distribution for the group of short bursts.In the case of the long bursts,afit to a Gaussian distribution of logarithmicfluences does not give a significance level which is as convincing as for the short duration group.For the1482 GRBs with T90>2s aχ2test on log F tot with22degrees of freedom gives afit withχ2=35.12. Therefore,in this case theχ2test gives only a low probability of3.5%for a Gaussian distribution. This circumstance prevent us from applying Cram´e r’s theorem directly in the same way as we did with the short duration group.Calculating the variance of log c(z)for the GRBs with known redshifts Bloom et al.2001b one obtainsσlog c(z)=0.43.From Table1of this article it follows thatσlog Ftot=0.66.Hence,the variance of c(z)gives roughly a(0.43/0.66)2≃43%contribution to the entire variance,which is also a larger value than in the case of the durations.We return to this question in§6.4.Fitting the Logarithmic Fluences and Durations by the Superposition of twoBivariate DistributionsWe assume here as a working hypothesis that the distributions of the variables T90and F tot, for both the short and long groups,can be approximated by log-normals.In this case,it is possible tofit simultaneously the values of log F tot and log T90by a single two-dimensional(bivariate) normal distribution.This distribution hasfive parameters(two means,two dispersions,and the correlation coefficient).Its standard form isf(x,y)dxdy=N1−r2×exp −1σ2x+(y−a y)2σxσy dxdy,(3) where x=log T90,y=log F tot a x,a y are the means,σx,σy are the dispersions,and r is the correlation coefficient(Trumpler&Weaver1953;Chapt.1.25).An equivalent set of parameters consists of taking the same two means with two other dispersionsσ,xσ,y,and(instead of the correlation coefficient)the angleαbetween the axis log T90and the semi-major axis of the “dispersion ellipse”.(In the case of bivariate normal distributions,the constant probability curves define ellipses with well-defined axis directions).In this caseαand the correlation coefficient arerelated unambiguously through the analytical formula2rσxσytan2α=Paciesas et al.1999,Hakkila et al.2000b,Meegan et al.2000b).The effect of these biases is non-negligible,and they may in principle have an impact on the correlations betweenfluence and duration.In other words,it could be that the correlations between the measuredfluences and measured durations do not necessarily reflect(due to several instrumental effects)the actual correlations between the realfluences and durations(i.e.between the ideal data which would be obtained by ideal bias-free instruments).In this Section we discuss several tests which indicate that these biases do not significantly influence thefinal results presented.Roughly speaking,there are two kinds of instrumental biases.First,some of the faint GRB below the detection threshold may not be detected,and these missing GRBs-if they were detected -could change(at least in principle)the statistical relations between the durations andfluences. Second,even for GRB which are detected,the measured F tot and T90do not reproduce the real values of these quantities,due to the different background noise effects and other complications mainly in the values of F tot(for a more detailed survey of biases see the review of Meegan et al. 2000b).Both of these types of biases are particularly important for the fainter GRBs.To evaluate the impact of these effects,without going into instrumentation details,we will perform two different sets of test calculations.First,we will truncate the whole sample of GRBs with respect to the peakflux,and we will restrict ourselves to the brighter ones.For a sufficiently high truncation limit,this truncated sample should be free from the effects of thefirst type of bias above.Second, we will modify the measured values of F tot and T90in order to approximate them by real bias-free values.Then we repeat the calculations of the previous Section for these test samples.To restrict ourselves to the brighter GRBs we do two truncations.First,we take only GRBs with P256>0.65photon/(cm2s)(N=1571);second,we take only GRBs with P256>1.26 photon/(cm2s)(N=994).Thefirst choice is motivated by the analysis of Pendleton et al.1997, and this should already cancel the impact of biases offirst type(Stern et al.1999,Pendleton et al.1997).The second choice is an ad-hoc one,and is motivated by two opposite requirements. Clearly,to avoid the impact of biases of second type,the truncation should be done on the highest possible peakflux value.On the other hand,the number of remaining bright GRBs should not be small compared with the whole sample.A sample with P256>1.26photon/(cm2s)(N=994) appears to be a reasonable choice from these opposite points of view;for these high peakfluxes the biases in the values of F tot and T90should be largely negligible.The results after thefirst truncation are collected in Table2and can be seen in Figure2.We see that all the values are practically identical with the values of Table1.To complete the ML estimation,we need to calculate also the uncertainties in the obtained bestfit parameters.For our present purposes,it is enough to obtain these uncertainties forα1andα2,respectively.For this we use the fact that2(L2,max−L2)≃χ211,(6) where L2,max is the likelihood for the best parameters,and L2is the likelihood for the truevalues of the parameters,cf.Kendall&Stuart1976.Theχ211is the value ofχ2function for11 degrees of freedom(with the degrees of freedom given by the number of parameters).Taking the values ofχ211corresponding to1σprobability(≃68%)one obtains a10dimensional hypersurface in the11dimensional parameter-space around the point defined by the bestfit parameters. This hypersurface can be obtained through computer simulations by changing the values of the parameters around the bestfit values and for any new set calculating the value of the likelihood. Then,using Eq.(6),one can estimate the1σuncertainty in the parameters.This procedure leads to the values0.87≤tanα1≤1.21and2.02≤tanα2≤2.52,respectively.In other words,there is a68%probability that the tangents of angles are in these intervals.The random probability of obtaining identical angles for both groups may be calculated as follows.We do again an ML estimation,similarly to the procedure used to derive Table2,except for one difference.That is,we consider only10independent parameters,because we require that the two angles be identical.Doing this,we obtain that the L2,max value is smaller by4.46than the value obtained for11parameters.(The concrete values of the10parameters are unimportant here.)Then the one degree of freedomχ2=8.92,corresponding to the difference between the 11and10parameter ML estimation(see Eq.(6))defines a0.3%probability.This is the random probability that the two angles are identical(Kendall&Stuart1976).The two angles are therefore definitely different,with a high level of significance.Thus,for the two groups the dependence of the totalfluence on the measured duration is,(7)F tot∝ (T90)1;(short bursts);(T90)2.3;(long bursts)and the exponents of the two groups are different at a high level of confidence,with∼<0.3% probability of random occurrence.The results obtained with the second truncation are collected in Table3.We see that all values are again similar to the values of Table1and Table2.The difference in the averages of the durations andfluences understandable,because we take brighter GRBs.However the changes in them are small,and our conclusions therefore remain the same.To further verify this result,we also perform a second type of test.We modify the values of the measured F tot and T90in order to approximate them by their real bias-free values.We consider a very simplified model of GRB pulse shapes,namely,we assume that the measured time behavior of a GRB may be described by a triangle.If a GRB pulse arises at a time instant t1,after this time itsflux increases linearly,and reaches its peakflux(denoted by P)at time t2.Then it again decreases linearly,and theflux becomes zero at a time instant t3.Then the real measured duration is t=t3−t1,and the realfluence is F=t×P/2.Assume now that there is a background noise, causing aflux P o.Then,clearly,the GRB will be detected only when theflux P is larger than P o. This means that the measured duration will be t′=(1−P o/P)t.Also,the measuredfluence will be lower,because noflux is seen during the time when theflux is smaller than P o.The value of the measuredfluence will be given in this case by F′=(1−(P o/P)2)F.Taking this into account,we will do the modifications F tot;mod=F tot/(1−(P o/P256)2)and T90;mod=T90/(1−P o/P256), respectively.This means that the modified values(which are expected to be closer to the real ones)are larger than the measured ones.There are of course possible objections that can be raised against such a procedure.One might argue that these modifications are ad-hoc for several reasons,e.g.,that t′and t in the previous consideration are not identical to T90and T90;mod;or similarly,that F and F′are not identical to F tot and T tot;mod;also that P cannot be substituted automatically by P256;that the concrete value of P o is subject to change;that the”triangle”approximation is arbitrary;etc.Nevertheless,keeping all these caveats in mind,it is still useful to see what would be the change,if the calculation of the previous Section is repeated for these modifiedfluences and durations.The results of this second test calculation are collected in Table4.The concrete value ofP o=0.3photon/(cm2s)is based on the results of Stern et al.1999and Pendleton et al.1997, which suggest that the background noise is≃(0.2−0.3)photon/(cm2s).In order to avoid the problem with GRBs which have P256<P o,here we do not use the whole sample of N=1929, but a sample with P256>0.65photon/(cm2s)(N=1571).This truncation does not lead to any essential changes,as was already seen earlier in this Section.The values of Table4are again very similar to those of Table2.Omitting the calculation of the uncertainties in the best parameters, we calculate only the probability of having identical angles;this probability is0.4%(the value ofχ2drops by4.22).This tends to support,again,the earlier results obtained with the larger sample.Therefore,from these two different tests we are led to conclude that the instrumental biases do not change the basic results.The discussion in this Section gives support to the interpretation that the correlation between the logarithms of the durations and the logarithms of thefluences are real,that they are different for the long and the short bursts,and that these conclusions remain valid even after taking into account instrumental biases.6.Are the Correlations Actually Intrinsic?In the previous Section we presented arguments showing that the different power law relations between F tot and T90,expressed through Eq.(7),are real,and are not substantially affected by instrumental bias effects.In§3it was shown that these same power-law relations hold between the intrinsic t90and E tot values.Since,however,for the long bursts the validity of a log-normal representation of thefluences is not so obvious,we return here for the sake of completeness to this question.In general,the PCA analysis shows that the logarithm of any measured quantity can be represented by a linear combination of PCs.In the PCA analysis of Bagoly et al.1998the whole BATSE sample of GRBs(lumping together the long and short groups)it was shown that there are two important PCs,which may be identified-to a high accuracy-with the log T90and log P256variables.This means that also the logarithm offluence may be written aslog F tot=a1log T90+a2log P256+e,(8) where a1and a2are some constants(defining the importance of the PC)and e is some noise term (see Bagoly et al.1998).As shown in§2,the distribution of the measured T90is well be described by the superposition of two log-normal distributions.It is also shown in§2that the intrinsic durations t90for the two groups(short and long)separately are distributed log-normally.Hence,if it were the case that a2 were negligibly small with respect to a1,it would be possible to conclude(i.e.without any further separate investigation of log F tot itself)that log F tot is given by the superposition of two log-normal distributions.However,the smallness of a2is not fulfilled generally.Therefore,an additional study of F tot is needed,and this is done in§3.It is shown there that,because c(z)in2is very unlikely to obey a log-normal distribution(which holds for both groups separately),from the log-normal behavior of log F tot it follows that E tot must also have log-normal distribution.More precisely,the log-normal distribution of E tot is well justified for the short group,but not so well for the long group.Therefore,a further analysis,based on the PCA method,may help to clarify the situation.As mentioned,the coefficient a2in equation(8)is not always negligible with respect to a1. Nonetheless,a simple trick can be used which gets one around this obstacle.If we take a narrow enough interval of log P256so that in it it is approximately valid to take P256=const,thena2log P256is also constant there,and will not play a role in the form of the distribution of log F tot. Keeping in mind this possibility,we again do truncations in P256(as in the previous Section), but here we restrict P256from both sides(i.e.there is also an upper limit).In addition,we need to take as narrow an interval of log P256as possible;of course,the number of GRBs remaining cannot be too small(we will require at least N>500),otherwise a statistical study is not possible. Because PCA studies were done for the whole sample,we also do afitting for the whole sample, similarly to what was done in previous Sections.Results of the truncation with0.65<P256<1.26(where P256has units of photons/(cm s2))is given in Table5.We see that the results are again similar to the whole sample presented in Table 1;of course,some concrete values may differ from that of Table1.due to the choice of sample. For our purpose here it is the most essential conclusion that thefitting with the superposition of two two-dimensional log-normal distributions may again be done well.This means that the distribution in log F tot is therefore also a sum of two log-normal distributions.The results using a truncation with1.26<P256<3.98is given in Table6.We again see that the results are similar to those of the whole sample given in Table1.For our purposes here,it is again essential that the data is wellfitted with a superposition of two two-dimensional log-normal distributions;the distribution in log F tot is therefore also a sum of two log-normal distributions.Having thus concluded that the distribution of F tot is a sum of two log-normal distributions, since c(z)is very unlikely to be log-normally distributed(for both groups separately),E tot should。

相关文档
最新文档