

Validation of Computerized Systems - Robert Tollefsen


Q7A Good Manufacturing Practice Guidance for Active Pharmaceutical Ingredients
• D. Computerized Systems (5.4)
– Changes to computerized systems should be made according to a change procedure and should be formally authorized, documented, and tested.
– Records should be kept of all changes, including modifications and enhancements made to the hardware, software, and any other critical component of the system.
– These records should demonstrate that the system is maintained in a validated state.
Regulations
• 21 CFR 211 - Current Good Manufacturing Practices for Pharmaceuticals
• 21 CFR 11 - Electronic Records; Electronic Signatures final rule (Guidance: Scope & Application)

FDA's Perspective on Forced Degradation Studies for ANDAs (English version)


Scientific Considerations of Forced Degradation Studies in ANDA Submissions

ABSTRACT
A well-designed stress study can provide insight in choosing the appropriate formulation for a proposed product prior to intensive formulation development studies. It can prevent re-development or re-validation of a stability-indicating analytical method. This paper outlines the scientific aspects of forced degradation studies that should be considered in relation to ANDA submissions.

INTRODUCTION
Forced degradation is synonymous with stress testing and purposeful degradation. Purposeful degradation can be a useful tool to predict the stability of a drug substance or a drug product, with effects on purity, potency, and safety. It is imperative to know the impurity profile and behavior of a drug substance under various stress conditions. Forced degradation also plays an important role in the development of analytical methods, setting specifications, and design of formulations under the quality-by-design (QbD) paradigm. The nature of the stress testing depends on the individual drug substance and the type of drug product (e.g., solid oral dosage, lyophilized powders, and liquid formulations) involved (1).

The International Conference on Harmonisation (ICH) Q1B guideline provides guidance for performing photostability stress testing; however, there are no additional stress study recommendations in the ICH stability or validation guidelines (2). There is also limited information on the details of oxidation and hydrolysis studies. The drug substance monographs of Analytical Profiles of Drug Substances and Excipients provide some information with respect to different stress conditions for various drug substances (3). The forced degradation information provided in abbreviated new drug application (ANDA) submissions is often incomplete, and in those cases deficiencies are cited.
ABOUT THE AUTHOR
Ragine Maheswaran, Ph.D., is a CMC reviewer at the Office of Generic Drugs within the Office of Pharmaceutical Science, under the US Food and Drug Administration's Center for Drug Evaluation and Research.

An overview of common deficiencies cited throughout the chemistry, manufacturing, and controls (CMC) section of ANDAs has been published (4-6). Some examples of commonly cited deficiencies related to forced degradation studies include the following:
• Your drug substance does not show any degradation under any of the stress conditions. Please repeat stress studies to obtain adequate degradation. If degradation is not achievable, please provide your rationale.
• Please note that the conditions employed for the stress study are too harsh and that most of your drug substance has degraded. Please repeat your stress studies using milder conditions or a shorter exposure time to generate relevant degradation products.
• It is noted that you have analyzed your stressed samples as per the assay method conditions. For the related substances method to be stability indicating, the stressed samples should be analyzed using the related substances method conditions.
• Please state the attempts you have made to ensure that all the impurities, including the degradation products of the unstressed and the stressed samples, are captured by your analytical method.
• Please provide a list summarizing the amount of degradation products (known and unknown) in your stressed samples.
• Please verify the peak height requirement of your software for the peak purity determination.
• Please explain the mass imbalance of the stressed samples.
• Please identify the degradation products that are formed due to drug-excipient interactions.
• Your photostability study shows that the drug product is very sensitive to light.
Please explain how this is reflected in the analytical method, manufacturing process, product handling, etc.

In an attempt to minimize deficiencies in ANDA submissions, this article provides some general recommendations for conducting forced degradation studies, for reporting the relevant information in the submission, and for using the knowledge gained from forced degradation in developing stability-indicating analytical methods, the manufacturing process, product handling, and storage.

STRESS CONDITIONS
Typical stress tests include four main degradation mechanisms: heat, hydrolytic, oxidative, and photolytic degradation. Selecting suitable reagents, such as the concentration of acid, base, or oxidizing agent, and varying the conditions (e.g., temperature) and length of exposure can achieve the preferred level of degradation. Over-stressing a sample may lead to the formation of secondary degradants that would not be seen in formal shelf-life stability studies, and under-stressing may not serve the purpose of stress testing. Therefore, it is necessary to control the degradation to a desired level. A generic approach for stress testing has been proposed to achieve purposeful degradation that is predictive of long-term and accelerated storage conditions (7). The generally recommended degradation varies between 5-20% (7-10). This range covers the generally permissible 10% degradation for small-molecule pharmaceutical drug products, for which the stability limit is 90%-110% of the label claim. Although there are references in the literature that mention a wider recommended range (e.g., 10-30%), the more extreme stress conditions often provide data that are confounded with secondary degradation products.

Photostability
Photostability testing should be an integral part of stress testing, especially for photo-labile compounds. Some recommended conditions for photostability testing are described in ICH Q1B, Photostability Testing of New Drug Substances and Products (2).
Samples of drug substance, and of solid/liquid drug product, should be exposed to a minimum of 1.2 million lux hours and 200 watt hours per square meter of light. The same samples should be exposed to both white and UV light. To minimize the effect of temperature changes during exposure, temperature control may be necessary. The light-exposed samples should be analyzed for any changes in physical properties such as appearance, clarity, and color of solution, and for assay and degradants. The decision tree outlined in ICH Q1B can be used to determine the photostability testing conditions for drug products. The product labeling should reflect the appropriate storage conditions. It is also important to note that the labeling for generic drug products should be concordant with that of the reference listed drug (RLD) and with United States Pharmacopeia (USP) monograph recommendations, as applicable.

Heat
Thermal stress testing (e.g., dry heat and wet heat) should be more strenuous than the recommended ICH Q1A accelerated testing conditions. Samples of solid-state drug substances and drug products should be exposed to dry and wet heat, whereas liquid drug products can be exposed to dry heat. It is recommended that the effect of temperature be studied in 10 ºC increments above that for routine accelerated testing, and humidity at 75% relative humidity or greater (1). Studies may be conducted at higher temperatures for a shorter period (10). Testing at multiple time points could provide information on the rate of degradation and on primary and secondary degradation products. In the event that the stress conditions produce little or no degradation due to the stability of a drug molecule, one should ensure that the stress applied is in excess of the energy applied by accelerated conditions (40 ºC for 6 months) before terminating the stress study.
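The rationale for stepping temperature in 10 ºC increments can be made concrete with the Arrhenius equation. The sketch below is illustrative only: the 18 kcal/mol activation energy is an assumed mid-range value (this article later cites 12-24 kcal/mol as typical for drug substances), not a property of any particular compound.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)

def rate_ratio(ea_kcal_mol, t1_c, t2_c):
    """Arrhenius estimate of how much faster degradation runs at t2_c
    than at t1_c (temperatures in deg C), for activation energy Ea."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp((ea_kcal_mol / R) * (1.0 / t1 - 1.0 / t2))

# Assumed example: stressing at 50 C versus the 40 C accelerated
# condition, with a mid-range Ea of 18 kcal/mol.
print(rate_ratio(18.0, 40.0, 50.0))
```

Under this assumption a single 10 ºC step buys roughly a factor of two to three in rate, which is why stressing modestly above the 40 ºC accelerated condition can compress months of exposure into weeks.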
Acid and Base Hydrolysis
Acid and base hydrolytic stress testing can be carried out for drug substances and drug products in solution at ambient temperature or at elevated temperatures. The selection of the type and concentration of an acid or a base depends on the stability of the drug substance. One strategy for generating relevant stressed samples for hydrolysis is to subject the drug substance solution to various pHs (e.g., 2, 7, 10-12) at room temperature for two weeks or up to a maximum of 15% degradation (7). Hydrochloric acid or sulfuric acid (0.1 M to 1 M) for acid hydrolysis and sodium hydroxide or potassium hydroxide (0.1 M to 1 M) for base hydrolysis are suggested as suitable reagents (10). For lipophilic drugs, inert co-solvents may be used to solubilize the drug substance. Attention should be given to the functional groups present in the drug molecule when selecting a co-solvent. Prior knowledge of a compound can be useful in selecting the stress conditions. For instance, if a compound contains ester functionality and is very labile to base hydrolysis, low concentrations of a base can be used. Analysis of samples at various intervals can provide information on the progress of degradation and help to distinguish primary degradants from secondary degradants.

Oxidation
Oxidative degradation can be complex. Although hydrogen peroxide is used predominantly because it mimics the possible presence of peroxides in excipients, other oxidizing agents such as metal ions, oxygen, and radical initiators (e.g., azobisisobutyronitrile, AIBN) can also be used. Selection of an oxidizing agent, its concentration, and conditions depends on the drug substance. Solutions of drug substances and solid/liquid drug products can be subjected to oxidative degradation.
It is reported that subjecting the solutions to 0.1%-3% hydrogen peroxide at neutral pH and room temperature for seven days, or up to a maximum of 20% degradation, could potentially generate relevant degradation products (10). Samples can be analyzed at different time intervals to determine the desired level of degradation. Different stress conditions may generate the same or different degradants. The type and extent of degradation depend on the functional groups of the drug molecule and the stress conditions.

ANALYSIS METHOD
The preferred method of analysis for a stability-indicating assay is reverse-phase high-performance liquid chromatography (HPLC). Reverse-phase HPLC is preferred for several reasons, such as its compatibility with aqueous and organic solutions, high precision, sensitivity, and ability to detect polar compounds. Separation of peaks can be achieved by selecting the appropriate column type and column temperature and by adjusting the mobile phase pH. Poorly retained, highly polar impurities should be resolved from the solvent front. As part of method development, a gradient elution method with varying mobile phase composition (very low organic composition to high organic composition) may be carried out to capture early-eluting highly polar compounds and highly retained nonpolar compounds. Stressed samples can also be screened with the gradient method to assess the potential elution pattern. The sample solvent and mobile phase should be selected to afford compatibility with the drug substance, potential impurities, and degradants. Stress sample preparation should mimic the sample preparation outlined in the analytical procedure as closely as possible. Neutralization or dilution of samples may be necessary for acid- and base-hydrolyzed samples. Chromatographic profiles of stressed samples should be compared to those of relevant blanks (containing no active) and unstressed samples to determine the origin of peaks.
The blank peaks should be excluded from calculations. The amount of impurities (known and unknown) obtained under each stress condition should be provided along with the chromatograms (full scale and expanded scale showing all the peaks) of blanks, unstressed, and stressed samples. Additionally, chiral drugs should be analyzed with chiral methods to establish stereochemical purity and stability (11, 12).

The analytical method of choice should be sensitive enough to detect impurities at low levels (i.e., 0.05% of the analyte of interest or lower), and the peak responses should fall within the range of the detector's linearity. The analytical method should be capable of capturing all the impurities formed during a formal stability study at or below ICH threshold limits (13, 14). Degradation product identification and characterization are to be performed based on formal stability results in accordance with ICH requirements. Conventional methods (e.g., column chromatography) or hyphenated techniques (e.g., LC-MS, LC-NMR) can be used in the identification and characterization of the degradation products. Use of these techniques can provide better insight into the structure of the impurities, which could add to the knowledge space of potential structural alerts for genotoxicity and support the control of such impurities with tighter limits (12-17). It should be noted that structural characterization of degradation products is necessary for those impurities that are formed during formal shelf-life stability studies and are above the qualification threshold limit (13).

Various detection types, such as UV and mass spectrometry, can be used to analyze stressed samples. The detector should have 3D data capabilities, such as a diode array detector or mass spectrometer, to be able to detect spectral non-homogeneity. Diode array detection also offers the possibility of checking the peak profile at multiple wavelengths.
The limitation of diode array detection arises when the UV profiles of the analyte peak and an impurity or degradant peak are similar and the noise level of the system is high enough to mask the co-eluting impurities or degradants. Compounds of similar molecular weights and functional groups, such as diastereoisomers, may exhibit similar UV profiles. In such cases, attempts must be made to modify the chromatographic parameters to achieve the necessary separation. An optimal wavelength should be selected to detect and quantitate all the potential impurities and degradants. Use of more than one wavelength may be necessary if there is no overlap in the UV profiles of the analyte and the impurity or degradant peaks. A valuable tool in method development is the overlay of separation signals at different wavelengths to discover dissimilarities in peak profiles.

Peak Purity Analysis
Peak purity is used as an aid in stability-indicating method development. The spectral uniqueness of a compound is used to establish peak purity when co-eluting compounds are present. Peak purity, or peak homogeneity, of the peaks of interest in unstressed and stressed samples should be established using spectral information from a diode array detector. When instrument software is used for the determination of the spectral purity of a peak, the relevant parameters should be set up in accordance with the manufacturer's guidance. Attention should be given to the peak height requirement for establishing spectral purity: UV detection becomes nonlinear at higher absorbance values. Thresholds should be set such that co-eluting peaks can be detected. The optimum location of reference spectra should also be selected. The ability of the software to automatically correct spectra for the continuously changing solvent background in gradient separations should be ascertained. Establishing peak purity is not absolute proof that the peak is pure and that there is no co-elution with the peak of interest.
Limitations to peak purity arise when co-eluting peaks are spectrally similar, are below the detection limit, have no chromophore, or are not resolved at all.

Mass Balance
Mass balance establishes the adequacy of a stability-indicating method, though it is not achievable in all circumstances. It is performed by adding the assay value and the amounts of impurities and degradants to evaluate how closely the sum approaches 100% of the initial (unstressed) assay value, with due consideration of the margin of analytical error (1). Some attempt should be made to establish a mass balance for all stressed samples. Mass imbalance should be explored and an explanation provided. Varying responses of analyte and impurity peaks due to differences in UV absorption should also be examined by the use of external standards. Potential loss of volatile impurities, formation of non-UV-absorbing compounds, formation of early eluants, and potential retention of compounds on the column should be explored. Alternate detection techniques such as RI or LC/MS may be employed to account for non-UV-absorbing degradants.

TERMINATION OF STUDY
Stress testing can be terminated after ensuring adequate exposure to the stress conditions. The typical activation energy of drug substance molecules varies from 12-24 kcal/mol (18). A compound may not necessarily degrade under every single stress condition, and a general guideline on exposure limits is cited in a review article (10). In circumstances where a stable drug does not show any degradation under any of the stress conditions, the specificity of an analytical method can be established by spiking the drug substance or placebo with known impurities and establishing adequate separation.

OTHER CONSIDERATIONS
Stress testing may not be necessary for drug substances and drug products that have pharmacopeial methods and are used within the limitations outlined in USP <621>.
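The additive mass-balance check described earlier is simple arithmetic; a minimal sketch, using hypothetical assay and degradant percentages (all on the same label-claim basis, not data from any submission):

```python
def mass_balance(stressed_assay_pct, total_degradants_pct, initial_assay_pct=100.0):
    """Mass balance (%): stressed-sample assay plus the sum of impurities
    and degradants, relative to the unstressed (initial) assay value."""
    return 100.0 * (stressed_assay_pct + total_degradants_pct) / initial_assay_pct

# Hypothetical stressed sample: 87.5% assay remaining, 11.2% total degradants.
balance = mass_balance(87.5, 11.2)
print(round(balance, 1))  # 98.7
```

The roughly 1.3% shortfall in this example is exactly the kind of imbalance a reviewer would expect to see explained (volatile losses, non-UV-absorbing degradants, response-factor differences, and so on).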
In the case where a generic drug product uses a different polymorphic form from the RLD, the drug substance should be subjected to stress testing to evaluate the physicochemical changes of the polymorphic form, because different polymorphic forms may exhibit different stability characteristics.

FORCED DEGRADATION IN THE QBD PARADIGM
A systematic process for manufacturing quality drug products that meet the predefined targets for the critical quality attributes (CQAs) necessitates the use of knowledge obtained in forced degradation studies. A well-designed forced degradation study is indispensable for analytical method development in a QbD paradigm. It helps to establish the specificity of a stability-indicating method and to predict potential degradation products that could form during formal stability studies. Incorporating all potential impurities in the analytical method and establishing the peak purity of the peaks of interest helps to avoid unnecessary method re-development and revalidation.

Knowledge of the chemical behavior of drug substances under various stress conditions can also provide useful information regarding the selection of excipients for formulation development. Excipient compatibility is an integral part of understanding potential formulation interactions during product development and is a key part of product understanding. Degradation products due to drug-excipient interaction, or drug-drug interaction in combination products, can be examined by stressing samples of drug substance, drug product, and placebo separately and comparing the impurity profiles. Information obtained regarding drug-related peaks and non-drug-related peaks can be used in the selection and development of more stable formulations. For instance, if a drug substance is labile to oxidation, addition of an antioxidant may be considered for the formulation.
For drug substances that are labile to acid or that undergo stereochemical conversion in acidic media, delayed-release formulations may be necessary. Acid/base hydrolysis testing can also provide useful insight into the formulation of drug products that are liquids or suspensions.

Knowledge gained in forced degradation studies can facilitate improvements in the manufacturing process. If a photostability study shows a drug substance to be photolabile, caution should be taken during the manufacturing process of the drug product. Useful information regarding process development (e.g., wet versus dry processing, temperature selection) can be obtained from thermal stress testing of the drug substance and drug product. Additionally, increased scientific understanding of degradation products and mechanisms may help to determine the factors that could contribute to stability failures, such as ambient temperature, humidity, and light. Appropriate packaging materials can then be selected to protect against such factors.

CONCLUSION
An appropriately designed stress study meshes well with the QbD approaches currently being promoted in the pharmaceutical industry. A well-designed stress study can provide insight in choosing the appropriate formulation for a proposed product prior to intensive formulation development studies. A thorough knowledge of degradation, including mechanistic understanding of potential degradation pathways, is the basis of a QbD approach for analytical method development and is crucial in setting acceptance criteria for shelf-life monitoring. Stress testing can provide useful insight into the selection of physical form, the stereochemical stability of a drug substance, packaging, and storage conditions. It is important to perform stress testing for generic drugs because of the allowable qualitative and quantitative differences in formulation with respect to the RLD, and differences in the selection of the manufacturing process, processing parameters, and packaging materials.

REFERENCES
1. ICH, Q1A(R2) Stability Testing of New Drug Substances and Products, Geneva, February 2003.
2. ICH, Q1B Stability Testing: Photostability Testing of New Drug Substances and Products, Geneva, November 1996.
3. H. Brittain, Analytical Profiles of Drug Substances and Excipients, Academic Press, London.
4. A. Srinivasan and R. Iser, Pharm. Technol. 34(1), 50-59, 2010.
5. A. Srinivasan, R. Iser, and D. Gill, Pharm. Technol. 34(8), 45-51, 2010.
6. A. Srinivasan, R. Iser, and D. Gill, Pharm. Technol. 35(2), 58-67, 2011.
7. S. Klick, et al., Pharm. Technol. 29(2), 48-66, 2005.
8. K. M. Alsante, L. Martin, and S. W. Baertschi, Pharm. Technol. 27(2), 60-72, 2003.
9. D. W. Reynolds, K. L. Facchine, J. F. Mullaney, K. M. Alsante, T. D. Hatajik, and M. G. Motto, Pharm. Technol. 26(2), 48-56, 2002.
10. K. M. Alsante, A. Ando, R. Brown, J. Ensing, T. D. Hatajik, W. Kong, and Y. Tsuda, Advanced Drug Delivery Reviews 59, 29-37 (2007).
11. FDA, Guidance for Industry on Analytical Procedures and Methods Validation: Chemistry, Manufacturing, and Controls Documentation (draft), Rockville, MD, August 2000.
12. ICH, Q6A Specifications: Test Procedures and Acceptance Criteria for New Drug Substances and New Drug Products: Chemical Substances, Geneva, October 1999.
13. ICH, Q3A(R2) Impurities in New Drug Substances, Geneva, October 2006.
14. ICH, Q3B(R2) Impurities in New Drug Products, Geneva, June 2006.
15. FDA, Guidance for Industry, ANDAs: Impurities in Drug Substances (draft), Rockville, MD, August 2005.
16. FDA, Guidance for Industry, ANDAs: Impurities in Drug Products (draft), Rockville, MD, November 2010.
17. EMEA, Guideline on the Limits of Genotoxic Impurities, Committee for Medicinal Products for Human Use (CHMP) (Doc. Ref. EMEA/CHMP/QWP/251344/2006), Jan. 1, 2007.
18. K. A. Connors et al., Chemical Stability of Pharmaceuticals, Wiley and Sons, New York, New York, 2nd ed., 1986, p. 19.

ACKNOWLEDGMENTS AND DISCLAIMER
The author would like to thank Bob Iser, Naiqi Ya, Dave Skanchy, Bing Wu, and Ashley Jung for their scientific input and support.

Disclaimer: The views and opinions in this article are only those of the author and do not necessarily reflect the views or policies of the US Food and Drug Administration.

Chiral Drugs and Their Future


After attending Professor Wang Meixiang's lectures, I became deeply interested in chirality.

Chirality is a phenomenon that exists widely in nature and is one of nature's essential attributes.

An object that cannot be superimposed on its mirror image is said to be chiral.

The biological macromolecules that underpin life, such as proteins, polysaccharides, nucleic acids, and enzymes, are almost all chiral, and these molecules often perform important physiological functions in the body.

The study of chirality is currently one of the hottest topics in chemistry, and the development of chiral drugs is a future direction for medicine.

A chiral drug is one whose molecular structure contains a stereocenter, giving rise to a pair of enantiomers related to each other as an object and its mirror image.

These enantiomers have essentially identical physicochemical properties and differ only in optical rotation; they are labeled R or S by configuration (or dextrorotatory/levorotatory by the sign of rotation, a designation independent of R/S), and a 1:1 mixture of the two is a racemate.

In biological systems, stereochemical recognition is very pronounced.

For a chiral drug molecule, four kinds of behavior are generally possible: (1) only one enantiomer has the desired activity, while the other has no significant activity.

(2) The two enantiomers have qualitatively and quantitatively identical, or nearly identical, biological activity.

(3) The two enantiomers are quantitatively equivalent but qualitatively different in activity.

(4) The enantiomers differ quantitatively in activity.

Put more plainly, the two enantiomers may behave the same or differently and may differ in potency; one enantiomer may be therapeutically beneficial while the other may be not merely useless but actually harmful.

In the "thalidomide tragedy" discussed in class, the reason a sedative caused the birth of deformed infants was precisely that it contained the S-enantiomer, which is strongly teratogenic, and this led to the tragedy.

Today, the development of chiral drugs has reached a relatively mature stage and is flourishing.

A tragedy like that of thalidomide decades ago should no longer be possible.

Of the roughly 1,900 drugs currently in use worldwide, chiral drugs account for more than 50%; among the 200 drugs most commonly used in the clinic, as many as 114 are chiral.

Global sales of drugs sold as single enantiomers reached US$147.2 billion in 2001, an increase of more than 10% over the US$133.0 billion of 2000.

Sales of chiral drugs were projected to reach US$200 billion by 2010.

Compared with achiral drugs, chiral drugs offer very significant advantages.

As noted above, for a chiral drug one enantiomer may be effective while the other may be ineffective or even harmful.

RECIST 1.1 full text (Chinese-English)


New response evaluation criteria in solid tumours: Revised RECIST guideline (version 1.1)

Abstract

Background
Assessment of the change in tumour burden is an important feature of the clinical evaluation of cancer therapeutics: both tumour shrinkage (objective response) and disease progression are useful endpoints in clinical trials. Since RECIST was published in 2000, many investigators, cooperative groups, industry and government authorities have adopted these criteria in the assessment of treatment outcomes. However, a number of questions and issues have arisen which have led to the development of a revised RECIST guideline (version 1.1). Evidence for changes, summarised in separate papers in this special issue, has come from assessment of a large data warehouse (>6500 patients), simulation studies and literature reviews.
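For target lesions, RECIST 1.1 turns the sum of lesion diameters into response categories: complete response (disappearance of all target lesions), partial response (at least a 30% decrease from the baseline sum), progressive disease (at least a 20% and 5 mm increase over the smallest sum on study, the nadir), and stable disease otherwise. A simplified sketch of that logic, which deliberately ignores non-target lesions, new lesions, and the lymph-node rules of the full guideline:

```python
def recist_response(baseline_sum_mm, nadir_sum_mm, current_sum_mm):
    """Classify target-lesion response from sums of diameters (mm),
    per the RECIST 1.1 thresholds (simplified sketch)."""
    if current_sum_mm == 0:
        return "CR"  # complete response: all target lesions gone
    if (current_sum_mm >= 1.2 * nadir_sum_mm
            and current_sum_mm - nadir_sum_mm >= 5):
        return "PD"  # progression: >=20% and >=5 mm above the nadir
    if current_sum_mm <= 0.7 * baseline_sum_mm:
        return "PR"  # partial response: >=30% decrease from baseline
    return "SD"      # stable disease

print(recist_response(100, 100, 65))  # PR
print(recist_response(100, 50, 62))   # PD
```

Note that progression is assessed against the nadir while response is assessed against baseline, which is why the function takes both reference sums.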

EDQM Validation of Computerised Systems core document, 2018 (Chinese-English)

The scope of validation activities should be determined on the basis of a risk assessment, taking into account the extent to which the correctness and traceability of test results depend on the OMCL's computerised system.
Due to the great variety of computerised systems available, it is not possible to state in a single document all the specific validation elements that are applicable.
This guideline is intended for use by OMCLs working under Quality Management Systems based on the ISO/IEC 17025 standard, which use computerised systems for a part or the totality of the processes related to the quality control of medicines.
Validation of Computerised Systems – Core Document, PA/PH/OMCL (08) 69 R7, Guideline
This document is applicable to all types of computerised systems used in OMCLs. However, the extent of testing and documentation will vary with their complexity. Computerised systems can be divided into three categories: exempt, simple, and complex (see Table 1 in Part 3). This document describes a scalable validation approach for simple and complex computerised systems.

A Conformal Finite Difference Time Domain Technique for Modeling Curved Dielectric Surfaces


A Conformal Finite Difference Time Domain Technique for Modeling Curved Dielectric SurfacesWenhua Yu ,Senior Member,IEEE,and Raj Mittra ,Life Fellow,IEEEAbstract—In this paper,we present a simple yet accurate con-formal Finite Difference Time Domain (FDTD)technique,which can be used to analyze curved dielectric surfaces.Unlike the ex-isting conformal techniques for handling dielectrics,the present approach utilizes the individual electric field component along the edges of the cell,rather than requiring the calculation of its area or volume,which is partially filled with a dielectric material.The new technique shows good agreement with the results derived by Mode Matching and analytical methods.Index Terms—CFDTD,dielectric resonator (DR).I.I NTRODUCTIONDIELECTRIC loaded resonators and filters have important applications in many microwave communication devices.Dielectric resonators (DRs)are usually rod-like structures inside a cylindrical enclosure [1],[2].For the cylindrical resonators commonly used in practical applications,the con-ventional FDTD algorithm designed for the Cartesian system cannot be employed directly to simulate curved dielectric surfaces [3]in an accurate manner.This is because evenwith a very fine mesh(),the staircasing procedure introduces errors that are significant for narrow-band filter-type applications.Several enhanced FDTD techniques [4]–[6]have been proposed in the literature for modeling curved dielectric surfaces.These approaches employ a weighted volume average concept,which requires the computation of the area and volume of the partially-filled cell.Because these algorithms are based only on the use of the effective dielectric constant of the FDTD cell that is filled with dissimilar materials,they cannot distinguish between cells that have different geometrical properties insofar as the partial filling is concerned,but have the same fill factor.As mentioned above,the technique for handling curved di-electric surfaces used in this 
paper is not based on the effective dielectric constant approach.Instead,it makes use of the infor-mation on the edges of the cell to devise a field update algo-rithm,bypassing the area and volume calculations.The numer-ical results presented in the paper demonstrate that the algorithm produces results that have improved accuracy over existing con-formal dielectric techniques.To validate the proposed approach,we consider two test ex-amples.First,we calculate the resonant frequencies of a cylin-drical DR loaded in a rectangular cavity.Next,we investigate a cylindrical DR sandwiched between two parallel PEC plates.Manuscript received July 13,2000;revised December 1,2000.The authors are with Electromagnetic Communication Laboratory,The Penn-sylvania State University,University Park,PA 16802USA.Publisher Item Identifier S 1531-1309(01)01967-5.Both of these problems have been investigated previously by using other conformal techniques.II.FDTD M ETHOD FOR C URVED D IELECTRICSA.Existing Conformal Dielectric FDTD AlgorithmsTypically,existing conformal dielectric FDTD algorithms employ the weighted area or volume average to deal with the cells filled with different materials [4]–[6],as shown in Fig.1.The corresponding effective dielectric constant used in [5]and [6]is writtenas(1)anddirection;dielectric surface parameter inside the cells that are filled with dissimilar materials[5];Fig.1.Conformal dielectric techniques[4,5,6].We note from Fig.3that the edges(5)(6) where-andYU AND MITTRA:CONFORMAL FDTD TECHNIQUE27 loaded with a cylindrical dielectric rod(Fig.4)as well as acylindrical DR sandwiched between two parallel PEC plates.For both of these examples,the time step was taken tobe(7)We note from(7)that we do not have to compromise the Courantcondition in this algorithm.Returning to the dielectric-loaded cavity problem in Fig.4,we choose,for the sake of facilitating the comparison with pre-viously published results,the relative dielectric constant of thecylindrical rod to 
be38and assume that the pedestal below thisrod has a dielectric constant of1,in accordance with the spec-ifications given in[1].Next,we generate a nonuniform meshvia the mesh generation software available in[7].In the simu-lation of the geometry1in Table I,the entire domain includes4847cells,theand46and46and-andandHEM),for which the results given in both[5]and[6]deviatenoticeably from the theoretical values,are shown in Table IIbelow,along with those derived by using the present scheme.TABLE IC OMPARISON OFD IFFERENT M ETHODS FOR A D IELECTRIC R OD IN AR ECTANGULAR C AVITYTABLE IIC OMPARISON OFD IFFERENT M ETHODS FOR ADRIt is evident from the above table that the results obtainedvia the present conformal dielectric scheme are in very goodagreement with the theoretical values.IV.C ONCLUSIONA simple yet accurate scheme to model curved dielectric sur-faces in the context of FDTD has been introduced in this paper.The new updating scheme does not require area or volume cal-culations,and is convenient to apply without the burden of cal-culating the truncated cell areas and volumes which are partiallyfilled with a dielectric material.Hence,the mesh generation forthis conformal technique is quite simple.A more general av-erage formulation may be employed to calculate the effectivedielectric constant for large difference between the two dielec-tric constants.R EFERENCES[1]X.-P.Liang and K.A.Zakim,“Modeling of cylindrical dielectric res-onators in rectangular waveguides and cavity,”IEEE Trans.MicrowaveTheory Tech.,vol.41,no.12,pp.2174–2181,Dec.1993.[2]S.J.Fiedziusko,“Dual-mode dielectric resonator loaded cavity filters,”IEEE Trans.Microwave Theory Tech.,vol.MTT–30,no.9,pp.1311–1316,Sept.1982.[3]K.S.Yee,“Numerical solution of initial boundary value problems in-volving Maxwell’s equations in isotropic media,”IEEE Trans.AntennasPropagat.,vol.AP-14,pp.302–307,May1966.[4]M.Celuch-Marcysiak and W.K.Gwarek,“Higher order modeling ofmedia surfaces for enhanced FDTD analysis of 
microwave circuits," in Proc. 24th European Microwave Conf., vol. 2, Cannes, France, 1994, pp. 1530–1535.
[5] N. Kaneda, B. Houshmand, and T. Itoh, "FDTD analysis of dielectric resonators with curved surfaces," IEEE Trans. Microwave Theory Tech., vol. 45, no. 9, pp. 1645–1649, Sept. 1997.
[6] S. Dey and R. Mittra, "A conformal finite-difference time-domain technique for modeling cylindrical dielectric resonators," IEEE Trans. Microwave Theory Tech., vol. 47, no. 9, pp. 1737–1739, Sept. 1999.
[7] W. Yu and R. Mittra, "A conformal FDTD software package modeling antennas and microstrip circuit components," IEEE Antennas Propagat. Mag., vol. 42, no. 5, pp. 28–39, Oct. 2000.
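For contrast with the edge-based update scheme described above, the weighted-average effective permittivity that the conventional conformal techniques [4]–[6] assign to a partially filled cell can be sketched as a simple volume-weighted mean. This is a minimal illustration, not the exact formula of [5] and [6], whose weights involve the dielectric surface parameter and are not recoverable from this copy:

```python
def effective_permittivity(eps_a: float, eps_b: float, fill_fraction: float) -> float:
    """Volume-weighted effective permittivity of an FDTD cell that is
    partially filled: a fraction `fill_fraction` of the cell has
    relative permittivity eps_a, and the remainder has eps_b."""
    if not 0.0 <= fill_fraction <= 1.0:
        raise ValueError("fill_fraction must lie in [0, 1]")
    return fill_fraction * eps_a + (1.0 - fill_fraction) * eps_b

# A cell half-filled by a dielectric rod with eps_r = 38, surrounded by air:
eps_eff = effective_permittivity(38.0, 1.0, 0.5)  # 19.5
```

The edge-based algorithm of this paper avoids computing `fill_fraction` altogether, which is exactly the area/volume bookkeeping it claims to eliminate.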

Opening a Window on Tablet Integrity


Opening a Window on Tablet Integrity

Toshiba Corporation and Cambridge University

Abstract: A company formed through a collaboration between Toshiba and Cambridge University explains to Pharmaceutical Technology how an FDA-supported technique can determine the integrity of tablets without damaging them.

One application of terahertz pulsed imaging is inspecting the integrity of enteric-coated formulations to ensure that they do not dissolve before reaching the intestine.

Keywords: tablet integrity; terahertz pulsed imaging.

A technique that can examine the structural integrity and chemical composition of tablets without breaking them apart has passed the proof-of-concept stage and is now the subject of regulatory filings.

It was developed by TeraView, a privately held UK company, and is based on terahertz light, which lies between radio waves and visible light.

The imaging technique offers a better alternative to wet dissolution testing in formulation development and quality control.

It can also shorten development time for new products and, depending on the manufacturer, may over time even evolve into a real-time tablet inspection system for pharmaceutical production lines.

TPI works by emitting terahertz radiation to map three-dimensional variations in the tablet and its coating thickness; the radiation is reflected back wherever there is a structural or chemical change.

The time delays of the reflected pulses are accumulated into a three-dimensional image of the tablet.

The system uses a terahertz emitter, a robotic arm that picks up each tablet and passes it through the terahertz beam, and a scanner that collects the reflected light and builds the three-dimensional image (see figure).
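The depth information behind these images comes from the round-trip delay of each reflected pulse: a reflection from an interface at depth d inside a coating of refractive index n arrives later by 2nd/c. A minimal sketch of that conversion (the delay and refractive index below are illustrative assumptions, not TeraView figures):

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def coating_thickness(delay_s: float, refractive_index: float) -> float:
    """Coating thickness implied by the round-trip delay between the
    pulse reflected at the tablet surface and the pulse reflected at
    the coating/core interface: d = c * delay / (2 * n)."""
    return C0 * delay_s / (2.0 * refractive_index)

# A reflection arriving 1 ps after the surface echo, coating index ~1.5:
thickness_um = coating_thickness(1e-12, 1.5) * 1e6  # roughly 100 micrometres
```

Accumulating such per-pixel delays across the tablet surface is what yields the three-dimensional coating-thickness map described above.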

Technology development

Terahertz technology originated in the mid-1990s at Toshiba Research Europe, Toshiba's research centre in the UK, which has close ties to the Department of Physics at Cambridge University.

Toshiba was working at the time on a new generation of semiconductors; a by-product of that research was the discovery that these semiconductors are in fact very good sources and detectors of terahertz light.

In the late 1990s, Toshiba authorized the research group to pursue possible applications of the technology, including imaging and chemical-sensing spectroscopy, and the group built relationships with GlaxoSmithKline, Pfizer, and other companies to explore applications in the pharmaceutical industry.

Although early results showed the technique was promising, Toshiba was reluctant to pursue it further because the application did not overlap with any of Toshiba's business interests in consumer electronics.

As a result of this decision, Don Arnone, chief executive at the research centre, and Professor Michael Pepper of the Cambridge University Department of Physics founded TeraView in 2001 as a subsidiary of the research centre.

The TPI imaga 2000 is the first commercial terahertz imaging system; it is optimized for nondestructive testing of the integrity and performance of finished tablets and their cores.

Analysis of Isotope Abundance and Distribution for L-Isoleucine in Brevibacterium flavum


Journal of Instrumental Analysis (Fenxi Ceshi Xuebao), Vol. 43, No. 3, March 2024, pp. 496–500

Analysis of Isotope Abundance and Distribution for L-Isoleucine in Brevibacterium flavum

ZHAO Ya-meng 1,2, FAN Ruo-ning 1,2, LEI Wen 1,2* (1. Shanghai Research Institution of Chemical Industry Co., Ltd., Shanghai 200062, China; 2. Shanghai Professional Technology Service Platform on Detection and Application Development for Stable Isotope, Shanghai 200062, China)

Abstract: With the rapid development of life-science fields such as metabolomics and proteomics, stable isotope labeling reagents, especially labeled amino acids, have been widely used because they are non-radioactive and share the physicochemical properties of their unlabeled counterparts.

This paper establishes a robust and rapid method for analyzing the isotope abundance of amino acids.

The method uses a Hypersil Gold Vanquish column (100 mm × 2.1 mm, 1.9 μm), with water and methanol containing 0.1% formic acid as the mobile phases, and liquid chromatography-high resolution mass spectrometry (LC-HRMS) analysis in positive ion mode. The isotope abundance of L-isoleucine-15N in the bacterial fermentation broth was determined to be 98.58% with a relative standard deviation of 0.03%, and the method can be applied to the accurate determination of the isotope abundance and distribution of L-isoleucine in Brevibacterium flavum traced with different stable isotopes (15N or 13C).

The method is simple, sensitive, and robust, and is expected to play an important role in synthetic biology and in isotope-traced metabolic flux studies.

CLC number: O657.72; O629.7  Document code: A  Article ID: 1004-4957(2024)03-0496-05

Analysis of Isotope Abundance and Distribution for L-Isoleucine in Brevibacterium flavum

ZHAO Ya-meng 1,2, FAN Ruo-ning 1,2, LEI Wen 1,2* (1. Shanghai Research Institution of Chemical Industry Co., Ltd., Shanghai 200062, China; 2. Shanghai Professional Technology Service Platform on Detection and Application Development for Stable Isotope, Shanghai 200062, China)

Abstract: In the rapidly advancing life science fields such as metabolomics and proteomics, stable isotope labeling reagents that are non-radioactive and have similar physicochemical properties to unlabeled compounds have been widely utilized. Biological fermentation is one of the major synthesis approaches for labeled amino acids. In this study, we have established an accurate, robust, and rapid method to determine the isotope abundance of the amino acids in the fermentation broth to aid in early assessment of batch quality and optimization of fermentation conditions and amino acid yield. A Hypersil Gold Vanquish column (100 mm × 2.1 mm, 1.9 μm), with water and methanol containing 0.1% formic acid as the mobile phases, and a liquid chromatography-high resolution mass spectrometry (LC-HRMS) system in positive ion mode were used for the study. The isotopic abundance of L-isoleucine-15N samples was determined to be 98.58%, closely matching the indicated value (>98%), with a relative standard deviation of 0.03%, demonstrating excellent accuracy and precision for the method. The method was then successfully applied to determine the isotopic abundance and distribution of L-isoleucine in Brevibacterium flavum labeled with 15N or 13C. The proposed method is simple to perform, convenient, highly sensitive, and robust, holding wide application potential in synthetic biology and in research on stable isotope-traced metabolic pathways.

Key words: stable isotope labeled amino acid; liquid chromatography-high resolution mass spectrometry (LC-HRMS); Brevibacterium flavum; isotope distribution and abundance

Stable isotope-labeled compounds, synthesized by using labeling techniques to replace ordinary atoms in a compound with isotopic nuclides, combined with mass spectrometry, have already played an important role in life-science research such as proteomics, metabolomics, biological target discovery, and clinical diagnosis [1-4].
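At its core, the abundance figure reported above (98.58% for L-isoleucine-15N) is the labeled isotopologue's share of the total isotopologue signal. A simplified sketch of that calculation with made-up peak intensities; a real workflow would also correct for natural 13C/15N abundance, which is omitted here:

```python
def isotope_abundance(intensities: dict[int, float], labeled_shift: int) -> float:
    """Estimate isotope abundance as the labeled isotopologue's peak
    intensity divided by the summed intensities of all isotopologues.
    `intensities` maps mass shift (0 = M, 1 = M+1, ...) to peak intensity."""
    return intensities[labeled_shift] / sum(intensities.values())

# Hypothetical isotopologue envelope for an L-isoleucine-15N sample:
abundance_pct = 100.0 * isotope_abundance({0: 1.42, 1: 98.58}, labeled_shift=1)
```

For a multi-position 13C label, the same ratio is taken over the whole envelope (M, M+1, ..., M+n), which also yields the isotope distribution across positions.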

Bioanalytical Method Validation: Guidance for Industry


Guidance for Industry
Bioanalytical Method Validation

U.S. Department of Health and Human Services
Food and Drug Administration
Center for Drug Evaluation and Research (CDER)
Center for Veterinary Medicine (CVM)
May 2001
BP

Additional copies are available from:

Drug Information Branch (HFD-210)
Center for Drug Evaluation and Research (CDER)
5600 Fishers Lane, Rockville, MD 20857, (Tel) 301-827-4573
Internet at /cder/guidance/index.htm

or

Communications Staff (HFV-12)
Center for Veterinary Medicine (CVM)
7500 Standish Place, Rockville, MD 20855, (Tel) 301-594-1755
Internet at /cvm

Table of Contents

I. INTRODUCTION
II. BACKGROUND
  A. Full Validation
  B. Partial Validation
  C. Cross-Validation
III. REFERENCE STANDARD
IV. METHOD DEVELOPMENT: CHEMICAL ASSAY
  A. Selectivity
  B. Accuracy, Precision, and Recovery
  C. Calibration/Standard Curve
  D. Stability
  E. Principles of Bioanalytical Method Validation and Establishment
  F. Specific Recommendations for Method Validation
V. METHOD DEVELOPMENT: MICROBIOLOGICAL AND LIGAND-BINDING ASSAYS
  A. Selectivity Issues
  B. Quantification Issues
VI. APPLICATION OF VALIDATED METHOD TO ROUTINE DRUG ANALYSIS
  Acceptance Criteria for the Run
VII. DOCUMENTATION
  A. Summary Information
  B. Documentation for Method Establishment
  C. Application to Routine Drug Analysis
  D. Other Information
GLOSSARY

GUIDANCE FOR INDUSTRY¹
Bioanalytical Method Validation

I. INTRODUCTION

This guidance provides assistance to sponsors of investigational new drug applications (INDs), new drug applications (NDAs), abbreviated new drug applications (ANDAs), and supplements in developing bioanalytical method validation information used in human clinical pharmacology, bioavailability (BA),
and bioequivalence (BE) studies requiring pharmacokinetic (PK) evaluation. This guidance also applies to bioanalytical methods used for non-human pharmacology/toxicology studies and preclinical studies. For studies related to the veterinary drug approval process, this guidance applies only to blood and urine BA, BE, and PK studies.

The information in this guidance generally applies to bioanalytical procedures such as gas chromatography (GC), high-pressure liquid chromatography (LC), combined GC and LC mass spectrometric (MS) procedures such as LC-MS, LC-MS-MS, GC-MS, and GC-MS-MS performed for the quantitative determination of drugs and/or metabolites in biological matrices such as blood, serum, plasma, or urine. This guidance also applies to other bioanalytical methods, such as immunological and microbiological procedures, and to other biological matrices, such as tissue and skin samples.

This guidance provides general recommendations for bioanalytical method validation. The recommendations can be adjusted or modified depending on the specific type of analytical method used.

II. BACKGROUND

¹ This guidance has been prepared by the Biopharmaceutics Coordinating Committee in the Center for Drug Evaluation and Research (CDER) in cooperation with the Center for Veterinary Medicine (CVM) at the Food and Drug Administration.

This guidance has been developed based on the deliberations of two workshops: (1) Analytical Methods Validation: Bioavailability, Bioequivalence, and Pharmacokinetic Studies (held on December 3–5, 1990²) and (2) Bioanalytical Methods Validation – A Revisit With a Decade of Progress (held on January 12–14, 2000³).

Selective and sensitive analytical methods for the quantitative evaluation of drugs and their metabolites (analytes) are critical for the successful conduct of preclinical and/or biopharmaceutics and clinical pharmacology studies.
Bioanalytical method validation includes all of the procedures that demonstrate that a particular method used for quantitative measurement of analytes in a given biological matrix, such as blood, plasma, serum, or urine, is reliable and reproducible for the intended use. The fundamental parameters for this validation include (1) accuracy, (2) precision, (3) selectivity, (4) sensitivity, (5) reproducibility, and (6) stability. Validation involves documenting, through the use of specific laboratory investigations, that the performance characteristics of the method are suitable and reliable for the intended analytical applications. The acceptability of analytical data corresponds directly to the criteria used to validate the method.

Published methods of analysis are often modified to suit the requirements of the laboratory performing the assay. These modifications should be validated to ensure suitable performance of the analytical method. When changes are made to a previously validated method, the analyst should exercise judgment as to how much additional validation is needed. During the course of a typical drug development program, a defined bioanalytical method undergoes many modifications. The evolutionary changes to support specific studies and different levels of validation demonstrate the validity of an assay's performance. Different types and levels of validation are defined and characterized as follows:

A. Full Validation

• Full validation is important when developing and implementing a bioanalytical method for the first time.
• Full validation is important for a new drug entity.
• A full validation of the revised assay is important if metabolites are added to an existing assay for quantification.

B. Partial Validation

Partial validations are modifications of already validated bioanalytical methods. Partial validation can range from as little as one intra-assay accuracy and precision determination to a nearly full validation. Typical bioanalytical method changes that fall into this category include, but are not limited to:

• Bioanalytical method transfers between laboratories or analysts
• Change in analytical methodology (e.g., change in detection systems)
• Change in anticoagulant in harvesting biological fluid
• Change in matrix within species (e.g., human plasma to human urine)
• Change in sample processing procedures
• Change in species within matrix (e.g., rat plasma to mouse plasma)
• Change in relevant concentration range
• Changes in instruments and/or software platforms
• Limited sample volume (e.g., pediatric study)
• Rare matrices
• Selectivity demonstration of an analyte in the presence of concomitant medications
• Selectivity demonstration of an analyte in the presence of specific metabolites

C. Cross-Validation

Cross-validation is a comparison of validation parameters when two or more bioanalytical methods are used to generate data within the same study or across different studies. An example of cross-validation would be a situation where an original validated bioanalytical method serves as the reference and the revised bioanalytical method is the comparator. The comparisons should be done both ways.

When sample analyses within a single study are conducted at more than one site or more than one laboratory, cross-validation with spiked matrix standards and subject samples should be conducted at each site or laboratory to establish interlaboratory reliability. Cross-validation should also be considered when data generated using different analytical techniques (e.g., LC-MS-MS vs. ELISA⁴) in different studies are included in a regulatory submission.

All modifications should be assessed to determine the recommended degree of validation.

² Workshop Report: Shah, V.P., et al., Pharmaceutical Research, 1992; 9:588-592.
³ Workshop Report: Shah, V.P., et al., Pharmaceutical Research, 2000; 17: in press.
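One simple way to operationalize the cross-validation comparison above is to compute, sample by sample, the percent difference of the comparator method against the reference method. The guidance does not prescribe a particular statistic, so the aggregation below (mean percent bias) and the numbers are illustrative only:

```python
from statistics import mean

def percent_differences(reference: list[float], comparator: list[float]) -> list[float]:
    """Per-sample percent difference of the comparator method relative
    to the reference method, for the same study samples."""
    return [100.0 * (c - r) / r for r, c in zip(reference, comparator)]

# Concentrations (ng/mL) measured on the same four samples by two methods:
ref = [10.0, 25.0, 50.0, 100.0]
comp = [10.5, 24.0, 51.0, 97.0]
mean_bias = mean(percent_differences(ref, comp))  # mean percent bias
```

Running the comparison "both ways", as the guidance asks, just means repeating the calculation with the roles of the two methods swapped.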
The analytical laboratory conducting pharmacology/toxicology and other preclinical studies for regulatory submissions should adhere to FDA's Good Laboratory Practices (GLPs)⁵ (21 CFR part 58) and to sound principles of quality assurance throughout the testing process. The bioanalytical method for human BA, BE, PK, and drug interaction studies must meet the criteria in 21 CFR 320.29. The analytical laboratory should have a written set of standard operating procedures (SOPs) to ensure a complete system of quality control and assurance. The SOPs should cover all aspects of analysis from the time the sample is collected and reaches the laboratory until the results of the analysis are reported. The SOPs also should include record keeping, security and chain of sample custody (accountability systems that ensure integrity of test articles), sample preparation, and analytical tools such as methods, reagents, equipment, instrumentation, and procedures for quality control and verification of results.

⁴ Enzyme-linked immunosorbent assay
⁵ For the Center for Veterinary Medicine, all bioequivalence studies are subject to Good Laboratory Practices.

The process by which a specific bioanalytical method is developed, validated, and used in routine sample analysis can be divided into (1) reference standard preparation, (2) bioanalytical method development and establishment of assay procedure, and (3) application of validated bioanalytical method to routine drug analysis and acceptance criteria for the analytical run and/or batch. These three processes are described in the following sections of this guidance.

III. REFERENCE STANDARD

Analysis of drugs and their metabolites in a biological matrix is carried out using samples spiked with calibration (reference) standards and using quality control (QC) samples. The purity of the reference standard used to prepare spiked samples can affect study data.
For this reason, an authenticated analytical reference standard of known identity and purity should be used to prepare solutions of known concentrations. If possible, the reference standard should be identical to the analyte. When this is not possible, an established chemical form (free base or acid, salt or ester) of known purity can be used. Three types of reference standards are usually used: (1) certified reference standards (e.g., USP compendial standards); (2) commercially supplied reference standards obtained from a reputable commercial source; and/or (3) other materials of documented purity custom-synthesized by an analytical laboratory or other noncommercial establishment. The source and lot number, expiration date, certificates of analyses when available, and/or internally or externally generated evidence of identity and purity should be furnished for each reference standard.

IV. METHOD DEVELOPMENT: CHEMICAL ASSAY

The method development and establishment phase defines the chemical assay. The fundamental parameters for a bioanalytical method validation are accuracy, precision, selectivity, sensitivity, reproducibility, and stability. Measurements for each analyte in the biological matrix should be validated. In addition, the stability of the analyte in spiked samples should be determined. Typical method development and establishment for a bioanalytical method include determination of (1) selectivity, (2) accuracy, precision, recovery, (3) calibration curve, and (4) stability of analyte in spiked samples.

A. Selectivity

Selectivity is the ability of an analytical method to differentiate and quantify the analyte in the presence of other components in the sample. For selectivity, analyses of blank samples of the appropriate biological matrix (plasma, urine, or other matrix) should be obtained from at least six sources.
Each blank sample should be tested for interference, and selectivity should be ensured at the lower limit of quantification (LLOQ).

Potential interfering substances in a biological matrix include endogenous matrix components, metabolites, decomposition products, and in the actual study, concomitant medication and other exogenous xenobiotics. If the method is intended to quantify more than one analyte, each analyte should be tested to ensure that there is no interference.

B. Accuracy, Precision, and Recovery

The accuracy of an analytical method describes the closeness of mean test results obtained by the method to the true value (concentration) of the analyte. Accuracy is determined by replicate analysis of samples containing known amounts of the analyte. Accuracy should be measured using a minimum of five determinations per concentration. A minimum of three concentrations in the range of expected concentrations is recommended. The mean value should be within 15% of the actual value except at LLOQ, where it should not deviate by more than 20%. The deviation of the mean from the true value serves as the measure of accuracy.

The precision of an analytical method describes the closeness of individual measures of an analyte when the procedure is applied repeatedly to multiple aliquots of a single homogeneous volume of biological matrix. Precision should be measured using a minimum of five determinations per concentration. A minimum of three concentrations in the range of expected concentrations is recommended. The precision determined at each concentration level should not exceed 15% of the coefficient of variation (CV) except for the LLOQ, where it should not exceed 20% of the CV.
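The accuracy and precision limits just stated (mean within 15% of nominal and CV not exceeding 15%, both relaxed to 20% at the LLOQ, from at least five determinations) can be sketched as follows; the replicate values are illustrative:

```python
from statistics import mean, stdev

def accuracy_pct(measured: list[float], nominal: float) -> float:
    """Accuracy: mean measured value relative to nominal, in percent."""
    return 100.0 * mean(measured) / nominal

def precision_cv_pct(measured: list[float]) -> float:
    """Precision: coefficient of variation (CV), in percent."""
    return 100.0 * stdev(measured) / mean(measured)

def meets_criteria(measured: list[float], nominal: float, is_lloq: bool = False) -> bool:
    """Apply the guidance limits: mean within 15% of nominal and
    CV <= 15% (20% at the LLOQ), with at least five determinations."""
    limit = 20.0 if is_lloq else 15.0
    acc_ok = abs(accuracy_pct(measured, nominal) - 100.0) <= limit
    cv_ok = precision_cv_pct(measured) <= limit
    return len(measured) >= 5 and acc_ok and cv_ok

# Five replicates at a nominal concentration of 50 ng/mL:
ok = meets_criteria([48.1, 51.3, 49.6, 52.0, 47.5], nominal=50.0)
```

The same check is run at each of the three (or more) concentration levels the guidance recommends.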
Precision is further subdivided into within-run, intra-batch precision or repeatability, which assesses precision during a single analytical run, and between-run, inter-batch precision or repeatability, which measures precision with time, and may involve different analysts, equipment, reagents, and laboratories.

The recovery of an analyte in an assay is the detector response obtained from an amount of the analyte added to and extracted from the biological matrix, compared to the detector response obtained for the true concentration of the pure authentic standard. Recovery pertains to the extraction efficiency of an analytical method within the limits of variability. Recovery of the analyte need not be 100%, but the extent of recovery of an analyte and of the internal standard should be consistent, precise, and reproducible. Recovery experiments should be performed by comparing the analytical results for extracted samples at three concentrations (low, medium, and high) with unextracted standards that represent 100% recovery.

C. Calibration/Standard Curve

A calibration (standard) curve is the relationship between instrument response and known concentrations of the analyte. A calibration curve should be generated for each analyte in the sample. A sufficient number of standards should be used to adequately define the relationship between concentration and response. A calibration curve should be prepared in the same biological matrix as the samples in the intended study by spiking the matrix with known concentrations of the analyte. The number of standards used in constructing a calibration curve will be a function of the anticipated range of analytical values and the nature of the analyte/response relationship. Concentrations of standards should be chosen on the basis of the concentration range expected in a particular study.
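The recovery comparison described in section IV.B above, extracted-sample response against an unextracted standard representing 100% recovery, reduces to a simple ratio. The peak areas below are illustrative:

```python
def recovery_pct(extracted_response: float, unextracted_response: float) -> float:
    """Extraction recovery: detector response of the extracted spiked
    sample relative to an unextracted standard at the same concentration."""
    return 100.0 * extracted_response / unextracted_response

# Illustrative peak areas for a mid-level QC sample:
rec = recovery_pct(extracted_response=8.2e5, unextracted_response=9.6e5)
```

The guidance asks for this comparison at low, medium, and high concentrations, and for consistency and reproducibility rather than any particular percentage.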
A calibration curve should consist of a blank sample (matrix sample processed without internal standard), a zero sample (matrix sample processed with internal standard), and six to eight non-zero samples covering the expected range, including LLOQ.

1. Lower Limit of Quantification (LLOQ)

The lowest standard on the calibration curve should be accepted as the limit of quantification if the following conditions are met:

• The analyte response at the LLOQ should be at least 5 times the response compared to blank response.
• Analyte peak (response) should be identifiable, discrete, and reproducible with a precision of 20% and accuracy of 80-120%.

2. Calibration Curve/Standard Curve/Concentration-Response

The simplest model that adequately describes the concentration-response relationship should be used. Selection of weighting and use of a complex regression equation should be justified. The following conditions should be met in developing a calibration curve:

• ≤20% deviation of the LLOQ from nominal concentration
• ≤15% deviation of standards other than LLOQ from nominal concentration

At least four out of six non-zero standards should meet the above criteria, including the LLOQ and the calibration standard at the highest concentration. Excluding the standards should not change the model used.

D. Stability

Drug stability in a biological fluid is a function of the storage conditions, the chemical properties of the drug, the matrix, and the container system. The stability of an analyte in a particular matrix and container system is relevant only to that matrix and container system and should not be extrapolated to other matrices and container systems. Stability procedures should evaluate the stability of the analytes during sample collection and handling, after long-term (frozen at the intended storage temperature) and short-term (bench top, room temperature) storage, and after going through freeze and thaw cycles and the analytical process.
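The calibration-curve acceptance rules in section IV.C.2 above (no more than 20% deviation at the LLOQ and 15% elsewhere, with at least four of six non-zero standards passing, including the LLOQ and the highest standard) can be sketched as follows; the back-calculated values are illustrative:

```python
def standard_passes(back_calc: float, nominal: float, is_lloq: bool) -> bool:
    """A standard passes when its back-calculated concentration deviates
    from nominal by <=20% at the LLOQ, or <=15% otherwise."""
    limit = 0.20 if is_lloq else 0.15
    return abs(back_calc - nominal) / nominal <= limit

def curve_acceptable(standards: list[tuple[float, float, bool]]) -> bool:
    """`standards` holds (back_calculated, nominal, is_lloq) tuples for
    the non-zero calibration standards. The curve is acceptable when at
    least four of six standards pass, including the LLOQ and the
    highest-concentration standard."""
    results = [standard_passes(b, n, lloq) for b, n, lloq in standards]
    lloq_ok = all(ok for (_, _, lloq), ok in zip(standards, results) if lloq)
    highest = max(range(len(standards)), key=lambda i: standards[i][1])
    return sum(results) >= 4 and lloq_ok and results[highest]

# Six non-zero standards as (back-calculated, nominal, is_lloq):
curve = [(1.1, 1.0, True), (2.1, 2.0, False), (4.6, 5.0, False),
         (9.0, 10.0, False), (26.0, 20.0, False), (49.0, 50.0, False)]
ok = curve_acceptable(curve)  # one mid-level failure is tolerated
```

Note that a failing LLOQ or failing top standard rejects the curve even when four other standards pass.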
Conditions used in stability experiments should reflect situations likely to be encountered during actual sample handling and analysis. The procedure should also include an evaluation of analyte stability in stock solution.

All stability determinations should use a set of samples prepared from a freshly made stock solution of the analyte in the appropriate analyte-free, interference-free biological matrix. Stock solutions of the analyte for stability evaluation should be prepared in an appropriate solvent at known concentrations.

1. Freeze and Thaw Stability

Analyte stability should be determined after three freeze and thaw cycles. At least three aliquots at each of the low and high concentrations should be stored at the intended storage temperature for 24 hours and thawed unassisted at room temperature. When completely thawed, the samples should be refrozen for 12 to 24 hours under the same conditions. The freeze-thaw cycle should be repeated two more times, then analyzed on the third cycle. If an analyte is unstable at the intended storage temperature, the stability sample should be frozen at -70°C during the three freeze and thaw cycles.

2. Short-Term Temperature Stability

Three aliquots of each of the low and high concentrations should be thawed at room temperature and kept at this temperature from 4 to 24 hours (based on the expected duration that samples will be maintained at room temperature in the intended study) and analyzed.

3. Long-Term Stability

The storage time in a long-term stability evaluation should exceed the time between the date of first sample collection and the date of last sample analysis. Long-term stability should be determined by storing at least three aliquots of each of the low and high concentrations under the same conditions as the study samples. The volume of samples should be sufficient for analysis on three separate occasions.
The concentrations of all the stability samples should be compared to the mean of back-calculated values for the standards at the appropriate concentrations from the first day of long-term stability testing.

4. Stock Solution Stability

The stability of stock solutions of drug and the internal standard should be evaluated at room temperature for at least 6 hours. If the stock solutions are refrigerated or frozen for the relevant period, the stability should be documented. After completion of the desired storage time, the stability should be tested by comparing the instrument response with that of freshly prepared solutions.

5. Post-Preparative Stability

The stability of processed samples, including the resident time in the autosampler, should be determined. The stability of the drug and the internal standard should be assessed over the anticipated run time for the batch size in validation samples by determining concentrations on the basis of original calibration standards.

Although the traditional approach of comparing analytical results for stored samples with those for freshly prepared samples has been referred to in this guidance, other statistical approaches based on confidence limits for evaluation of an analyte's stability in a biological matrix can be used. SOPs should clearly describe the statistical method and rules used. Additional validation may include investigation of samples from dosed subjects.

E. Principles of Bioanalytical Method Validation and Establishment

• The fundamental parameters to ensure the acceptability of the performance of a bioanalytical method validation are accuracy, precision, selectivity, sensitivity, reproducibility, and stability.
• A specific, detailed description of the bioanalytical method should be written.
This can be in the form of a protocol, study plan, report, and/or SOP.
• Each step in the method should be investigated to determine the extent to which environmental, matrix, material, or procedural variables can affect the estimation of analyte in the matrix from the time of collection of the material up to and including the time of analysis.
• It may be important to consider the variability of the matrix due to the physiological nature of the sample. In the case of LC-MS-MS-based procedures, appropriate steps should be taken to ensure the lack of matrix effects throughout the application of the method, especially if the nature of the matrix changes from the matrix used during method validation.
• A bioanalytical method should be validated for the intended use or application. All experiments used to make claims or draw conclusions about the validity of the method should be presented in a report (method validation report).
• Whenever possible, the same biological matrix as the matrix in the intended samples should be used for validation purposes. (For tissues of limited availability, such as bone marrow, physiologically appropriate proxy matrices can be substituted.)
• The stability of the analyte (drug and/or metabolite) in the matrix during the collection process and the sample storage period should be assessed, preferably prior to sample analysis.
• For compounds with potentially labile metabolites, the stability of analyte in matrix from dosed subjects (or species) should be confirmed.
• The accuracy, precision, reproducibility, response function, and selectivity of the method for endogenous substances, metabolites, and known degradation products should be established for the biological matrix.
For selectivity, there should be evidence that the substance being quantified is the intended analyte.
• The concentration range over which the analyte will be determined should be defined in the bioanalytical method, based on evaluation of actual standard samples over the range, including their statistical variation. This defines the standard curve.
• A sufficient number of standards should be used to adequately define the relationship between concentration and response. The relationship between response and concentration should be demonstrated to be continuous and reproducible. The number of standards used should be a function of the dynamic range and nature of the concentration-response relationship. In many cases, six to eight concentrations (excluding blank values) can define the standard curve. More standard concentrations may be recommended for nonlinear than for linear relationships.
• The ability to dilute samples originally above the upper limit of the standard curve should be demonstrated by accuracy and precision parameters in the validation.
• In consideration of high throughput analyses, including but not limited to multiplexing, multicolumn, and parallel systems, sufficient QC samples should be used to ensure control of the assay. The number of QC samples to ensure proper control of the assay should be determined based on the run size. The placement of QC samples should be judiciously considered in the run.
• For a bioanalytical method to be considered valid, specific acceptance criteria should be set in advance and achieved for accuracy and precision for the validation of QC samples over the range of the standards.

F. Specific Recommendations for Method Validation

• The matrix-based standard curve should consist of a minimum of six standard points, excluding blanks, using single or replicate samples.
The standard curve should cover the entire range of expected concentrations.
• Standard curve fitting is determined by applying the simplest model that adequately describes the concentration-response relationship using appropriate weighting and statistical tests for goodness of fit.
• LLOQ is the lowest concentration of the standard curve that can be measured with acceptable accuracy and precision. The LLOQ should be established using at least five samples independent of standards and determining the coefficient of variation and/or appropriate confidence interval. The LLOQ should serve as the lowest concentration on the standard curve and should not be confused with the limit of detection and/or the low QC sample. The highest standard will define the upper limit of quantification (ULOQ) of an analytical method.
• For validation of the bioanalytical method, accuracy and precision should be determined using a minimum of five determinations per concentration level (excluding blank samples). The mean value should be within ±15% of the theoretical value, except at LLOQ, where it should not deviate by more than ±20%. The precision around the mean value should not exceed 15% of the CV, except for LLOQ, where it should not exceed 20% of the CV. Other methods of assessing accuracy and precision that meet these limits may be equally acceptable.
• The accuracy and precision with which known concentrations of analyte in biological matrix can be determined should be demonstrated. This can be accomplished by analysis of replicate sets of analyte samples of known concentrations (QC samples) from an equivalent biological matrix.
At a minimum, three concentrations representing the entire range of the standard curve should be studied: one within 3x the lower limit of quantification (LLOQ) (low QC sample), one near the center (middle QC), and one near the upper boundary of the standard curve (high QC).
• Reported method validation data and the determination of accuracy and precision should include all outliers; however, calculations of accuracy and precision excluding values that are statistically determined as outliers can also be reported.
• The stability of the analyte in biological matrix at intended storage temperatures should be established. The influence of freeze-thaw cycles (a minimum of three cycles at two concentrations in triplicate) should be studied.
• The stability of the analyte in matrix at ambient temperature should be evaluated over a time period equal to the typical sample preparation, sample handling, and analytical run times.
• Reinjection reproducibility should be evaluated to determine if an analytical run could be reanalyzed in the case of instrument failure.
• The specificity of the assay methodology should be established using a minimum of six independent sources of the same matrix.
For hyphenated mass spectrometry-based methods, however, testing six independent matrices for interference may not be important. In the case of LC-MS and LC-MS-MS-based procedures, matrix effects should be investigated to ensure that precision, selectivity, and sensitivity will not be compromised. Method selectivity should be evaluated during method development and throughout method validation and can continue throughout application of the method to actual study samples.
• Acceptance/rejection criteria for spiked, matrix-based calibration standards and validation QC samples should be based on the nominal (theoretical) concentration of analytes. Specific criteria can be set up in advance and achieved for accuracy and precision over the range of the standards, if so desired.

V. METHOD DEVELOPMENT: MICROBIOLOGICAL AND LIGAND-BINDING ASSAYS

Many of the bioanalytical validation parameters and principles discussed above are also applicable to microbiological and ligand-binding assays. However, these assays possess some unique characteristics that should be considered during method validation.

A. Selectivity Issues

As with chromatographic methods, microbiological and ligand-binding assays should be shown to be selective for the analyte. The following recommendations for dealing with two selectivity issues should be considered:

1. Interference From Substances Physiochemically Similar to the Analyte

• Cross-reactivity of metabolites, concomitant medications, or endogenous compounds should be evaluated individually and in combination with the analyte of interest.
• When possible, the immunoassay should be compared with a validated reference method (such as LC-MS) using incurred samples and predetermined criteria for agreement of accuracy of immunoassay and reference method.

Summarizing Scientific Articles: Experiments with Relevance and Rhetorical Status
Simone Teufel (Cambridge University) and Marc Moens (Rhetorical Systems and University of Edinburgh)

In this article we propose a strategy for the summarization of scientific articles that concentrates on the rhetorical status of statements in an article: material for summaries is selected in such a way that summaries can highlight the new contribution of the source article and situate it with respect to earlier work. We provide a gold standard for summaries of this kind consisting of a substantial corpus of conference articles in computational linguistics annotated with human judgments of the rhetorical status and relevance of each sentence in the articles. We present several experiments measuring our judges' agreement on these annotations. We also present an algorithm that, on the basis of the annotated training material, selects content from unseen articles and classifies it into a fixed set of seven rhetorical categories. The output of this extraction and classification system can be viewed as a single-document summary in its own right; alternatively, it provides starting material for the generation of task-oriented and user-tailored summaries designed to give users an overview of a scientific field.

1. Introduction

Summarization systems are often two-phased, consisting of a content selection step followed by a regeneration step. In the first step, text fragments (sentences or clauses) are assigned a score that reflects how important or contentful they are. The highest-ranking material can then be extracted and displayed verbatim as "extracts" (Luhn 1958; Edmundson 1969; Paice 1990; Kupiec, Pedersen, and Chen 1995). Extracts are often useful in an information retrieval environment since they give users an idea as to what the source document is
about (Tombros and Sanderson 1998; Mani et al. 1999), but they are texts of relatively low quality. Because of this, it is generally accepted that some kind of postprocessing should be performed to improve the final result, by shortening, fusing, or otherwise revising the material (Grefenstette 1998; Mani, Gates, and Bloedorn 1999; Jing and McKeown 2000; Barzilay et al. 2000; Knight and Marcu 2000).

The extent to which it is possible to do postprocessing is limited, however, by the fact that contentful material is extracted without information about the general discourse context in which the material occurred in the source text. For instance, a sentence describing the solution to a scientific problem might give the main contribution of the paper, but it might also refer to a previous approach that the authors criticize. Depending on its rhetorical context, the same sentence should be treated very differently in a summary. We propose in this article a method for sentence and content selection from source texts that adds context in the form of information about the rhetorical role the extracted material plays in the source text. This added contextual information can then be used to make the end product more informative and more valuable than sentence extracts.

[∗ Simone Teufel, Computer Laboratory, Cambridge University, JJ Thomson Avenue, Cambridge, CB3 0FD, England. E-mail: Simone.Teufel@ † Marc Moens, Rhetorical Systems and University of Edinburgh, 2 Buccleuch Place, Edinburgh, EH8 9LS, Scotland. E-mail: marc@ © 2002 Association for Computational Linguistics. Computational Linguistics, Volume 28, Number 4]

Our application domain is the summarization of scientific articles. Summarization of such texts requires a different approach from, for example, that used in the summarization of news articles. For example, Barzilay, McKeown, and Elhadad (1999) introduce the concept of information fusion, which is based on the identification of recurrent descriptions of the same events in news articles. This approach works well because in the news domain, newsworthy events are frequently repeated over a short period of time. In scientific writing, however, similar "events" are rare: The main focus is on new scientific ideas, whose main characteristic is their uniqueness and difference from previous ideas. Other approaches to the summarization of news articles make use of the typical journalistic writing style, for example, the fact that the most newsworthy information comes first; as a result, the first few sentences of a news article are good candidates for a summary (Brandow, Mitze, and Rau 1995; Lin and Hovy 1997). The structure of scientific articles does not reflect relevance this explicitly. Instead, the introduction often starts with general statements about the importance of the topic and its history in the field; the actual contribution of the paper itself is often given much later.

The length of scientific articles presents another problem. Let us assume that our overall summarization strategy is first to select relevant sentences or concepts, and then to synthesize summaries using this material. For a typical 10- to 20-sentence news wire story, a compression to 20% or 30% of the source provides a reasonable input set for the second step. The extracted sentences are still thematically connected, and concepts in the sentences are not taken completely out of context. In scientific articles, however, the compression rates have to be much higher: Shortening a 20-page journal article to a half-page summary requires a compression to 2.5% of the original.
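As a concrete illustration of the context-insensitive first phase described above, the sketch below scores sentences by simple word frequency in the spirit of Luhn (1958) and keeps the top fraction given by a compression rate. This is an illustrative toy, not the authors' system; the stop-word list and scoring are made-up choices:

```python
import re
from collections import Counter

def extract(text, rate=0.25):
    """Naive Luhn-style sentence extraction: score each sentence by the
    corpus frequency of its words, keep the top `rate` fraction,
    and return the survivors in document order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-z]+", text.lower())
    stop = {"the", "a", "an", "of", "in", "to", "and", "is", "are", "for", "on"}
    freq = Counter(w for w in words if w not in stop)
    def score(s):
        toks = re.findall(r"[a-z]+", s.lower())
        return sum(freq[t] for t in toks if t not in stop) / (len(toks) or 1)
    k = max(1, round(len(sentences) * rate))
    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]  # restore document order

doc = ("Clustering groups words by context. Clustering of words needs data. "
       "The weather was nice. Context data improves clustering of words.")
print(extract(doc, rate=0.5))
# → ['Clustering groups words by context.', 'Context data improves clustering of words.']
```

Note that nothing in the selected list records *why* a sentence was kept or how the survivors relate to one another, which is exactly the deficiency the article's rhetorical-status approach addresses.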
Here, the problematic fact that sentence selection is context insensitive does make a qualitative difference. If only one sentence per two pages is selected, all information about how the extracted sentences and their concepts relate to each other is lost; without additional information, it is difficult to use the selected sentences as input to the second stage.

We present an approach to summarizing scientific articles that is based on the idea of restoring the discourse context of extracted material by adding the rhetorical status to each sentence in a document. The innovation of our approach is that it defines principles for content selection specifically for scientific articles and that it combines sentence extraction with robust discourse analysis. The output of our system is a list of extracted sentences along with their rhetorical status (e.g., sentence 11 describes the scientific goal of the paper, and sentence 9 criticizes previous work), as illustrated in Figure 1. (The example paper we use throughout the article is F. Pereira, N. Tishby, and L. Lee's "Distributional Clustering of English Words" [ACL-1993, cmp lg/9408011]; it was chosen because it is the paper most often cited within our collection.) Such lists serve two purposes: in themselves, they already provide a better characterization of scientific articles than sentence extracts do, and in the longer run, they will serve as better input material for further processing.

Figure 1: Extract of system output for example paper.
AIM
  10   Our research addresses some of the same questions and uses similar raw data, but we investigate how to factor word association tendencies into associations of words to certain hidden senses classes and associations between the classes themselves.
  11   While it may be worthwhile to base such a model on preexisting sense classes (Resnik, 1992), in the work described here we look at how to derive the classes directly from distributional data.
  162  We have demonstrated that a general divisive clustering procedure for probability distributions can be used to group words according to their participation in particular grammatical relations with other words.
BASIS
  19   The corpus used in our first experiment was derived from newswire text automatically parsed by Hindle's parser Fidditch (Hindle, 1993).
  113  The analogy with statistical mechanics suggests a deterministic annealing procedure for clustering (Rose et al., 1990), in which the number of clusters is determined through a sequence of phase transitions by continuously increasing the parameter EQN following an annealing schedule.
CONTRAST
  9    His notion of similarity seems to agree with our intuitions in many cases, but it is not clear how it can be used directly to construct word classes and corresponding models of association.
  14   Class construction is then combinatorially very demanding and depends on frequency counts for joint events involving particular words, a potentially unreliable source of information as we noted above.

An extrinsic evaluation (Teufel 2001) shows that the output of our system is already a useful document surrogate in its own right. But postprocessing could turn the rhetorical extracts into something even more valuable: The added rhetorical context allows for the creation of a new kind of summary. Consider, for instance, the user-oriented and task-tailored summaries shown in Figures 2 and 3. Their composition was guided by fixed building plans for different tasks and different user models, whereby the building blocks are defined as sentences of a specific rhetorical status. In our example, most textual material is extracted verbatim (additional material is underlined in Figures 2 and 3; the original sentences are given in Figure 5). The first example is a short abstract generated for a nonexpert user and for general information; its first two sentences give background information about the problem tackled. The second abstract is aimed at an expert; therefore, no background is given, and instead differences between this approach and similar ones are described.

Figure 2: Nonexpert summary, general purpose.
  0    This paper's topic is to automatically classify words according to their contexts of use.
  4    The problem is that for large enough corpora the number of possible joint events is much larger than the number of event occurrences in the corpus, so many events are seen rarely or never, making their frequency counts unreliable estimates of their probabilities.
  162  This paper's specific goal is to group words according to their participation in particular grammatical relations with other words, 22 more specifically to classify nouns according to their distribution as direct objects of verbs.

Figure 3: Expert summary, contrastive links.
  44   This paper's goal is to organise a set of linguistic objects such as words according to the contexts in which they occur, for instance grammatical constructions or n-grams. 22 More specifically: the goal is to classify nouns according to their distribution as direct objects of verbs. 5 Unlike Hindle (1990), 9 this approach constructs word classes and corresponding models of association directly. 14 In comparison to Brown et al. (1992), the method is combinatorially less demanding and does not depend on frequency counts for joint events involving particular words, a potentially unreliable source of information.

The actual construction of these summaries is a complex process involving tasks such as sentence planning, lexical choice, and syntactic realization, tasks that are outside the scope of this article. The important point is that it is the knowledge about the rhetorical status of the sentences that enables the tailoring of the summaries according to users' expertise and task. The rhetorical status allows for other kinds of applications too: Several articles can be summarized together, contrasts or complementarity among articles can be expressed, and summaries can be displayed together with citation links to help users navigate several
related papers.

The rest of this article is structured as follows: Section 2 describes the theoretical and empirical aspects of document structure we model in this article. These aspects include rhetorical status and relatedness:

• Rhetorical status in terms of problem solving: What is the goal and contribution of the paper? This type of information is often marked by metadiscourse and by conventional patterns of presentation (cf. section 2.1).
• Rhetorical status in terms of intellectual attribution: What information is claimed to be new, and which statements describe other work? This type of information can be recognized by following the "agent structure" of text, that is, by looking at all grammatical subjects occurring in sequence (cf. section 2.2).
• Relatedness among articles: What articles is this work similar to, and in what respect? This type of information can be found by examining fixed indicator phrases like in contrast to..., section headers, and citations (cf. section 2.3).

These aspects of rhetorical status are encoded in an annotation scheme that we present in section 2.4. Annotation of relevance is covered in section 2.5. In section 3, we report on the construction of a gold standard for rhetorical status and relevance and on the measurement of agreement among human annotators. We then describe in section 4 our system that simulates the human annotation. Section 5 presents an overview of the intrinsic evaluation we performed, and section 6 closes with a summary of the contribution of this work, its limitations, and suggestions for future work.

2. Rhetorical Status, Citations, and Relevance

It is important for our task to find the right definition of rhetorical status to describe the content in scientific articles. The definition should both capture generalizations about the nature of scientific texts and also provide the right kind of information to enable the construction of better summaries for a practical application. Another requirement is that the analysis should be applicable to research articles from different presentational traditions and subject matters.

For the development of our scheme, we used the chronologically first 80 articles in our corpus of conference articles in computational linguistics (articles presented at COLING, ANLP, and (E)ACL conferences or workshops). Because of the interdisciplinarity of the field, the papers in this collection cover a challenging range of subject matters, such as logic programming, statistical language modeling, theoretical semantics, computational dialectology, and computational psycholinguistics. Also, the research methodology and tradition of presentation is very different among these fields (computer scientists write very different papers than theoretical linguists). We thus expect our analysis to be equally applicable in a wider range of disciplines and subdisciplines other than those named.

2.1 Rhetorical Status

Our model relies on the following dimensions of document structure in scientific articles.

Problem structure. Research is often described as a problem-solving activity (Jordan 1984; Trawinski 1989; Zappen 1983). Three information types can be expected to occur in any research article: problems (research goals), solutions (methods), and results. In many disciplines, particularly the experimental sciences, this problem-solution structure has been crystallized in a fixed presentation of the scientific material as introduction, method, result, and discussion (van Dijk 1980). But many texts in computational linguistics do not adhere to this presentation, and our analysis therefore has to be based on the underlying logical (rhetorical) organization, using textual representation only as an indication.

Intellectual attribution. Scientific texts should make clear what the new contribution is, as opposed to previous work (specific other researchers' approaches) and background material (generally accepted statements). We noticed that intellectual attribution has a segmental character. Statements in a segment without any explicit attribution are often interpreted as belonging to the most recent explicit attribution statement (e.g., Other researchers claim that). Our rhetorical scheme assumes that readers have no difficulty in understanding intellectual attribution, an assumption that we verified experimentally.

Scientific argumentation. In contrast to the view of science as a disinterested "fact factory," researchers like Swales (1990) have long claimed that there is a strong social aspect to science, because the success of a researcher is correlated with her ability to convince the field of the quality of her work and the validity of her arguments. Authors construct an argument that Myers (1992) calls the "rhetorical act of the paper": The statement that their work is a valid contribution to science. Swales breaks down this "rhetorical act" into single, nonhierarchical argumentative moves (i.e., rhetorically coherent pieces of text, which perform the same communicative function). His Constructing a Research Space (CARS) model shows how patterns of these moves can be used to describe the rhetorical structure of introduction sections of physics articles.
Importantly, Swales's moves describe the rhetorical status of a text segment with respect to the overall message of the document, and not with respect to adjacent text segments.

Attitude toward other people's work. We are interested in how authors include reference to other work into their argument. In the flow of the argument, each piece of other work is mentioned for a specific reason: it is portrayed as a rival approach, as a prior approach with a fault, or as an approach contributing parts of the authors' own solution. In well-written papers, this relation is often expressed in an explicit way. The next section looks at the stylistic means available to the author to express the connection between previous approaches and their own work.

2.2 Metadiscourse and Agentivity

Explicit metadiscourse is an integral aspect of scientific argumentation and a way of expressing attitude toward previous work. Examples for metadiscourse are phrases like we argue that and in contrast to common belief, we. Metadiscourse is ubiquitous in scientific writing: Hyland (1998) found a metadiscourse phrase on average after every 15 words in running text.

A large proportion of scientific metadiscourse is conventionalized, particularly in the experimental sciences, and particularly in the methodology or result section (e.g., we present original work..., or An ANOVA analysis revealed a marginal interaction/a main effect of...). Swales (1990) lists many such fixed phrases as co-occurring with the moves of his CARS model (pages 144, 154-158, 160-161). They are useful indicators of overall importance (Pollock and Zamora 1975); they can also be relatively easily recognized with information extraction techniques (e.g., regular expressions). Paice (1990) introduces grammars for pattern matching of indicator phrases, e.g., the aim/purpose of this paper/article/study and we conclude/propose.

Apart from this conventionalized metadiscourse, we noticed that our corpus contains a large number of metadiscourse statements that are less formalized: statements about aspects of the problem-solving process or the relation to other work. Figure 4, for instance, shows that there are many ways to say that one's research is based on somebody else's ("research continuation"). The sentences do not look similar on the surface: The syntactic subject can be the authors, the originators of the method, or even the method itself. Also, the verbs are very different (base, be related, use, follow). Some sentences use metaphors of change and creation. The wide range of linguistic expression we observed presents a challenge for recognition and correct classification using standard information extraction patterns.

With respect to agents occurring in scientific metadiscourse, we make two suggestions: (1) that scientific argumentation follows prototypical patterns and employs recurrent types of agents and actions and (2) that it is possible to recognize many of these automatically. Agents play fixed roles in the argumentation, and there are so few of these roles that they can be enumerated: agents appear as rivals, as contributors of part of the solution (they), as the entire research community in the field, or as the authors of the paper themselves (we). Note the similarity of agent roles to the three kinds of intellectual attribution mentioned above. We also propose prototypical actions frequently occurring in scientific discourse: the field might agree, a particular researcher can suggest something, and a certain solution could either fail or be successful. In section 4 we will describe the three features used in our implementation that recognize metadiscourse.

Figure 4: Statements expressing research continuation, with source article number.
• We employ Suzuki's algorithm to learn case frame patterns as dendroid distributions. (9605013)
• Our method combines similarity-based estimates with Katz's back-off scheme, which is widely used for language modeling in speech recognition. (9405001)
• Thus, we base our model on the work of Clark and Wilkes-Gibbs (1986), and Heeman and Hirst (1992)... (9405013)
• The starting point for this work was Scha and Polanyi's discourse grammar (Scha and Polanyi, 1988; Pruest et al., 1994). (9502018)
• We use the framework for the allocation and transfer of control of Whittaker and Stenton (1988). (9504007)
• Following Laur (1993), we consider simple prepositions (like "in") as well as prepositional phrases (like "in front of"). (9503007)
• Our lexicon is based on a finite-state transducer lexicon (Karttunen et al., 1992). (9503004)
• Instead of... we will adopt a simpler, monostratal representation that is more closely related to those found in dependency grammars (e.g., Hudson (1984)). (9408014)

Another important construct that expresses relations to other researchers' work is formal citations, to which we will now turn.

2.3 Citations and Relatedness

Citation indexes are constructs that contain pointers between cited texts and citing texts (Garfield 1979), traditionally in printed form. When done on-line (as in CiteSeer [Lawrence, Giles, and Bollacker 1999], or as in Nanba and Okumura's [1999] work), citations are presented in context for users to browse. Browsing each citation is time-consuming, but useful: just knowing that an article cites another is often not enough.
One needs to read the context of the citation to understand the relation between the articles. Citations may vary in many dimensions; for example, they can be central or perfunctory, positive or negative (i.e., critical); apart from scientific reasons, there is also a host of social reasons for citing ("politeness, tradition, piety" [Ziman 1969]). We concentrate on two citation contexts that are particularly important for the information needs of researchers:

• Contexts in which an article is cited negatively or contrastively.
• Contexts in which an article is cited positively or in which the authors state that their own work originates from the cited work.

A distinction among these contexts would enable us to build more informative citation indexes. We suggest that such a rhetorical distinction can be made manually and automatically for each citation; we use a large corpus of scientific papers along with humans' judgments of this distinction to train a system to make such distinctions.

2.4 The Rhetorical Annotation Scheme

Our rhetorical annotation scheme (cf. Table 1) encodes the aspects of scientific argumentation, metadiscourse, and relatedness to other work described before. The categories are assigned to full sentences, but a similar scheme could be developed for clauses or phrases. The annotation scheme is nonoverlapping and nonhierarchical, and each sentence must be assigned to exactly one category. As adjacent sentences of the same status can be considered to form zones of the same rhetorical status, we call the units rhetorical zones. The shortest of these zones are one sentence long.

The rhetorical status of a sentence is determined on the basis of the global context of the paper. For instance, whereas the OTHER category describes all neutral descriptions of other researchers' work, the categories BASIS and CONTRAST are applicable to sentences expressing a research continuation relationship or a contrast to other work.
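A minimal cue-phrase sketch of the two citation contexts distinguished above might look as follows. This is an illustration only, not the authors' trained system, and the cue lists are hypothetical:

```python
import re

# Hypothetical cue phrases; the paper's actual system learns such
# distinctions from annotated data rather than from a fixed list.
CONTRAST_CUES = [r"\bunlike\b", r"\bin contrast to\b", r"\bhowever\b",
                 r"\bfails? to\b", r"\bproblem with\b", r"\bwhereas\b"]
BASIS_CUES = [r"\bfollowing\b", r"\bbased on\b", r"\bwe adopt\b",
              r"\bwe use\b", r"\bstarting point\b", r"\bwe employ\b"]

def classify_citation_context(sentence):
    """Label a citing sentence CONTRAST, BASIS, or OTHER by cue phrases."""
    s = sentence.lower()
    if any(re.search(p, s) for p in CONTRAST_CUES):
        return "CONTRAST"
    if any(re.search(p, s) for p in BASIS_CUES):
        return "BASIS"
    return "OTHER"

print(classify_citation_context(
    "Unlike Hindle (1990), this approach constructs word classes directly."))
# → CONTRAST
print(classify_citation_context(
    "Following Laur (1993), we consider simple prepositions."))
# → BASIS
```

As the article's Figure 4 shows, surface cues like these cover only part of the phenomenon, which is precisely why the authors move to trained classification over annotated data.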
Generally accepted knowledge is classified as BACKGROUND, whereas the author's own work is separated into the specific research goal (AIM) and all other statements about the author's own work (OWN).

Table 1: Annotation scheme for rhetorical status.

AIM         Specific research goal of the current paper
TEXTUAL     Statements about section structure
OWN         (Neutral) description of own work presented in current paper: methodology, results, discussion
BACKGROUND  Generally accepted scientific background
CONTRAST    Statements of comparison with or contrast to other work; weaknesses of other work
BASIS       Statements of agreement with other work or continuation of other work
OTHER       (Neutral) description of other researchers' work

The annotation scheme expresses important discourse and argumentation aspects of scientific articles, but with its seven categories it is not designed to model the full complexity of scientific texts. The category OWN, for instance, could be further subdivided into method (solution), results, and further work, which is not done in the work reported here. There is a conflict between explanatory power and the simplicity necessary for reliable human and automatic classification, and we decided to restrict ourselves to the rhetorical distinctions that are most salient and potentially most useful for several information access applications. The user-tailored summaries and more informative citation indexes we mentioned before are just two such applications; another one is the indexing and previewing of the internal structure of the article. To make such indexing and previewing possible, our scheme contains the additional category TEXTUAL, which captures previews of section structure (section 2 describes our data...).
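The seven-category scheme in Table 1 can be represented directly. The sketch below encodes the constraint that each sentence receives exactly one known category and groups adjacent same-category sentences into rhetorical zones; it is an illustrative data model, not the authors' implementation:

```python
from itertools import groupby

CATEGORIES = {"AIM", "TEXTUAL", "OWN", "BACKGROUND", "CONTRAST", "BASIS", "OTHER"}

def zones(labels):
    """Collapse a per-sentence label sequence into rhetorical zones:
    (category, first_sentence_index, last_sentence_index)."""
    for lab in labels:
        if lab not in CATEGORIES:  # exactly one known category per sentence
            raise ValueError(f"unknown category: {lab}")
    result, i = [], 0
    for lab, run in groupby(labels):
        n = len(list(run))
        result.append((lab, i, i + n - 1))
        i += n
    return result

labels = ["BACKGROUND", "BACKGROUND", "OTHER", "CONTRAST", "AIM", "OWN", "OWN"]
print(zones(labels))
# → [('BACKGROUND', 0, 1), ('OTHER', 2, 2), ('CONTRAST', 3, 3), ('AIM', 4, 4), ('OWN', 5, 6)]
```

The zone view makes the "shortest zones are one sentence long" property explicit: singleton runs simply become zones whose start and end indices coincide.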
Such previews would make it possible to label sections with the author's indication of their contents.

Our rhetorical analysis, as noted above, is nonhierarchical, in contrast to Rhetorical Structure Theory (RST) (Mann and Thompson 1987; Marcu 1999), and it concerns text pieces at a lower level of granularity. Although we do agree with RST that the structure of text is hierarchical in many cases, it is our belief that the relevance and function of certain text pieces can be determined without analyzing the full hierarchical structure of the text. Another difference between our analysis and that of RST is that our analysis aims at capturing the rhetorical status of a piece of text in respect to the overall message, and not in relation to adjacent pieces of text.

2.5 Relevance

As our immediate goal is to select important content from a text, we also need a second set of gold standards that are defined by relevance (as opposed to rhetorical status). Relevance is a difficult issue because it is situational to a unique occasion (Saracevic 1975; Sparck Jones 1990; Mizzaro 1997): Humans perceive relevance differently from each other and differently in different situations. Paice and Jones (1993) report that they abandoned an informal sentence selection experiment in which they used agriculture articles and experts in the field as participants, as the participants were too strongly influenced by their personal research interest.

As a result of subjectivity, a number of human sentence extraction experiments over the years have resulted in low agreement figures. Rath, Resnick, and Savage (1961) report that six participants agreed on only 8% of 20 sentences they were asked to select out of short Scientific American texts and that five agreed on 32% of the sentences. They found that after six weeks, subjects selected on average only 55% of the sentences they themselves selected previously. Edmundson and Wyllys (1961) find similarly low human agreement for research articles. More recent experiments reporting more positive results all used news text (Jing et al. 1998; Zechner 1995). As discussed above, the compression rates on news texts are far lower: there are fewer sentences from which to choose, making it easier to agree on which ones to select. Sentence selection from scientific texts also requires more background knowledge, thus importing an even higher level of subjectivity into sentence selection experiments.

Recently, researchers have been looking for more objective definitions of relevance. Kupiec, Pedersen, and Chen (1995) define relevance by abstract similarity: A sentence in a document is considered relevant if it shows a high level of similarity to a sentence in the abstract. This definition of relevance has the advantage that it is fixed (i.e., the researchers have no influence over it). It relies, however, on two assumptions: that the writing style is such that there is a high degree of overlap between sentences in the abstract and in the main text and that the abstract is indeed the target output that is most adequate for the final task.

In our case, neither assumption holds. First, the experiments in Teufel and Moens (1997) showed that in our corpus only 45% of the abstract sentences appear elsewhere in the body of the document (either as a close variant or in identical form), whereas Kupiec, Pedersen, and Chen report a figure of 79%. We believe that the reason for the difference is that in our case the abstracts were produced by the document authors and by professional abstractors in Kupiec, Pedersen, and Chen's case. Author summaries tend to be less systematic (Rowley 1982) and more "deep generated," whereas summaries by professional abstractors follow an internalized building plan (Liddy 1991) and are often created through sentence extraction (Lancaster 1998). Second, and more importantly, the abstracts and improved citation indexes we intend to generate are not modeled on traditional summaries, which do not provide the type of information needed for the applications we have in mind. Information about related work plays an important role in our strategy for summarization and citation indexing, but such information is rarely found in abstracts. We empirically found that the rhetorical status of information occurring in author abstracts is very limited and consists mostly of information about the goal of the paper and specifics of the solution. Details of the analysis we conducted on this topic are given in section 3.2.2.

We thus decided to augment our corpus with an independent set of human judgments of relevance. We wanted to replace the vague definition of relevance often used in sentence extraction experiments with a more operational definition based on rhetorical status. For instance, a sentence is considered relevant only if it describes the research goal or states a difference with a rival approach. More details of the instructions we used to make the relevance decisions are given in section 3. Thus, we have two parallel human annotations in our corpus: rhetorical annotation and relevance selection. In both tasks, each sentence in the articles is classified: Each sentence receives one rhetorical category and also the label irrelevant or relevant. This strategy can create redundant material (e.g., when the same fact is expressed in a sentence in the introduction, a sentence in the conclusion, and one in the middle of the document). But this redundancy also helps mitigate one of the main problems with sentence-based gold standards, namely, the fact that there is no one single best extract for a document. In our annotation, all qualifying sentences in the document are identified and classified into the same group, which makes later comparisons with system performance fairer. Also, later steps can not only find redundancy in the intermediate result and remove it, but also use the redundancy as an indication of importance.
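Kupiec, Pedersen, and Chen's similarity-based definition of relevance, discussed above, can be sketched as a word-overlap test between each document sentence and the sentences of the abstract. The overlap measure (Dice coefficient) and the threshold here are illustrative choices, not theirs:

```python
def word_overlap(a, b):
    """Dice coefficient over lowercase word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return 2 * len(wa & wb) / (len(wa) + len(wb) or 1)

def relevant_by_abstract(doc_sentences, abstract_sentences, threshold=0.5):
    """A sentence counts as relevant if it is sufficiently similar
    to at least one abstract sentence."""
    return [s for s in doc_sentences
            if any(word_overlap(s, a) >= threshold for a in abstract_sentences)]

abstract = ["we cluster nouns by their verb contexts"]
body = ["we cluster nouns by their verb contexts in a corpus",
        "the weather section is unrelated filler"]
print(relevant_by_abstract(body, abstract))
# → ['we cluster nouns by their verb contexts in a corpus']
```

The sketch also makes the article's objection concrete: when only 45% of abstract sentences reappear in the body (as in the authors' corpus), such a matcher leaves most of the abstract unaccounted for, which motivates their independent relevance annotation instead.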

LC-MS/MS Multiple-Reaction-Monitoring Screening Analysis of 132 Toxic Drugs in Blood

[Received] 2005-11-15. [Author] Shen Min (1955-), female, M.Sc., research professor, Director of the Institute of Forensic Science, Ministry of Justice, specializing in forensic toxicological chemistry.

Screening analysis of toxic drugs in biological specimens has long been an important research topic in forensic and clinical toxicology; high sensitivity and high reliability are basic requirements for a screening method to be of practical value. Blood is the most common and most important specimen in forensic toxicology because the blood drug concentration reflects the intensity of drug action and the degree of poisoning; limited by those low blood drug concentrations, however, blood has usually been reserved for quantitative analysis. With advances in analytical technology, highly sensitive and selective LC-MS and GC-MS techniques have made it possible to use low-concentration blood specimens for toxic drug screening.

The advantages of LC-MS in toxicological analysis [1-2] are that it: (1) can analyze conjugated metabolites directly; (2) can analyze polar drugs or polar metabolites without derivatization; (3) can analyze thermally labile drugs; (4) can analyze aqueous samples, simplifying sample preparation or allowing on-line sample processing; and (5) can directly yield a drug metabolism profile from parent drug to metabolites. Compared with GC-MS, LC-MS is applicable to a wider range of analytes, requires simpler sample preparation, and provides more informative results. LC-MS can therefore complement GC-MS as a gold standard in forensic toxicological analysis, clinical toxicological analysis, and doping control.

LC-MS 用于筛选分析可有三种模式[3]:(1)单级质谱LC-MS 的SCAN(全扫描)方式;(2)串联质谱LC-MS/MS 的DDE (动态数据交换)方式;(3)串联质谱LC-MS/MS 的MRM (多反应监测)方式。

LC-MS/MS 的MRM 技术兼具有选择离子扫描的灵敏度和二级质谱的特征性,适用于选定范围内的毒药物筛选分析。

本研究以低药浓的血液检材为对象,建立基于LC-MS/MS 的MRM 技术的血液中毒药物筛选和确认方法,并考察了方法的有效性。
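To illustrate the MRM screening idea, the following sketch matches measured precursor/product ion pairs and retention times against a small compound library. The library entries, names, and tolerances below are invented for illustration and are not the parameters of this study:

```python
# Hypothetical MRM screening library: each entry holds a precursor m/z,
# a product m/z, and an expected retention time (minutes). Illustrative only.
LIBRARY = {
    "morphine": {"precursor": 286.2, "product": 152.1, "rt": 2.8},
    "diazepam": {"precursor": 285.1, "product": 193.1, "rt": 11.4},
    "ketamine": {"precursor": 238.1, "product": 125.0, "rt": 6.2},
}

def screen(peaks, mz_tol=0.3, rt_tol=0.5):
    """Return candidate identifications for each detected peak.

    peaks: list of dicts with measured 'precursor', 'product', 'rt'.
    A hit requires both m/z values and the retention time to fall
    within the stated tolerances.
    """
    hits = []
    for peak in peaks:
        for name, ref in LIBRARY.items():
            if (abs(peak["precursor"] - ref["precursor"]) <= mz_tol
                    and abs(peak["product"] - ref["product"]) <= mz_tol
                    and abs(peak["rt"] - ref["rt"]) <= rt_tol):
                hits.append((name, peak))
    return hits

detected = [{"precursor": 286.1, "product": 152.2, "rt": 2.9}]
print(screen(detected))  # morphine matches within tolerance
```

A real screening method would also weigh relative ion intensities and use several transitions per compound for confirmation.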

1 Materials and methods

1.1 Materials

Reference standards and controls for the 132 toxic drugs and the deuterated internal standards were obtained from Sigma, Cerilliant, and the national narcotics laboratory.

Mobile-phase acetonitrile, formic acid, and ammonium acetate were obtained from Fluka.

All other reagents were domestic analytical grade.

METHOD FOR VALIDATING A MEDICAL APPLICATION, END USER DEVICE AND MEDICAL SYSTEM

Patent title: METHOD FOR VALIDATING A MEDICAL APPLICATION, END USER DEVICE AND MEDICAL SYSTEM. Inventors: Kai-Oliver Schwenker, Thomas Eissenloeffel, Bimal Thayyil. Application No.: US16824463. Filing date: 2020-03-19. Publication No.: US20200218641A1. Publication date: 2020-07-09. Applicant: Roche Diabetes Care, Inc., Indianapolis, IN, US.

Abstract: An inventive method for validating an end user device for use with a medical application. A medical application and a validation application are received on the end user device, and the validation application is then executed, which includes: (i) determining the hardware and software environment of the end user device; (ii) providing a validation process compatible with that environment; (iii) executing a test mode of the medical application; (iv) running the validation process during the test mode; and (v) determining from the validation process whether the medical application is compatible with the end user device. When the medical application is determined to be compatible, a validation report is generated and stored on the end user device and/or a server. When the medical application is determined to be incompatible, the medical application is at least partially blocked.
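Purely as an illustration of the claimed five-step flow (the class and function names, the supported-platform list, and the report fields below are all invented; the patent does not specify an implementation):

```python
from dataclasses import dataclass

@dataclass
class EndUserDevice:
    # Hypothetical device model; 'supported' stands in for the set of
    # environments the validation process knows how to handle.
    os: str
    os_version: str
    supported: tuple = (("android", "11"), ("ios", "14"))

    def environment(self):
        # Step (i): determine the hardware/software environment.
        return (self.os, self.os_version)

def validate(device):
    """Steps (ii)-(v): pick a validation process for the environment,
    notionally run the medical application in test mode, and decide
    whether it is compatible with the device."""
    env = device.environment()
    compatible = env in device.supported          # step (v) decision
    report = {"environment": env, "compatible": compatible}
    if compatible:
        report["action"] = "store validation report"    # compatible path
    else:
        report["action"] = "block medical application"  # incompatible path
    return report

print(validate(EndUserDevice("android", "11")))
```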

The amorphous cell case involving Ms.

The amorphous cell case involving Ms. raised a complex issue that required a thorough investigation and understanding of the facts. Upon examination, it became clear that multiple factors contributed to the problem at hand. Ms. faced a challenging situation that required her to make difficult decisions regarding her health and treatment options, and the amorphous cell diagnosis added another layer of complexity to an already confusing and overwhelming situation.

Ms. likely experienced a mix of emotions upon receiving her amorphous cell diagnosis. The news of having cells that lack a defined structure or order can be terrifying and confusing for anyone. It is natural for her to feel fear, uncertainty, and even despair when faced with such a diagnosis, and the unknown nature of amorphous cells can add to the anxiety and stress of dealing with a serious health issue.

Certificate validation methods

Certificate validation is a crucial process that ensures the authenticity and trustworthiness of certificates issued by certification authorities (CAs). Without proper validation, compromised digital certificates can go undetected, leading to security vulnerabilities and risks for users.

Several methods and techniques are used to validate certificates, each serving a specific purpose in confirming a certificate's validity. They typically involve checking attributes and properties of the certificate, verifying the chain of trust, and confirming the authenticity of the issuing CA.

One of the primary mechanisms is the Certificate Revocation List (CRL). A CRL is a signed list, maintained by the CA, of the serial numbers of certificates that have been revoked (expiry, by contrast, is checked directly against each certificate's validity period). During validation, the certificate chain is checked against the CRL to ensure that none of its certificates is listed as revoked.

Another commonly used method is the Online Certificate Status Protocol (OCSP). Unlike the CRL method, which requires downloading and searching a list, OCSP provides near-real-time validation by querying the CA's OCSP responder directly. The client sends the certificate's identifying information to the responder, which replies with the certificate's status: good, revoked, or unknown. OCSP is more efficient than CRLs, especially when a CRL would be large or difficult to download.

Certification Authority Authorization (CAA) records allow domain owners to specify which CAs are authorized to issue certificates for their domain. Unlike CRL and OCSP checks, CAA is enforced at issuance time: before issuing, a CA looks up the domain's CAA records in DNS and confirms that it is authorized. This helps prevent unauthorized or fraudulent certificates from being issued.

Certificate Transparency (CT) aims to increase transparency in the issuance and management of certificates. CT requires CAs to submit certificates to publicly accessible, append-only logs; a validating client can require proof (a signed certificate timestamp) that the certificate has been logged. This helps detect misissued certificates and improves accountability in the CA ecosystem.

Beyond these mechanisms, certificate validation also verifies the digital signature on each certificate: the client checks that the signature verifies against the public key of the issuing CA, ensuring the certificate has not been tampered with and was indeed issued by that CA.

In summary, certificate validation is critical for establishing trust in digital certificates. CRLs and OCSP cover revocation status, CAA constrains authorized issuance, CT provides public auditability, and signature verification guards integrity. Together these methods let organizations build a secure and reliable infrastructure that guards against compromised or fraudulent certificates.
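As a toy illustration of the CRL lookup described above (the serial numbers and list contents are invented, and production code would rely on a TLS/PKI library rather than hand-rolled checks):

```python
# Contents of a hypothetical fetched CRL: the set of revoked serial numbers.
REVOKED_SERIALS = {"0x1a2b", "0x9f00"}

def crl_check(chain):
    """Reject the chain if any certificate's serial appears on the CRL.

    chain: list of dicts, each with a 'serial' key (leaf to root).
    Returns (ok, offending_serial).
    """
    for cert in chain:
        if cert["serial"] in REVOKED_SERIALS:
            return False, cert["serial"]
    return True, None

chain = [{"serial": "0x7777"}, {"serial": "0x1a2b"}]
print(crl_check(chain))  # fails: 0x1a2b is on the revocation list
```

An OCSP-style check replaces the local set lookup with a per-certificate query to the CA's responder, but the accept/reject decision has the same shape.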

MSA Exam – Category A Certificate

Introduction:
The MSA (Middle School Assessment) is an examination conducted to assess the knowledge and skills of middle school students in various subjects. The MSA exam Category A certificate is a prestigious award that recognizes students who excel academically. This article explores the steps required to obtain it.

Step 1: Registration
To begin, students must first register for the exam. Registration usually takes place at the school where the student is enrolled. The required documents vary by institution, but typically include a completed registration form, a copy of the student's identification document, and proof of payment of the registration fee.

Step 2: Exam Preparation
Once registered, students must prepare diligently to meet the high standards set for the Category A certificate. This involves revisiting and mastering the subjects to be assessed, with particular attention to the core subjects: mathematics, science, language arts, and social studies. Developing a study schedule, seeking assistance from teachers or tutors, and practicing with past exam papers can greatly improve the chances of success.

Step 3: Examination Day
On the designated day, students must arrive at the examination center well prepared, bringing all necessary materials such as pens, pencils, erasers, calculators (if allowed), and any other items required by the exam guidelines. Following the proctor's instructions is crucial to maintaining a fair and organized testing environment, and students should remain calm and focused throughout the exam.

Step 4: Scoring System
The Category A certificate is awarded based on the student's performance in the examination. Each subject is graded separately, and the scores are then combined to determine overall achievement. The scoring system may differ between institutions; typically, students must achieve a minimum percentage or grade in each subject to be eligible for the Category A certificate.

Step 5: Results and Certification
After the examination, the papers are collected and graded. The grading period varies, but students are usually informed of their results within a few weeks. Students who meet the criteria receive their certificate in person or by mail; it serves as a testament to their academic excellence and can be added to their portfolio.

Step 6: Benefits of the Category A Certificate
The certificate demonstrates a high level of academic proficiency, which can strengthen college or university applications. It also testifies to hard work and dedication, and can open doors to scholarships, sponsorships, and other academic opportunities often reserved for high achievers.

Conclusion:
Obtaining the MSA exam Category A certificate requires dedication, preparation, and perseverance. By registering, preparing diligently, performing well on exam day, and meeting the necessary criteria, students can attain this certification, whose benefits extend beyond recognition of academic excellence to higher education and future opportunities. With determination and commitment, students can strive for success and excel in their educational journey.
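As a hypothetical illustration of the per-subject minimum rule described in Step 4 (the 80% threshold and the subject names are invented, since the actual criteria vary by institution):

```python
def category_a_eligible(scores, minimum=80):
    """Return True if every subject meets the minimum percentage.

    scores: mapping of subject name -> percentage achieved.
    """
    return all(pct >= minimum for pct in scores.values())

# One failing subject makes the whole candidate ineligible.
print(category_a_eligible({"math": 91, "science": 85, "language": 88, "social": 90}))
```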

Common Methods for Handling Missing Values in Empirical Economics and Management Research

Introduction
Missing data is a common issue in empirical research in economics and management. Incomplete data poses challenges for researchers when conducting statistical analyses and drawing conclusions from their findings. This paper discusses the methods commonly used to handle missing values in the empirical English-language literature in this field.

1. Listwise Deletion
Listwise deletion is a simple method in which any observation with missing data on any variable is deleted from the dataset. It is straightforward to implement, but it can discard a large amount of data and may introduce bias into the analysis.

2. Pairwise Deletion
Pairwise deletion is less restrictive than listwise deletion: observations with missing data on one or more variables are retained, and each statistical analysis uses all data available to it. However, it can produce biased results because it assumes the data are missing completely at random (MCAR).

3. Mean Imputation
Mean imputation replaces missing values with the mean of the observed values of that variable. It is easy to implement and preserves the sample size, but it can bias estimates and understates the variability of the data.

4. Median Imputation
Median imputation replaces missing values with the median of the observed values of that variable. It is more robust to outliers than mean imputation, but it can still bias estimates and understates variability.

5. Regression Imputation
Regression imputation predicts missing values from regression models fitted on the other observed variables. It can provide more accurate estimates than mean or median imputation, but it assumes the relationship between the missing variable and the other variables is linear and stable.

6. Multiple Imputation
Multiple imputation is a more advanced method that creates several imputed datasets by replacing missing values with plausible draws based on the observed data. Each imputed dataset is analyzed separately, and the results are combined to yield more accurate estimates and standard errors.

Conclusion
Each of these methods has strengths and limitations, and researchers should consider carefully which is most appropriate for their research context. The method used for handling missing values, and its potential implications for the validity of the findings, should be reported transparently. Further research is needed on the effectiveness of these methods in empirical studies in economics and management.
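As a minimal sketch of methods 3 and 4 above (mean and median imputation) using only the Python standard library; the variable name and values are invented, and real studies would typically use pandas or scikit-learn:

```python
from statistics import mean, median

def impute(values, method="mean"):
    """Replace None entries with the mean or median of the observed values."""
    observed = [v for v in values if v is not None]
    fill = mean(observed) if method == "mean" else median(observed)
    return [fill if v is None else v for v in values]

# Hypothetical variable with two missing observations.
firm_size = [10.0, None, 14.0, 12.0, None]
print(impute(firm_size, "mean"))    # fills None with 12.0
print(impute(firm_size, "median"))  # fills None with 12.0
```

Note how every missing entry receives the same fill value, which is exactly why these methods understate the variability of the data.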

Identification by liquid chromatography

Liquid chromatography (LC) is a versatile analytical technique used to separate and identify compounds in a mixture. It is based on the principle that different compounds interact with a stationary phase in different ways, which causes them to elute from the column at different times. The elution order is determined by the compounds' polarity, molecular weight, and size.

There are several ways to identify compounds using LC. One common method is a UV/Vis detector, which measures the absorbance of the eluent at different wavelengths. Because each compound has a unique absorbance spectrum, the spectrum can be used to identify it.

Another identification method is mass spectrometry (MS), which measures the mass-to-charge ratio of the ions produced when the eluent passes through a high-energy ion source. The mass-to-charge ratio of an ion can be used to identify the compound that produced it.

LC can also identify compounds by comparing their elution times with those of known standards, a method known as retention time locking. It is simple and reliable, but it is only possible when the standards are available.
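A minimal sketch of the standards-comparison approach just described; the compound names, retention times, and the 0.1-minute tolerance are illustrative assumptions, not values from any real method:

```python
# Hypothetical retention times (minutes) of known standards on one column.
STANDARDS = {"caffeine": 3.42, "ibuprofen": 7.85, "naproxen": 6.91}

def identify(rt, tolerance=0.1):
    """Return the names of standards whose retention time matches `rt`."""
    return [name for name, ref in STANDARDS.items()
            if abs(rt - ref) <= tolerance]

print(identify(6.95))  # matches naproxen (6.91 is within 0.1 min)
```

In practice a retention-time match alone is weak evidence, which is why it is usually combined with UV/Vis spectra or MS data as described above.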

Encyclopedia of DNA Elements (ENCODE): Decoding the Blueprint of Life

Introduction:
The Encyclopedia of DNA Elements, also known as ENCODE, is an international research initiative aimed at identifying and characterizing all functional elements in the human genome. This project provides a comprehensive catalog of the elements that regulate gene expression, chromatin organization, and DNA transcription and replication. This article describes the ENCODE project, its significance, and the intricacies of DNA elements.

I. Unraveling the Human Blueprint:
The human genome consists of approximately three billion base pairs of DNA, which contain the instructions for building and operating a human being. However, only a fraction of this genetic material, around 2%, directly codes for proteins. The remaining portion, once dismissed as "junk DNA," has since proven to hold vital functional elements. ENCODE seeks to uncover and understand the role of these non-coding elements in gene regulation and cellular processes.

II. Cataloging Functional Elements:
The ENCODE project uses advanced molecular biology techniques and high-throughput sequencing to map and identify DNA elements across the human genome. The catalog includes protein-coding genes, non-coding RNA genes, transcription factor binding sites, enhancers, promoters, insulators, and more. These elements work in concert to regulate gene expression, ensuring the proper functioning and development of cells and tissues.

III. Decoding the Regulatory Grammar:
A key objective of ENCODE is to decipher the regulatory grammar encoded in our DNA: the complex network of interactions between DNA elements and their associated proteins. For example, enhancers are DNA regions that can significantly increase the expression of specific genes, so determining the specific enhancers for each gene is critical to understanding its regulation accurately.

IV. Functional Annotation of the Genome:
ENCODE not only identifies DNA elements but also assigns functional annotations to them. Through careful experimentation and analysis, researchers can determine the function and importance of each element in various biological processes. This knowledge is invaluable for understanding human development, disease mechanisms, and potential therapeutic targets.

V. Insights into Disease and Evolution:
ENCODE provides valuable insights into the genetic basis of human disease. By identifying disease-associated genetic variants within functional DNA elements, researchers can elucidate the mechanisms underlying diseases such as cancer, diabetes, and neurological disorders. Comparative genomics studies using ENCODE data also help clarify the evolutionary changes in DNA elements that have shaped human diversity.

VI. Open Science and Collaborative Efforts:
ENCODE follows an open science model, making its data and findings freely available to the scientific community. This fosters collaboration among experts worldwide and encourages knowledge sharing and interdisciplinary research. The accessibility of ENCODE data empowers researchers to investigate their specific interests within the vast realm of DNA elements.

VII. Future Directions:
While ENCODE has made significant progress since its initiation in 2003, much remains to uncover. Future efforts will expand the catalog of DNA elements in other organisms toward a comprehensive understanding of the functional components of genomes. Additionally, integrating ENCODE data with other large-scale biological datasets may lead to breakthroughs in precision medicine and personalized therapies.

Conclusion:
ENCODE is an ambitious project that has revolutionized our understanding of genetic regulation and gene expression. By cataloging and annotating functional elements within the human genome, it has provided unparalleled insights into disease mechanisms, evolutionary biology, and human development. As the project continues to unravel the complexities of DNA elements, it promises to shape the future of genomics and drive biomedical research toward new frontiers.


Journal of Pharmaceutical and Biomedical Analysis 70 (2012) 574–579Contents lists available at SciVerse ScienceDirectJournal of Pharmaceutical and BiomedicalAnalysisj o u r n a l h o m e p a g e :w w w.e l s e v i e r.c o m /l o c a t e /j p baShort communicationA validated enantioselective LC–MS/MS assay for the simultaneousdetermination of carvedilol and its pharmacologically active 4 -hydroxyphenyl metabolite in human plasma:Application to a clinical pharmacokinetic studyMichael T.Furlong a ,∗,Bing He b ,William Mylott c ,Song Zhao c ,Thomas Mariannino c ,Jim Shen a ,Bruce Stouffer aaBristol-Myers Squibb,Research and Development,Analytical and Bioanalytical Development,Route 206&Province Line Road,Princeton,NJ 08543,USAbBristol-Myers Squibb,Research and Development,Discovery Medicine and Clinical Pharmacology,Route 206&Province Line Road,Princeton,NJ 08543,USA cPPD,Research and Development,2244Dabney Road,Richmond,VA 23230,USAa r t i c l ei n f oArticle history:Received 5April 2012Received in revised form 15May 2012Accepted 21May 2012Available online 30 May 2012Keywords:Carvedilol Metabolite LC–MS/MS AssayEnantiomer Derivatizationa b s t r a c tCarvedilol is widely prescribed for the treatment of hypertension,heart failure and left ventricular dys-function following myocardial infarction.A sensitive and reliable liquid chromatography–tandem mass spectrometry (LC–MS/MS)assay was developed and validated to enable reliable and efficient bioanal-ysis of the (R)-and (S)-enantiomers of carvedilol and its pharmacologically active 4 -hydroxyphenyl metabolite in human plasma.Following plasma extraction using supported liquid extraction (SLE)in a 96-well plate format,extracted samples were derivatized with 2,3,4,6-tetra-O-acetyl-␤-d -glucopyranosyl isothiocyanate (GITC).Chromatographic separation was achieved by gradient elution on an ACQUITY UPLC HSS T3analytical column.The impact of several potentially interfering isobaric metabolites on the quantification of the 4 
-hydroxyphenyl metabolite (R)-and (S)-enantiomers was minimized by imple-mentation of a combination of chromatographic and mass spectrometric techniques.Derivatized analytes and stable-labeled internal standards were detected by positive ion electrospray tandem mass spectrom-etry.The assay was validated over concentration ranges of 0.200–100ng/mL for (R)-and (S)-carvedilol and 0.0200–10.0ng/mL for (R)-and (S)-4 -hydroxyphenyl carvedilol.Intra-and inter-assay precision val-ues for replicate quality control samples were within 11.9%for all analytes during the assay validation.Mean quality control accuracy values were within ±9.4%of nominal values for all analytes.Assay recov-eries were high (>76%)and internal standard normalized matrix effects were minimal.The four analytes were stable in human plasma for at least 24h at room temperature,89days at −20◦C and −70◦C,and following at least five freeze–thaw cycles.The validated assay was successfully applied to the quantifica-tion of the (R)-and (S)-enantiomers of both carvedilol and its pharmacologically active 4 -hydroxyphenyl metabolite in human plasma in support of a human pharmacokinetic study.© 2012 Elsevier B.V. 
All rights reserved.1.IntroductionCardiovascular disease is a leading cause of death worldwide.Carvedilol ((2RS)-1-(9H-carbazol-4-yloxy)-3-[[2-(2-methoxyphenoxy)ethyl]amino]propan-2-ol –Fig.1)is widely used to treat a variety of cardiovascular ailments,including hypertension,heart failure and left ventricular dysfunction fol-lowing myocardial infarction [1].This prescribed drug is a racemic mixture of (R)-and (S)-enantiomers.The S enantiomer has non-selective ␤-adrenoreceptor blocking activity,whereas both (R)-and (S)-enantiomers exhibit ␣1-adrenergic blocking activity with approximately equal potency [1].Each enantiomer of carvedilol∗Corresponding author.Tel.:+16092525069;fax:+16092523315.E-mail address:michael.furlong@ (M.T.Furlong).is extensively metabolized in humans via both oxidative and conjugative pathways [2–4].Several metabolites of carvedilol are pharmacologically active [5].One metabolite in particu-lar,4 -hydroxyphenyl carvedilol (Fig.1),exhibits approximately thirteen-fold higher ␤-adrenoreceptor blocking potency compared to carvedilol itself [5].Therefore,co-administered medications that induce or inhibit the conversion of carvedilol to its active 4 -hydroxyphenyl metabolite may alter their respective plasma concentrations,which may in turn impact the safety and efficacy of carvedilol [6–8].In order to routinely investigate this possibility in a clinical setting,a reliable assay capable of quantifying the R and S enantiomers of carvedilol and its active 4 -hydroxyphenyl metabolite in human plasma is essential.Several enantioselective bioanalytical assays for (R)-and (S)-carvedilol quantification in human plasma or blood have been reported [3,5,9–15].However,these assays are not capable of0731-7085/$–see front matter © 2012 Elsevier B.V. 
All rights reserved./10.1016/j.jpba.2012.05.026M.T.Furlong et al./Journal of Pharmaceutical and Biomedical Analysis 70 (2012) 574–579575ANALYTES R,S (±)-carvedilol (X=H )R,S (±)-4'-hydroxyphenylcarvedilol (X=OH )NHNHD 3INTERNALSTANDARDSd 3-R,S (±)-carvedilol (X=H )d 3-R,S (±)-4'-hydroxyphenyl carvedilol (X=OH )derivatizationwith GITCFig.1.Chemical structures and derivatization reactions of analytes and deuterium isotope-labeled internal standards.Asterisk denotes chiral center.GITC:2,3,4,6-tetra-O-acetyl-␤-d -glucopyranosyl isothiocyanate.quantifying any carvedilol metabolites.Bioanalytical assays have been developed to support the simultaneous quantification of total (R +S)carvedilol and its 4 -hydroxyphenyl metabolite in human plasma.[5,11,16,17].However,these assays are not capable of separately quantifying the (R)-and (S)-enantiomers of either ana-lyte.Herein,we report the development,validation and successful application of the first assay capable of quantifying the (R)-and (S)-enantiomers of both carvedilol and its pharmacologically active 4 -hydroxyphenyl metabolite in human plasma.We also describe the use of a combination of chromatographic and mass spectro-metric techniques to minimize the potential for assay interferences from multiple isobaric carvedilol metabolites.2.Experimental2.1.Assay proceduresEnantiometric analytes were extracted from plasma sam-ples using supported liquid extraction (SLE)in a 96-well plate format.The resulting extracts were derivatized with 2,3,4,6-tetra-O-acetyl-␤-d -glucopyranosyl isothiocyanate (GITC).The derivatized analytes were separated by reverse phase chromatog-raphy,followed by detection using tandem mass spectrometry.Detailed assay procedures,including materials and reagents,liquid chromatography–tandem mass spectrometry,LC–MS/MS data acquisition and processing,preparation of calibration stan-dards/quality control (QC)samples,and plasma sample extraction are provided in Appendix A –Supplementary Data .2.2.LC–MS/MS 
assay validationValidation of the LC–MS/MS assay was carried out in accordance with the FDA Guidance for Industry –Bioanalytical Method Val-idation [18]and PPD Standard Operating Procedures.A detailed validation summary,including acceptance criteria,is provided in Appendix A –Supplementary Data .2.3.Stability evaluationDetails of analyte stability evaluations in human plasma and extracted samples are provided in Appendix A –Supplementary Data .2.4.Pharmacokinetic studyA detailed summary of the pharmacokinetic study is provided in Appendix A –Supplementary Data .3.Results and discussion3.1.Assay development to enable simultaneous quantification of the (R)-and (S)-enantiomers of both carvedilol and its 4 -hydroxyphenyl metaboliteOur assay development approach was based in part upon previ-ously reported enantioselective carvedilol assays that relied upon derivatization of each enantiomer of carvedilol with the chiral derivatization agent GITC,followed by chromatographic separation of the resulting diastereomeric analytes and mass spectrometric detection [5,12].We reasoned that due to the presence of a GITC-reactive nitrogen atom in both carvedilol and its 4 -hydroxyphenyl metabolite,the (R)-and (S)-enantiomers of the 4 -hydroxyphenyl metabolite enantiomers could be concurrently derivatized along with the carvedilol enantiomers (Fig.1)and then chromato-graphically separated.Furthermore,the derivatized carvedilol enantiomers could be distinguished from the 4 -hydroxyphenyl enantiomers on the basis of their mass differences.At least four isobaric hydroxyl metabolites –5 -hydroxyphenyl carvedilol,along with 1-,3-and 8-hydroxycarbazolyl carvedilol –have been observed in human plasma following carvedilol dosing [4,19,20](Fig.2).Thus,it was necessary to develop an LC–MS/MS assay that could distinguish the 4 -hydroxyphenyl metabolite enantiomers from each other,as well as distinguish these analytes from as many as eight potentially interfering isobaric metabo-lites (i.e.,the 
(R)-and (S)-forms of 5 -hydroxyphenyl carvedilol,576M.T.Furlong et al./Journal of Pharmaceutical and Biomedical Analysis 70 (2012) 574–579Fig.2.Chemical structures and selected fragmentation pathways of GITC-derivatized carvedilol hydroxyl metabolites.Product ion m /z values shown for the GITC-derivatized 4 -and 5 -hydroxyphenyl metabolites were observed during assay development using authentic reference standards.Presumptive product ion structures and m /z values shown for the GITC-derivatized 1-,3-and 8-hydroxycarbazolyl metabolites were based upon previously reported fragmentation patterns of these metabolites The letter “G”in the structures denotes the GITC moiety that is incorporated during analyte derivatization.1-,3-and 8-hydroxycarbazolyl carvedilol)that might be present in pharmacokinetic plasma study samples.The 4 -and 5 -hydroxyl metabolites differ only with respect to the position of the hydroxyl group on the phenyl ring (Fig.2).As shown in Fig.3,the product ion spectra of the GITC-derivatized 4 -and 5 -hydroxyphenyl metabolites were essentially identical,and therefore the use of a 4 -hydroxyphenyl metabolite-specific MRM (Multiple Reaction Monitoring)transition for quantification would not be a viable option to obviate assay interference from the derivatized 5 -hydroxyphenyl metabolite enantiomers.We there-fore pursued a chromatographic separation approach to obtain the desired selectivity.The m /z 812→m /z 222MRM transition was chosen for 4 -hydroxyphenyl carvedilol quantification on the basis of (1)the excellent precursor ion to product ion conversion effi-ciency of the derivatized 4 -hydroxyphenyl metabolite (Fig.3)and (2)additional attributes that will be discussed in more detail later in this report.Representative chromatograms demonstrating the successful chromatographic separation of the derivatized analytes following extraction and derivatization of an analyte-spiked human plasma sample are shown in Appendix A –Supplementary Data .Attention was 
focused next on minimizing potential assay inter-ferences that may arise from the 1-,3-and 8-hydroxycarbazolyl carvedilol metabolites.Assessment and mitigation of potential assay interferences from these three metabolites was challenging due to the lack of commercial availability of authentic reference standards.The 1-,3-and 8-hydroxycarbazolyl carvedilol metabo-lites differ structurally from the 4 -hydroxyphenyl metabolite with respect to the location of the metabolically incorporated hydroxyl group on the carbazolyl ring rather than the phenyl ring (Fig.2).The m /z 222product ion already chosen for quantification ofthe 4 -hydroxyphenyl metabolite has a different mass from the corresponding product ion in all of the hydroxycarbazolyl metabo-lites (m /z 238–Fig.2)[21].Therefore,deployment of the m /z 222product ion in the MRM transition used for 4 -hydroxyphenyl metabolite quantification should not result in assay interferences from the three hydroxycarbazolyl metabolites,even if chromato-graphic resolution were not achieved.In summary,a combination of chromatographic and mass spectrometric strategies enabled the development of a selec-tive LC–MS/MS assay capable of quantifying the (R)-and (S)-enantiomers of both carvedilol and its pharmacologically active 4 -hydroxyphenyl metabolite in human plasma.Potential assay interference from the derivatized 5 -hydroxyphenyl metabolite enantiomers was minimized via chromatographic separation;potential assay interference from the derivatized 1-,3-and 8-hydroxycarbazolyl metabolite enantiomers was minimized via use of a 4 -hydroxyphenyl metabolite-specific product ion in the MRM transition.3.2.Assay validation and stability evaluationAssay validation criteria were met for all four analytes.Calibra-tion curve correlation coefficients (R 2)were >0.9960for all analytes in all validation analytical runs,indicating a good fit of the cali-bration data to the regression lines.Precision and accuracy data obtained for calibration curve 
and quality control samples from the three core validation analytical runs are summarized in Appendix A – Supplementary Data. Analysis of six unique lots of human plasma showed no significant interfering peaks at the retention times of any of the analytes or internal standard samples. Representative lower limit of quantification (LLOQ) MRM chromatograms for each analyte are shown in Appendix A – Supplementary Data.

M.T. Furlong et al. / Journal of Pharmaceutical and Biomedical Analysis 70 (2012) 574–579

Fig. 3. Product ion spectra of GITC-derivatized (A) 4′- and (B) 5′-hydroxyphenyl metabolites of carvedilol. Asterisk denotes the product ion chosen for quantification of 4′-hydroxyphenyl carvedilol.

Extraction recoveries were generally high and consistent for all four analytes at the test concentrations, ranging from 76.3% to 93.4%. Mean matrix factor values were 1.10, 1.17, 1.02 and 1.11 for (R)-carvedilol, (S)-carvedilol, (R)-4′-hydroxyphenyl carvedilol and (S)-4′-hydroxyphenyl carvedilol, respectively. Coefficients of variation for the matrix factor experiments met acceptance criteria, and ranged from 3.33% to 12.0%.

All analytes were found to be stable in human plasma for at least 24 h at room temperature, for 89 days at −20 °C and −70 °C, and following at least five freeze–thaw cycles. Extracted samples were shown to be stable for up to 99 h. Reinjection reproducibility was also demonstrated for extracted samples by reinjection of an entire analytical run after storage at +2 to +8 °C.

3.3. Application of the validated LC–MS/MS assay to a human pharmacokinetic study

The validated LC–MS/MS assay was successfully applied to the quantification of the (R)- and (S)-enantiomers of both carvedilol and its pharmacologically active 4′-hydroxyphenyl metabolite in a pharmacokinetic study. Representative MRM chromatograms for the derivatized enantiomers of carvedilol and its 4′-hydroxyphenyl metabolite are shown in Fig. 4A and B, respectively. As shown in
Fig. 4B, only the derivatized enantiomers of the 4′- and 5′-hydroxyphenyl metabolites were detected in study sample chromatograms; no additional peaks were detected. These observations indicated that, as predicted, deployment of the hydroxyphenyl metabolite-specific product ion (m/z 222) in the validated assay minimized any potential interference from the 1-, 3- and 8-hydroxycarbazolyl carvedilol metabolites that might have been present in the pharmacokinetic plasma study samples. Fig. 5 shows mean (±SD) plasma concentration–time profiles for the (R)- and (S)-enantiomers of carvedilol and its 4′-hydroxyphenyl metabolite, along with their corresponding single-dose pharmacokinetic parameters in healthy subjects.

Fig. 4. Representative study sample chromatograms for (A) carvedilol and (B) 4′-hydroxyphenyl carvedilol. Note the presence in (B) of the chromatographically separated derivatized enantiomers of the isobaric metabolite 5′-hydroxyphenyl carvedilol that were present, as expected, in the study samples. Left panels – analytes; right panels – internal standards.

Fig. 5. Mean (+SD) plasma concentration–time profiles and pharmacokinetic data of carvedilol and its 4′-hydroxy metabolite following administration of carvedilol to healthy subjects. Cmax and AUC0–∞ data are represented as geometric means [N] (%CV) and Tmax is represented as median [N] (minimum–maximum).

4. Conclusion

In summary, we have described the development, validation and successful application of the first assay capable of quantifying the
(R)- and (S)-enantiomers of both carvedilol and its pharmacologically active 4′-hydroxyphenyl metabolite in human plasma. The use of a combination of chromatographic and mass spectrometric techniques minimized the potential for assay interference from isobaric metabolites known to be produced in humans following oral administration of carvedilol.

Acknowledgement

Anne-Françoise Aubry (Bristol-Myers Squibb) is acknowledged for careful review of the manuscript.

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.jpba.2012.05.026.

References

[1] GlaxoSmithKline, COREG CR® (Carvedilol Phosphate) Extended-Release Capsules: Prescribing Information, 2009.
[2] H.G. Oldham, S.E. Clarke, In vitro identification of the human cytochrome P450 enzymes involved in the metabolism of R(+)- and S(−)-carvedilol, Drug Metab. Dispos. 25 (1997) 970–977.
[3] G. Neugebauer, W. Akpan, B. Kaufmann, K. Reiff, Stereoselective disposition of carvedilol in man after intravenous and oral administration of the racemic compound, Eur. J. Clin. Pharmacol. 38 (1990) S108–S111.
[4] G. Neugebauer, P. Neubert, Metabolism of carvedilol in man, Eur. J. Drug Metab. Pharmacokinet. 16 (1991) 257–260.
[5] T.W.B. Gehr, D.M. Tenero, D.A. Boyle, Y. Qian, D.A. Sica, N.H. Shusterman, The pharmacokinetics of carvedilol and its metabolites after single and multiple dose oral administration in patients with hypertension and renal insufficiency, Eur. J. Clin. Pharmacol. 55 (1999) 269–277.
[6] D.W. Graff, K.M. Williamson, J.A. Pieper, S.W. Carson, K.F. Adams Jr., W.E. Cascio, J.H. Patterson, Effect of fluoxetine on carvedilol pharmacokinetics, CYP2D6 activity, and autonomic balance in heart failure patients, J. Clin. Pharmacol. 41 (2001) 97–106.
[7] K. Fukumoto, T. Kobayashi, K. Komamura, S. Kamakura, M. Kitakaze, K. Ueno, Stereoselective effect of amiodarone on the pharmacokinetics of racemic carvedilol, Drug Metab. Pharmacokinet. 20 (2005) 423–427.
[8] S.M. Stout, J. Nielsen, B.E. Bleske, M. Shea, R. Brook, K. Kerber, L.S. Welage, The impact of paroxetine coadministration on
stereospecific carvedilol pharmacokinetics, J. Cardiovasc. Pharmacol. Ther. 15 (2010) 373–379.
[9] E.J. Eisenberg, W.R. Patterson, G.C. Kahn, High-performance liquid chromatographic method for the simultaneous determination of the enantiomers of carvedilol and its O-desmethyl metabolite in human plasma after chiral derivatization, J. Chromatogr. Biomed. Appl. 493 (1989) 105–115.
[10] M. Fujimaki, Y. Murakoshi, H. Hakusui, Assay and disposition of carvedilol enantiomers in humans and monkeys: evidence of stereoselective presystemic metabolism, J. Pharm. Sci. 79 (1990) 568–572.
[11] H.-H. Zhou, A.J.J. Wood, Stereoselective disposition of carvedilol is determined by CYP2D6, Clin. Pharmacol. Ther. (St. Louis, MO, United States) 57 (1995) 518–524.
[12] E. Yang, S. Wang, J. Kratz, M.J. Cyronak, Stereoselective analysis of carvedilol in human plasma using HPLC/MS/MS after chiral derivatization, J. Pharm. Biomed. Anal. 36 (2004) 609–615.
[13] M. Saito, J. Kawana, T. Ohno, M. Kaneko, K. Mihara, K. Hanada, R. Sugita, N. Okada, S. Oosato, M. Nagayama, T. Sumiyoshi, H. Ogata, Enantioselective and highly sensitive determination of carvedilol in human plasma and whole blood after administration of the racemate using normal-phase high-performance liquid chromatography, J. Chromatogr. B: Analyt. Technol. Biomed. Life Sci. 843 (2006) 73–77.
[14] S. Wang, M. Cyronak, E. Yang, Does a stable isotopically labeled internal standard always correct analyte response? J. Pharm. Biomed. Anal. 43 (2007) 701–707.
[15] M. Zakrzewski-Jakubiak, S. de Denus, M.-H. Leblanc, M. White, J. Turgeon, Enantioselective quantification of carvedilol in human plasma by HPLC in heavily medicated heart failure patients, J. Pharm. Biomed. Anal. 52 (2010) 636–641.
[16] N.C. Hughes, N. Bajaj, J. Fan, E.Y.K. Wong, Assessing the matrix effects of hemolyzed samples in bioanalysis, Bioanalysis 1 (2009) 1057–1066.
[17] I. Sarath Chandiran, K.N. Jayaveera, S. Raghunadha Reddy, High-throughput liquid chromatography–tandem mass spectrometric method for simultaneous quantification of carvedilol
and its metabolite 4-hydroxyphenyl carvedilol in human plasma and its application to bioequivalence study, J. Chem. Pharm. Res. 3 (2011) 341–353.
[18] FDA Guidance for Industry, Bioanalytical Method Validation, 2001.
[19] M. Machida, M. Watanabe, S. Takechi, S. Kakinoki, A. Nomura, Measurement of carvedilol in plasma by high-performance liquid chromatography with electrochemical detection, J. Chromatogr. B: Analyt. Technol. Biomed. Life Sci. 798 (2003) 187–191.
[20] D. Tenero, S. Boike, D. Boyle, B. Ilson, H.F. Fesniak, S. Brozena, D. Jorkasky, Steady-state pharmacokinetics of carvedilol and its enantiomers in patients with congestive heart failure, J. Clin. Pharmacol. 40 (2000) 844–853.
[21] W.H. Schaefer, J. Politowski, B. Hwang, F. Dixon Jr., A. Goalwin, L. Gutzait, K. Anderson, C. Debrosse, M. Bean, G.R. Rhodes, Metabolism of carvedilol in dogs, rats, and mice, Drug Metab. Dispos. 26 (1998) 958–969.
