Test Vector Decompression via Cyclical Scan Chains and Its Application to Testing Core-Based Designs


Determination of Trace Cerium by Negative Catalytic Kinetic Photometry


1. Background

Negative catalytic kinetic photometry is a widely used analytical technique for determining trace concentrations of metal ions in solution.

Cerium is an important rare-earth element with wide applications in fields such as catalysts, battery materials, and ceramics.

Accurate determination of trace cerium therefore matters for research and applications in these fields.

2. Principle

The method rests on the metal ion forming a complex with a ligand in solution; the absorbance of the complex, measured with a spectrophotometer, gives the metal-ion concentration.

The choice of ligand is the key factor controlling reaction rate and selectivity.

For cerium, a commonly used ligand is diphenylfluorone (DPF).

On complexation with cerium, the absorption peak of diphenylfluorone shifts in the ultraviolet-visible (UV-Vis) region, so the cerium concentration can be determined from the measured absorbance.

3. Procedure

3.1 Preparation
• Prepare cerium standard solutions of known concentration for constructing the calibration curve.
• Prepare the series of sample solutions containing unknown cerium concentrations.
• Warm up and calibrate the UV-Vis spectrophotometer.

3.2 Constructing the calibration curve
• Take a fixed volume of cerium standard solution and add an appropriate amount of diphenylfluorone.
• Set the spectrophotometer to a suitable wavelength (usually the absorption maximum) and record the absorbance.
• Repeat with standards of different concentrations to obtain a series of absorbance values.
• Plot the calibration curve: absorbance on the vertical axis, cerium concentration on the horizontal axis.

3.3 Measuring the samples
• Take a fixed volume of sample solution and add the same amount of diphenylfluorone.
• Record the absorbance at the same wavelength used for the calibration curve.
• Read the cerium concentration of the sample off the calibration curve.

4. Data processing and analysis

The cerium concentration of a sample is calculated from its measured absorbance, using the calibration curve built from the standards of known concentration.

During data processing, note the following:
- Choose standard concentrations that lie within the linear range of the calibration curve.
- The calibration curve should pass through the origin (zero absorbance at zero concentration).
- If a sample's cerium concentration falls outside the calibration range, dilute the sample and measure again.
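Reading an unknown off the calibration curve amounts to a least-squares fit and its inversion. A minimal sketch in Python (the slope, intercept, and absorbance values below are illustrative, not taken from this text):

```python
import numpy as np

def fit_calibration(conc, absorbance):
    """Least-squares calibration line A = k*c + b through the standards."""
    k, b = np.polyfit(conc, absorbance, 1)
    return k, b

def concentration_from_absorbance(a, k, b):
    """Invert the calibration line to get the unknown concentration."""
    return (a - b) / k

# Synthetic standards obeying a linear (Beer's-law) response
standards = np.array([0.0, 0.1, 0.2, 0.4, 0.8])  # cerium, mg/L (illustrative)
absorb = 0.8 * standards + 0.002                 # absorbance, near-zero intercept

k, b = fit_calibration(standards, absorb)
c_unknown = concentration_from_absorbance(0.322, k, b)  # sample absorbance 0.322
```

A near-zero fitted intercept is the numerical counterpart of the "curve should pass through the origin" check above.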

5. Precautions
• Follow safe laboratory practice and avoid contact with hazardous substances.
• Operate the UV-Vis spectrophotometer correctly; keep the optical path clean to avoid contamination and error.

Radiographic Testing Terminology


"Non-destructive Testing — Terminology — Radiographic Testing". 1 Scope: this standard defines the terms used in industrial radiographic testing.

2术语和定义2.1吸收absorption2.2活度activity2.3老化灰雾ageing fog2.4阳极anode2.5阳极电流anode current2.6伪像(假显示)artefact ( false indication )2.7衰减attenuation2.8衰减系数attenuation coefficientμ2.9平均梯度average gradient2.10背散射back scatter背散射线back scattered radiation2.11射束角beam angle2.12电子回旋加速器betatron2.13遮挡介质blocking medium2.14累积因子build-up factor2.15暗盒cassette暗袋2.16阴极cathode2.17已校验的阶梯密度片calibrated density step wedge2.18(胶片的)特性曲线characteristic curve ( of a film )2.19清澈时间clearing time2.20准直collimation2.21准直器collimator2.22康普顿散射Compton scatter2.23计算机层析成像computerized tomography ( CT )2.24恒电势电路constant potential circuit2.25连续谱continuous spectrum2.26对比度contrast2.27反衬介质contrast medium2.28对比灵敏度(厚度灵敏度)contrast sensitivity ( thickness sensitivity )2.29衰减曲线decay curve2.30密度计densitometer2.31(胶片或相纸的)显影development ( of a film or paper )2.32衍射斑纹diffraction mottle2.33剂量计dosemeter ( dosimeter )2.34剂量率计dose rate meter2.35双焦点管dual focus tube2.36双线像质计duplex wire image quality indicator双丝像质计双线图像质量指示器2.37边缘遮挡材料edge-blocking material2.38均值过滤器(射线束致平器)equalizing filter ( beam flattener ) 2.39等效X射线电压equivalent X-ray voltage2.40曝光exposure2.41曝光计算器exposure calculator2.42曝光曲线exposure chart2.43曝光宽容度exposure latitude2.44曝光时间exposure time2.45片基film base2.46胶片梯度film gradientG2.47观片灯(观察屏)film illuminator ( viewing screen )2.48胶片处理film processing2.49胶片系统速度film system speed2.50滤光板filter2.51定影fixing2.52探伤灵敏度flaw sensitivity2.53荧光增感屏fluorescent intensifying screen2.54金属荧光增感屏fluorometallic intensifying screen2.55荧光透视fluoroscopy2.56焦点focal spot2.57焦点尺寸focal spot size2.58焦距focus-to-film-distanceffd2.59灰雾度fog density2.60伽玛射线照相gamma radiography2.61伽玛射线gamma raysγ射线2.62伽玛射线源gamma-ray source2.63伽玛射线源容器gamma-ray source container2.64几何不清晰度geometric unsharpness2.65颗粒性graininess2.66颗粒度granularity2.67半衰期half life2.68半价层half value thicknessHVT2.69光源illuminator观片灯2.70图像对比度image contrast2.71图像清晰度image definition2.72图像增强image enhancement2.73图像增强器image intensifier2.74像质image quality图像质量2.75像质计image quality indicator图像质量指示器IQI2.76像质值image quality 
value图像质量值IQI灵敏度IQI sensitivity2.77入射射线束轴线incident beam axis2.78工业放射学industrial radiology2.79固有过滤inherent filtration2.80固有不清晰度inherent unsharpness2.81增感因子intensifying factor2.82增感屏intensifying screen2.83潜影latent image2.84直线电子加速器linear electron accelerator ( LINAC ) 2.85屏蔽masking2.86金属屏metal screen2.87微焦点射线照相microfocus radiography2.88调制传递函数modulation transfer functionMTF2.89运动不清晰度movement unsharpness2.90工件对比度object contrast2.91工件至胶片距离object-to-film distance2.92周向曝光panoramic exposure2.93透度计penetrameter2.94压痕pressure mark2.95初始射线primary radiation2.96投影放大率projective magnification2.97投影放大技术projective magnification technique 2.98(射线束)质量quality ( of a beam of radiation ) 2.99照射对比度radiation contrast2.100辐射源radiation source2.101射线照相底片/照片radiograph2.102射线照相胶片radiographic film2.103射线照相radiography2.104放射性同位素radioisotope2.105射线透视radioscopy2.106棒阳极管rod anode tube2.107散射线scattered radiation2.108增感型胶片screen type film2.109源固定器source holder2.110源尺寸source size2.111源至胶片距离(sfd)source-to-film distance ( sfd ) 2.112空间分辨力spatial resolution2.113比活度specific activity2.114阶梯楔块step wedge2.115立体射线照相stereo radiography2.116靶target2.117管子光阑tube diaphragm2.118管头tube head2.119管罩tube shield2.120管子遮光器tube shutter2.121管子窗口tube window2.122管电压tube voltage2.123未密封源unsealed source2.124不清晰度unsharpness2.125有效密度范围useful density range2.126真空暗盒vacuum cassette2.127观察屏蔽viewing mask2.128可视对比度visual contrast2.129X射线X-rays2.130X射线胶片X-ray film2.131X射线管X-ray tube。

Explanations of the English Names of All Schlumberger Well-Log Curves


斯仑贝谢所有测井曲线英文名称解释OCEAN DRILLING PROGRAMACRONYMS USED FOR WIRELINE SCHLUMBERGER TOOLS ACT Aluminum Clay ToolAMS Auxiliary Measurement SondeAPS Accelerator Porosity SondeARI Azimuthal Resistivity ImagerASI Array Sonic ImagerBGKT Vertical Seismic Profile ToolBHC Borehole Compensated Sonic ToolBHTV Borehole TeleviewerCBL Casing Bond LogCNT Compensated Neutron ToolDIT Dual Induction ToolDLL Dual LaterologDSI Dipole Sonic ImagerFMS Formation MicroScannerGHMT Geologic High Resolution Magnetic ToolGPIT General Purpose Inclinometer ToolGR Natural Gamma RayGST Induced Gamma Ray Spectrometry ToolHLDS Hostile Environment Lithodensity SondeHLDT Hostile Environment Lithodensity ToolHNGS Hostile Environment Gamma Ray SondeLDT Lithodensity ToolLSS Long Spacing Sonic ToolMCD Mechanical Caliper DeviceNGT Natural Gamma Ray Spectrometry ToolNMRT Nuclear Resonance Magnetic ToolQSST Inline Checkshot ToolSDT Digital Sonic ToolSGT Scintillation Gamma Ray ToolSUMT Susceptibility Magnetic ToolUBI Ultrasonic Borehole ImagerVSI Vertical Seismic ImagerWST Well Seismic ToolWST-3 3-Components Well Seismic ToolOCEAN DRILLING PROGRAMACRONYMS USED FOR LWD SCHLUMBERGER TOOLSADN Azimuthal Density-NeutronCDN Compensated Density-NeutronCDR Compensated Dual ResistivityISONIC Ideal Sonic-While-DrillingNMR Nuclear Magnetic ResonanceRAB Resistivity-at-the-BitOCEAN DRILLING PROGRAMACRONYMS USED FOR NON-SCHLUMBERGER SPECIALTY TOOLSMCS Multichannel Sonic ToolMGT Multisensor Gamma ToolSST Shear Sonic ToolTAP Temperature-Acceleration-Pressure ToolTLT Temperature Logging ToolOCEAN DRILLING PROGRAMACRONYMS AND UNITS USED FOR WIRELINE SCHLUMBERGER LOGSAFEC APS Far Detector Counts (cps)ANEC APS Near Detector Counts (cps)AX Acceleration X Axis (ft/s2)AY Acceleration Y Axis (ft/s2)AZ Acceleration Z Axis (ft/s2)AZIM Constant Azimuth for Deviation Correction (deg)APLC APS Near/Array Limestone Porosity Corrected (%)C1 FMS Caliper 1 (in)C2 FMS Caliper 2 (in)CALI Caliper (in)CFEC Corrected Far Epithermal Counts (cps)CFTC 
Corrected Far Thermal Counts (cps)CGR Computed (Th+K) Gamma Ray (API units)CHR2 Peak Coherence, Receiver Array, Upper DipoleCHRP Compressional Peak Coherence, Receiver Array, P&SCHRS Shear Peak Coherence, Receiver Array, P&SCHTP Compressional Peak Coherence, Transmitter Array, P&SCHTS Shear Peak Coherence, Transmitter Array, P&SCNEC Corrected Near Epithermal Counts (cps)CNTC Corrected Near Thermal Counts (cps)CS Cable Speed (m/hr)CVEL Compressional Velocity (km/s)DATN Discriminated Attenuation (db/m)DBI Discriminated Bond IndexDEVI Hole Deviation (degrees)DF Drilling Force (lbf)DIFF Difference Between MEAN and MEDIAN in Delta-Time Proc. (microsec/ft) DRH HLDS Bulk Density Correction (g/cm3)DRHO Bulk Density Correction (g/cm3)DT Short Spacing Delta-Time (10'-8' spacing; microsec/ft)DT1 Delta-Time Shear, Lower Dipole (microsec/ft)DT2 Delta-Time Shear, Upper Dipole (microsec/ft)DT4P Delta- Time Compressional, P&S (microsec/ft)DT4S Delta- Time Shear, P&S (microsec/ft))DT1R Delta- Time Shear, Receiver Array, Lower Dipole (microsec/ft)DT2R Delta- Time Shear, Receiver Array, Upper Dipole (microsec/ft)DT1T Delta-Time Shear, Transmitter Array, Lower Dipole (microsec/ft)DT2T Delta-Time Shear, Transmitter Array, Upper Dipole (microsec/ft)DTCO Delta- Time Compressional (microsec/ft)DTL Long Spacing Delta-Time (12'-10' spacing; microsec/ft)DTLF Long Spacing Delta-Time (12'-10' spacing; microsec/ft)DTLN Short Spacing Delta-Time (10'-8' spacing; microsec/ftDTRP Delta-Time Compressional, Receiver Array, P&S (microsec/ft)DTRS Delta-Time Shear, Receiver Array, P&S (microsec/ft)DTSM Delta-Time Shear (microsec/ft)DTST Delta-Time Stoneley (microsec/ft)DTTP Delta-Time Compressional, Transmitter Array, P&S (microsec/ft)DTTS Delta-Time Shear, Transmitter Array, P&S (microsec/ft)ECGR Environmentally Corrected Gamma Ray (API units)EHGR Environmentally Corrected High Resolution Gamma Ray (API units) ENPH Epithermal Neutron Porosity (%)ENRA Epithermal Neutron RatioETIM Elapsed Time (sec)FINC 
Magnetic Field Inclination (degrees)FNOR Magnetic Field Total Moment (oersted)FX Magnetic Field on X Axis (oersted)FY Magnetic Field on Y Axis (oersted)FZ Magnetic Field on Z Axis (oersted)GR Natural Gamma Ray (API units)HALC High Res. Near/Array Limestone Porosity Corrected (%)HAZI Hole Azimuth (degrees)HBDC High Res. Bulk Density Correction (g/cm3)HBHK HNGS Borehole Potassium (%)HCFT High Resolution Corrected Far Thermal Counts (cps)HCGR HNGS Computed Gamma Ray (API units)HCNT High Resolution Corrected Near Thermal Counts (cps)HDEB High Res. Enhanced Bulk Density (g/cm3)HDRH High Resolution Density Correction (g/cm3)HFEC High Res. Far Detector Counts (cps)HFK HNGS Formation Potassium (%)HFLC High Res. Near/Far Limestone Porosity Corrected (%)HEGR Environmentally Corrected High Resolution Natural Gamma Ray (API units) HGR High Resolution Natural Gamma Ray (API units)HLCA High Res. Caliper (inHLEF High Res. Long-spaced Photoelectric Effect (barns/e-)HNEC High Res. Near Detector Counts (cps)HNPO High Resolution Enhanced Thermal Nutron Porosity (%)HNRH High Resolution Bulk Density (g/cm3)HPEF High Resolution Photoelectric Effect (barns/e-)HRHO High Resolution Bulk Density (g/cm3)HROM High Res. Corrected Bulk Density (g/cm3)HSGR HNGS Standard (total) Gamma Ray (API units)HSIG High Res. Formation Capture Cross Section (capture units) HSTO High Res. 
Computed Standoff (in)HTHO HNGS Thorium (ppm)HTNP High Resolution Thermal Neutron Porosity (%)HURA HNGS Uranium (ppm)IDPH Phasor Deep Induction (ohmm)IIR Iron Indicator Ratio [CFE/(CCA+CSI)]ILD Deep Resistivity (ohmm)ILM Medium Resistivity (ohmm)IMPH Phasor Medium Induction (ohmm)ITT Integrated Transit Time (s)LCAL HLDS Caliper (in)LIR Lithology Indicator Ratio [CSI/(CCA+CSI)]LLD Laterolog Deep (ohmm)LLS Laterolog Shallow (ohmm)LTT1 Transit Time (10'; microsec)LTT2 Transit Time (8'; microsec)LTT3 Transit Time (12'; microsec)LTT4 Transit Time (10'; microsec)MAGB Earth's Magnetic Field (nTes)MAGC Earth Conductivity (ppm)MAGS Magnetic Susceptibility (ppm)MEDIAN Median Delta-T Recomputed (microsec/ft)MEAN Mean Delta-T Recomputed (microsec/ft)NATN Near Pseudo-Attenuation (db/m)NMST Magnetometer Temperature (degC)NMSV Magnetometer Signal Level (V)NPHI Neutron Porosity (%)NRHB LDS Bulk Density (g/cm3)P1AZ Pad 1 Azimuth (degrees)PEF Photoelectric Effect (barns/e-)PEFL LDS Long-spaced Photoelectric Effect (barns/e-)PIR Porosity Indicator Ratio [CHY/(CCA+CSI)]POTA Potassium (%)RB Pad 1 Relative Bearing (degrees)RHL LDS Long-spaced Bulk Density (g/cm3)RHOB Bulk Density (g/cm3)RHOM HLDS Corrected Bulk Density (g/cm3)RMGS Low Resolution Susceptibility (ppm)SFLU Spherically Focused Log (ohmm)SGR Total Gamma Ray (API units)SIGF APS Formation Capture Cross Section (capture units)SP Spontaneous Potential (mV)STOF APS Computed Standoff (in)SURT Receiver Coil Temperature (degC)SVEL Shear Velocity (km/s)SXRT NMRS differential Temperature (degC)TENS Tension (lb)THOR Thorium (ppm)TNRA Thermal Neutron RatioTT1 Transit Time (10' spacing; microsec)TT2 Transit Time (8' spacing; microsec)TT3 Transit Time (12' spacing; microsec)TT4 Transit Time (10' spacing; microsec)URAN Uranium (ppm)V4P Compressional Velocity, from DT4P (P&S; km/s)V4S Shear Velocity, from DT4S (P&S; km/s)VELP Compressional Velocity (processed from waveforms; km/s)VELS Shear Velocity (processed from waveforms; km/s)VP1 
Compressional Velocity, from DT, DTLN, or MEAN (km/s)VP2 Compressional Velocity, from DTL, DTLF, or MEDIAN (km/s)VCO Compressional Velocity, from DTCO (km/s)VS Shear Velocity, from DTSM (km/s)VST Stonely Velocity, from DTST km/s)VS1 Shear Velocity, from DT1 (Lower Dipole; km/s)VS2 Shear Velocity, from DT2 (Upper Dipole; km/s)VRP Compressional Velocity, from DTRP (Receiver Array, P&S; km/s) VRS Shear Velocity, from DTRS (Receiver Array, P&S; km/s)VS1R Shear Velocity, from DT1R (Receiver Array, Lower Dipole; km/s) VS2R Shear Velocity, from DT2R (Receiver Array, Upper Dipole; km/s) VS1T Shear Velocity, from DT1T (Transmitter Array, Lower Dipole; km/s) VS2T Shear Velocity, from DT2T (Transmitter Array, Upper Dipole; km/s) VTP Compressional Velocity, from DTTP (Transmitter Array, P&S; km/s) VTS Shear Velocity, from DTTS (Transmitter Array, P&S; km/s)#POINTS Number of Transmitter-Receiver Pairs Used in Sonic Processing W1NG NGT Window 1 counts (cps)W2NG NGT Window 2 counts (cps)W3NG NGT Window 3 counts (cps)W4NG NGT Window 4 counts (cps)W5NG NGT Window 5 counts (cps)OCEAN DRILLING PROGRAMACRONYMS AND UNITS USED FOR LWD SCHLUMBERGER LOGSAT1F Attenuation Resistivity (1 ft resolution; ohmm)AT3F Attenuation Resistivity (3 ft resolution; ohmm)AT4F Attenuation Resistivity (4 ft resolution; ohmm)AT5F Attenuation Resistivity (5 ft resolution; ohmm)ATR Attenuation Resistivity (deep; ohmm)BFV Bound Fluid Volume (%)B1TM RAB Shallow Resistivity Time after Bit (s)B2TM RAB Medium Resistivity Time after Bit (s)B3TM RAB Deep Resistivity Time after Bit (s)BDAV Deep Resistivity Average (ohmm)BMAV Medium Resistivity Average (ohmm)BSAV Shallow Resistivity Average (ohmm)CGR Computed (Th+K) Gamma Ray (API units)DCAL Differential Caliper (in)DROR Correction for CDN rotational density (g/cm3).DRRT Correction for ADN rotational density (g/cm3).DTAB AND or CDN Density Time after Bit (hr)FFV Free Fluid Volume (%)GR Gamma Ray (API Units)GR7 Sum Gamma Ray Windows GRW7+GRW8+GRW9-Equivalent to 
Wireline NGT window 5 (cps) GRW3 Gamma Ray Window 3 counts (cps)-Equivalent to Wireline NGT window 1GRW4 Gamma Ray Window 4 counts (cps)-Equivalent to Wireline NGT window 2GRW5 Gamma Ray Window 5 counts (cps)-Equivalent to Wireline NGT window 3GRW6 Gamma Ray Window 6 counts (cps)-Equivalent to Wireline NGT window 4GRW7 Gamma Ray Window 7 counts (cps)GRW8 Gamma Ray Window 8 counts (cps)GRW9 Gamma Ray Window 9 counts (cps)GTIM CDR Gamma Ray Time after Bit (s)GRTK RAB Gamma Ray Time after Bit (s)HEF1 Far He Bank 1 counts (cps)HEF2 Far He Bank 2 counts (cps)HEF3 Far He Bank 3 counts (cps)HEF4 Far He Bank 4 counts (cps)HEN1 Near He Bank 1 counts (cps)HEN2 Near He Bank 2 counts (cps)HEN3 Near He Bank 3 counts (cps)HEN4 Near He Bank 4 counts (cps)MRP Magnetic Resonance PorosityNTAB ADN or CDN Neutron Time after Bit (hr)PEF Photoelectric Effect (barns/e-)POTA Potassium (%) ROPE Rate of Penetration (ft/hr)PS1F Phase Shift Resistivity (1 ft resolution; ohmm)PS2F Phase Shift Resistivity (2 ft resolution; ohmm)PS3F Phase Shift Resistivity (3 ft resolution; ohmm)PS5F Phase Shift Resistivity (5 ft resolution; ohmm)PSR Phase Shift Resistivity (shallow; ohmm)RBIT Bit Resistivity (ohmm)RBTM RAB Resistivity Time After Bit (s)RING Ring Resistivity (ohmm)ROMT Max. 
Density Total (g/cm3) from rotational processing ROP Rate of Penetration (m/hr)ROP1 Rate of Penetration, average over last 1 ft (m/hr).ROP5 Rate of Penetration, average over last 5 ft (m/hr)ROPE Rate of Penetration, averaged over last 5 ft (ft/hr)RPM RAB Tool Rotation Speed (rpm)RTIM CDR or RAB Resistivity Time after Bit (hr)SGR Total Gamma Ray (API units)T2 T2 Distribution (%)T2LM T2 Logarithmic Mean (ms)THOR Thorium (ppm)TNPH Thermal Neutron Porosity (%)TNRA Thermal RatioURAN Uranium (ppm)OCEAN DRILLING PROGRAMADDITIONAL ACRONYMS AND UNITS(PROCESSED LOGS FROM GEOCHEMICAL TOOL STRING)AL2O3 Computed Al2O3 (dry weight %)AL2O3MIN Computed Al2O3 Standard Deviation (dry weight %) AL2O3MAX Computed Al2O3 Standard Deviation (dry weight %) CAO Computed CaO (dry weight %)CAOMIN Computed CaO Standard Deviation (dry weight %) CAOMAX Computed CaO Standard Deviation (dry weight %) CACO3 Computed CaCO3 (dry weight %)CACO3MIN Computed CaCO3 Standard Deviation (dry weight %) CACO3MAX Computed CaCO3 Standard Deviation (dry weight %) CCA Calcium Yield (decimal fraction)CCHL Chlorine Yield (decimal fraction)CFE Iron Yield (decimal fraction)CGD Gadolinium Yield (decimal fraction)CHY Hydrogen Yield (decimal fraction)CK Potassium Yield (decimal fraction)CSI Silicon Yield (decimal fraction)CSIG Capture Cross Section (capture units)CSUL Sulfur Yield (decimal fraction)CTB Background Yield (decimal fraction)CTI Titanium Yield (decimal fraction)FACT Quality Control CurveFEO Computed FeO (dry weight %)FEOMIN Computed FeO Standard Deviation (dry weight %) FEOMAX Computed FeO Standard Deviation (dry weight %) FEO* Computed FeO* (dry weight %)FEO*MIN Computed FeO* Standard Deviation (dry weight %) FEO*MAX Computed FeO* Standard Deviation (dry weight %) FE2O3 Computed Fe2O3 (dry weight %)FE2O3MIN Computed Fe2O3 Standard Deviation (dry weight %) FE2O3MAX Computed Fe2O3 Standard Deviation (dry weight %) GD Computed Gadolinium (dry weight %)GDMIN Computed Gadolinium Standard Deviation (dry weight 
%) GDMAX Computed Gadolinium Standard Deviation (dry weight %) K2O Computed K2O (dry weight %)K2OMIN Computed K2O Standard Deviation (dry weight %)K2OMAX Computed K2O Standard Deviation (dry weight %) MGO Computed MgO (dry weight %)MGOMIN Computed MgO Standard Deviation (dry weight %) MGOMAX Computed MgO Standard Deviation (dry weight %)S Computed Sulfur (dry weight %)SMIN Computed Sulfur Standard Deviation (dry weight %) SMAX Computed Sulfur Standard Deviation (dry weight %)SIO2 Computed SiO2 (dry weight %)SIO2MIN Computed SiO2 Standard Deviation (dry weight %) SIO2MAX Computed SiO2 Standard Deviation (dry weight %) THORMIN Computed Thorium Standard Deviation (ppm) THORMAX Computed Thorium Standard Deviation (ppm)TIO2 Computed TiO2 (dry weight %)TIO2MIN Computed TiO2 Standard Deviation (dry weight %) TIO2MAX Computed TiO2 Standard Deviation (dry weight %) URANMIN Computed Uranium Standard Deviation (ppm) URANMAX Computed Uranium Standard Deviation (ppm) VARCA Variable CaCO3/CaO calcium carbonate/oxide factor。

Cleavable Arylamines Test Method


The cleavable arylamines test is an analytical method for detecting arylamine compounds.

These compounds typically contain one or more aromatic rings and one or more amino functional groups.

A commonly used procedure for the cleavable arylamines test is as follows:
1. First, dissolve the sample completely in a suitable solvent.

2. Next, add a specific reagent, such as dinitrobenzoyl chloride (DNBC).

This reagent reacts with the arylamines to form stable products.

3. Stir or heat the mixture appropriately and hold it for a set reaction time to ensure the reaction goes to completion.

4. After the reaction is complete, quantify the product by measuring its absorption spectrum or by another suitable analytical method.

This can be done with instruments such as a spectrophotometer or a chromatograph.

This is a simple and effective test for determining whether cleavable arylamine compounds are present in a sample.

The exact experimental conditions and choice of reagents, however, vary with the application.

Before testing, consult the relevant literature or a specialist for more detailed and accurate information.

Determination of Trace Oxalic Acid by Surfactant-Sensitized Inhibitory Kinetic Photometry


Zhang Aimei, Jia Liping, Niu Xueli (College of Chemistry and Chemical Engineering, Liaocheng University, Liaocheng 252059)

Abstract: In dilute hydrochloric acid, trace oxalic acid markedly inhibits the fading reaction in which H2O2 oxidizes isatin, and the nonionic surfactant Triton X-100 strongly sensitizes this system. On this basis, a new surfactant-sensitized inhibitory kinetic photometric method for determining trace oxalic acid was established.

The linear range of the method is 0.005–0.50 mg/L, with a detection limit of 0.005 mg/L.

The method is simple, rapid, and sensitive, and gave satisfactory results for oxalic acid in spinach and urine samples.

Keywords: isatin; inhibitory kinetic photometry; surfactant; oxalic acid (received 2002-10-27; accepted 2003-03-24)

1 Introduction

Oxalic acid is a common component of vegetables and is readily absorbed by the human body.

Excessive oxalic acid in blood and urine leads to disorders such as vitamin deficiency, intestinal disease, and oxaluria.

Oxalic acid forms a stable chelate with Ca2+, hindering the body's absorption of calcium.

It can also precipitate with calcium in the body, promoting and accelerating the formation of kidney stones.

Sensitive, selective methods for determining oxalic acid are therefore important in both food analysis and clinical urinalysis.

Reported methods for determining oxalic acid include spectrophotometry [1], polarography [2], ion chromatography [3], and liquid [4] and gas chromatography [3].

Some of these lack sensitivity, others selectivity.

Liquid and gas chromatography are quite sensitive, but the instruments are expensive and not easily adopted.

Kinetic methods are generally sensitive but have mostly been applied to inorganic ions.

Kinetic photometric determinations of oxalic acid have been reported [5–7], but their sensitivity is modest.

Feng Suling et al. [6] built an inhibitory kinetic fluorimetric method on the inhibition by oxalic acid of the Fe3+-catalyzed oxidation of rhodamine G by H2O2, reaching a detection limit of 93 μg/L.

Jiang et al. [8] established a catalytic kinetic spectrophotometric method using rhodamine B as indicator, with a detection limit of 20 μg/L.

Ensafi et al. [9] developed a highly sensitive flow-injection catalytic photometric method with a detection limit of 5 μg/L.

In recent years surfactants have found many applications in analytical chemistry; Wu Zhengqing et al. [10] used cetyltrimethylammonium bromide to sensitize the NO2−-catalyzed fading of eosin by KBrO3, raising the sensitivity about 20-fold.
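The detection limits compared above are conventionally computed as three times the standard deviation of replicate blank measurements divided by the calibration slope. A small sketch of that 3σ convention (the blank readings and slope are illustrative, not from this paper):

```python
import numpy as np

def detection_limit(blank_signals, slope):
    """3-sigma detection limit: 3 * SD(blanks) / calibration slope."""
    return 3 * np.std(blank_signals, ddof=1) / slope

# Five hypothetical blank absorbance readings and a hypothetical slope
blanks = np.array([0.0101, 0.0098, 0.0103, 0.0097, 0.0101])
lod = detection_limit(blanks, slope=0.15)  # slope in absorbance per (mg/L)
```

With these numbers the limit comes out near 0.005 mg/L, the same order as the figure reported in the abstract.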

Fundamentals of VIC-2D


VIC-2D (Virtual Image Correlation 2D) is a computer image-processing technique for measuring deformation and stress in objects.

Built on mathematical principles and image-feature matching algorithms, it analyzes the pixel displacements between two images to infer the object's deformation and stress distribution.

This article explains the fundamentals of VIC-2D step by step.

The first step is building the experimental model.

First, prepare the specimen, usually a thin sheet or a simple geometric body.

Next, fix the specimen in a test rig so that load or strain can be applied.

Finally, use a digital camera or camera array to capture image sequences of the specimen under different loading conditions.

The second step is image preprocessing.

Preprocessing mainly involves removing noise, enhancing contrast, and detecting edges, to improve the accuracy of the later processing stages.

Common methods include smoothing filters, histogram equalization, and Canny edge detection.

The third step is feature extraction.

VIC-2D measures deformation and stress with feature-based matching algorithms.

Feature points or line segments are detected in the images to capture the specimen's geometry and to prepare for image registration.

Common feature-extraction algorithms include Harris corner detection and SIFT (Scale-Invariant Feature Transform).

The fourth step is image registration.

Registration aligns the two images so that their pixel correspondences are as accurate as possible, allowing displacements to be measured precisely.

VIC-2D generally uses global or local registration algorithms.

Global registration finds the rigid transform (translation, rotation, scale) that best aligns the two images.

Local registration uses more complex non-rigid models that account for non-uniform deformation of the specimen surface.

The fifth step is displacement computation.

Once the images are registered, the deformation is measured from the pixel displacement field between them.

VIC-2D computes displacements by region- or intensity-correlation methods, comparing pixel brightness values between the two images.

Common approaches are based on brightness gradients or on rates of brightness change.

The final step is computing strain and stress.

Given the displacement field and the known geometry, the strain and stress distributions of the object follow.

Strain is usually computed from the geometric relations and the spatial derivatives of the displacement field, using, for example, the Lagrangian or other finite-strain measures.
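The subset-matching idea behind steps 3–5 can be sketched with plain NumPy: track a small reference subset into the deformed image by maximizing zero-normalized cross-correlation (ZNCC) over integer shifts. Real DIC codes add sub-pixel interpolation and subset shape functions; everything below (image size, subset size, search range) is illustrative:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def subset_displacement(ref, cur, center, half=10, search=5):
    """Integer-pixel displacement of the square subset of `ref` centered
    at `center`, found by maximizing ZNCC over a search window in `cur`."""
    r, c = center
    sub = ref[r - half:r + half + 1, c - half:c + half + 1]
    best, best_dv = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = cur[r + dr - half:r + dr + half + 1,
                       c + dc - half:c + dc + half + 1]
            score = zncc(sub, cand)
            if score > best:
                best, best_dv = score, (dr, dc)
    return best_dv

# Synthetic speckle image and a copy rigidly shifted by (2, -3) pixels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(np.roll(ref, 2, axis=0), -3, axis=1)
disp = subset_displacement(ref, cur, center=(32, 32))
```

The recovered `disp` is the applied shift; a full displacement field comes from repeating this over a grid of subset centers, and strains from differentiating that field.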

Determination of Plasticizers in Sachima


1. Extraction

Take 5 g of sachima, crush it, and place it in a glass centrifuge tube. Add 20 ml of petroleum ether, vortex for 5 min, centrifuge for 5 min at 4000 rpm, and collect the supernatant. Add another 20 ml of petroleum ether, vortex 5 min, centrifuge 5 min, and combine with the first supernatant. Add 10 ml of petroleum ether, repeat the operation, and combine the supernatants.

Transfer the combined supernatant to a 100 ml rotary-evaporation flask, evaporate to dryness at 35 °C and 150 rpm, and dissolve the residue in 5 ml of cyclohexane:ethyl acetate (1:1) as the sample for gel permeation cleanup.

2. Cleanup

Instruments: LabTech AutoClean gel permeation cleanup system; LabTech EV115 rotary evaporator. Mobile phase: cyclohexane:ethyl acetate (1:1); flow rate 5 ml/min; collect the 8–18 min fraction.

Evaporate the collected fraction to dryness at 40 °C and 150 rpm, then dissolve it in 2 ml of acetonitrile as the HPLC sample.

Figure 1: gel permeation cleanup chromatogram of sachima.

3. Determination

Instrument: LabTech LC600 high-performance liquid chromatograph.
Column: LabTech ODS, 250 × 4.6 mm i.d., 5 μm; column temperature 30 °C.
Detection wavelength: 230 nm. (Shown: gel permeation cleanup chromatograms of spiked and unspiked sachima.)
[Chromatogram: detector signal in mV (−100 to 1100) versus time in min (2 to 20).]
Figure 2: HPLC chromatogram of the mixed standard of five plasticizers (elution order: 1 DMP, 2 DEP, 3 DBP, 4 DEHP, 5 DNOP).
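Quantitation against the mixed standard can be done by single-point external-standard calibration, comparing peak areas at 230 nm. A sketch with hypothetical peak areas (none of these numbers are from the text):

```python
def external_standard(area_sample, area_standard, conc_standard):
    """Single-point external standard:
    c_sample = (A_sample / A_standard) * c_standard."""
    return area_sample / area_standard * conc_standard

# Hypothetical DBP peak areas for the extract and a 1.0 mg/L standard
c_extract = external_standard(area_sample=5400.0, area_standard=9000.0,
                              conc_standard=1.0)  # mg/L in the injected extract
```

Multiplying by the final extract volume (2 ml) and dividing by the 5 g sample mass would convert this to a content per gram of sachima.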

USB Flash Drive Security Issues and Protective Measures — Yuan Hongkai


References (fragment):
Jas A., Touba N.A. Test vector decompression via cyclical scan chains and its application to testing core-based designs [C]. Proceedings of Int. Test Conf., 1998: 458-464.
Nourani M., Tehranipour M. RL-Huffman encoding for test compression and power reduction in scan application [J]. ACM Trans. Des. Autom. Electron. Syst., 2005, 10(1): 91-115.
Chandra A., Chakrabarty K. System-on-a-chip test data compression and decompression architectures based on Golomb codes [J]. IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., 2001, 20(3): 355-368.
Chandra A., Chakrabarty K. Test data compression and test resource partitioning for system-on-a-chip using frequency-directed run-length (FDR) codes [J]. IEEE Trans. Comput., 2003, 52(8): 1076-1088.
p. 76 — Science and Technology Association Forum, 2012, No. 4 (second half)

Determination of Phenoxybenzamine Hydrochloride by Linear Sweep Polarography

Journal of Zhangzhou Normal University (Natural Science), 2012, No. 3 (General No. 77)

Qiu Zehai 1, Hu Zhibiao 2, Zhou Yunlong 2, Tuo Honggui 2
(1. College of Chemistry and Chemical Engineering, Fuzhou University, Fuzhou, Fujian; 2. College of Chemistry and Materials, Longyan University, Longyan, Fujian)

Abstract: The electrochemical behavior of phenoxybenzamine hydrochloride was studied by linear sweep polarography in a 0.6 mol/L NH3·H2O-NH4Cl buffer solution of pH 8.5. …

The chemical name of phenoxybenzamine hydrochloride is N-(1-methyl-2-phenoxyethyl)-N-(2-chloroethyl)benzylamine hydrochloride, with a relative molecular mass of about 340.3. …

Phenoxybenzamine hydrochloride is an effective α-receptor blocker: at postganglionic adrenergic receptors it prevents or reverses the action of exogenous and endogenous catecholamines, dilating the peripheral blood vessels and increasing blood flow, with a duration of action of up to 3 to 4 days. It can be used for conditions caused by prostatic hyperplasia …

Experimental water was doubly distilled; the nitrogen was pure; all reagents were of analytical grade.

1.2 Analytical method

Transfer an aliquot of phenoxybenzamine hydrochloride solution into a 25 mL colorimetric tube and add 5 mL of the 0.6 mol/L, pH 8.5 buffer …

Common English Abbreviations in NMR and Their Chinese Names


NMR中常用的英文缩写和中文名称收集了一些NMR中常用的英文缩写,译出其中文名称,供初学者参考,不妥之处请指出,也请继续添加.相关附件NMR中常用的英文缩写和中文名称APT Attached Proton Test 质子连接实验ASIS Aromatic Solvent Induced Shift 芳香溶剂诱导位移BBDR Broad Band Double Resonance 宽带双共振BIRD Bilinear Rotation Decoupling 双线性旋转去偶(脉冲)COLOC Correlated Spectroscopy for Long Range Coupling 远程偶合相关谱COSY ( Homonuclear chemical shift ) COrrelation SpectroscopY (同核化学位移)相关谱CP Cross Polarization 交叉极化CP/MAS Cross Polarization / Magic Angle Spinning 交叉极化魔角自旋CSA Chemical Shift Anisotropy 化学位移各向异性CSCM Chemical Shift Correlation Map 化学位移相关图CW continuous wave 连续波DD Dipole-Dipole 偶极-偶极DECSY Double-quantum Echo Correlated Spectroscopy 双量子回波相关谱DEPT Distortionless Enhancement by Polarization Transfer 无畸变极化转移增强2DFTS two Dimensional FT Spectroscopy 二维傅立叶变换谱DNMR Dynamic NMR 动态NMRDNP Dynamic Nuclear Polarization 动态核极化DQ(C) Double Quantum (Coherence) 双量子(相干)DQD Digital Quadrature Detection 数字正交检测DQF Double Quantum Filter 双量子滤波DQF-COSY Double Quantum Filtered COSY 双量子滤波COSYDRDS Double Resonance Difference Spectroscopy 双共振差谱EXSY Exchange Spectroscopy 交换谱FFT Fast Fourier Transformation 快速傅立叶变换FID Free Induction Decay 自由诱导衰减H,C-COSY 1H,13C chemical-shift COrrelation SpectroscopY 1H,13C化学位移相关谱H,X-COSY 1H,X-nucleus chemical-shift COrrelation SpectroscopY 1H,X-核化学位移相关谱HETCOR Heteronuclear Correlation Spectroscopy 异核相关谱HMBC Heteronuclear Multiple-Bond Correlation 异核多键相关HMQC Heteronuclear Multiple Quantum Coherence异核多量子相干HOESY Heteronuclear Overhauser Effect Spectroscopy 异核Overhause效应谱HOHAHA Homonuclear Hartmann-Hahn spectroscopy 同核Hartmann-Hahn谱HR High Resolution 高分辨HSQC Heteronuclear Single Quantum Coherence 异核单量子相干INADEQUATE Incredible Natural Abundance Double Quantum Transfer Experiment 稀核双量子转移实验(简称双量子实验,或双量子谱)INDOR Internuclear Double Resonance 核间双共振INEPT Insensitive Nuclei Enhanced by Polarization 非灵敏核极化转移增强INVERSE H,X correlation via 1H detection 检测1H的H,X核相关IR Inversion-Recovery 反(翻)转回复JRES J-resolved spectroscopy J-分解谱LIS Lanthanide (chemical shift reagent ) Induced Shift 
镧系(化学位移试剂)诱导位移LSR Lanthanide Shift Reagent 镧系位移试剂MAS Magic-Angle Spinning 魔角自旋MQ(C) Multiple-Quantum ( Coherence ) 多量子(相干)MQF Multiple-Quantum Filter 多量子滤波MQMAS Multiple-Quantum Magic-Angle Spinning 多量子魔角自旋MQS Multi Quantum Spectroscopy 多量子谱NMR Nuclear Magnetic Resonance 核磁共振NOE Nuclear Overhauser Effect 核Overhauser效应(NOE)NOESY Nuclear Overhauser Effect Spectroscopy 二维NOE谱NQR Nuclear Quadrupole Resonance 核四极共振PFG Pulsed Gradient Field 脉冲梯度场PGSE Pulsed Gradient Spin Echo 脉冲梯度自旋回波PRFT Partially Relaxed Fourier Transform 部分弛豫傅立叶变换PSD Phase-sensitive Detection 相敏检测PW Pulse Width 脉宽RCT Relayed Coherence Transfer 接力相干转移RECSY Multistep Relayed Coherence Spectroscopy 多步接力相干谱REDOR Rotational Echo Double Resonance 旋转回波双共振RELAY Relayed Correlation Spectroscopy 接力相关谱RF Radio Frequency 射频ROESY Rotating Frame Overhauser Effect Spectroscopy 旋转坐标系NOE谱ROTO ROESY-TOCSY Relay ROESY-TOCSY 接力谱SC Scalar Coupling 标量偶合SDDS Spin Decoupling Difference Spectroscopy 自旋去偶差谱SE Spin Echo 自旋回波SECSY Spin-Echo Correlated Spectroscopy自旋回波相关谱SEDOR Spin Echo Double Resonance 自旋回波双共振SEFT Spin-Echo Fourier Transform Spectroscopy (with J modulation) (J-调制)自旋回波傅立叶变换谱SELINCOR Selective Inverse Correlation 选择性反相关SELINQUATE Selective INADEQUA TE 选择性双量子(实验)SFORD Single Frequency Off-Resonance Decoupling 单频偏共振去偶SNR or S/N Signal-to-noise Ratio 信/ 燥比SQF Single-Quantum Filter 单量子滤波SR Saturation-Recovery 饱和恢复TCF Time Correlation Function 时间相关涵数TOCSY Total Correlation Spectroscopy 全(总)相关谱TORO TOCSY-ROESY Relay TOCSY-ROESY接力TQF Triple-Quantum Filter 三量子滤波WALTZ-16 A broadband decoupling sequence 宽带去偶序列WATERGATE Water suppression pulse sequence 水峰压制脉冲序列WEFT Water Eliminated Fourier Transform 水峰消除傅立叶变换ZQ(C) Zero-Quantum (Coherence) 零量子相干ZQF Zero-Quantum Filter 零量子滤波T1 Longitudinal (spin-lattice) relaxation time for MZ 纵向(自旋-晶格)弛豫时间T2 Transverse (spin-spin) relaxation time for Mxy 横向(自旋-自旋)弛豫时间tm mixing time 混合时间τ c rotational correlation time 旋转相关时间。

A Study of the Mannich Reaction of Acetylferrocene with Dichloromethane as Solvent


The Mannich reaction is an important organic synthesis reaction, first described by the German chemist Carl Mannich in 1912; it is used chiefly to introduce an aminomethyl functional group into aromatic and other activated compounds.

The reaction's selectivity is easy to control, its conditions are mild, and its products often have significant biological activity.

In recent years, research on the Mannich reaction has deepened: the ways of introducing the functional group keep multiplying, and the scope of study keeps widening.

In this experiment we explored the reaction pathway using dichloromethane as the solvent.

In the experiment, carbamic acid was first reduced in dichloromethane to give an amino alcohol; acetylferrocene, phenol, and the resulting amino alcohol were then added to a mixed solution containing phosphoric acid, and after a 3 h reaction the desired product was obtained.

The product was then characterized by a series of experiments to establish its structure.

How should dichloromethane's role in the Mannich reaction be assessed? First, as a relatively stable polar solvent, it effectively takes up water and other impurities during the reaction, giving higher-quality intermediates and products.

Second, the carbamic acid serving as reductant in this experiment was of relatively low quality, and acetylferrocene, with its higher melting point, dissolves only on heating; overheating, however, generates side products and degrades the product.

As a solvent that does not readily escape the reaction mixture, dichloromethane keeps the reaction atmosphere stable, so both the temperature and the reaction time can be controlled well.

Dichloromethane therefore has real practical value in the Mannich reaction.

IR and mass spectrometry showed that the amide -NH2 underwent condensation in the Mannich reaction, introducing an N-C linkage; although some impurities were present, the product was identified as the theoretically predicted one.

The experiment thus reached a clear conclusion: dichloromethane gives high reaction efficiency in the Mannich reaction and can serve as a promising organic solvent.

Overall, dichloromethane shows good reactivity and broad applicability in the Mannich reaction; reaction time and temperature can be controlled effectively, ensuring reliable and accurate experiments.

This conclusion is significant for research on the Mannich reaction and offers new ideas for the development of the field.

Various Tests Before a VASP Calculation


(Pre-calculation) checks

1. Testing the quality of a pseudopotential

(1) Method: run a calculation on a single, isolated atom.
(2) Requirements:
1. use the default settings for symmetry and spin polarization;
2. ENCUT must be large enough;
3. the cell must be large enough — 15 Å is generally sufficient, and for some elements an even smaller cell works.

(3) Example: a single Fe atom.

1. INCAR file:
SYSTEM = Fe atom
ENCUT = 450.00 eV
NELMDL = 5 ! make five delays till charge mixing (see Note 1)
ISMEAR = 0
SIGMA = 0.1

2. POSCAR file:
atom
15.00
1.00 0.00 0.00
0.00 1.00 0.00
0.00 0.00 1.00
1
Direct
0 0 0

3. KPOINTS file (see Note 2):
Automatic
Gamma
1 1 1
0 0 0

4. POTCAR file: (omitted)

Note 1 — the keyword NELMDL:
(A) Purpose: NELMDL gives the number of non-self-consistent electronic steps at the beginning of the run.

Here "non-self-consistent" means the initial charge density is held fixed; since the charge density is used to set up the Hamiltonian, the initial Hamiltonian is held fixed as well.

(B) Default values:
NELMDL = -5 when ISTART=0, INIWAV=1, and IALGO=8;
NELMDL = -12 when ISTART=0, INIWAV=1, and IALGO=48;
NELMDL = 0 otherwise.
NELMDL may be positive or negative. A positive number means the delay is applied after each ionic movement — in general not a convenient option. A negative value results in a delay only for the start configuration.
(C) Why NELMDL can reduce the computation time: the charge density is used to set up the Hamiltonian; the wavefunctions are then optimized iteratively toward the exact wavefunctions of that Hamiltonian; from the optimized wavefunctions a new charge density is calculated, which is then mixed with the old input charge density (a brief flowchart is given in the manual, p. 105). The initial guessed wavefunctions are usually far off, so for the first NELMDL non-self-consistent steps the charge density — and with it the Hamiltonian — is held fixed while only the wavefunctions are optimized; once the wavefunctions are reasonably close to those of the initial Hamiltonian, the charge density is optimized simultaneously as well.
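A pseudopotential or cutoff test like the one above is usually repeated over a range of ENCUT values. A small sketch that generates the INCAR text for such a sweep (the template simply mirrors the Fe-atom example; the directory names and cutoff values are arbitrary choices, not prescribed by the text):

```python
def make_incar(encut):
    """Return INCAR text for the single-Fe-atom test at a given cutoff (eV)."""
    return (
        "SYSTEM = Fe atom\n"
        f"ENCUT = {encut:.2f} eV\n"
        "NELMDL = 5\n"
        "ISMEAR = 0\n"
        "SIGMA = 0.1\n"
    )

# One input per cutoff; in practice each string is written to its own run directory
cutoffs = [300, 350, 400, 450, 500]
inputs = {f"encut_{e}": make_incar(e) for e in cutoffs}
```

Plotting the converged total energy against ENCUT then shows the cutoff needed for that pseudopotential.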

Schematic Principles of Several Less Common GC Detectors (English) — 1


The Thermionic Ionization Detector

Electrons produced by a heated filament can be accelerated by an appropriate potential so that they attain sufficient energy to ionize any gas or vapor molecules in their path. In 1957, in the early days of gas chromatography, Ryce and Bryce modified a standard vacuum ionization gauge to examine its possibilities as a GC detector. A diagram of the device is shown in figure 47. The sensor consisted of a vacuum tube containing a filament, grid and anode, very similar in form to the thermionic triode valve. The tube was operated under reduced pressure, and an adjustable leak was arranged to feed a portion of the column eluent into the gauge. The sensor was fitted with its own pumping system, vacuum gauge, and the usual necessary cold traps. Helium was used as the carrier gas; the grid collector electrode was set at +18 V with respect to the cathode, and the plate at -20 V to collect any positive ions formed. As the ionization potential of helium is 24.5 V, the electrons would not have sufficient energy to ionize the helium gas. Most organic compounds, however, have ionization voltages lying between 9.5 and 11.5 V and consequently would be ionized by the 18 V electrons and provide a plate current. The plate current was measured by an impedance converter in much the same way as the FID ionization current. The detection limit was reported to be 5 x 10^-11 moles, but unfortunately the actual sensitivity in terms of g/ml is not known and is difficult to estimate; it is likely to be fairly high, probably approaching that of the FID. The response of the detector is proportional to the pressure of the gas in the sensor from about 0.02 mm to 1.5 mm of mercury, and in this region of pressure the response was claimed to be linear. Hinkle et al., who also examined the performance of the detector, suggested the sensor must be operated under conditions of molecular flow, i.e.
where the mean free path of the molecules is about the same as the electrode separation. Very pure helium was necessary to ensure low noise and base signal. The detector had a "fast" response, but its main disadvantage was the need to operate at very low pressures, so that it required a vacuum pump; furthermore, for stability, the sensor pressure needed to be very precisely controlled.

The Discharge Detector

About the same time that Ryce and Bryce were developing the thermionic ionization detector, Harley and Pretorious and, independently, Pitkethly and his co-workers were developing the discharge detector. By applying the appropriate potential, a discharge can be maintained between two electrodes situated in a gas, provided the pressure is maintained between 0.1 and 10 mm of mercury. After the discharge has been initiated, the electrode potential can be reduced and the discharge will still continue. Under stable discharge, the electrode potential remains constant, independent of the gas pressure and the electrode current. The electrode potential, however, depends strongly on the composition of the gas. It follows that the system could function as a GC detector. Pitkethly modified a small domestic neon lamp for this purpose, and a diagram of his sensor is shown in figure 48. The lamp was operated at about 3 mm of mercury pressure with a current of 1.5. Under these conditions the potential across the electrodes was 220 V. Pitkethly reported that a concentration of 10^-6 g/l gave an electrode voltage change of 0.3 V. The noise level was reported to be about 10 mV; thus, at a signal-to-noise level of 2, the minimum detectable concentration would be about 3 x 10^-11 g/ml. This sensitivity is comparable to that of the FID and the argon ionization detector. The detector was claimed to be moderately linear, with a linear dynamic range of three orders of magnitude, but values for the response index were not reported.
It was not apparent whether the associated electronics contained nonlinear signal-modifying circuitry or not. Unfortunately, there were several disadvantages to this detector. One was erosion of the electrodes due to "sputtering". In addition, the electrodes were contaminated by sample decomposition, and the detector had to be used with a well-controlled vacuum system.

The Spark Discharge Detector

Lovelock noted that the voltage at which a spark will occur between two electrodes situated in a gas depends on the composition of the gas between the electrode tips, and suggested that this could form the basis of a GC detector. The system suggested by Lovelock is shown in figure 49. The sensor consists of a glass tube in which two electrodes are sealed. The electrodes are connected in the circuit depicted in figure 49. The voltage across the electrodes is adjusted to a value just below that required to produce a spark. When a solute vapor enters the sensor, the sparking voltage is reduced and a spark discharge occurs. This discharges the capacitor until its voltage falls below that which will maintain the discharge. The capacitor then charges up through the charging resistor until the breakdown voltage is again reached and another spark is initiated. Thus the spark frequency will be proportional to (or at least a monotonic function of) the vapor concentration. The total count in a peak will be proportional to the peak area and, if a digital-to-analog converter is also employed, the output will be proportional to the concentration in the detector and thus, plotted against time, will provide the normal chromatogram. This detector does not appear to have been developed further, but it is an interesting example of a sensor that, in effect, produces a digital output.

The Radio Frequency Discharge Detector

When an RF discharge occurs across two electrodes between which the field is diverging (i.e.,
within a coaxial electrode arrangement), a DC potential appears across the electrodes, the magnitude of which depends on the composition of the gas through which the discharge is passing. Karman and Bowman developed a detector based on this principle; a diagram of it is shown in figure 50. The sensor consisted of a metal cylinder that acted as one electrode, with a coaxial wire passing down the center that acted as the other. A 40 MHz radio frequency was applied across the electrodes, and the DC potential that developed across them was fed via a simple electronic circuit to a potentiometric recorder. The resistance-capacity decoupling shown in their circuit appears hardly sufficient to remove the AC signal satisfactorily, so the circuit shown in figure 50 may be only schematic. The column was connected directly to the sensor, and the eluent passed through the annular channel between the central electrode and the sensor wall. The response of the radio frequency discharge detector was reported as 106 mV for a concentration change of 10^-3 g/ml of methyl laurate. The noise level was reported to be 0.05 mV, which gives a minimum detectable concentration, at a signal-to-noise ratio of 2, of about 6 x 10^-10 g/ml. This detector had the advantage of operating at atmospheric pressure, so no vacuum system was required. The effect of temperature on the detector's performance was not reported, nor was its linearity over a significant concentration range. This detector appears not to have been made commercially.

The Ultrasound Whistle Detector

The velocity of propagation of sound through a gas depends on the gas density; thus, the presence of a solute vapor in a gas changes the velocity of sound through it. This velocity change can be utilized as a basis for vapor detection in GC.
The frequency of a whistle, consisting of an orifice that directs a stream of gas against a jet edge proximate to a resonant cavity, is related to the velocity of sound in the gas passing through it. A diagram of such a whistle is shown in figure 12. Nyborg et al. (38) showed that the frequency (fn) of the whistle could be described by the following equation. Testerman and McLeod designed and built a detector based on the whistle principle. In their sensor design, typical values for the dimensions in the diagram, and the variables in the equation, were (t) 0.064 mm, (d) 0.74 mm, (h) 1.676 mm and (L) 3.81 mm. Under the flow conditions normally used for GC separations, frequencies ranging from 30 to 50 kHz (ultrasonic frequencies) were observed. The sensor contained two sound generators, one operating with pure carrier gas and the other with the eluent from the column. The two frequencies were allowed to beat together, the beat frequency being directly related to the frequency difference between the two whistles and consequently to the density difference between the contents of the two sensors. An example of the use of the whistle detector to monitor the separation of a mixture of hydrocarbons is shown in figure 60. The sample size was 7.5 ml of gas mixture and the carrier gas flow rate was 180 ml/min. This chromatogram illustrates the effective use of the detector, and the operating conditions show its limitations. The sensitivity appears somewhat less than that of the katharometer, and the very high flow rates necessary to activate the whistle restrict the use of this type of detector very severely.
In the original report the linearity was stated to cover two orders of magnitude of concentration, but with modern electronics it is likely that this linear range could be extended by at least another order of magnitude.

The Absolute Mass Detector

The absolute mass detector adsorbs the material as it is eluted from the column onto a suitable adsorbent and continually weighs the mass adsorbed. This system was devised by Bevan and Thorburn [43,44], who adsorbed the eluent from a GC column onto the coated walls of a vessel supported on a recording balance. A diagram of their apparatus is shown in figure 61. The adsorption vessel was 1.4 cm I.D. and about 5 cm high. The walls of the vessel were coated with a high-boiling absorbent such as polyethylene glycol or an appropriate normal hydrocarbon, depending on the samples being trapped. Under such circumstances the solutes separated had to be relatively low boiling, as otherwise they would condense in the capillary tube connecting to the adsorption vessel. The tube dipped to the base of the absorber, where a baffle was situated to direct the eluent onto the walls of the adsorption vessel. The balance record represented an integral chromatogram, the step height giving directly the mass of solute eluted. Despite the relatively casual arrangement of the adsorbent, the adsorption appears to have been quite efficient and, with 10 mg charges on the column, an accuracy of 1% could easily be achieved. Later, Bevan et al. [45,46] reduced the size of the absorber and employed charcoal as the adsorbing material. Although this improved the performance of the detector and reduced the necessary sample size, the detecting system was never made commercially.
Even after modification, its sensitivity was relatively poor and, despite its being an absolute detecting system, it placed too many restrictions on the operation of the chromatograph and on the samples that could be chromatographed to be generally useful.

The Surface Potential Detector

The surface potential detector was developed by Griffiths and Phillips [47,48] in the early 1950s and consisted of a cell containing two parallel metal plates between which the column eluent flowed. One plate was mechanically attached to an oscillator that vibrated the plate at about 10 kHz. If the plates are identical, the surface charge on each plate is the same, and no potential is induced in the second plate by the vibrating plate. If, however, the surfaces are dissimilar, the surface charge on each plate will differ and the vibrating plate will induce a potential on the other plate. A diagram of the detector is shown in figure 62. Both plates were constructed of the same metal, but one was coated with a monolayer of a suitable substance that would absorb any vapors present in the column eluent. The absorbing layer caused the charge on the two plates to be dissimilar, and the resulting potential across the two plates was balanced out by the bias potentiometer. When a solute vapor passes through the detector, some is distributed into the absorbent layer, changing the surface charge and thus inducing a change in potential between the electrodes. This produces an AC signal voltage that can then be amplified and rectified, the output being passed to a recorder (or to a data acquisition system). The signals provided by the detector could be as great as several hundred millivolts. The sensitivity of the detector was claimed to be similar to that of the katharometer (i.e., about 10^-6 g/ml).
Its response was partly determined by the distribution coefficient of the solute vapor between the carrier gas and the absorbing layer (and thus by the chemical character of the coating), as well as by the chemical nature of the solute itself. As a consequence, the response varied considerably between different solutes. Within a given homologous series, however, the response increased with the molecular weight of the solute, though this was probably merely a reflection of the increase in the distribution coefficient with molecular weight. Although an interesting alternative method of detection, this detector has been little used in GC and is not commercially available.

Research Proposal: Detection of Trace Volatile Organic Compounds by Proton Transfer Reaction Mass Spectrometry


1. Background

Trace volatile organic compounds (TVOCs) are a collective term for a wide variety of organic substances, many of which are respiratory irritants, malodorous, or toxic.

TVOC sources are wide-ranging and include groundwater and soil contamination, emissions from building materials, and domestic and office environments.

The effects of TVOCs on human health, environmental protection, and ecological balance are attracting increasing attention.

Mass spectrometry is an important technique for TVOC detection, offering high sensitivity, high selectivity, and high resolution.

Proton transfer reaction mass spectrometry (PTR-MS) is an emerging variant that enables rapid, highly sensitive TVOC detection.

2. Objectives

The main aims of this study are to examine the advantages and characteristics of PTR-MS for trace VOC detection, and to assess the technique's application prospects in the TVOC field together with its outstanding problems and limitations.

3. Scope

(1) Principles and characteristics of PTR-MS detection of trace VOCs: the definition and sources of TVOCs are introduced first, followed by the application of mass spectrometry to TVOC detection and the characteristics of this approach.

The basic principle and characteristics of PTR-MS detection are then presented.

(2) Advantages and application prospects of PTR-MS detection: the technique's advantages are discussed in terms of detection sensitivity, selectivity, resolution, and throughput, and its application prospects are illustrated with practical case studies.

(3) Problems and limitations of PTR-MS detection: detection accuracy, sample pretreatment, instrument precision, and mass spectral libraries are analyzed, and improvements are proposed.

4. Significance

(1) Examining the principle, advantages, application prospects, problems, and limitations of PTR-MS detection helps improve the accuracy and sensitivity of TVOC detection.

(2) Case studies of the technique's use in different fields provide a reference for its wider adoption.

(3) The proposed improvements support the technique's further development and refinement.

HPLC Detectors


Fluorescence detector
[Figure: optical path of a common fluorescence detector, with light source (deuterium lamp), excitation monochromator, sample flow cell, emission monochromator, and photomultiplier tube; a UV photodiode with filters and a detection cell appears in the simpler filter-based layout.]
➢ 3. Limitations: suitable only for analytes that fluoresce (or that can be derivatized to fluoresce); its linear range is also narrower than that of a UV detector.
Differential Refractive Index Detector (RID)
[Figure: schematic]
[Figure: conductivity detector schematic, with electrodes, measuring circuit, and conductivity meter.]
✓ Limitations: sensitive to flow rate and temperature, and subject to many interferences.
➢ 2. Amperometric detector:
✓ Features: high sensitivity and high selectivity; widely used; detects redox-active species (those capable of electrode reactions); well suited to reversed-phase chromatography.
Principle: as the separated electroactive species flow over the electrode surface, the potential difference between the solution and the electrode causes them to gain or lose electrons and be reduced or oxidized. The resulting charge transfer between solution and electrode produces a current that obeys Faraday's law, i.e., the current is proportional to the analyte concentration. Recording the current as a function of time gives the chromatogram.
✓ Principle and structure: detection is based on the electrical conductivity of ionic solutions. Ions migrating in an electric field carry the current, and the conductivity depends on the nature and concentration of the ions. When a voltage is applied across the two electrodes of the conductivity cell, anions move toward the anode and cations toward the cathode. The resistance of the solution is determined by the number of ions present and their migration rates; the ionic mobility (velocity per unit field) depends on the ion's charge and size, the medium, the solution temperature, and the ion concentration, while the migration velocity itself depends on the magnitude of the applied voltage. The applied voltage may be DC, sine-wave, or square-wave. Once the effective potential is set, the current in the circuit, and hence the conductance, can be measured.
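The conductance relation sketched above, conductivity governed by the nature and concentration of the ions, can be illustrated with Kohlrausch's law of independent ionic migration. This is a dilute-solution sketch only: the ion table holds standard 25 °C limiting molar conductivities, and a real detector cell would also require a cell constant, omitted here.

```python
# Solution conductivity from ionic limiting molar conductivities
# (Kohlrausch's law of independent migration). Dilute-limit sketch;
# values are standard 25 C textbook figures in S*cm^2/mol.
LIMITING_MOLAR_CONDUCTIVITY = {
    "K+": 73.5, "Na+": 50.1, "Cl-": 76.3, "NO3-": 71.4,
}

def conductivity_S_per_cm(ions):
    """ions: {ion name: concentration in mol/L}. Returns kappa in S/cm."""
    kappa = 0.0
    for ion, conc_mol_per_l in ions.items():
        conc_mol_per_cm3 = conc_mol_per_l / 1000.0  # 1 L = 1000 cm^3
        kappa += conc_mol_per_cm3 * LIMITING_MOLAR_CONDUCTIVITY[ion]
    return kappa

# 0.01 mol/L KCl: roughly 1.5e-3 S/cm in the dilute limit
print(conductivity_S_per_cm({"K+": 0.01, "Cl-": 0.01}))
```

The measured value for 0.01 mol/L KCl is slightly lower than this ideal-dilution estimate, which is why practical conductivity cells are calibrated against standard solutions.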

Test Vector Decompression via Cyclical Scan Chains and Its Application to Testing Core-Based Designs

Abhijit Jas and Nur A. Touba
Computer Engineering Research Center
Department of Electrical and Computer Engineering
University of Texas, Austin, TX 78712-1084
E-mail: {jas, touba}@

Abstract

A novel test vector compression/decompression technique is proposed for reducing the amount of test data that must be stored on a tester and transferred to each core when testing a core-based design. A small amount of on-chip circuitry is used to reduce both the test storage and test time required for testing a core-based design. The fully specified test vectors provided by the core vendor are stored in compressed form in the tester memory and transferred to the chip, where they are decompressed and applied to the core (the compression is lossless). Instead of having to transfer each entire test vector from the tester to the core, a smaller amount of compressed data is transferred instead. This reduces the amount of test data that must be stored on the tester and hence reduces the total amount of test time required for transferring the data with a given test data bandwidth.

1. Introduction

Testing systems-on-a-chip containing multiple cores is a major challenge due to limited test access to each core [Chandramouli 96], [Zorian 97]. The test vectors for each core must be applied to the core's inputs and internal scan, and the test response of the core must be observed at the core's outputs and shifted out of its internal scan. Some means for getting the test data from the tester to each core and getting the test response from each core to the tester is required. The best possible situation is to have full parallel access to the inputs and outputs of the cores [Immaneni 90]. However, this requires multiplexing all of the core I/Os to the chip pins. The routing complexity and overhead for this can be enormous. A more efficient means for providing test access to the cores is to use scan chains.
The number of scan chains that are used and the way in which they are organized determines the test data bandwidth for each core (i.e., the rate at which test vectors can be scanned in and test response scanned out). The number of scan chains and their organization typically depend on the capabilities of the tester being used and on the scan routing costs. The total test time required for testing a core-based design depends on the amount of test data that must be transferred between the tester and the chip and on the test data bandwidth for transferring the data. Figure 1 shows a general block diagram for how test data is transferred from the tester to the cores. The amount of test data that must be transferred from the tester to a particular core is equal to the number of test vectors (T) for the core times the number of input bits and internal scan elements for the core (m), i.e., T x m. For systems-on-a-chip that contain many complex cores, both the amount of test data and the test time can become very large.

Figure 1. Block Diagram for Transferring Test Data between Tester and Embedded Cores

One solution to this problem is to use built-in self-test (BIST), where on-chip hardware is used to test the cores. However, for logic cores this is only practical if the core is made "BISTable" by the core vendor. Currently, there are few cores that include BIST features. Usually, only a set of test vectors for the core is given. The amount of BIST hardware required to apply a large set of specified test vectors is generally prohibitive. This paper presents an efficient compression/decompression scheme to reduce the amount of test data that must be stored on the tester and transferred to a core. A small amount of on-chip circuitry is used to reduce both the test storage and test time required for testing a core-based design.
The fully specified test vectors provided by the core vendor are stored in compressed form in the tester memory and transferred to the chip, where they are decompressed and applied to the core (the compression is lossless). Instead of having to transfer each entire test vector from the tester to the core, a smaller amount of compressed data is transferred instead. This reduces the amount of test data that must be stored on the tester and hence reduces the total amount of test time required for transferring the data with a given test data bandwidth. Thus, the technique presented in this paper can be used to reduce the test time required for testing a system-on-a-chip given a tester's limited memory and channel capacity.

Test vector compression/decompression techniques can be classified based on the amount of information they require. Four general classifications are described below:

Schemes Requiring ATPG - These are schemes that involve using special ATPG procedures in generating the test set. This includes techniques that try to compact test sets [Tromp 91], [Pomeranz 93], [Kajihara 93], or to find easy-to-encode test vectors [Reeb 96], [Hellebrand 95a].

Schemes Requiring Fault Simulation - These are schemes that do not decompress a particular test set, but rather use pseudo-random generators (e.g., LFSRs) to apply a large number of vectors to detect most of the faults, thereby reducing the number of deterministic test vectors that are required. These techniques require fault simulation of the circuit-under-test (CUT) to verify fault coverage.

Schemes Requiring Test Cubes - These are schemes that compress test cubes, which are ATPG-generated vectors in which the unspecified inputs are left as don't cares. These schemes include LFSR reseeding [Koenemann 91], [Hellebrand 95b], [Zacharia 96], and width compression [Chakrabarty 97].

Schemes for Fully Specified Test Vectors - These are schemes that are able to compress fully specified test vectors.
These schemes were developed for compressing test vectors stored in on-chip ROMs [Agarwal 81], [Aboulhamid 83], [Dandapani 84], [Edirisooriya 92], [Dufaza 93], [Iyengar 98].

For intellectual property cores, where no information is given about the internal structure of the core, test vector compression/decompression techniques that require either ATPG or fault simulation cannot be used. The core integrator must test the cores with the set of test vectors given by the core vendor. Furthermore, in most cases, the test vectors that are given are fully specified; thus, techniques which require test cubes also cannot be used. For this reason, compression/decompression techniques for fully specified test vectors are needed. Previous work in this area has focused on reducing the size of an on-chip ROM needed to store the test vectors.

The cyclical scan chain decompression technique described in this paper can be used for fully specified test vectors and thus is applicable to intellectual property cores. It requires very little additional hardware; rather, it takes advantage of the fact that existing scan chains on the chip can be configured as cyclical decompressors.

A test data compression/decompression scheme for reducing the time for downloading test data from a workstation to a tester has recently been proposed by Yamaguchi et al. [Yamaguchi 97], [Ishida 98]. Note that this is a software-based approach which targets a different problem than the one addressed here; it would be too complex and slow for an on-chip implementation as described here.

The paper is organized as follows: The basic idea of cyclical scan chain decompression is explained in Sec. 2. Section 3 describes how the tester transfers encoded data to cyclical scan chain decompressors. Section 4 discusses ways in which cyclical scan chain decompression can be implemented in systems-on-a-chip containing many cores.
Experimental results indicating the amount of compression that can be achieved are shown in Sec. 5. Section 6 is a conclusion.

2. Cyclical Scan Chain Decompression

This section describes the basic idea of test vector decompression via cyclical scan chains. Practical issues of how to implement it in a core-based design are described in subsequent sections. Cyclical scan chain decompression involves the use of two scan chains, as shown in Fig. 2. One is the "test scan chain", where the test vector will be applied to the circuit-under-test (CUT), and the other is the "cyclical scan chain", where the decompression will take place. The serial output of the cyclical scan chain feeds the serial input of the test scan chain and also loops back and is XORed with the serial input of the cyclical scan chain. There are two requirements for the cyclical scan chain:

1. It must have the same number of scan elements as the test scan chain.
2. Its contents must not be overwritten when the system clock for the CUT is applied.

When the system clock for the CUT is applied, the test vector in the test scan chain is applied to the CUT and its response is loaded back into the test scan chain. However, the contents of the cyclical scan chain must not be overwritten. The cyclical scan chain can be configured using the chip boundary scan, the boundary scan around a core, or a scan chain in a different system clock domain. Note that if the test scan chain is a boundary scan that is driving the primary inputs of the CUT and is not capturing test response, then its contents are not lost when the system clock is applied, and thus it can act as its own cyclical scan chain. This is illustrated in Fig. 3.

Figure 2. Cyclical Scan Chain Decompression Architecture

Figure 3.
Cyclical Scan Chain Decompression Using Boundary Scan

The cyclical scan chain has the property that if it contains test vector t, then the next test vector generated in the cyclical scan chain will be the XOR of t and the "difference vector" that is shifted in. So generating a test set consisting of n test vectors, t1, t2, ..., tn, in a cyclical scan chain would involve first initializing the scan chain to all 0's, then shifting t1 into the scan chain, followed by the difference vector t1 ⊕ t2, followed by t2 ⊕ t3, and so on up to tn-1 ⊕ tn.

The difference vectors that need to be shifted into the cyclical scan chain depend on the way the test set is ordered. By carefully ordering the test vectors in the test set, the number of 0's in the difference vectors can be maximized. Test vectors tend to be highly correlated: faults in the CUT that are structurally related require similar input value assignments in order to be provoked and sensitized to an output. Thus, many pairs of test vectors in the test set will have similar input combinations, so that the difference vectors contain many 0's. Ordering the test vectors so that correlated test vectors follow each other results in difference vectors with many more 0's than 1's.

Data that is skewed such that the probability of one value exceeds that of the other can be efficiently compressed with a run-length code. An example of a variable-to-block run-length code is shown in Fig. 4. A variable number of bits is encoded by a fixed number of bits; in this example, the fixed number of bits is 3. If a difference vector were 0000010000001100001, then the encoded vector would be 101 110 000 100. Note that if the last few bits at the very end of the difference vector bit stream (all difference vectors concatenated together) cannot be encoded, extra bits can be added to solve the problem.
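The XOR property above can be checked with a small software model: seeding the cyclical chain with all 0's and shifting in t1, t1 ⊕ t2, t2 ⊕ t3, ... regenerates the ordered test set. This is a behavioral sketch only; the bit-string representation is an illustrative choice, not the hardware implementation.

```python
# Software model of the cyclical scan chain's XOR property:
# shifting in difference vector d takes the chain from t to t XOR d.

def difference_vectors(test_set):
    """Streams to shift in: t1, then t1^t2, t2^t3, ... (bit strings)."""
    diffs = [test_set[0]]
    for prev, curr in zip(test_set, test_set[1:]):
        diffs.append("".join("1" if a != b else "0" for a, b in zip(prev, curr)))
    return diffs

def regenerate(diffs):
    """Apply each difference vector to the (initially all-0) cyclical chain."""
    state = "0" * len(diffs[0])
    out = []
    for d in diffs:
        state = "".join("1" if a != b else "0" for a, b in zip(state, d))
        out.append(state)
    return out

tests = ["1011", "1001", "0001"]
assert regenerate(difference_vectors(tests)) == tests
```

The round trip holding for any ordering is what lets the scheme store only the (highly compressible) difference vectors.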
For example, if the last two bits were 00, then the codeword 111 could be used to encode them even though it generates 000000; the extra bits would simply be ignored.

Figure 4. Example of Run-Length Code

The hardware required for decompressing an encoded vector for a run-length code is very simple. Each three-bit block of encoded data is just a count of the number of 0's in the run, so a three-bit counter can be used to decompress the data. The counter is loaded with a three-bit block and counts down to zero. When it reaches a count of zero, it outputs a 1 (unless the initial state was 111) and is then reloaded with the next three-bit block.

Ordering the test set to minimize the run-length encoding corresponds to forming a complete weighted graph and finding the minimum-cost Hamiltonian path. Each node in the graph corresponds to a test vector and is connected by a weighted edge to every other node. The weight on the edge between two nodes is computed by forming the difference vector between the two corresponding test vectors and computing the number of bits needed to encode it. The minimum-cost path through the graph that does not repeat any vectors corresponds to the optimum ordering of the test vectors to maximize the compression. Many efficient heuristic procedures for finding a good ordering exist.

So the basic idea of cyclical scan chain decompression can be summarized as follows. Given a test set that needs to be applied to a CUT, a cyclical scan chain is formed where the number of stages is equal to the number of bits in the test vectors. The test vectors are then ordered to minimize the run-length encoding of the difference vectors. Rather than storing the full test vectors themselves, the compressed difference vectors (encoded with the run-length code) can be stored instead.
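The run-length code and its counter-based decoder can be sketched in software. One consistent reading of the Fig. 4 code, which reproduces the worked example above, is that a 3-bit block with value k in 0..6 encodes a run of k 0's followed by a 1, while 111 encodes seven 0's with no terminating 1; since the figure itself is not reproduced here, that reading is an assumption.

```python
# Variable-to-block run-length code (Fig. 4 reading): block value k in
# 0..6 encodes k zeros followed by a 1; 111 encodes seven zeros, no 1.

def rle_encode(bits):
    blocks, run = [], 0
    for b in bits:
        if b == "0":
            run += 1
            if run == 7:                       # 111: seven zeros, no 1
                blocks.append("111")
                run = 0
        else:
            blocks.append(format(run, "03b"))  # run zeros, then a 1
            run = 0
    # Trailing zeros would need the padding trick described in the text.
    return blocks

def rle_decode(blocks):
    out = []
    for blk in blocks:
        k = int(blk, 2)
        out.append("0" * k)
        if k < 7:
            out.append("1")  # counter reached zero: emit the 1
    return "".join(out)

enc = rle_encode("0000010000001100001")
assert enc == ["101", "110", "000", "100"]
assert rle_decode(enc) == "0000010000001100001"
```

The decoder mirrors the three-bit down-counter described above: load a block, count out that many 0's, then emit a 1 unless the block was 111.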
To test the CUT, the compressed difference vectors are shifted into a run-length decoder, which decompresses them into the original difference vector bit stream one bit at a time; this stream is fed into the cyclical scan chain to generate the test vectors.

This is the basic theory of cyclical scan chain decompression. There are many ways in which it can be implemented and used in different applications. The remainder of this paper focuses on its application to testing core-based designs. There are a number of practical issues related to how the tester transfers encoded data to cyclical scan chain decompressors, and how a core-based design can be configured during testing to allow cyclical scan chain decompression.

3. Transferring Data to Cyclical Scan Chain Decompressors

Because a variable-length code is used, the number of encoded bits transferred to the run-length decoder is less than the number of decoded bits transferred out. Since the run-length decoder shifts only one bit of data into the scan chain each clock cycle, there will be clock cycles when it is not ready to receive data from the tester. These clock cycles can be overlapped with the clock cycles required to transfer data to another cyclical scan chain decompressor in order to reduce test time. One way to accomplish this is to use a single channel from the tester to transfer encoded data to multiple cyclical scan chain decompressors.

Consider the simplest case, where a single tester channel is used to transfer data to two cyclical scan chain decompressors using the three-bit code shown in Fig. 5. The code in Fig. 5 is a slight modification of the run-length code shown in Fig. 4 which provides some useful advantages. In our experiments, we found that in most cases it allows more compression because it is not as inefficient when runs of 1's occur. The other major advantage is that it takes no more than 6 clock cycles to decompress each encoded block of 3 bits.
Thus, a single tester channel can shift in 6 bits of encoded data, 3 bits for each decompressor, and then load both decompressors at once (this is illustrated in Fig. 6). The decompressors can then start decompressing the encoded data while the tester takes 6 cycles to shift in the next 6 bits of encoded data. By the time the tester is ready to load the next block of encoded data, the decompressors are guaranteed to have finished decoding the previous block. While the code in Fig. 5 is slightly more complicated to decode than the one in Fig. 4, it can still be decoded by a small finite state machine (FSM).

Figure 5. Modified 3-Bit Run-Length Code

When the decompressor is loaded with a 3-bit block of encoded data, it generates the appropriate sequence of decoded bits and advances the cyclical scan chain and test scan chain for each bit of the sequence. For the code in Fig. 5, the length of the decoded sequence varies from 2 to 6 bits. A scan counter is used to count the number of bits shifted into the test scan chain. When the test scan chain is full, the system clock is activated to apply the test vector to the CUT.

Figure 6. Tester Shifts in 6 Bits of Encoded Data and Loads Two Run-Length Decoders

This approach has a number of attractive features. One tester channel is used to load two scan chains through the decompressors. The test program is simple: the tester just shifts in 6 bits of encoded data and applies a control signal to load the decompressors. The decompressors and related control circuitry are very simple. In effect, the decompressors allow the tester to load two scan chains with compressed test vectors in close to the time normally required to load one scan chain with uncompressed test vectors. This increases the effective bandwidth of a single tester channel and reduces the amount of data that needs to be stored in tester memory.
4. Application to Testing Core-Based Designs

Cyclical scan chain decompression can be used for testing core-based designs. No knowledge of the internal structure of the cores is required. The test vectors given by core vendors can be encoded and stored on the tester and then decompressed with cyclical scan chains on-chip. There are typically many different scan chains in a core-based design: each core may have an internal scan as well as a boundary scan collar, the user-defined logic (UDL) may contain scan chains, and the chip may have a boundary scan around its pins. The length of the internal scan chains in the cores cannot be changed, but the other scan chains are designed by the core integrator and thus can be configured in different ways.

The requirement for cyclical scan chain decompression, as described in Sec. 2, is that a cyclical scan chain of length equal to the test scan chain is needed, and that the cyclical scan chain must not lose its contents during the test session. The simplest case for using cyclical scan chain decompression is the boundary scan around the cores, since the boundary scan can be configured as its own cyclical scan chain (assuming it is not simultaneously used to capture the response of the logic surrounding the core). Using scan chain decompression in the internal scan of the core requires a separate cyclical scan chain of equal length. This cyclical scan chain can be configured in many different ways. Perhaps the simplest is to use the boundary scan around the chip pins, if one exists. If the chip boundary scan is longer than the internal scan of the core, it can of course be looped back at an intermediate point to form a cycle of the necessary length. In the same manner, the boundary scan around another core could also be used (as illustrated in Fig. 7).
The internal scan of a different core, whose system clock can be controlled independently of the core-under-test, can also be configured as part of the cyclical chain, provided it is the same length as or shorter than the test scan chain. If it is shorter, then it can be combined with boundary scan elements or scan elements in the UDL to form a chain of the correct length (as illustrated in Fig. 8). There are many options for configuring the cyclical scan chain decompressors, and a lot of flexibility for developing a test schedule that tests all the cores in a system-on-a-chip using cyclical scan chain decompression to maximize the test data bandwidth of the tester. The run-length decoders and scan counters can be reused in different test sessions for different cyclical scan chain configurations. The output of a run-length decoder need not connect directly to the cyclical scan chain; there can be any number of scan chain elements between the XOR at the input of the cyclical scan and the run-length decoder.

Figure 7. Configuring Boundary Scan as Cyclical Scan Chain for Internal Scan in Core

Figure 8. Using Internal Scan of a Core Plus Scan Elements in the UDL to Form Cyclical Scan Chain for the Internal Scan of the Core-Under-Test

5. Experimental Results

Experiments were performed on the ISCAS 85 [Brglez 85] and the large ISCAS 89 [Brglez 89] circuits. The test set for each circuit was ordered to minimize the run-length encoding of the difference vectors, and then encoded. Two different codes were tried: Code 1 is the 2-bit code shown in Fig. 9, and Code 2 is the 3-bit code shown in Fig. 5. Table 1 shows the size of the scan chain for each circuit (for the ISCAS 85 circuits, it is assumed that their primary inputs are controlled by a scan chain). The original amount of test data is shown, followed by the amount of compressed data for each code.
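The ordering step used in these experiments is a minimum-cost Hamiltonian path problem, which is NP-hard in general; a greedy nearest-neighbor pass is one of the simple heuristics the text alludes to. The sketch below is illustrative only: the cost function uses the Fig. 4-style reading of the 3-bit code (k 0's plus a 1 per block, 111 for seven 0's) with a padded final block for trailing 0's, and is not the paper's exact procedure.

```python
# Greedy nearest-neighbor ordering of a test set to cheapen run-length
# encoding of the difference vectors (heuristic sketch).

def encoded_cost(a, b):
    """Encoded bits for the difference vector a XOR b (Fig. 4 reading)."""
    diff = ["1" if x != y else "0" for x, y in zip(a, b)]
    blocks, run = 0, 0
    for bit in diff:
        if bit == "0":
            run += 1
            if run == 7:
                blocks, run = blocks + 1, 0
        else:
            blocks, run = blocks + 1, 0
    if run:
        blocks += 1        # trailing zeros: padded final block
    return 3 * blocks

def greedy_order(test_set):
    remaining = list(test_set)
    order = [remaining.pop(0)]  # arbitrary start vector
    while remaining:
        nxt = min(remaining, key=lambda v: encoded_cost(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order

vectors = ["0000", "0001", "1111", "0011"]
print(greedy_order(vectors))  # correlated vectors end up adjacent
```

More elaborate Hamiltonian-path heuristics (e.g., starting the greedy pass from several vectors and keeping the cheapest tour) trade computation for a few percent more compression.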
The percentage of compression is computed as:

(Original Bits - Compressed Bits) / (Original Bits)

As can be seen, the 3-bit code provided better compression than the 2-bit code for most circuits. Codes with more than 3 bits gave much worse compression and thus are not shown.

When the test vectors were ordered, we noticed that some difference vectors require far fewer bits when run-length encoded than when left unencoded, while others require more; some test vectors are simply not well correlated with the others. Thus, one option for improving the compression is to use the cyclical scan chain decompressor to generate only the test vectors for which it is efficient, and to shift in the remaining test vectors normally. We tried this for the large ISCAS 89 circuits; the results are shown in Table 2. On average, it increased the amount of compression by about 5%.

Figure 9. 2-Bit Run-Length Code

Note that the compression achieved is a two-fold advantage: not only does it reduce test storage requirements, it also reduces test time, which translates directly into lower test costs. The amount of compression could be much greater if test cubes were compressed instead of fully specified test vectors.

Table 1. Compression Results for Compressing the Whole Test Set

Table 2. Compression Results with Compression Turned Off for Part of the Test Set

6. Conclusions

This paper presents a new approach for compression/decompression of test vectors. Several key ideas are proposed:

1. Using existing scan chains on the chip to form a cycle to generate test vectors by shifting in a difference vector.
2. Ordering the test vectors in the test set to maximize the 0's in the difference vectors.
3. Encoding the difference vectors with a run-length code.
4.
Using a single channel from the tester to load multiple run-length decoders, which in effect increases the "effective" bandwidth of the channel.
5. Configuring scan chains in a core-based design to act as decompressors for other scan chains.

These ideas lay the groundwork for further advances in test vector compression/decompression. Areas for further research include studying variable-to-variable-length encoding of the difference vectors as well as other codes, and combining cyclical scan chain decompression with BIST.

Acknowledgements

This material is based on work supported in part by the Defense Advanced Research Projects Agency (DARPA) under Contract No. DABT63-94-C-0045, in part by the National Science Foundation under Grant No. MIP-9702236, and in part by the Texas Advanced Research Program under Grant No. 1997-003658-369.

References

[Aboulhamid 83] Aboulhamid, M.E., and E. Cerny, “A Class of Test Generators for Built-In Testing,” IEEE Transactions on Computers, Vol. C-32, No. 10, pp. 957-959, Oct. 1983.
[Agarwal 81] Agarwal, V.K., and E. Cerny, “Store and Generate Built-In Testing Approach,” Proc. of FTCS-11, pp. 35-40, 1981.
[Brglez 85] Brglez, F., and H. Fujiwara, “A Neutral Netlist of 10 Combinational Benchmark Circuits and a Target Translator in Fortran,” Proc. of International Symposium on Circuits and Systems, pp. 663-698, 1985.
[Brglez 89] Brglez, F., D. Bryan, and K. Kozminski, “Combinational Profiles of Sequential Benchmark Circuits,” Proc. of International Symposium on Circuits and Systems, pp. 1929-1934, 1989.
[Chakrabarty 97] Chakrabarty, K., B.T. Murray, J. Liu, and M. Zhu, “Test Width Compression for Built-In Self-Testing,” Proc. of International Test Conference, pp. 328-337, 1997.
[Chandramouli 96] Chandramouli, R., and S. Pateras, “Testing Systems on a Chip,” IEEE Spectrum, pp. 42-47, Nov. 1996.
[Dandapani 84] Dandapani, R., J. Patel, and J.
Abraham, “Design of Test Pattern Generators for Built-In Test,” Proc. of International Test Conference, pp. 315-319, 1984.
[Dufaza 93] Dufaza, C., C. Chevalier, and L.F.C. Lew Yan Voon, “LFSROM: A Hardware Test Pattern Generator for Deterministic ISCAS85 Test Sets,” Proc. of Asian Test Symposium, pp. 160-165, 1993.
[Edirisooriya 92] Edirisooriya, G., and J.P. Robinson, “Design of Low Cost ROM Based Test Generators,” Proc. of VLSI Test Symposium, pp. 61-66, 1992.
[Hellebrand 95a] Hellebrand, S., B. Reeb, S. Tarnick, and H.-J. Wunderlich, “Pattern Generation for a Deterministic BIST Scheme,” Proc. of International Conference on Computer-Aided Design (ICCAD), pp. 88-94, 1995.
[Hellebrand 95b] Hellebrand, S., J. Rajski, S. Tarnick, S. Venkataraman, and B. Courtois, “Built-In Test for Circuits with Scan Based on Reseeding of Multiple-Polynomial Linear Feedback Shift Registers,” IEEE Transactions on Computers, Vol. 44, No. 2, pp. 223-233, Feb. 1995.
[Immaneni 90] Immaneni, V., and S. Raman, “Direct Access Test Scheme - Design of Block and Core Cells for Embedded ASICs,” Proc. of International Test Conference, pp. 488-492, 1990.
[Ishida 98] Ishida, M., D.S. Ha, and T. Yamaguchi, “COMPACT: A Hybrid Method for Compressing Test Data,” Proc. of VLSI Test Symposium, pp. 62-69, 1998.
[Iyengar 98] Iyengar, V., K. Chakrabarty, and B.T. Murray, “Built-In Self Testing of Sequential Circuits Using Precomputed Test Sets,” Proc. of VLSI Test Symposium, pp. 418-423, 1998.
[Kajihara 93] Kajihara, S., I. Pomeranz, K. Kinoshita, and S.M. Reddy, “Cost-Effective Generation of Minimal Test Sets for Stuck-at Faults in Combinational Logic Circuits,” Proc. of the 30th Design Automation Conference, pp. 102-106, 1993.
[Koenemann 91] Koenemann, B., “LFSR-Coded Test Patterns for Scan Designs,” Proc. of European Test Conference, pp. 237-242, 1991.
[Pomeranz 93] Pomeranz, I., L.N. Reddy, and S.M. Reddy, “COMPACTEST: A Method to Generate Compact Test Sets for Combinational Circuits,” IEEE Trans. on Computer-Aided Design, Vol. 12, No. 7, pp.
1040-1049, Jul. 1993.
[Reeb 96] Reeb, B., and H.-J. Wunderlich, “Deterministic Pattern Generation for Weighted Random Pattern Testing,” Proc. of European Design & Test Conference, pp. 30-36, 1996.
[Tromp 91] Tromp, G., “Minimal Test Sets for Combinational Circuits,” Proc. of International Test Conference, pp. 204-209, 1991.
[Yamaguchi 97] Yamaguchi, T., M. Tilgner, M. Ishida, and D.S. Ha, “An Efficient Method for Compressing Test Data,” Proc. of International Test Conference, pp. 191-199, 1997.
[Zacharia 96] Zacharia, N., J. Rajski, J. Tyszer, and J.A. Waicukauski, “Two-Dimensional Test Data Decompressor for Multiple Scan Designs,” Proc. of International Test Conference, pp. 186-194, 1996.
[Zorian 97] Zorian, Y., “Test Requirements for Embedded Core-based Systems and IEEE P1500,” Proc. of International Test Conference, pp. 191-199, 1997.
