Real-time tracking of borescope tip pose
profr 0.3.3: documentation for the profiling tool
Package 'profr'                                             October 14, 2022

Title: An Alternative Display for Profiling Information
Version: 0.3.3
Description: An alternative data structure and visual rendering for the profiling information generated by Rprof.
License: MIT + file LICENSE
URL: https:///hadley/profr
BugReports: https:///hadley/profr/issues
Imports: plyr, stringr
Suggests: ggplot2
Encoding: UTF-8
LazyData: true
RoxygenNote: 6.1.1
NeedsCompilation: no
Author: Hadley Wickham [aut, cre]
Maintainer: Hadley Wickham <******************>
Repository: CRAN
Date/Publication: 2018-12-05 23:40:03 UTC

R topics documented: ggplot.profr, parse_rprof, plot.profr, profr, sample-data

ggplot.profr    Visualise profiling data with ggplot2

Description:
Visualise profiling data stored in a profr data.frame. This will plot the call tree of the specified stopwatch object. If you only want a small part, you will need to subset the object.

Usage:
ggplot.profr(data, ..., minlabel = 0.1, angle = 0)

Arguments:
  data       profile output to plot
  ...        other arguments passed on to ggplot
  minlabel   minimum percent of time for a function to get a label
  angle      function label angle

See Also: plot.profr

Examples:
if (require("ggplot2")) {
  ggplot(nesting_prof)
  ggplot(reshape_prof)
}

parse_rprof    Parse Rprof output

Description:
Parses the output of Rprof into an alternative format described in profr. This produces a flat data frame, which is somewhat easier to summarise and visualise.

Usage:
parse_rprof(path, interval = 0.02)

Arguments:
  path       path to Rprof output
  interval   real-time interval between samples (in seconds)

Value: data.frame of class profr

See Also: profr for profiling and parsing

Examples:
nesting_ex <- system.file("samples", "nesting.rprof", package = "profr")
nesting <- parse_rprof(nesting_ex)
reshape_ex <- system.file("samples", "reshape.rprof", package = "profr")
diamonds <- parse_rprof(reshape_ex)

plot.profr    Visualise profiling data with base graphics

Description:
Visualise profiling data stored in a profr data.frame. If you only want a small part of the total call tree, you will need to subset the object as demonstrated by the example.

Usage:
## S3 method for class 'profr'
plot(x, ..., minlabel = 0.1, angle = 0)

Arguments:
  x          profile output to plot
  ...        other arguments passed on to plot.default
  minlabel   minimum percent of time for a function to get a label
  angle      function label angle

See Also: ggplot.profr

Examples:
plot(nesting_prof)
plot(reshape_prof)

profr    Profile the performance of a function call

Description:
This is a wrapper around Rprof that provides results in an alternative data structure, a data.frame.

Usage:
profr(expr, interval = 0.02, quiet = TRUE)

Arguments:
  expr       expression to profile
  interval   interval between samples (in seconds)
  quiet      should output be discarded?

Details:
The columns of the data.frame are:
  f        name of function
  level    level in call stack
  time     total time (seconds) spent in function
  start    time at which control entered function
  end      time at which control exited function
  leaf     TRUE if the function is a terminal node in the call tree, i.e. didn't call any other functions
  source   guess at the package that the function came from

Value: data.frame of class profr

See Also: parse_rprof to parse a standalone Rprof file; plot.profr and ggplot.profr to visualise the profiling data

Examples:
## Not run:
glm_ex <- profr({Sys.sleep(1); example(glm)}, 0.01)
head(glm_ex)
summary(glm_ex)
plot(glm_ex)
## End(Not run)

sample-data    Sample profiling datasets

Description:
These two datasets illustrate the results of running parse_rprof on the sample Rprof output stored in the samples directory. The output was generated by the code in samples/generate.r.

Usage:
nesting_prof
reshape_prof

Format: a data frame
A High-Precision PTP Clock Synchronization Method and Its Application
Citation format: HUANG H J, WU Q W, JIANG R, et al. A high precision PTP clock synchronization method and its application [J]. Video ...
CLC number:
Abstract: Clock synchronization is critically important.
The method was therefore applied to SDI over IP technology, achieving high-precision clock synchronization that satisfies the practical application requirements of the product.
Abstract: ... in the field of industrial Ethernet. In the field of IP streaming media transmission, clock synchronization is very important. This paper introduces the principle of PTP clock synchronization and puts forward a specific implementation method of clock synchronization. This method is applied to the SDI over IP technology of SMPTE ST ... in the practical application of the product.
Keywords:
... a core technology.
The high-precision time synchronization protocol (Precision Time Protocol, PTP) computes, from four captured timestamps, the round-trip delay between the master and slave clocks and the time offset between them, and then corrects the slave clock for that offset.
[Figure: PTP synchronization message exchange between the master clock and the slave clock, showing the Sync message and timestamp T1.]
The specific synchronization procedure can be divided into steps: (1) the master clock sends a Sync message to start a synchronization round; ... (5) ..., which contains ... .
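The offset and delay calculation from the four PTP timestamps can be sketched as follows (a minimal sketch assuming a symmetric network path; the function and variable names are ours, not from the paper):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """Compute slave-clock offset and mean path delay from the four
    PTP timestamps (assumes the path delay is symmetric):
      t1: master sends Sync          (master clock)
      t2: slave receives Sync        (slave clock)
      t3: slave sends Delay_Req      (slave clock)
      t4: master receives Delay_Req  (master clock)
    """
    ms = t2 - t1                 # master-to-slave measurement (delay + offset)
    sm = t4 - t3                 # slave-to-master measurement (delay - offset)
    offset = (ms - sm) / 2.0     # slave clock is ahead of the master by `offset`
    delay = (ms + sm) / 2.0      # one-way mean path delay
    return offset, delay

# Example: slave clock is 5 units ahead; true one-way delay is 3 units.
off, d = ptp_offset_delay(100, 108, 120, 118)
print(off, d)  # 5.0 3.0
```

The slave then subtracts `offset` from its local time (or steers its oscillator) to align with the master.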
In further application scenarios, to analyse master-slave synchronization based on the recovered ..., a recursive (moving) average filter or a first-order lag filter can be applied to the measured offset Toffset.
2.2.1 Recursive average filtering: the most recent N samples are kept in a queue; each new sample enters at the tail of the queue and displaces the oldest sample, and the filter output is computed as the average of the queued samples. This provides strong suppression of both periodic interference and white noise.
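The recursive average filtering step described above can be sketched as follows (a minimal illustration; the window size and sample values are our assumptions, not values from the paper):

```python
from collections import deque

class MovingAverageFilter:
    """Recursive (moving) average filter over the last N offset samples."""
    def __init__(self, n):
        self.window = deque(maxlen=n)   # oldest sample drops off automatically

    def update(self, sample):
        self.window.append(sample)      # new sample enters at the queue tail
        return sum(self.window) / len(self.window)

f = MovingAverageFilter(4)
for x in [5.0, 5.2, 4.8, 5.0, 9.0]:     # final sample is a noise spike
    out = f.update(x)
print(round(out, 2))  # 6.0 (the 9.0 spike is attenuated)
```

A first-order lag filter, `y = a * x + (1 - a) * y_prev`, is a cheaper alternative with similar smoothing behaviour and no sample queue.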
Slides: comparison of 环球慧思 data with competitors (with figures)
Customs declaration data shows price information.
The Argentina data matches buyers with sellers, and adds contact details and telephone information.
Trade terms have been added.
Industry classification: industry analysis and cross-database queries.
Comparison of the number of countries covered
环球慧思
United States, Canada, Mexico, Brazil, Argentina, Chile, Peru, Uruguay, Paraguay, Ecuador, Colombia, Costa Rica, Panama, Bolivia, India, Vietnam, South Korea, Pakistan, Afghanistan, United Kingdom, Russia, Spain, Ukraine, Moldova, Venezuela, El Salvador, Taiwan (China), etc.: 27 countries and regions in total
Company registration number; more detailed contact information: email, contact person, contact person's job title.
All the filter conditions shown in the left-hand panel are searchable.
Detailed contact information: contact person, contact person's job title, importer tax code, legal representative's ID number.
Countries of the matched buyers and sellers
Detailed presentation of shipment quantities and prices.
Pakistan provides customs declaration data, searchable both by product keyword and by HS code.
腾道
特易
Summary
环球慧思 currently provides customs data for 27 countries, updated monthly or daily. In 腾道's data, Brazil, Venezuela, Bolivia, Spain and Canada are statistical data only, with no importer or exporter names, so they are not genuine customs data; most of its Central American data comes from shipping companies rather than customs declarations, and the data volume is very small. 特易 can only provide data for 19 countries and cannot cover Chile, Bolivia, El Salvador, Afghanistan, Canada, Spain, Brazil or Moldova.
Deep mining reveals the real purchasers and suppliers behind logistics companies.
Shows the specific container counts and container details of the cargo.
When the importer of record is a logistics company, the real importer is not visible.
TIMMS Indoor Mapping and Modeling System Datasheet
DATASHEET: Trimble Indoor Mobile Mapping Solution (TIMMS)

TIMMS is a manually operated push-cart designed to accurately model interior spaces without accessing GPS. It consists of 3 core elements: LiDAR and camera systems engineered to work indoors in mobile mode, computers and electronics for completing data acquisition, and a data processing workflow for producing final 2D/3D maps and models. The models are "geo-located", meaning the real-world position of each area is known.

With TIMMS, a walk-through of an interior space delivers full 360-degree coverage. The spatial data is captured and georeferenced in real time. Thousands of square feet are mapped in minutes, entire buildings in a single day.

TIMMS is ideal for applications such as situational awareness, emergency response, and creating accurate floor plans. All types of infrastructure can be mapped, even those extending over several city blocks:
• Plant and factory facilities
• High-rise office, residential, and government buildings
• Airports, train stations and other transportation facilities
• Music halls, theatres, auditoriums and other public event spaces
• Covered pedestrian concourses (above and below ground) with platforms, corridors, stair locations and ramps
• Underground mines and tunnels

YOUR BENEFITS
• High efficiency, accuracy and speed
• Lower data acquisition cost for as-builts
• Reduced infringement on operations

► No need for GNSS
► Little or no LiDAR shadowing
► Long-range LiDAR
► Self-contained
► Simple workflow
► Fully customizable
► Use survey control for precise georeferencing

THE OPTIMAL FUSION OF TECHNOLOGIES FOR CAPTURING SPATIAL DATA OF INDOOR AND GNSS-DENIED SPACES

TRIMBLE APPLANIX, 85 Leek Crescent, Richmond Hill, Ontario L4B 3B3, Canada. Phone +1-289-695-6000; Fax +1-905-709-6027. © 2017 Trimble Navigation Limited. All rights reserved. The Trimble logo is a trademark of Trimble, registered in the United States and in other countries. All other trademarks are the property of their respective owners.
PERFORMANCE
Onboard power: up to 4 hours without charge or swap; hot-swappable for unlimited operational time
Data storage: 1 TB SSD
Operations: nominal data collection speed of 1 meter per second; maximum distance between position fixes 100 meters
Typical field metrics:
  LiDAR point clouds: 1 cm relative-to-position accuracy*
  Productivity: in excess of 250,000 square feet per day

PHYSICAL DIMENSIONS
Height with mast low: 173 cm
Height with mast high: 221 cm
Distance to wheel with mast low (front to back): 80 cm
Distance to wheel with mast high (front to back): 88 cm
Distance between wheels (outside surface of wheels): 51 cm
Weight: 109 lb (49.5 kg)

* rms derived by comparison of TIMMS with a static laser scan; results may vary according to building configuration and trajectory chosen.
* System performance may vary with scanner type and firmware version. Published values based on X-130.

TIMMS COMPONENTS
Mobile unit and mast
TIMMS acquisition system:
  Inertial Measurement Unit (IMU)
  POS Computer System (PCS)
  LiDAR Control System (LCS)
One LiDAR; supported scanners include: Trimble TX-5; FARO Focus X-130, X-330, S-70-A, S-150-A, S-350-A
One spherical camera (6-camera configuration): field of view >80% of full sphere; 2 megapixels (MP) per camera; six 3.3 mm focal-length lenses; 1 meter/second (up to 4 FPS)
One operator and logging computer
16 batteries (8 + 8 spare); 2 battery chargers

SOFTWARE COMPONENT
Real-time monitoring and control GUI
Post-processing suite

SYSTEM DELIVERABLES
Georeferenced trajectory in SBET format
Georeferenced point cloud in ASPRS LAS format
Georeferenced spherical imagery in JPEG format
Georeferenced raster 2D floorplan

USER-SUPPLIED EQUIPMENT
PC for post-processing: Windows 7 64-bit OS; minimum of 300 GB of disk; 32 gigabytes of RAM required (64 recommended)

USER-SUPPLIED SOFTWARE
Basic LiDAR processing tools, recommended functionality: LAS import compatibility, visualization, clipping, raster-to-vector tools (manual and/or automated)
High-Speed Camera Product Specification Datasheet
High Speed Video: Range Overview

| Feature | i-SPEED LT | i-SPEED 2 | i-SPEED TR | i-SPEED 3 | i-SPEED FS |
|---|---|---|---|---|---|
| Resolution (full sensor) | 800 x 600 pixels | 800 x 600 pixels | 1280 x 1024 pixels | 1280 x 1024 pixels | 1280 x 1024 pixels |
| Speed at full resolution | 1,000 fps | 1,000 fps | 2,000 fps | 2,000 fps | 2,000 fps |
| Maximum recording speed | 2,000 fps | 33,000 fps | 10,000 fps | 150,000 fps | 1,000,000 fps |
| Shutter | user selectable to 5 microseconds | user selectable to 5 microseconds | user selectable to 2.14 microseconds | user selectable to 1 microsecond | user selectable to 200 nanoseconds |
| Internal memory options | 1 GB/2 GB/4 GB | 2 GB/4 GB | 4 GB/8 GB/16 GB | 4 GB/8 GB/16 GB | 4 GB/8 GB/16 GB |
| Lens mount | C-mount | C-mount | F-mount | F-mount | F-mount |
| CDU compatibility | ✓ | ✓ | ✓ | ✓ | ✓ |
| CF card storage compression | ✓ | ✓ | ✓ | ✓ | ✓ |
| Ethernet connection | ✗ | ✓ | ✓ | ✓ | ✓ |
| Multiple camera synchronisation | ✗ | ✓ | ✓ | ✓ | ✓ |
| Text/logo overlay | ✗ | ✗ | ✓ | ✓ | ✓ |
| User settings | ✗ | ✗ | ✓ | ✓ | ✓ |
| Battery backup | ✗ | ✗ | Optional | ✓ | ✓ |
| i-FOCUS | ✗ | ✗ | ✓ | ✓ | ✓ |
| i-CHEQ | ✗ | ✗ | ✓ | ✓ | ✓ |
| HiG options | ✗ | ✓ | ✗ | ✓ | ✓ |
| IRIG-B | ✗ | ✗ | ✗ | ✗ | ✓ |
| Economy modes | 3 | 9 | 3 | 9 + manual | 9 + manual |

See individual camera features and specification sheets for full product details.

High Speed, High Quality Imaging
With years of experience in high-quality digital image processing, the i-SPEED product range from Olympus offers high-speed video cameras suitable for numerous applications, including: automotive crash testing, research and development, production, fault diagnosis, bottling and packaging, pharmaceutical, manufacturing, component testing, ballistics and broadcast.
Olympus is a world-leading manufacturer of imaging products with a long history of producing high-quality systems, providing solutions within a variety of industrial applications. The Olympus i-SPEED high-speed video range is no exception: whatever the high-speed application, industry or specialist requirement, Olympus has a high-speed camera for you.

Advanced Test Equipment Rentals, 800-404-ATEC (2832). Established 1981.
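The trade-off behind the memory options and economy modes above can be illustrated with some rough arithmetic (a sketch under our own assumptions: uncompressed raw storage at 1 byte per pixel, which the datasheet does not specify):

```python
def recording_seconds(mem_gib, width, height, fps, bytes_per_pixel=1):
    """Estimate raw recording time as memory / (frame size x frame rate).
    Assumes uncompressed frames; real cameras and bit depths will differ."""
    mem_bytes = mem_gib * 1024 ** 3
    frame_bytes = width * height * bytes_per_pixel
    return mem_bytes / (frame_bytes * fps)

# e.g. a 16 GiB buffer at 1280 x 1024 and 2,000 fps (i-SPEED 3 class numbers)
t = recording_seconds(16, 1280, 1024, 2000)
print(round(t, 2))  # 6.55 seconds of full-resolution recording
```

Halving the sensor area used (as the economy modes do) roughly doubles the available recording time at the same frame rate.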
Key features:
• Multiple camera synchronisation
• High resolution at higher frame speeds
• On-board image measurement
• Portable and easy to use
• High-speed electronic shuttering

i-SPEED LT (2,000 fps max): The i-SPEED LT has been designed to be quick to set up and simple to use. With the ability of instant video playback, it is a complete 'point and shoot' inspection tool.

i-SPEED 2 (33,000 fps max): The i-SPEED 2 is an invaluable tool for general research and development requirements, with recording rates of up to 33,000 fps and instant playback and analysis via the CDU.

CONNECTION TO FIBERSCOPE, BORESCOPE OR MICROSCOPE
With over 30 years of experience within the Remote Visual Inspection industry, the Olympus i-SPEED camera has been optimised for use with Olympus fiberscopes, rigid borescopes and microscopes.

DATA CHANNELS
Multi-channel analogue data input from 0-5 V provides graphical representation synchronised exactly with the captured video.

MULTIPLE CAMERA SYNCHRONISATION
By synchronising two or more i-SPEED cameras, multiple views of the same event can be obtained and downloaded via Ethernet for review.

ETHERNET CONNECTION
The Olympus i-SPEED 2 can be connected to a PC via Ethernet, allowing full camera control and image download.

DOWNLOAD VIA COMPACT FLASH
By selecting frames required for review via the CDU, download times are reduced.
The video clip is easily transferred from the camera's internal memory to a removable Compact Flash card in either compressed or uncompressed format for transfer to a PC.

CONTROLLER DISPLAY UNIT (CDU)
The unique CDU facilitates the operation of the i-SPEED high-speed video camera through an intuitive menu structure, without the need of a PC, making the system portable and easy to use, and also provides instant playback.

i-SPEED TR (10,000 fps max): The i-SPEED TR provides high resolution and extreme low-light sensitivity at recording speeds of up to 10,000 fps, making it the ideal analysis tool for research and development.

USER SETTINGS
Up to five favoured camera settings or test parameters can be stored on the camera and easily restored.

i-FOCUS
A feature unique to Olympus, i-FOCUS is an electronic image focusing tool which provides confirmation of focus within a live image and a visual indication of depth of field.

LUMINANCE HISTOGRAM
Provides graphical representation of the average brightness within a live image, allowing easy aperture set-up in real time.

i-SPEED 3 (150,000 fps max): The i-SPEED 3 has been designed to an advanced specification, providing high-frame-rate capture, on-board image analysis tools and electronic shuttering to 1 μs.

BATTERY BACK-UP
The i-SPEED 3 has an internal battery back-up to ensure camera operation should AC power fail.

VIDEO TRIGGER*
An advanced triggering function that begins the recording process when movement occurs within a defined area of a live image.
*Triggering utilises changes in the luminance of the image.

VIDEO CLIP PERSONALISATION
Text and company logos can be permanently burnt into video clips to assist with report generation and video accreditation.

MEASUREMENT
Accessed via the CDU: distance, angle, velocity, angular velocity, acceleration and angular acceleration measurements can be calculated, allowing instant analysis of captured images.

IMAGE PROCESSING
Captured footage can be processed via the CDU to enhance an image and identify detail that would not otherwise be seen.

i-SPEED SOFTWARE SUITE
The i-SPEED Software Suite is designed to mirror the ease of use and high-specification power of the camera range. There are three levels of PC software available for use with all cameras.

Control (supplied as standard with all i-SPEED cameras, excluding the LT model):
• Connect to camera*
• Control camera*
• Download images to PC
• Manual distance, speed, angle and angular-velocity measurements
Additional functionality available when using Control with i-SPEED LT/3/FS:
• i-FOCUS for confirmation of depth of focus
• i-CHEQ for instant camera status determination
• Luminance histogram for precise image set-up

Control-Pro (optional upgrade adding the following features to the Control software):
• Auto capture and download to PC
• Free-text facility including data and frame comments
• Video annotation
• HTML report generator
• 64-point auto-tracking
• Perspective projection
• Data filtering
• Video triggering
• Lens distortion correction and saving
• Permanent text burned onto video, including customers' logos

i-SPEED Viewer: to allow i-SPEED footage to be reviewed within an organisation (in addition to the Control and Control-Pro PC software suites), a simple Viewer software is available as a free-of-charge download from the Olympus website. This offers the capability to view i-SPEED footage and change playback speed only.

*i-SPEED LT is not Ethernet enabled. The i-SPEED Software Suite may be purchased to enable saved-image manipulation and analysis as described.
For further information on i-SPEED software options, please see the dedicated literature.

IRIG-B
An onboard IRIG-B receiver provides lock synchronisation/lock exposure to sub-5-microsecond accuracy.

ECONOMY MODES
Up to nine preset economy modes are available to utilise a smaller area of the CMOS sensor, which provides extended recording times without the need to reduce frame rates.
Manual economy mode allows the user to define the sensor size.

i-SPEED FS (1,000,000 fps max; 1280 x 1024 resolution @ 2,000 fps): The i-SPEED FS provides high resolution and extreme low-light sensitivity at recording speeds of up to 1,000,000 fps, with an electronic global shutter selectable to 0.2 μs, making the camera suitable for capturing even the quickest high-speed phenomena.

i-CHEQ
A display of external LED indicators provides confirmation of camera record status, which is useful in ballistics or crash-test environments to provide absolute confidence that tests will be captured.

SERVICE: adding value to Olympus i-SPEED cameras
• Local, quick and efficient service and repairs
• Product and application support
• Continued education through local high-speed video training courses
• Loan equipment available during servicing and repair
• Expertise in endoscopy, microscopy and non-destructive testing
• System upgrades
• Flood-lighting

ACCESSORIES
For more information on the products below, please see the Olympus RVI Product Guide. To complement the i-SPEED digital high-speed video cameras and to suit the varying and demanding needs of high-speed video applications, Olympus offers a wide range of lens (C-mount and F-mount) and lighting accessories. Olympus has over 30 years of experience in the Remote Visual Inspection (RVI) industry and offers a range of products suitable for use with i-SPEED cameras.

SERIES 5 BORESCOPES
Olympus rigid borescopes offer high-quality images and provide visual access to confined areas.
Available in a range of diameters from 4 to 16 mm, the Olympus Series 5 borescope can be connected to any high-speed camera via an optical adaptor to allow capture of high-speed applications.

INDUSTRIAL FIBERSCOPES
Olympus also offers a range of flexible fiberscopes for use when direct-line access to the inspection area is not available. With diameters ranging from 6 to 11 mm, lengths of up to 3 m and four-way tip articulation, Olympus fiberscopes can provide a view of the hardest-to-reach high-speed applications.

LIGHT SOURCES
Olympus high-intensity light sources can provide illumination for applications that require the use of borescopes and fiberscopes, or can be utilised for focused illumination of high-speed events.

OPTICAL ADAPTORS
A range of adaptors is available for connection between i-SPEED cameras and Olympus borescopes and fiberscopes.
GPS
Step 1: Triangulating
• Position is calculated from distance measurements (ranges) to satellites.
• Mathematically we need four satellite ranges to determine exact position.
• Three ranges are enough if we reject ridiculous answers or use other tricks.
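Solving for a position from four satellite ranges can be sketched with a Gauss-Newton iteration (a minimal illustration with synthetic coordinates and no real units; the fourth unknown is the receiver clock bias expressed as a distance, which is why four ranges are needed):

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gps_fix(sats, ranges, guess=(0.0, 0.0, 0.0, 0.0), iters=10):
    """Solve for receiver position (x, y, z) and clock-bias distance b from
    pseudoranges rho_i = dist(sat_i, pos) + b, by iterated linearisation."""
    x, y, z, b = guess
    for _ in range(iters):
        H, resid = [], []
        for (sx, sy, sz), rho in zip(sats, ranges):
            dist = math.dist((sx, sy, sz), (x, y, z))
            # Jacobian row: derivatives of the predicted range w.r.t. x, y, z, b
            H.append([(x - sx) / dist, (y - sy) / dist, (z - sz) / dist, 1.0])
            resid.append(rho - (dist + b))
        # Normal equations H^T H dx = H^T r give the least-squares update
        HtH = [[sum(H[k][i] * H[k][j] for k in range(len(H))) for j in range(4)]
               for i in range(4)]
        Htr = [sum(H[k][i] * resid[k] for k in range(len(H))) for i in range(4)]
        dx = solve_linear(HtH, Htr)
        x, y, z, b = x + dx[0], y + dx[1], z + dx[2], b + dx[3]
    return x, y, z, b

# Synthetic check: four satellites, a known true position, a known clock bias.
sats = [(20000.0, 0.0, 0.0), (0.0, 20000.0, 0.0),
        (0.0, 0.0, 20000.0), (12000.0, 12000.0, 12000.0)]
true_pos, bias = (100.0, 200.0, 300.0), 50.0
ranges = [math.dist(s, true_pos) + bias for s in sats]
x, y, z, b = gps_fix(sats, ranges)
print(round(x), round(y), round(z), round(b))  # 100 200 300 50
```

With more than four satellites the same normal-equations step yields the least-squares fix, which is how the "reject ridiculous answers" redundancy is used in practice.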
Loran (Long Range Navigation)
A low-frequency, pulsed, hyperbolic radio navigation and positioning system.
Omega: a long-range, global, all-weather radio navigation system.
VOR (Very High Frequency Omnidirectional Range): a VHF omnidirectional beacon. DME: Distance Measuring Equipment.
... the satellite orbital parameters obtained by demodulating the received signal.
Ionospheric delay
L-band: the radio-frequency range from 390 to 1550 MHz
Multipath error: error caused by signals arriving over two or more propagation paths
Multichannel receiver
Pseudorandom noise code (PRN)
• Each satellite has a unique Pseudo-Random Code.
• The Pseudo-Random Code (PRC) is a fundamental part of GPS. Physically it is just a very complicated digital code, in other words, a complicated sequence of "on" and "off" pulses.
• The signal is so complicated that it almost looks like random electrical noise, hence the name "pseudo-random".
• There are several good reasons for that complexity. First, the complex pattern helps make sure that the receiver doesn't accidentally sync up to some other signal. The patterns are so complex that it is highly unlikely that a stray signal will have exactly the same shape.
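A sketch of how one such code, the GPS C/A (coarse/acquisition) code, can be generated from two 10-stage linear feedback shift registers; the G1/G2 feedback taps and the PRN-1 output-tap pair (2, 6) follow the published C/A code definition, but this is an illustration rather than a receiver-grade implementation:

```python
def ca_code(tap_a, tap_b):
    """Generate one 1023-chip GPS C/A Gold code.
    tap_a/tap_b select the G2 output taps for a given satellite PRN
    (e.g. taps 2 and 6 for PRN 1). Registers start as all ones."""
    g1 = [1] * 10            # G1 LFSR: feedback from stages 3 and 10
    g2 = [1] * 10            # G2 LFSR: feedback from stages 2, 3, 6, 8, 9, 10
    chips = []
    for _ in range(1023):
        # chip = G1 output XOR (two selected G2 stages), the Gold-code combine
        chips.append(g1[9] ^ (g2[tap_a - 1] ^ g2[tap_b - 1]))
        g1 = [g1[2] ^ g1[9]] + g1[:9]                                # shift G1
        g2 = [g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]] + g2[:9]  # shift G2
    return chips

prn1 = ca_code(2, 6)
print("".join(map(str, prn1[:10])))  # first 10 chips of PRN 1: 1100100000
```

Different tap pairs give different satellites' codes, and the resulting 1023-chip sequences are nearly uncorrelated with each other, which is what lets a receiver separate the satellites and avoid locking onto a stray signal.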
Inspection methods and limits for the gas-cooled rotor ventilation ducts of non-salient-pole (cylindrical-rotor) synchronous generators
The inspection methods and limits for the gas-cooled ventilation ducts inside the rotor of a non-salient-pole synchronous generator are crucial to ensuring the safe and efficient operation of the generator. It is important to regularly inspect and monitor the condition of the cooling ventilation ducts to prevent any gas leakage and ensure the proper functioning of the generator.
There are several methods for inspecting the gas-cooled ventilation ducts inside the rotor of a non-salient-pole synchronous generator. One common approach is to use non-destructive testing techniques, such as ultrasonic testing and eddy-current testing, to assess the integrity of the ventilation ducts. These techniques can detect potential defects or damage in the ducts without causing any harm to the equipment.
© 2002 Oxford University Press. Nucleic Acids Research, 2002, Vol. 30, No. 1, 379-382

MDB: the Metalloprotein Database and Browser at The Scripps Research Institute

Jesus M. Castagnetto, Sean W. Hennessy, Victoria A. Roberts, Elizabeth D. Getzoff, John A. Tainer and Michael E. Pique*
The Scripps Research Institute, 10550 North Torrey Pines Road, La Jolla, CA 92037, USA
Received August 21, 2001; Accepted October 26, 2001

ABSTRACT
The Metalloprotein Database and Browser (MDB) at The Scripps Research Institute is a web-accessible resource for metalloprotein research. It offers the scientific community quantitative information on geometrical parameters of metal-binding sites in protein structures available from the Protein Data Bank (PDB). The MDB also offers analytical tools for the examination of trends or patterns in the indexed metal-binding sites. A user can perform interactive searches, metal-site structure visualization (via a Java applet), and analysis of the quantitative data by accessing the MDB through a web browser, without requiring an external application or platform-dependent plugin. The MDB also has a non-interactive interface with which other web sites and network-aware applications can seamlessly incorporate data or statistical-analysis results from metal-binding sites. The information contained in the MDB is periodically updated with automated algorithms that find and index metal sites from new protein structures released by the PDB.

INTRODUCTION
The Metalloprotein Database and Browser (MDB) is part of the Metalloprotein Structure, Bioinformatics and Design Program at The Scripps Research Institute (TSRI).
The main role of the MDB is to collect, and make viewable and easily accessible, key quantitative information on protein metal-binding sites from structures available at the Protein Data Bank (PDB) (1).

A major emerging challenge in structural biology is to develop a sufficient understanding of metalloproteins to allow their rational design with engineered metal-site geometries and properties. To achieve this, we need to comprehend the set of structural, environmental and functional requirements for metal-binding sites in existing metalloproteins. We need to know not only what types of metal ions are bound by proteins, but also the types of ligands that bind these metal ions (i.e. the first-shell ligands), the residues that contact the metal-binding ligands (i.e. the second-shell ligands) and what other geometrical or environmental effects may modulate the properties of the metal-binding site. This information is crucial whether the objective is to construct new metal-binding sites into a given protein scaffold or to modify an existing metal site.

We have created the MDB to address the need for quantitative information by biochemists, biologists, computational chemists, bioinformaticians and other metalloprotein researchers. The MDB contains key structural information that can be used not only for metal-binding site design, but also for obtaining parameters for the determination of X-ray structures of metal-containing proteins, developing constraints for computational modeling of metallosystems and statistical analyses of ligand geometries and ligating patterns.

The MDB is a bioinformatic web application designed for easy user access, which takes advantage of well-known web technologies: a fast database engine (MySQL), a stable web server (Apache) and a powerful web scripting language (PHP).
With these Open Source tools, we have constructed interactive web query interfaces, functions to allow remote viewing and searching, and a non-interactive web application program interface (API).

INDEXING PROTEIN METAL-SITE DATA FOR THE MDB
The metal-site structural data in the MDB are collected with automatic tools that periodically index protein structures with metal sites from the latest PDB release. The current publicly available version of the MDB reliably indexes first-shell data from mononuclear metal sites. To enhance the quality and usefulness of the data present in the MDB, we have developed the metal-binding site indexing tool (MSIT). The MSIT, an application written in Java, extracts first- and second-shell data, recognizes multinuclear metal sites and cluster-containing sites, classifies metal-binding sites according to several criteria (number of metal ions in the site, metal complexation geometry, type of metal ion, etc.) and determines non-covalent interactions within each indexed shell and among shells.

The MSIT uses a distance-dependent algorithm and table-based heuristics to recognize and extract metal-binding sites and to work around malformed PDB structure files (missing records, misaligned fields, etc.). The program reads and parses the input PDB file, recognizes metal centers in the structure and generates a first shell. It then executes a breadth-first search through all metal sites that have been found to search for sites that share common residues. Such sites are merged, creating a multinuclear site. Once the mono- and multinuclear sites have been defined, second-shell residues and non-covalent interactions are identified. The geometrical data are written to structure files (in PDB, XML/CML and VRML formats) and the calculated data are inserted into the SQL database (Fig. 1).

*To whom correspondence should be addressed. Tel: +1 858 784 9775; Fax: +1 878 784 2860; Email: **************

INTERACTIVE AND NON-INTERACTIVE ACCESS TO THE MDB

Interactive interfaces
The MDB web site offers the researcher a series of query options, from the simple to the complex. Interactive interfaces are implemented either as simple HTML forms or as forms combined with an applet for real-time three-dimensional viewing of structures. In the MDB, you can search for metal-binding sites with HTML forms that allow you to:
1. Specify PDB codes of proteins (e.g. 2sod or 1fer), resulting in a list of all metal-binding sites found within the indicated proteins.
2. Select sites based on type of metal, number of ligands and other parameters. For example, you can search for zinc-containing sites with four to six ligands found in protein structures with resolutions <2.0 Å. A more extensive HTML form allows even more restricted searches. For example, a specified number of ligands can be restricted to be of a certain type ('one must be water'), or a range of resolutions may be specified instead of an upper limit ('resolution between 1.5 and 2.8 Å').
3. Perform a query using the SQL language, offering the user complete flexibility to examine any possible correlation among the data sets contained in the database. To assist the user, the database structure is documented (/sql_docs/structure.html and /sql_docs/table_descriptions.html). A possible complex query would be to select all copper-binding sites that contain exactly one Asp or Glu, two His residues and a water molecule as the metal-ligating pattern, and then to display the copper-water distance, the resolution and the date when the structure was released for each.
4. Use an interface with a lightweight (70 kB in size) Java applet to view and manipulate the metal-binding sites found by the queries. This viewer allows the user to perform geometrical measurements, including atom-to-atom distances, valence angles and torsion angles.
This applet also allows simple superpositions, stereo visualization, display of atoms within a given distance of the one selected, independent selection and movement of structures when several are being displayed, the ability to detach or reattach the viewer from/to the page (to enlarge the window), etc.

Non-interactive interfaces
The non-interactive interfaces allow one to embed MDB data or visualization tools in any other web page or application. Currently there are three interfaces available.
1. The remote query/viewer interface, which uses some simple HTML code to allow transparent querying and visualization of metal sites of interest (with the interactive Java viewer). Several sites are using this interface, including the IMB Jena Image Library of Biological Macromolecules (2), the PROMISE database (3) and the EF-Hand Calcium-Binding Proteins Data Library (/cabp_database).
2. An SQL query interface (/api) that allows application developers to call a particular URL on the MDB site, pass the appropriate parameters and obtain the results in a program-parsable format, such as comma-delimited records (for spreadsheet analysis) or WDDX packets, or in a format ready to embed into a web page (HTML tables).
3. An XML-RPC-based interface that accepts remote procedure calls from any application using the XML-RPC protocol, independent of the platform or programming language used by the application requesting the service. Added advantages are that the protocol supports introspection (i.e. an application can ask the server: 'what procedures do you offer and how should I call them?') and that there are XML-RPC libraries for most scripting and programming languages.

This last interface (XML-RPC), which is under vigorous development, will comprise a whole set of callable methods (an API) for the MDB.

Figure 1. Algorithm used by the MSIT to identify and extract information about metal-binding sites in metalloproteins.
For example, a program for metal-site design could use this interface to communicate directly with the MDB and request a list of observed ranges for a particular geometrical feature (distance, angle, etc.), and thus compare a proposed model value with those found in known metalloproteins.

DATA ANALYSIS WITH THE MDB
The MDB has been used to analyze the distribution of geometrical parameters such as metal-ligand bond distances and side-chain torsion angles, and also to obtain the frequencies of ligating patterns in metal sites. These analyses have been used to assist in the determination of X-ray crystallographic structures, to validate and compare designed metal-site candidates, and to find ligating tendencies (such as differences between Cu-S distances when Cys and Met are metal ligands).

These types of analyses have been very useful in our research, so we designed HTML forms to perform the analyses online. Using the forms, we can generate the distribution of the metal-ligand atom distances or the ligand patterns that are most common for a particular metal ion with a given coordination number. Figure 2 shows the result of analysis of the bond-distance distribution for Zn-Nδ(His) (coordination number = 4 and fixed range of 1.5-3.0 Å). We find a normal distribution with an average Zn-Nδ(His) distance of 2.09 Å (SD 0.15) from 561 indexed distances. Not shown in Figure 2 is a table listing each of the plotted bins, their corresponding counts and a button allowing the researcher to obtain (in another window) a list of the metal-binding sites in which a particular distance occurs. From that list, the scientist can choose to download the structure of the site (in PDB format) or view it using our Java viewer. Usually, this tool takes between 2 and 10 s to process a distribution (including plot, table, etc.) and behaves linearly with the number of matching distances.
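The kind of fixed-range, fixed-width binning behind the online histogram tool can be sketched offline as follows (the distances below are synthetic illustrative values, not data from the MDB):

```python
from statistics import mean, stdev

def bin_distances(distances, lo, hi, width):
    """Histogram metal-ligand distances into fixed-width bins over [lo, hi),
    returning the bin counts plus the mean and SD of the in-range values."""
    nbins = round((hi - lo) / width)
    counts = [0] * nbins
    kept = [d for d in distances if lo <= d < hi]   # out-of-range values dropped
    for d in kept:
        counts[int((d - lo) / width)] += 1
    return counts, mean(kept), stdev(kept)

# Synthetic Zn-N(His) distances in angstroms; 3.5 falls outside the 1.5-3.0 range
sample = [2.0, 2.05, 2.1, 2.1, 2.15, 2.2, 1.95, 2.08, 3.5]
counts, avg, sd = bin_distances(sample, 1.5, 3.0, 0.25)
print(counts)         # [0, 1, 7, 0, 0, 0]
print(round(avg, 3))  # 2.079
```

Each non-empty bin would then link back to the list of sites contributing to it, as the online tool does.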
Improvements planned for this tool include discriminating distances from mononuclear, multinuclear or cluster sites and performing the analysis on a dehomologized list of protein structures.

A complementary analysis tool identifies ligand-pattern tendencies of a particular metal ion in a specific coordination number. Using a simple form, we can choose the metal ion of interest and the coordination number it should present, and we obtain a list of the ligand patterns matching those constraints, ranked by frequency count. For example, if we were interested in designing a metal-binding site that matched the pattern CuL4, we would find that the three most frequent patterns are Cu(Cys)(Gly)(His)2, Cu(His)4 and Cu(His)3(H2O), and that the pattern Cu(Cys)4 does not appear in the data indexed in the MDB to date.

Figure 2. Distribution and statistics of the Zn-Nδ(His) distance (coordination number = 4, range 1.5–3.0) using the online histogram plotting tool.

RELATIONSHIP OF THE MDB WITH OTHER DATABASES

The MDB currently includes, in the lists generated from queries, links to the appropriate structure page at the PDB site (1) and to the NIH Molecules R Us site (/cgi-bin/pdb). The MDB is used by other databases to provide three-dimensional viewing of specific metal-binding sites through the use of our remote query/viewer tool. Sites using the MDB in this manner include the IMB Jena Image Library of Biological Macromolecules (2), the PROMISE database (3) and the E-F Hand Calcium-Binding Proteins Data Library (/cabp_database). We also make use of data from the PDB's Het group dictionary (/het_dictionary.txt) and the lists of dehomologized structures from PDBSELECT (4) and from WHATIF SELECT (5).
These lists have been manually converted into SQL tables so that they can be correlated with the other data indexed in the MDB.

AVAILABILITY AND ACCESS STATISTICS

The MDB is available at /, which includes interactive interfaces for querying and browsing the MDB, requiring just a common web browser. The non-interactive interfaces are described at /api (the MDB's web API) and /remote/ (remote query/viewer tool). New analytical tools and features are available for testing at /beta/. The use of the MDB by the research community has increased steadily since its inception in early 1998. Table 1 summarizes access statistics for the MDB from the third quarter of 1999 until the second quarter of 2001. In the last 2 years the number of users has increased almost 3.5-fold and the number of documents viewed (counted as full page displays, not as individual hits in a document) has increased 3-fold. More importantly, the number of queries performed and the number of structures of metal-binding sites downloaded have increased by factors of 4.5 and 2, respectively.

ACKNOWLEDGEMENTS

We are grateful to all our users for their valuable and continuous feedback that drives improvements to the MDB. We would also like to acknowledge the work of Marij van Gorkom; her enthusiasm helped us complete a prototype of the MSIT tool during her summer stay at TSRI. The MDB is part of the Metalloprotein Structure and Design Program Project at TSRI, funded by NIH grant P01-GM48495. The macromolecular Java viewer was developed as part of the Computational Center for Macromolecular Structure (/CCMS), funded by NSF grant BIO/DBI 99-04559.

REFERENCES

1. Berman,H.M., Westbrook,J., Feng,Z., Gilliland,G., Bhat,T.N., Weissig,H., Shindyalov,I.N. and Bourne,P.E. (2000) The Protein Data Bank. Nucleic Acids Res., 28, 235–242. Updated article in this issue: Nucleic Acids Res. (2002), 30, 245–248.
2. Reichter,J., Jabs,A., Slickers,P. and Sühnel,J. (2000) The IMB Jena Image Library of Biological Macromolecules.
Nucleic Acids Res., 28, 246–249. Updated article in this issue: Nucleic Acids Res. (2002), 30, 253–254.
3. Degtyarenko,K.N., North,A.C.T. and Findlay,J.B.C. (1999) PROMISE: a database of bioinorganic motifs. Nucleic Acids Res., 27, 233–236.
4. Hobohm,U. and Sander,C. (1994) Enlarged representative set of protein structures. Protein Sci., 3, 522–524.
5. Hooft,R.W.W., Sander,C. and Vriend,G. (1996) Verification of protein structures: side-chain planarity. J. Appl. Cryst., 29, 714–716.

Table 1. Access statistics of the MDB web site from July 1999–June 2001 (outside users only)

Year  Quarter  Queries(a)  Structures(b)  Pages(c)  Hosts(d)
1999  Jul–Sep     1413        1327         8043      3431
1999  Oct–Dec     2983        1679        12257      4762
2000  Jan–Mar     3711        2025        14951      7522
2000  Apr–Jun     2522        1914        13026      7733
2000  Jul–Sep     5471        1573        15772      7785
2000  Oct–Dec     5414        2080        18662      9292
2001  Jan–Mar     5716        2832        19761     10208
2001  Apr–Jun     6379        2285        24468     11987

(a) Number of searches performed. (b) Number of structure files downloaded or viewed using the interactive Java viewer. (c) Number of pages viewed (complete documents). (d) Each host is counted only once per day.
Guide to Using North American Customs Bill-of-Lading Data (易之家)

Workflow for the standard edition of the North American customs data; workflow for the enhanced edition.

1. The 易之家 North American bill-of-lading database covers every shipment exported worldwide to North America (the United States, Canada, and Mexico). The data run from 2004 to the present and are updated monthly, with roughly 600,000 to 900,000 records added each month.

Contents include:
- Buyer information: the names and contact details of all importers of a product, across North American industries.
- Supplier information: the names, distribution, and contact details of exporters worldwide.
- Notify-party information: the names and contact details of all notify parties, across North American industries and products.
- Transaction information: arrival date, detailed product description, quantity, weight, and piece count.
- Shipping information: port of loading, port of discharge, vessel name and voyage, container count, bill-of-lading number, carrier, and country of origin.

2. Eight key monitoring reports: the complete bill-of-lading report, supplier monitoring report, buyer monitoring report, notify-party monitoring report, country-of-origin monitoring report, designated-supplier report, designated-buyer report, and designated-notify-party report. Advanced features are open for all 2004-2005 data, so you can try them for yourself. For sample reports, see /report_guide.php.

The eight monitoring reports:

1. Complete bill of lading: the detailed transaction record of every shipment. For each shipment exported by sea to North America it lists the buyer and its contact details, competitors and their contact details, the notify party and its contact details, and the transaction details (arrival date, product name, quantity, product description, and terms of trade) plus shipping information (port of loading, port of discharge, vessel name and voyage, container count, bill-of-lading number, and carrier). Uses include developing new customers, studying competitors, tracking trends in popular products, choosing investment directions, gauging market demand, and identifying a product's peak and off seasons.

2. Supplier monitoring report: a summary of every supplier's supply volume, shipment count, and products supplied. It ranks suppliers from largest to smallest by volume and shows each one's market share, letting you see competitors' export activity, export cycles, and share of the market; understand your own position in the market; and plan strategy accordingly.

3. Buyer monitoring report: a summary of every buyer's purchase volume, purchase count, and products purchased. It ranks buyers from largest to smallest by volume and shows each one's market share, letting you see buyers' purchasing activity and purchasing cycles, identify a product's dominant buyers, and discover new, potential buyers.

4. Notify-party monitoring report: on some ocean bills of lading the consignee and the notify party are the same company.
TimescaleDB Field Types

TimescaleDB is an open-source time-series database built as an extension on top of PostgreSQL, letting users handle large-scale time-series data inside a traditional relational database. When working with TimescaleDB, several common column types help users store and query time-series data effectively. This article introduces those types and discusses their use and advantages in TimescaleDB.

1. timestamp: a data type representing a date and time. In TimescaleDB, the timestamp type stores timestamp values, letting users record exactly when an event occurred for later querying and analysis.

2. timestamptz: short for "timestamp with time zone," a timestamp that carries time-zone information. Time zones matter in time-series processing because data recorded in different zones can skew analysis; timestamptz ensures the zone is accounted for when storing and querying time-series data.

3. interval: a data type representing a span of time. Time-series work frequently requires the interval between two times, such as the elapsed time between two events; the interval type makes such differences easy to compute and compare.

4. date: represents a calendar date with no time-of-day component. When a series only needs the day an event occurred, the date type simplifies storage and querying.

5. time: represents a time of day with no date component. When only the time of occurrence matters, the time type saves storage space and speeds up queries.

These common column types each have their own uses and advantages in TimescaleDB. Choosing the right type helps users store and query time-series data efficiently and accurately. We hope this article is helpful to readers working with TimescaleDB.
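The five column types above have direct analogues in Python's standard datetime module, which is handy when preparing rows for insertion from application code. A minimal sketch; the values are arbitrary examples, not TimescaleDB API calls:

```python
from datetime import date, time, datetime, timedelta, timezone

# timestamp: date plus time of day, no zone information
ts = datetime(2024, 1, 15, 9, 30, 0)

# timestamptz: the same instant, carrying explicit zone information
tstz = datetime(2024, 1, 15, 9, 30, 0, tzinfo=timezone.utc)

# interval: a span of time, produced by subtracting two timestamps
gap = datetime(2024, 1, 15, 11, 0) - ts   # 1 hour 30 minutes

# date / time: only one component each, extracted from the timestamp
d = ts.date()
t = ts.time()
```

Database drivers such as psycopg2 map these Python objects to the corresponding PostgreSQL/TimescaleDB column types automatically.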
Prometheus IoT Metrics

Prometheus is an open-source systems monitoring and alerting toolkit that is widely used in Internet of Things (IoT) scenarios. Common IoT metrics in Prometheus include:

1. Device-status metrics: monitor a device's operating state, such as whether it is online and its operating voltage and current.
2. Network metrics: bandwidth, packet loss rate, latency, and similar measures used to assess the quality and stability of the network connection.
3. Application metrics: response time, throughput, error rate, and similar measures used to assess application performance and reliability.
4. Sensor metrics: environmental readings such as temperature, humidity, and air pressure.
5. Event metrics: record events on a device or in an application, such as device restarts or application crashes.

Prometheus's strength is that custom rules and alerts allow real-time monitoring of IoT devices, helping administrators find and fix potential problems promptly. Prometheus also integrates with other open-source tools and platforms, such as Grafana and Alertmanager, for visualization and alert management.
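A sensor metric like those in category 4 is ultimately scraped by Prometheus as plain text. The sketch below hand-rolls the exposition format for illustration only; in practice the official prometheus_client library generates this output, and the metric name and device labels here are invented:

```python
def to_exposition(name, help_text, samples):
    """Render gauge samples in the Prometheus text exposition format.
    `samples` is a list of (label_dict, value) pairs."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} gauge"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

# Hypothetical temperature readings from two IoT devices.
text = to_exposition(
    "sensor_temperature_celsius",
    "Ambient temperature reported by each device.",
    [({"device": "greenhouse-1"}, 21.5),
     ({"device": "greenhouse-2"}, 19.0)],
)
```

Serving this text from an HTTP endpoint is all a device needs to become a Prometheus scrape target.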
Precision Time Protocol

What is the Precision Time Protocol (PTP)? PTP, also known as IEEE 1588, is a network protocol designed to provide high-precision time synchronization. It propagates time information across the network to every node so that the nodes maintain highly consistent clocks. This makes PTP widely applicable, especially in domains that need precise time synchronization such as financial trading, industrial control, and communication systems.

PTP achieves synchronization through a master-slave structure in the network. One node is designated the master clock and is responsible for producing a precise time signal. The other nodes are designated slave clocks and synchronize by communicating with the master. When a slave receives the master's time signal, it compares it against its own local clock and adjusts the local clock to stay closely aligned with the master.

PTP's operation can be summarized in the following steps:

1. Master configuration: a node is designated the master clock and given the other necessary configuration. The master needs a stable time source, such as an atomic clock or a GPS receiver.
2. Slave join: other nodes communicate with the master to join the network. A slave sends a join request; the master assigns it a clock identifier and sets the relevant master-slave parameters.
3. Time synchronization: once a slave has joined, the master periodically sends it timestamp messages containing the master's current time. On receipt, the slave compares the timestamp against its local clock, computes the master-slave time difference, and makes small adjustments to its local clock to stay synchronized.
4. Clock adjustment: because of network delay and other factors, a small time difference may remain between slave and master. To keep all clocks in the network consistent, PTP uses a clock-adjustment mechanism: a feedback control algorithm gradually adjusts the slave's frequency and phase to bring it as close as possible to the master.

As these steps show, PTP is a distributed time-synchronization mechanism. Through the master-slave structure, the exchange of timestamp messages, and clock adjustment, PTP achieves high-precision time synchronization.
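The offset computation in steps 3 and 4 is conventionally done with the four timestamps of a Sync/Delay_Req exchange. Below is a sketch of the standard calculation, which assumes a symmetric network path; the timestamps are arbitrary illustrative numbers:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP two-way exchange:
    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assuming the path delay is the same in both directions,
    solve for the slave's clock offset and the one-way delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Scenario: slave clock runs 5 units fast; true one-way delay is 3 units.
# Master sends Sync at 100; slave receives at 100 + 3 + 5 = 108 (its clock).
# Slave sends Delay_Req at 110 (its clock); master receives at 110 - 5 + 3 = 108.
offset, delay = ptp_offset_and_delay(100, 108, 110, 108)
```

The slave then subtracts the recovered offset from its clock; iterating the exchange drives the residual error down, as described in step 4.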
Prometheus Business Metrics and Time Annotation

"Prometheus business metric time annotation" means annotating business metrics with time information in the Prometheus monitoring system. Prometheus is an open-source monitoring system used mainly to collect and store large volumes of time-series data. Time annotation attaches timestamp information to a business metric so that the metric's value at each point in time is recorded; this helps Prometheus users understand and analyze trends and changes in business metrics.

Time annotation of business metrics in Prometheus mainly covers:

1. Timestamp annotation: record each data point's timestamp for later analysis and tracing.
2. Time-range annotation: mark the time range a metric covers, so users can see how the data changed within a given window.
3. Time-series annotation: mark a metric's time-series data, so users can see how it evolves over time.

In summary, Prometheus business-metric time annotation ties timestamp information to business metrics so that business performance data can be tracked and analyzed more effectively. It helps Prometheus users understand a metric's state and trend at a given time and make better-informed decisions, and it improves the usability and maintainability of the monitoring system.
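One concrete form of timestamp annotation is built into the Prometheus text exposition format itself: each sample line may carry an optional trailing timestamp in milliseconds since the Unix epoch. A sketch (the metric name and values are invented for illustration):

```python
def annotate(name, value, labels=None, ts_ms=None):
    """Render one sample line in the Prometheus text format, with the
    optional trailing millisecond timestamp the format allows."""
    label_str = ""
    if labels:
        inner = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + inner + "}"
    line = f"{name}{label_str} {value}"
    if ts_ms is not None:
        line += f" {ts_ms}"          # explicit sample timestamp
    return line

# Hypothetical business metric stamped at a fixed point in time.
sample = annotate("orders_processed_total", 1027,
                  {"region": "eu"}, ts_ms=1700000000000)
```

Explicit timestamps are mostly used when federating or backfilling data; for ordinary scrapes Prometheus assigns the scrape time itself.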
Besponsa (inotuzumab ozogamicin) Policy Document
Besponsa™ (inotuzumab ozogamicin) (Intravenous)
Document Number: IC-0317
Last Review Date: 10/24/2022
Date of Origin: 09/19/2017
Dates Reviewed: 09/2017, 11/2018, 11/2019, 11/2020, 11/2021, 07/2022, 11/2022
Customized Date: 07/20/2022
Effective Date: 01/01/2023

I. Length of Authorization
Coverage will be provided for 6 months (for up to a maximum of 6 cycles) and may not be renewed.

II. Dosing Limits
A. Quantity Limit (max daily dose) [NDC Unit]:
• Besponsa 0.9 mg powder for injection single-dose vial: 7 vials per 21 days
B. Max Units (per dose and over time) [HCPCS Unit]:
Cycle 1
• 27 billable units (2.7 mg) on Day 1, 18 billable units (1.8 mg) on Days 8 and 15 of a 21- to 28-day cycle
Subsequent Cycles (maximum of 5 cycles)
• 27 billable units (2.7 mg) on Day 1, 18 billable units (1.8 mg) on Days 8 and 15 of a 28-day cycle for up to 2 cycles
• 18 billable units (1.8 mg) on Day 1, Day 8, and Day 15 of a 28-day cycle for up to 3 cycles

III. Initial Approval Criteria [1]
Coverage is provided in the following conditions:
• Baseline electrocardiogram (ECG) is within normal limits; AND
• Patient has not previously received treatment with inotuzumab ozogamicin; AND
Universal Criteria [1-3]
• Patient has CD22-positive disease; AND
Adult B-Cell Precursor Acute Lymphoblastic Leukemia (ALL) † Ф [1-3]
• Patient is at least 18 years of age; AND
  o Patient has relapsed or refractory disease; AND
    ▪ Used as single agent therapy; OR
    ▪ Used in combination with mini-hyper CVD (cyclophosphamide, dexamethasone, vincristine, methotrexate, cytarabine); AND
      ➢ Patient is Philadelphia chromosome (Ph)-negative; OR
      ➢ Patient is Philadelphia chromosome (Ph)-positive and refractory to prior tyrosine kinase inhibitor therapy (e.g., imatinib, dasatinib, ponatinib, nilotinib, bosutinib, etc.); OR
    ▪ Used in combination with tyrosine kinase inhibitor (TKI) therapy (e.g., bosutinib, dasatinib, imatinib, nilotinib, or ponatinib); AND
      ➢ Patient is Philadelphia chromosome (Ph)-positive; OR
  o Used as induction therapy in patients ≥65 years of age or with substantial comorbidities; AND
    ▪ Used in combination with mini-hyper CVD; AND
    ▪ Patient is Philadelphia chromosome (Ph)-negative
Pediatric B-Cell Precursor Acute Lymphoblastic Leukemia (ALL) ‡ [3,4]
• Patient is at least 2 years of age; AND
• Patient has relapsed or refractory disease; AND
• Used as single agent therapy; AND
  o Patient is Philadelphia chromosome (Ph)-negative; OR
  o Patient is Philadelphia chromosome (Ph)-positive; AND
    ▪ Patient is intolerant or refractory to prior tyrosine kinase inhibitor (TKI) therapy (e.g., imatinib, dasatinib, etc.)

† FDA Approved Indication(s); ‡ Compendium Recommended Indication(s); Ф Orphan Drug

IV. Renewal Criteria [1]
Coverage cannot be renewed.

V. Dosage/Administration [1]

VI. Billing Code/Availability Information
HCPCS Code:
• J9229 − Injection, inotuzumab ozogamicin, 0.1 mg: 1 billable unit = 0.1 mg
NDC:
• Besponsa 0.9 mg lyophilized powder in single-dose vial: 00008-0100-xx

VII. References
1. Besponsa [package insert]. Philadelphia, PA; Pfizer Inc., March 2018. Accessed September 2022.
2. Kantarjian HM, DeAngelo DJ, Stelljes M, et al. Inotuzumab Ozogamicin versus Standard Therapy for Acute Lymphoblastic Leukemia. N Engl J Med. 2016 Aug 25;375(8):740-53.
3. Referenced with permission from the NCCN Drugs & Biologics Compendium (NCCN Compendium®) inotuzumab ozogamicin. National Comprehensive Cancer Network, 2022. The NCCN Compendium® is a derivative work of the NCCN Guidelines®. NATIONAL COMPREHENSIVE CANCER NETWORK®, NCCN®, and NCCN GUIDELINES® are trademarks owned by the National Comprehensive Cancer Network, Inc. To view the most recent and complete version of the Compendium, go online to . Accessed September 2022.
4. Bhojwani D, Sposto R, Shah NN, et al. Inotuzumab ozogamicin in pediatric patients with relapsed/refractory acute lymphoblastic leukemia [published correction appears in Leukemia. 2019 Mar 7;:]. Leukemia. 2019;33(4):884-892. doi:10.1038/s41375-018-0265-z.
Appendix 1 – Covered Diagnosis Codes

Appendix 2 – Centers for Medicare and Medicaid Services (CMS)

Medicare coverage for outpatient (Part B) drugs is outlined in the Medicare Benefit Policy Manual (Pub. 100-2), Chapter 15, §50 Drugs and Biologicals. In addition, National Coverage Determinations (NCDs), Local Coverage Articles (LCAs), and Local Coverage Determinations (LCDs) may exist, and compliance with these policies is required where applicable. They can be found at: https:///medicare-coverage-database/search.aspx. Additional indications may be covered at the discretion of the health plan.

Medicare Part B Covered Diagnosis Codes (applicable to existing NCD/LCA/LCD): N/A
LCsolution Postrun Analysis (Offline) Operating Procedure

Basic operating procedure for the LC and its workstation. (These basic steps are being refined continually and may change without further notice; thank you for your understanding.)

Basic workflow for building a calibration curve and analyzing unknown samples:

Step 1: In the real-time analysis (LC Real Time) workstation, acquire the chromatograms (data files) for the standards and the unknown samples.

Step 2: In the postrun (LC Postrun) workstation, open the chromatogram to be processed. In the assistant bar, under LC Data Analysis → Compound Table Wizard, set up the method, including at least the following parameters: quantitation method, number of calibration levels, calibration curve type, compound name, compound type, retention time, and compound concentration. Then save the method (File → Save Method As).

Step 3: Build the calibration curve. Click the Calibration icon in the assistant bar and open the method saved in Step 2 (File → Open Method). In the Data File area in the lower middle of the window, add the standards' chromatograms to the method (right-click → Add → select the standard data files), taking care to match them one to one. Save the method again (File → Save Method File).

Step 4: Analyze the unknown samples. Open the chromatogram of the unknown sample and load the method saved in Step 3 (File → Load Method Parameters).

LCsolution (offline) operating steps:
1. Double-click the LCsolution workstation icon.
2. Click the Postrun (reanalysis) icon to enter the Postrun screen.
3. Open the sample chromatogram (File → Open Data File).
How the Precision Time Protocol Works

The Precision Time Protocol (PTP) is a protocol for achieving precise time synchronization in computer networks. It aims to provide sub-microsecond synchronization accuracy with high availability and reliability.

PTP works as follows:

1. Master-slave architecture: a PTP network has one master clock and multiple slave clocks. The master broadcasts timestamp information to the slaves, and each slave synchronizes by receiving and processing the master's timestamps.

2. Timestamp delivery: the master periodically sends timestamp packets containing its current time. On receipt, a slave compares the embedded time against its own clock and uses the difference as a correction factor to adjust its clock.

3. Delay compensation: PTP measures the delay between master and slave and compensates for it, removing the clock skew introduced by network transit. The timestamp exchange uses two timestamps, a send timestamp and a receive timestamp, and the difference between them yields the network delay.

4. Clock calibration: PTP calibrates clocks iteratively. After receiving a timestamp packet, the slave compares the timestamp with its own time, computes the difference, and applies it as a correction factor to its clock. The process iterates until the required synchronization accuracy is reached.

5. Accuracy control: PTP controls synchronization accuracy by defining parameters such as clock rate, transmission delay, and measurement error, and by dynamically adjusting the clock-calibration delay according to network conditions.

In summary, through periodic timestamp delivery, delay compensation, and clock calibration, PTP achieves precise time synchronization across a computer network.
How to Use the UN Comtrade Database

The steps for Data Query → Basic Selection are as follows.

Step 1: select the Commodity.
1. First, under the HS option, make sure you choose "As reported," which means commodity HS codes follow whichever edition the reporting country used. For example, some countries report under the 2007 edition and others under 2012, so codes may differ.
2. Under Step 1 "Select Source," choose Commodities Search; under Step 2 "Select Items," enter 4410 (the HS code for particleboard) and click Search. Then expand the plus sign in front of 44 to see code 4410.
3. Select 4410 (this takes a few seconds to load), then click Add; the commodity is now selected and appears under Selected Items. Repeat these steps to add 4411 and 4412. To remove an item from Selected Items, select it and click Remove; Remove All clears the list.

Step 2: select the Reporter. The usual procedure works here; just be sure to click Add so that World (Aggregate) appears under Selected Items.

Step 3: select the Partners. Note that "All" and "World" differ: All returns data for every individual country, whereas World returns the aggregate figure. If you want total world export data, choose World.

Step 4: select the years, in the same way as Partners.

Finally, under Others, choose import or export, the data range, and the output format. Once everything is selected, you should get data like this: the first row is the one you need; frankly, I am still not sure what the last two rows mean. A Quantity Unit of 8 means kilograms, and a Flag of 6 means both quantity and weight are estimated.

The steps for Data Query → Express Selection are:
1. Again, make sure the HS option is set to As reported.
2. Then fill in the input boxes.
Real Time Tracking of Borescope Tip Pose

Ken Martin, Charles V. Stewart*
Kitware Inc., Clifton Park, NY 12065
*Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY 12180
ken.martin@, stewart@

Keywords: pose estimation, borescope inspection, industrial inspection, lens distortion

Abstract

In this paper we present a technique for tracking borescope tip pose in real time. While borescopes are used regularly to inspect machinery for wear or damage, knowing the exact location of a borescope is difficult due to its flexibility. We present a technique for incremental borescope pose determination consisting of off-line feature extraction and on-line pose determination. The off-line feature extraction precomputes, from a CAD model of the object, the features visible in a selected set of views. These cover the region over which the borescope should travel. The on-line pose determination starts from a current pose estimate, determines the visible model features, and projects them into a two-dimensional image coordinate system. It then matches each to the current borescope video image (without explicitly extracting features from this image), and uses the differences between the predicted and matched feature positions in a least squares technique to iteratively refine the pose estimate. Our approach supports the mixed use of both matched feature positions and errors along the gradient within the pose determination. It handles the radial lens distortions inherent in borescopes and executes at video frame rates regardless of CAD model size. The complete algorithm provides a continual indication of borescope tip pose.

1. Introduction

A borescope is a flexible tube which allows a person at one end of the tube to view images acquired at the other end. This can be accomplished with coherent fiber bundles running through the tube, or a small video camera located at the tip of the borescope and a video monitor at the other end.
Borescopes are typically a centimeter or less in diameter, a few meters long, and used to inspect inaccessible regions of objects. For example, when inspecting the internal structure of a jet engine for cracks or fatigue, small openings from the outside allow the borescope to be snaked into the engine without having to drop the engine from the plane. In such an inspection, it is often difficult for the inspector to know the exact borescope tip location within the engine, making it difficult to identify the location of new cracks found or to return to previously identified trouble spots. When a crack is found, the inspector must measure its length and width, which is problematic if the borescope's pose is unknown.

Traditional, non-video techniques for tracking an object do not work well within the environment a borescope typically encounters. Electromagnetic fields are corrupted by the surrounding presence of metal. Acoustic techniques are susceptible to ringing caused by metal parts and convoluted passages, and any technique that relies on a clear line of sight is by definition implausible in the borescope inspection environment. The only option is tracking based on the borescope images themselves.

2. Approach

Borescope tip tracking is essentially the camera pose problem with added constraints and difficulties. First, the light source for a borescope is collocated with the camera, so the light is moving but always coincident with the camera. Second, object recognition is non-standard: while the complete object is known, the position within the object (the jet engine) is not, and only a small fraction of the complete object will appear in any single image. Third, the object contains structural repetition, making an approach based solely on dead reckoning unrealistic. Fourth, tracking (pose estimate updates) must occur at near frame rates, preferably without specialized hardware.
Combined, these constraints make borescope tip tracking a novel and challenging problem.

Our solution to these problems is a two-fold approach consisting of off-line feature extraction from a CAD model and on-line pose estimation. The off-line extraction precomputes from the CAD model 3D edges, each consisting of a 3D position and normal direction in the coordinate system of the CAD model. These features are computed for local regions of the model and take visibility constraints into account. This off-line process need only be done once for a given CAD model. The on-line process starts from a known landmark and compares the precomputed features for that region of the model to the borescope video. From this comparison an error vector is created which is used to compute an updated pose. This process repeats for each frame of video. The performance of the on-line algorithm is unaffected by the size or complexity of the CAD model.

Beyond borescope inspection, this approach would be of value for any model-based pose-estimation problem where the camera is placed within the model.

3. Related Work

While borescope tip tracking is a novel problem, there are a number of related techniques which could be considered. One approach is to subdivide the CAD model into many smaller parts and then determine the borescope pose based on standard object recognition strategies. Unfortunately, this approach, which is employed in a different context by Kuno, Okamoto and Okada [8], suffers from aperture problems caused by the structural repetition inside a jet engine. Further, in the time required for part recognition and associated pose estimation, the borescope may have moved too far for unique localization. By contrast, our algorithm determines pose incrementally based on the image locations of precomputed features, allowing it to run nearly at frame rate.

While camera pose and object pose have been treated extensively [5][12][15], existing techniques are difficult to apply in our situation.
These techniques match 3D object features with a set of 2D features extracted from a single image. In contrast, since we must estimate pose in real time, we cannot extract complicated image features, and the matching process must be rapid. To accomplish this, we match simple 3D edgels to 2D intensity images. We use the current pose estimate to project these 3D edgels into the image and match by searching along the projected edgel's normal direction, usually deriving only one constraint per projected edgel. This implies that the match for a 3D feature moves and changes during pose determination, making our technique similar in spirit to other techniques that determine pose without specific point matches [16].

The first of such techniques is that of Branca [1], who presents a passive navigation technique based on calculating the focus of expansion (FOE). Essentially, Branca calculates the image movement vectors between two frames, performs an iterative error minimization to find a set of feature matches between the two frames, then calculates the FOE and camera translation from them. This approach uses planar invariants to find the point matches by making the condition of planarity part of the error minimization process.

This approach is certainly novel and it doesn't require a model to work from, but it is not suitable for borescope inspection. It requires planar surfaces with at least five features on them, while borescopes typically inspect curved surfaces such as the inside of a jet engine. Furthermore it only calculates camera translations (not rotation), and it has no method to prevent the incremental errors from accumulating. This last point is critical in incremental pose estimation. Many techniques use interframe differences without any form of dead reckoning. The errors from each pose estimate accumulate and create stability problems and gross errors. This is part of the motivation behind using model-based pose estimation.
The model can be used for dead reckoning as long as its 3D features are static. This paper does not directly address deformable models.

Another paper worth noting is Khan's [7] paper on vision-based navigation for an endoscope. This is a very similar problem to borescope navigation, although Khan et al. solve it in a different manner. Instead of starting with an MR or CT model of the patient's colon, they construct the model as they go. When the endoscope enters an uncharted region of the colon they start extending the model. Navigation is accomplished by using two features that are commonly found in colonoscopies: rings of tissue along the colon wall and a dark spot (the lumen) in the image where the colon extends away from the endoscope. This approach is real-time, and once it has built the model it can support dead reckoning. It requires no modification of current endoscope hardware or precomputed models. Unfortunately its choice of features restricts its usage to inspection of ribbed tubes.

Lowe presents a model-based motion technique that goes beyond pose estimation to also handle models with limited moving parts [10]. His technique is essentially a modified hypothesize-and-verify search tree based on line segment matches. It uses probabilities from the previous pose estimate to order the evaluation of the possible matches; this significantly improves the typically excessive computational requirements. There are three significant drawbacks to such an approach. The first is that it relies on line segments as features. As will be demonstrated later, there are few line segments in a jet engine. Second, it requires complete feature extraction from the input image, which is time consuming. In his application, dedicated image processing hardware was required and yet the resulting performance was limited to three to five frames per second. Finally, it is still sensitive to the number of potential features in the model. Results were reported for a model consisting of only a few lines. Unfortunately, pose estimation inside a complex part typically results in millions of features in the model, nearly all of which will be obscured in a given view.

Some well known related work was performed by Dickmanns and his colleagues. His work has focused on real-time, model-based pose estimation and navigation. In an early paper [2] he describes a new approach based on measuring error vectors between predicted and actual feature locations, and then using these errors as input to modify the pose estimate. In later work the state estimation is handled by a Kalman filter [3] and also incorporates non-vision-based inputs such as airspeed [4]. The later paper presents a good overview of his technique.

There are two significant differences between Dickmanns' work and this work. First, Dickmanns uses the CAD model to develop his control mechanism, i.e. his software, feature selection, and Kalman filter are tuned to a specific task with a specific model. His approach does not support supplying a general CAD model to work from. This customization can be used to reduce the computation load and improve robustness. For example, when landing an airplane, Dickmanns uses ten predefined image features that correspond to specific parts of a runway and the horizon. Incoming images are analyzed only at the predicted locations of those ten features. Likewise those ten features represent unique defined inputs to the Kalman filter. This paper does address the issue of automatically calculating a set of pertinent features from a CAD model, and it assigns no semantic meaning to those features.

The second difference is Dickmanns' choice of features. He uses constant features such as oriented edge templates which yield one constraint. Our paper considers the interpretation of a feature as providing zero to two constraints as appropriate.
This is especially important as the features are automatically generated, not manually defined as in Dickmanns' work. This impacts the central pose estimation calculation, which must be adapted to the number of constraints provided by the features. There are also some minor differences, such as Dickmanns' work not incorporating the wide-angle lens distortions found in a borescope, but most remaining differences would require only minor modifications to either approach.

One of the most closely related approaches is that of Chris Harris, which uses control points from the CAD model combined with a Kalman filter [6]. The first key difference is that he doesn't indicate how his control points are generated. It is implied that they are hand selected from the 3D model, which is impractical in many situations. Second, since his approach is focused on estimating the pose of an object from the outside, its visibility tests for the control points are too limited for internal pose estimation. Finally, his approach deals with features as providing zero or one constraint. As will be discussed later, it is sometimes necessary to have features provide two constraints (position) instead of just a gradient vector displacement.

4. Feature Extraction

As summarized above, our approach consists of two primary pieces: off-line feature extraction and on-line pose determination. Off-line feature extraction must take the CAD model, which could be greater than one gigabyte in size, and produce 3D features that can be loaded quickly based on a current pose. This process is critical because just traversing the CAD model in memory would consume more time than allowed for incremental pose estimation. The major components of the feature extraction process are depicted in Figure 1, and the process is summarized in the following five steps:

1. Based on the CAD model, a list of 3D sample points is generated over the 3D region where the borescope might travel.
This could be either a uniform sampling of the 3D region completely enclosing the CAD model, or a non-uniform sampling of all or part of the region.
2. At each sample point, computer graphics hardware is used to render a collection of 2D images of the CAD model from that location. The view directions of these images are selected to ensure that the collected images form a mosaic completely enclosing the location in question.
3. For each synthetic image generated in step two, edge detection is used to extract edgel chains having large intensity gradients, and then a fixed-size subset of edgels is selected. The selected edgels have the largest intensity gradients but also satisfy a minimum pairwise image separation.
4. The 3D location and direction on the CAD model is computed for each selected edgel. These become the 3D features for the current sample point.
5. The process repeats for each image and sample point.

The first step is to determine where the sample points will be and how many of them will be used. This operation must generate enough sample points and 3D features that there will be one near any position encountered where the borescope travels. For the results presented here, the sample points were generated simply as a regular grid.

In the second step, computer graphics hardware is used to render images of the CAD model. The eventual goal is to produce features useful during on-line pose estimation. The problem then becomes determining how to "predict" a useful feature from a CAD model. From the
A more general and more reliable approach is to use computer ren-dering to create a realistic image of what the part should look like. This can then be examined for intensity edges.In practice, step two involves rendering six images for each sample point. These images are rendered with a view angle of ninety degrees and oriented along the positive and negative directions of the three cartesian axes to form an image cube. To obtain the best features possible, the spec-ular and diffuse material properties of the CAD model are manually adjusted so that the computer rendered images closely resemble video of the part to be inspected. This requires manually selecting material properties for render-ing that match the properties of the physical part.In step three, standard techniques are used to extract edgel chains from each rendered image. From the edgel chains up to fifty features are extracted with a minimum angular separation of three degrees relative to the bores-cope tip. That gives fifty features for each view, six views for each sample point yielding up to 250 features per sam-ple point. If the synthetic images contain few edgels then there will be fewer than 250 features. Likewise more fea-tures could be used if available. Limiting the number of features selected effectively limits the processing time required for the on-line pose estimation.These image features are then converted to 3D edges by casting a ray from the sample point, through the edgel in the image plane and into the CAD model. The closest ray-polygon intersection provides the 3D position of the edgel. The gradient in 3D can be found in a similar man-ner. These 3D positions and orientations are recorded in the global coordinate system of the CAD model so they can be projected onto any image, not just the ones from which they were generated.Overall, this off-line process is very time consuming, necessitating use of ray-polygon acceleration techniques, such as spatial hashing [11].5. 
5. On-line Pose Determination

The on-line pose estimation algorithm combines an initial pose estimate, the 3D features computed off-line, and the live video from the borescope to produce a running pose estimate. The algorithm uses the current pose estimate to select the appropriate subset of the 3D features extracted during preprocessing, projects these features into the image coordinate system, matches them to the borescope image, and refines the pose estimate based on differences between the projected and matched feature positions. By using precomputed features, by avoiding the need for explicit feature extraction in the borescope images, and by iterative pose refinement, the algorithm achieves video-rate pose determination. An important property of this pose determination algorithm is that a projected 3D feature (edgel) may either (a) be matched exactly, giving a 2D position error vector, (b) be matched along the gradient direction, giving only the 1D component of position error along that direction, or (c) be ignored entirely as an outlier. The approach also accounts for the significant lens distortions of a borescope. An overview of the algorithm follows:

1. Obtain the previous borescope location and orientation. Initially this comes from the operator positioning the borescope at a known location or landmark.
Subsequently, it is taken from the estimated pose for the previous borescope image.

2. Determine the 3D sample point closest to the previous borescope location.

3. Determine which features for that point would be visible to the borescope based on its previous pose.

4. Repeat the following three steps until the change in the pose estimate falls below a threshold or the inter-frame time has expired.

5. Project the N 3D features selected in step three onto a 2D image coordinate system based on the current pose estimate (initially the estimate from step one), computing both the 2D position and gradient (normal) direction of each feature.

6. For each feature, estimate the error in its projected position by finding the position of the best match between the projected feature and the borescope image region near the projected position. The difference in position between projected and matched image positions forms a 1D or 2D error term for each feature, depending on the results of the matching process.

7. Use the N error terms to update the borescope pose estimate.

8. Return to step one and start working on the next frame of video.

In step one, there typically are a few predefined landmarks for the inspector to choose as a starting point. Step two is a very simple calculation to find the sample point closest to the current pose. Step three selects only the features that could be seen by the borescope in its previous pose. Since the features are 3D locations, this involves determining whether they are in the borescope's view frustum. This set is further restricted to eliminate features near the edges of the view frustum by selecting an angle slightly smaller than the borescope's view angle. Step four starts an iterative error minimization process to determine the optimal pose estimate. This process is limited to the inter-frame time of the borescope.
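The step-three visibility test, keeping features inside a cone slightly narrower than the borescope's view angle, can be sketched as below. The function name, interface, and the five-degree margin are illustrative assumptions; the paper says only that the angle is chosen slightly smaller than the view angle.

```python
import numpy as np

def visible_features(features, cam_pos, view_dir, view_angle_deg, margin_deg=5.0):
    """Return indices of 3D features whose direction from the borescope
    tip lies within a cone trimmed by margin_deg from the full view
    angle, approximating the restricted view-frustum test of step three.
    """
    half = np.radians(view_angle_deg / 2.0 - margin_deg)
    axis = view_dir / np.linalg.norm(view_dir)
    keep = []
    for i, f in enumerate(np.asarray(features, dtype=float)):
        d = f - cam_pos
        d /= np.linalg.norm(d)
        if np.dot(d, axis) >= np.cos(half):  # inside the trimmed cone
            keep.append(i)
    return keep
```

A full frustum test would also clip against near and far distances; the cone test captures the angular restriction described in the text.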
In practice, the optimal pose may not have been found when the time has elapsed, but the current pose estimate is typically close enough to the optimum to provide a suitable pose for projecting features for the next frame. In addition, first-order motion prediction is used to aid the projection of features for the next frame. In step five, the 3D features are projected onto the current image plane, yielding 2D edgels. This projection is the standard pinhole perspective projection followed by a second-order radial lens distortion (see below).

Steps six and seven warrant additional detail. Step six starts with N 2D edgel locations, their corresponding 2D edgel gradients, and the current borescope image. For each 2D location, the normalized cross-correlation between a one-dimensional step function template (as shown in Figure 2) oriented along the edgel gradient and the video at that location (see Figure 3) is measured. This process is repeated at locations along the positive and negative edgel gradient direction up to a maximum distance determined from the camera's uncertainty. If the maximum correlation found is greater than a threshold, the feature is considered a gradient feature. It provides one constraint to the pose refinement computation: the optimal gradient displacement. This displacement is calculated using a weighted average of the correlations, essentially giving a subpixel location to the matched position. In practice, the gradient direction will not be axis aligned as in Figure 2, so calculating the normalized cross-correlation involves resampling the step function template onto the pixel array. For maximum performance, a set of step function templates can be precomputed for a set of discrete orientations and sub-pixel positions. This is what is actually done in the implementation, to avoid the cost of on-line resampling.
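A 1D sketch of the step-six matching follows: slide a step template along the intensity profile sampled in the gradient direction, then take a correlation-weighted subpixel displacement. The template length, search range, threshold, and the choice to average only around the best match are illustrative assumptions; the paper does not list these values.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length 1D signals."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def gradient_displacement(profile, center, half_len=3, max_shift=4, thresh=0.8):
    """Match a 1D step template against `profile` at shifts around
    `center`; return a subpixel displacement from a correlation-weighted
    average near the best match, or None when the best correlation is
    below threshold (the feature would then lack a gradient match).
    """
    template = np.concatenate([np.zeros(half_len), np.ones(half_len)])
    shifts = np.arange(-max_shift, max_shift + 1)
    corrs = np.array([ncc(profile[center + s - half_len:
                                  center + s + half_len], template)
                      for s in shifts])
    k = int(np.argmax(corrs))
    if corrs[k] < thresh:
        return None
    sel = slice(max(k - 1, 0), k + 2)  # best shift and its neighbors
    w = np.clip(corrs[sel], 0.0, None)
    return float(np.dot(shifts[sel], w) / w.sum())
```

In the 2D implementation the profile is resampled from the image along the edgel's gradient direction, which is why precomputed oriented templates pay off.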
The step function template selected provides a balance between discrimination and noise resistance for the 300x300 pixel images produced, but other templates could be used.

The initial template matching is restricted to image positions along the feature's gradient direction because each feature consists of a location and a direction: there is nothing to limit tangential movement. Only if a correlation above the threshold is not found along the gradient direction is the search extended to include tangential displacements. For example, if the feature shown in Figure 3 were moved to the left, eventually the gradient direction search would no longer intersect the edge it previously had. In this situation the tangential search can provide the necessary error vector: a two-component error vector, as opposed to the earlier one-component gradient displacement error vector. This error vector will drive incremental pose estimation toward a pose where the projected feature location will produce a match along its gradient in subsequent iterations of steps five through seven.

After calculating the normalized cross-correlation over the entire region, the maximum value may still be quite small. This corresponds to the situation where an edge is expected within the region but nothing suitable is found. In this case no error vector is produced, but the feature still provides information by its lack of an error vector: its low correlation will lower the average correlation, which is used as a confidence measure for the algorithm. This process is repeated independently for all N features, each contributing zero, one, or two components to the overall local error vector $E_t$ at iteration $t$. This is the error vector to be minimized by the delta pose calculation. Step seven, calculating the change in pose based on the error vector $E_t$, is described in the next section.

6. Delta Pose Calculation

The change in pose is computed from the error vector $E_t$.
For simplicity in deriving the delta pose estimate, we first consider the case where each of the features is a 2D match producing two constraints. This will then be extended to handle all three conditions. We start with the following definitions:

$P_t$ = the 6D borescope pose vector at iteration $t$
$u_{i,t}$ = the 2D image position of feature $i$ at iteration $t$
$x_i$ = the 3D position of feature $i$
$F$ = the borescope projection function

where for $F$ we are using the simple perspective camera model with known intrinsic parameters (this is extended to handle radial lens distortion below). Starting from the equation

$u_{i,t} = F(P_t, x_i)$

we can derive an expression for the change in the feature's image coordinates based on changes in the borescope pose as follows:

$\Delta u_{i,t} = u_{i,t} - u_{i,t-1} = F(P_{t-1} + \Delta P, x_i) - F(P_{t-1}, x_i) = J_i(P_{t-1}, x_i)\,\Delta P + \text{H.O.T.}$

where $J_i$ is the Jacobian. Dropping the higher-order terms yields a constraint on the pose for each error vector.

We can combine the constraints for all $N \ge 3$ matches to solve for the pose. First we combine the $\Delta u_{i,t}$ vectors into a $2N$ error vector $E_t$. Likewise we combine the Jacobians $J_i$ into a $2N \times 6$ matrix $J$. Then we determine the pose error $\Delta P_t$ by minimizing the error norm

$\| J\,\Delta P_t - E_t \|^2$

yielding

$\Delta P_t = (J^T J)^{-1} J^T E_t$

The resulting $\Delta P_t$ provides an error vector for the borescope pose and can be computed using singular value decomposition (SVD). The Jacobians $J_i$ can be computed along the lines of the technique described in Lowe [9]. For a given borescope image frame, this technique is applied in an iterative manner to account for non-linearities and feature rematching, as discussed above.

This technique is easily extended to handle the radial lens distortion typical of borescopes.
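The least-squares pose update can be exercised numerically. The Jacobian below is synthetic (random) since the real one depends on the projection model; the point is only that the SVD-based solve recovers a known pose change from noiseless error terms.

```python
import numpy as np

def delta_pose(J, E):
    """Solve min || J @ dP - E ||^2 for the 6-vector pose update.
    np.linalg.lstsq uses an SVD-based solver, consistent with computing
    the update via singular value decomposition as the text suggests.
    J: (2N, 6) stacked feature Jacobians; E: (2N,) stacked error terms.
    """
    dP, *_ = np.linalg.lstsq(J, E, rcond=None)
    return dP

# Synthetic check: a random well-conditioned Jacobian, a known pose
# change, and the error vector it induces to first order.
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 6))      # N = 10 features, 2 rows each
true_dP = np.array([0.1, -0.2, 0.05, 0.01, -0.01, 0.02])
E = J @ true_dP
```

In the real system this solve sits inside the iteration of steps five through seven, with `J` and `E` rebuilt after each rematch.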
The following distortion model [13] solves for the distorted coordinates as a function of the non-distorted coordinates:

$\tilde{u} = u(1 + k r^2)$
$\tilde{v} = v(1 + k r^2)$
$\tilde{r} = r(1 + k r^2)$

In this, $u$ and $v$ are the non-distorted pixel coordinates in consideration, $r$ is the non-distorted radius of that point (measured from the center of the image), and $k$ is the distortion constant computed off-line during camera calibration. This is a second-order approximation, and it can easily be extended to higher orders. This model is less commonly used than the traditional formulation, which computes non-distorted coordinates from distorted values. Our motivation is that this formulation is easily differentiated and hence can be incorporated into the above Jacobian calculations using the chain rule. The derivatives are:

$\partial\tilde{u}/\partial u = 1 + 3ku^2 + kv^2$
$\partial\tilde{u}/\partial v = 2kuv$
$\partial\tilde{v}/\partial u = 2kuv$
$\partial\tilde{v}/\partial v = 1 + 3kv^2 + ku^2$

The second extension to the above derivation handles the situation where some features provide one constraint ("gradient features"), others provide two constraints ("position features"), and the remainder are ignored. Starting with the equation for gradient displacement

$\Delta d_{i,t} = \hat{g}^T \Delta u_{i,t}$

where $\hat{g}$ is the unit gradient direction, we can construct an error vector

$E_t = \begin{bmatrix} \Delta d_{i,t} & \text{for all } i \text{ that are gradient features} \\ \Delta u_{i,t} & \text{for all } i \text{ that are position features} \end{bmatrix}$

The error norm becomes

$\| H J\,\Delta P_t - E_t \|^2$

where we have introduced the matrix $H$, constructed as follows:

$H = \begin{bmatrix} g_{1u} & g_{1v} & 0 & 0 & \cdots & 0 & 0 & 0 \\ 0 & 0 & g_{2u} & g_{2v} & \cdots & 0 & 0 & 0 \\ & & & \vdots & & & & \\ 0 & 0 & \cdots & & 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & & 0 & 1 & \cdots & 0 \\ 0 & 0 & \cdots & & 0 & 0 & \cdots & 1 \end{bmatrix}$

If $a$ is the number of gradient features, then $H$ will have $a$ rows similar to the first two shown. These rows map 2D displacements into gradient displacements, essentially by performing a dot product with the unit gradient. Likewise, if $b$ is the number of position features, then the bottom right corner of $H$ will be the $2b \times 2b$ identity matrix. The resulting size of $H$ will be $(a + 2b)$ by $2N$. We can then solve this in the same manner as before, yielding:

$\Delta P_t = \big((HJ)^T (HJ)\big)^{-1} (HJ)^T E_t$

While $H^T H$ is not invertible, in general $(HJ)^T (HJ)$ is.

7. Results

We have implemented and tested the feature extraction and pose determination algorithms on both real and synthetic data.
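The distortion model and its analytic Jacobian can be checked against finite differences; the distortion constant below is illustrative, since real values come from calibration.

```python
import numpy as np

K = 0.1  # illustrative distortion constant, not a calibrated value

def distort(u, v, k=K):
    """Second-order radial distortion: maps non-distorted (u, v)
    to distorted coordinates, as in the model above."""
    r2 = u * u + v * v
    return u * (1 + k * r2), v * (1 + k * r2)

def distort_jacobian(u, v, k=K):
    """Analytic 2x2 Jacobian of the distortion map, for chain-rule
    composition into the pose Jacobian."""
    return np.array([
        [1 + 3 * k * u * u + k * v * v, 2 * k * u * v],
        [2 * k * u * v, 1 + 3 * k * v * v + k * u * u],
    ])

# Sanity check the derivatives against central finite differences.
u, v, h = 0.3, -0.2, 1e-6
num = np.array([
    [(distort(u + h, v)[0] - distort(u - h, v)[0]) / (2 * h),
     (distort(u, v + h)[0] - distort(u, v - h)[0]) / (2 * h)],
    [(distort(u + h, v)[1] - distort(u - h, v)[1]) / (2 * h),
     (distort(u, v + h)[1] - distort(u, v - h)[1]) / (2 * h)],
])
```

Because the map goes from non-distorted to distorted coordinates, this Jacobian composes directly with the perspective-projection Jacobian, which is the stated motivation for preferring this formulation.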
The off-line feature extraction was developed on a UNIX platform using the Visualization Toolkit [14] as a framework. The on-line system is PC based and, with a simple frame capture card, maintains pose updates at over ten frames per second independent of the size of the CAD model.

The top image in Figure 4 shows a portion of a CAD model from an F110A exhaust duct and liner. The liner is an undulating surface with numerous cooling holes. The 3D features extracted for a portion of this CAD model are shown as small white spheres, typically surrounding the cooling holes. Note that there are no line segments or flat surfaces to use for complex features. The bottom image in Figure 4 shows a computer rendering of a simple CAD model with some of the 3D feature positions shown as white spheres. Note that some of the features are located at changes in material properties, not just at range or normal discontinuities.

Figure 5 shows two sample images from a Welch Allyn VideoProbe 2000 borescope. The image quality suf-