CAMEO-SIM a physics-based broadband scene


HST snapshot imaging of BL Lac objects

Joseph E. Pesce, Department of Astronomy, Pennsylvania State University, USA
Aldo Treves, University of Como, Italy

Abstract. Snapshot images of ∼100 BL Lac objects were obtained with WFPC2 on HST. Sources from various samples, in the redshift range 0.05 to 1.2, were observed and 61 were resolved (51 with known z). The high resolution and homogeneity of the images allow us to address the properties of the immediate environments of BL Lacs with unprecedented capability. Host galaxies of BL Lacs are luminous ellipticals (on average 1 mag brighter than L∗) with little or no disturbed morphology. The nucleus, which is always well centered on the galaxy, contributes in the optical (R band) about half of the total luminosity of the object (the nucleus-to-host ratio ranges from 0.1 to 10). The undisturbed morphology suggests that the nuclear activity has a marginal effect on the overall properties of the hosts. Nonetheless, several examples of close companions have been detected. The luminosity distribution of the host galaxies is compared with that of a large sample of FR-I radio galaxies.

BL Lac objects are radio-loud AGN seen closely along the jet emission (see e.g. the review by Urry and Padovani 1995). The beaming properties of the jets suggest that low-luminosity radio galaxies are the corresponding misaligned population. Observations of the host galaxies are a direct probe of this unification hypothesis. The use of HST data has improved the capability to investigate the galaxies hosting nuclear activity, and in fact a number of specific studies of nearby and intermediate-redshift AGN have been pursued (see e.g. Disney et al. 1995; Bahcall et al. 1997; Hooper, Impey & Foltz 1997; Malkan, Gorjian & Raymond 1998).

Cooperative Caching in Wireless Multimedia Sensor Nets


What's so special about WMSNs?
• [Ian Akyildiz, Dec '06]: we have to rethink the computation-communication paradigm of traditional WSNs, which focused only on reducing energy consumption
• Variable channel capacity: the multi-hop nature of WMSNs implies that wireless link capacity depends on the interference level among nodes
• Multimedia in-network processing

Cooperative caching
• Multiple sensor nodes share and coordinate cached data to cut communication cost and to exploit the aggregate cache space of cooperating sensors (see the sketch after this list)
• Each sensor node has a moderate local storage capacity associated with it, e.g., a flash memory

The cache discovery protocol (1/2)
• Articulation nodes (in bridges), e.g., nodes 3, 4, 7, 16, 18
• Nodes with large fanout, e.g., nodes 14, 8, U
• Therefore: geodesic nodes
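As a rough illustration of the cooperative-caching idea above (not the actual protocol from these slides), a node might resolve a data request by checking its own cache, then its cooperating neighbors, and only then the source. `Node`, `get_datum` and `fetch_from_source` are hypothetical names introduced here for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    local_cache: dict = field(default_factory=dict)  # e.g., flash-backed storage
    neighbors: list = field(default_factory=list)    # cooperating sensor nodes

def fetch_from_source(key):
    """Stand-in for the expensive multi-hop retrieval from the source node."""
    return f"data:{key}"

def get_datum(node: Node, key):
    # 1. Local hit: serve from the node's own cache.
    if key in node.local_cache:
        return node.local_cache[key]
    # 2. Cooperative hit: ask nearby nodes before the source, exploiting
    #    the aggregate cache space and cutting communication cost.
    for peer in node.neighbors:
        if key in peer.local_cache:
            node.local_cache[key] = peer.local_cache[key]
            return node.local_cache[key]
    # 3. Miss: fetch from the source and cache locally for future requests.
    node.local_cache[key] = fetch_from_source(key)
    return node.local_cache[key]
```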

PIC Simulation of the Pierce Electron Gun


1. This thesis first reviews the particle simulation method, focusing on its basic principles, including the macro-particle model, the finite-size particle model, and the electrostatic model adopted for the PIC simulation of the electron gun system programmed in this work. The features and applications of the particle simulation codes MAGIC and MAFIA are also introduced.

2. The development of the Pierce electron gun is then described and its key design parameters are summarized. The flow of the emission current between the electrodes under the space-charge effect and the initial thermal velocity of the emitted electrons is studied, and the 3/2-power law is corrected accordingly; the influence of the cathode electrons' initial thermal velocity on the formation and quality of the electron beam is analysed, and the design of the grid-controlled electron gun is researched.

In this paper, the particle-in-cell (PIC) method is reviewed and summarized. The fundamentals of the PIC method are introduced, including the macro-particle model, the finite-size particle model and the electrostatic model, which is used in this paper to simulate the electron gun system. The effects of the cathode electrons' thermal velocity on the formation and quality of the electron beam were studied, and the design of the grid-controlled electron gun was researched. The space charge is an important characteristic of high-density electron optics, so the accuracy of its distribution has a great effect on the electron optics; the usual solutions of the space charge are as follows. A high-current electron gun series was simulated, and the results were contrasted with results from MAGIC simulations.

Keywords: Pierce gun, particle-in-cell, electron optics, computer-aided design
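To make the electrostatic PIC cycle summarized above concrete (finite-size macro-particles, charge deposition, field solve, particle push), here is a minimal textbook-style 1D sketch. It is illustrative only, not the thesis's gun simulation: it assumes a periodic domain, normalized units, and a neutralizing ion background.

```python
import numpy as np

# Minimal 1D electrostatic PIC sketch: CIC (linear) charge deposition,
# FFT Poisson solve, leapfrog push. All parameters are arbitrary.
np.random.seed(0)
L, ng, npart, dt = 2 * np.pi, 64, 10000, 0.1
dx = L / ng
x = np.random.uniform(0, L, npart)      # macro-particle positions
v = np.random.normal(0, 0.1, npart)     # thermal initial velocities
q_m = -1.0                              # charge-to-mass ratio (electrons)

for step in range(100):
    # 1. Deposit charge onto the grid with linear weighting
    #    (the "finite-size particle" model).
    g = x / dx
    i0 = np.floor(g).astype(int) % ng
    w1 = g - np.floor(g)
    rho = np.bincount(i0, 1 - w1, ng) + np.bincount((i0 + 1) % ng, w1, ng)
    rho = rho / npart - 1.0             # subtract neutralizing background
    # 2. Solve d^2(phi)/dx^2 = -rho by FFT, then E = -d(phi)/dx.
    k = np.fft.fftfreq(ng, d=dx) * 2 * np.pi
    k[0] = 1.0                          # avoid dividing by zero (mean mode)
    phi_k = np.fft.fft(rho) / k**2
    phi_k[0] = 0.0
    E = -np.real(np.fft.ifft(1j * k * phi_k))
    # 3. Gather E at particle positions and advance the particles.
    E_p = E[i0] * (1 - w1) + E[(i0 + 1) % ng] * w1
    v += q_m * E_p * dt
    x = (x + v * dt) % L
```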

A Fast and Accurate Plane Detection Algorithm for Large Noisy Point Clouds Using Filtered Normals


A Fast and Accurate Plane Detection Algorithm for Large Noisy Point Clouds Using Filtered Normals and Voxel Growing

Jean-Emmanuel Deschaud, François Goulette
Mines ParisTech, CAOR-Centre de Robotique, Mathématiques et Systèmes
60 Boulevard Saint-Michel, 75272 Paris Cedex 06
jean-emmanuel.deschaud@mines-paristech.fr, francois.goulette@mines-paristech.fr

Abstract

With the improvement of 3D scanners, we produce point clouds with more and more points, often exceeding millions of points. We therefore need a fast and accurate plane detection algorithm to reduce data size. In this article, we present a fast and accurate algorithm to detect planes in unorganized point clouds using filtered normals and voxel growing. Our work is based on a first step that estimates better normals at the data points, even in the presence of noise. In a second step, we compute a score of local planarity at each point. We then select the best local seed plane and, in a third step, start a fast and robust region growing by voxels that we call voxel growing. We have evaluated and tested our algorithm on different kinds of point clouds and compared its performance to other algorithms.

1. Introduction

With the growing availability of 3D scanners, we are now able to produce large datasets with millions of points. It is necessary to reduce data size, to decrease the noise and, at the same time, to increase the quality of the model. It is interesting to model planar regions of these point clouds by planes. In fact, plane detection is generally a first step of segmentation, but it can be used for many applications. It is useful in computer graphics to model the environment with basic geometry. It is used, for example, in modeling to detect building facades before classification. Robots perform Simultaneous Localization and Mapping (SLAM) by detecting planes of the environment. In our laboratory, we wanted to detect small and large building planes in point clouds of urban environments with millions of points for modeling. As mentioned in [6], the accuracy of the plane detection is important for later steps of the modeling pipeline. We also want to be fast, to be able to process point clouds with millions of points. We present a novel algorithm based on region growing, with improvements in normal estimation and in the growing process. Our method is generic enough to work on different kinds of data, like point clouds from fixed scanners or from Mobile Mapping Systems (MMS). We also aim at detecting building facades in urban point clouds, or little planes like doors, even in very large data sets. Our input is an unorganized noisy point cloud and, with only three "intuitive" parameters, we generate a set of connected components of planar regions. We evaluate our method as well as explain and analyse the significance of each parameter.
2. Previous Works

Although there are many methods of segmentation in range images, like in [10] or in [3], three have been thoroughly studied for 3D point clouds: region growing, the Hough transform from [14] and Random Sample Consensus (RANSAC) from [9].

The application of recognising structures in urban laser point clouds is frequent in the literature. Bauer in [4] and Boulaassal in [5] detect facades in dense 3D point clouds by a RANSAC algorithm. Vosselman in [23] reviews surface growing and 3D Hough transform techniques to detect geometric shapes. Tarsh-Kurdi in [22] detects roof planes in 3D building point clouds by comparing results of the Hough transform and the RANSAC algorithm; they found that RANSAC is more efficient than the first one. Chao Chen in [6] and Yu in [25] present segmentation algorithms in range images for the same application of detecting planar regions in an urban scene. The method in [6] is based on a region growing algorithm in range images and merges results into one labelled 3D point cloud. [25] uses a method different from the three we have cited: they extract a hierarchical subdivision of the input image built like a graph, where leaf nodes represent planar regions.

There are also other methods, like Bayesian techniques. In [16] and [8], they obtain smoothed surfaces from noisy point clouds, with objects modeled by probability distributions, and it seems possible to extend this idea to point cloud segmentation. But techniques based on Bayesian statistics need to optimize a global statistical model, and it is then difficult to process point clouds larger than one million points.

We present below an analysis of the two main methods used in the literature: RANSAC and region growing. The Hough transform algorithm is too time consuming for our application. To compare the complexity of the algorithms, we take a point cloud of size N with only one plane P of size n. We suppose that we want to detect this plane P, and we define n_min, the minimum size of the planes we want to detect. The size of a plane is the area of the plane. If the data density is uniform in the point cloud, then the size of a plane can be specified by its number of points.

2.1. RANSAC

RANSAC is an algorithm initially developed by Fischler and Bolles in [9] that allows the fitting of models without trying all possibilities. RANSAC is based on the probability to detect a model using the minimal set required to estimate the model. To detect a plane with RANSAC, we choose 3 random points (enough to estimate a plane). We compute the plane parameters with these 3 points. Then a score function is used to determine how good the model is for the remaining points. Usually, the score is the number of points belonging to the plane. With noise, a point belongs to a plane if the distance from the point to the plane is less than a parameter $\gamma$. In the end, we keep the plane with the best score. The probability of getting the plane in the first trial is $p = (n/N)^3$. Therefore the probability to get it in $T$ trials is $p = 1 - \left(1 - (n/N)^3\right)^T$. Using equation (1) and supposing $n_{min}/N \ll 1$, we know the minimal number $T_{min}$ of trials needed to have a probability $p_t$ of getting planes of size at least $n_{min}$:

$$T_{min} = \frac{\log(1-p_t)}{\log\!\left(1-\left(\frac{n_{min}}{N}\right)^{3}\right)} \approx \log\!\left(\frac{1}{1-p_t}\right)\left(\frac{N}{n_{min}}\right)^{3}. \qquad (1)$$

For each trial, we test all data points to compute the score of a plane. The RANSAC algorithm complexity therefore lies in $O\!\left(N\left(\frac{N}{n_{min}}\right)^{3}\right)$ when $n_{min}/N \ll 1$, and $T_{min} \to 0$ when $n_{min} \to N$. RANSAC is thus very efficient at detecting large planes in noisy point clouds, i.e. when the ratio $n_{min}/N$ is close to 1, but very slow at detecting small planes in large point clouds, i.e. when $n_{min}/N \ll 1$. After selecting the best model, another step is to extract the largest connected component of each plane. Connected components mean that the minimum distance between each point of the plane and the other points is smaller than a fixed parameter.
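As an illustration of the procedure just described, the following minimal sketch (ours, in Python with NumPy; not the authors' code) draws the trial count from equation (1) and scores candidate planes by inlier count:

```python
import numpy as np

def ransac_plane(points: np.ndarray, gamma: float, n_min: int, p_t: float = 0.99):
    """Minimal RANSAC plane detection in the spirit of Section 2.1.

    points: (N, 3) array; gamma: inlier distance threshold;
    n_min: minimum plane size in points; p_t: target detection probability.
    """
    N = len(points)
    # Equation (1): trials needed to find a plane of >= n_min points
    # with probability p_t, assuming n_min/N << 1.
    T = int(np.ceil(np.log(1.0 / (1.0 - p_t)) * (N / n_min) ** 3))
    T = min(T, 10_000)                  # cap trials for practicality
    best_inliers = None
    for _ in range(T):
        sample = points[np.random.choice(N, 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        dist = np.abs(points.dot(normal) + d)   # point-to-plane distances
        inliers = dist < gamma                  # score = number of inliers
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Extracting the largest connected component of the returned inlier set, as the text notes, is a separate post-processing step.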
Schnabel et al. [20] bring two optimizations to RANSAC: the point selection is done locally, and the score function has been improved. An octree is first created from the point cloud. Points used to estimate plane parameters are chosen locally at a random depth of the octree. The score function is also different from RANSAC: instead of testing all points for one model, they test only a random subset and find the score by interpolation. The algorithm complexity lies in $O(r\,4^{d}\,N^{2}/n_{min})$, where $r$ is the number of random subsets for the score function and $d$ is the maximum octree depth. Their algorithm improves the plane detection speed, but its complexity lies in $O(N^2)$ and it becomes slow on large data sets. And again, we have to extract the largest connected component of each plane.

2.2. Region Growing

Region growing algorithms work well in range images, like in [18]. The principle of region growing is to start with a seed region and to grow it by neighborhood when the neighbors satisfy some conditions. In range images, we have the neighbors of each point through pixel coordinates. In the case of unorganized 3D data, there is no information about the neighborhood in the data structure. The most common method to compute neighbors in 3D is to compute a Kd-tree to search the k nearest neighbors. The creation of a Kd-tree lies in $O(N \log N)$ and the search of the k nearest neighbors of one point lies in $O(\log N)$. The advantage of these region growing methods is that they are fast when there are many planes to extract, robust to noise, and extract the largest connected component immediately. But they only use the distance from point to plane to extract planes and, as we will see later, that is not accurate enough to detect correct planar regions.

Rabbani et al. [19] developed a method of smooth area detection that can be used for plane detection. They first estimate the normal of each point, like in [13]. The point with the minimum residual starts the region growing. They test the k nearest neighbors of the last point added: if the angle between the normal of the point and the current normal of the plane is smaller than a parameter $\alpha$, then they add this point to the smooth region. With a Kd-tree for the k nearest neighbors, the algorithm complexity is in $O(N + n \log N)$. The complexity seems to be low but, in the worst case, when $n/N \approx 1$, for example for facade detection in point clouds, the complexity becomes $O(N \log N)$.
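To make the growing principle concrete, here is a simplified normal-based region-growing sketch in the spirit of Rabbani et al. [19]. It is illustrative only: unlike the original method, it keeps the seed normal fixed rather than refitting the plane as points are added.

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_smooth_region(points, normals, seed, k=20, alpha_deg=30.0):
    """Grow a smooth region from `seed` over (N, 3) arrays of points
    and unit normals, adding k-nearest neighbors whose normals are
    within alpha_deg of the seed's normal."""
    tree = cKDTree(points)                      # O(N log N) construction
    cos_alpha = np.cos(np.radians(alpha_deg))
    region, frontier = {seed}, [seed]
    plane_normal = normals[seed]                # simplification: fixed normal
    while frontier:
        idx = frontier.pop()
        _, nbrs = tree.query(points[idx], k=k)  # O(log N) per query
        for j in np.atleast_1d(nbrs):
            if j in region:
                continue
            # Angle criterion: |n_j . n_plane| > cos(alpha)
            if abs(np.dot(normals[j], plane_normal)) > cos_alpha:
                region.add(j)
                frontier.append(j)
    return region
```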
3. Voxel Growing

3.1. Overview

In this article, we present a new algorithm adapted to large data sets of unorganized 3D points and optimized to be accurate and fast. Our plane detection method works in three steps. In the first part, we compute a better estimation of the normal at each point by a filtered weighted plane fitting. In a second step, we compute the score of local planarity at each point. We select the best seed point, which represents a good seed plane, and in the third part we grow this seed plane by adding all points close to the plane. The growing step is based on a voxel growing algorithm. The filtered normals, the score function and the voxel growing are the innovative contributions of our method.

As input, we need point clouds dense relative to the level of detail we want to detect. As output, we produce connected components of planes in the point cloud. This notion of connected components is linked to the data density. With our method, the connected components of the detected planes are linked to the parameter d of the voxel grid.

Our method has three "intuitive" parameters: d, area_min and $\gamma$. "Intuitive" because they are linked to physical measurements. d is the voxel size used in voxel growing and also represents the connectivity of points in detected planes. $\gamma$ is the maximum distance between a point of a plane and the plane model; it represents the plane thickness and is linked to the point cloud noise. area_min represents the minimum area of the planes we want to keep.

3.2. Details

3.2.1 Local Density of Point Clouds

In a first step, we compute the local density of the point cloud, like in [17]. For that, we find the radius $r_i$ of the sphere containing the k nearest neighbors of point i. Then we calculate $\rho_i = \frac{k}{\pi r_i^2}$. In our experiments, we find that k = 50 is a good number of neighbors. It is important to know the local density because many laser point clouds are made with a fixed-resolution-angle scanner and are therefore not evenly distributed. We use the local density in section 3.2.3 for the score calculation.

3.2.2 Filtered Normal Estimation

Normal estimation is an important part of our algorithm. The paper [7] presents and compares three normal estimation methods. They conclude that weighted plane fitting, or WPF, is the fastest and the most accurate for large point clouds. WPF is an idea of Pauly et al. in [17]: the fitting plane of a point p must take the nearby points into consideration more than the distant ones. The normal least square is explained in [21] and is the minimum of $\sum_{i=1}^{k}(n_p \cdot p_i + d)^2$. The WPF is the minimum of $\sum_{i=1}^{k}\omega_i (n_p \cdot p_i + d)^2$, where $\omega_i = \theta(\lVert p_i - p\rVert)$ and $\theta(r) = e^{-2r^2/r_i^2}$. For solving $n_p$, we compute the eigenvector corresponding to the smallest eigenvalue of the weighted covariance matrix $C_w = \sum_{i=1}^{k}\omega_i\,{}^{t}(p_i - b_w)(p_i - b_w)$, where $b_w$ is the weighted barycenter. For the three methods explained in [7], we get a good approximation of the normals in smooth areas, but we have errors in sharp corners. In figure 1, we have tested the weighted normal estimation on two planes with uniform noise forming an angle of 90˚. We can see that the normal is not correct on the corners of the planes and in the red circle.

To improve the normal calculation, which improves the plane detection especially on borders of planes, we propose a filtering process in two phases. In a first step, we compute the weighted normals (WPF) of each point as described above, by minimizing $\sum_{i=1}^{k}\omega_i (n_p \cdot p_i + d)^2$. In a second step, we compute the filtered normal by using an adaptive local neighborhood: we compute the new weighted normal with the same sum minimization, but keeping only points of the neighborhood whose normals from the first step satisfy $|n_p \cdot n_i| > \cos(\alpha)$. With this filtering step, we have the same results in smooth areas and better results in sharp corners. We call our normal estimation filtered weighted plane fitting (FWPF).

Figure 1. Weighted normal estimation of two planes with uniform noise and with a 90˚ angle between them.

We have tested our normal estimation by computing normals on synthetic data with two planes, different angles between them, and different values of the parameter $\alpha$. We can see in figure 2 the mean error on normal estimation for WPF and FWPF with $\alpha$ = 20˚, 30˚, 40˚ and 90˚. Using $\alpha$ = 90˚ is the same as not doing the filtering step. We see in figure 2 that $\alpha$ = 20˚ gives a smaller error in normal estimation when the angle between the planes is smaller than 60˚, and $\alpha$ = 30˚ gives the best results when the angle between the planes is greater than 60˚. We have chosen the value $\alpha$ = 30˚ as the best because it gives the smallest mean error in normal estimation when the angle between the planes varies from 20˚ to 90˚. Figure 3 shows the normals of the planes with a 90˚ angle and better results in the red circle (normals are at 90˚ with the plane).

Figure 2. Comparison of mean error in normal estimation of two planes with $\alpha$ = 20˚, 30˚, 40˚ and 90˚ (= no filtering).

Figure 3. Filtered weighted normal estimation of two planes with uniform noise and with a 90˚ angle between them ($\alpha$ = 30˚).
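The two-phase estimation can be sketched as follows. This is a loose reading of the text, not the authors' implementation; in particular, the kernel radius $r_i$ is taken here as the distance to the farthest of the k neighbors, which is an assumption.

```python
import numpy as np

def weighted_normal(p, nbrs):
    """WPF: weighted plane fit around p over its (k, 3) neighbors."""
    r = np.linalg.norm(nbrs - p, axis=1)
    r_i = r.max() or 1.0                 # assumed kernel radius; guard r=0
    w = np.exp(-2.0 * r ** 2 / r_i ** 2) # theta(r) = exp(-2 r^2 / r_i^2)
    b_w = w @ nbrs / w.sum()             # weighted barycenter
    d = nbrs - b_w
    C_w = (w[:, None] * d).T @ d         # weighted covariance matrix
    # Normal = eigenvector of the smallest eigenvalue of C_w.
    return np.linalg.eigh(C_w)[1][:, 0]

def fwpf_normal(p, nbrs, n_first, nbr_normals, alpha_deg=30.0):
    """FWPF second phase: refit keeping only neighbors whose first-phase
    normals n_i satisfy |n_p . n_i| > cos(alpha)."""
    keep = np.abs(nbr_normals @ n_first) > np.cos(np.radians(alpha_deg))
    return weighted_normal(p, nbrs[keep]) if keep.any() else n_first
```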
3.2.3 The score of local planarity

In many region growing algorithms, the criterion used for the score of the local fitting plane is the residual, like in [18] or [19], i.e. the sum of the squared distances from the points to the plane. We have a different score function to estimate local planarity. For that, we first compute the neighbors $N_i$ of a point p as the points whose normals $n_i$ are close to the normal $n_p$. More precisely, we compute $N_i = \{p_j \text{ in the } k \text{ neighbors of } i \,/\, |n_i \cdot n_p| > \cos(\alpha)\}$. It is a way to keep only the points that are probably on the local plane before the least square fitting. Then, we compute the local plane fitting of point p with the $N_i$ neighbors by least squares, like in [21]. The set $N'_i$ is the subset of $N_i$ of points belonging to the plane, i.e. the points for which the distance to the local plane is smaller than the parameter $\gamma$ (to account for the noise). The score s of the local plane is the area of the local plane, i.e. the number of points "in" the plane divided by the local density $\rho_i$ (seen in section 3.2.1): the score is $s = \frac{\mathrm{card}(N'_i)}{\rho_i}$. We take the area of the local plane as the score function, and not the number of points or the residual, in order to be more robust to the sampling distribution.

3.2.4 Voxel decomposition

We use a data structure that is the core of our region growing method. It is a voxel grid that speeds up the plane detection process. Voxels are small cubes of length d that partition the point cloud space. Every data point belongs to a voxel, and a voxel contains a list of points. We use the Octree Class Template in [2] to compute an octree of the point cloud. The leaf nodes of the graph built are voxels of size d. Once the voxel grid has been computed, we start the plane detection algorithm.
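A voxel structure can be as simple as a hash map from integer cell coordinates to point indices. The sketch below is ours (a plain dictionary rather than the paper's octree-based implementation); it also shows the constant-time 26-voxel neighborhood that voxel growing relies on.

```python
import numpy as np
from collections import defaultdict

def build_voxel_grid(points: np.ndarray, d: float):
    """Bucket every point of an (N, 3) cloud into a cube of side d.
    Returns a dict mapping integer voxel coordinates to point indices."""
    grid = defaultdict(list)
    keys = np.floor(points / d).astype(int)
    for i, key in enumerate(map(tuple, keys)):
        grid[key].append(i)
    return grid

def neighbor_voxels(key):
    """The 26 neighbors of a voxel, visited during voxel growing."""
    x, y, z = key
    return [(x + dx, y + dy, z + dz)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
            if (dx, dy, dz) != (0, 0, 0)]
```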
3.2.5 Voxel Growing

With the estimator of local planarity, we take the point p with the best score, i.e. the point with the maximum area of local plane. We have the model parameters of this best seed plane and we start with an empty set E of points belonging to the plane. The initial point p is in a voxel $v_0$. All the points in the initial voxel $v_0$ for which the distance from the seed plane is less than $\gamma$ are added to the set E. Then, we compute new plane parameters by least square refitting with the set E. Instead of growing with k nearest neighbors, we grow with voxels: we test the points in the 26 neighbor voxels. This is a way to search the neighborhood in constant time, instead of $O(\log N)$ for each neighbor as with a Kd-tree. In a neighbor voxel, we add to E the points for which the distance to the current plane is smaller than $\gamma$ and the angle between the normal computed at each point and the normal of the plane is smaller than a parameter $\alpha$: $|\cos(n_p, n_P)| > \cos(\alpha)$, where $n_p$ is the normal of the point p and $n_P$ is the normal of the plane P. We have tested different values of $\alpha$ and empirically found that 30˚ is a good value for all point clouds. If we added at least one point to E for this voxel, we compute new plane parameters from E by least square fitting and we test its 26 voxel neighbors. It is important to perform the plane least square fitting at each voxel addition, because the seed plane model is not good enough with noise to be used for all the voxel growing, but only in the surrounding voxels. This growing process is faster than classical region growing because we do not compute a least square fit for each point added, but only for each voxel added.

The least square fitting step must be computed very fast. We use the same method as explained in [18], with incremental update of the barycenter b and covariance matrix C as in equation (2). We know from [21] that the barycenter b belongs to the least square plane and that the normal of the least square plane $n_P$ is the eigenvector of the smallest eigenvalue of C.

$$b_0 = 0_{3\times 1},\qquad C_0 = 0_{3\times 3},\qquad b_{n+1} = \frac{1}{n+1}\left(n\,b_n + p_{n+1}\right),\qquad C_{n+1} = C_n + \frac{n}{n+1}\,{}^{t}\!\left(p_{n+1}-b_n\right)\left(p_{n+1}-b_n\right), \qquad (2)$$

where $C_n$ is the covariance matrix of a set of n points, $b_n$ is the barycenter vector of a set of n points, and $p_{n+1}$ is the (n+1)-th point vector added to the set.

This voxel growing method leads to a connected component set E, because the points have been added through connected voxels. In our case, the minimum distance between one point and E is less than the parameter d of our voxel grid. That is why the parameter d also represents the connectivity of points in detected planes.
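Equation (2) maps directly to a few lines of code. The sketch below (ours, not the authors') streams points into a running barycenter and covariance, then reads the plane normal off the smallest eigenvector:

```python
import numpy as np

def update_plane(b, C, n, p):
    """Incremental barycenter/covariance update of equation (2).

    b, C: barycenter (3,) and covariance (3, 3) of the first n points;
    p: the (n+1)-th point. Returns the updated b, C, n."""
    b_new = (n * b + p) / (n + 1)
    diff = p - b
    C_new = C + (n / (n + 1)) * np.outer(diff, diff)
    return b_new, C_new, n + 1

# Toy usage: the least-squares plane passes through b; its normal is the
# eigenvector of the smallest eigenvalue of C.
b, C, n = np.zeros(3), np.zeros((3, 3)), 0
for p in np.random.rand(100, 3):
    b, C, n = update_plane(b, C, n, p)
normal = np.linalg.eigh(C)[1][:, 0]
```

Because the refit is O(1) per added point, refitting once per voxel (rather than per point) is what makes voxel growing cheaper than classical region growing.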
3.2.6 Plane Detection

To get all planes with an area of at least area_min in the point cloud, we repeat these steps (best local seed plane choice and voxel growing) over all points, in descending order of their score. Once we have a set E whose area is bigger than area_min, we keep it and classify all points in E.

4. Results and Discussion

4.1. Benchmark analysis

To test the improvements of our method, we have employed the comparative framework of [12], based on range images. For that, we have converted all images into 3D point clouds. All point clouds created have 260k points. After our segmentation, we project the labelled points onto a segmented image and compare with the ground truth image. We have chosen our three parameters d, area_min and $\gamma$ by optimizing the segmentation result on the 10 perceptron training images (the perceptron is a portable scanner that produces a range image of its environment). The best results have been obtained with area_min = 200, $\gamma$ = 5 and d = 8 (units are not provided in the benchmark). We show the results of the segmentation of the 30 perceptron images in Table 1.

GT Regions is the mean number of ground truth planes over the 30 ground truth range images. Correct detection, over-segmentation, under-segmentation, missed and noise are the mean numbers of correct, over-segmented, under-segmented, missed and noise planes detected by the methods. The tolerance of 80% is the minimum percentage of points we must have detected, compared to the ground truth, to have a correct detection. More details are in [12]. UE is a method from [12]; UFPR is a method from [10]. It is important to notice that UE and UFPR are range image methods and our method is not well suited for range images but for 3D point clouds. Nevertheless, it is a good benchmark for comparison, and we see in Table 1 that the accuracy of our method is very close to the state of the art in range image segmentation.

To evaluate the different improvements of our algorithm, we have tested different variants of our method: without normals (only with the distance from points to the plane), without voxel growing (with a classical region growing by k neighbors), without our FWPF normal estimation (with WPF normal estimation), and without our score function (with the residual score function). The comparison is visible in Table 2. We can see the difference in computing time between region growing and voxel growing. We have tested our algorithm with and without normals and found that the accuracy cannot be achieved without normal computation. There is also a big difference in the correct detection between WPF and our FWPF normal estimation, as we can see in figure 4. Our FWPF normal brings a real improvement in border estimation of planes. Black points in the figure are non-classified points.

Figure 5. Correct detection of our segmentation algorithm when the voxel size d changes.

We would like to discuss the influence of the parameters on our algorithm. We have three parameters: area_min, which represents the minimum area of the planes we want to keep; $\gamma$, which represents the thickness of the plane (it is generally closely tied to the noise in the point cloud, and especially the standard deviation $\sigma$ of the noise); and d, which is the minimum distance from a point to the rest of the plane.
These three parameters depend on the point cloud features and the desired segmentation. For example, if we have a lot of noise, we must choose a high $\gamma$ value. If we want to detect only large planes, we set a large area_min value. We also focus our analysis on the robustness of the voxel size d in our algorithm, i.e. on the ratio of points vs. voxels. We can see in figure 5 the variation of the correct detection when we change the value of d. The method seems to be robust when d is between 4 and 10, but the quality decreases when d is over 10. This is due to the fact that, for a large voxel size d, some planes from different objects are merged into one plane.

Table 1. Average results of different segmenters at 80% compare tolerance.

Method | GT Regions | Correct detection | Over-segmentation | Under-segmentation | Missed | Noise | Duration (s)
UE | 14.6 | 10.0 | 0.2 | 0.3 | 3.8 | 2.1 | -
UFPR | 14.6 | 11.0 | 0.3 | 0.1 | 3.0 | 2.5 | -
Our method | 14.6 | 10.9 | 0.2 | 0.1 | 3.3 | 0.7 | 308

Table 2. Average results of variants of our segmenter at 80% compare tolerance.

Variant | GT Regions | Correct detection | Over-segmentation | Under-segmentation | Missed | Noise | Duration (s)
Without normals | 14.6 | 5.67 | 0.1 | 0.1 | 9.4 | 6.5 | 70
Without voxel growing | 14.6 | 10.7 | 0.2 | 0.1 | 3.4 | 0.8 | 605
Without FWPF | 14.6 | 9.3 | 0.2 | 0.1 | 5.0 | 1.9 | 195
Without our score function | 14.6 | 10.3 | 0.2 | 0.1 | 3.9 | 1.2 | 308
With all improvements | 14.6 | 10.9 | 0.2 | 0.1 | 3.3 | 0.7 | 308

4.1.1 Large scale data

We have tested our method on different kinds of data. We have segmented the urban data in figure 6, from our Mobile Mapping System (MMS) described in [11]. The mobile system generates 10k pts/s with a density of 50 pts/m² and very noisy data ($\sigma$ = 0.3 m). For this point cloud, we want to detect building facades. We have chosen area_min = 10 m² and d = 1 m, to get large connected components, and $\gamma$ = 0.3 m to cope with the noise.

We have tested our method on a point cloud from the Trimble VX scanner in figure 7. It is a point cloud of size 40k points with only 20 pts/m², with less noise because it is a fixed scanner ($\sigma$ = 0.2 m). In that case, we also wanted to detect building facades and kept the same parameters, except $\gamma$ = 0.2 m because we had less noise. We see in figure 7 that we have detected two facades. By setting a larger voxel size d, like d = 10 m, we detect only one plane. We choose d, like area_min and $\gamma$, according to the desired segmentation and to the level of detail we want to extract from the point cloud.

We also tested our algorithm on the point cloud from the LEICA Cyrax scanner in figure 8. This point cloud has been taken from the AIM@SHAPE repository [1]. It is a very dense point cloud from multiple fixed positions of the scanner, with about 400 pts/m² and very little noise ($\sigma$ = 0.02 m). In this case, we wanted to detect all the little planes to model the church in planar regions. That is why we have chosen d = 0.2 m, area_min = 1 m² and $\gamma$ = 0.02 m.

In figures 6, 7 and 8 we show, on the left, the input point cloud and, on the right, only the points detected in a plane (planes are in random colors). The red points in these figures are seed plane points. We can see in these figures that planes are very well detected, even with high noise.
Table 3 shows the information on the point clouds, the results with the number of planes detected, and the duration of the algorithm. The time includes the computation of the FWPF normals of the point cloud. We can see in Table 3 that our algorithm performs linearly in time with respect to the number of points. The choice of parameters has little influence on the computing time. The computation time is about one millisecond per point, whatever the size of the point cloud (we used a PC with a QuadCore Q9300 and 2 GB of RAM). The algorithm has been implemented using only one thread and in-core processing. Our goal is to compare the improvement of plane detection between classical region growing and our region growing, with better normals for more accurate planes and voxel growing for faster detection. Our method seems to be compatible with out-of-core implementations like those described in [24] or in [15].

Table 3. Results on different data.

 | MMS Street | VX Street | Church
Size (points) | 398k | 42k | 7.6M
Mean density | 50 pts/m² | 20 pts/m² | 400 pts/m²
Number of planes | 20 | 21 | 42
Total duration | 452 s | 33 s | 6900 s
Time/point | 1 ms | 1 ms | 1 ms

5. Conclusion

In this article, we have proposed a new method of plane detection that is fast and accurate, even in the presence of noise. We demonstrate its efficiency with different kinds of data and its speed on large data sets with millions of points. Our voxel growing method has a complexity of O(N) and is able to detect large and small planes in very large data sets, extracting them directly as connected components.

Figure 4. Ground truth; our segmentation without and with filtered normals.

Figure 6. Plane detection in a street point cloud generated by MMS (d = 1 m, area_min = 10 m², γ = 0.3 m).

References

[1] AIM@SHAPE repository.
[2] Octree class template, /code/octree.html.
[3] A. Bab-Hadiashar and N. Gheissari. Range image segmentation using surface selection criterion. 2006. IEEE Transactions on Image Processing.
[4] J. Bauer, K. Karner, K. Schindler, A. Klaus, and C. Zach. Segmentation of building models from dense 3d point-clouds. 2003. Workshop of the Austrian Association for Pattern Recognition.
[5] H. Boulaassal, T. Landes, P. Grussenmeyer, and F. Tarsha-Kurdi. Automatic segmentation of building facades using terrestrial laser data. 2007. ISPRS Workshop on Laser Scanning.
[6] C. C. Chen and I. Stamos. Range image segmentation for modeling and object detection in urban scenes. 2007. 3DIM 2007.
[7] T. K. Dey, G. Li, and J. Sun. Normal estimation for point clouds: A comparison study for a voronoi based method. 2005. Eurographics Symposium on Point-Based Graphics.
[8] J. R. Diebel, S. Thrun, and M. Brunig. A bayesian method for probable surface reconstruction and decimation. 2006. ACM Transactions on Graphics (TOG).
[9] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM.
[10] P. F. U. Gotardo, O. R. P. Bellon, and L. Silva. Range image segmentation by surface extraction using an improved robust estimator. 2003. Proceedings of Computer Vision and Pattern Recognition.
[11] F. Goulette, F. Nashashibi, I. Abuhadrous, S. Ammoun, and C. Laurgeau. An integrated on-board laser range sensing system for on-the-way city and road modelling. 2007. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
[12] A. Hoover, G. Jean-Baptiste, et al. An experimental comparison of range image segmentation algorithms. 1996. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[13] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface reconstruction from unorganized points. 1992. International Conference on Computer Graphics and Interactive Techniques.
[14] P. Hough. Method and means for recognizing complex patterns. 1962. US Patent.
[15] M. Isenburg, P. Lindstrom, S. Gumhold, and J. Snoeyink. Large mesh simplification using processing sequences. 2003.

Hikvision 30X HD Fisheye Network Camera User Manual


Copyright © 1993-2017 Infinova. All rights reserved. Appearance and specifications are subject to change without prior notice.

• Inbuilt 30X HD integrated camera module
• 1/1.9" large-area progressive scanning CMOS sensor
• ICR infrared filter with automatic switching to realize true day/night surveillance
• Starlight-level ultra-low illumination: 0.0005 lux
• Supports multi-frame composite-pattern wide dynamic range; the maximum dynamic range is 120 dB
• Inbuilt efficient infrared lamps, wavelength 850 nm, ensure stable long-term use and reduce maintenance cost
• Night vision distance up to 200 m
• The ways of turning on the infrared lamps are flexible, to meet diversified surveillance environment demands
• The infrared power is adjusted automatically based on dome drive zooming, or manually, to optimize the night-vision fill-in light effects
• HD network video output: 1920×1080@60fps
• Features smart functions to achieve border protection (wire cross, intrusion)
• Three simultaneous video streams: dual H.265 & M-JPEG or dual H.264 & scalable M-JPEG
• Supports embedded storage/NAS storage
• Supports alarm recording and alarm snapshots
• Bi-directional audio, G.711a/G.711u/AAC standard optional
• Two alarm inputs and one alarm output
• Supports motion detection, 4 detecting areas dividable
• Supports servo feedback mechanism and multiple alarm-trigger ways, such as IO input, network disconnected, motion detection and smart detection; supports flexible alarm-associated configurations, such as I/O output, email, FTP picture-uploading, audio and TF card recording
• Supports local recording
• Supports Region of Interest (ROI), 8 regions dividable
• Allows multiple users to perform live access and parameter settings via Web Server
• Supports preset, autopan, pattern, autoscan, time tour, normal tour, power-on return, etc.
• Supports auto-flip and image overlapping
• Manual consumption adjustment
• Compatible with Infinova digital video surveillance software and convenient to integrate with other video surveillance software
• Supports ONVIF Profile S & G standards
• Standard SDK, easy to integrate with other digital systems
• Supports RS485 control and analog video output for easy debugging
• IP67 protection rating; inbuilt heater and air circulation system to avoid icing
• Supports remote network upgrade
• Adopts hydrophobic, self-cleaning lens

The VT231-A230-A series is our newly introduced high-definition infrared network dome camera that supports 1920×1080@60fps HD network video output. It adopts H.265/H.264/M-JPEG encoding, and its output provides excellent definition and color revivification, which enables the acquisition of rich and accurate details so as to effectively guarantee smart analysis accuracy.

This product adopts large-power LED infrared lamps (wavelength 850 nm) with a long night-vision distance of up to 200 m and strong illumination. The IR lamps can turn on or off automatically based on environmental lighting conditions, or can be adjusted manually. The IR illumination allows flexible adjustment so as to reduce IR lamp calorific value and extend its service life.

The user-friendly GUI design allows users to perform dome PTZ control easily via the network and to configure detailed camera parameter settings. At the Web interface, users can perform dome camera settings and operations by using a mouse, which is more convenient than traditional keyboard control. It also supports area zoom and image PTZ functions. VT231-A230-A series dome cameras also feature general dome functions such as preset, pattern, autopan, autoscan, time tour and normal tour.

Ordering information: VT231-A230-A061 — HD IR IP dome camera, 2.0M, 30X, 1/1.9" CMOS, day/night, H.265/H.264/MJPEG, with audio alarm, outdoor, bracket mount, 24VDC/24VAC/PoE. If the PoE power source is selected, the LAS60-57CN-RJ45-F is required.

Accessories:
V1761K Wall Mount, bayonet, 10 inches
V1762K Corner Mount, bayonet, 10 inches
V1763K Pole-side Mount, bayonet, 10 inches
LAS60-57CN-RJ45-F PoE power sourcing equipment, 100-240VAC input, 60W output

Dimensions (unit: inch; mm in parentheses). Mounting: wall mounting, corner mounting, pole mounting.

Hikvision DS-2ZMN2507(C) 2MP ICR Day/Night Network Zoom Camera Module Datasheet


DS-2ZMN2507(C) 25 × 2MP 1/2.8″ ICR Day/Night Zoom Camera Module

The Hikvision DS-2ZMN2507(C) 2MP ICR Day/Night Network Zoom Camera Module adopts a 1/2.8″ progressive scan CMOS chip. With the 25 × optical zoom lens, the camera module offers more detail over expansive areas. This series of camera modules can be used for different types of speed domes and PTZ cameras.

⏹ 2MP 1/2.8″ progressive scan CMOS
⏹ Up to 1920×1080 resolution
⏹ 25 × optical zoom; focal length 4.8 mm to 120 mm
⏹ Min. illumination: color: 0.05 Lux @ (F1.6, AGC ON); B/W: 0.01 Lux @ (F1.6, AGC ON)
⏹ Zoom speed: 3.6 s
⏹ IR cut filter with auto switch
⏹ 3D DNR, low bit rate, digital WDR
⏹ Small size and low power consumption
⏹ Easy to connect to speed domes and PTZ cameras

Specification

Camera Module
Image Sensor: 1/2.8″ progressive scan CMOS
Min. Illumination: color: 0.05 Lux @ (F1.6, AGC ON); B/W: 0.01 Lux @ (F1.6, AGC ON)
Resolution and Frame Rate: main stream: 50 Hz: 25 fps (1920×1080, 1280×960, 1280×720); 60 Hz: 30 fps (1920×1080, 1280×960, 1280×720); sub-stream: 50 Hz: 25 fps (704×576, 640×480, 352×288); 60 Hz: 30 fps (704×480, 640×480, 352×240)
Video Compression: H.264, H.265
Audio Compression: G.722.1, G.711-a law, G.711-u law, MP2L2, G.726, PCM
White Balance: manual, auto 1, auto 2, sodium lamp, indoor, outdoor, fluorescent lamp
Gain Control: auto, manual
SNR: > 52 dB
3D DNR: yes
BLC: yes
Regional Focus: yes
Shutter: 1/1 s to 1/30,000 s
Day & Night: Auto / Color / BW / Scheduled-Switch / Triggered by Alarm Input
Digital Zoom: 12 ×
Focus: auto, semi-auto, manual
Regional Exposure: yes
Video Bit Rate: 32 Kbps to 16 Mbps
Heartbeat: yes
Exposure Mode: auto, iris priority, shutter priority, manual
Day/Night Switch: IR cut filter

Lens
Focal Length: 4.8 mm to 120 mm, 25 × optical
Zoom Speed: approx. 3.6 s (optical, wide-tele)
Horizontal FOV: 57.6° to 2.5° (wide-tele)
Min. Working Distance: 100 mm to 1500 mm (wide-tele)
Aperture: F1.6 to F3.5

Function
Image Enhancement: WDR, HLC
Smart Encoding: low bit rate, ROI
Exception Detection: illegal login
Power-off Memory: yes
Smart Detection: motion detection, video tampering detection, audio exception detection, intrusion detection, line crossing detection

Network
Protocols: IPv4/IPv6, HTTP, HTTPS, 802.1x, QoS, FTP, SMTP, UPnP, DNS, DDNS, NTP, RTSP, RTCP, RTP, TCP/IP, DHCP, Bonjour
API: open-ended; supports ONVIF and ISAPI; supports HIKVISION SDK and third-party management platforms
Web Browser: IE 8 to 11, Chrome 31.0+, Firefox 30.0+, Safari 11+
Simultaneous Live View: up to 20 channels
User: up to 32 users; 3 levels: administrator, operator, and user
Security Measures: user authentication (ID and PW); host authentication (MAC address); HTTPS encryption; IEEE 802.1x port-based network access control

Interface
Power Interface: DC 12 V ±10%
Communication Interface: 10 M/100 M Ethernet interface
Audio I/O: 1-ch audio input and 1-ch audio output
Alarm I/O: 1-ch alarm input and 1-ch alarm output
Video Output: yes
SDI Output: no
RS-485: yes
On-board Storage: yes
Interface: 36-pin FFC (including network interface, RS485, RS232, CVBS, SDHC, alarm in/out, line in/out, power supply)
Communication: RS232 interface, HIKVISION protocol; RS485 interface, Pelco protocol

General
Power Consumption: static: 2.5 W; dynamic: 4.5 W
Operating Conditions: temperature: -10 °C to 60 °C (14 °F to 140 °F); humidity: < 90%
Dimensions: 50 mm × 60 mm × 88.6 mm
Weight: 266 g (0.59 lb)

⏹ Dimension: unit: mm
⏹ Available Model: DS-2ZMN2507(C)

Personal Profile


Vassilis Theodorakopoulos
Wireless Communications Engineer
30 Richmond Mount, Leeds, West Yorkshire
Tel: 07796 978197 (mobile), 0113 2946608 (home)
Email: vtheodor@ URL: /~vtheodor
Nationality: Greek (EU citizen; no work permit required)

Personal Profile
My academic experience has given me a thorough grounding in the field of engineering, and I have developed a great interest in the area of wireless communications. My main focus over the past 5 years has been the design and analysis of image and video coding, processing, and communication systems. I am familiar with programming and simulation tools, and I enjoy learning new languages and toolsets, which I acquire easily. I maintained a high level of achievement throughout my academic studies and have a record of scientific publications in peer-reviewed journals and conferences. I consider myself a hardworking person, able to adapt to new situations and to work effectively on both a team and an individual basis.

Key Technical Abilities
• Modelling skills for layered image and video transmission over mobile and wireless networks
• Computing and system development with C, Matlab and Python
• User of Microsoft / Unix / Linux operating systems

Education
• 2003 – 2007: PhD in Computer Science
School of Informatics, Department of Computing, University of Bradford, Bradford, UK
Research thesis: Multi-priority QAM Transmission System for High Quality Mobile Video Applications. An experimental comparison of an M-QAM transmission system suitable for video transmission targeted at wireless networks is presented. The communication system is based on layered coding and unequal error protection to make the video data robust to channel errors.
Key skills acquired: the philosophy of research, effective written and oral presentation, supervision of projects, advanced video coding, wireless communications, etc. (For a full list of publications please visit my web page.)
• 2000 – 2001: MSc in Communications and Real-Time Electronic Systems
School of Engineering, Department of Electronics & Communications, University of Bradford, Bradford, UK
Research thesis: A Signal-Space Simulation for QAM. This Master's thesis carries out an investigation into a signal-space simulation of a digital communication system for transmission via a single AWGN channel and via a multipath Rayleigh fading channel. For the transmission, an M-QAM system is considered.
Key skills acquired: wireless communications, image and video processing, digital signal processing, etc.
• 1996 – 2000: BEng in Electronic, Telecommunications & Computer Engineering
School of Engineering, Department of Electronics & Communications, University of Bradford, Bradford, UK
Project report: Investigation of Techniques for the Reduction of Howl-Round in Public Address Systems. This project attempts to use various echo cancellation techniques to control the effects of acoustic feedback between an adjacent microphone and loudspeaker in a public address system.
Key skills/knowledge acquired: basics of electronics design, advanced computer programming, etc.

Professional Experience
Feb 2002 – Jul 2002: Research Assistant
School of Informatics, University of Bradford, Bradford, West Yorkshire, UK
My key responsibility in this role was to undertake a project on multi-priority mobile transmission systems, which involved modelling and simulation analysis of a novel M-QAM transmission system for mobile video applications. I accomplished this by leading the design and development of the transmission system.
This role allowed me to further develop my project management skills. Responsibilities included the supervision of laboratories, and support and help to a PhD researcher with an active project and to MSc students.

Jul 1998 – Jul 1999: Student Engineer
Department of Development, Pace Micro Communications, Shipley, West Yorkshire, UK
During the 12 months in the Development Department, I was involved in building and testing development products, supervising the Pace ISDN and technical checks of the imported electrical components. My duties also involved communicating with suppliers, clients, warehouse staff and the administration office. As a student engineer I also worked for other departments inside the company and developed the ability to understand the dynamics of a working environment, and learned the aims of a business and how different functions such as development, production, sales and marketing all relate.

Jul 1997 – Sep 1997 and Jul 1996 – Sep 1996: Student Engineer (vocational work)
Hellenic Sugar Factory – Factory of Orestiada, Orestiada, 68200, Greece
I had the opportunity to work and be trained in the Technical Support Department at the factory of Orestiada. My duties included installing and upgrading the employees' PCs, installing a small network inside the factory, and familiarising myself with the company's database and central computer administration.

Key Skills
Self-management: Approaching the PhD from a project management perspective, as the project manager I was equipped with effective organisational, time and resource management skills in order to successfully complete the course on time and remain in control.
Problem solving: During my academic career I developed the ability to see a task through to its conclusion. There were several times during my research career when the results I had were leading to a dead end, but by employing efficient problem-solving strategies (I am adept at looking at the bigger picture, while at the same time I can pull out and analyse the important details of any problem) I could overcome the problem and lead my work to publishable results.
Communication: I have strong communication skills, both written and verbal. My academic career has taught me the importance of writing state-of-the-art reports and articles and presenting them to a wide cross-section of academics and industrial professionals, both at the University of Bradford and at conferences worldwide.

Selected Publications (for a full list of publications please visit my web page)
• "Comparative analysis of a twin-class M-QAM transmission system for wireless video applications", Theodorakopoulos V., Woodward M., Journal of Multimedia Tools and Applications, Special Issue: Wireless Multimedia, Vol. 28, Issue 1, Feb. 2006, pp. 125-139.
• "Uniform and Non-uniform Partitioned 64-QAM for Mobile Video Transmission", Theodorakopoulos V., Woodward M., Sotiropoulou K., 9th IASTED International Conference on Internet & Multimedia Systems & Applications (IMSA), Honolulu, USA, 2005.
• "Comparison of uniform and non-uniform M-QAM schemes for mobile video applications", Theodorakopoulos V., Woodward M., Sotiropoulou K., IEEE International Conference on Multimedia Communications Systems (ICMCS), Montreal, Canada, 2005.
• "A Dual Priority M-QAM Transmission System for High Quality Video over Mobile Channels", Theodorakopoulos V., Woodward M., Sotiropoulou K., IEEE First International Conference on Distributed Frameworks for Multimedia Applications (DFMA), Besançon, France, 2005.
• "Partitioned Quadrature Amplitude Modulation for Mobile Video Transmission", Theodorakopoulos V., Woodward M., Sotiropoulou K., IEEE Sixth International Symposium on Multimedia Software Engineering (ISMSE), Miami, USA, 2004.

Professional Activities
• Reviewer for the Institution of Engineering and Technology (IET) Proceedings in Communications.
• Member of the IEEE Communications Society, the IEEE Computer Society, and the IET.

References
Available upon request.

iDS-2CD7A46G0/P-IZHS(Y) 4 MP ANPR IR Varifocal Bullet Network Camera

iDS-2CD7A46G0/P-IZHS(Y) 4 MP ANPR IR Varifocal Bullet Network Camera

Specification (table row headings only; the values were lost in extraction): Camera: Image Sensor, Max. Resolution, Min. Illumination, Shutter Time, Day & Night; Lens: Focal Length & FOV, Focus, Iris Type, Aperture, DORI; Illuminator: Supplement Light Type, Supplement Light Range, Smart Supplement Light, IR Wavelength; Video: Main Stream; Image: Image Parameters Switch, Image Settings, Day/Night Switch, Wide Dynamic Range (WDR), SNR, Image Enhancement, Picture Overlay, Image Stabilization; Interface: Video Output, Ethernet Interface; Network Storage; Client; Web Browser; Security.

DORI:
• Wide: 2.8 to 12 mm lens: D: 60 m, O: 23.8 m, R: 12 m, I: 6 m; 8 to 32 mm lens: D: 150.3 m, O: 59.7 m, R: 30.1 m, I: 15 m
• Tele: 2.8 to 12 mm lens: D: 149 m, O: 59.1 m, R: 29.8 m, I: 14.9 m; 8 to 32 mm lens: D: 400 m, O: 158.7 m, R: 80 m, I: 40 m

Progressive Scan B/W CCD Camera Module Component/OEM Overview


Progressive Scan B/W CCD Camera Module — Component/OEM

OUTLINE

The XC-7500/XC-8500CE is a frame shutter camera that mounts a newly developed CCD. Square grid cells, most suitable for machine vision, are used for this CCD, which enables all pixels to be read. The resolution is equal in the vertical and horizontal directions; therefore, it is not necessary to correct the dimensions on the image processing side. The XC-7500 conforms to an EIA system of 659(H) × 494(V), and the XC-8500CE conforms to a higher-resolution CCIR system of 782(H) × 582(V). The XC-7500 and XC-8500CE enable a trigger frame shutter (E-DONPISHA) control function, a signal format conversion function, and a high-rate scanning function when connected to an optional memory adaptor (CMA-87). Moreover, still pictures of various high-speed moving objects can be read at a high resolution (in the horizontal and vertical directions).

FEATURES

XC-7500/XC-8500CE
• 1/2" Hyper HAD IT CCD (XC-7500: 659(H) × 494(V), EIA; XC-8500CE: 782(H) × 582(V), CCIR)
• Square pixels: 9.9 × 9.9 µm (XC-7500), 8.3 × 8.3 µm (XC-8500CE)
• Frame shutter
  – Normal: 1/60–1/10,000 s, 1/50–1/10,000 s, flickerless
  – E-DONPISHA: low-speed: ∞ to approx. 1/60 s, ∞ to approx. 1/50 s; normal-speed: 1/1,000–1/11,000 s, 1/1,000–1/10,000 s; high-speed: 1/10,000–1/100,000 s; external control: ∞–1/60–1/10,000 s, ∞–1/50–1/10,000 s
• Three mode outputs
  – Interlace (1/60 s, 1/50 s) – 2I mode
  – Non-interlace (1/60 s, 1/50 s) – 2N mode
  – Non-interlace (1/30 s, 1/25 s) – 1N mode
• Restart reset function (trigger input)

CMA-87
• E-DONPISHA (frame shutter) control
• Signal format conversion EIA/CCIR, VGA/SVGA
• High-rate scan – up to 4×

CCD OUTPUT WAVE TIMING CHART

INTERNAL SWITCH FUNCTIONS, E-DONPISHA
• Location: set the E-DONPISHA switch on the SG-235 board.

E-DONPISHA: This function accumulates electric charges with the external input trigger pulse as reference, places them on a continuous sync signal, and outputs a video signal. Objects that move at high speed are recognized using a sensor, and the image can be precisely shot at a fixed place. Normal speed, low speed, high speed, and external control speed are available as shutter speeds; the shutter operates in the range of ∞ to 1/100,000 s.

MTF, SPECTRAL RESPONSE (figures: spectral response; MTF)

VARIOUS SHUTTER FUNCTIONS
* In the case of the XC-7500, connection with the CMA-87 and the E-Tg mode are available after serial #500001.

CMA-87 Sports Mode: In the sports mode, a continuous picture at two times the normal speed (1/100 s) can be read as a 50-field (CCIR/PAL) output in combination with the XC-8500CE. Since 1/100 s is precisely what is required in the sports world, this mode can be used for video recording. Example: XC-8500CE: 220/64 lines at 2/4 times the normal speed; XC-7500: 180/48 lines at 2/4 times the normal speed.

One-Shot Memory of E-DONPISHA: This function controls the E-DONPISHA frame shutter by inputting a trigger pulse and memorizes the simultaneous timing signal output from the VIDEO OUT 1 and 2 terminals of the camera.

High-Rate Scanning: The image in a CCD can be read partially (in the vertical center portion) at high speed. This function is useful in fields where a trigger cycle shorter than one field is required (1.5, 2, 3, and 4 times the normal speed).

CONDITIONS & TIMING FOR SHUTTER FUNCTION
The sync signal at the VIDEO OUT 2 terminal is the same as that at the VIDEO OUT 1 terminal. In the 2I mode, O1, O2/E1, E2 signals can be continuously output by inputting a non-interlaced external sync signal (O = ODD field image, E = EVEN field image). In the 2N mode, the VIDEO OUT 1 and 2 terminals do not operate. Output images and shutter ranges are listed for the low-speed, normal-speed, high-speed and external control modes (values in parentheses: XC-8500CE). The E-DONPISHA mode can capture a one-shot image on an external trigger.

Write Enable Pulse (WEN): In the camera, the write enable pulse is output one vertical period (1V) before a video output signal is produced, or simultaneously with the video output signal. This pulse is used in combination with peripheral equipment.
• Non-reset mode (one-shot image / standard sync) (S7: factory setting)
• Reset mode (one-shot image / non-standard sync)

CONNECTIONS: trigger cable (*VIDEO 2 is not available); camera cables 6p-6p (JB-77) and 12p-12p (shielded); tripod attachment VCT-37.

REAR SWITCHES & CONNECTORS

C-mount lenses: VCL-08YM, VCL-12YM, VCL-16Y-M, VCL-25Y-M (F1.6), VCL-50Y-M (F2.8, 50 mm); 2/3" F3.2 45 mm (for XC-7500 use).

SPECIFICATIONS
* 1. In some lenses, the color shading peculiar to a prism block may occur. Therefore, use an XC-003 lens (VCL-08WM/16WM/25WM) or a lens with an exit pupil distance of more than 100 mm.
* 2. VBS and Y/C signals are used as a monochromatic video output signal during external synchronization.
* 3. VBS and Y/C signals are used as a monochromatic video output signal during external synchronization, but they can be color-monitored by changing the internal setting of the camera.
* 4. The internal sync restart reset mode and the E-DONPISHA reset mode cannot be externally synchronized using an HD/VD signal or a VS signal.

DIMENSIONS

Long-Time Exposure: Long-time exposure of up to 128 frames can be carried out in low-speed shutter mode via the on-screen menu. The XC-003/003P automatically calculates the integration time corresponding to the frame number on the menu and outputs continued frame images. Long-time exposure is also available with the restart reset function. For this function, 2 trigger pulses are required: one for starting the integration, the other for ending it. Field or frame output is available depending on the restart reset mode setting.

Contact:
Sony Electronics Inc. (USA) HQ, 1 Sony Drive, Park Ridge, NJ 07656 (TEL: +1-201-930-7451, FAX: +1-201-358-4401), /professional
Sony of Canada Ltd. (CANADA), 411 Gordon Baker Road, Willowdale, Ontario M2H 2S6 (TEL: +1-416-499-1414, FAX: +1-416-497-1774)
Sony Broadcast & Professional Europe HQ, 15, rue Floreal, 75831 Paris Cedex 17, France (TEL: +33-1-40-87-35-11, FAX: +33-1-40-87-35-17)
Germany: Hugo-Eckener-Str. 20, 50829 Koln (TEL: +49-221-5966-322, FAX: +49-221-5966-491)
France: 15, rue Floreal, 75831 Paris Cedex 17 (TEL: +33-1-49-45-41-62, FAX: +33-1-47-31-13-57)
UK: The Heights, Brooklands, Weybridge, Surrey KT13 0XW (TEL: +44-990-331122, FAX: +44-1932-817011)
Nordic: Per Albin Hanssons vag 20, S-214 32 Malmo, Sweden (TEL: +46-40-190-800, FAX: +46-40-190-450)
Italy: Via Galileo Galilei 40, I-20092 Cinisello Balsamo, Milano (TEL: +39-2-618-38-431, FAX: +39-2-618-38-402)
Sony Corp. B&P Systems Co. ISP Dpt. (JAPAN), 4-16-1 Okata, Atsugi-shi, Kanagawa-ken, 243-0021 (TEL: +81-462-27-2345, FAX: +81-462-27-2347), http://www.sony.co.jp/ISP/

Design and specifications are subject to change without notice.

COMPARISON WITH SONY XC-711

Layout Dependent Proximity Effects


Layout-Dependent Proximity Effects in Deep Nanoscale CMOS
John Faricelli

Acknowledgements
This work is the result of the combined effort of many people at AMD and GLOBALFOUNDRIES.
AMD: Alvin Loke, James Pattison, Greg Constant, Kalyana Kumar, Kevin Carrejo, Joe Meier, Yuri Apanovich, Victor Andrade, Bill Gardiol, Steve Hejl
GLOBALFOUNDRIES: Akif Sultan, Sushant Suryagandh, Hans VanMeer, Kaveri Mathur, Rasit Topologlu, Uwe Hahn, Thorsten Knopp, Sean Hannon, Darin Chan, Ali Icel, David Wu

Well proximity effect
• |VT| ↑ if a FET is too close to the resist edge, due to dopant ions scattering off the resist sidewall into the active area during the high-energy well implants
• |ΔVT| depends on:
  • FET channel distance to the well mask edge
  • Implanted ion species/energy
• Other effects: µ ↓, Leff ↑, Rextension ↑ → Idsat ↓
• Well mask symmetry is now critical for FET matching

Sources of stress
• Unintentional
  • Shallow trench isolation (nFET & pFET): compressive
• Intentional
  • Stress memorization (nFET)
  • Dual-stress liners (nFET & pFET): tensile & compressive
  • Embedded SiGe (pFET only): compressive

GAMING SYSTEMS, GAMING DEVICES AND METHODS WITH NON-COMPETITIVE PLAY AND OPTIONAL COMPETITIVE PLAY


Patent title: GAMING SYSTEMS, GAMING DEVICES AND METHODS WITH NON-COMPETITIVE PLAY AND OPTIONAL COMPETITIVE PLAY
Inventors: Cameron A. Filipour, Dwayne A. Davis
Application no.: US13679524
Filing date: 2012-11-16
Publication no.: US20130079072A1
Publication date: 2013-03-28
Applicant: IGT
Address: Reno, NV, US
Nationality: US

Abstract: In an embodiment, a gaming system includes a plurality of gaming devices and a controller configured to communicate with the gaming devices. The gaming system enables a plurality of players to play an interactive game in a non-competitive mode and in a competitive mode. If at least two players play the interactive game in the competitive mode, for a competitive wagering event, which includes a competition between two players, the gaming system determines a winning player and a losing player. The gaming system causes the winning player to contribute a winning player portion toward a wager associated with the competitive wagering event and causes the losing player to contribute a losing player portion toward the wager associated with the competitive wagering event. The losing player portion is less than the winning player portion. The gaming system randomly determines and provides any awards to the winning player based on the wager.

MPX-5C Pro High-Resolution Microscopy Camera Manual

MPX-5C PRO 5MP MICROSCOPY CAMERA
HIGH RESOLUTION MICROSCOPY CAMERA

SONY Professional CMOS Sensor
The MPX-5C Pro uses a Sony Pregius® 5MP 2/3" CMOS sensor (IMX264) with 3.45 x 3.45 µm pixels. The resolution of the captured image can reach 2448 x 2048, easily resolving fine details in samples.

Advanced Global Shutter Technology
Global shutters are ideal for capturing dynamic samples more accurately, avoiding the distortion of moving objects caused by non-synchronized pixel exposure. A must-have for fluorescence applications, and it provides faster operation with real-time stitching.

USB 3.0 High-Speed Transmission
The USB 3.0 super-speed transmission interface is simple, convenient, and ensures a stable, high data-transmission rate, allowing fast focusing at high resolution. Imaging can be performed at a rate of 35 fps.

Excellent Color Reproduction
The MPX-5C Pro's core ISP color-interpolation algorithm effectively simulates the human eye's sensitivity to color. The colors in the image are true to the color seen in the eyepiece, whether it is a biological brightfield, stereo, or fluorescence image.

Feature-Rich CaptaVision+ Imaging Software
The innovative interface and workflow-based design redefines the image acquisition → editing → measurement → report output workflow, saving operating time and improving productivity.

The new Excelis MPX-5C Pro CMOS microscopy camera delivers exceptional performance in a compact, low-profile design. The revolutionary, feature-rich CaptaVision+ software provides real-time image stitching, real-time depth-of-field fusion, report generation and export, plus more.

CAMERA & SOFTWARE SPECIFICATIONS / CaptaVision+ Features
• Ultra-high-speed image transfer
• Innovative interface streamlines the image acquisition / editing / measurement / report output workflow
• Advanced noise reduction for fluorescence imaging
• Real-time stitching under a 10x lens
• Real-time depth-of-field fusion
• Outstanding color reproduction

Leica D-Lux 7 Camera Manual

LEICA D-LUX 7 Technical data

Camera: Leica D-Lux 7
Order no.: Leica D-Lux 7 silver: 19115 (E-Version), 19116 (U-Version), 19117 (TK-Version), 19118 (IN-Version); Leica D-Lux 7 black: 19140 (E-Version), 19141 (U-Version), 19142 (TK-Version)
Lens: Leica DC Vario-Summilux 10.9-34 f/1.7-2.8 ASPH., 35 mm camera equivalent: 24-75 mm, aperture range: 1.7-16 / 2.8-16 (at 10.9 / 34 mm)
Optical image stabilization: optical compensation system
Digital zoom: max. 4x
Focusing range (AF): 0.5 m to ∞
Focusing range (AF Macro / MF / Snapshot modes / motion pictures): maximum wide-angle setting: 3 cm to ∞; maximum telephoto setting: 30 cm to ∞
Image sensor: 4/3" MOS sensor, total pixel number: 21,770,000; effective pixels: 17,000,000; primary color filter
Minimum illuminance: approx. 5 lx (when i-Low light is used, the shutter speed is 1/30 s)
Shutter system: electronically and mechanically controlled
Shutter speeds (still pictures): T (max. approx. 30 min); 60 - 1/4000 s (mechanical shutter); 1 - 1/16000 s (electronic shutter function)
Shutter speeds (motion pictures): 1/25 - 1/16000 s (when [4K/100M/24p] is set in [Rec Quality]); 1/2 - 1/16000 s (when Manual Exposure Mode is set and [MF] is selected); 1/30 - 1/16000 s (other than the above)
Continuous recordable time: [FHD]: 29 minutes; [4K]: 15 minutes
Continuous series exposure frequency: electronic/mechanical shutter: 2 fps (L) / 7 fps (M) / 11 fps (H)
Number of serially recordable pictures: with RAW files: 32 or more*; without RAW files: 100 or more* (*based on CIPA standards and a card with a fast read/write speed)
Exposure control modes: Program (P), Aperture-priority (A), Shutter-priority (S), Manual setting (M)
Exposure compensation: ±5 EV in 1/3 EV steps (±3 EV dial setting range)
Exposure metering modes: multi-zone, center-weighted, spot
Recording file formats (still pictures): RAW/JPEG (based on "Design rule for Camera File system" and on the "Exif 2.31" standard)
Motion pictures (with audio) [MP4]: 3840x2160/30p (100 Mbit/s), 3840x2160/24p (100 Mbit/s), 1920x1080/60p (28 Mbit/s), 1920x1080/30p (20 Mbit/s), 1280x720/30p (10 Mbit/s)
Audio recording format: AAC (stereo)
Monitor: 3.0" TFT LCD, resolution approx. 1,240,000 dots, field of view approx. 100%, aspect ratio 3:2, touch-screen functionality
Viewfinder: 0.38" LCD viewfinder, resolution approx. 2,760,000 dots, field of view approx. 100%, aspect ratio 16:9, diopter adjustment -4 to +3 diopters, magnification approx. 0.7x (35 mm camera equivalent), eye sensor
Flash: CF D external flash unit (included in scope of delivery)
  Attachment: in the camera's hot shoe
  Guide number: 10 / 7 (with ISO 200 / 100)
  Flash range (with ISO AUTO and no ISO limit set): approx. 0.6 to 14.1 m / 0.3 to 8.5 m (at shortest/longest focal length)
  Illumination angle: matched to cover the lens' shortest focal length of 10.9 mm
  Flash modes (set on camera): AUTO, AUTO/Red-Eye Reduction, ON, ON/Red-Eye Reduction, Slow Sync., Slow Sync./Red-Eye Reduction, OFF
  Dimensions (W x H x D): approx. 31 x 41.5 x 30 mm
  Weight: approx. 25 g / 0.05 lb
Microphones: stereo
Speaker: monaural
Recording media: SD / SDHC* / SDXC* memory cards (*UHS-I / UHS Speed Class 3)
Wi-Fi: compliance standard IEEE 802.11b/g/n (standard wireless LAN protocol); frequency range used (central frequency): 2412-2462 MHz (channels 1 to 11); maximum output power: 13 dBm (EIRP); encryption method: Wi-Fi compliant WPA™ / WPA2™; access method: infrastructure mode
Bluetooth function: compliance standard Bluetooth Ver. 4.2 (Bluetooth Low Energy (BLE)); frequency range used (central frequency): 2402 to 2480 MHz; maximum output power: 10 dBm (EIRP)
Operating temperature/humidity: 0-40°C (32-104°F) / 10-80% RH
Power consumption: 2.1 W / 2.8 W (when recording with monitor/viewfinder); 1.7 W / 1.9 W (when playing back with monitor/viewfinder)
Terminals / interfaces: [HDMI]: Micro HDMI Type D; [USB/CHARGE]: USB 2.0 (High Speed) Micro-B
Dimensions (W x H x D): approx. 118 x 66 x 64 mm
Weight: approx. 403 g / 14.2 oz (361 g / 12.7 oz)

Explosion-Proof Dome Network Camera Quick Start Guide

©2021 Hangzhou Hikvision Digital Technology Co., Ltd. All rights reserved.

About this Manual
The Manual includes instructions for using and managing the Product. Pictures, charts, images and all other information hereinafter are for description and explanation only. The information contained in the Manual is subject to change, without notice, due to firmware updates or other reasons. Please find the latest version of this Manual at the Hikvision website (https:///). Please use this Manual with the guidance and assistance of professionals trained in supporting the Product.

Trademarks
Hikvision and other Hikvision trademarks and logos are the properties of Hikvision in various jurisdictions. Other trademarks and logos mentioned are the properties of their respective owners.

Disclaimer
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THIS MANUAL AND THE PRODUCT DESCRIBED, WITH ITS HARDWARE, SOFTWARE AND FIRMWARE, ARE PROVIDED "AS IS" AND "WITH ALL FAULTS AND ERRORS". HIKVISION MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION, MERCHANTABILITY, SATISFACTORY QUALITY, OR FITNESS FOR A PARTICULAR PURPOSE. THE USE OF THE PRODUCT BY YOU IS AT YOUR OWN RISK. IN NO EVENT WILL HIKVISION BE LIABLE TO YOU FOR ANY SPECIAL, CONSEQUENTIAL, INCIDENTAL, OR INDIRECT DAMAGES, INCLUDING, AMONG OTHERS, DAMAGES FOR LOSS OF BUSINESS PROFITS, BUSINESS INTERRUPTION, OR LOSS OF DATA, CORRUPTION OF SYSTEMS, OR LOSS OF DOCUMENTATION, WHETHER BASED ON BREACH OF CONTRACT, TORT (INCLUDING NEGLIGENCE), PRODUCT LIABILITY, OR OTHERWISE, IN CONNECTION WITH THE USE OF THE PRODUCT, EVEN IF HIKVISION HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES OR LOSS. YOU ACKNOWLEDGE THAT THE NATURE OF THE INTERNET PROVIDES FOR INHERENT SECURITY RISKS, AND HIKVISION SHALL NOT TAKE ANY RESPONSIBILITIES FOR ABNORMAL OPERATION, PRIVACY LEAKAGE OR OTHER DAMAGES RESULTING FROM CYBER-ATTACK, HACKER ATTACK, VIRUS INFECTION, OR OTHER INTERNET SECURITY RISKS; HOWEVER, HIKVISION WILL PROVIDE TIMELY TECHNICAL SUPPORT IF REQUIRED. YOU AGREE TO USE THIS PRODUCT IN COMPLIANCE WITH ALL APPLICABLE LAWS, AND YOU ARE SOLELY RESPONSIBLE FOR ENSURING THAT YOUR USE CONFORMS TO THE APPLICABLE LAW. ESPECIALLY, YOU ARE RESPONSIBLE FOR USING THIS PRODUCT IN A MANNER THAT DOES NOT INFRINGE ON THE RIGHTS OF THIRD PARTIES, INCLUDING WITHOUT LIMITATION, RIGHTS OF PUBLICITY, INTELLECTUAL PROPERTY RIGHTS, OR DATA PROTECTION AND OTHER PRIVACY RIGHTS. YOU SHALL NOT USE THIS PRODUCT FOR ANY PROHIBITED END-USES, INCLUDING THE DEVELOPMENT OR PRODUCTION OF WEAPONS OF MASS DESTRUCTION, THE DEVELOPMENT OR PRODUCTION OF CHEMICAL OR BIOLOGICAL WEAPONS, ANY ACTIVITIES IN THE CONTEXT RELATED TO ANY NUCLEAR EXPLOSIVE OR UNSAFE NUCLEAR FUEL-CYCLE, OR IN SUPPORT OF HUMAN RIGHTS ABUSES. IN THE EVENT OF ANY CONFLICTS BETWEEN THIS MANUAL AND THE APPLICABLE LAW, THE LATTER PREVAILS.

Regulatory Information

FCC Information
Please note that changes or modifications not expressly approved by the party responsible for compliance could void the user's authority to operate the equipment.
FCC compliance: This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment.
This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case the user will be required to correct the interference at his own expense.

FCC Conditions
This device complies with part 15 of the FCC Rules. Operation is subject to the following two conditions:
1. This device may not cause harmful interference.
2. This device must accept any interference received, including interference that may cause undesired operation.

EU Conformity Statement
This product and, if applicable, the supplied accessories too are marked with "CE" and comply therefore with the applicable harmonized European standards listed under the EMC Directive 2014/30/EU, the RoHS Directive 2011/65/EU, and the ATEX Directive 2014/34/EU.
2012/19/EU (WEEE directive): Products marked with this symbol cannot be disposed of as unsorted municipal waste in the European Union. For proper recycling, return this product to your local supplier upon the purchase of equivalent new equipment, or dispose of it at designated collection points. For more information see:
2006/66/EC (battery directive): This product contains a battery that cannot be disposed of as unsorted municipal waste in the European Union. See the product documentation for specific battery information. The battery is marked with this symbol, which may include lettering to indicate cadmium (Cd), lead (Pb), or mercury (Hg). For proper recycling, return the battery to your supplier or to a designated collection point. For more information see:

Intended use of the camera
ATEX: II 2 G D Ex db IIC T6 Gb / Ex tb IIIC T80°C Db IP68
IECEx: Ex db IIC T6 Gb / Ex tb IIIC T80°C Db IP68
Hazardous area classification: Zone 1, Zone 2, Zone 21, Zone 22.
IP degree: IP68 (2 m, 2 h)
Ex standards: IEC 60079-0: 2011; EN 60079-0: 2012/A11: 2013; IEC 60079-1: 2014; EN 60079-1: 2014; IEC 60079-31: 2013; EN 60079-31: 2014
Nameplate: [product nameplate image]

Special Conditions for Safe Use:
1. Ambient temperature: -40°C to +60°C.
2. DO NOT OPEN WHEN ENERGIZED.
3. POTENTIAL ELECTROSTATIC CHARGING HAZARD - SEE INSTRUCTIONS.
4. During assembly, operation and maintenance, the operator must follow the requirements of IEC 60079-14 (latest version, Explosive atmospheres - Part 14: Electrical installations design, selection and erection) or its national equivalent, in addition to the manufacturer's operating instructions.
5. Repair and overhaul shall comply with IEC 60079-19 (latest version) or its national equivalent.

Industry Canada ICES-003 Compliance
This device meets the CAN ICES-3 (A)/NMB-3(A) standards requirements.
Warning: This is a class A product. In a domestic environment this product may cause radio interference, in which case the user may be required to take adequate measures.

Safety Instruction
These instructions are intended to ensure that the user can use the product correctly to avoid danger or property loss. The precaution measures are divided into "Warnings" and "Cautions":
Warnings: Serious injury or death may occur if any of the warnings are neglected.
Cautions: Injury or equipment damage may occur if any of the cautions are neglected.
●CAUTION: Hot parts! Burned fingers may result when handling the parts. Wait one-half hour after switching off before handling parts. This sticker indicates that the marked item can be hot and should not be touched without taking care.
For device with this sticker, this device is intended for installation in a restricted access location, access can only be gained by service persons or by users who have been instructed about the reasons for the restrictions applied to the location and about any precautions that shall be taken.●Grounding:The both internal and external earthing shall be connectedreliably.Ground wire cross-sectional area of not less than the phaseconnector cross-sectional area level, at least 4 mm2.●All the electrical operation should be strictly compliance withthe electrical safety regulations, fire prevention regulations and other related regulations of the nation and region.●Make sure that the power has been disconnected before youwire, install or disassemble the camera. Never wire, install or disassemble the camera in explosive environment.●The power source should meet limited power source or PS2requirements according to IEC 60950-1 or IEC 62368-1standard.●Do not connect several devices to one power adapter asadapter overload may cause over-heating or a fire hazard.●To avoid fire danger caused by electrostatic charge, never touchor wipe the camera in explosive environment. Perform thewiping and replacing accessories only under non-explosiveenvironment with the provided glove.●When the camera is installed on wall or ceiling, the device shallbe firmly fixed.●If smoke, odors or noise rise from the camera, turn off thepower at once and unplug the power cable, and then contact the service center.●If the camera does not work properly, contact your dealer orthe nearest service center. Never attempt to disassemble the speed dome yourself. (We shall not assume any responsibility for problems caused by unauthorized repair or maintenance.)●Make sure the power supply voltage is correct before using thecamera.●Do not drop the camera or subject it to physical shock.●To ensure explosion-proof performance, do not damageexplosion-proof surface.●Do not touch sensor modules with fingers. If cleaning isnecessary, use clean cloth with a bit of ethanol and wipe itgently. If the camera will not be used for an extended period, please replace the lens cap to protect the sensor from dirt.●Do not aim the camera at the sun or extra bright places.Blooming or smearing may occur otherwise (which is not amalfunction), and affect the endurance of sensor at the same time.●The sensor may be burned out by a laser beam, so when anylaser equipment is in using, make sure that the surface ofsensor will not be exposed to the laser beam.●Do not place the camera in extremely hot, cold (the operatingtemperature shall be -40°C to +60°C), dusty or damp locations, and do not expose it to high electromagnetic radiation.●To avoid heat accumulation, good ventilation is required foroperating environment.●To prevent accumulation of electrostatic charge, only dampcloth can be used during the cleaning.●Keep the camera away from liquid while in use.●While in delivery, the camera shall be packed in its originalpacking, or packing of the same texture.●Regular part replacement: a few parts (e.g. electrolyticcapacitor) of the equipment shall be replaced regularlyaccording to their average enduring time. The average timevaries because of differences between operating environment and using history, so regular checking is recommended for all the users. Please contact with your dealer for more details.●Improper use or replacement of the battery may result inhazard of explosion. Replace with the same or equivalent type only. 
Dispose of used batteries according to the instructions provided by the battery manufacturer.
●CAUTION: Risk of explosion if the battery is replaced by an incorrect type. Dispose of used batteries according to the instructions.
●Improper replacement of the battery with an incorrect type may defeat a safeguard (for example, in the case of some lithium battery types).
●Do not dispose of the battery into fire or a hot oven, or mechanically crush or cut the battery, which may result in an explosion.
●Do not leave the battery in an extremely high temperature surrounding environment, which may result in an explosion or the leakage of flammable liquid or gas.
●Do not subject the battery to extremely low air pressure, which may result in an explosion or the leakage of flammable liquid or gas.
●If the product does not work properly, please contact your dealer or the nearest service center. Never attempt to disassemble the camera yourself. (We shall not assume any responsibility for problems caused by unauthorized repair or maintenance.)

Table of Contents
1 Introduction
1.1 Overview
1.2 Model Description
2 Appearance Description
2.1 Overview
2.2 Cable Description
3 Installation
3.1 Wall Mounting
3.2 Pendant Mounting
4 Activate and Access Network Camera

1 Introduction
1.1 Overview
The explosion-proof network camera is a video security product capable of smart encoding and network transmission. It adopts an embedded system and a high-performance hardware processing platform to achieve good stability and reliability. You can visit and configure your camera via web browser and client software. The explosion-proof network camera adopts a stainless steel enclosure, receiving an IP68 rating for ingress protection. Application scenarios: oil industry, mine fields, chemical industry, ports, grain processing industry, etc.

1.2 Model Description
This manual is applicable to the following models (Table 1-1 Applicable Model List): DS-2XE6126F-HS.
The model code breaks down as follows (Figure 1-1 Model Explanation):
• DS: Hikvision front-end product logo
• XE: for specialized application (professional products)
• Product type (6): professional products
• Product shape (1): dome camera
• Max. resolution digit (0-9/A-Z, nothing to do with explosion-proof properties): 0: WD1, 1: 1.3 MP, 2: 2.0 MP, 3: 3.0 MP, 4: 4.0 MP, 5: 5.0 MP, 6: 6.0 MP, 8: 8.0 MP, 9: 9.0 MP, etc.
• Hardware performance digit (0-9, nothing to do with explosion-proof properties): 0: hardware version V-0, 1: V-1, 2: V-2, 3: V-3, 4: V-4, 5: V-5, 6: V-6, 7: V-7, etc.
• On-board storage logo (F/blank, nothing to do with explosion-proof properties): F: supports on-board storage; blank: does not support on-board storage
• Interface logo (S/blank): S: audio I/O, alarm I/O, RS-485 interface; blank: without audio I/O, alarm I/O, RS-485 interface
• Wide-dynamic function logo (WD/blank, nothing to do with explosion-proof properties): WD: supports 120 dB wide dynamic range; blank: does not
• Heater logo (H/blank): H: has heater; blank: without heater

2 Appearance Description
2.1 Overview
[Figure 2-1 Explosion-Proof Network Dome Camera Overview]
2.2 Cable Description
[Figure 2-2 Overview of Cables; Table 2-1 Description of Cables]

3 Installation
Before you start:
●Make sure the device in the package is in good condition and all the assembly parts are included.
●The standard power supply is PoE or 12 VDC. Please make sure your power supply matches your camera.
●Make sure all the related equipment is powered off during the installation.
●Check the specification of the products for the installation environment.
●Make sure that the wall is strong enough to withstand four times the weight of the camera and the bracket.
For a camera that supports IR, you are required to pay attention to the following precautions to prevent IR reflection:
●Dust or grease on the dome cover will cause IR reflection. Please do not remove the dome cover film until the installation is finished. If there is dust or grease on the dome cover, clean the dome cover with a clean soft cloth and isopropyl alcohol.
●Make sure that there is no reflective surface too close to the camera lens. The IR light from the camera may reflect back into the lens, causing reflection.
●The foam ring around the lens must be seated flush against the inner surface of the bubble to isolate the lens from the IR LEDs. Fasten the dome cover to the camera body so that the foam ring and the dome cover are attached seamlessly.

3.1 Wall Mounting
Before you start:
There is no wall mounting bracket included in the package. You have to prepare a wall mounting bracket if you choose this mounting type. The bracket shown below is only for demonstration.
[Figure 3-1 DS-1695ZJ Wall Mount]
Steps:
1. Fix the dome camera to the wall mount with four screws (SC-PSFM3 × 8-SUS).
[Figure 3-2 Fix Camera to Wall Mount]
2. Mark the 3 screw holes on the desired mounting surface according to the wall mount.
3. Drill screw holes (Ø10.5 mm, 0.413 inch) for expansion bolts.
4. Secure the wall mount to the wall (expansion bolt, nut, flat washer, spring washer).
[Figure 3-3 Secure Wall Mount]

3.2 Pendant Mounting
Before you start:
There is no pendant mounting bracket included in the package. You have to prepare a pendant mounting bracket if you choose this mounting type.
The bracket shown below is only for demonstration.
[Figure 3-4 DS-1694ZJ Pendant Mount]
Steps:
1. Fix the dome camera to the pendant mount with four screws (SC-PSFM3 × 8-SUS).
[Figure 3-5 Fix Camera to Pendant Mount]
2. Mark the 3 screw holes on the desired mounting surface according to the pendant mount.
3. Drill screw holes (Ø10.5 mm, 0.413 inch) for expansion bolts.
4. Secure the pendant mount to the ceiling.
[Figure 3-6 Secure Pendant Mount]

4 Activate and Access Network Camera
Scan the QR code to get the "Activate and Access Network Camera" guide. Note that mobile data charges may apply if Wi-Fi is unavailable.

Simmon Omega Pro-Lab 0-6 Enlarger Manual
UP TO 40% GREATER MAGNIFICATION
Extra-long girders and an oversized 18" x 34" baseboard permit up to 40% greater magnification directly on the baseboard.
ZOOM CONDENSERS
A single knob adjustment of the three-element condenser system matches its focal length to that of the enlarging lens, resulting in even light distribution. This eliminates handling as well as any need for accessory condensers, giving more efficiency, convenience and speed.
DOUBLE-EXTENSION BELLOWS
An adjustable double-extension bellows extends the focusing range of all lenses and permits reduction printing without extension attachments. Eliminates need for lens cones.
The highly convenient and time saving new features of this enlarger are:
A triple lens turret for almost instantaneous change from one focal-length enlarging lens to another; a condenser lamphouse equipped with a 3-element zoom condenser system for rapid adaptation of its focal length to that of the enlarging lens for optimum uniformity of illumination; an adjustable, detachable masking device for easy composition; a twin-track, double-extension bellows making lens cones or auxiliary focusing attachments for reductions unnecessary; and extra-long girders for giant enlargements.

VIVOTEK SC8131 Stereo Counting Camera Product Overview

SC8131 Stereo Network Camera

VIVOTEK's SC8131 is a stereo counting camera, armed with VIVOTEK's 3D Depth Technology and video surveillance functionality, providing 2-Megapixel real-time precise tracking video and counting accuracy of up to 98%. The stereo camera generates data such as people counts and flow-path tracking that can be applied to in-store layout improvement, promotional evaluation, staff planning, and the control of service times, providing business owners key metrics to make operating decisions effectively and increase ROI.

Mounted over a store entrance, the dual-lens camera enables stereo vision to accurately track the 3D positions of objects moving across the field of view. Adults or children, single individuals or groups, can be distinguished from non-human objects such as shopping carts and strollers, providing accurate counting analytics even at the busiest and most congested times. Furthermore, with seamless integration into VCA (video content analysis) reports in VIVOTEK's VAST, the metadata is displayed in comprehensive graphs and line charts, making the SC8131 ideal for retail analytics. The computation of height-disparity data is embedded in the stereo camera, instead of sending video streams to a dedicated computer running separate analytics software. The counting solution saves bandwidth and reduces the risk of data loss in the event of network or power disruption.

Features
• VIVOTEK's 3D Depth Technology
• High accuracy rate up to 98%
• Local storage of data for counting reports
• Easy installation and configuration
• Seamless counting with VAST CMS

Key capabilities (Precision / Video Security / Easy Installation / Efficiency):
• Up to 98% counting accuracy rate
• Real-time data (system/counting/daily reports; time/region reports)
• Video surveillance (viewing and recording); multiple video streaming; remote management
• Discreet ceiling mount; compatible with 4"x2" electrical box
• Object tracking path; filters out carts, children and strollers
• Bi-directional counting on a definable flow; detection of U-turns to avoid double counting
• Not influenced by shadows, reflections, glare or low-light conditions
A toy sketch of bi-directional line-crossing counting is given after the specifications.

Technical Specifications
Model: SC8131
CPU: Multimedia SoC (System-on-Chip)
Flash: 256 MB; RAM: 512 MB
Image sensor: 1/3" progressive CMOS
Maximum frame rate: 15 fps @ 2560x960
On-board storage: MicroSD/SDHC/SDXC card slot
Video compression: H.264 & MJPEG
Maximum streams: 3 simultaneous streams
Report format: JSON/XML/CSV
Connectors: RJ-45 for network/PoE connection; DI/DO; USB 2.0 (only as a power bank, not for data transmission); MicroSD slot
LED indicator: system power and status indicator
Power input: IEEE 802.3af PoE Class 3
Power consumption: PoE: max. 12.95 W; USB: max. 300 mA

All specifications are subject to change without notice.
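The sketch below illustrates the counting rules listed above (bi-directional counting on a definable flow, with U-turns netting to zero). It is an illustrative toy, not VIVOTEK's algorithm; track extraction from the stereo depth map is assumed to happen elsewhere.

```python
# Minimal sketch of bi-directional line-crossing counting with U-turn
# suppression. A track is a sequence of (x, y) positions over time; a
# track that crosses the counting line and then crosses back nets to
# zero, so U-turns do not inflate the totals.

def count_crossings(track, line_y=0.0):
    """Return (entries, exits) for one track crossing a horizontal line."""
    ins = outs = 0
    for (_, y0), (_, y1) in zip(track, track[1:]):
        if y0 < line_y <= y1:
            ins += 1          # crossed in the "enter" direction
        elif y1 < line_y <= y0:
            outs += 1         # crossed in the "exit" direction
    net = ins - outs          # net count per track cancels U-turns
    return (net, 0) if net > 0 else (0, -net)

# Example: a shopper steps in, turns around, and leaves -> counts (0, 0).
u_turn_track = [(0, -1.0), (0, 0.5), (0, -0.8)]
print(count_crossings(u_turn_track))  # (0, 0)
```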

8MP Sony IMX219 USB Camera Module Manual

8MP Sony IMX219 (SKU: B0196) USB Camera Module QUICK START GUIDE

INTRODUCTION
• About Arducam: Arducam has been a professional designer and manufacturer of SPI, MIPI, DVP and USB cameras since 2012. We also offer customized turnkey design and manufacturing solution services for customers who want their products to be unique.
• About this USB Camera: The B0196 is a new member of Arducam's USB camera family. It's an 8MP, UVC-compliant, USB 2.0 camera based on the 1/4" Sony IMX219 image sensor; you can learn more about its specs in the next chapter. Arducam also provides a sample application that demonstrates some features of this camera.
• About UVC: The B0196 is a UVC-compliant camera. The native UVC drivers of Windows, Linux and Mac are compatible with this camera, so it does not require extra drivers to be installed.
• About Customer Service: If you need our help or want to customize other models of USB cameras, feel free to contact us at *******************

QUICK START
• How to download the program: Download the app AMCap from the following link: https:///downloads/app/AMCap.exe
  NOTE: If used with an Android device, the USB Camera APP and a connecting adapter are needed. For Mac OS, please open the native software FaceTime and select the video camera "USB Camera".
• How to connect the camera: Connect one end of the USB 2.0 cable to the USB 2.0 connector provided on the back of the B0196, and connect the other end to a USB 2.0 host port on the computer.

HOW TO USE THE PROGRAM (WINDOWS DEMO ONLY)
The menu bar at the top of the shown image contains a few menu items, and the current preview resolution and frame rate are displayed on the bottom bar while the application is running. The following sections describe each of the menu items.
• Menu > Devices: This menu shows the video devices available to the host PC. The B0196 is named "USB Camera".
• Menu > Options: The Options menu can be used to select the preview and image parameters supported by this camera (Video Capture Pin; Video Capture Filter -> Video Proc Amp / Camera Control).
• Menu > Capture: The Capture menu is used to capture still images and video using this application. You can also select the related parameters.
Since the camera is UVC-compliant, it can also be driven programmatically, as sketched below.
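Because the B0196 is UVC-compliant, any UVC-aware library can grab frames from it. The snippet below is a minimal sketch using OpenCV, which is an assumption on our part rather than part of Arducam's guide; the device index 0 and the requested resolution may differ on your machine.

```python
# Minimal UVC capture sketch for the B0196 using OpenCV (assumed library).
import cv2

cap = cv2.VideoCapture(0)  # open the first UVC video device
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3280)   # request full 8MP IMX219 width
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 2464)  # request full 8MP IMX219 height

ok, frame = cap.read()                    # grab one frame
if ok:
    cv2.imwrite("b0196_snapshot.jpg", frame)  # save a still image
cap.release()
```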

泊松斑成像特点及应用初探

泊松斑成像特点及应用初探

泊松斑成像特点及应用初探
王恩宏;胡以华;王迪;张发强
【期刊名称】《光电技术应用》
【年(卷),期】2007(22)2
【摘要】从泊松斑成像的基本原理和成像特点出发,分析了成像轴上光强的变化情况,研究了和主轴夹角为α的副轴光强与主轴光强的关系,给出了影响泊松斑亮度和宽度的因素,以及成像对光滑圆球选择的要求,并对泊松斑成像技术的应用前景进行了展望.
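As background for the on-axis intensity analysis mentioned in the abstract, the following is the standard Fresnel-diffraction result from textbooks, not a quotation from this paper: for plane-wave illumination of intensity $I_0$ on an opaque disk (or sphere) of radius $a$, the on-axis intensity at distance $z$ behind the obstacle is

$$ I(z) \;=\; I_0\,\frac{z^{2}}{z^{2}+a^{2}} \;\longrightarrow\; I_0 \quad (z \gg a), $$

so far behind a smooth obstacle the Poisson spot is essentially as bright as the unobstructed beam, which is why the smoothness of the sphere's edge matters so much in practice.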
Length: 4 pages (pp. 32-35)
Affiliations: 合肥电子工程学院, Hefei, Anhui 230037, China; 安徽省电子制约重点实验室, Hefei, Anhui 230037, China
Language: Chinese
CLC classification: O436.1


CAMEO-SIM: a physics-based broadband scene simulation tool for assessment of camouflage, concealment, and deception methodologies

Ian R. Moorhead, QinetiQ Ltd., Ively Road, Farnborough, Hampshire, GU14 0LX, United Kingdom
Marilyn A. Gilmore, Alex W. Houlbrook, Defence Science and Technology Laboratory, Ively Road, Farnborough, Hampshire, GU14 0LX, United Kingdom
David E. Oxford, QinetiQ Ltd., Ively Road, Farnborough, Hampshire, GU14 0LX, United Kingdom
David Filbee, Colin Stroud, George Hutchings, Albert Kirk, Hunting Engineering Ltd (HEL), Reddings Wood, Ampthill, Bedford, MK45 2HD, United Kingdom

Abstract. Assessment of camouflage, concealment, and deception (CCD) methodologies is not a trivial problem; conventionally the only method has been to carry out field trials, which are both expensive and subject to the vagaries of the weather. In recent years computing power has increased, such that there are now many research programs using synthetic environments for CCD assessments. Such an approach is attractive; the user has complete control over the environment parameters and many more scenarios can be investigated. The UK Ministry of Defence is currently developing a synthetic scene generation tool for assessing the effectiveness of air vehicle camouflage schemes. The software is sufficiently flexible to allow it to be used in a broader range of applications, including full CCD assessment. The synthetic scene simulation system (CAMEO-SIM) has been developed, as an extensible system, to provide imagery within the 0.4 to 14 µm spectral band with as high a physical fidelity as possible. It consists of a scene design tool, an image generator that incorporates both radiosity and ray-tracing processes, and an experimental trials tool. The scene design tool allows the user to develop a three-dimensional representation of the scenario of interest from a fixed viewpoint. Target(s) of interest can be placed anywhere within this 3-D representation and may be either static or moving. Different illumination conditions and effects of the atmosphere can be modeled, together with directional reflectance effects. The user has complete control over the level of fidelity of the final image. The output from the rendering tool is a sequence of radiance maps, which may be used by sensor models or for experimental trials in which observers carry out target acquisition tasks. The software also maintains an audit trail of all data selected to generate a particular image, both in terms of material properties used and the rendering options chosen. A range of verification tests has shown that the software computes the correct values for analytically tractable scenarios. Validation tests using simple scenes have also been undertaken. More complex validation tests using observer trials are planned. The current version of CAMEO-SIM and how its images are used for camouflage assessment is described. The verification and validation tests undertaken are discussed. In addition, example images will be used to demonstrate the significance of different effects, such as spectral rendering and shadows. Planned developments of CAMEO-SIM are also outlined. © 2001 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.1390298]

Subject terms: scene simulation; CCD assessment; camouflage; concealment; deception.

Paper ATA-15 received Feb. 16, 2001; revised manuscript received Mar. 18, 2001; accepted for publication Mar. 23, 2001.
Opt. Eng. 40(9), 1896-1905 (September 2001).

1 Introduction

Advances in synthetic image generation methods are now achieving very high levels of photorealism in the imagery that is produced. These methods find ready application in the games and entertainment industries, where, increasingly, sophisticated imagery is both required and expected. Within the military environment there is also a growing interest in using synthetic imagery as a method for assessing the benefits of new technologies such as camouflage systems.1 Field trials are expensive, subject to the vagaries of the weather, and cannot be used to design or assess new systems and technologies. By using synthetic imagery, however, not only is there considerably more control possible at a reduced cost, but also many more scenarios can be investigated than is possible using real equipment in the real world. There is, however, a fundamental difference between the requirements of the games and entertainment industry and the military user. The latter wishes to use the imagery to make quantitative predictions of the effects on performance of manipulations of objects in the image. Photorealism alone, therefore, is not sufficient. The military user needs a synthetic image generator able to correctly model physical interactions of electromagnetic radiation.

One of the more challenging areas for a scene simulation system is the design of camouflage. All camouflage is a compromise. It is required to match different backgrounds, in different wavebands and at different times of the year. The compromises made in the past were determined by subjective assessment of the visibility of a military asset when viewed against some relevant background. Typically, this assessment was carried out in the visible band only. However, sensors now operate throughout a large part of the electromagnetic spectrum, and it is likely that future camouflage will need to be effective across a correspondingly broad range of wavebands. In addition, new techniques and materials offer the potential of increased effectiveness of camouflage against sensor threats. Cost-effective and quantitatively correct assessment of these techniques and materials is essential for future system survivability.

As a result of these requirements, the UK has developed a physics-based, broadband, scene simulation toolset, which we have called CAMEO-SIM, and which enables the quantitative evaluation of both current and future camouflage. We provide an overview of the functionality of CAMEO-SIM and describe the verification and validation experiments that have been conducted. Section 2 provides an overview of the methods used by the CAMEO-SIM toolset and illustrates some of the physical effects that it can produce. Section 3 reviews the verification tests that have been carried out to date, Sec. 4 describes a part of the ongoing validation program, and Sec. 5 presents discussion and conclusions.

2 Overview of CAMEO-SIM

The goal of the CAMEO-SIM system is to produce synthetic, high resolution, physically accurate radiance images of target vehicles in operational scenarios, at any wavelength between 0.4 and 14 µm. Version 1 of CAMEO-SIM was designed to create a scene as viewed by a static sensor with either moving or static vehicles in the scene.2 Recent improvements to the system now mean that in CAMEO-SIM Version 2 a moving sensor can be modeled. This can be either open loop, where the movement path is predetermined, or closed loop, where the path is determined interactively by feedback from the sensor. Improving the control of the targets means that changes to the target, such as higher engine temperatures at increased speeds, or higher airframe temperatures due to increased speeds, or different shapes of the vehicle such as different wing configurations for aircraft during manoeuvre, can be simulated. CAMEO-SIM has two key elements that distinguish it from conventional ray-tracing packages. Firstly, there is a complete audit trail between the material properties used by the renderer and the final image. This means that it is possible to conduct carefully controlled parametric manipulations of scene properties. Secondly, because CAMEO-SIM is physics-based it is a predictive tool. That is, once a particular scenario has been created, it can then be used to predict target visibility under many alternative conditions. These might typically be diurnal variations, different weather conditions, or changes to atmospheric conditions. In addition, the same scene geometry may then be used, but different material properties can be assigned to create different times of the year. Traditional ray tracers rely on arbitrary parameter manipulation to simulate changes in the environment (e.g., change of atmosphere). CAMEO-SIM, on the other hand, incorporates these directly as a result of solving the underlying physical equations.

All geometric objects forming the synthetic environment are modeled using textured faceted structures. Texel values in these textures are mapped to real materials, which have measured physical properties associated with them, e.g., bidirectional reflectance, solar absorptivity, conductivity, and density. Each texel is then considered as a mixture of up to three different materials. Material properties are accessed by CAMEO-SIM from a set of databases. The bidirectional reflectance functions (BRDFs), however, are computed using an off-line parameterization of the raw data. This means that the physical properties of both soil and grass are modeled when using a grass texture. Complex objects are modeled as a number of polygons. This means that the three-dimensional effects of trees, including shadow effects, can be simulated. Spatial resolution can be set to any value, but spatial detail is dependent on the polygon count.

In the process of generating radiometrically accurate synthetic images of scenes, all synthetic scene generators are ultimately attempting to solve a form of the general rendering equation, which states that the radiance at wavelength λ leaving a point x, y in the scene in a direction (θ₀, φ₀) is given by:

$$ N_0(\lambda,x,y,\theta_0,\phi_0) = \varepsilon(\lambda,x,y,\theta_0,\phi_0)\,N_{bb}[\lambda,T(x,y)] + \int_{\cap} \rho_{bd}(\lambda,x,y,\theta_0,\phi_0,\theta_i,\phi_i)\,N_i(\lambda,x,y,\theta_i,\phi_i)\cos(\theta_i)\,d\omega_i , \qquad (1) $$

where N₀(λ,x,y,θ₀,φ₀) is the total radiance (W m⁻² sr⁻¹) in the direction (θ₀,φ₀) from the point x, y at wavelength λ; ε(λ,x,y,θ₀,φ₀) is the directional emittance in the direction (θ₀,φ₀) from the point x, y at wavelength λ; N_bb[λ,T(x,y)] is the blackbody radiance (W m⁻² sr⁻¹) at the temperature (T) of the point x, y at wavelength λ; ρ_bd(λ,x,y,θ₀,φ₀,θᵢ,φᵢ) is the bidirectional reflectance distribution function (sr⁻¹) of the material at the point x, y at wavelength λ; N_i(λ,x,y,θᵢ,φᵢ) is the radiance (W m⁻² sr⁻¹) incident at the point x, y from the direction (θᵢ,φᵢ) at wavelength λ; and ∫_∩ dωᵢ is the integral over the hemispherical solid angle subtended by the point x, y.
The importance of interactions and appropriate solution of the radiation transport equations in the camouflage assess-ment role is clearly highlighted in determining the surviv-ability of aircraft.This interaction feature can be extended,as is shown in Fig.2,which shows CAMEO-SIM predictions of self-illumination during a predicted countermeasure release.By solving the full radiation transport equation͓Eq.͑1͔͒,these important radiative interactions can be modeled in detail.CAMEO-SIM computes the radiance in user specified subbands for each pixel in the image.These subband radi-ance images can then be summed to produce an in-band radiance image.The software can display each subband ei-ther as a gray scale image or any three subbands as a false color image.CAMEO-SIM can also display visible band true-color imagery.This is done byfirst evaluating the spectral radiance image cube for a defined number of sub-bands between380and780nm.The spectral image cube is then converted into device independent color space,repre-sented by the tristimulus values X,Y,and Z,using the CIE 1931Colorimetric Standard Observer.3The monitor that is used for the image display is calibrated both in terms of luminance and phosphor radiance,allowing the X,Y,and Z values to be converted to R,G,B values.Variouslumi-Fig.1Simulation of an image of an aircraft in the3to5␮m band: (a)predicted appearance without scene interaction and(b)pre-dicted with sceneinteraction.Fig.2Simulation of an aircraft releasing a countermeasure(flare), illustrating how the CAMEO-SIM solution captures the self-illumination of the aircraft.1898Optical Engineering,Vol.40No.9,September2001nance transforms are employed to make best use of the limited CRT dynamic range.An example 20-subband im-age along with its true-color equivalent is displayed in Fig.3.The output from CAMEO-SIM must still be processed in some fashion to derive target visibility predictions.It therefore relies either on the use of observer trials or the input of the imagery into further models able to predict visual target conspicuity.Since it generates imagery,it is most suitable to use imaging vision models such as the Georgia Tech Vision ͑GTV ͒model,4rather than traditional parametric models such as ORACLE.53Analytical Verification TestsCAMEO-SIM Version 1.0is complete and is now undergo-ing verification and validation.A range of verification tests has been developed that exercises different elements of the high fidelity rendering equations implemented within CAMEO-SIM.All the tests have analytic solutions.Table 1summarizes the tests and the results obtained.A description of each test is given in the following sections.3.1Blackbody Radiance TestThe purpose of this test is to ensure that the blackbody radiance is calculated correctly.A one-meter-square uni-formly textured facet is created and the temperature of the facet set to a known value.The line of sight of the observer is centered and perpendicular to the facet.The radiance of a perfect blackbody is calculated and compared with the value computed within CAMEO-SIM.3.2Contrast in an Isothermal EnvironmentThe purpose of this test is to ensure that the correct radi-ance contrast is predicted for isothermal vacuum,radiomet-ric environments.The skyshine radiance terms are set to constant values.A one-meter-square surface is defined to be a perfect diffuse reflector and the line of sight of the ob-server is centered and perpendicular to the facet.The radi-ance of the square is calculated and compared with the value computed within 
CAMEO-SIM.3.3Calculation of Shadowing and BlockingBlocking is the rendering process that ensures parts of the object which are not visible to the observer due to obstruc-tion by another part,are correctly accounted for.Shadow-ing is the rendering process that ensures parts of the object do not reflect the point sources if they are obscured from it by other parts.This test has been designed to ensure that the blocking and shadowing algorithms are working accu-rately.The geometry for this test is shown in Fig.4,which shows two square plates with the lower plate 100%diffuse reflecting,the top plate black,and at 0K.The observer and sun are 45deg to the geometry.The radiance ͑N ͒of the illuminated pixels in the image is:N ϭQ ␳/␲,͑3͒where N is the radiance in W m Ϫ2sr Ϫ1,Q is the normal incident irradiance in W m Ϫ2,and ␳is the diffuse reflec-tance of the lower plate.The solar irradiance is set to a fixed value.The radiance of the shadowed,blocked,and irradiated areas is calculated and compared with the values computed within CAMEO-SIM.3.4Spectral CalculationsThe purpose of this test is to verify that the spectral inte-grations are being calculated accurately.To test this,a de-fined solar spectral irradiance is used to illuminate an arti-ficial spectral material being observed with a spectrally selective sensor.The spectral variation in the material properties,the light intensity,and the sensor response is defined.For the gen-eral case,the in-band reflected radiance between the upper and lower wavelengths is given by:N ϭ͵␭1␭2J ͑␭͒s 2cos ␪i .␪͑␭͒.␳Љ͑␭͒.d ␭,͑4͒where N ϭin-band radiance ͑W sr Ϫ1m Ϫ2͒,␪i ϭincidence angle between source and reflector ͑radians ͒,J (␭)ϭsource intensity ͑W sr Ϫ1͒at wavelength ␭,␪(␭)ϭsensor spectral response,␳Љ(␭)ϭspectral bidirectional reflectivity ͑sr Ϫ1͒,and s ϭdistance to the source ͑m ͒.Fig.3Illustration of the combination of subband images to create a true-color image.Upper part of the figure shows the sequence of subband images from 380through 780nm.These are combined in a weighted fashion using conventional colorimetric methods to pro-duce a true color image,which can then be displayed on a cali-brated color monitor.1899Optical Engineering,Vol.40No.9,September 20013.5Radiometric Calculation of Lighting EffectsThe purpose of this test is to verify that the radiometric effects of light sources are being accurately represented.The geometry of the test is shown in Fig.5͑a ͒,and a plot of computed and rendered radiance is shown in Fig.5͑b ͒,to-gether with the difference between the computed and ren-dered radiance.It must be noted that the analytical solution assumes radiant intensity is at the pixel’s center,but the image’s radiant intensity is supersampled across a pixel.This will introduce a small difference to the analytical so-lution.Table 1Summary of validation test results.All values are radiance (W m Ϫ2sr Ϫ1)unless otherwise stated.TestExpected resultCalculatedBlackbody radiance Blackbody Radiance ϭ42.89(8to 12.5␮m band)42.89Contrast in isothermal environmentCenter pixel radiance ϭ35.23(8to 12.5␮m band)35.23Shadowing and blockinga.Radiance of irradiated area ϭ5.1768a.5.1768b.Radiance of blocked area ϭ0.0b 0.0c.Radiance of shadowed area ϭ0.0(3to 5␮m band)c.0.0Spectral calculation Center pixel radiance (3to 5␮m band)ϭ1.491.49Radiometric calculation of lighting effectsRadiance variation:Center 0.31806Center:0.31831edge:0.0094239edge:0.0094248Directional emission Slope of radiance along centreline ϭ60.01W m Ϫ2pixel Ϫ159.932W m Ϫ2pixel Ϫ1Multiple materialassignment 
on a textureBlackbody radiance ϭ8.975Blackbody radiance ϭ8.975Gray body radiance ϭ4.4875(3to 5␮m band)Gray body radiance ϭ4.4875Bidirectional reflectivity Illuminated pixel radiance ϭ2.3 2.3Small target renderingIntegrated facet radiant intensity ϭ1.806W sr Ϫ1(3to 5␮m band)1.813W sr Ϫ1Fig.4Diagram showing the geometry used to verify the blocking and shadowing computations.The size of the upper shadowing plate is D and is distance D from the lower plate.The lower plate is 3ϫD in extent.1900Optical Engineering,Vol.40No.9,September 20013.6Directional Emission of Uniformly Textured andHeated Spheres This test verifies that the second pass renderer is accounting for the directional emissivity correctly when the object is nominated as having directional optical properties.Two uniformly textured spheres of 2m diam are set to a known temperature.For one of the spheres,the vertex normals are equal to the facet normal,and for the other an appropriate angle is chosen for generating the vertex normals.There-fore,in the test both flat faceting and vertex normal inter-polation in the second-pass renderer are tested.The varia-tion in pixel radiance from the center of the sphere to the outside edge should vary linearly ͑for the vertex normal interpolated sphere,and approximated with a stepped varia-tion for the flat facet sphere ͒.3.7Textured Heated Billboard for Testing MultipleMaterial Assignments on a Texture The purpose of this test is to ensure that textures that have been classified using multiple material associations and transparency are interpreted properly by CAMEO-SIM.Totest this aspect,a heated uniformly textured billboard with a transparent section is rendered.A 256ϫ256texture image containing two rectangles and a transparent region is cre-ated.One rectangle is classified as a blackbody perfect dif-fuser and set to a known temperature.The other rectangle is set to be a gray-body perfect diffuser at the same tempera-ture.3.8Bidirectional Reflectivity of Uniformly Texturedand Heated Spheres The purpose of this test was to verify that CAMEO-SIM is interpreting the bidirectional reflectance function correctly.To keep the solution to the bidirectional reflectance distri-bution function problem analytically tractable,a BRDF file was used that represents a gray semispecular retroreflecting BRDF such thatBRDF ϭ1cos ͑␪͒␪р30degBRDF ϭ0.0␪Ͼ30deg,where ␪is the angle of incidence.Two spheres are created:for one sphere the vertex normals are equal to the facet normal,and for the other sphere an appropriate angle is chosen for generating the vertex normals.The line of sight of the observer is set to view the spheres from above with the sun position above the observer3.9Small Target RenderingThe purpose of this test was to ensure that CAMEO-SIM is treating small targets to an acceptable accuracy ͑an essen-tial requirement for simulating potentially subpixel targets ͒.To test this requirement,a sphere identical to that used in the BRDF test is rendered against a simple uniform back-ground.The geometry of the test case is shown in Fig.6͑a ͒,and the image formed for this test case should be similar to that shown in Fig.6͑b ͒.4Validation4.1IssuesThe issues surrounding the validation of any piece of simu-lation software are often complex,and CAMEO-SIM is no exception.Furthermore,the fact that CAMEO-SIM aims to physically represent the real world,in many electromag-netic wavebands,adds considerably to the difficulties,since we still have neither the basic databases nor the necessary understanding of what 
constitutes the real world.6In addi-tion,since the whole purpose of CAMEO-SIM is to repre-sent scenarios that may not exist or are impossible to docu-ment,there may in fact be no equivalent real world,unlike the situation that exists within the image compression domain.7–9This can be illustrated at the simplest level by considering the geometry and culture that are used to de-scribe a scenario.It is possible to achieve an exactmatchFig.5(a)Diagram of the geometry used to verify the distribution of illumination from a light source and (b)graph of computed and pre-dicted radiance as a function of pixel position for the geometry in 5(a).1901Optical Engineering,Vol.40No.9,September 2001with the terrain geometry by using detailed map informa-tion,but it is impossible to achieve exactly the same geom-etry for the culture present in that terrain,e.g.,tree struc-ture.This means that validation methods that assume there is some real world database of measurements that can be directly compared with the output from the simulation can-not work.The validation processes that we are using are,as a result,more abstract and involve three separate ap-proaches.First,to use highly simplified scenarios that can be both synthesized within CAMEO-SIM and measured,second,to compare simulation and real world imagery,and finally,to examine observer performance with real and syn-thetic imagery.We only report initial results from the first two of these approaches to validation.The comparisons re-ported are made using radiometric data.Additional valida-tion is making use of a range of image metrics,such as higher order statistics to better understand these issues.4.2Simple Imagery ValidationA trial has been conducted involving imaging a simple ob-ject viewed against a uniform background.The object is a hollow metal step-like structure called CUBI.It is con-structed from 3-mm mild steel lined with 23-mm polysty-rene insulation and finished in a matte green paint,shown in Fig.7.It was mounted on a turntable,so that the aspect could be changed,and was positioned on a uniform area of concrete.A large area blackbody was positioned close to CUBI,so that it could be included in any imagery.Images were taken using an AGEMA™980imaging radiometer ͑3to 5and 8to 12␮m bands ͒at different times of the day under sunny and cloudy conditions.Calibrated visible band imagery was obtained with a Kodak DCS 420™digital camera.The surface temperatures of the concrete,brick wall,and different parts of CUBI were measured with a contact temperature probe.The same scene was rendered within CAMEO-SIM us-ing predicted temperatures and measured temperatures.CAMEO-SIM currently depends on another suite of models ͑MOSART and TERTEM ͒to provide a parameterized at-mosphere and to perform the heat transfer calculations for the materials used in the scene.MAT,the front end of this suite,has limited functionality.It will allow the use of 19standard atmosphere types,which can only be modified in terms of temperature,humidity,and wind speed by one standard deviation from their mean.A default option is available,which picks what it believes to be the most ap-propriate atmosphere,according to a user-defined latitude and longitude position.The apparent temperatures in the CAMEO-SIM images ͑8to 12␮m band ͒were then com-pared ͑see Fig.8͒.One condition has been analyzed so far and the pre-dicted temperatures were found to be different to those measured.Similar effects have been observed elsewhere.10Initial analysis of the imagery showed that the default op-tion 
was not producing a viable atmosphere or set of terrain temperatures.The main differences have been found to be due to incorrect definition of the thermal material properties and inaccurate atmospheric modeling,in particular solar irradiance levels.The sunny condition in the UK was not as sunny as that predicted using MOSART™͑there were in-termittent clouds ͒,and the overcast day was probably not as overcast as that modeled within MOSART.The measured temperatures were found to lie between the predictedtem-Fig.6(a)Geometry used to carry out the small target test and (b)a typical image produced by thetest.Fig.7Photograph of the metal step object (CUBI)used in the vali-dation experiments.Object is mounted on a rotation table standing on a concrete base.1902Optical Engineering,Vol.40No.9,September 2001。
