HST snapshot imaging of BL Lac objects

Joseph E. Pesce, Department of Astronomy, Pennsylvania State University, USA
Aldo Treves, University of Como, Italy

Abstract. Snapshot images of ∼100 BL Lac objects were obtained with WFPC2 on HST. Sources from various samples, in the redshift range 0.05 to 1.2, were observed and 61 were resolved (51 with known z). The high resolution and homogeneity of the images allow us to address the properties of the immediate environments of BL Lacs with unprecedented capability. Host galaxies of BL Lacs are luminous ellipticals (on average 1 mag brighter than L*) with little or no disturbed morphology. The nucleus, which is always well centered on the galaxy, contributes about half of the total luminosity of the object in the optical (R band), with the contribution ranging from 0.1 to 10. The undisturbed morphology suggests that the nuclear activity has a marginal effect on the overall properties of the hosts. Nonetheless, several examples of close companions have been detected. The luminosity distribution of the host galaxies is compared with that of a large sample of FR-I radio galaxies.

BL Lac objects are radio-loud AGN seen closely along the jet axis (see e.g. the review by Urry & Padovani 1995). The beaming properties of the jets suggest that low-luminosity radio galaxies are the corresponding misaligned population. Observations of the host galaxies are a direct probe of this unification hypothesis. The use of HST data has improved the capability to investigate the galaxies hosting nuclear activity, and a number of specific studies of nearby and intermediate-redshift AGN have been pursued (see e.g. Disney et al. 1995; Bahcall et al. 1997; Hooper, Impey & Foltz 1997; Malkan, Gorjian & Raymond 1998).
Cooperative Caching in Wireless Multimedia Sensor Nets

What's so special about WMSNs?
• [Ian Akyildiz: Dec '06] We have to rethink the computation-communication paradigm of traditional WSNs, which focused only on reducing energy consumption
• Variable channel capacity: the multi-hop nature of WMSNs implies that wireless link capacity depends on the interference level among nodes
• Multimedia in-network processing

Cooperative caching
• Multiple sensor nodes share and coordinate cached data to cut communication cost and exploit the aggregate cache space of cooperating sensors (a minimal lookup sketch follows after this list)
• Each sensor node has a moderate local storage capacity associated with it, e.g., a flash memory

The cache discovery protocol (1/2)
• Articulation nodes (in bridges), e.g., 3, 4, 7, 16, 18
• Nodes with large fanout, e.g., 14, 8, U
• Therefore: geodesic nodes
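The cooperative lookup idea above can be made concrete with a short sketch. This is an illustrative Python sketch only, assuming a local-then-neighbor-then-source lookup order; the Node class, its methods, and the FIFO eviction are hypothetical and are not part of any protocol named in these slides.

```python
# Minimal sketch of cooperative cache lookup in a WMSN.
# All names (Node, get, put, fetch_from_source) are hypothetical
# illustrations, not the cache discovery protocol itself.

class Node:
    def __init__(self, node_id, cache_capacity):
        self.node_id = node_id
        self.cache_capacity = cache_capacity
        self.cache = {}       # local flash storage: item_id -> data
        self.neighbors = []   # one-hop neighbors (list of Node)

    def get(self, item_id):
        # 1. Local hit: no radio traffic at all.
        if item_id in self.cache:
            return self.cache[item_id]
        # 2. Cooperative hit: ask one-hop neighbors (one cheap hop)
        #    before paying the multi-hop cost to the data source.
        for nb in self.neighbors:
            if item_id in nb.cache:
                data = nb.cache[item_id]
                self.put(item_id, data)
                return data
        # 3. Miss everywhere: fetch over the multi-hop path.
        data = self.fetch_from_source(item_id)
        self.put(item_id, data)
        return data

    def put(self, item_id, data):
        # Naive FIFO eviction when the moderate local store is full.
        if len(self.cache) >= self.cache_capacity:
            self.cache.pop(next(iter(self.cache)))
        self.cache[item_id] = data

    def fetch_from_source(self, item_id):
        return f"data-{item_id}"   # placeholder for the multi-hop fetch
```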
PIC Simulation of the Pierce Electron Gun

In this paper, the particle-in-cell (PIC) method is reviewed and summarized. The fundamentals of the PIC method are introduced, including the macro-particle model, the finite-size particle model, and the electrostatic model, which is used in this paper to simulate the electron gun system. The space charge is an important characteristic of high-density electron optics, so the accuracy of its distribution has a great effect on electron optics. The usual solutions of the space charge are as follows: ... The effects of the initial thermal velocity of cathode electrons on the formation and quality of the electron beam were studied, and the design of the grid-controlled electron gun was researched. ... High-current Electron Gun Series. The results were contrasted with results from MAGIC simulations.

Keywords: Pierce gun, particle-in-cell, electron optics, computer-aided design

1. This paper first reviews the particle simulation method, focusing on its basic principles, including the macro-particle model, the finite-size particle model, and the electrostatic model adopted in this work for the PIC simulation of the electron gun system. The characteristics and applications of the particle simulation codes MAGIC and MAFIA are also introduced.

2. The development of the Pierce electron gun is reviewed and its key design parameters are summarized. The flow of the emission current between the electrodes under the influence of space-charge effects and the initial thermal velocity of the emitted electrons is studied, and the 3/2-power law is corrected accordingly; the effect of the initial thermal velocity of cathode electrons on the formation and quality of the electron beam is analyzed, and ...
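The electrostatic PIC cycle summarized above (deposit macro-particle charge on a grid, solve the electrostatic field, gather it back to the particles, push the particles) can be sketched in a few lines. The following Python sketch is a minimal one-dimensional illustration in normalized units with periodic boundaries; it is not the simulation program described in this thesis, and the function name and parameters are hypothetical.

```python
import numpy as np

# Minimal 1D electrostatic PIC step in normalized units (illustrative
# sketch of the macro-particle / electrostatic model, not thesis code).

def pic_step(x, v, qm, L, ng, dt):
    dx = L / ng
    # 1. Deposit macro-particle charge on the grid (cloud-in-cell weights).
    rho = np.zeros(ng)
    g = x / dx
    i0 = np.floor(g).astype(int) % ng
    w1 = g - np.floor(g)
    np.add.at(rho, i0, 1.0 - w1)
    np.add.at(rho, (i0 + 1) % ng, w1)
    rho -= rho.mean()                    # neutralizing background charge
    # 2. Solve the periodic Poisson equation d2(phi)/dx2 = -rho by FFT.
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -d(phi)/dx
    # 3. Gather the field back to the particles (same CIC weights).
    Ep = E[i0] * (1.0 - w1) + E[(i0 + 1) % ng] * w1
    # 4. Push particles (leapfrog) and apply periodic boundaries.
    v = v + qm * Ep * dt
    x = (x + v * dt) % L
    return x, v
```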
A Fast and Accurate Plane Detection Algorithm for Large Noisy Point Clouds Using Filtered Normals and Voxel Growing

Jean-Emmanuel Deschaud, François Goulette
Mines ParisTech, CAOR - Centre de Robotique, Mathématiques et Systèmes, 60 Boulevard Saint-Michel, 75272 Paris Cedex 06
jean-emmanuel.deschaud@mines-paristech.fr, francois.goulette@mines-paristech.fr

Abstract

With the improvement of 3D scanners, we produce point clouds with more and more points, often exceeding millions of points. We therefore need a fast and accurate plane detection algorithm to reduce data size. In this article, we present a fast and accurate algorithm to detect planes in unorganized point clouds using filtered normals and voxel growing. Our work is based on a first step that estimates better normals at the data points, even in the presence of noise. In a second step, we compute a score of local planarity at each point. We then select the best local seed plane and, in a third step, start a fast and robust region growing by voxels that we call voxel growing. We have evaluated and tested our algorithm on different kinds of point clouds and compared its performance to other algorithms.

1. Introduction

With the growing availability of 3D scanners, we are now able to produce large datasets with millions of points. It is necessary to reduce data size, to decrease the noise and at the same time to increase the quality of the model. It is interesting to model planar regions of these point clouds by planes. In fact, plane detection is generally a first step of segmentation, but it can be used for many applications. It is useful in computer graphics to model the environment with basic geometry. It is used, for example, in modeling to detect building facades before classification. Robots do Simultaneous Localization and Mapping (SLAM) by detecting planes of the environment. In our laboratory, we wanted to detect small and large building planes in point clouds of urban environments with millions of points for modeling. As mentioned in [6], the accuracy of the plane detection is important for later steps of the modeling pipeline. We also want to be fast, to be able to process point clouds with millions of points. We present a novel algorithm based on region growing with improvements in normal estimation and in the growing process. Our method is generic enough to work on different kinds of data, like point clouds from fixed scanners or from Mobile Mapping Systems (MMS). We also aim at detecting building facades in urban point clouds or small planes like doors, even in very large data sets. Our input is an unorganized noisy point cloud and, with only three "intuitive" parameters, we generate a set of connected components of planar regions. We evaluate our method as well as explain and analyse the significance of each parameter.
2. Previous Works

Although there are many methods of segmentation in range images, as in [10] or [3], three have been thoroughly studied for 3D point clouds: region growing, the Hough transform from [14], and Random Sample Consensus (RANSAC) from [9].

The application of recognising structures in urban laser point clouds is frequent in the literature. Bauer in [4] and Boulaassal in [5] detect facades in dense 3D point clouds by a RANSAC algorithm. Vosselman in [23] reviews surface growing and 3D Hough transform techniques to detect geometric shapes. Tarsh-Kurdi in [22] detects roof planes in 3D building point clouds by comparing results of the Hough transform and the RANSAC algorithm. They found that RANSAC is more efficient than the first one. Chao Chen in [6] and Yu in [25] present segmentation algorithms in range images for the same application of detecting planar regions in an urban scene. The method in [6] is based on a region growing algorithm in range images and merges the results into one labelled 3D point cloud. [25] uses a method different from the three we have cited: they extract a hierarchical subdivision of the input image built like a graph, where leaf nodes represent planar regions.

There are also other methods, like Bayesian techniques. In [16] and [8], they obtain smoothed surfaces from noisy point clouds with objects modeled by probability distributions, and it seems possible to extend this idea to point cloud segmentation. But techniques based on Bayesian statistics need to optimize a global statistical model, and it is then difficult to process point clouds larger than one million points.

We present below an analysis of the two main methods used in the literature: RANSAC and region growing. The Hough transform algorithm is too time-consuming for our application. To compare the complexity of the algorithms, we take a point cloud of size $N$ with only one plane $P$ of size $n$. We suppose that we want to detect this plane $P$ and we define $n_{min}$, the minimum size of the planes we want to detect. The size of a plane is the area of the plane. If the data density is uniform in the point cloud, then the size of a plane can be specified by its number of points.

2.1. RANSAC

RANSAC is an algorithm initially developed by Fischler and Bolles in [9] that allows the fitting of models without trying all possibilities. RANSAC is based on the probability of detecting a model using the minimal set required to estimate the model. To detect a plane with RANSAC, we choose 3 random points (enough to estimate a plane). We compute the plane parameters with these 3 points. Then a score function is used to determine how good the model is for the remaining points. Usually, the score is the number of points belonging to the plane. With noise, a point belongs to a plane if the distance from the point to the plane is less than a parameter $\gamma$. In the end, we keep the plane with the best score. The probability of getting the plane in the first trial is $p = (n/N)^3$. Therefore the probability of getting it in $T$ trials is $p = 1 - (1 - (n/N)^3)^T$. Using equation (1) and supposing $n_{min}/N \ll 1$, we know the minimal number of trials $T_{min}$ needed to have a probability $p_t$ of getting planes of size at least $n_{min}$:

$$T_{min} = \frac{\log(1 - p_t)}{\log\left(1 - (n_{min}/N)^3\right)} \approx \log\!\left(\frac{1}{1 - p_t}\right)\left(\frac{N}{n_{min}}\right)^3. \tag{1}$$

For each trial, we test all data points to compute the score of a plane. The RANSAC algorithm complexity thus lies in $O(N (N/n_{min})^3)$ when $n_{min}/N \ll 1$, and $T_{min} \to 0$ when $n_{min} \to N$. RANSAC is therefore very efficient at detecting large planes in noisy point clouds, i.e. when the ratio $n_{min}/N$ is close to 1, but very slow at detecting small planes in large point clouds, i.e. when $n_{min}/N \ll 1$. After selecting the best model, another step is to extract the largest connected component of each plane.
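As a rough illustration of the RANSAC loop just described (3 random points, plane fit, inlier count within $\gamma$, with the number of trials taken from Eq. (1)), consider the following Python sketch. It is a minimal illustration under the stated assumptions, not the implementation evaluated in this paper.

```python
import numpy as np

# Minimal sketch of RANSAC plane detection as described above: draw 3
# random points, fit a plane, score by the number of points within
# distance gamma of it. Illustrative only.

def ransac_plane(points, gamma, n_min, p_t=0.99):
    N = len(points)
    # Number of trials from Eq. (1): T ~ log(1/(1-p_t)) * (N/n_min)^3
    T = int(np.ceil(np.log(1.0 / (1.0 - p_t)) * (N / n_min) ** 3))
    best_inliers, best_model = None, None
    for _ in range(T):
        p0, p1, p2 = points[np.random.choice(N, 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -np.dot(n, p0)          # plane model: n . x + d = 0
        dist = np.abs(points @ n + d)
        inliers = dist < gamma
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```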
Connected components means that the minimum distance between each point of the plane and the other points is smaller (in distance) than a fixed parameter.

Schnabel et al. [20] bring two optimizations to RANSAC: the point selection is done locally and the score function has been improved. An octree is first created from the point cloud. Points used to estimate the plane parameters are chosen locally at a random depth of the octree. The score function is also different from RANSAC: instead of testing all points for one model, they test only a random subset and find the score by interpolation. The algorithm complexity lies in $O(N r 4^d / n_{min})$, where $r$ is the number of random subsets for the score function and $d$ is the maximum octree depth. Their algorithm improves the plane detection speed, but its complexity lies in $O(N^2)$ and it becomes slow on large data sets. And again, we have to extract the largest connected component of each plane.

2.2. Region Growing

Region growing algorithms work well in range images, as in [18]. The principle of region growing is to start with a seed region and to grow it by neighborhood when the neighbors satisfy some conditions. In range images, we have the neighbors of each point with pixel coordinates. In the case of unorganized 3D data, there is no information about the neighborhood in the data structure. The most common method to compute neighbors in 3D is to build a Kd-tree to search the k nearest neighbors. The creation of a Kd-tree lies in $O(N \log N)$ and the search of the k nearest neighbors of one point lies in $O(\log N)$. The advantage of these region growing methods is that they are fast when there are many planes to extract, robust to noise, and extract the largest connected component immediately. But they only use the distance from point to plane to extract planes and, as we will see later, this is not accurate enough to detect correct planar regions.

Rabbani et al. [19] developed a method of smooth area detection that can be used for plane detection. They first estimate the normal of each point as in [13]. The point with the minimum residual starts the region growing. They test the k nearest neighbors of the last point added: if the angle between the normal of the point and the current normal of the plane is smaller than a parameter $\alpha$, then they add this point to the smooth region. With a Kd-tree for the k nearest neighbors, the algorithm complexity is in $O(N + n \log N)$.
The complexity seems to be low, but in the worst case, when $n/N \approx 1$ (for example for facade detection in point clouds), the complexity becomes $O(N \log N)$.

3. Voxel Growing

3.1. Overview

In this article, we present a new algorithm adapted to large data sets of unorganized 3D points and optimized to be accurate and fast. Our plane detection method works in three steps. In the first part, we compute a better estimation of the normal at each point by a filtered weighted plane fitting. In a second step, we compute the score of local planarity at each point. We select the best seed point, which represents a good seed plane, and in the third part we grow this seed plane by adding all points close to the plane. The growing step is based on a voxel growing algorithm. The filtered normals, the score function and the voxel growing are the innovative contributions of our method.

As input, we need dense point clouds related to the level of detail we want to detect. As output, we produce connected components of planes in the point cloud. This notion of connected components is linked to the data density. With our method, the connected components of the detected planes are linked to the parameter $d$ of the voxel grid.

Our method has three "intuitive" parameters: $d$, $area_{min}$ and $\gamma$; "intuitive" because they are linked to physical measurements. $d$ is the voxel size used in voxel growing and also represents the connectivity of points in detected planes. $\gamma$ is the maximum distance between a point of a plane and the plane model; it represents the plane thickness and is linked to the point cloud noise. $area_{min}$ represents the minimum area of the planes we want to keep.

3.2. Details

3.2.1. Local Density of Point Clouds

In a first step, we compute the local density of the point cloud as in [17]. For that, we find the radius $r_i$ of the sphere containing the k nearest neighbors of point $i$. Then we calculate $\rho_i = \frac{k}{\pi r_i^2}$. In our experiments, we find that $k = 50$ is a good number of neighbors. It is important to know the local density because many laser point clouds are made with a fixed angular resolution scanner and are therefore not evenly distributed. We use the local density in section 3.2.3 for the score calculation.

3.2.2. Filtered Normal Estimation

Normal estimation is an important part of our algorithm. The paper [7] presents and compares three normal estimation methods. They conclude that the weighted plane fitting, or WPF, is the fastest and the most accurate for large point clouds. WPF is an idea of Pauly et al. in [17]: the fitting plane of a point $p$ must take the nearby points into consideration more than the distant ones. The least-squares normal is explained in [21] and is the minimum of $\sum_{i=1}^{k}(n_p \cdot p_i + d)^2$. The WPF is the minimum of $\sum_{i=1}^{k}\omega_i (n_p \cdot p_i + d)^2$, where $\omega_i = \theta(\|p_i - p\|)$ and $\theta(r) = e^{-2r^2/r_i^2}$. To solve for $n_p$, we compute the eigenvector corresponding to the smallest eigenvalue of the weighted covariance matrix $C_w = \sum_{i=1}^{k}\omega_i \,{}^t(p_i - b_w)(p_i - b_w)$, where $b_w$ is the weighted barycenter. For the three methods explained in [7], we get a good approximation of the normals in smooth areas, but we have errors at sharp corners. In figure 1, we have tested the weighted normal estimation on two planes with uniform noise forming an angle of 90°. We can see that the normal is not correct on the corners of the planes and in the red circle.

To improve the normal calculation, which improves the plane detection especially on the borders of planes, we propose a filtering process in two phases. In a first step, we compute the weighted normals (WPF) of each point as described above, by minimizing $\sum_{i=1}^{k}\omega_i (n_p \cdot p_i + d)^2$. In a second step, we compute the filtered normal by using an adaptive local neighborhood: we compute the new weighted normal with the same sum minimization, but keeping only the points of the neighborhood whose normals from the first step satisfy $|n_p \cdot n_i| > \cos(\alpha)$. With this filtering step, we have the same results in smooth areas and better results at sharp corners. We call our normal estimation filtered weighted plane fitting (FWPF).

Figure 1. Weighted normal estimation of two planes with uniform noise and with a 90° angle between them.

We have tested our normal estimation by computing normals on synthetic data with two planes, different angles between them, and different values of the parameter $\alpha$. We can see in figure 2 the mean error in normal estimation for WPF and FWPF with $\alpha$ = 20°, 30°, 40° and 90°. Using $\alpha$ = 90° is the same as not doing the filtering step. We see in figure 2 that $\alpha$ = 20° gives a smaller error in normal estimation when the angle between planes is smaller than 60°, and $\alpha$ = 30° gives the best results when the angle between planes is greater than 60°. We have taken $\alpha$ = 30° as the best value, because it gives the smallest mean error in normal estimation when the angle between planes varies from 20° to 90°. Figure 3 shows the normals of the planes with a 90° angle and better results in the red circle (normals are at 90° to the plane).

Figure 2. Comparison of the mean error in normal estimation of two planes with α = 20°, 30°, 40° and 90° (= no filtering).

Figure 3. Filtered weighted normal estimation of two planes with uniform noise and with a 90° angle between them (α = 30°).
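The two-phase FWPF estimation described above can be sketched as follows. This Python sketch assumes that `neighbors[i]` is an integer array of the k-nearest-neighbor indices of point i and that `radii[i]` is its k-NN radius (e.g., precomputed with a Kd-tree); it is an illustration of the method, not the authors' code.

```python
import numpy as np

# Sketch of the two-phase FWPF normal estimation described above.

def wpf_normal(p, nbrs_pts, r_i):
    # Weights theta(r) = exp(-2 r^2 / r_i^2) favour nearby points.
    w = np.exp(-2.0 * np.sum((nbrs_pts - p) ** 2, axis=1) / r_i ** 2)
    b = (w[:, None] * nbrs_pts).sum(0) / w.sum()     # weighted barycenter
    d = nbrs_pts - b
    C = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(0)
    evals, evecs = np.linalg.eigh(C)
    return evecs[:, 0]          # eigenvector of the smallest eigenvalue

def fwpf_normals(points, neighbors, radii, alpha_deg=30.0):
    cos_a = np.cos(np.radians(alpha_deg))
    # Phase 1: plain weighted plane fitting at every point.
    n1 = np.array([wpf_normal(points[i], points[neighbors[i]], radii[i])
                   for i in range(len(points))])
    # Phase 2: refit, keeping only neighbors whose phase-1 normals agree,
    # i.e. |n_p . n_i| > cos(alpha).
    n2 = np.empty_like(n1)
    for i in range(len(points)):
        idx = neighbors[i]
        keep = idx[np.abs(n1[idx] @ n1[i]) > cos_a]
        if len(keep) < 3:       # fall back if the filter is too strict
            keep = idx
        n2[i] = wpf_normal(points[i], points[keep], radii[i])
    return n2
```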
3.2.3. The Score of Local Planarity

In many region growing algorithms, the criterion used for the score of the local fitting plane is the residual, as in [18] or [19], i.e. the sum of the squared distances from the points to the plane. We have a different score function to estimate local planarity. For that, we first compute the neighbors $N_i$ of a point $p$ with points $i$ whose normals $n_i$ are close to the normal $n_p$. More precisely, we compute $N_i = \{p \text{ in the } k \text{ neighbors of } i \;/\; |n_i \cdot n_p| > \cos(\alpha)\}$. It is a way to keep only the points which are probably on the local plane before the least-squares fitting. Then we compute the local plane fitting of point $p$ with the $N_i$ neighbors by least squares, as in [21]. The set $N_i'$ is the subset of $N_i$ of points belonging to the plane, i.e. the points for which the distance to the local plane is smaller than the parameter $\gamma$ (to account for the noise). The score $s$ of the local plane is the area of the local plane, i.e. the number of points "in" the plane divided by the local density $\rho_i$ (see section 3.2.1): the score is $s = \frac{card(N_i')}{\rho_i}$. We take the area of the local plane as the score function, and not the number of points or the residual, in order to be more robust to the sampling distribution.

3.2.4. Voxel Decomposition

We use a data structure that is the core of our region growing method. It is a voxel grid that speeds up the plane detection process. Voxels are small cubes of side length $d$ that partition the point cloud space. Every data point belongs to a voxel and a voxel contains a list of points. We use the Octree Class Template in [2] to compute an octree of the point cloud. The leaf nodes of the graph built are voxels of size $d$. Once the voxel grid has been computed, we start the plane detection algorithm.
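A hash map keyed by integer voxel coordinates is a simple stand-in for the octree-backed voxel grid described above; the following Python sketch illustrates the decomposition and the 26-neighborhood used later by voxel growing. The function names are hypothetical.

```python
from collections import defaultdict
import numpy as np

# Sketch of the voxel decomposition: every point is binned into a cube
# of side d. The paper builds an octree whose leaves are the voxels; a
# hash map keyed by integer voxel coordinates is a simpler stand-in.

def build_voxel_grid(points, d):
    grid = defaultdict(list)             # (ix, iy, iz) -> point indices
    keys = np.floor(points / d).astype(int)
    for i, key in enumerate(map(tuple, keys)):
        grid[key].append(i)
    return grid

def neighbor_voxels(key):
    # The 26 face/edge/corner neighbors tested during voxel growing.
    ix, iy, iz = key
    return [(ix + dx, iy + dy, iz + dz)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
            if (dx, dy, dz) != (0, 0, 0)]
```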
3.2.5. Voxel Growing

With the estimator of local planarity, we take the point $p$ with the best score, i.e. the point with the maximum area of local plane. We have the model parameters of this best seed plane and we start with an empty set $E$ of points belonging to the plane. The initial point $p$ is in a voxel $v_0$. All the points in the initial voxel $v_0$ for which the distance from the seed plane is less than $\gamma$ are added to the set $E$. Then we compute new plane parameters by least-squares refitting with the set $E$. Instead of growing with the k nearest neighbors, we grow with voxels: we test the points in the 26 neighboring voxels. This is a way to search the neighborhood in constant time instead of $O(\log N)$ for each neighbor, as with a Kd-tree. In a neighboring voxel, we add to $E$ the points for which the distance to the current plane is smaller than $\gamma$ and the angle between the normal computed at each point and the normal of the plane is smaller than a parameter $\alpha$: $|\cos(n_p, n_P)| > \cos(\alpha)$, where $n_p$ is the normal of the point $p$ and $n_P$ is the normal of the plane $P$. We have tested different values of $\alpha$ and we empirically found that 30° is a good value for all point clouds. If we added at least one point to $E$ for this voxel, we compute new plane parameters from $E$ by least-squares fitting and we test its 26 neighboring voxels. It is important to perform the plane least-squares fitting at each voxel addition, because with noise the seed plane model is not good enough to be used for all the voxel growing, but only in the surrounding voxels. This growing process is faster than classical region growing because we do not compute a least-squares fit for each point added, but only for each voxel added.

The least-squares fitting step must be computed very fast. We use the same method as explained in [18], with incremental updates of the barycenter $b$ and covariance matrix $C$ as in equation (2). We know from [21] that the barycenter $b$ belongs to the least-squares plane and that the normal of the least-squares plane $n_P$ is the eigenvector of the smallest eigenvalue of $C$:

$$b_0 = 0_{3\times 1}, \quad C_0 = 0_{3\times 3}, \quad b_{n+1} = \frac{1}{n+1}\left(n\, b_n + p_{n+1}\right), \quad C_{n+1} = C_n + \frac{n}{n+1}\,{}^t(p_{n+1} - b_n)(p_{n+1} - b_n), \tag{2}$$

where $C_n$ is the covariance matrix of a set of $n$ points, $b_n$ is the barycenter vector of a set of $n$ points, and $p_{n+1}$ is the $(n+1)$-th point vector added to the set.

This voxel growing method leads to a connected component set $E$, because the points have been added through connected voxels. In our case, the minimum distance between one point and $E$ is less than the parameter $d$ of our voxel grid. That is why the parameter $d$ also represents the connectivity of points in detected planes.

3.2.6. Plane Detection

To get all planes with an area of at least $area_{min}$ in the point cloud, we repeat these steps (best local seed plane choice and voxel growing) for all points in descending order of their score. Once we have a set $E$ whose area is bigger than $area_{min}$, we keep it and classify all points in $E$.
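The incremental update of Eq. (2) can be captured in a small helper; the sketch below illustrates the update rule (with the plane normal recovered as the smallest-eigenvalue eigenvector of C), and is not the paper's implementation.

```python
import numpy as np

# Sketch of the incremental least-squares update of Eq. (2): the
# barycenter b and covariance C are updated per added point, so the
# plane refit after each voxel costs O(points in voxel), not O(|E|).

class IncrementalPlane:
    def __init__(self):
        self.n = 0
        self.b = np.zeros(3)        # b_0 = 0 (3x1)
        self.C = np.zeros((3, 3))   # C_0 = 0 (3x3)

    def add(self, p):
        d = p - self.b
        # C_{n+1} = C_n + n/(n+1) (p - b_n)(p - b_n)^T
        self.C += (self.n / (self.n + 1.0)) * np.outer(d, d)
        # b_{n+1} = (n b_n + p) / (n + 1)
        self.b = (self.n * self.b + p) / (self.n + 1.0)
        self.n += 1

    def normal(self):
        # Normal of the least-squares plane: eigenvector of the
        # smallest eigenvalue of C; the plane passes through b.
        evals, evecs = np.linalg.eigh(self.C)
        return evecs[:, 0]
```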
4. Results and Discussion

4.1. Benchmark Analysis

To test the improvements of our method, we have employed the comparative framework of [12] based on range images. For that, we have converted all images into 3D point clouds. All the point clouds created have 260k points. After our segmentation, we project the labelled points onto a segmented image and compare with the ground truth image. We have chosen our three parameters $d$, $area_{min}$ and $\gamma$ by optimizing the result of the 10 perceptron training image segmentations (the perceptron is a portable scanner that produces a range image of its environment). The best results have been obtained with $area_{min} = 200$, $\gamma = 5$ and $d = 8$ (units are not provided in the benchmark). We show the results of the 30 perceptron image segmentations in table 1.

GT Regions is the mean number of ground truth planes over the 30 ground truth range images. Correct detection, over-segmentation, under-segmentation, missed and noise are the mean numbers of correct, over-segmented, under-segmented, missed and noise planes detected by the methods. The 80% tolerance is the minimum percentage of points we must have detected, compared to the ground truth, to have a correct detection. More details are in [12]. UE is a method from [12]; UFPR is a method from [10]. It is important to notice that UE and UFPR are range image methods, and our method is suited not to range images but to 3D point clouds. Nevertheless, it is a good benchmark for comparison, and we see in table 1 that the accuracy of our method is very close to the state of the art in range image segmentation.

To evaluate the different improvements of our algorithm, we have tested different variants of our method: without normals (only with the distance from points to the plane), without voxel growing (with a classical region growing by k neighbors), without our FWPF normal estimation (with WPF normal estimation), and without our score function (with the residual score function). The comparison is visible in table 2. We can see the difference in computing time between region growing and voxel growing. We have tested our algorithm with and without normals and found that the accuracy cannot be achieved without normal computation. There is also a big difference in correct detection between WPF and our FWPF normal estimation, as we can see in figure 4. Our FWPF normals bring a real improvement in the border estimation of planes. Black points in the figure are non-classified points.

Figure 5. Correct detection of our segmentation algorithm when the voxel size d changes.

We would like to discuss the influence of the parameters on our algorithm. We have three parameters: $area_{min}$, which represents the minimum area of the planes we want to keep; $\gamma$, which represents the thickness of the plane (it is generally closely tied to the noise in the point cloud, and especially the standard deviation $\sigma$ of the noise); and $d$, which is the minimum distance from a point to the rest of the plane.
These three parameters depend on the point cloud features and the desired segmentation. For example, if we have a lot of noise, we must choose a high $\gamma$ value. If we want to detect only large planes, we set a large $area_{min}$ value. We also focus our analysis on the robustness of the voxel size $d$ in our algorithm, i.e. the ratio of points vs voxels. We can see in figure 5 the variation of the correct detection when we change the value of $d$. The method seems to be robust when $d$ is between 4 and 10, but the quality decreases when $d$ is over 10. This is due to the fact that for a large voxel size $d$, some planes from different objects are merged into one plane.

Table 1. Average results of different segmenters at 80% compare tolerance.

Method      | GT Regions | Correct detection | Over-segmentation | Under-segmentation | Missed | Noise | Duration (s)
UE          | 14.6       | 10.0              | 0.2               | 0.3                | 3.8    | 2.1   | -
UFPR        | 14.6       | 11.0              | 0.3               | 0.1                | 3.0    | 2.5   | -
Our method  | 14.6       | 10.9              | 0.2               | 0.1                | 3.3    | 0.7   | 308

Table 2. Average results of variants of our segmenter at 80% compare tolerance.

Variant                    | GT Regions | Correct detection | Over-segmentation | Under-segmentation | Missed | Noise | Duration (s)
without normals            | 14.6       | 5.67              | 0.1               | 0.1                | 9.4    | 6.5   | 70
without voxel growing      | 14.6       | 10.7              | 0.2               | 0.1                | 3.4    | 0.8   | 605
without FWPF               | 14.6       | 9.3               | 0.2               | 0.1                | 5.0    | 1.9   | 195
without our score function | 14.6       | 10.3              | 0.2               | 0.1                | 3.9    | 1.2   | 308
with all improvements      | 14.6       | 10.9              | 0.2               | 0.1                | 3.3    | 0.7   | 308

4.1.1. Large Scale Data

We have tested our method on different kinds of data. We have segmented the urban data in figure 6 from our Mobile Mapping System (MMS) described in [11]. The mobile system generates 10k pts/s with a density of 50 pts/m² and very noisy data (σ = 0.3 m). For this point cloud, we want to detect building facades. We have chosen $area_{min}$ = 10 m² and d = 1 m to have large connected components, and γ = 0.3 m to cope with the noise.

We have tested our method on a point cloud from the Trimble VX scanner in figure 7. It is a point cloud of 40k points with only 20 pts/m², with less noise because it is a fixed scanner (σ = 0.2 m). In that case, we also wanted to detect building facades and kept the same parameters except γ = 0.2 m, because we had less noise. We see in figure 7 that we have detected two facades. By setting a larger voxel size d, like d = 10 m, we detect only one plane. We choose d, like $area_{min}$ and γ, according to the desired segmentation and to the level of detail we want to extract from the point cloud.

We also tested our algorithm on the point cloud from the LEICA Cyrax scanner in figure 8. This point cloud has been taken from the AIM@SHAPE repository [1]. It is a very dense point cloud from multiple fixed positions of the scanner, with about 400 pts/m² and very little noise (σ = 0.02 m). In this case, we wanted to detect all the little planes to model the church in planar regions. That is why we have chosen d = 0.2 m, $area_{min}$ = 1 m² and γ = 0.02 m.

In figures 6, 7 and 8 we show, on the left, the input point cloud and, on the right, only the points detected in a plane (planes are in random colors). The red points in these figures are seed plane points. We can see in these figures that planes are very well detected, even with high noise.
Table 3 shows the information on the point clouds and the results, with the number of planes detected and the duration of the algorithm. The time includes the computation of the FWPF normals of the point cloud. We can see in table 3 that our algorithm performs linearly in time with respect to the number of points. The choice of parameters has little influence on computing time. The computation time is about one millisecond per point, whatever the size of the point cloud (we used a PC with a QuadCore Q9300 and 2 GB of RAM). The algorithm has been implemented using only one thread and in-core processing. Our goal is to compare the improvement of plane detection between classical region growing and our region growing, with better normals for more accurate planes and voxel growing for faster detection. Our method seems to be compatible with out-of-core implementations as described in [24] or in [15].

Table 3. Results on different data.

                 | MMS Street | VX Street | Church
Size (points)    | 398k       | 42k       | 7.6M
Mean density     | 50 pts/m²  | 20 pts/m² | 400 pts/m²
Number of planes | 20         | 21        | 42
Total duration   | 452 s      | 33 s      | 6900 s
Time/point       | 1 ms       | 1 ms      | 1 ms

5. Conclusion

In this article, we have proposed a new method of plane detection that is fast and accurate even in the presence of noise. We demonstrate its efficiency with different kinds of data and its speed on large data sets with millions of points. Our voxel growing method has a complexity of O(N); it is able to detect large and small planes in very large data sets and can extract them directly as connected components.

Figure 4. Ground truth; our segmentation without and with filtered normals.

Figure 6. Plane detection in a street point cloud generated by MMS (d = 1 m, area_min = 10 m², γ = 0.3 m).

References

[1] AIM@SHAPE repository.
[2] Octree class template. /code/octree.html.
[3] A. Bab-Hadiashar and N. Gheissari. Range image segmentation using surface selection criterion. IEEE Transactions on Image Processing, 2006.
[4] J. Bauer, K. Karner, K. Schindler, A. Klaus, and C. Zach. Segmentation of building models from dense 3D point clouds. Workshop of the Austrian Association for Pattern Recognition, 2003.
[5] H. Boulaassal, T. Landes, P. Grussenmeyer, and F. Tarsha-Kurdi. Automatic segmentation of building facades using terrestrial laser data. ISPRS Workshop on Laser Scanning, 2007.
[6] C. C. Chen and I. Stamos. Range image segmentation for modeling and object detection in urban scenes. 3DIM, 2007.
[7] T. K. Dey, G. Li, and J. Sun. Normal estimation for point clouds: A comparison study for a Voronoi based method. Eurographics Symposium on Point-Based Graphics, 2005.
[8] J. R. Diebel, S. Thrun, and M. Brunig. A Bayesian method for probable surface reconstruction and decimation. ACM Transactions on Graphics (TOG), 2006.
[9] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM.
[10] P. F. U. Gotardo, O. R. P. Bellon, and L. Silva. Range image segmentation by surface extraction using an improved robust estimator. Proceedings of Computer Vision and Pattern Recognition, 2003.
[11] F. Goulette, F. Nashashibi, I. Abuhadrous, S. Ammoun, and C. Laurgeau. An integrated on-board laser range sensing system for on-the-way city and road modelling. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2007.
[12] A. Hoover, G. Jean-Baptiste, et al. An experimental comparison of range image segmentation algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996.
[13] H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface reconstruction from unorganized points. International Conference on Computer Graphics and Interactive Techniques, 1992.
[14] P. Hough. Method and means for recognizing complex patterns. US Patent, 1962.
[15] M. Isenburg, P. Lindstrom, S. Gumhold, and J. Snoeyink. Large mesh simplification using processing sequences. 2003.
30X HD IR Network Dome Camera (VT231-A230-A Series) Datasheet

Copyright © 1993-2017 Infinova. All rights reserved. Appearance and specifications are subject to change without prior notice.

• Inbuilt 30X HD integrated camera module
• 1/1.9" large-area progressive scanning CMOS sensor
• ICR infrared filter with automatic switching to realize true day/night surveillance
• Starlight-level ultra-low illumination: 0.0005 lux
• Supports multi-frame composite wide dynamic range; maximum dynamic range is 120 dB
• Inbuilt efficient infrared lamps, wavelength 850 nm, ensure stable long-term use and reduce maintenance cost
• Night vision distance up to 200 m
• Flexible ways of turning on the infrared lamps to meet diversified surveillance environment demands
• Infrared power adjusted automatically based on dome drive zooming, or manually, to optimize the night-vision fill-in light effects
• HD network video output: 1920×1080@60fps
• Smart functions to achieve border protection (line crossing, intrusion)
• Three simultaneous video streams: dual H.265 & M-JPEG or dual H.264 & scalable M-JPEG
• Supports embedded storage/NAS storage
• Supports alarm recording and alarm snapshots
• Bi-directional audio, G.711a/G.711u/AAC standard optional
• Two alarm inputs and one alarm output
• Supports motion detection, 4 dividable detection areas
• Supports servo feedback mechanism and multiple alarm trigger sources, such as I/O input, network disconnection, motion detection, and smart detection; supports flexible alarm-associated configurations, such as I/O output, email, FTP picture upload, audio, and TF card recording
• Supports local recording
• Supports Region of Interest (ROI), 8 dividable regions
• Allows multiple users to perform live access and parameter settings via web server
• Supports preset, autopan, pattern, autoscan, time tour, normal tour, power-on return, etc.
• Supports auto-flip and image overlay
• Manual consumption adjustment
• Compatible with Infinova digital video surveillance software and convenient to integrate with other video surveillance software
• Supports ONVIF Profile S & G standards
• Standard SDK, easy to integrate with other digital systems
• Supports RS485 control and analog video output for easy debugging
• IP67 protection rating; inbuilt heater and air circulation system to avoid icing
• Supports remote network upgrade
• Adopts hydrophobic, self-cleaning lens

The VT231-A230-A series is our newly introduced high-definition infrared network dome camera that supports 1920×1080@60fps HD network video output. It adopts H.265/H.264/M-JPEG encoding, and its output provides excellent definition and color rendition, enabling the acquisition of rich and accurate details so as to effectively guarantee smart analysis accuracy.

This product adopts high-power LED infrared lamps with an infrared wavelength of 850 nm, a long night-vision distance of up to 200 m, and strong illumination. The IR lamps can turn on or off automatically based on environmental lighting conditions, or can be adjusted manually. The IR illumination allows flexible adjustment so as to reduce IR lamp heat and extend its service life.

The user-friendly GUI design allows users to perform dome PTZ control easily via the network and to configure detailed camera parameter settings. At the web interface, users can perform dome camera settings and operations by using a mouse, which is more convenient than traditional keyboard control.
It also supports area zoom and image PTZ functions. VT231-A230-A series dome cameras also feature general dome functions such as preset, pattern, autopan, autoscan, time tour and normal tour.

Ordering information:
5E232008-A53 VT231-A230-A061: HD IR IP dome camera, 2.0M, 30X, 1/1.9" CMOS, day/night, H.265/H.264/MJPEG, with audio alarm, outdoor, bracket mount, 24VDC/24VAC/PoE. If the PoE power source is selected, the LAS60-57CN-RJ45-F is required.

Accessories:
V1761K Wall mount, bayonet, 10 inches
V1762K Corner mount, bayonet, 10 inches
V1763K Pole-side mount, bayonet, 10 inches
LAS60-57CN-RJ45-F PoE power sourcing equipment, 100-240VAC input, 60W output

Mounting (unit: inches; values in parentheses are mm): wall mounting, corner mounting, pole mounting.
Hikvision DS-2ZMN2507(C) 2MP ICR Day/Night Network Zoom Camera Module Datasheet

DS-2ZMN2507(C)
25 × 2MP 1/2.8″ ICR Day/Night Zoom Camera Module

The Hikvision DS-2ZMN2507(C) 2MP ICR Day/Night Network Zoom Camera Module adopts a 1/2.8″ progressive scan CMOS chip. With the 25× optical zoom lens, the camera module offers more detail over expansive areas. This series of camera modules can be used for different types of speed domes and PTZ cameras.

• 2MP 1/2.8″ progressive scan CMOS
• Up to 1920×1080 resolution
• 25× optical zoom; focal length 4.8 mm to 120 mm
• Min. illumination: color: 0.05 lux @ (F1.6, AGC ON); B/W: 0.01 lux @ (F1.6, AGC ON)
• Zoom speed: 3.6 s
• IR cut filter with auto switch
• 3D DNR, low bit rate, digital WDR
• Small size and low power consumption
• Easy to connect to speed domes and PTZ cameras

Specification

Camera Module
Image Sensor: 1/2.8" progressive scan CMOS
Min. Illumination: color: 0.05 lux @ (F1.6, AGC ON); B/W: 0.01 lux @ (F1.6, AGC ON)
Resolution and Frame Rate: main stream: 50 Hz: 25 fps (1920×1080, 1280×960, 1280×720); 60 Hz: 30 fps (1920×1080, 1280×960, 1280×720); sub-stream: 50 Hz: 25 fps (704×576, 640×480, 352×288); 60 Hz: 30 fps (704×480, 640×480, 352×240)
Video Compression: H.264, H.265
Audio Compression: G.722.1, G.711-a law, G.711-u law, MP2L2, G.726, PCM
White Balance: manual, auto 1, auto 2, sodium lamp, indoor, outdoor, fluorescent lamp
Gain Control: auto, manual
SNR: > 52 dB
3D DNR: yes
BLC: yes
Regional Focus: yes
Shutter: 1/1 s to 1/30,000 s
Day & Night: auto/color/BW/scheduled switch/triggered by alarm input
Digital Zoom: 12×
Focus: auto, semi-auto, manual
Regional Exposure: yes
Video Bit Rate: 32 Kbps to 16 Mbps
Heartbeat: yes
Exposure Mode: auto, iris priority, shutter priority, manual
Day/Night Switch: IR cut filter

Lens
Focal Length: 4.8 mm to 120 mm, 25× optical
Zoom Speed: approx. 3.6 s (optical, wide-tele)
Horizontal FOV: 57.6° to 2.5° (wide-tele)
Min. Working Distance: 100 mm to 1500 mm (wide-tele)
Aperture: F1.6 to F3.5

Function
Image Enhancement: WDR, HLC
Smart Encoding: low bit rate, ROI
Exception Detection: illegal login
Power-off Memory: yes
Smart Detection: motion detection, video tampering detection, audio exception detection, intrusion detection, line crossing detection

Network
Protocols: IPv4/IPv6, HTTP, HTTPS, 802.1x, QoS, FTP, SMTP, UPnP, DNS, DDNS, NTP, RTSP, RTCP, RTP, TCP/IP, DHCP, Bonjour
API: open-ended; supports ONVIF and ISAPI; supports HIKVISION SDK and third-party management platforms
Web Browser: IE 8 to 11, Chrome 31.0+, Firefox 30.0+, Safari 11+
Simultaneous Live View: up to 20 channels
User: up to 32 users; 3 levels: administrator, operator, and user
Security Measures: user authentication (ID and PW); host authentication (MAC address); HTTPS encryption; IEEE 802.1x port-based network access control

Interface
Power Interface: DC 12 V ±10%
Communication Interface: 10M/100M Ethernet interface
Audio I/O: 1-ch audio input and 1-ch audio output
Alarm I/O: 1-ch alarm input and 1-ch alarm output
Video Output: yes
SDI Output: no
RS-485: yes
On-board Storage: yes
Interface: 36-pin FFC (including network interface, RS485, RS232, CVBS, SDHC, alarm in/out, line in/out, power supply)
Communication: RS232 interface, HIKVISION protocol; RS485 interface, Pelco protocol

General
Power Consumption: static: 2.5 W; dynamic: 4.5 W
Operating Conditions: temperature: -10 °C to 60 °C (14 °F to 140 °F); humidity: <90%
Dimensions: 50 mm × 60 mm × 88.6 mm
Weight: 266 g (0.59 lb)

Dimension: unit: mm

Available Model: DS-2ZMN2507(C)
Personal Profile

Vassilis Theodorakopoulos
Wireless Communications Engineer
30 Richmond Mount, Leeds, West Yorkshire
Tel: 07796 978197 (mobile), 0113 2946608 (home)
Email: vtheodor@ URL: /~vtheodor
Nationality: Greek (EU citizen; no work permit required)

Personal Profile

My academic experience has given me a thorough grounding in the field of engineering, and I have developed a great interest in the area of wireless communications. My main focus over the last 5 years has been the design and analysis of image and video coding, processing, and communication systems. I am familiar with programming and simulation tools and I enjoy learning new languages and toolsets, which I acquire easily. I maintained a high level of achievement throughout my academic studies and I have a record of scientific publications in peer-reviewed journals and conferences. I consider myself a hardworking person, able to adapt to new situations and to work effectively both in a team and individually.

Key Technical Abilities
• Modelling skills for layered image and video transmission over mobile and wireless networks
• Computing and system development with C, Matlab and Python
• User of Microsoft / Unix / Linux operating systems

Education

• 2003 – 2007: PhD in Computer Science
School of Informatics, Department of Computing, University of Bradford, Bradford, UK
Research thesis: Multi-priority QAM Transmission System for High Quality Mobile Video Applications. An experimental comparison of an M-QAM transmission system suitable for video transmission targeted at wireless networks is presented. The communication system is based on layered coding and unequal error protection to make the video data robust to channel errors.
Key skills acquired: the philosophy of research, effective written and oral presentation, supervision of projects, advanced video coding, wireless communications, etc. (For a full list of publications please visit my web page.)

• 2000 – 2001: MSc in Communications and Real-Time Electronic Systems
School of Engineering, Department of Electronics & Communications, University of Bradford, Bradford, UK
Research thesis: A Signal-Space Simulation for QAM. This Master's thesis carries out an investigation into a signal-space simulation of a digital communication system for transmission via a single AWGN channel and via a multipath Rayleigh fading channel. For the transmission, an M-QAM system is considered.
Key skills acquired: wireless communications, image and video processing, digital signal processing, etc.

• 1996 – 2000: BEng in Electronic, Telecommunications & Computer Engineering
School of Engineering, Department of Electronics & Communications, University of Bradford, Bradford, UK
Project report: Investigation of Techniques for the Reduction of Howl-Round in Public Address Systems. This project attempts to use various echo cancellation techniques to control the effects of acoustic feedback between an adjacent microphone and loudspeaker in public address systems.
Key skills/knowledge acquired: basics of electronics design, advanced computer programming, etc.

Professional Experience

Feb 2002 – Jul 2002: Research Assistant
School of Informatics, University of Bradford, Bradford, West Yorkshire, UK
My key responsibility in this role was to undertake a project on multi-priority mobile transmission systems that involved modelling and simulation analysis of a novel M-QAM transmission system for mobile video applications. I accomplished this by leading the design and development of the transmission system.
This role allowed me to further develop my project management skills. Responsibilities included the supervision of laboratories and providing support and help to a PhD researcher with an active project and to MSc students.

Jul 1998 – Jul 1999: Student Engineer
Department of Development, Pace Micro Communications, Shipley, West Yorkshire, UK
During the 12 months in the Development Department I was involved in building and testing development products, supervising the Pace ISDN and technically checking the imported electrical components. My duties also involved communicating with suppliers, clients, warehouse staff and the administration office. As a student engineer I also worked for other departments inside the company and developed the ability to understand the dynamics of a working environment; I learned the aims of a business and how the different functions, such as development, production, sales and marketing, all relate.

Jul 1997 – Sep 1997 and Jul 1996 – Sep 1996: Student Engineer (vocational work)
Hellenic Sugar Factory – Factory of Orestiada, Orestiada, 68200, Greece
I had the opportunity to work and be trained in the Technical Support Department at the factory of Orestiada. My duties included installing and upgrading employees' PCs, installing a small network inside the factory, and familiarising myself with the company's database and central computer administration.

Key Skills

Self-management: Approaching the PhD from a project management perspective, being the project manager equipped me with effective organisational, time and resource management skills, which I needed in order to successfully complete the course on time and remain in control.

Problem solving: During my academic career I developed the ability to see a task through to its conclusion. There were several times during my research career when the results I had were leading to a dead end, but by employing efficient problem-solving strategies (I am adept at looking at the bigger picture, while at the same time being able to pull out and analyse the important details of any problem) I could overcome the problem and lead my work to publishable results.

Communication: I have strong communication skills, both written and verbal. My academic career has necessitated writing state-of-the-art reports and articles and presenting them to a wide cross-section of academics and industrial professionals, both at the University of Bradford and at conferences worldwide.

Selected Publications (for a full list of publications please visit my web page)

• "Comparative analysis of a twin-class M-QAM transmission system for wireless video applications", Theodorakopoulos V., Woodward M., Journal of Multimedia Tools and Applications, Special Issue: Wireless Multimedia, Vol. 28, Issue 1, Feb. 2006, pp. 125-139.
• "Uniform and Non-uniform Partitioned 64-QAM for Mobile Video Transmission", Theodorakopoulos V., Woodward M., Sotiropoulou K., 9th IASTED International Conference on Internet & Multimedia Systems & Applications (IMSA), Honolulu, USA, 2005.
• "Comparison of uniform and non-uniform M-QAM schemes for mobile video applications", Theodorakopoulos V., Woodward M., Sotiropoulou K., IEEE International Conference on Multimedia Communications Systems (ICMCS), Montreal, Canada, 2005.
• "A Dual Priority M-QAM Transmission System for High Quality Video over Mobile Channels", Theodorakopoulos V., Woodward M., Sotiropoulou K., IEEE First International Conference on Distributed Frameworks for Multimedia Applications (DFMA), Besançon, France, 2005.
• "Partitioned Quadrature Amplitude Modulation for Mobile Video Transmission", Theodorakopoulos V., Woodward M., Sotiropoulou K., IEEE Sixth International Symposium on Multimedia Software Engineering (ISMSE), Miami, USA, 2004.

Professional Activities
• Reviewer for the Institution of Engineering and Technology (IET) Proceedings in Communications.
• Member of the IEEE Communications Society, IEEE Computer Society, and IET.

References
Available upon request.
iDS-2CD7A46G0/P-IZHS(Y) 4 MP ANPR IR Varifocal Bullet Network Camera

Specification

Camera: Image Sensor, Max. Resolution, Min. Illumination, Shutter Time, Day & Night

Lens: Focal Length & FOV, Focus, Iris Type, Aperture

DORI:
Wide: 2.8 to 12 mm lens: D: 60 m, O: 23.8 m, R: 12 m, I: 6 m; 8 to 32 mm lens: D: 150.3 m, O: 59.7 m, R: 30.1 m, I: 15 m
Tele: 2.8 to 12 mm lens: D: 149 m, O: 59.1 m, R: 29.8 m, I: 14.9 m; 8 to 32 mm lens: D: 400 m, O: 158.7 m, R: 80 m, I: 40 m

Illuminator: Supplement Light Type, Supplement Light Range, Smart Supplement Light, IR Wavelength

Video: Main Stream

Image: Image Parameters Switch, Image Settings, Day/Night Switch, Wide Dynamic Range (WDR), SNR, Image Enhancement, Picture Overlay, Image Stabilization

Interface: Video Output, Ethernet Interface

Network: Network Storage, Client, Web Browser
CAMEO-SIM: a physics-based broadband scene simulation tool for assessment of camouflage, concealment, and deception methodologies

Ian R. Moorhead, QinetiQ Ltd., Ively Road, Farnborough, Hampshire, GU14 0LX, United Kingdom
Marilyn A. Gilmore, Alex W. Houlbrook, Defence Science and Technology Laboratory, Ively Road, Farnborough, Hampshire, GU14 0LX, United Kingdom
David E. Oxford, QinetiQ Ltd., Ively Road, Farnborough, Hampshire, GU14 0LX, United Kingdom
David Filbee, Colin Stroud, George Hutchings, Albert Kirk, Hunting Engineering Ltd (HEL), Reddings Wood, Ampthill, Bedford, MK45 2HD, United Kingdom

Abstract. Assessment of camouflage, concealment, and deception (CCD) methodologies is not a trivial problem; conventionally the only method has been to carry out field trials, which are both expensive and subject to the vagaries of the weather. In recent years computing power has increased, such that there are now many research programs using synthetic environments for CCD assessments. Such an approach is attractive; the user has complete control over the environment parameters, and many more scenarios can be investigated. The UK Ministry of Defence is currently developing a synthetic scene generation tool for assessing the effectiveness of air vehicle camouflage schemes. The software is sufficiently flexible to allow it to be used in a broader range of applications, including full CCD assessment. The synthetic scene simulation system (CAMEO-SIM) has been developed, as an extensible system, to provide imagery within the 0.4 to 14 μm spectral band with as high a physical fidelity as possible. It consists of a scene design tool, an image generator that incorporates both radiosity and ray-tracing processes, and an experimental trials tool. The scene design tool allows the user to develop a three-dimensional representation of the scenario of interest from a fixed viewpoint. Target(s) of interest can be placed anywhere within this 3-D representation and may be either static or moving. Different illumination conditions and effects of the atmosphere can be modeled, together with directional reflectance effects. The user has complete control over the level of fidelity of the final image. The output from the rendering tool is a sequence of radiance maps, which may be used by sensor models or for experimental trials in which observers carry out target acquisition tasks. The software also maintains an audit trail of all data selected to generate a particular image, both in terms of material properties used and the rendering options chosen. A range of verification tests has shown that the software computes the correct values for analytically tractable scenarios. Validation tests using simple scenes have also been undertaken. More complex validation tests using observer trials are planned. The current version of CAMEO-SIM and how its images are used for camouflage assessment is described. The verification and validation tests undertaken are discussed. In addition, example images are used to demonstrate the significance of different effects, such as spectral rendering and shadows. Planned developments of CAMEO-SIM are also outlined. © 2001 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.1390298]

Subject terms: scene simulation; CCD assessment; camouflage; concealment; deception.

Paper ATA-15 received Feb. 16, 2001; revised manuscript received Mar. 18, 2001; accepted for publication Mar. 23, 2001.

1 Introduction

Advances in synthetic image generation methods are now achieving very high levels of photorealism in the imagery that is produced. These methods find ready application in the games and entertainment industries, where increasingly sophisticated imagery is both required and expected. Within the military environment there is also a growing interest in using synthetic imagery as a method for assessing the benefits of new technologies such as camouflage systems.1 Field trials are expensive, subject to the vagaries of the weather, and cannot be used to design or assess new systems and technologies. By using synthetic imagery, however, not only is there considerably more control possible at a reduced cost, but also many more scenarios can be investigated than is possible using real equipment in the real world. There is, however, a fundamental difference between the requirements of the games and entertainment industry and the military user. The latter wishes to use the imagery to make quantitative predictions of the effects on performance of manipulations of objects in the image.
[DOI:10.1117/1.1390298]Subject terms:scene simulation;CCD assessment;camouflage;concealment; deception.Paper ATA-15received Feb.16,2001;revised manuscript received Mar.18, 2001;accepted for publication Mar.23,2001.1IntroductionAdvances in synthetic image generation methods are now achieving very high levels of photorealism in the imagery that is produced.These methodsfind ready application in the games and entertainment industries,where,increas-ingly,sophisticated imagery is both required and expected. Within the military environment there is also a growing interest in using synthetic imagery as a method for assess-ing the benefits of new technologies such as camouflage systems.1Field trials are expensive,subject to the vagaries of the weather,and cannot be used to design or assess new systems and technologies.By using synthetic imagery, however,not only is there considerably more control pos-sible at a reduced cost,but also many more scenarios can be investigated than is possible using real equipment in the real world.There is,however,a fundamental difference be-tween the requirements of the games and entertainment in-dustry and the military user.The latter wishes to use the imagery to make quantitative predictions of the effects on performance of manipulations of objects in the image.Pho-1896Opt.Eng.40(9)1896–1905(September2001)0091-3286/2001/$15.00©2001Society of Photo-Optical Instrumentation Engineerstorealism alone,therefore,is not sufficient.The militaryuser needs a synthetic image generator able to correctlymodel physical interactions of electromagnetic radiation.One of the more challenging areas for a scene simulationsystem is the design of camouflage.All camouflage is acompromise.It is required to match different backgrounds,in different wavebands and at different times of the year.The compromises made in the past were determined bysubjective assessment of the visibility of a military assetwhen viewed against some relevant background.Typically,this assessment was carried out in the visible band only.However,sensors now operate throughout a large part ofthe electromagnetic spectrum,and it is likely that futurecamouflage will need to be effective across a correspondingbroad range of wavebands.In addition,new techniques andmaterials offer the potential of increased effectiveness ofcamouflage against sensor threats.Cost-effective and quan-titatively correct assessment of these techniques and mate-rials is essential for future system survivability.As a result of these requirements,the UK has developeda physics-based,broadband,scene simulation toolset,which we have called CAMEO-SIM,and which enables thequantitative evaluation of both current and future camou-flage.We provide an overview of the functionality ofCAMEO-SIM and describe the verification and validationexperiments that have been conducted.Section2providesan overview of the methods used by the CAMEO-SIMtoolset and illustrates some of the physical effects that itcan produce.Section3reviews the verification tests thathave been carried out to date,Sec.4describes a part of theongoing validation program,and Sec.5presents discussionand conclusions.2Overview of Cameo-SimThe goal of the CAMEO-SIM system is to produce syn-thetic,high resolution,physically accurate radiance imagesof target vehicles in operational scenarios,at any wave-length between0.4and14m.Version1of CAMEO-SIM was designed to create a scene as viewed by a static sensorwith either moving or static vehicles in the scene.2Recentimprovements to the system now mean that in 
CAMEO-SIM Version2a moving sensor can be modeled.This canbe either open loop,where the movement path is predeter-mined,or closed loop,where the path is determined inter-actively by feedback from the sensor.Improving the controlof the targets means that changes to the target,such ashigher engine temperatures at increased speeds,or higherairframe temperatures due to increased speeds,or differentshapes of the vehicle such as different wing configurationsfor aircraft during manoeuvre,can be simulated.CAMEO-SIM has two key elements that distinguish it from conven-tional ray-tracing packages.Firstly,there is a complete au-dit trail between the material properties used by therenderer and thefinal image.This means that it is possibleto conduct carefully controlled parametric manipulations ofscene properties.Secondly,because CAMEO-SIM isphysics-based it is a predictive tool.That is,once a particu-lar scenario has been created,it can then be used to predicttarget visibility under many alternative conditions.Thesemight typically be diurnal variations,different weather con-ditions,or changes to atmospheric conditions.In addition,the same scene geometry may then be used,but different material properties can be assigned to create different times of the year.Traditional ray tracers rely on arbitrary param-eter manipulation to simulate changes in the environment ͑e.g.,change of atmosphere͒.CAMEO-SIM,on the otherhand,incorporates these directly as a result of solving the underlying physical equations.All geometric objects forming the synthetic environment are modeled using textured faceted structures.Texel values in these textures are mapped to real materials,which have measured physical properties associated with them,e.g.,bi-directional reflectance,solar absorptivity,conductivity,and density.Each texel is then considered as a mixture of up to three different materials.Material properties are accessed by CAMEO-SIM from a set of databases.The bidirectional reflectance functions͑BRDFs͒,however,are computed us-ing an off-line parameterization of the raw data.This means that the physical properties of both soil and grass are mod-eled when using a grass plex objects are mod-eled as a number of polygons.This means that the three-dimensional effects of trees,including shadow effects,can be simulated.Spatial resolution can be set to any value,but spatial detail is dependent on the polygon count.In the process of generating radiometrically accurate synthetic images of scenes,all synthetic scene generators are ultimately attempting to solve a form of the general rendering equation,which states that the radiance at wave-lengthleaving a point x,y in the scene in a direction (0,0)is given by:N0͑,x,y,0,0͒ϭ͑,x,y,0,0͒N bb͓,T͑x,y͔͒ϩ͵പbd͑,x,y,0,0,i,i͒ϫN i͑,x,y,i,i͒cos͑i͒di,͑1͒where N0(,x,y,0,0)is the total radiance͑W mϪ2srϪ1͒in the direction(0,0)from the point x,y at wavelength.(,x,y,0,0)is the directional emittance in the direc-tion(0,0)from the point x,y at wavelength. N bb͓,T(x,y)͔is the blackbody radiance͑W mϪ2srϪ1͒at the temperature͑T͒of the point x,y at wavelength.bd(,x,y,0,0,i,i)is the bidirectional reflectance distribution function͑srϪ1͒of the material at the point x,y at wavelength.N i(,x,y,i,i)is the radiance ͑W mϪ2srϪ1͒incident at the point x,y from the direction (i,i)at wavelength.And͐പdi is the integral over the hemispherical solid angle subtended by the point x,y. 
The radiance N_s arriving at a sensor positioned at a point (x', y') in the scene is then given by:

N_s(\lambda, x', y', x, y) = \tau(\lambda, x', y', x, y)\, N_0(\lambda, x, y, \theta_0, \phi_0) + N_{path}(\lambda, x', y', x, y) ,   (2)

where N_s(\lambda, x', y', x, y) is the radiance arriving at the sensor at the point (x', y') due to the radiance from the point (x, y) at wavelength \lambda; \tau(\lambda, x', y', x, y) is the transmittance of the path between the point (x, y) and the sensor at the point (x', y') at wavelength \lambda; and N_{path}(\lambda, x', y', x, y) is the path radiance between the point (x, y) and the sensor at the point (x', y') at wavelength \lambda.

The three main components of the rendering equation are the thermal self-emission, the atmospheric terms, and the global illumination term accounting for reflected radiation. Due to the complexity and recursive nature of the general integral equation, the equation is always reformulated to produce a computationally tractable solution for synthetic scene generation applications. The nature of this reformulation, and the spectral, spatial, and directional dependencies placed on each term, are the main differences between synthetic image generation systems. In CAMEO-SIM a general bidirectional, spectral, recursive solution is sought using importance-driven Monte Carlo sampling. This algorithm can be scaled to provide radiance image predictions that contain a subset of all of the features defined by Eq. (1).
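As a simplified illustration of how a Monte Carlo estimator for the reflected term of Eq. (1) can be built, the sketch below draws cosine-weighted directions over the hemisphere; with that sampling density the cosine factor in the integrand cancels and the estimator reduces to \pi times the mean of the sampled BRDF-radiance products. This is a minimal sketch of the sampling idea only, not a description of the more elaborate importance-driven scheme used in CAMEO-SIM; all names are ours.

import numpy as np

rng = np.random.default_rng(0)

def cosine_weighted_directions(n):
    """Draw n hemisphere directions with pdf(theta, phi) = cos(theta)/pi."""
    u1, u2 = rng.random(n), rng.random(n)
    theta = np.arccos(np.sqrt(1.0 - u1))
    phi = 2.0 * np.pi * u2
    return theta, phi

def reflected_radiance_mc(brdf, incident_radiance, n_samples=100_000):
    """Monte Carlo estimate of the reflected term of Eq. (1).

    Because the sampling pdf is cos(theta)/pi, the cos(theta) factor in
    the integrand cancels and the estimator is pi * mean(brdf * N_i).
    """
    theta, phi = cosine_weighted_directions(n_samples)
    return np.pi * np.mean(brdf(theta, phi) * incident_radiance(theta, phi))

# Sanity check: a Lambertian BRDF rho_d/pi under uniform incident
# radiance N should return rho_d * N.
rho_d, N = 0.5, 10.0
est = reflected_radiance_mc(lambda t, p: rho_d / np.pi + 0.0 * t,
                            lambda t, p: N + 0.0 * t)
print(est)   # ~5.0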
At longer wavelengths, such as the mid-infrared, the full hemispherical integration of the incident irradiance enables the software to account for the radiative interaction between different surfaces, e.g., hot engine radiation reflecting off other parts of the vehicle or neighboring scene elements. The importance of target-scene radiative interaction in scene simulation frequently becomes evident, as shown in the example in Fig. 1.

The images shown in Fig. 1 were produced using CAMEO-SIM and illustrate the effects of different assumptions made in the solution of the EO radiation transport equation for an air target in near-zero contrast with the sky as background. In the simulation, an aircraft with a near-normal hemispherical surface reflectivity of 70%, flying over a terrain at an altitude of 500 m, is viewed against a sky background in the 3 to 5 μm band. Figure 1(a) shows the predicted image calculated using an approximation to the radiation transport equation commonly employed in simulations, where target-scene interactions are ignored and only direct atmospheric illumination is accounted for. Figure 1(b) shows the aircraft under the same conditions, but accounting for the full hemispherical integral of the incident flux arriving at the surfaces due to the scene environment. In this case, the underside of the aircraft is now in positive contrast to the sky background due to the incorporation of both earth thermal reflection and albedo terms. The importance of these interactions, and of an appropriate solution of the radiation transport equations, in the camouflage assessment role is clearly highlighted in determining the survivability of aircraft.

This interaction feature can be extended, as is shown in Fig. 2, which shows CAMEO-SIM predictions of self-illumination during a predicted countermeasure release. By solving the full radiation transport equation [Eq. (1)], these important radiative interactions can be modeled in detail.

Fig. 1 Simulation of an image of an aircraft in the 3 to 5 μm band: (a) predicted appearance without scene interaction and (b) predicted with scene interaction.

Fig. 2 Simulation of an aircraft releasing a countermeasure (flare), illustrating how the CAMEO-SIM solution captures the self-illumination of the aircraft.

CAMEO-SIM computes the radiance in user-specified subbands for each pixel in the image. These subband radiance images can then be summed to produce an in-band radiance image. The software can display each subband either as a gray-scale image, or any three subbands as a false-color image. CAMEO-SIM can also display visible-band true-color imagery. This is done by first evaluating the spectral radiance image cube for a defined number of subbands between 380 and 780 nm. The spectral image cube is then converted into a device-independent color space, represented by the tristimulus values X, Y, and Z, using the CIE 1931 Colorimetric Standard Observer.3 The monitor used for the image display is calibrated both in terms of luminance and phosphor radiance, allowing the X, Y, and Z values to be converted to R, G, B values. Various luminance transforms are employed to make best use of the limited CRT dynamic range. An example 20-subband image, along with its true-color equivalent, is displayed in Fig. 3.

Fig. 3 Illustration of the combination of subband images to create a true-color image. The upper part of the figure shows the sequence of subband images from 380 through 780 nm. These are combined in a weighted fashion using conventional colorimetric methods to produce a true-color image, which can then be displayed on a calibrated color monitor.

The output from CAMEO-SIM must still be processed in some fashion to derive target visibility predictions. It therefore relies either on the use of observer trials or on the input of the imagery into further models able to predict visual target conspicuity. Since it generates imagery, it is most suitable to use imaging vision models such as the Georgia Tech Vision (GTV) model,4 rather than traditional parametric models such as ORACLE.5

3 Analytical Verification Tests

CAMEO-SIM Version 1.0 is complete and is now undergoing verification and validation. A range of verification tests has been developed that exercises different elements of the high-fidelity rendering equations implemented within CAMEO-SIM. All the tests have analytic solutions. Table 1 summarizes the tests and the results obtained. A description of each test is given in the following sections.

3.1 Blackbody Radiance Test

The purpose of this test is to ensure that the blackbody radiance is calculated correctly. A one-meter-square, uniformly textured facet is created and the temperature of the facet set to a known value. The line of sight of the observer is centered on and perpendicular to the facet. The radiance of a perfect blackbody is calculated and compared with the value computed within CAMEO-SIM.

3.2 Contrast in an Isothermal Environment

The purpose of this test is to ensure that the correct radiance contrast is predicted for isothermal, vacuum, radiometric environments. The skyshine radiance terms are set to constant values. A one-meter-square surface is defined to be a perfect diffuse reflector and the line of sight of the observer is centered on and perpendicular to the facet. The radiance of the square is calculated and compared with the value computed within CAMEO-SIM.
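The expected values for these first tests are band integrals of the Planck function (Table 1 quotes them for the 8 to 12.5 μm band). A minimal sketch of such a reference calculation is given below; the facet temperature used in the actual tests is not stated here, so the 300 K value in the example call is purely illustrative.

import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def band_radiance(temp_k, lam_lo_um, lam_hi_um, n=4000):
    """In-band blackbody radiance (W m^-2 sr^-1): trapezoidal
    integration of the Planck function between two wavelengths."""
    lam = np.linspace(lam_lo_um, lam_hi_um, n) * 1e-6   # um -> m
    spec = (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp_k))
    return 0.5 * np.sum((spec[1:] + spec[:-1]) * np.diff(lam))

# The facet temperature used in the test is not given in the text, so
# 300 K here is illustrative only.
print(band_radiance(300.0, 8.0, 12.5))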
3.3 Calculation of Shadowing and Blocking

Blocking is the rendering process that ensures that parts of the object that are not visible to the observer, due to obstruction by another part, are correctly accounted for. Shadowing is the rendering process that ensures that parts of the object do not reflect the point sources if they are obscured from them by other parts. This test has been designed to ensure that the blocking and shadowing algorithms are working accurately. The geometry for this test is shown in Fig. 4, which shows two square plates with the lower plate a 100% diffuse reflector and the top plate black and at 0 K. The observer and sun are at 45 deg to the geometry. The radiance N of the illuminated pixels in the image is:

N = \rho Q / \pi ,   (3)

where N is the radiance in W m^-2 sr^-1, Q is the normal incident irradiance in W m^-2, and \rho is the diffuse reflectance of the lower plate. The solar irradiance is set to a fixed value. The radiance of the shadowed, blocked, and irradiated areas is calculated and compared with the values computed within CAMEO-SIM.

Fig. 4 Diagram showing the geometry used to verify the blocking and shadowing computations. The upper shadowing plate is of size D and is at distance D from the lower plate. The lower plate is 3 × D in extent.

3.4 Spectral Calculations

The purpose of this test is to verify that the spectral integrations are being calculated accurately. To test this, a defined solar spectral irradiance is used to illuminate an artificial spectral material observed with a spectrally selective sensor. The spectral variation in the material properties, the light intensity, and the sensor response is defined. For the general case, the in-band reflected radiance between the upper and lower wavelengths is given by:

N = \int_{\lambda_1}^{\lambda_2} \frac{J(\lambda)}{s^2}\, \cos(\theta_i)\, R(\lambda)\, \rho''(\lambda)\, d\lambda ,   (4)

where N is the in-band radiance (W sr^-1 m^-2), \theta_i is the incidence angle between source and reflector (radians), J(\lambda) is the source intensity (W sr^-1) at wavelength \lambda, R(\lambda) is the sensor spectral response, \rho''(\lambda) is the spectral bidirectional reflectivity (sr^-1), and s is the distance to the source (m).
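A reference value for this test can be obtained by direct numerical quadrature of Eq. (4). The sketch below is one such implementation; the function name and argument conventions are ours, and the units simply follow the definitions given above.

import numpy as np

def in_band_reflected_radiance(lam_um, J, R, rho_bd, theta_i_rad, s_m):
    """Numerical form of Eq. (4) by trapezoidal quadrature.

    lam_um      : wavelength grid (um)
    J           : source intensity J(lambda) on that grid (W sr^-1)
    R           : sensor spectral response (dimensionless)
    rho_bd      : spectral bidirectional reflectivity rho''(lambda) (sr^-1)
    theta_i_rad : incidence angle between source and reflector (rad)
    s_m         : distance to the source (m)
    """
    lam = np.asarray(lam_um) * 1e-6   # um -> m, matching d(lambda)
    integrand = (np.asarray(J) / s_m**2) * np.cos(theta_i_rad) \
                * np.asarray(R) * np.asarray(rho_bd)
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(lam))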
3.5 Radiometric Calculation of Lighting Effects

The purpose of this test is to verify that the radiometric effects of light sources are being accurately represented. The geometry of the test is shown in Fig. 5(a), and a plot of computed and rendered radiance is shown in Fig. 5(b), together with the difference between the computed and rendered radiance. It must be noted that the analytical solution assumes that the radiant intensity is evaluated at the pixel's center, whereas the image's radiant intensity is supersampled across a pixel. This introduces a small difference from the analytical solution.

Fig. 5 (a) Diagram of the geometry used to verify the distribution of illumination from a light source and (b) graph of computed and predicted radiance as a function of pixel position for the geometry in 5(a).

Table 1 Summary of validation test results. All values are radiance (W m^-2 sr^-1) unless otherwise stated.

Test                                        | Expected result                                                        | Calculated
Blackbody radiance                          | Blackbody radiance = 42.89 (8 to 12.5 μm band)                         | 42.89
Contrast in isothermal environment          | Center pixel radiance = 35.23 (8 to 12.5 μm band)                      | 35.23
Shadowing and blocking                      | a. Radiance of irradiated area = 5.1768                                | a. 5.1768
                                            | b. Radiance of blocked area = 0.0                                      | b. 0.0
                                            | c. Radiance of shadowed area = 0.0 (3 to 5 μm band)                    | c. 0.0
Spectral calculation                        | Center pixel radiance (3 to 5 μm band) = 1.49                          | 1.49
Radiometric calculation of lighting effects | Radiance variation: center 0.31806; edge 0.0094239                     | center: 0.31831; edge: 0.0094248
Directional emission                        | Slope of radiance along centreline = 60.01 W m^-2 pixel^-1             | 59.932 W m^-2 pixel^-1
Multiple material assignment on a texture   | Blackbody radiance = 8.975; gray-body radiance = 4.4875 (3 to 5 μm band) | Blackbody radiance = 8.975; gray-body radiance = 4.4875
Bidirectional reflectivity                  | Illuminated pixel radiance = 2.3                                       | 2.3
Small target rendering                      | Integrated facet radiant intensity = 1.806 W sr^-1 (3 to 5 μm band)    | 1.813 W sr^-1

3.6 Directional Emission of Uniformly Textured and Heated Spheres

This test verifies that the second-pass renderer accounts for the directional emissivity correctly when the object is nominated as having directional optical properties. Two uniformly textured spheres of 2 m diameter are set to a known temperature. For one of the spheres the vertex normals are equal to the facet normals, and for the other an appropriate angle is chosen for generating the vertex normals. Therefore, both flat faceting and vertex-normal interpolation in the second-pass renderer are tested. The variation in pixel radiance from the center of the sphere to the outside edge should be linear for the vertex-normal interpolated sphere, and approximated with a stepped variation for the flat-facet sphere.

3.7 Textured Heated Billboard for Testing Multiple Material Assignments on a Texture

The purpose of this test is to ensure that textures that have been classified using multiple material associations and transparency are interpreted properly by CAMEO-SIM. To test this aspect, a heated, uniformly textured billboard with a transparent section is rendered. A 256 × 256 texture image containing two rectangles and a transparent region is created. One rectangle is classified as a blackbody perfect diffuser and set to a known temperature. The other rectangle is set to be a gray-body perfect diffuser at the same temperature.

3.8 Bidirectional Reflectivity of Uniformly Textured and Heated Spheres

The purpose of this test was to verify that CAMEO-SIM is interpreting the bidirectional reflectance function correctly. To keep the solution to the bidirectional reflectance distribution function problem analytically tractable, a BRDF file was used that represents a gray, semispecular, retroreflecting BRDF such that:

BRDF = \frac{1}{\pi \cos(\theta)} , \quad \theta \le 30 \text{ deg} ;
BRDF = 0 , \quad \theta > 30 \text{ deg} ,

where \theta is the angle of incidence. Two spheres are created: for one sphere the vertex normals are equal to the facet normals, and for the other sphere an appropriate angle is chosen for generating the vertex normals. The line of sight of the observer is set to view the spheres from above, with the sun positioned above the observer.

3.9 Small Target Rendering

The purpose of this test was to ensure that CAMEO-SIM treats small targets to an acceptable accuracy (an essential requirement for simulating potentially subpixel targets). To test this requirement, a sphere identical to that used in the BRDF test is rendered against a simple uniform background. The geometry of the test case is shown in Fig. 6(a), and the image formed for this test case should be similar to that shown in Fig. 6(b).

Fig. 6 (a) Geometry used to carry out the small target test and (b) a typical image produced by the test.
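For completeness, the test BRDF of Sec. 3.8, which is also used in the small-target test, can be written as a one-line function. The sketch below is ours, not CAMEO-SIM code; the closing comment notes the property that keeps the analytic solution simple when the sun sits directly behind the observer.

import numpy as np

def retro_brdf(theta_rad):
    """Gray semispecular retroreflecting BRDF of Sec. 3.8:
    1 / (pi * cos(theta)) for theta <= 30 deg, zero beyond."""
    theta_rad = np.asarray(theta_rad, dtype=float)
    inside = theta_rad <= np.radians(30.0)
    safe_cos = np.maximum(np.cos(theta_rad), 1e-12)   # avoid divide-by-zero
    return np.where(inside, 1.0 / (np.pi * safe_cos), 0.0)

# With the sun directly behind the observer, a surface point at
# incidence theta has radiance
#     N = retro_brdf(theta) * Q * cos(theta) = Q / pi
# everywhere inside the 30 deg cone, i.e., a constant illuminated-pixel
# radiance, which is consistent with the single expected value quoted
# for this test in Table 1.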
4 Validation

4.1 Issues

The issues surrounding the validation of any piece of simulation software are often complex, and CAMEO-SIM is no exception. Furthermore, the fact that CAMEO-SIM aims to physically represent the real world, in many electromagnetic wavebands, adds considerably to the difficulties, since we still have neither the basic databases nor the necessary understanding of what constitutes the real world.6 In addition, since the whole purpose of CAMEO-SIM is to represent scenarios that may not exist or are impossible to document, there may in fact be no equivalent real world, unlike the situation that exists within the image compression domain.7–9 This can be illustrated at the simplest level by considering the geometry and culture that are used to describe a scenario. It is possible to achieve an exact match with the terrain geometry by using detailed map information, but it is impossible to achieve exactly the same geometry for the culture present in that terrain, e.g., tree structure. This means that validation methods that assume there is some real-world database of measurements that can be directly compared with the output from the simulation cannot work. The validation processes that we are using are, as a result, more abstract and involve three separate approaches: first, to use highly simplified scenarios that can be both synthesized within CAMEO-SIM and measured; second, to compare simulated and real-world imagery; and finally, to examine observer performance with real and synthetic imagery. We report here only initial results from the first two of these approaches. The comparisons reported are made using radiometric data. Additional validation is making use of a range of image metrics, such as higher-order statistics, to better understand these issues.

4.2 Simple Imagery Validation

A trial has been conducted involving imaging a simple object viewed against a uniform background. The object is a hollow metal step-like structure called CUBI. It is constructed from 3-mm mild steel lined with 23-mm polystyrene insulation and finished in a matte green paint, as shown in Fig. 7. It was mounted on a turntable, so that the aspect could be changed, and was positioned on a uniform area of concrete. A large-area blackbody was positioned close to CUBI, so that it could be included in any imagery. Images were taken using an AGEMA™ 980 imaging radiometer (3 to 5 and 8 to 12 μm bands) at different times of the day under sunny and cloudy conditions. Calibrated visible-band imagery was obtained with a Kodak DCS 420™ digital camera. The surface temperatures of the concrete, brick wall, and different parts of CUBI were measured with a contact temperature probe.

Fig. 7 Photograph of the metal step object (CUBI) used in the validation experiments. The object is mounted on a rotation table standing on a concrete base.

The same scene was rendered within CAMEO-SIM using both predicted and measured temperatures. CAMEO-SIM currently depends on another suite of models (MOSART and TERTEM) to provide a parameterized atmosphere and to perform the heat-transfer calculations for the materials used in the scene. MAT, the front end of this suite, has limited functionality. It allows the use of 19 standard atmosphere types, which can only be modified in terms of temperature, humidity, and wind speed by one standard deviation from their means. A default option is available, which picks what it believes to be the most appropriate atmosphere according to a user-defined latitude and longitude position. The apparent temperatures in the CAMEO-SIM images (8 to 12 μm band) were then compared (see Fig. 8).
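The text does not state how apparent temperatures were derived from the imagery; a common approach, sketched below under that assumption, is to invert the in-band Planck integral so that each pixel radiance maps to the temperature of an equivalent blackbody. All names are ours.

import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def band_radiance(temp_k, lam_lo_um, lam_hi_um, n=4000):
    """In-band blackbody radiance (W m^-2 sr^-1)."""
    lam = np.linspace(lam_lo_um, lam_hi_um, n) * 1e-6
    spec = (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * temp_k))
    return 0.5 * np.sum((spec[1:] + spec[:-1]) * np.diff(lam))

def apparent_temperature(n_band, lam_lo_um, lam_hi_um,
                         t_lo=150.0, t_hi=500.0, iters=60):
    """Bisection on the monotonic band integral: find the temperature
    of a blackbody whose in-band radiance equals n_band."""
    for _ in range(iters):
        t_mid = 0.5 * (t_lo + t_hi)
        if band_radiance(t_mid, lam_lo_um, lam_hi_um) < n_band:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

# e.g. apparent_temperature(40.0, 8.0, 12.0) recovers the apparent
# temperature of a pixel with 40 W m^-2 sr^-1 in the 8 to 12 um band
# (an illustrative radiance value, not one taken from the trial).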
One condition has been analyzed so far, and the predicted temperatures were found to differ from those measured. Similar effects have been observed elsewhere.10 Initial analysis of the imagery showed that the default option was not producing a viable atmosphere or set of terrain temperatures. The main differences have been found to be due to incorrect definition of the thermal material properties and inaccurate atmospheric modeling, in particular solar irradiance levels. The sunny condition in the UK was not as sunny as that predicted using MOSART™ (there were intermittent clouds), and the overcast day was probably not as overcast as that modeled within MOSART. The measured temperatures were found to lie between the predicted temperatures.