

ABAP SPTA Parallel Processing


Parallel Processing Technique in SAP ABAP Using the SPTA Framework

With the advent of HANA and in-memory processing, this topic might look mistimed. But many organizations still have no plan to move to HANA within the next couple of years. As they say, the show must go on, and that motivated us to publish this long-pending article for those ABAPers who still have to deal with millions of rows in batch jobs and who feel that "the nights are getting too short to execute those batch jobs in SAP" (inspired by a friend's blog).

Why is parallel processing required?

Parallel processing is used mainly to improve the performance of an ABAP program, particularly where the data volume is very high. The basic concept behind a parallel processing framework is to divide a large volume of data into several small work packets and to process the different work packets in different tasks. The packets are then processed at the same time, in parallel, which significantly reduces the total runtime. Nowadays most distribution-related projects handle large volumes of data, so invoking a parallel processing framework is very useful for reducing runtime.

Conventional Parallel Processing

We can implement parallel processing by calling any RFC-enabled function module with the STARTING NEW TASK addition. After determining the number of work packets, we create a separate task for each work packet and process the tasks in parallel.

Why is the SPTA framework required?

The SPTA framework is the most sophisticated and secure framework for parallel processing provided by SAP. If we want to handle many records and update or check multiple database tables in parallel, the conventional approach is difficult to get right and can run into ABAP memory issues. The SPTA framework has built-in safeguards for these ABAP memory issues, so it is very secure.
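SPTA itself is ABAP-specific, but the work-packet pattern it implements is language-independent. A minimal sketch of the same idea in Python (all names, the packet size of 10, and the thread pool are illustrative assumptions, not part of SPTA):

```python
from concurrent.futures import ThreadPoolExecutor

PACKET_SIZE = 10  # records per work packet (an illustrative choice)

def split_into_packets(rows, size=PACKET_SIZE):
    """Divide the full data set into small work packets."""
    return [rows[i:i + size] for i in range(0, len(rows), size)]

def process_packet(packet):
    """Stand-in for the per-packet business logic (here: just sort it)."""
    return sorted(packet)

def run_in_parallel(rows, max_tasks=4):
    """Process every work packet in its own task, then collect the results."""
    packets = split_into_packets(rows)
    with ThreadPoolExecutor(max_workers=max_tasks) as pool:
        processed = list(pool.map(process_packet, packets))
    final = []
    for chunk in processed:   # collect results into one final table
        final.extend(chunk)
    return final
```

In SAP the "tasks" are asynchronous RFCs running in separate dialog work processes of a server group, not threads; the sketch only shows the split/process/collect shape of the technique.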
Also, the SPTA framework is very easy to implement: all of the parallel processing mechanics are handled by SAP, so we do not need to worry about them ourselves. In that sense, too, it is a very sophisticated framework.

SPTA Parallel Processing Framework

To invoke the SPTA framework we call the standard SAP function module SPTA_PARA_PROCESS_START_2. When using this function module, we implement three subroutines that contain our own processing logic:

1. BEFORE_RFC_CALLBACK_FORM: Called by the function module before the RFC function module is invoked. Here we build the work packets that we want to process in the RFC.
2. IN_RFC_CALLBACK_FORM: Called by the function module after the work packets are created. In this routine we can call our own RFC-enabled function module, or write custom code, to process each work packet.
3. AFTER_RFC_CALLBACK_FORM: Called by the function module at the end. After all work packets have been processed, we collect the processed data here.

We also pass a server group when calling the function module. Server groups are maintained in transaction RZ12, which is a Basis activity. In the CHANGING parameter we pass the complete internal table containing all the data; from this internal table we will create the small work packets (i.e. internal tables) for parallel processing. In the callback program name parameter we pass the name of the calling program.

(Do not confuse parallel processing with the parallel cursor technique.)

Now let us discuss the three subroutines, and how to call them, in detail.

BEFORE_RFC_CALLBACK_FORM: In this routine we create the small internal tables, referred to as work packets, which are processed in parallel in the IN_RFC routine. Please refer to the screenshot below. All of the parameters passed to this subroutine are mandatory. Here we first create the small work packets.
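The control flow that SPTA_PARA_PROCESS_START_2 drives can be mimicked with three plain callbacks. This Python sketch is only a structural analogue of the three subroutines (the names before_rfc, in_rfc, and after_rfc mirror the SPTA callback forms; none of this is SAP API, and the packets run sequentially here for clarity):

```python
def spta_like_driver(all_rows, before_rfc, in_rfc, after_rfc, packet_size=10):
    """Structural analogue of SPTA_PARA_PROCESS_START_2:
    build packets, process each one, then collect the results."""
    # BEFORE_RFC phase: cut the full table into work packets.
    packets = before_rfc(all_rows, packet_size)
    # IN_RFC phase: in SPTA each packet would run in its own task.
    processed = [in_rfc(p) for p in packets]
    # AFTER_RFC phase: gather everything into the final table.
    return after_rfc(processed)

# Example callbacks, analogous to the three SPTA subroutines.
def before_rfc(rows, size):
    return [rows[i:i + size] for i in range(0, len(rows), size)]

def in_rfc(packet):
    return sorted(packet)          # stand-in for the real business logic

def after_rfc(processed_packets):
    final = []
    for packet in processed_packets:
        final.extend(packet)
    return final
```

In SPTA each in_rfc call would execute as an asynchronous RFC in a work process of the configured server group; the driver above only shows how the three callbacks hand data to one another.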
In the code above, each work packet is defined to contain 10 records. After creating a packet, I encode the data for further downstream processing (SPTA ships the packet data between the subroutines in an encoded INDX-style table). We also inform the task manager that an RFC can be started by passing 'X' in the START_RFC field.

IN_RFC_CALLBACK_FORM: In this routine we write our own processing logic for the data. We can call an RFC-enabled function module from this routine, or write the logic inside the routine itself. A separate task is created for each work packet, and each task calls this routine to process its data. Please refer to the screenshot below. In the code above, I first decode the encoded data arriving from the BEFORE_RFC_CALLBACK_FORM routine for each work packet, then apply the processing logic (write your own logic or call an RFC-enabled function module; in this example I simply sort the random data). Finally, I encode the data again for downstream processing in the AFTER_RFC_CALLBACK_FORM routine.

AFTER_RFC_CALLBACK_FORM: In this routine we collect the data after all processing is done; essentially, this is where we prepare the final internal table. Please refer to the attached screenshot. In the example, I decode the data again and display all the records. If any unit fails during processing in IN_RFC_CALLBACK_FORM, no data is collected, because if a unit fails we must not build the final table from an incomplete set of valid records. We can catch the failed units by using IF_RFCSUBRC and IF_RFCMSG.

So, by using this function module, we can invoke parallel processing in a sophisticated and secure manner. Please download the code used in the above demonstration from .

Please note: we can also design our own parallel processing technique without using the SPTA framework. The concept remains the same in a custom design: records are processed in multiple different tasks that run in parallel.
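The encode/decode round trip between the callbacks (done in ABAP with the SPTA helper function modules SPTA_INDX_PACKAGE_ENCODE and SPTA_INDX_PACKAGE_DECODE) is just serialization of a work packet. A rough Python analogue, using pickle purely for illustration:

```python
import pickle

def encode_packet(rows):
    """Analogue of SPTA_INDX_PACKAGE_ENCODE: serialize a work packet
    so it can be shipped to the task that will process it."""
    return pickle.dumps(rows)

def decode_packet(blob):
    """Analogue of SPTA_INDX_PACKAGE_DECODE: restore the packet
    inside the IN_RFC / AFTER_RFC step."""
    return pickle.loads(blob)

packet = [{"id": 3}, {"id": 1}, {"id": 2}]
blob = encode_packet(packet)                       # BEFORE_RFC side
restored = decode_packet(blob)                     # IN_RFC side
processed = sorted(restored, key=lambda r: r["id"])
```

The point is only that each side of the callback boundary sees the same packet; the ABAP helpers do the same job for internal tables.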
So the processing time is reduced manifold. My friend Partha (whom I referred to in the first paragraph) has explained custom parallel processing using a program, and there is also an SCN blog in which Partha explains the concept and the debugging of the SPTA framework in a very illustrative way.

HID Protocol

Universal Serial Bus (USB)
Device Class Definition for Human Interface Devices (HID)
Firmware Specification, 6/27/01, Version 1.11
Please send comments via electronic mail to: hidcomments@
1996-2001 USB Implementers' Forum. All rights reserved.

Contents
1. Preface
1.1 Intellectual Property Disclaimer
1.2 Contributors
1.3 Scope of this Revision
1.4 Revision History
1.5 Document Conventions
3. Management Overview
4. Functional Characteristics

Image Parsing: Unifying Segmentation, Detection, and Recognition


Image Parsing: Unifying Segmentation, Detection, and Recognition

Zhuowen Tu, Xiangrong Chen, Alan L. Yuille, Song-Chun Zhu
University of California, Los Angeles, Los Angeles, CA 90095
ztu, xrchen, yuille, sczhu@

Abstract

We propose a general framework for parsing images into regions and objects. In this framework, the detection and recognition of objects proceed simultaneously with image segmentation in a competitive and cooperative manner. We illustrate our approach on natural images of complex city scenes where the objects of primary interest are faces and text. The method makes use of bottom-up proposals combined with top-down generative models using the Data Driven Markov Chain Monte Carlo (DDMCMC) algorithm, which is guaranteed to converge to the optimal estimate asymptotically. More precisely, we define generative models for faces, text, and generic regions, e.g. shading, texture, and clutter. These models are activated by bottom-up proposals. The proposals for faces and text are learnt using a probabilistic version of AdaBoost. DDMCMC combines reversible jump and diffusion dynamics to enable the generative models to explain the input images in a competitive and cooperative manner. Our experiments illustrate the advantages and importance of combining bottom-up and top-down models and of performing segmentation and object detection/recognition simultaneously.

1. Introduction

This paper presents a framework for parsing images into regions and objects. We demonstrate a specific application on outdoor/indoor scenes where image segmentation, the detection of faces, and the detection and reading of text are combined in an integrated framework. Fig. 1 shows an example in which a natural image is decomposed into generic regions (e.g. texture or shading), text, and faces. The tasks of obtaining these three constituents have traditionally been studied separately, sometimes with detection and recognition being performed after segmentation [10], and sometimes with detection being a separate process, see for
example [20]. But there is no commonly accepted method of combining segmentation with recognition. In this paper we show that our image parsing approach gives a principled way of addressing all three tasks simultaneously in a common framework which enables them to be solved in a cooperative and competitive manner.

Figure 1: Illustration of parsing an image into generic regions (e.g. texture and shading) and objects. An example image (a) is decomposed into two layers: (b) the region layer, and the object layer, which is further divided into text (c) and faces (d).

There are clear advantages to solving these tasks at the same time. For example, examination of the Berkeley dataset [11] suggests that human observers sometimes use object-specific knowledge to perform segmentation, but this knowledge is not used by current computer vision segmentation algorithms [9, 18]. In addition, as we will show, segmentation algorithms can help object detection by "explaining away" shadows and occluders. The application in this paper is motivated by the goal of designing a computer vision system for the blind that can segment images and detect and recognize important objects such as faces and text.

We formulate the problem as Bayesian inference. Top-down generative models are used to describe how objects and generic region models (e.g. texture and shading) generate the image intensities. The goal of image parsing is to invert this process and represent an input image by the parameters of the generative models that best describe it, together with the boundaries of the regions and objects. It is crucial that all the generative models generate raw image intensities. This enables us to directly compare different models (e.g. by model selection) and thereby treat segmentation, detection, and recognition in an integrated framework. For example, this requirement prevents us from using Hinton et al.'s generative models for text [14], because these models generate image features and
not raw intensities.

In order to estimate these parameters we use bottom-up proposals, based on low-level cues, to guide the search through the parameter space. More specifically, we combine bottom-up and top-down cues using the Data Driven Markov Chain Monte Carlo (DDMCMC) algorithm [18, 19], which is, in theory, guaranteed to converge to the MAP estimate asymptotically.

The bottom-up proposals for faces and text are learnt from training data by using a variant of the AdaBoost algorithm that outputs conditional probabilities [5] instead of classifications [20]. The use of conditional probabilities means that we do not have to make a firm decision based on AdaBoost and can instead use evidence from the generative models to resolve difficult cases. This improves performance particularly in the presence of occluders and shadows (which can be explained away by the other region models). The top-down generative models for faces and text are based on models with parameters estimated from training data. The bottom-up proposals and top-down generative models for generic regions are those used in previous work [18, 19], where they were tested on several hundred images.

The structure of this paper is as follows. Section (2) briefly reviews previous work on segmentation, face detection, and text detection and reading. In section (3), we describe the representation and the DDMCMC algorithm.
Section (4) describes the generative models for faces and text. In section (5), we describe the use of the AdaBoost algorithm to learn conditional probability distributions. The design of the DDMCMC jump and diffusion dynamics is briefly discussed in section (6). Section (7) shows the results of using AdaBoost by itself, and then the results obtained by our image parsing approach.

2. Related Work on Segmentation, Detection and Recognition

No existing work, to the best of our knowledge, combines segmentation, detection, and recognition in an integrated framework. These tasks have often been treated independently and/or sequentially. For example, Marr [10] proposed performing high-level tasks, such as object recognition, on intermediate representations obtained by segmentation and grouping. Current segmentation algorithms [9, 18] perform well on large datasets, although they do not yet achieve the ground-truth results obtained by human subjects [11]. From one perspective, the work in this paper extends the DDMCMC segmentation algorithm [18] by introducing object-specific models.

There has also been impressive work using image features for face detection [3, 15, 17, 21, 22, 20] and for text detection and recognition [8, 16, 1]. These approaches can all be used to specify bottom-up proposals for object detection in DDMCMC. It is most convenient for us to use the AdaBoost approach [20] because of its effectiveness and its probabilistic interpretation, see section (5). The generative models we use are based on generic region models (e.g. texture and shading) [18] and deformable templates [6, 7]. Similar models were proposed for text [14] but cannot be used here because they generate image features and not intensities.

3. Bayesian Formulation

We formulate image parsing as Bayesian inference. A scene interpretation includes a number of generic regions, letters and digits, and faces, denoted by $W^r$, $W^l$, and $W^f$ respectively. The region representation includes the number of regions $K^r$, and each region $R_i$ has a label $\ell_i$ and parameters $\Theta_i$ for its intensity
model:
$$W^r = (K^r, \{(R_i, \ell_i, \Theta_i) : i = 1, \dots, K^r\}).$$
Similarly, we have
$$W^l = (K^l, \{(L_j, \Theta_j) : j = 1, \dots, K^l\}), \qquad W^f = (K^f, \{(F_k, \Theta_k) : k = 1, \dots, K^f\}),$$
where $K^l$ and $K^f$ are the numbers of letters/digits and faces. Thus, the solution vector is of the form
$$W = (W^r, W^l, W^f).$$
The goal is to estimate the most probable interpretation $W^*$ of an input image $I$. This requires computing the $W$ that maximizes the a posteriori probability over $\Omega$, the solution space of $W$:
$$W^* = \arg\max_{W \in \Omega} p(I \mid W)\, p(W). \tag{1}$$
The likelihood $p(I \mid W)$ specifies the image generating process from $W$ to $I$, and the prior probability $p(W)$ represents our prior knowledge of the world. By assuming mutual independence between $W^r$, $W^l$, and $W^f$, we have the prior model
$$p(W) = p(W^r)\, p(W^l)\, p(W^f).$$
To make generic regions, text, and faces directly comparable, we define the region prior with area and perimeter terms,
$$p(W^r) \propto \exp\Big\{-\sum_{i=1}^{K^r} \big(\alpha\,|R_i| + \gamma\,|\partial R_i|\big)\Big\}. \tag{2}$$
Details about the definition of the region models can be found in [18]. The priors $p(W^l)$ and $p(W^f)$ are defined analogously. The likelihood function can be written as
$$p(I \mid W) = \prod_{i} p(I_{R_i} \mid \ell_i, \Theta_i)\, \prod_{j} p(I_{L_j} \mid \Theta_j)\, \prod_{k} p(I_{F_k} \mid \Theta_k),$$
where $I_{R_i}$ denotes the image intensities inside region $R_i$, and similarly for letters and faces.

We use the DDMCMC algorithm for estimating $W^*$. DDMCMC [18] is a version of the Metropolis-Hastings algorithm and hence is guaranteed to converge to samples from the posterior. It employs data-driven bottom-up proposals to drive the convergence of top-down generative models. Moves from a state $W$ to a state $W'$ are selected by sampling from a proposal distribution $q(W \to W')$ and are accepted with probability $\alpha$:
$$\alpha(W \to W') = \min\Big(1,\ \frac{p(W' \mid I)\, q(W' \to W)}{p(W \mid I)\, q(W \to W')}\Big).$$

Figure 2: Illustration of the DDMCMC approach for segmentation, detection, and recognition.

These moves can be subdivided into two basic types: jumps, which realize moves between different dimensions, and diffusions, which realize moves within a fixed dimension.
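The accept/reject step is the standard Metropolis-Hastings rule. A minimal sketch in Python, with a toy one-dimensional unnormalized density standing in for $p(W \mid I)$ and a symmetric random-walk proposal so the $q$ terms cancel (all names and constants here are illustrative):

```python
import math
import random

def unnorm_posterior(w):
    """Toy stand-in for p(W | I): an unnormalized Gaussian centered at 3."""
    return math.exp(-0.5 * (w - 3.0) ** 2)

def metropolis_hastings(n_steps, step=0.5, seed=0):
    rng = random.Random(seed)
    w = 0.0
    samples = []
    for _ in range(n_steps):
        w_new = w + rng.gauss(0.0, step)   # symmetric proposal, q cancels
        # Acceptance probability: min(1, p(w') / p(w)).
        a = min(1.0, unnorm_posterior(w_new) / unnorm_posterior(w))
        if rng.random() < a:
            w = w_new
        samples.append(w)
    return samples

samples = metropolis_hastings(20000)
mean = sum(samples[5000:]) / len(samples[5000:])   # discard burn-in
```

In DDMCMC the state is the full parse $W$ and the proposals are the data-driven jump and diffusion moves; the accept/reject arithmetic is exactly this.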
Firstly, jump moves, which are discrete, correspond to the birth/death of region hypotheses, the splitting and merging of regions, switching the model for a region (e.g. changing from a texture model to a spline model), changing a generic region into a face, creating a letter, and so on. Secondly, diffusion processes correspond to continuous changes, such as altering the boundary shape of a region, text, or face, and changing the parameters of the model used to describe a region. Fig. 2 gives a schematic illustration of how the jump and diffusion dynamics proceed, driven by bottom-up proposals.

The bottom-up proposals for faces and text are learnt using a probabilistic version of AdaBoost, see section (5). The bottom-up proposals for generic regions (e.g. shading and texture) were described in [18]. In summary, bottom-up proposals drive top-down generative models, which compete with each other to explain the image.

4. Generative Models

This section describes our generative models. For reasons of space we concentrate on the text model. The models will be used for text detection and reading.

Figure 3: Random samples drawn from the generative models for letters and digits.

In natural scenes, text such as street signs and store names is usually painted in regular fonts, which can be modeled by deformable templates. We define a set of templates corresponding to the ten digits and the twenty-six letters in upper and lower case (62 templates in all).
Each template is represented by an outer boundary and zero, one, or two inner boundaries, each of which is modeled by twenty-five control points. Given an input image, we need to infer how many text symbols there are, which type they are, and what deformations they have. Starting from the standard shape of each symbol, we denote its shape by $(\tau, P, A)$, where $\tau$ is the index of the template, $P$ contains the positions of the control points, and $A$ denotes the affine transformation of the template. Thus, the prior distribution on the shape can be specified as
$$p(\tau, P, A) = p(\tau)\, p(P \mid \tau)\, p(A).$$
Here $p(\tau)$ is a uniform distribution on all the digits and letters. $p(P \mid \tau)$ is the probability of the perturbation of the control points with respect to the template, and it is computed from the distances between the contour points of the deformed shape and those of the template. Using quadratic B-splines, the contour points $c_m$ can be computed from the control points, and the distribution is expressed as
$$p(P \mid \tau) \propto \exp\Big\{-\sum_m \frac{d(c_m, \bar{c}_m)^2}{2\sigma^2}\Big\},$$
where $d(c_m, \bar{c}_m)$ is the distance between contour point $c_m$ and the corresponding template contour point $\bar{c}_m$. The prior on the affine transformation, $p(A)$, is defined such that severe rotation and distortion are penalized. Figure (3) shows some samples drawn from the above model.

The intensities of the text exhibit a smooth shading pattern, which we model with a quadratic form
$$J(x, y) = a x^2 + b x y + c y^2 + d x + e y + f$$
with parameters $(a, b, c, d, e, f)$. The generative model for a pixel $(x, y)$ on the text is therefore $J(x, y)$ plus residual noise.

Figure 4: Samples drawn from the PCA face model.

The generative model for faces is simpler and uses techniques like Principal Component Analysis (PCA) to obtain representations of the faces. Lower-level features, also modeled by PCA, can be added [12]. Fig. 4 shows some faces sampled from the PCA model. We also add other features, such as an occlusion process, as described in Hallinan et al. [7].

5. AdaBoost and Conditional Probabilities

The standard AdaBoost algorithm, see for example [20], produces a binary decision, e.g. face or non-face. Here we follow Friedman et al. [5] and allow AdaBoost to estimate conditional probabilities instead. Standard AdaBoost learns a "strong classifier" by combining a set of weak classifiers $h_i(x)$ using a set of weights $\alpha_i$:
$$F(x) = \sum_i \alpha_i h_i(x),$$
where the selection of features and weights is learned through supervised off-line training [4]. Our
variant of AdaBoost outputs conditional probabilities and is based on the following theorem [5].

Theorem. The AdaBoost algorithm trained on data from two classes converges, in probability, to estimates of the conditional distributions of the data:
$$p(y = +1 \mid x) = \frac{e^{F(x)}}{e^{F(x)} + e^{-F(x)}}, \tag{3}$$
$$p(y = -1 \mid x) = \frac{e^{-F(x)}}{e^{F(x)} + e^{-F(x)}}. \tag{4}$$

We use AdaBoost to learn these conditional probability distributions so that they can activate our generative models (in practice, the conditional probabilities are extremely small for almost all parts of an image). This allows us to avoid premature decisions about the presence or absence of a face. By contrast, standard AdaBoost can be thought of as using these conditional distributions for classification via the log-likelihood ratio test.

5.1. AdaBoost Training

We used standard AdaBoost training methods [4, 5] combined with Viola and Jones' cascade approach using asymmetric weighting [20]. The cascade enables the algorithm to rule out most of the image as face, or text, locations with a few tests, and allows computational resources to be concentrated on the more challenging parts of the images (i.e.
in our terminology, regions where the conditional probabilities are non-negligible).

Figure 5: Positive training examples for AdaBoost. a. Text (from these, we extracted text segments). b. Faces.

Our text database contains 561 text images, some of which can be seen in Fig. 5. They were extracted by hand from 162 static images of San Francisco street scenes. More than half of the images were taken by blind volunteers (so as to simulate the conditions under which our system will eventually be used). We divided each text image into several overlapping text segments with a fixed width-to-height ratio of 2:1. There are in total 7,000 text segments in the positive training set. The negative examples were obtained by a bootstrap process similar to Drucker et al. [2]. First we selected negative examples by randomly sampling windows from the image dataset. After training with these samples, we applied the AdaBoost algorithm to classify all windows in the training images (at a range of sizes). Those misclassified as text were then used as negative examples for learning the conditional distributions. The image regions most easily confused with text were vegetation, repetitive structures such as railings or building facades, and some chance patterns. The features used for AdaBoost were image tests corresponding to the statistics of elementary filters; see the technical report for more details.

The AdaBoost for faces was trained in a similar way. This time we used Haar basis vectors [20] as elementary features. We used the FERET database [13] for our positive examples, see Fig. 5, and by allowing small rotation and translation transformations we had 5,000 positive examples.
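Equations (3) and (4) say that the strong-classifier score $F(x)$ converts into a probability through a simple logistic transform. A minimal sketch in Python (the weak-classifier votes and weights below are made up for illustration):

```python
import math

def conditional_prob(score):
    """Convert an AdaBoost strong-classifier score F(x) into
    p(y = +1 | x) = e^F / (e^F + e^-F), as in eqns. (3)-(4)."""
    return math.exp(score) / (math.exp(score) + math.exp(-score))

def strong_score(weak_outputs, weights):
    """F(x) = sum_i alpha_i * h_i(x), with each h_i in {-1, +1}."""
    return sum(a * h for a, h in zip(weights, weak_outputs))

# Illustrative weak-classifier votes and weights for one image window:
F = strong_score([+1, +1, -1], [0.8, 0.5, 0.3])
p_face = conditional_prob(F)   # greater than 0.5, since F > 0
```

Thresholding $F(x)$ at zero recovers the usual hard AdaBoost decision; keeping the probability lets the generative models arbitrate the borderline windows instead.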
We used the same strategy as described above (for text) to obtain negative examples.

In both cases, we tested AdaBoost for detection (i.e. for classification) using a number of different thresholds. In agreement with previous work on faces [20], AdaBoost gave very high performance with low false positives and false negatives, see Table (1). But the low error rates are slightly misleading because of the enormous number of windows in each image, see Table (1). This means that by varying the threshold we can eliminate either the false positives or the false negatives, but not both at the same time. We illustrate this by showing the face regions and text regions proposed by AdaBoost in figure (6). If we attempt classification by putting a threshold, then we can only correctly detect all the faces at the expense of false positives.

Object  False positives  False negatives  Images  Subwindows
Face    65               26               162     355,960,040
Face    918              14               162     355,960,040
Face    7542             1                162     355,960,040
Text    118              27               35      20,183,316
Text    1879             5                35      20,183,316

Table 1: Performance of AdaBoost at different thresholds.

Instead, we prefer to use the AdaBoost outputs as proposals for the generative models. Also, generic region proposals can find text that AdaBoost misses; for example, the '9' in the bottom panel of figure (6) fails to be detected by the text AdaBoost, but is detected as a generic "shading region" and later recognized as a '9'.

6. Computation and Algorithm

Given the mixture models in the formulation, and our interest in obtaining nearly globally optimal solutions, we design Markov chains to simulate walks in the solution space.

6.1. Diffusion Equations

Given a state $W$ with a fixed number of generic regions, text symbols, and faces, and their model parameters, the interactions between these elements are governed by PDEs for the boundary and template deformation. Fig. 7 illustrates the motion.

Figure 6: The boxes show faces and text as detected by AdaBoost. Observe the false positives due to vegetation, tree structure, and random image patterns. It is impossible to select a threshold which has no false
positives and false negatives for this image. Instead we use AdaBoost to output conditional probabilities, which take their largest values in the boxes and which are used in the DDMCMC algorithm.

Figure 7: The diffusion and evolution of the boundaries is driven by the competition PDEs between regions.

The PDEs are derived as greedy steps for minimizing the energy functions (i.e. the minus log-posterior probability) through variational calculus, in particular Green's theorem. For a boundary whose left and right components are regions or faces, its motion equation is similar to the one in the region competition algorithm [23]. There are three energy terms for a region: one for the likelihood, and two for the priors on area and perimeter defined in eqn. (2). Likewise for a letter: let $c(s)$ be a point on the boundary of the letter, generated by the spline from the control points $P$. The motion equation for the control points can be obtained as
$$\frac{dP}{dt} = J^{\top} \frac{dc}{dt},$$
where $J$ is the Jacobian matrix of the spline function. Thus, the control points are moved by the forces transferred from the boundary points through this motion equation.

6.2. Jump Dynamics

Structural changes in the solution are realized by Markov chain jumps (see [18]). We design the following reversible jumps between: (i) two regions (model switching); (ii) a region and a text symbol; (iii) a region and a face; (iv) split or merge of a region; (v) birth or death of a text symbol. The Markov chain selects one of the above moves at each time step, triggered by bottom-up compatibility conditions.
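Both the shape prior and the diffusion step rely on contour points generated from the control points by quadratic B-splines. A small self-contained sketch of that evaluation (uniform quadratic B-spline; the sampling density is an arbitrary choice, not a value from the paper):

```python
def quad_bspline_point(p0, p1, p2, t):
    """Evaluate one uniform quadratic B-spline segment at t in [0, 1].
    Basis: B0 = (1-t)^2 / 2, B1 = (-2t^2 + 2t + 1) / 2, B2 = t^2 / 2;
    the three basis functions sum to 1 for every t."""
    b0 = (1 - t) ** 2 / 2.0
    b1 = (-2 * t * t + 2 * t + 1) / 2.0
    b2 = t * t / 2.0
    return (b0 * p0[0] + b1 * p1[0] + b2 * p2[0],
            b0 * p0[1] + b1 * p1[1] + b2 * p2[1])

def closed_contour(control_points, samples_per_segment=8):
    """Sample a closed contour from control points (e.g. the twenty-five
    control points per template boundary mentioned above)."""
    n = len(control_points)
    contour = []
    for i in range(n):
        p0 = control_points[i]
        p1 = control_points[(i + 1) % n]
        p2 = control_points[(i + 2) % n]
        for s in range(samples_per_segment):
            contour.append(quad_bspline_point(p0, p1, p2,
                                              s / samples_per_segment))
    return contour
```

Because each contour point is a fixed linear combination of control points, the Jacobian $J$ in the motion equation is just the matrix of these basis weights, which is what lets boundary forces be pulled back onto the control points.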
7. Experiments

We tested the proposed image parsing algorithm on a number of outdoor/indoor images. The speed is comparable to segmentation methods such as normalized cuts [9]. A detailed description and demonstrations of convergence of the basic DDMCMC paradigm can be found in [18]. The results of our experiments are shown in three ways: (i) synthesized images sampled from the model $p(I \mid W)$ using the parameters and boundaries estimated by the DDMCMC algorithm, (ii) the segmentation boundaries of the image, and (iii) the text and faces extracted from the image, with text symbols indicating the text that has been correctly read by the algorithm. Fig. 9 shows that we can obtain segmentation, face detection (at a range of scales), and text detection with correct text reading. Moreover, the synthesized images are fairly realistic.

High-level knowledge helps segmentation overcome the problem of oversegmentation and provides better synthesis in comparison to [18]. Segmentation supports the recognition of objects. Intuitively, the generative models for faces, text, texture, and shading compete to explain the image data. But this competition also enables cooperation. For example, the dark glasses on the two women in Fig. 8.a are detected as generic "shading regions" and not as part of the faces.
They are then treated as "outlier" data which the face model does not need to explain, and this increases the robustness of the face detection. In Fig. 8.d, we show the synthesized faces with the sunglasses removed. The Parking image in the third row of Fig. 9 also illustrates cooperativity. For this image, the bottom-up text AdaBoost failed to propose the digit "9" as a text region, see Fig. 9. However, the generic region processes detected it as a homogeneous image region and then proposed it as a letter "9", which was confirmed by the generative model.

Figure 8: Parsing a close-up of the Parking image. a. Input image, b. boundaries, c. synthesis 1, d. synthesis 2. Generic "shading region" processes detect the dark glasses, so the face model does not have to explain this part of the data. Otherwise the face model would have difficulty, because it would try to fit the glasses to eyes. Standard AdaBoost would only correctly classify these faces at the expense of false positives, see Fig. 6.

The Street image, see the fourth row of Fig. 9, shows an example where the generative models for faces were required to reject face regions wrongly proposed by AdaBoost, see Fig. 6. Moreover, this example shows cooperativity, because the shading region models were used to "explain away" shadows that would otherwise have disrupted the detection and reading of the text (observe the heavy shading patterns on the text "Heights Optical").

The ability to synthesize the image after estimating the parameters is an advantage of our Bayesian approach, see [18]. The synthesis helps illustrate the successes, and sometimes the weaknesses, of our generative models. Moreover, the synthesized images show how much information about the image has been captured by our models. In table (2), we show the number of bytes used in our representation $W$ and compare them to the JPEG compression of the equivalent images. Image encoding is not the goal of our current work, however, and more sophisticated generative models would be
needed to synthesize very realistic images. Nevertheless, our synthesized images are fair approximations, and we could reduce the coding of $W$ substantially by encoding the boundaries more efficiently (at present, we code boundary pixels independently).

Image      Stop    Soccer  Parking  Street  Westwood
jpg bytes  23,998  19,563  23,311   26,170  27,790
our bytes  4,886   3,971   5,013    6,346   9,687

Table 2: Comparison of the bytes required by JPEG and by our representation $W$ for each image.

8. Summary and Conclusions

This paper has introduced a framework for image parsing by defining generative models for the processes that create images, including specific objects and generic regions such as shading and texture. Bottom-up proposals are learnt by the AdaBoost algorithm, which provides conditional probabilities for the presence of objects in the image. These conditional probabilities enable inference by rapid search through the parameters of the generative models, and the segmentation boundaries, using the DDMCMC algorithm.

We implemented our system using generative models for text and faces combined with generic models for shaded and textured regions. Our approach enables these different models to compete and cooperate to describe the input images. We were able to segment the images, detect faces, and detect and read text in city scenes. Our experiments showed several cases where the shading models helped face and text detection by explaining away shadows and occluders (sunglasses). In turn, the text and face models improved the quality of the segmentations. The current limitation of our approach lies in the limited class of objects we model. This limitation was motivated by our application goal of detecting text and faces for the visually disabled, but in principle our approach can include broad types of objects.
Acknowledgments

This work is supported by the National Institutes of Health (NEI) RO1-EY012691-04 and NSF grant 0240148. The authors thank the Smith-Kettlewell research institute for providing us with text training images.

References

[1] S. Belongie, J. Malik, and J. Puzicha, "Matching shapes", Proc. of ICCV, 2001.
[2] H. Drucker, R. Schapire, and P. Simard, "Boosting performance in neural networks", Intl. J. Pattern Rec. and Artificial Intelligence, vol. 7, no. 4, 1993.
[3] F. Fleuret and D. Geman, "Coarse-to-fine face detection", IJCV, June 2000.
[4] Y. Freund and R. Schapire, "Experiments with a new boosting algorithm", Proc. of ICML, 1996.
[5] J. Friedman, T. Hastie, and R. Tibshirani, "Additive logistic regression: a statistical view of boosting", Dept. of Statistics, Stanford Univ. Technical Report, 1998.
[6] U. Grenander, Y. Chow, and D. Keenan, HANDS: A Pattern Theoretic Study of Biological Shapes, Springer-Verlag, 1990.
[7] P. Hallinan, G. Gordon, A. Yuille, P. Giblin, and D. Mumford, Two and Three Dimensional Patterns of the Face, AK Peters, 1999.
[8] A. K. Jain and B. Yu, "Automatic text localization in images and video frames", Pattern Recognition, 31(12), 1998.
[9]J.Malik,S.Belongie,T.Leung and J.Shi,“Contour and tex-ture analysis for image segmentation”,IJCV,vol.43,no.1, 2001.[10] D.Marr.Vision.W.H.Freeman and Co.San Francisco,1982.[11] D.Martin,C.Fowlkes,D.Tal and J.Malik,“A database ofhuman segmented natural images and its application to eval-uating segmentation algorithms and measuring ecological s-tatistics”,Proc.of ICCV,2001.[12] B.Moghaddam and A.Pentland,“Probabilistic VisualLearning for Object Representation”,IEEE Trans.PAMI, vol.19,no.7,1997.[13]P.J.Phillips,H.Wechsler,J.Huang,and P.Rauss,“TheFERET database and evaluation procedure for face recogni-tion algorithms”,Image and Vision Computing J,vol.16,no.5,1998.[14]M.Revow,G.K.I.Williamst and G.E.Hinton,“Using gener-ative models for handwritten digit recognition”,IEEE Trans.PAMI,vol.18,1996.[15]H.Rowley,S.Baluja,and T.Kanade,“Neural network-basedface detection”,In IEEE Trans.PAMI,vol.20,1998. [16]T.Sato,T.Kanade,E.Hughes,and M.Smith,“Video OCRfor Digital News Archives,”IEEE Intl.Workshop on Content-Based Access of Image and Video Databases,Jan.,1998. 
[17]H.Schniederman and T.Kanade,“A Statistical method for3D object detection applied to faces and cars”,Proc.of Com-puter Vision and Pattern Recognition,2000.[18]Z.Tu and S.C.Zhu,“Image segmentation by Data DrivenMarkov chain Monte Carlo”,IEEE Trans.PAMI,vol.24,no.5,2002.[19]Z.Tu and S.C.Zhu,“Parsing images into regions and curveprocesses”,Proc.of ECCV,June,2002.[20]P.Viola and M.Jones,“Fast and Robust Classification usingAsymmetric AdaBoost and a Detector Cascade”,In Proc.of NIPS01,2001.[21]M.Weber,W.Einhuser,M.Welling,P.Perona,“Viewpoint-invariant learning and detection of human heads”,Proc.of Int.Conf.Automatic Face and Gesture Recognition,2000.a.Input imageb.Region layerc.Object layerd.Synthesis imageFigure 9:Results of segmentation and recognition on several outdoor/indoor images:Stop sign (row 1),Soccer (row 2),Parking (row 3),Street (row 4),and Westwood (row 5).[22]Ming-Hsuan Yang,N.Ahuja,D.Kriegman,“Face detectionusing mixtures of linear subspaces”,In Proc.of Int.Conf.Au-tomatic Face and Gesture Recognition ,2000.[23]S.C.Zhu and A.L.Yuille,“Region competition,”IEEETrans.PAMI ,vol.18,no.9,1996.。

Guidelines for Writing Good Fortran Programs


Programming Guidelines for PARAMESH Software Development

(NOTE: This document is heavily based upon the

Introduction

This document describes the programming guidelines to be used by software developers wishing to contribute software to PARAMESH, the parallel, adaptive mesh refinement software. We welcome people to contribute software and/or bug fixes to the PARAMESH AMR software. Software to be added to PARAMESH can come in two forms:

∙ Improvements to the basic PARAMESH kernel software found in the mpi_source, source and headers directories.
∙ Software that adds additional functionality to PARAMESH. This type of software should be added as a separate entity within the utilities directory.

Complete applications should not be added as part of PARAMESH. PARAMESH is only meant to be a tool which supports parallel adaptive mesh applications, and any software which supports this goal will be considered for acceptance into PARAMESH. For instance, a solver for the Poisson equation that works with PARAMESH would be acceptable, but an application that solves the equations of gas dynamics would not.

The PARAMESH software is slowly being evolved to be consistent with this document. Any new software which is contributed should follow these guidelines; if not, it will be rejected. This document deals mainly with Fortran 90, since most new PARAMESH software will probably be written in that language. [Throughout this document, the term "Fortran" should be understood to mean Fortran 90.] Since we expect C and C++ also to be used, a separate document dealing with them will be developed in the future. In the meantime, this document can serve as a general guideline for developing code to be used with PARAMESH in those programming languages.

The guidelines in this document should be adhered to by ANY software which will be released as part of the PARAMESH package of source code.
This includes software 'utilities' (stored in the paramesh/utilities directory) which add functionality to PARAMESH for different algorithms. It also applies to any new code developed and added to the main source code for PARAMESH in the paramesh/source, paramesh/mpi_source, or paramesh/headers directories.

The guidelines are intended to enhance the following aspects of the final product, listed in decreasing order of importance:

∙ Maintainability - refers to how easy it is to understand the purpose of each element of the program, and to modify and extend the program.
∙ Portability - refers to how easily the program can be ported to new computational platforms.
∙ Efficiency - refers to the amount of computer resources (CPU time, memory, disk storage, etc.) required to run the program.

Program Development and Design

Items in this section are fairly general and fundamental in nature. They impact all three of the items listed above - maintainability, portability, and efficiency.

Language

Try to use ANSI standard Fortran 90 exclusively. If you must, you can use C or C++, but it must work with PARAMESH and be callable from a Fortran 90 program.

Organization

∙ Write modular code.
∙ In general, put each subprogram in a separate file, using the subprogram name as the file name, with a .F90 extension.
∙ Within each routine, use interface blocks to explicitly specify the interface to your contributed routines.
∙ Group related files in a single directory.
∙ Names of files and directories should reflect their purpose.

Common Blocks

∙ Don't use common blocks; use Modules instead. Period!

Data Types

∙ Use Implicit none in each program unit, and explicitly declare all variables and parameters. Common variables and parameters should be declared in the relevant include file.
∙ Don't use *'ed forms, like Real*8.
Declare variables using Real ::
∙ Don't compare arithmetic expressions of different types; convert the type explicitly.

Dynamic Memory

∙ Assign memory for arrays dynamically, using automatic arrays, allocatable arrays, and/or array pointers. Explicitly deallocate memory used by allocatable arrays and array pointers when they're no longer needed.

Coding Style (see mpi_source/mpi_amr_guardcell.F90 for a complete example)

Items in this section are fairly specific, and primarily impact the readability, and thus the maintainability, of the final product. It is recognized that rules for "good coding style" are somewhat subjective.

Program Units

∙ Begin main programs with a Program statement.
∙ Don't use multiple entries or alternate returns.
∙ Use the intent attribute in the type declaration statement for all variables passed into or out of subroutines and functions. Make sure to include these in the interface block that you create for the subroutine.
∙ Match the arguments in the calling (sub)program to those of the called subprogram in both number and type.
∙ Use the following order for statements within each subprogram:
  o Standard header section. This should be comments in the format used with the robodoc code documentation software (see the PARAMESH source code for examples).
  o Use modules
  o Parameter definitions
  o Type declarations for subprogram arguments
  o Type declarations for local variables
  o Executable code
∙ Functions should not have side effects. (I.e., don't change the arguments or any common variables inside the function.)
∙ Use generic names for library functions, rather than precision-specific ones.
∙ Name external functions in an External statement.

Statement Form

∙ Use free-form formatting, but for readability:
  o Keep line lengths below 80 characters.
  o Start each line in column 7 or higher.
  o Reserve columns 1-5 for statement labels.
  o Don't use the optional continuation character (i.e., &) at the start of continuing lines.
Avoid splitting keywords and character strings between lines. Note that with free-form formatting, an & must be the last character (except for comments) in a line that is to be continued.
∙ Use a ! in column 1 for non-blank comment lines.
∙ Split long lines before or after an operator, preferably a + or -.
∙ Don't write more than one statement per line.

Statement Labels

∙ Minimize the use of statement labels, where appropriate.
∙ Don't use unreferenced labels.
∙ Use labels in ascending order.

Upper/Lower Case

∙ Use upper case for parameters, upper case for subroutines and functions from libraries outside of PARAMESH (such as MPI), lower case with an initial capital letter for Fortran keywords, and lower case for everything else except comments and character strings.
∙ Write comments as normal text, with normal capitalization rules.

Spacing

∙ Use spacing to enhance readability.
∙ Indent contents of code blocks (i.e., do loops, block if, etc.). Suggested amount is three spaces.
∙ Don't use tabs.
∙ Use spacing in equations to clarify precedence of operators. I.e., normally put one space on either side of =, +, and - operators (except in subscripts), but none around *, /, or ** operators. For example, this:

      y1 = (-b + Sqrt(b**2 - 4.*a*c))/(2.*a)

  is easier to read than this:

      y1=(-b+Sqrt(b**2-4.*a*c))/(2.*a)

  or this:

      y1 = ( - b + Sqrt ( b ** 2 - 4. * a * c ) ) / ( 2. * a )

∙ Use spacing to reveal patterns in continuation lines and in separate but logically related statements.
For example, this:

      dum1 = Sqrt((fr (i,j) - fr ( 1, j))**2 + &
                  (fth(i,j) - fth( 1, j))**2)
      dum2 = Sqrt((fr (i,j) - fr (n1, j))**2 + &
                  (fth(i,j) - fth(n1, j))**2)
      dum3 = Sqrt((fr (i,j) - fr ( i, 1))**2 + &
                  (fth(i,j) - fth( i, 1))**2)
      dum4 = Sqrt((fr (i,j) - fr ( i,n2))**2 + &
                  (fth(i,j) - fth( i,n2))**2)

is easier to read than this:

      dum1 = Sqrt((fr(i,j) - fr(1,j))**2 + (fth(i,j) - &
         fth(1,j))**2)
      dum2 = Sqrt((fr(i,j) - fr(n1,j))**2 + (fth(i,j) - &
         fth(n1,j))**2)
      dum3 = Sqrt((fr(i,j) - fr(i,1))**2 + (fth(i,j) - &
         fth(i,1))**2)
      dum4 = Sqrt((fr(i,j) - fr(i,n2))**2 + (fth(i,j) - &
         fth(i,n2))**2)

Variable Names

∙ Use names that are descriptive of the entity being represented, and/or are consistent with the standard notation in the field.
∙ In general, follow standard Fortran convention for the variable type. I.e., integers start with i, j, k, l, m, or n; all others are real.
∙ Don't use keyword, subprogram, or module names for variables.
∙ Don't give a local variable the same name as any common variable.

Arrays

∙ Dimension arrays in the type declaration statement, not in a separate Dimension statement.
∙ When passing character variables into a subprogram, use the assumed-length form in the type declaration statement inside the subprogram. I.e.,

      Subroutine sub (c)
      Character*(*) c

∙ Don't exceed the bounds of the array dimensions.

Control Statements

∙ Short do loops may be written using simple Do and End do statements, without labels.
∙ Long do loops and if blocks (more than a page or so) should mark the end of the construct in some way that "connects" it with the start. One convenient and readable method is to use an in-line comment on the ending statement that repeats the beginning statement. E.g.,

      If ( bccode == 13 ) then
         [Lines and lines of code]
         Do i = 1,nzones
            If ( zondim(1,i) > 0 ) then
               [More lines and lines of code]
            End if ! If ( zondim(1,i) > 0 ) then
         End do ! Do i = 1,nzones
      End if !
If ( bccode == 13 ) then

∙ Minimize the use of Go to statements, especially where they can be replaced by short'ish if blocks, but don't create convoluted code just to avoid using them. Don't be afraid to use a Go to where it makes sense. An example might be a long (more than a page) conditional section of code. In this case a well-commented Go to block, which ends with an easily-noticed statement label, may be more readable than an indented if block without an ending statement label. Also consider making a long conditional section a separate subprogram.

Calls to Other Libraries (such as MPI)

Capitalize the entire subroutine name when making the call to the library routine, e.g.

      Call MPI_BARRIER(MPI_COMM_WORLD,ierr)

Comments

∙ Use comments liberally to describe what's being done. Where code may be confusing, use longer comments to describe why something's being done the way that it is.
∙ Make each comment meaningful; don't simply re-iterate what's already obvious from the coding itself. As an obvious example, this:

      !-----Fill the guardcells of all PARAMESH blocks
            Call amr_guardcell

  is more meaningful than this:

      !-----Call amr_guardcell
            Call amr_guardcell

∙ Use a consistent method to help the reader distinguish comments from code, such as the "-----" leaders in the examples above.
∙ Start the text of comments at the same indentation level as the code being described.
∙ Use a standard header section at the beginning of each subprogram defining its purpose.
∙ Use in-line comments, with ! as the delimiter, where appropriate for short explanations or clarifications. Start in-line comments far enough to the right (e.g., three spaces or more from the end of the statement) to help distinguish comments from code. Where appropriate, align them vertically with nearby in-line comments.
∙ Define each common block variable using an in-line comment on its type statement in the include file.
Each common variable will thus have a separate type statement.
∙ Define key local variables using in-line comments on the type statements in the subprogram.

Obsolete/Forbidden Features

The following Fortran features are either formally declared obsolete, or widely considered to be poor programming practice, and should not be used:

∙ Arithmetic if statements
∙ Do loops with non-integer indices
∙ Shared do loop termination statements
∙ Pause statements
∙ Assigned and computed Go to statements
∙ Hollerith edit descriptors and Hollerith character strings
∙ Equivalence statements
∙ Alternate return statements
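As a quick illustration of several of the rules above (a module in place of a common block, Implicit none, the intent attribute, and explicit deallocation of allocatable arrays), here is a minimal, hypothetical sketch; the module, subroutine and variable names are illustrative only and are not part of PARAMESH:

```fortran
!-----Hypothetical example (not part of PARAMESH): a module replaces
!-----a common block, every unit uses Implicit none, dummy arguments
!-----carry intent, and allocatable memory is explicitly freed.
      Module work_data
      Implicit none
      Real, allocatable, save :: flux(:,:)  ! shared work array
      End Module work_data

      Subroutine setup_flux (n1, n2)
      Use work_data
      Implicit none
      Integer, intent(in) :: n1, n2         ! array extents

!-----Assign memory dynamically rather than with fixed dimensions
      If (.not. allocated(flux)) Allocate (flux(n1,n2))
      flux = 0.

      End Subroutine setup_flux

      Subroutine teardown_flux
      Use work_data
      Implicit none

!-----Explicitly deallocate when the array is no longer needed
      If (allocated(flux)) Deallocate (flux)

      End Subroutine teardown_flux
```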

[Domestic Standard] GB National Standard for Generic Cabling


Code for Engineering Design of Generic Cabling System for Building and Campus
GB/T 50311-2000

Chief editing department: Ministry of Information Industry of the People's Republic of China
Approving department: Ministry of Construction of the People's Republic of China
Effective date: August 1, 2000
China Planning Press, Beijing, 2000

Contents

1. General Provisions
2. Terms and Symbols
3. System Design
4. System Specifications
5. Work Area
6. Horizontal Subsystem
7. Backbone Subsystem
8. Equipment Room
9. Administration
10. Campus Subsystem
11. Electrical Protection, Grounding and Fire Protection
12. Installation Requirements
Explanation of Wording in This Code
Appendix: Explanation of Provisions

General Provisions

1.0.1 This code is formulated to meet the social demands of rapid economic development and of reform and opening-up, to support modern urban construction and the development of information and communication networks toward digitalization, integration and intelligence, and to ensure sound construction of integrated multimedia networks for telephone, data, text and image services in buildings and campuses.

1.0.2 This code applies to the engineering design of generic cabling systems for newly built, extended and renovated buildings and campuses.

1.0.3 The construction of generic cabling facilities and their conduits shall be incorporated into the corresponding planning for the building or campus.

1.0.4 The generic cabling system shall be planned in overall coordination with the building's office automation (OA), communication automation (CA) and building automation (BA) systems, shall be used rationally according to the transmission requirements of the various kinds of information, and shall conform to the relevant standards.

1.0.5 In engineering design, the generic cabling facilities and conduits shall be designed according to the nature, function and environmental conditions of the project and the near-term and long-term requirements of the users.

The design and construction of the works must ensure the quality and safety of the generic cabling system, allow for convenience of construction and maintenance, and be technically advanced and economically reasonable.

1.0.6 Standardized products conforming to the relevant national technical standards must be selected in the engineering design.

Equipment and principal materials that have not been certified as qualified by a nationally recognized product quality supervision and inspection institution shall not be used in the works.

1.0.7 The engineering design of the generic cabling system shall comply not only with this code but also with the provisions of the relevant mandatory national standards currently in force.

Terms and Symbols

2.1 Terms

2.1.1 Generic cabling system for building and campus: the transmission network within a building or campus.

IBM_Project_Management


Project Management Methods (PMM) at IBM

Introduction

IBM uses a set of generic methods based on principles common to all projects. PMM guides project management activities from startup to formal closure, allowing for an iterative set of tasks for managing the ongoing work effort and deliverable components. In PMM, there is a set of good practices and key techniques for managing projects, including 'Startup', 'Manage', and 'Close' activities.

A Project Management System is a documented collection of plans, procedures and records that direct all project management activity and provide the current state and history of the project.

The following is a brief description of the Project Management System that IBM uses for all its projects. The purpose of this system is to guide project management activities during Solution Delivery. The phases support the smooth transition from proposal of the solution to actual solution implementation, management of the implementation activities and contract components, and bringing the engagement to a close. The system follows the phases described below.

The Project Management Methods wrap themselves around the activities necessary to build, validate and deliver the solution to the customer. The phases present the IBM standard set of tasks that aid the Project Manager in the transition of the accepted proposal or contract into the Startup of the development environment; in providing ongoing Managerial support throughout the life of the project; and in guiding the Project Manager through formal Closure, including all reviews and collection of Customer feedback after formal acceptance and Customer signoff.

The Project Management Methods must be used in conjunction with any project development activities and are organized to provide a complete set of tasks for starting the project and closing the project, while allowing for an iterative set of tasks for managing the ongoing work effort.
The activities and associated tasks in the managerial processes are both linear and iterative in nature; they can be performed on an as-needed basis (daily, weekly or monthly), or can be triggered by events and run continually for a period of time.

There are four major managerial processes:

·Project Plan Management Activities: Tracking progress, ensuring commitments are met and reporting to IBM management and the Customer.
·Contract Management Activities: Maintaining contract files, ensuring contractual obligations are met, invoicing the Customer, ensuring collection of funds, and approving and paying invoices from Suppliers.
·Exceptions Management Activities: Changes, issues, problems and risks.
·Quality Assurance Activities: Focused on project management techniques. The reviews are carried out against the management practices being used. These reviews do not focus on the quality of the deliverables.

Listed below are the three phases of the Project Management Methods, and a high-level statement of what is accomplished by performing the activities and tasks of each phase. The tasks of the Solution Startup and Close the Solution phases are performed once, while the Manage the Solution phase is iterative and runs in parallel with the phases of the selected Solution Methods.

Solution Startup
Start the project in an orderly manner and ensure that all necessary infrastructure, assets and resources are in place. The Project Plan is reviewed for the appropriate level of detail and validated before commencing any project work.

Manage the Solution
Ongoing and iterative activities to ensure that the project is under control and that the proposed solution will be delivered on time, within budget, and meeting all quality standards and acceptance criteria as set forth by the customer.

Close the Solution
Formal closure of all project and contract files and release and redeployment of assets and resources.
A Post-Project Workshop is held to collect data for updating estimating techniques, to document any lessons learned while on the engagement, and to ascertain the effectiveness of the process and methods employed. The Customer's satisfaction with the solution is also assessed, and action is taken to correct any issues which might be identified.

Solution Startup
Ensure that the project responsibility is transferred, all necessary planning is accomplished and work on delivery can begin.

Description
Solution Startup begins with transfer of responsibility for implementation of the solution from the Proposal Team Leader to the Project Manager. The Project Manager then reconfirms the scope and objectives, sets up the project environment and activates the rest of the Delivery Team. When the team is assembled, the Project Manager provides the orientation necessary to implement the solution.

The major activities are to:
·Transfer responsibility to the Delivery Project Manager
·Activate and orient the Delivery Team
·Develop the detailed Project Plan
·Ensure that the initial project review with QA is completed
·Review the Project Plan, with the participation of the Opportunity Owner, with the Customer and any Vendors or Subcontractors to ensure all participants are working to the same schedules
·Obtain Customer agreement to the Project Plan
·Release the orders for IBM and OEM hardware, software and services needed to complete the project.

Major Deliverables

Project Control Book
The Project Control Book is populated with the turnover documentation from Solution Design and work products developed during the execution of Startup.

Project Plan
The Preliminary Project Plan is delivered to the Delivery Team, who refine and update it. The refined plan is the Project Plan.

Entry Criteria
The tasks within this phase assume the availability of the following input work products.
These work products will have been developed during Solution Design and will be turned over to the Delivery Project Manager during orientation.
·Accepted Proposal or Contract
·Preliminary Project Services and Support Plan
·Assignment Information (for Delivery Team)
·Supplier Allocation Confirmation
·Supplier Contracts

Transfer Project Responsibility
Activate the delivery Project Manager and ensure that all data and information developed by the proposal team is handed over to the Project Manager and that the Project Manager can assume responsibility for the project.

Tasks

Activate Delivery Project Manager
Activate the Project Manager that was previously reserved by submitting an activation request to Skills Management, who will respond by providing the assigned resource.

Provide Orientation
Provide all of the information necessary to deliver the Customer solution.

Transfer Responsibility
Once the Customer has made a commitment to the solution, transfer responsibility for the delivery of the solution to the Project Manager so that the process of delivering the solution may begin.
It is beneficial for this person to have been part of the design effort, so that they have a good understanding of the Customer's expectations, the solution that was designed to address those needs, and the negotiations between IBM, the Suppliers and the Customer. If the Project Manager is new to the project, a complete orientation will have to be presented to introduce the project to the Project Manager.

Description
Once the Customer has made a commitment to the solution, the Proposal Team Leader must transfer responsibility for delivery of the solution to the Project Manager so that the process of delivering the solution may begin.

If the Project Manager was part of the Design Team, most likely in the role of the Proposal Team Leader, the Project Manager would have a very good understanding of Customer expectations, the solution that was designed to address those needs, and the negotiations between IBM, the Suppliers and the Customer.

The Project Manager is identified in the Solution Design phase. However, the resource is not assigned until the proposal is accepted by the Customer. If the Project Manager is newly assigned to the project, the Proposal Team Leader, from the Solution Design phase, will provide a complete orientation on the Customer's requirements and expectations.

Launch Delivery Project
Set up the project environment, processes and controls, and activate the project team.

Tasks

Reconfirm scope, objectives, dependencies, assumptions
Ensure that the work stated in the contract still applies and that no changes in scope, dependencies or assumptions have occurred since the contract was signed. This will involve a review of the contract, a review of the project plan and discussions with the Customer to verify that there is a common understanding of what IBM is about to deliver.
Any changes that are found must be reported to the Opportunity Business Manager so that action can be taken to make appropriate changes to contract, price, scope or scheduling prior to the start of work.

Set Up Project Environment
Set up the project environment to put into place any organizational, procedural or structural requirements necessary to support the project. Areas addressed are:
∙Facilities (offices, desks, phones, meeting rooms)
∙Equipment and tools (hardware, host software, application development software)
∙Standards and guidelines
∙Processes and procedures

Activate Project Control Book
Update the necessary project files that will be used during the project. These become the audit and reference documentation which will be used during project execution and after project completion. Steps will include:
∙Obtain, organize and update all project materials
∙Update to include delivery team members, their roles and responsibilities
∙Open financial files

Activate Delivery Team
Activate the delivery team resources that were previously reserved by submitting an activation request to Skills Management (and Supplier Management, if applicable), who will respond by providing the assigned resources.

Provide Orientation
Conduct a project team kick-off or orientation meeting to cover:
∙Customer requirements and expectations
∙Signed proposal (plan, schedules, roles and responsibilities)
∙Issues and concerns
∙Statement of Work

Description
After receiving an orientation from the Proposal Team Leader regarding the requirements for delivery of the solution, and accepting responsibility for delivery of the solution to the Customer, the Project Manager launches the project by:
·Reconfirming scope, objectives, assumptions and dependencies
·Setting up the project environment
·Activating the Project Control Book
·Activating the previously reserved delivery team resource(s)
·Providing them with all of the information necessary for delivery of the solution to the Customer.

Develop Project Plan
Refine the Project
Plan before allowing the technical effort to begin.

Tasks

Finalize Subcontracts
Provide the final requirements and deliverable schedule to the Supplier(s) and request reconfirmation of their commitment to provide the requested solution and deliverable(s).

Refine Tasks
Execute these steps in accordance with the implementation Methods selected:
∙Update high-level tasks
∙Develop low-level tasks
∙Confirm assumptions and dependencies
∙Update test strategies and plans

Refine and Update Risk Management Plan
Update the following in the risk management plan:
∙Risk item(s)
∙Risk level(s)
∙Containment plan

Refine Schedule
Develop all schedules by:
∙Updating the high-level schedule
∙Developing the low-level schedule
∙Updating the Quality Assurance review schedule
∙Updating the deliverable schedule
∙Confirming the payment schedule

Confirm Resources
Refine the Resource Plan and confirm that the plan is still valid.

Confirm Assets
Refine the Asset Plan and confirm acquired assets.

Create Other Subordinate Plans
Create any other subordinate plans necessary for the successful creation or delivery of the Customer's solution.

Refine Costs
Refine the costs of the project based on the refined Project Plan and update the budget to reflect any changes.

Package Project Plan
Package together all deliverables from the individual work efforts to form a complete Project Plan. This packet will include:
∙Resource and Asset Plans
∙Final Requirements and Scope
∙Major Tasks
∙Schedules
∙Risk Containment Plan and Costs
∙Subordinate Plans
∙Financial Status

Description
Develop the detailed steps required to develop and deliver the solution:
·Required deliverables
·Tasks
·Resources
·Task duration

The Supplier or Subcontractor Project Plans are incorporated as input into the total Project Plan. All dependent Supplier and Customer tasks are identified.

Conduct Initial Project Review
Ensure that a quality assurance review is completed before project work commences.
Tasks

Plan the Review
The Quality Assurer will coordinate the project review and will:
∙Determine dates and locations for the review
   Develop the agenda
   Select the attendees
   Develop the interview schedule with appropriate participants (delivery team, subcontractor(s) and the Customer)
∙Prepare for the review
   Review status against plan
   Assess effectiveness of the project management system
   Determine areas of risk included in the solution
   Identify any special terms and conditions committed in the solution
   Become familiar with the Project Plan and the project management activities
   Assess scope containment and progress on the solution
   Validate deliverable conformity to the solution design documentation
   Identify problems confronting the delivery team
   Develop questions and notes for focus areas for delivery team interviews (including any Supplier or Customer team members that may be involved)
   Develop questions and notes for focus areas for Customer interviews
∙Invite appropriate people to the review

Participants in the review should represent a cross-section of the organization and be chosen based on:
∙Organizational structure
∙Criticality to project success
∙Availability

Customer interviewees should be determined based on:
∙Organizational structure
∙Involvement in the project
∙Availability

Perform the Review
Perform the Quality Assurance review by completing the following activities:
∙Gather the solution delivery documentation
∙Review the project plan
∙Review deliverable conformance to contract
∙Review subordinate plans
∙Conduct the interviews

Develop Findings and Recommendations
In order to develop findings and recommendations, the Quality Assurer will:
∙Review notes and interview results
∙Document preliminary findings and recommendations
∙Determine the project review classification
∙Present preliminary findings, recommendations and project classifications
∙Revise findings based on feedback
∙Develop and distribute the project review report
∙Schedule the subsequent progress reviews

Description
As soon as the Project Plan has
been finalized, it must be reviewed by the Quality Assurer. The milestones, planned revenue, costs and profit, as well as the Resource and Asset Plan, will be used as the reference point to measure the performance of IBM on this delivery. This initial review will also form the basis for subsequent reviews. The result of the review will be either approval to proceed as planned or the identification of problems and issues that need to be resolved.

Review Project Plan
Ensure that all parties agree on the Project Plan and understand the schedule, the deliverables and any other implications.

Tasks

Review Project Plan with Customer
Review the following aspects of the Project Plan with the Customer:
∙Milestones and schedules
∙Deliverables
∙Dependencies
∙Roles and responsibilities
∙Completion criteria
∙Relevant subordinate plans

A subset of the Project Plan is reviewed with the Customer to confirm compliance with the Customer's requirements. The areas reviewed are:
∙Customer dependencies (resources and deliverables)
∙Master Schedule (planned and actual)
   Deliverables Schedule (milestones)
   Hardware and Software Product Installation Schedule
   Review Schedule (internal and external)
   External Deliverables (external providers and Customer)
   Phase definition
   Customer Payment Schedule
∙IBM Interface (IBM employee or representative who will serve as the primary linkage to the Customer for this project)
∙Project organization (reporting structure, roles and responsibilities, project staffing)
∙Acceptance criteria

The Project Plan may have to be modified based on the Customer's feedback. Any change items will be handled using the manage exceptions criteria.

Conduct Project Kick-off with Customer
Conduct a project kick-off with all project and Customer personnel.
The task includes the following:
∙Prepare agenda and materials
∙Schedule date and location
∙Determine participants
∙Conduct the meeting, covering:
   Project objectives
   Organization
   Roles and responsibilities
   Communication
   Facilities
   Project schedule
   Implementation approach

Description
After all Quality Assurance conditions have been satisfied, the Project Plan should be reviewed with the Customer. Both the Opportunity Owner and Project Manager will be involved in the review. The review should cover the plan and schedule and result in reconfirmation with the Customer that the plans are still in line with the requirements.

Order and Validate Products and Services
Ensure supply and receipt of all the necessary goods and services required.

Tasks

Order Products and Services
Place orders for all hardware, software and services required by the project.

Receive Products and Services
The Delivery Team receives the products and services from the IBM and Original Equipment Manufacturer suppliers and records the received products and services information.

Reconcile Products and Services against Contract
Verify the accuracy of the products and services that were received against what was ordered (the contract) and ensure that all equipment and software passes any acceptance criteria.

Notify Supplier of Discrepancies
Notify the IBM or Original Equipment Manufacturer Supplier of any discrepancies. The Supplier will be expected to resolve the problem. Any problems should be recorded and their resolution tracked, and, if necessary, the Sponsor and Customer should be proactively notified of any potential slippage in the delivery dates that have been committed to the Customer.

Description
Trigger the previously committed orders for the products or services necessary to implement the solution.
This is an iterative activity that may be performed several times during the solution delivery phase of the project.
Receive the products and services from IBM or Original Equipment Manufacturer suppliers (those that are not shipping directly to the customer) and reconcile them against the contract and the products and services order. The supplier is notified of any discrepancies, which they are expected to resolve.
Manage the Solution
Ensure that the project is under control and delivers the correct solution on time, within budget and to the agreed level of quality.
Activities
· Manage the Project Plan
· Manage Exceptions
· Manage Contract
· Conduct Project Management Review
· Review Solution and Deliverables Readiness
Major Deliverables
· Project Control Book (Manage): additions to the Project Control Book from the Manage the Solution phase
· Project Plan (Manage): Project Plan generated during the Manage activities
Description
Manage the Solution runs in parallel with Implement the Solution. There are two distinct aspects to managing the solution. The first is to manage the implementation activities:
· Tracking progress
· Ensuring all commitments are met
· Reporting back to IBM management and the Customer
· Interfacing with quality assurance.
A most critical activity is the management of exceptions: changes, issues, problems and risks. These, wherever initiated, will have an impact on the customer's conditions of satisfaction and IBM's profit expectations.
Another important aspect of Managing the Solution is focused on contract management. Contract files must be maintained to ensure that all contractual obligations are met. The invoice(s) are prepared and provided to the customer as per the terms of the accepted contract, and monies are collected as per the terms and conditions on the invoice(s).
Supplier invoices must also be approved and paid.
A vital aspect of Managing the Solution is good communication within the Delivery Team, with the Customer and with IBM opportunity management.
Manage the Solution contains both iterative processes, whose activities and tasks will be executed daily, weekly and monthly, and processes which are triggered by events.
The iterative activities are:
· Planning
· Tracking
· Reporting
· Project Management Reviews
· Deliverables Reviews
The activities triggered by events are:
· Exception management: a change request initiated by the customer, IBM or the Supplier; an early closure request; a critical situation identified by the customer; a customer payment or adjustment.
· Deliverable status, deliverable status feedback or completion information.
· Supplier invoice.
· Discrepancy between actual progress and planned progress.
· New risk identified.
Entry Criteria
The tasks within this phase assume the availability of the following input work products, which will have been developed during the Solution Startup phase:
· Accepted Proposal or Contract
· Project Environment
· Project Control Book
· Supplier Allocation Commitment
· Project Plan Accepted by the Customer
· Initial Project Review Findings and Recommendations
· Received Products and Services
Branch Criteria
The Project Manager and the Quality Assurer will determine the schedule and occurrences of the Review Function and Quality of Deliverables.
Manage the Project Plan:
Ensure that the project progresses in a controlled fashion and that all discrepancies are identified and resolved.
Tasks
Administer Project Plan
Monitor and maintain the status of the Project Plan and perform the following activities on an ongoing basis:
∙ Review the schedules and milestones
∙ Compare actual and target dates to plan
∙ Record any revisions to the plan (update activities and tasks, update schedules, update resources)
∙ Record any actions resulting from plan discrepancies
Administer Financials
Monitor and maintain the
financial aspects of the Project Plan as follows:
∙ Review financials (actual expenses and costs)
∙ Record the actual financial status
∙ Assess progress of actuals versus plan (milestones)
Review Progress
Review the progress of the Project Plan with the Customer and the Delivery Team. Participants in the review may also include any suppliers or Customer team members who are part of the delivery team.
Manage Resources
Monitor the activities of all members of the delivery team to ensure that all aspects of the Project Plan are being followed. If any personnel issues arise, ensure that they are addressed. If any resource changes are required as a result of modifications that may have been made to the plan, inform Skills Management and, if necessary, Supplier Management.
Manage Assets
Manage any assets being used for the implementation of the solution. Assets include:
∙ Capital assets
∙ Furniture
∙ Equipment
∙ Facilities
∙ Supplies
If any asset changes are required as a result of modifications that may have been made to the plan, interface with Supplier Management, Integrated Supply Chain or Site Services.
Description
Administer the Project Plan and continually review project progress to ensure that all aspects of the solution are delivered according to IBM's commitment to the Customer.
In addition, financials, resources and assets are also monitored.
Any exception modifications to the Project Plan must be evaluated to ensure that they can be accommodated and contained within the agreed scope.
Any changes to the scheduling of the solution or deliverables must be agreed by both IBM and the customer and reflected in the Project Plan.
Any changes to the actual content of the solution or deliverables will require an amendment to the accepted proposal and must be carefully analyzed to ensure that there is no impact on IBM's profitability.
Any modifications to the project that impact schedule, cost, deliverables, quality or content must be evaluated and processed in a way that protects IBM's profitability and the Customer's expectations.
The Project Manager must:
· Ensure that the project plan is baselined so that out-of-scope changes can be handled properly.
· Deal with any problems and issues that arise during the course of the work in a structured way.
· Continuously reassess the project risks and seek to contain and mitigate them.
· Review progress on a regular basis and report it to both the customer and IBM.
· Communicate with the Delivery Team, Customer and IBM in both formal and informal ways to ensure that there are no surprises.
Manage Exceptions:
Resolve project exceptions and determine their impact on the Project Plan.
Tasks
Manage Problems and Issues
As soon as a problem or issue is identified, it should be resolved as quickly as possible to minimize the impact on the project in terms of either cost or schedule. The resolution of a problem or issue may take the form of a change request and will be handled in this process.
The problems and issues and their resolutions should be recorded in the Project Control Book.
The problems and issues should be:
∙ Submitted to a central point or contact person
∙ Reviewed by appropriate levels of management
∙ Assessed for impact upon implementation
∙ Communicated to all people who need to be informed of the resolution of the change request.
Manage Risk
Monitor any risk items, events or factors that could negatively impact the schedule, cost or the actual delivery of the solution. Use or build upon the Risk Assessment Checklist in the Project Plan when monitoring risk items, and proactively formulate risk containment measures to mitigate the risks.
Perform Change Administration
The following is a general description of the steps that must be performed:
∙ Process change management: define change policies; manage change activities; analyze completed change activity for effectiveness and compliance
∙ Change administration: create the change request; review the change request; review the scope and baseline; perform technical assessment; perform business assessment
If the change is deemed to be containable or has no impact on the scope of the solution or deliverables, approve the change and update the Project Plan accordingly.
If the change is within the scope of the Project Plan but affects the solution, the requirements are again confirmed with the Customer. This work will be handled relatively quickly and will likely result in a simple contract amendment.
If the change is determined to be beyond the scope of the Project Plan but the Customer disagrees, arbitration will be required. Again, the process loops back to confirming requirements with the Customer, but more time will be needed to fully understand and incorporate the requirements.
If the Customer agrees that the change is beyond the scope of the current Project Plan, the requirements will be passed to opportunity management as a new opportunity.
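The change-administration outcomes described above can be sketched as a small triage function. This is purely an illustration of the decision flow; the function and type names are ours, not part of the methodology.

```python
from enum import Enum, auto

class Scope(Enum):
    CONTAINABLE = auto()    # no impact on solution scope
    IN_SCOPE = auto()       # within the plan, but affects the solution
    OUT_OF_SCOPE = auto()   # beyond the current Project Plan

def triage_change(scope: Scope, customer_agrees: bool) -> str:
    """Map a change request to the outcome described in the text."""
    if scope is Scope.CONTAINABLE:
        return "approve change and update the Project Plan"
    if scope is Scope.IN_SCOPE:
        return "reconfirm requirements; simple contract amendment"
    # Beyond scope: the outcome depends on whether the Customer agrees.
    if customer_agrees:
        return "pass to opportunity management as a new opportunity"
    return "arbitration; reconfirm requirements with the Customer"
```

The point of the sketch is that only the first branch leaves the baselined plan untouched; every other branch loops back to the Customer in some form.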

Porter's competitive strategy


Porter's Competitive Strategies
A. Overall cost leadership
B. Differentiation
C. Focus — the business focuses on one or more narrow market segments.

Target scope versus source of advantage:
- Broad (industry-wide) + low cost → Cost Leadership strategy
- Broad (industry-wide) + product uniqueness → Differentiation strategy
- Narrow (market segment) + low cost → Focus strategy (low cost)
- Narrow (market segment) + product uniqueness → Focus strategy (differentiation)
• Differentiation through multiple sources: L&T, the engineering firm, recruits engineers with excellent qualifications and claims superiority in executing projects.
• Coke and Pepsi differentiated through brand power.
• Product differentiation based on ingredients: HUL's Close-Up used glycerin instead of calcium carbonate and secured differentiation, compelling Colgate to copy the same.

ITU-T Packet Transport Technology Standards



MEF 4 Metro Ethernet Network (MEN) layered model:
- Application service layer (APP), e.g. IP, MPLS
- Ethernet service layer (ETH), carrying Ethernet service PDUs
- Transport service layer (TRAN), e.g. IEEE 802.1, SONET/SDH, MPLS
Each layer spans data, control and management planes.
The Fifth Research Institute of Telecommunications Science and Technology
Carrier Ethernet
MEF Technical Specifications
- MEF 2: Requirements and Framework for Ethernet Service Protection
- MEF 3: Circuit Emulation Service Definitions, Framework and Requirements in Metro Ethernet Networks
- MEF 4: Metro Ethernet Network Architecture Framework Part 1: Generic Framework
- MEF 6: Metro Ethernet Service Definitions Phase 2
- MEF 7: EMS-NMS Information Model
- MEF 8: Implementation Agreement for the Emulation of PDH Circuits over Metro Ethernet Networks
- MEF 9: Abstract Test Suite for Ethernet Services at the UNI
- MEF 10: Ethernet Service Attributes Phase 2
- MEF 11: User Network Interface (UNI) Requirements and Framework
- MEF 12: Metro Ethernet Network Architecture Framework Phase 2: Ethernet Services Layer
- MEF 13: User Network Interface (UNI) Type 1 Implementation Agreement
- MEF 14: Abstract Test Suite for Traffic Management Phase 1
- MEF 15: Requirements for Management of Metro Ethernet Phase 1 Network Elements
- MEF 16: Ethernet Local Management Interface

Academic Series: An Analysis of MBA Programs at Top Foreign Business Schools


[Slide-template pages: Driver Tree – Moons Showing Base Trends; Multiple Boxes; Agenda (Structured text, Graphs, Pictures, Service Line Charts); Text Blocks – Unrelated List; primary-colour palette definitions for charts and diagrams.]

English–Chinese Translation Terminology

1. 绝对翻译 (Absolute Translation): according to Gouadec, one of seven types of translation which can be used by professional translators to respond to the various translation requirements.
2. 摘要翻译 (Abstract Translation): one of seven strategies proposed by Gouadec to fulfil the various translation needs which arise in a professional environment.
3. 滥译 (Abusive Translation): a term used by Lewis to refer to a radical approach to literary translation.
4. 可接受性 (Acceptability): a term used by Toury to denote one of two tendencies which can be observed in translated texts.
5. 准确 (Accuracy): a term used in translation evaluation to refer to the extent to which a translation matches its original.
6. 改编 (Adaptation): a term traditionally used for any target text produced with an especially free translation strategy.

English Grammar Definitions (English–Chinese)

1. Noun: a noun is the name of a person, place, or thing, or of some quality, state, or action.
2. Noun phrase: the noun phrase is a phrase with a noun as its head; it is the noun head that determines the way the noun phrase is organized.
3. Determiners: words that precede any premodifying adjectives in a noun phrase and denote referential meanings such as specific reference, generic reference, definite quantity or indefinite quantity are referred to as determiners.
4. Adverb: an adverb is a word that modifies a verb, an adjective, another adverb or a whole sentence.
5. Adjective: an adjective is a word giving a description of the quality or character of a person or thing.

Otis RSL Remote Serial Link Protocol Interface Standard


Engineering CenterFive Farm SpringsFarmington, CT 06032Original Date: 2005-10-24 Document: SID00022 Project Number: C867 PC Number:Sheet 1 of 16Dwg / Part No:Remote Serial Link (RSL) Protocol Interface StandardDISTRIBUTION:Per notification document 53627.ORIGINAL APPROVAL:Prepared By:(Last, F I typed) (Signature and date) Approved By:(Last, F I typed) (Signature and date)Lerner, Bruce Bruce Lerner 2005-10-24 Leach, R. B. Bryan Leach 10/24/05Hoopes, Bruce Bruce Hoopes 2/15/06Vela, Haran Haran Vela 10/24/05 REVISION APPROVAL RECORD:Rev Date (yyyy-mm-dd) Project/PCRevised By:(Last, F I typed)Approved By:(Last, F I typed) (Signature and date)This work and the information it contains are the property of Otis Elevator Company (“Otis”). It is delivered to others on the express condition that it will be used only for, or on behalf of, Otis; that neither it nor the information it contains will be reproduced or disclosed, in whole or in part, without the prior written consent of Otis; and that on demand it and any copies will be promptly returned to Otis.Table of Contents1 Introduction (4)1.1 Purpose (4)1.2 Overview (4)1.3 Referenced Documents (4)2 RSL Overview (5)2.1 Communication Context (5)2.2 Otis RSL Protocol Mapping to the 7-layer ISO model (5)3 Otis RSL Functionality (6)4 OSI Layer 1: – RSL Physical (6)4.1 RSL Physical layer (6)4.1.1 Signal levels (6)4.1.2 Power levels (6)4.1.3 Bit Representation (6)4.1.4 Transmission Medium (6)4.1.5 Connectors (7)5 OSI Layer 2: – RSL Link (7)5.1 Frames (7)5.2 Data Cycles (Master Station driven) (8)5.2.1 Synchronization Cycle (8)5.2.2 Write Cycle (8)5.2.3 Read Cycle (8)5.3 Frame Timing (8)5.4 Error Detection (8)6 OSI Layers 3 – 6: Not Supported (9)7 OSI Layer 7 (9)7.1 Basic RSL Data format (9)7.1.1 Master Write /Slave Read (9)7.1.2 Master Read /Slave Write (9)7.2 CPI-11 / New Europa Line (NEL) RSL Data format (9)7.2.1 ELD Frames (9)7.2.2 7 & 16 Segment LCD Character Bit-maps (11)7.2.3 7 & 16 bit LCD Character Code list (12)7.2.4 ELD 
Frame Description (13)7.2.5 ELD Commands (14)7.3 Global Aesthetics RSL Data format (15)7.3.1 Frame 0 (15)7.3.2 Frame 1 (15)Appendix (16)APPENDIX A Definitions and Acronyms (16)1 Introduction1.1 PurposeThis document describes Otis’ application of the Remote Serial Link (RSL) within the Elevator system. This document should serve as a reference for creating modules that communicate using the RSL protocol. This document is meant to be referenced by multiple Module SIDs and does not intend to capture Module specific information.Any Module or Message specific information found in this document are examples and should not be construed as a definition of a standard for a Module or Message.This document will be updated when the RSL protocol standard or Otis’ application of RSL is amended, following approval by the Interface Change Control Board.1.2 OverviewThis document has been divided into the following sections and subsections. The combined information in these sections provides a complete description of Otis’ application of the RSL protocol.! Protocol Descriptiono The protocol description is an introduction to the Otis RSL Protocol and its expected functionality in relation to commercial protocol standards, primarily the 7-layer OSI standard.! Functional Allocationo This section specifies the functions that are supported by the interface. The functions in this section support or interact with functionality in other modules. The interface requirements aregenerally driven by the cross-module functional interactions.! Appendixo List of Acronyms and Definitions.o List of references that may be useful in understanding this document or its history.1.3 Referenced Documents[1] “System Architecture Interface Standard”, Otis Elevator Company, 2005. 
Document # SID00000.
[2] "Signalling Subsystem Basic Data", Flohr Otis Engineering, Otis Elevator Company, 8 September 1987.
[3] Gewinner, J., "Remote Station Family A9693 C1, C2, C3, C4 Basic Data", Version 1.0, Flohr Otis, Berlin, 26 February 1988.
[4] Shull, J., "Industrial Control Unit Integrated Circuit Design Specification", Revision 8, Otis Elevator Company, July 25, 1988.
[5] Schulte, J., "CPI11 ELD for OTIS2000 – Serial Protocol Definition", Otis Elevator Company, European and Transcontinental Operations, Berlin. Document # GAA25250B_BD1, 1994-03-03.
[6] Drop, D., "RSL Global Aesthetics Generic Fixtures Serial Protocol Description", Otis Elevator Company, 2000-11-17, Document # 54723.

2 RSL Overview
This section describes the mapping of Otis' application of RSL to the OSI model.
The Remote Serial Link is a low-cost 4-wire bus intended for use in a single-master, multiple-slave (I/O devices) configuration. Information is transferred as a continuous sequence of 64 frames of 5 bits each. The four wires between a master device and a slave device are arranged as two pairs, one providing communication, the other providing power. All stations on the RSL are connected in parallel. The RSL master provides synchronization for the read/write data transfer.
The RSL is used as a long-distance (hoistway-length) communication channel for fixtures and bit-oriented inputs and outputs.
2.1 Communication Context
RSL is applied as one of the communication protocols in the elevator control system communication architecture. See reference [1].
2.2 Otis RSL Protocol Mapping to the 7-layer ISO model
The following maps the implemented layering for the RSL Protocol to the industry-standard ISO 7-layer model:
7. Application (application-to-application communication) — Otis Application layer: bit mapping for Basic RSL, CPI11/NEL and Global Aesthetics; unit: Protocol Data Unit (PDU)
6. Presentation (convert, encrypt, compress) — not implemented
5. Session (establish, manage, terminate) — not implemented
4. Transport (error recovery, flow control; unit: Segment) — not implemented
3. Network (determine path, route; unit: Packet) — N/A
2. Link (media access control, framing, error) — Otis Link layer; unit: RSL Frame
1. Physical (transmit raw bits) — Otis Physical layer; unit: RSL Bits
3 Otis RSL Functionality
The Otis RSL protocol interface standard describes the mechanism for bit-oriented input and output between RSL slave stations and an RSL master station, and for bit-oriented output and commands between a master and its slave devices. The RSL protocol addresses the physical and link layers; therefore they will be the only layers described in this document.
Typical use includes:
• Bit-oriented input to the controller, e.g. buttons, keyswitches
• Bit-oriented output from the controller, e.g. tell-tale lights
• Position indicator data stream
• Electro-luminescent Display (ELD) data
4 OSI Layer 1: RSL Physical
Following a synchronization sequence, the master station transmits data for each slave station and then reads data from the slave stations.
4.1 RSL Physical layer
This section defines the signal levels, cabling and logical representation of signals for the RSL.
4.1.1 Signal levels
- Signal voltage levels for L1 and L2 data lines: -6.0 to 35.0 Vdc
- Common mode rejection voltage: +4.0 Vdc (high level: VL1 >= VL2 + 0.8 Vdc; low level: VL1 <= VL2 + 0.3 Vdc)
4.1.2 Power levels
- Power lines: 17.7 to 35.0 Vdc
- Voltage drop on power distribution line: max 2.0 Vdc
- Current: board specific
4.1.3 Bit Representation
4.1.3.1 Signal Levels
- Logical "0": -6.0 to 0.8 Vdc
- Logical "1": 17.0 to 35.0 Vdc
4.1.4 Transmission Medium
The four-wire serial link consists of two data lines and two power lines. The system allows for a maximum serial line length of 300 m and a tap length of 2 m.
4.1.4.1 Data transmission lines
The data lines are a twisted pair to minimize differential voltage distortion. L1 is the data line and L2 is the clock line used for synchronization.
4.1.4.2 Power lines
The power line pair may be twisted or parallel.
The transmission line should have an end-to-end impedance of approximately 100 ohms and an approximate capacitance of 60 pF/meter. See reference [2].
4.1.5 Connectors
4.1.5.1 "Slave" Platforms (e.g. fixtures)
4.1.5.1.1 One row of 4 pins
This connector supports two links: the link supplying power and the link supplying control messages.
- Post 1: DL1 — differential signal for RSL; Data Line 1
- Post 2: DL2 — differential signal for RSL; Data Line 2
- Post 3: Return — voltage return for fixture power; return power for fixture electronics
- Post 4: VRS+ — 33 V to power the fixture; supply for fixture electronics
5 OSI Layer 2: RSL Link
The master station provides the basis for data transfer cycles. It provides the clock pulses, drives the output for all frames (stations/addresses) during the write cycle and reads the inputs from all frames (slave stations/addresses) during the read cycle.
5.1 Frames
Data is transferred in 'frames' with a nominally defined time of 800 µs, made up of a 100 µs clock pulse and 5 data bits of 100 µs each. The data bits are bounded by 100 µs 'dead' slots. See reference [3].
5.2 Data Cycles (Master Station driven)
Each data transfer cycle consists of three sequential sub-cycles (from the master station perspective): write, read, synchronization. See reference [3].
5.2.1 Synchronization Cycle
The Synchronization Cycle consists of two frames without clock pulses that are inserted between every data write/read cycle.
5.2.2 Write Cycle
5.2.3 Read Cycle
5.3 Frame Timing
- Cycle time: min 788, typ 800, max 835 µs
- Clock width: min 80, typ 100, max 146 µs
- Data: min 80, typ 100, max 146 µs
5.4 Error Detection
The RSL protocol supports detection of loss of synchronization. The master station monitors for bit and clock pulse errors, providing a flag on the detected error.
See reference [2].
6 OSI Layers 3 – 6: Not Supported
7 OSI Layer 7
This section describes the application of the frames and sdata (PI) bits as applied for Basic RSL, CPI11/NEL and Global Aesthetics fixtures.
7.1 Basic RSL Data format
The master station performs as described above (section 5). Each slave station (address) is responsible for reading 5 bits from frame 0, the 5th bit from frames 4 – 39 (36 'sdata' bits related to PI information) and the 5 bits associated with its address. See references [2] and [3].
7.1.1 Master Write / Slave Read
- Frame 0: master provides a command that is read by all slave stations
- Frames 1 – 3: not used
- Frames 4 – 63: master provides data for 60 slave stations addressed 4 to 63, including the 5th-bit 'sdata' for frames 4 – 39. Slave stations read 4 bits from their station time slot (address) and the 5th bit from frames 4 – 39.
7.1.2 Master Read / Slave Write
The master station reads all frames.
- Frame 64: stations with address 0 (for test purposes) respond in this frame
- Frames 65 – 67: not used
- Frames 68 – 127: slave stations with addresses 4 – 63 respond in their sequential frame (time slot)
7.2 CPI-11 / New Europa Line (NEL) RSL Data format
This format modifies the meaning of the sdata (5th data bit in each of frames 4 – 39) and is NOT compatible with PI devices using the basic RSL protocol. See reference [5].
7.2.1 ELD Frames
7.2.1.1 Frame 0
7.2.1.2 Frame 1
7.2.2 7 & 16 Segment LCD Character Bit-maps
7.2.2.1 Segment locations for LCD Position Indicator
7.2.2.2 Bit map for LCD Segment code
7.2.3 7 & 16 bit LCD Character Code list
7.2.4 ELD Frame Description
7.2.5 ELD Commands
7.3 Global Aesthetics RSL Data format
This format modifies the meaning of the sdata (5th data bit in each of frames 4 – 39), extending the sdata fields through the 55th frame. It is NOT compatible with PI devices using the basic RSL protocol but IS compatible with CPI11/NEL devices.
See reference [6].
7.3.1 Frame 0
7.3.2 Frame 1
Appendix
APPENDIX A Definitions and Acronyms
- CPI: Car Position Indicator
- ELD: Electro-luminescent Display
- ISO: International Standards Organization
- NEL: New Europa Line
- PI: Position Indicator
- RSL: Remote Serial Link
- SID: Standard Interface Definition
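The frame timing of section 5.3 and the basic-RSL slot assignment of sections 7.1.1–7.1.2 can be sketched as a few helpers. This is our own illustrative reading of the tables above, not code from the standard, and the function names are invented.

```python
# Published RSL frame-timing limits from section 5.3 (microseconds).
TIMING_LIMITS = {
    "cycle": (788, 835),   # nominal 800 µs frame
    "clock": (80, 146),    # nominal 100 µs clock pulse
    "data":  (80, 146),    # nominal 100 µs per data bit
}

def within_limits(kind: str, width_us: float) -> bool:
    """True if a measured width falls inside the min/max window for its kind."""
    lo, hi = TIMING_LIMITS[kind]
    return lo <= width_us <= hi

def write_frame(address: int) -> int:
    """Frame in which the master writes data for a slave (section 7.1.1)."""
    if not 4 <= address <= 63:
        raise ValueError("basic RSL slave addresses are 4..63")
    return address              # frames 4..63 carry output for addresses 4..63

def read_frame(address: int) -> int:
    """Frame in which the slave answers during the read cycle (section 7.1.2)."""
    if not 4 <= address <= 63:
        raise ValueError("basic RSL slave addresses are 4..63")
    return address + 64         # frames 68..127 mirror addresses 4..63

# Nominal signalling: 1 clock pulse + 5 data bits of 100 µs each = 600 µs;
# the remainder of the 800 µs cycle is taken by the 'dead' slots.
assert 100 + 5 * 100 == 600
assert within_limits("cycle", 800) and not within_limits("clock", 150)
assert write_frame(4) == 4 and read_frame(63) == 127
```

The fixed address-plus-64 offset between write and read slots is what lets the master run the write, read and synchronization sub-cycles back to back without any per-frame addressing overhead.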

McKinsey Supply Chain Management Processes and Performance (original English slides)

Possible data sources
CIPS (UK): purchasing (and supply chain). APICS (US): supply chain. CAPS (US): purchasing and supply chain research; benchmark industry listings. NAPM (US): purchasing. Kaiser Associates: benchmark specialist consultant. US university research: new global initiative (investigating entry opportunities — Bob Ackerman).
Recognise cross-industry, in-industry and in-company similarities and differences. Interface the solution to the current client's measures, systems, processes and culture, and guide migration over time.
Performance measurement is an important but complex subject
This document is an initial step in the right direction.
Companies see the need for metrics. . .

Chapter_22

Another set of tasks that ensure the software developed is traceable to customer requirements.

Boehm [Boe81] states this another way: Verification: "Are we building the product right?" Validation: "Are we building the right product?"
[Unit-test environment diagram: a driver feeds test cases to the module under test and collects the results; stubs replace subordinate modules; testing exercises the module's interface, local data structures, independent paths, boundary conditions and error-handling paths.]
Driver: receives test-case data, passes the data to the module under test, and outputs the results.
Regression testing (i.e., re-running all or part of the tests already performed) can be used to avoid introducing new errors.

Return to step 2 and continue this process until the entire program structure has been built.
Bottom-Up Integration

Bottom-up integration testing begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure).
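As an illustrative sketch (invented for this section, not taken from the chapter): the atomic modules are tested first, and a simple driver stands in for the higher-level control module that has not yet been integrated.

```python
# Atomic (lowest-level) modules: tested first in bottom-up integration.
def parse_amount(text: str) -> int:
    """Parse a whitespace-padded integer amount."""
    return int(text.strip())

def apply_discount(amount: int, percent: int) -> float:
    """Apply a percentage discount to an amount."""
    return amount * (100 - percent) / 100

# Driver: a temporary stand-in for the not-yet-integrated higher-level
# module. It feeds test cases to the cluster of atomic modules and
# checks their combined results.
def driver() -> None:
    assert parse_amount(" 200 ") == 200
    assert apply_discount(200, 10) == 180.0
    assert apply_discount(parse_amount("50"), 50) == 25.0

driver()
```

Once the cluster passes, the driver is discarded and the real superordinate module is integrated in its place, moving construction upward through the program structure.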
DO 5 I = 1, 3   (the intended Fortran DO loop)
DO 5 I = 1. 3   (a period typed in place of the comma turns the statement into an assignment)
Software Testing

Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
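A minimal, invented example of that intent: the test below deliberately probes a boundary value and duly exposes an off-by-one defect planted in the function.

```python
def count_in_range(values, low, high):
    """Intended: count values with low <= v <= high (inclusive).
    Bug: the upper bound is treated as exclusive."""
    return sum(1 for v in values if low <= v < high)  # '<' should be '<='

# Exercising the boundary finds the error before delivery:
result = count_in_range([1, 5, 10], 1, 10)
assert result == 2   # buggy behaviour observed...
assert result != 3   # ...the intended (inclusive) answer would be 3
```

A test written without the boundary value 10 would pass and the defect would ship, which is why test cases are designed with the specific intent of finding errors rather than confirming expected behaviour.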

Determination of trace N-nitrosodimethylamine in famotidine and its preparations by ultra-high-performance liquid chromatography–Orbitrap high-resolution mass spectrometry


FENXI CESHI XUEBAO (Journal of Instrumental Analysis), Vol. 39, No. 9, September 2020. doi: 10.3969/j.issn.1004-4957.2020.09.003

GUO Chang-chuan, YANG Shu-juan, LIU Qi, WANG Wei-jian, WEN Song-song, NIU Chong, XU Yu-wen* (Shandong Institute for Food and Drug Control; NMPA Key Laboratory for Research and Evaluation of Generic Drugs; Shandong Engineering Research Center for Generic Drug Consistency Evaluation, Jinan 250101, China)
Funding: National Natural Science Foundation of China (71573606, 71673076). *Corresponding author: XU Yu-wen, Ph.D., chief pharmacist; research field: drug inspection.

Abstract: An ultra-high-performance liquid chromatography–Orbitrap high-resolution mass spectrometry (UHPLC-Orbitrap HRMS) method was established for the determination of N-nitrosodimethylamine (NDMA) in famotidine and its preparations. Samples were extracted with methanol, vortex-mixed, shaken at constant temperature, centrifuged at high speed and passed through a microporous filter before LC-MS analysis. Separation was performed on an ACE EXCEL 3 C18-AR column (164 mm × 4.9 mm, 3 μm) by gradient elution, with 0.3% formic acid in water and 0.3% formic acid in acetonitrile as the mobile phases, at a flow rate of 0.5 mL/min, a column temperature of 30 °C and an autosampler temperature of 4 °C; a six-port switching valve was set to protect the mass spectrometry system. Mass spectrometric analysis used an ESI source in positive-ion parallel reaction monitoring (PRM) mode, with external-standard quantification. NDMA showed good linearity over 1.00 – 100.00 ng/mL with a correlation coefficient (r) of 0.999 7; the limit of detection and lower limit of quantification were 0.24 ng/mL and 1.00 ng/mL, respectively. Average recoveries in famotidine and its preparations were 98.5% – 108%, with relative standard deviations (RSD) of 2.3% – 6.7%.
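To make the validation figures concrete, spike recovery and RSD are computed as below. The replicate values are hypothetical numbers invented for illustration, not data from the paper.

```python
import statistics

def recovery_percent(measured: float, spiked: float) -> float:
    """Spike recovery: measured concentration over spiked concentration."""
    return measured / spiked * 100

def rsd_percent(replicates: list) -> float:
    """Relative standard deviation of replicate measurements, in percent."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

# Hypothetical replicate results for a 10.0 ng/mL NDMA spike:
measured = [9.9, 10.2, 10.4]
recoveries = [recovery_percent(m, 10.0) for m in measured]

# All three recoveries fall inside a 98%-105% window, and the
# between-replicate RSD is roughly 2.5% for these invented numbers.
assert 98.0 < min(recoveries) and max(recoveries) < 105.0
assert abs(rsd_percent(measured) - 2.48) < 0.1
```

A reported recovery range of 98.5% – 108% with RSD 2.3% – 6.7% is simply these two statistics evaluated over the paper's own spiked samples.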

IEC-61162-420


Commission Electrotechnique Internationale International Electrotechnical Commission
PRICE CODE XD
For price, see current catalogue
CONTENTS
FOREWORD
INTRODUCTION
1 Scope and object
2 Normative references
3 Definitions

Theme-Elevating Sentences for High-School English Essays


The art of crafting a compelling high school English essay lies not solely in the technical mastery of grammar and syntax but rather in the ability to elevate the main idea, transforming a simple prompt into a thought-provoking exploration of complex themes. As students navigate the academic landscape, the development of this skill becomes crucial, empowering them to transcend the confines of a basic essay and create a work that resonates with both the reader and the writer.

At the heart of a successful high school English essay lies the main idea the writer seeks to convey. This central theme serves as the foundation upon which the entire composition is built, guiding the flow of ideas and shaping the narrative. Crafting a main idea that is both insightful and well-articulated is a delicate balance, requiring the writer to delve deep into the subject matter and uncover its inherent complexities.

One of the key strategies in elevating the main idea is the strategic use of supporting evidence. Rather than relying on superficial examples or generic statements, the skilled essayist weaves a tapestry of carefully selected details, quotes, and anecdotes that lend credibility and depth to the central argument. This approach not only strengthens the overall persuasiveness of the essay but also invites the reader to engage more deeply with the ideas presented.

Moreover, the ability to draw connections between the main idea and broader societal, historical, or philosophical contexts further elevates the essay's significance. By contextualizing the central theme within a larger framework, the writer demonstrates a nuanced understanding of the subject matter and its implications, challenging the reader to consider the essay's relevance beyond the confines of the prompt.

The skilled high school English essayist also recognizes the power of language in shaping the reader's perception.
The judicious use of rhetorical devices, such as metaphor, irony, and parallel structure, can elevate the main idea by imbuing it with a sense of artistry and emotional resonance. These linguistic tools not only enhance the essay's aesthetic appeal but also serve to amplify the writer's core message, leaving a lasting impression on the reader.Furthermore, the ability to anticipate and address counterarguments within the essay demonstrates a level of critical thinking that sets the writer apart. By acknowledging and thoughtfully engaging withalternative perspectives, the essayist showcases a depth of understanding that transcends the binary nature of many academic prompts. This approach not only strengthens the overall persuasiveness of the essay but also fosters a sense of intellectual humility, a trait highly valued in the academic realm.In the realm of high school English essays, the true measure of success lies not in the mere fulfillment of a prompt but in the writer's capacity to elevate the main idea, transforming a simple task into a profound exploration of the human experience. By harnessing the power of language, drawing connections to broader contexts, and engaging with diverse perspectives, the skilled essayist crafts a work that resonates long after the last word is read.Ultimately, the art of elevating the main idea in a high school English essay is a testament to the writer's intellectual curiosity, analytical prowess, and creative expression. It is a skill that not only serves the immediate academic purpose but also lays the foundation for a lifetime of critical thinking and effective communication – invaluable assets in a world that increasingly demands nuanced, thoughtful engagement with complex ideas.。

罗尼库尔曼介绍英语作文

罗尼库尔曼介绍英语作文

罗尼库尔曼介绍英语作文Rowan Atkinson, the renowned British comedian and actor, is perhaps best known for his iconic roles in television shows like "Mr. Bean" and "Blackadder." However, Atkinson's talents extend far beyond the realm of comedy, as he has also made significant contributions to the art of English essay writing. In this essay, we will explore Atkinson's insights and techniques for crafting engaging and effective essays in the English language.One of the fundamental principles that Atkinson emphasizes in his approach to essay writing is the importance of a clear and coherent structure. He believes that a well-organized essay, with a logical flow of ideas, is essential for capturing the reader's attention and effectively conveying the writer's message. Atkinson advocates for the use of a classic five-paragraph structure, which includes an introduction, three body paragraphs, and a conclusion.In the introduction, Atkinson suggests that the writer should establish the main topic or thesis of the essay, providing the reader with a clear understanding of the essay's purpose. He encourageswriters to use engaging language and to avoid starting the essay with a bland or generic statement. Instead, Atkinson recommends opening with a hook, such as an intriguing question or a thought-provoking observation, to pique the reader's interest and draw them into the essay.The three body paragraphs, according to Atkinson, should each focus on a distinct aspect of the essay's main topic. He emphasizes the importance of using clear topic sentences to introduce each paragraph and to provide a smooth transition between ideas. 
Atkinson also stresses the need for strong supporting evidence, such as examples, facts, or expert opinions, to substantiate the writer's claims and strengthen the overall argument.In the conclusion, Atkinson suggests that the writer should revisit the main thesis or argument, summarizing the key points made throughout the essay and leaving the reader with a lasting impression. He cautions against simply restating the introduction or introducing new information in the conclusion, as this can be confusing for the reader. Instead, Atkinson recommends that the writer should use the conclusion to reinforce the essay's central message and to leave the reader with a sense of closure.Another critical aspect of Atkinson's approach to essay writing is the importance of using clear and concise language. He believes thateffective writing should be free from unnecessary jargon or complex sentence structures, and instead, should prioritize simplicity and clarity. Atkinson encourages writers to use active voice, to avoid passive constructions, and to choose words that are precise and unambiguous.Furthermore, Atkinson emphasizes the significance of developing a unique and engaging writing style. He believes that the best essays are those that reflect the writer's personality and voice, rather than adhering to a formulaic or generic approach. Atkinson suggests that writers should experiment with different techniques, such as the use of humor, anecdotes, or rhetorical questions, to create a distinctive and memorable essay.One of the key strengths of Atkinson's approach to essay writing is his emphasis on the importance of revision and editing. He believes that the first draft of an essay is rarely the final product, and that writers should be willing to critically examine their work and make necessary changes. 
Atkinson encourages writers to read their essays aloud, to check for grammar and spelling errors, and to seek feedback from others to help refine and improve their writing.In conclusion, Rowan Atkinson's insights and techniques for essay writing offer a valuable perspective for both novice and experienced writers. His emphasis on structure, language, and style, combinedwith his emphasis on revision and editing, provide a comprehensive framework for crafting engaging and effective essays in the English language. By following Atkinson's guidance, writers can develop the skills and confidence necessary to express their ideas clearly and persuasively, and to leave a lasting impression on their readers.。


Generic Framework for Parallel and Distributed Processing of Video-Data

Dirk Farin (1) and Peter H.N. de With (1,2)

(1) University Eindhoven, Signal Processing Systems, LG 0.10, 5600 MB Eindhoven, Netherlands, d.s.farin@tue.nl, WWW home page: http://vca.ele.tue.nl
(2) LogicaCMG, PO Box 7089, 5605 JB Eindhoven, Netherlands

Abstract. This paper presents a software framework providing a platform for parallel and distributed processing of video data on a cluster of SMP computers. Existing video-processing algorithms can be easily integrated into the framework by considering them as atomic processing tiles (PTs). PTs can be connected to form processing graphs that model the dataflow. Parallelization of the tasks in this graph is carried out automatically using a pool-of-tasks scheme. The data format that can be processed by the framework is not restricted to image data, so that intermediate data, like detected feature points, can also be transferred between PTs. Furthermore, the processing can be carried out efficiently on special-purpose processors with separate memory, since the framework minimizes the transfer of data. We also describe an example application for a multi-camera view-interpolation system that we successfully implemented on the proposed framework.

1 Introduction

Video processing and analysis systems pose high demands on the computation platform in terms of processing speed, memory bandwidth, and storage capacity. Requirements grow even further if real-time processing speed is needed for analysis and visualization applications. On current commodity hardware, this processing power is not available. For example, simultaneous capturing of two 640x480@25fps videos from IEEE-1394 cameras already fills the full bus bandwidth. The straightforward solution for this problem is to use computation clusters instead of expensive specialized hardware. However, the design of distributed systems is a troublesome task, prone to design flaws.

Most previous approaches to parallel computation have concentrated on fine-granular data-parallelism. In this approach, the algorithms themselves are parallelized, which requires a reimplementation of the algorithms. This is difficult, especially because computer-vision engineers are rarely experts in distributed processing [6]. Since complex systems are composed of many processing steps, parallelization can also be carried out by keeping the (sequential) algorithms as atomic processing steps. This has two advantages: algorithms are easier to implement, and algorithm development stays independent of the parallelization. Note that it is still possible to use the distributed-processing framework to also parallelize the algorithms themselves by splitting the task into smaller sub-tasks that can be computed independently.

In this paper, we propose a generic framework for distributed real-time processing, into which existing software components (algorithms) can be integrated effortlessly. Algorithm parallelization is achieved by splitting the processing into a set of processing tiles (PTs). Each processing tile performs a specific operation on a set of inputs and generates one or several outputs. The inputs and outputs of the PTs can be connected to build arbitrary processing graphs, describing the order and dependencies of processing. The framework software takes care of the appropriate distribution of the algorithms over the processing resources and the control of execution.

The proposed framework provides the following features.

1. The processing graph is not limited to a certain topology (like processing pipelines). In particular, PT outputs can be efficiently connected to the inputs of several PTs.
2. Automatic parallelization. While the processing within one PT is considered an atomic operation, parallelism is obtained by running PTs in parallel. The framework also allows processing data from different time instances in parallel, such that not only horizontal parallelism (concurrency of independent tasks) but also vertical parallelism (pipelining) is exploited.
3. The framework is network transparent. The processing graph can be distributed over a cluster of computers and still be accessed in a uniform way. If data has to be sent across the network, this is done transparently by the framework.
4. The framework supports operations on arbitrary data-types. Hence, not only image data can be processed, but also structured data-types like mesh-based geometry. An important alternative view on this is that data can be processed in different representations. For example, low-level processing tasks (lens-distortion correction, image rectification, depth estimation) can be implemented more efficiently on the graphics processor (GPU). In this case, the image data is loaded into the texture memory of the graphics card. Since the overhead of transferring the image data between main memory and the graphics card would annihilate the performance gain of processing on the GPU, it should be avoided whenever possible. This is achieved by passing only handles to the texture data between PTs and performing the conversion to image data in main memory only when necessary.

In this paper, we first describe the main considerations taken into account when designing the framework software (see Section 2), and we give an overview of the framework architecture. In Section 3, we describe the implementation in more detail. An example application and its implementation using the proposed framework is presented in Section 4.

2 Design of the Distributed Processing Framework

2.1 Design Considerations

The design of our Distributed Processing Framework (DPF) was driven by the following requirements.

Fine-grained vs. coarse-grained parallelization. Previous work on parallel algorithms has mostly considered parallelization using a fine-granular data-parallelism within a single algorithm. The difficulty with this approach is that it complicates the implementation of the algorithm, because it requires knowledge of the image-processing algorithms as well as knowledge about parallel and distributed programming. On the other hand, large applications consist of a multitude of processing steps. This allows parallel processing to be applied at a coarser level, at which every algorithm is (conceptually) implemented as a sequential program (in practice, this does not have to be followed strictly, as we will describe in Section 4). Since we are targeting complex video-processing systems comprising several algorithms, we have chosen the coarse-grained parallelization because it simplifies the implementation of each algorithm. Furthermore, the coarse-grained parallelization has a lower communication overhead, which is usually one of the most restricting bottlenecks.

SMP and cluster parallelization. Currently, multi-core processors begin to replace single-core processors because they allow increasing the computation speed more economically than increasing the processor frequency. While current processors have two or four cores, the number of cores is expected to increase further in the future. But even with multi-core processors, the processing speed is limited because of the limited memory bandwidth, limited I/O bandwidth, or simply because the computation speed of these processors is still not sufficient for the application. To overcome these limitations, processing in a cluster of computers is a viable approach [1]. For these reasons, the DPF should enable parallel processing in both ways: by exploiting multiple processing cores within one shared-memory system (SMP), and by distributing the work over a cluster of computers.

Automatic parallelization vs. manual splitting of tasks. In designing a system using our DPF, it must be decided which processing tiles should be processed on which computer. The optimal design of the processing network can be determined automatically, but we still prefer to specify the assignment of the processing tiles to the computers by hand for the following reasons. First, the number of processing tiles is rather limited for most systems, and the placement of some of these tiles is dictated by the hardware (the camera capturing must take place at the computer to which the cameras are connected). Second, for an automatic optimal distribution of the tasks, exact knowledge about the processing times, the required bandwidths, or the available computation resources is required. These are often difficult to specify formally, particularly in a heterogeneous architecture with special-purpose processors.

Flexibility. Our intention in designing the DPF was to provide a general framework for a wide area of applications. Hence, the design should not impose a specific processing architecture, like a fixed processing pipeline, which might be unsuitable for many applications. Furthermore, the framework should strengthen the reuse of processing tiles for different applications. This is supported by allowing processing tiles to have a flexible number of inputs and outputs, which do not all have to be connected. Unconnected outputs can indicate to the processing tile that part of its processing can be disabled, and optional inputs can be used for optional hints that may help the algorithm.

Special-purpose processor utilization. Many low-level image-processing tasks are well suited for parallel processing, but this parallelism cannot be achieved with standard general-purpose processors. Even though these processors nowadays have support for SIMD instructions especially designed for multimedia applications (like MMX on x86, or AltiVec on PowerPC architectures), the degree of parallelism is limited and restricted to simple processing. In addition to the CPU, specific media-processors do exist which offer higher parallelization factors than regular CPUs. When utilizing special-purpose processors with independent memory (like the GPU on graphics cards), it should be considered that the image data to be processed has to be stored in its local memory. The time to transfer the image between different memories is not negligible, and can even exceed the actual computation time. For this reason, it is important to avoid unnecessary memory transfers wherever possible. This has to be considered in the design of the DPF by providing several options for how the image data is transferred between successive processing tiles.

2.2 Overview of the Distributed Processing Framework

The core of our distributed processing framework (DPF) is a set of user-definable processing tiles (PTs). A processing tile conducts an operation on its input data and generates new output. The number of inputs and outputs is flexible. In order to define the data-flow through the PTs, they can be connected to form a processing graph of an arbitrary topology (without cycles).

A Processing Graph Control (PGC) distributes the tasks to be carried out in the graph of PTs over a set of worker threads, hereby exploiting multi-processor parallelism in an SMP system. The scheduling is determined by splitting the processing into a set of "(PT, sequence-number)" pairs. Each of these pairs represents a processing task that can be issued to a thread. The scheduler maintains the set of tasks in three queues: the tasks that are currently processed, tasks that are ready to run (all input dependencies are met), and tasks that cannot be started yet because of unmet dependencies. These queues are updated whenever a PT has finished its computation.

For specifying processing graphs that are distributed over several computers in a cluster, the PGC can be wrapped into a Distributed Processing Graph (DPG). The DPG provides uniform access to all PTs even though they might be distributed over different computers. Whenever data is to be transferred between computers, a network connection is established transparently to the user.
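The three scheduling queues of "(PT, sequence-number)" tasks mentioned in this section can be pictured with a small toy model. All names below are invented for illustration only; the paper does not show the framework's actual code. A task becomes ready once the tiles it depends on have finished the same sequence number:

```python
# Toy model of the PGC's task bookkeeping: tasks are (tile, sequence_number)
# pairs held in three disjoint sets, and a task is ready once every tile it
# depends on has finished that sequence number. The tile names and the
# dependency table are invented for this sketch.

finished = set()          # (tile, seq) pairs whose computation is done
deps = {                  # tile -> tiles whose output it consumes
    "capture":   [],
    "undistort": ["capture"],
    "depth":     ["undistort"],
}

def make_tasks(seq):
    return {(tile, seq) for tile in deps}

def is_ready(task):
    tile, seq = task
    return all((d, seq) in finished for d in deps[tile])

# Initial queue contents for the first sequence number:
to_be_processed = make_tasks(0)
ready_to_run = {t for t in to_be_processed if is_ready(t)}
to_be_processed -= ready_to_run
in_progress = set()

# A worker thread takes a ready task ...
task = ready_to_run.pop()          # only ("capture", 0) is ready at first
in_progress.add(task)
# ... and when it finishes, waiting tasks are re-examined:
in_progress.remove(task)
finished.add(task)
newly_ready = {t for t in to_be_processed if is_ready(t)}
ready_to_run |= newly_ready
to_be_processed -= newly_ready
```

Finishing the capture task moves its successor into the ready-to-run set while the depth task keeps waiting; this re-examination is the bookkeeping performed each time a worker thread completes a task.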
From a user point-of-view, a DPG looks like a local processing graph, because most of the network distribution is hidden from the user. It should be emphasized here that the DPF is organized in three separate layers that represent subsets of the features made available by the DPF. The idea is to provide simpler APIs to the programmer when the full-featured framework is not required. The typical usage of these three layers is briefly explained below.

– A system building upon the first layer includes only the PT objects, which are connected to a graph. There is no central scheduler organizing the data processing. Data can be processed directly by pushing new data into some PT input. This will trigger the processing of this tile and also the successive tiles, where additional input is requested as required. On the other hand, the graph of PTs can also be used in a pull mode, where new output data is requested at some PT, which again triggers the processing of the tile. If there is missing input, the PT first acquires the required inputs from the preceding tiles.
– The second layer adds a scheduler (PGC) to the graph of connected tiles. This scheduler manages multiple worker threads to carry out the computations in parallel.
– The third layer adds a network-transparent distribution of the processing graph (DPG) over a cluster of computers. To this end, a server application is run on every computer in the network, and the servers are registered at a central control computer. PTs can be instantiated at any arbitrary computer through a uniform API at the control computer. Connecting two PTs across the network is possible and handled transparently to the user.

3 Implementation Details

3.1 Processing Tiles

All algorithms are wrapped into Processing Tile (PT) objects. Each PT can accept a number of inputs and can also create several outputs. For the DPF, a PT appears like an atomic operation (however, the algorithm in the PT may itself be implemented as a parallel algorithm, independent of the parallelization performed in the DPF). Each PT provides the memory for storing the computed results, but it does not include buffers for holding its own input data. The input data is accessed directly from the output buffers of the connected tiles. This prevents data from being unnecessarily copied between PTs, which could constitute a considerable part of the computation time. On the other hand, this blocks the PT that provides the input from already starting the work on the next Data-Unit. If this is a problem, an additional buffering PT can be inserted in-between the two PTs to decouple the data-dependency between them.

Every output can be connected to several inputs without any extra cost. Moreover, inputs and outputs can also be left unconnected. While the algorithm within the PT might simply proceed as usual even when there are unconnected outputs, it can be more efficient to disable generating the data for such an output. This feature can be used, for example, to create outputs that provide a visualization of the algorithm. When the visualization is not required, the output can be left unconnected. Connecting only some of the inputs can be used, for example, for operations that work with a varying number of inputs (like composition of images, or depth estimation from multiple cameras), or to support additional hints for the algorithm (segmentation masks that can help to increase the quality of the result, but which are not required).

Each PT also saves a sequence-number, indicating which frame was processed in the last step and is now available at the output. Using this sequence-number, a PT can check if all its inputs are available such that it can start processing.

3.2 Data-Units

Data that should be passed between PTs is encapsulated in Data-Unit objects.
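As a concrete picture of the storage scheme of Section 3.1, the tile interface might be sketched as follows (class and method names are invented Python stand-ins, not the framework's API). Each tile owns only its output buffers; inputs are references into the producing tile's outputs, so connected tiles exchange data without copying, and an unconnected output lets a tile skip the corresponding work:

```python
# Invented stand-in for the Processing Tile interface: a tile stores only its
# own outputs plus a sequence number, and reads inputs directly out of the
# producing tile's output buffers (no input buffers, no copies).

class ProcessingTile:
    def __init__(self, n_in, n_out):
        self.in_links = [None] * n_in       # (producer_tile, output_index)
        self.out = [None] * n_out           # output buffers live in this tile
        self.out_connected = [False] * n_out
        self.seq = -1                       # sequence number of current outputs

    def connect(self, out_idx, consumer, in_idx):
        consumer.in_links[in_idx] = (self, out_idx)
        self.out_connected[out_idx] = True

    def input_value(self, in_idx):
        producer, out_idx = self.in_links[in_idx]
        return producer.out[out_idx]        # direct access, no copy

class Constant(ProcessingTile):
    """Trivial source tile with one output."""
    def __init__(self, value):
        super().__init__(0, 1)
        self.out[0] = value
        self.seq = 0

class Doubler(ProcessingTile):
    """Output 0: doubled input; output 1: optional visualization string."""
    def __init__(self):
        super().__init__(1, 2)

    def process(self, seq):
        x = self.input_value(0)
        self.out[0] = 2 * x
        if self.out_connected[1]:           # skip work for unconnected outputs
            self.out[1] = "frame %d: %r -> %r" % (seq, x, 2 * x)
        self.seq = seq

src = Constant(21)
dbl = Doubler()
src.connect(0, dbl, 0)                      # visualization output stays open
dbl.process(0)
```

Here the Doubler's visualization output is left unconnected, so the tile produces only its primary result, mirroring the paper's example of disabling visualization outputs.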
Each Data-Unit provides a uniform interface for communicating with the DPF, but it can nevertheless hold arbitrary types of data. The Data-Unit must provide a function to serialize the data and to reconstruct the Data-Unit again from the serialized data. This is used by the DPF in order to send data across the network, transparently for the user.

3.3 Using a Graph of PTs Without Processing Graph Control

As noted above, it is possible to use a graph of connected PTs without any further central control. For simple pipelined processing, this comes close to the Decorator design-pattern [3] used in software engineering. However, processing in our graph of PTs is more flexible than a straight processing pipeline. We not only allow arbitrary (acyclic) graphs of PTs, but also allow a push-data semantic as well as a pull-data semantic. Using a graph of PTs without a central Processing Graph Control (see next section) is simple, but note that parallel processing is not available.

Processing of a new unit of data is triggered at an arbitrary PT in the graph.
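The Data-Unit contract of Section 3.2 can be sketched like this (an illustrative Python stand-in; the actual interface belongs to an API the paper does not list in full, and the feature-point payload is just one example of non-image data):

```python
# Sketch of the Data-Unit contract: arbitrary payload behind a uniform
# interface, with serialize/reconstruct so the framework can ship the unit
# across the network without knowing its concrete type. Names are invented.

import json

class FeaturePointsUnit:
    """Example Data-Unit carrying detected feature points, not image data."""
    def __init__(self, seq, points):
        self.seq = seq                  # sequence number of the source frame
        self.points = points            # list of (x, y) tuples

    def serialize(self) -> bytes:
        return json.dumps({"seq": self.seq, "points": self.points}).encode()

    @classmethod
    def reconstruct(cls, data: bytes):
        d = json.loads(data.decode())
        return cls(d["seq"], [tuple(p) for p in d["points"]])

unit = FeaturePointsUnit(7, [(12, 34), (56, 78)])
copy = FeaturePointsUnit.reconstruct(unit.serialize())
```

The round trip through bytes is exactly what the framework relies on when it inserts network-transmission tiles between two computers.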
If new input is fed into the graph, then triggering happens at the moment when the new data is passed into one PT. Whenever a PT is triggered, it checks if the data at its inputs is already available. This is visible from the sequence-numbers in the predecessor PTs. If some input is missing, the corresponding predecessor PT is triggered. When all input is available, the tile performs its operation and triggers all output PTs if they are not yet at the same sequence-number (this can happen if triggering one output PT has propagated to another output).

3.4 Processing Graph Control

The Processing Graph Control (PGC) comprises a scheduler that distributes the processing tasks in a graph of PTs over several processors running in parallel. To this end, a pool-of-tasks scheme is applied. The PGC maintains a set of PTs that are ready to process the next Data-Unit. A PT is ready if all the input data is available at the connected inputs, if the PT is not internally blocked, and if the outputs of the PT are not required by any other PT anymore.

The PGC maintains three sets of "(PT, sequence-number)" pairs to schedule the tasks to the threads. The to-be-processed set is initially filled with all tasks for the first sequence-number. Whenever the first task with the latest sequence-number has started, all tasks for the next sequence-number are added to this set. After a PT has finished its processing, the successor and predecessor PTs are examined to determine if they are now ready to be processed. If they are, the corresponding task is moved from the to-be-processed set into the ready-to-run set. Whenever a working thread has finished a task, it gets a new task from the ready-to-run set, moves this task into the in-progress set, and starts processing.

In order to efficiently test if a task can be processed, an active-edges set is maintained. An active edge is an edge in the PT graph connecting an already-processed output of a tile PT_o with a not-yet processed tile PT_i. This active edge represents a data-dependency which prevents PT_o from processing another unit before PT_i has finished using this output data.

An example of this scheduling algorithm is presented in Fig. 1. In this figure, PTs that are ready-to-run are indicated with a black bar at the left (input) side. PTs that are currently being processed are depicted in grey color. When they have their output data available, this is indicated with a black bar at the right (output) side. Active edges between PTs are shown with bold lines. The number in the PT denotes the current sequence-number of the outputs. In this example, we have assumed that processing takes equal time-periods for every tile. Note that this is not required by the scheduling algorithm, in which the processing time is not defined. Also note that the shown schedule is not the only possible one. In Fig. 1(b), instead of processing the two top tiles in the second column, any two tiles from the second column would be a valid next step. The step in Fig. 1(f) is similar to (b), just with the sequence-numbers increased to the next frame.

3.5 Network Transfer

In order to connect two processing graphs on two different computers, the data processed on one computer must be sent to the input at the second computer.
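Such a transfer amounts to serializing each Data-Unit into a byte stream on the sending side, framing it, and reconstructing it on the receiving side. A minimal sketch with a local socket pair follows (helper names invented; the framework's actual network code is not shown in the paper):

```python
# Length-prefix framing over a stream socket: one simple way to delimit
# serialized Data-Units on the wire. A local socket pair stands in for the
# connection between two computers.

import socket
import struct

def send_unit(sock, payload: bytes):
    # 4-byte big-endian size, then the serialized unit itself
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf

def recv_unit(sock) -> bytes:
    (size,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, size)

sender, receiver = socket.socketpair()
send_unit(sender, b"serialized data-unit for frame 3")
restored = recv_unit(receiver)
sender.close()
receiver.close()
```

The FIFO buffering and the dedicated sender/receiver threads described next wrap around exactly this kind of framing, decoupling network throughput from the processing threads.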
This is realized with a network-sender PT and a network-receiver PT. The network-sender PT uses the Serialize method of the Data-Unit interface to generate a bit-stream representation of the data. This bit-stream is then stored in a FIFO buffer, from which it is sent over the network. The sending is carried out in a separate thread which is managed by the PT itself instead of a global scheduler. This has the twofold advantage that (1) streaming can run with maximum throughput because the thread is always active, and (2) the network-connection PTs can also be used without any PGC. Since the sending thread is almost always blocked in the system function for network transmission, its computation time is negligible.

Fig. 1. Example processing graph running on two CPUs ((a)-(f): steps 1-6). It is assumed here that processing of each tile takes the same time.

The network-receiver PT at the other side works similarly to the network-sender. A separate thread is responsible for receiving new data bit-streams from the network. Whenever the PT is triggered, one data packet is removed from the FIFO and used to reconstruct a Data-Unit.

3.6 Distributed Processing Graph

A Distributed Processing Graph (DPG) wraps a PGC into a network interface. When DPGs are created on different computers, they can be joined together with a control connection. From that moment onwards, both graphs in the DPGs appear as a joint graph and can also be controlled in a unified way, even though the PTs might be on different computers. New tiles can be created on any computer via the DPG interface by specifying the name of the PT and the system on which it should be instantiated. Universally Unique Identifiers (UUIDs) are used to access the PTs in the network.

In the general case, many computers with DPG daemons running on them can be connected into a control tree. Each DPG stores which PTs can be reached via each of its neighboring computers. Control commands for a non-local PT are forwarded to the closest neighbor DPG, which then further handles the request.

The DPG interface can also be used to connect pairs of PTs. If both PTs are on the same computer, a simple direct connection is established as before. If the PTs are on different computers, special PTs are added that serialize the data into a network stream on one computer and then reconstruct the Data-Unit on the receiving computer (as described in Section 3.5). These network-transmission PTs are connected to the pair of PTs that originally should be connected. This process is transparent to the user of the DPF. Note that the data-transfer connections are independent from the control connections between DPGs. Hence, while the control connections always have a tree topology, data is transferred directly between the involved pair of computers.

In order to run our distributed processing framework on a cluster of computers, a DPG is started as a daemon process on each computer. One computer acts as the control computer, running the actual application program. Note that the DPG daemons are independent of the application program as long as all PTs required by the application are compiled into the DPG daemons. Future work will provide a method to also transmit the PT code over the network and link it dynamically to the DPG.

4 Example Application

As an example application, we implemented a multi-camera view-interpolation system which allows synthesizing images along a set of cameras [2]. For the view-interpolation system, the following tasks have to be carried out:

– Capturing the video streams from the digital cameras.
– Correction for the radial lens distortion.
– Stereo image rectification.
– Depth estimation.
– View interpolation.

These sub-tasks are implemented in separate PTs. Note that some of the tasks (lens undistortion, rectification, and part of the view interpolation) are implemented on the GPU, while the depth estimation is implemented on the CPU. At the connection between GPU processing and CPU processing, there are conversion PTs to transfer the data between graphics-card memory and main memory.

Because the depth-estimation process is computationally expensive, it is attractive to parallelize this algorithm itself. Even though parallelization within one PT is not supported directly by our framework, it can be easily achieved by splitting the PT into separate PTs which each compute a partial solution. The results are then combined into a final solution in a multiplexer PT.

5 Conclusions

This paper has presented a software framework that provides an easy-to-use platform for parallel and distributed processing of video data on a cluster of SMP computers. Existing algorithms can be easily integrated into the framework without reimplementation. The framework organizes the processing order of the algorithms in a graph of atomic processing tiles with unrestricted topology. Parallelization is carried out automatically using a pool-of-tasks scheme.

A specific feature of our framework is that it can be used at three different levels, each comprising a subset of the framework. During development, PTs can be directly connected without parallelization, to simplify development and debugging. At a second level, a scheduler is added which automatically parallelizes the execution on SMP machines. Finally, at a third level, the processing graph can be distributed over a cluster of computers. As such, the framework provides a scalable approach for distributed processing without introducing much burden on the programmer.

The framework has been successfully applied on a cluster of multi-processor computers to implement various video-processing tasks, including the described view-interpolation system for multiple input cameras. Because of its flexibility, the framework can be applied to a wide range of applications, probably even in fields other than video processing.

References

1. D. J. Becker, T. Sterling, D. Savarese, J. E. Dorband, U. A. Ranawak, and C. V. Packer. Beowulf: A parallel workstation for scientific computation. In Proceedings of the International Conference on Parallel Processing, 1995.
2. D. Farin, Y. Morvan, and P. H. N. de With. View interpolation along a chain of weakly calibrated cameras. In IEEE Workshop on Content Generation and Coding for 3D-Television, June 2006.
3. E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Professional Computing Series. Addison-Wesley, 1995.
4. J. Hippold and G. Rünger. Task pool teams for implementing irregular algorithms on clusters of SMPs. In International Parallel and Distributed Processing Symposium (IPDPS'03), pages 54-61, 2003.
5. M. Korch and T. Rauber. Evaluation of task pools for the implementation of parallel irregular algorithms. In International Conference on Parallel Processing Workshops (ICPPW'02), pages 597-604, 2002.
6. F. Seinstra, D. Koelma, and A. Bagdanov. Towards user transparent data and task parallel image and video processing: An overview of the Parallel-Horus project. In Proceedings of the 10th International Euro-Par Conference (Euro-Par 2004), volume 3149 of Lecture Notes in Computer Science, pages 752-759, Aug. 2004.
