A Web Services Data Analysis Grid*

William A. Watson III†‡, Ian Bird, Jie Chen, Bryan Hess, Andy Kowalski, Ying Chen
Thomas Jefferson National Accelerator Facility, 12000 Jefferson Av, Newport News, VA 23606, USA

Summary

The trend in large-scale scientific data analysis is to exploit compute, storage and other resources located at multiple sites, and to make those resources accessible to the scientist as if they were a single, coherent system. Web technologies driven by the huge and rapidly growing electronic commerce industry provide valuable components to speed the deployment of such sophisticated systems. Jefferson Lab, where several hundred terabytes of experimental data are acquired each year, is in the process of developing a web-based distributed system for data analysis and management. The essential aspects of this system are a distributed data grid (site-independent access to experiment, simulation and model data) and a distributed batch system, augmented with various supervisory and management capabilities, and integrated using Java and XML-based web services.

KEY WORDS: web services, XML, grid, data grid, meta-center, portal

* Work supported by the Department of Energy, contract DE-AC05-84ER40150.
† Correspondence to: William Watson, Jefferson Laboratory MS 16A, 12000 Jefferson Av, Newport News, VA 23606.
‡ Email: Chip.Watson@.

1. Web Services

Most of the distributed activities in a data analysis enterprise have their counterparts in the e-commerce or business-to-business (b2b) world. One must discover resources, query capabilities, request services, and have some means of authenticating users for the purposes of authorizing and charging for services. Industry today is converging upon XML (eXtensible Markup Language) and related technologies such as SOAP (Simple Object Access Protocol), WSDL (Web Services Description Language), and UDDI (Universal Description, Discovery and Integration) to provide the necessary capabilities [1].

The advantages of leveraging (where appropriate) this enormous industry investment are obvious: powerful tools, multiple vendors (healthy competition), and a trained workforce (reusable skill sets). One example of this type of reuse is in exploiting web browsers for graphical user interfaces. The browser is familiar, easy to use, and provides simple access to widely distributed resources and capabilities, ranging from simple views to applets, including audio and video streams, and even custom data streams (via plug-ins).

Web services are very much like dynamic web pages in that they accept user-specified data as part of the query, and produce formatted output as the response. The main difference is that the input and output are expressed in XML (which focuses upon the data structure and content) instead of HTML (which focuses upon presentation). The self-describing nature of XML (nested tag name + value sets) facilitates interoperability across multiple languages, and across multiple releases of software packages. Fields (new tags) can be added with no impact on previous software.
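This forward-compatibility point is easy to demonstrate. The sketch below is not from the paper; the tag names are invented for illustration. It parses a small XML response with Python's standard library and reads only the fields it knows, so a newer server that adds extra tags does not break an older client.

```python
import xml.etree.ElementTree as ET

# A response from a hypothetical "locate data set" service. A newer
# server version has added a <checksum> tag that this client predates.
response = """
<dataset>
  <globalName>/halla/run1234/raw.evt</globalName>
  <size>20971520</size>
  <checksum>9f86d081</checksum>
</dataset>
"""

root = ET.fromstring(response)
# The client reads only the tags it knows; unknown tags are ignored,
# so adding fields has no impact on previously deployed software.
name = root.findtext("globalName")
size = int(root.findtext("size"))
print(name, size)
```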
In a distributed data analysis environment, the essential infrastructure capabilities include:

· Publish a data set, specifying global name and attributes
· Locate a data set by global name or by data set attributes
· Submit / monitor / control a batch job against aggregated resources
· Move a data set to / from the compute resource, including to and from the desktop
· Authenticate / authorize use of resources (security, quotas)
· Track resource usage (accounting)
· Monitor and control the aggregate system (system administration, user views)
· (for some applications) Advance reservation of resources

Most of these capabilities can be easily mapped onto calls to web services. These web services may be implemented in any language, with Java servlets being favored by Jefferson Lab (described below).

It is helpful to characterize each of these capabilities by the style of the interaction and the bandwidth required; most operations divide into low data volume information and control services (request + response), and high volume data transport services (long-lived data flow). In the traditional web world, these two types of services have as analogs web pages (static or dynamic) retrieved via http, and file transfers via ftp. A similar split can be made to map a data analysis activity onto XML-based information and control services (request + response), and a high bandwidth data transport mechanism such as a parallel ftp program, for example bbftp [2]. Other non-file-based high bandwidth I/O requirements could be met by application-specific parallel streams, analogous to today's various video and audio stream formats.

The use of web services leads to a traditional three-tier architecture, with the application or browser as the first tier. Web services, the second tier, are the integration point, providing access to a wide range of capabilities in a third tier, including databases, compute and file resources, and even entire grids implemented using such toolkits as Condor [3], Legion [4], and Globus [5] (see Figure 1).

As an example, in a simple grid portal, one uses a single web server (the portal) to gain access to a grid of resources "behind" the portal. We are proposing a flexible extension of this architecture in which there may be a large number of web servers, each providing access to local resources or even remote services, either by using remote site web services or by using a non-web grid protocol.

All operations requiring privileges use X.509 certificate based authentication and secure sockets, as is already widely used for e-commerce. Certificates are currently issued by a simple certificate authority implemented as a Java servlet plus OpenSSL scripts and username + password authentication. These certificates are then installed in the user's web browser, and exported for use by other applications to achieve the goal of "single sign-on". In the future, this prototype certificate authority will be replaced by a more robust solution to be provided by another DOE project. For web browsing, this certificate is used directly. For applications, a temporary certificate (currently valid for 24 hours) is created as needed and used for authentication. Early versions of third-party file transfer support credential forwarding of these temporary certificates.
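As a sketch of how such a capability might be invoked as a web service, the following Python fragment issues a "locate data set" request over HTTPS with client certificate authentication. The endpoint URL, request format, and certificate paths are all invented for illustration; the paper specifies no wire format beyond "XML over HTTP".

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical replica-catalog endpoint and a temporary user certificate;
# X.509 client authentication over secure sockets, as described above.
PORTAL = "https://portal.example.org/grid/locate"
CERT = ("/tmp/user-temp-cert.pem", "/tmp/user-temp-key.pem")

request_xml = "<locate><globalName>/halla/run1234/raw.evt</globalName></locate>"

resp = requests.post(
    PORTAL,
    data=request_xml,
    headers={"Content-Type": "text/xml"},
    cert=CERT,      # temporary (24-hour) certificate for authentication
    timeout=30,
)
resp.raise_for_status()
print(resp.text)  # XML listing of hosts holding the data set
```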
2. Implementation: Data Analysis Requirements

The Thomas Jefferson National Accelerator Facility (Jefferson Lab) is a premier nuclear physics research laboratory engaged in probing the fundamental interactions of quarks and gluons inside the nucleus. The 5.7 GeV continuous electron beam accelerator provides a high quality tool for up to three experimental halls simultaneously. Experiments undertaken by a user community of over eight hundred scientists from roughly 150 institutions around the world acquire as much as a terabyte of data per day, with data written to a 12000 slot StorageTek silo installation capable of holding a year's worth of raw, processed, and simulation data.

First pass data analysis (the most I/O intensive) takes place on a farm of 175 dual processor Linux machines. Java-based tools (JASMine and JOBS, described below) provide a productive user environment for file migration, disk space management, and batch job control at the laboratory. Subsequent stages of analysis take place either at the Lab or at university centers, with off-site analysis steadily increasing. The Lab is currently evolving towards a more distributed, web-based data analysis environment which will wrap the existing tools into web services, and add additional tools aimed at a distributed environment.

Within a few years, the energy of the accelerator will be increased to 12 GeV, and a fourth experimental hall (Hall D) will be added to house experiments which will have ten times the data rate and analysis requirements of the current experiments. At that point, the laboratory will require a multi-tiered simulation and analysis model, integrating compute and storage resources situated at a number of large university partners, with raw and processed data flowing out from Jefferson Lab, and simulation and analysis results flowing into the lab.

Theory calculations are also taking a multi-site approach – prototype clusters are currently located at Jefferson Lab and MIT for lattice QCD calculations. MIT has a cluster of 12 quad-processor alpha machines (ES40s), and will add a cluster of Intel machines in FY02. Jefferson Lab plans to have a cluster of 128 dual Xeons (1.7+ GHz) by mid FY02, doubling to 256 duals by the end of the year. Other university partners are planning additional smaller clusters for lattice QCD. As part of a five year long range national lattice computing plan, Jefferson Lab plans to upgrade the 0.5 teraflops capacity of this first cluster to 10 teraflops, with similar capacity systems being installed at Fermilab and Brookhaven, and smaller systems planned for a number of universities.

For both experiment data analysis and theory calculations, the distributed resources will be presented to the users as a single resource, managing data sets and providing interactive and batch capabilities in a domain-specific meta-facility.

3. The Lattice Portal

Web portals for science mimic their commercial counterparts by providing a convenient starting point for accessing a wide range of services. Jefferson Lab and its collaborators at MIT are in the process of developing a web portal for the Lattice Hadron Physics Collaboration. This portal will eventually provide access to Linux clusters, disk caches, and tertiary storage located at Jefferson Lab, MIT, and other universities.
The Lattice Portal is being used as a prototype for a similar system to serve the needs of the larger Jefferson Lab experimental physics community, where FSU is taking a leading role in prototyping activities.

The two main focuses of this portal effort are (1) a distributed batch system, and (2) a data grid. The MIT and JLab lattice clusters run the open source Portable Batch System (PBS) [6]. A web interface to this system [7][8] has been developed which replaces much of the functionality of the tcl/tk based GUI included with OpenPBS. Users can view the state of batch queues and nodes without authentication, and can submit and manipulate jobs using X.509 certificate based authentication.

The batch interface is implemented as Java servlets using the Apache web server and the associated Tomcat servlet engine [9]. One servlet periodically obtains the state of the PBS batch system, and makes that available to clients as an XML data structure. For web browsing, a second servlet applies a style sheet to this XML document to produce a nicely formatted web page, one frame within a multi-frame page of the Lattice Portal. Applications may also attach directly to the XML servlet to obtain the full system description (or any subset) or to submit a new job (supporting, in the future, wide area batch queues or meta-scheduling).

Because XML is used to hold the system description, much of this portal software can be ported to an additional batch system simply by replacing the interface to PBS. Jefferson Lab's JOBS [10] software provides an extensible interface to the LSF batch system. In the future, the portal software will be integrated with an upgraded version of JOBS, allowing support for either of these back end systems (PBS or LSF).

The portal's data management interface is similarly implemented as XML servlets plus servlets that apply style sheets to the XML structures for web browsing (Figure 2). The replica catalog service tracks the current locations of all globally accessible data sets. The back end for this service is an SQL database, accessed via JDBC. The replica catalog is organized like a conventional file system, with recursive directories, data sets, and links. From this service one can obtain directory listings, and the URLs of hosts holding a particular data set. Recursive searching from a starting directory for a particular file is supported now, and more sophisticated searches are envisaged.

A second service in the data grid (the grid node) acts as a front end to one or more disk caches and optionally to tertiary storage. One can request files to be staged into or out of tertiary storage, and can add new files to the cache. Pinning and un-pinning of files is also supported. For high bandwidth data set transfers, the grid node translates a global data set name into the URL of a file server capable of providing (or receiving) the specified file. Access will also be provided to a queued file transfer system that automatically updates the replica catalog.

While the web services can be directly invoked, a client library is being developed to wrap the data grid services into a convenient form (including client-side caching of some results, a significant performance boost). Both applet and stand-alone applications are being developed above this library to provide easy-to-use interfaces for data management, while also testing the API and underlying system.
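To make the "attach directly to the XML servlet" idea concrete, here is a minimal client sketch. The servlet URL and the element names (queue, name, state) are assumptions for illustration; the paper does not publish the XML schema.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical URL of the servlet that exports the PBS system state as XML.
STATUS_URL = "http://lattice.example.org/servlet/pbs-status"

with urllib.request.urlopen(STATUS_URL, timeout=30) as f:
    root = ET.parse(f).getroot()

# Walk the (assumed) structure and print each queue's state. A browser
# would instead receive this document rendered through a style sheet.
for queue in root.iter("queue"):
    print(queue.findtext("name"), queue.findtext("state"))
```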
The back end services (JASMine [11] disk and silo management) used by the data web services are likewise written in Java. Using Java servlets and web services allowed reuse of this existing infrastructure and the corresponding Java skills. The following is a brief description of this Java infrastructure, which is being extended from the laboratory into the wide area web by means of the web services described above.

4. Java Infrastructure

4.1. JASMine

JASMine is a distributed and modular mass storage system developed at Jefferson Lab to manage the data generated by the experimental physics program. Originally intended to manage the process of staging data to and from tape, it is now also being applied to user-accessible disk pools, populated by users' requests and managed with automatic deletion policies.

JASMine was designed using object-oriented software engineering and was written in Java. This language choice facilitated the creation of rapid prototypes, the creation of a component based architecture, and the ability to quickly port the software to new platforms. Java's performance was never a bottleneck, since disk subsystems, network connectivity, and tape drive bandwidth have always been the limiting factors with respect to performance. The added benefits of garbage collection, multithreading, and the JDBC layer for database connectivity have made Java an excellent choice.

The central point of management in JASMine is a group of relational databases that store file-related meta-data and system configurations. MySQL is currently being used because of its speed and reliability; however, other SQL databases with JDBC drivers could be used.

JASMine uses a hierarchy of objects to represent and organize the data stored on tape. A single logical instance of JASMine is called a store. Within a store there may be many storage groups. A storage group is a collection of other storage groups or volume sets. A volume set is a collection of one or more tape volumes. A volume represents a physical tape and contains a collection of bitfiles. A bitfile represents an actual file on tape as well as its meta-data. When a file is written to tape, the tape chosen comes from the volume set of the destination directory or the volume set of a parent directory. This allows for the grouping of similar data files onto a common set of tapes. It also provides an easy way to identify tape volumes that can be removed from the tape silo when the data files they contain are no longer required.

JASMine is composed of many components that are replicated to avoid single points of failure. A Request Manager handles all client requests, including status queries as well as requests for files. A Library Manager manages the tape library. A Data Mover manages the movement of data to and from tape. Each Data Mover has a Dispatcher that searches the job queue for work, selecting a job based on resource requirements and availability. A Volume Manager tracks tape usage and availability, and assures that a Data Mover will not sit idle waiting for a tape in use by another Data Mover. A Drive Manager keeps track of tape drive usage and availability, and is responsible for verifying and unloading tapes.
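The tape-side object hierarchy described above (store, storage group, volume set, volume, bitfile) maps naturally onto a few container types. The sketch below is illustrative only; JASMine itself is Java, and its class names are not given in the paper.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Bitfile:            # an actual file on tape, plus its meta-data
    name: str
    size: int

@dataclass
class Volume:             # a physical tape holding a collection of bitfiles
    label: str
    bitfiles: List[Bitfile] = field(default_factory=list)

@dataclass
class VolumeSet:          # one or more tape volumes
    name: str
    volumes: List[Volume] = field(default_factory=list)

@dataclass
class StorageGroup:       # a collection of storage groups or volume sets
    name: str
    children: List[Union["StorageGroup", VolumeSet]] = field(default_factory=list)

@dataclass
class Store:              # a single logical instance of JASMine
    name: str
    groups: List[StorageGroup] = field(default_factory=list)

# Grouping similar data onto a common volume set makes it easy to see
# which tapes can leave the silo once their files are no longer needed.
raw = VolumeSet("halla-raw", [Volume("VOL001", [Bitfile("run1234.evt", 2_000_000_000)])])
store = Store("jlab", [StorageGroup("halla", [raw])])
print(store.groups[0].children[0].volumes[0].label)
```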
The Cache Manager keeps track of the files on the stage disks that are not yet flushed to tape, and automatically removes unused files when additional disk space is needed to satisfy requests for files. This same Cache Manager component is also used to manage the user-accessible cache disks for the Lattice Portal. For a site with multiple disk caches, the Cache Managers work collaboratively to satisfy requests for cached files, working essentially like a local version of the replica catalog, tracking where each file is stored on disk (Figure 3). The Cache Manager can organize disks into disk groups or pools. These disk groups allow experiments to be given a set amount of disk space for user disk cache – a simple quota system. Different disk groups can be assigned different management (deletion) policies. The management policy used most often is the least recently used policy. However, the policies are not hard coded, and additional management policies can be added by implementing the policy interface.

4.2. JOBS

The Jefferson Lab Offline Batch System (JOBS, or just "the JobServer") is a generic user interface to one or more batch queuing systems. The JobServer provides a job submission API and a set of user commands for starting and monitoring jobs independent of the underlying system. The JobServer currently interfaces with the Load Sharing Facility (LSF). Support for other batch queuing systems can be accomplished by creating a class that interfaces with the batch queuing system and implements the batch system interface of the JobServer.

The JobServer has a defined set of keywords that users use to create a job command file. This command file is submitted to the JobServer, where it is parsed into one or more batch jobs. These batch jobs are then converted to the format required by the underlying batch system and submitted. The JobServer also provides a set of utilities to gather information on submitted jobs. These utilities simply interface to the tools or APIs of the batch system and return the results.

Batch jobs that require input data files are started in such a way as to assure that the data is pre-staged to a set of dedicated cache disks before the job itself acquires a run slot and is started. With LSF, this is done by creating multiple jobs with dependencies. If an underlying batch system does not support job dependencies, the JobServer can pre-stage the data before submitting the job.

5. Current Status and Future Developments

The development of the data analysis web services will proceed on two fronts: (1) extending the capabilities that are accessible via the web services, and (2) evolving the web services to use additional web technology.

On the first front, the batch web services interface will be extended to include support for LSF through the JOBS interface described above, allowing the use of the automatic staging of data sets which JOBS provides (the current web services support only PBS). For the data grid, policy-based file migration will be added above a queued (third-party) file transfer capability, using remote web services (web server to web server) to negotiate transfer protocols and target file daemon URLs.

On the second front, prototypes of these web services will be migrated to SOAP (the current system uses bare XML). Investigations of WSDL and UDDI will focus on building more dynamic ensembles of web-based systems, moving towards the multi-site data analysis systems planned for the laboratory.
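The pre-staging scheme described in Section 4.2, which these plans extend to the web services layer, can be sketched as follows. The staging job and the compute job are linked with an LSF dependency via `bsub -w`; the command strings and helper name are invented for illustration.

```python
import subprocess

def submit_with_prestage(stage_cmd: str, job_cmd: str, supports_deps: bool) -> None:
    """Assure input data is on cache disk before the job takes a run slot."""
    if supports_deps:
        # LSF path: two jobs linked by a dependency condition.
        subprocess.run(["bsub", "-J", "stage1234", stage_cmd], check=True)
        subprocess.run(["bsub", "-w", "done(stage1234)", job_cmd], check=True)
    else:
        # Fallback: the JobServer pre-stages the data itself, then submits.
        subprocess.run(stage_cmd, shell=True, check=True)
        subprocess.run(["bsub", job_cmd], check=True)

# Hypothetical usage: stage a data set out of the silo, then run the analysis.
submit_with_prestage("stage-from-silo /halla/run1234/raw.evt",
                     "analyze /cache/run1234/raw.evt",
                     supports_deps=True)
```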
6. Relationship to Other Projects

The web services approach being pursued by Jefferson Lab has some overlap with the grid projects based on the Globus and Legion toolkits, with Condor, and with the Unicore product [12]. Each seeks to present a set of distributed resources to client applications (and users) as a single integrated resource. The most significant difference between Jefferson Lab's work and these other products is the use of web technologies to make the system open, robust and extensible. Like Unicore, the new software is developed almost entirely in Java, facilitating easy integration with the Lab's existing infrastructure. However, the use of XML and HTTP as the primary application protocol makes the web services approach inherently multi-language and open, whereas Unicore uses a Java-only protocol. At this early stage, the new system does not cover as wide a range of capabilities (such as the graphical complex job creation tool in Unicore or the resource mapping flexibility in Condor-G), but it is rapidly covering the functionality needed by the laboratory. In particular, it contains capabilities considered essential by the laboratory and not yet present in some of the alternatives (for example, the Globus Replica Catalog does not yet have recursive directories, which are now planned for a future release). In cases where needed functionality can be better provided by one of these existing packages, the services of those systems can easily be wrapped into an appropriate web service. This possibility also points towards the use of web service interfaces as a way of tying together different grid systems. In that spirit, Jefferson Lab is collaborating with the Storage Resource Broker [13] team at SDSC to define common web service interfaces to data grids. SRB is likewise developing XML and web based interfaces to their very mature data management product.

ACKNOWLEDGEMENTS

Portions of the Lattice Portal software are being developed as part of Jefferson Lab's work within the Particle Physics Data Grid Collaboratory [14], a part of the DOE's Scientific Discovery Through Advanced Computing initiative.

REFERENCES

[1] For additional information on web services technologies, see: Extensible Markup Language (XML) 1.0 (Second Edition), W3C Recommendation, 6 October 2000 (/TR/2000/REC-xml-20001006); Simple Object Access Protocol (SOAP) 1.1, W3C Note, 8 May 2000 (/TR/SOAP/); Web Services Description Language (WSDL) 1.1, W3C Note, 15 March 2001 (/TR/wsdl). (July 9, 2001)
[2] See http://doc.in2p3.fr/bbftp/ (July 9, 2001). bbftp was developed by Gilles Farrache (farrache@cc.in2p3.fr) from the IN2P3 Computing Center, Villeurbanne (France), to support the BaBar high energy physics experiment.
[3] Michael Litzkow, Miron Livny, and Matt Mutka. Condor – A Hunter of Idle Workstations. Proceedings of the 8th International Conference on Distributed Computing Systems, June 1988. See also /condor/
[4] Michael J. Lewis, Andrew Grimshaw. The Core Legion Object Model. Proceedings of the Fifth IEEE International Symposium on High Performance Distributed Computing, August 1996.
[5] I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. International Journal of Supercomputing Applications, 11(2):115-128, 1997.
[6] See / (July 9, 2001). The Portable Batch System (PBS) is a flexible batch queueing and workload management system originally developed by Veridian Systems for NASA.
[7] See / (July 9, 2001).
[8] P. Dreher (MIT), W. Akers, J. Chen, Y. Chen, C. Watson. Development of Web-based Tools for Use in Hardware Clusters Doing Lattice Physics. Proceedings of the Lattice 2001 Conference, Nuclear Physics B (to be published, 2002).
[9] See /tomcat/ (July 9, 2001).
[10] I. Bird, R. Chambers, M. Davis, A. Kowalski, S. Philpott, D. Rackley, R. Whitney. Database Driven Scheduling for Batch Systems. Computing in High Energy Physics Conference, 1997.
[11] Building the Mass Storage System at Jefferson Lab. Proceedings of the 18th IEEE Symposium on Mass Storage Systems (2001).
[12] See http://www.unicore.de/ (July 9, 2001).
[13] See /DICE/SRB/
[14] See / (July 9, 2001).


Huawei SecoManager Security Controller

In the face of differentiated tenant services and frequent service changes, how to implement automatic analysis, visualization, and management of security services, security policy optimization, and compliance analysis are issues that require immediate attention. Conventional O&M relies on manual management and configuration of security services and is therefore inefficient. Security policy compliance checking requires dedicated personnel for analysis, so approval is usually not timely enough, and risky policies may be omitted. The impact of security policy delivery on services is unpredictable; that is, the impact of policies on user services cannot be evaluated before policy deployment. In addition, as the number of security policies continuously increases, it becomes difficult for security O&M personnel to focus on the key risky policies. The industry urgently needs intelligent and automated security policy management across the entire lifecycle of security policies, to help users quickly and efficiently complete policy changes and ensure policy delivery security and accuracy, thereby effectively improving O&M efficiency and reducing O&M costs.

The SecoManager Security Controller is a unified security controller provided by Huawei for different scenarios such as DCs, campus networks, and branches. It provides security service orchestration and unified policy management, supports service-based and visualized security functions, and forms a proactive network-wide security protection system together with network devices, security devices, and a Big Data intelligent analysis system, for comprehensive threat detection, analysis, and response.

Product Highlights

Multi-dimensional and automatic policy orchestration, security service deployment within minutes

• Application mutual access mapping and application-based policy management: Policy management transitions from the IP address-based perspective to the application mutual access relationship-based perspective. Mutual-access relationships of applications on the network are abstracted with applications at the core, visualizing your application services so that you gain full visibility into them and effectively reducing the number of security policies. The model-based application policy model aims to reduce your configuration workload and simplify network-wide policy management.

• Policy management based on service partitions: Policy management transitions from the security zone-based perspective to the service partition-based perspective. Conventional network zones are divided into security zones, such as the Trust, Untrust, DMZ, and Local zones. In a scenario with a large number of security devices and a large network scale, factors of security zone, device, policy, service rollout, and service change are intertwined, making it difficult to visualize services and to effectively guide the design of security policies.
However, if security policies are managed, controlled, and maintained from the perspective of service partitions, users need to pay attention only to service partitions and security services, and not to the mapping among security zones, devices, and services, which effectively reduces the complexity of security policy design.

[Figure: service partitions (guest, R&D, data storage, external service, and internal service partitions) mapped across firewalls FW1-FW3, each with Trust, Untrust, and DMZ zones, facing the Internet]

• Management scope of devices and policies defined by protected network segments to facilitate policy orchestration: A protected network segment is a basic model of security service orchestration and can be considered as the range of user network segments protected by a firewall. It can be configured manually or through network topology learning. The SecoManager Security Controller detects the mapping between a user service IP address and a firewall. During automatic policy orchestration, the SecoManager Security Controller automatically finds the firewall that carries a policy based on the source and destination addresses of the policy.

• Automatic security service deployment: Diversified security services bring security assurance for data center operations. Technologies such as protected network segments, automatic policy orchestration, and automatic traffic diversion based on service function chains (SFCs) enable differentiated tenant security policies. Policies can be automatically tiered, split, and combined so that you can gain visibility into policies.

Intelligent policy O&M to reduce O&M costs by 80%

• Policy compliance check: Security policy compliance must be confirmed by the security approval owner. The average number of policies to be approved per day ranges from several to hundreds; because tools do not support all rules, the policies otherwise need to be analyzed manually one by one, resulting in a heavy approval workload and requiring a dedicated owner to spend hours doing so. The SecoManager Security Controller supports defining whitelists, risk rules, and hybrid rules for compliance check. After a policy is submitted, the SecoManager Security Controller checks it against the defined rules and reports the check result and security level to the security approval owner in a timely manner. In this way, low-risk policies can be automatically approved, and the security approval owner needs to pay attention only to non-compliant policy items, improving approval efficiency and avoiding late approvals and omitted risky policies.

• Policy simulation: Based on the learned service mutual access relationships, the policies to be deployed are compared and their deployment is simulated to assess its impact, effectively reducing the risks that policy deployment brings to services.

• Redundant policy deletion: After a policy is deployed, redundancy analysis and hit analysis are performed for policies on the entire network, and the policy tuning algorithm is used to delete redundant policies, helping you focus on the policies closely relevant to services.
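As a sketch of the protected-network-segment lookup described above, the fragment below matches a policy's source and destination addresses against per-firewall segment lists to decide which firewalls should carry the policy. The segment data and the function name are invented for illustration, not taken from the product.

```python
from ipaddress import ip_address, ip_network

# Hypothetical protected network segments, learned from the topology:
# each firewall protects a set of user network segments.
PROTECTED_SEGMENTS = {
    "FW1": [ip_network("10.1.0.0/16")],   # guest partition
    "FW2": [ip_network("10.2.0.0/16")],   # R&D partition
    "FW3": [ip_network("10.3.0.0/16")],   # data storage partition
}

def firewalls_for_policy(src: str, dst: str) -> list:
    """Return the firewalls whose protected segments contain src or dst."""
    hits = []
    for fw, segments in PROTECTED_SEGMENTS.items():
        if any(ip_address(src) in seg or ip_address(dst) in seg for seg in segments):
            hits.append(fw)
    return hits

# A policy from the R&D partition to data storage lands on FW2 and FW3.
print(firewalls_for_policy("10.2.5.17", "10.3.0.8"))  # ['FW2', 'FW3']
```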
Network collaboration and security association for closed-loop threat handling within minutes

• Collaboration with the network for threat handling: In a conventional data center, application deployment often takes a long time. The application service team relies on the network team to deploy the network, and the network team needs to understand the requirements of the application service team to deploy a network suitable for it. The SecoManager Security Controller learns mappings between service policies and security policies based on the network topology, and collaborates with the data center SDN management and control system (iMaster NCE-Fabric) or campus SDN management and control system to divert tenant traffic to the corresponding security devices based on SFCs on demand. The SecoManager Security Controller automatically synchronizes information about tenants, VPCs, the network topology (including logical routers, logical switches, logical firewalls, and subnets), EPGs, and SFCs from the SDN management and control system, and combines the learned application service mutual access relationships to automatically orchestrate and deliver security policies, implementing security-network synergy.

• Collaboration with security: Advanced persistent threats (APTs) threaten national infrastructure in the finance, energy, government, and other sectors. Attackers exploit 0-day vulnerabilities, use advanced evasion techniques, combine multiple attack means such as worms and ransomware, and may remain latent for a long period of time before they actually initiate attacks. The Big Data security product HiSec Insight can effectively identify unknown threats based on network behavior analysis and correlation analysis technologies. The threat handling method, namely isolation or blocking, is determined based on the threat severity. For north-south threats, the SecoManager Security Controller delivers quintuple blocking policies to security devices. For east-west threats, isolation requests are delivered to the network SDN management and control system to control switches or routers to isolate the threatened hosts.

Product Deployment

• Independent deployment: The SecoManager Security Controller is deployed on a server or VM as independent software.

• Integrated deployment: The SecoManager Security Controller and the SDN management and control system are deployed on the same physical server and same VM.

• Collaboration with the SDN management and control system to detect network topology changes and implement tenant-based automatic security service deployment.

• North-south threat blocking, east-west threat isolation, and refined SDN network security control through SFC-based traffic diversion.

• Interworking with the cloud platform to automatically convert service policies to security policies.

Product Specifications and Ordering Information

Note: The product ordering list is for reference only. For product subscription, please consult Huawei representatives.
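The "quintuple blocking policy" used for north-south threats is simply a 5-tuple match plus a block action. The sketch below shows what such a policy object might look like before it is pushed to a security device; the field names and the delivery function are invented for illustration, not the product's API.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class QuintupleBlockPolicy:
    """A north-south blocking rule keyed on the classic 5-tuple."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    action: str = "block"

def deliver_to_device(policy: QuintupleBlockPolicy) -> str:
    # Placeholder for the controller-to-firewall push; here we just
    # serialize the rule as JSON.
    return json.dumps(asdict(policy))

# Block traffic from a host flagged by the analytics platform.
rule = QuintupleBlockPolicy("203.0.113.5", "10.2.5.17", 0, 443, "TCP")
print(deliver_to_device(rule))
```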


Lean Production Terminology (精益生产专业词汇解释)

A3 Report || A3报告
A-B Control || A-B控制
Act || 行动
Andon || 信号灯
Apparent Efficiency || 表面效率
Automatic Line Stop || 自动停止生产线
Batch and Queue || 批量生产；批量与队列
Breakthrough Kaizen || 突破性改善
Buffer Stock || 缓冲库存
Capital Linearity || 投资线性化（线性化的设备投资）
Cell || 生产单元
Chaku-Chaku || 一步接一步
Change Agent || 实施改变的领导者
Changeover || 换模
Check || 检查
Chief Engineer || 总工程师
Continuous Flow || 连续流
Cross-dock || 交叉货仓
Cycle Time || 周期时间
Demand Amplification || 需求扩大
Design-In || 共同设计
Do || 实施
Downtime || 停工期
Effective Machine Cycle Time || 有效机器周期时间
Efficiency || 效率
Error-Proofing || 预防差错
Every Product Every Interval (EPEx) || 生产批次频率
Fill-Up System || 填补系统
Finished Goods || 成品
First In, First Out (FIFO) || 先进先出
Five Whys || 五个“为什么”
Fixed-Position Stop System || 固定位置停止系统（固定工位来停止生产）
Flow Production || 流水线生产
Four Ms || 四M
Gemba || 现场
Greenfield || 新建工厂
Heijunka || 均衡化
Heijunka Box || 生产均衡柜
Inspection || 检查
Inventory || 库存
Inventory Turns || 库存周转率
Jidoka || 自动化
Jishuken || 自主研修
Job Breakdown || 任务细分书
Job Element || 工作要点书
Just-In-Time (JIT) || 及时生产
Kaikaku || 突破性改善
Kaizen || 改善
Kaizen Workshop || 改善研习会
Kanban || 看板
Labor Linearity || 人力线性化
Lean Enterprise || 精益企业
Lean Logistics || 精益物流
Lean Production || 精益生产
Local Efficiency || 局部效率
Machine Cycle Time || 机器周期时间
Mass Production || 大规模制造
Material Flow || 物料流（材料流）
Mixed Supermarket and Sequential Pull System || 库存超市与顺序拉动混合系统
Monuments || 大型装备
Muda || 浪费
Multi-Process Handling || 多工序操作
Non Value-Creating || 非增值
Non Value-Creating Time || 非增值时间
One-Piece Flow || 单件流


LOW-FREQUENCY ACTIVE TOWED SONAR (LFATS)

LFATS is a full-feature, long-range, low-frequency variable depth sonar. Developed for active sonar operation against modern diesel-electric submarines, LFATS has demonstrated consistent detection performance in shallow and deep water. LFATS also provides a passive mode and includes a full set of passive tools and features.

COMPACT SIZE
LFATS is a small, lightweight, air-transportable, ruggedized system designed specifically for easy installation on small vessels.

CONFIGURABLE
LFATS can operate in a stand-alone configuration or be easily integrated into the ship's combat system.

TACTICAL BISTATIC AND MULTISTATIC CAPABILITY
A robust infrastructure permits interoperability with the HELRAS helicopter dipping sonar and all key sonobuoys.

HIGHLY MANEUVERABLE
Own-ship noise reduction processing algorithms, coupled with compact twin line receivers, enable short-scope towing for efficient maneuvering, fast deployment and unencumbered operation in shallow water.

COMPACT WINCH AND HANDLING SYSTEM
An ultrastable structure assures safe, reliable operation in heavy seas and permits manual or console-controlled deployment, retrieval and depth-keeping.

FULL 360° COVERAGE
A dual parallel array configuration and advanced signal processing achieve instantaneous, unambiguous left/right target discrimination.

SPACE-SAVING TRANSMITTER TOW-BODY CONFIGURATION
Innovative technology achieves omnidirectional, large aperture acoustic performance in a compact, sleek tow-body assembly. A unique extension/retraction mechanism transforms the compact tow-body configuration into a large-aperture multidirectional transmitter.

REVERBERATION SUPPRESSION
The unique transmitter design enables forward, aft, port and starboard directional transmission. This capability diverts energy concentration away from shorelines and landmasses, minimizing reverb and optimizing target detection.

SONAR PERFORMANCE PREDICTION
A key ingredient of mission planning, LFATS computes and displays system detection capability based on modeled or measured environmental data.

Key Features
> Wide-area search
> Target detection, localization and classification
> Tracking and attack
> Embedded training

Sonar Processing
> Active processing: State-of-the-art signal processing offers a comprehensive range of single- and multi-pulse, FM and CW processing for detection and tracking.
> Passive processing: LFATS features full 100-to-2,000 Hz continuous wideband coverage. Broadband, DEMON and narrowband analyzers, torpedo alert and extended tracking functions constitute a suite of passive tools to track and analyze targets.
> Playback mode: Playback is seamlessly integrated into passive and active operation, enabling post-analysis of pre-recorded mission data, and is a key component of operator training.
> Built-in test: Power-up, continuous background and operator-initiated test modes combine to boost system availability and accelerate operational readiness.
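Matched filtering, named in the active-processing specification below, is the standard way to pull an FM echo out of noise. The sketch that follows is a generic illustration, not vendor code, and all the numbers are made up: it correlates a received trace against an LFM chirp replica and reports the delay of the correlation peak.

```python
import numpy as np

fs = 8000.0                       # sample rate, Hz (illustrative)
T = 0.5                           # pulse length, s
t = np.arange(0, T, 1 / fs)

# LFM chirp replica sweeping 100 Hz upward from 1,350 Hz (made-up numbers).
f0, bw = 1350.0, 100.0
replica = np.sin(2 * np.pi * (f0 * t + 0.5 * (bw / T) * t**2))

# Simulated reception: the echo arrives 0.8 s after transmit, buried in noise.
rx = np.zeros(int(2 * fs))
delay = int(0.8 * fs)
rx[delay:delay + replica.size] += 0.1 * replica
rx += 0.5 * np.random.randn(rx.size)

# Matched filter = correlate against the replica; the peak marks the echo.
mf = np.abs(np.correlate(rx, replica, mode="valid"))
print("estimated echo delay: %.3f s" % (np.argmax(mf) / fs))
```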
DISPLAYS AND OPERATOR INTERFACES
> State-of-the-art workstation-based operator machine interface: Trackball, point-and-click control, pull-down menu function and parameter selection allow easy access to key information.
> Displays: A strategic balance of multifunction displays, built on a modern OpenGL framework, offers flexible search, classification and geographic formats. Ground-stabilized, high-resolution color monitors capture details in the real-time processed sonar data.
> Built-in operator aids: To simplify operation, LFATS provides recommended mode/parameter settings, automated range-of-day estimation and data history recall.
> COTS hardware: LFATS incorporates a modular, expandable open architecture to accommodate future technology.

[Figure: major subsystems — winch and handling system, ship electronics, towed subsystem, sonar operator console, transmit power amplifier]

SPECIFICATIONS
Operating Modes: Active, passive, test, playback, multi-static
Source Level: 219 dB omnidirectional, 222 dB sector-steered
Projector Elements: 16 in 4 staves
Transmission: Omnidirectional or by sector
Operating Depth: 15-to-300 m
Survival Speed: 30 knots
Size:
  Winch & Handling Subsystem: 180 in. x 138 in. x 84 in. (4.5 m x 3.5 m x 2.2 m)
  Sonar Operator Console: 60 in. x 26 in. x 68 in. (1.52 m x 0.66 m x 1.73 m)
  Transmit Power Amplifier: 42 in. x 28 in. x 68 in. (1.07 m x 0.71 m x 1.73 m)
Weight:
  Winch & Handling: 3,954 kg (8,717 lb.)
  Towed Subsystem: 678 kg (1,495 lb.)
  Ship Electronics: 928 kg (2,045 lb.)
Platforms: Frigates, corvettes, small patrol boats
Receive Array:
  Configuration: Twin-line
  Number of channels: 48 per line
  Length: 26.5 m (86.9 ft.)
  Array directivity: >18 dB @ 1,380 Hz

LFATS PROCESSING

Active
Active Band: 1,200-to-1,00 Hz
Processing: CW, FM, wavetrain, multi-pulse matched filtering
Pulse Lengths: Range-dependent, 0.039 to 10 sec. max.
FM Bandwidth: 50, 100 and 300 Hz
Tracking: 20 auto and operator-initiated
Displays: PPI, bearing range, Doppler range, FM A-scan, geographic overlay
Range Scale: 5, 10, 20, 40, and 80 kyd

Passive
Passive Band: Continuous 100-to-2,000 Hz
Processing: Broadband, narrowband, ALI, DEMON and tracking
Displays: BTR, BFI, NALI, DEMON and LOFAR
Tracking: 20 auto and operator-initiated

Common
Own-ship noise reduction, Doppler nullification, directional audio


Prometheus Deployment Metrics

In the world of monitoring and observing the performance of IT infrastructure, Prometheus stands out as a popular open-source monitoring and alerting toolkit. It is widely used for collecting and storing metrics from various sources, enabling real-time monitoring and analysis. Prometheus's deployment metrics are crucial for understanding the health, stability, and efficiency of deployed applications or services.

1. Deployment Metrics Overview

Deployment metrics provide insights into the deployment process, including the success or failure of deployments, the duration of deployments, resource utilization during deployments, and more. These metrics are crucial for understanding the impact of deployments on overall system performance.

2. Key Metrics for Prometheus Deployment

Deployment Duration: Measures the time taken for a deployment to complete. This metric helps identify bottlenecks and optimize the deployment process.

Deployment Success Rate: Indicates the percentage of successful deployments compared to the total number of deployments. A low success rate might indicate issues with the deployment pipeline or the target environment.

Resource Utilization: Monitors the CPU, memory, and disk usage during deployments. High resource utilization might indicate performance issues or insufficient resources.

Error Rates: Tracks the number of errors encountered during deployments. This metric helps identify common deployment issues and improve deployment reliability.

Rollback Rate: Measures the frequency of rollbacks due to deployment failures. A high rollback rate might suggest a need for more robust deployment strategies or better testing procedures.

3. Benefits of Monitoring Deployment Metrics

Improved Visibility: Monitoring deployment metrics provides a clear picture of the deployment process, enabling quick identification of issues and bottlenecks.

Enhanced Reliability: By tracking error rates and rollback rates, teams can improve the reliability of their deployment processes.

Optimized Performance: Understanding resource utilization during deployments helps optimize the deployment process, ensuring efficient resource allocation.

4. Conclusion

Prometheus deployment metrics are essential for monitoring and improving the deployment process. By tracking key metrics like deployment duration, success rate, resource utilization, error rates, and rollback rate, teams can ensure reliable, efficient, and optimized deployments. This, in turn, leads to improved system performance and a better user experience.
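As a concrete sketch of how the metrics in Section 2 might be exposed, the fragment below uses the Python prometheus_client library. The metric names and the deploy() stub are invented for illustration; instrument your own pipeline equivalently. The success rate is then computed in PromQL from the outcome-labeled counter.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Key deployment metrics from Section 2, expressed as Prometheus types.
DEPLOY_DURATION = Histogram("deployment_duration_seconds",
                            "Time taken for a deployment to complete")
DEPLOY_TOTAL = Counter("deployments_total",
                       "Deployments attempted, by outcome", ["outcome"])
ROLLBACKS = Counter("deployment_rollbacks_total",
                    "Rollbacks triggered by failed deployments")

def deploy() -> bool:
    """Stand-in for a real deployment step; fails 10% of the time."""
    time.sleep(random.uniform(0.1, 0.5))
    return random.random() > 0.1

def run_one_deployment() -> None:
    with DEPLOY_DURATION.time():          # observe deployment duration
        ok = deploy()
    DEPLOY_TOTAL.labels(outcome="success" if ok else "failure").inc()
    if not ok:
        ROLLBACKS.inc()                   # count the rollback

if __name__ == "__main__":
    start_http_server(8000)               # scrape target at :8000/metrics
    while True:
        run_one_deployment()
```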


Silk Central

OpenText Silk Central is an open solution that addresses test management by aligning goals and requirements with testing, technology, and processes. It provides an integrated framework for improving productivity, traceability, and visibility for all types of software testing, and ensures control over application readiness.

Product Highlights

Effective collaboration across teams, processes, and tools drives successful projects. Yet software delivery teams see testing as time-consuming, poorly focused, and a hindrance to project goals. Silk Central provides goal- and requirement-driven test management. It coordinates technologies, resources, team capabilities, and meaningful deliverables to promote quality, collaboration, efficiency, and confidence. Silk Central scales from supporting a single project to being the axis that unites software development teams in different locations, right across an organization.

Silk Central delivers control over application quality by providing consistent, repeatable processes from traditional methodologies—such as waterfall—right through to more iterative approaches, including agile. As a unified framework for managing manual and automated tests, it integrates with unit, functional, and performance test suites; issue and requirement management systems; and continuous integration (CI) and DevOps tools.

Business Process Testing

The Business Process Testing (BPT) capabilities in Silk Central support an organization's need to validate end-to-end business processes implemented in the application. Besides demonstrating end-to-end process quality, BPT also enables control among testers, such as the hand-off between teams and the behavior on test failure.

Figure 1. Business Process Testing

Test Asset Workflow

Drive improved management and control across your tests while maintaining auditability.

Figure 2. Workflow states

Transition tests through a review process from Draft, through Under Review, to Approved state, with full audit control at each stage, and use dedicated role permissions to control the process. Redundant tests can be marked as Obsolete at any stage of the workflow, ready for final review before deletion.

Collaboration Using Microsoft Teams

The integration of Silk Central with Microsoft Teams provides a means of team collaboration for manual testers working together on a project. A set of predefined actions trigger messages to be sent to Microsoft Teams. These messages deliver crucial information and links to jump to the relevant items in Silk Central or to perform the next steps.

Client and Project Structure

Silk Central supports diverse needs and scales of deployment by using clients (distinct units within a Silk Central instance) to support a multi-tenancy structure. This enables organizations to separate projects by clients while maintaining security and control, with the additional ability to manage license allocation, either within a single database or across multiple database instances.

Key Features

Requirement Management

MANAGE REQUIREMENTS FROM MULTIPLE SOURCES
Silk Central enables you to unite requirements, regardless of source, into a single project. Achieve a centralized end-to-end view in one location to enhance collaboration and control across your project.

TEST AGILE REQUIREMENTS
Silk Central offers strong integrations with leading agile planning and tracking tools such as Atlassian Jira Agile.
Silk Central manual execution planning supports agile development through progress reporting, burndown charts, planning, and personal dashboards. Optimized to test both agile and traditional requirements, Silk Central accelerates test execution and increases collaboration on quality assurance activities.

MITIGATE RISK USING QUALITY GOALS
Silk Central helps testers prioritize, execute, and demonstrate how their efforts respond to risk-based and requirement-driven testing. Silk Central reports illustrate how testing has responded to these defined quality goals, ultimately enabling the critical go/no-go decision.

Test Design and Asset Management

KEYWORD-DRIVEN TESTING
Keyword-driven testing closely couples Silk Central management with Selenium or Silk Test automation capabilities:
■ Involve different professional groups in the test automation process by separating test design and implementation.
■ Achieve maintenance benefits by reusing test code or keywords, and enjoy the easy abstraction of low-level automation and high-level business scenarios through multi-level keyword sequences.
■ Enable a seamless transition from manual to automated testing by basing keyword tests on current manual test steps.
■ Get recommended keywords, based on automatic analysis of existing keyword-driven tests, when creating a new test.

SHARE TEST ASSETS ACROSS PROJECTS
Drive improved reuse and lower maintenance overheads by sharing tests across projects. Defining a central test repository and enabling projects to reference tests in it improves process and practice within your organization.

REUSABILITY AND MAINTAINABILITY OF ASSETS
Silk Central efficiently maintains and reuses test assets. Mandatory Attributes and Mandatory Properties help standardize data usage, ensuring they are always used in asset creation. Use Global List attributes to centrally maintain a set of items and use them in multiple projects; this enables data standardization and increases reusability and maintainability across requirements, tests, and execution plans. Utilize Linked Fields to manage related fields, reducing the likelihood of wrong input, and User List attributes to automatically populate tests or execution plans with project users.

Test Planning and Execution

ADVANCED MANUAL EXECUTION PLANNING
Silk Central increases efficiency and productivity by making it easy to identify, plan, and schedule manual tests. It allows users to decide:
■ What to test, through quality goals and filters
■ When to test, by planning test cycles within specified timeframes
■ Where to test, by choosing which systems or configurations to use
■ Who tests, by allocating based on resource and time capacity

PARALLEL TEST EXECUTIONS
Organizations need efficient test execution. The parallel testing capabilities within Silk Central execute automated tests across every execution server (physical or virtual) at the same time—reducing overall test time and driving project efficiency. Silk Central gives users the flexibility to decide what to execute in parallel and what not to.

KUBERNETES PLUGIN
Reduce resource consumption and cost by making Kubernetes pods available for test execution as virtual execution servers.

MOBILE TESTING
■ Execute mobile device testing with UFT Digital Lab and Silk Test Workbench integration, including automated functional tests and manual tests.
■ Use Silk Central when manually testing mobile applications.
It mirrors the mobile device on your desktop for convenient interaction with remote test devices.
■ Enjoy flexible scheduling control of manual and automated tests as if any remote iOS or Android device were connected locally. Test teams can also select mobile test devices using multiple filtering options, including device type, platform, application, or device name.

TESTING FROM EVERY ANGLE
Integration with many testing tools means Silk Central offers coverage of every angle of your application through unit, functional, and performance testing, whether manual or automated.
■ Unit test: JUnit, NUnit, UFT Developer
■ Functional test: UFT One, Silk Test
■ Performance test: Silk Performer, LoadRunner Professional
■ Regression test: UFT One, Silk Test
■ Any test: The ProcessExecutor test type can be used to execute any script from the command line.

CODE AND CONFIGURATION COVERAGE
Silk Central helps identify functionality gaps in current test definitions by providing code coverage analysis of manual and automated tests for .NET or Java apps. Configuration testing ensures that test scripts executed against multiple configurations need no duplication, providing script reuse and reporting on coverage of test environments.

EXECUTION SNAPSHOT
Exporting the 'Document View' of the Execution Planning unit to Microsoft Excel creates a snapshot of execution status and progress for management, analytics, and reporting. Filters, sorting, grouping and order settings of the columns in Document View are also applied to the exported data.

Tracking and Reporting

TESTBOOK: INSIGHT INTO MANUAL TESTS
The Testbook delivers real-time updates on all activities during manual testing. It simplifies collaboration among testers and test managers by displaying who did what, and when.

SIDE-BY-SIDE RESULTS COMPARISON FOR AUTOMATED TESTS
Use side-by-side result analysis for fast insight into the status of automated test runs. Compare runs using visual reports showing the execution status across every individual configuration.

TEST RESULT CONSOLIDATION
Improve result analysis by extending test execution tooling ecosystems:
■ Consume results for automated tests that are executed outside of Silk Central execution servers.
■ Collect all artifacts—such as requirements, tests, and results—in a single repository for holistic reporting and decision-making.

REPORTING AND ANALYTICS
Use the reports to view a full audit status of manual test results and associated defects. You can also trace issues raised, associated result files, and progress status in a format viewable online or exported to PDF. Use video and screen capture for manual and automated test activities to reproduce issues and compile evidence to support compliance and audit needs. As well as the rich metrics and analytics available through default reports, Silk Central
Users canexecute tests and capture evidence to supporttest execution with screenshots, videos, au-dios, result files, code coverage, and defects.FACILITATE ACCESSIBILITY NEEDSBy supporting keyboard navigation and fa-cilitating audible feedback through a screenreader, Silk Central is accessible for blind orvisually impaired users. It supports both NVDAand JAWS.SIMPLIFY USER MAINTENANCE WITHLDAP OR ACTIVE DIRECTORY (AD)Automatically create and authenticate users ondemand. Eliminate the need to manually cre-ate accounts for users who are interested orinvolved in test activities with Silk Central.Supported IntegrationsFigure 4. Silk Central integrations landscapeINTEGRATIONS VIA OPENTEXT CONNECTThe OpenT ext Connect portfolio of integrations provides extensive integration capabilities. Its Silk Central T est Manager Connector simpli-fies integration setup, optimizes require m ent synchronization performance, and enables better visibility into test asset relationships and traceability between tests and defects. INTEGRATIONS VIA SILK CENTRAL APISilk Central offers SOAP-based Web Services for the integration of third-party applications, as well as a REST API for managing results of external execution plan runs.OPENTEXT SOFTWARE INTEGRATIONS■Silk Performer■LoadRunner Professional■Silk T est■UFT One■UFT Developer■StarT eam■AccuRev■Silk T estPartner■Atlas■Deployment Automation■Release Control■Solutions Business Manager ■UFT Mobile■OpenT ext Connect■Jenkins, Hudson■Microsoft TFS■SeleniumTHIRD-PARTY SOFTWARE INTEGRATIONS■IBM DOORS, DOORS Next Generation■Bugzilla■Atlassian Jira and Jira Agile■JUnit and NUnit■Subversion■Git■T eam Foundation Server■Microsoft Office Word and Excel■Microsoft T eams■Amazon and Azure Cloud execution■Kubernetes■Apache Commons Virtual File System (VFS)■Microsoft Visual Studio/Visual Studio T estAgent■Plug-in API for additional integrations Please consult your OpenT ext account repre-sentative for supported versions and additional integrations.System RequirementsOperating System Support■W indows Server 2016, 2019Additional for Mobile Device T esting■A ndroid 5.x, 6.x, 7.x, 8.x, 9.x and 10.x■i OS 10.x,11.x, 12.x and 13.xAdditional for Execution Server■W indows 8.1 (32/64bit), Windows 10 (32/64bit)■U buntu, Red Hat Enterprise Linux, Debian, SUSE Linux Web Server Support■I IS 8, 10 (32/64Bit)■S tandalone web server (T omcat)Web Browser Support■I nternet Explorer 11 or later (no compatibility mode), Mozilla Firefox, Google Chrome,Microsoft EdgeDatabase Support■M icrosoft SQL Server 2016 SP2, 2017, 2019■Oracle 19cPlease refer to Installation and System Con fi guration Help in Silk Central documentation for fur-ther details.。


SPEC SHEET

FTB-860 NetBlazer Series Ethernet Testers

KEY FEATURES AND BENEFITS

Accelerate Ethernet service activation with bidirectional EtherSAM (Y.156sam) and RFC 2544 test suites, multistream traffic generation, Through mode and bit-error-rate (BER) testing

Experience unprecedented configuration simplicity with hybrid touchscreen/keypad navigation and data entry

Increase technician autonomy and productivity with intelligent discovery of remote EXFO Ethernet testers, as well as in-service testing via dual-port Through mode

Eliminate errors in data interpretation with a revolutionary new GUI on a 7-inch TFT screen, historical event logger, visual gauges and 3D-icon depictions of pass/fail outcomes

Simplify reporting with integrated Wi-Fi and Bluetooth connectivity capabilities

Integrated applications to test VoIP services, and additional IP test utilities including VLAN scan and LAN discovery, via the EXpert VoIP and EXpert IP test tools

Support for packet capture and analysis, wireless troubleshooting and TCP throughput testing

Extend field testing operations with a compact, lightweight platform equipped with a long-duration battery pack

POWERFUL, FAST, INTUITIVE ETHERNET TESTING

The FTB-860 NetBlazer series gives field technicians comprehensive, yet simple test suites to quickly and easily turn up, validate and troubleshoot Ethernet services, with full EtherSAM capabilities, from 10 Mbit/s to 10 Gbit/s.

THE ULTRA-PORTABLE CHOICE FOR HIGH-SPEED ETHERNET TESTING

The ongoing deployment of GigE and 10 GigE circuits across access and metro networks demands a testing solution that seamlessly adapts to either operating environment—without sacrificing portability, speed or cost—in order to guarantee the performance and quality of service (QoS) metrics of these services. Leveraging the powerful, intelligent FTB-1 handheld platform, the NetBlazer series streamlines processes and empowers field technicians to seamlessly transition between 10/100/1000/10000 interfaces to rapidly adapt to a variety of networking environments.

Powerful and Fast

The NetBlazer series is a portfolio of fully integrated 10 Mbit/s to 10 Gbit/s handheld Ethernet testers. Available in three hardware configurations, each FTB-860x offers the industry's largest TFT screen with unprecedented configuration simplicity via hybrid touchscreen/keypad navigation.
Platform connectivity is abundant via Wi-Fi, Bluetooth, Gigabit Ethernet or USB ports, making it accessible in any environment.

FTB-860G: 10 MBIT/S TO 10 GBIT/S
If the need is for full Ethernet coverage from 10 Mbit/s up to 10 Gbit/s, the FTB-860G is the ideal choice:
› 10 Base-T to 10 gigabit testing
› IPv6 testing

FTB-860: GIGABIT ETHERNET
If the need is purely for Gigabit Ethernet coverage, then the FTB-860 is the ideal choice:
› 10 Base-T to 1 gigabit testing
› IPv6 testing

FTB-860GL: 10 MBIT/S TO 10 GBIT/S LOOPBACK ONLY
Combined with the FTB-860G or FTB-860, the FTB-860GL is the most cost-effective solution for GigE and 10 GigE intelligent loopback testing; it supports bidirectional EtherSAM and RFC 2544 testing and offers five distinct loopback modes:
› 10 Base-T to 10 gigabit loopback
› EtherSAM (bidirectional partner)*
› RFC 2544 (bidirectional partner)
› Intelligent autodiscovery
› IPv6 testing
› Ping/traceroute
* Contact your EXFO representative to confirm availability.

Setting a New GUI Standard: Unprecedented Simplicity in Configuration Setup and Navigation

Intelligent Situational Configuration Setup
› Guides technicians through complete, accurate testing processes (suggestion prompts, help guides, etc.)
› Reduces navigation by combining associated testing functions on a single screen
› Intelligent autodiscovery allows a single technician to perform end-to-end testing

Dedicated Quick-Action Buttons
› Remote discovery to find all the other EXFO units
› Laser on/off
› Test reset to clear the results and statistics while running a test
› Report generation
› Save or load test configurations
› Quick error injection

Assorted Notifications
› Clear indication of link status for single or dual ports
› Negotiated speed display for single or dual ports
› Optical power status available at all times for single or dual ports
› Pass/fail indication at all times for all tests

Streamlined Navigation
› Remote discovery button available at all times; no reason to leave your current location to scan for a remote unit
› Testing status can be maximized to fill the entire screen by simply clicking on the alarm status button; whether the unit is in your hand or across the room, test results can be easily determined with a simple glance at the display screen
› RFC 2544 configuration is maximized in a single page; no need to navigate through multiple screens to configure individual subtests
› RFC 2544 results and graphs are also maximized in a single page; no need to navigate through multiple screens to view individual RFC subtest results

RAPID, ACCURATE TEST RESULTS AT YOUR FINGERTIPS

Key Features
Intelligent Network Discovery Mode
Using any NetBlazer series test set, you can single-handedly scan the network and connect to any available EXFO datacom remote tester. Simply select the unit to be tested and choose whether you want traffic to be looped back via Smart Loopback or Dual Test Set for simultaneous bidirectional EtherSAM and RFC 2544 results. No more need for an additional technician at the far end to relay critical information—the NetBlazer products take care of it all.

Smart Loopback Flexibility
The Smart Loopback functionality has been enhanced to offer five distinct loopback modes.
Whether you are looking to pinpoint loopback traffic from a UDP or TCP layer, or all the way down to a completely promiscuous mode (Transparent Loopback mode), NetBlazer has the flexibility to adjust for all unique loopback situations.

Global Pass/Fail Analysis
The NetBlazer series provides real-time pass/fail status via text or icons. Clicking on the pass/fail indicator maximizes this important status to full screen, providing instant, easily understood notification whether the unit is in the technician's hands or across the room.

Remembering the Last IP or MAC Addresses
Field technicians have enough things to worry about and don't always have the luxury of time to enter the same IP or MAC address test after test. The NetBlazer series remembers the last 10 MAC, IPv4 and IPv6 addresses as well as J0/J1 traces for 10G WAN, even after the unit has been rebooted.

Traffic Generation
Unparalleled analog visual gauges combined with user-defined thresholds show instantaneously whether or not the test traffic is in or out of expected ranges. Additionally, bandwidth and frame size can be modified on-the-fly without navigating away to a different page, giving technicians instantaneous reaction on the gauges. Traffic generation brings together over 10 critical stats in a very visual and organized fashion, ensuring that technicians can quickly and easily interpret the outcome of the test.
[Screenshot callouts: analog gauges lined with green and red layers to represent the expected thresholds; real-time bandwidth and frame-size adjustment; overall pass/fail assessment; throughput, jitter and latency with visual pass/fail thresholds, analog gauges and digital readouts; frame loss and out-of-sequence notification.]

Multistream Configuration
Configuring multiple streams with proper COS and QOS bits can be a complex task. NetBlazer makes it simpler, with all streams easily selectable and configurable from one location. With large icons located throughout the stream pages, configuration becomes as simple as a touch of a finger. Technicians can define one configuration profile and apply it to all the background streams simultaneously. From there, it is just a matter of making slight tweaks as needed rather than complete configuration profiles per stream.

Through Mode
Through mode testing consists of passing traffic through either of the NetBlazer's two 100/1000 Base-X ports or the two 10/100/1000 Base-T ports for in-service troubleshooting of live traffic between the carrier/service provider network and the customer network. Through mode allows technicians to access circuits under test without the need for a splitter.

Supporting 10 Gigabit Ethernet
The 10 Gigabit Ethernet interface is available in both 10 GigE LAN and 10 GigE WAN modes via a single SFP+ transceiver. All Ethernet testing applications—from BER testing to the full EtherSAM suite—are available for both IPv4 and IPv6. Unique to the 10 GigE WAN interface is the ability to send and monitor SONET/SDH J0/J1 traces and the path signal label (C2). The WAN interface can also monitor SONET and SDH alarms and errors.

ETHERSAM: THE NEW STANDARD IN ETHERNET TESTING
Until now, RFC 2544 has been the most widely used Ethernet testing methodology. However, it was designed for network device testing in the lab, not for service testing in the field. ITU-T Y.156sam is the newly introduced draft standard for turning up and troubleshooting carrier Ethernet services. It has a number of advantages over RFC 2544, including validation of critical SLA criteria such as packet jitter and QoS measurements.
This methodology is also significantly faster, therefore saving time and resources while optimizing QoS.
EXFO's EtherSAM test suite—based on the draft ITU-T Y.156sam Ethernet service activation methodology—provides comprehensive field testing for mobile backhaul and commercial services.
Contrary to other methodologies, EtherSAM supports new multiservice offerings. It can simulate all types of services that will run on the network and simultaneously qualify all key SLA parameters for each of these services. Moreover, it validates the QoS mechanisms provisioned in the network to prioritize the different service types, resulting in better troubleshooting, more accurate validation and much faster deployment. EtherSAM is comprised of two phases: the network configuration test and the service test.

Network Configuration Test
The network configuration test consists of sequentially testing each service. It validates that the service is properly provisioned and that all specific KPIs or SLA parameters are met.

Service Test
Once the configuration of each individual service is validated, the service test simultaneously validates the quality of all the services over time.

EtherSAM Bidirectional Results
EXFO's EtherSAM approach proves even more powerful as it executes the complete ITU-T Y.156sam test with bidirectional measurements. Key SLA parameters are measured independently in each test direction, thus providing 100% first-time-right service activation—the highest level of confidence in service testing.

EXPERT TEST TOOLS
EXpert Test Tools is a series of platform-based software testing tools that enhance the value of the FTB-1 platform, providing additional testing capabilities without the need for additional modules or units.

EXpert VoIP Test Tools
The EXpert VoIP Test Tools generate a voice-over-IP call directly from the test platform to validate performance during service turn-up and troubleshooting.
› Supports a wide range of signaling protocols, including SIP, SCCP, H.248/Megaco and H.323
› Supports MOS and R-factor quality metrics
› Simplifies testing with configurable pass/fail thresholds and RTP metrics

EXpert IP Test Tools
The EXpert IP Test Tools integrate six commonly used datacom test tools into one platform-based application to ensure field technicians are prepared for a wide range of testing needs.
› Rapidly perform debugging sequences with VLAN scan and LAN discovery
› Validate end-to-end ping and traceroute
› Verify FTP performance and HTTP availability

SPECIFICATIONS

OPTICAL INTERFACES
Two ports: 100M and GigE
Available wavelengths (nm): 850, 1310 and 1550
Supported optics: 100 Base-FX, 100 Base-LX, 1000 Base-SX, 1000 Base-LX, 1000 Base-ZX, 1000 Base-BX10-D, 1000 Base-BX10-U

SFP+ OPTICAL INTERFACES (10G)
Parameter | 10G Base-SR/SW | 10G Base-LR/LW | 10G Base-ER/EW
Wavelength (nm) | 850 | 1310 | 1550
Tx level (dBm) | –5 to –1 | –8 to 0.5 | –4.7 to 4.0

ELECTRICAL INTERFACES
Two ports: 10/100 Base-T half/full duplex, 1000 Base-T full duplex

GENERAL SPECIFICATIONS
Size (H x W x D): 130 mm x 36 mm x 252 mm (5 1/8 in x 1 7/16 in x 9 15/16 in)
Weight (with battery): 0.58 kg (1.3 lb)
Temperature

TESTING
EtherSAM (Y.156sam): Network configuration and service test as per ITU-T Y.156sam. Tests can be performed using remote loopback or Dual Test Set.

ADDITIONAL FEATURES
Optical power measurement: Supports optical power measurement at all times; displayed in dBm.

UPGRADES
FTB-8590 SFP modules: GigE/FC/2FC at 850 nm, MM, <500 m
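To make the EtherSAM service-test phase described above concrete, here is a minimal Python sketch that compares per-service measurements against an SLA table and returns a pass/fail verdict per service. The service names, KPI thresholds and measurement values are invented for the example; a real EtherSAM run derives them from live test traffic generated by the tester.

# Illustrative SLA table: committed KPIs per simulated service.
SLA = {
    "voip":  {"min_throughput_mbps": 2,   "max_jitter_ms": 5,  "max_latency_ms": 30, "max_frame_loss_pct": 0.1},
    "video": {"min_throughput_mbps": 50,  "max_jitter_ms": 30, "max_latency_ms": 50, "max_frame_loss_pct": 0.5},
    "data":  {"min_throughput_mbps": 500, "max_jitter_ms": 50, "max_latency_ms": 80, "max_frame_loss_pct": 1.0},
}

def service_test(measured: dict) -> dict:
    """Return a per-service PASS/FAIL verdict against the SLA table."""
    verdicts = {}
    for svc, m in measured.items():
        sla = SLA[svc]
        ok = (m["throughput_mbps"] >= sla["min_throughput_mbps"]
              and m["jitter_ms"] <= sla["max_jitter_ms"]
              and m["latency_ms"] <= sla["max_latency_ms"]
              and m["frame_loss_pct"] <= sla["max_frame_loss_pct"])
        verdicts[svc] = "PASS" if ok else "FAIL"
    return verdicts

print(service_test({
    "voip":  {"throughput_mbps": 2.1, "jitter_ms": 3.2, "latency_ms": 12, "frame_loss_pct": 0.0},
    "video": {"throughput_mbps": 48,  "jitter_ms": 21,  "latency_ms": 35, "frame_loss_pct": 0.2},
}))
# -> {'voip': 'PASS', 'video': 'FAIL'}  (video misses its committed throughput)

A bidirectional run, as described above, would simply evaluate two such measurement sets, one per test direction, and require both to pass.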
EXFO is certified ISO 9001 and attests to the quality of these products. This device complies with Part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation.
EXFO has made every effort to ensure that the information contained in this specification sheet is accurate. However, we accept no responsibility for any errors or omissions, and we reserve the right to modify design, characteristics and products at any time without obligation. Units of measurement in this document conform to SI standards and practices. In addition, all of EXFO's manufactured products are compliant with the European Union's WEEE directive. For more information, please visit /recycle. Contact EXFO for prices and availability or to obtain the phone number of your local EXFO distributor. For the most recent version of this spec sheet, please go to the EXFO website at /specs. In case of discrepancy, the Web version takes precedence over any printed literature.

FTB-860 NetBlazer Series Ethernet Testers

ORDERING INFORMATION
FTB-860G-XX-XX-XX
Notes:
a. Requires purchase of SFP.
b. Requires purchase of SFP+.

SPFTB860Series.1AN
© 2010 EXFO Inc. All rights reserved. Printed in Canada 10/09

Working Principle of Single-Line LiDAR AGVs


Laser navigation plays a crucial role in the functionality of single-line AGVs. This technology utilizes a laser sensor mounted on the AGV to scan its surroundings and create a map of the environment. These maps are then used to guide the AGV along its intended path, allowing for precise and autonomous navigation.

One of the key advantages of laser navigation for single-line AGVs is its ability to accurately detect and avoid obstacles. The laser sensor is capable of detecting objects in its path and adjusting the AGV's route accordingly. This allows for safe and efficient operation in dynamic environments, where obstacles may appear unexpectedly.

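To make the detect-and-adjust loop just described concrete, the sketch below shows one way a controller might flag echoes that fall inside the AGV's forward travel corridor and steer away from the nearest one. The scan format, thresholds and the simple avoidance rule are assumptions for illustration, not a specific vendor's algorithm; a production AGV would replan against its environment map instead.

import math

# Assumed single-line LiDAR output: (angle_rad, distance_m) pairs per revolution.
SAFETY_DISTANCE_M = 0.8       # react if anything is closer than this ahead
CORRIDOR_HALF_WIDTH_M = 0.4   # half the AGV's travel corridor

def obstacles_in_path(scan, heading_rad=0.0):
    """Return scan points that fall inside the forward travel corridor."""
    blocking = []
    for angle, dist in scan:
        # Project each echo into vehicle coordinates (x forward, y lateral).
        rel = angle - heading_rad
        x = dist * math.cos(rel)
        y = dist * math.sin(rel)
        if 0.0 < x < SAFETY_DISTANCE_M and abs(y) < CORRIDOR_HALF_WIDTH_M:
            blocking.append((x, y))
    return blocking

def adjust_route(scan):
    """Toy avoidance rule: slow down and steer away from the nearest blocker."""
    blocking = obstacles_in_path(scan)
    if not blocking:
        return {"speed": 1.0, "steer": 0.0}       # clear path, full speed
    nearest = min(blocking, key=lambda p: p[0])
    steer = -0.5 if nearest[1] > 0 else 0.5       # turn away laterally
    return {"speed": 0.2, "steer": steer}

# Example: one fake echo directly ahead at 0.5 m triggers avoidance.
print(adjust_route([(0.0, 0.5), (math.pi / 2, 3.0)]))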

FortiSandbox: Third-Generation Sandboxing with Dynamic AI Analysis


WHITE PAPER
FortiSandbox: Third-generation Sandboxing Featuring Dynamic AI Analysis

Executive Summary
As zero-day and unknown attacks continue to grow in numbers and sophistication, successful breaches are taking longer to discover—at a significant cost to businesses. Many first- and even second-generation sandboxing solutions lack key features like robust artificial intelligence (AI) to keep pace with the rapidly evolving threat landscape. A third generation of sandbox devices, however, not only can help detect breaches in a timely manner but also prevent them from happening. FortiSandbox integrates with the broader Fortinet Security Fabric and utilizes the universal MITRE ATT&CK security language for categorizing threat models and methodologies. FortiSandbox applies robust AI capabilities in the form of both static and dynamic behavior analysis to expose previously unseen malware and other threats before a breach can occur.

Expanding Breach Impacts Require Better Detection and Prevention
A majority (nearly 80%) of organizations report that they are introducing digital innovations faster than their ability to prevent successful cyberattacks.1 The vast majority (92%) of security architects report experiencing at least one intrusion in the past 12 months.2 And this problem is compounded by the fact that it typically takes more than nine months (279 days) for an organization to discover a security breach—with an average cost of nearly $4 million in damages per event.3
These critical issues beg the question—what can security architects do to improve not only breach detection but prevention as well? For some time now, organizations have successfully relied on sandboxing devices to discover threats (like malware) hidden in network traffic. But over time, cyber criminals have found ways to thwart detection by previous-generation sandboxes—such as encryption, polymorphism, and even their own AI-based codes that give these threats adaptive and intelligent evasion abilities.
Subsequently, a new generation of sandboxes was needed to keep pace with these rapidly evolving threats. FortiSandbox offers a third-generation sandbox designed for detection and prevention of breaches caused by the full spectrum of AI-enabled threats—both known and unknown.

Fortinet Third-generation FortiSandbox Solution
Unlike previous-generation solutions, a third-generation sandbox must do three things:
■ It must utilize a universal security language via a standardized reporting framework to categorize malware techniques.
■ It must share threat intelligence across a fully integrated security architecture in order to automate breach protection in real time as threats are discovered.
■ It must perform both static and dynamic (behavior) AI analysis to detect zero-day threats.
FortiSandbox addresses the problem of multiple, nonstandard security languages for malware reporting through the MITRE ATT&CK framework.4 This broadly adopted knowledge base of adversary tactics and techniques categorizes all malware tactics in an easy-to-read matrix. This, in turn, helps security teams accelerate threat management and response processes.
As an integrated part of the Fortinet Security Fabric architecture, FortiSandbox uses three forms of threat intelligence for automated breach detection and prevention. It uses global intelligence about emerging threats around the world via FortiGuard Labs researchers.
Second, it shares local intelligence with both Fortinet and non-Fortinet products across the security infrastructure for real-time situational awareness across the organization. Finally, and most importantly, FortiSandbox applies true AI capabilities—including both static and behavior analysis—to improve detection efficacy of zero-day threats.

FortiSandbox AI Capabilities
FortiSandbox applies two robust machine algorithms for static and dynamic threat analysis:
■ A patent-pending enhanced random forest with boost tree machine-learning model
■ A least squares optimization learning model
FortiSandbox uses the same language as the MITRE ATT&CK framework for categorizing malware techniques, which accelerates threat management and response.
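For illustration of the general idea behind the static-analysis models named above, here is a generic random-forest classifier over toy static file features. This is not Fortinet's patent-pending model: the features, labels and parameters are invented for the sketch, and it assumes Python with scikit-learn and NumPy available.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend static features per file: [size_kb, entropy, n_imports, packed_flag].
X = rng.random((500, 4))
y = (X[:, 1] > 0.7).astype(int)   # toy label: "high entropy => malicious"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

print("holdout accuracy:", clf.score(X_te, y_te))
# Verdict for one new sample; a real pipeline would extract these features
# from the file itself (header fields, strings, entropy, imports, ...).
print("verdict:", "suspicious" if clf.predict(X[:1])[0] else "clean")

A dynamic (behavior) model would be trained on runtime traces from detonation in the sandbox rather than on file-derived features, but the classification step is analogous.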
Simplicity, Flexibility, Scalability, and Cost
Complexity is the enemy of security. Security teams that rely on multiple, disaggregated security products from various vendors are typically forced to learn nonstandard security languages. Each solution may have its own unique language to report a potential threat. Alerts describing the same threat must be manually translated and mapped by the security operations (SecOps) team to understand the scope of a problem. This requires the dedicated attention of an experienced SecOps analyst while lengthening the time it takes to investigate and mitigate a potential attack.
And as networks continue to grow and organizations expand adoption of digital innovations, their sandboxing needs must be able to keep pace in terms of performance, scalability, deployment form factors, licensing flexibility, and (perhaps most of all) costs.

Solution simplicity. FortiSandbox addresses the problem of multiple security languages through the adoption of the MITRE ATT&CK framework. This establishes a universal security language for categorizing all malware techniques in an easy-to-read matrix as part of threat mapping and reporting—which helps security architects accelerate threat management and response processes.
FortiSandbox also supports inspection of multiple protocols in a single, unified solution to help simplify infrastructure, centralize reporting, improve threat-hunting capabilities, and reduce costs—capital investment (CapEx) as well as OpEx. Other vendors may require up to four separate sandboxes to provide comparable protection across the extended business infrastructure. Other products may also have limited deployment options that impact the ease of setup and expansion due to business growth.

Flexible form factors. As FortiSandbox is available in multiple form factors (e.g., on-premises appliance, VM, hosted cloud, public cloud), it supports growth and scalability while covering the entire network attack surface. For example, an organization that purchases a sandbox that only comes in an on-premises form factor will be forced to start over from scratch to extend sandboxing protection into future or current cloud environments (costing time, money, and greater potential risk exposure). And if they want to move to a hybrid sandbox deployment, they have no other alternative.

Scalability and clustering. To handle the scanning of a high number of files concurrently, multiple FortiSandbox devices can be used together in a load-balancing high-availability (HA) cluster.9 FortiSandbox supports up to 100-node sandbox clusters. Here, clusters can be connected to one another, which means homogenous scale is limitless, enabling FortiSandbox to keep up with high traffic throughput and to remain ahead of potential or planned additions to the business in the future.

Low TCO. Next-generation sandboxing delivers better security for modern networks, as well as better value for new or replacement sandboxing devices. Current testing shows that FortiSandbox provides outstanding performance, value, and investment protection with a low three-year TCO.10

Detect and Prevent Breaches with AI-based FortiSandbox
As new malware variants multiply and the risk of zero-day attacks makes breaches an eventuality for organizations of all sizes, security architects should look to replace outdated sandboxing devices with a solution that is designed for current needs. As part of the Fortinet Security Fabric, FortiSandbox provides an integrated, third-generation sandbox that allows security leaders to both detect and prevent breaches through true AI-based security effectiveness, manageability, scalability, and cost.

1 "The Cost of Cybercrime: Ninth Annual Cost of Cybercrime Study," Accenture and Ponemon Institute, March 6, 2019.
2 "The Security Architect and Cybersecurity: A Report on Current Priorities and Challenges," Fortinet, November 12, 2019.
3 "2019 Cost of a Data Breach Report," Ponemon Institute and IBM Security, July 2019.
4 "MITRE ATT&CK," MITRE, accessed November 25, 2019.
5 "NSS Labs Announces 2018 Breach Detection Systems Group Test Results," NSS Labs, October 11, 2018.
6 Jessica Williams, et al., "Breach Prevention Systems Test Report," NSS Labs, August 7, 2019.
7 "Q2 2019 Advanced Threat Defense (ATD) Testing Report," ICSA Labs, July 9, 2019.
8 Dipti Ghimire and James Hasty, "Breach Detection Systems Test Report," NSS Labs, October 19, 2017.
9 "Technical Note: FortiSandbox HA-Cluster Explanation and Configuration," Fortinet, accessed November 25, 2019.
10 Jessica Williams, et al., "Breach Prevention Systems Test Report," NSS Labs, August 7, 2019.

Copyright © 2021 Fortinet, Inc. All rights reserved. Fortinet, FortiGate, FortiCare and FortiGuard, and certain other marks are registered trademarks of Fortinet, Inc., and other Fortinet names herein may also be registered and/or common law trademarks of Fortinet. All other product or company names may be trademarks of their respective owners. Performance and other metrics contained herein were attained in internal lab tests under ideal conditions, and actual performance and other results may vary. Network variables, different network environments and other conditions may affect performance results. Nothing herein represents any binding commitment by Fortinet, and Fortinet disclaims all warranties, whether express or implied, except to the extent Fortinet enters a binding written contract, signed by Fortinet's General Counsel, with a purchaser that expressly warrants that the identified product will perform according to certain expressly-identified performance metrics and, in such event, only the specific performance metrics expressly identified in such binding written contract shall be binding on Fortinet. For absolute clarity, any such warranty will be limited to performance in the same ideal conditions as in Fortinet's internal lab tests. Fortinet disclaims in full any covenants, representations, and guarantees pursuant hereto, whether express or implied.
Fortinet reserves the right to change, modify, transfer, or otherwise revise this publication without notice, and the most current version of the publication shall be applicable.
February 25, 2021 9:59 PM

Risk Assessment Techniques and Methods — 8. Operating and Support Hazard Analysis


Chapter 8
Operating and Support Hazard Analysis

8.1 INTRODUCTION
The operating and support hazard analysis (O&SHA) is an analysis technique for identifying hazards in system operational tasks, along with the hazard causal factors, effects, risk, and mitigating methods. The O&SHA is an analysis technique for specifically assessing the safety of operations by integrally evaluating operational procedures, the system design, and the human system integration (HSI) interface. The scope of the O&SHA includes normal operation, test, installation, maintenance, repair, training, storage, handling, transportation, and emergency/rescue operations. Consideration is given to system design, operational design, hardware failure modes, human error, and task design. Human factors and HSI design considerations are a large factor in system operation and therefore also in the O&SHA. The O&SHA is conducted during system development in order to affect the design for future safe operations.

8.2 BACKGROUND
This analysis technique falls under the operations design hazard analysis type (OD-HAT) because it evaluates procedures and tasks performed by humans. The basic analysis types are described in Chapter 3. An alternate name for this analysis technique is the operating hazard analysis (OHA).
(Hazard Analysis Techniques for System Safety, by Clifton A. Ericson, II. Copyright © 2005 John Wiley & Sons, Inc.)
The purpose of the O&SHA is to ensure the safety of the system and personnel in the performance of system operation. Operational hazards can be introduced by the system design, procedure design, human error, and/or the environment. The overall O&SHA goal is to:
1. Provide safety focus from an operations and operational task viewpoint.
2. Identify task or operationally oriented hazards caused by design, hardware failures, software errors, human error, timing, and the like.
3. Assess the operations mishap risk.
4. Identify design system safety requirements (SSRs) to mitigate operational task hazards.
5. Ensure all operational procedures are safe.
The O&SHA is conducted during system development and is directed toward developing safe design and procedures to enhance safety during operation and maintenance. The O&SHA identifies the functions and procedures that could be hazardous to personnel or, through personnel errors, could create hazards to equipment, personnel, or both. Corrective action resulting from this analysis is usually in the form of design requirements and procedural inputs to operating, maintenance, and training manuals. Many of the procedural inputs from system safety are in the form of caution and warning notes.
The O&SHA is applicable to the analysis of all types of operations, procedures, tasks, and functions. It can be performed on draft procedural instructions or detailed instruction manuals. The O&SHA is specifically oriented toward the hazard analysis of tasks for system operation, maintenance, repair, test, and troubleshooting.
The O&SHA technique provides sufficient thoroughness in identifying and mitigating operations and support-type hazards when applied to a given system/subsystem by experienced safety personnel. A basic understanding of hazard analysis theory is essential, as well as knowledge of system safety concepts. Experience with, or a good working knowledge of, the particular type of system and subsystem is necessary in order to identify and analyze hazards that may exist within procedures and instructions. The methodology is uncomplicated and easily learned.
Standard O&SHA forms and instructions have been developed that are included as part of this chapter.
The O&SHA evaluates the system design and operational procedures to identify hazards and to eliminate or mitigate operational task hazards. The O&SHA can also provide insight into design changes that might adversely affect operational tasks and procedures. The O&SHA effort should start early enough during system development to provide inputs to the design and prior to system test and operation. The O&SHA worksheet provides a format for entering the sequence of operations, procedures, tasks, and steps necessary for task accomplishment. The worksheet also provides a format for analyzing this sequence in a structured process that produces a consistent and logically reasoned evaluation of hazards and controls.
Although some system safety programs (SSPs) may attempt to replace the O&SHA with a preliminary hazard analysis (PHA), this is not recommended since the PHA is not oriented specifically for the analysis of operational tasks. Use of the O&SHA technique is recommended for identification and mitigation of operational and procedural hazards.

8.3 HISTORY
The O&SHA technique was established very early in the history of the system safety discipline. It was formally instituted and promulgated by the developers of MIL-STD-882. It was developed to ensure the safe operation of an integrated system. It was originally called operating hazard analysis (OHA) but was later expanded in scope and renamed O&SHA to more accurately reflect all operational support activities.

8.4 DEFINITIONS
To facilitate a better understanding of O&SHA, the following definitions of specific terms are provided:

Operation. An operation is the performance of procedures to meet an overall objective. For example, a missile maintenance operation may be "replacing missile battery." The objective is to perform all the necessary procedures and tasks to replace the battery.

Procedure. A procedure is a set of tasks that must be performed to accomplish an operation. Tasks within a procedure are designed to be followed sequentially to properly and safely accomplish the operation. For example, the above battery replacement operation may be comprised of two primary procedures: (1) battery removal and (2) battery replacement. Each of these procedures contains a specific set of tasks that must be performed.

Task. A task is an element of work, which together with other elements of work comprises a procedure. For example, battery removal may consist of a series of sequential elements of work, such as power shutdown, compartment cover removal, removal of electrical terminals, unbolting of battery hold-down bolts, and battery removal.

Figure 8.1 portrays these definitions and their interrelationships. It should be noted that tasks might be further broken down into subtasks, sub-subtasks, and so forth.

8.5 THEORY
Figure 8.2 shows an overview of the basic O&SHA process and summarizes the important relationships involved. The intent of the O&SHA is to identify and mitigate hazards associated with the operational phases of the system, such as deployment, maintenance, calibration, test, training, and the like. This process consists of utilizing both design information and known hazard information to verify complete safety coverage and control of hazards. Operational task hazards are identified through the meticulous analysis of each detailed procedure that is to be performed during system operation or support. Input information for the O&SHA consists of all system design and operation
information, operation and support manuals, as well as hazards identified by other program hazard analyses. Typically the following types of information are available and utilized in the O&SHA:
1. Hazards and top-level mishaps (TLMs) identified from the preliminary hazard list (PHL), PHA, subsystem hazard analysis (SSHA), system hazard analysis (SHA), and health hazard assessment (HHA)
2. Engineering descriptions of the system, support equipment, and facilities
3. Written procedures and manuals for operational tasks to be performed
4. Chemicals, materials, and compounds used in the system production, operation, and support
5. Human factors engineering data and reports
6. Lessons learned, including human error mishaps
7. Hazard checklists

[Figure 8.1: Operation definitions—an operation decomposes into procedures (Procedure 1, 2, 3), tasks (Task 1.1, 1.2, 2.1, ...), and subtasks (Subtask 1.1a, 1.1b, ...).]
[Figure 8.2: O&SHA overview.]

The primary purpose of the O&SHA is to identify and mitigate hazards resulting from the system fabrication, operation, and maintenance. As such, the following information is typically output from the O&SHA:
1. Task hazards
2. Hazard causal factors (materials, processes, excessive exposures, errors, etc.)
3. Risk assessment
4. Safety design requirements to mitigate the hazard
5. The identification of caution and warning notes for procedures and manuals
6. The identification of special HSI design methods to counteract human-error-related hazards
Generally, the O&SHA evaluates manuals and procedural documentation that are in the draft stage. The output of the O&SHA will add cautions and warnings and possibly new procedures to the final documentation.

8.6 METHODOLOGY
The O&SHA process methodology is shown in Figure 8.3. The idea behind this process is that different types of information are used to stimulate hazard identification. The analyst employs hazard checklists, mishap checklists, and system tools. Typical system tools might include functional flow diagrams (FFDs), operational sequence diagrams (OSDs), and indentured task lists (ITLs).
Table 8.1 lists and describes the basic steps of the O&SHA process. The O&SHA process involves performing a detailed analysis of each step or task in the operational procedure under investigation. The objective of the O&SHA is to identify and mitigate hazards that might occur during the operation and support of the system. The human should be considered an element of the total system, both receiving inputs and initiating outputs during the conduct of this analysis. Hazards may result due to system design, support equipment design, test equipment, human error, HSI, and/or procedure design.
O&SHA consideration includes the environment, personnel, procedures, and equipment involved throughout the operation of a system. The O&SHA may be performed on such activities as testing, installation, modification, maintenance, support, transportation, ground servicing, storage, operations, emergency escape, egress, rescue, postaccident responses, and training. The O&SHA also ensures that operation and maintenance manuals properly address safety and health requirements. The O&SHA may also evaluate adequacy of operational and support procedures used to eliminate, control, or abate identified hazards or risks.
The O&SHA effort should start early enough to provide inputs to the design and prior to system test and operation. The O&SHA is most effective as a continuing closed-loop iterative process, whereby proposed changes, additions, and formulation of functional activities are evaluated for safety considerations, prior to formal acceptance.
O&SHA considerations should include:
1. Potentially hazardous system states under operator control
2. Operator hazards resulting from system design (hardware aging and wear, distractions, confusion factors, worker overload, operational tempo, exposed hot surfaces, environmental stimuli, etc.)
3. Operator hazards resulting from potential human error
4. Errors in procedures and instructions
5. Activities that occur under hazardous conditions, their time periods, and the actions required to minimize risk during these activities/time periods
6. Changes needed in functional or design requirements for system hardware/software, facilities, tooling, or support/test equipment to eliminate or control hazards or reduce associated risks
7. Requirements for safety devices and equipment, including personnel safety and life support equipment
8. Warnings, cautions, and special emergency procedures (e.g., egress, rescue, escape, render safe, explosive ordnance disposal, back-out, etc.), including those necessitated by failure of a computer software-controlled operation to produce the expected and required safe result or indication
9. Requirements for packaging, handling, storage, transportation, maintenance, and disposal of hazardous materials
10. Requirements for safety training and personnel certification
11. The safety effect of nondevelopmental items (NDI) and commercial off-the-shelf (COTS) items, both in hardware and software, during system operation
12. The safety effect of concurrent tasks and/or procedures

[Figure 8.3: O&SHA methodology—hazard checklists, mishap checklists, and system tools feed the O&SHA worksheets, which capture hazards, mishaps, causal sources, risk, SCFs and TLMs, mitigation methods, and SSRs.]

TABLE 8.1 O&SHA Process
Step | Task | Description
1 | Define system operation. | Define, scope, and bound the operation to be performed. Understand the operation and its objective.
2 | Acquire data. | Acquire all of the necessary design and operational data needed for the analysis. These data include both schematics and operation manuals.
3 | List procedures and detailed tasks. | Make a detailed list of all procedures and tasks to be considered in the O&SHA. This list can be taken directly from manuals, procedures, or operational plans that are already written or in draft form.
4 | Conduct O&SHA. | a. Input task list into the O&SHA worksheets. b. Evaluate each item in the task list and identify hazards for the task. c. Compare procedures and tasks with hazard checklists. d. Compare procedures and tasks with lessons learned. e. Be cognizant of task relationships, timing, and concurrent tasks when identifying hazards.
5 | Evaluate risk. | Identify the level of mishap risk presented by the hazard with, and without, mitigations in the system design.
6 | Recommend corrective action. | Recommend corrective action necessary to eliminate or mitigate identified hazards. Work with the design organization to translate the recommendations into SSRs. Also, identify safety features already in the design or procedures that are present for hazard mitigation.
7 | Ensure cautions and warnings are implemented. | Review documented procedures to ensure that corrective action is being implemented. Ensure that all caution and warning notes are inputted in manuals and/or posted on equipment appropriately, as recommended in the O&SHA.
8 | Monitor corrective action. | Participate in verification and validation of procedures and review the results to ensure that SSRs effectively mitigate hazards.
9 | Track hazards. | Transfer identified hazards into the hazard tracking system (HTS). Update hazards in the HTS as causal factors and risk are identified in the O&SHA.
10 | Document O&SHA. | Document the entire O&SHA process on the worksheets. Update for new information and closure of assigned corrective actions.

8.7 WORKSHEET
The O&SHA is a detailed hazard analysis utilizing structure and rigor. It is desirable to perform the O&SHA using a specialized worksheet. Although the specific format of the analysis worksheet is not critical, as a minimum, the following basic information is required from the O&SHA:
1. Specific tasks under analysis
2. Identified hazard
3. Effect of hazard
4. Hazard causal factors (varying levels of detail)
5. Recommended mitigating action (design requirement, safety devices, warning devices, special procedures and training, caution and warning notes, etc.)
6. Risk assessment (initial and final)

Figure 8.4 shows the columnar format O&SHA worksheet recommended for SSP usage. This particular worksheet format has proven to be useful and effective in many applications, and it provides all of the information necessary from an O&SHA.
[Figure 8.4: Recommended O&SHA worksheet—columns for System, Operation, Analyst, Date, Task, Hazard No., Hazard, Causes, Effects, IMRI, Recommended Action, FMRI, Comments, and Status.]

The following instructions describe the information required under each column entry of the O&SHA worksheet:
1. System. This entry identifies the system under analysis.
2. Operation. This entry identifies the system operation under analysis.
3. Analyst. This entry identifies the name of the O&SHA analyst.
4. Date. This entry identifies the date of the O&SHA analysis.
5. Task. This column identifies the operational task being analyzed. List and describe each of the steps or tasks to be performed. If possible, include the purpose and the mode or phase of operation being performed.
6. Hazard Number. This is the number assigned to the identified hazard in
the O&SHA (e.g., O&SHA-1, O&SHA-2). This is for future reference to the particular hazard source and may be used, for example, in the hazard action record (HAR). The hazard number is at the end of the worksheet because not all tasks listed will have hazards associated with them, and this column could be confusing at the front of the worksheet.
7. Hazard. This column identifies the specific hazard, or hazards, that could possibly result from the task. (Remember: Document all hazard considerations, even if they are later proven to be nonhazardous.)
8. Causes. This column identifies conditions, events, or faults that could cause the hazard to exist and the events that can trigger the hazardous elements to become a mishap or accident.
9. Effects. This column identifies the effect and consequences of the hazard, should it occur. The worst-case result should be the stated effect.
10. Initial Mishap Risk Index (IMRI). This column provides a qualitative measure of mishap risk significance for the potential effect of the identified hazard, given that no mitigation techniques are applied to the hazard. Risk measures are a combination of mishap severity and probability, and the recommended values from MIL-STD-882 are shown below.

Severity | Probability
1. Catastrophic | A. Frequent
2. Critical | B. Probable
3. Marginal | C. Occasional
4. Negligible | D. Remote
 | E. Improbable

11. Recommended Action. This column establishes recommended preventive measures to eliminate or mitigate the identified hazards. Recommendations generally take the form of guideline safety requirements from existing sources or a proposed mitigation method that is eventually translated into a new derived SSR intended to mitigate the hazard. SSRs are generated after coordination with the design and requirements organizations. Hazard mitigation methods should follow the preferred order of precedence established in MIL-STD-882 for invoking or developing safety requirements, which are shown below.

Order of Precedence
1. Eliminate hazard through design selection.
2. Control hazard through design methods.
3. Control hazard through safety devices.
4. Control hazard through warning devices.
5. Control hazard through procedures and training.

12. Final Mishap Risk Index (FMRI). This column provides a qualitative measure of mishap risk significance for the potential effect of the identified hazard, given that mitigation techniques and safety requirements are applied to the hazard. The same values used in column 10 are also used here.
13. Comments. This column provides a place to record useful information regarding the hazard or the analysis process that is not noted elsewhere.
14. Status. This column states the current status of the hazard, as being either open or closed.

Note in this analysis methodology that each and every procedural task is listed and analyzed. For this reason, not every entry in the O&SHA form will constitute a hazard since not every task is hazardous. This process documents that the O&SHA considered all tasks.
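To make the IMRI/FMRI columns concrete, here is a minimal lookup sketch in Python. The severity and probability categories come from the MIL-STD-882 table above, but the banding of index pairs into High/Serious/Medium/Low is an illustrative assumption for this example, not the standard's exact matrix.

SEVERITY = {1: "Catastrophic", 2: "Critical", 3: "Marginal", 4: "Negligible"}
PROBABILITY = {"A": "Frequent", "B": "Probable", "C": "Occasional",
               "D": "Remote", "E": "Improbable"}

def mishap_risk_index(severity: int, probability: str) -> str:
    """Return a qualitative risk level for a (severity, probability) pair."""
    if severity not in SEVERITY or probability not in PROBABILITY:
        raise ValueError("use severity 1-4 and probability A-E")
    # Illustrative banding: lower severity number plus more frequent = worse.
    score = severity + "ABCDE".index(probability)   # 1..8, low score = worse
    if score <= 2:
        return "High"
    if score <= 4:
        return "Serious"
    if score <= 6:
        return "Medium"
    return "Low"

# Example: an unmitigated 1D hazard, and the same hazard mitigated to 1E.
print(mishap_risk_index(1, "D"))   # Serious (under this illustrative banding)
print(mishap_risk_index(1, "E"))   # Medium  (risk reduced after mitigation)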
8.8 HAZARD CHECKLISTS
Hazard checklists provide a common source for readily recognizing hazards. Since no single checklist is ever really adequate in itself, it becomes necessary to develop and utilize several different checklists. Utilizing several checklists may result in some repetition, but complete coverage of all hazardous elements will be more certain. If a hazard is duplicated, it should be recognized and condensed into one hazard. Remember that a checklist should never be considered a complete and final list but merely a catalyst for stimulating hazard recognition. Chapter 4 on PHL analysis provided some example general-purpose hazard checklists applicable to system design. Figure 8.5 provides an example hazard checklist applicable to operational tasks. This example checklist is not intended to represent all hazard sources but some typical considerations for an O&SHA.
[Figure 8.5: Operational hazard checklist.]

8.9 SUPPORT TOOLS
The functional flow diagram (or functional block diagram) simplifies system design and operation for clarity and understanding. Use of the FFD for O&SHA evaluation of procedures and tasks is recommended.
Indentured equipment lists were defined in Chapter 1 as a valuable aid in understanding systems and performing hazard analyses. ITLs are also developed to assist in the design and development of operations.
Operational sequence diagrams (OSDs) are a special type of diagram used to define and describe a series of operations and tasks using a graphical format. The OSD plots a flow of information, data, or energy relative to time (actual or sequential) through an operationally defined system using standard symbols to relate actions taken. Actions in the OSD may include inspections, data transmittal/receipt, storage, repair, decision points, and so forth. The OSD helps to display and simplify activities in a highly complex system and identify procedurally related hazards.
Symbols used in the OSD are adapted from the American Society of Mechanical Engineers (ASME) flowchart standards, as shown in Figure 8.6. The OSD methodology was originally defined in MIL-H-46855 [1]. An example OSD is shown in Figure 8.7 for a missile system. Note that the subsystems are denoted along the top, and time is denoted in the left-hand column.
[Figure 8.6: Operational sequence diagram symbols (adapted from ASME flowchart standards).]
[Figure 8.7: Example operational sequence diagram for a missile system.]

8.10 GUIDELINES
The following are some basic guidelines that should be followed when completing the O&SHA worksheet:
1. Remember that the objective of the O&SHA is to evaluate the system design and operational procedures to identify hazards and to eliminate or mitigate operational task hazards.
2. Start the O&SHA by populating the O&SHA worksheet with the specific tasks under investigation.
3. A hazard write-up in the O&SHA worksheet should be clear and understandable, with as much information as necessary to understand the hazard.
4. The O&SHA hazard column does not have to contain all three elements of a hazard: hazardous element (HE), initiating mechanisms (IMs), and outcome (O). The combined columns of the SSHA worksheet can contain all three components of a hazard. For example, it is acceptable to place the HE in the hazard section, the IMs in the cause section, and the O in the effect section. The hazard, causes, and effects columns should together completely describe
to indicate that it has been reviewed.8.11.2Example2In order to further demonstrate the O&SHA methodology,the same hypothetical Ace Missile System from Chapters4,5,and6will be used.The system design is shown again in Figure8.8.Figure8.9shows the basic planned operational phases for the Ace Missile Sys-tem.Phase4has been selected for O&SHA in this example.The detailed set of tasks to accomplish phase4procedure is provided in Table8.6.TABLE8.2Example Electrical Outlet ReplacementProcedureStep Description of Task1.0Locate circuit breaker2.0Open circuit breaker3.0Tag circuit breaker4.0Remove receptacle wall plate—2screws5.0Remove old receptacle—2screws6.0Unwire old receptacle—disconnect3wires7.0Wire new receptacle—connect3wires8.0Install new receptacle—2screws9.0Install old wall plate—2screws10.0Close circuit breaker11.0Remove circuit breaker tag12.0Test circuitT A B L E 8.3O &S H A E x a m p l e 1—W o r k s h e e t 1144T A B L E 8.4O &S H A E x a m p l e 1—W o r k s h e e t 2145T A B L E 8.5O &S H A E x a m p l e 1—W o r k s h e e t 3146It should be noted that in a real-world system the steps in Table 8.3would likely be more refined and consist of many more discrete and detailed steps.The steps have been kept simple here for purposes of demonstrating the O&SHA technique.Tables 8.7through 8.10contain the O&SHA worksheets for the Ace Missile System example.- Warhead - Battery- Computer/SW - Receiver - Destruct - Fuel- Rocket BoosterFigure 8.8Ace Missile System.Phase 6 Phase 7Phase 1 Phase 2 Phase 3 Phase 4 Phase 5 Figure 8.9Ace functional flow diagram of missile operational phases.TABLE 8.6Missile Installation in Launch Tube Procedure Step Description of Task4.1Remove missile from ship storage locker.4.2Load missile onto handcart transporter.4.3Transport missile to launch tube.4.4Hoist missile into launch tube.4.5Run missile tests.4.6Install missile cables.4.7Remove S&A pins.4.8Place missile in standby alert.8.11EXAMPLES147T A B L E 8.7O &S H A E x a m p l e 2—W o r k s h e e t 1148T A B L E 8.8O &S H A E x a m p l e 2—W o r k s h e e t 2149T A B L E 8.9O &S H A E x a m p l e 2—W o r k s h e e t 3S y s t e m :A c e M i s s i l e S y s t e mO p e r a t i o n :M i s s i l e I n s t a l l a t i o n i n L a u n c h T u b eO p e r a t i n g a n d S u p p o r t H a z a r d A n a l y s i s A n a l y s t :D a t e :T a s kH a z a r d N o .H a z a r dC a u s e s E f f e c t s I M R IR e c o m m e n d e d A c t i o n F M R I C o m m e n t s S t a t u sT a s k 4.5:R u n m i s s i l e t e s t s .O H A -9M i s s i l e t e s t c a u s e s m i s s i l e l a u n c h .T e s t e q u i p m e n t f a u l t ;s t r a y v o l t a g e o n t e s t l i n e s .I n a d v e r t e n t m i s s i l e l a u n c h ,r e s u l t i n g i n p e r s o n n e l i n j u r y .1D D e v e l o p t e s t e q u i p m e n t p r o c e d u r e s a n d i n s p e c t i o n s R e q u i r e t r a i n e d a n d q u a l i fie d p e r s o n n e l C a u t i o n n o t e t o e n s u r e S &A p i n s a r e i n s t a l l e d 1E O p e nO H A -10M i s s i l e t e s t c a u s e s d e s t r u c t s y s t e m i n i t i a t i o n .T e s t e q u i p m e n t f a u l t ;s t r a y v o l t a g e o n t e s t l i n e s .I n a d v e r t e n t m i s s i l e d e s t r u c t i n i t i a t i o n ,r e s u l t i n g i n p e r s o n n e l i n j u r y .1D D e v e l o p t e s t e q u i p m e n t p r o c e d u r e s a n d i n s p e c t i o n s R e q u i r e t r a i n e d a n d q u a l i fie d p e r s o n n e l C a u t i o n n o t e t o e n s u r e S &A p i n s a r e i n s t a l l 
e d 1E O p e nT a s k 4.6:I n s t a l l m i s s i l e c a b l e s .O H A -11C a b l e s i n c o r r e c t l y i n s t a l l e d ,r e s u l t i n g i n m i s m a t e d c o n n e c t o r s t h a t c a u s e w r o n g v o l t a g e s o n m i s s i l e l a u n c h w i r e .H u m a n e r r o r r e s u l t s i n i n c o r r e c t c o n n e c t o r m a t i n g t h a t p l a c e s w r o n g v o l t a g e s o n c r i t i c a l c o n n e c t o r p i n s .I n a d v e r t e n t m i s s i l e l a u n c h ,r e s u l t i n g i n p e r s o n n e l d e a t h /i n j u r y .1DR e q u i r e t r a i n e d a n d q u a l i fie d p e r s o n n e l C a u t i o n n o t e t o e n s u r e S &A p i n s a r e i n s t a l l e d D e s i g n c o n n e c t o r s t o p r e v e n t m i s m a t i n g1E O p e nP a g e :3o f 4150。

Analysis Methods


Analysis Methods

Analysis methods are essential tools for researchers and analysts to make sense of data, draw conclusions, and make informed decisions. There are various analysis methods available, each with its own strengths and weaknesses. In this document, we will explore some common analysis methods and their applications.

1. Descriptive Analysis

Descriptive analysis is the simplest form of data analysis. It involves summarizing and describing the main features of a dataset. This method is useful for providing an overview of the data, such as the mean, median, mode, range, and standard deviation. Descriptive analysis is often used to explore and understand the basic characteristics of a dataset before moving on to more complex analysis methods.
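For instance, the measures just listed can be computed with Python's standard library alone; the dataset below is made up purely for illustration.

import statistics as st

data = [12, 15, 15, 18, 21, 24, 24, 24, 30, 35]

print("mean:  ", st.mean(data))
print("median:", st.median(data))
print("mode:  ", st.mode(data))
print("range: ", max(data) - min(data))
print("stdev: ", st.stdev(data))   # sample standard deviation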

2. Inferential Analysis

Inferential analysis is used to make inferences or predictions about a population based on a sample of data. This method involves using statistical techniques to draw conclusions about a larger group based on the characteristics of a smaller group. Inferential analysis is commonly used in hypothesis testing, confidence intervals, and regression analysis. It allows researchers to make generalizations and predictions about a population based on a sample.
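As a small illustration of the hypothesis-testing use case, the sketch below runs a one-sample t-test with SciPy; the sample values and the 50.0 reference mean are invented for the example, and SciPy is assumed to be installed.

from scipy import stats

sample = [51.2, 49.8, 50.5, 52.1, 48.9, 50.7, 51.5, 49.4]

# H0: the population mean equals 50.0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=50.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject H0: the population mean likely differs from 50.0")
else:
    print("Fail to reject H0 at the 5% level")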
3. Exploratory Data Analysis
3GPP 5G Base Station (BS) Release 16 Conformance Testing, English original (3GPP TS 38.141-1)


1 Scope
4.2.2 BS type 1-H
4.3 Base station classes
All rights reserved. UMTS™ is a Trade Mark of ETSI registered for the benefit of its members 3GPP™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners LTE™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners GSM® and the GSM logo are registered and owned by the GSM Association

Brilliantly Simple Security and Control


brilliantly simple security and control
…effectively and more efficiently than any other global vendor.

Security used to be about identifying code known to be bad and preventing it from breaching the organization's network perimeter. Today, that's not enough. Increased employee mobility, flexible working and visitors plugging into the corporate systems are all leading to the rapid disappearance of the traditional network. As IT departments fight to regain control, a fragmented security strategy that involves separate firewalls, anti-virus and anti-spam is no longer acceptable.
Against a background of escalating support desk costs and relentless demands for increased access to corporate information, the challenge of providing reliable protection from today's sophisticated, blended threats is complicated by other factors. The need to enforce internal and regulatory compliance policies, and the emergence of the IT department as a key supporter of business strategy and processes, has made its importance broader and more critical than ever before.
The result is a recognition that today's security requires not just the blocking of malware, but also the controlling of legitimate applications, network access, computer configuration, and user behavior. The solution to the problem lies in enforcing security through control. Sophos Enterprise Security and Control does just that.

"We're seeing different types of threat, a vastly changed environment and organizations struggling with as many as ten point-products. Our response is simple – we've integrated the protection they need into a single, easily managed solution."
Richard Jacobs, Sophos Chief Technology Officer

Evolving threat – the need for control

Unifying multiple threat technologies at the web, email and endpoint
Enterprise Security and Control gives you a brilliantly simple way to manage the cost and complexity of keeping your organization threat-free.

Defeating today's and tomorrow's threats
Sophos provides ongoing rapid protection against multiple known and emerging threats. Unique technologies developed by experts in SophosLabs™ protect you from unknown threats at every vulnerable point – desktops, laptops, servers, mobile devices, email and web – before they can execute and even before we have seen them.

Unifying control of the good, the bad, and the suspicious
As well as blocking malicious code and suspicious behavior, we give you the control you need to prevent data leakage and maximize user productivity – making web browsing safe, eliminating spam, stopping phishing attacks, and letting you control the use of removable storage devices, wireless networking protocols and unauthorized software like VoIP, IM, P2P and games. You can ensure that security protection on your computers is up to date and enabled, certify computers before and after they connect to your network, and prevent unauthorized users from connecting.

Giving real integration to deliver faster, better protection
No matter what stage of the process you are talking about, we take a completely integrated approach. At the threat analysis level, SophosLabs combines malware, spam, application and web expertise. At the administrative level, you can manage all threats with single, integrated policies, and at the detection level, our unified engine looks for the good and the bad at every vulnerable point, in a single scan.

Driving down costs through simplification and automation
Our approach of easy integration and simplification for any size network allows you to achieve more from existing budgets.
At-a-glance dashboards, remote monitoring, and automation of day-to-day management tasks free you to tackle business problems rather than having to maintain the system.

"We've engineered an intelligent engine that simultaneously scans for all types of malware, suspicious behavior, and legitimate applications – to maximize the performance of our endpoint, web and email solutions. Security and control, in a single scan."
Wendy Dean, Sophos VP of Engineering

Over 100 million users in 150 countries rely on Sophos.

"It doesn't really matter anymore where the threat comes from – web download, email attachment, guest laptop – the lines are blurring. All that matters is that you don't get infected, and our exceptional visibility and expertise ensure you won't."
Vanja Svajcer, Sophos Principal Virus Researcher

Expertise and technology for real security
At the heart of our expertise is SophosLabs, giving you the fastest response in the industry to emerging threats, and delivering powerful, robust security. With an integrated global network of highly skilled analysts with over 20 years' experience in protecting businesses from known and emerging threats, our expertise covers every area of network security – viruses, spyware, adware, intrusion, spam, and malicious URLs.

Integrated threat expertise, deployment and detection
- Millions of emails and web pages analyzed every day
- Thousands of malicious URLs blocked every day
- Innovative proactive technologies for pre-execution detection
- Constant independent recognition, including 36 VB100 awards
- Automated analysis
- Genotype database with terabytes of data

"The excellence of our web, email and phone support services really sets us apart from our competitors. We provide 24-hour support, 365 days a year. When customers call us they speak directly to someone who is able to solve their problem."
Geoff Snare, Sophos Head of Global Technical Support

Web, email and telephone support included in all licenses.

Sophos NAC Advanced
Advanced features designed specifically for enterprise network access control requirements, providing easy deployment across existing network infrastructures, controlled access to the network, and enforced computer compliance with security policy before and after connecting to the network.

Improving security through control for web, email and endpoint
Enterprise Security and Control delivers complete protection for desktops, laptops, mobile devices, file servers, your email gateway and groupware infrastructure, and all your web browsing needs – in one simple license. It is also possible to subscribe separately to the Web, Email and Endpoint Security and Control services.
In addition, there is a more advanced network access control (NAC) option for larger organizations.

Web Security and Control
Managed appliances providing safe and productive browsing of the web, with fully integrated protection against malware, phishing attacks, drive-by downloads, anonymizing proxies, spyware, adware, visits to inappropriate websites, and data leakage from infected computers.

Email Security and Control
Managed email appliances and protection for Exchange, UNIX and Domino servers, providing unique integration of anti-virus, anti-spam, anti-phishing and policy enforcement capabilities to secure and control email content.

Endpoint Security and Control
A single automated console for Windows, Mac and Linux computers, providing integrated virus, spyware and adware detection, host intrusion prevention, application control, device control, network access control and firewall.

Multiple threat protection
- Anti-virus
- Anti-spyware
- Anti-adware and potentially unwanted applications
- Application control – VoIP, IM, P2P and more
- Device control – removable storage and wireless networking protocols
- Behavior analysis (HIPS)
- Client firewall
- Anti-spam
- Anti-phishing
- Email content control
- Malicious website blocking
- Productivity filtering
- Real-time web download scanning
- Automatic anonymizing proxy detection
- Control of guest access
- Blocking unknown or unauthorized users

Full details of each of our products can be found at and on separate technical datasheets.

Sophos Professional Services
Sophos Professional Services provides the right skills to implement and maintain complete endpoint, web and email security, ensuring rapid, customized deployment of our products.

Unrivalled round-the-clock support
Our globally managed support team provides web, email and telephone support. 24x7x365 technical support is included for all products and you can call us for one-to-one assistance at any time.

Simple pricing and licensing
One simple, subscription-based license provides web, email and telephone support and all future updates to protection, management and product upgrades.

"We're seeing a tremendous rise in organizations of all sizes switching to us from legacy security vendors. Like the leading independent analysts and industry watchers, they trust us, they trust our products, they trust our vision."
Steve Munford, Sophos Chief Executive Officer

Our unique approach is why analysts see us as the clear alternative to Symantec and McAfee, and why over 100 million users, including the world's leading business and security organizations, trust Sophos.

The analyst view
"Buyers who prefer a broad and comprehensive EPP suite with impressive management capability, especially NAC... will do well to consider Sophos." – Gartner, Magic Quadrant for Endpoint Protection Platforms 2007

The customer view
"We've been delighted by the high level of dedicated support and expertise delivered by Sophos, particularly given our need for a fast implementation." – Chris Leonard, European IT Security and Compliance Manager, Heinz

The industry view
"Sophos... consistently beat McAfee and Symantec in ease-of-use, which should reduce recurring costs in any size enterprise." – Cascadia Labs, Comparative Review, Endpoint Security for Enterprises

Sophos customers include:
- Citgo
- Deutsche Postbank AG
- GE, Inc
- Gulfstream
- Harvard University
- Heinz
- Hong Kong University
- Interbrew
- Marks & Spencer
- New York University
- Orange
- Oxford University
- Pulitzer
- Sainsbury's
- Siemens
- Société Générale
- Toshiba
- University of Hamburg
- University of Otago
- US Government Agencies
- Weleda AG
- Xerox Corporation

The clear alternative to Symantec and McAfee.
Boston, USA | Oxford, UK

CYME 8.2 Power System Analysis Software Brochure


CYME Power Engineering Software and Solutions
The suite of advanced power system analysis tools at the forefront of grid modernization

The CYME 8 Series, with its first version released in 2016, promised to deliver a new generation of our software, strengthening power system modeling and analytical capabilities while taking user-friendliness to another level. CYME 8.2 continues to deliver on that promise: it invests in system planning and operation in the context of high distributed energy resources (DER) penetration, and furthers the development of a powerful user interface connecting simplicity with efficiency.

Key new features include:
- Several enhancements to the Integration Capacity Analysis (ICA) and DER Impact Evaluation, along with the refinement of DER equipment models.
- Enrichment of the time-series analysis capabilities through the integration of the Load Flow with Profiles Analysis and Advanced Project Manager.
- Continuous improvement of core components such as the Dynamic Data Pull and Data Push Publishing, Protective Device Analysis, Advanced Fault Locator and Distribution State Estimator.
- Additional reporting functionalities to ease the visualization and interpretation of simulation results.

Partnering with utilities, listening to the voice of the customer and leveraging our cutting-edge engineering and IT expertise, the CYME team continues to serve the industry with a state-of-the-art engineering tool that brings dependable results to users' fingertips.

Efficiency gains with the right DER analysis tools
As the electric power system landscape continues to evolve with the global trend for cleaner energy, utilities face various challenges associated with the requirements of our modern era. An increasing workload with a limited workforce, stringent regulations and high customer expectations trigger the need for contemporary tools enabling the automatic, comprehensive analysis of networks subject to current and future DER deployment.

The CYME Integration Capacity Analysis module, performing thorough distribution system load and generation hosting capacity analysis, becomes more flexible in terms of simulation parameters and more informative in its results as it is adopted by an increasing number of utilities. Feeder-level results, hosting capacity distribution charts and bottleneck identification are new outputs to support engineers with their grid assessment and planning.

The CYME DER Impact Evaluation module, automating the repetitive, time-consuming and error-prone verifications involved in system impact studies, sees its scope widen with a series of new verifications related to protection schemes. The analysis is also more flexible, with increased granularity of its simulation parameters and the ability to consider multiple dispersed installations in a single execution.

DER modeling reaches new heights with the possibility of creating a library of AC/DC converters, improvements to advanced inverter functions, and support for single- and three-phase power-electronics-based shunt reactive power compensation devices.
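To make the idea of a hosting-capacity sweep concrete, here is a deliberately simplified sketch. It is generic illustration, not CYME's algorithm or API; the linear voltage-rise "power flow" and all numbers in it are invented. DER injection at a candidate node is increased step by step until the first limit violation, and the last non-violating level is reported:

    # Generic hosting-capacity sweep (toy model, not the CYME implementation).
    # The "power flow" is a linear voltage-rise approximation: each kW of DER
    # injection raises the local voltage by a fixed, assumed sensitivity.

    V_MAX = 1.05            # upper voltage limit, per unit
    SENSITIVITY = 6e-6      # assumed p.u. voltage rise per kW injected

    def hosting_capacity_kw(step_kw=50.0, max_kw=20_000.0):
        feasible = 0.0
        injection = step_kw
        while injection <= max_kw:
            voltage = 1.0 + SENSITIVITY * injection  # stand-in for a real load flow
            if voltage > V_MAX:
                break                                # first violation: stop the sweep
            feasible = injection
            injection += step_kw
        return feasible

    print(hosting_capacity_kw())   # 8300.0 kW with these toy numbers

A production study would replace the one-line voltage model with a full network solution and check thermal and protection limits as well, but the sweep structure is the same.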
While utilities operate in an era of rapid change where new challenges require innovative solutions, providing clean, reliable, safe and affordable power remains at the core of their mission. Driven by its vast user base, the CYME team continues to enhance its best-of-breed power system analysis software, making CYME 8.2 the essential tool for supporting engineers with accurate system performance assessment, electrical asset and capital expenditure optimization, and sound decision-making.

Visibility and awareness on future off-peak system conditions
Ever-increasing DER penetration levels raise interest in hourly forecasts that leverage AMI and SCADA measurements for system analysis associated with long-term planning. As a result, the CYME software has taken a great leap forward in time-series simulation capabilities with the integration of its Load Flow with Profiles Analysis and Advanced Project Manager.

The CYME Advanced Project Manager module, assisting with long-term network planning via a toolset geared for efficient as-planned system assessment and risk-mitigation scenario comparison, has seen its framework evolve towards a chronology-driven architecture. Time-stamped model modifications, addressing future violations through traditional solutions or non-wires alternatives, now establish the project timeline and enable calendar-based navigation.

When used in conjunction with the CYME Steady State Analysis with Load Profiles module, snapshot or long-term time-range load flow analyses, ingesting feeder-level down to service-level forecasts, can be performed on an electric system model that changes over time. Profile and billing data management, as well as time-series analysis result visualization, have also been seamlessly integrated into the CYME software user interface for the highest level of user-friendliness.
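The time-series principle (one network solution per interval, with loads scaled by a profile, on a model that may change over time) reduces to a loop of the following shape. Again, this is a toy stand-in rather than the CYME engine; the profile values and the voltage-drop sensitivity are invented:

    # Toy time-series "load flow": scale a base load by an hourly profile and
    # flag intervals whose voltage drop exceeds the limit.  The one-line
    # voltage model stands in for a real network solution.

    BASE_LOAD_KW = 4000.0
    HOURLY_PROFILE = [0.55, 0.50, 0.48, 0.47, 0.50, 0.60, 0.75, 0.90,
                      0.95, 0.92, 0.90, 0.88, 0.85, 0.84, 0.86, 0.90,
                      1.00, 1.10, 1.15, 1.10, 1.00, 0.90, 0.75, 0.62]
    DROP_PER_KW = 1.3e-5   # assumed p.u. voltage drop per kW served
    V_MIN = 0.95

    for hour, factor in enumerate(HOURLY_PROFILE):
        load_kw = BASE_LOAD_KW * factor
        voltage = 1.0 - DROP_PER_KW * load_kw    # stand-in for the load flow
        if voltage < V_MIN:
            print(f"hour {hour:02d}: {voltage:.3f} p.u. below limit at {load_kw:.0f} kW")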
Unprecedented model and algorithm sophistication
The power system model being one of the cornerstones of the CYME software, particular care is taken to ensure an organic evolution of its equipment and network modeling capabilities. In line with industry trends and technological advancements, CYME 8.2 features several enhancements aimed at providing the best model to emulate network behavior under various operating conditions.

- VAR Compensator – Single- or three-phase power-electronics-based shunt reactive power compensation device (e.g. Varentec ENGO™, AMSC D-VAR VVO™, ABB PCS100 STATCOM, Ingeteam INGEGRID™ STATCOM, etc.).
- AC/DC Converter – Dedicated single- or three-phase equipment allowing the creation of a library (manufacturer, model and standard information), plus enhancements to advanced inverter functions.
- Cables and Conductors – Duplex service drop for detailed low-voltage secondary distribution modeling.
- DER Devices – Control system block diagram development via the UDM stability model, enabling voltage ride-through analysis.
- Transformers – T-T and T-TN configurations.

As part of our commitment to continuously improve our analysis functions and stay in sync with the growing and evolving needs of our users, several recent additions to the software have been further enhanced:

- Dynamic Data Pull and Data Push Publishing – New software development tool, CYME Studio, to support in-house development.
- Protective Device Analysis – Better abnormal-conditions reporting and improvements to the Minimum Fault, Sequence of Operations and Arc Flash Hazards analyses.
- Advanced Fault Locator – Support for additional use cases, improved user-friendliness and revamped results reporting and display.
- Distribution State Estimator – Dedicated ammeter, varmeter and wattmeter, plus a series of novel measurement attributes.

As the CYME team keeps improving its calculation engines and refining its modeling capabilities, the outcome of these multi-faceted, user-driven development initiatives makes CYME 8.2 a fundamental tool for all power engineering studies.

For over 30 years, the CYME team has built a strong reputation with its clients by delivering the best software solutions backed by unsurpassed customer service. For information on the CYME software, or for a web demo, please reach out to us at ******************. Users can get more details on CYME 8.2 by downloading the Readme document at https:///downloads/software.

Computer Vision and Pattern Recognition


Tyzx DeepSea High Speed Stereo Vision System
John Iselin Woodfill, Gaile Gordon, Ron Buck
Tyzx, Inc., 3885 Bohannon Drive, Menlo Park, CA 94025
Woodfill, Gaile, Ron @

Abstract
This paper describes the DeepSea Stereo Vision System, which makes the use of high speed 3D images practical in many application domains. This system is based on the DeepSea processor, an ASIC which computes absolute depth based on simultaneously captured left and right images with high frame rates, low latency, and low power. The chip is capable of running at 200 frames per second with 512x480 images, with only 13 scan lines of latency between data input and first depth output. The DeepSea Stereo Vision System includes a stereo camera, onboard image rectification, and an interface to a general purpose processor over a PCI bus. We conclude by describing several applications implemented with the DeepSea system, including person tracking, obstacle detection for autonomous navigation, and gesture recognition.

In Proceedings of the IEEE Computer Society Workshop on Real Time 3-D Sensors and Their Use, Conference on Computer Vision and Pattern Recognition, (Washington, D.C.), June 2004.

1. Introduction
Many image processing applications require, or are greatly simplified by, the availability of 3D data. This rich data source provides direct absolute measurements of the scene. Object segmentation is simplified because discontinuities in depth measurements generally coincide with object borders. Simple transforms of the 3D data can also provide alternative virtual viewpoints of the data, simplifying analysis for some applications.

Stereo depth computation, in particular, has many advantages over other 3D sensing methods. First, stereo is a passive sensing method. Active sensors, which rely on the projection of some signal into the scene, often pose high power requirements or safety issues under certain operating conditions. They are also detectable – an issue in security or defense applications. Second, stereo sensing provides a color or monochrome image which is exactly (inherently) registered to the depth image. This image is valuable in image analysis, either using traditional 2D methods or novel methods that combine color and depth image data. Third, the operating range and Z resolution of stereo sensors are flexible because they are simple functions of lens field-of-view, lens separation, and image size. Almost any operating parameters are possible with an appropriate camera configuration, without requiring any changes to the underlying stereo computation engine. Fourth, stereo sensors have no moving parts, an advantage for reliability.

High frame rate and low latency are critical factors for many applications which must provide quick decisions based on events in the scene. Tracking moving objects from frame to frame is simpler at higher frame rates because relative motion is smaller, creating less tracking ambiguity. In autonomous navigation applications, vehicle speed is limited by the speed of the sensors used to detect moving obstacles. A vehicle traveling at 60 mph covers 88 ft in a second. An effective navigation system must monitor the vehicle path for new obstacles many times during these 88 feet to avoid collisions. It is also critical to capture 3D descriptions of potential obstacles to evaluate their location and trajectory relative to the vehicle path and whether their size represents a threat to the vehicle. In safety applications such as airbag deployment, the 3D position of vehicle occupants must be understood to determine whether an airbag can be safely deployed – a decision that must be made within tens of milliseconds.
Computing depth from two images is a computationally intensive task. It involves finding, for every pixel in the left image, the corresponding pixel in the right image. The correct corresponding pixel is defined as the pixel representing the same physical point in the scene. The distance between two corresponding pixels in image coordinates is called the disparity and is inversely proportional to distance. In other words, the nearer a point is to the sensor, the more it will appear to shift between left and right views. In dense stereo depth computation, finding a pixel's corresponding pixel in the other image requires searching a range of pixels for a match. As image size, and therefore pixel density, increases, the number of pixel locations searched must increase to retain the same operating range. Therefore, for an NxN image, the stereo computation is approximately O(N^3): N^2 pixels, each searched over a disparity range that grows with N. Fortunately, the search at every pixel can be effectively parallelized.

Tyzx has developed a patented architecture for stereo depth computation and implemented it in an ASIC called the DeepSea Processor. This chip enables the computation of 3D images with very high frame rates (up to 200 fps for 512x480 images) and low power requirements (1 watt), properties that are critical in many applications. We describe the DeepSea Processor in Section 2. The chip is the basis for a stereo vision system, which is described in Section 3. We then describe several applications that have been implemented based on this stereo vision system in Section 4, including person tracking, obstacle detection for autonomous navigation, and gesture recognition.

2. DeepSea Processor
The design of the DeepSea ASIC is based on a highly parallel, pipelined architecture [6, 7] that implements the Census stereo algorithm [8]. As the input pixels enter the chip, the Census transform is computed at each pixel based on the local neighborhood, resulting in a stream of Census bit vectors. At every pixel a summed Hamming distance is used to compare the Census vectors around the pixel to those at 52 locations in the other image. These comparisons are pipelined, with 52 comparisons occurring simultaneously. The best match (shortest summed Hamming distance) is located with five bits of subpixel precision. The DeepSea Processor converts the resulting pixel disparity to metric distance measurements using the stereo camera's calibration parameters and the depth units specified by the user.
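The Census/Hamming matching just described can be sketched in a few lines of NumPy. This is a minimal per-pixel illustration, not the DeepSea pipeline: the window size and search range are arbitrary choices, np.roll's wrap-around border handling is a shortcut, and the chip's neighborhood-summed Hamming costs, subpixel interpolation and validity checks are omitted:

    import numpy as np

    def census_transform(img, r=1):
        # Census transform over a (2r+1)x(2r+1) window: each neighbor brighter
        # than the center contributes a 1 bit, otherwise a 0 bit.  np.roll
        # wraps at the image borders, which is acceptable for a toy sketch.
        bits = np.zeros(img.shape, dtype=np.uint32)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if dy == 0 and dx == 0:
                    continue
                neighbor = np.roll(img, (-dy, -dx), axis=(0, 1))  # pixel (y+dy, x+dx)
                bits = (bits << np.uint32(1)) | (neighbor > img).astype(np.uint32)
        return bits

    def popcount(x):
        # SWAR bit count for uint32 arrays; Hamming distance = popcount(a ^ b)
        x = x - ((x >> np.uint32(1)) & np.uint32(0x55555555))
        x = (x & np.uint32(0x33333333)) + ((x >> np.uint32(2)) & np.uint32(0x33333333))
        x = (x + (x >> np.uint32(4))) & np.uint32(0x0F0F0F0F)
        return ((x * np.uint32(0x01010101)) >> np.uint32(24)).astype(np.int32)

    def disparity_map(left, right, max_disp=16):
        # For each left pixel, keep the disparity whose Census vectors differ
        # in the fewest bits, searching along the scan line as rectification
        # allows (left pixel x is compared against right pixel x - d).
        cl, cr = census_transform(left), census_transform(right)
        best_cost = np.full(left.shape, 2**31 - 1, dtype=np.int32)
        best_disp = np.zeros(left.shape, dtype=np.int32)
        for d in range(max_disp):
            cost = popcount(cl ^ np.roll(cr, d, axis=1))
            better = cost < best_cost
            best_cost[better] = cost[better]
            best_disp[better] = d
        return best_disp

    # Toy check: the right view is the left view shifted by 3 pixels, so the
    # recovered disparity should be 3 almost everywhere.
    rng = np.random.default_rng(0)
    left = rng.integers(0, 256, size=(16, 32), dtype=np.uint8)
    right = np.roll(left, -3, axis=1)
    print(np.bincount(disparity_map(left, right, max_disp=8).ravel()))

Because the Census bit vectors encode only relative brightness, the same sketch keeps working if one image's gain or bias is altered, which is the invariance property discussed next.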
Under specific imaging conditions, the search for correspondence can be restricted to a single scan line rather than a full 2D window. This simplification is possible, in the absence of lens distortion, when the imagers are coplanar, their optical axes are parallel, and corresponding scan lines are co-linear. The DeepSea processor requires rectified imagery (see Section 3) to satisfy these criteria with real-world cameras.

The DeepSea Processor also evaluates a number of "interest operators" and validity checks which are taken into account to determine the confidence of a measurement. One example is the left/right check. A correct measurement should have the same disparity whether the search is initiated in the left image or the right image. Different results indicate an invalid measurement. This check is expensive in software, but easily performed in the DeepSea Processor.

2.1. Advantages of the Census Transform
One problem that makes determining stereo correspondence difficult is that the left and right images come from distinct imagers and viewpoints, and hence corresponding regions in the two images may have differing absolute intensities, resulting from distinct internal gains and biases as well as distinct viewing angles.

Figure 1: Census transform: pixels darker than the center are 0's in the bit vector; pixels brighter than the center are 1's.

The DeepSea Processor uses the Census transform as its essential building block to compare two pixel neighborhoods. The Census transform represents a local image neighborhood in terms of its relative intensity structure. Figure 1 shows that pixels that are darker than the center are represented as 0's, whereas pixels brighter than the center are represented by 1's. The bit vector is output in row-major order. Comparisons between two such Census vectors are computed as their Hamming distance. Because the Census transform is based on the relative intensity structure of each image, it is invariant to gain and bias in the imagers. This makes the stereo matching robust enough that the left and right imagers can, for example, run independent exposure control without impacting the range quality. This is a key advantage for practical deployment.

Independent evaluations of stereo correlation methods [5, 2, 1] have found that Census performs better than classic correlation approaches such as normalized cross correlation (NCC), sum of absolute differences (SAD), and sum of squared differences (SSD).

2.2. DeepSea Processor performance
The figure of merit used to evaluate the speed of stereo vision systems is Pixel Disparities per Second (PDS). This is the total number of pixel-to-pixel comparisons made per second, computed from the area of the image, the width of the disparity search window in pixels, and the frame rate.
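Plugging the figures quoted in this paper into that definition reproduces the headline throughput number:

    # PDS check using the image size, disparity range and frame rate quoted above
    width, height = 512, 480      # image size (pixels)
    disparities = 52              # disparity search window width
    fps = 200                     # frame rate
    pds = width * height * disparities * fps
    print(f"{pds:,} pixel-disparities per second")   # 2,555,904,000, i.e. ~2.6 billion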
The DeepSea Processor is capable of 2.6 billion PDS. This is faster than any other stereo vision system we are aware of by an order of magnitude or more. Additional performance details are summarized in Figure 2.

Figure 2: DeepSea Processor specifications – up to 512x2048 (10-bit) images, 52 disparities, 5 bits of subpixel precision, 16-bit output, 200 fps at 512x480, 1 watt.

Figure 3: DeepSea Board.

3. DeepSea Stereo System
The DeepSea Development System is a stereo vision system that employs the DeepSea Processor and is used to develop new stereo vision applications. The DeepSea Stereo Vision System consists of a PCI board which hosts the DeepSea Processor, a stereo camera, and a software API that allows the user to interact with the board in C++ from a host PC. Color and range images are transferred to the host PC's memory via DMA.

The DeepSea board (shown in Figure 3) performs communication with a stereo camera, onboard image rectification, and interfaces to a general purpose processor over the PCI bus. Since the host PC does not perform image rectification and stereo correlation, it is available for running the user's application code.

Figure 4: Tyzx stereo camera family: 5 cm, 22 cm, and 33 cm baselines.

The DeepSea stereo cameras are self-contained stereo cameras designed to work in conjunction with the DeepSea Board. DeepSea stereo cameras connect directly to the board using a high-speed stereo LVDS link. By directly connecting to the DeepSea system, latency is reduced and the host's PCI bus and memory are not burdened with frame-rate raw image data. Tyzx has developed a family of stereo cameras which includes 5 cm, 22 cm, and 33 cm lens separations (baselines), as shown in Figure 4. A variety of standard CMOS imagers are used based on application requirements for resolution, speed, color, and shutter type. Each camera is calibrated to define basic imager and lens parameters such as lens distortion and the exact relationship between the imagers. These calibration parameters are used by the system to rectify the images. After this transformation the images appear distortion free, with co-planar image planes and corresponding scan lines aligned.

The frame rate of any given system configuration will vary based on the capabilities of the imager. Common configurations include:
- Omnivision: image sizes 400x300 to 512x512, frame rates of 30 fps to 60 fps, color
- National: image sizes 320x240 to 512x480, frame rate of 30 fps, high dynamic range
- Micron: image sizes 320x240 to 512x480, frame rates of 47 fps to 90 fps, freeze-frame shutter

4. Applications of Tyzx Stereo Sensors
Tracking people in the context of security systems is one application that is ideal for fast 3D sensors. The Tyzx distributed 3D person tracking system was the first application built based on the Tyzx stereo system. The fast frame rates simplify the matching of each person's location from frame to frame. The direct measurements of the 3D location of each person create more robust results than systems based on 2D images alone. We also use a novel background modeling technique that makes full use of color and depth data [3, 4], which contributes to the robustness of tracking under changing lighting. The fact that each stereo camera is already calibrated to produce absolute 3D measurements greatly simplifies the process of registering the cameras to each other and the world during installation. An example of tracking results is shown in Figure 5. The right image shows a plan view of the location of all the people in a large room; on the left these tracked locations are shown overlaid on the left view from each of four networked stereo cameras.
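A plan view like the one in Figure 5 can be built by projecting each depth pixel onto the ground plane and accumulating votes in a 2D grid. The sketch below shows that projection under a simple pinhole model; it illustrates the general technique rather than Tyzx's tracker, and the camera parameters and test scene are invented:

    import numpy as np

    def plan_view_occupancy(depth_z, cell_m=0.1, fx=500.0, cx=160.0,
                            x_range=(-3.0, 3.0), z_max=6.0):
        # Project each valid depth pixel to ground-plane coordinates with a
        # pinhole model, X = (u - cx) * Z / fx, then vote into a grid of
        # cell_m-sized cells.  depth_z holds metric Z in meters (0 = invalid).
        h, w = depth_z.shape
        u = np.tile(np.arange(w, dtype=np.float64), (h, 1))
        z = depth_z.astype(np.float64)
        valid = (z > 0) & (z < z_max)
        x = (u - cx) * z / fx
        valid &= (x >= x_range[0]) & (x < x_range[1])
        xi = ((x[valid] - x_range[0]) / cell_m).astype(int)
        zi = (z[valid] / cell_m).astype(int)
        grid = np.zeros((int(z_max / cell_m),
                         int((x_range[1] - x_range[0]) / cell_m)), dtype=np.int32)
        np.add.at(grid, (zi, xi), 1)  # each depth pixel votes for its floor cell
        return grid

    # Toy scene: a wall 4 m away (beyond z_max, so ignored) plus a "person"
    # blob at roughly x = 0.1-0.24 m, z = 2 m.
    depth = np.full((240, 320), 4.0)
    depth[80:200, 180:220] = 2.0
    grid = plan_view_occupancy(depth, z_max=3.0)
    print(np.unravel_index(np.argmax(grid), grid.shape))  # peak near (zi=20, xi=31)

Peaks in such a grid give candidate person locations in world coordinates, which a tracker can then associate from frame to frame.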
Fast 3D data is also critical for obstacle detection in autonomous navigation. Scanning laser based sensors have been the de facto standard in this role for some time. However, stereo sensors now present a real alternative because of faster frame rates, full image format, and their passive nature.

Figure 5: Tracking people in a large space based on a network of four Tyzx stereo integrated sensors.

Figure 6: CMU's Sandstorm autonomous vehicle. The Tyzx DeepSea stereo sensor is mounted in the white dome.

Figure 6 shows a Tyzx DeepSea stereo system mounted on the Carnegie Mellon Red Team's Sandstorm autonomous vehicle in the recent DARPA Grand Challenge race.

Sensing and reacting to a user's gestures is a valuable control method for many applications such as interactive computer games and tele-operation of industrial or consumer products. Any interactive application is very sensitive to the time required to sense and understand the user's motions. If too much time passes between the motion of the user and the reaction of the system, the application will seem sluggish and unresponsive. Consider for example the control of a drawing program. The most common interface used for this task is a mouse, which typically reports its position 100 to 125 times per second. In Figure 7 we show an example in which a user controls a drawing application with her fingertip instead of a mouse. The 3D position of the fingertip is computed from a stereo range image and a greyscale image. The finger position is tracked relative to the plane of the table top. When the fingertip approaches the surface of the table, the virtual mouse button is depressed. Motion of the fingertip along the table is considered a drag. The finger moving away from the surface is interpreted as the release of the button. In this application, a narrow baseline stereo camera is used to achieve an operating range of 2 to 3 feet from the sensor with 3D spatial accuracy on the order of millimeters.

Figure 7: Using 3D tracking of the fingertip as a virtual mouse to control a drawing application. Inset shows left image view.

Figure 8: Tyzx integrated stereo system: self-contained stereo camera, processor, and general purpose CPU.

5. Future Directions and Conclusions
Tyzx high speed stereo systems make it practical to bring high speed 3D data into many applications. Integration of the DeepSea Board with a general purpose processor creates a smart 3D sensing platform, reducing footprint, cost and power, and increasing deployability. Figure 8 shows a prototype stand-alone stereo system incorporating a stereo camera, DeepSea Board, and a general purpose CPU. This device requires only power and ethernet connections for deployment. The processing is performed close to the image sensor, including both the computation of 3D image data and the application specific processing of these 3D images. Only the low bandwidth results, e.g. object location coordinates or dimensions, are sent over the network. Even higher performance levels and further reductions in footprint and power are planned. In the future we envision a powerful networked 3D sensing platform no larger than the stereo camera itself.

Acknowledgments
Tyzx thanks SAIC for providing the Tyzx stereo system to Carnegie Mellon for their Sandstorm autonomous vehicle.
References
[1] J. Banks, M. Bennamoun, and P. Corke, "Non-parametric techniques for fast and robust stereo matching," in Proceedings of IEEE TENCON, Brisbane, Australia, December 1997.
[2] S. Gautama, S. Lacroix, and M. Devy, "Evaluation of Stereo Matching Algorithms for Occupant Detection," in Proceedings of the International Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, pages 177-184, September 1999, Corfu, Greece.
[3] G. Gordon, T. Darrell, M. Harville, and J. Woodfill, "Background estimation and removal based on range and color," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (Fort Collins, CO), June 1999.
[4] M. Harville, G. Gordon, and J. Woodfill, "Foreground Segmentation Using Adaptive Mixture Models in Color and Depth," in Proceedings of the IEEE Workshop on Detection and Recognition of Events in Video, (Vancouver, Canada), July 2001.
[5] H. Hirschmüller, "Improvements in Real-Time Correlation-Based Stereo Vision," in Proceedings of the IEEE Workshop on Stereo and Multi-Baseline Vision, pp. 141-148, December 2001, Kauai, Hawaii.
[6] J. Woodfill and B. Von Herzen, "Real-Time Stereo Vision on the PARTS Reconfigurable Computer," in Proceedings of the IEEE Symposium on Field-Programmable Custom Computing Machines, Napa, pp. 242-250, April 1997.
[7] Woodfill, Baker, Von Herzen, and Alkire, "Data processing system and method," U.S. Patent number 6,456,737.
[8] R. Zabih and J. Woodfill, "Non-parametric Local Transforms for Computing Visual Correspondence," in Third European Conference on Computer Vision, (Stockholm, Sweden), May 1994.

HP ArcSight SIEM Security Policy Management: Best Practices and Use Cases


Bhavika Kothari, QA Lead
Victor Lee, Product Manager, CISSP

Agenda
- Introduction
- Best practices
- Management tool
- Use cases
- Discussion and Q&A

HP ArcSight Next Generation Cyber Defense: Predict, Visualize, Search, Collect, Correlate, Respond (Analytics, SIEM)

Introduction
Why is manageability important for security?
- Ensure security policies are followed and enforced
- Manage the deployment holistically, not just individual elements
- Monitor, create alerts, and maintain security operations
- Deliver efficient and timely implementation
- Enable resources to focus on security analysis

Best practices
- Create a golden configuration
- Create groups
- Monitor critical events and set alerts
- Update to the latest ArcSight product release as soon as possible
- Back up regularly
- Review and audit changes
- Leverage the ArcSight user community in Protect724

Management tool
What are the benefits of using management tools?
- Reduce cost
- Faster and more reliable implementation of security policy
- Increase accuracy
- Enable resources to focus on security analytics

What is the name of the ArcSight management tool? ArcSight Management Center.

HP ArcSight Management Center
ArcSight Management Center (ArcMC) delivers centralized enterprise management that simplifies the deployment and maintenance of the desired enterprise security posture in a cost-effective and efficient manner. (ArcMC Version 2.0.)

ArcSight Management Center (ArcMC): a few definitions
- A host is a system that hosts at least one ArcSight product.
- A node is a managed ArcSight product: Connector, Connector Appliance, ArcSight Management Center, or Logger. Nodes can be software or hardware form factor.
- A configuration listed in ArcMC is considered a golden configuration.
- Subscribers are the nodes which can receive the golden configuration.
- When a subscriber's configuration is identical to the golden configuration, it is considered compliant; otherwise, it is non-compliant.
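The compliance notion above can be stated in one line of code. The data structures here are hypothetical stand-ins rather than ArcMC's, but they capture the rule: a node is compliant exactly when every setting defined by the golden configuration is present on the node with the same value.

    # Hypothetical sketch (not ArcMC code): golden configuration and per-node
    # configurations modeled as flat setting dictionaries.
    def is_compliant(golden, node_config):
        # Compliant exactly when every golden setting matches on the node.
        return all(node_config.get(key) == value for key, value in golden.items())

    golden = {"smtp.server": "mail.example.com", "ntp.enabled": True}
    nodes = {
        "logger-01": {"smtp.server": "mail.example.com", "ntp.enabled": True},
        "logger-02": {"smtp.server": "mail.example.com", "ntp.enabled": False},
    }
    for name, config in nodes.items():
        status = "compliant" if is_compliant(golden, config) else "non-compliant"
        print(f"{name}: {status}")   # logger-01: compliant; logger-02: non-compliant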
ArcMC architecture
(Diagram: browser and API clients reach ArcMC over HTTPS via the CWS and API; ArcMC in turn manages its nodes.)

ArcMC paradigm of operation
Step 1: Create or import a configuration in ArcMC.
Step 2: Add subscribers to the configuration.
Step 3: Push the configuration to the subscribers.
Step 4: Check compliance.

Configuration management use cases

Use case: Schedule regular configuration backup
Configure all the appliances to do backup on the same schedule, e.g. every Saturday at 10 p.m.

Use case: Logger filters
Add a new filter query: create filters once on one Logger and have the same filters on the rest of the Loggers without re-creating them on each one.

Use case: User management
- Add a new employee: create the same users on all the appliances, software or hardware form factor.
- Add new appliances, for example multiple ArcMCs or multiple Loggers: add existing users to the new appliances.

Use case: Windows Unified Connector configuration
- Push a Windows Unified Connector configuration to multiple Windows Unified Connectors (WUC).
- Run a compliance check to ensure the configurations are indeed on the SmartConnectors.

Use case: DNS management
- Add a new DNS server across all ArcSight appliances.
- Add a new DNS server to a logical group by location or function.

Use case: Compliance check
- Is my environment compliant with FIPS?
- Compliance checks can be extended, for example: is the configuration compliant with the baseline "golden" configuration? Is it following corporate policy?

Supported Logger configurations
- Logger Configuration Backup
- Logger SmartMessage Receiver
- Logger Transport Receiver
- Logger Storage Group
- Logger Filter

Supported Connector, Connector Appliance and ArcMC configurations
- Connectors: FIPS, Map Files, Parser Override, Syslog Connector, Windows Unified Connector, Bluecoat
- Connector Appliance and ArcMC: ConApp/ArcMC Configuration Backup

Supported System Admin configurations
- Software: Authentication External, Authentication Local Password, Authentication Session, User Configuration, SMTP
- Hardware: DNS, NTP, Network, SNMP

Bulk add host (import hosts)
- Allows adding hosts in bulk from a comma-separated values (.csv) file.
- Runs as a background batch job.
- Requirement: a .csv file with valid host entries.
- Results of the import-hosts job are stored in a text file at <install_dir>/userdata/arcmc/importhosts/

ArcMC node management
A node is a managed ArcSight product: Connector, Connector Appliance, Logger, or ArcMC. Nodes can be software or hardware form factor.

Use case: Update software to the latest release
When a new ArcSight software release is available, push the new versions of software to connectors, ArcMC appliances and Logger appliances.

Monitoring nodes
ArcMC 2.0 supports monitoring for:
- Connector Appliance (hardware and software)
- Logger Appliance (hardware and software)
- Local and managed ArcMCs (hardware and software)
- SmartConnectors

Health data monitored
ArcMC collects health data from managed products in 1-minute, 5-minute and 1-hour intervals to support charting and alert generation:
- CPU
- Memory
- Disk
- Network
- EPS in/out
- Event and queue stats
- Thread count
- Fan, voltage, power supply, temperature, RAID

Critical alert generation
Breach rules are defined to generate alerts against health data metrics. Example: generate a FATAL alert for any Logger whose average CPU usage in the past 5 minutes is greater than 90%:

breach.rule[1].product = LOGGER
breach.rule[1].severity = FATAL
breach.rule[1].metric = CPU
breach.rule[1].aggregation = AVG
breach.rule[1].measurement = GREATER
breach.rule[1].value = 90
breach.rule[1].timespan = 5
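To spell out the semantics of such a rule, the following hypothetical sketch (not ArcMC code) evaluates the example above against the last five 1-minute CPU samples of one Logger:

    # Hypothetical evaluation of the breach rule above (illustrative only).
    rule = {
        "product": "LOGGER",
        "severity": "FATAL",
        "metric": "CPU",
        "aggregation": "AVG",
        "measurement": "GREATER",
        "value": 90,
        "timespan": 5,  # minutes of 1-minute samples to aggregate
    }

    def breached(samples_1min, rule):
        window = samples_1min[-rule["timespan"]:]   # last N 1-minute samples
        aggregate = sum(window) / len(window)       # AVG aggregation
        if rule["measurement"] == "GREATER":
            return aggregate > rule["value"]
        return aggregate < rule["value"]

    cpu = [85, 92, 95, 91, 97]  # invented 1-minute CPU readings, average 92%
    if breached(cpu, rule):
        print(f'{rule["severity"]} alert: {rule["metric"]} {rule["aggregation"]} '
              f'over {rule["timespan"]} min exceeded {rule["value"]}%')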
Monitoring levels
- All managed products: displays alerts/breaches across all the managed products, with per-product severity and alert pie charts.
- Product type: displays alerts/breaches of a particular product type.
- Individual product: displays alerts/breaches on a managed node, along with health monitor stats (EPS in/out, CPU, memory utilization, hardware stats).

For more information
- TB3067, Connector Appliance Migration to ArcSight Management Center
- HP ArcSight demo station
- HP ArcSight Management Center demo station
- Contact your sales rep
- Presentations will be posted after Protect at https:///community/events/protect-conference

Please fill out a survey and hand it to the door monitor on your way out. Thank you for providing your feedback, which helps us enhance content for future events.

Session TB3133. Speakers: Victor Lee and Bhavika Kothari.

Military Industry English


I. Vocabulary

1. munition
- Definition: [noun] military weapons, ammunition, equipment, and stores; [verb] supply with munitions.
- Usage: as a noun it covers military supplies of all kinds, as in "munitions factory" (兵工厂); as a verb, e.g. "We need to munition the troops before the battle." (战斗前我们需要给部队提供军需品。)
- Example sentences:
  - The country has a large munitions depot. (这个国家有一个大型的军需库。)
  - They are busy munitioning the warships. (他们正忙于给战舰提供军需品。)

2. armament
- Definition: [noun] military weapons and equipment; the process of equipping military forces with weapons and equipment.
- Usage: denotes armaments in general, as in "nuclear armament" (核军备), or the process of arming, as in "the armament of the new army" (新军的武装).
- Example sentences:
  - The armament of the country has been modernized. (这个国家的军备已经现代化了。)
  - The reduction of armament is a global issue. (裁减军备是一个全球性的问题。)

3. artillery
- Definition: [noun] large-caliber guns used in warfare on land; the branch of an army that uses such guns.
- Usage: refers either to the guns themselves, as in "heavy artillery" (重炮), or to the artillery branch, as in "the artillery battalion" (炮兵营).
- Example sentences:
  - The enemy's artillery caused great damage. (敌人的火炮造成了巨大的破坏。)

Application of Single-Use System Technology


Editorial Team
Editor-in-Chief: 周新华. Deputy Editors: 厉梁积, 崔跌民. Editorial committee (in pinyin order): 陈浩军, 陈燕, 陈粟, 房里清, 贾国栋, 薛彤彤, 张喜红, 朱枫, 李晓辉, 刘大湾, 徐景辉 and others.
Participants (in pinyin order): 罗家立, 盛萍, 苏建华, 解红艳, 陶菊红, 严宇援, 王富职, 杨帆, 杨积艳, 王志坚, 俞丽华, 张艳, 张军利, 郑磊 and others.
International technical advisors (in alphabetical order): Christel Fenge; George Adams; Günter Jagschies; Isabelle Uettwiller; Janmeet Anant; Jean-Marc Cappia; Jeffrey Carter; Michael Felo; Michael Payne; Miriam Monge; Parrish M. Galliher; Ross Acucena.

Preface
The concept of single-use systems (Single Use Systems, SUS) has been applied in the pharmaceutical field for more than 30 years. Successfully applying single-use systems in drug development and manufacturing practice requires all parties to carry out the relevant assessment and implementation work on a science- and risk-based footing. Adoption of single-use systems started relatively late in China but has developed rapidly in recent years, and a number of biopharmaceutical companies have already introduced single-use systems into their manufacturing processes.

To give domestic regulators and the manufacturing industry reference material on single-use systems, the China Center for Food and Drug International Exchange led an effort, joined by experts from all sectors, that completed the "Compilation of Overseas Pharmaceutical Single-Use System Applications and Technical Documents" (the "Compilation") in January 2016.

Against a background of strong interest in applying single-use systems in the pharmaceutical industry at home and abroad, and a lack of targeted regulations and guidance both in China and internationally, experts from the drug regulatory authorities and the pharmaceutical industry maintained an open, active and effective dialogue, and the first edition of the Compilation was completed to a high standard in under a year. During its preparation the project received full guidance from the Center for Food and Drug Inspection of the China Food and Drug Administration, as well as strong support from Chinese and international experts at supplier companies including Merck Millipore and Bosch.

The Compilation reflects the close attention and active guidance of China's drug regulatory authorities toward the new challenges and new needs emerging in the pharmaceutical industry. It provides a reference basis for the regulation of single-use systems by China's drug regulatory system, and reference material that drug manufacturers can consult when applying single-use systems.

Six Sigma Terminology (English-Chinese Glossary)


$k —— Thousands of dollars 千美元
$M —— Millions of dollars 百万美元
% R & R —— Gage % Repeatability and Reproducibility %重复性和再现性
ANOVA —— Analysis Of Variance 方差分析
AOP —— Annual Operating Plan 年度运营计划
BB —— Black Belt 黑带: A process improvement project team leader who is trained and certified in the Six Sigma breakthrough methodology and tools, and who is responsible for project execution. 经"六西格玛"方法论和工具使用培训并认证的过程改进项目的项目负责人，负责项目的执行。
BOD —— Board of Directors 董事会
BPM —— Business Process Management 商业流程管理
BTS —— Breakthrough Technology Solution 突破性改进解决方案
C & E —— Cause and Effects matrix 因果矩阵
CAP —— Change Acceleration Process 加速变革流程
Capability —— 能力: The total range of inherent variation in a stable process. It is determined by using control charts data. 在稳定过程中全部内在固有变化的改变范围，由控制图的数据来确定。
Capability Index —— 能力指数: A calculated value used to compare process variation to a specification.
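For reference, the two capability entries above correspond to the standard textbook indices Cp = (USL - LSL) / (6 * sigma) and Cpk = min(USL - mu, mu - LSL) / (3 * sigma), where USL/LSL are the specification limits, mu the process mean and sigma the process standard deviation. These are general Six Sigma definitions rather than something specific to this glossary; a quick sketch with invented numbers:

    # Standard capability indices (textbook definitions):
    #   Cp  = (USL - LSL) / (6 * sigma)               potential capability
    #   Cpk = min(USL - mu, mu - LSL) / (3 * sigma)   penalizes off-center processes
    def cp(usl, lsl, sigma):
        return (usl - lsl) / (6 * sigma)

    def cpk(usl, lsl, mu, sigma):
        return min(usl - mu, mu - lsl) / (3 * sigma)

    # Invented example: specification 10 +/- 0.3, process mean 10.1, sigma 0.05
    print(cp(10.3, 9.7, 0.05))         # ~2.0
    print(cpk(10.3, 9.7, 10.1, 0.05))  # ~1.33: the off-center mean lowers Cpk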
