Adaptive Processor Allocation in Packet Processing Systems


DRAM: Efficient Adaptive MCMC

The success of the DR strategy depends largely on the fact that at least one of the proposals is successfully chosen. The intuition behind adaptive strategies is to learn from the information obtained during the run of the chain and, based on this, to tune the proposals to work more efficiently. In the example, we shall combine AM adaptation with an m-stage DR algorithm in the following way:

• The proposal at the first stage of DR is adapted just as in AM: the covariance $C_n^1$ is computed from the points of the sampled chain, no matter at which stage these points have been accepted in the sample path.

• The covariance $C_n^i$ of the proposal for the i-th stage (i = 2, ..., m) is always computed simply as a scaled version of the first-stage proposal, $C_n^i = \gamma_i C_n^1$.
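
A minimal sketch of this DRAM-style adaptation is given below (Python with NumPy). The function and variable names are hypothetical, the acceptance test is the plain Metropolis ratio rather than the full delayed-rejection correction, and a toy Gaussian target stands in for a real model:

```python
import numpy as np

def dram_step(x, log_target, history, gammas, eps=1e-6):
    """One adaptive step: AM-adapted stage-1 covariance, scaled copies for later DR stages."""
    d = x.size
    sd = 2.4 ** 2 / d                                   # standard AM scaling factor
    # Stage-1 covariance C_n^1 is estimated from *all* points of the sampled chain,
    # regardless of the DR stage at which they were accepted.
    C1 = sd * np.cov(np.asarray(history).T) + sd * eps * np.eye(d)
    log_px = log_target(x)
    for gamma in gammas:                                # gamma_1 = 1 for the first stage
        Ci = gamma * C1                                 # C_n^i = gamma_i * C_n^1
        y = np.random.multivariate_normal(x, Ci)
        # Simplified acceptance test; full DR uses the stage-wise corrected ratio.
        if np.log(np.random.rand()) < log_target(y) - log_px:
            return y
    return x                                            # every stage rejected: keep x

# Toy usage with a standard normal target in two dimensions.
log_target = lambda z: -0.5 * float(z @ z)
chain = [np.array([0.0, 0.0]), np.array([0.1, 0.1]), np.array([-0.2, 0.1])]
for _ in range(1000):
    chain.append(dram_step(chain[-1], log_target, chain, gammas=[1.0, 0.5, 0.1]))
```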

Memory Allocation Policy

The importance of memory allocation policies and their different types

A memory allocation policy is one of the key concepts in an operating system: it determines how the computer's memory resources are used and managed.

A well-chosen memory allocation policy improves system efficiency and performance, and also helps avoid problems caused by running out of memory.

This article explains why memory allocation policies matter and surveys the common types.

First, the memory allocation policy is critical to system performance and efficiency.

In a computer system, memory is a finite resource, so using and allocating it efficiently matters a great deal.

A sound allocation policy minimizes memory fragmentation and improves memory utilization, which in turn improves system performance and efficiency.

Second, different memory allocation policies suit different scenarios.

Specifically, common memory allocation policies include fixed partitioning, dynamic partitioning, best-fit allocation, and worst-fit allocation.

Each policy has its own characteristics and appropriate use cases.

Fixed partitioning is one of the simplest memory allocation policies.

It divides memory into partitions of fixed size, and each partition can be allocated to only one process.

This policy suits scenarios that call for stable partitions, such as embedded systems.

However, fixed partitioning suffers from low memory utilization and severe memory fragmentation.

Dynamic partitioning is a more flexible policy.

It carves memory into partitions that are created on demand, so partitions can be allocated to processes of different sizes.

Dynamic partitioning makes better use of memory, but it can lead to fragmentation.

To deal with fragmentation, techniques such as memory compaction and paging can be applied.

Best-fit and worst-fit allocation are refinements built on dynamic partitioning.

Best-fit allocation chooses the free partition that most closely matches a process's memory request, which reduces wasted space.

Worst-fit allocation instead places each request in the largest free partition, so that the leftover hole stays large enough to remain useful.

Both refinements have drawbacks, however: best fit tends to leave behind many tiny fragments that are too small to reuse, while worst fit quickly breaks up the large free blocks that later large requests would need.

Besides the policies above, there are other memory allocation schemes, such as non-contiguous allocation, the buddy system, and page-based allocation.

These schemes can be selected and tuned according to the specific requirements and characteristics of the system.
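
To make the difference between best-fit and worst-fit selection concrete, here is a minimal free-list sketch (Python). The data layout and function name are hypothetical; real allocators track holes with richer metadata:

```python
def pick_hole(holes, request, policy="best"):
    """Return the index of the free hole to use for `request` bytes, or None.

    holes  -- list of (start, size) tuples describing free memory
    policy -- "best" picks the smallest hole that fits (best fit),
              "worst" picks the largest hole overall (worst fit).
    """
    candidates = [(size, i) for i, (start, size) in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if policy == "best":
        return min(candidates)[1]   # tightest fit -> smallest leftover fragment
    return max(candidates)[1]       # largest hole -> leftover stays usable

# Toy usage
holes = [(0, 100), (200, 30), (400, 60)]
print(pick_hole(holes, 50, "best"))    # -> 2 (the 60-byte hole)
print(pick_hole(holes, 50, "worst"))   # -> 0 (the 100-byte hole)
```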

English Essay: Sustainable Development and Environmental Protection in the Integrated Circuit Design Industry

In the realm of integrated circuit (IC) design, sustainable development and environmental protection have emerged as crucial considerations. As the demand for advanced electronic devices continues to surge, so does the importance of mitigating the industry's environmental impact.

The IC design industry plays a pivotal role in today's technological landscape, driving innovations that power our modern digital age. However, this progress often comes at a cost to the environment. The manufacturing processes involved in IC design, including semiconductor fabrication and packaging, consume substantial energy and resources. These activities not only contribute to carbon emissions but also generate various forms of waste that can harm ecosystems if improperly managed.

To address these challenges, stakeholders within the IC design sector are increasingly focusing on sustainable practices. One key area of improvement lies in energy efficiency during the manufacturing process. Innovations in semiconductor manufacturing technologies have significantly reduced the energy intensity per unit of production. Advanced techniques such as multi-patterning lithography and process node scaling have enabled manufacturers to achieve higher performance with lower energy consumption and a reduced environmental footprint.

Moreover, the adoption of clean energy sources such as solar and wind power is becoming more prevalent in IC fabrication facilities. By transitioning to renewable energy sources, companies not only reduce their carbon footprint but also contribute to global efforts in combating climate change. This shift towards sustainability is reinforced by regulatory frameworks and industry standards that encourage adherence to environmental guidelines and promote responsible corporate practices.

Beyond manufacturing, the design phase itself plays a critical role in environmental sustainability. Integrated circuit designers are increasingly incorporating principles of eco-design into their projects. This involves optimizing chip layouts for energy efficiency, minimizing power consumption during operation, and extending product lifespan through robust design practices. Additionally, the concept of "design for environment" (DFE) emphasizes the use of materials that are less harmful to the environment and easier to recycle.

Furthermore, the IC design industry is exploring innovative solutions to mitigate electronic waste (e-waste) generated from end-of-life products. Recycling programs for electronic components and materials are being developed to recover valuable resources such as precious metals and reduce the environmental impact of disposal.

Collaboration across the supply chain is crucial for advancing sustainability in IC design. Partnerships between semiconductor manufacturers, design firms, research institutions, and environmental organizations facilitate knowledge sharing and promote the development of sustainable technologies. By fostering a culture of innovation and responsibility, the IC design industry can continue to drive economic growth while minimizing its ecological footprint.

In conclusion, the pursuit of sustainable development and environmental protection in the IC design industry is not merely a trend but a necessity for future generations. Through continuous innovation, adoption of clean technologies, and adherence to eco-friendly practices, stakeholders can achieve a balance between technological advancement and environmental stewardship. By prioritizing sustainability today, we pave the way for a more resilient and environmentally conscious future.

Dell EMC Networking S4048T-ON Switch Specification Sheet

The Dell EMC Networking S4048T-ON switch is the industry's latest data center networking solution, empowering organizations to deploy modern workloads and applications designed for the open networking era. Businesses who have made the transition away from monolithic proprietary mainframe systems to industry standard server platforms can now enjoy even greater benefits from Dell EMC open networking platforms. By using industry-leading hardware and a choice of leading network operating systems to simplify data center fabric orchestration and automation, organizations can tailor their network to their unique requirements and accelerate innovation. These new offerings provide the needed flexibility to transform data centers. High-capacity network fabrics are cost-effective and easy to deploy, providing a clear path to the software-defined data center of the future with no vendor lock-in. The S4048T-ON supports the open source Open Network Install Environment (ONIE) for zero-touch installation of alternate network operating systems, including the feature-rich Dell Networking OS.

High-density 1/10G BASE-T switch

The Dell EMC Networking S-Series S4048T-ON is a high-density 100M/1G/10G/40GbE top-of-rack (ToR) switch purpose-built for applications in high-performance data center and computing environments. Leveraging a non-blocking switching architecture, the S4048T-ON delivers line-rate L2 and L3 forwarding capacity within a conservative power budget. The compact S4048T-ON design provides industry-leading density of 48 dual-speed 1/10G BASE-T (RJ45) ports, as well as six 40GbE QSFP+ uplinks to conserve valuable rack space and simplify the migration to 40Gbps in the data center core. Each 40GbE QSFP+ uplink can also support four 10GbE (SFP+) ports with a breakout cable. In addition, the S4048T-ON incorporates multiple architectural features that optimize data center network flexibility, efficiency and availability, including I/O panel to PSU airflow or PSU to I/O panel airflow for hot/cold aisle environments, and redundant, hot-swappable power supplies and fans. The S4048T-ON supports the feature-rich Dell Networking OS, VLT, network virtualization features such as VRF-lite and VXLAN Gateway, and support for the Dell Embedded Open Automation Framework.

• The S4048T-ON is the only switch in the industry that supports traditional network-centric virtualization (VRF) and hypervisor-centric virtualization (VXLAN).
The switch fully supports L2 VX-• The S4048T-ON also supports Dell EMC Networking’s Embedded Open Automation Framework, which provides enhanced network automation and virtualization capabilities for virtual data centerenvironments.• The Open Automation Framework comprises a suite of interre-lated network management tools that can be used together orindependently to provide a network that is flexible, available andmanageable while helping to reduce operational expenses.Key applicationsDynamic data centers ready to make the transition to software-defined environments• High-density 10Gbase-T ToR server access in high-performance data center environments• Lossless iSCSI storage deployments that can benefit from innovative iSCSI & DCB optimizations that are unique only to Dell NetworkingswitchesWhen running the Dell Networking OS9, Active Fabric™ implementation for large deployments in conjunction with the Dell EMC Z-Series, creating a flat, two-tier, nonblocking 10/40GbE data center network design:• High-performance SDN/OpenFlow 1.3 enabled with ability to inter-operate with industry standard OpenFlow controllers• As a high speed VXLAN Layer 2 Gateway that connects thehypervisor based ovelray networks with nonvirtualized infrastructure Key features - general• 48 dual-speed 1/10GbE (SFP+) ports and six 40GbE (QSFP+)uplinks (totaling 72 10GbE ports with breakout cables) with OSsupport• 1.44Tbps (full-duplex) non-blocking switching fabric delivers line-rateperformance under full load with sub 600ns latency• I/O panel to PSU airflow or PSU to I/O panel airflow• Supports the open source ONIE for zero-touch• installation of alternate network operating systems• Redundant, hot-swappable power supplies and fansDELL EMC NETWORKING S4048T-ON SWITCHEnergy-efficient 10GBASE-T top-of-rack switch optimized for data center efficiencyKey features with Dell EMC Networking OS9Scalable L2 and L3 Ethernet switching with QoS and a full complement of standards-based IPv4 and IPv6 features, including OSPF, BGP and PBR (Policy Based Routing) support• Scalable L2 and L3 Ethernet switching with QoS and a full complement of standards-based IPv4 and IPv6 features, including OSPF, BGP andPBR (Policy Based Routing) support• VRF-lite enables sharing of networking infrastructure and provides L3traffic isolation across tenants• Increase VM Mobility region by stretching L2 VLAN within or across two DCs with unique VLT capabilities like Routed VL T, VLT Proxy Gateway • VXLAN gateway functionality support for bridging the nonvirtualizedand the virtualized overlay networks with line rate performance.• Embedded Open Automation Framework adding automatedconfiguration and provisioning capabilities to simplify the management of network environments. Supports Puppet agent for DevOps• Modular Dell Networking OS software delivers inherent stability as well as enhanced monitoring and serviceability functions.• Enhanced mirroring capabilities including 1:4 local mirroring,• Remote Port Mirroring (RPM), and Encapsulated Remote PortMirroring (ERPM). 
Rate shaping combined with flow based mirroringenables the user to analyze fine grained flows• Jumbo frame support for large data transfers• 128 link aggregation groups with up to 16 members per group, usingenhanced hashing• Converged network support for DCB, with priority flow control(802.1Qbb), ETS (802.1Qaz), DCBx and iSCSI TLV• S4048T-ON supports RoCE and Routable RoCE to enable convergence of compute and storage on Active FabricUser port stacking support for up to six units and unique mixed mode stacking that allows stacking of S4048-ON with S4048T-ON to providecombination of 10G SFP+ and RJ45 ports in a stack.Physical48 fixed 10GBase-T ports supporting 100M/1G/10G speeds6 fixed 40 Gigabit Ethernet QSFP+ ports1 RJ45 console/management port with RS232signaling1 USB 2.0 type A to support mass storage device1 Micro-USB 2.0 type B Serial Console Port1 8 GB SSD ModuleSize: 1RU, 1.71 x 17.09 x 18.11”(4.35 x 43.4 x 46 cm (H x W x D)Weight: 23 lbs (10.43kg)ISO 7779 A-weighted sound pressure level: 65 dB at 77°F (25°C)Power supply: 100–240V AC 50/60HzMax. thermal output: 1568 BTU/hMax. current draw per system:4.6 A at 460W/100VAC,2.3 A at 460W/200VACMax. power consumption: 460 WattsT ypical power consumption: 338 WattsMax. operating specifications:Operating temperature: 32°F to 113°F (0°C to45°C)Operating humidity: 5 to 90% (RH), non-condensing Max. non-operating specifications:Storage temperature: –40°F to 158°F (–40°C to70°C)Storage humidity: 5 to 95% (RH), non-condensingRedundancyHot swappable redundant powerHot swappable redundant fansPerformance GeneralSwitch fabric capacity:1.44Tbps (full-duplex)720Gbps (half-duplex)Forwarding Capacity: 1080 MppsLatency: 2.8 usPacket buffer memory: 16MBCPU memory: 4GBOS9 Performance:MAC addresses: 160KARP table 128KIPv4 routes: 128KIPv6 hosts: 64KIPv6 routes: 64KMulticast routes: 8KLink aggregation: 16 links per group, 128 groupsLayer 2 VLANs: 4KMSTP: 64 instancesVRF-Lite: 511 instancesLAG load balancing: Based on layer 2, IPv4 or IPv6headers Latency: Sub 3usQOS data queues: 8QOS control queues: 12Ingress ACL: 16KEgress ACL: 1KQoS: Default 3K entries scalable to 12KIEEE compliance with Dell Networking OS9802.1AB LLDP802.1D Bridging, STP802.1p L2 Prioritization802.1Q VLAN T agging, Double VLAN T agging,GVRP802.1Qbb PFC802.1Qaz ETS802.1s MSTP802.1w RSTP802.1X Network Access Control802.3ab Gigabit Ethernet (1000BASE-T)802.3ac Frame Extensions for VLAN T agging802.3ad Link Aggregation with LACP802.3ae 10 Gigabit Ethernet (10GBase-X) withQSA802.3ba 40 Gigabit Ethernet (40GBase-SR4,40GBase-CR4, 40GBase-LR4) on opticalports802.3u Fast Ethernet (100Base-TX)802.3x Flow Control802.3z Gigabit Ethernet (1000Base-X) with QSA 802.3az Energy Efficient EthernetANSI/TIA-1057 LLDP-MEDForce10 PVST+Max MTU 9216 bytesRFC and I-D compliance with Dell Networking OS9General Internet protocols768 UDP793 TCP854 T elnet959 FTPGeneral IPv4 protocols791 IPv4792 ICMP826 ARP1027 Proxy ARP1035 DNS (client)1042 Ethernet Transmission1305 NTPv31519 CIDR1542 BOOTP (relay)1812 Requirements for IPv4 Routers1918 Address Allocation for Private Internets 2474 Diffserv Field in IPv4 and Ipv6 Headers 2596 Assured Forwarding PHB Group3164 BSD Syslog3195 Reliable Delivery for Syslog3246 Expedited Assured Forwarding4364 VRF-lite (IPv4 VRF with OSPF, BGP,IS-IS and V4 multicast)5798 VRRPGeneral IPv6 protocols1981 Path MTU Discovery Features2460 Internet Protocol, Version 6 (IPv6)Specification2464 Transmission of IPv6 Packets overEthernet Networks2711 IPv6 Router Alert Option4007 IPv6 Scoped Address 
Architecture4213 Basic Transition Mechanisms for IPv6Hosts and Routers4291 IPv6 Addressing Architecture4443 ICMP for IPv64861 Neighbor Discovery for IPv64862 IPv6 Stateless Address Autoconfiguration 5095 Deprecation of T ype 0 Routing Headers in IPv6IPv6 Management support (telnet, FTP, TACACS, RADIUS, SSH, NTP)VRF-Lite (IPv6 VRF with OSPFv3, BGPv6, IS-IS) RIP1058 RIPv1 2453 RIPv2OSPF (v2/v3)1587 NSSA 4552 Authentication/2154 OSPF Digital Signatures Confidentiality for 2328 OSPFv2 OSPFv32370 Opaque LSA 5340 OSPF for IPv6IS-IS1142 Base IS-IS Protocol1195 IPv4 Routing5301 Dynamic hostname exchangemechanism for IS-IS5302 Domain-wide prefix distribution withtwo-level IS-IS5303 3-way handshake for IS-IS pt-to-ptadjacencies5304 IS-IS MD5 Authentication5306 Restart signaling for IS-IS5308 IS-IS for IPv65309 IS-IS point to point operation over LANdraft-isis-igp-p2p-over-lan-06draft-kaplan-isis-ext-eth-02BGP1997 Communities2385 MD52545 BGP-4 Multiprotocol Extensions for IPv6Inter-Domain Routing2439 Route Flap Damping2796 Route Reflection2842 Capabilities2858 Multiprotocol Extensions2918 Route Refresh3065 Confederations4360 Extended Communities4893 4-byte ASN5396 4-byte ASN representationsdraft-ietf-idr-bgp4-20 BGPv4draft-michaelson-4byte-as-representation-054-byte ASN Representation (partial)draft-ietf-idr-add-paths-04.txt ADD PATHMulticast1112 IGMPv12236 IGMPv23376 IGMPv3MSDP, PIM-SM, PIM-SSMSecurity2404 The Use of HMACSHA- 1-96 within ESPand AH2865 RADIUS3162 Radius and IPv63579 Radius support for EAP3580 802.1X with RADIUS3768 EAP3826 AES Cipher Algorithm in the SNMP UserBase Security Model4250, 4251, 4252, 4253, 4254 SSHv24301 Security Architecture for IPSec4302 IPSec Authentication Header4303 ESP Protocol4807 IPsecv Security Policy DB MIBdraft-ietf-pim-sm-v2-new-05 PIM-SMwData center bridging802.1Qbb Priority-Based Flow Control802.1Qaz Enhanced Transmission Selection (ETS)Data Center Bridging eXchange (DCBx)DCBx Application TLV (iSCSI, FCoE)Network management1155 SMIv11157 SNMPv11212 Concise MIB Definitions1215 SNMP Traps1493 Bridges MIB1850 OSPFv2 MIB1901 Community-Based SNMPv22011 IP MIB2096 IP Forwarding T able MIB2578 SMIv22579 T extual Conventions for SMIv22580 Conformance Statements for SMIv22618 RADIUS Authentication MIB2665 Ethernet-Like Interfaces MIB2674 Extended Bridge MIB2787 VRRP MIB2819 RMON MIB (groups 1, 2, 3, 9)2863 Interfaces MIB3273 RMON High Capacity MIB3410 SNMPv33411 SNMPv3 Management Framework3412 Message Processing and Dispatching forthe Simple Network ManagementProtocol (SNMP)3413 SNMP Applications3414 User-based Security Model (USM) forSNMPv33415 VACM for SNMP3416 SNMPv23417 Transport mappings for SNMP3418 SNMP MIB3434 RMON High Capacity Alarm MIB3584 Coexistance between SNMP v1, v2 andv34022 IP MIB4087 IP Tunnel MIB4113 UDP MIB4133 Entity MIB4292 MIB for IP4293 MIB for IPv6 T extual Conventions4502 RMONv2 (groups 1,2,3,9)5060 PIM MIBANSI/TIA-1057 LLDP-MED MIBDell_ITA.Rev_1_1 MIBdraft-grant-tacacs-02 TACACS+draft-ietf-idr-bgp4-mib-06 BGP MIBv1IEEE 802.1AB LLDP MIBIEEE 802.1AB LLDP DOT1 MIBIEEE 802.1AB LLDP DOT3 MIB sFlowv5 sFlowv5 MIB (version 1.3)DELL-NETWORKING-SMIDELL-NETWORKING-TCDELL-NETWORKING-CHASSIS-MIBDELL-NETWORKING-PRODUCTS-MIBDELL-NETWORKING-SYSTEM-COMPONENT-MIBDELL-NETWORKING-TRAP-EVENT-MIBDELL-NETWORKING-COPY-CONFIG-MIBDELL-NETWORKING-IF-EXTENSION-MIBDELL-NETWORKING-FIB-MIBIT Lifecycle Services for NetworkingExperts, insights and easeOur highly trained experts, withinnovative tools and proven processes, help you transform your IT investments into 
strategic advantages.Plan & Design Let us analyze yourmultivendor environment and deliver a comprehensive report and action plan to build upon the existing network and improve performance.Deploy & IntegrateGet new wired or wireless network technology installed and configured with ProDeploy. Reduce costs, save time, and get up and running cateEnsure your staff builds the right skills for long-termsuccess. Get certified on Dell EMC Networking technology and learn how to increase performance and optimize infrastructure.Manage & SupportGain access to technical experts and quickly resolve multivendor networking challenges with ProSupport. Spend less time resolving network issues and more time innovating.OptimizeMaximize performance for dynamic IT environments with Dell EMC Optimize. Benefit from in-depth predictive analysis, remote monitoring and a dedicated systems analyst for your network.RetireWe can help you resell or retire excess hardware while meeting local regulatory guidelines and acting in an environmentally responsible way.Learn more at/lifecycleservicesLearn more at /NetworkingDELL-NETWORKING-FPSTATS-MIBDELL-NETWORKING-LINK-AGGREGATION-MIB DELL-NETWORKING-MSTP-MIB DELL-NETWORKING-BGP4-V2-MIB DELL-NETWORKING-ISIS-MIBDELL-NETWORKING-FIPSNOOPING-MIBDELL-NETWORKING-VIRTUAL-LINK-TRUNK-MIB DELL-NETWORKING-DCB-MIBDELL-NETWORKING-OPENFLOW-MIB DELL-NETWORKING-BMP-MIBDELL-NETWORKING-BPSTATS-MIBRegulatory compliance SafetyCUS UL 60950-1, Second Edition CSA 60950-1-03, Second Edition EN 60950-1, Second EditionIEC 60950-1, Second Edition Including All National Deviations and Group Differences EN 60825-1, 1st EditionEN 60825-1 Safety of Laser Products Part 1:Equipment Classification Requirements and User’s GuideEN 60825-2 Safety of Laser Products Part 2: Safety of Optical Fibre Communication Systems FDA Regulation 21 CFR 1040.10 and 1040.11EmissionsInternational: CISPR 22, Class AAustralia/New Zealand: AS/NZS CISPR 22: 2009, Class ACanada: ICES-003:2016 Issue 6, Class AEurope: EN 55022: 2010+AC:2011 / CISPR 22: 2008, Class AJapan: VCCI V-3/2014.04, Class A & V4/2012.04USA: FCC CFR 47 Part 15, Subpart B:2009, Class A RoHSAll S-Series components are EU RoHS compliant.CertificationsJapan: VCCI V3/2009 Class AUSA: FCC CFR 47 Part 15, Subpart B:2009, Class A Available with US Trade Agreements Act (TAA) complianceUSGv6 Host and Router Certified on Dell Networking OS 9.5 and greater IPv6 Ready for both Host and RouterUCR DoD APL (core and distribution ALSAN switch ImmunityEN 300 386 V1.6.1 (2012-09) EMC for Network Equipment\EN 55022, Class AEN 55024: 2010 / CISPR 24: 2010EN 61000-3-2: Harmonic Current Emissions EN 61000-3-3: Voltage Fluctuations and Flicker EN 61000-4-2: ESDEN 61000-4-3: Radiated Immunity EN 61000-4-4: EFT EN 61000-4-5: SurgeEN 61000-4-6: Low Frequency Conducted Immunity。

Application of Particle Swarm Genetic Hybrid Optimization Algorithm in Adaptive Resource Allocation in OFDMA

Journal of Changchun University of Science and Technology (Natural Science Edition), Vol. 44, No. 3, June 2021. Received 2020-04-01; supported by a Jilin Provincial Department of Science and Technology project (20200403151SF). First author: RONG Guo-cheng (born 1997), male, master's student; corresponding author: WANG Hao (born 1980), male, Ph.D., associate professor.

Application of Particle Swarm Genetic Hybrid Optimization Algorithm in Adaptive Resource Allocation in OFDMA

RONG Guo-cheng 1, WANG Hao 1, SHA Sha 2 (1. School of Electronics and Information Engineering, Changchun University of Science and Technology, Changchun 130022; 2. School of Electronic Engineering, Changchun College of Electronic Technology, Changchun 130114)

Abstract: In Orthogonal Frequency Division Multiple Access (OFDMA) wireless communication systems, unbalanced resource allocation prevents user quality-of-service requirements from being met. This paper transforms the OFDMA resource allocation problem into a function optimization problem and studies subcarrier allocation and power allocation separately. Building on the traditional particle swarm optimization and genetic algorithms, a hybrid adaptive algorithm is applied to find the best solution of the objective function, with the aim of improving effective resource utilization and maximizing system throughput while guaranteeing proportional fairness among users.

Simulation analysis shows that, compared with other algorithms, the hybrid optimization algorithm delivers an effective improvement in system fairness and throughput.

Key words: wireless communication; resource allocation; throughput; particle swarm optimization; optimization algorithm. CLC number: TN925; document code: A; article ID: 1672-9870(2021)03-0096-06.

In next-generation wireless access networks, quality-of-service (QoS) provisioning is more challenging because communication systems must satisfy demanding user applications and test cases as well as a high degree of heterogeneity in services and functions [1].

Atlas Copco PF6 FlexSystem Flexible Production System Manual

With its compact size, it requires fewer hanging cables, improving ergonomics and work range.

Reduce troubleshooting time and increase OEE

With fewer components under stress, this modular plug-and-play controller increases your availability by more than 85%.

Easy system integration

A modular PF6 FlexSystem reduces your engineering efforts, making integration with line equipment easier and more effective.

Improve your safety while eliminating multiple cables, and increase reliability and MTBF without worrying about dirt or wet environments.
A fixtured controller that enables improved flexibility and higher productivity on the line
Innovation is the key to staying competitive
Smart Connected Assembly supports Industry 4.0 becoming a reality. A reality where technology is

Cooperative Task Assignment for Multi-UAV Based on Adaptive Genetic Algorithm

2021,36(1)电子信息对抗技术Electronic Information Warfare Technology㊀㊀中图分类号:V279;TN97㊀㊀㊀㊀㊀㊀文献标志码:A㊀㊀㊀㊀㊀㊀文章编号:1674-2230(2021)01-0059-06收稿日期:2020-02-28;修回日期:2020-03-30作者简介:王树朋(1990 ),男,博士,工程师㊂基于自适应遗传算法的多无人机协同任务分配王树朋,徐㊀旺,刘湘德,邓小龙(电子信息控制重点实验室,成都610036)摘要:提出一种自适应遗传算法,利用基于任务价值㊁飞行航程和任务分配均衡性的适应度函数评估任务分配方案的优劣,在算法运行过程中交叉率和变异率进行实时动态调整,以克服标准遗传算法易陷入局部最优的缺点㊂将提出的自适应遗传算法用于多无人机协同任务分配问题的求解,设置并进行了实验㊂实验结果表明:提出的自适应遗传算法可以较好地解决多无人机协同任务分配问题,得到较高的作战效能,证明了该方法的有效性㊂关键词:遗传算法;适应度函数;无人机;任务分配;作战效能DOI :10.3969/j.issn.1674-2230.2021.01.013Cooperative Task Assignment for Multi -UAVBased on Adaptive Genetic AlgorithmWANG Shupeng,XU Wang,LIU Xiangde,DENG Xiaolong(Science and Technology on Electronic Information Control Laboratory,Chengdu 610036,China)Abstract :An improved adaptive genetic algorithm is proposed,and a fitness function based ontask value,flying distance and the balance of task allocation scheme is used to evaluate the qualityof task allocation schemes.In the proposed algorithm,the crossover probability and mutation prob-ability can adjust automatically to avoid effectively the phenomenon of the standard genetic algo-rithm falling into the local optimum.The proposed improved genetic algorithm is used to solve the problem of cooperative task assignment for multiple Unmanned Aerial Vehicles (UAVs).The ex-periments are conducted and the experimental results show that the proposed adaptive genetic al-gorithm can significantly solve the problem and obtain an excellent combat effectiveness.The ef-fectiveness of the proposed method is demonstrated with the experimental results.Key words :genetic algorithm;fitness function;UAV;task assignment;combat effectiveness1㊀引言无人机是一种依靠程序自主操纵或受无线遥控的飞行器[1],在军事科技方面得到了极大重视,是新颖军事技术和新型武器平台的杰出代表㊂随着战场环境日益复杂,对于无人机的性能要求越来越高,单一无人机在复杂的战场环境中执行任务具有诸多不足,通常多个无人机进行协同作战或者执行任务㊂通常地,多无人机协同任务分配是:在无人机种类和数量已知的情况下,基于一定的环境信息和任务需求,为多个无人机分配一个或者一组有序的任务,要求在完成任务最大化的同时,多个无人机任务执行的整体效能最大,且所付出的代价最小㊂从理论上讲,多无人机协同任务分配属于NP -hard 的组合优化问题,通常需95王树朋,徐㊀旺,刘湘德,邓小龙基于自适应遗传算法的多无人机协同任务分配投稿邮箱:dzxxdkjs@要借助于算法进行求解㊂目前,国内外研究人员已经对于多无人机协同任务分配问题进行了大量的研究,并提出很多用于解决该问题的算法,主要有:群算法㊁自由市场机制算法和进化算法等㊂群算法是模拟自然界生物群体的行为的算法,其中蚁群算法[2-3]㊁粒子群算法[4]以及鱼群算法[5]是最为典型的群算法㊂研究人员发现群算法可以用于求解多无人机协同任务分配问题,但是该算法极易得到局部最优而非全局最优㊂自由市场机制算法[6]是利用明确的规则引导买卖双方进行公开竞价,在短时间内将资源合理化,得到问题的最优解和较优解㊂进化算法适合求解大规模问题,其中遗传算法[7-8]是最著名的进化算法㊂遗传算法在运行过程中会出现不容易收敛或陷入局部最优的问题,许多研究人员针对该问题对遗传算法进行了改进㊂本文提出一种改进的自适应遗传算法,在算法运行过程中适应度值㊁交叉率和变异率可以进行实时动态调整,以克服遗传算法易陷入局部最优的缺点,并利用该算法解决多无人机协同任务分配问题,以求在满足一定的约束条件下,无人机执行任务的整体收益最大,同时付出的代价最小,得到较大的效费比㊂2㊀问题描述㊀㊀多无人机协同任务分配模型是通过设置并满足一定约束条件的情况下,包括无人机的自身能力限制和环境以及任务的要求等,估算各个无人机执行任务获得的收益以及付出的代价,并利用评价指标进行评价,以求得到最大的收益损耗比和最优作战效能㊂通常情况下,多无人机协同任务分配需满足以下约束:1)每个任务只能被分配一次;2)无人机可以携带燃料限制造成的最大航程约束;3)无人机载荷限制,无人机要执行某项任务必须装载相应的载荷㊂另外,多无人机协同任务分配需要遵循以下原则:1)收益最高:每项任务都拥有它的价值,任务分配方案应该得到最大整体收益;2)航程最小:应该尽快完成任务,尽可能减小飞行航程,这样易满足无人机的航程限制,同时降低无人机面临的威胁;3)各个无人机的任务负载尽可能均衡,通常以任务个数或者飞行航程作为标准判定; 4)优先执行价值高的任务㊂根据以上原则,提出多无人机协同任务分配的评价指标,包括:1)任务价值指标:用于评估任务分配方案可以得到的整体收益;2)任务分配均衡性指标:用于评估无人机的任务负载是否均衡;3)飞行航程指标:用于评估无人机的飞行航程㊂3㊀遗传算法㊀㊀要将遗传算法用于多无人机协同任务分配问题的求解,可以将任务分配方案当作种群中的个体,确定合适的染色体编码方法,利用按照一定结构组成的染色体表示任务分配方案㊂然后,通过选择㊁交叉和变异等遗传操作进行不断进化,直到满足约束条件㊂通常来说,遗传算法可以表示为GA=(C,E, P0,M,F,G,Y,T),其中C㊁E㊁P0和M分别表示染色体编码方法㊁适应度函数㊁初始种群和种群大小,在本文的应用中,P0和M分别表示初始的任务分配方案集合以及任务分配方案的个数;F㊁G 和Y分别表示选择算子㊁交叉算子和变异算子;T 表示终止的条件和规则㊂因此,利用遗传算法解决多无人机协同任务分配问题的主要工作是确定以上8个参数㊂3.1㊀编码方法利用由一定结构组成的染色体表示任务分配方案,将一个任务分配方案转换为一条染色体的过程可以分为2个步骤:第一步是根据各个无人机需执行的任务确定各个无人机对应的染色体;第二步是将这些小的染色体结合,形成整个任务分配方案对应的完整染色体㊂假设无人机和任务的个数分别为N u和N t,其中第i个无人机U i的06电子信息对抗技术㊃第36卷2021年1月第1期王树朋,徐㊀旺,刘湘德,邓小龙基于自适应遗传算法的多无人机协同任务分配任务共有k个,分别是T i1㊁T i2㊁ ㊁T ik,则该无人机对应的任务染色体为[T i1T i2 T ik]㊂在任务分配时,可能出现N 
t个任务全部分配给一个无人机的情况,另外为增加随机性和扩展性,提高遗传算法的全局搜索能力,随机将N t-k个0插入到以上的任务染色体中,产生一条全新的长度为N t的染色体㊂最终,一个任务分配方案可以转换为一条长度为N u∗N t的染色体㊂3.2㊀适应度函数在本文的应用中,适应度函数E是用于判断任务分配方案的质量,根据上文提出的多无人机协同任务分配问题的原则和评价指标可知,主要利用任务价值指标㊁任务分配均衡性指标以及飞行航程指标等三个指标判定任务分配方案的质量㊂假设有N u个无人机,F i表示第i个无人机U i的飞行航程,整个任务的总飞行航程F t可以表示为:F t=ðN u i=1F i(1)无人机的平均航程为:F=F t Nu(2)无人机飞行航程的方差D可以表示为:D=ðN u i=1F i-F-()2N u(3)为充分考虑任务价值㊁飞行航程以及各个无人机任务的均衡性,将任务分配方案的适应度函数定义为:E=V ta∗F t+b∗D(4)其中:V t为任务的总价值,F t为总飞行航程,D为各个无人机飞行航程的方差,a和b分别表示飞行航程以及飞行航程均衡性的权重㊂另外,任务分配方案的收益损耗比GL可以表示为:GL=V tF t(5)另外,在遗传算法运行的不同阶段,需要对任务分配方案的适应度进行适当地扩大或者缩小,新的适应度函数E可以表示为:Eᶄ=1-e-αEα=m tE max-E avg+1,m=1+lg T()ìîíïïïï(6)其中:E为利用公式(4)计算得到的原适应度值, E avg为适应度值的平均值,E max为适应度最大值,t 为算法的运行次数,T为遗传算法的终止条件㊂在遗传算法运行初期,E max-E avg较大,而t较小,因此α较小,可以提高低质量任务分配方案的选择概率,同时降低高质量任务分配方案的选择概率;随着算法的运行,E max-E avg将逐渐减小,t 将逐渐增大,因此α会逐渐增大,可以避免算法陷入随机选择和局部最优㊂3.3㊀种群大小㊁初始种群和终止条件按照通常做法,将种群大小M的取值范围设定为20~100㊂首先,随机产生2∗M个符合要求的任务分配方案,利用公式(4)计算各个任务分配方案的适应度值㊂然后,从中选取出适应度值较高的M 个任务分配方案组成初始种群P0,即初始任务分配方案集合㊂终止条件T设定为:在规定的迭代次数内有一个任务分配方案的适应度值满足条件,则停止进化;否则,一直运行到规定的迭代次数㊂3.4㊀选择算子首先,采用精英保留策略将当前适应度值最大的一个任务分配方案直接保留到下一代,提高遗传算法的全局收敛能力㊂随后,利用最知名的轮盘赌选择法选择出剩余的任务分配方案㊂3.5㊀交叉算子和变异算子在算法运行过程中需随时动态调整p c和p m,动态调整的原则如下:1)适当降低适应度值比平均适应度值高的任务分配方案的p c和p m,以保护优秀的高质量任务分配方案,加快算法的收敛速度;2)适当增大适应度值比平均适应度值低的任务分配方案的p c和p m,以免算法陷入局部最优㊂另外,任务分配方案的集中度β也是决定p c 和p m的重要因素,β可以表示为:16王树朋,徐㊀旺,刘湘德,邓小龙基于自适应遗传算法的多无人机协同任务分配投稿邮箱:dzxxdkjs@β=E avgE max(7)其中:E avg 表示平均适应度值;E max 表示最大适应度值㊂显然,β越大,任务分配方案越集中,遗传算法越容易陷入局部最优㊂因此,随着β增大,p c 和p m 应该随之增大㊂基于以上原则,定义p c 和p m 如下:p c =0.8E avg -Eᵡ()+0.6Eᵡ-E min ()E avg -E min +0.2㊃βEᵡ<E avg 0.6E max -Eᵡ()+0.4Eᵡ-E avg ()E max -E avg +0.2㊃βEᵡȡE avgìîíïïïïïp m =0.08E avg -E‴()+0.05E‴-E min ()E avg -E min +0.02㊃βE‴<E avg and β<0.80.05E max -E‴()+0.0001E‴-E avg ()E max -E avg+0.02㊃βE‴ȡE avg and β<0.80.5βȡ0.8ìîíïïïïïï(8)其中:E max 为最大适应度值,E min 为最小适应度值,E avg 为平均适应度值,Eᵡ为进行交叉操作的两个任务分配方案中的较大适应度值,E‴为进行变异操作的任务分配方案的适应度值,β为任务分配方案的集中度,可利用公式(7)计算得到㊂4㊀实验结果4.1㊀实验设置4架无人机从指定的起飞机场起飞,飞至5个任务目标点执行10项任务,最终降落到指定的降落机场㊂其中,如表1所示,无人机的编号分别为UAV 1㊁UAV 2㊁UAV 3和UAV 4㊂另外,起飞机场㊁降落机场㊁目标如图6所示㊂任务的编号分别为任务1至任务10(简称为T 1㊁T 2㊁ ㊁T 10),每项任务均为到某一个目标点执行侦察㊁攻击㊁事后评估中的某一项,任务设置如表2所示㊂表1㊀无人机信息编号最大航程装载载荷UAV 120侦察㊁攻击UAV 120侦察UAV 125攻击㊁评估UAV 130侦察㊁评估图1㊀任务目标位置示意图表2㊀任务设置任务编号目标编号任务类型任务价值T 11侦察1T 21攻击2T 32攻击3T 42评估3T 53侦察4T 63评估6T 74侦察2T 84攻击3T 94评估5T 105评估14.2㊀第一组实验首先,随机地进行任务分配,得到一个满足多无人机协同任务分配的约束条件的任务分配方案如下:㊃UAV 1:T 2ңT 5㊃UAV 2:T 1ңT 7㊃UAV 3:T 3ңT 6ңT 8㊃UAV 4:T 4ңT 9ңT 10计算可知,4个无人机的飞行航程分别是14.0674㊁12.6023㊁20.1854和22.1873,飞行总航程为69.0423,执行任务的总价值为30,最终的收益损耗比约为0.43㊂另外,各个飞行器飞行航程的方差约为16.18,UAV 1和UAV 2的飞行航程相对较短,而UAV 3和UAV 4的飞行航程相对较长,各个无人机之间的均衡性存在明显不足㊂为提高收益损耗比,分别利用标准遗传算法和本文提出的自适应遗传算法进行优化,两个算法的参数设置如表3所示㊂26电子信息对抗技术·第36卷2021年1月第1期王树朋,徐㊀旺,刘湘德,邓小龙基于自适应遗传算法的多无人机协同任务分配表3㊀遗传算法参数设置参数名称标准遗传算法自适应遗传算法E 公式(4)公式(6)M 2020选择方法精英策略轮盘赌选择法精英策略轮盘赌选择法P c 0.8公式(8)交叉方法单点交叉单点交叉P m 0.2公式(8)T500500最终,利用标准遗传算法得到任务分配方案如下:㊃UAV 1:T 3ңT 8㊃UAV 2:T 1ңT 7㊃UAV 3:T 9㊃UAV 4:T 6ңT 5计算可得,4个无人机的飞行航程分别为12.78㊁12.6023㊁12.434和12.9443,总飞行航程为50.7605,总任务价值为24,计算可知收益损耗比约为0.47,相对于随机任务分配提高约9.3%㊂另外,各个飞行器飞行航程的方差约为0.04,无人机飞行航程比较均衡,未出现飞行航程过长或过短的情况㊂在算法运行过程中,最佳适应度曲线如图2所示,在遗传算法约迭代到第160次时陷入局部最优,全局搜索能力不足㊂图2㊀标准遗传算法的最佳适应度曲线图1为进一步提高算法的效率,利用本文提出的改进自适应遗传算法解决多无人机协同任务分配问题㊂最终,利用自适应遗传算法得到的任务分配方案如下:㊃UAV 1:T 2ңT 3ңT 8ңT 7㊃UAV 2:T 5㊃UAV 3:T 6㊃UAV 4:T 1ңT 4ңT 9计算可知,4个无人机的飞行航程分别为12.8191㊁12.9443㊁12.9443和12.8191,总飞行航程为51.5268,总任务价值为29,收益耗比约为0.56,相对于随机任务分配提高约30.2%,相对于基于标准遗传算法的任务分配方案提高约19.1%㊂另外,各个飞行器飞行航程的方差约为0.004,无人机飞行航程的均衡性相对于基于标准遗传算法的任务分配方案有了进一步的提高㊂在算法运行过程中,最佳适应度值曲线如图3所示,可以有效避免遗传算法陷入局部最优或者随机选择㊂图3㊀自适应遗传算法的最佳适应度曲线图14.3㊀第二组实验在第一组实验中,因任务10(简称为T 10)的价值较低,在最终的任务分配方案中极少被分配㊂在第二组实验中,将T 
10的价值由1调整为6,其他设置项不变㊂首先,随机进行任务分配,最终的任务分配方案和第一组实验相同㊂随后,利用标准遗传算法进行多无人机协同任务分配,最终的任务分配方案如下:㊃UAV 1:T 2ңT 3ңT 7ңT 8㊃UAV 2:T 5㊃UAV 3:T 6ңT 10㊃UAV 4:T 9基于此任务分配方案,4个无人机的飞行航程分别为12.8191㊁12.9443㊁13.6883和12.434,总飞行航程为51.8857,总任务价值为31,因此计36王树朋,徐㊀旺,刘湘德,邓小龙基于自适应遗传算法的多无人机协同任务分配投稿邮箱:dzxxdkjs@算可得收益损耗比约为0.6,相对于随机任务分配提高约17.6%㊂另外,各个飞行器飞行航程的方差约为0.21,各个无人机的飞行航程的均衡性一般,相对于随机任务分配有一定的提高㊂在算法运行过程中,最佳适应度值曲线如图4所示,在算法迭代运行约90次时陷入较长时间的局部最优,直到迭代次数为340次时,然后再次陷入局部最优㊂图4㊀标准遗传算法的最佳适应度曲线图2最后,将本文提出的自适应遗传算法用于多无人机协同任务分配问题的求解,得到最终的任务分配方案如下:㊃UAV 1:T 2ңT 3ңT 8ңT 7㊃UAV 2:T 5㊃UAV 3:T 6ңT 10㊃UAV 4:T 1ңT 4ңT 9基于此任务分配方案可得,4个无人机的航程分别是12.8191㊁12.9443㊁13.6883以及12.8191,总飞行航程为52.2708,总任务价值为35,计算可得效益损耗比约为0.67,相对于利用标准遗传算法得到的任务分配方案有了进一步提高㊂另外,各个无人机飞行航程的方差约为0.13,飞行航程的均衡性较好㊂在算法运行过程中,最佳适应度值曲线如图5所示,适应度值一直在实时动态变化,可以有效避免遗传算法陷入局部最优或者随机选择㊂由实验结果可得,当任务10的任务价值从1调整为6以后,不再出现该任务没有无人机执行的情况,这说明利用遗传算法进行多无人机协同任务分配可以根据任务的价值以及代价进行实时动态调整,符合 优先执行价值高的任务 的原则㊂图5㊀自适应遗传算法的最佳适应度曲线图25 结束语㊀㊀本文提出了一种基于自适应遗传算法的多无人机协同任务分配方法,整个遗传过程利用自适应的适应度函数评估任务分配结果的优劣,交叉率和变异率在算法运行过程中可以实时动态调整㊂实验结果表明,和随机进行任务分配相比,本文提出的方法在满足一定的原则和约束条件下,可以得到更高的收益损耗比,并且无人机飞行航程的均衡性更好㊂另外,和标准遗传算法相比,本文提出的改进遗传算法可以有效地扩展搜索空间,具有较高的全局搜索能力,不易陷入局部最优㊂参考文献:[1]㊀江更祥.浅谈无人机[J].制造业自动化,2011,33(8):110-112.[2]㊀楚瑞.基于蚁群算法的无人机航路规划[D].西安:西北工业大学,2006.[3]㊀杨剑峰.蚁群算法及其应用研究[D].杭州:浙江大学,2007.[4]㊀刘建华.粒子群算法的基本理论及其改进研究[D].长沙:中南大学,2009.[5]㊀李晓磊.一种新型的智能优化方法-人工鱼群算法[D].杭州:浙江大学,2003.[6]㊀AUSUBEL L M,MILGROM P R.Ascending AuctionsWith Package Bidding[J].Frontiers of Theoretical E-conomics,2002,1(1):1-42.[7]㊀刘昊旸.遗传算法研究及遗传算法工具箱开发[D].天津:天津大学,2005.[8]㊀牟健慧.基于混合遗传算法的车间逆调度方法研究[D].武汉:华中科技大学,2015.46。
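
The adaptive mechanism described in the paper above (a fitness function built from task value, total flight distance, and load balance, with crossover and mutation rates adjusted from population statistics and the concentration ratio) can be sketched as follows. This is a simplified illustration with hypothetical coefficients, not the paper's exact piecewise formulas:

```python
def fitness(total_value, flight_lengths, a=1.0, b=1.0):
    """E = V_t / (a*F_t + b*D): reward task value, penalise distance and imbalance."""
    n = len(flight_lengths)
    F_t = sum(flight_lengths)                       # total flight distance
    mean = F_t / n
    D = sum((f - mean) ** 2 for f in flight_lengths) / n   # variance across UAVs
    return total_value / (a * F_t + b * D)

def adaptive_rates(E_ind, E_avg, E_max, E_min):
    """Lower p_c/p_m for above-average plans, raise them otherwise,
    and raise both as the population concentrates (beta = E_avg / E_max)."""
    beta = E_avg / E_max if E_max > 0 else 1.0
    if E_ind >= E_avg:                              # protect good assignment plans
        span = max(E_max - E_avg, 1e-9)
        p_c = 0.4 + 0.2 * (E_max - E_ind) / span + 0.2 * beta
        p_m = 0.01 + 0.04 * (E_max - E_ind) / span + 0.02 * beta
    else:                                           # diversify poor assignment plans
        span = max(E_avg - E_min, 1e-9)
        p_c = 0.6 + 0.2 * (E_avg - E_ind) / span + 0.2 * beta
        p_m = 0.05 + 0.03 * (E_avg - E_ind) / span + 0.02 * beta
    return min(p_c, 0.95), min(p_m, 0.5)

# Toy usage: one candidate assignment of 10 tasks to 4 UAVs
# (flight lengths roughly as in the paper's first randomly generated plan).
E = fitness(total_value=30, flight_lengths=[14.1, 12.6, 20.2, 22.2])
print(E, adaptive_rates(E, E_avg=0.4, E_max=0.6, E_min=0.2))
```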

Adaptive Electrocardiogram Feature Extraction on Distributed Embedded Systems

Adaptive Electrocardiogram Feature Extraction on Distributed Embedded Systems Roozbeh Jafari,Student Member,IEEE,Hyduke Noshadi,Student Member,IEEE,Soheil Ghiasi,Member,IEEE,and Majid Sarrafzadeh,Fellow,IEEE Abstract—Tiny embedded systems have not been an ideal outfit for high performance computing due to their constrained resources.Limitations in processing power,battery life,communication bandwidth,and memory constrain the applicability of existing complex medical analysis algorithms such as the Electrocardiogram(ECG)analysis.Among various limitations,battery lifetime has been amajor key technological constraint.In this paper,we address the issue of partitioning such a complex algorithm while the energyconsumption due to wireless transmission is minimized.ECG analysis algorithms normally consist of preprocessing,patternrecognition,and classification.Considering the orientation of the ECG leads,we devise a technique to perform preprocessing andpattern recognition locally in small embedded systems attached to the leads.The features detected in the pattern recognition phase are considered for the classification.Ideally,if the features detected for each heartbeat reside in a single processing node,thetransmission will be unnecessary.Otherwise,to perform classification,the features must be gathered on a local node and,thus,the communication is inevitable.We perform such a feature grouping by modeling the problem as a hypergraph and applying partitioning schemes which yield a significant power saving in wireless communications.Furthermore,we utilize dynamic reconfiguration bysoftware module migration.This technique,with respect to partitioning,enhances the overall power saving in such systems.Moreover, it adaptively alters the system configuration in various environments and on different patients.We evaluate the effectiveness of our proposed techniques on MIT/BIH benchmarks and,on average,achieve70percent energy saving.Index Terms—Computational biology,ECG analysis,embedded systems,feature extraction.æ1I NTRODUCTIONT HE electrocardiogram(ECG)is the record of variation of bioelectric potential with respect to time as the human heart beats.Due to its ease of use and noninvasiveness,ECG plays an important role in patient monitoring and diag-nosis.Multichannel electrocardiogram(ECG)data provide cardiologists with essential information to diagnose heart disease in a patient.Our primary objective is to address the feasibility verification of implementing an ambulatory ECG analysis algorithm with real-time diagnosis functions for wearable computers.ECG analysis algorithms have always been very difficult tasks in the realization of computer-aided ECG diagnosis.Implementation of such algorithms becomes even harder for small and mobile embedded systems that should meet the given latency requirements while minimizing overall energy dissipation for the system. 
Distributed embedded systems are successfully deployed in various wearable computers.Distributed architectures have been developed for cooperative detection,scalable data transport,and other capabilities and services.However,the complexity of algorithms running on these systems has introduced a new set of challenges associated with resource constrained devices and their energy concerns.These obstacles may dramatically reduce the effectiveness of embedded distributed algorithms.Thus,a new distributed, embedded,computing attribute,dynamically reconfigur-able,must be developed and provided to such systems.In these systems,reconfiguration capability,in particular,may be of great advantage.This capability can adaptively alter the system configuration to accommodate the objectives and meet the constraints for highly dynamic systems.There have been exciting advances in the development of pervasive computing technologies in the past few years. Computation,storage,and communication are now more or less woven into the fabric of our society with much of the progress being due to the relentless march of Silicon-based electronics technology as predicted by Moore’s Law.The emerging field of flexible electronics,where electronic components such as transistors and wires are built on a thin flexible material,offers a similar opportunity to weave computation,storage,and communication into the fabric of the very clothing that we wear,thereby creating an intelligent fabric(also called electronic textiles or e-textiles) [1].The implications of seamlessly integrating a large number of communicating computation and storage re-sources,mated with sensors and actuators,in close proximity to the human body are quite exciting;for example,one can imagine biomedical applications where biometric and ambient sensors are woven into the garment of a patient to trigger and modulate the delivery of a drug. 
Realizing such novel applications is not just a matter of developing innovative materials for flexible electronics, along with accompanying sensors and actuators;the characteristics of the flexible electronics technology and.R.Jafari,H.Noshadi,and M.Sarafzadeh are with the Computer ScienceDepartment,University of California,Los Angeles,Los Angeles,CA90095.E-mail:{rjafari,hyduke,majid}@..S.Ghiasi is with the Department of Electrical and Computer Engineering,University of California,Davis,Davis,CA95616.E-mail:soheil@.Manuscript received12July2005;revised22Feb.2006;accepted8Mar.2006;published online26June2006.Recommended for acceptance by N.Amato,S.Aluru,and D.Bader.For information on obtaining reprints of this article,please send e-mail to:tpds@,and reference IEEECS Log Number TPDSSI-0332-0705.1045-9219/06/$20.00ß2006IEEE Published by the IEEE Computer Societythe requirements of the applications enabled by it necessi-tate radical innovation in system-level design.Electronic components built of flexible materials have characteristics that are very different from that of silicon and PCB-based electronics.Further,the operating scenarios of these systems involve environmental dynamics,physical cou-pling,resource constraints,infrastructure support,and robustness requirements that are distinct from those faced by traditional systems.This unique combination requires one to go beyond thinking of these systems as traditional electronic systems in a different form factor.Instead, rethinking and a complete overhaul of the system archi-tecture and the design methodology for all layers of these systems is required.2R ELATED W ORKSeveral“wearable”technologies exist to continually moni-tor a patient’s vital signs,utilizing low cost,well-established disposable sensors such as blood oxygen finger clips and electrocardiogram electrodes.The Smart Shirt from Sensa-tex[2]is a wearable health monitoring device that integrates a number of sensory devices onto the Wearable Motherboard from Georgia Tech[3].The Wearable Mother-board is woven into an undershirt in the Smart Shirt design. Their interconnect is a flexible data bus that can support a wide array of sensory devices.These sensors can commu-nicate via the data bus to a monitoring device located at the base of the shirt.The monitoring device is integrated into a single processing unit that also contains a transceiver. Several other technologies have been introduced such as MIThril from MIT[4],e-Textile from Carnegie Mellon University[5],Wearable e-Textile from Virginia Tech[6], and CustoMed and RFab-Vest from UCLA[7],[8].The Lifeguard project being conducted at Stanford University is a physiological monitoring system comprised of physiolo-gical sensors(ECG/Respiration electrodes,Pulse Oximeter, Blood Pressure Monitor,Temperature probe),a wearable device with built-in accelerometers(CPOD),and a base station(Pocket PC).The CPOD acquires and logs the physiological parameters measured by the sensors[9].The Assisted Cognition Project conducted at the University of Washington’s Department of Computer Science explored the use of AI systems to support and enhance the independence and quality of life of Alzheimer’s patients. 
Assisted Cognition systems use ubiquitous computing and artificial intelligence technology to replace some of the memory and problem-solving abilities that have been lost by an Alzheimer’s patient[10].Nevertheless,none of the above projects/systems supports the concept of scalability and adapting complex processing algorithms.3A UTOMATED F EATURE S ET D ETECTIONGiven the goal of classifying objects based on their attributes,the functionality of an automated pattern recognition system can be divided into two basic tasks: The description task generates attributes of an object using feature extraction techniques,and the classification task assigns a group label to the object based on the attributes with a classifier.There are two different approaches for implementing a pattern recognition system:statistical and structural.Each approach utilizes different schemes within the description and classification tasks which incorporates a pattern recognition system.Statistical pattern recognition[11],[12] concludes from statistical decision theory to discriminate among data from different groups based upon quantitative features of the data.The quantitative nature of statistical pattern recognition,however,makes it difficult to discrimi-nate among groups based on the morphological(i.e.,shape-based or structural)subpatterns and their interrelationships embedded within the data.This limitation provided the impetus for development of structural approaches to pattern recognition.Structural pattern recognition[13],[14]relies on syntactic grammars to discriminate among data from different groups based upon the morphological interrelationships (or interconnections)present within the data.Structural pattern recognition systems are effective for image data as well as time-series data.We have investigated an accurate ECG processing algo-rithm based on structural pattern recognition(as depicted in Fig.1)mapped onto our processing units(dot-motes)[15]. The algorithm consists of three stages:preprocessing,pattern recognition,and classification.We perform preprocessing and pattern recognition locally,i.e.,within close proximity to the ECG leads.The preprocessing includes filtering,while the pattern recognition includes heartbeat detection(through the QRS complex detection),segmentation,as well as feature extraction.Once the features are extracted,they will be processed for classification.The filtering is performed by finite impulse response (FIR)filters with cut-off frequencies of5-150Hz for a sampling rate of360samples/sec.The heartbeat detection is implemented with a QRS detector based on the algorithm of Pan and Tompkins[16]with some improvements that employ slope information.The scheme proposed by Laguna et al.[17]is used to extract the fiducial points.All offset and onset points are detected based on the location and convexity of the R point.We detect each point onset by locating the largest isoelectric region before the point.Then, we search for the inflection point followed by largest negative slope for convex R-wave or largest positive slope for concave R-wave.We also detect the point offset by searching for significant up slope following the end of the last down slope for P,T,and S offsets in particular. 
Consequently,features related to heartbeat intervals and ECG morphology are calculated for each heartbeat.The list of features is included in Table1and are based on[18]and [19]with minor additions.In addition,a sample filtered ECG signal which was automatically segmented by our tool is depicted in Fig.2.We extract a total of23features from the ECG signals, and each derives from one of the groups below: RR Interval Features:We extract four features based on RR Intervals.The RR interval is the interval between two successive heartbeat fiducial points,obtained from the maximum of the R-wave.The pre-RR interval is the RR-interval between a detected heartbeat and the previous one. The post-RR interval is the interval between a given heartbeat and next detected one.The average-RR interval is the average of all detected RR intervals,and the local average-RR interval is the average of the10most recent RR-intervals.Heartbeat Interval Features :We extract five features related to heartbeat intervals.QRS duration is the time between QRS offset and QRS onset.T-wave duration is the time between T-wave onset and T-wave offset.The PR,ST,and QT duration are additions to the automated classifica-tion system.ST duration is the time between S-wave offset and T-wave onset.The PR duration is the time between P-wave onset and R.The QT duration is the time between Q-wave onset and T-wave offset.All of these features are obtained by first determining the start and end point of each interval,and then subtracting the end point from the start point.Geometric Points :We calculate the signal DC shift level by taking the average base line of the previous five successive detected heartbeats.The maximal positive and the minimal negative peaks are detected by computing the voltage difference between each sample in the heartbeat and DC shift level.In addition,we extract the number of samples in a 70-100percent range of absolute peak value.Finally,we compute the slope velocity of Q-onset-R as well as R-S segments.ECG Morphology Features :We extract eight features based on ECG morphologies arranged into four groups.Two groups consist of samples from heartbeat segments and two groups consist of samples from fixed intervals.Within each group,one feature consists of samples from the original ECG signal,while the other feature is extracted from the normalized ECG signal.The normalization is done through scaling down the amplitude of samples by standard deviation of the same heartbeat.We extract samples from heartbeat segments in ECG morphology 1and 2(see Fig.3).In morphology 1,10samples between QRS onset and offset are extracted,and in morphol-ogy 2,nine samples between S-wave offset and T-wave offset are obtained.The number of samples collected is also contingent upon the sampling rate and scales with various sampling rates accordingly (the aforementioned numbers are for the original sampling rate of 360samples per second).We extract samples from a fixed interval in ECG morphology 3and 4(see Fig.4).In morphology 3,10samples between R À50ms and R þ100ms are extracted,and in morphology 4,eight samples between between R þ150ms and R þ500ms are acquired.For all ECG morphologies,the elements that fall in between two samples are estimated using linear polariza-tion.We have repeated such feature extraction for three input sampling rates of 360,200,and 100samples per second.Three hundred and sixty samples/second is the original sampling rate for the MIT/BIH [20]benchmarksJAFARIET AL.:ADAPTIVE ELECTROCARDIOGRAM FEATURE EXTRACTION ON 
DISTRIBUTED EMBEDDED SYSTEMS 799Fig.1.ECG analysis schematic.and the sampling rates of 200and 100samples/second was acquired by downsampling the input.Despite our objective is to minimize the communication among processing nodes before the classification phase,this study does not investigate the problem of classification.Therefore,we did not implement a classifier for our platform.However,any classifier suitable for constrained embedded systems may be deployed.4S OFTWARE P ROFILINGTo measure the execution delay of our heartbeat detection and feature extraction program,we used Avrora [21],a microcontroller simulator framework developed at the University of California,Los Angeles.Avrora is a precise and flexible simulator that preserves all timing and behavior of the instrumented program,while allowing800IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS,VOL.17,NO.8,AUGUST 2006TABLE 1Features Categorized byGroupsFig.2.Automatic ECG segmentation performed on filtered signal.user-defined profiling of application information.With Avrora,users can easily profile application-specific infor-mation such as branch frequency,maximum stack size,and memory access by adding custom program monitors.For our experiments,we implemented a program monitor on Avrora that generates the control flow graph (CFG)while measuring the execution frequency and delay of each basic block.Since the CFG of our system is very large,only the major processes are shown in Fig.5.The CFG is dynamically generated based upon our compiled and assembled ECG program,while Avrora simulates the program execution.Unlike static analysis,parts of a program that are not executed during the simulation will not be accounted for.Also,the generated graph will accurately reflect compiler optimizations.Hardware inter-rupts,which occur intermittently during execution,are accounted for as well.Delay analysis for each function is performed during CFG generation for practicality since basic block information may be too detailed.Function delay is measured as the duration when execution enters a function to when execution exits.Calls to other functions are accounted for,while interrupts are not.Since execution delay may be inconsistent due to functions containing different execution paths,the average function delay is gathered from each execution instance.However,for our purposes,the functions that extract features from heartbeat signals all consist of a single execution path.Therefore,we lost no precision in our analysis.The delays of feature detection modules for sampling rate of 360samples/second are illustrated in Fig.6.5T ARGET A RCHITECTURE M ODELNetworked sensor nodes containing constrained,often battery-powered,embedded computers can densely sample phenomena that were previously difficult or costly to observe.Sensor nodes can be placed anywhere on a patients’body.Due to the mobility of such systems,wireless sensor networks are expected to be both autono-mous and long-lived,surviving environmental hardships while conserving energy as much as possible.It is well-known that the amount of energy consumed for a single wireless communication of one bit can be many orders of magnitude greater than the energy required for a single local computation [22].Thus,we focus on the energy used for wireless communication.In our model,since all nodes are placed within close proximity of each other,weassume they communicate directly and multihop commu-nication is not required.Therefore,the total energy consumed for in-network processing is:"ðn Þ¼b ðn ÞÂe ðn 
Þ;ð1Þwhere b ðn Þis the number of packets transmitted and e ðn Þis the average amount of energy required to transmit one packet.In our design,we consider the Collision Free Model (CFM),which simplifies the programming by abstracting out all of the details of low level channel contention and packet collision from the algorithm designers.By abstract-ing reliable communication as an atomic operation,pro-gramming based on CFM bears a resemblance to existing algorithm design in parallel and distributed computation.CFM does not really capture the impact of packet collision that distinguishes wireless communication from wired communication,which makes performance analysis under CFM not very accurate.However,for the sake of simplicity,we consider CFM in our design.6D YNAMIC R ECONFIGURATIONSensor nodes are composed of embedded systems as well as general-purpose software,introducing a tension between resource and energy constraints and the layers of indirec-tion required to support true general-purpose operating systems.TinyOS [23],the state-of-the-art sensor operating system,tends to prioritize embedded system constraints over general-purpose OS functionality.TinyOS consists of a collection of software components written in the NesC language [24],ranging from low-level parts of the network stack to application-level routing logic.Our target operating system,SOS,is a new operating system for mote-class sensor nodes that takes a more dynamic point on the design spectrum [25].SOS consists of dynamically-loaded modules and a common kernel,which implements messaging,dynamic memory,and module loading and unloading,among other services.Dynamic reconfigurability is one of our primary assumptions.In the domain of embedded computing,reconfigurability is the ability to modify the software on individual nodes of a network after the network has been deployed and initialized.This provides the ability to incrementally update the sensor network after it is deployed,add new software modules,and remove unused software modules when they are no longer needed.JAFARI ETAL.:ADAPTIVE ELECTROCARDIOGRAM FEATURE EXTRACTION ON DISTRIBUTED EMBEDDED SYSTEMS 801Fig.3.The sampling intervals of ECG morphologies 1and 2.Morphology 1consists of samples extracted from QRS onset and offset.Morphology 2consists of samples from S-wave offset and T-wave offset.Fig.4.The sampling intervals of ECG morphologies 3and 4.Morphology 3consists of samples between 50ms before and after the fiducial point (FP).Morphology 4consists of samples between 150ms and 500ms after the fiducial point.The growing tensions between large,hard to update networks and complex applications with incremental patches has made reconfigurability an issue that can no longer be ignored.SOS supports a mechanism that enables over the air reprogramming of the sensor ing this method,software modules may be modified,added,or removed.7F EATURE S ET P ARTITIONINGA hypergraph is a generalization of a graph,where the set of edges is replaced by a set of hyperedges.A hyperedge extends the notion of an edge by allowing more than two vertices to be connected by a hyperedge.Formally,a hypergraph H¼ðV;E hÞis defined as a set of vertices V and a set of hyperedges E h,where each hyperedge is a subset of the vertex set V[26],and the size a hyperedge is the cardinality of this subset.Let w i denote the weight of vertex v i2V.A K-way vertex partitionżf V1;V2;...;V k g of H is said to be balanced with an overall load imbalance tolerance (1if each V i satisfies the following equation: W k W 
avgð1þ Þ;for k¼1;2;...;K;ð2ÞwhereW k¼Xv i2V kw ið3ÞW avg¼Xv i2Vw i!=K:ð4ÞIn a partition of H,a hyperedge that has at least one vertex in a partition is said to connect that partition. Connectivity setÃj of a hyperedge e j is defined as the set of802IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS,VOL.17,NO.8,AUGUST2006Fig.5.Graph generated from profiling analysis containing only major blocks required for featureextraction.Fig. 6.Delays corresponding to ECG feature detection modules extracted in profiling phase.partitions connected by e j .Connectivity j ¼j Ãj j of a hyperedge e j denotes the number of partitions connected by e j .A hyperedge h j is said to be cut (external)if it connects more than one partition (i.e., j >1),and uncut (internal)otherwise (i.e., j ¼1).Therefore,the definition of cut-size is as follows:cutsize ðÅÞ¼Xe j 2E hð j À1Þ:ð5ÞHence,the cutsize is equal to the number of cut nets.Thehypergraph partitioning is defined as dividing it into two or more parts such that the cutsize is minimized,while a given balance criterion among the partition weights is achieved.The hypergraph partitioning problem is known to be NP-hard [27].During the software partitioning,it is quite important to be able to divide the system specification into clusters so that the intercluster (intermote)connections are minimized.Hyper-graphs can be used to naturally represent feature extractionJAFARI ET AL.:ADAPTIVE ELECTROCARDIOGRAMFEATURE EXTRACTION ON DISTRIBUTED EMBEDDED SYSTEMS 803TABLE 2Benchmark Statisticsalgorithms.The vertices of the hypergraph are modeled as features,their weights represent the computational time required for features detection,and the hyperedges resemble the number of times a set of features is triggered simulta-neously.Partitioning the graph such that the cut-size is minimized while the partitions are balanced can reduce the communication that is required among various processing units for classification phase.The vision is that all features selected must be classified at a local node,thus,in the events where selected features reside on distributed nodes,inter-node communication is inevitable.A high quality hyper-graph partitioning algorithm greatly affects the feasibility,quality,and the cost of the resulting system.We employed a hypergraph partitioning algorithm that is based on the multilevel paradigm.In the multilevel paradigm,a sequence of successively coarser hypergraphs is constructed.A bisection of the smallest hypergraph is computed and used to obtain a bisection of the original hypergraph by successively projecting and refining the bisection to the next level finer hypergraph.We have usedhMETIS,a program for partitioning hypergraphs imple-mented for PCs [28].The same algorithm can be easily ported on a mobile computer such as a Pocket PC to facilitate dynamic reconfiguration.The vision is that the hypergraph information is collected real-time from the processing nodes of the wearable computer.Subsequently,the algorithm running on the motes are reconfigured.The number of partitions is determined as described below:The preprocessing tasks,as well as pattern recognition,must be completed before the next heartbeat arrives.Let the heartbeat be N beats per minute (bpm).Therefore,the heartbeat rate period can be obtained from:T heartbeat ¼60=N:ð6ÞLet the time required for preprocessing and pattern recognition be t pre and t recog ,respectively.t pre þt recog < ÂðT heartbeat Þ;where<1:ð7Þ804IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS,VOL.17,NO.8,AUGUST 
2006TABLE 3Number of Queries Exchanged among Processing Units:Sampling Rate =360Sample/Sec TABLE 4Number of Queries Exchanged among Processing Units:Sampling Rate =200Sample/SecThe factor is selected to be 0:9to ensure a margin that prevents overloading the processing units.Therefore,the maximum CPU time that may be assigned to pattern recognition is ÂðT heartbeat ÞÀt pre ,where t pre is fixed and can be computed from the profiling stage.As described earlier,the weight on vertices represents the required computational time for each feature.In addition,W k is already outlined in (4).Therefore,the following objective should be accommodated:Minimize Ks:t:W k <t recog 8k ¼1::K:ð8ÞTo determine the value of K ,we consider the total time required for pattern recognition on all features,T recog (extracted from profiling analysis).It is trivial that the lowerbound on K can be obtained from the following equation:K ¼T recog =t recog :ð9ÞOnce partitioning is performed based on the value of K ,the solution may be imbalanced and violates the constraint described in (8).In this case,K must be incremented and the features are repartitioned until a feasible solution is determined.8S IMULATION A NALYSISThis section presents various simulation analysis performed to exhibit the effectiveness of our technique.All experi-ments were carried out with ECG signals from MIT-BIH Arrhythmia database.The MIT-BIH Arrhythmia database contains 48half-hour excerpts of two-channel ambulatory ECG recordings,obtained from 47subjects studied by the BIH Arrhythmia Laboratory between 1975and 1979.The recordings were digitized at 360samples per second per channel with 11-bit resolution over a 10mV range.We used all 48complete records freely available from PhysioNet [29].We also repeated the experiments by downsampling all the benchmarks to 200and 100samples per second.As illustrated in Table 2,each MIT-BIH record has the recordings of two channels.Yet,we only used the first channel.The second channel was not used for the sake of simplicity.Originally,in MIT/BIH benchmarks,the electro-des placed on the chest were selected due to their small noise level.We performed profiling analysis on the algorithm described in Section 3using Avrora to compute the computational delay of feature detection modules.The ECG algorithm was ported both for dot-motes (SOS)and PCs.The algorithm for PC was written in C language.The simulation for feature and hypergraph extraction was done on PC due to a number of software instability that we encountered in SOS.As for hypergraph partitioning,we utilized hMETIS.The MIT/BIH benchmarks were used with three sampling rates as illustrated in Tables 3,4,and 5.The original sampling rate was 360samples/sec while 200and 100samples/sec were acquired by downsampling the data.In Tables 3,4,and 5,two scenarios for configuration were considered.In one scenario,features were adaptively assigned to processing units based on hypergraph parti-tioning (adaptive partitioning).In the other scenario,the optimized configuration was determined using hypergraphpartitioning on benchmark 100and remained fixed throughout our experiments (fixed configuration).The number of partitions were obtained from (9)for each benchmark.Table 3figures the number of queries ex-changed in both scenarios.Considering that the experi-ments were carried out through simulations,we were unable to measure the wireless power consumption.However,given the number of features we examined—23,each query may be incorporated in a wireless packet of dot-motes 
(30bytes).Therefore,taking into account (1),the wireless power consumption is proportional to the number of queries exchanged.On average,the communication energy consumption was reduced by approximately 70per-cent in all sets of experiments.The wireless communication overhead for partitioning was negligible due to the small size,sparsity,and slowly changing nature of our hyper-graphs.The reconfiguration was performed only once for each benchmark.Therefore,its effect on the performance of the system was negligible.JAFARI ET AL.:ADAPTIVE ELECTROCARDIOGRAM FEATURE EXTRACTION ON DISTRIBUTED EMBEDDED SYSTEMS 805TABLE 5Numberof Queries Exchanged among Processing Units:Sampling Rate =100Sample/Sec。
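To make the partition-count selection above concrete, the sketch below computes the lower bound on K from Equations (6)-(9) and then increases K until the per-partition constraint is met. The function name, the partition_balanced callback, and the use of milliseconds are assumptions made for illustration only; they are not part of the authors' implementation.

```c
#include <math.h>

/* Illustrative sketch of the partition-count selection (Equations (6)-(9)).
 * All times are in milliseconds; alpha (0.9 in the text) keeps a margin
 * against overloading the processing units. */
int choose_partition_count(double heart_rate_bpm, double t_pre_ms,
                           double T_recog_ms, double alpha,
                           int (*partition_balanced)(int K, double budget_ms))
{
    double T_heartbeat = 60000.0 / heart_rate_bpm;    /* Eq. (6), in ms      */
    double t_recog = alpha * T_heartbeat - t_pre_ms;  /* per-node budget (7) */
    if (t_recog <= 0.0)
        return -1;                                    /* infeasible          */

    int K = (int)ceil(T_recog_ms / t_recog);          /* lower bound, Eq. (9) */

    /* Repartition with a larger K until every partition satisfies
     * W_k < t_recog, the constraint of Eq. (8). */
    while (!partition_balanced(K, t_recog))
        K++;
    return K;
}
```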

专业英语单词

“daisy-chained”application “菊花链”的应用32 “dumb”device 无信息反馈的“哑”设备32 3-phase transformer 三相芯式变压器31a factor of 10 一个数量级43 a lagging power factor 滞后的功率因数21a mutually inducede e.m.f.互感电动势21 absolute value 绝对值14 AC line (mains)交流供电主线路43 access method 接入方式32 active branch 有源支路13 active circuit elements有源电路元件12 active switch 有源开关43 adaptive feature 自适应特性35 adaptive relaying 自适应继电保护35 additive polarity 加极性31 adjusting timing 调节计时43 admittance 导纳17 aircraft generator 飞行器用发电机43 Al (Artificial Intelligence)人工智能56 amplifier 放大器17 analytical solution 解析法14 Arabic number 阿拉伯计数制18 Argand diagram 阿尔冈图14 Argand 阿尔冈,法国数学家14 armature leakage reactance 电枢漏电抗31 asscmbled circuit 集成电路18 assembly or microcode 汇编语言或微处理码56 asymmetrical fault 不对称故障31 asynchronous serial data 异步串行的数据32 Asynchronous Transfer Mode( ATM )异步转换模式32 asynchronous transfer mode switching technology 异步转换模式开关技术32 automatic failover 自动纠错32 automatic range switching 输入电压43 自动调节开关backbone network 主网32 backlight 背光43 backup protection 备用保护35 backward-chaining 向后链接56 base frequency 基波频率43 be multiplexed (信号)被多重化32 be referred to 折算到,折合到23 binary combination 二进制数组18 binary 二进制18 Binary-sequential order 二进制次序18 bio-based product 生物基产品47 bio-energy 生物能源47 biogas technology 沼气技术47 biomass forming particles fuel 生物质成型颗粒燃料47 biomass industry 生物质产业47 biomass materials 生物质原料47 biomass-rich regions生物质能丰富的地区47 biopolar junction transistor( BJT)双极型晶体管15 Boolean algebra 布尔代数18 breaker failure 断路器故障35 buffer amplifier 缓冲放大器15 building-block circuit 积木式结构电路18 burning wood residue 燃烧木材剩余物47 capacitance effect 电容效应21 capacitor-diode voltage multiplier 电容二极管电压倍增器43 carbon-filament lamp 碳丝灯泡11 carrier sense multiple access tokenbus 令牌总线32 Cartesian coordinates 笛卡儿坐标系14 cascading outages 串级停电事故35 CASE 计算机辅助软件工程56 central operations computer system计算机系统中心32 chance variable 随机变量18 changing setting 改变设定值35 charge pump 充电泵43Chopper controller 斩波控制器43 circuit branch 支路13 circuit components 电路元件11 circuit diagram电路图11 circuit parameters电路参数11 circuit-breakers 断路器31 circulating current 环流31 clockwise (顺)时针方向14 CLOS (Cornrnon Lisp Object System )一种面向对象的编程和表达的工业标准56closed loop control 闭环控制32 coaxial cable 同轴电缆32 Cockcroft-Walton generator 克罗夫特一沃尔顿发电机43combining heat and power (CHP)with biomass 生物质热电联(产)47common base 共基极15 common collector 共集电极15 common drain 共栅极15 common reference 参考点16 common source 共源极15 complex circuit 复杂电路13 complex number 复数14 complex peak value 复数幅值14 complex plane 复平面14 complex time function 复数时间函数14Complex-number method = methodof complex numbers 复数法14computational arithmetic 算术运算18 conductance 电导11 conductor导体11 constant angular velocity 恒定角速度14construction phase 建设阶段47 contradiction between …之间的矛盾47control strategies 控制策略35 control-board 控制屏31 convention 惯例16 corn stalk玉米秸秆47 correcting power factor 校正功率因43 数counter-clockwise 逆时针方向14 CPU core 中央处理器芯片43 CT 电流互感器35 current magnitude relay 电流继电器35 cut off (completely off )全断43 D.C. blocking capacitor 直流耦合电容器、隔直流电容器15D.C. machine 直流电机13 D.C. 
supply 直流供电电源15 damper wingding 阻尼绕组31 Darlington 达林顿(人名)16 data communications equipment (DCE)数据通信设备32 data terminating equipment (DTE)数据终端设备32 De Morgan’s Theorem 德摩根定理(德摩根是19世纪英国数学家)18 decentralized fuel consurning problem分散用能问题47 decimal number 十进制数18 decimal system 十进制18 demagnetize 去磁作用21 dependent variable 因变量,函数18 desktop computer 台式计算机43 difference signal 差值信号17 differential input 差动输入16 differentiation 微分16 dimensional 量纲的,因次的,……维的16 direct-axis sub-transient short circuittime constant 直轴次暂态短路时间常数31direct-axis transient open-circuit time constant 直轴暂态开路时间常数31direct-axis transient short-circuitedtime constant 直轴暂态短路时间常数31direct-axis 直轴31 direct-current(D.C.)circuit直流电路11 direct-fired biomass for electric power generation 生物质直燃式发电47 direct-fired biomass power generation 47生物质直燃式发电directional comparison relay 方向比较继电器35 directional contact 方向性吸合35 discrete electromechanical control relay 离散的机电控制继电器32 discrete transducer 离散的传感器32 displacement current位移电流11 distributed power supply 分散的独立电源47 distributed resource 分散的资源47 domestic product 国内产品43 double subscript 双下标12 dual-redundant processor (有冗余的)双处理器32 dummy load 假负荷16 dummy order 伪指令16 duty cycle 占空比43 e.m.f.=electromotive force电动势11 effective gain 有效增益17 effective values 有效值14 electric circuit电路11 electric energy电能11 electrical device电气设备11 electrical plant 电气设备31 electro mechanical backup 机电(设备的)备份32 electrode 电极,电焊条15 electromagnetic interference (EMI)电磁干扰43 electronic analog 电子模拟16 electronic re-closer 电子式的重合器32 emergency generator system 应急发电机系统43 emitter follower 射极跟随器15 emitter 发射器,发射极,发射管15 energy control center 电能控制中心32 energy converter电能转换器11 energy source电源11 epoch angle 初相角14equivalent circuit 等效电路,等值电路21error in output voltage 输出电压的误差43ethernet network 以太网32excitation current 励磁电流31exciter 励磁机31expand agricultural function 拓展农业功能47expert system 专家系统56external characteristic外特性11factor 空载功率因数21factor 系数,因数16feedback component 反馈元件16feedback 反馈16fequency bandwidth 频带宽度17Fiber Distributed Data Interface(FDDI) 光纤分布数据接口,分布式光纤数据接口32fidelity 保真度17field effect transistor (FET )场效应管15filter bank 滤波器组43filter capacitor 滤波器的电容43forward-chaining 向前链接56fuel cell 燃料电池43full on 全开43full-wave rectifir全波整流器43fuzzy predicates 模糊术语56fuzzy probabilities 模糊概率56fuzzy truth value 模糊真值56gain 增益15gain-bandwidth product 增益带宽积17gateway 网关32General-purpose workstation 通用工作站56generator发电机11Google 谷歌43graphical user interface 图显式的用户界面32hand-held calculator 便携式计数器18heating appliance电热器11Heaviside 海维赛德,美国物理学家14hidden failure ( HF )隐匿性故障35Hidden Failure Monitoring and 35Control System( HFMCS)隐匿性故障监控系统high current-handling capacity 大电流处理能力43 high vulnerability relays 高易损继电器35 high-vulnerability index 高易损指数35 host computer system (master station)主机计算机系统(即主站)32 HV and EHV systems 高压和超高压系统35 hybrid-∏ model 混合∏形模型15 ideal amplifier 理想放大器16 ideal current source 理想电流源12 ideal source 理想电源12 ideal voltage source 理想电压源12 IED ( Intelligent Equipment and Device )智能化的设备与装置32 immediate trip立即跳闸35 impedance 阻抗17 importance sampling 重要性采样35 importance sampling 重要性采样35 in antiphase 反相21 in close proximity ( to)紧密耦合21 in parallel with 和……并联15 independent vatiable 自变最18 Industrial Revolution 工业革命47 infinite voltage gain 无穷大电压增益16 initial voltage 初始电压16 input range switch 输人范围调节开关43 input resistance 输入电阻15 installed capacity 装机容量47 instantaneous values 瞬时值14 integrated circuit of the large-scale大规模集成电路18 Intel Developers Forum 英特尔开发商论坛43 intelligent programmable 智能化可编程的32 intelligent technology 智能技术56 interface module 接口模块32 interlock 连锁装置32 internal resistance 内阻12 
International Space Station 国际空间站43 Inverse 倒数17 inverting amplifier 反相放大器16 inverting amplifier 运放16 isolation mechanism 隔离机制43 isolation 隔离,绝缘,隔振15 isulator 绝缘体15 job opportunity 就业机会47 knowledge base 知识库56 lagging production pattern 落后生产模式47 LAN ( Local Area Network )局域网32 laptop computer 便携式计算机43 large-scale food processing enterprises 大型粮食加工企业47 large-scale wood processing plant 大型木材加工厂47 leakage current 漏电流12 linear regulator 线性调节器43 line-to-line fault 相对相故障31 lino-to-earth fault 相对地故障31 load characteristic 负载特性11 load resistance负载电阻11 load tap changer 有载分接开关32 logic AND function 逻辑“与”函数18 logic circuit 逻辑电路18 logic condition 逻辑状态18 logic equation 逻辑方程18 logic OR function 逻辑“或”函数18 logic symbol 逻辑符号18 logic variable 逻辑变量18 low on-resistance 低导通电阻43 low voltage variant 低电压发生器43 lower limit on the integration 积分下限16 low-pass filter 低通滤波器43 magnetic and electric field电磁场11 magnetizing current 激磁电流21 major grain-producing areas 粮食主47产区make use of biomass for power generation 用生物质能发电47 manual voltage range switch 手动电压换挡开关43 manuf acturer’s date sheet (产品)铭牌16 mathematical operation 数字运算16 megohm 兆欧(姆),百万欧(姆)16 metal-filament lamp 金属丝灯泡11 microvolt 微伏16 mid-frequency band 中频带15 million tons of standard coal 百万吨标煤47 MIS and DP 管理信息系统与数据处理56 modulus (复数)模14 multi-drop application 多驻点的应用32 multi-joint and coordinate within the cross-course , cross-section and cross-profession学科、跨部门、跨行业的联合与协同47multi-layered stack 多层次的堆栈32 multiple-state 多态18 multistage MOSFET amplifier 多级功率场效应管放大器43 negative and zero sequence reactance负序和零序电抗31 negative feedback 负反馈17 non-inverting terminal 非反相端16 non-linear characteristics 非线性特性11 nonlinear distortion 非线性失真17 non-redundant processor 非冗余的控制器32 non-switching power-supply for stand-by 非开关式的备用电源43 Norton current source 诺顿电流源17 off-line power supply 离线式电源43 offset = bias 偏置16 Ohm's law 欧姆定律13 on site 现场43 open communications protocol 开放的通信协议32 open loop gain 开环增益16 open vs. proprietary system 开放与专卖系统32 open-loop regulator 开环调节器43 operating and polarizing signal 运行和极化信号35 operational amplifier 运算放大器16 optical fiber 光纤(电缆)32 opto-coupler 光耦合器43 order 数量级16 origin of coordinates 坐标原点14 oscillation 振荡17 output contact 输出吸合(信号)35 output lead 输出端18 output resistance 输出电阻15 overall amplifier gain 放大器总增益17 over-all planning and all-round considerations 统筹兼顾47 overcurrent relay 过电流继电器35 overcurrent relay 过电流继电器35 P.D.= potential drop 电压降13 parallel circuit 并联电路15 parallel resonance 并联混振15 parallel series 混联15 passive circuit elements无源电路元件12 passive element 无源元件13 percentage impedance voltage 阻抗电压百分数31 permeability 磁导率21 phase comparison relay 相位比较继电器35 phase displacement 相位差14 phase inversion 倒相17 phase reversal 反相16 phase shift 相位移21 physical media 物理介质32 pick-up settings 启动值35 planning and designing phase 规划设计阶段47 plug and play 点到即用32 polarity 极性,偏极15positive reference direction 正(参考)方句13 potential distribution 电位分布13 power component of current 电流的有功分量21 power factor correction (PFC)功率因数校正43 power line carrier 电力线载波35 power station 发电厂31 power supply unit (PSU)供电单元,电源设备43 power system black out 系统断电35 power transistor 功率晶闸管43 power transmission line输电线12 primary cell原生电池11 probability 概率35 Programmable logic controller( PLC)可编程逻辑控制器32 protection system 保护系统35 protective relay scheme 继电保护方式35 PT 电压互感器35 PWM ( Pulses Width Modulation )脉宽调制43 quadrature-axis 交轴31 quasiresonant ZCS/ZVS (zero current/zero voltage )43 r.m.s. 
vaues = root of mean square 均方根值14 radio frequency (RF)无线电频率43 rare event 小概率事件35 rate of change of voltage 电压变化率21 reach or settings of a relay 继电器的保护或整定(范围)35 reactor 电抗器31 receiving end 接收端12 rectification 整流43 rectifier 整流器43 redundant 冗余(设备)35 reference point 参考点13 region of vulnerability ( RV )易损区域35 regional biomass energy program 区47 域性生物质能源计划regional power stations 区域供电站47 remote terminal unit ( RTU )远动终端设备单元32 resources dispersed rural areas 资源分散的农村地区47 retum difference 反馈深度17 reverse power flow 逆潮流35 revising control 修正控制35 rotating vector 旋转矢量14 run stably 稳定运行47 salient-pole machine 凸极电机31 SCADA Supervisory Control and Data Acquisition 监控与数据采集系统32schematic 纲要的;图解的;按照图式的(或公式的)16secondary cell再生电池11 self-(or mutual-)induction自(互)感11self-bias resistor 自偏置电阻15 self-calibrating 自我校验35 self-checking 自我检测35 self-monitor 自我监控35 sending end 发送端12 serial data interfaces 串行数据接口32 serial interface 串行接口32 series and parallelequivalent circuit 串并联等值电路12simple algebra 初等代数18 single-ended output 单端输入16 single-loop network (circuit)单回路网络(电路)13 single-phase banks of three each 单相组式(变压器)31 sink(倒)U 形(电路)21 sinusoidal shape 正弦波形状43 sinusoidal time function 正弦时间函数14 sinusoidal variations 正弦变量21 skin effect 集肤效应43 small signal amplifier 小信号放大器15soufce follower circuit 信号源跟随电路15 source code 源代码32 space shttle 航天飞机43 SQL (Structured Query Language)结构化查询语言(一种与相关的数据库进行通信的工业标准)56square waves 方波43 standard AC electric motors 按标准生产的交流电动机43station transformer 厂用变压器31 steady direct current 恒稳直流电14Steinmetz 施太因梅兹,出生于法国的美国电气工程师,提出交流电系统概念,创立计算交流电路的方法,研究电瞬变现象理论,著有《交流电现象的理论和计算》14storage battery蓄电池11 stored magnetic energy磁场储能21 substation automation 电站自动化32 substation-resident equipment 变电站的驻站设备32 substrate 底层,基片,衬底16 subsystem 子系统,辅助系统18 subtractive polarity 减极性31 sub-transient reactance 次暂态同步电抗31 sub-transient 次暂态31 sudden short-circuit condition 突然短路31 suited to local conditions 因地制宜47 summing circuit 总和线路,反馈系统中的比较环节17 sunrise industry 朝阳产业resollrce efficient utilization 资源高效利用47 switch 准谐振零电流/零电压开关43 switch-board 开关屏31 switching regulator 开关调节器43 symmetrical banks 呈放射状的(磁路),即三相芯式变压器31 synchronous optical network technology 同步光纤网络技术32 temperature rise limit 温升极限31 terminal voltage端电压11 terminology 术语,专门名词18 the applied voltage 外施电压21 the dielectric电介质11 the general expression 通式,公式23 the no-load power 21 thermal noise 热噪音17 third harmonics or their multiples 三及三的倍数次谐波31 time-invariant时不变的11 tirne of decay 衰减时间31 token ring 令牌环32 traditional fossil fuels 传统的矿物燃料47 traditional renewable energy surces传统的可再生能源47 training region 实验区47 transconductance 跨导17 transfer trip scheme 远方跳闸方式35 transient condition 暂态31 transient internal voltage 次暂态内电势31 transient over-voltage 暂态过电压31 transistor noise 晶体管噪声17 transresistance 互阻17 triangular symbol 三角符号16 trigonometric transformations三角转换14 trip logic 跳闸逻辑35 tri-state driver 三态驱动器32 truth table 真值表18 unidirectional current单方向电流11 universal gate 全能门18 universal inputs 多用电源输入端43 unsupervised control 无人值守的控制32 user interfaces用户接口56 UTP ( unshielded twisted-pair ) copper 非屏蔽双绞铜导线32 variable duty cycle 可变的工作周期43 vector 矢量14 vector diagrams 矢量图14 vector groups in transformer 31connection 变压器的连接组别vectors of voltages ( currents , magnetic fluxes , etc. 
) 电压(电流,磁通等)矢量14very large-scale (VLSI) types 超大规模集成电路18virtual ground 虚地16 voltage doubler 电压倍增器43 voltage drop电压降11 voltage ratio 电压比31 volt-ampere characteristics 伏安特性11 vulnerability index 易损坏指数35 wire导线11 with collision detection ( CSMA /CD )具有碰撞检测功能的载波侦听多路通道32X Window System 一种图形和显示管理的工业标准56zero-power- actor 零功率因数21 zones of protection 保护范围35。

华为IPS模块商品介绍说明书

IPS moduleHUAWEI IPS ModuleOverviewHuawei IPS module is a new generation of dedicated intrusion detection and prevention products. It is designed to resolve network security issues in the Web2.0 and cloud age. In the IPv4 and IPv6 network environment, the IPS module supports virtual patching, web application protection, client protection, malicious-software control, network application control, and network-layer and application-layer DoS attack defense.With the carrier-class high availability design, the IPS module can be inserted on switches, such as the S12700, S9700, and S7700, providing plug and play and scalability features. It can be deployed flexibly in multiple network environments. This module supports zero-configuration deployment and does not require complicated signature adjustment and manual setting of network parameters and threshold baselines to block service threats. Functioning with basic network devices, the IPS module comprehensively protects network infrastructures, network bandwidth performance, servers, and clients for large and medium-sized enterprise, industry, and carriers.Product FeaturesFlexible Deployment and Easy to Use•Uses software to adjust the networking, which simplifies the installation and deployment and frees the administrators from adjusting the complex cables. •Integrates networks with security using products from the same vendor, which facilitates unified management and simplifies the management. •Supports zero-configuration deployment and plug and play, and doesnot require complicated signature adjustment and manual setting of network parameters.•Provides diversified policy templates to simplify configurations in various scenarios and facilitate security policy customization.Accurate Detection and Efficient Threat Prevention•Detects attacks accurately without false positives with the advanced vulnerability feature detection technology.•Automatically learns the traffic baselines to prevent incorrect threshold configurations.•Automatically blocks major and severe threats without signature modification.Comprehensive Protection from System Service to Application Software•Provides traditional intrusion protection system (IPS) functions, such as vulnerability-based attack defense, web application protection, malware control, application management and control, and network-layer DoS attack defense.•Provides comprehensive protection for client systems exposed to the prevalent attacks that target web browsers, media files, and other document file formats.•Provides industry-leading defense against application-layer DoS attacks that spread through HTTP , DNS, or SIP .•Detects attacks and upgrades signatures in a timely manner with the global vulnerability trace capability.Application Awareness for Accurate Control of User Behaviors•Identifies more than 6000 network applications. With precise bandwidth allocation policies, the IPS module restricts the bandwidth used by unauthorized applications and reserves sufficient bandwidths for office applications, such as OA and ERP .•Monitors and manages various network behaviors, such as instant messaging (IM), online games, online video, and online stock trading. This enables enterprises to identify and prevent unauthorized network behaviors and better implement security policies.Specifications。

英语作文-集成电路设计行业的智能芯片与系统解决方案

英语作文-集成电路设计行业的智能芯片与系统解决方案The semiconductor industry, particularly in the realm of integrated circuit (IC) design, has witnessed a remarkable evolution over the years. Among the forefront advancements lies the domain of smart chips and system solutions. In this article, we delve into the intricacies and innovations within the domain of intelligent chip design and its broader implications for the industry.Intelligent chips, often referred to as system-on-chips (SoCs), represent a fusion of hardware and software expertise aimed at delivering enhanced functionalities and performance. These chips integrate various components, including processors, memory, sensors, and interfaces, onto a single substrate, thus offering compactness and efficiency.One of the defining features of intelligent chips is their adaptability and programmability. Through sophisticated algorithms and firmware, these chips can dynamically adjust their behavior based on environmental conditions, user inputs, and other stimuli. This adaptability is particularly crucial in applications such as IoT devices, automotive electronics, and consumer electronics, where flexibility and responsiveness are paramount.Moreover, intelligent chips boast advanced security features to safeguard sensitive data and thwart malicious attacks. Encryption, authentication mechanisms, and secure boot protocols are integrated into the chip architecture to provide robust protection against cybersecurity threats. As data privacy concerns continue to escalate, the incorporation of stringent security measures has become indispensable across various industry sectors.Furthermore, the emergence of artificial intelligence (AI) and machine learning (ML) has propelled the capabilities of intelligent chips to unprecedented heights. By embedding neural network accelerators and dedicated hardware for AI inference tasks, these chips can perform complex computations with unparalleled speed and efficiency. This pavesthe way for innovative applications such as image recognition, natural language processing, and autonomous decision-making.In addition to standalone intelligent chips, there is a growing trend towards system-level integration and co-design. This entails the seamless integration of multiple chips and subsystems to form cohesive, synergistic systems. By optimizing the interaction between different components, designers can achieve higher performance, lower power consumption, and reduced latency, thereby unlocking new possibilities in terms of functionality and user experience.The design process for intelligent chips involves a multidisciplinary approach, encompassing aspects of electrical engineering, computer science, and materials science. Designers leverage advanced tools and methodologies, including electronic design automation (EDA) software, hardware description languages (HDLs), and simulation techniques, to model, simulate, and verify the chip's functionality prior to fabrication.Furthermore, the relentless pursuit of miniaturization and energy efficiency has led to innovations in semiconductor manufacturing technologies. From FinFET transistors to advanced packaging techniques such as 3D integration and wafer-level packaging, manufacturers are continually pushing the boundaries of what is technologically feasible. 
These advancements not only enable higher transistor densities and faster switching speeds but also contribute to reducing the overall cost per function, thus driving widespread adoption of intelligent chips across diverse market segments.Looking ahead, the trajectory of intelligent chip design is poised to intersect with other transformative technologies such as quantum computing, neuromorphic computing, and edge computing. As the demand for compute-intensive applications continues to escalate, the role of intelligent chips as the cornerstone of next-generation electronics becomes increasingly pronounced.In conclusion, the field of intelligent chip design represents a convergence of innovation, ingenuity, and interdisciplinary collaboration. From powering the devices we use daily to driving the next wave of technological breakthroughs, these chips serve as the bedrock upon which the digital future is built. As we navigate the complexities of aninterconnected world, the quest for ever-smarter, more efficient chips will undoubtedly remain at the forefront of technological progress.。

全产业链价值创造英文说明书

全产业链价值创造英文说明书1The concept of full industrial chain value creation refers to the comprehensive and coordinated optimization and integration of all links within an industry chain, from the initial stage of research and development to the final stage of sales and after-sales service. This approach aims to maximize the overall value and competitive advantage of the entire chain.Take a well-known automotive brand as an example. They have achieved value maximization by integrating various aspects such as research and development, production, and sales. In the R&D stage, they invest heavily in technological innovation and design to create unique and appealing vehicle models. During the production process, they adopt advanced manufacturing techniques and strict quality control to ensure high-quality output. In the sales phase, they establish an extensive distribution network and provide excellent customer service to enhance brand image and customer satisfaction.Another case is an electronic enterprise that optimizes its full industrial chain layout to enhance competitiveness. They focus on enhancing the efficiency and flexibility of the supply chain to respond quickly to market changes. They also continuously improve the R&D capabilities to launch new products that meet the diverse needs ofconsumers. At the same time, they build a strong marketing and sales team to expand market share.The significance of full industrial chain value creation is profound. It helps enterprises reduce costs, improve product quality and service levels, and enhance their ability to respond to market fluctuations. Moreover, it promotes the efficient allocation of resources and the upgrading of the entire industry, leading to sustainable development and greater economic benefits.In conclusion, full industrial chain value creation is not only an important strategy for enterprises to succeed in the fierce market competition but also a driving force for the healthy development of the entire industry.2The whole industrial chain value creation is a complex and significant topic that involves multiple elements and challenges. To understand it thoroughly, let's take the example of an agricultural enterprise. In its pursuit of full industrial chain development, it often encounters the risk of market fluctuations. For instance, sudden changes in the demand and supply of agricultural products can lead to price instability. This not only affects the income of farmers but also poses challenges to the processing and sales links. To cope with this, the enterprise needs to establish a precise market monitoring mechanism and a flexible production adjustment strategy.Another example could be a clothing brand. Supply chain issues can have a significant impact on its value creation. Delays in raw material supply or problems in logistics can cause production delays and customer dissatisfaction. To address these problems, the brand should build a stable and efficient supply chain system, strengthen cooperation with suppliers, and improve inventory management.In conclusion, the key elements of whole industrial chain value creation include seamless coordination among various links, effective risk management, and continuous innovation. 
Only by paying attention to these aspects and taking corresponding measures can enterprises truly achieve sustainable value creation and development in the fierce market competition.3The entire industrial chain value creation represents a revolutionary concept that has reshaped the business landscape in the contemporary era. It involves integrating all stages of production, distribution, and consumption to maximize value and achieve sustainable growth. Take, for instance, a leading internet enterprise that harnessed the power of big data to drive an upgrade across the entire industrial chain. By collecting and analyzing vast amounts of data from various sources, this company was able to identify market trends, customer preferences, and potential operational bottlenecks with unprecedented accuracy. This enabled themto optimize their product offerings, streamline their supply chain, and enhance their marketing strategies, resulting in a significant increase in market share and customer satisfaction.Another compelling example is a traditional manufacturing firm that underwent an intelligent transformation to achieve a breakthrough in value creation. Through the adoption of advanced technologies such as robotics, artificial intelligence, and the Internet of Things, this company automated its production processes, improved product quality, reduced production costs, and shortened delivery times. Simultaneously, it leveraged digital platforms to establish closer connections with customers, providing personalized products and services, and thereby enhancing brand loyalty and competitiveness.In conclusion, the success of the entire industrial chain value creation lies in the seamless integration of resources, the application of innovative technologies, and a customer-centric approach. It requires businesses to have a forward-looking vision, a willingness to embrace change, and the ability to collaborate effectively across different sectors. Only by doing so can enterprises truly unlock the potential of the entire industrial chain and create long-term value in an increasingly competitive marketplace.4The concept of full industrial chain value creation has emerged as a driving force for businesses and society. Let's take a food enterprise as anexample. By implementing full industrial chain management, this enterprise can closely monitor every step from raw material sourcing to production, processing, and distribution. This not only ensures the safety and quality of food but also boosts consumers' trust. For instance, when it comes to the selection of agricultural products, strict standards are imposed to guarantee the freshness and non-pollution of the ingredients. During the production process, advanced technologies and strict quality control measures are adopted to eliminate any potential risks. As a result, consumers are more willing to purchase products from this enterprise, which leads to increased sales and a better reputation.Another example can be found in the energy sector. A certain energy enterprise has made remarkable contributions to promoting the popularization of green energy through the development of a full industrial chain. It starts from the research and development of new energy technologies, followed by the establishment of large-scale production facilities to reduce costs and improve efficiency. Moreover, efforts are made in the construction of energy storage and transmission systems to ensure a stable supply of green energy. 
This comprehensive approach not only helps reduce reliance on traditional energy sources but also plays a crucial role in protecting the environment and achieving sustainable development.In conclusion, full industrial chain value creation brings numerousbenefits to both enterprises and society. It enhances the competitiveness of enterprises, meets the demands of consumers for high-quality products and services, and contributes to the sustainable development of society as a whole.5The concept of full industrial chain value creation has emerged as a powerful force shaping the dynamics of various industries in today's highly competitive business landscape. It involves the seamless integration and optimization of all stages of a product or service's lifecycle, from raw materials sourcing to end-user consumption.In the financial sector, for instance, the construction of a full industrial chain financial service system has become increasingly crucial. This encompasses providing a comprehensive range of financial products and services, including financing for startups, supply chain finance for enterprises, and wealth management for individuals. By integrating these elements, financial institutions can better meet the diverse needs of clients and enhance their overall competitiveness.The healthcare industry has also witnessed significant improvements through full industrial chain integration. By integrating various components such as medical research and development, production of medical devices and drugs, hospital operations, and post-treatment rehabilitation, the allocation of medical resources can be optimized. Thisresults in improved accessibility and quality of healthcare services for patients.Looking forward, the trend of full industrial chain value creation is set to continue and intensify. Industries will need to focus on technological innovation, data analytics, and strategic partnerships to further enhance the efficiency and effectiveness of their value creation processes. Only by embracing this holistic approach can businesses thrive and contribute to sustainable economic growth and social development.。

英语作文-集成电路设计行业中的芯片封装与封装技术解析

英语作文-集成电路设计行业中的芯片封装与封装技术解析Integrated Circuit (IC) packaging plays a crucial role in the semiconductor industry, facilitating the protection, connection, and thermal management of microelectronic devices. This article provides an in-depth analysis of chip packaging and the technologies involved in this vital aspect of IC design.### Introduction to Chip Packaging。

IC packaging is the final stage in semiconductor device fabrication before the product reaches the end-user. It involves encapsulating the bare silicon die into a package that provides electrical connections to the outside world while offering protection from mechanical stress, moisture, and other environmental factors. The choice of packaging technology significantly impacts the performance, reliability, and cost of integrated circuits.### Types of Chip Packages。

稀疏恢复和傅里叶采样

Sparse Recovery and Fourier Sampling
by Eric Price
Submitted to the Department of Electrical Engineering and Computer Science on August 26, 2013, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science.
Thesis Supervisor: Piotr Indyk, Professor.
Accepted by: Leslie A. Kolodziejski, Chair, Department Committee on Graduate Students.

新质生产力 理解

New quality productive forces (新质生产力) refers to a new mode of production that takes information technology and talent as its core, strengthening innovation capability and improving production efficiency.

ALLOCATION PROCESSOR

Patent title: ALLOCATION PROCESSOR. Inventors: KISHIDA KENJI, SUZUKI HIDETOSHI. Application number: JP32672690; filing date: 1990-11-28. Publication number: JPH04195473A; publication date: 1992-07-15. Applicant: DAINIPPON PRINTING CO LTD. Abstract: PURPOSE: To perform efficient allocation operation in a short time by selecting only one of the blocks divided by one allocation processor and performing the allocation operation only for the selected block. CONSTITUTION: Necessary data are transferred from the database in a host computer 210 to the memory of a #1 machine. The operator of the #1 machine then considers what division of work for a document such as a leaflet provides the highest efficiency and divides the document surface into three. Which of the defined blocks B1-B3 is selected for the #1 machine is determined, and the selected block is defined as the allocation object block of the #1 machine. Each allocation processor then performs the allocating process only for its defined allocation object block. Consequently, efficient allocation operation is performed in a short time.

Adaptive array processor and processing method for

Patent title: Adaptive array processor and processing method for communication system. Inventor: Larry D. Alter. Application number: US06/015232; filing date: 1979-02-26. Publication number: US04347627A; publication date: 1982-08-31. Applicant: E-SYSTEMS, INC. Attorneys: Robert V. Wilder; Albert M. Crowder, Jr. Abstract: A processing circuit (50) in a communication system performs multi-channel adaptive array processing using a summed reference technique. A plurality of adaptive modules (56 and 58) modify the magnitude and phase of antenna element signals from a plurality of antenna elements. The outputs of the adaptive modules are combined by a combiner (60) to produce a summation signal. A modem (64) processes and divides the summation signal into a plurality of reference channel signals on channels 1 through K. A summer (70) adds the reference channel signals together to produce a composite reference signal that is subtracted by a subtractor (74) from the summation signal to produce an error signal. The error signal is applied to the adaptive modules (56 and 58), which respond to nullify interfering and noise signals and to enhance the signal-to-interference ratio for the summation signal.

ems 光储协同策略

ems 光储协同策略英文回答:EMS (Energy Management System) is a vital component in the development of renewable energy and energy storage technologies. It plays a crucial role in optimizing the coordination between energy generation, storage, and consumption. The concept of EMS is to integrate various energy sources, such as solar power, wind power, and energy storage systems, to ensure the efficient and reliable operation of the entire energy system.One of the key strategies in EMS is the implementation of a coordinated energy storage strategy. This strategy aims to maximize the utilization of energy storage systems by optimizing the charging and discharging cycles based on the energy demand and supply. By effectively managing the energy storage resources, EMS can enhance the overall energy efficiency and stability of the system.For instance, imagine a scenario where there is excess solar energy during the day when the demand is low. Instead of letting this energy go to waste, EMS can intelligently divert the surplus energy to charge the energy storage systems. Later, when the demand exceeds the supply, EMS can utilize the stored energy to meet the increased demand. This coordinated approach ensures that energy iseffectively utilized and minimizes the reliance on traditional power sources during peak hours.Another important aspect of the EMS and energy storage synergy is the ability to provide backup power during emergencies or power outages. Energy storage systems can store excess energy during normal operation and release it when needed, acting as a reliable backup power source. This capability is particularly crucial in areas prone to natural disasters or remote locations where the grid infrastructure is unreliable.Furthermore, EMS can also optimize the use of energy storage systems by considering factors such as energy prices and grid conditions. For example, during periods ofhigh electricity prices, EMS can prioritize the use of stored energy to avoid expensive peak-hour electricity rates. Additionally, EMS can monitor the grid conditionsand make real-time adjustments to the energy storage systems to ensure grid stability and prevent blackouts.中文回答:EMS(能源管理系统)是可再生能源和能源储存技术发展中的关键组成部分。

评估指南抓住锲机那部分内容

评估指南抓住锲机那部分内容英文回答:When it comes to evaluating opportunities, one important aspect is to seize the moment or "grab the bull by the horns," as they say. This means being able to recognize and take advantage of favorable circumstances or openings. In the context of evaluation guidelines, this refers to identifying key moments or triggers that can lead to successful outcomes.To effectively capture these opportunities, it is crucial to stay alert and "keep one's eyes peeled" for potential breakthroughs. This involves actively monitoring the market, industry trends, and customer needs. For example, a company in the technology sector may be evaluating the potential of entering a new market. By staying up-to-date with the latest developments and identifying emerging trends, they can seize the opportunity when the timing is right.Another important aspect of seizing the moment is being proactive and "thinking outside the box." This means looking beyond the obvious and exploring innovative solutions or approaches. For instance, a restaurant owner evaluating their business may notice a growing demand for plant-based food options. By proactively introducing a vegetarian or vegan menu, they can tap into this emerging trend and attract a new customer base.Furthermore, effective evaluation guidelines should include assessing the risks and rewards associated with seizing the moment. It is important to weigh the potential benefits against the potential drawbacks. This requires a thorough analysis of the market conditions, competition, and potential challenges. Taking calculated risks is part of seizing opportunities, but it is essential to minimize potential pitfalls.中文回答:评估机会时,一个重要的方面是抓住时机,正如人们所说的“抓住机会”。


Adaptive Processor Allocation in Packet Processing Systems

Ravi Kokku  Upendra Shevade  Nishit Shah  Harrick M. Vin  Mike Dahlin
email: {rkoku, upendra, nishit, vin, dahlin}
University of Texas at Austin

Abstract

The functionality of packet processing applications is often partitioned into pipeline stages; these stages are allocated a subset of the multiple processors available in a packet processing system. The workload, and hence the processing requirement, for each pipeline stage fluctuates over time. Adapting processor allocations to pipeline stages at run-time can improve robustness of the system to traffic fluctuations, can reduce the processor provisioning requirement of the system, and can conserve energy. In this paper, we present an on-line algorithm for adapting processor allocations while ensuring that the additional delay suffered by packets as a result of adaptation is deterministically bounded. The resulting Processor Allocation Algorithm (PAL) is simple, but it allocates only as many processors to stages as needed to meet packet delay guarantees, accounts for system reconfiguration overheads, and copes with the unpredictability of packet arrival patterns. A key contribution of PAL is its generality; it captures the adaptation opportunities in the system as a finite state automaton (FSA)—the methodology for constructing the FSA can be applied to a variety of application requirements and system configurations. We demonstrate that for a set of trace workloads PAL can reduce processor provisioning level by 30-50%, reduce energy consumption by 60-70% while increasing the average packet processing delay by less than 150µs. We describe our prototype implementation for Intel's IXP2400-based packet processing system.

1 Introduction

Adapting processor allocations to pipeline stages of a packet processing application at run-time can improve robustness of the system to traffic fluctuations, can reduce the processor provisioning requirement of the system, and can conserve energy. In this paper, we design, implement, and evaluate an adaptive processor allocation algorithm for packet processing systems. In what follows, first we discuss the background, the problem/opportunity, and the challenges in designing an adaptive processor allocation algorithm for packet processing systems. Then, we outline the contributions of this research.

Background. Packet processing systems (PPS) are designed to process network packets efficiently. Over the past several years, the diversity and complexity of applications supported by PPS have increased dramatically. Examples of applications supported by PPS include Virtual Private Network (VPN), intrusion detection, content-based load balancing, and protocol gateways. Most of these applications are specified as graphs of functions and the specific sequence of functions invoked for a packet depends on the packet's type (determined based on the packet header and/or payload) [19, 21].

For most of these packet processing applications, the time to process a packet is dominated by memory-access latencies. Hence, an architecture containing a single, high-performance processor is often not suitable for a PPS. To mask memory access latencies, and thereby process packets at high rates, most modern PPSs utilize multiple parallel processors. For instance, Intel's IXP2800 network processor—a building block used in a wide range of PPSs—includes 16 RISC cores (referred to as micro-engines) and an XScale controller.
To achieve high packet processing throughput in such multi-processor environments, it is essential that the code fragments used to process packets reside in instruction caches. This is because each packet processing application can be thought of as a large loop that repeats for every packet; to ensure high packet processing throughput, the entire loop body must fit into the instruction cache. Today's processors or cores within network processors, however, are configured with a very limited size instruction store/cache (e.g., 4K instructions in Intel's IXP2800 network processor); the limited instruction store is often sufficient to hold code for a portion of the application, but rarely enough to hold code for the entire application. This leads to software designs in which the responsibility for processing packets is partitioned into a set of pipeline stages; further, the stages are mapped onto processors—with each processor specialized to perform one task [1]. Partitioning applications into pipelined stages is also important for improving robustness of request-processing systems [38].

The Problem/Opportunity. Today, the allocation of processors to pipeline stages of an application is done statically (at design-time). Consequently, to guarantee robustness to fluctuations in the arrival rate for different types of packets, packet processing systems often provision a sufficient number of processors to handle the expected maximum load for each pipeline stage. However, as illustrated by Figure 1, the observed load fluctuates significantly over time and at any instant is often substantially lower than the maximum load [22, 25, 27, 29, 40].

Figure 1: Packet arrivals per second over a day for the Auckland trace [26] (packets per second vs. time in thousands of seconds).

In such settings, an adaptive run-time environment—that can change the allocation of processors to pipeline stages at run-time—can yield significant benefits. First, the ability to match processor allocations to the processing demands for each pipeline stage leads to system designs that are robust to traffic fluctuations. An adaptive system can allocate an appropriate number of processors to each stage even when the processing demands for a stage exceed design-time expectations, as long as the cumulative demands do not exceed the provisioning level; further, this simplifies the determination of the processor provisioning level for the entire system. Second, by multiplexing processors among different types of packets, an adaptive system can reduce the cumulative processor requirement (or provisioning level), and thereby reduce system cost. Finally, by reducing the power consumption of idle processors (e.g., by turning off processors or running them in low-power mode), an adaptive system can conserve energy.

The Challenges. Although the properties of network workloads and of packet processing hardware raise opportunities for adaptive processor allocations, they also raise key challenges. First, because network traffic can fluctuate at multiple time-scales, accurately predicting traffic arrival patterns is difficult [25, 27, 29, 40] and intervals of idleness or low load may often be short [22], so it may be difficult to take advantage of periods of low load. Second, allocating and releasing processors incurs delay/overhead (generally of the order of a few hundred microseconds [23]). If the system releases processors too aggressively during idle periods, then because of the inherent delay in re-allocating processors, a burst of arriving packets might suffer unacceptable delays or losses.
Our Contributions. In this paper, we present an on-line algorithm for adapting processor allocations while ensuring that the additional delay suffered by packets as a result of adaptation is deterministically bounded. Our Processor Allocation Algorithm (PAL) is simple, but it allocates only as many processors to stages as needed to meet packet delay guarantees, accounts for system reconfiguration overheads, and copes with the unpredictability of packet arrival patterns. PAL, like active queue management algorithms [6, 9, 15], makes processor allocation/release decisions based only on the current queue length; it does not rely on any predictions for future arrival patterns beyond knowing the worst-case arrival rate.

• Given a current allocation of j processors for a stage and a worst-case delay bound D, PAL allocates additional processors only when the current allocation is unable to process within D the sum of (a) all currently enqueued incoming packets and (b) the maximum number of packets that could arrive during a processor's allocation latency. Surprisingly, for realistic system configurations, a simple sufficient condition to meet this general activation requirement is to activate the (j+1)st processor when the set of enqueued packets first exceeds the number of packets that j processors can process within D.

• Conversely, when the system has excess capacity, PAL releases one or more processors when both (a) the input queue becomes empty and (b) the minimum time until the processors would be reactivated under a worst-case arrival rate exceeds the latency for allocating/releasing a processor.

A key contribution of PAL is its generality; it captures the adaptation opportunities in the system as a finite state automaton (FSA)—the methodology for constructing the FSA can be applied to a variety of application requirements and system configurations.

There are four salient features of PAL. First, PAL offers the flexibility to instantiate various policies for eager or lazy allocation and release of processors; this allows PAL to trade off adaptation frequency/benefits with the delay incurred by packets. Second, we show that for acceptable values of the delay bound D, the total processor requirement for PAL is within 10-30% of an ideal, hypothetical setting that incurs no overhead for processor allocation/release. Third, PAL does not require prediction of future packet arrival patterns. Given the variability of packet arrival rates in many network environments [25, 27, 29, 40], algorithms that do not depend on on-line prediction are likely to be less complex and more effective than those that do. Fourth, PAL deterministically meets a configurable bound on packet processing delay.

We have evaluated PAL through simulations; further, we have implemented a prototype adaptive processor allocation framework for a packet processing system based on Intel's IXP2400 network processor. Using simulations, we demonstrate that for a set of trace workloads PAL can reduce processor provisioning level by 30-50% and reduce energy consumption by 60-70% while increasing the average packet processing delay by less than 150µs.

The rest of the paper is organized as follows. In Section 2, we describe our system model. In Section 3, we discuss our processor allocation algorithm. We describe the results of our experimental evaluation in Section 4, and discuss our prototype implementation in Section 5. Related work is discussed in Section 6, and finally, Section 7 summarizes our contributions.
2 System Model

We consider a packet processing system (PPS) with P processors, and an application with S pipeline stages. At any instant, a subset of the processors is allocated to each pipeline stage. A packet arriving into the system is processed by a subset of pipeline stages prior to departure. Each pipeline stage is associated with a queue; packets are queued into the stage's queue until a processor becomes available to service the packet (see Figure 2). We assume that a packet queued for service at stage i can be served by any of the processors allocated to stage i. Let the time taken to service a packet at stage i be t_pkt^i units; each stage services packets in the order of their arrival. On being processed by a stage, the packet is queued either for processing at the next processing stage or for transmission at an outgoing link (once the packet has been processed by all the required stages).

Figure 2: System model. A_i, i ∈ [1, S], denotes the number of processors allocated to pipeline stage i.

Let the delay between when a packet is enqueued and when its processing is complete by stage i be bounded by D^i. The arrival rate of packets into each queue may fluctuate over time; hence, the allocation of processors to pipeline stages changes over time. Let the maximum rate of packet arrivals into the queue for stage i be given by R_arr^i, and let N_p^i denote the maximum number of processors required to process the worst-case arrival rate R_arr^i within the delay bound D^i. Observe that for a non-adaptive system, P = ∑_{i=1}^{S} N_p^i. An adaptive system can multiplex processors among different pipeline stages; this facilitates a system with provisioning level P < ∑_{i=1}^{S} N_p^i to provide deterministic delay bounds D^i to each packet processed by stage i. We assume that allocating and releasing a processor incurs a delay t_sw. Table 1 summarizes these system model parameters.

Table 1: System model parameters
  P        Number of processors in the system
  S        Number of pipeline stages
  R_arr^i  Worst-case arrival rate for stage i
  N_p^i    Worst-case processor requirement for stage i
  t_pkt^i  Processing time per packet for stage i
  t_sw     Switching delay (allocate/release processor)
  D^i      Delay guarantee for stage i

Throughout the paper, we consider a system provisioning level such that the total instantaneous processor demands for all pipeline stages never exceed the provisioned capacity. This requirement is essential to provide deterministic bounds on the delay experienced by packets at each pipeline stage. Further, we only model the delay incurred by packets while waiting for service by one of the processors; we do not model delay incurred by packets in the input or output ports.
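For concreteness, the per-stage quantities in Table 1 can be pictured as a small record kept by the run-time; the struct and field names below, and the choice of units, are illustrative assumptions made for this sketch rather than details taken from the paper.

```c
/* Illustrative per-stage bookkeeping for the Table 1 parameters.
 * Times are in microseconds and rates in packets per microsecond;
 * the units and field names are assumptions made for this sketch. */
struct stage_model {
    double r_arr;      /* R_arr^i: worst-case packet arrival rate       */
    int    n_p;        /* N_p^i:   worst-case processor requirement     */
    double t_pkt;      /* t_pkt^i: processing time per packet           */
    double d_bound;    /* D^i:     delay guarantee for the stage        */
    int    allocated;  /* current processor allocation (FSA state)      */
    int    q_len;      /* current queue length q_len^i                  */
};

struct pps_model {
    int    num_procs;            /* P: processors in the system         */
    int    num_stages;           /* S: pipeline stages                  */
    double t_sw;                 /* switching delay to allocate/release */
    struct stage_model *stages;  /* one entry per pipeline stage        */
};
```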
3 Processor Allocation Algorithm

For each pipeline stage, at any instant, an adaptive system should determine and allocate only as many processors as are needed to process packets before their processing deadlines. Adapting processor allocations dynamically has three benefits. First, the ability to match processor allocations to the processing demands of each pipeline stage leads to system designs that are robust to traffic fluctuations. An adaptive system can allocate an appropriate number of processors to each stage even when the processing demands of a stage exceed design-time expectations, as long as the cumulative demands do not exceed the provisioning level; further, this simplifies the determination of the processor provisioning level for the system. Second, adaptation enables statistical multiplexing of processors among pipeline stages, which, in turn, reduces the overall processor provisioning requirement. Finally, by deactivating (or running in low-power mode) spare processors, the adaptive system can reduce overall energy consumption.

Figure 3: The finite-state automaton for PAL. The quantities in each state denote the current processor allocation.

Our Processor Allocation Algorithm (PAL) maintains for each pipeline stage a finite state automaton (FSA) (see Figure 3). Each state in the FSA for pipeline stage i represents a processor allocation level for stage i; state transitions denote processor allocation and release events. State transitions are triggered based on the length q^i_len of the queue for stage i.

When stage i is allocated j processors, PAL requests allocation of k^j_a additional processors (by making a transition from state j to state j + k^j_a in the FSA) when the queue length for stage i exceeds a threshold Q^j_th. Similarly, when the queue is empty (q^i_len = 0), then from any state j, PAL can release k^j_r ≤ (j − n_min) processors, where n_min is the minimum number of processors that must remain allocated to stage i. In what follows, we describe the construction of the FSA by deriving the values of Q^j_th and k^j_a for all values of j, and the values of n_min and k^j_r. Since the construction is the same for all pipeline stages, we present the FSA construction for a single pipeline stage. Further, for brevity, we eliminate any reference to a specific stage (and drop the superscript i from all symbols defined in Table 1) in the rest of our discussion.

3.1 Processor Allocation

3.1.1 Q^j_th: When to Allocate Processors?

PAL allocates one or more processors when the queue length reaches a level where the delay incurred by packets can exceed the desired bound D. The rate at which packets are serviced from the queue is a function of the number of processors allocated to the stage. In particular, since each processor can service a packet in t_pkt time, the service rate R^j_dep for j processors is given by:

    R^j_dep = j / t_pkt                                                    (1)

Let Q^j_lim denote the maximum queue length that j processors can process within the maximum permitted packet-processing delay D. Note that if there are Q^j_lim packets in the queue and the pipeline stage is allocated j processors, then the total number of packets in the stage is Q^j_lim + j. Hence, Q^j_lim is determined by

    Q^j_lim + j = D × R^j_dep  ⇒  Q^j_lim = j × (D / t_pkt − 1)            (2)
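To illustrate Equations (1) and (2), the following sketch computes the service rate and the queue-length limit for a given allocation. The floor is our addition (the paper states Equation (2) without explicit rounding) and reflects that queue lengths are integral; the example numbers are hypothetical.

    from math import floor

    def service_rate(j: int, t_pkt: float) -> float:
        """Equation (1): R_dep^j = j / t_pkt."""
        return j / t_pkt

    def queue_limit(j: int, d: float, t_pkt: float) -> int:
        """Equation (2): Q_lim^j = j * (D / t_pkt - 1), the largest backlog that
        j processors can still clear within the delay bound D."""
        return floor(j * (d / t_pkt - 1.0))

    # Example: with D = 40 us and t_pkt = 4 us, two processors can absorb a
    # backlog of 2 * (40/4 - 1) = 18 packets without violating the bound.
    print(queue_limit(2, d=40.0, t_pkt=4.0))   # -> 18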
Suppose the arrival of a packet p causes the queue length to become Q^j_lim + 1 (and hence the total number of packets in the stage to become Q^j_lim + 1 + j). Then, with only j allocated processors, the delay incurred by packet p would exceed D. To ensure that packet p can be serviced prior to its delay bound, one or more additional processors must be allocated.

Figure 4: Allocation procedure: timing diagram.

Let us assume that the system requests allocation of an additional processor τ units of time prior to the arrival of packet p (see Figure 4). τ > 0 represents a speculative system that allocates additional processors in anticipation of q_len exceeding Q^j_lim; in contrast, τ ≤ 0 represents a reactive system that allocates additional processors only after q_len > Q^j_lim. Observe that a newly allocated processor can serve packets only after t_sw time units. Hence, for an interval (t_sw − τ) after the arrival of packet p, packets are serviced by j processors (and hence at the rate of j/t_pkt); after that time, (j+1) processors service packets at the rate of (j+1)/t_pkt. Thus, packet p will meet its delay bound D if and only if

    Q^j_lim + 1 + j ≤ (t_sw − τ) × j / t_pkt + (D − t_sw + τ) × (j + 1) / t_pkt

This constraint requires

    τ ≥ t_pkt + t_sw − D                                                   (3)

which leads us to the following conclusion.

Conclusion 1  For D ≥ t_sw + t_pkt, τ can be smaller than or equal to 0. Hence, allocation of additional processors needs to be triggered only after the queue length q_len > Q^j_lim, where j is the number of currently allocated processors. Hence, Q^j_th = Q^j_lim. Otherwise, if D < t_sw + t_pkt, then PAL must speculate on the possibility of q_len exceeding Q^j_lim and thereby trigger the allocation of additional processors when q_len > Q^j_lim − τ × (R_arr − R^j_dep). Hence, if (t_pkt + t_sw − D) > 0, then Q^j_th = Q^j_lim − τ × (R_arr − R^j_dep).

This is an important conclusion; it indicates that when the delay bound D ≥ t_pkt + t_sw, PAL can be completely reactive; it can observe the queue build-up and react only upon receiving a packet whose delay guarantee would otherwise be violated. We expect the condition D ≥ t_pkt + t_sw to be met by most realistic system configurations. This is because, even for a non-adaptive system, the delay bound D must be at least t_pkt (the time required to process a packet); increasing the delay bound further enables a system to achieve the benefits of adaptive resource allocation.

3.1.2 k^j_a: How Many Processors to Allocate?

Once requested, a processor becomes available to service packets only after a delay of t_sw time units. Thus, the number of processors to be allocated is selected such that all the packets that can arrive within time t_sw (not just packet p that triggered the allocation request) can be serviced prior to their respective deadlines.

If j and k^j_a, respectively, denote the number of currently allocated processors and the number being requested, then the above condition can be met if the queue length at the time when (j + k^j_a) processors are ready to serve packets does not exceed Q^{j+k^j_a}_th + 1. Note that the request for k^j_a additional processors is triggered when q_len = Q^j_th + 1. During the time interval t_sw, packets can arrive into the stage queue at a rate no greater than R_arr; further, with j allocated processors, packets depart the queue at rate R^j_dep. Thus, the maximum increase in queue length is bounded by (R_arr − R^j_dep) × t_sw. Thus, the delay bound for each packet can be satisfied if:

    (Q^{j+k^j_a}_th + 1) − (Q^j_th + 1) ≥ (R_arr − R^j_dep) × t_sw

Substituting the values of Q^{j+k^j_a}_th, Q^j_th, and R^j_dep from Equation (1) and Conclusion 1, we get:

    k^j_a × (D / t_pkt − 1) + τ × k^j_a / t_pkt ≥ (R_arr − j / t_pkt) × t_sw
    ⇒  k^j_a ≥ (R_arr × t_pkt − j) × t_sw / (D + τ − t_pkt)                (4)

This leads to the following conclusion.

Conclusion 2  When the queue length for a stage with j allocated processors reaches its threshold (as defined in Conclusion 1), the smallest number of processors k^j_a that must be allocated is given by:

    k^j_a = min( (R_arr × t_pkt − j) × t_sw / (D + τ − t_pkt), N_p − j )   (5)

where N_p is the total number of processors in the system.
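The allocation side of PAL can be sketched directly from Conclusions 1 and 2. In the sketch below, applying a ceiling to the bound of Equation (5) and a floor to the speculative threshold are our assumptions (the requested count and the threshold must be integers); all names and parameter values are illustrative.

    from math import ceil, floor

    def q_threshold(j, d, t_pkt, t_sw, r_arr, tau):
        """Conclusion 1: queue length above which more processors are needed."""
        q_lim = floor(j * (d / t_pkt - 1.0))            # Equation (2)
        if d >= t_pkt + t_sw:                           # reactive regime
            return q_lim
        # Speculative regime: trigger earlier to hide the allocation latency.
        return floor(q_lim - tau * (r_arr - j / t_pkt))

    def processors_to_allocate(j, n_p, d, t_pkt, t_sw, r_arr, tau):
        """Conclusion 2 / Equation (5): smallest number of extra processors to request."""
        need = (r_arr * t_pkt - j) * t_sw / (d + tau - t_pkt)
        return min(max(ceil(need), 0), n_p - j)

    def on_packet_enqueued(q_len, j, n_p, d, t_pkt, t_sw, r_arr, tau):
        """Allocation trigger: request extra processors once q_len exceeds the threshold."""
        if q_len > q_threshold(j, d, t_pkt, t_sw, r_arr, tau):
            return processors_to_allocate(j, n_p, d, t_pkt, t_sw, r_arr, tau)
        return 0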
We make the following observations.

• The value of k^j_a shown in Conclusion 2 is a function of j (j ∈ [0, N_p]), the number of currently allocated processors. The smaller the value of j, the greater the value of k^j_a, and vice versa. This relationship allows the system to ramp up quickly from a low-utilization state (with very few allocated processors) by allocating a larger number of processors first. The number of processors allocated with each trigger decreases at higher levels of utilization. In the limit, when j = N_p, no additional processors can be allocated; hence, k^{N_p}_a = 0.

• Equation (3) defines a lower bound on the value of τ (in particular, τ ≥ t_pkt + t_sw − D). If τ is selected to be equal to this lower bound (i.e., τ = t_pkt + t_sw − D), then Equation (5) reduces to:

    k^j_a = min( R_arr × t_pkt − j, N_p − j )                              (6)

Observe that packet processing systems are often provisioned to meet the demands of the expected maximum arrival rate R_arr. In such an appropriately provisioned system, the number of processors N_p is, in fact, equal to R_arr × t_pkt. In that case, for all values of j, k^j_a = N_p − j. Thus, the FSA contains a direct transition from every state to the state with N_p allocated processors; every trigger to increase the processor allocation will request all available processors.

Selecting τ > t_pkt + t_sw − D results in smaller values of k^j_a. The greater the value of τ, the smaller the value of k^j_a. Thus, k^j_a values for a speculative system (τ > 0) are smaller than those for a reactive system (τ ≤ 0); further, even for a reactive system, selecting τ < 0 yields larger values of k^j_a compared to the case when τ = 0. Smaller values of k^j_a are preferable; they allow PAL to increase processor allocations gradually and thereby better utilize the available processors across different pipeline stages.

• If D + τ > (R_arr × t_pkt × t_sw + t_pkt), then from Equation (5), for all values of j < N_p, k^j_a = 1; thus, processors will be allocated one at a time. With this, the number of reachable states in the FSA becomes equal to N_p. The greater the number of reachable states in the FSA, the better the opportunity for PAL to align processor allocation to processing demand.

Figure 5: Condition for release to be beneficial.

3.2 Processor Release

3.2.1 When to Release Processors?

Processors allocated to a stage should be released when the processors are running at low levels of utilization. This would be the case when the packet processing capacity allocated to a stage (and hence the maximum packet service rate) significantly exceeds the packet arrival rate for the stage. Instead of monitoring processor utilization levels continuously, PAL simply estimates the possibility of over-allocation by monitoring the queue length. In particular, PAL uses the condition q_len = 0 as a trigger to release an appropriate number of processors.

3.2.2 k^j_r: How Many Processors to Release?

To determine the number of processors that can be released, we first derive the minimum number of processors n_min that must remain allocated to ensure that the delay guarantee for any future packets is not violated.

To derive the value of n_min, we observe that releasing a processor is beneficial only if a subsequent allocation of processors to the stage is separated from the release event by at least t_sw (the time required to release/allocate a processor) (Figure 5). Observe that any more processors than n_min would need to be allocated only τ units of time prior to the instant when the queue length reaches Q^{n_min}_lim + 1 (from its current value of 0). Given that packets can arrive at the maximum rate R_arr and that packets are serviced at rate R^{n_min}_dep with n_min allocated processors, the earliest time at which additional processors may need to be added is given by:

    T_release = (Q^{n_min}_lim + 1) / (R_arr − R^{n_min}_dep) − τ          (7)

By requiring that T_release ≥ t_sw, we derive n_min as:

    n_min ≥ ((t_sw + τ) × R_arr − 1) × t_pkt / (D + τ + t_sw − t_pkt)      (8)

This leads to the following conclusion.

Conclusion 3  Once the queue becomes empty, an adaptive system can release all but n_min processors and still ensure that the delay guarantee is met for all packets. Hence, the number of processors that can be released from state j is bounded by:

    k^j_r ≤ j − n_min                                                      (9)
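The release side follows the same pattern. The sketch below evaluates Equation (8) and the bound of Equation (9); the ceiling on n_min is our assumption (a fractional processor cannot remain allocated), as is the clamping to non-negative values.

    from math import ceil

    def min_allocation(d, t_pkt, t_sw, r_arr, tau):
        """Equation (8): smallest allocation that must survive a release so that a
        later re-allocation is at least t_sw away under worst-case arrivals."""
        bound = ((t_sw + tau) * r_arr - 1.0) * t_pkt / (d + tau + t_sw - t_pkt)
        return max(ceil(bound), 0)

    def releasable(j, d, t_pkt, t_sw, r_arr, tau):
        """Equation (9): with an empty queue, at most j - n_min processors may be released."""
        return max(j - min_allocation(d, t_pkt, t_sw, r_arr, tau), 0)

    # Eager variant permitted by Conclusion 3: on q_len == 0, release everything allowed.
    def on_queue_empty(j, d, t_pkt, t_sw, r_arr, tau):
        return releasable(j, d, t_pkt, t_sw, r_arr, tau)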
We make the following observations.

• Substituting τ = t_pkt + t_sw − D in Equation (8), we get:

    n_min ≥ ((t_pkt + 2 × t_sw − D) × R_arr − 1) × t_pkt / (2 × t_sw)      (10)

Thus, the greater the delay bound D, the smaller the value of n_min. In fact, if D ≥ t_pkt + 2 × t_sw, then n_min = 0.

• If τ ≤ −t_sw, then the condition

    T_release = (Q^{n_min}_lim + 1) / (R_arr − R^{n_min}_dep) − τ ≥ t_sw

is satisfied for all values of n_min, including n_min = 0.

We summarize these observations in the following conclusion.

Conclusion 4  If D ≥ t_pkt + 2 × t_sw, then by selecting τ ≤ −t_sw, one can design an adaptive system in which, once the queue becomes empty, the system can release all the idle processors while ensuring that the delay incurred by each packet is bounded by D.
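Putting the allocation and release rules together, one way to materialize the FSA of Figure 3 is to precompute the transitions for every state j. The sketch below does this under our own choices (integer roundings, τ = −t_sw, and hypothetical parameter values); it illustrates the construction methodology rather than reproducing the paper's reference implementation.

    from math import ceil, floor

    def build_fsa(n_p, d, t_pkt, t_sw, r_arr, tau):
        """For each state j (current allocation), tabulate the trigger threshold
        Q_th^j, the allocation jump k_a^j, and the maximum release k_r^j."""
        n_min = max(ceil(((t_sw + tau) * r_arr - 1.0) * t_pkt
                         / (d + tau + t_sw - t_pkt)), 0)            # Equation (8)
        fsa = {}
        for j in range(n_p + 1):
            q_lim = floor(j * (d / t_pkt - 1.0))                    # Equation (2)
            q_th = (q_lim if d >= t_pkt + t_sw
                    else floor(q_lim - tau * (r_arr - j / t_pkt)))  # Conclusion 1
            k_a = min(max(ceil((r_arr * t_pkt - j) * t_sw
                               / (d + tau - t_pkt)), 0), n_p - j)   # Equation (5)
            k_r = max(j - n_min, 0)                                 # Equation (9)
            fsa[j] = {"q_th": q_th, "k_a": k_a, "k_r": k_r}
        return fsa

    # Hypothetical stage: 8 processors, D = 40 us, t_pkt = 4 us, t_sw = 10 us,
    # R_arr = 2 packets/us, and tau = -t_sw (so n_min = 0, per Conclusion 4).
    table = build_fsa(n_p=8, d=40.0, t_pkt=4.0, t_sw=10.0, r_arr=2.0, tau=-10.0)

How τ is chosen, and how much of the permitted release is actually taken, is exactly the policy space discussed next.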
(multiplicative-decrease).A little more involved policy is one that measuresthe current arrival rate of packets and releases all but the processors needed to match the arrival rate.In particular,such a policy measures over a certain time interval∆the arrival rate R arr(∆)of packets,and then computes k j r as follows:k j r=min j−n min,j− R arr(∆)1/t pkt (11)where R arr(∆)1/t pkt= R arr(∆)×t pkt denotes the number ofprocessors required to match the packet arrival rateat a stage.With such a policy,the action of releas-ing processors is triggered when the queue is empty(i.e.,q len=0)and the number of required processorsreduces.Observe that the lazy release policies introduce sev-eral reachable intermediate states in the FSA.This allows PAL to better match processor allocations to the require-ments;further,it reduces the number of allocate/release transitions performed by the system(and thereby leads to a stable system design).4Experimental EvaluationIn this section,wefirst describe our experimental method-ology,and then present results of our simulations.4.1ExperimentalMethodologyFigure6:3G wireless router architecture Application Model:We conduct all our experiments us-ing the model of a3G wireless router—a canonical packet processing application.Figure6shows the application graph with different packet processing stages.In this ap-plication,each incoming packet is sent to a link layer de-multiplexor that identifies IPv4/IPv6packets and deter-mines whether or not the packet header is compressed. Packets with compressed headers undergo an appropriate decompression.The packets then go through appropri-ate IP forwarding that determines the next hop for the packet.The next hop address determines whether the packet should be simply forwarded,compressed and for-warded,or converted to the other version of the protocol.Table2summarizes the t ipktvalues for each of these packet processing stages;we do not model Ingress and Egress stages for our analysis(since these stages often require dedicated processors.Traces:We analyze traces collected from various points in the Internet.For brevity,we only discuss two traces –a6hour subset of NLANR trace(AUCKLAND)con-taining20million packets collected from a link connect-7。
