A survey of proxy cache evaluation techniques


Pareto-efficiency


[Payoff table excerpt] A does not tell the whole truth: A: -5 / B: 0
Dominant Strategy
• Prisoner's dilemma: Betraying the accomplice is a dominant strategy.
• Dominant strategy: A dominant strategy is a strategy that is always better than all other strategies, no matter which strategies the other players choose. A strategy si* of agent i dominates all of its other strategies si if, independently of the strategies s-i of the other players, it always yields the highest payoff, i.e. ui(si*, s-i) > ui(si, s-i) for all s-i.
Nash Equilibrium
• Pareto-efficiency: A set of strategies is Pareto-efficient if there is no other set of strategies in which every player receives at least the same payoff and at least one player receives a strictly better one.
• Nash equilibrium: A Nash equilibrium describes a state of the game (an equilibrium) in which no player can increase their payoff by changing only their own strategy.
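These definitions can be checked mechanically on a small payoff matrix. The following Python sketch is illustrative only; the prisoner's-dilemma payoff values are assumed for the example and do not come from the slides.

```python
from itertools import product

# Payoffs for a 2-player prisoner's dilemma, indexed by (strategy_A, strategy_B).
# Values are (payoff_A, payoff_B); the numbers are illustrative assumptions.
PAYOFFS = {
    ("silent", "silent"): (-1, -1),
    ("silent", "betray"): (-10, 0),
    ("betray", "silent"): (0, -10),
    ("betray", "betray"): (-5, -5),
}
STRATEGIES = ["silent", "betray"]

def payoff(player, s_a, s_b):
    return PAYOFFS[(s_a, s_b)][player]

def is_dominant(player, s_star):
    """s_star dominates every other strategy of `player` for every opponent strategy."""
    others = [s for s in STRATEGIES if s != s_star]
    for opp in STRATEGIES:
        for s in others:
            if player == 0 and payoff(0, s_star, opp) <= payoff(0, s, opp):
                return False
            if player == 1 and payoff(1, opp, s_star) <= payoff(1, opp, s):
                return False
    return True

def nash_equilibria():
    """Strategy profiles where no player gains by unilaterally deviating."""
    eqs = []
    for s_a, s_b in product(STRATEGIES, repeat=2):
        best_a = all(payoff(0, s_a, s_b) >= payoff(0, alt, s_b) for alt in STRATEGIES)
        best_b = all(payoff(1, s_a, s_b) >= payoff(1, s_a, alt) for alt in STRATEGIES)
        if best_a and best_b:
            eqs.append((s_a, s_b))
    return eqs

print([s for s in STRATEGIES if is_dominant(0, s)])  # ['betray']
print(nash_equilibria())                              # [('betray', 'betray')]
```

With these payoffs, betraying dominates staying silent for both players, and mutual betrayal is the unique Nash equilibrium even though mutual silence would be Pareto-better, which is exactly the point the slides make.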

Dell EMC Open Networking Switch S4048T-ON Product Introduction


The Dell EMC PowerSwitch S4048T-ON switch is the industry's latest data center networking solution, empowering organizations to deploy modern workloads and applications designed for the open networking era. Businesses that have made the transition away from monolithic proprietary mainframe systems to industry-standard server platforms can now enjoy even greater benefits from Dell Technologies' open networking platforms. By using industry-leading hardware and a choice of leading network operating systems to simplify data center fabric orchestration and automation, organizations can tailor their network to their unique requirements and accelerate innovation. These new offerings provide the flexibility needed to transform data centers. High-capacity network fabrics are cost-effective and easy to deploy, providing a clear path to the software-defined data center of the future with no vendor lock-in. The S4048T-ON supports the open source Open Network Install Environment (ONIE) for zero-touch installation of alternate network operating systems, including the feature-rich Dell EMC Networking OS9 and Dell EMC SmartFabric OS10.

High-density 1/10G BASE-T switch
The Dell EMC PowerSwitch S-Series S4048T-ON is a high-density 100M/1G/10G/40GbE top-of-rack (ToR) switch purpose-built for applications in high-performance data center and computing environments. Leveraging a non-blocking switching architecture, the S4048T-ON delivers line-rate L2 and L3 forwarding capacity within a conservative power budget. The compact S4048T-ON design provides industry-leading density of 48 dual-speed 1/10G BASE-T (RJ45) ports, as well as six 40GbE QSFP+ uplinks, to conserve valuable rack space and simplify the migration to 40Gbps in the data center core. Each 40GbE QSFP+ uplink can also support four 10GbE (SFP+) ports with a breakout cable. In addition, the S4048T-ON incorporates multiple architectural features that optimize data center network flexibility, efficiency and availability, including I/O panel to PSU airflow or PSU to I/O panel airflow for hot/cold aisle environments, and redundant, hot-swappable power supplies and fans. The S4048T-ON supports the feature-rich Dell EMC Networking OS9 and Dell EMC SmartFabric OS10, VLT, network virtualization features such as VRF-lite and VXLAN Gateway, and the Dell Embedded Open Automation Framework.
• The S4048T-ON is the only switch in the industry that supports traditional network-centric virtualization (VRF) and hypervisor-centric virtualization (VXLAN).
The switch fully supports the L2 VXLAN gateway function and has hardware support for L3.
• The S4048T-ON also supports Dell Technologies' Embedded Open Automation Framework, which provides enhanced network automation and virtualization capabilities for virtual data center environments.
• The Open Automation Framework comprises a suite of interrelated network management tools that can be used together or independently to provide a network that is flexible, available and manageable while helping to reduce operational expenses.

Key applications
• Dynamic data centers ready to make the transition to software-defined environments
• High-density 10GBASE-T ToR server access in high-performance data center environments
• Lossless iSCSI storage deployments that can benefit from innovative iSCSI and DCB optimizations that are unique to Dell Networking switches
• When running Dell EMC Networking OS9, Active Fabric™ implementation for large deployments in conjunction with the Dell EMC Z-Series, creating a flat, two-tier, non-blocking 10/40GbE data center network design
• High-performance SDN/OpenFlow 1.3 enabled, with the ability to interoperate with industry-standard OpenFlow controllers
• As a high-speed VXLAN Layer 2 gateway that connects hypervisor-based overlay networks with non-virtualized infrastructure

Key features - general
• 48 dual-speed 1/10GbE BASE-T (RJ45) ports and six 40GbE (QSFP+) uplinks (totaling 72 10GbE ports with breakout cables) with OS support
• 1.44Tbps (full-duplex) non-blocking switching fabric delivers line-rate performance under full load with sub-600ns latency (the arithmetic behind these figures is sketched at the end of this section)
• I/O panel to PSU airflow or PSU to I/O panel airflow
• Supports the open source ONIE for zero-touch installation of alternate network operating systems
• Redundant, hot-swappable power supplies and fans

DELL EMC POWERSWITCH S4048T-ON SWITCH
Energy-efficient 10GBASE-T top-of-rack switch optimized for data center efficiency

Key features with Dell EMC Networking OS9
• Scalable L2 and L3 Ethernet switching with QoS and a full complement of standards-based IPv4 and IPv6 features, including OSPF, BGP and PBR (Policy Based Routing) support
• VRF-lite enables sharing of networking infrastructure and provides L3 traffic isolation across tenants
• Increase the VM mobility region by stretching L2 VLANs within or across two DCs with unique VLT capabilities such as Routed VLT and VLT Proxy Gateway
• VXLAN gateway functionality support for bridging the non-virtualized and the virtualized overlay networks with line-rate performance
• Embedded Open Automation Framework adding automated configuration and provisioning capabilities to simplify the management of network environments
• Supports Puppet agent for DevOps
• Modular Dell EMC Networking OS software delivers inherent stability as well as enhanced monitoring and serviceability functions
• Enhanced mirroring capabilities including 1:4 local mirroring, Remote Port Mirroring (RPM), and Encapsulated Remote Port Mirroring (ERPM).
• Rate shaping combined with flow based mirroring enables the user to analyze fine grained flows• Jumbo frame support for large data transfers• 128 link aggregation groups with up to 16 members per group, using enhanced hashing• Converged network support for DCB, with priority flow control (802.1Qbb), ETS (802.1Qaz), DCBx and iSCSI TLV• S4048T-ON supports RoCE and Routable RoCE to enable convergence of compute and storage on Active Fabric• User port stacking support for up to six units and unique mixed mode stacking that allows stacking of S4048-ON with S4048T-ON to provide combination of 10G SFP+ and RJ45 ports in a stack1/10G BASE-T cabling distancesCable T ype 1G BASE-T 10G BASE-TCat 6 UTP100m (330 ft) 55m (180 ft)Cat 6 STP100m (330 ft) 100m (330 ft)Cat 6A UTP100m (330 ft) 100m (330 ft)Cat 7100m (330 ft) 100m (330 ft)Product DescriptionS4048T S4048T, 48x 10GBASE-T, 6x QSFP+, 2x AC PSU, 2x fans, I/O Panel to PSU Airflow S4048T, 48x 10GBASE-T, 6x QSFP+, 2x AC PSU, 2x fans, PSU to I/O Panel AirflowRedundant power supplies S4048T, AC Power Supply, I/O Panel to PSU Airflow S4048T, AC Power Supply, PSU to I/O Panel AirflowFans S4048T Fan Module, I/O Panel to PSU Airflow S4048T Fan Module, PSU to I/O Panel AirflowOptics Transceiver,40GE QSFP+ Short Reach Optic,850nm wavelength,100-150m reach on OM3/OM4 Transceiver, 40GbE QSFP+ ESR, 300m reach on OM3 / 400m on OM4Transceiver, 40GbE QSFP+ PSM4 with 1m pigtail to male MPO SMF, 2km reach Transceiver, 40GbE QSFP+ PSM4 with 5m pigtail to male MPO SMF, 2km reach Transceiver, 40GbE QSFP+ PSM4 with 15m pigtail to male MPO SMF, 2km reach Transceiver, 40GbE QSFP+ LR4, 10km reach on SMFTransceiver, 40GbE QSFP+ to 1G Cu SFP adapter, QSA1 meter QSFP+ to QSFP+ OM3 MTP Fiber Cable. Requires QSFP+ Optics3 meter QSFP+ to QSFP+ OM3 MTP Fiber Cable. Requires QSFP+ Optics5 meter QSFP+ to QSFP+ OM3 MTP Fiber Cable. Requires QSFP+ Optics7 meter QSFP+ to QSFP+ OM3 MTP Fiber Cable. Requires QSFP+ Optics10 meter QSFP+ to QSFP+ OM3 MTP Fiber Cable. Requires QSFP+ Optics25 meter QSFP+ to QSFP+ OM3 MTP Fiber Cable. Requires QSFP+ Optics50 meter QSFP+ to QSFP+ OM3 MTP Fiber Cable. Requires QSFP+ Optics75 meter QSFP+ to QSFP+ OM3 MTP Fiber Cable. Requires QSFP+ Optics100 meter QSFP+ to QSFP+ OM3 MTP Fiber Cable. 
Requires QSFP+ OpticsProduct DescriptionCables Cable, QSFP+ to QSFP+, 40GbE Passive Copper Direct Attach Cable, 0.5 MeterCable, QSFP+ to QSFP+, 40GbE Passive Copper Direct Attach Cable, 1 MeterCable, QSFP+ to QSFP+, 40GbE Passive Copper Direct Attach Cable, 3 MeterCable, QSFP+ to QSFP+, 40GbE Passive Copper Direct Attach Cable, 5 MeterCable, QSFP+ to QSFP+, 40GbE Passive Copper Direct Attach Cable, 7 MeterCable, QSFP+, 40GbE, Active Fiber Optical Cable, 10 Meters (No optics required)Cable, QSFP+, 40GbE, Active Fiber Optical Cable, 50 Meters (No optics required)Cable, 40GbE QSFP+ to 4 x 10GbE SFP+, Active Optical Breakout CableCable, 40GbE (QSFP+) to 4 x 10GbE SFP+ Passive Copper Breakout Cable, 0.5 Meters Cable, 40GbE (QSFP+) to 4 x 10GbE SFP+ Passive Copper Breakout Cable, 1 MeterCable, 40GbE (QSFP+) to 4 x 10GbE SFP+ Passive Copper Breakout Cable, 3 MetersCable, 40GbE (QSFP+) to 4 x 10GbE SFP+ Passive Copper Breakout Cable, 5 MetersCable, 40GbE (QSFP+) to 4 x 10GbE SFP+ Passive Copper Breakout Cable, 7 MetersCable, 40GbE MTP (QSFP+) to 4xLC Optical Connectors, 1M(QSFP+,SFP+ Optics REQ,not incl) Cable, 40GbE MTP (QSFP+) to 4xLC Optical Connectors, 3M(QSFP+,SFP+ Optics REQ,not incl) Cable, 40GbE MTP (QSFP+) to 4xLC Optical Connectors, 5M(QSFP+,SFP+ Optics REQ,not incl) Cable, 40GbE MTP (QSFP+) to 4xLC Optical Connectors, 7M(QSFP+,SFP+ Optics REQ,not incl)Software L3 Dell EMC Networking OSS4048T: Dell EMC Networking software license operating system software license for advanced L3 features, latest versionS4048T: Dell EMC Networking software licenseDell EMC Networking OS operating system software license, latest versionNote: in-field change of airflow direction only supported when unit is powered down and all fan andpower supply units are replaced with airflow moving in a uniform directionSupported operating systems Big Switch Networks Switch Light OSDell EMC Networking OS9 and Dell EMC SmartFabric OS10 Pluribus OSTechnical specifications48 fixed 10GBase-T ports supporting 100M/1G/10G speeds6 fixed 40 Gigabit Ethernet QSFP+ ports1 RJ45 console/management port withRS232 signaling1 USB 2.0 type A to support mass storage device1 Micro-USB 2.0 type B Serial Console Port1 8 GB SSD ModuleSize: 1RU, 1.71 x 17.09 x 18.11”(4.35 x 43.4 x 46 cm) (H x W x D)Weight: 23 lbs (10.43kg)ISO 7779 A-weighted sound pressure level:65 dB at 77°F (25°C)Power supply: 100–240V AC 50/60HzMax. thermal output: 1568 BTU/hMax. current draw per system:4.6 A at 460W/100VAC,2.3 A at 460W/200VACMax. power consumption: 460 WattsT ypical power consumption: 338 WattsMax. operating specifications:Operating temperature: 32°F to 113°F(0°C to 45°C)Operating humidity: 5 to 90% (RH),non-condensingMax. 
non-operating specifications:Storage temperature: –40°F to 158°F(–40°C to 70°C) condensingRedundancyHot swappable redundant powerHot swappable redundant fansPerformance GeneralSwitch fabric capacity:1.44Tbps (full-duplex)720Gbps (half-duplex)Forwarding Capacity: 1080 MppsLatency: 2.8 usPacket buffer memory: 16MBCPU memory: 4GBOS9 Performance:MAC addresses: 160KARP table 128KIPv4 routes: 128KIPv6 hosts: 64KIPv6 routes: 64KMulticast routes: 8KLink aggregation: 16 links per group, 128 groupsLayer 2 VLANs: 4KMSTP: 64 instancesVRF-Lite: 511 instancesLAG load balancing: Based on layer 2,IPv4 or IPv6 headersLatency: Sub 3usQOS data queues: 8QOS control queues: 12Egress ACL: 1KQoS: Default 3K entries scalable to 12KIEEE compliance with Dell EMC Networking OS9802.1AB LLDP802.1D Bridging, STP802.1p L2 Prioritization802.1Q VLAN T agging, Double VLAN T agging, GVRP802.1Qbb PFC802.1Qaz ETS802.1s MSTP802.1w RSTP802.1X Network Access Control802.3ab Gigabit Ethernet (1000BASE-T)802.3ac Frame Extensions for VLAN T agging802.3ad Link Aggregation with LACP802.3ae 10 Gigabit Ethernet (10GBase-X)with QSA802.3ba 40 Gigabit Ethernet (40GBase-SR4,40GBase-CR4, 40GBase-LR4) onoptical ports802.3u Fast Ethernet (100Base-TX)802.3x Flow Control802.3z Gigabit Ethernet (1000Base-X)with QSA802.3az Energy Efficient EthernetANSI/TIA-1057 LLDP-MEDForce10 PVST+Max MTU 9216 bytesRFC and I-D compliance withDell EMC Networking OS9General Internet protocols768 UDP793 TCP854 T elnet959 FTPGeneral IPv4 protocols791 IPv4792 ICMP826 ARP1027 Proxy ARP1035 DNS (client)1042 Ethernet T ransmission1305 NTPv31519 CIDR1542 BOOTP (relay)1812 Requirements for IPv4 Routers1918 Address Allocation for Private Internets2474 Diffserv Field in IPv4 and Ipv6 Headers2596 Assured Forwarding PHB Group3164 BSD Syslog3195 Reliable Delivery for Syslog3246 Expedited Assured Forwarding4364 VRF-lite (IPv4 VRF with OSPF, BGP,IS-IS and V4 multicast)5798 VRRPGeneral IPv6 protocols1981 Path MTU Discovery Features2460 Internet Protocol, Version 6 (IPv6)Specification2464 T ransmission of IPv6 Packets overEthernet Networks2711 IPv6 Router Alert Option4007 IPv6 Scoped Address Architecture4213 Basic T ransition Mechanisms for IPv6Hosts and Routers4291 IPv6 Addressing Architecture4443 ICMP for IPv64861 Neighbor Discovery for IPv64862 IPv6 Stateless Address Autoconfiguration 5095 Deprecation of T ype 0 RoutingHeaders in IPv6IPv6 Management support (telnet, FTP,TACACS, RADIUS, SSH, NTP)VRF-Lite (IPv6 VRF with OSPFv3, BGPv6, IS-IS)RIP1058 RIPv1 2453 RIPv2OSPF (v2/v3)1587 NSSA 4552 Authentication/2154 OSPF Digital SignaturesConfidentiality for2328 OSPFv2 OSPFv32370 Opaque LSA 5340 OSPF for IPv6IS-IS1142 Base IS-IS Protocol1195 IPv4 Routing5301 Dynamic hostname exchange mechanism for IS-IS5302 Domain-wide prefix distribution withtwo-level IS-IS 5303 3-way handshake for IS-IS pt-to-ptadjacencies5304 IS-IS MD5 Authentication5306 Restart signaling for IS-IS5308 IS-IS for IPv65309 IS-IS point to point operation over LANdraft-isis-igp-p2p-over-lan-06draft-kaplan-isis-ext-eth-02BGP1997 Communities2385 MD52545 BGP-4 Multiprotocol Extensions forIPv6 Inter-Domain Routing2439 Route Flap Damping2796 Route Reflection2842 Capabilities2858 Multiprotocol Extensions2918 Route Refresh3065 Confederations4360 Extended Communities4893 4-byte ASN5396 4-byte ASN representationsdraft-ietf-idr-bgp4-20 BGPv4draft-michaelson-4byte-as-representation-054-byte ASN Representation (partial)draft-ietf-idr-add-paths-04.txt ADD PATHMulticast1112 IGMPv12236 IGMPv23376 IGMPv3MSDP, PIM-SM, PIM-SSMSecurity2404 
The Use of HMACSHA- 1-96 withinESP and AH2865 RADIUS3162 Radius and IPv63579 Radius support for EAP3580 802.1X with RADIUS3768 EAP3826 AES Cipher Algorithm in the SNMPUser Base Security Model4250, 4251, 4252, 4253, 4254 SSHv24301 Security Architecture for IPSec4302 IPSec Authentication Header4303 ESP Protocol4807 IPsecv Security Policy DB MIBdraft-ietf-pim-sm-v2-new-05 PIM-SMwData center bridging802.1Qbb Priority-Based Flow Control802.1Qaz Enhanced Transmission Selection(ETS)Data Center Bridging eXchange (DCBx)DCBx Application TLV (iSCSI, FCoE)Network management1155 SMIv11157 SNMPv11212 Concise MIB Definitions1215 SNMP Traps1493 Bridges MIB1850 OSPFv2 MIB1901 Community-Based SNMPv22011 IP MIB2096 IP Forwarding T able MIB2578 SMIv22579 T extual Conventions for SMIv22580 Conformance Statements for SMIv22618 RADIUS Authentication MIB2665 Ethernet-Like Interfaces MIB2674 Extended Bridge MIB2787 VRRP MIB2819 RMON MIB (groups 1, 2, 3, 9)2863 Interfaces MIB3273 RMON High Capacity MIB3410 SNMPv33411 SNMPv3 Management Framework3412 Message Processing and Dispatchingfor the Simple Network ManagementProtocol (SNMP)3413 SNMP Applications3414 User-based Security Model (USM) forSNMPv33415 VACM for SNMP3416 SNMPv23417 Transport mappings for SNMP3418 SNMP MIB3434 RMON High Capacity Alarm MIB3584 Coexistance between SNMP v1, v2and v34022 IP MIB4087 IP Tunnel MIB4113 UDP MIB4133 Entity MIB4292 MIB for IP4293 MIB for IPv6 T extual Conventions4502 RMONv2 (groups 1,2,3,9)5060 PIM MIBANSI/TIA-1057 LLDP-MED MIBDell_ITA.Rev_1_1 MIBdraft-grant-tacacs-02 TACACS+draft-ietf-idr-bgp4-mib-06 BGP MIBv1IEEE 802.1AB LLDP MIBIEEE 802.1AB LLDP DOT1 MIBIEEE 802.1AB LLDP DOT3 MIB sFlowv5 sFlowv5 MIB (version 1.3)DELL-NETWORKING-SMIDELL-NETWORKING-TCDELL-NETWORKING-CHASSIS-MIBDELL-NETWORKING-PRODUCTS-MIBDELL-NETWORKING-SYSTEM-COMPONENTMIBDELL-NETWORKING-TRAP-EVENT-MIBDELL-NETWORKING-COPY-CONFIG-MIBDELL-NETWORKING-IF-EXTENSION-MIBDELL-NETWORKING-FIB-MIBDELL-NETWORKING-FPSTATS-MIBDELL-NETWORKING-LINK-AGGREGATIONMIBDELL-NETWORKING-MSTP-MIBDELL-NETWORKING-BGP4-V2-MIBDELL-NETWORKING-ISIS-MIBDELL-NETWORKING-FIPSNOOPING-MIBDELL-NETWORKING-VIRTUAL-LINK-TRUNKMIB DELL-NETWORKING-DCB-MIBDELL-NETWORKING-OPENFLOW-MIB DELL-NETWORKING-BMP-MIB DELL-NETWORKING-BPSTATS-MIB Regulatory compliance SafetyCUS UL 60950-1, Second Edition CSA 60950-1-03, Second Edition EN 60950-1, Second EditionIEC 60950-1, Second Edition Including All National Deviations and Group Differences EN 60825-1, 1st EditionEN 60825-1 Safety of Laser Products Part 1: Equipment Classification Requirements and User’s GuideEN 60825-2 Safety of Laser Products Part 2: Safety of Optical Fibre Communication SystemsFDA Regulation 21 CFR 1040.10 and 1040.11EmissionsInternational: CISPR 22, Class AAustralia/New Zealand: AS/NZS CISPR 22: 2009, Class ACanada: ICES-003:2016 Issue 6, Class AEurope: EN 55022: 2010+AC:2011 / CISPR 22: 2008, Class AJapan: VCCI V-3/2014.04, Class A & V4/2012.04USA: FCC CFR 47 Part 15, Subpart B:2009, Class A RoHSAll S-Series components are EU RoHS compliant.CertificationsJapan: VCCI V3/2009 Class AUSA: FCC CFR 47 Part 15, Subpart B:2009, Class AAvailable with US Trade Agreements Act (TAA) complianceUSGv6 Host and Router Certified on Dell Networking OS 9.5 and greater IPv6 Ready for both Host and RouterUCR DoD APL (core and distribution ALSAN switchImmunityEN 300 386 V1.6.1 (2012-09) EMC for Network Equipment EN 55022, Class AEN 55024: 2010 / CISPR 24: 2010EN 61000-3-2: Harmonic Current Emissions EN 61000-3-3: Voltage Fluctuations and Flicker EN 61000-4-2: ESDEN 
61000-4-3: Radiated Immunity, EN 61000-4-4: EFT, EN 61000-4-5: Surge, EN 61000-4-6: Low Frequency Conducted Immunity

Dell Technologies Services
Consulting: Dell Technologies Consulting Services provides industry professionals with a wide range of tools and the experience you need to design and execute plans to transform your business.
Deployment: Accelerate technology adoption with ProDeploy Enterprise Suite. Trust our experts to lead deployments through planning, configuration and complex integrations.
Management: Regain control of operations with flexible IT management options. Our Residency Services help you adopt and optimize new technologies, and our Managed Services allow you to outsource portions of your environment to us.
Support: Increase productivity and reduce downtime with ProSupport Enterprise Suite. Expert support backed by proactive and predictive artificial intelligence.
Education: Dell Technologies Education Services help you develop the IT skills required to lead and execute transformational strategies. Get certified today. Learn more at /Services.
Plan, deploy, manage and support your IT transformation with our top-rated services. Learn more at DellT /Networking.
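The headline figures quoted in this datasheet (720 Gbps half-duplex line rate, 1.44 Tbps full-duplex fabric, roughly 1000+ Mpps forwarding) can be cross-checked from the port inventory alone. The short calculation below is my own illustrative sketch; the 64-byte-frame overhead values are standard Ethernet assumptions, not datasheet numbers.

```python
# Cross-check of the S4048T-ON headline figures from its port inventory.
ports_10g = 48          # 1/10G BASE-T ports
ports_40g = 6           # 40GbE QSFP+ uplinks

line_rate_gbps = ports_10g * 10 + ports_40g * 40   # 720 Gbps (half-duplex)
fabric_gbps = 2 * line_rate_gbps                   # 1440 Gbps = 1.44 Tbps full-duplex

# Rough forwarding-rate estimate at 64-byte frames:
# 64 B frame + 8 B preamble + 12 B inter-frame gap = 84 B on the wire (assumption).
mpps_estimate = line_rate_gbps * 1e9 / (84 * 8) / 1e6

print(line_rate_gbps, fabric_gbps, round(mpps_estimate))
# 720 1440 1071  -- consistent with the quoted 720 Gbps, 1.44 Tbps and ~1080 Mpps
```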

2022 Professional Certification - Soft Exam - Information Processing Technician: Full Simulation of Error-Prone and Difficult Points, Papers A/B (with Answers), Question Set No. 82


2022 Professional Certification - Soft Exam - Information Processing Technician: Full Simulation of Error-Prone and Difficult Points, Papers A/B (with Answers)
I. Comprehensive questions (15 in total)
1. Single-choice question: Each Web site has its own unique address known as a ( ).
Options for question 1: A. URL (Uniform Resource Locator)  B. IP (Internet Protocol)  C. HTML (Hyper Text Markup Language)  D. www (World Wide Web)
[Answer] A
[Explanation] Translation of the sentence: every website has its own unique address, known as a Uniform Resource Locator (web address).

Quick notes on the common English terms: Web = website; address = address; URL (Uniform Resource Locator) = web address / uniform resource locator; IP (Internet Protocol) = IP address; HTML (Hyper Text Markup Language) = Hypertext Markup Language, a markup language.

HTML consists of a series of tags; these tags give documents on the network a uniform format and are usually associated with web page files. www (World Wide Web) = the World Wide Web, an application service.
2. Single-choice question: A ( ) is a computer system or program that automatically prevents an unauthorized person from gaining access to a computer when it is connected to a network such as the Internet.
Options for question 1: A. firewall  B. gateway  C. router  D. anti-virus software
[Answer] A
[Explanation] This question tests computer English vocabulary.

It belongs to the category of basic networking vocabulary in English.

Question (translated): A ( ) is a computer system or program that, when the computer is connected to a network such as the Internet, automatically prevents an unauthorized person from gaining access to the computer.

A. firewall: a defense system erected between the internal and external networks; it is a form of passive defense whose main function is to block access to the internal network by unauthorized entities (a minimal rule-based sketch of this idea follows below).
B. gateway: interconnects networks above the network layer; used only to interconnect two networks whose higher-layer protocols differ.
C. router: stores and forwards packets between different networks.
D. anti-virus software: can detect, protect, and take action to disable or remove malicious software programs.
3. Single-choice question: Which of the following statements about enterprise information processing is incorrect? ( )
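To make the firewall definition above concrete, here is a minimal, illustrative rule-based packet filter in Python. It is only a sketch of the "block unauthorized access" idea; the rule format, address range, and port list are assumptions, not part of the exam material.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Packet:
    src: str        # source IP address
    dst_port: int   # destination TCP/UDP port

# Illustrative policy: allow web traffic from the internal network, deny everything else.
INTERNAL_NET = ip_network("192.168.0.0/16")   # assumed internal address range
ALLOWED_PORTS = {80, 443}                     # assumed allowed services

def permit(pkt: Packet) -> bool:
    """Return True if the packet is allowed through, False if it is dropped."""
    if ip_address(pkt.src) not in INTERNAL_NET:
        return False                          # unauthorized (external) source
    return pkt.dst_port in ALLOWED_PORTS      # only whitelisted services pass

print(permit(Packet("192.168.1.10", 443)))    # True  - internal host, HTTPS
print(permit(Packet("203.0.113.7", 443)))     # False - external source blocked
print(permit(Packet("192.168.1.10", 23)))     # False - telnet not whitelisted
```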

ProxySG SGOS 6.7.2.102 FIPS Mode Guide


ProxySG in FIPS ModeApplies to ProxySG appliance,Reverse Proxy appliance,and Secure Web Gateway Virtual ApplianceDocument Version:SGOS6.7.2.102Document Revision:5/18/2018What Happens When FIPS Mode is Enabled in SGOS6.7What Happens When FIPS Mode is Enabled in SGOS 6.7 Except where noted,information in this section applies to supported models of the ProxySG appliance,ProxySG Reverse Proxy appliance,and the Secure Web Gateway Virtual Appliance(SWG VA)with SGOS6.7.2.102.FIPS mode enforces the requirements of Federal Information Processing Standard140-2and Common Criteria(CC) on the appliance and ensures the use of FIPS140-2approved algorithms along with FIPS and CC approved behavior.Note that the term FIPS mode refers to secure configuration that meets both FIPS and CC requirements. When FIPS mode is enabled,it enforces the following changes on the appliance:n The Management Console is secured with a TLS1.x connection.n The remote access command line interface is secured with SSHv2.n SNMPv3is enabled;earlier versions of SNMP are disabled.n Remote access via Sky UI is disabled.n ProxySG birth certificates are refreshed with a2048-bit key and signed with SHA-2.n Some external services and features are disabled and other features have special restrictions in place.See "Restrictions in FIPS Mode for SGOS6.7"on page 5for details.n FIPS-relevant services must use a set of approved cryptographic algorithms.See"Cryptographic Algorithms for SGOS6.7"on the facing page for a list of these approved algorithms and a list of services that aren'tsubject to these restrictions.n Additional testing is performed when the appliance is powered on or reset.See"Additional Protections in FIPS Mode in SGOS6.7"on page 7.Other documents,described below,explain how to enable FIPS mode.Enabling FIPS ModeFor instructions on enabling FIPS mode,see the Security Policy document for the ProxySG model and SGOS version you are using.The Security Policy describes how the appliance meets the security requirements of FIPS 140-2as well as how to run the appliance in FIPS mode:https:///content/unifiedweb/en_US/Documentation.html?prodRefKey=1145522&locale=en_ USTo determine if a model or version is FIPS140-2validated,refer to the Cryptographic Module Validation Program (CMVP)validated module listing:/groups/STM/cmvp/documents/140-1/140val-all.htmTo determine if a model or version is Common Criteria certified,refer to the Common Criteria Certified Products listing:/products/Cryptographic Algorithms for SGOS6.7Except where noted,information in this section applies to supported models of the ProxySG appliance,ProxySG Reverse Proxy appliance,and the Secure Web Gateway Virtual Appliance(SWG VA)with SGOS6.7.2.102.In FIPS mode,FIPS-relevant services must use only the cryptographic algorithms and functions listed in the table below.The Security Policy for each platform contains additional information about how each algorithm is used. 
FIPS-Approved Algorithm ImplementationsFIPS-relevant services include local and remote administration of the ProxySG using the Management Console (HTTPS over TLS),remote login utility(SSHv2),SNMP(v3only),and connections to Symantec for services such as WebPulse,DRTR,CachePulse,and licensing entitlements.FIPS-relevant services also include creation,storage, management,and deletion of Critical Security Parameters(CSPs)as defined in the Security Policy.These services must abide by the algorithms specified in the table below.Approved algorithms may change over time.This list includes the algorithms that were approved for the latest ProxySG FIPS140-2validation.The table below lists the non-approved algorithms available in non-FIPS mode.Note that MD5is also permitted for use in FIPS mode with the TLS protocol;however,it is not permitted for general hashing services.Services that Aren't FIPS-RelevantSome services aren't relevant to FIPS140-2or Common Criteria and need not comply with the algorithms in the table above.The following services have no FIPS140-2limitations on algorithms that can be used: n IWA Directn SSH,SSL,and TLS connections that are proxied through the appliance(for example,connections using the HTTPS proxy)n Services that are disabled in FIPS modeRestrictions in FIPS Mode for SGOS6.7Except where noted,information in this section applies to supported models of the ProxySG appliance,ProxySG Reverse Proxy appliance,and the Secure Web Gateway Virtual Appliance(SWG VA)with SGOS6.7.2.102.When FIPS mode is enabled,some external services and features are disabled or have special restrictions in place. Disabled External ServicesThe following external services are disabled in FIPS mode:n Automatic registration with Symantec Director(You can still manually add ProxySG to Director.)n Websense offboxn Identdn Netbios respondern Authenticated WCCPn Blue Coat Sky UIDisabled FeaturesThe following ProxySG features are disabled in FIPS mode:n Uploads of unencrypted access logsn PCAP transfers via FTPn Archive configuration via FTP or TFTPn DES-encrypted import/export of private keys when creating a FIPS-compliant keyringn(Not applicable to virtual appliances) Using the LCD after FIPS mode is initializedn Automatic sending of mini-contexts and heartbeatsn Connection forwardingn Ability to specify plaintext passwords and secrets on the command line when connected via the serial console n Session monitor(replication protocol does not support TLS)n XML realmsn SGRP(failover)Although disabled features may still appear in the user interface and can be entered at the command line,they cannot be applied;an error message appears when you apply these selections.Other RestrictionsFIPS has the following additional restrictions on the ProxySG:n Downloading of configuration items must use TLS.n ADN configuration must be secure-only and secure-outbound.n The serial port must be secured with a password.n Off-box authentication must use TLS.n Content filtering lists must be downloaded over TLS.n WebPulse connections must use TLS.n Internal certificate generation using only FIPS approved cryptographic algorithms and sizes.See"Cryptographic Algorithms for SGOS6.7"on page 3.n To meet Common Criteria NDPP requirements,password strength requirements are configurable by the administrator with a minimum of15characters.n For FIPS-relevant services,TLS,SSH,and SNMPv3protocols are constrained to approved algorithms.See"Cryptographic Algorithms for SGOS6.7"on page 3.n HTTP access to the Management Console is not allowed.n 
When upgrading,secure content filtering(DRTR)connections are disabled in non-FIPS mode devices.If the device is put into FIPS mode,secure connections are enabled and cannot be disabled.n Firmware images loaded in FIPS mode must have been downloaded in FIPS mode and pass the firmware load test successfully.n When connected to the CLI via serial console,you cannot enter passwords on the command line,as part of the command.If you enter the password on the command line,an error will be returned in FIPS mode.If the password is not entered as part of the command,a prompt will appear,asking you to enter the password.Note that this rule is not in effect when connected to the CLI via SSH.n Connections between ProxyClient and ProxySG are not secure;therefore,ProxyClient should not be used with an appliance in FIPS mode.n Connections between Unified Agent and ProxySG always use HTTPS,regardless of whether the appliance is in FIPS mode.The following applications are subject to FIPS compliance:n Authentication realms that use SSL with the authentication servers.The following authentication realms are recommended for administrative authentication to the appliance:l IWA-BCAAA(with TLS—not SSL)with basic credentialsl Locall X.509certificate based(including certificate realms;refer to the Common Access Card Solutions Guide for information)l LDAP with TLS(not SSL)l RADIUSn URL database downloads over SSLn Secure heartbeatsn Secure access log uploadsn Upon upgrading,the ProxySG uses the SSL device profile instead of the SSL client profile for external applications that the ProxySG connects to(such as an ADN concentrator,an LDAP client,a BCAAA client, and WebPulse).After entering the FIPS mode of operation,the default SSL device profile is created as FIPS compliant and thus allows only FIPS-compliant SSL versions,ciphers,and so forth.n If you use Symantec Management Center to monitor devices in your network,statistics collection does not work for FIPS mode-enabled devices.You can view some device properties but cannot generate reports on the device.Note that the following services are NOT subject to FIPS compliance:n IWA Directn SSH,SSL,and TLS connections that are proxied through the appliance(for example,connections using the HTTPS proxy)n Services that are disabled in FIPS modeAdditional Protections in FIPS Mode in SGOS6.7 Additional Protections in FIPS Mode in SGOS 6.7 Except where noted,information in this section applies to supported models of the ProxySG appliance,ProxySG Reverse Proxy appliance,and the Secure Web Gateway Virtual Appliance(SWG VA)with SGOS6.7.2.102.When FIPS mode is enabled,additional testing is performed when the ProxySG appliance is turned on or reset: n Power-up self-tests ensure all FIPS140-2approved cryptographic algorithms function as per design specifications.Power-up self-tests result in longer boot times.n(Not applicable to virtual appliances) SGOS firmware is cryptographically validated before loading.n SGOS conducts continuous testing of the non-deterministic random number generator(NDRNG)and the deterministic random number generator(DRNG).n Critical Security Parameters(CSPs)are encrypted.See the Security Policy document for a listing of CSPs.n Encrypted Critical Security Parameters can be zeroed when desired by exiting FIPS mode.n Management operations are always conducted using cryptographically strong algorithms and protocol versions.n Administrator passwords are more difficult to guess due to the minimum length requirements.n(Not applicable to virtual appliances) Tamper 
evidence for the outer casing of the appliance is provided by the baffles and tamper evident labels in the FIPS kit.n(Not applicable to virtual appliances) Unauthenticated input is not accepted from the front panel of the ProxySG.What Happens When FIPS Mode is Disabled in SGOS6.7 What Happens When FIPS Mode is Disabled in SGOS 6.7 Except where noted,information in this section applies to supported models of the ProxySG appliance,ProxySG Reverse Proxy appliance,and the Secure Web Gateway Virtual Appliance(SWG VA)with SGOS6.7.2.102.You should disable FIPS mode when decommissioning an appliance or when returning the product or(for non-virtual appliances) hard drive for replacement with an RMA(Returned Materials Authorization).When FIPS mode is disabled,the Master Encryption Key(MEK)is zeroed,thereby rendering all Critical Security Parameters encrypted by the MEK inaccessible.This is called zeroization.The MEK encrypts Critical Security Parameters such as RSA private keys,SSH private keys,Administrator passwords,“Enable”and“Setup”passwords,and SNMP privacy and authentication keys.For more details,see the Security Policy document for your ProxySG model and SGOS version:https:///content/unifiedweb/en_US/Documentation.html?prodRefKey=1145443&locale=en_ USOnce the MEK is zeroized,decryption involving the MEK becomes impossible,making these CSPs unobtainable by an attacker.In addition,rebooting the module causes all temporary keys stored in volatile memory(SSH Session key,TLS session key,DRBG entropy values,and NDRNG entropy values)to be zeroized.The Crypto-Officer must wait until the module has successfully rebooted in order to verify that zeroization has completed.Configuration settings and logs will not be recoverable after exiting FIPS mode.Note that the removal of these items is not cryptographically secure;unless you take additional steps to perform a secure deletion,you might be able to recover the deleted files with appropriate software tools.In addition,when FIPS mode is disabled:n Services and features that were disabled in FIPS mode will be enabled.See"Restrictions in FIPS Mode for SGOS6.7"on page 5for details.n Services that were limited to the set of FIPS-approved algorithms will no longer have those restrictions.See "Cryptographic Algorithms for SGOS6.7"on page 3.n The ProxySG appliance will not perform the power-up self-tests conducted in FIPS mode.See"Additional Protections in FIPS Mode in SGOS6.7"on the previous page.ProxySG FIPS Mode Guide for SGOS6.5 Legal NoticeCopyright©2018Symantec Corp.All rights reserved.Symantec,the Symantec Logo,the Checkmark Logo,Blue Coat, and the Blue Coat logo are trademarks or registered trademarks of Symantec Corp.or its affiliates in the U.S.and other countries.Other names may be trademarks of their respective owners.This document is provided for informational purposes only and is not intended as advertising.All warranties relating to the information in this document,either express or implied,are disclaimed to the maximum extent allowed by law.The information in this document is subject to change without notice.THE DOCUMENTATION IS PROVIDED"AS IS"AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES,INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT,ARE DISCLAIMED,EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE,OR USE OF THIS 
DOCUMENTATION.THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.SYMANTEC CORPORATION PRODUCTS, TECHNICAL SERVICES,AND ANY OTHER TECHNICAL DATA REFERENCED IN THIS DOCUMENT ARE SUBJECT TO U.S.EXPORT CONTROL AND SANCTIONS LAWS,REGULATIONS AND REQUIREMENTS,AND MAY BE SUBJECT TO EXPORT OR IMPORT REGULATIONS IN OTHER COUNTRIES.YOU AGREE TO COMPLY STRICTLY WITH THESE LAWS,REGULATIONS AND REQUIREMENTS,AND ACKNOWLEDGE THAT YOU HAVE THE RESPONSIBILITY TO OBTAIN ANY LICENSES,PERMITS OR OTHER APPROVALS THAT MAY BE REQUIRED IN ORDER TO EXPORT,RE-EXPORT,TRANSFER IN COUNTRY OR IMPORT AFTER DELIVERY TO YOU.Symantec Corporation350Ellis StreetMountain View,CA940435/18/2018。
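The FIPS-mode restrictions described above (management over TLS only, no plain HTTP, FIPS-approved algorithms for FIPS-relevant services) are enforced by the appliance itself, but the flavor of such a constraint can be shown with a short, hedged sketch: a Python TLS client context that refuses anything below TLS 1.2 and narrows the cipher list to AES-GCM suites. This is purely illustrative and is not how SGOS implements FIPS mode; the cipher string and demo host are assumptions.

```python
import ssl
import socket

def fips_style_client_context() -> ssl.SSLContext:
    """Build a TLS client context restricted in the spirit of a FIPS-style policy:
    TLS 1.2 or later only, certificate verification on, and a narrowed cipher list.
    Illustration only, not the ProxySG implementation."""
    ctx = ssl.create_default_context()            # cert verification + hostname check
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 / TLS 1.0 / TLS 1.1
    # Restrict TLS 1.2 ciphers to ECDHE key exchange with AES-GCM.
    # (TLS 1.3 suites are managed separately by OpenSSL and are already AEAD-only.)
    ctx.set_ciphers("ECDHE+AESGCM")
    return ctx

if __name__ == "__main__":
    host = "example.com"                          # placeholder host for the demo
    with socket.create_connection((host, 443), timeout=5) as sock:
        with fips_style_client_context().wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version(), tls.cipher())    # negotiated protocol and cipher
```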

Introduction to RDMA Technology


RDMA: a new communication mechanism
Problem
Basic, principal targets: data position and size vs. memory capacity on the destination host; distributed file system: memory management (data persistence, data reliability, data efficiency, ...)
RDMA vs. TCP/IP
- Traditional TCP/IP: socket-based byte-stream communication; long history, commonplace and widely used (intranet and internet), while RDMA is intranet-only.
- RDMA:
  - Low latency: stack bypass and copy avoidance
  - Kernel bypass: reduces CPU utilization
  - High memory bandwidth utilization
  - Available
One-sided RDMA reads combined with server writes create a race condition! Data persistence.
Challenges for Cache Management
- The server is unaware of RDMA reads, so it is difficult to keep track of the popularity of cached items (an inefficient cache replacement scheme leads to severe performance degradation).
- When the server evicts an item, it needs to invalidate the remote pointer cached on the client side (a broadcast on every eviction is a significant overhead).
Goals:
1) Deliver a sustainable high hit rate with limited cache capacity.
2) Do not compromise the benefit of RDMA reads.
3) Provide a reliable resource reclamation scheme.
Design decisions (a minimal sketch of these ideas follows below):
1) Client-assisted cache management
2) Managing key-value pairs via leases
3) Popularity-differentiated lease assignment
4) Classifying hot and cold items for the access history report
5) Delayed memory reclamation
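The design decisions above can be sketched in a few lines of Python. This is my own illustrative model of "leases with popularity-differentiated duration and delayed reclamation", not code from the system being described; the lease durations and the popularity threshold are assumptions.

```python
import time

# Sketch of client-assisted, lease-based cache management: the server hands out
# longer leases to popular (hot) items and only reclaims an evicted item's memory
# after its lease has lapsed, so clients holding a cached remote pointer never
# read freed memory.

HOT_LEASE_S, COLD_LEASE_S = 10.0, 2.0   # assumed lease durations
HOT_THRESHOLD = 5                        # assumed access-count threshold

class LeasedItem:
    def __init__(self, value):
        self.value = value
        self.accesses = 0                # folded-in client access reports
        self.lease_expiry = 0.0

class LeaseCache:
    def __init__(self):
        self.items = {}
        self.pending_reclaim = []        # evicted items waiting for leases to lapse

    def get(self, key):
        item = self.items[key]
        item.accesses += 1
        lease = HOT_LEASE_S if item.accesses >= HOT_THRESHOLD else COLD_LEASE_S
        item.lease_expiry = max(item.lease_expiry, time.time() + lease)
        return item.value, item.lease_expiry   # client may use its remote pointer until expiry

    def put(self, key, value):
        self.items[key] = LeasedItem(value)

    def evict(self, key):
        # Delayed reclamation: do not free immediately, wait for outstanding leases.
        self.pending_reclaim.append(self.items.pop(key))

    def reclaim(self):
        now = time.time()
        self.pending_reclaim = [it for it in self.pending_reclaim if it.lease_expiry > now]

cache = LeaseCache()
cache.put("k", b"value")
print(cache.get("k"))   # (b'value', <expiry timestamp>)
cache.evict("k")
cache.reclaim()         # memory is actually released only once the lease has lapsed
```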

DNS Cache Poisoning Attacks: Principles and Defense Strategies


(Source: China Communications, 2009.11, p. 17)
II. A DNS server only records the authorized hosts for its local resources; to query information about non-local hosts, it must send a query request to the holder of that information (the authoritative DNS server). To avoid sending a request for every query, the DNS server saves the results returned by the authoritative DNS server in a cache and keeps them for a certain time; this constitutes the DNS cache. A DNS cache poisoning attack causes damage by polluting the DNS cache, replacing the real IP address information of host records in the cache with forged IP address information. (A minimal sketch of this resolver-side TTL caching appears at the end of this section.)
III. The Kaminsky Cache Poisoning Attack
In the summer of 2008, Dan Kaminsky discovered a new type of DNS cache poisoning attack, which attracted wide attention in the network security community. His method overcomes the defects of traditional DNS cache poisoning attacks, namely the long time an attack requires and its very low success rate.
3.1 How the Kaminsky attack works
In a traditional DNS cache poisoning attack, the pollution target is the answer resource records of the response packet, which carry the IP address of the query result (see Table 1(b)). The Kaminsky attack moves one level up: its pollution target is the Authority Records section of the response packet (the authority resource records, see Table 1(b)). Figure 4 shows the Kaminsky attack flow.
(1) The attacker sends a DNS query request to the targeted server. The hostname in the query combines a random sequence with the target domain, such as www276930.<target domain> in Figure 4, where 276930 is a randomly generated sequence. Clearly this queried hostname does not exist, so the answer section of a normally returned response packet would be NXDOMAIN (indicating that the host does not exist).
(2) The targeted server will then, as described in Section 2.1, follow the DNS
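As background for the resolver-side caching described in Section II above, here is a minimal DNS cache with TTL expiry in Python. It is an illustrative sketch only, it performs no real DNS resolution, and the example record and TTL are assumptions; it also shows why a poisoned entry is damaging: once cached, a forged answer is served until its TTL lapses.

```python
import time

class DnsCache:
    """Toy illustration of resolver-side caching: answers from authoritative
    servers are kept for their TTL and served from the cache until they expire."""
    def __init__(self):
        self._store = {}   # name -> (ip, expiry_time)

    def put(self, name, ip, ttl_seconds):
        self._store[name] = (ip, time.time() + ttl_seconds)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None            # cache miss: would query the authoritative server
        ip, expiry = entry
        if time.time() >= expiry:
            del self._store[name]  # expired: must be re-resolved
            return None
        return ip

cache = DnsCache()
cache.put("www.example.com", "93.184.216.34", ttl_seconds=300)  # assumed record
print(cache.get("www.example.com"))   # '93.184.216.34' (served from cache)
print(cache.get("mail.example.com"))  # None (miss -> would go to the authority)
```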

Compilation of Information Retrieval Exam Questions


1. The criteria for evaluating an information website should not include which of the following? Fluidity; conciseness.
2. The information you find on the web comes from diverse sources, including commercial, government, or personal ones.
3. To get a book from the library shelves, you need to know the book's call number.
4. For the shelving order of books in the NUAA library, which of the following is correct? TP301/1005, TP311.1/1002, TP313/1030, TP39/1025.
5. In electronic databases (online catalogs, journal databases), the method that finds the largest number of relevant results is a subject search.
6. To broaden a search, which Boolean operator should you use? OR.
7. When querying a database, we sometimes add a truncation symbol (*) to the end of a search term; its purpose is to retrieve records for all words that begin with that word form (see the sketch after this list).
8. When you cannot find the information you need because you used an inappropriate subject term, you should try replacing the subject term you just used with a synonym.
9. To look up the core journals in your field and their impact factors, which database should you use? Journal Citation Reports (JCR).
10. A researcher published an article that was indexed by SCI. If he wants to receive an automatic notification from the system whenever the article is cited in the future, he must set, under the personalization functions: My Preferences - Citation Alerts.
11. Which of the following databases can be used to find conference literature? Ei Engineering Village 2 (EI); ACM Digital Library (ACM).
12. Which of the following are likely to be current, up-to-date information sources? Articles or information on the Internet.
13. What is the URL of our university library's website homepage?
14. In the Baidu search box, entering 中国银行 and "中国银行" (with quotation marks) gives the following results: the former returns more results, the latter fewer; the former's results include the latter's.
15. Should the intellectual property of information on the network be protected by law? Yes.
16. Plagiarism is including another person's ideas in your own work without indicating that you are quoting them.
17. A journal citation usually offers the chance for a subject or keyword search: author, journal title, date, page numbers.
18. The classification scheme adopted by the Nanjing University of Aeronautics and Astronautics library is the Chinese Library Classification.
19. If a search returns too few results, how should the search terms be adjusted? Add synonyms, switch to broader terms, add related terms.
20. The Chinese Library Classification uses a mixed notation combining letters and numbers.
21. Is this document a conference paper? Please judge.
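Items 6 and 7 above (broadening a search with OR, and right-truncation with *) can be illustrated with a tiny retrieval sketch in Python. The sample titles are invented for illustration and are not part of the exam material.

```python
import re

# Boolean OR broadens a search; '*' truncation matches every word that starts
# with the given stem. Titles are invented examples.
TITLES = [
    "Computing surveys of proxy caches",
    "Computer networks and caching",
    "Information retrieval basics",
]

def matches(title: str, term: str) -> bool:
    # 'comput*' -> any word beginning with 'comput'; otherwise an exact word match.
    if term.endswith("*"):
        pattern = r"\b" + re.escape(term[:-1]) + r"\w*"
    else:
        pattern = r"\b" + re.escape(term) + r"\b"
    return re.search(pattern, title, flags=re.IGNORECASE) is not None

def search_or(terms):
    """Boolean OR: a title is returned if it contains ANY of the terms."""
    return [t for t in TITLES if any(matches(t, term) for term in terms)]

print(search_or(["caching"]))             # 1 hit
print(search_or(["caching", "caches"]))   # OR broadens the search: 2 hits
print(search_or(["comput*"]))             # truncation matches 'Computing' and 'Computer'
```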

Automatic Proxy Configuration


Automatic Proxy Configuration
Björn Knutsson (Bjorn.Knutsson@DoCS.UU.SE), Per Gunningberg (Per.Gunningberg@DoCS.UU.SE)
Dept. of Information Technology, Uppsala University, P.O. Box 325, SE-75105, UPPSALA, SWEDEN

ABSTRACT
Running multiple proxies on a flow means a risk that the actions of one proxy interfere with another. To avoid this, every proxy must be made aware of other proxies operating on the flow, and be configured in a way that avoids interference. This paper argues that the way to do this is to describe proxies in terms of abstract properties that describe the effect they have on the flow. These abstract properties are then used in a configuration engine to determine how to configure the proxies in a non-interfering manner.

I. INTRODUCTION
Network proxies are applications that are deployed in networks to improve the service provided to end points. For example, caching HTTP proxies are put in the network to avoid redundant network traffic and to reduce response times. For end points that are connected via links that experience high bit error rates, a transparent proxy that reduces the impact of bit errors would be very beneficial. It is our belief that the use of network proxies will increase, in order to cope with more and more heterogeneous networks and end points. As more and more proxies are deployed, the risk of interference between proxy protocols increases. In a multi-proxy environment, proxies must take the actions of other proxies into account, and this paper is about a method for dealing with this problem.
Some types of proxies have a natural place inside the network: HTTP caches are typically put at the gateway to an autonomous system. There are at least two arguments against implementing many of the services provided by proxies in the end-to-end transport or application protocols. The first is that end points rarely know when a particular service (e.g. reducing the impact of bit errors) is needed, and even if this could be detected, it would be a slow process since the properties of the links would first have to be discovered empirically. The second is that additions or alterations of end-to-end protocols are costly, since they involve changing the software implementing the protocol, possibly in every end point, and such alterations are thus unlikely to happen unless they are beneficial to a large group of users.
Network proxies come in two distinct flavors: explicit and transparent. Explicit proxies are the proxies that the user or application explicitly connect to for a service, e.g.
an HTTP cache.Transparent proxies are proxies that oper-ate on theflow between two hosts and are not necessarily directly visible to the end points.This transparency does not mean that they cannot be controlled by the user.Be-ing able to control the proxies is especially important for transparent proxies that adapt to available bandwidth by reducing the informational content of theflow by reducing the quality of the audio and video contents of aflow.In further discussions we will use the word proxy to re-fer to the instance of an enhanced service running in the network,while we’ll use proxy protocol to refer to the spe-cific implementation of a proxy service,i.e.a proxy may implement several different proxy protocols.II.P ROXY R EGIONSIn[5]we have presented an infrastructure for proxy de-tection and signalling between proxies.The most impor-tant function of this infrastructure is that it provides a way to expose the proxy protocols involved.Each proxy along aflow will see all proxy protocols operating on theflow, allowing them to take into account the effects these proxy protocols will have on aflow.This infrastructure is based on our proxy region con-cept.A proxy region is the superset of all regions cre-ated by various proxies operating on a givenflow.For example,if we have two proxies that provides compres-sion/decompression of a dataflow between two nodes in the network,then that part of the network is a compression region.Proxy A Proxy B Proxy C Proxy DEncryption regionProxy regionHost 2 Host 1Compression regionFEC regionFig.1.The proxy region is the superset of all other regions. We have identified various properties that define re-gions,e.g.trust(encryption),reliability(error correc-tion/recovery)etc.Put in another way,a proxy region is simply the region along the path of a givenflow that is bounded by proxies that understand a unified proxy sig-nalling scheme,as in Figure1.Within the proxy region, we have a high degree of freedom to transform or re-route packages,but once outside the proxy region,theflow should again look like a normalflow.Setting up a proxy region is done in the following way: First the proxies in the proxy region are detected and the boundary proxies,which are responsible for the proxy re-gion,are selected.Next,a configuration message is cre-ated by the border that will be responsible for proxy con-figuration,and the message is passed along the path of the flow in the proxy region.Each proxy appends announce-ments of the proxy protocols it is willing to run.When the configuration message returns to the originator,it decides, based on the information in the configuration message, which proxies and proxy protocols to activate.Finally,an activation message is passed along a path that passes all candidate proxies,telling each proxy which proxy proto-cols to activate.III.P ROXY C ONFIGURATIONTo illustrate the need for unified proxy configuration,let us assume that we are watching a video feed,and that the flow spans a wireless segment.To cope with losses inher-ent in wireless communication,a proxy is used for For-ward Error Correction(FEC)[7],[4].If the link bandwidth is limited,then another proxy can be deployed to reduce the amount of data being sent,for example by transform-ing images or video in theflow to use less colors and/or resolution[3].Finally,due to the ease of eavesdropping on wireless traffic,it might be desirable to deploy a proxy that encrypts theflow across the wireless segments.The problem we run into now is that these three ser-vices cannot be 
implemented in an arbitrary order and at arbitrary places.An encrypted data stream cannot be trans-formed,and if theflow is altered in any way after FEC has been added,this will nullify the effect of adding FEC since the FEC is valid only for exactly the data it was computed for.We need a way to deduce the ordering constraints,and thus ensure that we transformfirst,encrypt next and add FEC last.We can create a simple configuration mechanism that bases configuration decisions on each proxy’s knowledge of the other proxy protocols operating on theflow.In the example above,if the FEC,encryption and transformation proxies know about each other and the effect that each proxy protocol have on theflow,they can arrange to avoid conflicts.This is done by allowing each proxy protocol to spec-ify that that it is incompatible with a specified set of other proxy protocols,and also by allowing proxy protocols to set priorities in case of conflicts.This is a step forward compared to the situation without any coordinated proxy handling,but we recognize that relying on each proxy’s knowledge of other proxy protocols will not scale.A.Abstract PropertiesIt is our goal in this paper to give an outline of a config-uration mechanism that does not rely on each proxy pro-tocol having explicit knowledge of other proxy protocols, but instead can make configuration decisions based on in-formation provided by the proxy protocols about the effect they have on theflow.This reduces or eliminates the need to modify existing proxy protocols when new proxy proto-cols are added.Our approach is to describe proxy protocols in terms of abstract properties.These properties are general enough to capture similar behavior in different proxy protocols, while specific enough to allow proxies to deduce the proper configuration of proxy protocols on aflow.A.1Properties Derived from BehaviorTo deduce a proper configuration we need to take the properties of different proxy protocols into account.In the FEC,encryption and image transform example,we need to capture the property of FEC that any alteration of the packets will disrupt the operation of the FEC proxies.We also need to capture the difference between transforming theflow by encrypting and by reducing the information content.Other properties do not have ordering implications,but still affect the effectiveness of the proxy protocols.For ex-ample,since FEC adds redundant data to allow recovery from errors and losses,the FEC region should be mini-mized.A compressing proxy,on the other hand,would re-duce bandwidth consumption,and thus we’d want to max-imize the size of the region to reap maximum benefit from the effort put into compression.Yet another category of proxy properties specifies per-manent effects onflows.Both FEC and encryption egress proxies will revert the ingress proxy’s transformation.This reversible property is,however,not true of an image reduc-tion proxy.This means that any other proxy that must op-erate on the original content of the image,cannot operate after a proxy that permanently alters theflow.We should not confuse original content with semantically similar con-tent,i.e.the transformation that reduce colors still main-tain the approximate semantics of the originalflow—it is still a picture of the same object,but not as information rich.This property should also be captured.At this stage,we cannot present an exhaustive list of properties needed for a general proxy configuration,or even categories of properties.Instead we will present a subset that is enough 
to understand the implications and problems of proxy configuration.A.2Properties Derived from PolicySo far the properties have been derived from the behav-ior and requirements of proxy protocols.The proxies we focus on are assumed to be transparent to the end points. This means that they operate without requiring input from the user or from the end point systems.But when an im-age is being transferred,there may be no way for the user to indicate to the proxies in the network that image qual-ity is more important than transfer time or vice versa.So how does a transparent proxy know when to reduce trans-fer time of an image by reducing its quality?The answer is that it cannot.Our solution is to introduce policy properties to allow end points,and ultimately the user,to control activation of proxy protocols with permanent effects on the content offlows.These proxy protocols must indicate this with the Permanent property.A parameter indicates if it alters image,video,audio,text or all types of data.Permanent alterations of theflow are only allowed if this is explic-itly requested by the user or the application,using the pol-icy property allow.There is also a disallow property that overrides the allow property,i.e.if there exists at least one request for an unalteredflow,then theflow shall not be altered.If the proxy region extends to the end points,then the user can set these properties directly.An alternative is to use a configuration proxy.This proxy has no effect on theflow except during region setup and sets the al-low/disallow properties.The user controls this proxy via a separate connection.B.Candidate PropertiesOur aim is to define a set of properties that captures the constraints that typical proxy protocols imposes on config-uration.As more proxy protocols are examined,proper-ties defined by the already examined proxy protocols will be refined,generalized and/or corroborated.Conflicts be-tween a new proxy protocol and the already examined may force us to add new properties to both the new and old proxy protocols.We will here formulate properties for six different proxy protocols:FEC,encryption,image reduction,TCP Snoop and an MPEG frame dropper.B.1The FEC Proxy ProtocolAn ingress proxy adds Forward Error Correction infor-mation to packets to enable an egress proxy to recovery from bit errors introduced in an unreliable region of the network.It is characterized with the following properties: Peers(1):One peer using the same proxy protocol is needed.Forbid(Inside(Alters(data,packaging))):The packets (data or packet headers)inside the region may not be al-tered in any way once FEC has been added.Alters(data):FEC information is added to the packet, changing the content of the packet.Reverts(data):All the alterations to the packet data are reverted by the peer.Placement(narrow):The region between peers should be minimized.Semantic(preserving):Modifies data,but preserves the original semantics of theflow.B.2The Encryption Proxy ProtocolAn ingress proxy encrypts data which is then decrypted by an egress proxy,providing secure transfer across a un-trusted region of the net.Properties:Peers(1),Reverts(data),Alters(data)Semantic(hiding):The semantic of the data is hidden,i.e.a proxy that is dependent on the semantic of the data can-not operate on thisflow after it has been encrypted.B.3The Compression Proxy ProtocolAn ingress proxy compresses the data in the packets and an egress proxy uncompresses them,reducing the band-width needed to transport the packets across a region. 
Properties:Peers(1),Reverts(data),Alters(data),Semantic(hiding) Placement(wide):The region should be as wide as possi-ble.Forbid(Prior(Semantic(hiding))):The semantic of data entering from outside the compression region must not have its semantic hidden.B.4The Image Reduction Proxy ProtocolThe proxy transforms images in theflow to lower reso-lution,lesser colors or better encoding to reduce the band-width needed to transfer the image.Properties: Semantic(preserving),Forbid(Prior(Semantic(hiding))), Alters(data)Permanent(image):This proxy will permanently alter the image content of theflow.Peers(none):Only a single instance of this particular proxy protocol should ever operate on theflow.Forbid Placement AltersFECInside(Alters(data))Inside(Alters(packaging))narrow dataEncrypt--data Compress Prior(Semantic(hiding))wide dataImage Prior(Semantic(hiding))serverdata packaging endpointSnoop--drops regeneratesMPEG Prior(Semantic(hiding))serverdata packagingas is the details of the configuration engine.A CKNOWLEDGEMENTSThis work is funded by Ericsson Radio under the MARCH project.We would like to acknowledge Larry Peterson of Prince-ton University for his participation in the work out of which this scheme grew.R EFERENCES[1]H.Balakrishnan,S.S.Seshan,and R.H.Katz.Improving reliabletransport and handoff performance in cellular wireless networks.In Wireless Networks1,pages469–481,1995.[2]S.Cen,C.Pu,R.Staehli,C.Cowan,and J.Walpole.A DistributedReal-Time MPEG Video Audio Player.In Proceedings of NOSS-DAV’95,pages18–21,April1995.[3]Armando Fox,Steven D.Gribble,Eric A.Brewer,and Elan Amir.Adapting to network and client variability via on-demand dynamic distillation.In Proceedings of7th Intl.Conf.On Arch.Support of ng.And Oper.Sys.(ASPLOS VII),October1996.[4]S.Keshav.An Engineering Approach to Computer Networking.Addison-Wesley,1997.ISBN0-201-63442-2.[5]Bj¨o rn Knutsson and Larry L.Peterson.Transparent Proxy Sig-nalling.Submitted to Infocom2001,June2000.[6]K.Mayer-Patel and L.A.Rowe.Design and Performance of theBerkeley Continuous Media Toolkit.In SPIE Proceedings,volume 3020,Feb1997.[7]William Stalling.Data and Computer Communications.MacMil-lan,fourth edition,1991.ISBN0-02-415441-5.。
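The configuration idea in this paper, ordering proxy protocols from their abstract properties rather than from pairwise knowledge of each other, can be sketched compactly. The following Python sketch is my own illustration using three of the paper's example protocols (compression, encryption, FEC) and a simplified encoding of their Forbid/Semantic/Alters properties; it is not the authors' configuration engine.

```python
from itertools import permutations

# Simplified abstract properties for three of the paper's example proxy protocols.
PROTOCOLS = {
    "compress": {"alters": "data", "semantic": "hiding",
                 "forbid_prior_semantic_hiding": True,   # input must not already be semantically hidden
                 "forbid_inside_alters": False},
    "encrypt":  {"alters": "data", "semantic": "hiding",
                 "forbid_prior_semantic_hiding": False,
                 "forbid_inside_alters": False},
    "fec":      {"alters": "data", "semantic": "preserving",
                 "forbid_prior_semantic_hiding": False,
                 "forbid_inside_alters": True},          # nothing may alter packets once FEC is added
}

def valid(order):
    """Check an ingress-side ordering of proxy protocols against the abstract properties."""
    for i, name in enumerate(order):
        props = PROTOCOLS[name]
        before, after = order[:i], order[i + 1:]
        # Forbid(Prior(Semantic(hiding))): no earlier protocol may hide the flow's semantics.
        if props["forbid_prior_semantic_hiding"] and any(
                PROTOCOLS[p]["semantic"] == "hiding" for p in before):
            return False
        # Forbid(Inside(Alters(data))): no later protocol may alter the flow inside this region.
        if props["forbid_inside_alters"] and any(
                PROTOCOLS[p]["alters"] == "data" for p in after):
            return False
    return True

print([o for o in permutations(PROTOCOLS) if valid(o)])
# [('compress', 'encrypt', 'fec')]  -- transform first, encrypt next, add FEC last
```

Under these assumed property encodings, only one ordering survives, matching the paper's example that the flow must be transformed first, encrypted next, and have FEC added last.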

Spirent


Common test specifications and standards
RFC drafts
Internet-Drafts:
- Terminology for Benchmarking Network-layer Traffic Control Mechanisms
- Benchmarking Terminology for Resource Reservation Capable Routers
- Terminology for Benchmarking IPsec Devices
- Benchmarking Methodology for IGP Data Plane Route Convergence
- Terminology for Benchmarking IGP Data Plane Route Convergence
- Considerations for Benchmarking IGP Data Plane Route Convergence
- Terminology for Accelerated Stress Benchmarking
- Methodology Guidelines for Accelerated Stress Benchmarking
- Hash and Stuffing: Overlooked Factors in Network Device Benchmarking
- Methodology for Benchmarking Accelerated Stress with Operational EBGP Instabilities
- Methodology for Benchmarking Accelerated Stress with Operational Security
Spirent Communications: the development history of an industry leader

Implementation of the Drug GMP (2010 Revision) by Pharmaceutical Manufacturers


药品生产企业实施药品(GMP2010)年修订关于组织全区抗艾滋病、结核病及疟疾类药品生产企业填报调查问卷的函附件药品生产企业实施药品GMP(2010年修订)过程中存在差距的调查问卷Questionnaire for Investigation of the Gaps of Pharmaceutical Enterprises in Relation to the GMP Guidelines (revised in 2010)1.在中国药品GMP(2010年修订)的总体实施层面,您主要欠缺哪些知识,有哪些培训需求?What are your major knowledge shortages and need for training in the frame of the implementation of the Chinese GMP Guidelines (revised in 2010)?a. 以风险为基础的质量保证体系Risk based quality assurance systemb. 质量受权人和其他关键人员Qualified person and other key personsc. 人员培训Personnel Trainingd. 文件体系Documentation systeme. 空调净化系统HV AC systemf. 工艺用水制备Process water preparationg. 设备的校准,确认和预防性维护Equipment calibration, qualification, and preventive maintenanceh. 工艺验证Process validationi. 清洁验证Cleaning validationj. 产品质量回顾Product quality reviewk. 库房Warehousel. 供应商确认Supplier qualificationm. 委托生产,委托检验Contract manufacturing / testingn. 质控实验室QC laboratoryo. 偏差、纠正与预防措施Deviation / CAPAp. 超标检验数据调查OOS investigationsq. 变更控制Change control2.您所建立的质量保证体系是否以确保产品安全并且符合注册批准要求为目的,以科学为准则并且以风险管理为基础?Is your quality assurance system established on scientific principles and on risk management basis to ensure that your products are safe and comply with the requirements of the Marketing Authorization?a. 您的质量保证体系把那些可能影响产品质量的关键参数考虑在内了吗?Does your quality assurance system consider critical parameters that may impact on product quality?b. 那些关键参数及其控制范围是通过风险评估确定的吗?它们包含在验证总计划中了吗?Are those critical parameters and their control ranges defined by risk assessment and included in the VMP (Validation Master Plan)?c. 您有用于确定那些参数及其范围的系统性方法吗,譬如“失效模式影响分析”?Is there a systematic approach such as FMEA (Failure Mode Effect Analysis) to define those parameters and ranges?d. 您公司有风险评估的书面程序吗,尤其是(相关的)决策程序?Does your company have a written risk assessment procedure, especially for the decision making procedure?3.对于质量受权人和其他关键人员(生产、质控、质保、仓储、工程等),您是否系统地制定了清晰明确的岗位说明?Have you established a well-defined job description system for Qualified Persons (QP) and other key persons (production, quality control, quality assurance, warehousing, engineering, etc.)?a. 质量保证或质量受权人员在质量事务和成品批放行方面,有独立决策权吗?Is there an independence of quality assurance / Qualified Person in makingdecision of quality matters and batch release of finished products?b. 在法规事务、检验技术和生产技术方面,对质量受权人和其他关键人员有持续性的培训吗?Do the Qualified Person and other key persons receive continuous training inregulatory affairs and in analytical and production technology?c. 相关人员在岗位说明书上签字了吗?Are the job descriptions signed by the people concerned?d. 现场是否有:组织机构图、岗位说明书(质量受权人、质量保证部经理、质量控制部经理和生产部经理)、成品批放行程序。

HCIP WLAN English question bank


As wireless network technology continues to develop, more and more people have begun to pay attention to and learn about wireless networking.

HCIP WLAN is an important certification in the Huawei certification system, and one of the professional certifications in the wireless networking field.

To help candidates better prepare for the HCIP WLAN certification, we have compiled an HCIP WLAN English question bank.

以下是题库的详细内容:Part 1: Basic WLAN Knowledge1. What is the IEEE standard for WLAN?2. What are the differences between WLAN and LAN?3. Expl本人n the concept of SSID in WLAN.4. What are the basicponents of a WLAN system?5. How does WLAN security work and what are themon security protocols used in WLAN?Part 2: WLAN Planning and Deployment1. What are the key considerations for WLAN site survey and planning?2. Briefly expl本人n the process of WLAN deployment.3. What factors should be considered when determining WLAN coverage and capacity requirements?4. What are themon WLAN deployment strategies and theiradvantages and disadvantages?5. How to optimize WLAN performance and ensure quality of service?Part 3: WLAN Troubleshooting and M本人ntenance1. What are themon WLAN connectivity issues and how to troubleshoot them?2. Expl本人n the concept of WLAN interference and how to mitigate interference in WLAN.3. What are the key indicators of WLAN performance and how to monitor WLAN performance?4. How to perform regular m本人ntenance and upgrades for WLAN systems?5. What are the best practices for WLAN troubleshooting and m 本人ntenance?Part 4: Advanced WLAN Technologies and Applications1. What are the latest advancements in WLAN technology and their potential impact on WLAN deployment?2. Expl本人n the concept of WLAN roaming and how it works ina multi-AP environment.3. What are the key differences between traditional WLAN and cloud-managed WLAN?4. How does WLAN integrate with other wireless technologies such as Bluetooth and Zigbee?5. What are the emerging applications of WLAN in IoT, smart cities, and enterprise networking?Part 5: Case Studies and Practical Scenarios1. Analyze a real-world WLAN deployment case and identify the key challenges and solutions.2. Design a WLAN solution for a specific enterprise scenario, considering the coverage, capacity, and security requirements.3. Troubleshoot aplex WLAN issue and provide a step-by-step resolution plan.4. Evaluate the performance of an existing WLAN system and propose optimization strategies.5. Discuss the future trends and potential developments in WLAN technology and its impact on the industry.通过上述题库的学习和练习,相信大家可以更全面地了解HCIP WLAN考试的内容和要求,为考试做好充分的准备。

MSTP test questions


我们对一网络进行评估时发现某一骨干节点S380设备槽位资源紧张,以下优化方案不合适的是( )If the slot resources of S380 on one backbone node are detected to insufficient during the network evaluation, then the inappropriate optimization solution is ( ):1认证类售后认证(新)SDH2008SDH网络评估和优化(SDH Ⅳ)时钟源跟踪评估是为了统计网络中时钟跟踪情况不正常的站点,我们进行此项统计时可以利用时钟源视图中的当前视图,在时钟源当前视图下,我们不能区分的是:()The clock source tracking evaluation is used to record thestation whose clock tracking status is abnormal in the network, and the current view in the clock source view can be used for us in this statistics. Which of the following situation cannot be distinguished under the current clock source view? ( )2认证类售后认证(新)SDH2008SDH网络评估和优化(SDH Ⅳ)等效网元概念的引入主要是应用于评估下列哪一项网络指标: ()The concept of equivalent NE is introduced to mainly evaluate which item of network indexes as list below?()4认证类售后认证(新)SDH2008SDH网络评估和优化(SDH Ⅳ)关于S380/S390设备光板使用情况,下列说法错误的是:( )Select a wrongstatement about optical boards of the ZXMP S380/S390 from the followings. ( )4认证类售后认证(新)SDH2008SDH常见故障专题(SDH Ⅲ)S385设备配置CSF交叉板时,当8#、9#CSF板都上报背板侧的OOF告警,端口号是37,请问:该告警的来源板是_____槽位的单板?When configuring the CSF cross-connect board for theZXMP S385, the 8# and 9# CSF boards both report backplane-side OOF alarm and the port number is 37. In this case, the alarm is generated by the board in slot ___.3认证类售后认证(新)SDH2008SDH常见故障专题(SDH Ⅲ)在S385 V3.00系统版本中,无TPS保护,单子架的STM-64(o) 业务接入最大数目是()If a ZXMP S385 V3.00 systemis not configured with TPS ability, the transmission capacity of one subrack is ___ STM-64 (o).4认证类售后认证(新)SDH2008ZXMP S系列产品硬件介绍(传输Ⅰ)在光连接等都正常的情况下 ,新的ECC 协议栈在分配IP地址的时候,是按照哪种顺序进行分配的:()In thecondition of normal optical connection, how does new ECC stack protocol allocate IP address to optical boards?() 1认证类售后认证(新)SDH2008ECC专题(SDH Ⅲ)E300 网管GUI登录密码长度最大为多少位?()The GUI login password of ZXONM E300 has __ bits at maximum.2认证类售后认证(新)SDH2008E300专题(SDH Ⅲ)网络安全评估中,以下属于设备级保护评估的是:( )In the network security evaluation,which one of the following statements belongs to the equipment-level protection evaluation? ( )1认证类售后认证(新)SDH2008SDH网络评估和优化(SDH Ⅳ)判断设备的光接口上的光纤是否存在硬件环回,最有效且不中断业务的方法是?What is the most effective method to judge whether there isthe hardware loopback of fibers on the optical interface of the equipment without interrupting the service? ( )1认证类售后认证(新)SDH2008SDH常见故障专题(SDH Ⅲ)如果S325开销交叉配置了S1或S1保护字节,则会导致的后果是()Which ofthe following problems will come up to the ZXMP S325 system when the S1 byte or S1 protection byte is used for overhead cross-connect? ( )3认证类售后认证(新)SDH2008SDH疑难故障处理(SDH Ⅳ)对于S330的时隙算法切换,以下描述错误的是()Which statement about the timeslot algorithm switching inS330 is incorrect?()4认证类售后认证(新)SDH2008时分专题(SDH Ⅲ)S325在NCP复位的过程中,如果改变通ECC的光纤连接,则可能出现的问题是()During the NCP reset process ofthe S325 equipment, which of the following problem may occur if change the optical fiber connection through the ECC? ( )2认证类售后认证(新)SDH2008SDH疑难故障处理(SDH Ⅳ)某站点S330的一块OBA单板的指示灯状态为,绿灯1慢闪1Hz,绿灯2灭,红灯快闪5Hz,则可以判断此单板运行状态为()The indicator statuses on one OBA board in S330 on a certain site goas follows: green indicator 1 flashes slowly at 1Hz, green indicator 2 is off, and the red indicator falshes quickly at 5Hz; then it can be judged that the running state of this board is ( )2认证类售后认证(新)SDH2008SDH常见故障专题(SDH Ⅲ)Corba Client 不支持以下哪种功能的查询( )Which of the followingchecking function cannot be supported in the Corba Client? ( )4认证类售后认证(新)SDH2008CORBA专题(SDH Ⅲ)关于不同层次的多个SDH环间业务保护方式的下列描述,同时考虑网络结构的清晰性,网优中推荐采用的是()It is know that multiple SDH rings of different network layers havetraffic communication with each other and the traffic needs to be protected. 
Which of the following networking method is recommended in consideration of clear network structure and network optimization?()3认证类售后认证(新)SDH2008SDH网络评估和优化(SDH Ⅳ)我们在用if –a 查看端口建链情况的时候,如果看到如下信息:Flags=24222Inet 193.193.200.43-193.193.200.43 netmask ffffffff表明:()While we check the link set-upstatus on the port with if -a command, it indicates( ) if you see the following information:Flags=24222<NOARP, POINTTOPOINT, UP>Inet 193.193.200.43-193.193.200.43 netmask ffffffff 1认证类售后认证(新)SDH2008ECC专题(SDH Ⅲ)在现场常常会碰到单个网元脱管的故障(网管路由表完整且该网元所在域内的其他网元均能正常监控),通过添加静态路由的方式可以恢复该网元的监控,以下说法正确的是:()The failure of the single NE cannot be managed by the NMS always occurs on the site (the routing table ofNMS is complete, and other NEs which are in the same area with the previously-mentioned NE can be monitored normally), the monitoring of the NE can be recovered by adding static routing; so which of the following explanation is correct:( )2认证类售后认证(新)SDH2008ECC专题(SDH Ⅲ)S330的2M线有大小两条,其中小的E1线在DDF架上的打线顺序是:( )There are two 2M cables for the ZXMPS330, one is thick, and the other one is thin, and the wire punching order of the thin E1 cable in DDF rack is: ( )3认证类售后认证(新)SDH2008ZXMP S系列产品硬件安装(传输Ⅰ)S380设备,在8光口的OL1光板中,以下哪种光板的8个光口都通ECC?( ) The ECC can pass through which ofthe following OL1 optical board with 8 optical interfaces (all through at 8 optical interfaces) in the S380 equipment? ( )4认证类售后认证(新)SDH2008ECC专题(SDH Ⅲ)V1.10系统版本的S385设备在单板软件和逻辑升级时,下列描述错误的是哪一项?()Select wrong statement about board application and logic upgrading of ZXMP S385 V1.10 from the followings.()2认证类售后认证(新)SDH2008SDH设备升级专题(SDH Ⅲ)有关S385设备单板升级方式,下述描述错误的是哪一项?()Select a wrong statement about board upgrading of the ZXMP S385 from the followings. ( )2认证类售后认证(新)SDH2008SDH设备升级专题(SDH Ⅲ)在安装Corba时,在配置文件没有修改的前提下,E300 sever的默认端口是:()When installing the Corba, the default port of E300 sever is ( ) without modifying the configuration files.1认证类售后认证(新)SDH2008CORBA专题(SDH Ⅲ)有关S380设备相关单板的升级描述,正确的是哪项?( )Which of the following description is correctabout the upgrade for the corresponding board of the S380 equipment?( )4认证类售后认证(新)SDH2008SDH设备升级专题(SDH Ⅲ)ZXMP S385 远程修改NCP MAC地址对NCP 版本的最低要求是( )In case of remote modification of NCP on ZXMP S385, the lowest NCP version required by the MAC address is ( )2认证类售后认证(新)SDH2008SDH疑难故障处理(SDH Ⅳ)下列对插拔交叉板和时钟板对复用段业务的影响说法正确的是( )4认证类售后认证(新) SDH2008疑难故障处理S320设备中,一块34M接口板可以上下_____个34M。

HP PPM Center White Paper - Understanding and Tuning the PPM Cache


HP PPM Center White Paper - Understanding and Tuning the Cache
Version: 1.0
October 2014
Applies to PPM 9.10 and later.

Contents
- Introduction
- PPM Cache Overview
- Comparison between Legacy Cache and New Cache
- Understanding the PPM Cache Statistics reports
  - Legacy Cache
  - New Cache
- PPM Cache Tuning
- Flushing the PPM cache with kRunCacheManager.sh and ksc_flush_cache
  - kRunCacheManager.sh (Syntax, Issues & Limitations)
  - ksc_flush_cache (Syntax, Issues & Limitations)
- PPM Cache changes in PPM 9.31
  - Conversion from legacy cache to new cache
  - Simplification of new cache configuration
  - Changes to CacheManager Statistics report
  - Changes to kRunCacheManager.sh
  - Changes to ksc_flush_cache
- More questions? Need help?

Introduction

This document describes the PPM Cache architecture up to PPM 9.30, as well as the changes introduced to the Cache in PPM 9.31. It should help PPM Administrators to correctly tune PPM Caches in order to achieve optimal system performance.

PPM Cache Overview

There are two caches in PPM that can be tuned by PPM Administrators:

- Legacy cache (Table Components, Request Type Search Fields, List Validation Values, etc.). It is configured in <PPM_HOME>/conf/tune.conf. Setting the parameters in server.conf works too, and some of the parameters can be edited from the admin console. You can view its cache statistics in the "Server Cache Status" report in Workbench's AdminTools.
- New cache (Requests, Request Types, Modules, Portlets, Workflows, etc.). It is configured in <PPM_HOME>/conf/cache.conf. You can view cache statistics in the "CacheManager Statistics" report in Workbench's AdminTools.

Comparison between Legacy Cache and New Cache

The new cache has the following advantages over the legacy cache:

- All cache objects are stored using java SoftReference. As a result, if the JVM runs out of Heap Memory, objects in the cache will be automatically garbage collected to free up memory.
Thismakes it possible to store large amounts of data in the cache without risking out-of-memoryissues under heavy system load.-There are more cache configuration parameters (at least until PPM 9.31, in which new cache configuration has been simplified).-It is possible to invalidate a single object in the new cache by using ksc_flush_cache special command (which only works with new cache), whereas only full cache flush is possible whenusing kRunCacheManager.sh (which works with both legacy and new cache).-The new cache provides configurable staleness checks, whereas such checks are hard coded in the legacy cache.-More statistics in the Cache report (number of staleness checks, average load time, number of flushes of the different type).Understanding the PPM Cache Statistics reportsThe following information is available in the cache server reports.Legacy CacheFollowing information is available for each legacy cache in the “Server Cache Status Report”:∙Maximum number of objects that can be cached (Cache size)∙Number of additional objects that can be cached (Free units)∙Number of hits, misses and swaps (swaps meaning replace an object by another one when max size is reached)∙Miss rate (the lower, the better)∙Estimation of the amount of memory taken up by the cacheNew CacheFollowing information is available for each new cache in the “CacheManager Statistics Report”:∙Hits, misses, and hit rate (the higher the hit rate, the better)∙Number of cache flushes (broken down by the categories "old", "idle", "soft reference reclaimed", and "max cache size reached")∙Average load time to load an object from database when it is not in the cache∙Number of staleness checks performed∙Max cacheable objects (Cache size), cached object count and maximum idle time∙Whether the cache is distributed or not (if it is, removing an object from the cache in any node of the PPM cluster will send a message to all other cluster nodes to remove that object fromtheir cache).PPM Cache TuningTuning PPM Cache performance should be done at the same time as tuning PPM JVM Heap size. This means finding the right balance between 2 things:-JVM Heap size: Before PPM 9.20, only 32bits JVM was supported, which limited the JVM Heap size to ~1.3GB. Since PPM 9.20 and the adoption of 64bits JVM, the size of the JVM Heapmemory is only limited by the installed physical memory. However, too large Heap size canresult in long full garbage collection times, during which the application is unresponsive. It iscommon to see PPM JVM heap sizes of up to 4 GB, and sometimes more on Service nodes.-Caches size: A larger cache size means more cacheable objects, a better hit rate, and fewer objects to reload from the database, which result in better application performance. However, if cached objects end up taking too much memory, this will impact the performance of theapplication and might even cause out-of-memory problems. 
Note that only legacy cache isprone to causing such memory problems, as the JVM will automatically discard objects from the new cache whenever the available free memory runs too low.To summarize:-If JVM Heap size is too large, full garbage collection periods will be too long and application might become unresponsive for seconds, degrading users experience-If Caches are too large and end up using up too much memory, the performance of the application will degrade and in the case of legacy cache it might even cause out-of-memoryissues.Tuning your PPM cache can be done by following these steps, which may end up conflicting with each other; if that happens, use your best judgment.-If you see a high number of swaps (legacy cache) or “max cache size reached” flushes (new cache), increase the cache size.-If you see some “soft reference reclaimed” flushes (new cache), increase the JVM Heap or reduce the cache size for cache using up large amounts of space.-If you see a high miss rate (above 20%, legacy cache) or a low hit rate (below 80%, new cache) even after prolonged PPM usage (at least one day of heavy usage), increase the cache size.-If you notice long full garbage collection times (many seconds) during which the system is unresponsive, reduce the Heap size or better tune the JVM Garbage collection. Note that youwill need to use a JVM monitoring tools in order to ensure that JVM pause times are caused by full garbage collection.There are some additional tips that might help you when tuning the PPM cache:1)There is no “standard” cache configuration. Measure, tune, rince & repeat until you reachsatisfactory numbers. All PPM usages are different, and as a result caches configurations should be tuned accordingly.2)When tuning the PPM cache, try to do so after capturing statistics during the highest peak loadtime (usually happening on Friday afternoon or Monday morning). Tuning your cache onlymakes sense if it is tuned to properly handle peak load usage.3)Try not to flush the caches (using kRunCacheManager or ksc_flush_cache) in an automated wayunless you really have to. If using kRunCacheManager, NEVER use the “A” option to flush allcaches in an automated script; you should not flush more caches than necessary.4)If memory limit doesn’t allow you to set the proper max cache size for all the entities, you canrank the cache t o optimize first by the value “Average Load time” x “Misses”, and first increase the cache size of the cache(s) with the highest value. They are the most likely to have ameasurable performance impact.5)You might want to tune differently the PPM nodes in your cluster depending on whether theyare Service nodes or Web User only nodes. The entities loaded (and thus the optimal cachesettings) are different. For example, a pure Service node will never load portlets or menus, but might need a larger cache size for fiscal periods.6)Don’t forget that when using kRunCacheManager.sh, it will always flush selected caches on allnodes of your PPM cluster, but it will reset caches statistics and force garbage collection only on the node it is connected to.Flushing the PPM cache with kRunCacheManager.sh and ksc_flush_cache There are two ways to flush a PPM Cache in a manual or automated way: kRunCacheManager.sh (command line tool), or ksc_flush_cache (special command). 
They should be called after modifying the data directly in the database without going through one of the supported PPM interfaces (Web UI, SOAP Web Service, REST Web Service, etc.).kRunCacheManager.shkRunCacheManager.sh is a shell script that is run from the command line.Syntaxsh ./kRunCacheManager.sh [<URL>] <cache number>-“URL” parameter is optional. If omitted, it will connect to the first running RMI_URL defined in server.conf. You can pass multiple RMI urls, separated with semicolons (‘;’), and it will connect to the first running one. It is safe to omit this parameter unless you want to connect to a specific PPM node, for example to reset cache statistics or request a garbage collection.-“Cache Number” parameter is the number next to the cache that you want to flush. In order to view the list of caches with their number, run the command without any parameter to list all caches and be prompted for possible options. You can also input a letter to trigger thefollowing actions:o A: Flush All Caches. It is strongly advised not to use this option when runningkRunCacheManager in an automated way.o B: Flush Validation Caches. This will only flush the validation related caches. It can be used when validation definition or values are directly edited in PPM Database.o C: Reset Cache Statistics Counters. This only affects the node you are connected to.o D: Force Garbage Collection.Issues & Limitations-The order of the caches in the list is not consistently enforced and can vary between environments or PPM versions. As a result, if you do flush a specific cache designated by itsnumber in a script, it’s possible that the corresponding cache may change at some point in the future. You should verify that the cache numbers haven’t changed after every PPM upgrade or environment change.-It is not possible to only flush one entity in a cache; the whole cache has to be flushed. This can result in performance impact as all objects from the flushed cached will need to be reloaded.This could have performance impact under heavy load if some caches are flushed too often.-It is strongly advised NOT to use the “flush all caches” option (“kRunCacheManager.sh A”) in an automated way, as it may have performance impact under heavy system load.-If the cache is flushed using kRunCacheManager.sh while the cache maintenance thread is running (it runs every 10 seconds by default), an exception may be fired and thekRunCacheManager.sh cache flush action may be ignored.ksc_flush_cacheksc_flush_cache is a PPM Special Command that can be invoked from any command step (workflow step, PPM report command step, etc.).It only works with new caches.Syntaxksc_flush_cache <cache-name> [<id>]-“cache-name” parameter is the name of the new cache, as defined in cache.conf. For example, in the following cache.conf line:cache.datasource.title = Dashboard Datasourcesthe cache name is datasource.-“id” parameter is optional. If omitted, the cache is flushed from all its entities. If specified, only that entity will be removed from the cache.Issues & Limitations-ksc_flush_cache only works with new caches .-As of PPM 9.30, ksc_flush_cache is an undocumented (though officially supported) special command.-As of PPM 9.30, passing an entity ID to ksc_flush_cache will only work when the cache key is an Integer value. 
If it is a String value, it will not be flushed, and the only option to remove thedesignated entity will be to flush the whole cache.PPM Cache changes in PPM 9.31Following changes have been done to PPM Cache in 9.31 in order to correct existing issues and limitations.Conversion from legacy cache to new cacheAll the legacy caches listed in kRunCacheManager.sh but one (Scoring Criteria) have been converted to new cache.All these caches are now configured from cache.conf, and their parameters in tune.conf (orserver.conf/admin console) have been deprecated and are not used anymore.Simplification of new cache configurationSome parameters in cache.conf have been removed in an attempt to simplify cache configuration.-The only parameter that can be tuned for all caches is the cache size (parameter “maxSize”).-Parameters “maxAge”, “maxIdleTime”, “resolver” and “stalenessCheckGraceInterval” have been removed.-Parameter “distributed” is inferred automatically based on whether a staleness check in defined (distributed = false) or not (distributed = true).-Cache is automatically set to disabled if maxSize = 0.A staleness check has also been added for request types and table components; even though it’s not enabled by default, if one relies on direct DB upd ates to modify request types or table components, it’s preferable to enable the staleness check rather than disable the cache. Staleness check will only work if column LAST_UPDATE_DATE is modified during data update.One side effect of cache simplification is that the cache maintenance thread (previously used to enforce maxAge and maxIdleTime of cached objects) is now only used to reload the cache.conf configuration when modified at runtime. It doesn’t cause issues anymore when running at the same time as kRunCacheManager.sh.Changes to CacheManager Statistics report-Flush counts of types “old” and “idle” have been removed.-“Cache Flush All” count has been added. It displays the number of times the whole cache has been flushed. It helps identifying abuses of “kRunCacheManager.sh A” or of unnecessary fullcaches flushes.Running “kRunCacheManager.sh A” will increment this value by 1 for all caches.Changes to kRunCacheManager.sh-The order of listed caches has changed: it still lists legacy cache followed by new caches, but now the new caches are ordered alphabetically by cache name.-When listing all caches, the cache name of new caches is displayed in parenthesis after the cache title.-When flushing new caches, you can now use the cache name in place of the cache number. It is advised to always use the cache name, as it doesn’t rely on any ordering of caches in that list.-An extra optional parameter has been added to the command, to pass the entity ID to flush. So you can now flush one specific entity from a cache using kRunCacheManager.sh using thefollowing syntax:sh ./kRunCacheManager.sh [<URL>] <cache name (or cache number)> [<entity ID>] - A new action (E) has been added, that lists all keys in each of the new caches along with their type (String or Integer). This can help diagnose whether a specific entity is currently stored inthe cache or not. Note that in MLU environments, the same key can be displayed multiple times if it is stored in the cache using different languages.Changes to ksc_flush_cache-Passing an entity ID to ksc_flush_cache will now work regardless of the key type (String or Integer).-You can now flush the legacy caches by passing their cache number (as inkRunCacheManager.sh) instead of the cache name. 
Since there is only one legacy cache left in9.31, it means passing “1” as the cache name to flush the “Scoring Criteria” cache.More questions? Need help?Join the conversation and ask your questions on:- HP PPM Customer Support forum (if you are an existing HP PPM Center customer):/t5/Project-and-Portfolio-Management/bd-p/project-portfolio-mgnt-cust-forum- Public HP PPM Support and News forum:/t5/Project-and-Portfolio-Management/bd-p/itrc-935。
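Tip 4 in the tuning section above suggests ranking the caches to optimize first by the value "Average Load time" x "Misses" when memory does not allow raising every maxSize. A minimal sketch of that ranking, assuming the report values have been copied by hand into a small table; the field names and cache names below are illustrative, not an export format provided by PPM.

```python
# Rank caches by (average load time x misses), as suggested in tuning tip 4.
# The input is hypothetical: one row per cache, with values copied from the
# CacheManager Statistics report.

stats = [
    {"cache": "request",    "avg_load_ms": 120, "misses": 4500, "max_size": 2000},
    {"cache": "workflow",   "avg_load_ms": 45,  "misses": 900,  "max_size": 500},
    {"cache": "datasource", "avg_load_ms": 300, "misses": 250,  "max_size": 100},
]

def tuning_priority(row):
    # Caches with a high average load time and many misses benefit most from
    # a larger maxSize, so they sort first.
    return row["avg_load_ms"] * row["misses"]

for row in sorted(stats, key=tuning_priority, reverse=True):
    print(f'{row["cache"]:<12} priority={tuning_priority(row):>8} '
          f'(avg load {row["avg_load_ms"]} ms x {row["misses"]} misses)')
```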

Methods and systems for distributed cache, prefetch and copying


Patent title: Methods and systems for distributed cache, prefetch and copying
Inventors: ミルダット・スライマン・エー, ヘッダヤ・アブデルサラム・エー, イェーテス・ディビッド・ジェー
Application number: JP 特願平10-550458
Filing date: 1998-05-15
Publication number: JP 特表2001-526814 (P2001-526814A)
Publication date: 2001-12-18

Abstract: A technique for automatic, transparent, distributed, scalable and robust caching, prefetching, and replication in a computer network in which request messages for a particular document follow paths from the clients to a home server that form a routing graph. Client request messages are routed up the graph towards the home server as would normally occur in the absence of caching. However, cache servers are located along the route, and may intercept requests if they can be serviced. In order to be able to service requests in this manner without departing from standard network protocols, the cache server needs to be able to insert a packet filter into the router associated with it, and needs also to proxy for the home server from the perspective of the client. Cache servers may cooperate to service client requests by caching and discarding documents based on the local load, the load on neighboring caches, attached communication path load, and document popularity. The cache servers can also implement security schemes and other document transformation features.

Applicant: Trustees of Boston University
Address: 147 Bay State Road, Boston, Massachusetts 02215, United States
Nationality: US
Agent: 杉本 修司 (and 2 others)
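The abstract above describes cache servers that sit on the routing path from a client towards a document's home server and intercept requests they can already serve. A toy sketch of that lookup-along-the-path idea follows; the data structures are invented for illustration, and the patent's load- and popularity-based caching policy is not modeled (every en-route cache simply keeps a copy).

```python
# Toy illustration of en-route interception: walk the routing path from the
# client towards the home server and let the first cache holding the document
# answer; otherwise the home server answers and the en-route caches learn it.

class Cache:
    def __init__(self, name):
        self.name = name
        self.store = {}


def fetch(document, path, home_server):
    """path is the ordered list of cache servers between client and home server."""
    for cache in path:
        if document in cache.store:
            return cache.store[document], f"served by cache {cache.name}"
    body = home_server[document]
    for cache in path:
        cache.store[document] = body   # simplistic: every en-route cache keeps a copy
    return body, "served by home server"


home = {"/index.html": "<html>...</html>"}
path = [Cache("edge"), Cache("regional")]

print(fetch("/index.html", path, home))   # first request: served by the home server
print(fetch("/index.html", path, home))   # repeat request: served by the nearest cache
```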

A proposed model for web proxy caching techniques to improve computer network performance (IJITCS-V5-N11-5)


I.J. Information Technology and Computer Science, 2013, 11, 42-53
Published Online October 2013 in MECS (/)
DOI: 10.5815/ijitcs.2013.11.05

A Proposed Model for Web Proxy Caching Techniques to Improve Computer Networks Performance

Nashaat el-Khameesy
Prof. and Head of Computers & Information Systems Dept - SAMS, Maady Cairo, Egypt
E-mail: Wessasalsol@

Hossam Abdel Rahman Mohamed
Computer & Information System Dept - SAMS, Maady Cairo, Egypt
E-mail: Hrahman@.eg, Habdel@.eg

Abstract—One of the most important techniques for improving the performance of web-based applications is the web proxy caching technique. It is used for several purposes, such as reducing network traffic, server load, and user-perceived retrieval delays, by replicating popular content on proxy caches that are strategically placed within the network. Web proxy caching is important and useful in government organizations that provide services to citizens and have many branches all over the country, where it helps improve the efficiency of computer network performance, especially in remote areas that suffer from poor network communication. Using proxy caches benefits all users in government computer networks by reducing the amount of redundant traffic that circulates through the network, and by giving them quicker access to documents that are cached. There are further benefits of proxy caches, which we address later. In this research, we use web proxy caching to provide the benefits mentioned above and to overcome the problem of poor network communication in ENR (Egyptian National Railways). We use a scheme that integrates forward web proxy caching and reverse proxy caching.

Index Terms—Web Proxy Caching Technique, Forward proxy, Reverse Proxy

I. Introduction

One of the most well-known strategies for improving the performance of a Web-based system is Web proxy caching: keeping Web objects that are likely to be used again in the future in a location closer to the user. Web proxy caching mechanisms are implemented at the following levels: client level, proxy level and original server level [1], [2]. It is known that proxy servers are located between users and web sites to lessen the response time of user requests and to save network bandwidth, so an efficient caching approach is needed in order to achieve better response times.

Generally, web proxy servers are used to provide internet access to users within a firewall. For security reasons, companies run a special type of HTTP server called a "proxy" on their firewall machines. When a proxy server receives requests from clients, it forwards them to the remote servers, intercepts the responses, and sends the replies back to the clients. Because the same proxy servers are used for all clients inside the firewall of the same organization, these clients share common interests, will probably access the same set of documents, and each client tends to browse back and forth within a short period of time, which makes it effective to use these proxies to cache documents. This increases the hit ratio for previously requested and cached documents on the proxy server. In addition to saving network bandwidth, web caching at the proxy server also provides lower access latency for the clients.

Most Web proxy servers are still based on traditional caching policies.
These traditional caching policies only consider one factor in caching decisions and ignore the other factors that have impact on the efficiency of the Web proxy caching. Due to this reason these conventional policies are suitable in traditional caching like CPU caches and virtual memory systems, but they are not efficient in Web caching area. [3], [4]We use the proxy cache of the proxy server and it is located between the client machines and origin server. The work of the proxy cache is similar to the work of the browser cache in storing previously used web objects. The difference between them is the browser cache which deals with only a single user, the proxy server services hundreds or thousands of users. The work of the proxy cache is as follow, when the proxy server receives a request it checks its cache at first if therequest is found the proxy server sends the request directly to the client but if the request is not found the proxy server forwards the request to the origin server and after the origin server replies to the proxy server it forwards the request to the client and also save a copy from the request in local cache for future use. The proxy caching is used to reduce user delays and to reduce Internet congestions it is widely utilized by computer network administrators, technology providers, and businesses. [5], [6], [7]The proxy server uses its filtering rules to evaluate the request, so it may use IP address or protocol to filter the traffic. If the request is valid by the filter, the proxy provides the content by connecting to the origin server and requesting the service on behalf of the client in case the required content is not cached on the proxy server. The proxy server will return the content directly to the client if it was cached before by the proxy serverWe must consider the following problems before applying web proxy caching:Size of Cache: In traditional architectures each proxy server keeps records for data of all other proxy servers. This will lead in increasing in cache size and if cache size becomes large this will be a problem because as cache size is larger, Meta data become difficult to be managed. [8]Cache Consistency:We should ensure that Cache Consistency is verified to avoid Cache Coherence problem. Cache Consistency means when a client send requests for data to proxy server that data should be up-to-date. [9]Load balancing: There is must be a limit for number of connections to certain proxy server to avoid the problem of overloaded only one server than the other in case we use load balancing. [10]Extra Overhead:When all the proxy servers keep the records of all the other proxy servers, this will lead to extra overload in the system which already produces congestion on all proxy servers. This extra head due to each proxy server in the system must check the validity of its data with respect to all other proxy servers. 
[11]In addition to the proxy cache provide some advantages such as a reduction in latency, network traffic and server load, it also provides some more advantages∙Web proxy caching decreases network traffic and network congestion and this automatically reduces the consumption of bandwidth∙Web proxy caching reduces the latency because of the followings:A.When a client sends to the proxy server arequest already cached in the proxy server so inthis case the proxy server will reply directly tothe client instead of send the request to theorigin server.B.The reduction in network traffic will makeretrieving not cached contents faster because ofless congestion along the path and less workloadat the server.∙Web proxy caching reduces the workload of the origin Web server by caching data locally on the proxy servers over the wide area network.∙The robustness and reliability of the Web service is enhanced because in case the origin server in unavailable due to any failure in the server itself or any failure in the network connection between the proxy server and the origin server, the proxy server will retrieve the required data from its local cache.∙Web caching has a side effect that allows us a chance to analyze an organization's usage patterns.In addition to proxy caches provide a significant reduction in latency, network traffic and server load, they also produce set of issues that must be considered. ∙ A major disadvantage is the resend of old documents to the client due to the lack of proper proxy updating. This issue is the focus of this research.∙ A single proxy is a single point of failure.∙ A single proxy may become a bottleneck. A limit has to be set for the number of clients a proxy can serve. Therefore, in all government institution those provide services to citizen, we must be searched about methods and solutions to enhance the efficient of services delivery , and as we know that most places away from Cairo state is facing failure in the network because the lack of infrastructure and possibilities of the services provider (ISP).There has been a lot of research and enhancement in computer technology and the Internet has emerged for the sharing and exchange of information. There has been a growing demand for internet-based applications that generates high network traffic and puts much demand on the limited network infrastructure. We can use addition of new resources to the network infrastructure and distribution of the traffic across more resources as a possible solution to the problems of growing network traffic.Using of proxy caches in the government computer networks is useful to the server administrator, network administrator, and end user because it reduces the amount of redundant traffic that circulates through the network. And also end users get quicker access to documents that are locally cached in the caches. However, there are additional issues that need to be considered by using of proxies. In this study, we will focus on Web proxy caching since it is the most common caching method.1.1 Problem StatementThe governmental organizations which provide services to citizen must target efficient and more reliable services while keeping cost-effective criteria. Throughout this paper we consider the case of the Egyptian National Railway (ENR) datacenter which serve many applications supported to many branches spreaded allover Egypt which are quite faraway from Cairo state. 
Current infrastructure faces so many challenging problems leading to poor reliability as well as ineffective services and even more discontinuity of such services even at the headquarter datacenter. The main attributes of the problems facing the ENR network are summarized as following:∙ A major problem of the remote site is unstructured and their heavy network traffic.∙The network overloading might result in the loss of data packets∙The origin servers loaded most of the time∙Transmission delay –normal traffic data but low speed of the line.∙Queuing delay due to huge network traffic∙Slow the services that provided to citizens.∙Processing delay –due to any defection of the network device∙Network faults can cause loss of data∙Broadcast delay – due to presence of broadcasting on networkII.Proxy Caching OverviewCaches are often deployed as forward proxy caches, reverse proxy caches or as transparent proxies.2.1 Forward Proxy CachingThe most common form of a proxy server is a forward proxy; it is used to forward requests from an intranet network to the internet.[12]Fig. 2.1: Forward proxy cachingWhen the forward proxy receives a request from the client, the request can be rejected by the forward server or allowed to pass to the internet or [13] retrieved from the cache to the client. The last one reduces the network traffic and improves the performance.On the other hand, the forward proxy treats the requests by two different ways according to the requests are blocked or allowed. In case the request is blocked, the forward proxy returns an error to the client. In case the request is allowed, the forward proxy checks either the request is cached or not; if it is cached, the forward proxy returns cached content to the client. If it is not cached, the forward proxy forwards the request to the internet then returns the retrieved content from the intent to the client.The above figure explains the work of the forward proxy in case the request is allowed but not cached on the forward proxy A. the forward proxy will send the request to the server on the internet then the server on the internet return the required content to the forward proxy and finally the forward proxy return the received content to the client and cached it on its cache for future and same request. The cached content on the forward proxy will reduce the network traffic in the future and actually improves the performance of the whole system.2.2 Reverse Proxy CachingThe other common form of a proxy server is a reverse proxy; it performs the reverse function of the forward proxy, it is used to forward requests from an internet network to the intranet network. [14]This provides more security by preventing any hacking or an illegal access from the clients on the internet to important data stored on the content servers on the intranet network. By the same way, if the required content is cached on the reverse proxy, this will reduce the network traffic and improves the performance.[15]Fig. 2.2: Reverse proxy cachingThe advantages of reverse proxy are∙Solving single point of failure problem by using load balancing for content servers.∙Reducing the traffic on the content servers in case the request is blocked by the reverse proxy. 
In this case the request is rejected directly by the reverse proxy without interrupt the content servers.Reducing the bandwidth consumes by blocked requests as it is blocked directly by reverse proxy before reaching to the content servers.The function of the reverse proxy is the same as the function of the forward proxy except the request is initiated from the client on the internet to the content servers in the internal network. At first, the client on the internet sends a request to the reverse proxy. If the request is blocked, the reverse proxy returns an error to the client. If the request is allowed, the reverse proxy checks if the request is cached or not. In case the request is cached, the reverse proxy returns the content information directly to the client on the internet. In case the request is not cached, the reverse proxy sends the request to the content server in the internal network then resends the retrieved content from the content server to the client and also cached the content information from the content server for future requests to same content information [16]2.3 Transparent CachingTransparent proxy caching eliminates one of the big drawbacks of the proxy server approach: the requirement to configure Web browsers. Transparent caches work by intercepting HTTP requests and redirecting them to Web cache servers or cache clusters.[17]This style of caching establishes a point at which different kinds of administrative control are possible; for example, deciding how to load balance requests across multiple caches. There are two ways to deploy transparent proxy caching: at the switch level and at the router level. [18]Router-based transparent proxy caching uses policy-based routing to direct requests to the appropriate cache(s). For example, requests from certain clients can be associated with a particular cache.[19]In switch-based transparent proxy caching, the switch acts as a dedicated load balancer. This approach is attractive because it reduces the overhead normally incurred by policy-based routing. Although it adds extra cost to the deployment, switches are generally less expensive than routers.[20]III.Proxy Caching ArchitectureThe following architectures are popular: hierarchical, distributed and hybrid.3.1 Hierarchical Caching ArchitectureCaching hierarchy consists of multiple levels of caches. In our system we can assume that caching hierarchy consists of four levels of caches. These levels are bottom, institutional, regional, and national levels [21]The main object of using caching hierarchy is to reduce the network traffic and minimize the times that a proxy server needs to contact to the content server in the internet or in the internal network to provide the client with needed content information .These multiple caches works in that manner in case of forward proxy, at first the client initiate a request to the bottom level cache. If the needed content information is found and cached on it, it returns this information to the client directly. If this information is not cached on it, it will forward the client request to the next level cache that is institutional. If this cache found the needed information cached on it, it will return it to bottom level cache then the bottom level cache returns them to the client. If the needed information is not cached on it, it will forward the request to regional level. 
If the needed information is cached on it, it will return the needed information to the institutional level cache then the institutional level cache returns them to the bottom level cache and finally bottom level cache returns them to the client. If the needed information is not found not found on it, it will forward the request to the last level of cache that is national, if the needed information is found on that cache, it works the same way as above till the information reach to the client. If the needed information is not cached on that cache, it will forward the request to the content server on the internet and also repeat the same steps as above till the information reached to the client.In case of the reverse proxy, the same steps above are repeated except the request will forward by reverse way as in the forward proxy. Here, the request will forward from national level cache then to then to regional then to institutional bottom and finally to the content server in the internal network. The important note in caching hierarchy either in case of the forward proxy or the reverse proxy is each cache receives information from another level cache will cache a copy from thatinformation for future need to the same request.Fig. 3.1: Hierarichal caching architecture3.2 Distributed Caching ArchitectureIn distributed Web caching systems, there are no other intermediate cache levels than the institutional caches, which serve each others' misses. In order to decide from which institutional cache to retrieve a miss document, all institutional caches keep meta-data information about the content of every other institutional cache. With distributed caching, most of the traffic flows through low network levels, which are less congested. In addition, distributed caching allows better load sharing and are more fault tolerant. Nevertheless, a large-scale deployment of distributed caching may encounter several problems such as high connection times, higher bandwidth usage, administrative issues, etc.[22]There are several approaches to the distributed caching. Internet Cache Protocol (ICP), which supports discovery and retrieval of documents from neighboring caches as well as parent caches. Another approach to distributed caching is the Cache Array Routing protocol (CARP), which divides the URL-space among an array of loosely coupled caches and lets each cache store only the documents whose URL are hashed to it.[23]3.3 Hybrid CachingA hybrid cache scheme is any scheme that combines the benefits of both hierarchical and distributed caching architectures. Caches at the same level can cooperate together as well as with higher-level caches using the concept of distributed caching.[24]A hybrid caching architecture may include cooperation between the architecture's components at some level. Some researchers explored the area of cooperative web caches (proxies). Others studied the possibility of exploiting client caches and allowing them to share their cached data.One study addressed the neglect of a certain class of clients in researches done to improve Peer-to-Peer storage infrastructure for clients with high-bandwidth and low latency connectivity. It also examines a client-side technique to reduce the required bandwidth needed to retrieve files by users with low-bandwidth. Simulations done by this research group has proved that this approach can reduce the read and write latency of files up to 80% compared to other techniques used by other systems. 
This technique has been implemented in the OceanStore prototype (Eaton et al., 2004).[25]IV.Design Goals & Proposed ArchitectureTo improve the computer network performance, decrease the workload for data center and ensure continual service improvement, we aim to design efficient mechanisms for reducing the workload of a data center and business Continuity verification and achieve the following goals:∙Reduces network bandwidth usage consumption which leads to reduce network traffic and network congestion∙Decrease the number of messages that enter the network by satisfying requests before they reach the server.∙Reduces loads on the origin servers.∙Decreases user perceived latency∙Reduced page construction times during both normal loading and peak loading∙If the remote server is not available due to a server \crash" or network partitioning, the client can obtaina cached copy at the proxy.4.1 Proposed ArchitectureWe define before two types of proxies, the forward proxy and the reverse proxy. The forward proxy is used to forward clients from the clients on the internal network to the content server in the internet. The reverse proxy is used to forward requests from the clients in the internet to the content server in the internal network. Fig 5-1 shows that the forward proxy serves as a servant for internal clients and as a cache because it cached the content received from the content server on the internet. So for any the same repeated request, the forward server can return the cached content on it to the client directly without backing again to the content server. On the same time the forward proxy does an important rule as it hides the internal clients from outside world as the request is initiated from the forward proxy.Fig. 4.1: Proposed ArchitectureFig 4-1 also shows the reverse proxy that used to forward the requests from external clients to content servers in internal network. In this case the reverse proxy makes encrypting content, compressing content, reducing the load on content servers. It also hides the responses from internal networks and as them come from the reverse proxy which increases the security. It also caches the content and forwards it directly to clients if they repeated again without backing again to the content server. Finally, we can use load balancing to balance between content servers and in this case the reverse proxy and forward the request from the client to any of this content serves which increase the availability of the system.4.2 Proposed Architecture Workflow1The Remote Site client sends a request for Web Application content to the Forward proxy cache. If Forward proxy caching contains a valid version of the Web Application content in its cache, it will return the content to the requesting user. 2If the content requested by Remote Site user is not contained in the Forward proxy cache, the request is forwarded to an upstream Reverse proxy caching.3If the upstream Reverse Proxy Cache has a valid copy of the requested content in cache, the content is returned to Forward proxy cache (Remote Site). Forward proxy cache places the content in its own cache and then returns the content to the Remote Site user who requested the content.4If the upstream Reverse proxy caching does not contain the requested content in its cache, it will forward the request to the Web Application server. 5The Web Application server returns the requested content to reverse proxy caching. 
Reverse proxy caching places the content in cache.6Web Application server returns the content to reverse proxy caching. Reverse proxy caching server places the content in its cache. Reverse proxy caching server returns the content from its cache to Forward proxy. Forward proxy cache places the content in its own cache and then returns the content to the Remote Site user who requested the content.Fig. 4.2: Proposed workflowV.Performance AnalysisIn this chapter, we evaluate cache performance of web proxy caching for web applications and compare it to the case of not using web proxy caching at all. We will monitor and evaluate the performance of web proxy caching in three cases:∙Without using web proxy caching.∙At the beginning of using web proxy caching.∙After certain period (one month) from using web proxy caching.We will take in our consideration the following parameters in evaluation process ∙Requests returned from the application server.∙Requests returned from cache without verification. ∙Requests returned from the application server, updating a file in cache.∙Requests returned from cache after verifying that they have not changed.5.1 Performance MetricsThe seven main categories of performance metrics are:1.Cache Performance:how requested Web objects were returned from the Web Server cache or from the network. It will be measured the according to2.Traffic: the amount of network traffic, by date, sent through Web Proxy including both Web and non-Web traffic.3.Daily traffic:average network traffic through Web Proxy at various times during the day. This report includes both Web and non-Web traffic.4.Web Application Responses:how ISA Server responded to HTTP requests during the report period.5.Failures communicating:Web proxy Cache encountered the following failures communicating with other computers during the report period.6.Dropped Packets:shows the users who had the highest number of dropped network packets during the report period Users that had the most dropped packets are listed first7.Queue Length:The System\Processor Queue Length counter shows how many threads are ready in the processor queue, but not currently able to use the processor.VI.ResultsIn this section we will investigate the performance analysis of cache, Network traffic, Failure communication, Dropped packets and queue length 6.1 Cache PerformanceThe cache performance results for each of the log files are shown below. The percentage of requests returned from cache without verification is high. It shows that between 38% of all requests result in a request returned from cache without verification, which is consistent with previously published results. Wills and Mikhailov[26] reported that only 15% to 32% of their proxy logs result in requests returned from cache without verification. Yin, et al.[27]revealed that 20% of requests to the server are due to revalidation of cached documents that have not changed. These results are consistent with the results found in our logs as discussed before. However, with the current logs, the number of requests returned from cache without verification has increased a little. This may be due to the duration of the analysis being longer for this study or to the use of different logs. 
It is assumed that a large fraction of these frivolous requests are due to embedded objects that do not change often.

Table 0-1: Cache Performance Results

Status                                                                    Requests   % of Total Requests   Total Bytes
Objects returned from the application server                              20873      59.30 %               617.73 MB
Objects returned from cache without verification                          13450      38.20 %               26.93 MB
Objects returned from cache after verifying that they have not changed    489        1.40 %                0.99 MB
Information not available                                                 354        1.00 %                49.74 KB
Objects returned from the application server, updating a file in cache    59         0.20 %                14.62 KB
Total                                                                     35225      100.00 %              645.72 MB

6.2 Traffic

The results for average network traffic through the web proxy caching server at various times during the day, at the beginning of using web proxy caching and after a certain time of using web proxy caching, are in the table below. The results indicate that the average processing time for handling a request is reduced by 43% after a certain time of using web proxy caching, because the proxy caches previously visited pages and returns them directly to the client without wasting time asking the application server each time.

To reflect the physical environment of the network, we have to consider the factors influencing traffic. Of the various factors influencing traffic, object size is a factor of the objects themselves; hence we reflect the size factor of a web object. An average object size hit ratio reflects the factor of object size in the object-hit ratio.

Average-object Hit Ratio: The cache-hit ratio can indicate an object-hit ratio in web caching. The average object-hit ratio is the average value of the object-hit ratio on a requested page; the performance is evaluated by comparing the average object-hit ratio and the response time [28].

Response Time Gain Factor (RTGF): This factor gives the amount of advantage in web cache response time. [29]
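The Section 4.2 workflow above describes a two-level lookup: the remote-site forward proxy cache answers first, then the data-center reverse proxy cache, and only then the Web Application server, with each cache keeping a copy on the way back. A minimal sketch of that flow follows, using invented in-memory dictionaries in place of real proxies (no HTTP involved).

```python
# Two-level lookup mirroring the Section 4.2 workflow: remote-site forward
# proxy cache first, then the data-center reverse proxy cache, then the
# Web Application server; each cache stores what it passes back down.

forward_cache = {}   # at the remote site
reverse_cache = {}   # in front of the application servers
application_server = {"/timetable": "train timetable page"}


def handle_request(url):
    if url in forward_cache:                      # step 1
        return forward_cache[url], "forward proxy cache"
    if url in reverse_cache:                      # steps 2-3
        body, source = reverse_cache[url], "reverse proxy cache"
    else:                                         # steps 4-5
        body, source = application_server[url], "application server"
        reverse_cache[url] = body
    forward_cache[url] = body                     # step 6
    return body, source


print(handle_request("/timetable"))   # first request goes to the application server
print(handle_request("/timetable"))   # repeat request is served at the remote site
```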

Guidance document for the certification body's audit findings and corrective action form


Guidance document for the audit findings and corrective action form that applicants submit to the certification body

Overview

The Corrective Action Submission Form is the certification body's form that records any non-compliances raised during an IATF 16949 audit activity. One form is produced for every minor or major non-compliance raised during the audit. A single non-compliance may incorporate more than one requirement of either the technical specification itself or an organization's defined processes.

概述

纠正措施提交表是认证机构用于记录在16949审核活动中发现的任何不符合项。

审核中发现的每一个轻微不符合或者重大不符合对应一张表格。

一个单独的不符合项可能包含技术规范本身或者被组织定义的过程中不止一个的要求。

The Finding

Section 1 of the form will be completed by the auditor and will define the process failure which led to the non-compliance, the clause of the technical specification or the organization's processes with which it did not comply, and finally a summary of the objective evidence gathered which supports the raising of the finding.

发现

表格的第一部分由审核员填写,并定义导致不符合的过程失效,列出没有遵循的技术规范或者组织过程的条款,提供所收集到的支持该发现提出的客观证据的汇总。

The Response

Once the finding has been raised, an organization must take immediate action to contain the issue. For example, if an un-calibrated gage was seen on the shop floor, the gage would be immediately removed to prevent continued use. If unidentified parts were observed on the shop floor, they would immediately be quarantined while their status was determined.

回应

一旦提出发现,客户必须采取包含该问题的临时措施。

1+X certificate network security assessment mock exam questions (with reference answers)


1+x证书网络安全评估模考试题(含参考答案)一、单选题(共70题,每题1分,共70分)1、PHP语言中 == 和===区别A、==只会判断值,===会判断值和类型B、==只会进行判断值,===只会判断类型C、==会判断值和类型,===只会判断类型D、==只会判断类型,===只会判断值正确答案:A2、盗取Cookie是用做什么?A、用于登录B、会话固定C、DDOSD、钓鱼正确答案:A3、下面关于UDP的描述哪个是正确的?A、面向连接,可靠的传输B、面向无连接,不可靠的传输C、能确保到达目的的D、传输也需要握手过程正确答案:B4、中国菜刀是一款经典的webshell管理工具,其传参方式是()。

A、POST方式B、COOKIE方式C、明文方式D、GET方式正确答案:A5、Cookie没有以下哪个作用?A、储存信息B、维持用户会话C、执行代码正确答案:C6、下面哪个选项不是NAT技术具备的优点?A、提高连接到因特网的速度B、节省合法地址C、提高连接到因特网的灵活性D、保护内部网络正确答案:A7、关于JAVA三大框架,说法正确的是?A、三大框架是Struts+Hibernate+PHPB、Spring缺点是解决不了在J2EE开发中常见的的问题C、Struts不是开源软件D、Hibernate主要是数据持久化到数据库正确答案:D8、路由工作在OSI哪一层?A、传输层B、网络层C、链路层D、物理层正确答案:B9、Apache用来识别用户后缀的文件是?A、handler.typesB、mime.typesC、mima.confD、handler.conf正确答案:B10、防御XSS漏洞的核心思想为()A、减少使用数据库B、不要点击未知链接C、禁止用户输入D、输入过滤、输出编码正确答案:D11、XSS的分类不包含下面哪一个?A、储存型xssB、反射型xssC、注入型xssD、DOM型xss正确答案:C12、关于文件包含漏洞,以下说法中不正确的是?A、文件包含漏洞只在PHP中经常出现,在其他语言不存在B、文件包含漏洞,分为本地包含,和远程包含C、文件包含漏洞在PHP Web Application中居多,而在JSP、ASP、程序中却非常少,这是因为有些语言设计的弊端D、渗透网站时,若当找不到上传点,并且也没有url_allow_include功能时,可以考虑包含服务器的日志文件正确答案:A13、数据库安全的第一道保障是?A、操作系统的安全B、网络系统的安全C、数据库管理员D、数据库管理系统层次正确答案:B14、下列哪一个是web容器A、JSPB、UDPC、IISD、MD5正确答案:C15、网络运营者应当为()国家安全机关依法维护国家安全和侦查犯罪的活动提供技术支持和协助。

Optimize caching - Make the Web Faster — Google Developers


Optimize caching

Most web pages include resources that change infrequently, such as CSS files, image files, JavaScript files, and so on. These resources take time to download over the network, which increases the time it takes to load a web page. HTTP caching allows these resources to be saved, or cached, by a browser or proxy. Once a resource is cached, a browser or proxy can refer to the locally cached copy instead of having to download it again on subsequent visits to the web page. Thus caching is a double win: you reduce round-trip time by eliminating numerous HTTP requests for the required resources, and you substantially reduce the total payload size of the responses. Besides leading to a dramatic reduction in page load time for subsequent user visits, enabling caching can also significantly reduce the bandwidth and hosting costs for your site.

- Leverage browser caching
- Leverage proxy caching

Leverage browser caching

Overview

Setting an expiry date or a maximum age in the HTTP headers for static resources instructs the browser to load previously downloaded resources from local disk rather than over the network.

Details

HTTP/S supports local caching of static resources by the browser. Some of the newest browsers (e.g. IE 7, Chrome) use a heuristic to decide how long to cache all resources that don't have explicit caching headers. Other older browsers may require that caching headers be set before they will fetch a resource from the cache; and some may never cache any resources sent over SSL.

To take advantage of the full benefits of caching consistently across all browsers, we recommend that you configure your web server to explicitly set caching headers and apply them to all cacheable static resources, not just a small subset (such as images). Cacheable resources include JS and CSS files, image files, and other binary object files (media files, PDFs, Flash files, etc.). In general, HTML is not static, and shouldn't be considered cacheable.

HTTP/1.1 provides the following caching response headers:

Expires and Cache-Control: max-age. These specify the "freshness lifetime" of a resource, that is, the time period during which the browser can use the cached resource without checking to see if a new version is available from the web server. They are "strong caching headers" that apply unconditionally; that is, once they're set and the resource is downloaded, the browser will not issue any GET requests for the resource until the expiry date or maximum age is reached.

Last-Modified and ETag. These specify some characteristic about the resource that the browser checks to determine if the files are the same. In the Last-Modified header, this is always a date. In the ETag header, this can be any value that uniquely identifies a resource (file versions or content hashes are typical). Last-Modified is a "weak" caching header in that the browser applies a heuristic to determine whether to fetch the item from cache or not. (The heuristics are different among different browsers.) However, these headers allow the browser to efficiently update its cached resources by issuing conditional GET requests when the user explicitly reloads the page. Conditional GETs don't return the full response unless the resource has changed at the server, and thus have lower latency than full GETs.

It is important to specify one of Expires or Cache-Control: max-age, and one of Last-Modified or ETag, for all cacheable resources.
Recommendations

Set caching headers aggressively for all static resources. For all cacheable resources, we recommend the following settings:

Set Expires to a minimum of one month, and preferably up to one year, in the future. (We prefer Expires over Cache-Control: max-age because it is more widely supported.) Do not set it to more than one year in the future, as that violates the RFC guidelines. If you know exactly when a resource is going to change, setting a shorter expiration is okay. But if you think it "might change soon" but don't know when, you should set a long expiration and use URL fingerprinting (described below). Setting caching aggressively does not "pollute" browser caches: as far as we know, all browsers clear their caches according to a Least Recently Used algorithm; we are not aware of any browsers that wait until resources expire before purging them.

Set the Last-Modified date to the last time the resource was changed. If the Last-Modified date is far enough in the past, chances are the browser won't refetch it.

Use fingerprinting to dynamically enable caching. For resources that change occasionally, embed a fingerprint of the resource (typically a hash of its contents) in its URL; when the resource changes, so does its fingerprint, and in turn, so does its URL. As soon as the URL changes, the browser is forced to re-fetch the resource. Fingerprinting allows you to set expiry dates long into the future even for resources that change more frequently than that. Of course, this technique requires that all of the pages that reference the resource know about the fingerprinted URL, which may or may not be feasible, depending on how your pages are coded. (A short sketch of this technique follows these recommendations.)

Set the Vary header correctly for Internet Explorer. Internet Explorer does not cache any resources that are served with a Vary header specifying any fields other than Accept-Encoding and User-Agent. To ensure these resources are cached by IE, make sure to strip out any other fields from the Vary header, or remove the Vary header altogether if possible.

Avoid URLs that cause cache collisions in Firefox. The Firefox disk cache hash functions can generate collisions for URLs that differ only slightly, namely only on 8-character boundaries. When resources hash to the same key, only one of the resources is persisted to disk cache; the remaining resources with the same key have to be re-fetched across browser restarts. Thus, if you are using fingerprinting or are otherwise programmatically generating file URLs, to maximize cache hit rate, avoid the Firefox hash collision issue by ensuring that your application generates URLs that differ on more than 8-character boundaries.

Use the Cache-Control: public directive to enable HTTPS caching for Firefox. Some versions of Firefox require that the Cache-Control: public header be set in order for resources sent over SSL to be cached on disk, even if the other caching headers are explicitly set. Although this header is normally used to enable caching by proxy servers (as described below), proxies cannot cache any content sent over HTTPS, so it is always safe to set this header for HTTPS resources.
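To illustrate the fingerprinting recommendation above, here is a minimal sketch, assuming a Python build step and an invented file layout; it is not the mechanism any particular site uses.

import hashlib
import pathlib
import shutil

def fingerprinted_copy(path):
    """Copy e.g. static/style.css to static/style.<contenthash>.css and return the new name."""
    p = pathlib.Path(path)
    digest = hashlib.md5(p.read_bytes()).hexdigest()[:16]  # the fingerprint
    target = p.with_name("%s.%s%s" % (p.stem, digest, p.suffix))
    shutil.copyfile(p, target)
    return str(target)

Pages then reference the fingerprinted URL; because the name changes whenever the content does, the resource can be served with a far-future Expires header without ever delivering a stale copy.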
Example

For the stylesheet used to display the user's calendar after login, Google Calendar embeds a fingerprint in its filename: calendar/static/<fingerprint_key>doozercompiled.css, where the fingerprint key is a 128-bit hexadecimal number. At the time of the screen shot taken from Page Speed's Show Resources panel (not reproduced here), the fingerprint was set to 82b6bc440914c01297b99b4bca641a5d.

The fingerprinting mechanism allows the server to set the Expires header to exactly one year ahead of the request date; the Last-Modified header to the date the file was last modified; and the Cache-Control: max-age header to 31536000 (one year in seconds). To cause the client to re-download the file in case it changes before its expiry date or maximum age, the fingerprint (and therefore the URL) changes whenever the file's content does.

Additional resources

For an in-depth explanation of HTTP caching, see the HTTP/1.1 RFC, sections 13.2, 14.21, and 14.9.3.
For details on enabling caching in Apache, consult the Apache Caching Guide.

Leverage proxy caching

Overview

Enabling public caching in the HTTP headers for static resources allows the browser to download resources from a nearby proxy server rather than from a remote origin server.

In addition to browser caching, HTTP provides for proxy caching, which enables static resources to be cached on public web proxy servers, most notably those used by ISPs. This means that even first-time users of your site can benefit from caching: once a static resource has been requested by one user through the proxy, that resource is available for all other users whose requests go through that same proxy. Since those locations are likely to be in closer network proximity to your users than your servers, proxy caching can result in a significant reduction in network latency. Also, if enabled, proxy caching effectively gives you free web site hosting, since responses served from proxy caches don't draw on your servers' bandwidth at all.

You use the Cache-Control: public header to indicate that a resource can be cached by public web proxies in addition to the browser that issued the request. With some exceptions (described below), you should configure your web server to set this header to public for cacheable resources.

Recommendations

Don't include a query string in the URL for static resources. Most proxies, most notably Squid up through version 3.0, do not cache resources with a "?" in their URL even if a Cache-Control: public header is present in the response. To enable proxy caching for these resources, remove query strings from references to static resources, and instead encode the parameters into the file names themselves.

Don't enable proxy caching for resources that set cookies. Setting the header to public effectively shares resources among multiple users, which means that any cookies set for those resources are shared as well. While many proxies won't actually cache any resources with cookie headers set, it's better to avoid the risk altogether. Either set the Cache-Control header to private or serve these resources from a cookieless domain.

Be aware of issues with proxy caching of JS and CSS files. Some public proxies have bugs that do not detect the presence of the Content-Encoding response header. This can result in compressed versions being delivered to client browsers that cannot properly decompress the files. Since these files should always be gzipped by your server, to ensure that the client can correctly read the files, do either of the following:

Set the Cache-Control header to private. This disables proxy caching altogether for these resources.
If your application is multi-homed around the globe and relies less on proxy caches for user locality, this might be an appropriate setting.

Set the Vary: Accept-Encoding response header. This instructs the proxies to cache two versions of the resource: one compressed, and one uncompressed. The correct version of the resource is delivered based on the client request header. This is a good choice for applications that are singly homed and depend on public proxies for user locality.
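Taken together, the proxy caching recommendations reduce to a small decision rule. The sketch below is one hedged reading of them; the three policy flags are assumptions introduced for illustration and are not part of the original guidance.

def proxy_cache_headers(sets_cookie, gzipped, multi_homed):
    """Return suggested response headers for a static resource."""
    if sets_cookie:
        # Never share cookie-bearing responses through a public proxy.
        return {"Cache-Control": "private"}
    if gzipped and multi_homed:
        # Option 1 above: skip proxy caching entirely for compressed files.
        return {"Cache-Control": "private"}
    if gzipped:
        # Option 2 above: let proxies keep compressed and uncompressed variants.
        return {"Cache-Control": "public", "Vary": "Accept-Encoding"}
    return {"Cache-Control": "public"}

print(proxy_cache_headers(sets_cookie=False, gzipped=True, multi_homed=False))
# {'Cache-Control': 'public', 'Vary': 'Accept-Encoding'}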


A Survey of Proxy Cache Evaluation Techniques

Brian D. Davison
Department of Computer Science
Rutgers, The State University of New Jersey
New Brunswick, NJ 08903 USA
davison@, /~davison/

Abstract

Proxy caches are increasingly used around the world to reduce bandwidth requirements and alleviate delays associated with the World-Wide Web. In order to compare proxy cache performances, objective measurements must be made. In this paper, we define a space of proxy evaluation methodologies based on source of workload used and form of algorithm implementation. We then survey recent publications and show their locations within this space.

1 Introduction

Proxy caches are increasingly used around the world to reduce bandwidth and alleviate delays associated with the World-Wide Web. This paper describes the space of proxy cache evaluation methodologies and places current research within that space. The primary contributions of this paper are threefold: 1) definition and description of the space of evaluation techniques; 2) appraisal of the different methods within that space; and 3) a survey of cache evaluation techniques from the research literature.

In the next section we provide background into web caching, including the levels of caching present on the web and an overview of some of the current research issues in web proxy caching. We then describe current proxy cache evaluation methods and place existing research in this space.

2 Background

Caching has a long history and is a well-studied topic in the design of computer memory systems (e.g. [HP87, Man82, MH99]), in virtual memory management in operating systems (e.g. [Dei90, GC97]), in file systems (e.g. [CFKL95]), and in databases (e.g. [SSV96]). Caching on the Internet is also performed for other network services such as DNS [Moc87a, Moc87b], Gopher and FTP [Pet98, Wes98, RH98, Cat92], and in fact much of today's web caching research can be traced back to the effort to reduce the bandwidth used by FTP [DHS93].

2.1 Web caching

Web caching is the temporary storage of web objects (such as HTML documents) for later retrieval. Proponents of web caching claim three significant advantages to web caching: reduced bandwidth consumption (fewer requests and responses that need to go over the network), reduced server load (fewer requests for a server to handle), and reduced latency (since cached responses are available immediately, and closer to the client being served). A fourth is sometimes added: more reliability, as some objects may be retrievable via cache even when the original servers are not reachable. Together, these features can make the World Wide Web less expensive and better performing. One drawback of caching is the potential of using an out-of-date object stored in a cache instead of fetching the current object from the origin server. Another is the lack of logs of client viewings of pages for the purposes of advertising (although this is being addressed, e.g. [ML97]).

Caching can be performed by the client application, and is built into virtually every web browser. There are a number of products that extend or replace the built-in caches with systems that contain larger storage, more features (such as keeping commonly used pages up-to-date and prefetching likely pages), or better performance (such as faster response times as a result of better caching mechanisms). However, while these systems cache network objects from many servers, they do so for a single user.

Caching can also be utilized between the client and the server as part of a proxy. Proxy caches are often located near network gateways to reduce
the bandwidth required over expensive dedicated Internet connections. When shared with other users, these systems serve many clients with cached objects from many servers. In fact, much of the usefulness (up to 85% of the in-cache requests [DMF97]) is in caching objects requested by one client for later retrieval by another client. For even greater performance, many proxy caches are part of cache hierarchies, in which a cache can request documents from neighboring caches instead of fetching them directly from the server [CDN+96].

Finally, caches can be placed directly in front of a particular server, to reduce the number of requests the server must handle. Most proxy caches can be used in this fashion, but this form is often called a reverse cache or sometimes an httpd accelerator to reflect the fact that it caches objects for many clients but usually from only one server [CDN+96, Wes98].

2.2 Proxy caching research

A shared proxy cache serves a population of clients. When a proxy cache receives a request for a web object, it checks to see if the response is available in memory or disk, and if so, returns the response to the client without disturbing the upstream network connection or destination server. If it is not available in the cache, the proxy attempts to fetch the object directly. However, if at some point the space required to store all the objects being cached exceeds the available space, the proxy will need to replace an object from the cache. In general, cache replacement algorithms attempt to maximize the percentage of requests successfully fulfilled by the cache (called the hit ratio) by holding onto the items most likely to be requested in the future. However, a number of researchers argue that access cost should be included in replacement calculations [BH96, CI97, WA97, SSV97, SSV98, SSV99].

Even though clients can only retrieve documents from a cache, there is still the issue of cache consistency, as source documents may be updated after the document was loaded into the cache [GS96, CDN+96, SSV98, SSV99]. Therefore, proxy caches generally do not guarantee strong consistency since there is a possibility of returning an out-of-date object to the client.

Some caching proxies exist as extensions or options of HTTP servers, such as the publicly available servers Apache [Apa98] and Jigsaw [Wor98]. Proxies are also available as stand-alone systems, such as the publicly available Squid [Wes98] and the commercial CacheFlow [Cac98a], Cisco Cache Engine [Cis98], and the Microsoft Proxy Server [Mic98]. While somewhat dated, Cormack [Cor96] provides a good introduction to web caching and includes an overview of many of these caching systems and a few others.

Much like the memory hierarchy of today's hardware systems, many researchers claim benefits of building hierarchies of web caches [CDN+96, Nat98, Dan98]. Difficulties include finding a nearby cache [CC95, CC96, ZFJ97], knowing what is in the neighboring caches [GRC97, GCR97, FCAB98, RCG98], and communication between caches [WC97b, WC97a, RW98, VR98, Vix98].

3 Evaluating Proxy Cache Performance

It is useful to be able to assess the performance of proxy caches, both for consumers selecting the appropriate system for a particular situation and also for developers working on alternative caching mechanisms. By evaluating performance across all measures of interest, it is possible to recognize drawbacks with particular implementations. For example, one can imagine a proxy with a very large disk cache that provides a high hit rate, but because of poor disk management it noticeably increases the client-perceived
latency. One can also imagine a proxy that prefetches inline images but releases those images from the cache shortly afterward (to make room for other objects). This would allow it to report high request hit rates, but not perform well in terms of bandwidth savings. Finally, it is possible to select a proxy based on its performance on a workload of requests generated by dialup users and have it perform unsatisfactorily as a parent proxy for a large corporation with other proxies as clients. These examples illustrate the importance of appropriate evaluation mechanisms for proxy systems.

3.1 The Space of Cache Evaluation Methods

The most commonly used cache evaluation method is that of simulation on a benchmark log of object requests. The byte and page hit rate savings can then be calculated, as well as estimates of latency improvements. In a few cases, an artificial dataset with the necessary characteristics (appropriate average and median object sizes, or similar long-tail distributions of object sizes and object repetitions) is used. More commonly, actual client request logs are used since they arguably better represent likely request patterns and include exact information about object sizes and retrieval times.

We characterize web/proxy evaluation architectures along two dimensions: the source of the workload used and form of algorithm implementation. Table 1 shows the traditional space with artificial vs. captured logs and simulations vs. implemented systems.

Table 1: The space of traditional evaluation methodologies for web systems.

                          artificial logs    captured logs
    simulated systems     A1                 A2
    implemented systems   B1                 B2

With the introduction of prefetching proxies, the space needs to be expanded to include implemented systems on a real network connection, and in fact, evaluation of an implemented proxy in the field would necessitate another column, that of live requests. Thus, the expanded view of the space of evaluation methodologies can be found in Table 2 and shows that there are at least three categories of data sources for testing purposes, and that there are at least three forms of algorithm evaluation.

Table 2: The expanded space of evaluation methodologies for web systems.

                                    artificial logs    captured logs    live requests
    simulated systems/network       A1                 A2               A3
    real systems/isolated network   B1                 B2               B3
    real systems/real network       C1                 C2               C3
While it may be possible to make finer distinctions within each area, such as the representativeness of the data workload or the fidelity of simulations, the broader definitions used in the expanded table form a simple yet useful characterization of the space of evaluation methodologies.

For the systems represented in each area of the space, we find it worthwhile to consider how the desired qualities of web caching (reduced bandwidth/server load and improved response time) are measured. As will be seen below, each of the areas of the space represents a trade-off of desirable and undesirable features. In the rest of this section we point out the characteristic qualities of systems in each area of the space and list work in web research that can be so categorized.

3.2 Methodological Appraisal

In general, both realism and complexity increase as you move diagonally downward and to the right in the evaluation methodology space. Note that only methods in areas A1, A2, B1 and B2 are truly replicable, since live request stream samples change over time, as do the availability and access characteristics of hosts via a live network connection.

As can be seen, caching mechanisms are most commonly evaluated by simulation on captured client request logs like the sample log shown in Figure 1. These logs are generally recorded as the requests pass through a proxy, but it is also possible to collect logs by packet sniffing (as in [WWB96, MDFK97, GB97]) or by appropriately modifying the browser to perform the logging (e.g. [TG97, CBC95, CP95]).

893252015.307  14  <client-ip> TCP_HIT/200          227  GET /metacrawler/images/transparent.gif - NONE/- image/gif
893252015.312  23  <client-ip> TCP_HIT/200         4170  GET /metacrawler/images/head.gif - NONE/- image/gif
893252015.318  38  <client-ip> TCP_HIT/200          406  GET /metacrawler/images/bg2.gif - NONE/- image/gif
893252015.636 800  <client-ip> TCP_REFRESH_MISS/200 8872 GET / - DIRECT/ text/html
893252015.728 355  <client-ip> TCP_HIT/200         5691  GET /metacrawler/images/market2.gif - NONE/- image/gif
893252016.138 465  <client-ip> TCP_HIT/200          219  GET /metacrawler/templates/tips/../../images/pixel.gif - NONE/- image/gif
893252016.430 757  <client-ip> TCP_REFRESH_HIT/200  2106 GET /metacrawler/templates/tips/../../images/ultimate.jpg - DIRECT/ image/jpeg

Figure 1: This excerpt of a proxy log generated by Squid 1.1 records the timestamp, elapsed-time, client, code/status, bytes, method, URL, client-username, peerstatus/peerhost and objecttype for each request. It is also an example of how requests can be logged in an order inappropriate for replaying in later experiments.

Simulated systems.

Simulation is the simplest mechanism for evaluation as it does not require full implementation. However, simulating the caching algorithm requires detailed knowledge of the algorithms which is not always possible (especially for commercial implementations). Even then, simulation cannot accurately assess document retrieval delays [WA97]. It is also impossible to accurately simulate caching mechanisms that prefetch on the basis of the contents of the pages being served (termed content-based prefetching), such as those in CacheFlow [Cac98a] and Wcol [CY97], since they need at least to have access to the links within web pages—something that is not available from server logs. Even if page contents were logged (as in [CBC95, MDFK97]), caches that perform prefetching may prefetch objects that are not on the user request logs and thus have unknown characteristics such as size and web server response times.
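To make trace-driven simulation concrete, the following sketch (not one of the surveyed simulators) replays Squid-style log lines such as those in Figure 1 through a simple LRU cache and reports request and byte hit ratios; the field positions and the cache size are simplifying assumptions.

from collections import OrderedDict

def simulate_lru(log_lines, capacity_bytes=10 * 2**20):
    cache = OrderedDict()              # URL -> object size, kept in LRU order
    used = hits = byte_hits = requests = total_bytes = 0
    for line in log_lines:
        fields = line.split()
        size, url = int(fields[4]), fields[6]   # bytes and URL columns of the Figure 1 format
        requests += 1
        total_bytes += size
        if url in cache:
            hits += 1
            byte_hits += size
            cache.move_to_end(url)              # mark as most recently used
            continue
        cache[url] = size
        used += size
        while used > capacity_bytes:            # evict least recently used objects
            _, evicted = cache.popitem(last=False)
            used -= evicted
    return hits / requests, byte_hits / total_bytes

# Usage: hit_ratio, byte_hit_ratio = simulate_lru(open("access.log"))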
Real systems/isolated networks.

In order to combat the difficulties associated with a live network connection, measurement techniques often eliminate the variability of the Internet by using local servers and isolated networks, which may generate unrealistic expectations of performance. In addition to not taking into consideration current web server and network conditions, isolated networks do not allow for retrieval of current content (updated web pages).

Real systems and networks.

Because the state of the Internet changes continuously (i.e. web servers and intermediate network connections may be responsive at one time, and sluggish the next), tests on a live network are generally not replicable. Additionally, to perform such tests the experiment requires reliable hardware, robust software, and a robust network connection to handle the workload applied. Finally, connection to a real network requires compatibility with, and no abuse of, existing systems (e.g. one cannot run experiments that require changes to standard httpd servers or experimental extensions to TCP/IP).

Artificial workloads.

Synthetic traces are often used to generate workloads that have characteristics that do not currently exist to help answer questions like whether the system can handle twice as many requests per second, or whether caching of non-web objects is feasible. However, artificial workloads often make significant assumptions (e.g. that all objects are cachable, or that requests follow a particular distribution) which are necessary for testing, but not necessarily known to be true. (A sketch of such a trace generator appears at the end of this subsection.)

Captured logs.

Using actual client request logs can be more realistic than artificial workloads, but since on average the lifetime of a web page is short (less than two months [Wor94, GS96, Kah97, DFKM97]), any captured log loses its value quickly as more references within it are no longer valid, either by becoming inaccessible or by changing content (i.e., looking at a page a short time later may not give the same content). In addition, unless the logs are recorded and processed carefully, it is possible for the logs to reflect an inaccurate request ordering as the sample Squid logs show in Figure 1. Note that the request for the main page follows requests for three subitems of that main page. This is because entries in the log are recorded when the request is completed, and the timestamp records the time at which the socket is closed. Finally, proxy cache trace logs are inaccurate when they return stale objects since they may not have the same characteristics as current objects. Note that logs generated from non-caching proxies or from HTTP packet sniffing may not have this drawback. In other work [Dav99b], we further examine the drawbacks of using standard trace logs and investigate what can be learned from more complete logs that include additional information such as page content.

Current request stream workloads.

Using a live request stream produces experiments that are not reproducible (especially when paired with live hardware/networks). Additionally, the test hardware and software may have difficulties handling a high real load. On the other hand, if the samples are large enough and have similar characteristics, an argument for comparability might be made.
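To make the notion of an artificial workload concrete, here is a minimal sketch of a synthetic trace generator with Zipf-like object popularity and heavy-tailed object sizes; the distribution parameters and URL scheme are illustrative assumptions rather than values taken from any surveyed benchmark.

import random

def synthetic_trace(n_requests=10000, n_objects=2000, zipf_exponent=1.8, seed=0):
    rng = random.Random(seed)
    # Fixed per-object sizes drawn from a heavy-tailed (Pareto) distribution.
    sizes = {k: int(1024 * rng.paretovariate(1.2)) for k in range(1, n_objects + 1)}
    trace = []
    for _ in range(n_requests):
        # Crude Zipf-like popularity: P(rank = k) falls off roughly as k**-zipf_exponent.
        rank = n_objects + 1
        while rank > n_objects:
            rank = int(rng.paretovariate(zipf_exponent - 1))
        trace.append(("http://example.com/obj/%d" % rank, sizes[rank]))
    return trace

# Each entry is a (URL, size in bytes) pair that can be fed to a cache simulator.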
3.3 Placement of current work in the space of methodologies

In this section, we place work from the research literature into the evaluation space described above. Note that inclusion/citation in a particular area does not necessarily indicate the main thrust of the paper mentioned.

Area A1: Simulations using artificial workloads.

It can be difficult to characterize a workload sufficiently to be able to generate a credible artificial workload. This, and the wide availability of a number of proxy traces, means that only a few researchers attempt this kind of research:

• Jiang and Kleinrock [JK97, JK98] evaluate prefetching mechanisms theoretically (area A1) but also on a limited trace set (area A2).
• Manley, Seltzer and Courage [MSC98] propose a new web benchmark which generates realistic loads to focus on measuring latency. This tool would be used to perform experiments somewhere between areas A1 and A2 as it uses captured logs to build particular loads on request.
• Tewari et al. [TVDS98] use synthetic traces for their simulations of caching continuous media traffic.

Area A2: Simulations using captured logs.

Simulating proxy performance is much more popular than one might expect from the list of research in area A1. In fact, the most common mechanism for evaluating an algorithm's performance by far is simulation over one or more captured traces.

• Cunha et al. [CBC95], Partl [Par96], Williams et al. [WAS+96], Gwertzman and Seltzer [GS96], Cao and Irani [CI97], Bolot and Hoschka [BH96], Gribble and Brewer [GB97], Duska, Marwood and Feeley [DMF97], Caceres et al. [CDF+98], Niclausse, Liu and Nain [NLN98], Kurcewicz, Sylwestrzak and Wierzbicki [KSW98] and Scheuermann, Shim and Vingralek [SSV97, SSV98, SSV99] all utilize trace-based simulation to evaluate different cache management algorithms.
• Trace-based simulation is also used in evaluating approaches to prefetching, such as Bestavros's server-side speculation [Bes95], Padmanabhan and Mogul's persistent HTTP protocol along with prefetching [PM96], Kroeger, Long and Mogul's calculations on limits to latency improvement from caching and prefetching [KLM97], Markatos and Chronaki's object popularity-based method [MC98], Fan, Cao and Jacobson's latency reduction to low-bandwidth clients via prefetching [FCJ99] and Crovella and Barford's prefetching with simulated network effects [CB98].
• Markatos [Mar96] simulates performance of a web server augmented with a main memory cache on a number of public web server traces.
• Mogul et al. [MDFK97] use two large, full content traces to evaluate delta-encoding and compression methods for HTTP via calculated savings (area A2) but also performed some experiments to include computational costs on real hardware with representative request samples (something between areas B1 and B2).

Area A3: Simulation based on current requests.

We are aware of no published research that could be categorized in this area. The algorithms examined in areas A1 and A2 do not need the characteristics of live request streams (such as contents rather than just headers of HTTP requests and replies). Those researchers who use live request streams all use real systems of some sort (as will be seen below).

Area B1: Real systems on an isolated network using an artificial dataset.

A number of researchers have built tools to generate workloads for web systems in a closed environment.
Such systems make it possible to generate workloads that are uncommon in practice (or impossible to capture) to illuminate implementation problems. These include:

• Almeida, Almeida and Yates [AAY97] propose a web server measurement tool (Webmonitor), and describe experiments driven by an HTTP load generator.
• Banga, Douglis and Rabinovich [BDR97] use an artificial workload with custom client and server software to test the use of transmitting only page changes from a server proxy to a client proxy over a slow link.
• Almeida and Cao's Wisconsin Proxy Benchmark [AC98b, AC98a] uses a combination of web client and web server processes on an isolated network to evaluate proxy performance.
• While originally designed to exercise web servers, both Barford and Crovella's SURGE [BC98] and Mosberger and Jin's httperf [MJ98] generate particular workloads useful for server and proxy evaluation.
• The CacheFlow [Cac98b] measurement tool was designed specifically for areas C1 and C2, but could also be used on an isolated network with an artificial dataset.

Area B2: Real systems on an isolated network using captured trace logs.

Some of the research projects listed in area B1 may be capable of using captured trace logs. In addition, we place the following here:

• In evaluating their Crispy Squid, Gadde, Chase and Rabinovich [GCR98] describe the tools and libraries called Proxycizer. These tools provide a trace-driven client and a characterized web server that surround the proxy under evaluation, much like the Wisconsin Proxy Benchmark.

Area B3: Real systems on an isolated network using current requests.

Like area A3, we found no research applicable to this area. Since live requests can attempt to fetch objects from around the globe, it is unlikely to be useful within an isolated network.

Area C1: Real systems on a live network using an artificial dataset.

Some of the research projects in area B1 may also be extensible to the use of a live network connection. The primary restriction is the use of real, valid URLs that fetch over the Internet rather than particular files on a local server.

Area C2: Real systems on a live network using captured logs.

With captured logs, the systems being evaluated are as realistically operated without involving real users as clients. Additionally, some of the tools from area B1 may be usable in this type of experiment.

• Wooster and Abrams [Woo96, WA97] report on evaluating multiple cache replacement algorithms in parallel within the Harvest cache, both using URL traces and online (grid areas C2 and C3, respectively) but the multiple replacement algorithms are within a single proxy. Wooster also describes experiments in which a client replayed logs to multiple separate proxies running on a single multiprocessor machine.
• Maltzahn and Richardson [MR97] evaluate proxies with the goal of finding enterprise-class systems. They test real systems with a real network connection and used carefully selected high load-generating trace logs.
• Liu et al. test the effect of network connection speed and proxy caching on response time using public traces [LAJF98] on what appears to be a live network connection.
• Lee et al. [LHC+98] evaluate different cache-to-cache relationships using trace logs on a real network connection.

Area C3: Real systems on a live network using the current request stream.

In many respects, this area represents the strongest of all evaluation methodologies. However, most real installations are not used for the comparison of alternate systems or configurations, and so we report on only a few research efforts here:

• Chinen and
Yamaguchi [CY97] described and evaluated the performance of the Wcol proxy cache on a live network using live data, but do not compare it to any other caching systems.
• Rousskov and Soloviev [RS98] evaluated performance of seven different proxies in seven different real-world request streams.
• Cormack [Cor98] described performance of different configurations at different times of a live web cache on a live network.
• Elsewhere [Dav99a] we describe the Simultaneous Proxy Evaluation (SPE) architecture that compares multiple proxies on the same request stream. While originally designed for this area with a live request stream and live network connection, it can also be used for replaying captured or artificial logs on isolated or live networks (areas B1, B2, C1, and C2).

4 Discussion and Conclusions

Evaluation is a significant concern, both for consumers and for researchers. Objective measurements are essential, as are comparability of measurements from system to system. Furthermore, it is important to eliminate variables that can affect evaluation.

Selection of an appropriate evaluation methodology depends on the technology being tested and the desired level of evaluation. Some technologies, such as protocol extensions, are often impossible to test over a live network because other systems do not support it. Similarly, if the goal is to find bottlenecks in an implementation, one may want to stress the system with very high synthetic loads since such loads are unlikely to be available in captured request traces. In fact, this reveals another instance in which a live network and request stream should not be used—when the goal of the test is to drive the system into a failure mode to find its limits.

In general, we argue for the increased believability of methodologies in the evaluation space as you move down and to the right when objectively testing the successful performance of a web caching system. If the tested systems make decisions based on content, a method from the bottom row is likely to be required. When looking for failure modes, it will be more useful to consider methodologies near the upper left of the evaluation space.

This survey provides a sampling of the published web-caching research and presents one of potentially many spaces of evaluation methodologies. In particular, we haven't really considered aspects of testing inter-cache communication, cache consistency, or low-level protocol issues such as connection caching which are significant in practice [CDF+98].

In this paper we have described a space of evaluation methodologies and shown where current research efforts fall within it. By considering the space of appropriate evaluation methodologies, one can select the best trade-off of information provided, implementation costs, comparability, and relevance to the target environment.
Acknowledgments

Thanks are due to Haym Hirsh, Brett Vickers, and anonymous reviewers for their helpful comments on earlier drafts of this paper.

References

[AAY97] Jussara Almeida, Virgílio A. F. Almeida, and David J. Yates. Measuring the behavior of a world wide web server. In Proceedings of the Seventh Conference on High Performance Networking (HPN), pages 57–72. IFIP, April 1997.

[AC98a] Jussara Almeida and Pei Cao. Measuring Proxy Performance with the Wisconsin Proxy Benchmark. In Third International WWW Caching Workshop, Manchester, England, June 1998. Also available as Technical Report 1373, Computer Sciences Dept., Univ. of Wisconsin-Madison, April 1998.

[AC98b] Jussara Almeida and Pei Cao. Wisconsin proxy benchmark 1.0. Available from /~cao/wpb1.0.html, 1998.

[Apa98] Apache Group. Apache HTTP server documentation. Available at /, 1998.

[BC98] Paul Barford and Mark Crovella. Generating representative web workloads for network and server performance evaluation. In Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS '98/PERFORMANCE '98), pages 151–160, Madison, WI, June 1998.

[BDR97] Gaurav Banga, Fred Douglis, and Michael Rabinovich. Optimistic Deltas for WWW Latency Reduction. In Proceedings of the USENIX Technical Conference, 1997.

[Bes95] Azer Bestavros. Using Speculation to Reduce Server Load and Service Time on the WWW. In Proceedings of CIKM '95: The Fourth ACM International Conference on Information and Knowledge Management, Baltimore, MD, November 1995. Also available as Technical Report TR-95-006, Computer Science Department, Boston University.

[BH96] Jean-Chrysostome Bolot and Philipp Hoschka. Performance engineering of the world wide web: Application to dimensioning and cache design. In Proceedings of the Fifth International World Wide Web Conference, Paris, France, May 1996.
