Boosting: Principles and Applications
Boosting is an ensemble learning method for improving the performance of machine learning models. It trains a series of weak classifiers and combines them into a single strong classifier.
The principle of Boosting is to improve the weak classifiers iteratively: samples that were misclassified receive higher weights, so that the overall classification performance improves round by round.
The core idea is to combine multiple weak classifiers with weights so that, working together, they form a single stronger classifier.
In each iteration, Boosting adjusts the sample weights according to the performance of the previous round's classifier, giving misclassified samples higher weights so that the next round pays more attention to these hard-to-classify examples.
This iterative process continues until a preset number of iterations is reached or the classifier's performance stops improving.
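The loop below is a minimal from-scratch sketch of this reweighting scheme (essentially AdaBoost's update rule) on synthetic data; the dataset, the number of rounds, and the use of scikit-learn decision stumps are illustrative assumptions rather than details from the text.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
y = np.where(y == 1, 1, -1)               # use labels in {-1, +1}

n_rounds = 50
w = np.ones(len(y)) / len(y)              # start with uniform sample weights
stumps, alphas = [], []

for _ in range(n_rounds):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = w[pred != y].sum()              # weighted training error of this round
    alpha = 0.5 * np.log((1 - err) / (err + 1e-12))
    w *= np.exp(-alpha * y * pred)        # misclassified samples get larger weights
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# The strong classifier is the sign of the weighted sum of the weak ones.
scores = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
print("training accuracy:", (np.sign(scores) == y).mean())
```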
1. AdaBoost (Adaptive Boosting): AdaBoost is one of the classic implementations of Boosting. It iteratively trains a series of weak classifiers and combines them, with weights, into a strong classifier.
AdaBoost adapts to different data distributions: samples that are hard to classify are given higher weights, which improves overall classification performance.
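In practice the algorithm is usually called through a library. A minimal sketch with scikit-learn's AdaBoostClassifier (assumed available) might look like the following; the hyperparameter values are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Toy binary classification problem (synthetic, for illustration only).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# AdaBoost with its default weak learner (a depth-1 decision tree, a "stump").
# Each round re-weights the training samples so misclassified ones get more
# attention, then the stumps are combined with weights into the final model.
ada = AdaBoostClassifier(n_estimators=200, learning_rate=0.5, random_state=0)
ada.fit(X_train, y_train)
print("test accuracy:", ada.score(X_test, y_test))
```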
2. Gradient Boosting: Gradient Boosting is a Boosting algorithm that improves the model step by step in the manner of gradient descent.
Its core idea is that, in each iteration, the negative gradient of the loss function is computed and used as the target (the pseudo-residuals) that the next weak learner is fitted to.
Iterating in this way progressively improves the weak learners and raises the overall classification accuracy.
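The loop below is a minimal from-scratch sketch of this idea for squared-error regression, where the negative gradient is simply the residual y - F(x); it assumes NumPy and scikit-learn trees are available and is meant only to illustrate the mechanism, not to be a production implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

learning_rate = 0.1
pred = np.full_like(y, y.mean())          # F_0: a constant initial model
trees = []

for _ in range(100):
    # For squared-error loss the negative gradient is the residual y - F(x);
    # each new weak learner is fitted to these pseudo-residuals.
    residuals = y - pred
    tree = DecisionTreeRegressor(max_depth=3).fit(X, residuals)
    pred += learning_rate * tree.predict(X)   # take a small step along the fit
    trees.append(tree)

print("training MSE:", round(float(np.mean((y - pred) ** 2)), 4))
```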
3. XGBoost (eXtreme Gradient Boosting): XGBoost is an optimized implementation of Gradient Boosting. On top of the basic algorithm it adds innovations such as regularization, built-in handling of missing values, and parallel computation.
XGBoost has achieved excellent results in many machine learning competitions and is widely used on real-world problems.
4. LightGBM: LightGBM is a Boosting algorithm based on gradient-boosted trees. It builds on XGBoost with further improvements that make training faster and reduce memory consumption.
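The snippet below sketches how the two libraries are typically called side by side; it assumes the xgboost and lightgbm packages are installed, and the hyperparameters are illustrative rather than tuned values.

```python
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# reg_lambda is XGBoost's L2 regularization term; num_leaves controls
# LightGBM's leaf-wise tree growth.
models = {
    "XGBoost": XGBClassifier(n_estimators=300, max_depth=6,
                             learning_rate=0.1, reg_lambda=1.0),
    "LightGBM": LGBMClassifier(n_estimators=300, num_leaves=31,
                               learning_rate=0.1),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", model.score(X_test, y_test))
```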
HP Color LaserJet Enterprise MFP M776 User Guide
Legal informationCopyright and License© Copyright 2019 HP Development Company, L.P.Reproduction, adaptation, or translation without prior written permission is prohibited, except as allowedunder the copyright laws.The information contained herein is subject to change without notice.The only warranties for HP products and services are set forth in the express warranty statementsaccompanying such products and services. Nothing herein should be construed as constituting anadditional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.Edition 1, 10/2019Trademark CreditsAdobe®, Adobe Photoshop®, Acrobat®, and PostScript® are trademarks of Adobe Systems Incorporated.Apple and the Apple logo are trademarks of Apple Inc., registered in the U.S. and other countries.macOS is a trademark of Apple Inc., registered in the U.S. and other countries.AirPrint is a trademark of Apple Inc., registered in the U.S. and other countries.Google™ is a trademark of Google Inc.Microsoft®, Windows®, Windows® XP, and Windows Vista® are U.S. registered trademarks of MicrosoftCorporation.UNIX® is a registered trademark of The Open Group.iiiT able of contents1 Printer overview (1)Warning icons (1)Potential shock hazard (2)Printer views (2)Printer front view (2)Printer back view (4)Interface ports (4)Control-panel view (5)How to use the touchscreen control panel (7)Printer specifications (8)T echnical specifications (8)Supported operating systems (11)Mobile printing solutions (12)Printer dimensions (13)Power consumption, electrical specifications, and acoustic emissions (15)Operating-environment range (15)Printer hardware setup and software installation (16)2 Paper trays (17)Introduction (17)Load paper to Tray 1 (multipurpose tray) (17)Load Tray 1 (multipurpose tray) (18)Tray 1 paper orientation (19)Use alternative letterhead mode (24)Enable Alternative Letterhead Mode by using the printer control-panel menus (24)Load paper to Tray 2 (24)Load Tray 2 (24)Tray 2 paper orientation (26)Use alternative letterhead mode (29)Enable Alternative Letterhead Mode by using the printer control-panel menus (29)Load paper to the 550-sheet paper tray (30)Load paper to the 550-sheet paper tray (30)550-sheet paper tray paper orientation (32)Use alternative letterhead mode (35)Enable Alternative Letterhead Mode by using the printer control-panel menus (35)ivLoad paper to the 2 x 550-sheet paper trays (36)Load paper to the 2 x 550-sheet paper trays (36)2 x 550-sheet paper tray paper orientation (38)Use alternative letterhead mode (41)Enable Alternative Letterhead Mode by using the printer control-panel menus (41)Load paper to the 2,700-sheet high-capacity input paper trays (41)Load paper to the 2,700-sheet high-capacity input paper trays (41)2,700-sheet HCI paper tray paper orientation (43)Use alternative letterhead mode (45)Enable Alternative Letterhead Mode by using the printer control-panel menus (45)Load and print envelopes (46)Print envelopes (46)Envelope orientation (46)Load and print labels (47)Manually feed labels (47)Label orientation (48)3 Supplies, accessories, and parts (49)Order supplies, accessories, and parts (49)Ordering (49)Supplies and accessories (50)Maintenance/long-life consumables (51)Customer self-repair parts (51)Dynamic security (52)Configure the HP toner-cartridge-protection supply settings (53)Introduction (53)Enable or disable the Cartridge Policy feature (53)Use the printer control panel to enable the Cartridge Policy feature (54)Use the printer control panel to disable 
the Cartridge Policy feature (54)Use the HP Embedded Web Server (EWS) to enable the Cartridge Policy feature (54)Use the HP Embedded Web Server (EWS) to disable the Cartridge Policy feature (55)Troubleshoot Cartridge Policy control panel error messages (55)Enable or disable the Cartridge Protection feature (55)Use the printer control panel to enable the Cartridge Protection feature (56)Use the printer control panel to disable the Cartridge Protection feature (56)Use the HP Embedded Web Server (EWS) to enable the Cartridge Protection feature (56)Use the HP Embedded Web Server (EWS) to disable the Cartridge Protection feature (57)Troubleshoot Cartridge Protection control panel error messages (57)Replace the toner cartridges (58)T oner-cartridge information (58)Remove and replace the cartridges (59)Replace the imaging drums (62)Imaging drum information (62)Remove and replace the imaging drums (63)Replace the toner-collection unit (66)T oner-collection unit information (66)vRemove and replace the toner-collection unit (67)Replace the staple cartridge (M776zs model only) (70)Staple cartridge information (70)Remove and replace the staple cartridge (71)4 Print (73)Print tasks (Windows) (73)How to print (Windows) (73)Automatically print on both sides (Windows) (74)Manually print on both sides (Windows) (74)Print multiple pages per sheet (Windows) (75)Select the paper type (Windows) (75)Additional print tasks (76)Print tasks (macOS) (77)How to print (macOS) (77)Automatically print on both sides (macOS) (77)Manually print on both sides (macOS) (77)Print multiple pages per sheet (macOS) (78)Select the paper type (macOS) (78)Additional print tasks (79)Store print jobs on the printer to print later or print privately (79)Introduction (79)Create a stored job (Windows) (79)Create a stored job (macOS) (80)Print a stored job (81)Delete a stored job (81)Delete a job that is stored on the printer (81)Change the job storage limit (82)Information sent to printer for Job Accounting purposes (82)Mobile printing (82)Introduction (82)Wi-Fi, Wi-Fi Direct Print, NFC, and BLE printing (82)Enable wireless printing (83)Change the Wi-Fi Direct name (83)HP ePrint via email (83)AirPrint (84)Android embedded printing (85)Print from a USB flash drive (85)Enable the USB port for printing (85)Method one: Enable the USB port from the printer control panel (85)Method two: Enable the USB port from the HP Embedded Web Server (network-connectedprinters only) (85)Print USB documents (86)Print using high-speed USB 2.0 port (wired) (86)Method one: Enable the high-speed USB 2.0 port from the printer control panel menus (86)Method two: Enable the high-speed USB 2.0 port from the HP Embedded Web Server (network-connected printers only) (87)vi5 Copy (88)Make a copy (88)Copy on both sides (duplex) (90)Additional copy tasks (92)6 Scan (93)Set up Scan to Email (93)Introduction (93)Before you begin (93)Step one: Access the HP Embedded Web Server (EWS) (94)Step two: Configure the Network Identification settings (95)Step three: Configure the Send to Email feature (96)Method one: Basic configuration using the Email Setup Wizard (96)Method two: Advanced configuration using the Email Setup (100)Step four: Configure the Quick Sets (optional) (104)Step five: Set up Send to Email to use Office 365 Outlook (optional) (105)Introduction (105)Configure the outgoing email server (SMTP) to send an email from an Office 365 Outlookaccount (105)Set up Scan to Network Folder (108)Introduction (108)Before you begin (108)Step one: Access the HP Embedded Web 
Server (EWS) (108)Step two: Set up Scan to Network Folder (109)Method one: Use the Scan to Network Folder Wizard (109)Method two: Use Scan to Network Folder Setup (110)Step one: Begin the configuration (110)Step two: Configure the Scan to Network Folder settings (111)Step three: Complete the configuration (118)Set up Scan to SharePoint (118)Introduction (118)Before you begin (118)Step one: Access the HP Embedded Web Server (EWS) (118)Step two: Enable Scan to SharePoint and create a Scan to SharePoint Quick Set (119)Scan a file directly to a SharePoint site (121)Quick Set scan settings and options for Scan to SharePoint (122)Set up Scan to USB Drive (123)Introduction (124)Step one: Access the HP Embedded Web Server (EWS) (124)Step two: Enable Scan to USB Drive (124)Step three: Configure the Quick Sets (optional) (125)Default scan settings for Scan to USB Drive setup (126)Default file settings for Save to USB setup (126)Scan to email (127)Introduction (127)Scan to email (127)Scan to job storage (129)viiIntroduction (129)Scan to job storage on the printer (130)Print from job storage on the printer (132)Scan to network folder (132)Introduction (132)Scan to network folder (132)Scan to SharePoint (134)Introduction (134)Scan to SharePoint (134)Scan to USB drive (136)Introduction (136)Scan to USB drive (136)Use HP JetAdvantage business solutions (138)Additional scan tasks (138)7 Fax (140)Set up fax (140)Introduction (140)Set up fax by using the printer control panel (140)Change fax configurations (141)Fax dialing settings (141)General fax send settings (142)Fax receive settings (143)Send a fax (144)Additional fax tasks (146)8 Manage the printer (147)Advanced configuration with the HP Embedded Web Server (EWS) (147)Introduction (147)How to access the HP Embedded Web Server (EWS) (148)HP Embedded Web Server features (149)Information tab (149)General tab (149)Copy/Print tab (150)Scan/Digital Send tab (151)Fax tab (152)Supplies tab (153)Troubleshooting tab (153)Security tab (153)HP Web Services tab (154)Networking tab (154)Other Links list (156)Configure IP network settings (157)Printer sharing disclaimer (157)View or change network settings (157)Rename the printer on a network (157)viiiManually configure IPv4 TCP/IP parameters from the control panel (158)Manually configure IPv6 TCP/IP parameters from the control panel (158)Link speed and duplex settings (159)Printer security features (160)Introduction (160)Security statements (160)Assign an administrator password (160)Use the HP Embedded Web Server (EWS) to set the password (160)Provide user access credentials at the printer control panel (161)IP Security (161)Encryption support: HP High Performance Secure Hard Disks (161)Lock the formatter (161)Energy-conservation settings (161)Set the sleep timer and configure the printer to use 1 watt or less of power (161)Set the sleep schedule (162)Set the idle settings (162)HP Web Jetadmin (163)Software and firmware updates (163)9 Solve problems (164)Customer support (164)Control panel help system (165)Reset factory settings (165)Introduction (165)Method one: Reset factory settings from the printer control panel (165)Method two: Reset factory settings from the HP Embedded Web Server (network-connectedprinters only) (166)A “Cartridge is low” or “Cartridge is very low” message displays on the printer control panel (166)Change the “Very Low” settings (166)Change the “Very Low” settings at the control panel (166)For printers with fax capability (167)Order supplies (167)Printer does not pick up paper or misfeeds 
(167)Introduction (167)The printer does not pick up paper (167)The printer picks up multiple sheets of paper (171)The document feeder jams, skews, or picks up multiple sheets of paper (174)Clear paper jams (174)Introduction (174)Paper jam locations (174)Auto-navigation for clearing paper jams (175)Experiencing frequent or recurring paper jams? (175)Clear paper jams in the document feeder - 31.13.yz (176)Clear paper jams in Tray 1 (13.A1) (177)Clear paper jams in Tray 2 (13.A2) (182)Clear paper jams in the fuser (13.B9, 13.B2, 13.FF) (188)ixClear paper jams in the duplex area (13.D3) (194)Clear paper jams in the 550-sheet trays (13.A3, 13.A4) (199)Clear paper jams in the 2 x 550 paper trays (13.A4, 13.A5) (206)Clear paper jams in the 2,700-sheet high-capacity input paper trays (13.A3, 13.A4, 13.A5, 13.A7) (213)Resolving color print quality problems (220)Introduction (220)Troubleshoot print quality (221)Update the printer firmware (221)Print from a different software program (221)Check the paper-type setting for the print job (221)Check the paper type setting on the printer (221)Check the paper type setting (Windows) (221)Check the paper type setting (macOS) (222)Check toner-cartridge status (222)Step one: Print the Supplies Status Page (222)Step two: Check supplies status (222)Print a cleaning page (222)Visually inspect the toner cartridge or cartridges (223)Check paper and the printing environment (223)Step one: Use paper that meets HP specifications (223)Step two: Check the environment (223)Step three: Set the individual tray alignment (224)Try a different print driver (224)Troubleshoot color quality (225)Calibrate the printer to align the colors (225)Troubleshoot image defects (225)Improve copy image quality (233)Check the scanner glass for dirt and smudges (233)Calibrate the scanner (234)Check the paper settings (235)Check the paper selection options (235)Check the image-adjustment settings (235)Optimize copy quality for text or pictures (236)Edge-to-edge copying (236)Improve scan image quality (236)Check the scanner glass for dirt and smudges (237)Check the resolution settings (238)Check the color settings (238)Check the image-adjustment settings (239)Optimize scan quality for text or pictures (239)Check the output-quality settings (240)Improve fax image quality (240)Check the scanner glass for dirt and smudges (240)Check the send-fax resolution settings (242)Check the image-adjustment settings (242)Optimize fax quality for text or pictures (242)Check the error-correction setting (243)xSend to a different fax machine (243)Check the sender's fax machine (243)Solve wired network problems (244)Introduction (244)Poor physical connection (244)The computer is unable to communicate with the printer (244)The printer is using incorrect link and duplex settings for the network (245)New software programs might be causing compatibility problems (245)The computer or workstation might be set up incorrectly (245)The printer is disabled, or other network settings are incorrect (245)Solve wireless network problems (245)Introduction (245)Wireless connectivity checklist (245)The printer does not print after the wireless configuration completes (246)The printer does not print, and the computer has a third-party firewall installed (246)The wireless connection does not work after moving the wireless router or printer (247)Cannot connect more computers to the wireless printer (247)The wireless printer loses communication when connected to a VPN (247)The network does not appear in the wireless networks list 
(247)The wireless network is not functioning (247)Reduce interference on a wireless network (248)Solve fax problems (248)Checklist for solving fax problems (248)What type of phone line are you using? (249)Are you using a surge-protection device? (249)Are you using a phone company voice-messaging service or an answering machine? (249)Does your phone line have a call-waiting feature? (249)Check fax accessory status (249)General fax problems (250)The fax failed to send (250)No fax address book button displays (250)Not able to locate the Fax settings in HP Web Jetadmin (250)The header is appended to the top of the page when the overlay option is enabled (251)A mix of names and numbers is in the recipients box (251)A one-page fax prints as two pages (251)A document stops in the document feeder in the middle of faxing (251)The volume for sounds coming from the fax accessory is too high or too low (251)Index (252)xiPrinter overview1Review the location of features on the printer, the physical and technical specifications of the printer,and where to locate setup information.For video assistance, see /videos/LaserJet.The following information is correct at the time of publication. For current information, see /support/colorljM776MFP.For more information:HP's all-inclusive help for the printer includes the following information:●Install and configure●Learn and use●Solve problems●Download software and firmware updates●Join support forums●Find warranty and regulatory informationWarning iconsUse caution if you see a warning icon on your HP printer, as indicated in the icon definitions.●Caution: Electric shock●Caution: Hot surface●Caution: Keep body parts away from moving partsPrinter overview1●Caution: Sharp edge in close proximity●WarningPotential shock hazardReview this important safety information.●Read and understand these safety statements to avoid an electrical shock hazard.●Always follow basic safety precautions when using this product to reduce risk of injury from fire orelectric shock.●Read and understand all instructions in the user guide.●Observe all warnings and instructions marked on the product.●Use only a grounded electrical outlet when connecting the product to a power source. If you do notknow whether the outlet is grounded, check with a qualified electrician.●Do not touch the contacts on any of the sockets on the product. Replace damaged cordsimmediately.●Unplug this product from wall outlets before cleaning.●Do not install or use this product near water or when you are wet.●Install the product securely on a stable surface.●Install the product in a protected location where no one can step on or trip over the power cord.Printer viewsIdentify certain parts of the printer and the control panel.Printer front viewLocate features on the front of the printer.2Chapter 1 Printer overviewPrinter front view3Printer back viewLocate features on the back of the printer.Interface portsLocate the interface ports on the printer formatter. 
4Chapter 1 Printer overviewControl-panel viewThe control panel provides access to the printer features and indicates the current status of the printer.NOTE:Tilt the control panel for easier viewing.The Home screen provides access to the printer features and indicates the current status of the printer.screens.NOTE:The features that appear on the Home screen can vary, depending on the printerconfiguration.Control-panel view5Figure 1-1Control-panel view?i 12:42 PM6Chapter 1 Printer overviewHow to use the touchscreen control panelPerform the following actions to use the printer touchscreen control panel.T ouchT ouch an item on the screen to select that item or open that menu. Also, when scrolling T ouch the Settings icon to open the Settings app.How to use the touchscreen control panel 7SwipeT ouch the screen and then move your finger horizontally to scroll the screen sideways.Swipe until the Settings app displays.Printer specificationsDetermine the specifications for your printer model.IMPORTANT:The following specifications are correct at the time of publication, but they are subject to change. For current information, see /support/colorljM776MFP .T echnical specificationsReview the printer technical specifications.Product numbers for each model ●M776dn - #T3U55A ●Flow M776z - #3WT91A ●Flow M776zs - #T3U56APaper handling specificationsPaper handling features Tray 1 (100-sheet capacity)Included Included Included Tray 2 (550-sheet capacity)IncludedIncludedIncluded8Chapter 1 Printer overview550-sheet paper trayOptional Included Not included NOTE:The M776dn models accept one optional550-sheet tray.Optional Included Included2 x 550-sheet paper tray and standNOTE:The M776dn models accept one optional550-sheet tray that may be installed on top of thestand.Optional Not included Not included2,700-sheet high-capacity input (HCI) paper trayand standNOTE:The M776dn models accept one optional550-sheet tray that may be installed on top of theoptional printer stand.Printer standOptional Not included Not included NOTE:The M776dn models accept one optional550-sheet tray that may be installed on top of theoptional printer stand.Inner finisher accessory Not included Not included Included Automatic duplex printing Included IncludedIncludedIncluded Included Included10/100/1000 Ethernet LAN connection with IPv4and IPv6Hi-Speed USB 2.0Included Included IncludedIncluded Included IncludedEasy-access USB port for printing from a USBflash drive or upgrading the firmwareIncluded Included Included Hardware Integration Pocket for connectingaccessory and third-party devicesHP Internal USB Ports Optional Optional OptionalOptional Optional OptionalHP Jetdirect 2900nw Print Server accessory forWi-Fi connectivity and an additional Ethernet portOptional IncludedIncludedHP Jetdirect 3100w accessory for Wi-Fi, BLE, NFC,and proximity badge readingPrints 45 pages per minute (ppm) on Letter-sizepaper and 46 ppm on A4-size paperEasy-access USB printing for printing from a USBIncluded Included Includedflash driveT echnical specifications9Included Included Included Store jobs in the printer memory to print later orprint privatelyScans 100 pages per minute (ppm) on A4 andIncluded Included Included letter-size paper one-sidedIncluded Included Included 200-page document feeder with dual-headscanning for single-pass duplex copying andscanningNot included Included Included HP EveryPage T echnologies including ultrasonicmulti-feed detectionNot included Included Included Embedded optical character recognition (OCR)provides the ability to 
convert printed pages intotext that can be edited or searched using acomputerIncluded Included Included SMART Label feature provides paper-edgedetection for automatic page croppingIncluded Included Included Automatic page orientation for pages that haveat least 100 characters of textIncluded Automatic tone adjustment sets contrast,Included Includedbrightness, and background removal for eachpageIncluded Included Includedfolders on a networkIncludedSend documents to SharePoint®Included IncludedIncluded Included Included NOTE:Memory reported on the configurationpage will change from 2.5 GB to 3 GB with theoptional 1 GB SODIMM installed.Mass storage: 500 GB hard disk drive Included Included IncludedSecurity: HP Trusted Platform Module (TPM)Included Included IncludedT ouchscreen control panel Included Included IncludedRetractable keyboard Not included Included Included 10Chapter 1 Printer overviewFax Optional Included IncludedSupported operating systemsUse the following information to ensure printer compatibility with your computer operating system.Linux: For information and print drivers for Linux, go to /go/linuxprinting.UNIX: For information and print drivers for UNIX®, go to /go/unixmodelscripts.The following information applies to the printer-specific Windows HP PCL 6 print drivers, HP print driversfor macOS, and to the software installer.Windows: Download HP Easy Start from /LaserJet to install the HP print driver. Or, go tothe printer-support website for this printer: /support/colorljM776MFP to download the printdriver or the software installer to install the HP print driver.macOS: Mac computers are supported with this printer. Download HP Easy Start either from /LaserJet or from the Printer Support page, and then use HP Easy Start to install the HP print driver.1.Go to /LaserJet.2.Follow the steps provided to download the printer software.Windows 7, 32-bit and 64-bit The “HP PCL 6” printer-specific print driver is installed for this operating system aspart of the software installation.Windows 8.1, 32-bit and 64-bit The “HP PCL-6” V4 printer-specific print driver is installed for this operating systemas part of the software installation.Windows 10, 32-bit and 64-bit The “HP PCL-6” V4 printer-specific print driver is installed for this operating systemas part of the software installation.Windows Server 2008 R2, SP 1, 64-bit The PCL 6 printer-specific print driver is available for download from the printer-support website. Download the driver, and then use the Microsoft Add Printer tool toinstall it.Windows Server 2012, 64-bit The PCL 6 printer-specific print driver is available for download from the printer-support website. Download the driver, and then use the Microsoft Add Printer tool toinstall it.Windows Server 2012 R2, 64-bit The PCL 6 printer-specific print driver is available for download from the printer-support website. Download the driver, and then use the Microsoft Add Printer tool toinstall it.Windows Server 2016, 64-bit The PCL 6 printer-specific print driver is available for download from the printer-support website. Download the driver, and then use the Microsoft Add Printer tool toinstall it.Windows Server 2019, 64-bit The PCL 6 printer-specific print driver is available for download from the printer-support website. 
Download the driver, and then use the Microsoft Add Printer tool toinstall it.Supported operating systems11macOS 10.13 High Sierra, macOS 10.14 MojaveDownload HP Easy Start from /LaserJet , and then use it to install the print driver.NOTE:Supported operating systems can change.NOTE:For a current list of supported operating systems and HP’s all-inclusive help for the printer, go to /support/colorljM776MFP .NOTE:For details on client and server operating systems and for HP UPD driver support for this printer, go to /go/upd . Under Additional information , click Specifications .●Internet connection●Dedicated USB 1.1 or 2.0 connection or a network connection● 2 GB of available hard-disk space ●1 GB RAM (32-bit) or2 GB RAM (64-bit)●Internet connection●Dedicated USB 1.1 or 2.0 connection or a network connection●1.5 GB of available hard-disk spaceNOTE:The Windows software installer installs the HP Smart Device Agent Base service. The file size is less than 100 kb. Its only function is to check for printers connected via USB hourly. No data is collected. If a USB printer is found, it then tries to locate a JetAdvantage Management Connector (JAMc) instance on the network. If a JAMc is found, the HP Smart Device Agent Base is securelyupgraded to a full Smart Device Agent from JAMc, which will then allow printed pages to be accounted for in a Managed Print Services (MPS) account. The driver-only web packs downloaded from for the printer and installed through the Add Printer wizard do not install this service.T o uninstall the service, open the Control Panel , select Programs or Programs and Features , and then select Add/Remove Programs or Uninstall a Programto remove the service. The file name isHPSmartDeviceAgentBase.Mobile printing solutionsHP offers multiple mobile printing solutions to enable easy printing to an HP printer from a laptop, tablet, smartphone, or other mobile device.T o see the full list and to determine the best choice, go to /go/MobilePrinting .NOTE:Update the printer firmware to ensure all mobile printing capabilities are supported.●Wi-Fi Direct (wireless models only, with HP Jetdirect 3100w BLE/NFC/Wireless accessory installed)●HP ePrint via email (Requires HP Web Services to be enabled and the printer to be registered with HP Connected)●HP Smart app ●Google Cloud Print12Chapter 1 Printer overview。
NETGEAR N150 WiFi Range Extender WN1000RP Data Sheet
Boost your existing WiFi OverviewThe NETGEAR WiFi Range Extender boosts your existing WiFi and delivers greater wireless speed where the signal is intermittent or weak, improve range & connectivity you desire for iPads ®, smartphones, laptops & more.. The convenient wall-plug design enables placement wherever there’s a power outlet.• Works with any WiFi router• Extend WiFi for mobile devices • Reduce mobile data plan charges• Convenient wall-plug designN150 WiFi Range Extender Data SheetWN1000RPBoost WiFiBoost existing WiFi coverage throughout your home.Ideal for Mobile DevicesImprove WiFi strength for smartphones,tablets, laptops & more.CompatibleWorks with any existing WiFi router or gateway.It’s EasyEasy installation using any web browser;no CD required.Eliminate Dead ZonesEliminate WiFi dead zones and enjoy a more reliable WiFi connection.Reduce BillsReduce 3G/4G mobile data plan charges by connecting to WiFi.Ideal for Tablets &SmartphonesExisting WiFiSometimes your router does not provide the WiFi coverage you needExtenderBoosts the range of your existing WiFi & creates a stronger signal in hard-to-reach areasNetwork ConnectionsRouter connection status Mobile device connection status Secure connection (WPS)Connect to powerPower on/offN150RangeExtender Data Sheet WiFiWN1000RP WiFi Analytics AppHow strong is your WiFi signal? Use the NETGEAR WiFi Analytics app & get advanced analytics to optimize your existing or newly extended WiFi network. Check your network status, WiFi signal strength, identify crowded WiFi channels & much more!Here’s what you can do with the WiFi Analytics App!• Get a network status overview• Check WiFi signal strength• Measure WiFi channel interference• Keep track of WiFi strength by location• And more...Scan to install appThis product is packaged with a limited warranty, the acceptance of which is a condition of sale. Warranty valid only when purchased from a NETGEAR authorized reseller.* 24/7 basic technical support provided for 90 days from date of purchase when purchased from a NETGEAR authorized reseller.1Works with devices supporting Wi-Fi Protected Setup™ (WPS).Data throughput, signal range, and wireless coverage per sq. ft. are not guaranteed and may vary due to differences in operating environments of wireless networks, including without limitation building materials and wireless interference. Specifications are subject to change without notice.NETGEAR, and the NETGEAR Logo are trademarks of NETGEAR, Inc. Mac and the Mac logo are trademarks of Apple Inc. Any other trademarks herein are for reference purposes only. ©2015 NETGEAR, Inc.NETGEAR, Inc. 350 E. Plumeria Drive, San Jose, CA 95134-1911 USA, /supportD-WN1000RP-3N150 WiFi Range Extender Data SheetWN1000RPPackage Contents• N150 WiFi Range Extender (WN1000RP)• Installation guidePhysical Specifications• Dimensions: 2.64 x 2.17 x 1.34 in (67.05 x 55.11 x 34.03 mm)• Weight: 0.22 lb (.099 kg)Warranty• Warranty localized to country of saleSecurity• WiFi Protected Access® (WPA/WPA2-PSK)and WEPStandards• IEEE® 802.11 b/g 2.4GHz with some 11nfeaturesSupport• 24/7 basic technical support free for 90 daysEase of Use• CD-less setup—great for mobile devices • Push ‘N’ Connect using Wi-Fi Protected Setup® (WPS)1System Requirements• 2.4GHz 802.11 b/g/n wireless router or gateway • Microsoft® Internet Explorer® 5.0, Firefox® 2.0 or Safari® 1.4 or Google Chrome 11.0 browsersor higher。
The English abbreviation for load balancing

Title: Load Balance (LB): The Cornerstone of Efficient Resource Management

Introduction
In the realm of computing and networking, one term that frequently crops up is "Load Balance", or LB for short. This concept plays a pivotal role in optimizing resource utilization, enhancing system performance, and ensuring fault tolerance. In this article, we will delve into the intricacies of load balancing, its significance, and how it contributes to efficient resource management.

What is Load Balancing?
Load balancing refers to the methodical distribution of network traffic across multiple servers to optimize resource utilization, enhance responsiveness, and avoid overloading any single server. It is an essential component of fault-tolerant systems, as it ensures that no single point of failure exists.

The Importance of Load Balancing
The importance of load balancing can be summed up in three main points:
1. Improved Performance: By distributing the workload evenly across multiple servers, each server operates within its optimal capacity, leading to better overall system performance.
2. Enhanced Availability: If one server fails or needs maintenance, the load balancer redirects traffic to other available servers, thereby ensuring continuous service availability.
3. Scalability: As the demand for services increases, new servers can be added to the system without disrupting existing services. This allows for easy expansion and scalability of the system.

How does Load Balancing Work?
Load balancing typically involves the use of a software or hardware device called a load balancer. The load balancer acts as a traffic cop, directing client requests to the various backend servers based on predefined algorithms and policies. These algorithms may consider factors such as server availability, server load, geographic location, or specific application requirements.

Types of Load Balancing Algorithms
There are several types of load balancing algorithms, including:
1. Round Robin: Each incoming request is assigned to the next available server in a rotation.
2. Least Connections: New requests are sent to the server with the fewest active connections.
3. IP Hash: A hash function applied to the client's IP address determines which server handles the request.
4. Weighted Algorithms: Servers are assigned weights based on their processing power or capacity, and requests are distributed accordingly.

Conclusion
Load balancing (LB) is a crucial aspect of modern computing and networking infrastructure. Its ability to distribute workloads efficiently, ensure high availability, and facilitate scalability makes it an indispensable tool for managing resources effectively. Understanding the concepts and mechanisms behind load balancing can help organizations make informed decisions about their IT infrastructure and improve the overall user experience.
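To make the algorithm descriptions above concrete, the sketch below implements round robin, least connections, and IP hash in a few lines of Python; the LoadBalancer class and the server names are illustrative assumptions, not part of the article.

```python
import hashlib
from itertools import cycle

class LoadBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self._rotation = cycle(self.servers)
        self.active = {s: 0 for s in self.servers}   # open connections per server

    def round_robin(self):
        # Each incoming request goes to the next server in the rotation.
        return next(self._rotation)

    def least_connections(self):
        # New requests go to the server with the fewest active connections.
        return min(self.servers, key=lambda s: self.active[s])

    def ip_hash(self, client_ip):
        # Hashing the client's IP pins that client to a consistent backend.
        digest = hashlib.sha256(client_ip.encode()).hexdigest()
        return self.servers[int(digest, 16) % len(self.servers)]

lb = LoadBalancer(["app-1", "app-2", "app-3"])
print([lb.round_robin() for _ in range(4)])   # app-1, app-2, app-3, app-1
print(lb.ip_hash("203.0.113.7"))              # same backend every time
```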
3GPP TS 36.331 V13.2.0 (2016-06)
3GPP TS 36.331 V13.2.0 (2016-06)Technical Specification3rd Generation Partnership Project;Technical Specification Group Radio Access Network;Evolved Universal Terrestrial Radio Access (E-UTRA);Radio Resource Control (RRC);Protocol specification(Release 13)The present document has been developed within the 3rd Generation Partnership Project (3GPP TM) and may be further elaborated for the purposes of 3GPP. The present document has not been subject to any approval process by the 3GPP Organizational Partners and shall not be implemented.This Specification is provided for future development work within 3GPP only. The Organizational Partners accept no liability for any use of this Specification. Specifications and reports for implementation of the 3GPP TM system should be obtained via the 3GPP Organizational Partners' Publications Offices.KeywordsUMTS, radio3GPPPostal address3GPP support office address650 Route des Lucioles - Sophia AntipolisValbonne - FRANCETel.: +33 4 92 94 42 00 Fax: +33 4 93 65 47 16InternetCopyright NotificationNo part may be reproduced except as authorized by written permission.The copyright and the foregoing restriction extend to reproduction in all media.© 2016, 3GPP Organizational Partners (ARIB, ATIS, CCSA, ETSI, TSDSI, TTA, TTC).All rights reserved.UMTS™ is a Trade Mark of ETSI registered for the benefit of its members3GPP™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational PartnersLTE™ is a Trade Mark of ETSI currently being registered for the benefit of its Members and of the 3GPP Organizational Partners GSM® and the GSM logo are registered and owned by the GSM AssociationBluetooth® is a Trade Mark of the Bluetooth SIG registered for the benefit of its membersContentsForeword (18)1Scope (19)2References (19)3Definitions, symbols and abbreviations (22)3.1Definitions (22)3.2Abbreviations (24)4General (27)4.1Introduction (27)4.2Architecture (28)4.2.1UE states and state transitions including inter RAT (28)4.2.2Signalling radio bearers (29)4.3Services (30)4.3.1Services provided to upper layers (30)4.3.2Services expected from lower layers (30)4.4Functions (30)5Procedures (32)5.1General (32)5.1.1Introduction (32)5.1.2General requirements (32)5.2System information (33)5.2.1Introduction (33)5.2.1.1General (33)5.2.1.2Scheduling (34)5.2.1.2a Scheduling for NB-IoT (34)5.2.1.3System information validity and notification of changes (35)5.2.1.4Indication of ETWS notification (36)5.2.1.5Indication of CMAS notification (37)5.2.1.6Notification of EAB parameters change (37)5.2.1.7Access Barring parameters change in NB-IoT (37)5.2.2System information acquisition (38)5.2.2.1General (38)5.2.2.2Initiation (38)5.2.2.3System information required by the UE (38)5.2.2.4System information acquisition by the UE (39)5.2.2.5Essential system information missing (42)5.2.2.6Actions upon reception of the MasterInformationBlock message (42)5.2.2.7Actions upon reception of the SystemInformationBlockType1 message (42)5.2.2.8Actions upon reception of SystemInformation messages (44)5.2.2.9Actions upon reception of SystemInformationBlockType2 (44)5.2.2.10Actions upon reception of SystemInformationBlockType3 (45)5.2.2.11Actions upon reception of SystemInformationBlockType4 (45)5.2.2.12Actions upon reception of SystemInformationBlockType5 (45)5.2.2.13Actions upon reception of SystemInformationBlockType6 (45)5.2.2.14Actions upon reception of SystemInformationBlockType7 (45)5.2.2.15Actions upon reception of SystemInformationBlockType8 (45)5.2.2.16Actions upon 
reception of SystemInformationBlockType9 (46)5.2.2.17Actions upon reception of SystemInformationBlockType10 (46)5.2.2.18Actions upon reception of SystemInformationBlockType11 (46)5.2.2.19Actions upon reception of SystemInformationBlockType12 (47)5.2.2.20Actions upon reception of SystemInformationBlockType13 (48)5.2.2.21Actions upon reception of SystemInformationBlockType14 (48)5.2.2.22Actions upon reception of SystemInformationBlockType15 (48)5.2.2.23Actions upon reception of SystemInformationBlockType16 (48)5.2.2.24Actions upon reception of SystemInformationBlockType17 (48)5.2.2.25Actions upon reception of SystemInformationBlockType18 (48)5.2.2.26Actions upon reception of SystemInformationBlockType19 (49)5.2.3Acquisition of an SI message (49)5.2.3a Acquisition of an SI message by BL UE or UE in CE or a NB-IoT UE (50)5.3Connection control (50)5.3.1Introduction (50)5.3.1.1RRC connection control (50)5.3.1.2Security (52)5.3.1.2a RN security (53)5.3.1.3Connected mode mobility (53)5.3.1.4Connection control in NB-IoT (54)5.3.2Paging (55)5.3.2.1General (55)5.3.2.2Initiation (55)5.3.2.3Reception of the Paging message by the UE (55)5.3.3RRC connection establishment (56)5.3.3.1General (56)5.3.3.1a Conditions for establishing RRC Connection for sidelink communication/ discovery (58)5.3.3.2Initiation (59)5.3.3.3Actions related to transmission of RRCConnectionRequest message (63)5.3.3.3a Actions related to transmission of RRCConnectionResumeRequest message (64)5.3.3.4Reception of the RRCConnectionSetup by the UE (64)5.3.3.4a Reception of the RRCConnectionResume by the UE (66)5.3.3.5Cell re-selection while T300, T302, T303, T305, T306, or T308 is running (68)5.3.3.6T300 expiry (68)5.3.3.7T302, T303, T305, T306, or T308 expiry or stop (69)5.3.3.8Reception of the RRCConnectionReject by the UE (70)5.3.3.9Abortion of RRC connection establishment (71)5.3.3.10Handling of SSAC related parameters (71)5.3.3.11Access barring check (72)5.3.3.12EAB check (73)5.3.3.13Access barring check for ACDC (73)5.3.3.14Access Barring check for NB-IoT (74)5.3.4Initial security activation (75)5.3.4.1General (75)5.3.4.2Initiation (76)5.3.4.3Reception of the SecurityModeCommand by the UE (76)5.3.5RRC connection reconfiguration (77)5.3.5.1General (77)5.3.5.2Initiation (77)5.3.5.3Reception of an RRCConnectionReconfiguration not including the mobilityControlInfo by theUE (77)5.3.5.4Reception of an RRCConnectionReconfiguration including the mobilityControlInfo by the UE(handover) (79)5.3.5.5Reconfiguration failure (83)5.3.5.6T304 expiry (handover failure) (83)5.3.5.7Void (84)5.3.5.7a T307 expiry (SCG change failure) (84)5.3.5.8Radio Configuration involving full configuration option (84)5.3.6Counter check (86)5.3.6.1General (86)5.3.6.2Initiation (86)5.3.6.3Reception of the CounterCheck message by the UE (86)5.3.7RRC connection re-establishment (87)5.3.7.1General (87)5.3.7.2Initiation (87)5.3.7.3Actions following cell selection while T311 is running (88)5.3.7.4Actions related to transmission of RRCConnectionReestablishmentRequest message (89)5.3.7.5Reception of the RRCConnectionReestablishment by the UE (89)5.3.7.6T311 expiry (91)5.3.7.7T301 expiry or selected cell no longer suitable (91)5.3.7.8Reception of RRCConnectionReestablishmentReject by the UE (91)5.3.8RRC connection release (92)5.3.8.1General (92)5.3.8.2Initiation (92)5.3.8.3Reception of the RRCConnectionRelease by the UE (92)5.3.8.4T320 expiry (93)5.3.9RRC connection release requested by upper layers (93)5.3.9.1General (93)5.3.9.2Initiation (93)5.3.10Radio resource 
configuration (93)5.3.10.0General (93)5.3.10.1SRB addition/ modification (94)5.3.10.2DRB release (95)5.3.10.3DRB addition/ modification (95)5.3.10.3a1DC specific DRB addition or reconfiguration (96)5.3.10.3a2LWA specific DRB addition or reconfiguration (98)5.3.10.3a3LWIP specific DRB addition or reconfiguration (98)5.3.10.3a SCell release (99)5.3.10.3b SCell addition/ modification (99)5.3.10.3c PSCell addition or modification (99)5.3.10.4MAC main reconfiguration (99)5.3.10.5Semi-persistent scheduling reconfiguration (100)5.3.10.6Physical channel reconfiguration (100)5.3.10.7Radio Link Failure Timers and Constants reconfiguration (101)5.3.10.8Time domain measurement resource restriction for serving cell (101)5.3.10.9Other configuration (102)5.3.10.10SCG reconfiguration (103)5.3.10.11SCG dedicated resource configuration (104)5.3.10.12Reconfiguration SCG or split DRB by drb-ToAddModList (105)5.3.10.13Neighbour cell information reconfiguration (105)5.3.10.14Void (105)5.3.10.15Sidelink dedicated configuration (105)5.3.10.16T370 expiry (106)5.3.11Radio link failure related actions (107)5.3.11.1Detection of physical layer problems in RRC_CONNECTED (107)5.3.11.2Recovery of physical layer problems (107)5.3.11.3Detection of radio link failure (107)5.3.12UE actions upon leaving RRC_CONNECTED (109)5.3.13UE actions upon PUCCH/ SRS release request (110)5.3.14Proximity indication (110)5.3.14.1General (110)5.3.14.2Initiation (111)5.3.14.3Actions related to transmission of ProximityIndication message (111)5.3.15Void (111)5.4Inter-RAT mobility (111)5.4.1Introduction (111)5.4.2Handover to E-UTRA (112)5.4.2.1General (112)5.4.2.2Initiation (112)5.4.2.3Reception of the RRCConnectionReconfiguration by the UE (112)5.4.2.4Reconfiguration failure (114)5.4.2.5T304 expiry (handover to E-UTRA failure) (114)5.4.3Mobility from E-UTRA (114)5.4.3.1General (114)5.4.3.2Initiation (115)5.4.3.3Reception of the MobilityFromEUTRACommand by the UE (115)5.4.3.4Successful completion of the mobility from E-UTRA (116)5.4.3.5Mobility from E-UTRA failure (117)5.4.4Handover from E-UTRA preparation request (CDMA2000) (117)5.4.4.1General (117)5.4.4.2Initiation (118)5.4.4.3Reception of the HandoverFromEUTRAPreparationRequest by the UE (118)5.4.5UL handover preparation transfer (CDMA2000) (118)5.4.5.1General (118)5.4.5.2Initiation (118)5.4.5.3Actions related to transmission of the ULHandoverPreparationTransfer message (119)5.4.5.4Failure to deliver the ULHandoverPreparationTransfer message (119)5.4.6Inter-RAT cell change order to E-UTRAN (119)5.4.6.1General (119)5.4.6.2Initiation (119)5.4.6.3UE fails to complete an inter-RAT cell change order (119)5.5Measurements (120)5.5.1Introduction (120)5.5.2Measurement configuration (121)5.5.2.1General (121)5.5.2.2Measurement identity removal (122)5.5.2.2a Measurement identity autonomous removal (122)5.5.2.3Measurement identity addition/ modification (123)5.5.2.4Measurement object removal (124)5.5.2.5Measurement object addition/ modification (124)5.5.2.6Reporting configuration removal (126)5.5.2.7Reporting configuration addition/ modification (127)5.5.2.8Quantity configuration (127)5.5.2.9Measurement gap configuration (127)5.5.2.10Discovery signals measurement timing configuration (128)5.5.2.11RSSI measurement timing configuration (128)5.5.3Performing measurements (128)5.5.3.1General (128)5.5.3.2Layer 3 filtering (131)5.5.4Measurement report triggering (131)5.5.4.1General (131)5.5.4.2Event A1 (Serving becomes better than threshold) (135)5.5.4.3Event A2 (Serving becomes worse than threshold) 
(136)5.5.4.4Event A3 (Neighbour becomes offset better than PCell/ PSCell) (136)5.5.4.5Event A4 (Neighbour becomes better than threshold) (137)5.5.4.6Event A5 (PCell/ PSCell becomes worse than threshold1 and neighbour becomes better thanthreshold2) (138)5.5.4.6a Event A6 (Neighbour becomes offset better than SCell) (139)5.5.4.7Event B1 (Inter RAT neighbour becomes better than threshold) (139)5.5.4.8Event B2 (PCell becomes worse than threshold1 and inter RAT neighbour becomes better thanthreshold2) (140)5.5.4.9Event C1 (CSI-RS resource becomes better than threshold) (141)5.5.4.10Event C2 (CSI-RS resource becomes offset better than reference CSI-RS resource) (141)5.5.4.11Event W1 (WLAN becomes better than a threshold) (142)5.5.4.12Event W2 (All WLAN inside WLAN mobility set becomes worse than threshold1 and a WLANoutside WLAN mobility set becomes better than threshold2) (142)5.5.4.13Event W3 (All WLAN inside WLAN mobility set becomes worse than a threshold) (143)5.5.5Measurement reporting (144)5.5.6Measurement related actions (148)5.5.6.1Actions upon handover and re-establishment (148)5.5.6.2Speed dependant scaling of measurement related parameters (149)5.5.7Inter-frequency RSTD measurement indication (149)5.5.7.1General (149)5.5.7.2Initiation (150)5.5.7.3Actions related to transmission of InterFreqRSTDMeasurementIndication message (150)5.6Other (150)5.6.0General (150)5.6.1DL information transfer (151)5.6.1.1General (151)5.6.1.2Initiation (151)5.6.1.3Reception of the DLInformationTransfer by the UE (151)5.6.2UL information transfer (151)5.6.2.1General (151)5.6.2.2Initiation (151)5.6.2.3Actions related to transmission of ULInformationTransfer message (152)5.6.2.4Failure to deliver ULInformationTransfer message (152)5.6.3UE capability transfer (152)5.6.3.1General (152)5.6.3.2Initiation (153)5.6.3.3Reception of the UECapabilityEnquiry by the UE (153)5.6.4CSFB to 1x Parameter transfer (157)5.6.4.1General (157)5.6.4.2Initiation (157)5.6.4.3Actions related to transmission of CSFBParametersRequestCDMA2000 message (157)5.6.4.4Reception of the CSFBParametersResponseCDMA2000 message (157)5.6.5UE Information (158)5.6.5.1General (158)5.6.5.2Initiation (158)5.6.5.3Reception of the UEInformationRequest message (158)5.6.6 Logged Measurement Configuration (159)5.6.6.1General (159)5.6.6.2Initiation (160)5.6.6.3Reception of the LoggedMeasurementConfiguration by the UE (160)5.6.6.4T330 expiry (160)5.6.7 Release of Logged Measurement Configuration (160)5.6.7.1General (160)5.6.7.2Initiation (160)5.6.8 Measurements logging (161)5.6.8.1General (161)5.6.8.2Initiation (161)5.6.9In-device coexistence indication (163)5.6.9.1General (163)5.6.9.2Initiation (164)5.6.9.3Actions related to transmission of InDeviceCoexIndication message (164)5.6.10UE Assistance Information (165)5.6.10.1General (165)5.6.10.2Initiation (166)5.6.10.3Actions related to transmission of UEAssistanceInformation message (166)5.6.11 Mobility history information (166)5.6.11.1General (166)5.6.11.2Initiation (166)5.6.12RAN-assisted WLAN interworking (167)5.6.12.1General (167)5.6.12.2Dedicated WLAN offload configuration (167)5.6.12.3WLAN offload RAN evaluation (167)5.6.12.4T350 expiry or stop (167)5.6.12.5Cell selection/ re-selection while T350 is running (168)5.6.13SCG failure information (168)5.6.13.1General (168)5.6.13.2Initiation (168)5.6.13.3Actions related to transmission of SCGFailureInformation message (168)5.6.14LTE-WLAN Aggregation (169)5.6.14.1Introduction (169)5.6.14.2Reception of LWA configuration (169)5.6.14.3Release of LWA configuration 
(170)5.6.15WLAN connection management (170)5.6.15.1Introduction (170)5.6.15.2WLAN connection status reporting (170)5.6.15.2.1General (170)5.6.15.2.2Initiation (171)5.6.15.2.3Actions related to transmission of WLANConnectionStatusReport message (171)5.6.15.3T351 Expiry (WLAN connection attempt timeout) (171)5.6.15.4WLAN status monitoring (171)5.6.16RAN controlled LTE-WLAN interworking (172)5.6.16.1General (172)5.6.16.2WLAN traffic steering command (172)5.6.17LTE-WLAN aggregation with IPsec tunnel (173)5.6.17.1General (173)5.7Generic error handling (174)5.7.1General (174)5.7.2ASN.1 violation or encoding error (174)5.7.3Field set to a not comprehended value (174)5.7.4Mandatory field missing (174)5.7.5Not comprehended field (176)5.8MBMS (176)5.8.1Introduction (176)5.8.1.1General (176)5.8.1.2Scheduling (176)5.8.1.3MCCH information validity and notification of changes (176)5.8.2MCCH information acquisition (178)5.8.2.1General (178)5.8.2.2Initiation (178)5.8.2.3MCCH information acquisition by the UE (178)5.8.2.4Actions upon reception of the MBSFNAreaConfiguration message (178)5.8.2.5Actions upon reception of the MBMSCountingRequest message (179)5.8.3MBMS PTM radio bearer configuration (179)5.8.3.1General (179)5.8.3.2Initiation (179)5.8.3.3MRB establishment (179)5.8.3.4MRB release (179)5.8.4MBMS Counting Procedure (179)5.8.4.1General (179)5.8.4.2Initiation (180)5.8.4.3Reception of the MBMSCountingRequest message by the UE (180)5.8.5MBMS interest indication (181)5.8.5.1General (181)5.8.5.2Initiation (181)5.8.5.3Determine MBMS frequencies of interest (182)5.8.5.4Actions related to transmission of MBMSInterestIndication message (183)5.8a SC-PTM (183)5.8a.1Introduction (183)5.8a.1.1General (183)5.8a.1.2SC-MCCH scheduling (183)5.8a.1.3SC-MCCH information validity and notification of changes (183)5.8a.1.4Procedures (184)5.8a.2SC-MCCH information acquisition (184)5.8a.2.1General (184)5.8a.2.2Initiation (184)5.8a.2.3SC-MCCH information acquisition by the UE (184)5.8a.2.4Actions upon reception of the SCPTMConfiguration message (185)5.8a.3SC-PTM radio bearer configuration (185)5.8a.3.1General (185)5.8a.3.2Initiation (185)5.8a.3.3SC-MRB establishment (185)5.8a.3.4SC-MRB release (185)5.9RN procedures (186)5.9.1RN reconfiguration (186)5.9.1.1General (186)5.9.1.2Initiation (186)5.9.1.3Reception of the RNReconfiguration by the RN (186)5.10Sidelink (186)5.10.1Introduction (186)5.10.1a Conditions for sidelink communication operation (187)5.10.2Sidelink UE information (188)5.10.2.1General (188)5.10.2.2Initiation (189)5.10.2.3Actions related to transmission of SidelinkUEInformation message (193)5.10.3Sidelink communication monitoring (195)5.10.6Sidelink discovery announcement (198)5.10.6a Sidelink discovery announcement pool selection (201)5.10.6b Sidelink discovery announcement reference carrier selection (201)5.10.7Sidelink synchronisation information transmission (202)5.10.7.1General (202)5.10.7.2Initiation (203)5.10.7.3Transmission of SLSS (204)5.10.7.4Transmission of MasterInformationBlock-SL message (205)5.10.7.5Void (206)5.10.8Sidelink synchronisation reference (206)5.10.8.1General (206)5.10.8.2Selection and reselection of synchronisation reference UE (SyncRef UE) (206)5.10.9Sidelink common control information (207)5.10.9.1General (207)5.10.9.2Actions related to reception of MasterInformationBlock-SL message (207)5.10.10Sidelink relay UE operation (207)5.10.10.1General (207)5.10.10.2AS-conditions for relay related sidelink communication transmission by sidelink relay UE (207)5.10.10.3AS-conditions for relay 
abstractvalueadaptingcache get and lookup - reply

Topic: AbstractValueAdaptingCache's get and lookup methods in detail

Abstract: AbstractValueAdaptingCache is a mechanism commonly used in caching systems; its get and lookup methods implement the retrieval and lookup of cached data.
This article describes what AbstractValueAdaptingCache is, what it is used for, and how to use its get and lookup methods.

Part 1: Concepts. AbstractValueAdaptingCache is a mechanism frequently used in caching systems.
It adapts the cache to an external data source so that data access through the caching layer becomes more efficient.
Its main job is to perform this adaptation (value conversion) whenever cached data is fetched or looked up.

Part 2: Fetching cached data with get. In AbstractValueAdaptingCache, get is the primary way to retrieve cached data.
Whenever data needs to be read from the cache, the get method is used.
Using get involves the following steps (a minimal sketch follows the notes below):
1. Call get with the key of the data you need, for example: cache.get(key).
2. The system first searches the cache for the corresponding entry.
3. If a matching entry is found, it is returned directly.
4. If no matching entry is found, the data is fetched from the external data source.
5. If the external data source returns the data, it is put into the cache and then returned.
Some notes on using get:
1. The value returned by get is usually of type Object, so it has to be cast to the expected type.
2. When using get, pay attention to cache expiration times and eviction policies so that the data stays accurate and fresh.
3. Under high concurrency, multiple threads may fetch the same data at the same time, so thread safety has to be considered.
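The following is a minimal, self-contained sketch of the get flow described above. It is not the Spring implementation; the ReadThroughCache class and the externalSource function are hypothetical names used only to illustrate steps 1-5.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    public class ReadThroughCache<K, V> {
        private final Map<K, V> store = new ConcurrentHashMap<>();
        private final Function<K, V> externalSource;   // e.g. a database or remote service

        public ReadThroughCache(Function<K, V> externalSource) {
            this.externalSource = externalSource;
        }

        public V get(K key) {
            V value = store.get(key);                  // steps 2-3: look in the cache first
            if (value == null) {
                value = externalSource.apply(key);     // step 4: miss, load from the external source
                if (value != null) {
                    store.put(key, value);             // step 5: populate the cache, then return
                }
            }
            return value;
        }
    }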
Part 3: Looking up cached data with lookup. Besides get, AbstractValueAdaptingCache also provides a lookup method for finding cached data.
Microsoft trainer internal training materials (English)
• BAD EXAMPLE: "I welcome this kind of examination, because people have to know whether their president is a crook. Well, I'm not a crook." -- Richard M. Nixon
– Especially with large audience.
• Don’t fidget or put in pocket. watch.
Visual Skills – Face
• Show emotion! • Most of the time:
• Don’t use ambiguous words in speech.
Vocal Skills
• Project & resonate your voice. • No “UM〞s and “ER〞s. (Pause
instead). • Silence is a tool (To draw attention).
The Central Message (it)
• People will not remember everything. • Have ONE clear walk-away message. • What do you want people to remember in 3
months? • The answer to the question:
– Eyes – Body – Hands – Face
Philips 439P9H 32:10 SuperWide curved display user manual
Philips Brilliance32:10 SuperWide curved LCD displayP Line43 (43.4" / 110.2 cm diag.)3840 x 1200439P9HWide open possibilitieswith two high-performance monitors in onePhilips 43” curved 32:10 SuperWide display is like two full-size high-performancemonitors in-one. Productivity enhancing features like USB-C and pop-up webcam with Windows Hello deliver performance and convenience you expect.Expand your horizons•32:10 SuperWide designed to replace multiscreen setups •MultiView enables simultaneous dual connection and view •1800r curved display for a more immersive experience •Effortlessly smooth action with Adaptive-Sync technology Optimal Connectivity•Built in USB-C docking station•Built-in KVM switch to easily switch between sources Designed for the way you work•Securely sign in with pop-up webcam with Windows Hello™•DisplayHDR 400 for more lifelike and outstanding visuals •Less eye fatigue with Flicker-free technology •LowBlue Mode for easy on-the-eyes productivity•Tilt, swivel and height-adjust for an ideal viewing positionHighlights32:10 SuperWide32:10 SuperWide 43" screen, with 3840 x 1200 resolution, is designed to replace multiscreen setups for massive wide view. It's like having two 16:10 displays side-by-side. SuperWide monitors offer screen area of dual monitors without the complicated setup.Adaptive-Sync technologyGaming shouldn't be a choice between choppy gameplay or broken frames. Get fluid, artifact-free performance at virtually any framerate with Adaptive-Sync technology, smooth quick refresh and ultra-fast response time.MultiView technologyWith the ultra-high resolution PhilipsMultiView display you can now experience a world of connectivity. MultiView enables active dual connect and view so that you can workwith multiple devices like a PC and notebook simultaneously, for complex multi-tasking.1800r Curved displayInnovative curved display offers less image distortion, a wider field of view, reduced glare, and more comfort for eyes.Built in USB-C docking stationThis Philips display features a built-in USB type-C docking station with power delivery. Its slim, reversible USB-C connector allows for easy, one-cable docking. Simplify by connecting all your peripherals like keyboard, mouse and your RJ-45 Ethernet cable to the monitor's docking station. Simply connect yournotebook and this monitor with a single USB-C cable to watch high-resolution video and transfer super-speed data, while powering up and re-charging your notebook at the same time.MultiClient Integrated KVMWith MultiClient Integrated KVM switch, you can control two separate PCs with onemonitor-keyboard-mouse set up. A convenient button allows you to quickly switch between sources. Handy with set-ups that require dualPC computing power or sharing one large monitor to show two different PCs.Windows Hello™ pop-up webcamPhilips' innovative and secure webcam pops up when you need it and securely tucks back into the monitor when you are not using it. The webcam is also equipped with advanced sensors for Windows Hello™ facialrecognition, which conveniently logs you into your Windows devices in less than 2 seconds, 3 times faster than a password.DisplayHDR 400VESA-certified DisplayHDR 400 delivers a significant step-up from normal SDR displays. Unlike, other 'HDR compatible' screens, true DisplayHDR 400 produces astonishingbrightness, contrast and colors. With global dimming and peak brightness up-to 400 nits, images come to life with notable highlights while featuring deeper, more nuanced blacks. 
It renders a fuller palette of rich new colors, delivering a visual experience that engagesyour senses.Issue date 2023-03-23 Version: 7.0.212 NC: 8670 001 60105 EAN: 87 12581 75956 8© 2023 Koninklijke Philips N.V.All Rights reserved.Specifications are subject to change without notice. Trademarks are the property of Koninklijke Philips N.V. or their respective owners.SpecificationsPicture/Display•LCD panel type: VA LCD•Adaptive sync•Backlight type: W-LED system•Panel Size: 43.4 inch / 110.2 cm•Display Screen Coating: Anti-Glare, 2H, Haze 25%•Effective viewing area: 1052.3 (H) x 328.8 (V) mm - at a 1800R curvature*•Aspect ratio: 32:10•Maximum resolution: 3840 x 1200 @ 100 Hz*•Pixel Density: 93 PPI•Response time (typical): 4 ms (Gray to Gray)*•Brightness: 450 cd/m²•Contrast ratio (typical): 3000:1•SmartContrast: 80,000,000:1•Pixel pitch: 0.274 x 0.274 mm•Viewing angle: 178º (H) / 178º (V), @ C/R > 10•Picture enhancement: SmartImage•Display colors: Color support 1.07 billion colors •Color gamut (min.): BT. 709 Coverage: 99%*, DCI-P3 Coverage: 95%*•Color gamut (typical): NTSC 105%*, sRGB 123%*, Adobe RGB 91%*•HDR: DisplayHDR 400 certified (DP / HDMI)•Scanning Frequency: 30 - 150 kHz (H) / 48 - 100 Hz (V)•SmartUniformity: 93 ~ 105%•Delta E: < 2 (sRGB)•sRGB•Flicker-free•LowBlue Mode•EasyReadConnectivity•Signal Input: DisplayPort 1.4* x 2; HDMI 2.0b x 1; USB-C 3.2 Gen 1 x 2 (upstream, power delivery up to 90W)•HDCP: HDCP 2.2 (HDMI / DP), HDCP 1.4 (USB-C)•USB:: USB-C 3.2 Gen 1 x 2 (upstream), USB 3.2 x 4 (downstream with 1 fast charge B.C 1.2)•Audio (In/Out): Headphone out•RJ45: Ethernet LAN up to 1G*•Sync Input: Separate SyncUSB•USB-C: Reversible plug connector•Super speed: Data and Video transfer•DP: Built-in Display Port Alt mode•Power delivery: USB PD version 3.0•USB-C max. power delivery: Up to 90W* (5V/3A; 7V/3A; 9V/3A; 10V/3A;12V/3A; 15V/3A; 20V/3.75A; 20V/4.5A)Convenience•Built-in Speakers: 5 W x 2•Built-in webcam: Pop-up 2.0 megapixel FHD camera with microphone and LED indictor (for Windows 10 Hello)•MultiView: PBP (2x devices)•User convenience: SmartImage, Input, User, Menu, Power On/Off•Control software: SmartControl•OSD Languages: Brazil Portuguese, Czech, Dutch,English, Finnish, French, German, Greek,Hungarian, Italian, Japanese, Korean, Polish,Portuguese, Russian, Simplified Chinese, Spanish,Swedish, Traditional Chinese, Turkish, Ukrainian•Other convenience: Kensington lock, VESA mount(100x100mm)•Plug & Play Compatibility: DDC/CI, Mac OS X,sRGB, Windows 10 / 8.1 / 8 / 7Stand•Height adjustment: 130 mm•Swivel:-/+20 degree•Tilt: -5~10 degreePower•ECO mode: 36.2 W (typ.)•On mode: 41.8 W (typ.) 
(EnergyStar 8.0 testmethod)•Standby mode: 0.4 W (typ.)•Off mode: Zero watts with Zero switch•Energy Label Class: G•Power LED indicator: Operation - White, Standbymode- White (blinking)•Power supply: Built-in, 100-240VAC, 50-60HzDimensions•Product with stand(max height): 1058 x 560 x303 mm•Product without stand (mm): 1058 x 361 x137 mm•Packaging in mm (WxHxD): 1150 x 525 x 350 mmWeight•Product with stand (kg): 14.37 kg•Product without stand (kg): 10.34 kg•Product with packaging (kg): 20.19 kgOperating conditions•Temperature range (operation): 0°C to 40 °C•Temperature range (storage): -20°C to 60 °C•Relative humidity: 20%-80 %•Altitude: Operation: +12,000ft (3,658m), Non-operation: +40,000ft (12,192m)•MTBF (demonstrated): 70,000 hrs (excludedbacklight)Sustainability•Environmental and energy: EnergyStar 8.0,EPEAT*, TCO Certified, RoHS, WEEE•Recyclable packaging material: 100 %•Post consumer recycled plastic: 35%•Specific Substances: PVC / BFR free housing,Mercury freeCompliance and standards•Regulatory Approvals: CE Mark, FCC Class B,UKRAINIAN, ICES-003, CU-EAC, TUV/GS, TUVErgoCabinet•Front bezel: Black•Rear cover: Black•Foot:Black•Finish: TextureWhat's in the box?•Monitor with stand•Cables:HDMI cable,DP cable, USB-C to C/A,Power cable•User Documentation*Radius of the arc of the display curvature in mm*The maximum resolution works for either USB-C, DP or HDMIinput.*Response time value equal to SmartResponse*BT. 709 / DCI-P3 Coverage based on CIE1976*NTSC Area based on CIE1976*sRGB Area based on CIE1931*Adobe RGB Coverage based on CIE1976*DisplayPort 1.4 version is for HDR*Activities such as screen sharing, on-line streaming video and audioover the Internet can impact your network performance. Yourhardware, network bandwidth and its performance will determineoverall audio and video quality.*For USB-C power and charging function, your Notebook/devicemust support USB-C standard Power Delivery specifications. Pleasecheck with your Notebook user manual or manufacturer for moredetails.*For Video transmission via USB-C, your Notebook/device mustsupport USB-C DP Alt mode*USB-C max. power delivery: 1st USB-C port can support to 75 Wand 2nd USB-C port can support to 15 W.*If your Ethernet connection seems slow, please enter OSD menuand select USB 3.0 or higher version which can support the LANspeed to 1G.*EPEAT rating is valid only where Philips registers the product. Pleasevisit https:/// for registration status in your country.*The monitor may look different from feature images.。
nativealloc concurrent copying gc freed
NativeAlloc Concurrent Copying GC Freed refers to a garbage-collection mechanism in Android used to reclaim unused memory from the Java heap.
This article explains how the mechanism works and how to optimize its performance.

Step 1: understand the structure of the Java heap.
The Java heap is the largest memory region managed by the Java virtual machine and is mainly used to store object instances.
The Java heap can be divided into a young generation and an old generation.
The young generation is further divided into three regions: the eden space, survivor space 0, and survivor space 1.
The JVM's garbage collection mainly targets the young and old generations.

Step 2: the "Concurrent Copying GC" algorithm.
This is a garbage-collection algorithm based on the generational hypothesis; it divides the Java heap into a young generation and an old generation.
The young generation is split into an eden space and two survivor spaces.
When the eden space of the young generation fills up, a garbage collection is triggered and all surviving objects are copied into a survivor space.
When that survivor space also fills up, a minor GC occurs and the surviving objects are copied into the other survivor space.

Step 3: NativeAlloc.
NativeAlloc refers to memory allocated from the native memory pool.
Because native memory is not managed by the Java virtual machine, it must be used with particular care to avoid memory leaks and undefined behavior.

Step 4: how to optimize the performance of the NativeAlloc concurrent copying GC.
First, avoid native allocations unless they are really necessary.
Second, prefer relatively small objects on the Java heap to reduce memory usage.
Next, avoid creating too many objects; an object pool can be used to reduce memory pressure (a small sketch follows below).
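The last suggestion can be illustrated with a tiny object pool. This is a generic sketch (the BufferPool class is made up, not an Android API): buffers are reused instead of being reallocated for every request, which reduces pressure on the garbage collector.

    import java.util.ArrayDeque;

    public final class BufferPool {
        private final ArrayDeque<byte[]> free = new ArrayDeque<>();
        private final int bufferSize;

        public BufferPool(int bufferSize) {
            this.bufferSize = bufferSize;
        }

        public synchronized byte[] acquire() {
            byte[] buf = free.poll();          // reuse a pooled buffer if one is available
            return (buf != null) ? buf : new byte[bufferSize];
        }

        public synchronized void release(byte[] buf) {
            if (buf.length == bufferSize) {
                free.push(buf);                // return the buffer for later reuse
            }
        }
    }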
adaptivethreshold offset - reply

What is the adaptivethreshold offset? The adaptive threshold offset is part of a technique commonly used in image processing to segment an image into a binary image.
Image segmentation is one of the basic steps of image processing: it separates the objects of interest from the background so that the image is easier to understand and process.
When segmenting an image, a suitable threshold has to be chosen to separate foreground from background, and the adaptivethreshold offset takes part in computing that threshold.
To understand what the offset does, it helps to first understand how adaptive threshold segmentation works.
This algorithm is a segmentation method based on adaptive thresholds: it chooses the threshold automatically from the local characteristics of pixel intensities, which gives better segmentation results.
The adaptive threshold algorithm determines a threshold for each pixel from the pixel's neighbourhood, which lets it segment images effectively even under uneven illumination or noise.
Concretely, the adaptivethreshold offset is used to adjust the threshold computed by the algorithm.
The algorithm computes the threshold from the average grey level of the local neighbourhood, and the offset corrects this computed value.
When using adaptive thresholding for segmentation, special situations may arise, for example strong illumination differences or heavy noise in the image.
In such cases the computed threshold may deviate from the desired segmentation result, so the threshold needs to be adjusted to the actual situation.
The adaptivethreshold offset does this by increasing or decreasing the threshold used by the algorithm.
When the image contains strong illumination differences, the offset can be increased so that the threshold rises and adapts better to the lighting changes.
When the image contains a lot of noise, the offset can be decreased so that the threshold falls and more of the object's detail is preserved.
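As a concrete example, OpenCV exposes this offset as the last argument (C) of adaptiveThreshold. A minimal sketch using the OpenCV Java bindings follows; the file name page.png is made up. Note that in OpenCV's convention the per-pixel threshold is the neighbourhood mean (or weighted mean) minus C, so the sign of the offset may be the opposite of the description above.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;

    public class AdaptiveThresholdDemo {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);   // load the native OpenCV library
            Mat gray = Imgcodecs.imread("page.png", Imgcodecs.IMREAD_GRAYSCALE);
            Mat binary = new Mat();
            // blockSize = 31: neighbourhood used for the local mean; C = 10: the offset,
            // subtracted from that local mean to form the per-pixel threshold.
            Imgproc.adaptiveThreshold(gray, binary, 255,
                    Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 31, 10);
            Imgcodecs.imwrite("page_binary.png", binary);
        }
    }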
Sparse Recovery and Fourier Sampling
Accepted by: Leslie A. Kolodziejski, Chair, Department Committee on Graduate Students
Sparse Recovery and Fourier Sampling by Eric Price
Submitted to the Department of Electrical Engineering and Computer Science on August 26, 2013, in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science
Author: Department of Electrical Engineering and Computer Science, August 26, 2013
Certified by: Piotr Indyk, Professor, Thesis Supervisor
adaptivethreshold offset - reply

What is the adaptivethreshold offset (adaptive threshold offset)? In computer vision and image processing, adaptive thresholding is a common technique for segmenting an image into background and foreground.
It computes an adaptive threshold for each pixel from the local pixel values around it and uses that threshold to decide whether the pixel belongs to the background or the foreground.
This approach is very useful for images taken under varying lighting conditions because it adapts to brightness changes across different regions.
The adaptivethreshold offset is a parameter of adaptive thresholding that adjusts the local threshold.
Concretely, for each pixel the adaptive threshold is computed from the pixel values in its neighbourhood.
During this computation the offset acts as a fine-tuning term that shifts how the pixel is classified.
There are many variants of adaptive thresholding.
One common variant is based on a weighted neighbourhood average.
It computes the average pixel value of each pixel's neighbourhood and classifies the pixel as background or foreground based on that average and the offset.
If the pixel value is higher than the average plus the offset it is treated as foreground; otherwise it is treated as background.
So how should an appropriate adaptivethreshold offset be chosen? Choosing the offset is a key step, because it directly affects the result of adaptive thresholding.
A reasonable offset should segment the image accurately while preserving enough image detail.
In practice, choosing a good offset usually requires experimentation and tuning.
A common approach is to adjust it manually based on visual quality: inspect the segmentation under different offsets and pick the one that looks best.
This is simple and intuitive, but it requires a lot of manual experimentation and subjective judgement, so it is neither automated nor precise.
A smarter approach is to optimize the offset automatically from image statistics.
By analysing the statistics of the image, one can characterise the distribution of pixel values in a region and choose an appropriate offset from that information.
This approach usually combines a mathematical model with heuristics and determines the best offset by learning and iterative optimization (a rough sketch follows below).
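As a rough illustration of such automatic tuning, the sketch below (OpenCV Java bindings, assuming the native library has already been loaded as in the earlier example) sweeps candidate offsets and keeps the one whose binarised output is closest to a target foreground ratio. The target ratio, the search range, and the scoring rule are made-up heuristics, not a standard algorithm.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.imgproc.Imgproc;

    public class OffsetSearch {
        // targetRatio: desired fraction of foreground (white) pixels, e.g. 0.1 for printed text
        public static double pickOffset(Mat gray, double targetRatio) {
            double bestC = 0, bestDiff = Double.MAX_VALUE;
            Mat binary = new Mat();
            for (double c = -20; c <= 20; c += 2) {            // candidate offsets
                Imgproc.adaptiveThreshold(gray, binary, 255,
                        Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 31, c);
                double ratio = Core.countNonZero(binary) / (double) binary.total();
                double diff = Math.abs(ratio - targetRatio);   // distance from the target ratio
                if (diff < bestDiff) {
                    bestDiff = diff;
                    bestC = c;
                }
            }
            return bestC;
        }
    }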
THE CACHE INFERENCE PROBLEM and its Application to Content and Request Routing
T HE C ACHE I NFERENCE P ROBLEMand its Application to Content and Request Routing N IKOLAOS L AOUTARIS G EORGIOS Z ERVAS A ZER B ESTAVROS G EORGE K OLLIOS nlaout@ zg@ best@ gkollios@Abstract—In many networked applications,independent caching agents cooperate by servicing each other’s miss streams, without revealing the operational details of the caching mecha-nisms they employ.Inference of such details could be instru-mental for many other processes.For example,it could be used for optimized forwarding(or routing)of one’s own miss stream(or content)to available proxy caches,or for making cache-aware resource management decisions.In this paper,we introduce the Cache Inference Problem(CIP)as that of inferring the characteristics of a caching agent,given the miss stream of that agent.While CIP is insolvable in its most general form,there are special cases of practical importance in which it is,including when the request stream follows an Independent Reference Model (IRM)with generalized power-law(GPL)demand distribution. To that end,we design two basic“litmus”tests that are able to detect LFU and LRU replacement policies,the effective size of the cache and of the object universe,and the skewness of the GPL demand for ing extensive experiments under synthetic as well as real traces,we show that our methods infer such characteristics accurately and quite efficiently,and that they remain robust even when the IRM/GPL assumptions do not hold,and even when the underlying replacement policies are not“pure”LFU or LRU.We exemplify the value of our inference framework by considering example applications.I.I NTRODUCTIONMotivation:Caching is a fundamental building block of computing and networking systems and subsystems.In a given system,multiple caching agents may co-exist to provide a single service or functionality.A canonical example is the use of caching in multiple levels of a memory system to implement the abstraction of a“memory hierarchy”.For distributed systems,the use of caching is paramount: route caches are used to store recently-used IP routing entries and is consulted before routing tables are accessed;route caches are also used in peer-to-peer overlays to cache hub and hub cluster information as well as Globally Unique IDentifiers (GUIDs)to IP translations for content and query routing purposes[1];DNS lookups are cached by DNS resolvers to reduce the load on upper-level DNS servers[2];distributed host caching systems are used in support of P2P networks to allow peers tofind and connect to one another[3];web proxy and reverse proxy caches are used for efficient content distribution and delivery[4],[5],and the list goes on.When multiple caching agents are used to support a single operation,be it a single end-host or a large CDN overlay network,the“design”of such agents is carefully coordinated Computer Science Dept,Boston University,Boston,Massachusetts,USA. Dept of Informatics and Telecommunications,University of Athens,Greece. 
outaris is supported by a Marie Curie Outgoing International Fellowship of the EU MOIF-CT-2005-007230.A.Bestavros and G.Kollios are supported in part by NSF CNS Cybertrust Award#0524477,CNS NeTS Award #0520166,CNS ITR Award#0205294,and EIA RI Award0202067.to ensure efficient operation.For instance,the size and replace-ment strategies of the caching mechanisms used to implement traditional memory hierarchies are matched up carefully to maximize overall performance under a given workload(typi-cally characterized by reference models or benchmarks).Sim-ilarily,the placement,sizing,and cache replacement schemes used for various proxies in a CDN are carefully coordinated to optimize the content delivery process subject to typically-skewed demand profile from various client populations. Increasingly,however,a system may be forced to rely on one or more caching agents that are not necessarily part of its autonomous domain and as such may benefit from infering the characteristics of such agents.Next,we give a concrete example to motivate the work presented in this paper. Application to Informed Caching Proxy Selection:Consider a web server(or a proxy thereof),which must decide which one of potentially many candidate reverse caching proxies (possibly belonging to competing ISPs)it should use to“front end”its popular content.Clearly,in such a setting,it cannot be assumed that details of the design or policies of such caching agents would be known to(let alone controlled or tuned by) the entity that wishes to use such agents.Yet,it is clear that such knowledge could be quite valuable.For example,given a choice,a web server would be better off selecting a reverse proxy cache with a larger cache size.Alternatively,if the web server needs to partition its content across multiple reverse proxies,it would be better off delegating content that exhibits popularity over long time scales to proxies employing an LFU-like replacement policy,while delegating content that exhibits popularity over shorter time scales to proxies employing an LRU-like replacement policy.Even if the web server has no choice in terms of the reverse caching proxy to use,it may be useful for the web server to ascertain the characteristics of the reference stream served by the caching proxy.The above are examples of how knowledge of the charac-teristics of remote caching agents could empower a server to make judicious ter,in Section VIII,we show how the same information could also empower a client-side caching proxy to make judicious decisions regarding which other caching proxies to use to service its own miss stream.Both of these are examples of Informed Caching Proxy Selection, which has applications in any system that allows autonomous entities theflexibility of selecting from among(or routing content or requests across)multiple remote caching agents. 
The Cache Inference Problem:In many of the distributed ap-plications and networking systems(to which we have alluded above),a caching agent that receives a request which it cannot service from its local storage redirects(or routes)such requeststo other subsystems–e.g.,another caching agent,or an origin server.Such redirected requests constitute the“miss stream”of the caching agent.Clearly,such a miss stream carries in it information about the caching mechanisms employed by the caching agent.While the subsystems at the receiving end of such miss streams may not be privy to the“design”of the caching agent producing such a miss stream,it is natural to ask whether“Is it possible to use the information contained in a miss stream to infer some of the characteristics of the remote cache that produced that stream?”Accordingly,the most general Cache Inference Problem (CIP)could be stated as follows:“Given the miss stream of a caching agent,infer the replacement policy,the capacity, and the characteristics of the input request stream”.The above question is too unconstrained to be solvable,as it allows arbitrary reference streams as inputs to the cache. Thus,we argue that a more realistic problem statement would be one in which assumptions about common characteristics of the cache reference stream(i.e.,the access pattern)are made.We single out two such assumptions,both of which are empirically justified,commonly made in the literature, and indeed leveraged in many existing systems.Thefirst is that the reference stream follows the Independent Reference Model(IRM)[6],whereas the second is that the demand for objects follows a Generalized Power Law(GPL).As we show in Section II,CIP when subjected to the IRM and GPL assumptions is solvable.As we hinted above,we underscore that the GPL and IRM assumptions on the request stream have substantial empirical justification,making the solution of the CIP problem of significant practical value to many systems.Moreover,as will be evident later in this paper,the solutions we devise for the CIP problem subject to GPL and IRM work well even when the request stream does not satisfy the GPL and IRM assumptions. Beyond Inference of Caching Characteristics:So far,we have framed the scope of our work as that of“infering”the characteristics of an underlying caching agent.While that is the main technical challenge we consider in this paper(with many direct applications),we note that the techniques we develop have much wider applicability and value.In particular, we note that our techniques could be seen as“modeling”as opposed to“inference”tools.For instance,in this paper we provide analytical and numerical approaches that allow us to discern whether an underlying cache uses LFU or LRU replacement.In many instances,however,the underlying cache may be using one of many other more elaborate replace-ment strategies(e.g.,GreedyDual,LRFU,LRU(k),etc.)or replacement strategies that use specific domain knowledge (e.g.,incorporating variable miss penalties,or variable object sizes,or expiration times,etc.).The fact that an underlying cache may be using a replacement strategy other than LFU and LRU does not mean that our techniques cannot be used effectively to build a robust model of the underlying cache. 
By its nature,a model is a simplifcation or an abstraction of an underlying complex reality,which is useful as long as it exhibits verifiable predictive powers that could assist/inform the processes using such models–e.g.,to inform resource schedulers or resource allocation processes.The inherent value of the techniques we devise in this paper for modeling purposes is an important point that we would like to emphasize at the outset.To drive this point home,we discuss below an example in which modeling an underlying cache could be used for informed resource management. Application to Caching-Aware Resource Management: Consider the operation of a hosting service.Typically,such a service would employ a variety of mechanisms to improve performance and to provide proper allocation of resources across all hosted web sites.For instance,the hosting service may be employing traffic shapers to apportion its out-bound bandwidth across the various web sites to ensure fairness, while at the same time,it may be using a third-party caching network to front-end popular content.For simplicity,consider the case in which two web sites A and B are hosted under equal Service Level Agreements(SLAs).As such,a natural setting for the traffic shaper might be to assign equal out-bound bandwidth to both A and B.Now,assume that web site A features content with a highly-skewed popularity profile and which is accessed with high intensity,whereas web site B features content with a more uniform popularity and which is accessed with equally high ing the techniques we devise in this paper,by observing the miss streams of A and B,the hosting service would be able to construct a“model”of the effective caches for A and B.We emphasize“effective”because in reality,there is no physical separate caches for A and for B,but rather,the caching network is shared by A and B(and possibly many other web sites).In other words, using the techniques we devise in this paper would allow us to construct models of the“virtual”caches for A and ing such models,we may be able to estimate(say)the hit rates and that the caching network delivers for A and B.For instance,given the different cacheability of the workloads for A and B,we may get.Given such knoweldge,the hosting service may opt to alter its traffic shaping decision so as to allocate a larger chunk of the out-bound bandwidth to B to compensate it for its inability to reap the benefits from the caching network.This is an example of caching-aware resource management which benefits from the ability to build effective models of underlying caching processes.1Paper Overview and Contributions:In addition to concisely formulating the cache inference problem in Section II,this paper presents a general framework that allows us to solve these problems in Section III.In Sections IV and V,we develop analytical and numerical approaches that allow us to obtain quite efficient instantiations of our inference procedure for LFU and LRU.In Section VI,we generalize our instanti-ations to allow for the inference of the size of the underlying cache and that of the object universe.In Section VII,we present evidence of the robustness of our inference techniques, using extensive simulations,driven with both synthetic and real traces.In Section VIII,we present experimental results that illustrate the use of our inference techniques for informed request routing.We conclude the paper in Sections IX and X with a summary of related work and on-going research.1The ability to infer the relative benefits from a shared 
cache in order to inform resource management decisions has been contemplated quite recently for CPU scheduling in emerging multi-core architectures[7],[8].II.P ROBLEM S TATEMENT AND A SSUMPTIONS Consider an object set,where denotes the th unit-sized object.2Now,assume that there exists a client that generates requests for the objects in.Let denote the th request and let indicate that the th request refers to object.The requests constitute the input of a cache memory with space that accommodates up to objects,operating under an unknown replacement policy ALG.3If the requested object is currently cached,then leads to a cache hit,meaning that the requested object can be sent back to the client immediately.If is not currently cached,then leads to a cache miss,meaning that the object has to be fetchedfirst before it can be forwarded to the client,and potentially cached for future use.Let denote the th miss that appears on the“output”of the cache,where by output we mean the communication channel that connects the cache to a subsystem that is able to produce a copy of all requested objects,e.g.,an“origin server”that maintains permanent copies of all objects:if the th miss is due to the th request,.In both cases(hit or miss), a request affects the information maintained for the operation of ALG,and in the case of a miss,it also triggers the eviction of the object currently specified by ALG.Definition1:(CIP)Given:(i1)the miss-stream,,and(i2)the cache size and the object universe size,find:(o1)the input request stream,, and(o2)the replacement policy ALG.If one thinks of ALG as being a function ALG,then CIP amounts to inverting the output and obtaining the input and the function ALG itself.Can the above general CIP be solved?The answer is no.To see that,it suffices to note that even if we were are also given ALG,we still could not infer the’s from the’s and,in fact,we cannot even determine.For instance,consider the case in which some very popular objects appear on the miss-stream only when they are requested for thefirst time and never again as all subsequent requests for them lead to hits.These objects are in effect“invisible”as they can affect and in arbitrary ways that leave no sign in the miss-stream and thus cannot be infered.4Since the general CIP is too unconstrained to be solvable, we lower our ambition,and target more constrained versions that are indeed solvable.Afirst step in this direction is to impose the following assumption(constraint).Assumption1:Requests occur under the Independent Ref-erence Model(IRM):requests are generated independently and follow identical stationary distributions,i.e.,,whereis a Probability Mass Function(PMF)over the object set.2The unit-sized object assumption is a standard one in analytic studies of replacement algorithms[6],[9]to avoid adding0/1-knapsack-type complex-ities to a problem that is already combinatorial.Practically,it is justified on the basis that in many caching systems the objects are much smaller than the available cache size,and that for many applications the objects are indeed of the same size(e.g.,entries in a routing table).3We refer interested readers to[10]for a fairly recent survey of cache replacement strategies.4Similar examples illustrating the insolvability of CIP can be given without having to resort to such pathological cases.The IRM assumption[6]has long being used to characterize cache access patterns[11],[12]by abstracting out the impact of temporal correlations,which was shown in[13]to be minuscule,especially under 
typical,Zipf-like object popularity profiles.Another justification for making the IRM assumption is that prior work[14]has showed that temporal correlations decrease rapidly with the distance between any two references. Thus,as long as the cache size is not minuscule,temporal correlations do not impact fundamentally the i.i.d.assumption in IRM.Regarding the stationarity assumption of IRM,we note that previous cache modeling and analysis works have assumed that the request distribution is stationary over some long-enough time scale.Moreover,for many application do-mains,this assumption is well supported by measurement and characterization studies.5The IRM assumption makes CIP simpler since rather than identifying the input stream,it suffices to characterize statistically by infering the of the’s.Still,it is easy to see that CIP remains insolvable even in this form(see[15] for details).We,therefore,make one more assumption that makes CIP solvable.Assumption2:The PMF of the requests is a Generalized Power Law(GPL),i.e.,the th most popular object,hereafter (without loss of generality)assumed to be object is re-quested with probability,where is the skewness of the GPL and is a normalization constant.The GPL assumption allows for an exact specification of the input using only a single unknown—the skewness parameter .6Combining the IRM and GPL assumptions leads to the following simpler version of CIP,which we call CIP2.Definition2:(CIP2)Given:(i1)the miss-stream,,and(i2)the cache size and the object universe size,find:(o1)the skewness parameter of the IRM/GPL input,and(o2)the replacement policy ALG.Before concluding this section,we should emphasize that the IRM/GPL assumptions were made to obtain a framework within which the CIP problem becomes solvable using sound analysis and/or numerical methods.However,as we will show in later sections of this paper,the inference techniques we devise are quite robust even when one or both of the IRM and GPL assumptions do not hold.This is an important point that we want to make sure is quite understood at this stage.III.A G ENERAL I NFERENCE F RAMEWORKLet denote the appearance probability of the th most popular object in the miss stream of CIP2—let this be object.For a cache of size,under LFU replacement ,whereas under LRU replacement may assume any value between and,depending on demand skewness and cache size(more details in Section V).In both cases,,where is the indicator function.Below,we present our general inference framework for solving CIP2.It consists of the following three steps:5Obviously,if the demand is non-stationary and radically changing over shorter small time scales,then no analysis can be carried out.6Without the GPL assumption the input introduces unknowns,whereas without the IRM assumption it introduces unknowns,that can be arbitrary many,and even exceed.1.Hypothesis:In this step we hypothesize that a knownreplacement policy ALG operates in the cache.2.Prediction:Subject to the hypothesis above,we want to predict the PMF of the miss stream obtained when ALGoperates on an IRM GPL input request stream.This requires the following:(i)obtain an estimate of the exact skewness of the unknown GPL()input PMF,and(ii)derive the steady-state hit probabilities for different objects under thehypothesized ALG and,i.e.,,.The’s and the’s(corresponding to a GPL())lead to our predicted miss-stream as follows:(1)3.Validation:In this step we compare our predicted PMF to the PMF of the actual(observed)miss stream,and decide whether our hypothesis about ALG was 
correct.In order to define a similarity measure between and,we view each PMF as a(high-dimensional)vector,and define the distance between them to be the-norm distance between their corre-sponding vectors.Thus,the error between the predicted PMF and the observed one is given by. In this work,unless otherwise stated,we use the-norm.A positive validation implies a solution to CIP2as we would have inferred both the unknown replacement algorithm and the PMF of the input stream,using only the observed miss stream.In the above procedure,the prediction step must be cus-tomized for different ALG s assumed in the hypothesis step. In Sections IV and V,we do so for LFU and LRU,using a combination of analytical and fast numerical methods.The above inference procedure has important advantages over an alternative naive“simulation-based”approach that assumes an and an ALG,performs the corresponding sim-ulation,and then validates the assumptions by comparing the actual and the predicted PMFs.While the overall approach is similar,we take special care to employ analytic or fast numeric techniques where possible and thus avoid time-consuming simulations(mainly for the prediction step).Such advantages will be evident when we present our analytical method for de-riving under LFU,and the corresponding analytical and fast numerical methods for obtaining the predicted miss streams for both LFU and LRU.Needless to say,fast inference is important if it is to be performed in an on-line fashion.IV.T HE P REDICTION S TEP FOR LFUThe main challenge here is to obtain an estimate of by using the observed miss stream of an LFU cache.We start with some known techniques from the literature and then present our own SPAI analytical method.Having obtained the estimate and the corresponding,it is straightforward to construct the predicted PMF of the actual miss stream PMF .This is so because LFU acts as a cut-offfilter on the most-popular objects in the request stream,leading to the following relationship for.(2)A.Existing Methods:MLE,OLS,and RATThere are several methods for estimating the skewness of a power-law distribution through samples of its tail.In this section we will briefly present three popular ones:the Maxi-mum Likelihood Estimator(MLE),the Linear Least Squares (OLS)estimator,and the RAT estimator.MLS and OLS are numeric methods of significant computational complexity, whereas RAT is analytic.The Maximum Likelihood Estimator(MLE):Given a vector of miss stream observations we would like to determine the value of the exponent which maximizes the probability (likelihood)of the sampled data[16].Using the power law input demand,we can derive an equation that can be solved using standard iterative numerical methods and provide an unbiased estimate of[15].An Alternative Analytical Estimator(RAT):MLE is asymptot-ically optimal.In practice we may be interested in obtaining an estimate with a limited number of observations.A(poorer) estimator uses and to get.Equating towe get(3)(4)(5)A Linear Least-Squares Estimator(OLS):Yet another method to estimate is to use linear least-squares estimation on the plot of,i.e.,the PDF of the miss stream.Thisgraphical method is well documented and perhaps one of the most commonly used(e.g.,in[17]).B.SPAI:Our Single Point Analytic Inversion MethodAs its name suggests,SPAI considers a single(measurement)“point”–the appearance frequency of the most popular object in the miss stream of an LFU cache,which is subject to GPL demand with skewness parameter–and uses it to derive an estimate of.A brief 
overview of SPAI goes as follows:We start with an equation that associates the measured frequency at the miss-stream with the request frequency of the unknown input stream.We use our assumption(that the input is GPL)to substitute all’s by analytic expressions of and.We employ a number of algebraic approximations to transform our initial non-algebraic equation into a polynomial equation of our single unknown .We then obtain a closed-form expression for by solving a corresponding cubic equation obtained by an appropriate truncation of our original polynomial equation.The details are presented next.Number of miss stream observationsαNumber of miss stream observationsαNumber of miss stream observationsαNumber of miss stream observationsαFig.1.SPAI vs MLE,OLS,and RAT on inferring the exponent of a GPL from samples of its tail (objects with rank and higher)shown withconfidence intervals.Using the integral approximationfor the th generalized Harmonic number of order ,,we can write:For large we have ,leading to the followingapproximation.After term re-arrangement,we re-write the last equation as(6)or equivalently aswhere:(7)Expanding the exponential formon around point we get .By substituting in Eq.(7),we get the following master equation :(8)Equation (8)can be approximated through “truncation”,i.e.,by limiting to instead of .For we get a quadraticequation which has one non-zero real solution.Forwe get a cubic equation with two real solutions and we can choose the positive one.Finally,for we get:7(9)At least one of the roots of the cubic equation in the paren-theses of Eq.(9)is real —we select the one in and useit to obtain the skewness parameter through.7Notice that for we actually have a cubic equation of (insidethe parentheses)after factoring out the common term .We can therefore use the cubic formula [18]to obtain a closed-form solution for the unknown .Theoretically we could go even further and consider ,which would put a quartic equation in the parentheses.The solution of the general quartic equation,however,involves very cumbersome formulas for the roots and is marginally valuable since the cubic equation already provides a close approximation as will be demonstrated later on.and the quarticequation is actually as far as we can go because forwe would have a quintic equation in the parentheses for which there does not exist a general solution over the rationals in terms of radicals (the “Abel-Ruffini”theorem).C.SPAI Versus Existing ApproachesWe perform the following experiment in order to compare the performance of SPAI to existing approaches that could be used for the prediction step.We simulate an LFU cache of size ,driven by a input request stream.In Figure 1we plot the estimated skewness obtained from SPAI and the other methods (on the y-axis)for different numbers of miss-stream samples (on the x-axis).We use low ()and high ()skewness,and low and high()relative cache sizes.These results indicate that SPAI performs as well as (and most of the time better than)MLE and OLS,over which it has the additional advantage of being an analytical method ,thus incurring no computational complexity.SPAI’s rival analytical method,RAT,performs much worse.V.T HE P REDICTION S TEP FOR LRUThe steady-state hit probabilities of LRU bear no simple characterization,and this has several consequences on the prediction step of our cache inference framework presented in Section III.First,even if we had ,it is not trivial to derive the ’s (and therefore get the ’s through Eq.(1)).Computing exact steady-state hit 
probabilities for LRU under IRM is a hard combinatorial problem,for which there exist mostly numerical approximation techniques [19],[20],[21].Second,and most importantly,the steady-state hit probabilities depend on the skewness of the input through a function.Since we do not have a simple analytic expressionfor,it is impossible to invert Eq.(1)and obtain a closed-form expression for ,as with our SPAI method for LFU.Unfortunately,our analytical derivation of [22]cannot be used for this purpose as it involves a complex non-algebraic function 8that leads to a final equation for (through Eq.(1))that admits no simple closed-form solution.In light of the discussion above,our approach for the prediction step for LRU replacement is the following.We resort to a numeric search technique for obtaining ,i.e.,we start with an arbitrary initial value ,computethe corresponding steady-state hit probabilitiesusing either the numeric technique of Dan and Towsley [19],or our own analytical one from [22],and compute the miss probability for each object of the input.Next,we sort these in decreasing popularity to obtain the predicted miss streamand,finally,the error .We then use a local search approach to find the that minimizes the error .8The unknown appearing in complex polynomial and exponential formsover multiple different bases.。
ant-design-vue-pro multitab usage

The topic I will be covering is the usage of "ant-design-vue-pro multitab."
"ant-design-vue-pro" is a popular UI library for Vue.js. It provides a wide range of reusable components and layouts that can be used to build modern and professional-looking web applications. One of the features it offers is the multitab functionality, which allows users to have multiple tabs within a single page.
To use the multitab feature in "ant-design-vue-pro," you first need to import the necessary components from the library. Once imported, you can start using them in your Vue templates. To create a multitab layout, you typically need a container element to hold the tabs, and each tab needs its own unique identifier.
AbstractValueAdaptingCache get and lookup - reply

The get method of AbstractValueAdaptingCache is a cache operation in the Spring Framework used to obtain the value for a given key.
The lookup method is an internal method of AbstractValueAdaptingCache that performs the actual cache lookup.
This article explores what these two methods do, how they are implemented, and when to use them.

1. About AbstractValueAdaptingCache. AbstractValueAdaptingCache is an abstract class in the Spring Framework that implements the Cache interface and provides the basic plumbing for cache operations.
By adapting cache values, it implements reading from and storing into the cache, and it also offers features such as clearing the cache and obtaining cache hit information.

2. The get method in detail. get is the core method in AbstractValueAdaptingCache for retrieving a cached value.
Method signature:

    @Nullable
    public ValueWrapper get(Object key) { ... }

Parameters: key - the key whose cached value should be retrieved.
Implementation: get first calls lookup to search the cache, then wraps the result and returns it:

    @Nullable
    public ValueWrapper get(Object key) {
        return toValueWrapper(lookup(key));
    }

3. The lookup method in detail. lookup is the method in AbstractValueAdaptingCache that performs the concrete cache lookup.
Method signature:

    @Nullable
    protected abstract Object lookup(Object key);

Parameters: key - the key to look up.
Implementation: lookup is implemented by subclasses of AbstractValueAdaptingCache, which supply the actual cache-lookup logic (a minimal subclass sketch follows below).
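For illustration, here is a minimal subclass sketch backed by a ConcurrentHashMap. The class name MapBackedCache is made up (Spring's own ConcurrentMapCache plays a similar role); the point is that lookup supplies raw store access while the inherited get(Object) adapts and wraps the result.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    import org.springframework.cache.Cache;
    import org.springframework.cache.support.AbstractValueAdaptingCache;

    public class MapBackedCache extends AbstractValueAdaptingCache {

        private final String name;
        private final ConcurrentMap<Object, Object> store = new ConcurrentHashMap<>();

        public MapBackedCache(String name) {
            super(true);                        // true = allow null values (adapted by toStoreValue)
            this.name = name;
        }

        @Override
        protected Object lookup(Object key) {
            return store.get(key);              // raw store value, or null on a miss
        }

        @Override
        public String getName() { return name; }

        @Override
        public Object getNativeCache() { return store; }

        @Override
        @SuppressWarnings("unchecked")
        public <T> T get(Object key, Callable<T> valueLoader) {
            Object storeValue = lookup(key);
            if (storeValue != null) {
                return (T) fromStoreValue(storeValue);
            }
            try {
                T value = valueLoader.call();   // cache miss: load the value and remember it
                store.put(key, toStoreValue(value));
                return value;
            } catch (Exception ex) {
                throw new Cache.ValueRetrievalException(key, valueLoader, ex);
            }
        }

        @Override
        public void put(Object key, Object value) { store.put(key, toStoreValue(value)); }

        @Override
        public void evict(Object key) { store.remove(key); }

        @Override
        public void clear() { store.clear(); }
    }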
RPKI cache server validation principles
rpki cache server 验证原理RPKI (Resource Public Key Infrastructure) is a system used to verify the authenticity and validity of Internet routing information. RPKI cache servers play a crucial role in this process by storing and distributing the RPKI data required for validation.The overall RPKI verification process involves a hierarchy of trust, where a top-level certificate authority (CA) signs the certificates of other CAs, and these CAs issue certificates for individual Internet resource holders (such as Internet service providers or autonomous systems). These certificates contain cryptographic keys that are used to sign Route Origin Authorizations (ROAs) that specify which autonomous systems are authorized to originate certain IP prefixes.When a router receives a route announcement, it can use the RPKI cache server to validate the origin AS using the following steps: 1. The router extracts the IP prefix and the AS path from the received route announcement.2. The router checks its local cache for a valid ROA for the specific IP prefix.3. If a valid ROA is found in the cache, the router extracts the AS numbers allowed to originate the prefix from the ROA.4. The router then checks if the AS path in the received route matches the authorized AS numbers in the ROA.5. If the AS path matches the authorized AS numbers, the route is considered valid and can be entered into the router's routing table. The RPKI cache server plays a vital role in this process byproviding the necessary RPKI data to the routers. The cache server regularly fetches the RPKI data from the top-level CA and caches it locally for efficient validation. It maintains a database of certificates, ROAs, and other relevant information.The cache server also ensures the freshness and validity of the cached data by periodically fetching updates from the top-level CA. This update process, known as "delta synchronization," ensures that the cache server has the latest certificates and ROAs.Additionally, the cache server may implement various optimizations like caching negative responses (no ROA found) and filtering ROAs based on their impact on routing stability and security.In summary, the RPKI cache server verifies the authenticity and validity of Internet routing information by caching and distributing RPKI data, performing regular updates, and enabling efficient validation of route origins based on received route announcements and ROAs.。
Adaptive caching algorithm - reply

An adaptive caching algorithm (Caching Adaptive Algorithm) is an algorithm for optimizing the caches in a computer system.
It dynamically adjusts the caching policy and the cache capacity so as to adapt to different workloads and resource-utilization requirements.
This article walks step by step through the principles of adaptive caching algorithms, their application scenarios, and their advantages and disadvantages.

1. Introduction. With the spread of computer applications and the rapid growth of the Internet, many systems run into performance bottlenecks when handling large volumes of data requests.
Caching has become an important way to improve response time and efficiency.
Caching stores frequently used data on a fast storage medium to reduce the number of accesses to slower media (such as hard disks), thereby improving system performance.
However, traditional caching usually requires the cache capacity and replacement policy to be configured in advance, so it cannot adapt to dynamically changing workloads or unpredictable access patterns.
As a result, in some heavily loaded systems the cache capacity is poorly utilized and the cache cannot deliver its full benefit.
Adaptive caching algorithms were introduced to solve this problem.

2. Principles. An adaptive caching algorithm dynamically monitors the system's workload and resource utilization and automatically adjusts the cache capacity and replacement policy to maximize the effectiveness of the cache.
An adaptive caching algorithm typically consists of two main modules, load monitoring and policy adjustment (a toy sketch follows the list below):
1. Load monitoring: this module observes the system's runtime state and data-access patterns and collects metrics such as request frequency, access patterns, and hit rate.
These metrics are used to assess the current workload and how well the cache is being utilized.
2. Policy adjustment: based on the information collected by the load-monitoring module, this module dynamically adjusts the cache capacity and replacement policy.
For example, when the load is high the cache capacity can be increased to raise the hit rate, and when the load is low the capacity can be reduced to free resources.
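A toy sketch of these two modules follows: an LRU cache that tracks its hit rate over a fixed window of lookups (load monitoring) and grows or shrinks its capacity in response (policy adjustment). All thresholds, window sizes, and step sizes are invented for illustration, and a shrunken capacity only takes effect on later insertions.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class AdaptiveLruCache<K, V> {

        private int capacity = 128;                   // starting capacity (made-up value)
        private long hits = 0, lookups = 0;

        private final LinkedHashMap<K, V> map =
                new LinkedHashMap<K, V>(16, 0.75f, true) {    // access order => LRU eviction
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        return size() > capacity;
                    }
                };

        public synchronized V get(K key) {
            lookups++;
            V value = map.get(key);
            if (value != null) hits++;
            if (lookups >= 1000) adjust();            // load monitoring over a 1000-lookup window
            return value;
        }

        public synchronized void put(K key, V value) {
            map.put(key, value);
        }

        private void adjust() {                       // policy adjustment
            double hitRate = hits / (double) lookups;
            if (hitRate < 0.5) {
                capacity = Math.min(capacity * 2, 8192);   // many misses: grow the cache
            } else if (hitRate > 0.9) {
                capacity = Math.max(capacity / 2, 64);     // mostly hits: shrink and free resources
            }
            hits = 0;
            lookups = 0;
        }
    }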
3. Application scenarios. Adaptive caching algorithms are widely used in many different settings.
Some typical scenarios:
1. Web servers: in a web server, an adaptive caching algorithm can adjust the cache capacity and contents dynamically according to the frequency and pattern of user requests.
This speeds up user access and reduces the load on the server.
2. Database management systems: in a DBMS, an adaptive caching algorithm can adjust the cache capacity and replacement policy according to query frequency and data-access patterns.
How to configure Write Policy, Read Policy, and Cache Policy when creating a RAID array

Read-ahead enables the SCSI read-ahead feature for the logical drive.
This parameter can be set to No-Read-Ahead, Read-ahead, or Adaptive.
The default setting is Adaptive.
* No-Read-Ahead specifies that the controller does not use read-ahead for the current logical drive.
* Read-ahead specifies that the controller uses read-ahead for the current logical drive.
* Adaptive specifies that the controller starts using read-ahead if the two most recent disk accesses fall in consecutive sectors.
If all read requests are random, the algorithm reverts to No-Read-Ahead, while still evaluating whether the read requests could be handled sequentially.

Cache Policy applies to reads on a specific logical drive.
It does not affect the read-ahead cache.
* Cached I/O specifies that all read data is buffered in cache memory.
* Direct I/O specifies that read data is not buffered in cache memory.
Direct I/O is the default setting.
It does not override the cache policy settings.
Data is transferred to the cache and to the host at the same time.
If the same block of data is read again, it is served from cache memory.

Write Policy sets the caching method to write-back or write-through.
* With write-back caching, the controller signals the host that the data transfer is complete as soon as the controller cache has received all of the data in the transaction.
* With write-through caching, the controller signals the host that the data transfer is complete once the disk subsystem has received all of the data in the transaction.
Write-through caching has a data-safety advantage over write-back caching, while write-back caching has a performance advantage over write-through.
Dell EMC Live Optics User Guide
Live OpticsMake IT decisions with confidenceSizing IT systems for performance and capacity can be complex, imprecise and costly if done incorrectly. To help guide our customers through mission critical decisions, Dell EMC’s team of solution experts developed LIVE OPTICS, an innovative and complimentary tool that reduces the guesswork involved in data center expansion and troubleshooting. Through a simply-run program, LIVE OPTICS collects data from your environment and produces a report that gives you the confidence and knowledge you need to make the right decisions for your business.This tool is available at no charge and will help you make the most impactful resource decisions, whether it’s eliminating system bottlenecks or analyzing opportunities for virtualization or data center expansion. With LIVE OPTICS, you get an accurate sense of your current IT environment based on actual workload demands that helps you identify areas for further optimization.Measure the core metrics of your environmentLIVE OPTICS works non-disruptively in Windows, Linux and VMware environments and customers typically allocate approximately 24 hours for data collection. The LIVE OPTICS Collector runs remotely and is agentless, gathering core metrics such as disk I/O, throughput, free and used capacity, and memory utilization. Then LIVE OPTICS produces an in-depth analysis of server workloads and capacity requirements, and generates two types of reports:•Aggregation report of resource needs across disparate servers with a simulation of those workloads if consolidated to shared resources•In-depth Individual Server report that enables IT administrators to search for potential bottlenecks or hotspots so they can be eliminated from a new designUnderstand the impact on your business Based on an analysis of your system andworkload metrics, a Dell EMC SolutionsArchitect will help you look for ways tooptimize your data center and plan forupcoming projects. During this conversation,you’ll receive a detailed report of your system performance data and an analysis of theimpact any changes would have on your data center. As your enterprise solutions partners, Realdolmen and Dell EMC will work with youevery step of the way to help you assess the results and determine the best approach for expanding or enhancing your IT systems.Through the use of LIVE OPTICS you’ll gain an objective, quick and meaningful assessment of your IT environment that will help you makethe right decisions for your business.。
The enableCreateCacheAnnotation parameter

enableCreateCacheAnnotation is a configuration parameter typically used to decide whether the creation of cache annotations is enabled.
In some frameworks and systems, caching is a common optimization used to improve performance and response time.
With caching, repeated or expensive computations can be avoided, which significantly improves overall system performance.
The enableCreateCacheAnnotation parameter controls whether cache annotations are generated in the code.
These annotations can instruct the runtime (for example Spring) to perform the corresponding cache handling.
When the parameter is true, the system generates cache annotations according to its configuration and logic, and these annotations can be recognized by the framework and used for cache operations.
Enabling cache annotations can improve application performance, but it can also cause problems.
For example, if the cached data is incorrect or stale, the correctness of the system can be affected.
Therefore, before enabling this parameter, make sure you understand its possible impact and perform appropriate testing and validation.
In short, the enableCreateCacheAnnotation parameter controls whether cache annotations are generated in order to improve application performance.
When using it, evaluate the potential impact carefully and configure it according to actual needs.
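For illustration only: whatever switch a particular framework uses to enable them, cache annotations in the Spring style look like the sketch below once they are active. The cache name "books" and the BookService class are made up; only @EnableCaching and @Cacheable are standard Spring annotations.

    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.cache.annotation.EnableCaching;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.stereotype.Service;

    @Configuration
    @EnableCaching                      // turns on processing of caching annotations
    class CacheConfig {}

    @Service
    class BookService {
        @Cacheable("books")             // repeated calls with the same isbn are served from the cache
        public String findBook(String isbn) {
            return expensiveLookup(isbn);
        }

        private String expensiveLookup(String isbn) {
            // stands in for a slow database or remote call
            return "book-" + isbn;
        }
    }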
Robert B. Gramacy, Manfred K. Warmuth, Scott A. Brandt, Ismail Ari (rbgramacy, manfred, scott, ari @ University of California Santa Cruz)

Abstract

We are constructing caching policies that have 13-20% lower miss rates than the best of twelve baseline policies over a large variety of request streams. This represents an improvement of 49–63% over Least Recently Used, the most commonly implemented policy. We achieve this not by designing a specific new policy but by using on-line Machine Learning algorithms to dynamically shift between the standard policies based on their observed miss rates. A thorough experimental evaluation of our techniques is given, as well as a discussion of what makes caching an interesting on-line learning problem.

1 Introduction

Caching is ubiquitous in operating systems. It is useful whenever we have a small, fast main memory and a larger, slower secondary memory. In file system caching, the secondary memory is a hard drive or a networked storage server, while in web caching the secondary memory is the Internet. The goal of caching is to keep within the smaller memory data objects (files, web pages, etc.) from the larger memory which are likely to be accessed again in the near future. Since the future request stream is not generally known, heuristics, called caching policies, are used to decide which objects should be discarded as new objects are retained. More precisely, if a requested object already resides in the cache then we call it a hit, corresponding to a low-latency data access. Otherwise, we call it a miss, corresponding to a high-latency data access as the data is fetched from the slower secondary memory into the faster cache memory. In the case of a miss, room must be made in the cache memory for the new object. To accomplish this a caching policy discards from the cache objects which it thinks will cause the fewest or least expensive future misses.

In this work we consider twelve baseline policies including seven common policies (RAND, FIFO, LIFO, LRU, MRU, LFU, and MFU), and five more recently developed and very successful policies (SIZE and GDS [CI97], GD* [JB00], GDSF and LFUDA [ACD99]). These algorithms employ a variety of directly observable criteria including recency of access, frequency of access, size of the objects, cost of fetching the objects from secondary memory, and various combinations of these. The primary difficulty in selecting the best policy lies in the fact that each of these policies may work well in different situations or at different times due to variations in workload, system architecture, request size, type of processing, CPU speed, relative speeds of the
SIZE, GDS, GDSF, and GD*(c)(d)Figure 1:Miss rates (axis)of a)the twelve fixed policies (calculated w.r.t.a window of 300requests)over 30,000requests (axis),b)the same policies on a random permutation of the data set,c)and d)the policies with the lowest miss rates in the figures above.different memories,load on the communication network,etc.Thus the difficult questionis:In a given situation,which policy should govern the cache?For example,the requeststream from disk accesses on a PC is quite different from the request stream produced byweb-proxy accesses via a browser,or that of a file server on a local network.The relativeperformance of the twelve policies vary greatly depending on the application.Furthermore,the characteristics of a single request stream can vary temporally for a fixed application.For example,a file server can behave quite differently during the middle of the night whilemaking tape archives in order to backup data,whereas during the day its purpose is toserve file requests to and from other machines and/or users.Because of their differingdecision criteria,different policies perform better given different workload characteristics.The request streams become even more difficult to characterize when there is a hierarchyor a network of caches handling a variety of file-type requests.In these cases,choosing afixed policy for each cache in advance is doomed to be sub-optimal.The usual answer to the question of which policy to employ is either to select one thatworks well on average,or to select one that provides the best performance on some im-portant subset of the workload.However,these strategies have two inherent costs.First,the selection (and perhaps tuning)of the single policy to be used in any given situationis done by hand and may be both difficult and error-prone,especially in complex systemarchitectures with unknown and/or time-varying workloads.And second,the performanceof the chosen policy with the best expected average case performance may in fact be worsethan that achievable by another policy at any particular moment.Figure 1(a)shows the hitrate of the twelve policies described above on a representative portion of one of our datasets (described below in Section 3)and Figure 1(b)shows the hit rate of the same policieson a random permutation of the request stream.As can be clearly be seen,the miss rateson the permuted data set are quite different from those of the original data set,and it is thisdifference that our algorithms aim to exploit.Figures 1(c)and (d)show which policy isbest at each instant of time for the data segment and the permuted data segment.It is clearfrom these (representative)figures that the best policy changes over time.To avoid the perils associated with trying to hand-pick a single policy,one would like tobe able to automatically and dynamically select the best policy for any given situation.Inother words,one wants a cache replacement policy which is “adaptive”.In our SystemsResearch Group,we have identified the need for such a solution in the context of complex network architectures and time-varying workloads and suggested a preliminary framework in which a solution could operate[AAG ar],but without specific algorithmic solutions to the adaptation problem.This paper presents specific algorithmic solutions that address the need identified in that work.It is difficult to give a precise definition of“adaptive”when the data stream is continually changing.We use the term“adaptive”only informally and when we want to be precise we use off-line 
Rather than develop a new caching policy (well-plowed ground, to say the least), this paper uses a master policy to dynamically determine the success rate of all the other policies and switch among them based on their relative performance on the current request stream. We show that with no additional fetches, this policy works about as well as BestFixed. We define a refetch as a fetch of a previously seen object that was kept by the currently favored policy but was discarded from the real cache. With refetching, this policy can outperform BestFixed. In particular, when all required objects are refetched instantly, this policy has a 13–20% lower miss rate than BestFixed, and almost the same performance as BestShifting. For reference, when compared with LRU, this policy has a 49–63% lower miss rate. Disregarding misses on objects never seen before (compulsory misses), the performance improvements are even greater. Because the refetches are themselves potentially costly, it is important to note that they can be done in the background. Our preliminary experiments show this to be both feasible and effective, capturing most of the advantage of instant refetching.

2 The Master Policy

We seek to develop an on-line master policy that determines which of a set of baseline policies should currently govern the real cache. Appropriate switch points need to be found and switches must be facilitated. Our key idea is "virtual caches". A virtual cache simulates the operation of a single baseline policy. Each virtual cache records a few bytes of metadata about each object in its cache: ID, size, and calculated priority. The object data is only kept in the real cache, making the cost of maintaining the virtual caches negligible. Via the virtual caches, the master policy can observe the miss rates of each policy on the actual request stream in order to determine their performance on the current workload. A simple heuristic for doing this is to continuously monitor the number of misses of each policy in a past window of, for example, 1000 requests. The master policy can give control of the real cache to the policy with the fewest misses in this window. While this works well in practice, maintaining such a window for many fixed policies is expensive. A better master policy keeps a single weight for each policy (non-negative and summing to one) which represents an estimate of its current relative performance. The master policy is always governed by the policy with the maximum weight. Weights are updated by using the combined loss and share updates of Herbster and Warmuth [HW98] and Bousquet and Warmuth [BW02] from the expert framework [CBFH97] for on-line learning. Here the experts are the caching policies. This technique is preferred to the window-based master policy because it uses much less memory, and because the parameters of the weight updates are easier to tune than the window size. This also makes the resulting master policy more robust (not shown).
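The sketch below shows one plausible shape for this bookkeeping: metadata-only virtual caches, one per baseline policy, and a master policy that hands control to the policy with the maximum weight. The RecencyPolicy stand-in, the interface names, and all parameters are illustrative assumptions rather than the paper's implementation; the weight update itself is sketched after Section 2.1.

```python
import itertools

class VirtualCache:
    """Simulates one baseline policy using only per-object metadata
    (id, size, priority); the object data lives only in the real cache."""
    def __init__(self, policy, capacity):
        self.policy, self.capacity = policy, capacity
        self.meta = {}                                   # id -> (size, priority)
        self.used = 0

    def access(self, obj_id, size):
        hit = obj_id in self.meta
        if hit:                                          # refresh: remove, reinsert below
            self.used -= self.meta.pop(obj_id)[0]
        while self.used + size > self.capacity and self.meta:
            victim = self.policy.choose_victim(self.meta)
            self.used -= self.meta.pop(victim)[0]        # simulated eviction
        self.meta[obj_id] = (size, self.policy.priority(obj_id, size))
        self.used += size
        return hit

class RecencyPolicy:
    """Toy stand-in for a baseline policy: priority = time of last access (LRU-like)."""
    clock = itertools.count()
    def priority(self, obj_id, size):
        return next(self.clock)
    def choose_victim(self, meta):
        return min(meta, key=lambda k: meta[k][1])       # evict the lowest priority

class MasterPolicy:
    """One weight per baseline policy; the heaviest policy governs the real cache."""
    def __init__(self, vcaches):
        self.vcaches = vcaches
        self.weights = [1.0 / len(vcaches)] * len(vcaches)

    def on_request(self, obj_id, size):
        hits = [vc.access(obj_id, size) for vc in self.vcaches]
        # a weight update based on `hits` goes here (see the sketch after Section 2.1)
        governing = max(range(len(self.weights)), key=self.weights.__getitem__)
        return governing, hits

master = MasterPolicy([VirtualCache(RecencyPolicy(), capacity=2)])
print(master.on_request("a", 1), master.on_request("a", 1))   # miss, then hit
```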
2.1 The Weight Updates

Updating the weight vector after each trial is a two-part process. First, the weights of all policies that missed the new request are multiplied by a factor and then renormalized. We call this the loss update. Since the weights are renormalized, they remain unchanged if all policies miss the new request. As noticed by Herbster and Warmuth [HW98], multiplicative updates drive the weights of poor experts to zero so quickly that it becomes difficult for them to recover if their experts subsequently start doing well. Therefore, the second share update prevents the weights of experts that did well in the past from becoming too small, allowing them to recover quickly, as shown in Figure 2. There are a number of share updates [HW98, BW02] with various recovery properties. We choose the FIXED SHARE TO UNIFORM PAST (FSUP) update because of its simplicity and efficiency. Note that the loss bounds proven in the expert framework for the combined loss and share update do not apply in this context. This is because we use the mixture weights only to select the best policy (discussion in full paper). However, our experimental results suggest that we are exploiting the recovery properties of the combined update that are discussed extensively by Bousquet and Warmuth [BW02].

Figure 2: Weights of baseline policies (weight history for the individual policies over the request stream).

Formally, for each trial the loss update multiplies the weight of every policy that missed the new request by a factor e^{-η} (with learning rate η > 0) and then renormalizes the weights so that they again sum to one. The higher the share parameter α of the FSUP update, the more quickly past good policies will recover. In our experiments we used fixed values of η and α.

Figure 3: BestFixed − P, where P ∈ {Instantaneous, Demand, Background Rollover} (miss-rate differences over the request stream).
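A rough sketch of the two-part update, assuming a multiplicative loss factor e^{-eta} and the "mix with the uniform average of past posteriors" form of the Fixed Share to Uniform Past update from [HW98, BW02]. The parameter values, variable names, and toy miss indicators below are illustrative assumptions, not the settings used in the paper's experiments.

```python
import numpy as np

def loss_update(w, missed, eta=0.5):
    """Multiply the weight of every policy that missed the request by e^{-eta},
    then renormalize so the weights again sum to one."""
    v = w * np.exp(-eta * missed)            # `missed` is a 0/1 vector over policies
    return v / v.sum()

def fsup_share_update(v, past_avg, t, alpha=0.05):
    """Fixed Share to Uniform Past: mix a fraction alpha of the average of past
    loss-updated weight vectors back in, so formerly good policies can recover."""
    return v if t == 0 else (1.0 - alpha) * v + alpha * past_avg

# Toy driver: three policies over four requests (1 = that policy missed the request).
toy_misses = np.array([[1, 0, 1], [1, 0, 0], [0, 1, 1], [0, 1, 1]])
w = np.full(3, 1.0 / 3.0)
past_sum = np.zeros(3)
for t, missed in enumerate(toy_misses):
    v = loss_update(w, missed)
    w = fsup_share_update(v, past_sum / max(t, 1), t)
    past_sum += v
    print("governing policy:", int(np.argmax(w)), "weights:", np.round(w, 3))
```

Because only the argmax of the weights is used to pick the governing policy, the update parameters mainly control how quickly control switches and how easily a previously good policy regains it.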
2.2 Demand vs. Instantaneous Rollover

When space is needed to cache a new request, the master policy discards objects not present in the governing policy's virtual cache. This causes the content of the real cache to "roll over" to the content of the current governing virtual cache. We call this demand rollover because objects in the governing virtual cache are refetched into the real cache on demand. While this master policy works almost as well as BestFixed, we were not satisfied and wanted to do as well as BestShifting. We noticed that the content of the real cache lagged behind the content of the governing virtual cache and had more misses. As a consequence, the miss rate of the master policy was greatly improved if, as soon as we switched over to a new governing policy, we refetched all the files in that policy's virtual cache that were not retained in the real cache. We call this instantaneous rollover. By appropriate tuning of the update parameters η and α, the number of instantaneous rollovers can be kept reasonably small and the miss rates of our master policy are almost as good as BestShifting(s, ℓ). Here the upper bound s is chosen (generously) to be twice the number of rollovers used by our master policy, and ℓ is set to the maximum segment length.

2.3 Background Rollover

Because instantaneous rollover immediately refetches everything in the governing virtual cache that is not already in the real cache, it may cause a large number of refetches even when the number of policy switches is kept small. If all refetches are counted as misses, then the miss rate of such a master policy is comparable to that of BestFixed. The same is true for BestShifting. However, from a user perspective, refetching is advantageous because of the latency advantage gained by having required objects in memory before they are needed. And from a system perspective, refetches can be "free" if they are done when the system is idle. To take advantage of these "free" refetches, we introduce the concept of background rollover. The exact criteria for when to refetch each missing object will depend heavily on the system, workload, and expected cost and benefit of each object. To characterize the performance of background rollover without addressing these architectural details, the following background refetching strategies were examined: 1 refetch for every cache miss; 1 for every hit; 1 for every request; 2 for every request; 1 for every hit and 5 for every miss; etc. Each background technique gave fewer misses than BestFixed, approaching and nearly matching the performance obtained by the master policy using instantaneous rollover. Of course, techniques which reduce the number of policy switches (by tuning η and α) also reduce the number of refetches. Figure 3 compares the performance of each master policy with that of BestFixed and shows that the three master policies almost always outperform BestFixed.
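One way to realize such a per-request refetch budget is sketched below; the data structures, names, and budget accounting are illustrative assumptions rather than the paper's system, which would additionally prefer to issue these fetches while the disk or network is idle and would make room in the real cache according to the governing policy.

```python
def background_refetch(real_cache, governing_contents, budget):
    """Refetch up to `budget` objects that the governing policy's virtual cache
    holds but the real cache does not. Called once per request, with a budget
    such as 1 (one refetch per request) or, e.g., 5 after a miss and 1 after a hit."""
    refetched = []
    for obj in governing_contents:           # ids held by the governing virtual cache
        if len(refetched) >= budget:
            break
        if obj not in real_cache:
            real_cache.add(obj)              # simulated background fetch
            refetched.append(obj)
    return refetched

# Example: after a policy switch the real cache lags behind the governing virtual cache.
real = {"a"}
print(background_refetch(real, ["a", "b", "c"], budget=1))   # -> ['b']
print("b" in real and "c" not in real)                       # -> True: one object per step
```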
3 Data and Results

Figure 4 shows how the master policy with instantaneous rollover (labeled 'roll') "tracks" the baseline policy with the lowest miss rate over the representative data segment used in previous figures. Figure 5 shows the performance of our master policies with respect to BestFixed, BestShifting, and LRU. It shows that demand rollover does slightly worse than BestFixed, while background 1 (1 refetch every request) and background 2 (1 refetch every hit and 5 every miss) do better than BestFixed and almost as well as instantaneous, which itself does almost as well as BestShifting. All of the policies do significantly better than LRU. Discounting the compulsory misses, our best policies have 1/3 fewer "real" misses than BestFixed and 1/2 the "real" misses of LRU.

Figure 4: "Tracking" the best policy (miss rates of the twelve baseline policies and the master policy under FSUP).

Figure 5: Miss rates of BestFixed, Demand, Background, Instantaneous, and BestShifting.

Figure 6 summarizes the performance of our algorithms over three large datasets. These were gathered using Carnegie Mellon University's DFSTrace system [MS96] and had durations ranging from a single day to over a year. The traces we used represent a variety of workloads including a personal workstation (Work-Week), a single user (User-Month), and a remote storage system with a large number of clients, filtered by LRU on the clients' local caches (Server-Month-LRU). For each data set, the table shows the number of requests, the % of requests skipped (object size exceeding the cache size), the number of compulsory misses of objects not previously seen, and the number of rollovers. For each policy, the table shows the miss rate and the % improvement over BestFixed (labeled 'BF') and LRU.

Figure 6: Performance summary (per trace: number of requests, cache size, % of requests skipped, compulsory misses, and number of shifts, together with the miss rate of each policy and its % improvement over BestFixed and LRU).

4 Conclusion

Operating systems have many hidden parameter-tweaking problems which are ideal applications for on-line Machine Learning algorithms. These parameters are often set to values which provide good average-case performance on a test workload. For example, we have identified candidate parameters in device management, file systems, and network protocols. Previously, the on-line algorithms for predicting as well as the best shifting expert were used to tune the time-out for spinning down the disk of a PC [HLSS00]. In this paper we use the weight updates of these algorithms for dynamically determining the best caching policy. This application is more elaborate because we needed to actively gather performance information about the caching policies via virtual caches. In future work we will do a more thorough study of the feasibility of deferred rollover by building actual systems using the algorithms we investigated in the simulations described in this paper. We will also explore the relationship of our methods to reinforcement learning and multi-armed bandit problems.

Acknowledgements: Thanks to Jonathan Panttaja, Ahmed Amer, and Ethan Miller for their contributions to this research.

References

[AAGar] Ismail Ari, Ahmed Amer, Robert Gramacy, Ethan Miller, Scott Brandt, and Darrell D. E. Long. ACME: Adaptive caching using multiple experts. In Proceedings of the 2002 Workshop on Distributed Data and Structures (WDAS 2002). Carleton Scientific, to appear.

[ACD99] Martin Arlitt, Ludmilla Cherkasova, John Dilley, Rich Friedrich, and Tai Jin. Evaluating content management techniques for Web proxy caches. In Proceedings of the Workshop on Internet Server Performance (WISP99), May 1999.

[BW02] O. Bousquet and M. K. Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 2002. To appear; preliminary version in COLT 2001.

[CBFH97] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.

[CI97] Pei Cao and Sandy Irani. Cost-aware WWW proxy caching algorithms. In Proceedings of the 1997 Usenix Symposium on Internet Technologies and Systems (USITS-97), 1997.

[HLSS00] David P. Helmbold, Darrell D. E. Long, Tracey L. Sconyers, and Bruce Sherrod. Adaptive disk spin-down for mobile computers. ACM/Baltzer Mobile Networks and Applications (MONET), pages 285–297, 2000.

[HW98] M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32(2):151–178, August 1998. Special issue on concept drift.

[JB00] Shudong Jin and Azer Bestavros. GreedyDual* web caching algorithm: Exploiting the two sources of temporal locality in web request streams. Technical Report 2000-011, 2000.

[KW97] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1–64, January 1997.

[LW94] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.

[MS96] Lily Mummert and Mahadev Satyanarayanan. Long term distributed file reference tracing: Implementation and experience. Software: Practice and Experience (SPE), 26(6):705–736, June 1996.