Key Points for an AD Domain Interview
Active Directory (AD) is Microsoft's directory service that provides centralized authentication, authorization, and management of resources within a Windows domain. AD is a crucial component of a Windows network infrastructure, so proficiency in AD is essential for any IT professional working with Windows-based systems. In this article, we will explore the key points to cover in an AD domain interview, organized under the bracketed headings below.

[Overview of AD Domain Structure]
Before diving into specific interview questions, it is essential to have a solid understanding of AD domain structure. An AD domain is a logical grouping of computers, users, and other network resources that share a common directory database. AD follows a hierarchical structure, with the domain as the primary administrative unit. Within a domain, you can have multiple domain controllers (DCs) that share the responsibility of authenticating users and managing resources.

[Key Components of AD]
1. Domain Controllers (DCs): Domain controllers are servers running Windows Server operating systems and hosting AD services. They store the AD database, authenticate users, and handle resource management within the domain.
2. Domains: Domains are the basic administrative units within AD. They provide a boundary for security policy enforcement and replication boundaries for AD data.
3. Organizational Units (OUs): OUs are containers within a domain used to organize and manage objects such as users, groups, and computers. OUs enable administrators to apply group policies and delegate administrative control.
4. Forests: A forest is a collection of one or more domains that share a common schema, configuration, and global catalog. Forests enable organizations to implement separate AD namespaces while still maintaining a level of interoperability.

[AD Authentication]
A significant aspect of AD is user authentication.
Here are some commonly asked questions related to this topic:

1. How does AD authenticate users?
AD uses the Kerberos authentication protocol by default. When a user logs on to a domain, their credentials are validated by a domain controller using Kerberos.

2. What is the purpose of the Global Catalog (GC)?
The Global Catalog is a distributed data repository that contains a partial replica (a subset of attributes) of every object from every domain in a forest. It allows users to search for objects from any domain without contacting multiple domain controllers.

[Group Policy Management]
Group Policy is a powerful feature of AD that allows administrators to manage settings and configurations for users and computers. Here are some key points related to Group Policy:

1. What is Group Policy?
Group Policy is a set of rules and configurations that can be applied to users and computers within a domain or an OU. It enables administrators to define security settings, deploy software, and manage user environment settings.

2. How are Group Policies stored and applied?
Group Policies are stored in the SYSVOL directory on domain controllers and replicated to all DCs in the domain. Policies are applied to users and computers when they log on to the domain. They are hierarchical in nature and are processed from the domain level down to the OU level.

[Replication and High Availability]
Maintaining a highly available and efficient AD environment requires proper replication and fault tolerance. Consider the following points:

1. How does AD replication work?
AD replication is the process of synchronizing changes made to the AD database between domain controllers. Replication follows a multi-master model, in which all domain controllers are equal and can accept changes. Replication traffic is compressed and encrypted.

2. What is the tombstone lifetime?
The tombstone lifetime is the period for which deleted objects are retained in AD.
After this period, the deleted objects are permanently removed from the AD database.

[Tools and Utilities]
Knowledge of the various tools and utilities available for AD management is essential. Some commonly used tools include:

1. Active Directory Users and Computers: provides a graphical user interface for managing AD objects such as users, groups, and OUs.
2. Active Directory Sites and Services: allows administrators to manage AD replication, create and manage site links, and define site boundaries.

In summary, mastering the key aspects of AD domain structure, authentication, Group Policy management, replication, and the associated tools and utilities is crucial for success in an AD domain interview. By demonstrating a solid understanding of these topics, you will showcase your proficiency in managing and troubleshooting AD environments.
DBdoctor (2024 Edition)
Check database performance
1. Monitor query response time: identify slow or ineffective queries that may be affecting database performance.
2. Analyze resource utilization: evaluate CPU, memory, disk, and network usage to ensure optimal resource allocation.
3. Identify bottlenecks: detect potential bottlenecks such as table locks, missing indexes, or inefficient query plans.

Evaluate database security
1. Audit user access: verify user permissions and access controls to ensure that only authorized users can access the database.
2. Check for vulnerabilities: scan for known security vulnerabilities and misconfigurations that could expose the database to attack.
3. Monitor suspicious activity: identify unusual activity that may indicate an attempted or successful attack.

Check database integrity
1. Perform regular backups: ensure regular backups are taken to protect against data loss and facilitate recovery in case of corruption.
2. Validate data consistency: check for data integrity issues such as duplicate records, missing data, or incorrect relationships.
3. Monitor data changes: track changes to data over time to identify unauthorized modifications or potential data corruption.

Optimize database structure
1. Normalize the database: remove redundancy and ensure data consistency by normalizing the database structure.
2. Use appropriate data types: choose the most suitable data type for each column to reduce storage space and improve query performance.
3. Index strategically: create indexes on columns that are frequently used in queries to speed up data retrieval. However, avoid over-indexing, as it can slow down data insertion and updates.

Improve database query efficiency
1. Optimize SQL queries: write effective SQL queries that avoid unnecessary joins, subqueries, and complex calculations. Use EXPLAIN or similar tools to analyze query execution plans and identify bottlenecks.
2. Use prepared statements: prepared statements can improve query performance by reducing the need to parse and compile SQL statements for each request.
3. Implement caching: caching frequently accessed data can significantly improve performance by reducing the number of database queries required.

Adjust database configuration parameters
1. Tune memory allocation: adjust memory-related parameters such as buffer pool size and sort area size to match the workload and available system resources.
2. Configure I/O operations: optimize I/O by adjusting parameters such as disk I/O bandwidth and I/O wait time to improve overall database performance.
3. Enable parallel processing: use the parallel processing capabilities of the database system to speed up complex queries and improve overall throughput.

Diagnose database failures
1. Identify the type of failure: determine whether the failure is due to hardware, software, network, or human error.
2. Collect diagnostic information: gather relevant logs, error messages, and system metrics to aid in the diagnosis.
3. Analyze the data: use tools and techniques such as SQL queries, log analysis, and performance monitoring to understand the root cause of the failure.

Fix database errors
1. Restore from backup: if possible, restore the database to a previous state using backups.
2. Apply patches and updates: ensure that the database software is up to date with the latest patches and updates to address known issues.
3. Manually fix errors: if necessary, manually edit data or correct configuration settings to resolve the issue.

Prevent database failures from happening again
1. Implement redundancy: deploy multiple instances of the database across different servers or locations to ensure high availability.
2. Monitor and alert: set up monitoring tools to track database performance and send alerts when potential issues are detected.
3. Perform regular maintenance: carry out tasks such as index rebuilds, data archiving, and updating statistics to keep the database running smoothly.
4. Plan for disaster recovery: develop a comprehensive disaster recovery plan that includes procedures for restoring data after a catastrophic failure.

Regular maintenance of the database
1. Daily checks: check the status of hardware, software, and network connections to ensure smooth operation.
2. Weekly maintenance: optimize database performance by updating statistics, reorganizing data files, and clearing out obsolete data.
3. Monthly reviews: assess the overall health and performance of the database, identify potential issues, and make recommendations for improvement.

Real-time monitoring of database status
1. 24/7 monitoring: continuously monitor the database to detect issues or abnormalities in real time, ensuring prompt response and resolution.
2. Alert notifications: notify relevant personnel of critical issues or potential problems with the database, enabling timely intervention.
3. Detailed reporting: report on the status and performance of the database, including historical trends and comparisons with previous periods.

Timely handling of database issues
1. Rapid response: respond immediately to any issues or alerts related to the database, minimizing downtime and impact on business operations.
2. Expert troubleshooting: use expert knowledge and tools to quickly diagnose and resolve complex database issues.
3. Preventive measures: reduce the likelihood of future issues through regular updates and patches, security enhancements, and performance tuning.

Strengthen database security protection
1. Implement strong authentication and access controls: require users to authenticate with unique credentials and restrict access to sensitive data based on user roles and privileges.
2. Encrypt sensitive data: apply encryption to protect sensitive data such as personally identifiable information (PII), credit card numbers, and passwords.
3. Regularly update and patch the database: keep the database software up to date with the latest security patches and updates to limit vulnerabilities.

Prevent SQL injection attacks
1. Sanitize and validate user inputs: always sanitize and validate user inputs to prevent malicious code from being injected into SQL queries.
2. Use parameterized queries or prepared statements: these techniques ensure that user inputs are treated as data rather than executable code, reducing the risk of SQL injection.
3. Implement a Web Application Firewall (WAF): a WAF can help detect and block SQL injection attempts by analyzing incoming traffic and filtering out malicious requests.

Regularly back up the database to ensure data security
1. Establish a regular backup schedule: back up the database at regular intervals so that data can be restored in an emergency.
2. Test backup restoration procedures: periodically test restoration to confirm that backups are effective in case of data loss or corruption.
3. Store backups securely: protect backups with encryption and store them in a secure location, such as an off-site data center or cloud storage facility.

Assess the necessity of migration or upgrade
1. Analyze current database performance and identify potential bottlenecks or issues.
2. Evaluate the benefits of migrating or upgrading to a new database system, such as improved performance, scalability, or security.
3. Assess the costs and risks associated with migration or upgrade, including downtime, data loss, and compatibility issues.

Develop and execute a detailed migration or upgrade plan
1. Identify the specific steps required, including data transfer, schema changes, and any necessary modifications to applications or scripts.
2. Develop a testing plan to ensure that the migration or upgrade does not introduce new issues or bugs.
3. Document the plan thoroughly, including dependencies, prerequisites, and expected outcomes.
4. Back up the current database so that data can be restored if necessary.
5. Execute the plan, carefully following the documented steps and the testing plan.
6. Monitor the process closely to ensure that it proceeds as expected, and address any urgent issues.
7. Verify the integrity and consistency of the migrated or upgraded data by comparing it to the original data set and running any necessary tests or checks.
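The caching advice above can be sketched as a minimal in-memory result cache. This is an illustrative sketch, not a DBdoctor feature: the loader function here merely stands in for a hypothetical database query, and a production cache would also need invalidation, a size limit, and expiry.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class QueryCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> loader; // stands in for a real DB query
    private int misses = 0;

    QueryCache(Function<String, String> loader) {
        this.loader = loader;
    }

    // Serve from the cache when possible; fall back to the loader on a miss.
    String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            misses++;
            return loader.apply(k);
        });
    }

    int misses() {
        return misses;
    }

    public static void main(String[] args) {
        QueryCache c = new QueryCache(sql -> "result-of:" + sql);
        c.get("SELECT 1");
        c.get("SELECT 1"); // second call is served from the cache, no extra load
        System.out.println("misses: " + c.misses());
    }
}
```

Repeated requests for the same key hit the map instead of the loader, which is exactly how result caching reduces the number of database queries required.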
Proximal Policy Optimization Algorithms
Proximal Policy Optimization (PPO) is a reinforcement learning algorithm that optimizes a policy function mapping an agent's current state to an action to be taken in that state. PPO is a model-free algorithm, meaning that it does not require a pre-defined model of the environment. Instead, PPO interacts with the environment, using this experience to update its policy function and improve its performance on the task at hand.

PPO differs from other reinforcement learning algorithms in several key ways. Most importantly, PPO was designed to address some of the shortcomings of earlier algorithms, such as Trust Region Policy Optimization (TRPO), by being less computationally intensive and easier to use in practice. In addition, PPO is often more robust when applied to complex, real-world environments, where other algorithms may struggle to converge.

One of the main features of PPO is its clipped surrogate objective function, which updates the policy in a way that prevents it from changing too much from one iteration to the next, allowing for more stable learning. The surrogate objective is based on the ratio of the new policy's probability to the old policy's probability, multiplied by the advantage function (which measures how much better an action is than the current policy's average behavior). If this ratio is too large or too small, it is clipped to a maximum or minimum value, ensuring that policy updates are not too extreme.

Another important aspect of PPO is its use of multiple mini-batches of experience data to update the policy function. This makes the updates more robust and stable, and reduces the amount of experience data that must be collected before learning can begin.
Additionally, PPO uses a value function network to estimate the value of each state visited by the agent, which is used to compute the advantage estimates that feed into the policy update.

There are several variants of PPO, such as PPO-Clip and PPO-Penalty. PPO-Clip is the most commonly used variant and is characterized by its clipped surrogate objective. PPO-Penalty is similar, but uses a penalty term (on the divergence between the new and old policies) instead of clipping to keep policy updates within a certain range.

PPO has been successfully applied to a wide range of tasks, including robotics, video games, and natural language processing. One of its main benefits is its ability to learn robust policies with fewer iterations than some other reinforcement learning algorithms. PPO still has limitations, such as sensitivity to hyperparameter choices and lower sample efficiency than off-policy methods. Nonetheless, PPO remains a popular choice for researchers and practitioners alike, due to its simplicity and effectiveness in many applications.
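The clipping described above can be made concrete for a single sample. The sketch below assumes a scalar probability ratio r and advantage A; it is a toy illustration of the objective only, not a full PPO implementation, which would operate on batches with automatic differentiation.

```java
public class PpoClipSketch {
    // L_CLIP = min(r * A, clip(r, 1 - eps, 1 + eps) * A)
    static double clippedSurrogate(double ratio, double advantage, double eps) {
        // Clamp the ratio into [1 - eps, 1 + eps].
        double clipped = Math.max(1.0 - eps, Math.min(1.0 + eps, ratio));
        // Take the pessimistic (smaller) of the unclipped and clipped terms,
        // so a large ratio cannot yield an outsized policy update.
        return Math.min(ratio * advantage, clipped * advantage);
    }

    public static void main(String[] args) {
        double eps = 0.25;
        // Ratio 1.5 with positive advantage 2.0: gain is capped at 1.25 * 2.0 = 2.5.
        System.out.println(clippedSurrogate(1.5, 2.0, eps));
        // Ratio 0.5 with negative advantage -2.0: the clipped term -1.5 is taken,
        // the pessimistic bound.
        System.out.println(clippedSurrogate(0.5, -2.0, eps));
    }
}
```

Because the minimum is taken, the objective only ignores the clipping when doing so makes the estimate worse, which is what keeps each update conservative.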
Adobe ColdFusion 2016 Enterprise Edition Application Server User Guide
Adobe ColdFusion (2016 release) Enterprise Edition

The 2016 release of Adobe ColdFusion Enterprise Edition is a tried and tested application server that simplifies complex coding tasks in enterprise environments. Rapidly develop web and mobile applications that are robust, scalable, secure, and adept at handling high loads with high reliability. Create new channels for your offerings by using the all-new API Manager to implement your API strategy faster. Get unprecedented control over PDF generation and manipulation.

Embrace futuristic technologies: Move your APIs swiftly from concept to production with the all-new API Manager. Manage APIs across their lifecycle, get insights into usage, and track all aspects of performance. Secure your APIs, restrict access beyond a specified threshold, and maximize returns on your APIs through the developer portal.

Deploy enterprise-ready applications: Get unprecedented control over PDF generation and manipulation, including new capabilities such as redaction and sanitization. Use the new security code analyzer to automatically detect vulnerabilities. Leverage overall performance enhancements to make existing applications work faster.

Build applications quickly: Work faster with many nifty new features that include a command-line interface, CFML enhancements, SOAP-to-REST translation, and web services support. Leverage your existing CFML skills to develop mobile apps and use built-in integration with Adobe PhoneGap Build.

Get a robust platform for scalable, high-performing web and mobile applications.

If you recently purchased ColdFusion 11, you might be eligible for a complimentary upgrade to ColdFusion (2016 release).
To find out more, contact Customer Service at 800-833-6687 or /support/contact.

VERSION COMPARISON CHART
Legend: • Available; Restricted; Enhanced = enhanced features; blank = not available; New Features.

Adobe, the Adobe logo, ColdFusion and ColdFusion Builder are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. All other trademarks are the property of their respective owners.
© 2016 Adobe Systems Incorporated. All rights reserved.
Adobe Systems Incorporated, 345 Park Avenue, San Jose, CA 95110-2704 USA
Thread Pool Saturation and Rejection Policies
A thread pool saturation policy specifies how new tasks are handled when all of the pool's worker threads are busy and its queue is full.
The rejection policy (in Java, a RejectedExecutionHandler) determines what happens to a task that the pool cannot accept.
Java's ThreadPoolExecutor provides four built-in saturation policies:
1. CallerRunsPolicy: the rejected task is executed by the thread that submitted it (the caller).
This avoids losing the task, but it slows down the submitting thread.
2. AbortPolicy (the default): a RejectedExecutionException is thrown when a task is rejected.
This does not slow down the submitting thread, but the task is lost unless the exception is handled.
3. DiscardPolicy: the rejected task is silently dropped, and no exception is thrown.
This avoids throwing an exception, but the task is lost.
4. DiscardOldestPolicy: the oldest task in the queue is discarded and the new task is resubmitted.
This keeps the new task from being lost, but the oldest queued task is silently dropped.
In practice, choosing the right saturation policy is important and depends on the situation.
If tasks must not be lost, choose CallerRunsPolicy; if the submitting thread must not be slowed down, choose AbortPolicy; if no exception may be thrown, choose DiscardPolicy; if dropping the oldest queued task is acceptable, choose DiscardOldestPolicy.
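The default AbortPolicy described above can be demonstrated with a deliberately tiny pool: one worker thread and a queue of capacity one, so the third submitted task must be rejected. This is a minimal sketch for illustration; the pool sizes are chosen only to force saturation deterministically.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    // Returns true if the third task is rejected by the saturated pool.
    static boolean thirdTaskRejected() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),          // queue holds at most one task
                new ThreadPoolExecutor.AbortPolicy()); // the default rejection policy
        CountDownLatch release = new CountDownLatch(1);
        boolean rejected = false;
        // Task 1 occupies the single worker thread until released.
        pool.execute(() -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        });
        // Task 2 fills the queue.
        pool.execute(() -> { });
        // Task 3: worker busy and queue full, so AbortPolicy throws.
        try {
            pool.execute(() -> { });
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        release.countDown();
        pool.shutdown();
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println("third task rejected: " + thirdTaskRejected());
    }
}
```

Swapping in CallerRunsPolicy, DiscardPolicy, or DiscardOldestPolicy in the constructor changes what happens to that third task, matching the four behaviors listed above.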
LB: The English Abbreviation for Load Balancing
Load Balance (LB): The Cornerstone of Efficient Resource Management

Introduction
In the realm of computing and networking, one term that frequently crops up is "load balance," or LB for short. This concept plays a pivotal role in optimizing resource utilization, enhancing system performance, and ensuring fault tolerance. In this article, we will delve into the intricacies of load balancing, its significance, and how it contributes to efficient resource management.

What is Load Balancing?
Load balancing refers to the methodical distribution of network traffic across multiple servers to optimize resource utilization, enhance responsiveness, and avoid overloading any single server. It is an essential component of fault-tolerant systems, as it ensures that no single point of failure exists.

The Importance of Load Balancing
The importance of load balancing can be summed up in three main points:
1. Improved performance: By distributing the workload evenly across multiple servers, each server operates within its optimal capacity, leading to better overall system performance.
2. Enhanced availability: If one server fails or needs maintenance, the load balancer redirects traffic to other available servers, ensuring continuous service availability.
3. Scalability: As demand for services increases, new servers can be added to the system without disrupting existing services, allowing easy expansion of the system.

How does Load Balancing Work?
Load balancing typically involves a software or hardware device called a load balancer. The load balancer acts as a traffic cop, directing client requests to the various backend servers based on predefined algorithms and policies. These algorithms may consider factors such as server availability, server load, geographic location, or specific application requirements.

Types of Load Balancing Algorithms
There are several types of load balancing algorithms, including:
1.
Round Robin: Each incoming request is assigned to the next available server in a rotation.
2. Least Connections: New requests are sent to the server with the fewest active connections.
3. IP Hash: A hash function over the client's IP address determines which server handles the request.
4. Weighted Algorithms: Servers are assigned weights based on their processing power or capacity, and requests are distributed accordingly.

Conclusion
Load balancing (LB) is a crucial aspect of modern computing and networking infrastructure. Its ability to distribute workloads efficiently, ensure high availability, and facilitate scalability makes it an indispensable tool for managing resources effectively. Understanding the concepts and mechanisms behind load balancing can help organizations make informed decisions about their IT infrastructure and improve the overall user experience.
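The first two algorithms above can be sketched in a few lines. These are illustrative toys, not a production load balancer: real balancers also handle health checks, weights, and concurrency, and the server names here are placeholders.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class Balancers {
    // Round robin: hand out servers in strict rotation.
    static final class RoundRobin {
        private final List<String> servers;
        private int next = 0;
        RoundRobin(List<String> servers) { this.servers = servers; }
        String pick() {
            String s = servers.get(next);
            next = (next + 1) % servers.size();
            return s;
        }
    }

    // Least connections: pick the server with the fewest active connections.
    static final class LeastConnections {
        private final Map<String, Integer> active = new LinkedHashMap<>();
        LeastConnections(List<String> servers) {
            servers.forEach(s -> active.put(s, 0));
        }
        String pick() {
            String best = Collections.min(active.entrySet(),
                    Map.Entry.comparingByValue()).getKey();
            active.merge(best, 1, Integer::sum);   // a connection is now open
            return best;
        }
        void release(String server) {              // a connection finished
            active.merge(server, -1, Integer::sum);
        }
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(List.of("a", "b", "c"));
        System.out.println(rr.pick() + rr.pick() + rr.pick() + rr.pick());
        LeastConnections lc = new LeastConnections(List.of("a", "b"));
        System.out.println(lc.pick() + lc.pick());
    }
}
```

Round robin ignores how long each request takes, while least connections adapts to uneven request durations, which is why the latter is often preferred for long-lived connections.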
Common BW Terminology

A

Aggregate rollup
Aggregate rollup is the procedure that updates aggregates with newly loaded data. See also: Aggregate.

B

BEx
Short for Business Explorer. It includes the following tools for presenting reports to end users: Analyzer, Web Application Designer, Report Designer, and Web Analyzer.

C

Cache / OLAP Cache
A technology to improve performance. The cache buffers query result data in order to provide it for subsequent accesses.

Cache Mode

Client
A client is a subset of data in an SAP system. Data shared by all clients is called client-independent data, as opposed to client-dependent data. When logging on to an SAP system, a user must specify which client to use. Once in the system, the user has access to both client-dependent and client-independent data.
[CUUG Internal Material] Latest OCP Exam Question Bank, 1Z0-062 (4)
...which change caused this performance difference. Which method or feature should you use?
A. Compare Period ADDM report
B. AWR Compare Period report
C. Active Session History (ASH) report
D. Taking a new snapshot and comparing it with a preserved snapshot
Correct Answer: B (Explanation: to compare database performance between two time periods, the best method is to compare AWR reports.)
QUESTION 48
You want to capture column group usage and gather extended statistics for better cardinality estimates for the CUSTOMERS table in the SH schema. Examine the following steps:
1. Issue the SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual statement.
2. Execute the DBMS_STATS.SEED_COL_USAGE(null, 'SH', 500) procedure.
A Classic iRules Textbook!
Nathan McMahon 5/5/2008
Introduction to iRules
What Can iRules Do ............ 3
When Should iRules Be Used ............ 3
TCL and iRules ............ 4
iControl ............ 4
Additional Resources ............ 5
Common Tasks
Parsing Strings ............ 32
IP Addresses ............ 34
Data Groups ............ 39
Statistics ............ 42
Tracking Users ............ 44
Persistence ............ 48
Search and Replace ............ 51
Exiting Events and iRules ............ 54
Troubleshooting with iRules ............ 57
Troubleshooting iRules ............ 60
Optimizing iRules ............ 62
Components of an iRule
Events ............ 7
Condition ............ 8
Action ............ 9
Basic Syntax ............ 10
Basic Commands ............ 13
HTTP Commands ............ 15
Operators ............ 20
Commands Continued ............ 23
basic checkbox options within the LTM. Some examples include cookie encryption, header insertion via the HTTP profile, and URI load balancing via the powerful HTTP Class profile. If your requirements can be accomplished via a profile or a similar built-in mechanism, use it first. If your requirement cannot be met with the canned capabilities of the LTM, then it is worth asking, "Without using an iRule, how could similar functionality be achieved?" A common answer is, "We can have our software developers add this to the application in six months." True. You could wait six months and expend valuable developer resources, or instead spend thirty minutes and have the issue resolved quickly by an iRule. Other objections, such as CPU overhead, are valid points and need to be evaluated case by case, as discussed later in the Optimizing iRules section. As for being locked into the F5 BIG-IP LTM as your application networking solution, it is worth asking the simple question: does the LTM with the iRule solve your issues? Every scenario is different, but in the end it is important to understand that by employing iRules you are not venturing into uncharted waters alone. Nearly 60% of all F5 BIG-IP LTM customers have at least one iRule in use. There is a fantastic community of tens of thousands of developers and administrators who have contributed their experience via DevCentral to make iRules and the LTM the standard they are today in the application delivery networking space.
2025 Software Qualification Exam: Database Systems Engineer (Intermediate), Combined Paper (Fundamentals and Applied Technology) with Answer Guide

2025 Database Systems Engineer Combined (Intermediate) Mock Paper (answers follow the questions)

I. Fundamentals (objective multiple-choice questions, 75 questions, 1 point each, 75 points total)

1. In the database design process, during which stage are the database's data model and conceptual model determined?
A. Requirements analysis stage
B. Conceptual structure design stage
C. Logical structure design stage
D. Physical structure design stage

2. In a relational database, which of the following data types stores fixed-length character strings?
A. VARCHAR
B. CHAR
C. TEXT
D. BLOB

3. In a database system, transactions must follow the ACID properties to ensure data consistency.
Which of the following is NOT one of the ACID properties?
A. Atomicity
B. Consistency
C. Isolation
D. Availability

4. Which of the following statements about relational database normalization theory is incorrect?
A. First normal form requires every attribute to be an indivisible atomic item.
B. Second normal form presupposes first normal form and additionally requires every non-key attribute to be fully dependent on the entire candidate key.
C. Third normal form eliminates transitive dependencies.
D. BCNF (Boyce-Codd normal form) is stricter than third normal form: it does not allow any attribute to be partially or transitively dependent on a candidate key.
5. In a database system, which of the following is NOT one of the three elements of the relational model?
A. Attribute
B. Relation
C. Normal form
D. Constraint

6. In SQL, the command used to delete a table is:
A. DROP TABLE
B. DELETE FROM
C. TRUNCATE TABLE
D. DELETE

7. In a database system, what is a data model? Briefly describe its role.
8. What is database normalization theory? Briefly describe its purpose.
(1) First normal form (1NF): every attribute must be an indivisible, minimal unit of data.
(2) Second normal form (2NF): building on 1NF, every non-key attribute must be fully functionally dependent on the primary key.
(3) Third normal form (3NF): building on 2NF, no non-key attribute may be transitively dependent on the primary key.
(4) Boyce-Codd normal form (BCNF): building on 3NF, the determinant of every non-trivial functional dependency must be a candidate key.
By applying normalization theory, database designs can be improved, raising the quality and performance of the database.
Impala SQL Efficiency Tips
1. Store data in a columnar format: Impala performs best on columnar file formats such as Parquet, which let it scan only the columns a query needs rather than reading every row. If your data is not already columnar, you can rewrite it, for example with a `CREATE TABLE ... STORED AS PARQUET AS SELECT ...` statement.

2. Partition your data: Partitioning divides a table into smaller, more manageable chunks, so Impala only scans the partitions relevant to your query (partition pruning). Declare partitions with the `PARTITIONED BY` clause of the `CREATE TABLE` statement.

3. Keep statistics up to date: Impala does not support traditional `CREATE INDEX` statements; instead, its planner relies on table and column statistics for good join orders and cardinality estimates. Run `COMPUTE STATS` on a table after significant data changes.

4. Pre-compute expensive results: Impala does not provide materialized views, but you can achieve a similar effect by writing the results of a frequently used query into a summary table (for example with `CREATE TABLE ... AS SELECT`) and refreshing it periodically, so repeated queries read the pre-computed table instead of re-executing the full query.

5. Use query hints: Hints tell Impala how to execute a specific query, for example which join distribution strategy to use. In Impala, hints are written inline in the query, such as `SELECT ... FROM t1 JOIN [SHUFFLE] t2 ON ...`, or in comment form such as `/* +SHUFFLE */`. Hints can improve performance for specific queries where the planner's default choice is poor.
Looking Up Execution Plans with plan_hash_value
The plan_hash_value is a distinct identifier for a specific execution plan within the Oracle database. It is generated from the execution plan associated with a given SQL statement and serves as the fundamental tool for comparing and analyzing different execution plans for the same SQL statement. The plan_hash_value is a hashed value that captures the shape of the execution plan, and it can be read from the execution-plan information in the V$SQL_PLAN or DBA_HIST_SQL_PLAN views. This value is particularly useful when identifying and diagnosing performance issues caused by plan changes for a SQL statement.
An English Essay: I Want to Be the Geography Class Monitor
IntroductionAs an enthusiastic learner with a profound passion for geography, I am penning down my compelling reasons and well-structured plan to assume the role of the Geography Class Monitor. This essay, spanning over 1441 words, delves into the various aspects that make me a suitable candidate for this position, elucidating my understanding of the responsibilities, my dedication to fostering a conducive learning environment, my strategies for enhancing class engagement, and my commitment to promoting academic excellence and personal growth among my peers. It is through this comprehensive and meticulous approach that I aim to demonstrate my unwavering resolve and capability to lead our geography class towards new heights of academic success and collective accomplishment.Understanding the Role and ResponsibilitiesA class monitor in any subject, particularly geography, is not merely a title but a crucial bridge between the students and the teacher. The role encompasses a myriad of responsibilities, including facilitating classroom activities, ensuring effective communication, maintaining discipline, and promoting a positive learning atmosphere. Recognizing these duties, I understand that being the Geography Class Monitor necessitates strong organizational skills, effective communication, and an unwavering commitment to fostering a love for the subject.Firstly, I would ensure the smooth execution of classroom routines by assisting the teacher in distributing study materials, managing class schedules, and coordinating group assignments. I believe that a well-organized classroom environment paves the way for focused learning, allowing students to delve deeper into the intricacies of geography without distractions.Secondly, I would strive to establish open channels of communication between classmates and the teacher, addressing concerns, queries, or suggestions promptly and constructively. 
By doing so, I aim to foster a sense of inclusivity and belonging, where every student feels comfortable expressing their thoughts and ideas.Lastly, as a firm believer in the power of discipline and respect, I would work diligently to uphold high behavioral standards within the classroom. I understand that discipline is not just about enforcing rules, but also about instilling values like punctuality, attentiveness, and mutual respect, which are essential for a productive learning experience.Fostering a Conducive Learning EnvironmentCreating a vibrant and engaging learning environment is central to my vision as the Geography Class Monitor. To achieve this, I plan to implement several innovative strategies that cater to diverse learning styles and enhance overall class participation.One such strategy would be to organize interactive sessions, such as debates, quizzes, and group presentations on contemporary geographical issues. These activities would not only deepen our understanding of the subject matter but also cultivate critical thinking, public speaking skills, and teamwork.Moreover, incorporating technology, such as digital mapping tools and virtual field trips, would bring the subject alive, making it more relatable and enjoyable.Additionally, I would initiate a 'Geography Club' where students can engage in extracurricular activities like map-making competitions, geo-caching adventures, or guest lectures from industry experts. Such initiatives would not only reinforce classroom learning but also stimulate interest in the subject beyond the confines of the syllabus.Promoting Academic Excellence and Personal GrowthAs the Geography Class Monitor, I am committed to nurturing academic excellence and personal growth among my peers. 
I believe that every student has the potential to excel in geography, given the right guidance and support. To facilitate this, I would propose regular review sessions and study groups, where we can collectively discuss challenging concepts, share study resources, and provide peer-to-peer tutoring. I am convinced that collaborative learning fosters a deeper understanding of the subject and encourages students to help one another, thereby strengthening our bond as a class.

Furthermore, I would encourage individualized learning by identifying each student's strengths, weaknesses, and learning preferences. By doing so, I could provide tailored guidance, recommend additional resources, or even facilitate one-on-one discussions with the teacher, ensuring that no student is left behind in their geographical journey.

Personal Qualities and Passion for Geography

Beyond the strategic plans and responsibilities, my personal qualities and deep passion for geography make me an ideal candidate for the role of the Geography Class Monitor. I am known for my enthusiasm, dedication, and a keen eye for detail, traits that have consistently earned me recognition for my academic achievements and leadership roles in various school projects.

My fascination with geography stems from its ability to explain the intricate relationships between human societies and the natural world. I am captivated by the diversity of landscapes, cultures, and ecosystems that our planet harbors, and I am eager to share this awe and curiosity with my classmates.
Moreover, my avid reading of geographical literature, participation in online forums, and voluntary involvement in environmental initiatives attest to my unwavering commitment to the subject.

Conclusion

In conclusion, my aspiration to serve as the Geography Class Monitor is rooted in a profound understanding of the role's responsibilities, a well-conceived plan to foster a conducive learning environment, and an unwavering commitment to promoting academic excellence and personal growth. With my strong organizational skills, effective communication, passion for geography, and innovative strategies, I am confident in my ability to lead our class towards a collective journey of discovery, understanding, and appreciation of the world we inhabit. I eagerly look forward to the opportunity to translate this vision into reality and contribute to shaping a vibrant and dynamic geography class that thrives on knowledge, collaboration, and a shared love for the subject.
S3 Policy Syntax
Amazon S3 (Simple Storage Service) policy syntax is a language for defining access permissions: it lets you control who can access your buckets and how. Policies are written in JSON and consist of a set of elements and conditions that specify whether access to a bucket and the objects in it is allowed or denied.

In S3 policy syntax, you define permissions with a series of elements, including "Effect", "Action", "Resource", and "Principal". "Effect" specifies whether access is allowed or denied, "Action" specifies the operations to be performed, "Resource" specifies the resources the policy applies to, and "Principal" specifies the entity that is granted or denied access.

The syntax also supports condition elements, which let you restrict or allow access based on specific conditions. For example, you can limit access based on the requester's IP address, headers contained in the request, or the time of the request.

In addition, you can use wildcards to match multiple operations or resources. For example, "s3:*" matches all S3 actions, and "*" matches all resources.

In summary, S3 policy syntax provides a flexible way to define access permissions, allowing you to control access precisely according to your specific needs and scenarios. When writing an S3 policy, think through the access-control strategy you actually need, and make sure the syntax is both correct and secure.
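As a minimal sketch combining the elements above (the bucket name "example-bucket" and the IP range are placeholders, not real resources), a bucket policy that allows read access only from a specific network might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadFromOfficeIp",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```

Note that the bucket ARN and the object ARN (with the "/*" wildcard) are listed separately, because "s3:ListBucket" applies to the bucket itself while "s3:GetObject" applies to the objects inside it.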
Custom Rejection Policies for Thread Pools

A thread pool is a technique for managing and scheduling threads: it maintains a group of threads that process tasks from a task queue. When the number of tasks exceeds the pool's maximum capacity, the pool must apply a rejection policy to turn away new tasks. The rejection policy determines how the pool handles the rejected work.

The choice of rejection policy is an important design decision: it determines how the pool behaves, and how reliable it is, under high load. Different policies suit different scenarios and requirements. There are four common built-in rejection policies: AbortPolicy, CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy.

1. AbortPolicy (abort): the thread pool's default rejection policy. When the pool cannot accept a new task, it rejects the task by throwing a RejectedExecutionException. The exception is thrown immediately, with no further handling; this is the simplest and most direct policy.

2. CallerRunsPolicy (caller runs): a more lenient policy. When the pool cannot accept a new task, the task is handed back to the submitting (caller) thread to execute. This guarantees the task is executed, but it may block the calling thread.

3. DiscardPolicy (discard): a more aggressive policy. When the pool cannot accept a new task, the task is silently dropped with no handling at all. Some tasks are simply lost, which may cause data loss or other problems.

4. DiscardOldestPolicy (discard oldest): when the pool cannot accept a new task, it discards the oldest task at the head of the queue and then retries enqueuing the new task. This keeps new work flowing, but some earlier tasks may be dropped.
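A built-in policy is selected when constructing the executor. The sketch below (class and task names are illustrative) uses a deliberately tiny pool so that submissions overflow and the policy kicks in; with CallerRunsPolicy, the overflow tasks run on the submitting thread instead of being rejected:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) throws InterruptedException {
        // One worker thread and a one-slot queue: further concurrent
        // submissions trigger the rejection policy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy()); // swap in any built-in policy here

        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(() -> System.out.println(
                    "task " + id + " ran on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Replacing CallerRunsPolicy with AbortPolicy in the same setup would instead throw RejectedExecutionException on the overflowing submission.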
Beyond these built-in policies, thread pools also allow custom rejection policies, which can implement whatever handling logic a particular application needs. In general, a custom policy implements the RejectedExecutionHandler interface and overrides its rejectedExecution method.
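As one possible custom policy (the class name BlockingRetryPolicy and its timeout parameter are illustrative, not a standard JDK class), the handler below blocks the submitting thread until queue space frees up, giving backpressure instead of dropping work:

```java
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A custom rejection policy: instead of aborting or discarding, wait
// (up to a timeout) for space in the task queue, applying backpressure
// to the submitting thread.
public class BlockingRetryPolicy implements RejectedExecutionHandler {
    private final long timeoutMs;

    public BlockingRetryPolicy(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (executor.isShutdown()) {
            throw new RejectedExecutionException("Executor already shut down");
        }
        try {
            // Block until the queue has room, or give up after the timeout.
            boolean accepted = executor.getQueue().offer(r, timeoutMs, TimeUnit.MILLISECONDS);
            if (!accepted) {
                throw new RejectedExecutionException(
                        "Task rejected after waiting " + timeoutMs + " ms");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("Interrupted while waiting to enqueue task", e);
        }
    }
}
```

It is installed like any built-in policy, e.g. `new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(16), new BlockingRetryPolicy(500))`. The trade-off is the same as CallerRunsPolicy's: the submitter slows down under load rather than losing tasks.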
Securing the Software-Defined Network Control Layer
Securing the Software-Defined Network Control Layer

Phillip Porras, Steven Cheung, Martin Fong, Keith Skinner, and Vinod Yegneswaran
Computer Science Laboratory, SRI International
{porras,cheung,mwfong,skinner,vinod}@

Abstract—Software-defined networks (SDNs) pose both an opportunity and challenge to the network security community. The opportunity lies in the ability of SDN applications to express intelligent and agile threat mitigation logic against hostile flows, without the need for specialized inline hardware. However, the SDN community lacks a secure control layer to manage the interactions between the application layer and the switch infrastructure (the data plane). There are no available SDN controllers that provide the key security features, trust models, and policy mediation logic necessary to deploy multiple SDN applications into a highly sensitive computing environment. We propose the design of security extensions at the control layer to provide the security management and arbitration of conflicting flow rules that arise when multiple applications are deployed within the same network. We present a prototype of our design as a Security-Enhanced version of the widely used OpenFlow Floodlight Controller, which we call SE-Floodlight. SE-Floodlight extends Floodlight with a security-enforcement kernel (SEK) layer, whose functions are also directly applicable to other OpenFlow controllers. The SEK adds a unique set of secure application management features, including an authentication service, role-based authorization, a permission model for mediating all configuration change requests to the data plane, inline flow-rule conflict resolution, and a security audit service. We demonstrate the robustness and scalability of our system implementation through both a comprehensive functionality assessment and a performance evaluation that illustrates its sub-linear scaling properties.

I. INTRODUCTION

SDN frameworks, such as OpenFlow (OF), embrace the paradigm of highly programmable switch
infrastructures [1] managed by a separate centralized control layer. Within the OpenFlow network stack, the control layer is the key component responsible for mediating the flow of information and control functions between one or more network applications and the data plane (i.e., OpenFlow-enabled switches). The network applications are typically traffic-engineering applications, such as flow-based load or priority management services, or perhaps applications designed to mitigate hostile flows or to enable flows to bypass faulty network segments.

To date, OpenFlow controllers [2], [3], [4], [5] have largely operated as the coordination point through which network applications convey flow rules, submit configuration requests, and probe the data plane for state information. As a controller communicates with all switches within its network, or network slice [6], it provides the means to distribute a coordinated set of flow rules across the network to optimize flow routes and divert and balance traffic to improve the network's efficiency [7].

From a network security perspective, OpenFlow offers researchers a unique point of control over any flow (or flow participant) deemed to be hostile. An OpenFlow-based security application, or OF security app, can implement much more complex flow management logic than simply halting or forwarding a flow. Such apps can incorporate stateful flow-rule production logic to implement complex quarantine procedures of the flow producer, or they could migrate a malicious connection into a counter-intelligence application in a manner not easily perceived by the flow participants. Flow-based security detection algorithms can also be redesigned as OF security apps but implemented much more concisely and deployed more efficiently [8]. Thus, there is a compelling motivation for sensitive computing environments to consider SDNs as a potential source of innovative threat mitigation.

However, there are also significant security challenges posed by OpenFlow, and SDNs more broadly. The question
of what network security policy is embodied across a set of OF switches is entirely a function of how the current set of OF apps react to the incoming stream of flow requests. When peer OF apps submit flow rules, these rules may produce complex interdependencies, or they may produce flow handling conflicts. The need for flow-rule arbitration, as new candidate rules are created by the application layer, is an absolute prerequisite for maintaining a consistent network security policy.

Within the OpenFlow community, the need for security policy enforcement is not lost. Efforts to develop virtual network slicing, such as in FlowVisor [6] and in the Beacon OpenFlow controller [9], propose to enable secure network operations by segmenting, or slicing, network control into independent virtual machines. Each network domain is governed by a self-consistent OF app, which is architected to not interfere with those OF apps that govern other network slices. In this sense, OpenFlow security has been cast as a non-interference property. However, even within a given network slice, the problem remains that security constraints within the slice must still be enforced.

Permission to freely reproduce all or part of this paper for noncommercial purposes is granted provided that copies bear this notice and the full citation on the first page. Reproduction for commercial purposes is strictly prohibited without the prior written consent of the Internet Society, the first-named author (for reproduction of an entire paper only), and the author's employer if the paper was prepared within the scope of employment.
NDSS '15, 8-11 February 2015, San Diego, CA, USA
Copyright 2015 Internet Society, ISBN 1-891562-38-X
10.14722/ndss.2015.23222

[Figure: App A1, App A2, ..., App N above the Controller]

logic to recover from loss from conflict arbitration is an essential element toward the design of robust and modular OF applications.

B. Challenge 2: Flow Constraints vs. Flow Circuits

Defining filters to constrain communications between network entities is a central element for
defining network security policies. However, the OpenFlow protocol's introduction of the Set action empowers apps to instruct a switch to rewrite the header attributes of a matching flow. Indeed, perhaps the central benefit of SDNs is this ability to deploy software-enabled orchestration of flows to manage network resources in an agile manner. However, this inherent flexibility in the OpenFlow protocol to define complex virtual circuit paths also introduces significant management challenges, such as the origin binding [10] problem.

The creation of virtual circuits offers a particular challenge to designing a control layer capable of maintaining a consistent network security policy, where the conflicts between incoming candidate flow rules and existing flow rules are detected and resolved before rules are forwarded to the switch. For example, let us consider the submission of a flow rule by a security application A1, which seeks to prevent two hosts from communicating.

Rule 1: (criteria) A to B, (action) drop

As this flow constraint (i.e., dropping flows from A to B) is installed to protect the network, it should hold that any subsequent candidate flow rule should be rejected if it conflicts with Rule 1. Now consider the potential submission of three subsequent flow rules submitted by app A2, which may arrive in any order.

Rule m: * to D, set D→B, Output to table
Rule n: A to *, set A→C, Output to table
Rule o: C to B, forward

We observe that, individually, none of the three flow rules (m, n, o) conflicts with Rule 1. The action "Output to table" indicates that once the set operation is performed, the result should continue evaluation among the remaining flow rules. In one possible rule submission ordering [m, n, then o], if the controller were to allow rule o to be forwarded to the switch, then a logic chain will arise. That is, a flow from A to D results in the following: D's address is set to B by rule m, A's address is set to C by rule n, and rule o causes the flow to be forwarded to B. In effect, any submission order of m, n, and o leads to the
circumvention of Rule 1.

The inability to reliably handle invariant property violations in recursive flow-rule logic chains established by such virtual circuits, using the Set action, is a fundamental deficiency of existing systems such as VeriFlow [11] and commercial systems like vArmour [12]. Without a scalable inline solution, this basic challenge will hinder multi-application deployments of OpenFlow in networks that require strong policy enforcement.

C. Challenge 3: An Application Permission Model

In addition to creating flow rules, OpenFlow provides apps with a wide range of switch commands and probes. For example, applications may reconfigure a switch in a manner that changes how the switch processes flow rules. Apps may query statistics and register for callback switch events, and they can issue vendor-specific commands to the switch. While network operators might choose to run a third-party OF app, OpenFlow offers them no ability to constrain which commands and requests the app will issue to the switch. This notion of constraining a third-party OF app's permissions may seem analogous to that of smartphone users who can choose to accept or deny a mobile app permission to access certain phone features or services. Why provide an OF app unneeded access to issue low-level vendor-specific commands if the intent of the running app has no requirement for such calls to be issued?

D. Challenge 4: Application Accountability

The absence of design considerations for multi-app support in OpenFlow also results in a lack of ability for the control layer to verify which app has issued which flow rule. Co-existing applications could issue interleaved flow rules, all of which are then treated identically by the control layer and data plane. An arbitration system cannot assign unique precedence or priority to flow rules or switch commands produced from any application. Rather, OpenFlow's design requires that arbitration of conflicting flow rules occurs among the applications themselves. This
approach limits the reusability of OF apps and results in monolithic application designs, such as Google's B4 example.

E. Challenge 5: Privilege Separation

The legacy of well-known OpenFlow controllers has essentially treated the application layer as a library layer, in which the traffic-engineering logic is instantiated as interpreted scripts, module plugins, or loadable libraries that execute within the process space of the controller. This has largely been for the purpose of performance, but as shown in [13], the current state of established controllers is highly susceptible to network collapse from minor coding errors, vulnerabilities, or malicious logic embedded within a traffic-engineering application. Even a slow memory leak within a TE application can cease the entire control layer, rendering the network unresponsive to new flow requests.

While [13] explored the robustness challenges of merging the application layer into the control layer, the implications of current controller architectures are equally problematic for implementing security mediation services. Among its implications, privilege separation dictates that the element responsible for security mediation should operate independently from those elements it mediates. Thus, for the OpenFlow control layer to operate as a truly independent mediator, applications should not be instantiated in the same process context as the controller.

In OpenFlow, separation between the application and control layer is achieved through a Northbound API, which is essentially an API defined to transmit messages between the OpenFlow application and control layers, where each operates in a separate process context. There is substantial effort to establish a Northbound API standard [14], but in addition to providing the process separation necessary for fair mediation, the Northbound API should also provide strong authentication and identification services to link each app to all flow rules it has created.

F. Toward a Security-Enhanced Control
Layer

In the remainder of this paper, we present our design of a security-enhanced control layer that addresses all of the above outlined challenges. Based on this design, we created a security-enhanced version of an existing controller, which we call SE-Floodlight. This implementation represents the first fully functional prototype OpenFlow controller designed to provide comprehensive security mediation of multi-application OpenFlow network stacks.

III. DESIGNING AN OPENFLOW MEDIATION POLICY

To better understand the notion of secure mediation in the context of OpenFlow networks, let us delve more precisely into what information is exchanged by the parties being mediated through the control layer. Table I enumerates the forms of data and control function exchanges that occur between the application layer and data plane within an OpenFlow v1.0 stack.^1 Each row identifies the various data exchange operations that must be mediated by the control layer, and indicates the mediation policy that is implemented. Protocol handshakes used to manage the controller-to-switch communication channel are excluded: the focus here is the mediation of application-layer-to-switch exchanges.

SE-Floodlight introduces a security enforcement kernel (SEK) into the Floodlight controller, whose purpose is to mediate all data exchange operations between the application layer and the data plane. For each operation, the SEK applies the application-to-data-plane mediation scheme shown in Table I.
The Minimum authorization column in Table I identifies the minimum role that an application must be assigned to perform the operation. As discussed in Section IV-A, SE-Floodlight implements a hierarchical authorization role scheme, with three default authorization roles. The lowest authorization role, APP, is intended primarily for (non-security-related) traffic-engineering applications, and provides sufficient permissions for most such flow-control applications. The security authorization role, SEC, is intended for applications that implement security services. The highest authorization role, ADMIN, is intended for applications such as the operator console app.

The objective of the mediation service is to provide a configurable permission model for a given SE-Floodlight deployment, in which both the set of roles may be extended and their permissions may be customized for each newly defined role. Column 3 of Table I presents the default Mediation policy that is assigned to each available interaction between the application layer and data plane.

TABLE I. A summary of control-layer mediation policies for data flows initiated from the application layer to data plane (A to D) and from the data plane to application layer (D to A)

Row | Direction | Data exchange operation | Mediation policy   | Minimum authorization
01  | A to D    | Flow rule mod           | RCA (Section IV-C) | APP
02  | D to A    | Flow removal messages   | Global read        | APP
03  | D to A    | Flow error reply        | Global read        | APP
04  | A to D    | Barrier requests        | Permission         | APP
05  | D to A    | Barrier replies         | Selected read      | APP
06  | D to A    | Packet-In return        | Selected read      | APP
07  | A to D    | Packet-Out              | Permission         | SEC
08  | A to D    | Switch port mod         | Permission         | ADMIN
09  | D to A    | Switch port status      | Permission         | ADMIN
10  | A to D    | Switch set config       | Permission         | ADMIN
11  | A to D    | Switch get config       | Permission         | APP
12  | D to A    | Switch config reply     | Selected read      | APP
13  | A to D    | Switch stats request    | Permission         | APP
14  | D to A    | Switch stats report     | Selected read      | APP
15  | A to D    | Echo requests           | Permission         | APP
16  | D to A    | Echo replies            | Selected read      | APP
17  | D to A    | Vendor features         | Permission         | ADMIN
18  | A to D    | Vendor actions          | Permission         | ADMIN

^1 While the Open Network Foundation (ONF) continues to introduce new features and data types in the evolving OpenFlow standards, we believe the broad majority of these emerging features will translate into a variation of these data exchange operations.

First, the ability to define or override existing flow policies within a switch (row 1) is the inherent purpose of every OpenFlow application. However, SE-Floodlight introduces Rule-based Conflict Analysis (RCA), described in Section IV-C, to ensure that each candidate flow rule submitted does not conflict with an existing flow rule whose author is associated with a higher authority than the author of the candidate rule.

In OpenFlow, a rule conflict arises when the candidate rule enables or disables a network flow that is otherwise inversely prohibited (or allowed) by existing rules. Conflicts are either direct or indirect. A direct conflict occurs when the candidate rule contravenes an existing rule (e.g., an existing rule forwards packets between A and B while the candidate rule drops them). An indirect conflict occurs when the candidate rule logically couples with pre-existing rules in a way that contravenes an existing rule. We discuss rule conflict evaluation in Section IV-C.

A second class of operations involves event notifications used to track the flow table state. These operations do not alter network policies, but provide traffic-engineering applications with information necessary to make informed flow management decisions. The default permission model defines these operations as two forms of public read. The Global read represents data-plane events that are streamed to all interested applications who care to receive them (rows 2 and 3). Selected read operations refer to individual events for which an application can register through the controller to receive switch state-change notifications (rows 5, 6, 12, 14, and 16). Selected read notifications represent replies to
permission-protected operations, which are discussed next. The Packet-In notification is an exception, in that it is received in response to flow rule insertions (vetted through RCA) that trigger the switch to notify the application when packets are received that match the flow rule criteria or when no matching flow rule is found.

The third class of operations involves those that require explicit permissions (rows 4, 7-11, 13, 15, 17, and 18). These operations either perform direct alterations to the network flow policies implemented by the switch, or enable the operator to

[Fig. 2. High-level overview of the SE-Floodlight architecture: the Floodlight controller with security extensions contains a Security Enforcement Kernel (role-based source authentication, state table manager, switch callback tracking, RCA conflict analyzer, permission mediator, aggregate flow table) that applies role-based segregation to administrator, security service, and application flow rules, flanked by a Northbound proxy server, app credential management for controller-resident Java class OF apps, and a security audit subsystem writing a security audit trail]

An application's authorization role is assigned during the application authentication procedure, which is described in Section IV-B. Authorizations are group roles, whose members inherit both the rule authority used in conflict resolution, discussed in Section IV-C, and the set of associated permissions presented in Section III. This authorization scheme introduces three default application authorization roles (types), which may be augmented by the operator with additional sub-roles, as desired.

Applications assigned the administrator role, ADMIN, such as an administrative console application, may produce flow rules that are assigned highest priority by the flow-rule conflict-resolution scheme. Second, network applications that are intended to dynamically change or extend the security policy should be assigned the security application, SEC, authorization
role. Security applications will typically produce flow rules in response to perceived runtime threats, such as a malicious flow, an infected internal asset, a blacklisted external entity, or an emergent malicious aggregate traffic pattern. Flow rules produced with the SEC role are granted the second highest priority in the rule conflict resolution scheme, overriding all other messages but those from the administrator. All remaining apps are assigned the APP authorization role.

B. Module Authentication

For multi-application environments it is critical to understand which application has submitted which OpenFlow message. Module authentication provides the foundation for mechanisms such as application permission enforcement, role-based conflict resolution, and security audit for holding errant applications accountable.

SE-Floodlight enables two operating modes when mediating OpenFlow network applications. First, it can mediate an application that is implemented as an internally-loaded Java class module, which is the (legacy) method used by Floodlight.
Second, SE-Floodlight introduces a client-server Northbound API, which enables operators to separate the controller process (the security mediator) from the process space of the application (the agent being mediated); see Section IV-F.

We begin by explaining the authentication and credential-assignment scheme used when the OpenFlow application is spawned as a class module. Our approach utilizes the Java protected factory method construct, which produces a protected subclass for each message sent from the client application Java class to the SEK. The protected factory supplies the authentication-supporting API extensions used to communicate with the SEK, and it embeds the application's credential into a protected subclass for each object passed to the SEK. Neither the factory nor its protected subclasses are modifiable by the Java client, unless the client contains embedded JNI code or violates coding conventions, which we discuss in Step 1 below. Our goal is to introduce a class-based OF application messaging scheme that adds an administrator-assigned credential to each message produced by the app, while preventing the app from tampering with this credential. The following steps outline our approach:

• Step 1: JNI pre-inspector module: Prior to credential establishment of the Java class module, the Java class module must be pre-parsed by an inspection utility. If the module is discovered to contain application-supplied JNI code or classes that are declared within or extend classes in "reserved" packages, the Java class will fail pre-inspection.^2

• Step 2: Application role assignment: Java class integration begins with an installation phase, in which the administrator generates a runtime credential, which includes a signed manifest for a specified Java class module, its superclasses, and embedded classes. The credential uniquely identifies the application and incorporates the authorization group role to which the application will be assigned when instantiated. The administrator also assigns the
authorization role to a security group, which specifies an upper limit of group priority that group members may assign to the messages sent to the controller. This upper limit enables the application to assign precedence to its own stream of flow rules within the sub-range of priorities corresponding to its role.

• Step 3: Class validation function: If a credential has been assigned to the class module by the administrator, the integrity of the manifest and class files contained within the running JVM context is digitally verified. If verified, the class is provided a protected factory that will populate each created object with a non-accessible subclass that contains the role found within the application's credential (i.e., all objects produced through this factory are assigned the credential provided by the administrator in Step 2). Integrity check failures result in unloading of the application and a raised exception. If no credential is present, SE-Floodlight can be configured either to not load the module or to automatically load the module using the default (APP) credential.

• Step 4: The application submits a message to the SEK: When the SEK receives a message from a Java class module, it inspects the message to determine whether it contains a factory-supplied protected subclass. If the message contains the protected subclass, the SEK assigns the message the group specified within the credential. If the protected subclass is not present, the SEK associates the message with a default role assigned by the administrator.^3

C. Conflict Detection and Resolution

The SEK employs an algorithm called Rule-chain Conflict Analysis (RCA)^4 to detect when a candidate flow rule conflicts with one or more rules resident in the switch flow table. By conflict, we mean that the candidate flow rule by itself, or when combined with other resident flow rules, enables or prevents a communication flow that is otherwise prohibited by one or more existing flow rules.

^2 Any application requiring native code should be deployed
using the Northbound API, Section IV-F, while applications containing classes that inject themselves into "reserved" packages will be summarily rejected.

^3 This may occur if the application employs Floodlight's legacy message API. No Floodlight API enables the application to bypass SEK message evaluation, and all such messages will inherit the administrator-assigned default authorization role (ideally, the lowest [APP] authorization role).

^4 The version presented here is motivated by the Alias-set Rule Reduction (ARR) algorithm presented in a previous workshop paper [15].

1) RCA Internal Rule Representation: RCA maintains an internal representation of each OpenFlow rule, r, present in the switch. This representation is composed of the following elements:

• r.criteria: This corresponds to the flow-rule-match structures, as specified in the OpenFlow 1.0 specification.

• r.action: Corresponds to the flow-rule action field; the SET action effects are captured in the r.SET_mods field (below).
  - D: No output. Drop packets corresponding to this criteria.
  - O_f: Forward: output to port, output to controller, or broadcast.
  - O_t: Output to table: the flow may continue evaluation by other flow rules. This action enables two or more rules to be logically chained together when one rule whose action is O_t also contains logic to alter the flow's attributes to satisfy the criteria of a second rule (i.e., the two rules form a rule chain).

• r.flow_altering: Indicates whether the rule incorporates a SET action that alters the attributes involved in the match criteria.

• r.SET_mods: Captures the alterations to the flow attributes performed by the SET action. (r.SET_mods equals r.criteria for all flow rules that exclude the SET action.)

• r.priority: The priority of the rule as assigned by the application that authored the rule. (Each authorization role specifies a maximum priority that the applications may use.)

2) Synthetic Rules: Synthetic rules are produced by RCA and stored in the SEK's state table, but they are not forwarded to the switch. Rather, synthetic flow
rules augment the state table by capturing the end points of virtual flows established through rule chains. From the OpenFlow controller perspective, rule chains are reducible to two deterministic end-points. A completed rule chain is one that is terminated by a flow rule whose action is either drop (D) or forward (O_f). A candidate rule can be chained to an existing rule in either of two ways. Tail chaining occurs with a resident rule, r, when r.action == O_t and r.SET_mods matches r_c.criteria (the criteria of our candidate rule). Head chaining is the complementary case, where r_c.action == O_t and r_c.SET_mods matches r.criteria.

Tracking the correspondence between rules that contribute to the formation of a synthetic rule is accomplished by adding two attributes to the internal rule representation:

• r.parents: ∅ if r is not a synthetic rule. Otherwise it specifies the two rules that were chained to produce the synthetic rule.

• r.child: ∅ if r is not a parent of a synthetic rule. Otherwise it specifies the synthetic rule created from r. Upon deletion of r, the child is garbage collected.

D. The RCA Algorithm

Set definitions: The set of active RCA internal rules, which implicitly and explicitly correspond to flow rules resident in each switch's flow table, is denoted by the set A. Upon initialization of RCA, set A = ∅. From A, we also create a subset, H_f, which represents the set of resident rules that may become participants at the head of a rule chain, i.e., r.flow_altering == true and r.action == O_t.

Much of RCA's task is to match the candidate rule criteria, r_c.criteria, against a resident rule's r_i.criteria. To perform rule matching, RCA employs a binary tree with source and destination n-tuples as keys. For each candidate rule, r_c, RCA performs the following steps:

Step 1: Testing for direct conflict: For each binary tree match between candidate r_c and A, RCA determines which rule takes precedence. Rule precedence is assigned to the rule with the highest priority. If r_i takes precedence over r_c, then r_c is
rejected with a permission (PERM) error code and RCA exits (no further evaluation is required). If r_c takes precedence over r_i, then r_i is deleted from the flow table (a delete notification is produced) and we proceed to Step 2. Otherwise, the two rule priorities are equal. In this case, if neither r_c's nor r_i's action is O_t, their actions do not match, and r_c.SET_mods match r_i.SET_mods, then the matching precedence policy is applied. Otherwise, RCA proceeds to Step 2 and considers whether r_c will form a chain with other resident rules.

Matching Precedence Policy: When the two rule priorities are equal, then by administrative configuration, RCA will employ either a FIFO or LIFO strategy, assigning precedence to either the resident rule (FIFO) or the candidate (LIFO). FIFO is the default strategy, in that it rejects the candidate rule, thereby requiring the explicit recognition and removal of the conflicting resident rule before the candidate is added.

function DIRECT_FLOW_TESTING(r_c)
  for r_i ∈ A matching r_c do
    if (r_i.priority > r_c.priority) then return;
    else if (r_i.priority < r_c.priority) then goto step 2;
    else if ((r_i.action == O_t) || (r_c.action == O_t)) then goto step 2;
    else if r_c.SET_mods ≠ r_i.SET_mods then goto step 2;
    else goto match_precedence;
    end if
  end for
end function

function MATCH_PRECEDENCE(r_i, r_c)
  if (strategy == LIFO) then reject r_c and exit;
  else if (strategy == FIFO) then
    mark r_i for deletion and goto step 2
  end if
end function

The next task is to identify conflicts that arise when the candidate combines with a resident rule to produce a rule chain. Here, let us introduce two synthetic rule variables: a synthetic tail-chain rule r_tr = ∅, and a synthetic head-chain rule r_hr = ∅. We also set r_c-orig = r_c.

Step 2: Detect a tail-chaining candidate rule: For each rule r_i in H_f, if r_c.criteria.src matches r_i.SET_mods.src and r_c.criteria.dst matches r_i.SET_mods.dst, then we construct a synthetic tail-chain rule that combines the flow logic of r_i followed by r_c. Synthetic rule r_tr.criteria is set to r_i.criteria.src
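The chaining tests above reduce to comparing one rule's SET modifications against another rule's match criteria. The sketch below illustrates that comparison and the construction of a synthetic rule's end points; it is a hypothetical illustration, not the paper's implementation, and all class, field, and method names (Rule, criteria, setMods, tailChains) are assumptions.

```java
import java.util.*;

// Minimal sketch of RCA-style chain detection. A rule carries its match
// criteria, its post-SET attributes, and one of three actions.
public class RcaChain {
    enum Action { DROP, FORWARD, GOTO_TABLE }  // D, O_f, O_t

    static class Rule {
        final Map<String, String> criteria;  // r.criteria (match fields)
        final Map<String, String> setMods;   // r.SET_mods (attributes after SET)
        final Action action;                 // r.action
        Rule(Map<String, String> c, Map<String, String> m, Action a) {
            criteria = c; setMods = m; action = a;
        }
    }

    // Tail chaining: resident rule ends in O_t and its rewritten attributes
    // satisfy the candidate's criteria, so traffic falls through to r_c.
    static boolean tailChains(Rule resident, Rule candidate) {
        return resident.action == Action.GOTO_TABLE
                && resident.setMods.equals(candidate.criteria);
    }

    // Head chaining: the complementary case, the candidate feeds the resident.
    static boolean headChains(Rule resident, Rule candidate) {
        return candidate.action == Action.GOTO_TABLE
                && candidate.setMods.equals(resident.criteria);
    }

    // A synthetic rule records the virtual flow's end points: the head rule's
    // criteria together with the tail rule's final attributes and action.
    static Rule synthesize(Rule head, Rule tail) {
        return new Rule(head.criteria, tail.setMods, tail.action);
    }

    public static void main(String[] args) {
        Rule r = new Rule(Map.of("dst", "10.0.0.1"), Map.of("dst", "10.0.0.2"),
                          Action.GOTO_TABLE);
        Rule rc = new Rule(Map.of("dst", "10.0.0.2"), Map.of("dst", "10.0.0.2"),
                           Action.FORWARD);
        System.out.println("tail chain: " + tailChains(r, rc));
        System.out.println("synthetic action: " + synthesize(r, rc).action);
    }
}
```

A chain formed this way is "completed" once the synthesized action is D or O_f, matching the paper's definition of a completed rule chain.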
Policies (explained in English)

English answer:

Policies are sets of rules that guide behavior and decision-making in organizations. They establish the principles and standards that employees must adhere to in order to achieve the organization's goals and objectives. Policies provide a framework for consistent decision-making and ensure that all employees have a clear understanding of what is expected of them. They can address a wide range of topics, including employee conduct, workplace safety, financial management, and customer service.

Policies can be either formal or informal. Formal policies are written documents that are approved by management and communicated to employees. They are typically comprehensive and detail-oriented, providing specific instructions on how employees should conduct themselves in various situations. Informal policies are unwritten rules or customs that are passed down through the organization. They may not be as detailed or specific as formal policies, but they can still be influential in shaping employee behavior.

Effective policies are clear, concise, and easy to understand. They should be developed with the input of employees and stakeholders, and they should be regularly reviewed and updated to ensure that they remain relevant and effective. When policies are well-written and implemented, they can help to create a positive and productive work environment. They can improve employee morale, reduce conflicts, and ensure that the organization operates in a fair and consistent manner.

Chinese answer (translated): Policies are a set of rules intended to guide behavior and decision-making within an organization.
DNSEXT Working Group                                    Brian Wellington
INTERNET-DRAFT                                      Olafur Gudmundsson
<draft-ietf-dnsext-ad-is-secure-06.txt>                       June 2002
Updates: RFC 2535

                      Redefinition of DNS AD bit

Status of this Memo

This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC 2026.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at /ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at /shadow.html

Comments should be sent to the authors or the DNSEXT WG mailing list namedroppers@

This draft expires on December 25, 2002.

Copyright Notice

Copyright (C) The Internet Society (2002). All rights reserved.

Abstract

Based on implementation experience, the RFC 2535 definition of the Authenticated Data (AD) bit in the DNS header is not useful. This draft changes the specification so that the AD bit is only set on answers where signatures have been cryptographically verified or the server is authoritative for the data and is allowed to set the bit by policy.

Expires December 2002                                           [Page 1]

INTERNET-DRAFT          AD bit set on secure answers           June 2002

1 - Introduction

Familiarity with the DNS system [RFC1035] and DNS security extensions [RFC2535] is helpful but not necessary.

As specified in RFC 2535 (section 6.1), the AD (Authenticated Data) bit indicates in a response that all data included in the answer and authority sections of the response have been authenticated by the server according to the policies of that server.
This is not especially useful in practice, since a conformant server SHOULD never reply with data that failed its security policy.

This draft redefines the AD bit such that it is only set if all data in the response has been cryptographically verified or otherwise meets the server's local security policy. Thus, a response containing properly delegated insecure data will not have AD set, nor will a response from a server configured without DNSSEC keys. As before, data which failed to verify will not be returned. An application running on a host that has a trust relationship with the server performing the recursive query can now use the value of the AD bit to determine if the data is secure or not.

1.1 - Motivation

A full DNSSEC capable resolver called directly from an application can return to the application the security status of the RRsets in the answer. However, most applications use a limited stub resolver that relies on an external full resolver. The remote resolver can use the AD bit in a response to indicate the security status of the data in the answer, and the local resolver can pass this information to the application. The application in this context can be either a human using a DNS tool or a software application.

The AD bit SHOULD be used by the local resolver if and only if it has been explicitly configured to trust the remote resolver. The AD bit SHOULD be ignored when the remote resolver is not trusted.

An alternate solution would be to embed a full DNSSEC resolver into every application. This has several disadvantages.

- DNSSEC validation is both CPU and network intensive, and caching SHOULD be used whenever possible.

- DNSSEC requires non-trivial configuration - the root key must be configured, as well as keys for any "islands of security" that will exist until DNSSEC is fully deployed.
The number of configuration points should be minimized.

1.2 - Requirements

The key words "MAY", "MAY NOT", "MUST", "MUST NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", in this document are to be interpreted as described in RFC 2119.

1.3 - Updated documents and sections

The definition of the AD bit in RFC 2535, Section 6.1, is changed.

2 - Setting of AD bit

The presence of the CD (Checking Disabled) bit in a query does not affect the setting of the AD bit in the response. If the CD bit is set, the server will not perform checking, but SHOULD still set the AD bit if the data has already been cryptographically verified or complies with local policy. The AD bit MUST only be set if DNSSEC records have been requested via the OK bit [RFC3225] and relevant SIG records are returned.

2.1 - Setting of AD bit by recursive servers

Section 6.1 of RFC 2535 says:

"The AD bit MUST NOT be set on a response unless all of the RRs in the answer and authority sections of the response are either Authenticated or Insecure."

The replacement text reads:

"The AD bit MUST NOT be set on a response unless all of the RRsets in the answer and authority sections of the response are Authenticated."

"The AD bit SHOULD be set if and only if all RRs in the answer section and any relevant negative response RRs in the authority section are Authenticated."

A recursive DNS server following this modified specification will only set the AD bit when it has cryptographically verified the data in the answer.

2.2 - Setting of AD bit by authoritative servers

A primary server for a secure zone MAY have the policy of treating authoritative secure zones as Authenticated. Secondary servers MAY have the same policy, but SHOULD NOT consider zone data Authenticated unless the zone was transferred securely and/or the data was verified. An authoritative server MUST only set the AD bit for authoritative answers from a secure zone if it has been explicitly configured to do so.
The default for this behavior SHOULD be off.

2.2.1 - Justification for setting AD bit w/o verifying data

The setting of the AD bit by authoritative servers affects only a small set of resolvers that are configured to directly query and trust authoritative servers. This only affects servers that function as both recursive and authoritative. All recursive resolvers SHOULD ignore the AD bit.

The cost of verifying all signatures on load by an authoritative server can be high and increases the delay before it can begin answering queries. Verifying signatures at query time is also expensive and could lead to resolvers timing out on many queries after the server reloads zones.

Organizations that require that all DNS responses contain cryptographically verified data MUST separate the functions of authoritative and recursive servers, as authoritative servers are not required to validate local secure data.

3 - Interpretation of the AD bit

A response containing data marked Insecure in the answer or authority section MUST never have the AD bit set. In this case, the resolver SHOULD treat the data as Insecure whether or not SIG records are present.

A resolver MUST NOT blindly trust the AD bit unless it communicates with the full function resolver over a secure transport mechanism or using message authentication such as TSIG [RFC2845] or SIG(0) [RFC2931] and is explicitly configured to trust this resolver.

4 - Applicability statement

The AD bit is intended to allow the transmission of the indication that a resolver has verified the DNSSEC signatures accompanying the records in the Answer and Authority section. The AD bit MUST only be trusted when the end consumer of the DNS data has confidence that the intermediary resolver setting the AD bit is trustworthy.
This can only be accomplished via an out of band mechanism such as:

- Fiat: An organization can dictate that it is OK to trust certain DNS servers.

- Personal: Because of a personal relationship or the reputation of a resolver operator, a DNS consumer can decide to trust that resolver.

- Knowledge: If a resolver operator posts the configured policy of a resolver, a consumer can decide that resolver is trustworthy.

In the absence of one or more of these factors, the AD bit from a resolver SHOULD NOT be trusted. For example, home users frequently depend on their ISP to provide recursive DNS service; it is not advisable to trust these resolvers. A roaming/traveling host SHOULD NOT use DNS resolvers offered by DHCP when looking up information where security status matters.

When faced with a situation where there are no satisfactory recursive resolvers available, running one locally is RECOMMENDED. This has the advantage that it can be trusted, and the AD bit can still be used to allow applications to use stub resolvers.

5 - Security Considerations

This document redefines a bit in the DNS header. If a resolver trusts the value of the AD bit, it must be sure that the responder is using the updated definition, which is any DNS server/resolver supporting the OK bit [RFC3225].

Authoritative servers can be explicitly configured to set the AD bit on answers without doing cryptographic checks. This behavior MUST be off by default. The only affected resolvers are those that directly query and trust the authoritative server, and this functionality SHOULD only be used on servers that act both as authoritative servers and recursive resolvers.

Resolvers (full or stub) that trust the AD bit on answers from a configured set of resolvers are DNSSEC security compliant.

6 - IANA Considerations

None.

7 - Internationalization Considerations

None.
This document does not change any textual data in any protocol.

8 - Acknowledgments

The following people have provided input on this document: Robert Elz, Andreas Gustafsson, Bob Halley, Steven Jacob, Erik Nordmark, Edward Lewis, Jakob Schlyter, Roy Arends, Ted Lindgreen.

Normative References:

[RFC1035] P. Mockapetris, "Domain Names - Implementation and Specification", STD 13, RFC 1035, November 1987.

[RFC2535] D. Eastlake, "Domain Name System Security Extensions", RFC 2535, March 1999.

[RFC2845] P. Vixie, O. Gudmundsson, D. Eastlake, B. Wellington, "Secret Key Transaction Authentication for DNS (TSIG)", RFC 2845, May 2000.

[RFC2931] D. Eastlake, "DNS Request and Transaction Signatures (SIG(0))", RFC 2931, September 2000.

[RFC3225] D. Conrad, "Indicating Resolver Support of DNSSEC", RFC 3225, December 2001.

Authors Addresses

Brian Wellington                        Olafur Gudmundsson
Nominum Inc.
2385 Bay Road                           3826 Legation Street, NW
Redwood City, CA, 94063                 Washington, DC, 20015
USA                                     USA
<Brian.Wellington@>                     <ogud@>

Full Copyright Statement

Copyright (C) The Internet Society (2002). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works.
However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
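To make the draft's subject concrete, here is a short sketch (not part of the draft) that extracts the AD and CD flags from the 16-bit flags word of a DNS header, following the RFC 1035 header layout as extended by RFC 2535; the class and constant names are assumptions for illustration. Counting from the most significant bit, the word is QR | Opcode(4) | AA | TC | RD | RA | Z | AD | CD | RCODE(4).

```java
// Sketch: reading the AD (Authenticated Data) and CD (Checking Disabled)
// bits out of the second 16-bit word of a DNS message header.
public class DnsFlags {
    static final int AD = 0x0020;  // bit 5 of the flags word
    static final int CD = 0x0010;  // bit 4 of the flags word

    static boolean adSet(int flags) { return (flags & AD) != 0; }
    static boolean cdSet(int flags) { return (flags & CD) != 0; }

    public static void main(String[] args) {
        // 0x81A0 = QR + RD + RA + AD: a recursive response whose data the
        // resolver claims to have authenticated.
        int flags = 0x81A0;
        System.out.println("AD set: " + adSet(flags));  // true
        System.out.println("CD set: " + cdSet(flags));  // false
    }
}
```

A stub resolver applying the draft's advice would consult adSet() only for responses received from an explicitly trusted full resolver over a secure channel.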
DisableCircularReferenceDetect

By default, circular reference detection is turned on in most programming languages and databases in order to protect against this type of situation. However, there are occasions when it is necessary to disable it in order to perform certain tasks. For example, it may be necessary to disable it in order to create a self-referencing object that stores information about itself in a database.

In order to disable circular reference detection, it is necessary to first identify the objects that are involved in the circular reference. Once the objects have been identified, the developer can add code that will ignore any circular references. This code can be added before the execution of the code so that it does not interfere with the rest of the program.

Although disabling circular reference detection can be useful in certain situations, it is important to remember that it should only be done when absolutely necessary and with caution. Otherwise, it could lead to data integrity issues, performance problems, and even system crashes. Therefore, it is usually best to leave it enabled unless there is a specific problem that can only be solved by disabling it.
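As an illustration of what such a detector does, the sketch below walks a self-referencing object graph and substitutes a reference marker when it revisits a node already on the current path; with detection disabled, the same walk would recurse until a StackOverflowError. All names here (Node, toText) are hypothetical and not tied to any particular serializer's API.

```java
import java.util.*;

// Generic sketch of a serializer's circular-reference detector: identity-based
// tracking of visited nodes, emitting a "$ref" marker on a repeat visit.
public class CircularRefDemo {
    static class Node {
        String name;
        Node next;
        Node(String name) { this.name = name; }
    }

    static String toText(Node n, boolean detectCycles, Set<Node> seen) {
        if (n == null) return "null";
        if (detectCycles && !seen.add(n)) {
            return "{\"$ref\":\"" + n.name + "\"}";  // cycle found: emit marker
        }
        // With detectCycles == false this recursion never terminates on a
        // cyclic graph and eventually throws StackOverflowError.
        return "{\"name\":\"" + n.name + "\",\"next\":"
                + toText(n.next, detectCycles, seen) + "}";
    }

    public static void main(String[] args) {
        Node a = new Node("a");
        Node b = new Node("b");
        a.next = b;
        b.next = a;  // self-referencing pair
        Set<Node> seen = Collections.newSetFromMap(new IdentityHashMap<>());
        System.out.println(toText(a, true, seen));
    }
}
```

Identity-based tracking (IdentityHashMap) matters here: two distinct objects with equal contents must not be mistaken for a cycle.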
The Thread Pool's Default Rejection Policy

A thread pool's rejection policy governs what happens to a task when the task queue is full and the pool has already reached both its core and maximum thread counts, so no new thread can be created. The rejection policy is the mechanism designed to handle exactly this situation.

There are several rejection policies, including the following:

1. AbortPolicy: throws an exception outright, rejecting the task.
2. CallerRunsPolicy: as long as the thread pool has not been shut down, runs the rejected task directly in the caller's thread. Clearly this does not truly reject the task; instead it ties up the calling thread, so this policy may affect performance.
3. DiscardOldestPolicy: discards the oldest request in the queue, i.e., the task that would be executed next, and retries submitting the current task.
4. DiscardPolicy: simply discards the current task without any further handling.

Each of these policies has its own applicable scenarios; choosing the right policy for the situation can make a program more robust and stable. The default rejection policy, AbortPolicy, is described in detail below.

The thread pool's default rejection policy is AbortPolicy, which directly throws a RejectedExecutionException to tell the caller that the pool can no longer accept new tasks. In practice this policy can cause problems: the exception interrupts the submitting thread's flow, which can prevent some critical tasks from being executed, and it may also prompt large numbers of threads to be created, degrading the performance of the whole system.

Nevertheless, AbortPolicy is still useful in some cases. For example, when an application has very high concurrency requirements, we want the thread pool to react immediately rather than block the caller. In such cases, choosing AbortPolicy as the rejection policy is usually a good choice.

If you need more information about thread pool rejection policies, you can explore the JDK API documentation or look for related articles. In short, rejection policies are a very important part of a thread pool; they can greatly improve system stability and availability, so never treat them lightly in real applications.
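The default policy can be observed directly with java.util.concurrent.ThreadPoolExecutor: a pool with one worker thread and a one-slot queue accepts two tasks and rejects the third with RejectedExecutionException. The pool sizing below is only an illustrative setup.

```java
import java.util.concurrent.*;

// Demonstrates AbortPolicy (the default): with the single worker busy and
// the one-slot queue full, a third submission is rejected immediately.
public class AbortPolicyDemo {
    public static boolean submitThird() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());  // explicit default
        CountDownLatch block = new CountDownLatch(1);
        boolean rejected = false;
        try {
            Runnable sleeper = () -> {
                try { block.await(); } catch (InterruptedException ignored) {}
            };
            pool.execute(sleeper);          // occupies the single worker
            pool.execute(sleeper);          // fills the one-slot queue
            try {
                pool.execute(sleeper);      // no capacity left: rejected
            } catch (RejectedExecutionException e) {
                rejected = true;
            }
        } finally {
            block.countDown();
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("third task rejected: " + submitThird());
    }
}
```

Swapping in CallerRunsPolicy, DiscardOldestPolicy, or DiscardPolicy in the constructor changes only the behavior of that third submission.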
Policies for Caching OLAP Queries in Internet Proxies

Thanasis Loukopoulos and Ishfaq Ahmad, Senior Member, IEEE

Abstract—The Internet now offers more than just simple information to the users. Decision makers can now issue analytical, as opposed to transactional, queries that involve massive data (such as aggregations of millions of rows in a relational database) in order to identify useful trends and patterns. Such queries are often referred to as On-Line Analytical Processing (OLAP). Typically, pages carrying query results do not exhibit temporal locality and, therefore, are not considered for caching at Internet proxies. In OLAP processing, this is a major problem as the cost of these queries is significantly larger than that of the transactional queries. This paper proposes a technique to reduce the response time for OLAP queries originating from geographically distributed private LANs and issued through the Web toward a central data warehouse (DW) of an enterprise. An active caching scheme is introduced that enables the LAN proxies to cache some parts of the data, together with the semantics of the DW, in order to process queries and construct the resulting pages. OLAP queries arriving at the proxy are either satisfied locally or from the DW, depending on the relative access costs. We formulate a cost model for characterizing the respective latencies, taking into consideration the combined effects of both common Web access and query processing. We propose a cache admittance and replacement algorithm that operates on a hybrid Web-OLAP input, outperforming both pure-Web and pure-OLAP caching schemes.

Index Terms—Distributed systems, data communication aspects, Internet applications databases, Web caching, OLAP.

1 INTRODUCTION

CACHING has emerged as a primary technique for coping with high latencies experienced by the Internet users.
There are four major locations where caching is performed:

1. proxy at the front-end of a server farm [7],
2. network cache at the end-points of the backbone network [13],
3. LAN proxy [1], [39], and
4. browser.

Although caching at these locations has been shown to significantly reduce Web traffic [3], dynamically generated pages are not cacheable. Dynamic pages typically consist of a static part and a dynamic one (for example, query results). On the other hand, the need for decision support systems has become of paramount importance in today's business, leading many enterprises to building decision support databases called data warehouses (DWs) [14]. Decision makers issue analytical, as opposed to transactional, queries that typically involve aggregations of millions of rows in order to identify interesting trends. Such queries are often referred to as OLAP (On-Line Analytical Processing). Users perceive the data of the DW as cells in a multidimensional data-cube [15]. Fetching from the DW the parts of the cube needed by queries and performing aggregations over them is an extremely time consuming task. A common technique to accelerate such queries is to precalculate and store some results. Such stored fragments are essentially parts of views in relational database terms and hence we will refer to their storage as materialization/caching of OLAP views. Most of the past work on view selection for materialization is limited to the central server.

In this paper, we address the problem of caching OLAP queries posed by ad hoc, geographically spanned users, through their Web browsers. Unlike previous approaches, e.g., [18], [20], we employ the existing proxy infrastructure and propose a method of caching both Web pages and OLAP query results in common proxy servers. Our work is applicable to other caching points, provided that significant traffic towards the DW passes through them (e.g., edge servers of a Content Distribution Network [22]). Some preliminary results were presented in [25].

Web pages carrying OLAP query
results, abbreviated as WOQPs (Web OLAP query pages), are essentially dynamic pages and are normally marked as uncacheable. This is not because their content changes frequently (as is the case for instance with sport pages where continuous updates occur in the server), but is rather due to the fact that it is unlikely that successive queries bear the same results. Therefore, unless the caching entity is enhanced with query processing capabilities, it is impossible to use a cached WOQP in order to answer future queries inquiring a subset of the cached results. The proposed active caching framework enables the proxies to answer queries using the views cached locally and construct the WOQPs needed to present the results in the users' browsers. For tackling cache replacement issues, we develop an analytical cost model and propose strategies that are empirically proven to lead to high quality solutions. Although active caching has been employed before in answering transactional queries [26], to the best of our knowledge, this is the first time that OLAP data are considered. The special case of OLAP involves unique challenges (for instance, the results may vary in size by many orders of magnitude) and provides new opportunities for optimizations (e.g., the interdependencies of the views in a lattice).

. T. Loukopoulos is with the Department of Computer and Communication Engineering, University of Thessaly, 37 Glavani—28th October str., Deligiorgi Bld., 38221 Volos, Greece. E-mail: luke@inf.uth.gr.
. I. Ahmad is with the University of Texas at Arlington, Box 19015, CSE, UTA, Arlington, TX 76019. E-mail: iahmad@.

Manuscript received 13 Sept. 2004; revised 4 July 2005; accepted 8 Sept. 2005; published online 24 Aug. 2006. Recommended for acceptance by J. Fortes. For information on obtaining reprints of this article, please send e-mail to: tpds@, and reference IEEECS Log Number TPDS-0231-0904.

1045-9219/06/$20.00 © 2006 IEEE. Published by the IEEE Computer Society.

The rest of the paper is organized as follows: Section 2 provides an overview of OLAP
queries and illustrates the lattice notion to describe OLAP views. Section 3 presents the proposed framework for caching OLAP queries in departmental LAN proxies. Section 4 deals with the caching and replacement strategies for OLAP views. Section 5 discusses the simulation results, while Section 6 presents the related work. Finally, Section 7 includes some summarizing remarks.

2 OVERVIEW OF OLAP QUERIES

DWs are collections of historical, summarized, and consolidated data, originating from several different databases (sources). Analysts and knowledge workers issue analytical (OLAP) queries over very large data sets, often millions of rows. DW's contents are multidimensional and the typical OLAP queries consist of group_by and aggregate operations along one or more dimensions. Fig. 1 depicts an example of a 2D space with the dimensions being the customer's name and the product id. The value at each cell in the 2D grid gives the volume of sales for a specific <product id; customer id> pair. An OLAP query could, for example, ask for the total volume of sales for the product SX2 or the customer T. Johnson, shown as shaded cells in Fig. 1. It could also be a group by query for two products and three customers as shown in the shaded rectangle, or an aggregation of total sales.

A view is a derived relation, which is defined in terms of base relations and is normally recomputed each time it is referenced. A materialized view is a view that is computed once and then stored in the database. In the example of Fig. 1, we might consider for materialization the results of the four described queries. The advantage of having some views materialized is that future queries can be answered with little processing and disk I/O latency. Moreover, queries asking for a subset of the materialized data may be answered by accessing one view, or through a combination of two or more views as shown in Fig. 2. In our 2D example, any rectangle in the plane can be a potentially materialized view. Due to the fact that OLAP queries are ad hoc, stored
fragments will most likely be able to only partially answer future queries, in which case we need to combine the results obtained by querying multiple stored fragments as shown in Figs. 2b and 2c. This approach though can be time consuming, since all possible combinations of fragments may have to be considered for answering a query. Therefore, it is sound practice to consider whole views as the only candidates for materialized views [12], [15] and not fragments of them. In this paper, we follow this approach. For instance, in the example of Fig. 1, the only candidates for materialization are the p, c, and ∅ (total aggregation) views, together with the whole plane (pc view). It is easy to see that under this strategy the total number of candidate views for materialization is 2^r, where r is the number of dimensions.

Views have computational interdependencies, which can be explored in order to answer queries. A common way to represent such dependencies is the lattice notation. Skipping the formal definitions, we illustrate the notion through the example of Fig. 3. The three dimensions account for <product; customer; time>. A node in the lattice accounts for a specific view and a directed edge connecting two nodes shows the computational dependency between the specific pair of views, i.e., the pointed view can compute the other, e.g., pc can compute p. Only dependencies between views differing 1 level are shown in the lattice diagram (Fig. 3a), e.g., c can be derived from pct but there is no direct edge connecting the two views. A query is answered by different views at different costs. A widely used assumption in the OLAP literature is that the cost for querying a view is proportional to the view size [15]. Fig. 3a shows the associated query costs for a 3D lattice. We should notice that the costs increase as we move from a lower level to a higher level in the lattice. This is reasonable since higher views are normally larger. In Fig. 3b, we expand the lattice adding all the edges in the transitive closure and for each edge we attach the
cost of computing the lower view, using the upper one. Again, we should notice the relation of the computational cost to the view size, e.g., deriving the p view from pct incurs higher cost than computing p from pc, while the cheapest way to materialize the ∅ view is to calculate it from p as compared to pc and pct.

Fig. 1. An example of OLAP queries in 2D space.

Fig. 2. Using materialized views to answer queries. (a) Query answered by one view, (b) query answered by combining three views, and (c) query cannot be answered by any view combination.

Answering an OLAP query of the form:

SELECT <grouping predicates> AGG(predicate)
FROM <data list>
WHERE <selection predicates>
GROUP BY <grouping predicates>

involves the following steps: 1) the query dimensions are defined as the union of the selection and grouping predicates, 2) the view corresponding to the dimensions is located, and 3) in case the view is not cached, we check whether any of its ancestors are present and select the one with the minimum cost to answer the query. Since recomputing views from the raw data is an expensive procedure, it is common practice that the central DW always keeps the topmost view materialized, in order to be able to handle all OLAP queries [20]. We follow the same policy in the central DW but not in the proxy, since the size of the topmost view may be prohibitively large.

A well-studied problem in the database community is the view selection under storage and update constraints (see Section 6), which can be defined as: Given the query frequencies and the view sizes, select the set of views to materialize so as to minimize the total query cost under storage capacity constraints and with respect to an update window. The problem is solved with static centralized solutions that are inefficient in the Web environment. Our approach is fundamentally different since we consider a distributed environment where OLAP views are cached together with normal Web pages.

3 SYSTEM MODEL

We consider an environment consisting of an enterprise with a central DW located at
its headquarters and multiple regional departments having their own LANs. Each LAN is assumed to be connected to the Internet through a proxy server. Clients from the regional departments access the Web site of the company and issue OLAP queries as well as other Web traffic. The Web server of the company forwards the queries to the DW, fetches the results, creates the relevant WOQP and sends it back. In general, a WOQP has a static part possibly consisting of many files (e.g., HTML document, gif images), and a dynamic part consisting of the query results. Throughout the paper, we treat the static files as one composite object and assume that all WOQPs have the same static part. This is done without loss of generality, since extending the framework to account for different static parts is straightforward.

3.1 Limitations of Existing Caching Schemes

A brute force approach for caching WOQPs at a client proxy is to treat them as static HTML documents, putting an appropriate TTL (time-to-live) value. The main drawback of this strategy is that the proxy will be able to satisfy a query only if it had been submitted in the past in its exact form. For instance, a user request for the projection at each year of the volume of products sold between 2000 and 2002 will not be answered, although the proxy might have cached a WOQP referring to the volumes sold between 1999 and 2002. Treating WOQPs as normal Web pages will also affect the overall system performance when it comes to cache replacement decisions. The majority of replacement algorithms proposed in the literature [9], [17] assume that only network latency determines cache miss cost. This is not sufficient in our environment, since the processing time for answering an OLAP query at the server side is another significant factor.
Therefore, we need to develop a new cache replacement policy that can take into account both delays.

3.2 The Proposed Caching Policy

Our aim is to allow WOQP construction at the proxy using locally cached views. Active caching [10] was proposed in order to allow front-end network proxies to dynamically generate pages. A cache applet is kept together with the static part of the page and in the presence of a request the applet fetches the dynamic data from the original site and combines them with the cached static part to create the HTML document. The main benefit of this approach is that Web page construction is done close to the client and network latencies are avoided.

We implement a similar scheme as follows: The first time an OLAP query arrives at the central site, it triggers a number of different files to be sent to the client proxy:

• The WOQP answering the query.
• The static part of the WOQP.
• A cache applet.
• The view lattice diagram together with the associated query costs (Fig. 3a) and a flag indicating whether the view is materialized at the server or not.
• The id of the view used by the server to answer the query.

Fig. 3. Lattice and expanded lattice diagrams for <p; c; t> dimensions with associated query and view computing costs. (a) Query costs associated with each node and (b) costs for computing views associated with each edge.

The proxy forwards the WOQP to the end-user without caching it and caches the applet, the lattice diagram and the static part of the WOQP. Afterward, it runs the cache applet, which is responsible for deciding whether to fetch the answering view from the server or not. Subsequent queries are intercepted and the cache applet is invoked to handle them. The applet checks whether the currently cached views can answer the query at a cost lower than sending the request to the server and selects the minimum cost cached view to do so. Then, it combines the query results with the static part of the WOQP to create the answering page. In case the views currently cached in the proxy
cannot answer the query, or answering the query from the proxy is more costly than doing so from the server, the request is forwarded to the Web server. The Web server responds with the WOQP carrying the results, together with the id of the view used to satisfy the query. The WOQP is forwarded to the client without being cached and, subsequently, the applet decides whether to download the answering view or not. The alternative of sending only the query results to the proxy and constructing the WOQP there is not considered in this paper, although the model can encapsulate this case as well. We found that unless the results are very small (not common in OLAP), the additional overhead of going through two connections to reach the client instead of one nullifies any traffic gains. Moreover, it is reasonable to assume that WOQP construction at the proxy is more expensive than at the Web server (when the latter operates under normal workload) and, therefore, it should only happen when computing the query results from the locally cached views is more beneficial than redirecting the request to the Web server. If the storage left in the cache is not sufficient to store a newly arrived object (view or Web page), the proxy decides which objects to remove from the cache. In order to do so, it asks the cache applet for the benefit values of the cached views. The cache applet, the lattice diagram, and the static part of the WOQPs are never considered for possible eviction in the cache replacement phase. They are deleted from the cache only when the traffic towards the central DW falls below a threshold specified by an administrating entity.

4 CACHING VIEWS

Deriving an analytical cost model in order to decide whether to fetch a view or not is necessary. Furthermore, a suitable cache replacement strategy must be developed that takes into account both the nature of the normal Web traffic and the additional characteristics of OLAP queries.
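The applet's decision step described above can be sketched as follows (a minimal sketch; `handle_query`, `cached_views`, and `remote_cost` are illustrative names, not from the paper):

```python
def handle_query(query, cached_views, remote_cost):
    """Decide whether to answer a query from the proxy cache or forward it.

    cached_views: dict mapping view id -> (can_answer, local_cost), where
                  can_answer(query) tells whether the view can compute the
                  result and local_cost is the cost of doing so at the proxy.
    remote_cost:  latency of fetching the WOQP from the Web server,
                  including server-side query processing time.
    """
    candidates = [(cost, vid)
                  for vid, (can_answer, cost) in cached_views.items()
                  if can_answer(query)]
    if candidates:
        local_cost, vid = min(candidates)
        # Answer locally only when the cheapest cached view beats the server.
        if local_cost < remote_cost:
            return ("local", vid)
    # Otherwise forward the request; the server replies with the WOQP plus
    # the id of the view it used, and the applet may then fetch that view.
    return ("remote", None)
```

The ("remote", None) branch corresponds to the forwarding path described in the text, after which the applet separately decides whether to download the answering view.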
We tackle both problems by enhancing the GDSP (Popularity-Aware Greedy-Dual-Size) [17] algorithm to take query processing latencies into account. The resulting algorithm is referred to as VHOW (Virtual Hybrid OLAP Web). Similar enhancements are applicable to most proxy cache replacement algorithms proposed in the literature. Table 1 summarizes the notation used.

4.1 The VHOW Algorithm

Let W_i denote the ith Web page (either a normal page or a WOQP), assuming a total ordering of them, s(W_i) its size, and f(W_i) its access frequency. The basic form of the VHOW algorithm computes a benefit value B(W_i) for each page using the following formula:

B(W_i) = f(W_i) * M(W_i) / s(W_i),    (1)

where M(W_i) stands for the cost of fetching W_i from the server in case of a cache miss. In other words, B(W_i) represents the per-byte cost saved as a result of all accesses to W_i during a certain time period. The access frequency of W_i is computed as follows:

f_{j+1}(W_i) = 2^(-t/T) * f_j(W_i) + 1,    (2)

where j denotes the jth reference to W_i, t is the elapsed number of requests between the (j+1)th and jth accesses, and T is a constant controlling the rate of decay. The intuition behind (2) is to reduce the importance of past accesses. In our experiments, f_1 was set to 1/2 and T to 1/5th of the total number of requests. VHOW inherits a dynamic aging mechanism from GDSP in order to avoid cache pollution by previously popular objects. Each time a page is requested, its cumulative benefit value H(W_i) is computed by summing its benefit B(W_i) with the cumulative benefit L of the last object evicted from the cache. Thus, objects that were frequently accessed in the past but account for no recent hits are forced out of the cache, whereas, if eviction were based only on the benefit values (and not on the cumulative benefit), they would have stayed for a longer time period. Below is the basic description of VHOW in pseudocode:

L = 0
IF (W_i requested)
  IF (W_i is cached)
    H(W_i) = L + B(W_i)
  ELSE
    WHILE (available space < s(W_i)) DO
      L = min { H(W_k) : W_k are cached }
      Evict from cache W_x : H(W_x) == L
    Store W_i
    H(W_i) = L + B(W_i)

In order to compute the cost M(W_i), various functions can be chosen. For instance, by selecting M(W_i) = 1 for all W_i, the algorithm behaves like LFU. A more suitable metric is the latency for fetching an object from the server. Most research papers compute this latency as the sum of the time required to set up a connection and the actual transfer time. This is clearly not appropriate in the case of OLAP queries, since the miss penalty also depends on the query processing time at the central site, which in turn depends on which views are already materialized at the server. In the sequel, we provide a cost model to compute the miss and benefit costs for caching views at the proxy.

4.2 Cost Model

Let V be the set of views in an r-dimensional datacube (|V| = 2^r). A page W_i that arrives at the proxy is the answer to a unique query Q_i. In case W_i refers to normal Web traffic, Q_i = ∅. Let V^(P) denote the set of views currently cached at the proxy and V^(S) the ones materialized at the server. Furthermore, let V_i^(S) be the view among the set V^(S) that can answer Q_i with minimum cost, and V_i^(P) a similar view among the set V^(P). Hence, we refer to the corresponding query costs as C(V_i^(S)) and C(V_i^(P)). Moreover, let V_i^all be the view that would answer Q_i with the minimum cost if all views were materialized (either at the proxy or at the server). In case Q_i cannot be answered by V^(P), V_i^(P) = ∅ and C(V_i^(P)) = ∞. We should notice that Q_i can always be satisfied by V^(S), since the topmost view is always materialized at the central server. Moreover, if Q_i = ∅, C(V_i^(S)) = C(V_i^(P)) = 0. Let L(P→S) be the cost (in terms of latency) for establishing a connection between the proxy and the server, and T(S→P) the average transfer rate at which the server sends data to the proxy. The network latency N_i exhibited when fetching W_i from the central server is given by N_i = L(P→S) + s(W_i)/T(S→P), where s(W_i) = s(w) + s(Q_i), with s(w) denoting the size of the static part of the page and s(Q_i) the size of the query results. Finally, we
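The frequency update of (2) and the aging-based eviction loop above can be sketched as follows (the `vhow_access` function and dict-based cache are illustrative simplifications, not the paper's implementation):

```python
def update_frequency(f_prev, t, T):
    """Sliding frequency of (2): f_{j+1} = 2^(-t/T) * f_j + 1, where t is the
    number of requests elapsed since the previous access to the same page."""
    return 2 ** (-t / T) * f_prev + 1

def vhow_access(cache, key, size, benefit, capacity, L):
    """One VHOW step for page `key` with benefit B = f*M/s, as in (1).

    cache: dict key -> {'size': s, 'H': cumulative benefit}.
    L:     cumulative benefit of the last evicted object (the aging term).
    Returns the updated L.
    """
    if key in cache:                       # hit: refresh cumulative benefit
        cache[key]['H'] = L + benefit
        return L
    used = sum(o['size'] for o in cache.values())
    while capacity - used < size:          # miss: evict minimum-H objects
        victim = min(cache, key=lambda k: cache[k]['H'])
        L = cache[victim]['H']             # inflate the aging value
        used -= cache[victim]['size']
        del cache[victim]
    cache[key] = {'size': size, 'H': L + benefit}
    return L
```

Because every newly stored object gets H = L + B, an object must accumulate recent benefit above the rising floor L to survive, which is exactly how formerly popular but stale objects are forced out.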
denote the time required to construct W_i (having obtained the query results) at the central server and at the proxy by F_i^(S) and F_i^(P), respectively. In case Q_i = ∅, F_i^(S) = F_i^(P) = 0. The total cost M(W_i) of a cache miss for W_i in terms of latency is given by:

M(W_i) = C(V_i^(S)) + F_i^(S) + N_i.    (3)

Notice that, in case W_i comes from normal Web traffic, (3) is reduced to:

M(W_i) = N_i (Q_i = ∅).    (4)

Equations (3) and (4) define the miss cost for a WOQP and a normal Web page, respectively. The benefit and cumulative benefit values can then be derived using (1).

[TABLE 1: Notation Used in the Paper]

Under our scheme, we do not consider caching WOQPs, due to the ad hoc nature of OLAP queries. Concerning views, we can directly compute the benefit B(V_j) of keeping view V_j in the cache by taking the difference in total cost for answering the queries before and after a possible eviction of V_j from the cache. Let f(V_j) denote the access frequency of V_j. Since there are no direct hits for views, we use the following alternative to compute f(V_j): whenever a query Q_i arrives, the cache applet adapts the frequency of V_i^all using (2). Let A_i(V^(P), V^(S)) denote the cost for satisfying Q_i in the whole system (both proxy and server). Q_i can be answered either by V^(P) or by V^(S), depending on the relative cost difference. Thus, we end up with the following equation:

A_i(V^(P), V^(S)) = min { C(V_i^(P)) + F_i^(P), C(V_i^(S)) + F_i^(S) + N_i }.    (5)

Let s(V_j) be the size of view V_j and s̄(V_j) the average query size for queries with V_i^all = V_j. Since all queries satisfied by the same view incur the same processing cost (proportional to the view size), the benefit value of V_j can be computed as follows:

B(V_j) = Σ_{∀V_k} f(V_k) [ A_{V_k}(V^(P) − {V_j}, V^(S)) − A_{V_k}(V^(P), V^(S)) ] / s(V_j),    (6)

where A_{V_k}(V^(P), V^(S)) stands for the cost of answering in the system a query Q_i with V_i^all = V_k and s(Q_i) = s̄(V_k).

4.3 Deriving the Parameters

Here, we provide details on how to compute the parameters of (5) and (6). C(V_i^(S)) and C(V_i^(P)) are computed by finding in the lattice diagram the query costs of the
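Under assumed cost parameters, the miss cost of (3)/(4) and the system cost of (5) can be sketched as follows (function and parameter names are illustrative):

```python
def network_latency(setup_cost, page_size, transfer_rate):
    """N_i = L(P->S) + s(W_i) / T(S->P)."""
    return setup_cost + page_size / transfer_rate

def miss_cost(c_server, f_server, n_i):
    """M(W_i) as in (3); for normal Web traffic (Q_i empty) the first two
    terms are zero and the expression reduces to (4), M(W_i) = N_i."""
    return c_server + f_server + n_i

def system_cost(c_proxy, f_proxy, c_server, f_server, n_i):
    """A_i as in (5): the cheaper of answering at the proxy or the server.
    Pass c_proxy = float('inf') when no cached view can answer Q_i."""
    return min(c_proxy + f_proxy, c_server + f_server + n_i)
```

Representing "not answerable at the proxy" as an infinite C(V_i^(P)) makes (5) collapse to the server-side branch without a special case, mirroring the convention C(V_i^(P)) = ∞ in the text.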
corresponding V_i^(S) and V_i^(P) views, as described earlier. Computing C(V_i^(S)) requires each node of the cached lattice diagram to maintain two fields. The first one (materialized field) denotes whether the view is materialized at the central site or not, while the second (cached field) shows if it is cached at the proxy. Unless the central site follows a static view selection policy, we need a consistency mechanism in order to keep the materialized field up to date. The cache applet is responsible for determining which view can answer a query with the minimum cost. In order to avoid traversing the lattice upon every query arrival, each node stores two additional fields. The first (local_answering_view) shows the cached view that can answer the queries related to the node at minimum cost, while the second (remote_answering_view) keeps the id of the minimum-cost answering view at the DW. This information can be maintained efficiently when a new view is added to or deleted from the cache. Benefit calculation requires further discussion. Whenever a new query Q_i corresponding to the view V_i arrives, the benefit values of V_i and all its ancestors in the lattice must be updated (notice that the benefits of successor views in the lattice do not change, since V_i is not computable from them). Straightforward calculation of (6) requires O(|V|^2) time in the worst case (|V| is the number of views in the lattice).
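The per-node bookkeeping just described might look as follows (a sketch; the `LatticeNode` class and the `answerable_by` mapping are illustrative, not from the paper):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LatticeNode:
    view_id: int
    query_cost: float                    # cost of answering from this view
    materialized: bool = False           # materialized at the central site?
    cached: bool = False                 # cached at the proxy?
    local_answering_view: Optional[int] = None   # cheapest cached answerer
    remote_answering_view: Optional[int] = None  # cheapest answerer at the DW

def on_view_cached(nodes, answerable_by, vid):
    """When view vid enters the proxy cache, refresh local_answering_view for
    every node whose queries vid can answer (answerable_by maps a view id to
    that set of node ids), keeping the cheaper choice, so the lattice need
    not be traversed on each query arrival."""
    nodes[vid].cached = True
    for nid in answerable_by[vid]:
        cur = nodes[nid].local_answering_view
        if cur is None or nodes[vid].query_cost < nodes[cur].query_cost:
            nodes[nid].local_answering_view = vid
```

A symmetric update on eviction (rescanning the affected nodes' cached ancestors) keeps both answering-view fields consistent.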
However, we can incrementally compute the benefits in O(|V|) worst-case time by noticing that only the coefficient of V_i changes in (6). Finally, estimation of the network latency parameters can be done by keeping statistics of past downloads and predicting future latency, in a way similar to how the RTT (Round-Trip Time) is estimated by the TCP protocol [35].

4.4 Cache Admittance of Views in VHOW

Web caching algorithms consider all arriving objects for caching, stemming from the fact that Web traffic exhibits temporal locality [6]. However, when views come into question, such an approach is inadequate, since their size can be large, resulting in many objects being evicted from the cache in order to free space. To avoid this, we follow an alternative policy. When a view V_j is considered for caching at the proxy, its benefit value B(V_j) is calculated using (6) and, consequently, its cumulative benefit value H(V_j) is defined as in Section 4.1. In case there is not enough storage space left to cache V_j, instead of immediately evicting the object with the least cumulative benefit, which might still not free enough space, we calculate the aggregated cumulative benefit of a set of objects that, if deleted from the cache, would free enough space. V_j is cached only if H(V_j) is greater than this aggregated value. Fig. 4 shows a description in pseudocode of the complete VHOW caching algorithm. Deleting objects from the cache in order to fit a new view deserves further attention. The problem can be formulated as: Given a set of n objects, each of benefit b_i and size s_i, find a subset D such that Σ_{i∈D} b_i ≤ B and Σ_{i∈D} s_i ≥ S, with B, S, b_i, s_i integers. Notice that by interchanging the roles of benefit and size, we end up with the (0,1) Knapsack problem [29], the decision version of which is known to be NP-complete. (0,1) Knapsack can be solved to optimality using dynamic programming [29]; however, the method incurs unacceptable (for caching purposes) running time. Therefore, we followed a heuristic approach. We start by adding to a
candidate list D (evict_list in the pseudocode) the objects O_j of minimum cumulative benefit H(O_j), until: 1) Σ_{O_j∈D} H(O_j) ≤ H(V_i^(S)) and Σ_{O_j∈D} s(O_j) ≥ s(V_i^(S)), or 2) Σ_{O_j∈D} H(O_j) ≥ H(V_i^(S)). In the first case, the view is admitted after deleting the objects in the candidate list, while in the second case we check whether the last object added to D, say O_k, satisfies H(O_k) ≤ H(V_i^(S)) and s(O_k) ≥ s(V_i^(S)), in which case O_k is replaced with V_i^(S); otherwise, V_i^(S) is not admitted.

5 EXPERIMENTAL EVALUATION

Two series of experiments were conducted. The first aimed at investigating the throughput of a hybrid Web-OLAP proxy, while the second used simulation in order to determine the potential benefits in query cost terms.

5.1 System Throughput

Before proceeding with identifying the potential gains in query cost terms, we investigated whether augmenting a proxy with query answering capabilities has an adverse effect on the throughput of the rest of the HTTP requests. For