Cooperative Concurrency Control
CCSA Basic Vocabulary
CCSA-基础词汇CCSA基本词汇Aacceptance testing 验收测试access存取、访问access control访问控制accountability责任Accounting controls 会计控制achieve达到activity活动activity network diagram作业网络图activity-based costing(ABC)system 作业成本法activity-level objective作业层次目标Activity reports活动报告add value增加价值adequacy充分,足够;适当adequate controlAnalysis and Evaluation(IIAPreformance Standard 2320)分析和评价(IIA工作标准2320)Analysis audit procedures分析性审计程序analytical review分析性复核anonymity匿名,匿名者appearance表面application应用appraisal costs鉴定成本appreciation评价,鉴别appropriate适当的appropriation拨款,占用,盗用,挪用approve批准Approval批准Approval of risk-based plans (批准)以风险为导向的计划Approval of work programs (批准)工作方案artificial intelligence人工智能assess对??????进行评估,评价assessment评估Assessment control控制(评估)Assessment of control processes控制程序(评估)Assessment definition/timing of定义/时间安排(评估)Assessment of quality programs质量方案(评估)Assessment of risk management process风险管理过程(评估)Assets,control and ues of资产、控制和使用Assignment of authority and responsibility 权责划分assistance援助,帮助assistant辅助的,助理的assumption假设assurance确认、保证Assurance Services确认服务Assurance Services and consulting services确认服务和咨询服务Assurance Services nature of确认服务的性质Asymmetrical Digital Subscriber Line(ADSL) 非对称用户数字环线asynchronous异步attribute Standards属性标准attributes sampling属性抽样audience受众audit conclusion审计结论audit committees审计委员会communications with audit committees与审计委员会的沟通audit coverage审计覆盖面audit directors审计主管audit directors compliance with IIA’s attribute standards(审计主管)遵循IIA的属性标准duties of audit directors审计主管责任personnel responsibilities of audit directors人事责任planning by audit directors规划audit directors and policies/procedures 审计主管和政策/程序quality assurance role of audit directors 质量保障角色auditing control审计控制audit managers/supervisors审计经理/督导audit evidence审计证据audit finding审计发现audit methodology审计方法audit objectives审计目标auditors-in-charge主管审计师audit plans/planning审计计划audit procedures审计程序audit programs审计方案assessments to audit programs 评估审计方案audit recommendation审计建议audit reports审计报告audit resources审计资源audit risk审计风险audit scope审计范围audit team leaders 审计小组领导audit time budgets 审计时间预算audit trail审计踪迹audit work planning 审计工作计划audit work programs 审计工作方案auditee被审计单位authentication鉴别authority权威性,权限authorization授权avoid避免awareness意识Bbackup/restart procedure 备份/重启程序balance controls余额控制bar chart条形图,柱状图bar-code条码BBS电子公告牌benchmarking基准比较法biometric technology生物技术Board董事会(审计委员会)internal control responsibilities of board of directors(董事会的)内部控制责任bounded rationality有限理性brainstorming头脑风暴break-even盈亏平衡点bridge网桥browser浏览器bus network总线网business application systems业务应用系统see also internal control application development内部控制应用程序开发application system documentation control应用系统文本记录控制corrective controls纠正性控制data integrity controls数据完整性控制data origination/preparation/input controls 数据产生/编制/输入控制data output controls数据输出控制data processing controls数据处理控制detective controls检查性控制inventory of controls in存货控制operational application system controls操作应用系统控制preventive controls预防性控制spreadsheet controls电子数据表控制system-related file maintenance controls 系统相关文档维护控制business organizations经营组织business process reengineering业务流程再造business risk,audit risk vs.经营风险,审计风险Ccallback回拨capacity plan能力计划capital structure资本结构cause原因cause-and-effect diagrams因果图centralized processing集中处理certainty确定性certification authority(CA)证书认证中心Certified Internal Auditor(CIA) 国际注册内部审计师challenge挑战change control变更控制change management变革管理characteristics特性charter章程Check Sheet日常检查单checklists问题清单Chief Audit Executive(CAE) 首席审计执行官circumstantial evidence附属证据class类client委托人,客户,客户机client feedback mechanisms客户反馈机制closing conference结束会议cluster(block)sampling分块抽样CoCo 
model,see Criteria of control model 控制标准模式Code of Ethics职业道德规范monitoring compliance with 监控合规性coefficient of correlation相关系数coefficient of determination 决定系数cold site冷站collection收款combination controls合并控制comment观点,评论Committee委员会Committee of Sponsoring Organizations(COSO) 发起组织委员会Committee structure委员会结构communicating results沟通结果communication and internal control沟通和内部控制Compact Disc/Read-Only Memory(CD-ROM) 紧凑式只读光盘comparison比较comparison controls比较控制compensating controls 补偿性控制competency能力,胜任competent能胜任的,足够的competitive bid竞标compiler编译器complexity复杂性compliance遵循/合规性legal considerations in evaluating programs for compliance针对合规性评价方案的法律考虑事项compliance monitoring监控compliance audit合规性审计compliance test符合性测试computation controls计算控制computer-assisted audit technique计算机辅助审计技术computer controls计算机控制concerns关注点concise简洁conclusive evidence确证证据concurrency并发concurrent access controls 并行存取控制concurrent control并行控制condition条件conditional probability 条件概率confidence interval置信区间confidence level置信水平confidentiality保密confirmation函证conflicts of interest利益冲突、关注焦点consensus testing一致性测试consistent with与??????一致constructive建设性Consulting Services咨询服务contemporary management controls 现代管理控制contingency design权变理论contingency plan应急计划control控制self-assessment of control自我评估control activities控制活动control assessment控制评估control breakdowns控制崩溃control chart控制图control environment控制环境control files控制文档control report控制报告Control Self-assessment控制自我评价Control Self-assessment(CSA) model控制自我评估(CSA)模式control-based以控制为基础的controller会计部主任cooperation协作corporate governance公司治理corporate governance CoCo model控制标准(CoCo)模式corporate governance control self-assessment model控制自我评估模式corporate governance COCO’s internal control COCO的内部控制corporate governance separation of ownership from control源自控制的所有权分离corrective action纠正行动corrective controls纠正性控制corroborative evidence佐证证据COSO internal control modelCOSO内部控制模式cost of assurance保证成本cost of quality质量成本cost-volume-profit analysis 本-量-利分析credit committee信贷委员会criteria标准Critical Path Method(CPM) 关键路径法critical thinking关键思考cross-referencing交叉索引current当前、流动current ratio流动比率Cyclical Redundancy Checking(CRC) 循环冗余检验Ddata mining数据挖掘data processing controls数据处理控制data synthesis数据统计database数据库deadly embrace死锁debugger调试器decentralized processing 分散处理decision-tree analysis决策树分析decline谢绝,下降deduction扣除额;演绎;推论deficiency缺陷delegation授权Delphi techniques 德尔菲技术departmentalization 部门化depend upon取决于dependency check 相关性检验design设计detailed testing详细测试detect发掘,侦查detective control检查型控制diagnostic诊断difference estimation sampling 差额估计抽样digital certificate数字证书Digital Data Network(DDN) 数字数据网络digital signature数字签名direct cutover strategy直接转换策略direct evidence直接证据direct-access file直接存取文件directed sampling定向抽样directive control指导型控制disclosure披露discount折扣discovery sampling发现抽样discretionary access control 自主访问控制discriminate analysis 辨别分析disk磁盘disk utility磁盘工具distributed processing 分布处理distribution分发diversify分散,多样化division of labor劳动分工divisional structure分布型结构documental information 文件信息dollar-unit sampling货币单位抽样download下载dounsizing降型化draft草稿Due Professional Care 应有的职业审慎性dumb terminal哑终端Dynamic Link Libraries(DLL)动态链接库dynamic programming动态规划EE-commerce activities电子商务活动risk/control issues with E-commerce activities 电子商务活动风险/控制问题Economic Order Quantity(EOQ)经济订货量effect效果effectiveness有效性Electronic Data Interchange(EDI) 电子数据转换Electronic Data Process(EDP)电子数据处理Electronic Funds Transfer(EFT) 电子资金转账electronic voting电子投票embedded audit module嵌入式审计模块employees雇员responsibilities of employees雇员职责empowerment授权endorse背书end-user终端用户End-User Computing(EUC) 
终端用户计算engagement审计业务engagement area审计业务范围engagement client审计业务客户engagement conclusion审计业务结论engagement information审计业务信息engagement observation审计业务观察结果engagement recommendation 审计业务建议engagement result审计业务结果enhance提高、加强environmental audits环境审计equal-weight加权平均ethics伦理/道德ethics monitoring compliance with code of conduct对行为规范遵循性的监控evaluate评估,评价evidence证据exception report例外报告existence check存在性检验exit conference退出会议expectation期望expected value期望值expert system专家系统expertise专长exponential smoothing 指数平滑exposure暴露extent延伸external assessment外部评估external information外部信息external-internal information 外-内信息Ffacilitated team workshops 推动型专题讨论会fail-soft protection故障弱化保护failure cost失败成本FDIC Improvement Act联邦存款保险公司改进法feasibility可行性feedback payments反馈控制feed-forward control前馈控制field字段final audit report最终审计报告final engagement communication 最终审计业务沟通financial control systems财务控制系统firewall防火墙fishbone diagram鱼骨图flat file平面文件flat organizations扁平化组织flexible budgeting弹性预算flowcharting流程图focus groups专题小组follow-up by internal auditors 后续审计follow -up后续follow -up review 跟踪检查format check格式检验Fragmentation 分割frame relay帧中继framework框架fraud舞弊fraud auditing 舞弊审计。
Concurrent control manner
Patent title: Concurrent control manner
Inventors: Peter A. Franaszek (ピーター・エイ・フラナシェク), John Timothy Robinson (ジョン・ティモシー・ロビンソン), Alexander Thomasian (アレクサンダー・トマシアン)
Application number: Japanese patent application Hei 3-320458
Filing date: December 4, 1991
Publication number: JP 2531881 B2
Publication date: September 4, 1996
Abstract: A wait depth limited concurrency control method for use in a multi-user data processing environment restricts the depth of the waiting tree to a predetermined depth, taking into account the progress made by transactions in conflict resolution. In the preferred embodiment for a centralized transaction processing system, the waiting depth is limited to one. Transaction-specific information represented by a real-valued function L, where for each transaction T in the system at any instant in time L(T) provides a measure of the current "length" of the transaction, is used to determine which transaction is to be restarted in case of a conflict between transactions resulting in a wait tree depth exceeding the predetermined depth. L(T) may be the number of locks currently held by a transaction T, the maximum of the number of locks held by any incarnation of transaction T, including the current one, or the sum of the number of locks held by each incarnation of transaction T up to the current one. In a distributed transaction processing system, L(T) is based on time, wherein each global transaction is assigned a starting time, and this starting time is included in the startup message for each subtransaction, so that the starting time of the global transaction is locally known at any node executing one of its subtransactions.
Applicant: International Business Machines Corporation (インターナショナル・ビジネス・マシーンズ・コーポレイション)
Address: Armonk, New York 10504, United States (no street number)
Nationality: US
Agent: 合田 潔 (and 2 others)
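To make the conflict-resolution rule concrete, the sketch below illustrates, in C# and purely as an illustration of the abstract, how a wait-depth limit of one might be enforced: when a conflict would make the waiting tree deeper than one, the transaction with the smaller L(T), taken here as the number of locks currently held, is chosen for restart. The class and member names are invented and this is a simplification, not the patented implementation.

// Illustrative sketch only; names and structure are assumptions, not the patent's method.
class Transaction
{
    public int Id;
    public int LocksHeld;            // L(T): number of locks currently held by T
    public Transaction WaitingFor;   // the transaction T is blocked on, if any
    public bool HasWaiter;           // true if some other transaction is already waiting on T
}

static class WaitDepthLimitedResolver
{
    // Called when 'requester' requests a lock held by 'holder'.
    // Returns the transaction chosen for restart, or null if waiting keeps the depth at most 1.
    public static Transaction Resolve(Transaction requester, Transaction holder)
    {
        // The wait tree would exceed depth 1 if the holder is itself waiting,
        // or if someone is already waiting on the requester.
        bool depthWouldExceedOne = holder.WaitingFor != null || requester.HasWaiter;

        if (!depthWouldExceedOne)
        {
            requester.WaitingFor = holder;
            holder.HasWaiter = true;
            return null;             // the wait is permitted
        }

        // Otherwise restart the transaction that has made less progress,
        // measured here by L(T) = number of locks held.
        return requester.LocksHeld < holder.LocksHeld ? requester : holder;
    }
}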
Optimistic Locking
Optimistic concurrency control
From Wikipedia, the free encyclopedia

In the field of relational database management systems, optimistic concurrency control (OCC) is a concurrency control method that assumes that multiple transactions can complete without affecting each other, and that therefore transactions can proceed without locking the data resources that they affect. Before committing, each transaction verifies that no other transaction has modified its data. If the check reveals conflicting modifications, the committing transaction rolls back.[1]

OCC is generally used in environments with low data contention. When conflicts are rare, transactions can complete without the expense of managing locks and without having transactions wait for other transactions' locks to clear, leading to higher throughput than other concurrency control methods. However, if conflicts happen often, the cost of repeatedly restarting transactions hurts performance significantly; other concurrency control methods have better performance under these conditions.

OCC phases
More specifically, OCC transactions involve these phases:
▪ Begin: Record a timestamp marking the transaction's beginning.
▪ Modify: Read and write database values.
▪ Validate: Check whether other transactions have modified data that this transaction has used (read or written). Always check transactions that completed after this transaction's start time. Optionally, check transactions that are still active at validation time.
▪ Commit/Rollback: If there is no conflict, make all changes part of the official state of the database. If there is a conflict, resolve it, typically by aborting the transaction, although other resolution schemes are possible.

Web usage
The stateless nature of HTTP makes locking infeasible for web user interfaces. It is common for a user to start editing a record, then leave without following a "cancel" or "logout" link. If locking is used, other users who attempt to edit the same record must wait until the first user's lock times out.

HTTP does provide a form of built-in OCC, using the ETag and If-Match headers.[2]

Some database management systems offer OCC natively, without requiring special application code. For others, the application can implement an OCC layer outside of the database and avoid waiting for or silently overwriting records. In such cases, the form includes a hidden field with the record's original content, a timestamp, a sequence number, or an opaque token. On submit, this is compared against the database. If it differs, the conflict resolution algorithm is invoked.

Examples
▪ MediaWiki's edit pages use OCC.[3]
▪ Bugzilla uses OCC; conflicts are called "mid-air collisions".[4]
▪ The Ruby on Rails framework has an API for OCC.[5]
▪ The Grails framework uses OCC in its default conventions.[6]
▪ Most revision control systems support the "merge" model for concurrency, which is OCC.
▪ Mimer SQL is a DBMS that only implements optimistic concurrency control.[7]
▪ Google App Engine's data store uses OCC.[8]
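A common way to implement the validate-and-commit step outside the database is a single conditional UPDATE that succeeds only if the row still carries the version the client originally read. The C# sketch below is an illustration only (the table, column, and parameter names are assumptions, not from the article); it returns false when another transaction has changed the row in the meantime, which is the caller's cue to invoke conflict resolution, typically a retry or an error shown to the user.

using System.Data.SqlClient;

static class OccExample
{
    // Returns true if the update was applied; false means the row's version changed
    // since it was read, i.e. a concurrent modification was detected.
    public static bool TryUpdateTitle(SqlConnection conn, int articleId, long expectedVersion, string newTitle)
    {
        const string sql =
            "UPDATE Articles " +
            "SET Title = @title, Version = Version + 1 " +
            "WHERE Id = @id AND Version = @expectedVersion";   // validate and commit in one statement

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@title", newTitle);
            cmd.Parameters.AddWithValue("@id", articleId);
            cmd.Parameters.AddWithValue("@expectedVersion", expectedVersion);

            // 0 rows affected means the version changed after it was read: a conflict.
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}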
Data Access Patterns: Data Concurrency Control
1. Introduction to Data Concurrency Control

Data Concurrency Control is the mechanism used to handle multiple modifications being made to the same persisted business object at the same time. When several users modify the state of a business object and try to persist it to the database concurrently, some mechanism is needed to ensure that one user does not adversely affect the transactional state of another concurrent user. There are two forms of concurrency control: optimistic and pessimistic. Optimistic concurrency control assumes that no problems will arise when multiple users modify the state of a business object at the same time; this is also known as "last change wins". For some systems this is perfectly reasonable behavior. But if the state of the business object needs to stay consistent with the state that was originally read from the database, pessimistic concurrency control is required. Pessimistic concurrency control comes in several styles: the data table can be locked once the record has been retrieved, or a copy of the business object's original contents can be kept and compared against the version in the data store before the update is applied. This ensures that the record has not been modified during the transaction.
2. Implementation Examples of Data Concurrency Control

There are two common ways to implement data concurrency control: in the database itself, or in application code. For pessimistic concurrency control, the database-side implementation can rely on the database's locking mechanism, while the code-side implementation can add a field that stores a version number used for comparison between versions. The version number is used to check whether the business entity has been modified since it was retrieved from the database: on update, the entity's version number is compared with the version number currently in the database before the change is committed, which ensures that the entity has not been modified after it was read. When version numbers are used for this style of concurrency control, the version can either be the timestamp data type provided by the database or a version managed in application code. A database timestamp column is given a new value on every update operation.

1) Timestamp-based version control
In SQL Server, the timestamp type maps to the C# byte[] type; a timestamp column value is a byte[8].
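To make the byte[8] comparison concrete, the sketch below (the table, column, and connection names are illustrative assumptions, not taken from the original post) passes the timestamp value read at retrieval time back unchanged and compares it in the UPDATE's WHERE clause; SQL Server assigns the column a new value automatically whenever the row changes.

using System;
using System.Data.SqlClient;

static class RowVersionExample
{
    // Assumes a table such as:
    //   CREATE TABLE People (ID uniqueidentifier PRIMARY KEY, FirstName nvarchar(50),
    //                        LastName nvarchar(50), RowVer timestamp)
    public static bool TryUpdateLastName(SqlConnection conn, Guid id, byte[] originalRowVer, string lastName)
    {
        const string sql =
            "UPDATE People SET LastName = @lastName " +
            "WHERE ID = @id AND RowVer = @originalRowVer";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@lastName", lastName);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@originalRowVer", originalRowVer);   // the byte[8] read earlier

            // 0 rows affected means another update changed the row (and its RowVer) first.
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}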
2) Version control implemented in program code

Code structure:

EntityBase.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace DataAccessPatterns.DataConcurrencyControl.Model
{
    public abstract class EntityBase
    {
        public Guid Version { get; set; }
    }
}

Person.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace DataAccessPatterns.DataConcurrencyControl.Model
{
    public class Person : EntityBase
    {
        public Guid ID { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}

IPersonRepository.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using DataAccessPatterns.DataConcurrencyControl.Model;

namespace DataAccessPatterns.DataConcurrencyControl.Repository
{
    public interface IPersonRepository
    {
        void Add(Person person);
        void Save(Person person);
        Person FindBy(Guid id);
    }
}

PersonRepository.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using DataAccessPatterns.DataConcurrencyControl.Model;

namespace DataAccessPatterns.DataConcurrencyControl.Repository
{
    // DataAccessPatternsContext is the Entity Framework context (exposing a Persons set)
    // used by the original post; its definition is not included in this excerpt.
    public class PersonRepository : IPersonRepository
    {
        public void Add(Person person)
        {
            using (var context = new DataAccessPatternsContext())
            {
                context.Persons.Add(person);
                context.SaveChanges();
            }
        }

        public void Save(Person person)
        {
            // person.Version is the version that was read when the entity was retrieved;
            // the Version field keeps the value it had at retrieval time.
            // (The body below is reconstructed from the post's description; the original
            // listing is truncated at this point.)
            using (var context = new DataAccessPatternsContext())
            {
                var current = context.Persons.FirstOrDefault(p => p.ID == person.ID);
                if (current == null)
                    throw new InvalidOperationException("The person no longer exists.");
                if (current.Version != person.Version)
                    throw new InvalidOperationException("The record was modified by another user.");

                current.FirstName = person.FirstName;
                current.LastName = person.LastName;
                current.Version = Guid.NewGuid();   // new version for subsequent checks
                context.SaveChanges();
            }
        }

        public Person FindBy(Guid id)
        {
            using (var context = new DataAccessPatternsContext())
            {
                return context.Persons.FirstOrDefault(p => p.ID == id);
            }
        }
    }
}
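A short usage sketch of the repository above (illustrative only; someId stands for the identifier of an existing person): a stale Version causes Save to throw, which is the caller's signal to re-read the entity and retry, or to surface the conflict to the user.

var repository = new PersonRepository();
Person person = repository.FindBy(someId);   // the current Version is captured here
person.LastName = "Smith";
try
{
    repository.Save(person);                 // succeeds only if Version is unchanged in the database
}
catch (InvalidOperationException)
{
    // Conflict: reload the entity, merge or discard the changes, and retry if appropriate.
}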
The Most Complete List of Common Programming English Vocabulary
In practice, the great majority of jobs in China do not really demand strong English, and programming is no exception. If you only aim to reach an average or reasonably good level, you do not necessarily need fluent English. Still, I generally advise programmers to learn English well: once you get over that hurdle you will find a completely different world, and you will see that the old confusion really was... Below are the English words commonly used in programming.
按字母索引A英⽂译法 1 译法 2 译法 3a block of pointers ⼀块指针⼀组指针abbreviation 缩略语abstract 抽象的abstract syntax tree, AST 抽象语法树abstraction 抽象abstraction barrier 抽象屏障抽象阻碍abstraction of function calls 函数调⽤抽象access 访问存取access function 访问函数存取函数accumulator 累加器activate 激活ad hoc 专设adapter 适配器address 地址algebraic data type 代数数据类型algorithm 算法alias 别名allocate 分配配置alternative 备选amortized analysis 平摊分析anaphoric 指代annotation 注解anonymous function 匿名函数antecedent 前提前件先决条件append 追加拼接application 应⽤应⽤程序application framework 应⽤框架application program interface, API 应⽤程序编程接⼝application service provider, ASP 应⽤程序服务提供商applicative 应⽤序argument 参数⾃变量实际参数/实参arithmetic 算术array 数组artificial intelligence, AI ⼈⼯智能assemble 组合assembly 汇编assignment 赋值assignment operator 赋值操作符associated 关联的association list, alist 关联列表atom 原⼦atomic 原⼦的atomic value 原⼦型值attribute 属性特性augmented 扩充automatic memory management ⾃动内存管理automatically infer ⾃动推导autometa theory ⾃动机理论auxiliary 辅助B英⽂译法 1 译法 2 译法 3backquote 反引⽤backtrace 回溯backward compatible 向下兼容bandwidth 带宽base case 基本情形base class 基类Bayes' theorem 贝叶斯定理best viable function 最佳可⾏函式最佳可⾏函数Bezier curve 贝塞尔曲线bignum ⼤数binary operator ⼆元操作符binary search ⼆分查找⼆分搜索⼆叉搜索binary search tree ⼆叉搜索树binary tree ⼆叉树binding 绑定binding vector 绑定向量bit 位⽐特bit manipulation 位操作black box abstraction ⿊箱抽象block 块区块block structure 块结构区块结构block name 代码块名字Blub paradox Blub 困境body 体主体boilerplate 公式化样板bookkeeping 簿记boolean 布尔border 边框bottom-up design ⾃底向上的设计bottom-up programming ⾃底向上编程bound 边界bounds checking 边界检查box notation 箱⼦表⽰法brace 花括弧花括号bracket ⽅括弧⽅括号branch 分⽀跳转breadth-first ⼴度优先breadth-first search, BFS ⼴度优先搜索breakpoint 断点brevity 简洁buffer 缓冲区buffer overflow attack 缓冲区溢出攻击bug 臭⾍building 创建built-in 内置byte 字节bytecode 字节码C英⽂译法 1 译法 2 译法 3cache 缓存call 调⽤callback 回调CamelCase 驼峰式⼤⼩写candidate function 候选函数capture 捕捉case 分⽀character 字符checksum 校验和child class ⼦类choke point 滞塞点chunk 块circular definition 循环定义clarity 清晰class 类类别class declaration 类声明class library 类库client 客户客户端clipboard 剪贴板clone 克隆closed world assumption 封闭世界假定closure 闭包clutter 杂乱code 代码code bloat 代码膨胀collection 收集器复合类型column ⾏栏column-major order ⾏主序comma 逗号command-line 命令⾏command-line interface, CLI 命令⾏界⾯Common Lisp Object System, CLOS Common Lisp 对象系统Common Gateway Interface, CGI 通⽤⽹关接⼝compatible 兼容compilation 编译compilation parameter 编译参数compile 编译compile inline 内联编译compile time 编译期compiled form 编译后的形式compiler 编译器complex 复杂complexity 复杂度compliment 补集component 组件composability 可组合性composition 组合组合函数compound value 复合数据复合值compression 压缩computation 计算computer 计算机concatenation 串接concept 概念concrete 具体concurrency 并发concurrent 并发conditional 条件式conditional variable 条件变量configuration 配置connection 连接cons 构造cons cell 构元 cons 单元consequent 结果推论consistent ⼀致性constant 常量constraint 约束constraint programming 约束式编程container 容器content-based filtering 基于内容的过滤context 上下⽂语境环境continuation 延续性continuous integration, CI 持续集成control 控件cooperative multitasking 协作式多任务copy 拷贝corollary 推论coroutine 协程corruption 程序崩溃crash 崩溃create 创建crystallize 固化curly 括弧状的curried 柯⾥的currying 柯⾥化cursor 光标curvy 卷曲的cycle 周期D英⽂译法 1 译法 2 译法 3dangling pointer 迷途指针野指针Defense Advanced Research Projects Agency, DARPA 美国国防部⾼级研究计划局data 数据data structure 数据结构data type 数据类型data-driven 数据驱动database 数据库database schema 数据库模式datagram 数据报⽂dead lock 死锁debug 调试debugger 调试器debugging 调试declaration 声明declaration forms 声明形式declarative 声明式说明式declarative knowledge 声明式知识说明式知识declarative programming 声明式编程说明式编程declarativeness 可声明性declaring 声明deconstruction 解构deduction 推导推断default 缺省默认defer 推迟deficiency 缺陷不⾜define 定义definition 定义delegate 委托delegationdellocate 
释放demarshal 散集deprecated 废弃depth-first 深度优先depth-first search, BFS 深度优先搜索derived 派⽣derived class 派⽣类design pattern 设计模式designator 指⽰符destructive 破坏性的destructive function 破坏性函数destructuring 解构device driver 硬件驱动程序dimensions 维度directive 指令directive 指⽰符directory ⽬录disk 盘dispatch 分派派发distributed computing 分布式计算DLL hell DLL 地狱document ⽂档dotted list 点状列表dotted-pair notation 带点尾部表⽰法带点尾部记法duplicate 复本dynamic binding 动态绑定dynamic extent 动态范围dynamic languages 动态语⾔dynamic scope 动态作⽤域dynamic type 动态类型E英⽂译法 1 译法 2 译法 3effect 效果efficiency 效率efficient ⾼效elaborateelucidatingembedded language 嵌⼊式语⾔emulate 仿真encapsulation 封装enum 枚举enumeration type 枚举类型enumrators 枚举器environment 环境equal 相等equality 相等性equation ⽅程equivalence 等价性error message 错误信息error-checking 错误检查escaped 逃脱溢出escape character 转义字符evaluate 求值评估evaluation 求值event 事件event driven 事件驱动exception 异常exception handling 异常处理exception specification 异常规范exit 退出expendable 可扩展的explicit 显式exploratory programming 探索式编程export 导出引出expression 表达式expressive power 表达能⼒extensibility 可扩展性extent 范围程度external representation 外部表⽰法extreme programming 极限编程F英⽂译法 1 译法 2 译法 3factorial 阶乘family (类型的)系feasible 可⾏的feature 特⾊field 字段栏位file ⽂件file handle ⽂件句柄fill pointer 填充指针fineo-grained 细粒度firmware 固件first-class 第⼀类的第⼀级的⼀等的first-class function 第⼀级函数第⼀类函数⼀等函数first-class object 第⼀类的对象第⼀级的对象⼀等公民fixed-point 不动点fixnum 定长数定点数flag 标记flash 闪存flexibility 灵活性floating-point 浮点数floating-point notation 浮点数表⽰法flush 刷新fold 折叠font 字体force 迫使form 形式form 表单formal parameter 形参formal relation 形式关系forward 转发forward referencesfractal 分形fractions 派系framework 框架freeware ⾃由软件function 函数function literal 函数字⾯常量function object 函数对象functional arguments 函数型参数functional programming 函数式编程functionality 功能性G英⽂译法 1 译法 2 译法 3game 游戏garbage 垃圾garbage collection 垃圾回收garbage collector 垃圾回收器generalized 泛化generalized variable ⼴义变量generate ⽣成generator ⽣成器generic 通⽤的泛化的generic algorithm 通⽤算法泛型算法generic function 通⽤函数generic programming 通⽤编程泛型编程genrative programming ⽣产式编程global 全局的global declaration 全局声明glue program 胶⽔程序goto 跳转graphical user interface, GUI 图形⽤户界⾯greatest common divisor 最⼤公因数Greenspun's tenth rule 格林斯潘第⼗定律H英⽂译法 1 译法 2 译法 3hack 破解hacker ⿊客handle 处理器处理程序句柄hard disk 硬盘hard-wirehardware 硬件hash tables 哈希表散列表header 头部header file 头⽂件heap 堆helper 辅助函数辅助⽅法heuristic 启发式high-order ⾼阶higher-order function ⾼阶函数higher-order procedure ⾼阶过程hyperlink 超链接HyperText Markup Language, HTML 超⽂本标记语⾔HyperText Transfer Protocol, HTTP 超⽂本传输协议I英⽂译法 1 译法 2 译法 3identical ⼀致identifier 标识符ill type 类型不正确illusion 错觉imperative 命令式imperative programming 命令式编程implement 实现implementation 实现implicit 隐式import 导⼊incremental testing 增量测试indent 缩排缩进indentation 缩排缩进indented 缩排缩进indention 缩排缩进infer 推导infinite loop ⽆限循环infinite recursion ⽆限递归infinite precision ⽆限精度infix 中序information 信息information technology, IT 信息技术inheritance 继承initialization 初始化initialize 初始化inline 内联inline expansion 内联展开inner class 内嵌类inner loop 内层循环input 输⼊instances 实例instantiate 实例化instructive 教学性的instrument 记录仪integer 整数integrate 集成interactive programming environment 交互式编程环境interactive testing 交互式测试interacts 交互interface 接⼝intermediate form 过渡形式中间形式internal 内部internet 互联⽹因特⽹interpolation 插值interpret 解释interpreter 解释器interrupt 中⽌中断intersection 交集inter-process communication, IPC 进程间通信invariants 约束条件invoke 调⽤item 项iterate 迭代iteration 迭代的iterative 迭代的iterator 迭代器J英⽂译法 1 译法 2 译法 3jagged 锯齿状的job control language, JCL 作业控制语⾔judicious 明智的K英⽂译法 1 译法 2 译法 3kernel 核⼼kernel language 核⼼语⾔keyword argument 关键字参数keywords 关键字kludge 蹩脚L英⽂译法 1 译法 2 译法 3larval startup 雏形创业公司laser 激光latitudelayout 
版型lazy 惰性lazy evaluation 惰性求值legacy software 历史遗留软件leverage 杠杆 (动词)利⽤lexical 词法的lexical analysis 词法分析lexical closure 词法闭包lexical scope 词法作⽤域Language For Smart People, LFSP 聪明⼈的语⾔library 库函数库函式库lifetime ⽣命期linear iteration 线性迭代linear recursion 线性递归link 链接连接linker 连接器list 列表list operation 列表操作literal 字⾯literal constant 字⾯常量literal representation 字⾯量load 装载加载loader 装载器加载器local 局部的局域的local declarations 局部声明local function 局部函数局域函数local variable 局部变量局域变量locality 局部性loop 循环lvalue 左值Mmachine instruction 机器指令machine language 机器语⾔machine language code 机器语⾔代码machine learning 机器学习macro 宏mailing list 邮件列表mainframes ⼤型机maintain 维护manifest typing 显式类型manipulator 操纵器mapping 映射mapping functions 映射函数marshal 列集math envy 对数学家的妒忌member 成员memorizing 记忆化memory 内存memory allocation 内存分配memory leaks 内存泄漏menu 菜单message 消息message-passing 消息传递meta- 元-meta-programming 元编程metacircular 元循环method ⽅法method combination ⽅法组合⽅法组合机制micro 微middleware 中间件migration (数据库)迁移minimal network 最⼩⽹络mirror 镜射mismatch type 类型不匹配model 模型modifier 修饰符modularity 模块性module 模块monad 单⼦monkey patch 猴⼦补丁monomorphic type language 单型语⾔Moore's law 摩尔定律mouse ⿏标multi-task 多任务multiple values 多值mutable 可变的mutex 互斥锁Multiple Virtual Storage, MVS 多重虚拟存储N英⽂译法 1 译法 2 译法 3namespace 命名空间native 本地的native code 本地码natural language ⾃然语⾔natural language processing ⾃然语⾔处理nested 嵌套nested class 嵌套类network ⽹络newline 换⾏新⾏non-deterministic choice ⾮确定性选择non-strict ⾮严格non-strict evaluation ⾮严格求值nondeclarativenondestructive version ⾮破坏性的版本number crunching 数字密集运算O英⽂译法 1 译法 2 译法 3object 对象object code ⽬标代码object-oriented programming ⾯向对象编程Occam's razor 奥卡姆剃⼑原则on the fly 运⾏中执⾏时online 在线open source 开放源码operand 操作对象operating system, OS 操作系统operation 操作operator 操作符optimization 优化optimization of tail calls 尾调⽤优化option 选项optional 可选的选择性的optional argument 选择性参数ordinary 常规的orthogonality 正交性overflow 溢出overhead 额外开销overload 重载override 覆写P英⽂译法 1 译法 2 译法 3package 包pair 点对palindrome 回⽂paradigm 范式parallel 并⾏parallel computer 并⾏计算机param 参数parameter 参数形式参数/形参paren-matching 括号匹配parent class ⽗类parentheses 括号Parkinson's law 帕⾦森法则parse tree 解析树分析树parser 解析器partial application 部分应⽤partial applied 分步代⼊的partial function application 部分函数应⽤particular ordering 部分有序pass by adress 按址传递传址pass by reference 按引⽤传递传引⽤pass by value 按值传递传值path 路径patternpattern match 模式匹配perform 执⾏performance 性能performance-criticalpersistence 持久性phrenology 相⾯physical 物理的pipe 管道pixel 像素placeholder 占位符planning 计画platform 平台pointer 指针pointer arithmetic 指针运算poll 轮询polymorphic 多态polymorphism 多态polynomial 多项式的pool 池port 端⼝portable 可移植性portal 门户positional parameters 位置参数precedence 优先级precedence list 优先级列表preceding 前述的predicate 判断式谓词preemptive multitasking 抢占式多任务premature design 过早设计preprocessor 预处理器prescribe 规定prime 素数primitive 原语primitive recursive 主递归primitive type 原⽣类型principal type 主要类型print 打印printed representation 打印表⽰法printer 打印机priority 优先级procedure 过程procedurual 过程化的procedurual knowledge 过程式知识process 进程process priority 进程优先级productivity ⽣产⼒profile 评测profiler 评测器性能分析器programmer 程序员programming 编程programming language 编程语⾔project 项⽬prompt 提⽰符proper list 正规列表property 属性property list 属性列表protocol 协议pseudo code 伪码pseudo instruction 伪指令purely functional language 纯函数式语⾔pushdown stack 下推栈Q英⽂译法 1 译法 2 译法 3qualified 修饰的带前缀的qualifier 修饰符quality 质量quality assurance, QA 质量保证query 查询query language 查询语⾔queue 队列quote 引⽤quoted form 引⽤形式R英⽂译法 1 译法 2 译法 3race condition 条件竞争竞态条件radian 弧度Redundant Array of Independent Disks, RAID 冗余独⽴磁盘阵列raise 引起random number 随机数range 范围区间rank (矩阵)秩排名rapid prototyping 快速原型开发rational database 关系数据库raw 未经处理的read 
读取read-evaluate-print loop, REPL 读取-求值-打印循环read-macro 读取宏record 记录recursion 递归recursive 递归的recursive case 递归情形reference 引⽤参考referential transparency 引⽤透明refine 精化reflection 反射映像register 寄存器registry creep 注册表蠕变regular expression 正则表达式represent 表现request 请求resolution 解析度resolve 解析rest parameter 剩余参数return 返回回车return value 返回值reuse of software 代码重⽤right associative 右结合Reduced Instruction Set Computer, RISC 精简指令系统计算机robust 健壮robustness 健壮性鲁棒性routine 例程routing 路由row-major order 列主序remote procedure call, RPC 远程过程调⽤run-length encoding 游程编码run-time typing 运⾏期类型runtime 运⾏期rvalue 右值S英⽂译法 1 译法 2 译法 3S-expression S-表达式save 储存Secure Sockets Layer, SSL 安全套接字层scaffold 脚⼿架鹰架scalar type 标量schedule 调度scheduler 调度程序scope 作⽤域SCREAMING_SNAKE_CASE 尖叫式蛇底⼤写screen 屏幕scripting language 脚本语⾔search 查找搜寻segment of instructions 指令⽚段semantics 语义semaphore 信号量semicolon 分号sequence 序列sequential 循序的顺序的sequential collection literalsserial 串⾏serialization 序列化series 串⾏级数server 服务器shadowing 隐蔽了sharp 犀利的sharp-quote 升引号shortest path 最短路径SICP 《计算机程序的构造与解释》side effect 副作⽤signature 签名simple vector 简单向量simulate 模拟Single Point of Truth, SPOT 真理的单点性single-segment 单段的sketch 草图初步框架slash 斜线slot 槽smart pointer 智能指针snake_case 蛇底式⼩写snapshot 屏幕截图socket 套接字software 软件solution ⽅案source code 源代码space leak 内存泄漏spaghetti ⾯条式代码意⾯式代码spaghetti stack 意⾯式栈⾯条式栈spam 垃圾邮件spec 规格special form 特殊形式special variable 特殊变量specialization 特化specialize 特化specialized array 特化数组specification 规格说明规范splitter 切分窗⼝sprite 精灵图square 平⽅square root 平⽅根squash 碰撞stack 栈stack frame 栈帧stakeholderstandard library 标准函式库state machine 状态机statement 陈述语句static type 静态类型static type system 静态类型系统status 状态store 保存stream 流strict 严格strict evaluation 严格求值string 字串字符串string template 字串模版strong type 强类型structural recursion 结构递归structured values 结构型值subroutine ⼦程序subset ⼦集substitution 代换substitution model 代换模型subtype ⼦类型superclass 基类superfluous 多余的supertype 超集support ⽀持suspend 挂起swapping values 交换变量的值symbol 符号symbolic computation 符号计算syntax 语法system administrator 系统管理员system administrator disease 系统管理员综合症System Network Architecture, SNA 系统⽹络体系T英⽂译法 1 译法 2 译法 3(database)table 数据表table 表格tag 标签标记tail-recursion 尾递归tail-recursive 尾递归的TAOCP 《计算机程序设计艺术》target ⽬标taxable operators 需节制使⽤的操作符taxonomy 分类法template 模版temporary object 临时对象testing 测试text ⽂本text file ⽂本⽂件thread 线程thread safe 线程安全three-valued logic 三值逻辑throw 抛出丢掷引发throwaway program ⼀次性程序timestamp 时间戳token 词法记号语义单位语元top-down design ⾃顶向下的设计top-level 顶层trace 追踪trailing space ⾏尾空⽩transaction 事务transition network 转移⽹络transparent 透明的traverse 遍历tree 树tree recursion 树形递归trigger 触发器tuple 元组Turing machine 图灵机Turing complete 图灵完备typable 类型合法type 类型type constructor 类构造器type declaration 类型声明type hierarchy 类型层级type inference 类型推导type name 类型名type safe 类型安全type signature 类型签名type synonym 类型别名type variable 类型变量typing 类型指派输⼊U英⽂译法 1 译法 2 译法 3user interface, UI ⽤户界⾯unary ⼀元的underflow 下溢unification 合⼀统⼀union 并集universally quantify 全局量化unqualfied 未修饰的unwindinguptime 运⾏时间Uniform Resource Locator, URL 统⼀资源定位符user ⽤户utilities 实⽤函数V英⽂译法 1 译法 2 译法 3validate 验证validator 验证器value constructor 值构造器vaporware 朦胧件variable 变量variable capture 变量捕捉variadic input 可变输⼊variant 变种venture capitalist, VC 风险投资商vector 向量viable function 可⾏函数video 视频view 视图virtual function 虚函数virtual machine 虚拟机virtual memory 虚内存volatile 挥发vowel 元⾳W英⽂译法 1 译法 2 译法 3warning message 警告信息web server ⽹络服务器weight 权值权重well type 类型正确wildcard 通配符window 窗⼝word 单词字wrapper 包装器包装What You See Is What You Get, WYSIWYG 所见即所得What You See Is What You Want, WYSIWYW 所见即所想Y英⽂译法 1 译法 2 译法 3Y combinator Y组合⼦Z英⽂译法 1 译法 2 译法 
3Z-expression Z-表达式zero-indexed 零索引的专业名词英⽂译法 1 译法 2 译法 3The Paradox of Choice 选择谬论。
CONCURRENCY CONTROL
[Figure: timeline showing transactions T2, T4, and T6 executing over time.]
Serial Schedule (7)
• Throughput of a non-serial schedule with two disk drives
Test Schedule Serializability (1)
Algorithm TSS
Input: A schedule S over n transactions T1, ..., Tn.
Output: A decision on whether S is serializable.
Test Schedule Serializability (2)
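The slides give only the input and output of Algorithm TSS. A common way to realize such a test is the conflict-serializability check based on a precedence graph: add an edge Ti -> Tj whenever an operation of Ti conflicts with (same item, at least one write, different transactions) and precedes an operation of Tj, and report the schedule serializable exactly when the graph is acyclic. The C# sketch below is an illustration under that assumption, not the algorithm from the slides; the type and method names are invented. A short usage example applying it to the schedules on the Serial Schedule slides appears after those slides.

using System;
using System.Collections.Generic;
using System.Linq;

struct Op
{
    public char Kind;      // 'R' or 'W'
    public int Txn;        // transaction number
    public string Item;    // data item, e.g. "X"
    public Op(char kind, int txn, string item) { Kind = kind; Txn = txn; Item = item; }
}

static class ConflictSerializability
{
    // Returns true if the schedule is conflict-serializable (its precedence graph is acyclic).
    public static bool IsSerializable(IList<Op> schedule)
    {
        var edges = new Dictionary<int, HashSet<int>>();
        for (int i = 0; i < schedule.Count; i++)
            for (int j = i + 1; j < schedule.Count; j++)
            {
                Op a = schedule[i], b = schedule[j];
                bool conflict = a.Txn != b.Txn && a.Item == b.Item &&
                                (a.Kind == 'W' || b.Kind == 'W');
                if (conflict)
                {
                    if (!edges.ContainsKey(a.Txn)) edges[a.Txn] = new HashSet<int>();
                    edges[a.Txn].Add(b.Txn);   // edge Ta -> Tb: Ta must precede Tb in any equivalent serial order
                }
            }

        // Acyclicity check via depth-first search with three colors.
        var color = new Dictionary<int, int>();   // 0 = unvisited, 1 = in progress, 2 = done
        foreach (int t in schedule.Select(op => op.Txn).Distinct())
            if (HasCycle(t, edges, color)) return false;
        return true;
    }

    static bool HasCycle(int node, Dictionary<int, HashSet<int>> edges, Dictionary<int, int> color)
    {
        if (color.TryGetValue(node, out int c)) { if (c == 1) return true; if (c == 2) return false; }
        color[node] = 1;
        if (edges.TryGetValue(node, out var next))
            foreach (int n in next)
                if (HasCycle(n, edges, color)) return true;
        color[node] = 2;
        return false;
    }
}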
Serial Schedule (1)
Definition: A schedule is a serial schedule if operations from different transactions are not interleaved.
A serial schedule: R1(X) W1(X) R2(X) W2(X)
Another serial schedule: R2(X) W2(X) R1(X) W1(X)
A non-serial schedule: R1(X) R2(X) W1(X) W2(X)
Serial Schedule (2)
Several observations: (1) A serial schedule guarantees transaction consistency. (2) n transactions can form n! different serial schedules.
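Using the illustrative checker sketched under "Test Schedule Serializability" above (same assumed Op and ConflictSerializability types), the two kinds of schedule can be checked directly:

using System;
using System.Collections.Generic;

static class ScheduleCheckDemo
{
    static void Main()
    {
        // R1(X) W1(X) R2(X) W2(X): a serial schedule
        var serial = new List<Op> {
            new Op('R', 1, "X"), new Op('W', 1, "X"),
            new Op('R', 2, "X"), new Op('W', 2, "X")
        };
        // R1(X) R2(X) W1(X) W2(X): the non-serial schedule from the slide
        var nonSerial = new List<Op> {
            new Op('R', 1, "X"), new Op('R', 2, "X"),
            new Op('W', 1, "X"), new Op('W', 2, "X")
        };

        Console.WriteLine(ConflictSerializability.IsSerializable(serial));      // True
        Console.WriteLine(ConflictSerializability.IsSerializable(nonSerial));   // False: edges T1->T2 and T2->T1 form a cycle
    }
}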
East China Normal University: Doctoral Courses in Systems Analysis and Integration
Program: Systems Analysis and Integration    Course number: B0112010711003
Course title (English name): Nonlinear Control-System Theory and Application
Credits: 3    Total hours: 54
Course type: doctoral degree specialization course
Applicable programs: Systems Theory; Systems Analysis and Integration
Teaching content and basic requirements:
Teaching content:
1. Analysis of feedback systems (including absolute stability, the small-gain theorem, and describing-function methods)
2. Feedback linearization (including input-state linearization, input-output linearization, and state-feedback control)
3. Differential-geometric methods (including differential-geometry tools, input-output linearization, and input-state linearization)
4. Lyapunov design methods
5. Backstepping
6. Sliding-mode control
7. Adaptive control
Basic requirements: students are expected to master the ideas, methods, and techniques needed to solve problems in this area.
Assessment method and requirements: written examination.
Prerequisite courses: Linear Systems
Textbooks and main references: 1. Hassan K. Khalil, Nonlinear Systems (Second Edition).
Completed by: Prof. Chen Shuzhong    Reviewed by: Prof. Gu Guoqing

Course number: B0112010711004
Course title: Distributed Computing and Distributed Systems
Course English name: Systems and Architecture of Distributed Databases
Credits: 3    Total hours: 54
Course type: doctoral degree specialization course
Applicable programs: Systems Theory; Systems Analysis and Integration
Teaching content and basic requirements:
Teaching content: This course mainly discusses the principles, techniques, and architecture of distributed database systems. Part one introduces the main components of a DBMS. Part two covers classical distributed database theory and systems. Part three focuses on distributed database theory and systems in the Internet/Intranet era.
Basic requirements: building on an understanding of the lectures, students read a large number of related papers in order to gain a deep understanding of the fundamentals and a comprehensive view of current research.
Assessment method and requirements: examination.
Prerequisite courses: fundamentals of database systems; fundamentals of computer networks
Textbooks and main references: 1. Zhou Longxiang et al., Implementation Techniques for Distributed Database Management Systems, Science Press, 1998.
Designing Efficient Cooperative Caching Schemes for Multi-Tier Data-Centers over RDMA-enabled Networks
Sundeep Narravula, Hyun-Wook Jin, Dhabaleswar K. Panda
Technical Report OSU-CISRC-6/05-TR39
Computer Science and Engineering, The Ohio State University
narravul, jinhy, panda@

Abstract
Caching has been a very important technique in improving the performance and scalability of web-serving data-centers. The research community has proposed cooperation of caching servers to achieve higher performance benefits. These existing cooperative cache designs often partially duplicate cached data redundantly on multiple servers for higher performance while optimizing the data-fetch costs for multiple similar requests. With the advent of RDMA-enabled interconnects these cost estimates have changed the basic factors involved. Further, utilization of the large scale of resources available across the tiers in today's multi-tier data-centers is of obvious importance. Hence, a systematic study of the various trade-offs involved is of paramount importance. In this paper, we present cooperative cache schemes that are designed to benefit in the light of the above-mentioned trends. In particular, we design schemes taking advantage of RDMA capabilities of networks and the multiple-tier resources of modern multi-tier data-centers. Our designs are implemented on InfiniBand-based clusters to work in conjunction with Apache-based servers. Our experimental results show that our schemes achieve throughput improvements of up to 35% over the basic cooperative caching schemes and 180% over the simple single-node caching schemes.

1 Introduction
Banking on their high performance-to-cost ratios, clusters have easily become the most viable method of hosting web servers. The very structure of clusters, with several nodes connected with high-performance local interconnects, has seen the emergence of a popular web-serving system: data-centers. With the explosive growth of the adoption of Inter
InfiniBand)are capable of providing reliable communica-tion without server CPU’s intervention.Hence,we design our cache cooperation protocols using one-sided operations to alleviate the possible effects of the high volume of data transfers between individual cache and sustain good overall performance.Further,current generation data-centers have evolved into complex multi-tiered structures presenting more inter-esting design options for cooperative caching.The nodes in the multi-tier data-center are partitioned into multiple tiers with each tier providing a part request processing function-ality.The end client responses are generated with a collec-tive effort of these tiers.The front-end proxy nodes typi-cally perform caching functions.Based on the resource us-age,we propose the use of the available back-end nodes to assist the proxies in the caching services.Also,this back-end server participation and access to cache is needed for several reasons:(i)Invalidating caches when needed,(ii) Cache usage by the back-end and(iii)Updating the caches when needed[11].The constraint of providing these ac-cess mechanisms to the back-end nodes increases the over-all complexity of the design and could potentially incur ad-ditional overheads.In our design,we handle this challenge by introducing additional passive cooperative cache system processing modules on the back-end servers.The benefits of these cache modules on the back-end servers are two-fold in our design:(i)They provide the back-end servers access to the caching system and(ii)They provide better overall performance by contributing a limited amount resources of the back-end server when possible.We implement our system over InfiniBand using Apache Web and Proxy Servers[10].We further evaluate the var-ious design alternatives using multiple workloads to study the trade offs involved.The following are the main contri-butions of our work:A Basic RDMA based Cooperative Cache design:wepropose an architecture that enables proxy servers like apache to cooperate with other servers and deliver high performance by leveraging the benefits of RDMASchemes to improve performance:We propose three schemes to supplement and improve the performance of cooperative caches-(i)Cooperative Cache With-out Redundancy,(ii)Multi-Tier Aggregate Coopera-tive Cache and(iii)Hybrid Cooperative CacheDetailed Experimental evaluation and analysis of the trade offs involved.Especially the issues associatedwith working-set size andfile sizes are analyzed in de-tailOur experimental results show throughput improvements of up to35%for certain cases over the basic cooperative caching scheme and improvements of Upton180%over simple caching methods.We further show that our schemes scale well for systems with large working-sets and large files.The remaining part of the paper is organized as follows: Section2provides a brief background about InfiniBand,and multi-tier data-centers.In Section3we present the design detail of our implementation.Section4deals with the de-tailed performance evaluation and analysis of our designs. 
In Section5,we discuss current work in relatedfields and conclude the paper in Section6.2Background2.1InfiniBand ArchitectureInfiniBand Architecture[4]is an industry standard that defines a System Area Network(SAN)that offers high bandwidth and low latency.In an InfiniBand network, processing nodes and I/O nodes are connected to the fab-ric by Host Channel Adapters(HCA)and Target Channel Adapters,respectively.An abstraction interface for HCA’s is specified in the form of InfiniBand Verbs.InfiniBand sup-ports both channel and memory semantics.In channel se-mantics,send/receive operations are used for communica-tion.To receive a message,the receiverfirst posts a receive descriptor into a receive queue.Then the sender posts a send descriptor into a send queue to initiate data transfer. In channel semantics there is a one-to-one match between the send and receive descriptors.Multiple send and receive descriptors can be posted and consumed in FIFO order.The memory semantic operation allows a process to write to a virtually contiguous buffer on a remote node.Such one-sided operation does not incur software overhead at the re-mote side.Remote Memory Direct Access(RDMA)Read, RDMA Write and Remote Atomic Operations(fetch-and-add and compare-and-swap)are the supported one-sided operations.2.2Multi-Tier Data-CentersA typical data-center architecture consists of multiple tightly interacting layers known as tiers.Each tier can contain multiple physical nodes.Figure1shows a typi-cal Multi-Tier Data-Center.Requests from clients are load-balanced by the edge services tier on to the nodes in the front-end proxy tier.This tier mainly does caching of con-tent generated by the other back-end tiers.The other func-tionalities of this tier include embedding inputs from vari-ous application servers into a single HTML document(forframed documents for example),balancing the requests sent to the back-end based on certain pre-defined algorithms.The middle tier consists of two kinds of servers.First, those which host static content such as documents,im-ages,musicfiles and others which do not change with time. 
These servers are typically referred to as web-servers.Sec-ond,those which compute results based on the query itself and return the computed data in the form of a static docu-ment to the users.These servers,referred to as application servers,usually handle compute intensive queries which in-volve transaction processing and implement the data-center business logic.The last tier consists of database servers.These servers hold a persistent state of the databases and other data repos-itories.These servers could either be compute intensive or I/O intensive based on the query format.For simple queries, such as search queries,etc.,these servers tend to be more I/O intensive requiring a number offields in the database to be fetched into memory for the search to be performed.For more complex queries,such as those which involve joins or sorting of tables,these servers tend to be more compute intensive.3Design and Implementation of Proposed Cooperative Cache SchemesIn this section,we propose four schemes for cooperative caching and describe the design details of our schemes.At each stage we also justify our design choices.This section is broadly categorized into four main parts:(i)Section3.1: RDMA based design and implementation of basic cooper-ative caching,(ii)Section3.2:Design of No Redundancy scheme,(iii)Section3.3:Multi-tier extensions for Cooper-ative caches and(iv)Section3.4:A combined hybrid ap-proach for Cooperative caches.Wefirst start with a detailed design description of the common components of all our schemes.External Module:As described earlier in Section2, proxy server nodes provide basic caching services in a multi-tier data-center.The traditional data-center applica-tions service requests in two ways:(i)by using different server threads for different concurrent requests or(ii)by using single asynchronous server to process to service re-quests.Catering to both these approaches,our design uses an asynchronous external helper module to provide cooper-ative caching support.Figure2shows the the typical setup on each node.This module handles inter-node communi-cation by using InfiniBand’s native Verbs API(V API)and it handles intra-node communication with the basic data-center applications using IPC.This module is designed to be asynchronous to handle multiple overlapping requests from the data-center applications.Soft Shared State:The cache meta-data information is maintained consistent across the all the servers by using a home node based approach.The cache entry key space (called key-table)is partitioned and distributed equally among the participating nodes and hence all the nodes han-dle a similar amount of meta-data key entries.This ap-proach is popularly know as the home node based approach. 
It is to be noted that in our approach we just handle the meta-data on the home node and since the actual data itself can reside on any node,our approach is much more scalable than the traditional home node based approaches where the data and the meta-data reside on the home node.All modifications to thefile such as invalidations,loca-tion transfers,etc.are performed on the home node for the respectivefile.This cache meta-data information is period-ically broadcasted to other interested nodes.Additionally, this information can also be requested by other interested nodes on demand.The information exchange uses RDMA Read operations for gathering information and send-receive operations for broadcasting information.This is done to avoid complex locking procedures in the system.Basic Caching Primitives:Basic caching operations can be performed using a small set of primitives.The in-ternal working caching primitives needs to be designed ef-ficiently for scalability and high performance.Our vari-ous schemes implement these primitives in different ways and are detailed in the following sub-sections.The basic caching primitives needed areCache Fetch:To fetch an entity already present in cacheCache Store:To store a new entity in cacheCache Validate:To verify the validity of a cached en-tityCache Invalidate:To invalidate a cache entity when neededBuffer Management:The cooperative cache module running on each node reserves a chunk of memory.This memory is then allocated to the cache entities as needed. Since this memory needs to be pooled into the global coop-erative cache space,this memory is registered(i.e.locked in physical memory)with the InfiniBand HCA to enable effi-cient memory transfers by RDMA.Several researchers have looked at the different aspects of optimizing this limited buffer usage and have suggested different cache replace-ment algorithms for web caches.Our methods are orthog-onal to these issues and can easily leverage the benefits of the proposed algorithms.3.1Basic RDMA based Cooperative Cache(BCC)In our design,the basic caching services are provided by a set of cooperating modules residing on all the participatingInternetApplications ApplicationsFront−end Mid−tierBack−end Figure 1.A Typical Multi-Tier Data-Center (Courtesy CSP Architecture design [12])Figure 2.External Module based Designserver nodes.Each cooperating module keeps track of the local cache state as a set of local page-tables and places this information in the soft shared state for global access.The basic RDMA based Cooperative Caching is achieved by designing the cache primitives using RDMA operations.The communication messages between the modules are divided into two main components:(i)control messages and (ii)data messages.The control messages are further classified into (i)meta-data read messages and (ii)meta-data update messages.Since data messages form the bulk volume of the total communications we use RDMA operations for these.In addition,the meta-data read mes-sages use the RDMA Read capabilities.Meta-data update messages are exchanged using send-receive operations to avoid concurrency control related issues.The basic cache primitives are handled by BCC in the following manner:Cache Fetch involves three simple steps:(i)finding the cache entry (ii)finding a corresponding amount of local free space and (iii)fetching the data using RDMA Read opera-tion.Cache Store involves the following steps:in case the local node has enough free space the entity is cached and key-table is updated.In cases where local node has 
no free memory,the entity is stored into a temporary buffer and the local copies of all page tables are searched for a suitable candidate remote node for a possible free space.A control message is sent to that node which then performs an RDMA Read operation of this data and notifies the original node of the transfer.Once a control message is sent with a storerequest to a remote node,then the current entity is consid-ered to be a responsibility of the remote node.For both these primitives,in cases where free space is not available system-wide,a suitable replacement is chosen and data is stored in place of the replacement.Cache Validate and Cache Invalidate involve a meta-data read or a meta-data update to the home node respectively.As mentioned earlier,RDMA Read is used for the read op-eration.Although this scheme provides a way to share cache across the proxy nodes,there may be redundancy in the en-tries across the system.3.2Cooperative Cache Without Redundancy(CCWR)In this scheme,the main emphasis is on the redundant duplicates in the system.At each step of request processing,the modules systematically search the system for possible duplicate copies of cache entities and these are chosen for replacement.In aggregate,the cache replacement decisions are taken in the following priority:(i)Local free space,(ii)Remote node free space,(iii)Local redundant copies of en-tries cached elsewhere in the system,(iv)remote redundant copies have duplicates in the system and (v)replacement of suitable entity by removing an existing entry to make space for the new entry.We again describe the details of designs of the cache primitives.The case of Cache Fetch presents interesting design op-tions.The data from remote node is fetched into local free space or in place of local redundant copy in the same prior-ity order.However,in case there are no free buffer spaces or local duplicates available for getting the data,remote cache entity is swapped with some local cached entity.In our de-sign,we select a suitable local replacement,send a store message to the remote cache for this local replacement and followed by a RDMA Read of the required remote cache entity.The remote node follows a similar mechanism to decide on storage and sends back an acknowledgment.Fig-ure3shows the swap case of this scheme.The dotted lines shown in thefigure are control messages.Cache Store design in this case is similar to the previ-ous approach,the main difference being the priority order described above.The memory space for storing new cache entries being searched in the order of free space,redundant copies and permanent replacements.The CCWR scheme benefits significantly by increas-ing the total amount of memory available for cooperative caching by removing redundant cache entries.For large working sets this yields higher overall performance.3.3Multi-Tier Aggregate Cooperative Cache(MTACC)In typical multi-tier data-centers proxy servers perform all caching operations.However,the system can benefit sig-nificantly by having access to additional memory resources. 
There are several back-end nodes in the data-center that might not be using their memory resources to the maximum extent.In MTACC,we utilize this free memory on servers from other tiers of the multi-tier data-center.This provides us with more aggregate system memory across the multi-ple tiers for cooperative caching.Further,the involvement back-end modules in caching can be possibly extended to the caching support for dynamically changing data[11].MTACC scheme is designed with passive cooperative caching modules running on the back-end servers.These passive modules do not generate cache store or retrieve re-quests themselves,but help the other modules to utilize their pooled memory.In addition,these passive modules do not act as home nodes for meta-data storage,minimizing the necessity for cache request processing overheads on these back-end servers.In addition,in certain scenarios like in case of cache in-validates and updates,the back-end servers need to initi-ate these invalidate operations[11].Utilizing the modules existing on the back-end nodes,the back-end nodes can perform operations like invalidations,etc.efficiently with the help of the closer and direct access to cache to achieve significant performance benefits.Figure4shows a typical setup for MTACC.3.4Hybrid Cooperative Cache(HYBCC)Though the schemes CCWR and MTACC can achieve good performance by catering to larger working sets,they have certain additional working overhead to remove redun-dant cache entries.While this overhead does not impact the performance in cases when the working set is large or when the requestedfile is large,it does impact the performance of the smaller cache entities or smaller working setfiles to a certain extent.CCWR adds certain overhead to the basic cache process-ing.The added lookups for duplicates and the higher cost of swapping make up these overheads.MTACC also adds similar overheads.This aggregated cache system size can cause higher overheads for request processing.To address these issues,we propose the use of the Hybrid Cooperative Caching Scheme(HYBCC).In this scheme, we employ different techniques for differentfile sizes.To extent possible,smaller cache entities are not checked for duplications.Further,the smaller cache entities are stored and their lookups are performed on only the proxy servers without using the web servers.So smaller cache entities are not stored on the passive nodes and are duplicated to the extent possible reducing the effect of the associated over-heads.Our experimental results show that this can achieve a good balance for different kinds of traces.4Experimental ResultsIn this section,we present a detailed experimental eval-uation of our designs.Here,we compare the following lev-els of caching schemes:(i)Apache default caches(AC)(ii) BCC,(iii)CCWR,(iv)MTACC and(v)HYBCC.Experimental Testbed:For our experiments we used 20nodes with dual Intel Xeon 2.66GHz processors. InfiniBand network connected with Mellanox InfiniHost MT23108Host Channel Adapters(HCAs).The clusters are connected using a Mellanox MTS14400144port switch. 
The Linux kernel version used was2.4.20-8smp.Mellanox IBGD1.6.1with SDK version3.2and the HCAfirmware version3.3was used.These nodes were setup with two web-servers and with the number of proxy servers varying from two to eight.The client requests were generated from multiple threads on10 nodes.The web-servers and application servers used in the reference implementation are Apache2.0.52.All proxy nodes we configured for caching of data.Web server nodes were also used for caching for the schemes MTACC and HYBCC as needed.Each node was allowed to cache64 MBytes of data for any of the experiments.Traces Used:Four synthetic traces representing the working sets in Zipf[15]traces were used.Thefiles sizes in the traces were varied from8k bytes to64k bytes.Since theRequest forFile B Present File BIn CacheFigure 3.Cooperative Caching Without RedundancyServer NodeProxy TierBack−End TierGlobal Cache SpaceLocal CacheFigure 4.Multi-Tier Aggregate Cooperative Cachingworking sets of Zipf traces all have similar request probabil-ities,a trace comprising of just the working set is seemingly random.The working set sizes for these traces are shown in Table1.These present us with a number of cases in which the working sets are larger than,equal to or smaller than the total cache space available to the caching system.Table 1shows the comparison working set size and the system cache size for various cases.Trace2nodes8nodes8k-trace80M/128M80M/512M 16k-trace160M/128M160M/512M 32k-trace320M/128M320M/512M 64k-trace640M/128M640M/512M Table1.Working Set and Cache Sizes for VariousConfigurations4.1Basic PerformanceAs an indication of the potential of the various caching schemes,we measure the overall data-center throughput. Figures5and6show the throughput measured for the four traces.We see that the basic throughput for all the coopera-tive caching schemes are significantly higher than the base case of basic Apache caching(AC).Impact of Working Set Size:We notice that the per-formance improvements from the AC scheme to the other schemes show steep improvements when the cooperative caching schemes can hold the entire working set of that trace.For example,the throughputs for the cooperative caching schemes for the8k-trace for two nodes in Fig-ure5are about10000TPS,where as the performance for AC is just above5000TPS.This shows a performance im-provement of about a factor of two.This is because the AC scheme cannot hold the working set of the8k-trace which is about80MBytes.Since each node can hold64 MBytes,AC incurs cache misses and two node coopera-tive caching shows good performance.We see similar per-formance jumps for all cases where the working setfits in cache.Figure7clearly shows a marked improvement for larger traces(32k-trace and64k-trace)for MTACC and HY-BCC.This benefit comes from the fact that MTACC and HYBCC can accomodate more of the working set by aggre-gating cache from nodes across several tiers.Impact of Total Cache Size:The total cache size of the system for each case is as shown in Table1.For each con-figuration,as expected,we notice that the overall system performance improves for the cases where the working-set sizes are larger then the total system cache size.In partic-ular,the performance of the64k-trace for the8node case achieves a throughput of about9500TPS while using the memory aggregated from the web server for caching.This clearly shows an improvement of close to20.5%improve-ment over basic caching scheme BCC.Impact of System Size:The performance of the8k-trace in Figure8shows a drop in 
performance for the CCWR and the MTACC cases.This is because as a result of aggre-gated cache across tiers for MTACC its total system size increases,hence the total overheads for each lookup also increase as compared to CCWR.On the other hand,since HYBCC uses CCWR for small cache entities and MTACC for large cache entities,its improvement ratios of HYBCC in Figure8clearly show that the HYBCC scheme does well in all cases.4.2Detailed AnalysisIn this section,we discuss the performance benefits seen for each of the schemes and analyze the same.AC:These numbers show the system throughput achiev-able by using the currently available and widely used simple single node caching.Since all the nodes here take local de-cisions the performance is limited by the amount of cache available on individual nodes.BCC:As shown by researchers earlier,the performance of the BCC scheme marks significant performance improve-ment over the AC scheme.These performance numbers hence represent throughput achievable by basic cooperative caching schemes.In addition,the trends for the BCC per-formance also show the effect of working-set size as men-tioned earlier.We see that as we increase the number of proxy servers,the performance benefit seen by the BCC scheme with respect to AC increases.The performance ben-efit ratio as shown in Figures7and8clearly shows this marked improvement.CCWR:From the Figures5and6,we observe that the performance for the CCWR method shows two interesting trends:(i)the performance for the traces16k-trace,32k-trace and64k-trace show improvement of up to32%as compared to the BCC scheme with the improvement grow-ing with higher size traces and(ii)the performance of the 8k-trace shows a drop of about5%as compared to the BCC scheme.The primary reason for this performance drop is the cost of additional book-keeping required for eliminat-ing copies.We measured this lookup cost for this scheme to be about5%to10%of the total request processing time for afile of8Kbytes size.Since this cost does not grow withfile size,its effect on largerfile sizes is negligible.MTACC:The main difference between the CCWR scheme and the MTACC scheme is the increase in the to-tal system cache size and the total system meta-data infor-mation size.The additional system size improves perfor-mance by accommodating more entities in cache.On the other hand,the higher meta-data size incurs higher lookup and synchronization costs.These reasons both show effecton the overall performance of the data-center.The8node case in Figure6shows that the performance of8k-trace de-creases with MTACC as compared to BCC and CCWR and the performance improves for16k-trace,32k-trace and64k-trace.We observe similar trends for the2node case in Fig-ure5.HYBCC:HYBCC overcomes the problems of lower performance for smallerfiles as seen above by using a hy-brid scheme described in Section3.4.In this case,we ob-serve in Figures5and6that the HYBCC scheme matches the best possible performance.Also,we notice that the im-provement of the HYBCC scheme over the BCC scheme is up to35%.5Related WorkSeveral researchers[11][8][2][5]have focussed on the various aspects of caching.Cooperation of multiple servers is proposed as an improtant technique in caching[5][9].A popular approach of cooperative caching proposed in[5] uses application level redirects of requests to enable coop-erative caching.This approach needs all the data-center servers to have different external IP addresses visible to the client and incurs higher overheads.On the other hand ap-proaches 
5 Related Work

Several researchers [11][8][2][5] have focused on various aspects of caching. Cooperation of multiple servers has been proposed as an important technique in caching [5][9]. A popular approach of cooperative caching proposed in [5] uses application-level redirects of requests to enable cooperative caching. This approach needs all the data-center servers to have different external IP addresses visible to the client and incurs higher overheads. On the other hand, approaches like [9][1] use either a home-node-based approach for the data or a single node for management activities. These approaches can easily lead to performance bottlenecks. In our approach, we use the concept of a home node only for the meta-data instead of the actual cached data. This alleviates the bottleneck problem to a large extent. We also use an approach similar to the N-Chance approach proposed in the file-system research context in XFS [13]. Significant work [7][6][3] has been done on cache replacement algorithms. Our proposed schemes are orthogonal to these and can easily leverage their benefits.

6 Conclusion

The importance of caching as an instrument for improving the performance and scalability of web-serving data-centers is immense. Existing cooperative cache designs often duplicate cached data redundantly on multiple servers for higher performance, optimizing the data-fetch costs for multiple similar requests. With the advent of RDMA-enabled interconnects, the basic cost factors involved in these designs have changed. Further, utilizing the large pool of resources available across the tiers of today's multi-tier data-centers is of obvious importance. In this paper, we have presented cooperative caching schemes designed to benefit from the above-mentioned trends. In particular, we have designed schemes that take advantage of the RDMA capabilities of networks and the resources spread across the multiple tiers of modern multi-tier data-centers. Our designs have been implemented on InfiniBand-based clusters to work in conjunction with Apache-based servers. We have evaluated these with appropriate request traces. Our experimental results have shown that our schemes perform up to 35% better than the basic cooperative caching schemes for certain cases and 180% better than the simple single-node caching schemes. We have further analyzed the performance of each of our schemes and proposed a hybrid caching scheme that shows high performance in all cases. We have observed that simple caching schemes are better suited for cache entities of small sizes and advanced schemes are better suited for larger cache entities. As future work, we propose to extend our work to support cooperative caching of dynamic data.

7 Acknowledgments

We thank Karthikeyan Vaidyanathan and Pavan Balaji for their contribution towards the basic data-center setup and their valuable comments.

References
[1] WhizzBee Web Server. /.
[2] Marc Abrams, Charles R. Standridge, Ghaleb Abdulla, Stephen Williams, and Edward A. Fox. Caching proxies: limitations and potentials. In Proceedings of the 4th International WWW Conference, Boston, MA, December 1995.
[3] Cost-based cache replacement and server selection for multimedia proxy across ...
[4] InfiniBand Trade Association. http://www.infi.
[5] Scott M. Baker and Bongki Moon. Distributed cooperative Web servers. Computer Networks and ISDN Systems, 31(11-16):1215–1229, May 1999.
[6] P. Cao and S. Irani. GreedyDual-Size: A cost-aware WWW proxy caching algorithm, 1997.
[7] Pei Cao and Sandy Irani. Cost-aware WWW proxy caching algorithms. In Proceedings of the 1997 USENIX Symposium on Internet Technologies and Systems (USITS-97), Monterey, CA, 1997.
[8] Francisco Matias Cuenca-Acuna and Thu D. Nguyen. Cooperative caching middleware for cluster-based servers. In Tenth IEEE International Symposium on High Performance Distributed Computing (HPDC-10). IEEE Press, 2001.
[9] Li Fan, Pei Cao, Jussara Almeida, and Andrei Broder. Summary cache: A scalable wide-area Web cache sharing protocol. In Proceedings of the ACM
SIGCOMM '98 Conference, pages 254–265, September 1998.
[10] The Apache Foundation. /.
[11] S. Narravula, P. Balaji, K. Vaidyanathan, S. Krishnamoorthy, J. Wu, and D. K. Panda. Supporting Strong Coherency for Active Caches in Multi-Tier Data-Centers over InfiniBand. In Proceedings of System Area Networks (SAN), 2004.
[12] Hemal V. Shah, Dave B. Minturn, Annie Foong, Gary L. McAlpine, Rajesh S. Madukkarumukumana, and Greg J. Regnier. CSP: A Novel System Architecture for Scalable Internet and Communication Services. In Proceedings of the 3rd USENIX Symposium on Internet Technologies and Systems, pages 61–72, San Francisco, CA, March 2001.
Database System Concepts, 6th Edition (Silberschatz, Korth and Sudarshan) — slides, Chapter 15
A locking protocol is a set of rules followed by all transactions while requesting and releasing locks. Locking protocols restrict the set of possible schedules.
The Two-Phase Locking Protocol (Cont.)
There can be conflict serializable schedules that cannot be obtained if two-phase locking is used.
However, in the absence of extra information (e.g., ordering of access to data), two-phase locking is needed for conflict serializability in the following sense: Given a transaction Ti that does not follow two-phase locking, we can find a transaction Tj that uses two-phase locking, and a schedule for Ti and Tj that is not conflict serializable.
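As a concrete illustration of the growing/shrinking discipline behind two-phase locking, here is a minimal sketch (illustrative, not taken from the slides) of a per-transaction lock tracker that rejects any lock acquired after the first release:

    class TwoPhaseLockingError(Exception):
        pass

    class Transaction:
        # Enforces the two-phase rule: all lock acquisitions (growing phase)
        # must precede all lock releases (shrinking phase).
        def __init__(self, tid):
            self.tid = tid
            self.held = set()
            self.shrinking = False  # becomes True after the first unlock

        def lock(self, item):
            if self.shrinking:
                raise TwoPhaseLockingError(
                    f"T{self.tid}: cannot acquire {item!r} after releasing a lock")
            self.held.add(item)

        def unlock(self, item):
            self.shrinking = True
            self.held.discard(item)

    t1 = Transaction(1)
    t1.lock("A"); t1.lock("B")
    t1.unlock("A")
    # t1.lock("C")  # would raise TwoPhaseLockingError: lock after unlock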
Lock-Based Protocols (Cont.)
Lock-compatibility matrix
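The matrix itself did not survive extraction; for the basic shared/exclusive case it is the standard one sketched below (an assumption that only S and X lock modes are involved, as in the elementary protocol): a requested mode is granted only if it is compatible with every mode already held on the item by other transactions.

    # Standard shared/exclusive lock-compatibility matrix.
    COMPATIBLE = {
        ("S", "S"): True,   # two shared locks can coexist
        ("S", "X"): False,
        ("X", "S"): False,
        ("X", "X"): False,
    }

    def can_grant(requested: str, held_modes) -> bool:
        return all(COMPATIBLE[(held, requested)] for held in held_modes)

    print(can_grant("S", ["S", "S"]))  # True: another shared lock is allowed
    print(can_grant("X", ["S"]))       # False: exclusive conflicts with shared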
Reaching Consensus - A Basic Problem in Cooperative Applications
Edgar Nett
GMD - German National Research Center for Information Technology
Schloß Birlinghoven, D-53754 St. Augustin, Germany
e-mail: nett@gmd.de

Abstract
In cooperative applications, a group of processes cooperates to perform a common task or service. A major problem to be solved is how to reach a common view, i.e. a consensus between the cooperating processes across different sites on the global state and on the computational progress, and how to guarantee the persistence of the agreed (intermediate) result. The consensus problem is omnipresent in distributed systems since it can be identified on many levels within the system. In this paper, we focus on a reliable broadcast protocol, a decentralized, non-blocking commit protocol, and the evaluation of a distributed data structure called the dependency assessment graph (DAG). It will turn out that all the solutions provided have in common that they rely on the notion of atomicity, i.e. the use of indivisible building blocks.

Keywords: distributed computing, cooperation, consensus, reliable communication, dynamic actions

1. Introduction
The fundamental difference between traditional, centralized systems and distributed systems is that distributed systems have the partial-failure property, i.e. a component may fail while the rest of the system continues to work [6]. Therefore, any system that is distributed must be capable of surviving partial failures. The other fundamental property of a distributed system is the possibility to exploit concurrency. By that, a distributed system can do more work in the same amount of time, which increases the performance of the system. Other meaningful factors for the use of distributed systems are enhanced resource utilization through sharing, and scalability. A distributed system should comprise multiple processing elements that can run autonomously. Each of these elements, in the following termed sites, contains at least a CPU and memory. The sites are interconnected by a communication medium which allows concurrent processes on different sites to exchange messages. In cooperative applications, a group of processes cooperates to perform a common task or service [5]. A major problem to be solved is how to reach a common view, i.e. a consensus between the cooperating processes across different sites on the global state and on the computational progress, and how to guarantee the persistence of the agreed (intermediate) result. The consensus problem is omnipresent in distributed systems since it can be identified on many levels within the system [1]. It is necessary, e.g., for synchronization, reliable communication, fault diagnosis, checkpointing, committing, and dependency assessment. In this paper we focus on a reliable broadcast protocol, a decentralized, non-blocking commit protocol, and the evaluation of a distributed data structure called the dependency assessment graph (DAG). It will turn out that all the solutions provided have in common that they rely on the notion of atomicity, i.e. the use of indivisible building blocks.

2. The reliable broadcast protocol
Based on simple and fast communication services (like a datagram service), system software has to be designed that realizes protocols for sending and receiving messages also in the presence of failures. Besides site crashes, due to the unreliable communication medium we now have to cope with omission failures, meaning that messages may be lost or arrive in a different order.
With these failure assumptions in mind, reaching a consistent view is very difficult. Let us specify some properties that a protocol realizing reliable communication among a group of processes should provide. If we do not only consider unicast communication, which involves a single source and a single destination for messages, our first requirement states that if a message reaches one recipient, it must be guaranteed that all other operational recipients receive this message, too. This means that all operational recipients constitute an atomic, i.e. indivisible, unit. The semantics is equivalent to the case of having only one recipient. This requirement is known as atomic broadcast. Consider, for example, a set of processes each maintaining one copy of a replicated datum. If modifications of the data are notified via atomic broadcast, it is ensured that all replicas remain identical. But atomicity no longer suffices if we consider, for instance, structured data like queues. Updating a queue containing several elements may require more than one message. In this case it is important that every process receives a set of messages in the same order. Thus, the broadcast protocol should define a total order over all delivered messages. Since the applied fault model also comprises site crashes, the protocol needs to know who is a living member of the group at a given time. The problem is to keep all the fault-free sites informed of the membership regardless of whether sites are joining or leaving the system.

One protocol providing the desired properties described above is the Reliable Broadcast Protocol (RBP) developed within our system [ ]. The RBP is based on ideas of the broadcast protocol presented in [2]. It is characterized by the following properties:
Atomicity: a message issued by a sender is either received by all operational participants of a group or by none of them.
Total ordering: the sequence of the delivered messages is identical at all operational receivers.
Site membership: site crashes and reintegration requests are handled by the RBP. A site crash is detected by the RBP and leads to the constitution and dissemination of the new membership. A site willing to be part of the protocol is integrated by the RBP, again resulting in a new membership.

One possibility to realize atomic broadcast is to send an acknowledgement from each recipient to the sender and from the sender to each recipient. Thus, a positive acknowledgement system exists between the sender and the recipients. A second way to realize atomic broadcast would be to broadcast each received message to all other operational recipients. Unfortunately, both solutions are not very efficient.

Fig. 1: How to acknowledge received messages

All sites participating in the broadcast protocol agree on a single site to be the so-called token-site. Each message is sent to all recipients including the token-site. If the token-site receives a broadcast message, it broadcasts an acknowledgement message containing the message id and a so-called sequence number. The sequence number defines a total order among all delivered messages. With the help of the sequence number a recipient can detect omissions and duplicated messages. A recipient ignores duplicated messages and, for lost messages, it sends resend requests to the token-site. Note that a message is acknowledged only once (by the token-site). All other recipients have to send acknowledgements only if they have detected a transmission error.
Since only the token-site acknowledges send requests, there exists a positive acknowledgement system between the sender and the token-site. In contrast, between the token-site and the receivers the protocol defines a negative acknowledgement system, because resend requests are only sent in case an error has occurred.

To guarantee that the protocol remains operational in case the token-site fails, our RBP forms a logical ring among all operational sites. The token-site responsibility periodically rotates among the ring of operational sites. The token is part of an acknowledgement, meaning that the token transfer does not require additional messages. Only if no broadcast messages are received for a certain period of time are explicit token transfer messages sent. These messages are also used as 'I am alive' messages.

The token transfer guarantees that only a limited number of messages have to be saved in order to be able to respond to resend requests for lost messages. If a site becomes token-site again, it may discard all messages it acknowledged when it was token-site last time. This is because every other site must have been token-site in the meantime. Therefore, no other site will request messages a token-site saved the last time it was token-site. Note that the new token-site only accepts the token if it does not miss any message for which the previous token-site has sent an acknowledgement.

In case of a site crash or an integration request the protocol enters the so-called reformation phase. In this phase a new membership list is formed and disseminated. This list is called the token-list. Among the members of the token-list the new token-site is determined. Then, the protocol reenters the normal phase, called the message transfer phase. The first message to be delivered is the message containing the token-list. This message is used to inform the application about changes in the membership. In particular, the application is notified about site faults. The message containing the new token-list is integrated into the total order of all messages transferred within the RBP. This feature of the RBP is very important, as we will see later on.

Now, site fault detection is addressed. The crash of a site other than the token-site is detected if the token-site tries to hand over the token-site responsibility to the next site in the ring and that site does not acknowledge the token transfer. The crash of the token-site is detected if a sender does not receive an acknowledgement for its message or if a recipient does not get any answer to its resend requests. To detect site crashes even when no messages are delivered, each site expects to become token-site periodically. If it does not receive a token transfer message within a specified amount of time, it broadcasts an 'Are you alive' message. If it does not receive any answer to this request after a certain number of repetitions, it assumes the token-site to be down and initiates a new reformation phase.
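To make the token-site mechanism concrete, here is a minimal sketch of the recipient-side logic only: the sequence numbers carried by the token-site's acknowledgements impose the total order, duplicates are dropped, and gaps trigger resend requests (negative acknowledgements) to the token-site. The class and method names are illustrative, not taken from the RBP implementation.

    class Recipient:
        # Recipient-side view of a token-site based reliable broadcast:
        # deliver messages in the total order defined by the token-site's
        # sequence numbers; missing numbers produce resend requests.
        def __init__(self):
            self.next_seq = 0      # next sequence number expected
            self.pending = {}      # out-of-order messages awaiting delivery
            self.delivered = []

        def on_ack(self, seq, msg, send_resend_request):
            if seq < self.next_seq:
                return             # duplicate: already delivered, ignore
            self.pending[seq] = msg
            # Report any gap below the newly announced sequence number.
            for missing in range(self.next_seq, seq):
                if missing not in self.pending:
                    send_resend_request(missing)
            # Deliver as far as the contiguous prefix allows.
            while self.next_seq in self.pending:
                self.delivered.append(self.pending.pop(self.next_seq))
                self.next_seq += 1

    r = Recipient()
    r.on_ack(0, "m0", print)
    r.on_ack(2, "m2", lambda s: print("resend", s))  # asks the token-site for seq 1
    r.on_ack(1, "m1", print)
    print(r.delivered)   # ['m0', 'm1', 'm2']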
Again, the approach is to consider such a task as an atomic unit. Those units are often called atomic actions or transactions. However, in contrast to atomic broadcasts, the consensus must also include those processes which cannot participate in the commit protocol due to site crashes. A well-known commit protocolrepresents the centralized two-phase commit protocol [4] where the commit decision is determined by a centralized instance called the coordinator of the commit protocol. But, the need of a centralized coordinator leads to undesired blocking situations if, e.g., the coordinator fails before having propagated the commit decision to all involved sites. In this case a participant has to wait until the coordinator succeeds to recover because he cannot safely anticipate a decision that is consistent with the view of the faulty coordinator. Protocols enabling the surviving sites to come to an outcome decision without waiting for the faulty site to recover are known as non-blocking commit protocols. A well-known non-blocking commit protocol represents the three-phase commit protocol [11]. Within the three-phase commit protocol additional messages are exchanged between the surviving participants enabling one of them to take over the responsibility of the coordinator if the coordinator should fail. Although the three-phase commit protocol avoids blocking situations, this protocol has performance penalties because of the increased number of message exchanges.A penalty of both protocols represents their central coordination. Having a centralized coordinator that is responsible for determining the outcome of a commit request this single site becomes a central point for failures. Commit protocols avoiding a centralized coordinator are know as decentralized commit protocols. Within a decentralized commit protocol each involved site represents an independent participant that can determine the commit decision itself based on the messages exchanged with all other participants. In case of a site fault the faulty participant is able to determine the commit decision as long as at least one of the surviving participant knows the commit decision. Another advantage of a decentralized protocol is that the message announcing the outcome of a commit protocol is no longer needed since the commit decision can be determined decentralized by the participant itself. Our objective was to design a sophisticated commit protocol that avoids any kind of blocking situations and additionally exploits the advantages accomplished by a decentralized communication structure. In contrast to the non-blocking three-phase commit protocol where blocking situations are prevented by the protocol itself (by introducing additional message exchanges), our goal was to resolve the blocking problem by means of the underlying RBP described above.In the following section it will turn out which properties of the RBP are exploited to realize our decentralized non-blocking two-phase commit protocol [7]. Its structure is depicted in Fig. 2.Fig. 2. Structure of the non-blocking, decentralized two-phase commit protocolThe commit protocol is initiated by sending a commit request to all participating sites. If a participant receives a commit request, it prepares the affected action(s) by writing its (their) effects to the log (i.e. to non-volatile memory). In case a participant fails to prepare he sends a failed message to all other participants and the commit request is aborted. 
In case of a successful prepare, it sends a prepared message to all other participants and waits for their prepared or failed messages, respectively. The commit request message contains the identity of all sites participating in commit processing, thus enabling a participant to determine on which messages it has to base its decision. If at least one participant fails to prepare, the commit request has to be aborted and a failed record is written to the log. If a participant receives a prepared message from all other participants and it is prepared too, the commit request is logically committed. The participant fixes the commit decision by writing a commit record to the log. By this means a participant determines the outcome of a commit request in a decentralized manner, by the sequence of messages it receives.

Let us now consider the treatment of site faults. A site fault may occur at any time during the execution of a commit protocol. In order to avoid blocking situations, the following properties supplied by the RBP are exploited: atomicity, ordering and site fault detection. It is important to note that the RBP guarantees that the message announcing a site fault is inserted into the total ordering of all messages. Inserting the site fault message into the total message ordering enables a participant of the commit protocol to make the following assumptions about the messages received by the surviving sites: at the time the site fault occurs, the participant on a non-faulty site has received all messages the participant on the faulty site had received before it crashed. Furthermore, it is ensured that at the time the surviving participants recognize the site fault, they have received all messages in the same order. We will now explain why these assumptions are needed to enable the surviving sites to determine the outcome of a commit protocol in a decentralized manner and without waiting for the reintegration of the faulty site. The problem for the surviving sites is to agree on a decision about the outcome of the running commit protocol that is consistent with the view of the faulty site. This is not always trivial. Especially if the prepared message of the faulty site has not yet been received by the other participants, it is not clear whether that participant was already in the state prepared. However, if the faulty site had received the prepared messages from all other participants, the action may even be committed at the faulty site. Thus, additional information about the state of the faulty participant is needed. This information is gathered from the total message ordering supplied by the RBP. In order to terminate a commit request, a participant only commits this request if it has received its own prepared message as well as the prepared messages of all other participants. Thus, on receiving a site fault message a participant only has to inspect the messages it has received in order to determine which state the faulty site could have reached. If a participant failed and the site fault message was received before the prepared message of the faulty site, the faulty site cannot yet have reached the state "committed". Since the faulty site is still allowed to abort independently, the commit request must be aborted on the surviving sites too. On the other hand, if a participant failed and a surviving site received the prepared message of the faulty site before the site fault message, the faulty site can no longer abort the commit request independently after having recovered.
Thus, the surviving sites can safely decide to commit the corresponding commit request.After having addressed how the surviving sites act in case of a participating site crash we will now consider the behaviour of a faulty site. To recover it scans its log. In case he finds the outcome of a commit request locally on its log, no other participant has to be asked. Otherwise, if the outcome could not be fixed before the site has crashed an outcome request is sent to all other participants. If one of the other participants answers committed, the commit request will be committed at the recovering site, too. If one participant answers failed the commit request was aborted. In both cases the recovering participant writes the outcome to its log.In order to enable a faulty site to ask the surviving sites about the outcome of a commit request it must be ensured that the surviving sites maintain the information about commit protocols for a certain period of time. The question then is to determine when a participant may forget the information recorded on the log about a specific commit request. Deleting obsolete log data is necessary because the time needed to reintegrate faulty sites increases with the amount of log data to be analysed during restart. In a decentralized commit protocol, all participants store the outcome of a commit request. Thus, the participants must now agree when to forget about it. An agreement is reached by running a so-called erase protocol that ensures that the outcome of a commit request is only destroyed if all participants know about it. To do so, each site that is aware of the outcome sends a committed message to all participants. If a participant receives this message from all other participants he is sure that none of themwill ever ask him about the outcome later on. He fixes the end of the erase protocol by writing a done record to the log. Note, that the erase protocol can be decoupled from the corresponding commit protocol thus allowing to run a group erase protocol which allows to forget about the outcome of a set of commit requests.In order to increase the efficiency of our system we allow several commit protocols to be executed in parallel. This may lead to a problem if actions are involved in different commit protocol executions. We call this "the problem of competitive commit requests". To resolve this problem a concurrently running commit request is delayed until the competitive commit request has been terminated. By this means the execution of competitive commit requests are sequentialized. But, sequentializing commit requests may cause a deadlock. In order to avoid deadlocks, again the total ordering property of the RBP is exploited. To do so, the execution of concurrently running commit protocols are coordinated so that different commit requests are performed by each participant in the same order. To achieve this, the initiator of a commit protocol delays the commit processing until the commit request message is received by the initiator itself.4. Dynamic actionsCommit protocols, as presented in the preceding chapter, require that the commit set is known at the time the protocol is initiated. The commit set comprises all processes that have to participate in the protocol. Its determination is easy as long as all processes belong to the same action. This is guaranteed by the atomicity property which enforces that actions have to be executed isolated from other concurrent actions. 
But, this is a brute force approach, only useful for a restricted class of applications, and does not match with the general purpose character of distributed systems which also should support communication and cooperation. The main problem to be solved is how to decide on the outcome of a computation and to guarantee the respective system progress by executing a commit procedure even if we no longer want to rely on a concurrency control mechanism that enforces the isolated execution of concurrent actions. This implies that we must also be able to handle the effects of the dissemination of information that even may become invalid due to an abort of the respective action. In our action model, we allow for information flow of uncommitted data and support the notion of information dependencies between actions. An action B that uses uncommitted data depends on the action A that produced that data. This dependency is observed in two cases: it represents an abort dependency meaning that an abort of action A implies the cascading abort of action B. It stands for a commit dependency meaning that the commit of action B implies the commit of action A. (otherwise, action B could still be subject to an cascading action abort caused by the abort of action A). Note that this definition of commit dependency still allows the actions A and B to be committed together in a joint commit protocol execution. This contrasts to the definition of commit dependency found in [3], which requires action A to commit before action B in the described situation. Our definition is more general and, for instance, does not lead to a deadlock-situation in case of cyclic commit dependencies. Taking information dependencies caused by the use of uncommitted data into account, we come to the following definitions: The execution of actions is abort correct, if any action, that relies on erroneous input (e.g., caused by the abort of another action), is aborted as well. It is said to commit correct, if the commit of any action implies that its input cannot become erroneous by any other abort.Since the considered dependencies arise dynamically according to the actual information flow, we call the resulting abstraction dynamic actions[8]. Dynamic actions provide the atomicity property while ensuring abort correctness and commit correctness in spite of the use of uncommitted information by tolerating cascading aborts and still preventing the domino effect. For a former treatment of this subject, the reader is referred to [10]The following section will tackle the problem of handling dependencies evolving due to information flow. Dependencies are maintained locally at each site in a data structure called local DAG. All these DAGs make up the DAG, a distributed data structure that reflects the dependency structure of the actions being executed in the distributed system.As an example, Fig. 3 shows a DAG distributed over three sites. The nodes of this graph are the so called recovery units (RUs), which describe the effect of an action at a single site. So, at each visited site an action is represented by a corresponding RU. Every RU is clearly identified by the identifier of the respective action i and the host node j. Two kinds of dependencies (represented by directed edges) between RUs are distinguished: action dependencies and information dependencies. The first one addresses the distributed nature of actions. Like in transactions it connects all recovery units belonging to the same action in order to ensure the All-or-Nothing property. 
The second reflects the conceptual extensions of our action model. It records the fact that the effect of the destination RU depends on information of the originating RU, which belongs to a different action whose effect is still revocable. Regarding the determination of commit sets and abort sets, respectively, it is important to note that information dependencies produce only local effects and, therefore, can be exploited locally at each site [9].

Fig. 3: Example of a dependency assessment graph.

Now, procedures to evaluate the DAG are discussed. The occurrence of errors may lead to the abort of an RU. Due to the described dependencies between RUs, other RUs may be affected as well in order to guarantee abort correctness. The abort set AS(a) of an RU a contains all actions that have to be aborted as a consequence of the abort of a. It is determined by applying the following two propagation rules:
Forward Propagation Rule (FPR): if an RU has to be aborted, all information-dependent RUs have to be aborted as well.
Remote Propagation Rule (RPR): if an RU has to be aborted, all action-dependent RUs have to be aborted as well.

By repeatedly applying the FPR, the entire subgraph reachable from the starting RU by following the edges representing information dependencies is subsumed in the abort set. The FPR is applied only if data objects modified by the failing action have been accessed by other actions in the meantime. The term "Forward Propagation Rule" refers to the fact that this rule affects the computational progress that has already been made after the execution of the erroneous part of the respective action, meaning that it does not imply a conventional rollback situation. The RPR deals with the remote consequences of a failing RU due to the distributed nature of the respective action. By that, it guarantees the failure atomicity property of an action by ensuring that its effect is obliterated on all visited sites in the system.

Providing commit correctness is accomplished by determining the commit set of an action. The algorithmic procedure to determine the commit set is simple and very similar to how the abort set is computed. Pictorially speaking, we only have to apply the Forward Propagation Rule in the opposite direction. Therefore, the only thing to change is to exchange the FPR for the so-called Backward Propagation Rule (BPR): if an RU has to be committed, all RUs on which it is information dependent have to be committed as well. The evaluation of this rule can happen just before running the two-phase commit protocol, or by storing redundant information in the DAG during the execution of an action, thus giving the possibility to trade time against structural redundancy [7].
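A minimal sketch of the abort-set computation over such a dependency graph, applying the FPR along information-dependency edges and the RPR along action-dependency edges until a fixed point is reached (the data structures and RU naming are illustrative, not the paper's implementation):

    def abort_set(start, info_dep, action_dep):
        # info_dep[u]   -> RUs that are information dependent on u  (FPR edges)
        # action_dep[u] -> RUs belonging to the same action as u    (RPR edges)
        # Both are dicts mapping an RU id to a list of RU ids.
        result, frontier = set(), [start]
        while frontier:
            ru = frontier.pop()
            if ru in result:
                continue
            result.add(ru)
            # FPR: abort everything that consumed (possibly invalid) output of ru.
            frontier.extend(info_dep.get(ru, []))
            # RPR: abort the RUs of the same action on other sites (all-or-nothing).
            frontier.extend(action_dep.get(ru, []))
        return result

    # RU ids written as (action, site), e.g. ("A", 1) is action A's unit on site 1.
    info_dep = {("A", 1): [("B", 1)]}                 # B read uncommitted data of A on site 1
    action_dep = {("A", 1): [("A", 2)], ("A", 2): [("A", 1)],
                  ("B", 1): [("B", 3)], ("B", 3): [("B", 1)]}
    print(abort_set(("A", 1), info_dep, action_dep))
    # {('A', 1), ('A', 2), ('B', 1), ('B', 3)}

The commit set is obtained the same way by following the information-dependency edges in the opposite direction (the BPR), as described above.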
5. Concluding remarks

Reaching consensus among cooperating processes across different sites on a global state and guaranteeing computational progress is a major problem for distributed systems supporting cooperative applications. The problem of reaching consensus can be identified on different levels within the system, where each level may provide adequate protocols. In this paper, an atomic, ordered broadcast protocol considering the membership problem in case of site crashes has been realized, supporting reliable communication between different processes. This communication protocol is an adequate base for running consensus protocols on a higher level. As an important example, we have presented the decentralized, non-blocking commit protocol guaranteeing computational progress in agreement with all participating processes. In general, reaching consensus about the computational progress requires knowing who is involved in the common progress. In conventional transaction systems, determining the participants of a commit protocol is very easy because actions are always executed in isolation from other concurrent actions. But if we want to commit system progress while allowing for communication and cooperation, we have to cope with complex dependencies evolving dynamically between different processes. The dependency assessment graph (DAG) presented in this paper records this complex dependency structure and makes it possible to determine the set of processes involved in commit processing even if they are not forced to execute in isolation from each other.

6. References
[1] G. Barborak, M. Malek: Consensus Problem in Fault-Tolerant Computing, ACM Computing Surveys, 25(2), 171-220, June 1993
[2] J. M. Chang, N. Maxemchuk: Reliable Broadcast Protocols, ACM Transactions on Computer Systems, Vol 2, No 3, pp 251-273, 1984
[3] P. K. Chrysanthis, K. Ramamritham: A Formalism for Extended Transaction Models, 17th Int. Conf. on Very Large Data Bases, 103-112, Barcelona, 1991
[4] J. Gray: Notes on database operating systems, in Operating Systems - An Advanced Course, Lecture Notes in Computer Science, vol. 66, Springer, 393-481, 1978
[5] M. Kaashoek: Group Communication in Distributed Computer Systems, PhD Thesis, Vrije Universiteit, Amsterdam, 1992
[6] S. Mullender: Distributed Systems, Addison-Wesley, ACM Press, 2nd edition, 1993
[7] E. Nett, R. Schumann: Supporting fault-tolerant distributed computations under real-time requirements, Computer Communications, Vol. 15, No 4, May 1992
Computer English (complete)
《计算机英语(第3版)》词汇表GlossaryAa priori / 7eiprai5C:rai / ad. 〈拉〉用演绎方法,经推理abbreviate / E5bri:vieit / v. 缩写abbreviation / E7bri:vi5eiF E n / n. 缩写(词)abridge / E5bridV / v. 缩短;节略abstract / Ab5strAkt / v. 把.抽象出来;提出,抽出abstract machine 抽象机abstraction / Ab5strAkF E n / n. 抽象;提取abundance / E5bQndEns / n. 大量,丰富,充足abusive / E5bju:siv / a. 谩骂的;毁谤的accommodate / E5kCmEdeit / v. 容纳;使适应accounting / E5kauntiN / n. 会计(制度);记账;结账;结算activate / 5Aktiveit / v. 激活,启动actualization / 7AktFuElai5zeiF E n / n. 实现address / E5dres / v. 编址;寻址Address box 地址框address bus 地址总线addressee / 7Adre5si: / n. 收信人;收件人;被访地址addressing / E5dresiN / n. 编址;寻址adjacent / E5dVeis E nt / a. 相邻的,毗连的administer / Ed5ministE / v. 掌管;实施administrator / Ed5ministreitE / n. (系统、程序等的)管理员advent / 5Advent / n. 出现,到来adversary / 5AdvEs E ri / n. 对手,敌手adverse / 5AdvE:s, Ad5vE:s / a. 不利的,有害的adware / 5AdwZE / n. 广告软件affiliate / E5filiit / n. 附属机构;分公司affiliate marketing 联属网络营销,联盟营销affiliation / E7fili5eiF E n / n. 联系;从属关系aggregate / 5AgrigEt / a. 聚集的;合计的aggregation / 7A^ri5^eiF E n / n. 聚集,聚合,集合albeit / C:l5bi:it / conj. 尽管algorithm / 5AlgEriT E m / n. 算法align / E5lain / v. 对准,对齐alignment / E5lainmEnt / n. 对准,对齐allocate / 5AlEkeit / v. 分配;分派allot / E5lCt / v. 分配;分派allude / E5l j u:d / v. 暗指,影射;间接提到(to)allure / E5l j uE / n. 诱惑力,魅力alternate / C:l5tE:nit, 5C:ltE‐/ a. 供选择的,供替换的;备用的alternatively / R:l5tE:nEtivli / ad. 或者,非此即彼ambiguous / Am5bigjuEs / a. 含糊不清的,模棱两可的amenable / E5mi:nEbl / a. 顺从的;易作出响应的analog(ue) / 5AnElCg / a. 模拟的analogous / E5nAlEgEs / a. 相似的;可比拟的(to/with)analogy / E5nAlEdVi / n. 比拟,类推,类比analyst / 5AnElist / n. 分析员,分析师analytic(al) / 7AnE5litik(E l) / a. 分析的Analytical Engine 分析机,解析机analyzer / 5AnElaizE / n. 分析程序,分析算法,分析器animation / 7Ani5meiF E n / n. 动画(制作)anomaly / E5nCmEli / n. 异常(现象);不按常规antitrust / 7Anti5trQst / a. 反托拉斯的,反垄断的antivirus software 防病毒软件app / Ap / n. 〈口〉应用程序,应用软件(= application)apparatus / 7ApE5reitEs / n. 器械;设备appealing / E5pi:liN / a. 吸引人的;有感染力的append / E5pend / v. 附加applet / 5AplEt / n. 小应用程序application / 7Apli5keiF E n / n. 应用程序,应用软件application programming interface 应用程序编程接口,应用程序设计接口approximation / E7prCksi5meiF E n / n. 近似(值)arcane / B:5kein / a. 神秘的,晦涩难解的;秘密的architecture / 5B:kitektFE / n. 体系结构arena / E5ri:nE / n. 竞技场;活动场所array / E5rei / n. 数组;一系列artificial intelligence 人工智能assembler / E5semblE / n. 汇编程序,汇编器(4A)assembly / E5sembli / n. 部件;组合体(11A)assembly code 汇编代码assembly language 汇编语言assignment statement 赋值语句assortment / E5sC:tmEnt / n. 分类;各种各样asterisk / 5AstErisk / n. 星号asynchronous / ei5siNkrEnEs / a. 不同时的;异步的,非同步的athletic / AW5letik / a. 运动的,体育的attachment / E5tAtFmEnt / n. 附件audible / 5C:dEbl / a. 听得见的audit / 5C:dit / v. 审计;审核(12A)augment / C:g5ment / v. 扩大;增加authentication / C:7Wenti5keiF E n / n. 验证,鉴别author / 5C:WE / v. 著作,写作;编写authorize / 5C:WEraiz / v. 授权;委托authorized / 5C:WEraizd / a. 经授权的automate / 5C:tEmeit / v. 使自动化automated / 5C:tEmeitid / a. 自动化的automatic teller machine 自动柜员机,取款机(12A)automation / 7C:tE5meiF E n / n. 自动化autonomous / C:5tCnEmEs / a. 自治的,自主的,独立的autonomous agent 自主主体Bbackbone / 5bAkbEun / n. 骨干(网),基干(网)backcountry / 5bAk7kQntri / a. 偏僻乡村的backup / 5bAkQp / n. & a. 备份,后备/ 备份的,后备的(6B)balance / 5bAlEns / n. 余额;差额;结算(6B)bandwidth / 5bAndwidW / n. 带宽banking / 5bANkiN / n. 银行业务;银行业banner / 5bAnE / n. 标题,横幅bar / bB: / n. 条;条形图bar chart 条形图bar code 条形码(11A)base class 基类,基本类(6C)base ten notation 以10为底的记数法batch / bAtF / n. (一)批beta / 5bi:tE; 5beitE / n. 希腊语的第二个字母(B,β);测试版(6C)beta testing β测试(法)(6C)bill / bil / v. 给.开账单;把.登账billing / 5biliN / n. 开(账)单;记账binary / 5bainEri / a. & n. 
二进制的/ 二进制(数)binary notation 二进制记数法biometric(al) / 7baiEu5metrik(E l) / a. 生物统计学的(11A)bit / bit / n. 位,比特bit map 位图;位映象bit pattern 位模式blackboard model 黑板法模型blackout / 5blAkaut / n. 断电,停电(12A)block / blCk / n. (字、信息、程序、数据等的)块;分程序block character 块字符blog / blCg / n. 博客,网志,网络日志(weblog的缩略)blooper /5blu:pE / n. 过失,失礼blueprint / 5blu:print / n. 蓝图bolster / 5bEulstE / v. 支撑;支持;提高bombsight / 5bCmsait / n. 轰炸瞄准器boot / bu:t / v. & n. [亦作boot up]引导,启动bottleneck / 5bCtlnek / n. 瓶颈,障碍bounds checking 边界检查boxer / 5bRksE / n. 拳击运动员,拳师bracket / 5brAkit / n. 括号brand-new / 5brAnd5n j u: / a. 全新的,崭新的break-in / 5breikin / n. 闯入;盗窃(12A)breeze / bri:z / n. 〈主美口〉不费吹灰之力的事bridge / bridV / n. 网桥,桥接器bridging / 5bridViN / n. 桥接,跨接(6C)broadband / 5brC:dbAnd / a. 宽带的broker / 5brEukE / n. 代理者;代理程序brownout / 5braunaut / n. 负载偏重期,电压不足(12A)browse / brauz / v. 浏览browser / 5brauzE / n. 浏览器buffer / 5bQfE / n. 缓冲区,缓冲器(12A)bug / bQg / n. (程序)错误,故障(4A)building block 积木块,构建模块,构件built-in / 5bilt5in / a. 内置的,内部的bureaucratic / 7bjuErE u5krAtik / a. 官僚(政治)的;官僚主义的bursty / 5bE:sti / a. 猝发的,突发的bus / bQs / n. 总线bus topology 总线拓扑结构bust / bQst / n. 崩溃,不景气(11A)bypass / 5baipB:s / v. 绕过;越过byte / bait / n. 字节bytecode / 5baitkEud / n. 字节码Ccable television 有线电视cancellation / 7kAnsE5leiF E n / n. 取消;删去(6B)capitalization / 7kApitElai5zeiF E n; ‐li5z / n. 大写字母的使用captivating / 5kAptiveitiN / a. 迷人的,可爱的cardinality / 7kB:di5nAliti / n. 基数性;(集的)势,(集的)基数carrier / 5kAriE / n. 载波carrier sense 载波检测,载波监听carte blanche / 5kB:t5blCnF / n. 〈法〉全权,自由处理权cartridge / 5kB:tridV / n. 盒,匣(4A)cascade / kA5skeid / n. 小瀑布;瀑布状物;级联;层叠cascading / kA5skeidiN / a. 级联的;层叠的(6B)cascading rollback 级联回滚(6B)case / keis / v. 〈俚〉(尤指企图盗窃时)探察,察看cash on delivery 货到付款,交货付款catastrophe / kE5tAstrEfi / n. 灾难,灾祸(12A)categorize / 5kAtigEraiz / v. 将.分类;为.取名描述cater / 5keitE / v. 满足需要;迎合;考虑(for, to)cathode / 5kAWEud / n. 阴极cathode ray tube 阴极射线管cell / sel / n. 单元;单元格cell phone 蜂窝电话,移动电话,手机cellular / 5seljulE / a. 蜂窝状的,多孔的cellular telephone 蜂窝电话,移动电话,手机census / 5sensEs / n. 人口普查central processing unit 中央处理器centrifugal / sen5trifju^El / a. 离心的centrifugal pump 离心泵chain-letter / 5tFein7letE / v. 向.发送连锁信(或连锁邮件)champ / tFAmp / n. 〈口〉冠军channel / 5tFAnl / v. (通过某种渠道)输送,传送;引导(6C)character set 字符集chatty / 5tFAti / a. 聊天式的,轻松而亲切的;爱闲聊的check box 复选框,选择框,校验框checkers / 5tFekEz / n. 〈美〉西洋跳棋checkout / 5tFekaut / n. 结账(离去);付款台chip / tFip / n. 芯片circuit board 电路板circuit breaker 断路器(12A)circuitry / 5sE:kitri / n. 电路circulation / 7sE:kju5leiF E n / n. 流通;循环circumvent / 7sE:kEm5vent / v. 绕过;规避civic-minded / 5sivik5maindid / a. 关心公益的;热心公民(或市民)事务的clarity / 5klAriti / n. 清晰,明晰class hierarchy 类层次classified / 5klAsifaid / a. 归入密级的,保密的click / klik / v. & n. (鼠标)单击/(鼠标的)点击client / 5klaiEnt / n. 客户程序;客户机client/server model 客户机/服务器模型clip / klip / n. 剪下来的东西;电影(或)电视片段clipper / 5klipE / chip 加密芯片(12A)clog / 5klCg / v. 塞满;阻塞(12A)cluster / 5klQstE / n. (人或物的)群,组;(果实、花等的)串,束,簇clutter / 5klQtE / n. 凌乱,杂乱;杂乱的东西coaxial / 7kEu5AksiEl / a. 同轴的coaxial cable 同轴电缆coexist / 7kEuig5zist / v. 共存;同时存在cognition / kC^5niF E n / n. 认识,认知coherent / kEu5hiEr E nt / a. 相干的;一致的;协调的coincide / 7kEuin5said / v. 相符,相一致;重合collaboration / kE7lAbE5reiF E n / n. 合作;协作collaborative / kE5lAbEreitiv / a. 合作的,协作的collide / kE5laid / v. 冲突;碰撞(8C)collusion / kE5lu:V E n / n. 共谋,勾结,串通colon / 5kEulEn / n. 冒号commit / kE5mit / v. 提交,委托;承诺(6B)commit point 提交点(6B)commonplace / 5kCmEnpleis / a. 普通的,平凡的(12A)compact disc 光盘compatibility / kEm7pAti5biliti / n. 兼容性compatible / kEm5pAtEbl / a. 兼容的compelling / kEm5peliN / a. 强制性的;有强烈吸引力的;令人信服的competitor / kEm5petitE / n. 
竞争者compilation / 7kCmpi5leiF E n / n. 编译;汇编compile / kEm5pail / v. 汇编;编译compiled code 编译执行的代码compiled language 编译执行的语言(4A)compiler / kEm5pailE / n. 编译程序,编译器complementary / 7kRmpli5ment E ri / a. 补充的;互补的compliant / kEm5plaiEnt / a. 顺应的,顺从的,遵从的complication / 7kCmpli5keiF E n / n. 复杂情况;困难,难题comprehend / 7kCmpri5hend / v. 理解,领会computational / 7kCmpju:5teiF E nEl / a. 计算(机)的conceive / kEn5si:v / v. (构)想出conceptual / kEn5septjuEl / a. 概念的conceptually / kEn5septjuEli / ad. 概念上concise / kEn5sais / a. 简明的,简要的concurrency / kEn5kQrEnsi / n. 同时发生,并发,并行性concurrent / kEn5kQrEnt / a. 同时发生的,并发的,并行的conditional statement 条件语句conditioning / kEn5diFEniN / n. 调节,调整conducive / kEn5d j u:siv / a. 有助的,有益的(to)confidential / 7kCnfi5denF E l / a. 秘密的,机密的configurable / kEn5figErEbl / a. 可配置的(6C)configuration / kEn 7figju5reiF E n / n. 配置configuration item 配置项configure / kEn5figE / v. 配置congestion / kEn5dVestF E n / n. 拥挤;拥塞conjecture / kEn5dVektFE / n. 推测,猜想connection string 连接字符串(6C)connectivity / kEnek5tiviti / n. 连通(性),连接(性)connector / kE5nektE / n. 连接器,插头座connotation / 7kCnE u5teiF E n / n. 内涵(意义),涵义consolidate / kEn5sClideit / v. (把.)联为一体,合并;巩固consortium / kEn5sC:tiEm / n. ([复]-tia / ‐tiE / 或-tiums)联营企业;(国际)财团,联盟constrain / kEn5strein / v. 约束,限制constraint / kEn5streint / n. 约束,限制contender / kEn5tendE / n. 争夺者,竞争者contention / kEn5tenF E n / n. 争用;争夺context switch 上下文转换,语境转换controller / kEn5trEulE / n. 控制器converge / kEn5vE:dV / v. 会聚;结合;收敛convergence / kEn5vE:dV E ns / n. 会聚;结合;收敛conversion / kEn5vE:F E n / n. 转换;转变converter / kEn5vE:tE / n. 转换器,转换程序(8C)convoluted / 5kCnvElu:tid / a. 盘绕的;盘错的,错综复杂的cookie / 5kuki / n. “甜饼”(指一种临时保存网络用户信息的结构)cooperative / kEu5Cp E rEtiv / a. 合作的,协作的Copy command 复制命令cordless telephone 无绳电话corporate / 5kC:p E rit / a. 公司的;社团的corrupt / kE5rQpt / v. 破坏,损坏;腐蚀corruption / kE5rQpF E n / n. 破坏;腐化;讹误(6B)cosmetic / kCz5metik / a. 化妆用的;装饰性的;非实质性的cost effectiveness 成本效益cost-effective / 5kCsti5fektiv / a. 有成本效益的;合算的counterfeiter / 5kauntE7fitE / n. 伪造者(尤指伪造货币的人)(12A)countermeasure / 5kauntE7meVE / n. 对策,对抗手段counterpart / 5kauntEpB:t / n. 对应的物(或人)country code 国家代码coupon / 5ku:pCn / n. 购物优惠券,礼券,赠券courier / 5kuriE / n. 信使courtesy / 5kE:tisi / n. 谦恭有礼;好意coverage / 5kQvEridV / n. 新闻报道;覆盖范围coworker / kEu5wE:kE / n. 同事crack / krAk / v. 破译cracker / 5krAkE / n. 非法侵入(计算机系统)者(12A)credit card 信用卡criterion / krai5tiEriEn / n. ([复]-ria / ‐riE / 或-rions)标准,准则(6C)critter / 5kritE / n. 〈美口〉生物;动物cryptic / 5kriptik / a. 隐秘的;(简短而)意义含糊的;费解的cursive / 5kE:siv / n. 草体;草体字(母)cursor / 5kE:sE / n. 光标custom / 5kQstEm / a. 定制的,自定义的customize / 5kQstEmaiz / v. 定制,使用户化customized / 5kQstEmaizd / a. 定制的,用户化的(11A)cut-through / 5kQtWru: / a. 穿越式的,直通的cyber / 5saibE / a. 计算机(网络)的cyber cafe 网吧cyberspace / 5saibEspeis / n. 电脑空间,网络空间Ddaemon / 5di:mEn / n. 端口监督/监控程序,守护程序data bus 数据总线data capture 数据捕获,数据收集data declaration 数据声明data field 数据字段;数据域data flow diagram 数据流程图data item 数据项(6B)data library 数据(文件)库(6C)data link 数据链路data stream 数据流(6C)dated / 5deitid / a. 过时的datum / 5deitEm / n. ([复]data)数据de facto / di:5fAktEu / a. 〈拉〉实际的,事实上的deadlock / 5dedlCk / n. 死锁;僵局(6B)debit / 5debit / n. 借方;借记,借入debit card 借方卡debug / di:5bQg / v. 调试,排除(程序)中的错误(4A)debugger / di:5bQgE / n. 调试程序,排错程序(4A)decentralize / 7di:5sentrElaiz / v. 分散decimal / 5desim E l / a. 十进制的decimal notation 十进制记数法decipher / di5saifE / v. 破译,译解deck / dek / n. 卡片叠,卡片组decode / 7di:5kEud / v. 译(码),解(码)decorative / 5dekErEtiv / a. 装饰性的decrement / 5dekrimEnt / v. 减少,减缩(6B)dedicate / 5dedikeit / v. 把.献给,把.用于dedicated / 5dedikeitid / a. 专用的deduce / di5d j u:s / v. 
推论,推断default / di5fC:lt / n. 默认,缺省,系统设定值(4A)definitive / di5finitiv / a. 决定性的;确定的;规定的degradation / 7de^rE5deiF E n / n. 降级,退化degraded / di5greidid / a. 降级的,退化的déjà-vu / 7deiVB:5vju: / n. 〈法〉似曾经历的错觉deliverable / di5liv E rEb E l / n. [常作复]可交付使用的产品delivery on payment 付款交货delve / delv / v. 搜索,翻查demodulate / di:5mCdjuleit / v. 解调demodulator / di:5mCdjuleitE / n. 解调器demultiplex / 7di:5mQltipleks / v. 分路,把.分成多路denote / di5nEut / v. 表示;意思是deploy / di5plCi / v. 部署derivation / 7deri5veiF E n / n. 派生(物)descriptor / di5skriptE / n. 描述符,解说符desensitize / 7di:5sensitaiz / v. 使不敏感,使降低敏感性designate / 5dezigneit / v. 指定,指明;命名;指派designated / 5dezi^neitid / a. 指定的,派定的desk / desk / n. 服务台;部门desktop / 5desktCp / a. & n. 桌面的;台式(计算机)的/ 桌面;台式(计算)机destine / 5destin / v. 预定,指定(往某一目的地或供某种用途)(for)detrimental / 7detri5ment E l / a. 有害的;不利的devastating / 5devEsteitiN / a. 破坏性极大的,毁灭性的(6B)device driver 设备驱动程序devotee / 7devE5ti: / n. 献身者;爱好者dexterity / dek5steriti / n. 灵巧,敏捷dialog box 对话框dial-up / 5dai E lQp / a. 拨号的Difference Engine 差分机differentiator / 7difE5renFieitE / n. 区分者;鉴别者digit / 5didVit / n. 数字digital camera 数码照相机digitize / 5didVitaiz / v. 使(数据)数字化dinosaur / 5dainEsC: / n. 恐龙;(尤指废弃过时的)庞然大物dip / dip / n. (暂时或小幅度的)减少;下降(11A)direct debit 直接借记,直接付款direct memory access 直接存储器存取directory / di5rekt E ri / n. 目录disability / 7disE5biliti / n. 无能力;残疾disable / dis5eibl / v. 使丧失能力;停用(6C)disastrous / di5zB:strEs / a. 灾难性的(6B)discretion / di5skreF E n / n. 斟酌决定(或处理)的自由discrimination / dis7krimi5neiF E n / n. 区别;歧视disenfranchise / 7disin5frAntFaiz / v. 剥夺.的公民权;剥夺.的权利disgruntled / dis5grQntld / a. 不满的(12A)disk drive 磁盘机,磁盘驱动器dismantle / dis5mAntl / v. 解散;拆开;废除disparate / 5disp E rit / a. 完全(或根本)不同的;不相干的(11A)disparity / dis5pAriti / n. 不同,差异dispatcher / di5spAtFE / n. 调度程序disrupt / dis5rQpt / v. 扰乱;使中断dissemination / di7semi5neiF E n / n. 散布;传播distributed / di5stributid / a. 分布(式)的distributor / di5stribjutE / n. 经销商;批发商disturbance / di5stE:b E ns / n. 干扰(12A)documentation / 7dCkjumen5teiF E n / n. 文件编制,文档编制;[总称]文件证据,文献资料domain / dE5mein / n. 领域,域domain name system 域名系统domain name 域名doorway / 5dC:wei / n. 出入口,门口dot-bomb / 5dCtbCm / n. 网络炸弹(11A)dot-com / 5dCtkCm / n. 网络公司(11A)dotted decimal notation 点分十进制记数法double-check / 5dQbl5tFek / v. 复核;从两方面查对down / daun / v. 击倒;击落;倒下downturn / 5dauntE:n / n. 衰退;下降趋势(11A)drive / draiv / n. 驱动器driver / 5draivE / n. 驱动程序,驱动器drop shipping 直达货运drop-down menu 下拉式菜单(或选项屏)droplet / 5drCplit / n. 小滴drum / drQm / n. 磁鼓dual / 5dju:E l; 5du:El / a. 双的;双重的duke / d j u:k / n. 公爵dupe / dju:p; du:p / v. 欺骗,愚弄duplicate / 5d j u:plikeit / v. 复制;重复duplicate / 5d j u:plikit / a. 复制的;副(本)的;重复的Eeditor / 5editE / n. 编辑程序,编辑器electrical contact 电触点electronic bulletin board 电子公告板(12A)embed / im5bed / v. 把.嵌入embedded / im5bedid / a. 嵌入(式)的embody / im5bCdi / v. 使具体化,体现;包含embrace / im5breis / v. (欣然)接受;(乐意)利用emoticon / i5mEutikCn / n. 情感符(emot ion icon的缩合)emulate / 5emjuleit / v. 模拟,仿真,模仿encapsulate / in5kApsjuleit; ‐sE‐/ v. 封装encapsulation / in7kApsju5leiF E n; ‐sE‐/ n. 封装encipher / in5saifE / v. 把.译成密码encode / en5kEud / v. 把.编码;把.译成电码(或密码)(4A)encompass / in5kQmpEs / v. 包含,包括encrypt / in5kript / v. 把.加密(12A)encryption / in5kripF E n / n. 加密end user 最终用户,终端用户endpoint / 5endpCint / n. 端点enfranchise / in5frAntFaiz / vt. 给.公民权(或选择权)Enter key 回车键enthusiast / in5Wju:ziAst / n. 热心者,狂热者entity / 5entiti / n. 实体entity relationship diagram 实体关系图,实体联系图,E-R图enumerate / i5n j u:mEreit / v. 列举,枚举;遍历(6C)equate / i5kweit / v. 等同,相等erase / i5reiz / v. 擦除,消除,清除erroneous / i5rEuniEs / a. 错误的,不正确的(6B)essence / 5es E ns / n. 
本质,实质etch / etF / v. 蚀刻Ethernet / 5i:WE7net / n. 以太网(标准)etiquette / 5etiket / n. 礼节;(行业中的)道德规范;规矩evoke / i5vEuk / v. 唤起;使人想起excerpt / ek5sE:pt/ v. 摘录;引用exclusive lock 排它锁,互斥(型)锁(6B)executable / 5eksikju:tEbl / a. & n. 可执行的/ 可执行文件execution / 7eksi5kjuF E n / n. 执行,运行existing / ig5zistiN / a. 现存的,现有的,现行的expertise / 7ekspE:5ti:z / n. 专门知识(或技能),专长expiration / 7ekspi5reiF E n / n. 期满;截止exploratory / ik5splCrEt E ri / a. 探索的;勘探的expression / ik5spreF E n / n. 表达式extensibility / ik7stensE5biliti / n. 可扩展性,可扩充性extension / ik5stenF E n/ n. 扩展,扩充;扩展名extract / ik5strAkt / v. 提取,析取,抽取Ffable / 5feibl / n. 寓言fabricate / 5fAbrikeit / v. 制作facsimile / fAk5simili / n. 传真factor / 5fAktE / v. 把.分解成(into)familiarize / fE5miliEraiz / v. 使熟悉;使通晓fault tolerance 容错ferry / 5feri / n. & v. 渡船;摆渡;渡口/ 渡运;运送fiber / 5faibE / n. 〈主美〉纤维;光纤fiber-optic / 5faibEr5Cptik / a. 光纤的fiber-optic cable 光缆fictional / 5fikF E nEl / a. 小说的;虚构的field / fi:ld / n. & v. 字段;域;信息组/ 派.上场;实施;产生File Transfer Protocol 文件传送协议filename / 5failneim / n. 文件名(12A)filestore / 5failstC: / n. 文件存储(器)fill / fil / n. 填充film clip 剪片fine-tune / 5fain5t j u:n / v. 微调,细调(6C)finger / 5fiNgE / n. finger命令,远程用户信息服务命令fingerprint / 5fiN^Eprint / n. 指纹(印),手印(11A)fingerprint reader 指纹读取器(11A)firewall / 5faiEwC:l / n. 防火墙firmware / 5fE:mwZE / n. [总称]固件fitting / 5fitiN / n. 试穿,试衣flag / flAg / v. 用标记表明flame / fleim / v. (向.)发送争论(或争辩)邮件flaming / 5fleimiN / n. 争论(特指在邮件讨论组或网络论坛中争论)flat file 平面文件,展开文件flatbed scanner 平板扫描仪flaw / flC: / n. 缺点,瑕疵flawed / flC:d / a. 有缺点的,有瑕疵的flawless / 5flC:lis / a. 无瑕的,完美的floppy / 5flCpi / a. (松)软的floppy disk 软(磁)盘flowchart / 5flEutFB:t / n. 流程图flux / flQks / n. (不断的)变动;波动(6B)focal / 5fEuk E l / a. 焦点的focal point 焦点,活动(或注意、兴趣等的)中心foil / fCil / v. 挫败;使受挫折folder / 5fEuldE / n. 文件夹folksonomy / 5fEuk7sCnEmi / n. 公众(或大众、分众)分类(法)(folks与tax onomy的缩合)footwear / 5futwZE / n. [总称]鞋类foreseeable / fC:5si:Ebl / a. 可预见到的formulate / 5fC:mjuleit / v. 构想出;系统地阐述forward slash 正斜杠foster / 5fCstE / v. 培养,促进frame / freim / n. 帧,画面/ 图文框;框架frame relay 帧中继fraud / frC:d / n. 欺骗,欺诈fraudulent / 5frC:djulEnt / a. 欺骗性的,欺诈性的fraught / frC:t / a. [一般作表语]充满的(with)(11A)frequency array 频率数组frivolous / 5frivElEs / a. 轻薄的;琐屑的function statement 函数语句functional language 函数式语言functionality / 7fQNkFE5nAliti / n. 功能性fuse / fju:z / v. 熔合;熔凝Ggaggle / 5gAgl / n. (紊乱而有联系的)一堆garbage collector 垃圾收集器(6C)gateway / 5geitwei / n. 网关(8C)gauge / geidV / v. 估计,判断;计量gender / 5dVendE / n. 性别general-purpose register 通用寄存器generator / 5dVenEreitE / n. 生成程序,生成器;发生器generic / dVi5nerik / a. 类属的;一般的;通用的gift card 礼物卡,打折卡giga- / 5dVi^E, 5dVai^E / comb. form 表示“吉”,“千兆”,“十亿”,“109”gigabit / 5dVigEbit / n. 吉位,千兆比特grammatical / ^rE5mAtik E l / a. (符合)语法的granularity / 7^rAnju5lAriti / n. 粒度,间隔尺寸graphic(al) / 5grAfik(E l) / a. 图形的,图示的,图解的graphical user interface 图形用户界面graphics / 5grAfiks / n. 图形,图形显示graphics card 显(示)卡grayscale / 5greiskeil / n. 灰度级,灰度标green / gri:n / n. (高尔夫)球穴区;高尔夫球场greeting card 贺卡grim / grim / a. 严厉的;阴森的gullible / 5^Qlib E l / a. 易受骗的,易上当的Hhack / hAk / v. & n. 非法闯入(计算机网络),黑客攻击hacker / 5hAkE / n. 黑客handheld / 5hAndheld / n. 手持式计算机,掌上电脑hand-held computer 手持式计算机,掌上电脑hand-held scanner 手持式扫描仪handle / 5hAndl / n. 句柄;(文件、对象等的)(名)称(编)号,名(字编)号handler / 5hAndlE / n. 处理程序;处理器handshaking / 5hAnd7FeikiN / n. 握手,信号交换haphazard / 7hAp5hAzEd / a. 无计划的,随意的harass / 5hArEs / v. 骚扰;烦扰harassment / 5hArEsmEnt, hE5rAs‐/ n. 骚扰hard disk 硬(磁)盘hard drive 硬盘驱动器hard-core / 5hB:dkC: / a. 中坚的,铁杆的hassle / 5hAsl / n. 搏斗;困难;麻烦header / 5hedE / n. 标题,报头,头标;页眉heading / 5hediN / n. 
Cooperative Control (3)
Slides: Richard M. Murray, Caltech CDS (EECI, March 2009)

Motivating applications
• California Partners for Advanced Transit and Highways (PATH): a system for allowing cars to be driven automatically down a freeway at close spacing. Idea: reduce the speed of collisions via close spacing; need to worry about string stability.
• Air traffic control: move from a human-controlled, centralized structure to a more distributed system. Enable "free flight" technologies allowing aircraft to travel in direct paths rather than staying in pre-defined air traffic control corridors; improve the current system by developing cockpit "sensors".
• Other settings: servers and resource allocation; supply chain management (factory, warehouse, distributors, retailers, advertisement, consumers).

Cooperative control systems framework
• Agent dynamics: $\dot{x}_i = f_i(x_i, u_i)$ with $x_i \in \mathbb{R}^n$, $u_i \in \mathbb{R}^m$; outputs $y_i = h_i(x_i)$, $y_i \in \mathbb{R}^q$.
• Vehicle "role" $\alpha \in \mathcal{A}$: encodes internal state plus the relationship to the current task; role transitions $\alpha := r(x, \alpha)$.
• Communications graph $\mathcal{G}$: encodes the system information flow and defines the neighbor set $\mathcal{N}_i$.
• Communications channel: communicated information can be lost, delayed, or reordered, and is subject to rate constraints: $y_i^j[k] = \gamma\, y_i(t_k - \tau_j)$, $t_{k+1} - t_k > T_r$, where $\gamma$ is a binary random process (packet loss).
• Task: encoded as a finite-horizon optimal control problem, $J = \int_0^T L(x, \alpha, E(t), u)\,dt + V(x(T), \alpha(T))$; the task is assumed to be coupled and the environment $E$ estimated.
• Strategy: control actions for individual agents, $u_i(x, \alpha) = u_i(x_i, \alpha_i, y_{-i}, \alpha_{-i}, \hat{E})$ with $y_{-i} = \{y_{j_1}, \dots, y_{j_{m_i}}\}$, $j_k \in \mathcal{N}_i$, $m_i = |\mathcal{N}_i|$.
• Decentralized strategy: a similar structure is used for the role update, $\{g_i^j(x, \alpha) : r_i^j(x, \alpha)\}$, with $\alpha_i := r_i^j(x, \alpha)$ if $g(x, \alpha)$ is true and unchanged otherwise; $u_i = k_i(x, \alpha)$.
• Example: satellite formation.

Stability condition (Fax and Murray, IEEE TAC 2004)
Theorem: the closed-loop system is (neutrally) stable iff the Nyquist plot of the open-loop system does not encircle $-1/\lambda_i(L)$, where $\lambda_i(L)$ are the nonzero eigenvalues of the graph Laplacian $L$.

Example revisited
• Adding a link increases the number of three-cycles, which leads to "resonances"; a change in the control law is required to avoid instability.
• Q: does increasing the amount of available information decrease stability? A: the control law cannot ignore the information, so additional feedback is inserted.

Improving performance through communication (Fax and Murray, IEEE TAC 2004)
• Baseline (stability only): poor performance due to the interconnection.
• Method 1: tune the information flow filter; a low-pass filter damps the response and improves performance somewhat.
• Method 2: consensus plus feedforward; agree on the center of the formation, then move, compensating for the motion of the vehicles by adjusting the information flow. Look at the motion between selected vehicles.

Special case: (asymptotic) consensus
• Consensus: agreement between agents using the information flow graph.
• Asymptotic convergence to a single value can be proved if the graph is connected; if $w_{ij} = 1/(\text{in-degree})$ and the graph is balanced, the agents converge to the average of the initial conditions (see the sketch following these notes).
• Extensions (Jadbabaie/Morse, Moreau, Olfati-Saber, Xiao, Chandy/Charpentier, ...): switching (packet loss, dropped links, etc.), time delays, plant uncertainty; nearest-neighbor graphs, small-world networks, optimal weights; nonlinear settings (potential fields, passive systems, gradient systems); distributed Kalman filtering and distributed optimization; self-similar algorithms for operation with varying connectedness.
• See also: gossip algorithms, load balancing, distributed computing (Tsitsiklis).
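To make the consensus bullet above concrete, here is a minimal sketch of the discrete-time averaging iteration with w_ij = 1/(in-degree). It is not taken from the slides; the graph, weights, and initial values are illustrative assumptions, chosen so the update matrix is doubly stochastic and the iteration really does converge to the average of the initial conditions.

# Minimal consensus-averaging sketch (assumed setup, for illustration only).
import numpy as np

def consensus_step(x, neighbors):
    """One synchronous update: x_i <- mean of x_j over the in-neighbors j of i."""
    return np.array([np.mean([x[j] for j in neighbors[i]]) for i in range(len(x))])

# Hypothetical 4-agent directed ring with self-loops (connected and balanced),
# so the iteration converges to the average of the initial conditions.
neighbors = {0: [0, 3], 1: [1, 0], 2: [2, 1], 3: [3, 2]}
x = np.array([1.0, 5.0, -2.0, 8.0])
target = x.mean()                      # 3.0
for _ in range(200):
    x = consensus_step(x, neighbors)
print(x, target)                       # all entries approach 3.0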
Robustness (Gupta, Langbort and Murray, CDC 06)
• Transients can grow as information is passed along the interconnection.

Formation operations: graph switching
• Control questions: How do we split and rejoin teams of vehicles? How do we specify vehicle formations and control them? How do we reconfigure formations (shape and topology)?
• Consensus-based approach using balanced graphs: if each subgraph is balanced, the disagreement vector provides a common Lyapunov function.

Optimization-based approach
• Local MPC + CLF; assume neighbors follow straight lines.
• Compatibility constraint: each vehicle transmits its plan to its neighbors and must stay within a bounded path of what was transmitted.
• Simulation results: the formation visits the waypoints.

Rendezvous (Tiwari et al.)
• Goal: a collection of vehicles arrives in a given target set; consider the maximum and minimum distances of the vehicles at the time the first of them enters the rendezvous set.
• Find a control law such that rendezvous is achieved from all initial conditions.
• Approach: create invariant regions (cones). The solution is centralized: each vehicle needs information about all of the others.

Coverage (Cortes, Martinez, Bullo)
• Place N vehicles over a region to maximize sensor coverage.
• Partition the region Q into a set of polytopes that cover Q.
• Let f represent the sensing performance (small is good) and let φ denote a distribution over Q; p_i denotes the position of vehicle i.
• Represent the coverage problem as minimizing the cost function $H = \sum_i \int_{W_i} f(\|q - p_i\|)\,\phi(q)\,dq$.
MULTI-VERSION CONCURRENCY CONTROL METHOD IN DATABASE, AND DATABASE SYSTEM
Patent title: MULTI-VERSION CONCURRENCY CONTROL METHOD IN DATABASE, AND DATABASE SYSTEM
Inventors: WEN, Jijun; NIE, Yuanyuan; LI, Jian
Application number: EP14876808.8, filed 2014-07-23
Publication number: EP3079078A1, published 2016-10-12
Applicant: Huawei Technologies Co., Ltd., Huawei Administration Building, Bantian, Longgang District, Shenzhen, Guangdong 518129, CN
Agent: Epping Hermann Fischer

Abstract: Embodiments of the present invention disclose a multi-version concurrency control method in a database and a database system, mainly applied to the field of database technologies. The database system sets a data page link for a page, where the data page link includes a page pointer for each version page of the page, and the page pointer of a version page points to the version page as it was prior to the last operation on it. In this way, when a page in the database is read and the timestamp of the current version page is greater than the timestamp of the read transaction included in the data reading request, page-level rollback may be performed directly along the data page link of the requested page in order to roll back to the page that needs to be read. This lets a user inspect the state of a page in the database at any point in time, that is, it facilitates queries for the data on each version page in the database. Further, the database system may implement record rollback efficiently by combining a data page link with a record link, thereby realizing consistent reads.
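As a rough illustration of the page-version-link idea sketched in the abstract, the toy model below gives each version of a page a timestamp and a pointer to the version it superseded, and a reader rolls back along that chain until it reaches a version no newer than its own read timestamp. This is my own simplified reading, not the patented implementation; the names PageVersion, write_page, and read_page are invented for the example.

# Toy page-version chain with timestamp-based rollback (illustrative only).
from dataclasses import dataclass
from typing import Optional

@dataclass
class PageVersion:
    data: dict
    timestamp: int                        # commit timestamp of the write that produced it
    prev: Optional["PageVersion"] = None  # link to the previous version of the page

def write_page(current: Optional[PageVersion], data: dict, ts: int) -> PageVersion:
    """Create a new version page linked to the version it supersedes."""
    return PageVersion(data=data, timestamp=ts, prev=current)

def read_page(latest: PageVersion, read_ts: int) -> Optional[PageVersion]:
    """Page-level rollback: follow the link until a version visible at read_ts."""
    version = latest
    while version is not None and version.timestamp > read_ts:
        version = version.prev
    return version

# Hypothetical history of one page.
v1 = write_page(None, {"row1": "a"}, ts=10)
v2 = write_page(v1, {"row1": "b"}, ts=20)
v3 = write_page(v2, {"row1": "c"}, ts=30)
print(read_page(v3, read_ts=25).data)   # {'row1': 'b'} -> the version committed at ts=20
print(read_page(v3, read_ts=5))         # None          -> the page did not exist yet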
A brief description of the MVCC execution flow
Multi-Version Concurrency Control (MVCC) is a concurrency control mechanism that allows multiple transactions to operate on the same data concurrently without overwriting each other's changes. It achieves this by maintaining multiple versions of each data item, created as transactions write it.

When a transaction starts, it is assigned a unique transaction ID (TID). All reads and writes to the database are performed using the TID of the current transaction. When a transaction reads a data item, it retrieves the version of the data item that was committed before the transaction started. When a transaction writes a data item, it creates a new version of the data item and associates it with its TID.

Other transactions can continue to read the old version of the data item, even after the new version has been created. This allows multiple transactions to operate on the same data concurrently without overwriting each other's changes.

When a transaction commits, its changes are made permanent and the new versions of the data items it modified become the new committed versions. Transactions that are still running will, depending on the isolation level, either continue to read from their original snapshot or see the new committed versions in subsequent statements.

MVCC is an optimistic concurrency control mechanism. It assumes that transactions will not conflict with each other and allows them to proceed without first checking for conflicts. If a conflict does occur, it is detected when one transaction attempts to commit its changes, and the database system aborts that transaction.

MVCC is a highly efficient concurrency control mechanism that can handle a large number of concurrent transactions. It is used in many popular database systems, including PostgreSQL, Oracle, and MySQL.
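The flow described above can be sketched as a toy in-memory version store. This is a hedged illustration only: real systems such as PostgreSQL track far more state (snapshots, xmin/xmax, vacuum), and the class and method names below are invented for the example.

# Toy MVCC store: snapshot reads, private writes, first-committer-wins validation.
import itertools

class MVCCStore:
    def __init__(self):
        self.versions = {}        # key -> list of (commit_ts, value), oldest first
        self.ts = itertools.count(1)

    def begin(self):
        # A "transaction" here is just a start timestamp plus a private write set.
        return {"start_ts": next(self.ts), "writes": {}}

    def read(self, txn, key):
        # Own uncommitted write wins; otherwise the newest version committed before start.
        if key in txn["writes"]:
            return txn["writes"][key]
        visible = [v for ts, v in self.versions.get(key, []) if ts <= txn["start_ts"]]
        return visible[-1] if visible else None

    def write(self, txn, key, value):
        txn["writes"][key] = value          # buffered privately until commit

    def commit(self, txn):
        # Abort if a key we wrote was committed by someone else after we started.
        for key in txn["writes"]:
            committed = self.versions.get(key, [])
            if committed and committed[-1][0] > txn["start_ts"]:
                return False                # conflict detected at commit -> abort
        commit_ts = next(self.ts)
        for key, value in txn["writes"].items():
            self.versions.setdefault(key, []).append((commit_ts, value))
        return True

store = MVCCStore()
t1, t2 = store.begin(), store.begin()
store.write(t1, "x", "from t1")
print(store.read(t2, "x"))   # None: t2 cannot see t1's uncommitted write
print(store.commit(t1))      # True
store.write(t2, "x", "from t2")
print(store.commit(t2))      # False: x changed after t2 started, so t2 is aborted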
Concurrency control: optimistic, pessimistic, and the main methods
并发控制--Concurrencycontrol--乐观、悲观及⽅法In and , especially in the fields of , , , and , concurrency control ensures that correct results for operations are generated, while getting those results as quickly as possible.Concurrency control mechanisms[]Categories[]The main categories of concurrency control mechanisms are:- Delay the checking of whether a transaction meets the isolation and other integrity rules (e.g., and ) until its end, without blocking any of its (read, write) operations ("...and be optimistic about the rules being met..."), and then abort a transaction to prevent the violation, if the desired rules are to be violated upon its commit. An aborted transaction is immediately restarted and re-executed, which incurs an obvious overhead (versus executing it to the end only once). If not too many transactions are aborted, then being optimistic is usually a good strategy.Pessimistic - Block an operation of a transaction, if it may cause violation of the rules, until the possibility of violation disappears.Blocking operations is typically involved with performance reduction.Semi-optimistic - Block operations in some situations, if they may cause violation of some rules, and do not block in other situations while delaying rules checking (if needed) to transaction's end, as done with optimistic.Different categories provide different performance, i.e., different average transaction completion rates (throughput), depending on transaction types mix, computing level of parallelism, and other factors. If selection and knowledge about trade-offs are available, then category and method should be chosen to provide the highest performance.The mutual blocking between two transactions (where each one blocks the other) or more results in a , where the transactions involved are stalled and cannot reach completion. Most non-optimistic mechanisms (with blocking) are prone to deadlocks which are resolved by an intentional abort of a stalled transaction (which releases the other transactions in that deadlock), and its immediate restart and re-execution. The likelihood of a deadlock is typically low.Blocking, deadlocks, and aborts all result in performance reduction, and hence the trade-offs between the categories.Methods[]Many methods for concurrency control exist. Most of them can be implemented within either main category above. The major methods,[1] which have each many variants, and in some cases may overlap or be combined, are:1. Locking (e.g., - 2PL) - Controlling access to data by assigned to the data. Access of a transaction to a data item (database object)locked by another transaction may be blocked (depending on lock type and access operation type) until lock release.2. Serialization (also called Serializability, or Conflict, or Precedence graph checking) - Checking for in the schedule's and breakingthem by aborts.3. (TO) - Assigning timestamps to transactions, and controlling or checking access to data by timestamp order.4. (or Commit ordering; CO) - Controlling or checking transactions' chronological order of commit events to be compatible with theirrespective .Other major concurrency control types that are utilized in conjunction with the methods above include:(MVCC) - Increasing concurrency and performance by generating a new version of a database object each time the object is written, and allowing transactions' read operations of several last relevant versions (of each object) depending on scheduling method.- Synchronizing access operations to , rather than to user data. 
  Specialized methods provide substantial performance gains.
- Private workspace model (deferred update) - Each transaction maintains a private workspace for its accessed data, and its changed data become visible outside the transaction only upon its commit (e.g., Weikum and Vossen 2001). This model provides a different concurrency control behavior with benefits in many cases.

The most common mechanism type in database systems since their early days in the 1970s has been strong strict two-phase locking (SS2PL; also called rigorous scheduling or rigorous 2PL), which is a special case (variant) of both two-phase locking (2PL) and commitment ordering (CO). It is pessimistic. In spite of its long name (kept for historical reasons), the idea of the SS2PL mechanism is simple: "Release all locks applied by a transaction only after the transaction has ended." SS2PL (or rigorousness) is also the name of the set of all schedules that can be generated by this mechanism, i.e., these are SS2PL (or rigorous) schedules and have the SS2PL (or rigorousness) property.
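A minimal sketch of the SS2PL idea quoted above, assuming a single exclusive lock per item; shared/read locks, blocking queues, and deadlock handling are omitted, and the class name SS2PLManager is invented for the example.

# Minimal SS2PL-style lock manager: locks are held until the transaction ends.
class SS2PLManager:
    def __init__(self):
        self.lock_owner = {}      # item -> transaction id holding the exclusive lock

    def acquire(self, tid, item):
        owner = self.lock_owner.get(item)
        if owner is None:
            self.lock_owner[item] = tid
            return True
        return owner == tid       # already ours; otherwise the caller must wait or abort

    def end_transaction(self, tid):
        # Strictness: every lock is held until commit/abort, then released together.
        for item in [i for i, o in self.lock_owner.items() if o == tid]:
            del self.lock_owner[item]

mgr = SS2PLManager()
print(mgr.acquire("T1", "x"))   # True  - T1 locks x
print(mgr.acquire("T2", "x"))   # False - T2 must wait (or abort) until T1 ends
mgr.end_transaction("T1")       # all of T1's locks are released at its end
print(mgr.acquire("T2", "x"))   # True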
Institut für Mathematische Maschinen und Datenverarbeitung der Friedrich-Alexander-Universität Erlangen-Nürnberg, Lehrstuhl für Informatik IV (Betriebssysteme)

Cooperative Concurrency Control

Rainer Pruy
University Erlangen-Nürnberg
Institut für mathematische Maschinen und Datenverarbeitung IV
Lehrstuhl für Betriebssysteme
Martensstraße 1, D-W-8520 Erlangen, Germany
phone: +49 9131 85-7909, fax: +49 9131 39388
email: pruy@informatik.uni-erlangen.de

Abstract
Today research on concurrency in object systems concentrates on concurrent objects. If one has to address situations involving concurrency control conditions spanning several objects one is still on one's own. This paper illustrates the need to be able to describe concurrency control in distributed situations. It presents the concept of cooperative concurrency control as a first step in addressing this problem. Cooperative concurrency control separates functional behaviour and concurrency control into individual objects interacting to achieve the behaviour of a single concurrent object. As this approach is based on object interaction it naturally extends to distributed situations.

Keywords
concurrency, distribution, object interaction, cooperation, composition

1 Introduction
Experience has taught that integrating concurrency with the notion of objects is not an easy task. Research in this field focused on the problem of concurrent objects. The main reason behind this is the observation that the need for concurrency control is evident only at the level of individual objects.
An important concept of object systems is encapsulation. It prevents the implementation of an object from being visible and accessible from outside that object. Instead the behaviour of an object is specified in a more abstract description called interface. This provides for independence of the clients of an object from the actual implementation.
In a sequential context the interface needs only to describe the functional aspects of the behaviour of an object. In the presence of concurrency, the interface, in addition to this functional aspect, has to reflect dynamic behaviour. In recent ongoing approaches objects are assumed to be concurrent and dynamic behaviour is specified in terms of concurrency constraints applied to method invocations ([Caro90], [Geib92], [Frøl92]).
Inheritance, according to [Wegn87], is the defining property of object orientation. It formalizes incremental modifications as a key means to achieving reuse. Recent works (e.g. [BeAk92], [Mats90], [Shiba91]) concentrated on integrating concurrency control with inheritance. The results can be summarized by the following two requirements for object interface descriptions:
– separation of concerns
  The different aspects of the behaviour of an object should be specified using separate descriptions allowing for independent modifications of these descriptions. In the presence of concurrency there are at least two such aspects, namely functional behaviour and dynamic behaviour. If there are independent aspects of dynamic behaviour even these should be expressed using separate descriptions. The overall behaviour of an object results from a combination of all behavioural aspects. Incremental modification of individual aspects requires clean interfaces between the separate descriptions.
– flexible modelling of an object state needed for concurrency control
  Encapsulation prevents concurrency constraints from referring to state information internal to the object. Thus, needed state information has to be modelled at the interface level. The limitations of the underlying principles used in this modelling process restrict the set of concurrency constraints which can be expressed.
The aforementioned approaches concentrate on concurrency within a single object. Later in this paper it is stated that handling concurrency internal to objects is not sufficient. Currently there are no approaches addressing concurrency problems involving several objects. [Holl92] and [John91] among others propose concepts to specify multi-object configurations. However, they concentrate on functional aspects of object composition. They do not account for additional concurrency constraints arising from such composition. Section 2 will discuss distributed concurrency and the solutions available with current object systems. A model for concurrency control will be presented in section 3. Based on this model an implementation strategy is developed, which is outlined in section 4.

2 Concurrency in distributed environments
Objects provide a natural way for expressing distribution. They already describe a distributed system. In this system the processing of methods models local operations and object interaction models the communication among (distributed) components. By packaging objects during the process of configuring a system, coarser distributional situations can be reflected. Handling concurrency only in the (local) context of individual objects ignores the need for controlling concurrent execution under constraints spanning several objects.
A standard answer to this problem is delegating responsibility to alternate concepts. Usually transaction concepts are mentioned. However, transactions are not well suited for handling general concurrency control problems. They are intended for achieving data consistency and use the concept of mutual exclusion to achieve this goal. If concurrency constraints include flow control or operation sequence requirements, transactions do not aid in formulating a solution.
An alternate approach to handling concurrency control in distributed situations is to introduce a central object which manages concurrency constraints. Such an object requires only object-internal concurrency control mechanisms. Therefore this approach can be followed with available object systems. It voids, however, any possibility of exploiting distribution. For example, it is no longer possible to optimize placement of objects within a system for communication costs, as any access to the objects involved in a distributed concurrent constellation has to get through this central concurrency control object. A simple example will illustrate the problem.
In an object environment it is easy to construct a buffer with a FIFO behaviour based on object references. This allows objects which are currently elements of that buffer to remain at their location within the object system. Figure 2.1 pictures a linked list implementation of such a buffer. Usually the behaviour of such a buffer would be implemented using a single object. It would probably use two instance variables, in the example called fifo_head and fifo_tail, for managing the linked list of elements. Now, if the programmer knows that retrieve always gets called from a certain point within the system and store always gets called from a different point, he or she may try to optimize communication costs by splitting FIFO into a fifo_head and a fifo_tail object and collocating each of these objects with the object(s) using it.

Figure 2.1: Example: FIFO buffer

However, the two objects fifo_head and fifo_tail together exhibit the same behaviour as the FIFO object. The concurrency constraints are the very same. In both cases concurrency control has to serialize calls to retrieve and store as soon as the buffer contains, at most, one element. But this uncovers an inconsistency. The same functional behaviour with identical concurrency control requirements requires totally different descriptions according to whether distribution has to be considered or not. Currently there is no possibility for reuse among a distributed and a non-distributed solution to the FIFO buffer problem. While this may be unavoidable in many cases for the pure functional behaviour, as the notion of locality is different between distributed and non-distributed descriptions, concurrency control should not directly be affected by such a design decision. Decisions made by concurrency control are the same whether functional behaviour is implemented using a single object (FIFO) or several objects (fifo_head and fifo_tail).
The example above makes obvious the need for concurrency control which is able to handle constraints spanning several objects: constraints that arise when the combined behaviour of these objects constitutes a behaviour which would be specified using a single object as long as distribution is not to be considered. The example also illustrates the observation that concurrency constraints do not depend on the actual implementation of the corresponding functional behaviour. In the same way a functional description just relies on a specific behaviour of concurrency control without being interested in the details of how this behaviour is being achieved.
This observation can be extended to object structures whose behaviour cannot easily be captured by specifying a single object, as is the case when that behaviour is built on distribution. It can be viewed as a special case of the principle of separation of concerns. It also leads to the conclusion that this principle is important not only in the case of specifying the behaviour of a single object but also in the case of describing concurrent behaviour of object structures.

3 A model for concurrency control
Now that the problem has been outlined, it is necessary to look at how concurrency control operates. Figure 3.1 pictures the dependencies involved with concurrency control. This structure can be observed even in current declarative approaches to concurrency control ([Deco89], [Neus91], [McHa91]). As a first step it will be interpreted in terms of single object concurrency.
In this context concurrency is recognizable as concurrent requests pending for processing at the encapsulation border of an object. Execution of these requests is controlled by concurrency constraints as specified at the interface of that object.
Concurrency constraints are expressed in terms of an abstract state. However, this abstract state can in no way refer to information from the internal state, as this information is hidden behind the encapsulation border.
In order to solve this dilemma one could try to infer the current abstract state from the knowledge of the initial state and from monitoring requests of an object. Such monitoring, though, is inaccurate due to indeterminism resulting from concurrent execution of requests. As long as such indeterminism does not occur, state inference could be considered a possibility.
The figure above is intended to suggest a different solution. While the interface is not allowed to violate encapsulation nor to access the internal state defined by the actual implementation, this implementation is allowed to provide information about that internal state for its environment. This means that a programmer, during implementation of the behaviour of an object, also has to define how this implementation maps into the behaviour specified at the interface.
As, during implementation, a programmer already has to consider whether this implementation really conforms to the interface specification, it does not seem to be a problem to make him or her state this conformance explicitly. This can be accomplished by providing a mapping from internal state to abstract state. It is, however, obvious that this mapping, as it only maps state information, can only be part of the conformance considerations to be performed by programmers.
In the case of single concurrent objects, it is tempting to describe how concurrency control influences the processing of requests in terms beyond the scope of the object model. Being an intrinsic property of objects, concurrency control may be explained using any model suitable for describing the effects of concurrency control on the execution of requests. If concurrency constraints may be applied to a group of objects this freedom is lost. The semantics established by the underlying object system restricts the set of possible models. The key properties are encapsulation of object states and a communicational model of object interaction.
It is obvious that an approach to handling distributed concurrency which depends on functionality not available to ordinary object interaction is not acceptable. Any such functionality allowing concurrency control to violate encapsulation would have disastrous effects on the semantics of objects. Functionality allowing influencing of object communication would complicate the understanding of object interaction. Thus, influencing concurrent execution within a group of objects has to be based on available object interaction semantics. This leaves two possibilities: either interpose filter objects into communication paths or make objects explicitly interact to perform concurrency control.
Interposing communication paths easily fits into the composition-filters model as proposed in [AkBV92]. This approach uses filters specified as part of the interface of an object to interpose and control messages directed to that object. While these filters can easily be used to perform concurrency control in the context of a single object, they are yet too weak to perform concurrency control on object groups. Concurrency constraints over a group of objects would require interaction among the controlling instances. If these controlling instances are implemented using composition filters, these filters need to be able to cooperate. Consequently a filter would be required to process and consume some of the messages directed to its object and to send messages to other objects involved in a concurrency control relationship. Being part of the object interface, the concurrency control behaviour is tied to the object description. In order to allow reuse of concurrency control descriptions one needs a semantics and mechanisms for reuse of filters. Separating filters from objects will result in the definition of filter objects. However, using filter objects for interposing object interaction suffers from naming problems. A client of an object has to name and address the correct filter object to get to the object actually performing the request.

4 The cooperative approach
In this section an alternative to realizing concurrency control by interposing object interaction is proposed. It directly follows the concurrency control model presented above. It follows the principle of separation of concerns by encapsulating individual aspects of a behaviour into individual objects.
Encapsulation does not restrict an object to performing concurrency control within the bounds of that object. This allows one object to call upon another object to perform concurrency control. By doing this, concurrency control is no longer an intrinsic property of an object. Instead, a behaviour formerly exhibited by a single object is now represented by the combined behaviour of a group of interacting objects. It is obvious that such a model is applicable to configurations already consisting of a group of interacting objects.
Now the world of objects is divided up into two partitions: objects providing arbitrary functional behaviour and objects providing concurrency control needed by the former. But it has yet to be explained how these concurrency control objects can perform their operation. It already has been remarked that there is no legal possibility for controlling method execution from outside an object. This, too, has to be initiated from inside that object. However, now all elements of the cooperative approach are available and may be combined to form a homogeneous picture.
When an object needs a decision from a (concurrency) control object it calls a method of that control object. With this call the object may pass, as parameters, any information needed by the control object to perform the decision. The decisions performed by the control object are actually about whether an executing request within the calling object may proceed or not. Such a decision is performed by delaying the answer to a decision request until the calling executing request may proceed. The object awaits this decision either by waiting for the response implicitly, in the case of synchronous communication semantics, or by explicitly requesting an answer in the case of asynchronous communication or wait-by-necessity semantics. When the execution of requests within an object changes state in a way influencing concurrency control, that object may inform an appropriate control object about this state change, also using method calls. The combination of all information available to a control object determines the abstract state model available to this control object for performing concurrency control requests.
According to the interface control model of concurrency control, starting the execution of a request is the only point to which concurrency control decisions are applicable. However, the more complex a method is the more likely it is for this method to cover sections of different concurrency constraints [Mack84]. This suggests allowing method substructures to be made visible at the interface of an object. With this, transitions among these method components also constitute points where concurrency control decisions are required.
Introduction of method substructures also suggests a possibility for cleanly integrating the calls to a control object into the implementation and into the interface description of an object. The substructure of a method can be interpreted as a block structure as known from imperative programming languages. Upon entry of a block, a call to a control object is performed. Parameters of that call communicate the necessary information. The block is entered and execution continues as soon as the call returns. On exit of that block another call to the control object is performed. This is used to inform the control object that execution of this block by the current executing request is being terminated. This also communicates state changes caused by executing this block to the control object.
The control object, as it is itself an ordinary object, conforms to a type. This type is mediated by the signature of the methods of that object as observable from an object using this control object. A control object is not part of an ordinary object using it, but gets connected to this ordinary object at instantiation time of this object.
It is easy to employ several control objects from a single object, as there is no distinction between these control objects and any other object known as far as object interaction is concerned. However, this only works if these different concurrency control objects manage distinct independent concurrency constraints, not only independent in semantics but independent in the components operated upon. Concurrency constraints intended to be expressed independently require the introduction of an additional level of indirection, a kind of composing control object which implements the composition of the independent constraints.
For simple composition rules it would be possible to integrate composition with the control object interaction semantics. For example, an AND-operation on constraints would require the more restrictive constraint to be queried first. This occurs with the example of a buffer which is superimposed with a request-release constraint. Execution might only continue after all constraints affected have granted access. The example of a privileged reader in a multiple reader/single writer context, who will be granted access even in the presence of active writers, illustrates an OR conjunction, which requires granting continuation as soon as the privilege constraint grants continuation. These two examples make it clear that it is better to realize constraint composition using a special object than to try to force it into the calling semantics for control objects while losing homogeneity among object interaction semantics.

5 An example
Figure 2.1 introduced the FIFO buffer example. The behaviour of such a buffer can be formulated using a declarative description based on synchronisation counters as follows:

retrieve: N > 0      start(retrieve) ⇒ dec(N)
store:    N < MAX    end(store) ⇒ inc(N)

In this case N denotes a synchronisation counter, MAX denotes the upper bound on the buffer, start(a) and end(a) refer to the start and the termination of the given method respectively, and inc(N) and dec(N) denote increment and decrement of the counter denoted by N.
The processing of requests within a FIFO buffer object can be illustrated using the following description in a hypothetical language:

{
  CNTRL: param class buffer_cntrl type buffer_cntrl_type;
  // other declarations …

  retrieve: () -> (…)
  {
    CNTRL.enter_retrieve(…);
    // implementation of retrieve
    CNTRL.leave_retrieve(…);
  }

  store: (…) -> ()
  {
    CNTRL.enter_store(…);
    // implementation of store
    CNTRL.leave_store(…);
  }
}

The argument declarations of retrieve and store have been left out for brevity. The arguments passed at the calls to CNTRL depend on what information a (control) object of type buffer_cntrl_type is expecting. In the case of this simple example, no arguments need to be passed at all.
Figure 5.1 pictures possible configurations available with cooperative concurrency control to solve the FIFO buffer problem.
The "single object" case illustrates the structure that ordinary objects will exhibit under cooperative concurrency control. An object (FIFO) implementing functional behaviour (the actual buffering of items) cooperates with a control object (CNTRL) managing concurrent execution of requests.
If the functional behaviour is for some reason distributed among several objects (head and tail), the situation labeled distributed buffer - central control is encountered. The internal state of the participating functional objects is mapped onto a common abstract state managed by the central control object (CNTRL). Each participating functional object contributes to the composite behaviour. It is worth noting that, from the control object's point of view, it does not matter how state and functionality are actually distributed among the functional objects, as the central control object still operates on a central view of the composite configuration.
In order to efficiently exploit communication properties caused by distribution, control objects also have to be split into several parts. This configuration is pictured as distributed buffer - distributed control. It requires special control objects (CNTRL head and CNTRL tail) which use a suitable distributed concurrency control protocol to implement a distributed view of the abstract state of the (distributed) FIFO buffer managed. The functional part of this configuration is identical to the case using central control.
This illustrates how cooperative concurrency control allows for independence between the implementation of functional and concurrency control behaviour. Changes in the actual implementation of functional behaviour do not cause any changes in the implementation of concurrency control. Functional behaviour need not know anything about the actual realization of concurrency control.

6 Conclusions
Research in inheritance in the context of concurrent objects uncovered the principle of separation of concerns. This paper has presented the concept of cooperative concurrency control. Cooperative concurrency control separates functional aspects and concurrency control aspects of the behaviour of an object into independent objects. These objects use normal object interaction to achieve the behaviour of the original object via cooperation. Concurrency control objects, encapsulating concurrency control behaviour, are used by objects realizing functional behaviour. They perform concurrency control decisions. The connection between functional objects and concurrency control objects includes the type of the control object. It is established, for example, by parametrization at instantiation time of a functional object.
The cooperative approach is consistent with the notion that an object is responsible for the correct execution of its methods in the context of a concurrent environment. It strictly follows the principle of separation of concerns. As it is based on mechanisms already available with object systems (namely object interaction and parametrization), it is applicable to a variety of current object-based and object-oriented programming languages. Reuse of components (functional objects and concurrency control objects) is available as long as the underlying language supports it.
Besides the practical use of cooperative concurrency control, this approach constitutes a base for investigating concurrency control in distributed environments. Declarative approaches to specifying concurrency control can be implemented using cooperative concurrency control. This is due to the flexibility of abstract state specification allowed by modelling abstract state using state information provided with parameters from calls to control object methods. Declarative concurrency control specifications, together with a semantics of such specifications, also provide the possibility of deriving from them an actual implementation of a control object. Research in this domain could try to integrate declarative concurrency control with framework concepts currently researched and try to describe the semantics of concurrency control in terms of cooperative concurrency control.

7 References
AkBV92  M. Aksit, L. Bergmans, S. Vural: "An Object-Oriented Language-Database Integration Model: The Composition-Filters Approach", ECOOP '92 European Conference on Object-Oriented Programming, LNCS 615, 1992
BeAk92  L. Bergmans, M. Aksit: "Reusability Problems in Object-Oriented Concurrent Programs", ECOOP '92 Workshop on Concurrency, position paper, 1992
Caro90  D. Caromel: "Concurrency: An Object-Oriented Approach", TOOLS 2, pg. 183-197, 1990
Deco89  D. Decouchant, S. Krakowiak, M. Meysembourg, M. Riveill, Rousset de Pina: "A Synchronisation Mechanism for Typed Objects in a Distributed System", Proc. of the ACM SIGPLAN Workshop on Object-Based Concurrent Programming, SIGPLAN, 1989
Frøl92  S. Frølund: "Inheritance of Synchronisation Constraints in Concurrent Object-Oriented Programming Languages", ECOOP '92 European Conference on Object-Oriented Programming, LNCS 615, 1992
Geib92  J.-M. Geib, L. Courtrai: "Abstractions for Synchronization to Inherit Synchronisation Constraints", ECOOP '92 Workshop on Concurrency, position paper, 1992
Holl92  I. Holland: "Specifying Reusable Components Using Contracts", ECOOP '92 European Conference on Object-Oriented Programming, LNCS 615, 1992
John91  R. Johnson, V. Russo: "Reusing Object-Oriented Design", University of Illinois, Technical Report UIUCDCS 91-1696, May 1991
Neus91  C. Neusius: "Synchronizing Actions", ECOOP '91 European Conference on Object-Oriented Programming, Springer Verlag, 1991
Mack84  L. Mackert: "Modellierung, Spezifikation und korrekte Realisierung von asynchronen Systemen", Dissertation, Arbeitsberichte des IMMD, Bd. 16, Nr. 7, 1984
Mats90  S. Matsuoka, K. Wakita, A. Yonezawa: "Analysis of Inheritance Anomaly in Concurrent Object-Oriented Languages", ECOOP/OOPSLA '90 Workshop on Object-Based Concurrent Systems, Aug 1990
McHa91  C. McHale, B. Walsh, S. Baker, A. Donnelly: "Scheduling Predicates", Trinity College, Dublin, Technical Report TCD-CS-91-24, 1991
Shiba91  E. Shibayama: "Reuse of Concurrent Object Descriptions", Concurrency: Theory, Language and Architecture, LNCS 491, 1991
Wegn87  P. Wegner: "Dimensions of Object-Based Languages", Proc. of OOPSLA 1987, SIGPLAN, vol. 22, no. 12, pg. 168-182, 1987