Trusted computing, trusted third parties, and verified communications


Research and Development of Trusted Computing

I. Overview

With the rapid development of information technology, computers and network systems have become indispensable infrastructure of modern society.

The wide deployment of these technologies, however, has brought serious information security problems such as data leakage, malware attacks, and phishing. Trusted Computing emerged to meet these challenges: a technology that combines hardware and software to make the computer system itself secure and trustworthy, so that the information it stores cannot be illegally accessed or tampered with.

Trusted Computing originated at the end of the last century; as computer architecture evolved and security requirements grew, its research and development attracted attention worldwide. As a comprehensive protection mechanism, it aims to build a secure and trustworthy computing environment in which a computer system can withstand a variety of threats while executing critical tasks.

In recent years the technology has made significant progress. On the one hand, the wide adoption of the Trusted Platform Module (TPM) provides hardware-level security support for computer systems; on the other hand, trusted software technology (trusted operating systems, trusted databases, and so on) keeps improving, offering upper-layer applications a more secure and reliable runtime environment. Trusted Computing also draws on cryptography, access control, identity authentication, and other fields, forming a complete protection system.

Despite these research results, practical deployment still faces many challenges: how to ensure the security and reliability of the TPM itself, how to balance system performance against security, and how to adapt to constantly changing threats. Future research must keep exploring and innovating to satisfy ever-growing information security needs.

This article surveys the research and development of Trusted Computing, analyzes current hot topics and open problems, and looks ahead to future trends, in the hope of offering new ideas and directions for the information security field.

1. Concept and Definition

Trusted Computing is a computing paradigm intended to strengthen the security, reliability, and integrity of computer systems. Its core idea is to establish a trusted foundation across hardware, software, and systems, ensuring the confidentiality, integrity, and availability of data and code during execution.

1.2 Providing TPM capability at different chip levels

Having covered the TPM's background, purpose, approach, and status, we now turn to the various ways of implementing it.

The first approach is to choose a CPU with built-in TPM functionality, for example Transmeta's Crusoe TM5800 and its successors (such as the Efficeon). VIA's C3 (C5P, Nehemiah core), C3-M, Eden, Eden-N, and the newer C7 and C7-M CPUs also integrate a hardware RNG and an AES encryption engine, although whether they conform to the TPM standard remains to be confirmed.

The second approach is to choose a PC chipset with TPM support. TPM-compliant chipsets already exist (the ATI RS465 with its SB460 south bridge), and ULi Electronics expects to add TPM v1.2 capability to its new AMD64 chipset, the M1697. In general, TPM functionality is built into the chipset's south bridge, the chip that integrates diverse I/O functions.

If the TPM is not implemented at the CPU or chipset level, it can be added one level down, in the Super I/O chip. Examples are NS (Winbond) SafeKeeper chips, including the PC8392T for notebooks and the PC8374T for desktops; Super I/O chips with TPM functions are also called Trusted I/O chips.

Major vendors have already shipped TPM PCs

Although the TPM is not limited to PCs, the PC is undeniably the largest and most urgent class of devices needing security identification (a mobile phone at least has a factory serial number, a PIN, and similar protections), and notebooks need it more urgently than desktops because of the risks of travel; desktops in commercial or other sensitive settings need it as well.

As evidence, Dell, Fujitsu, HP, IBM (Lenovo), NEC, Samsung, Toshiba, and others have shipped TPM-equipped notebooks, while Dell, HP, IBM (Lenovo), NEC, Quay, MPC, Optima, and others have shipped TPM desktops. In addition, SentiVision offers an IP set-top box with a TPM, and IBM's experimental Linux PDA, the e-LAP, also has one.
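To illustrate what all of these TPM chips actually do at boot, here is a minimal Python sketch of the TPM 1.2-style PCR extend operation, in which each boot stage is hashed into a Platform Configuration Register before it runs. The stage names are hypothetical; this only models the hash chain, not a real TPM.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style extend: new PCR = SHA-1(old PCR || SHA-1(measurement))."""
    digest = hashlib.sha1(measurement).digest()
    return hashlib.sha1(pcr + digest).digest()

# Boot-time measurement chain: each stage is measured before it executes,
# so the final PCR value summarizes the entire boot sequence in order.
pcr0 = b"\x00" * 20
for stage in [b"firmware", b"bootloader", b"kernel"]:
    pcr0 = pcr_extend(pcr0, stage)
```

Because each extend hashes the previous register value, the final PCR depends on both the set of measurements and their order, which is what lets a verifier distinguish a good boot sequence from a tampered one.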

Computer Network Security, Chapter 4

4.1 The formation of network security evaluation standards

In 1988, Canada began drafting the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC), which divides security requirements into four categories: confidentiality, integrity, reliability, and accountability.

In 1991, the United Kingdom, France, Germany, and the Netherlands, responding to the limitations of the TCSEC, proposed the European Information Technology Security Evaluation Criteria (ITSEC), which incorporates the concepts of confidentiality, integrity, and availability.

In 1990, the International Organization for Standardization (ISO) began developing a common international evaluation criteria. In 1993, the United States supplemented and revised the TCSEC to produce the combined Federal Criteria (FC).

Also in 1993, six countries and seven parties (Canada, France, Germany, the Netherlands, the United Kingdom, and the United States' NIST and NSA) jointly began developing the Common Criteria (CC, Information Technology Security Common Criteria). CC 1.0 was released in January 1996 and adopted by ISO in April 1996; a test version of CC 2.0 was completed in October 1997, and CC 2.0 was released in May 1998.

4.2.3 The Canadian criteria

The Canadian standard emphasizes two aspects of a developed product or its evaluation: functionality and assurance. The CTCPEC is quite detailed: each service in its functionality criteria has its own levels, and Table 4-2 lists the service levels of its different criteria.

4.2.4 The U.S. Federal Criteria

A protection profile typically consists of five parts: (1) descriptive elements, (2) rationale, (3) functional requirements, (4) development assurance requirements, and (5) evaluation assurance.

Information Security, Chapter 8: Trusted Computing Platforms

8.1.4 Trusted Computing applications

Practical applications of Trusted Computing mainly target settings with high security requirements, where a trusted computing platform can give users more effective protection. The following applications, taking the PC platform as the main thread, survey today's principal uses.

1. Operating system security. Example: Microsoft Windows. One widely used technology in Windows is the Encrypting File System (EFS), one of Microsoft's earliest attempts to integrate trusted computing technology into the operating system; Windows 2000 and later systems such as Windows XP support it.

2. Network protection. Example: the 3Com embedded firewall. 3Com offers network cards with an integrated embedded firewall (EFW), giving the host customizable firewall protection as well as hardware VPN functions. Because the cards support TPM-based authentication, only legitimate cards can be used to access the enterprise network, which prevents an attacker from remotely making a legitimate user's machine run malicious programs.

3. Security management. Example: Intel Active Management Technology (AMT). AMT was designed for remote computer management; it has unique significance for security management, and its mode of operation fits well with the TPM specification. On an AMT-capable system, an administrator can still perform many operations remotely even when the software stack has crashed, the BIOS is corrupted, or the machine is powered off.

Failure classification (by scope of failure): a content failure occurs when the content delivered by a system service differs from what the system specification requires; a timing failure occurs when the time of delivery deviates from the specification. A fault at the lowest layer can cause erroneous data output and ultimately system failure.

The trustworthiness of a subject can be defined as the predictability of its behavior. The behavioral trustworthiness of software can be divided into levels; it can be transferred, and some of it is lost in each transfer.
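One simple way to model the claim that trust degrades as it is transferred is a multiplicative chain, where each link's trust level is a number in [0, 1]. This model is an illustrative assumption, not part of the original text.

```python
def propagate(*trust_levels: float) -> float:
    """Multiplicative trust propagation: trust through a chain never
    exceeds any single link and shrinks across imperfect links."""
    t = 1.0
    for level in trust_levels:
        t *= level
    return t
```

For example, if A trusts B at 0.9 and B trusts C at 0.8, A's derived trust in C is 0.72, strictly less than either direct link, capturing the "loss in transfer" mentioned above.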

Trusted Computing (Qing Sihan, slide deck, 62 pages)

Trusted Computing Evaluation Criteria

The Trusted Computing Evaluation Criteria (TCE) was released by the National Information Assurance Partnership (NIAP) in 2004. It is an evaluation criteria for commercial off-the-shelf (COTS) information technology (IT) products that are designed to provide trusted services in support of security-sensitive applications.

The TCE is based on the Common Criteria for Information Technology Security Evaluation (CC), an international standard for evaluating the security of IT products. The TCE extends the CC by adding requirements for trusted computing, such as: the ability to measure the integrity of the system; the ability to control access to the system; and the ability to audit the actions of the system.

The TCE is a voluntary standard, but it is often used by governments and organizations to evaluate the security of IT products.
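The third requirement above, auditing the actions of the system, is commonly met with tamper-evident logs. As an illustrative sketch (not from any standard), here is a hash-chained audit log in which every entry's digest covers the previous head, so retroactive edits are detectable:

```python
import hashlib

def append_entry(chain: list, entry: str) -> None:
    """Append an entry whose digest covers the previous chain head."""
    prev = chain[-1][1] if chain else b"\x00" * 32
    head = hashlib.sha256(prev + entry.encode()).digest()
    chain.append((entry, head))

def verify_chain(chain: list) -> bool:
    """Recompute every digest; any edited entry breaks all later links."""
    prev = b"\x00" * 32
    for entry, head in chain:
        if hashlib.sha256(prev + entry.encode()).digest() != head:
            return False
        prev = head
    return True
```

Altering any past entry invalidates every subsequent digest, so an auditor who keeps only the latest head can detect rewrites of the whole history.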

Trusted Computer System Evaluation Criteria (TESec): An Overview

1. Introduction

1.1 Overview. In today's information age, computer systems are woven into people's lives and work. As technology develops and its applications broaden, however, the security problems these systems face grow more complex and severe. Evaluating the trustworthiness of computer systems is therefore essential for ensuring that they run reliably and securely and for protecting users' privacy and sensitive data. The Trusted Computer System Evaluation Criteria (TESec) is a widely adopted and recognized evaluation framework for assessing and certifying the security of computer systems. TESec aims to provide a standardized, comprehensive method for determining whether a computer system meets its security requirements, and to help users select a trusted system that matches their needs.

1.2 Structure. This article first introduces the background and origin of TESec, including the importance of trusted computer systems in today's society and the criteria's publication and evolution. It then examines TESec's main content: security requirements and objectives, trustworthiness evaluation methods and metrics, and technical requirements and specifications. Next, case studies illustrate TESec's effect in practice, covering security evaluation and certification of embedded systems, trustworthiness analysis and verification of cloud computing platforms, and trust-aware techniques in virtualized environments. Finally, the article summarizes TESec, analyzes its shortcomings, and looks ahead to its future directions and applications.

1.3 Purpose. This article aims to introduce and summarize TESec, clarify its background and history, and describe its main content and application domains in detail. A thorough understanding of TESec can improve the reader's ability to evaluate and select computer systems, and the concluding summary and outlook can inform future research.

2. Background

2.1 The importance of trusted computer systems. A trusted computer system is one whose security and reliability have been rigorously evaluated and verified. In today's information society, computer systems carry critical tasks such as financial transactions, electronic health records, and national security; evaluating them is therefore essential to ensure they operate correctly and withstand a variety of threats.

Trusted Computing

NGSCB: strong process isolation, attestation, secure I/O, sealed storage; applications still have access to Windows services.

Attestation: the ability to verify the operating environment, including remote verification.

Xbox security (case study). What the Xbox needs for security: the console checks a hash of a ROM section before decrypting the Flash Boot Loader (FBL), using a TEA-based hash algorithm. Potentially stronger security: it is not required to keep the data in the secret ROM confidential; only its integrity needs to be assured.

How it was broken: a weak hash algorithm was used, so the FBL could be modified to jump to a new address without changing the FBL's hash. Encryption keys, algorithms, and protocols have been extracted, and new applications can be constructed that do not honor the DRM restrictions in content.
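The remote-attestation idea above can be sketched as a challenge-response exchange. Real TPMs sign quotes with a private key; in this simplified model (all names hypothetical) an HMAC over a shared key stands in for the signature, and the verifier's fresh nonce prevents replay of old quotes:

```python
import hashlib
import hmac
import os

def quote(device_key: bytes, pcr: bytes, nonce: bytes) -> bytes:
    """Device side: authenticate the current PCR value plus the
    verifier's fresh nonce (replay protection)."""
    return hmac.new(device_key, pcr + nonce, hashlib.sha256).digest()

def appraise(device_key: bytes, expected_pcr: bytes,
             nonce: bytes, q: bytes) -> bool:
    """Verifier side: accept only a quote over the known-good PCR
    value and the nonce we issued for this round."""
    good = hmac.new(device_key, expected_pcr + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(q, good)

nonce = os.urandom(16)  # the verifier picks a fresh nonce per round
```

A quote computed over a different PCR value, or replayed under a stale nonce, fails appraisal, which is the property remote verification relies on.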

Baseline Requirements for Classified Protection of Information Systems

9. Trusted path: when a user connects (e.g., at login, or when changing a subject's security level), the trusted computing base (TCB) of the computer information system provides a trusted communication path between itself and the user. Communication on this path can be activated only by that user or by the TCB, is logically isolated from communication on other paths, and can be correctly distinguished from it.

Part 2: Overview

Information system security protection levels: according to an information system's importance to national security, economic construction, and social life, and the degree of harm to national security, social order, the public interest, and the lawful rights of citizens, legal persons, and other organizations should it be compromised, systems are divided into five levels from low to high. The five levels are defined in GB/T 22240-2008.

A system at a given level receives baseline protection according to the baseline requirements and is tested and evaluated against them. The baseline requirements draw on GB 17859-1999, general technical requirements, security management requirements, higher-level baseline requirements, other classified-protection standards, and related security standards; special needs are met with supplementary measures for precise protection.

Part 4: Main content

Basic technical requirements: technical security requirements relate to the technical security mechanisms an information system provides; they are realized mainly by deploying hardware and software in the system and correctly configuring their security functions.
Part 1: Standard terms and definitions

1. Discretionary access control (DAC): the TCB defines and controls the access of named users to named objects in the system. The enforcement mechanism (e.g., access control lists) allows named users, or users acting as group members, to specify and control the sharing of objects; it prevents unauthorized users from reading sensitive information and controls the propagation of access rights.

The DAC mechanism, according to user-specified or default settings, prevents unauthorized users from accessing objects. The granularity of access control is the individual user. Access control can specify named users and user groups for each named object, along with their modes of access to the object. Users without access rights may be granted access to an object only by authorized users.
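The DAC rules above (per-object ACLs naming individual users and groups, with per-mode rights) can be sketched in a few lines. The users, groups, and object names below are hypothetical examples:

```python
# Minimal DAC model: each object's ACL names users and groups
# together with the access modes they are granted.
groups = {"staff": {"alice", "bob"}}
acl = {"report.txt": {("user", "alice"): {"read", "write"},
                      ("group", "staff"): {"read"}}}

def allowed(user: str, obj: str, mode: str) -> bool:
    """Grant access if a user entry or any group membership permits it."""
    entries = acl.get(obj, {})
    if mode in entries.get(("user", user), set()):
        return True
    return any(mode in modes and user in groups.get(name, set())
               for (kind, name), modes in entries.items() if kind == "group")
```

Note that the granularity is the individual user, as the text requires: "alice" can hold rights that the rest of "staff" does not.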
4. Identification and authentication: when the TCB begins execution, it first requires users to identify themselves; it maintains user identification data and determines users' access rights and authorization data. The TCB uses these data and protected mechanisms (e.g., passwords) to authenticate users' identities, and it prevents unauthorized access to the authentication data. By giving each user a unique identifier, the TCB can hold users accountable for their actions; it can also associate this identifier with all of the user's auditable actions.
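The requirement that authentication data itself be protected is commonly met by storing only a salted, slow hash of each password rather than the password. A minimal sketch using PBKDF2 (standard practice, though not mandated by this text):

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store a random salt and a slow salted hash, never the password."""
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def authenticate(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

Even if the stored data leaks, the salt and iteration count make offline guessing far more expensive than attacking plaintext or fast unsalted hashes.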


Overview of Trusted Computing

Contents: 1. Why do we need trusted computing? 2. What is trusted computing? 3. The development of trusted computing. 4. Trusted computing technology. 5. Controversies around trusted computing. References.

1. Why do we need trusted computing?

Information technology has become an inseparable part of daily life: people obtain information and carry out all kinds of activities through computers and the Internet every day. But computers and cyberspace are not always safe. On one hand, hackers spread malicious viruses across the network to attack ordinary users, as in the ransomware outbreak of May 2017; on the other hand, many unscrupulous vendors build back doors into their software, harvesting users' private data or popping up advertisements when users are not paying attention. All of this poses enormous challenges to information security in cyberspace.

To let people carry out their activities on the Internet normally, we must build a secure, reliable defense system that ensures our computers provide services stably and as expected. Most current network security systems consist mainly of firewalls, intrusion detection, and antivirus software. These conventional measures can only set up defenses at the network and boundary layers, blocking illegal users and unauthorized access from the outside. Because they lack control over the source of access, the client machine, and because insecure operating systems lead to an endless stream of application vulnerabilities, their protective effect is increasingly unsatisfactory.

Moreover, blocking works by capturing the signatures of hacker attacks and virus intrusions, and these signatures are lagging information about events that have already happened: it is after-the-fact defense. As attackers' methods grow ever more varied, defenders can only build the firewall higher, make intrusion detection more complex, and grow the malware signature database larger; false positives multiply, security spending keeps rising, maintenance and management become harder, system efficiency drops sharply, and there is still no defense against novel attacks.

In recent years, major security incidents such as Stuxnet, Flame, Mirai, BlackEnergy, and the WannaCry ransomware have occurred frequently. Clearly, the passive block-and-scan defense of the "old three" (firewalls, intrusion detection, antivirus) is outdated, and cyberspace security faces severe challenges. The lack of control in the terminal architecture is a very serious security problem, making it hard to counter attacks that exploit logic flaws. Such vulnerabilities surface frequently: Spectre and Meltdown, for example, stem from design flaws in CPU performance-optimization mechanisms that considered only speed, not security.

An Introduction to Foreign Information Security Evaluation and Certification Systems

1. Information security measurement baselines

1.1 The development of evaluation and certification standards. The United States has long led in information technology, and it is also the birthplace of security evaluation and certification. As early as the 1970s the U.S. was researching evaluation standards, and in 1985 the Department of Defense formally published the Trusted Computer System Evaluation Criteria (TCSEC), the "Orange Book," generally recognized as the first evaluation standard for computer information systems. Over the following decade, other countries began developing evaluation criteria built on the TCSEC that were more flexible and better adapted to the evolution of IT.

The TCSEC began mainly as a military standard and was later extended to civilian use. Its security levels run from high to low through four divisions, A, B, C, and D, subdivided into seven classes: A1, B3, B2, B1, C2, C1, and D.

Version 1.2 of the European Information Technology Security Evaluation Criteria (ITSEC) was published in 1991 by the European Commission, combining work from France, Germany, the Netherlands, and the United Kingdom. As a synthesis of multiple national standards, ITSEC applies to the military, government, and commercial sectors. Aiming to go beyond the TCSEC, it separates the notion of security into functionality and assurance evaluation.

Version 1.0 of the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC), designed for government needs, was published in 1989, and version 3.0 in 1993. Combining ITSEC and TCSEC, it divides security into functional requirements and assurance requirements.

Draft version 1.0 of the U.S. Federal Criteria (FC) was also published in 1993, another standard combining North American and European evaluation concepts. It introduced the important concept of the protection profile (PP); each profile includes a functional part, a development assurance part, and an evaluation part. Its classification scheme differs from the TCSEC's, drawing on the strengths of ITSEC and CTCPEC, and it was intended mainly for U.S. government use as well as civilian and commercial use.

With the growth of the global IT market, mutual recognition of standardized evaluation results became necessary to reduce duplicated national effort and promote global informatization. The International Organization for Standardization (ISO) began drafting a common international evaluation criteria in 1990, first assigning the task to Working Group 3 (WG3) of Subcommittee 27 (SC27) of Joint Technical Committee 1 (JTC1).

An Introduction to Network and Information System Security Evaluation Standards

Information system security evaluation, or simply system evaluation, assesses a system's capacity for security protection under a specific operating environment and mission.

The development of IT security evaluation criteria:

In 1967 the U.S. Department of Defense (DoD) set up a study group on security policy for the computing environments of the day; its result was the "Defense Science Board report." In the late 1970s the DoD studied the security of the then-current operating systems KSOS, PSOS, and KVM. In the 1980s the DoD published the Trusted Computer System Evaluation Criteria (TCSEC), the "Orange Book," and later a series of related interpretations and guides, including the Trusted Database Interpretation (TDI) and the Trusted Network Interpretation (TNI).

Six countries and seven parties (Canada, France, Germany, the Netherlands, the United Kingdom, and the United States' NIST and NSA) established the CC Editorial Board (CCEB) to develop the Common Criteria. CC 1.0 was completed in January 1996 and adopted by ISO in April 1996; the CC 2.0 test version was completed in October 1997; CC 2.0 was released in May 1998; and in December 1999 ISO adopted the CC as an international standard.

TCSEC defines four security divisions: minimal protection, discretionary protection, mandatory protection, and verified protection.

Division D is the lowest, minimal protection. It has a single class and is reserved for systems that have been evaluated but fail to meet the requirements of a higher division; such systems cannot be used to process sensitive information in multi-user environments.

Division C provides discretionary protection: a certain protective capability whose principal measure is discretionary access control.

Trusted Network Interpretation (TNI): functionality refers to the objectives and implementation approach of a security service, including its features, mechanisms, and implementation.

The Trusted Computer System Evaluation Criteria, published by the U.S. Department of Defense



Grasping security principles

Relativity: security is relative, not absolute; no technology can provide 100% security.

Minimization: omit every unnecessary function, and strictly restrict the necessary ones.

Concealment: do not tempt the attacker; the best protection is to hide the information value of the endpoint.

Applying security technology: firewalls; patch management; one antivirus product; cautious browsing and downloading; good backups. Together these cover system security protection, network security protection, and information security protection.

Detection: protection can block intrusions to a degree, but not all of them. Detection finds the intrusions that protection fails to stop, and records them.

Response: the handling performed after an intrusion occurs, to stop the damage from spreading and to keep the system providing normal service as far as possible.

Recovery: after an incident, restoring the system to its original state, or to a state more secure than before.
Asymmetric cryptography solves problems such as key exchange, signatures, and authentication. Cryptographic theory and technology are widely applied in network security and are an important foundation for the other security technologies.

System security technology: password authentication, challenge/response, Kerberos authentication; access control and authorization; audit trails.
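The challenge/response mechanism listed above can be sketched as follows: the server issues a fresh random challenge, and the client answers with a keyed MAC over it, proving possession of the shared key without ever sending the key across the wire. The function names are illustrative:

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    """Server side: a fresh random challenge defeats replay of old answers."""
    return os.urandom(16)

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    """Client side: prove possession of the key without revealing it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def check(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the expected answer and compare."""
    return hmac.compare_digest(response, respond(shared_key, challenge))
```

Unlike plain password authentication, an eavesdropper who records one exchange learns nothing reusable, since the next round uses a different challenge.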

Network security technology: firewall technology; intrusion detection technology.

Security protocols and secure application protocols: HTTPS, SSH, S/MIME, PGP.

Availability: ensuring that information can actually be used by authorized users, i.e., that legitimate users can access the information they need when they need it, preventing denial of service due to subjective or objective factors, and preventing loss of information or obstruction of its use due to system failure or operator error.

Controllability: information and information systems are at all times under the effective control of their lawful owners or users.

Multilevel Secure Database Management Systems

Level description: B2 (structured protection). A formal security policy model is established, and DAC and MAC are enforced on all subjects and objects in the system, extended to all resources including I/O devices. Developers are required to conduct a thorough search for covert channels. The TCB is divided into protected and unprotected parts, stored in a fixed region. Certified systems at B2 and above are very rare. Typical examples: among operating systems, only Trusted XENIX from Trusted Information Systems; among databases, the KingbaseES V7 product of Beijing Kingbase Information Technology Co.

Multilevel referential integrity: suppose referencing relation R1 has apparent primary key AK1 and foreign key FK1, and referenced relation R2 has apparent primary key AK2. An instance r1 of multilevel relation R1 and an instance r2 of multilevel relation R2 satisfy referential integrity if and only if, for every t1 ∈ r1 with t1[FK1] ≠ null, there exists t2 ∈ r2 such that t1[FK1] = t2[AK2], t1[TC] = t2[TC], and t1[C_FK1] ≥ t2[C_AK2]. That is, the access class of the foreign key must dominate the access class of the primary key in the referenced tuple.
Table 1: the original multilevel relation Weapon. Table 2: the U-level instance of Weapon. Table 3: the S-level instance of Weapon. Table 4: the TS-level instance of Weapon. (Tables not reproduced here.)

6.4.3 Multilevel relational integrity

Multilevel relational integrity comprises entity integrity, null-value integrity, multilevel foreign-key and referential integrity, polyinstantiation integrity, and inter-instance integrity.

1. Entity integrity. Let AK be the apparent primary key defined on relation schema R. A multilevel relation satisfies entity integrity if and only if, for every instance Rc of R and every t ∈ Rc:

- Ai ∈ AK ⟹ t[Ai] ≠ null (no primary-key attribute may be null);
- Ai, Aj ∈ AK ⟹ t[Ci] = t[Cj] (the key attributes share one security class, so the key is complete at every level);
- Ai ∉ AK ⟹ t[Ci] ≥ t[C_AK] (the class of each non-key attribute dominates the key class, so at any level where the relation is visible the key is never null).
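The three entity-integrity conditions above translate directly into a checker. This sketch assumes a simple linear lattice U < C < S < TS and represents each tuple as a mapping from attribute name to a (value, classification) pair; both representations are illustrative choices, not from the text:

```python
LEVELS = {"U": 0, "C": 1, "S": 2, "TS": 3}

def entity_integrity(tuples, key_attrs, data_attrs) -> bool:
    """Check multilevel entity integrity: key attributes are non-null and
    uniformly classified, and every data element's class dominates the
    key class. Each tuple maps attr -> (value, classification)."""
    for t in tuples:
        key_classes = {t[a][1] for a in key_attrs}
        if any(t[a][0] is None for a in key_attrs) or len(key_classes) != 1:
            return False
        key_level = LEVELS[key_classes.pop()]
        if any(LEVELS[t[a][1]] < key_level for a in data_attrs):
            return False
    return True
```

For instance, a tuple whose key is classified S but whose data element is classified U violates the third condition: an S-level reader would see a key with no data, and a U-level reader data with no key.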

Security Requirements of the TCSEC

The Trusted Computer System Evaluation Criteria (TCSEC), also known as the "Orange Book," is a computer security document released by the U.S. National Security Agency in 1983 for evaluating the security of computer systems. The TCSEC divides security into several levels; each level corresponds to a set of security requirements used to assess a system's security and assign it a class.

Requirements by class:

Class D. Class D is the lowest security level in the TCSEC: minimal protection, reserved for systems that have been evaluated but do not meet the requirements of any higher class.

Class C. Class C provides discretionary protection, a step above Class D. It requires control over users and their access, enforcing operating-system-level rules such as identification, authentication, and access control, and managing trust levels and privileges over the processes in the system. Expected users: systems handling sensitive but not restricted data or operations.

Class B. Class B requires a higher level of protection, verified at the hardware and software level for integrity and reliability, defending not only against deliberate attacks on security but also protecting confidentiality. Expected users: military, government, and financial operations that must protect secrets.

Class A. Class A requires the highest level of security assurance, defending against attacks of every form on the system or its functions and guaranteeing continuity and data confidentiality in normal use. These systems are intended for bodies holding the highest security designations.

In summary, the TCSEC provides security protection at graduated levels and helps organizations protect their data, property, and customers from a variety of threats. Although the TCSEC itself is now obsolete, modern computer security standards rest on similar principles to ensure that system security is optimized and maintained.


Trusted Computing, Trusted Third Parties, and Verified Communications

Martín Abadi
University of California at Santa Cruz

Abstract

Trusted Computing gives rise to a new supply of trusted third parties on which distributed systems can potentially rely. They are the secure system components (hardware and software) built into nodes with Trusted Computing capabilities. These trusted third parties may be used for supporting communications in distributed systems. In particular, a trusted third party can check and certify the data sent from a node A to a node B, so that B can have some confidence in the properties of the data despite A's possible incompetence or malice. We present and explore this application of Trusted Computing, both in general and in specific instantiations.

1 Introduction

Trusted third parties can be useful in a variety of tasks in distributed systems. For instance, certification authorities are helpful in associating public keys with the names of users and other principals; in multi-player games, servers can contribute to preventing some forms of cheating; and smart-cards with limited resources may rely on trusted, off-card servers for verifying downloaded bytecode class files. Unfortunately, resorting to trusted third parties is not always practical, as it typically results in deployment difficulties, communication overhead, and other costs. Moreover, well-founded trust is scarce in large-scale distributed systems, and so are reliable trusted third parties.

This paper considers new trusted third parties that may appear within general-purpose computing platforms as a result of several current efforts. Those efforts include substantial projects in industry, such as the work of the former Trusted Computing Platform Alliance (TCPA) and its successor the Trusted Computing Group (TCG), and Microsoft's Next Generation Secure Computing Base (NGSCB, formerly known as Palladium) [11]. They also include research projects such as XOM [19] and Terra [13]. The trusted third parties are the secure system components (hardware and software) built into nodes with Trusted Computing capabilities.

These trusted third parties can contribute to both secrecy and integrity properties in distributed systems. In particular, when two nodes A and B communicate, the trusted third party embedded in A can check and certify the messages that A sends to B. This verification may have a variety of meanings: it can for example ensure the well-formedness of data fields, the absence of known viruses, the safety of mobile code, or the validity of certificate chains. The verification can offer security guarantees to B, often more efficiently than if B performed the check itself. Although the verification clearly depends on A's secure system components, it is protected against malfunctions in the rest of A, and can prevent their spread to B. The description and study of this scenario are the main contents of this paper.

The next section discusses efforts such as TCPA, the appearance of new trusted third parties, and (briefly) the applications that they may enable. Section 3 sets out our assumptions. Section 4 explains the use of a trusted third party for verified communications, on a case-by-case basis. Section 5 considers some examples, and section 6 summarizes benefits and drawbacks. Section 7 then outlines more general mechanisms for verified communications; it relies on machinery for remote invocation and on extensible runtimes. Section 8 develops an example. Section 9 discusses extensions in which data is partly secret or generated by the trusted third party. Section 10 concludes.
2 New trusted third parties?

Next we identify more precisely the new third parties described in the introduction, and consider whether they should be trusted. We also discuss the applications (some old, some new) that may rely on this trust.

2.1 The new third party

With systems such as NGSCB, a computing platform includes a protected execution environment, with protected memory, storage, and I/O. The platform is open in that it can run arbitrary programs like today's ordinary PCs, but those arbitrary programs should not compromise the security kernel or any subsystem under its protection. Moreover, the security kernel can authenticate the programs, and it in turn can be remotely authenticated. Therefore, the security kernel may serve as a trusted third party for an interaction in a distributed system.

Conveniently, this trusted third party is local to a node. In particular, the security kernel may assist a remote principal in interactions with the rest of the node, which may be arbitrarily corrupted. Moreover, the security kernel may communicate directly with a local human user, through secure I/O; it may therefore assist the user in its interactions with the rest of the node. A subsystem protected by the security kernel may also play the role of trusted third party.
Through standard delegation techniques (e.g., [17]), the protected subsystem can act on behalf of the security kernel and its clients. The main advantage of relying on a protected subsystem is to retain, to the extent possible, the simplicity, manageability, and security of the kernel proper.

Figure 1 is a typical picture of a system with NGSCB. It shows a system with two sides: a left-hand side with arbitrary software (not necessarily trusted) and a right-hand side with secure system components, including an operating system and user-mode code.

Figure 1: A typical picture of a system with NGSCB

2.2 Applications

This trusted third party can contribute to security in distributed systems, in several ways:

- The trusted third party can contribute to secrecy properties, for example holding secrets for a user, and presenting those secrets only to appropriate remote servers. The secrets would be kept from viruses that may come with arbitrary programs.

- The trusted third party can also contribute to integrity properties, for example checking incoming and outgoing data. In particular, as suggested in the introduction and explained in section 4, the trusted third party embedded in a node A can check and certify the messages that A sends to another node B. The trusted third party can protect B against A's incompetence or malice, for example against A's viruses.

While the secrecy properties have received a fair amount of attention, we believe that the opportunities and problems related to integrity are also important. They are the focus of this paper. One may wonder also about availability properties, for example asking whether the trusted third party can help protect against denial-of-service attacks. We address availability only indirectly (see section 6).

Trusted Computing is often narrowly associated with protecting movies and other proprietary content on commodity platforms, but it enables other significant applications. Several of those applications remain in the broad realm of Digital Rights Management (DRM). For instance, users may want to attach rights restrictions to their e-mail messages and documents; protected execution environments can help in enforcing those restrictions. Similarly, however, it has been argued that protected execution environments enable censorship and other worrisome applications [2]. Beyond DRM, NGSCB could be employed for secure document signing and transaction authorization [11], for instance. Notwithstanding such intriguing ideas, it appears that the thinking about applications remains active, and far from complete. One of the goals of this paper is to contribute to this thinking.

2.3 Limits on trust

TCPA, TCG, and NGSCB have been rather controversial. While they are associated with the phrases "Trusted Computing" or "Trustworthy Computing," they have also been called "Treacherous Computing" [25]. Relying on them in the manner described in this paper will perhaps be considered naive. Even putting aside any consideration of treachery, trust should not be absolute, but relative to a set of properties or actions, and it is dangerous to confuse trusted and trustworthy.

Following Anderson [1], we mostly use an acronym rather than "Trusted Computing" or a similar name. We pick SCB, which may stand for "Secure Computing Base" (or "Sneaky Computing Base") because the descriptions in this paper focus on NGSCB, as we explain in section 3. By an SCB we loosely mean a collection of system components, hardware and software, including a security coprocessor with cryptographic keys and capabilities, a microkernel or other operating system, and possibly some protected subsystems running on top of these. Section 3 lists our assumptions more specifically.

The trust that one places in an SCB may be partly based on the properties of its hardware. If this hardware is easy to subvert, then assurances by the SCB may be worthless. On the other hand, a modest level of tamper-resistance may be both achievable and sufficient for many applications.
First, attacks on hardware (unlike buffer-overflow attacks, for instance) are not in general subject to large-scale automation. Moreover, many nodes (and their SCBs) are in physical environments in which serious tampering is hard or would be easily detected, for example in shared workspaces and data centers. In other environments, a key question is whether the people who are in a position to perform the tampering would benefit from it. Whenever the SCB works on behalf of users, defending them from viruses and other software attacks, we may not need to worry about protecting the SCB from the users.

Trust in an SCB may also be partly based on trust in its developer, its administrators, and other principals. For instance, if Acme makes chips with embedded secret keys, and issues certificates for the corresponding public keys, then the chips are reasonable trusted third parties only if Acme can be trusted to manage the secret keys appropriately. Thus, Acme is a trusted third party too. However, trust in Acme may be based on an open review, and may be further justified if Acme never has direct access to the secret keys. On this basis, it seems reasonable or at least plausible that SCBs would be trusted third parties, and even trustworthy third parties, in specific contexts.

3 Assumptions

We focus on NGSCB partly because of its practical importance, partly for the sake of concreteness, but most of the paper applies verbatim to other systems such as XOM; it may also apply to future versions of these systems, which continue to evolve. This section presents the main assumptions on which we rely.

We expect that the SCB in a system is able to communicate with other parts of the system, typically at a modest cost; in particular, this communication may be through local memory. In addition, we make the following assumptions:

- Authenticity: the capability of making assertions that can be verified by others (local or remote) as coming from this SCB, or from an SCB in a particular group. For instance, in a common design, the SCB holds a signature key that it can use for signing statements; a certification authority (perhaps operated by the SCB's manufacturer, owner, or a delegate) issues certificates for the corresponding public key, associating the public key with this SCB or with a group of trusted SCBs.

- Protection: protection from interference from the rest of the system when performing local computations.

Two additional assumptions are not essential, but sometimes convenient:

- Persistent state: the SCB may keep some persistent state across runs. This state may be as simple as a monotonic counter. Using this monotonic counter, the SCB may implement mechanisms for maintaining more complex state. In particular, assuming that the SCB has a monotonic counter, it can maintain other state on untrusted storage, using digital signatures and encryption; the counter should be incremented, and its value attached to the state, whenever an update happens, thus offering protection against replay attacks.

- Weak timeliness: the SCB has secure means to know the time, to within the precision allowed by network and scheduling delays. In particular, the SCB may get the correct time signed by a trusted network time server TS for which it knows the public key. In each exchange with TS, the SCB would challenge TS with a fresh nonce (for example by applying a one-way hash function to a secret plus a monotonic counter). Network and scheduling delays may lead the SCB to accept an old value for the time, but never a future value. Without this assumption, the SCB can include nonces as proofs of timeliness for its assertions to on-line interlocutors. The nonces would be provided as challenges by those interlocutors. The assumption removes the need for the challenge messages.

On the other hand, we do not assume several properties that may be hard to obtain.

- No availability: if the SCB never runs, or runs with degraded performance, then the availability of the whole system may suffer, but with no direct effect on integrity or secrecy properties. The SCB may not have a reliable way to detect when it is running slowly. Since we do not assume availability, we cannot assume synchronized clocks. The SCB may try to verify the timeliness of incoming statements (using nonces or timestamps), but a scheduling delay may compromise the accuracy of this verification.

- No obfuscation: all elements of the SCB design and implementation can be open; only its secret signature key needs to remain secret.

- No secure I/O: there is no requirement of secure communication with keyboards, mice, display, network ports, or other external entities, beyond what is given by the capability of signing and checking assertions.

Availability is difficult to achieve in part because it requires control of scheduling. Obfuscation has often been tried, for example in DRM systems; open design is typically deemed superior, particularly for systems that should be trusted. Secure I/O is central to some of the envisioned applications of SCBs, but it requires support that we cannot universally take for granted at present, and it also could benefit from progress on user interfaces.

4 Verified communications with an SCB

In this section we show how an SCB can serve as a trusted third party for checking and certifying communications. First, in section 4.1, we review examples of input verification, and their importance for security. Then, in section 4.2, we explain how these examples can rely on SCB support. Later sections are concerned with refining the examples, discussing benefits and drawbacks, and generalizing. Throughout the paper, we emphasize communications that involve programs at their endpoints. Accordingly, we often refer to the sender as the caller and to the receiver as the callee.
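The persistent-state assumption of section 3, keeping state on untrusted storage with the SCB's counter value attached to defeat replay, can be sketched as follows. The class below is a toy model: an HMAC over a shared key stands in for the SCB's signature, and the counter is simulated in memory.

```python
import hashlib
import hmac
import json

class SCBState:
    """State lives on untrusted storage; only a monotonic counter stays
    inside the (simulated) SCB. Rolled-back blobs fail the counter check."""
    def __init__(self, key: bytes):
        self.key, self.counter = key, 0

    def seal(self, state: dict) -> bytes:
        """Bump the counter, bind it to the state, and authenticate both."""
        self.counter += 1
        blob = json.dumps({"state": state, "ctr": self.counter}).encode()
        return blob + hmac.new(self.key, blob, hashlib.sha256).digest()

    def unseal(self, sealed: bytes) -> dict:
        """Reject tampered blobs and stale (replayed) blobs."""
        blob, tag = sealed[:-32], sealed[-32:]
        if not hmac.compare_digest(tag, hmac.new(self.key, blob, hashlib.sha256).digest()):
            raise ValueError("tampered state")
        rec = json.loads(blob)
        if rec["ctr"] != self.counter:
            raise ValueError("stale state (replay)")
        return rec["state"]
```

The point of the counter is that untrusted storage can return an old, correctly signed blob; only the freshness check inside the SCB distinguishes the current state from a replayed one.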
However,many of the ideas and techniques that we present do not require that the messages be-ing exchanged are calls to program functions;they apply more broadly to arbitrary messages in a distributed setting.4.1Checking inputsWhen a program receives data,it is prudent that it verify that the data has the expected properties before doing further computation with it(e.g.,[15]).These verifications may for example include:•Checking that an argument is of the expected size,thus thwarting buffer-overflow attacks.•Checking that a graph is acyclic,so as to avoid infinite loops in later graph manipulations.•Checking that an argument is of the expected type or structure.•Checking the validity of a“proof of work”(evidence that the sender has performed some moderately hard computation,of the kind suggested for discouraging spam;e.g.,[10,16]).•Checking that cryptographic parameters have particular properties(often number-theoretic properties)needed for security[3,Principle6].•Checking that a set of credentials forms a chain and implies some expected conclusion,for example that the sender is a member of a group.Further,interesting examples arise in cases where the data is code(or may include code):•Checking that the data does not contain one of a set of known viruses.•Checking that a piece of mobile code is well-typed.This mobile code might be written in a source language,an intermediate language,or in binary.As in Java Virtual Machines[20] and the Common Language Runtime(CLR)[6],the typing provides a base level of security.With some research type systems(e.g.,[8,21]),the typing may ensure further properties, such as compliance with resource-usage rules and secure information-flow properties.•Checking the legality of a logical proof that a piece of mobile code satisfies some property,for example an application-specific safety property,termination,or an information-flow property.Research on proof-carrying code[22]explores these ideas.6Figure2:A verified input•More 
speculatively,checking that compiled mobile code is a correct implementation of a given source program(that is,that the compiler did not make a mistake in a particular run).Research on translation validation[24]explores these ideas.As these and other examples illustrate,authenticating the origin of data is often essential,but further checking can be essential too.In particular,the checking can serve in preventing the spread of infections from senders to receivers.Some checking may be done automatically by standard machinery in distributed systems;for example,remote procedure call(RPC)machinery can enforce simple typing properties before deliv-ering arguments to remotely invoked procedures.Such automatic checking is particularly justified for generic properties that are easy to verify.On the other hand,application-specific properties and properties that are expensive to verify tend to be treated on a case-by-case basis.4.2Using an SCBSuppose that a piece of code relies on a certain property of its inputs,and that therefore this property should be checked.The checking can happen at the code’s boundary or deeper inside the code.It could also happen at the caller,though in general the caller may not know what property to ensure, and crucially the caller cannot always be trusted.Having an SCB in the caller leads to a new possibility,depicted in Figure2:the SCB can serve as a trusted third party that is responsible for the checking,and that certifies that the checking has succeeded.This certification consists in a signed assertion that the call(including its arguments) satisfies a given property.The signed assertion should contain a proof of timeliness,such as a timestamp or a nonce.The signature may simply be a public-key digital signature.When the7SCB and the consumer of the signature share a secret,on the other hand,the signature may be an inexpensive MAC(message authentication code).This MAC may be applied automatically if caller and callee communicate over an 
authenticated channel, such as can be implemented on top of the SSL and SSH protocols. This authenticated channel has another clear benefit: proving the identity of the caller to the callee.

When it receives a certificate, the callee should check that it matches the call, that it is timely, that it claims the expected property, and also that it is issued by a sufficiently trusted SCB. All these checks but the last should be straightforward. Checking that the certificate is issued by an appropriate SCB is a classical authorization problem (a matter of trust rather than of remote integrity verification). When the SCB is identified with a public key, the public key may be in a group of keys trusted for the purpose. On the other hand, the SCB may prove only that it is a proper SCB in a certain group, without revealing its exact identity; this case is more elaborate but does not introduce new difficulties.

There is no requirement that the callee have an SCB. However, an SCB at the callee can provide a secure environment in which to perform the checks just described; it can also serve for certifying properties of communications in the opposite direction, such as the result (if any) of the call.

There remains the problem of letting the caller's SCB know what property to check. This information may be hard-wired on a case-by-case basis. In general, it is attractive to envision that the property would be advertised along with the interface to the code being called. Much like the caller learns about the existence of the code entry point, and about the expected types and semantics of arguments, the caller should learn about the expected properties of these arguments. We return to this subject below.

Using an SCB for checking inputs has a number of desirable features, as well as some potentially problematic ones. Before we discuss them, however, it is useful to consider a few instantiations of the method for particular checks.

5 Examples

Next we consider four examples, both because of their intrinsic interest and in order to elucidate general features of the method described in section 4.2.

5.1 Typechecking

In the simplest example, the SCB of the caller typechecks the call, and writes a corresponding certificate. For simple typing properties of small arguments, this example is wasteful. If the caller's SCB and the callee are not already communicating on an authenticated channel, then the callee may need to check some public-key certificates; when typechecking is simple and fast, trading it for a public-key operation is hardly attractive.

As arguments get larger, delegating the typechecking to the caller's SCB becomes more reasonable. For instance, suppose that the caller is uploading a large amount of data into the callee's database, and that this data is supposed to be in a particular format. In general, checking or imposing this format may require some processing and some buffering. If the caller's SCB can guarantee that the format is obeyed, then the callee may need to compute a message digest (relatively fast) and perform at most one public-key operation, independently of the size of the data, without any buffering.

Delegating the typechecking to the caller's SCB also becomes more reasonable for complex typing tasks. For instance, the callee may be relieved to avoid the task of checking that a piece of XML conforms to a particular schema, or that a piece of mobile code is well-typed. Indeed, the typechecking of mobile code can be fairly expensive, to the point where it is difficult or impossible on resource-constrained environments. In a recent paper [18], Leroy discusses the cost of traditional bytecode verification on Java cards, and also discusses alternatives. Leroy writes:

    bytecode verification as it is done for Web applets is a complex and expensive process, requiring large amounts of working memory, and therefore believed to be impossible to implement on a smart card.

The alternatives include both off-card verification and the combination of off-card code transformations with easier on-card verification. Leroy ingeniously develops this latter alternative. On the former alternative, Leroy writes:

    The drawback of this approach is to extend the trusted computing base to include off-card components. The cryptographic signature also raises delicate practical issues (how to deploy the signature keys?) and legal issues (who takes liability for a buggy applet produced by faulty off-card tools?).

Having the off-card verification done in the caller's SCB mitigates these concerns:

• Extending the trusted computing base to an SCB appears less problematic than extending it to an arbitrary machine with arbitrary software and arbitrary viruses.

• The deployment of SCBs should include the deployment of their keys and of certificates for those keys.

• The off-card verifier can be chosen by the consumer of the code, or a delegate, and the SCB can guarantee that it is this verifier that it runs. Therefore, the SCB would not be liable for a faulty verifier. (However, other parties would still have to be responsible for more fundamental infrastructure failures such as bugs in SCBs or leak of the master secret keys.)

Moreover, any work done in the caller's SCB needs to be done only once, while work done at the consumer needs to take place once per consumer (and even more often when consumers obliviously download the same piece of mobile code multiple times).

In addition to smart-cards, servers can also be resource constrained. In the design of busy servers that deal with many clients, one typically shifts as much work as possible to the clients. In our case, the client's SCB would be responsible for checking code uploaded to the server (servlets).
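The delegation pattern for large uploads can be sketched as follows. This is a minimal illustration, not the paper's protocol: the function and key names are invented, and an HMAC over a shared demo key stands in for the public-key signature that a real SCB would produce (with a certificate chaining back to a trusted manufacturer). The caller's SCB performs the format check (here, simply that the payload is valid JSON) and binds the claimed property to a digest; the callee then does one digest computation and one signature check, independent of the data's size and without buffering or parsing.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for the SCB's signing key; a real SCB would use a
# public-key signature rather than a shared secret.
SCB_KEY = b"demo-scb-key"

def scb_certify_format(data: bytes) -> dict:
    """Caller-side SCB: check that `data` is well-formed (here: valid JSON)
    and emit a certificate binding the claimed property to a digest."""
    json.loads(data)  # the delegated format check; raises if malformed
    digest = hashlib.sha256(data).hexdigest()
    sig = hmac.new(SCB_KEY, f"is-json:{digest}".encode(), hashlib.sha256).hexdigest()
    return {"property": "is-json", "digest": digest, "sig": sig}

def callee_accept(data: bytes, cert: dict) -> bool:
    """Callee: one digest plus one signature check, regardless of data size,
    instead of checking or imposing the format itself."""
    digest = hashlib.sha256(data).hexdigest()
    if cert["property"] != "is-json" or cert["digest"] != digest:
        return False
    expected = hmac.new(SCB_KEY, f"is-json:{digest}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["sig"])

payload = b'{"rows": [1, 2, 3]}'
cert = scb_certify_format(payload)
assert callee_accept(payload, cert)           # certified upload accepted
assert not callee_accept(payload + b" ", cert)  # tampered data rejected
```

Note that the callee's cost here does not depend on how expensive the delegated check is, which is what makes the approach attractive for schema validation or bytecode verification.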
For instance, when the server is a database, and its data cannot be sent to the client because of privacy considerations or sheer size, the client may upload code to run against the data; the client's SCB could ensure the safety of the code. More broadly, the client's SCB could also ensure that the code conforms to any server policies.

In short, although there exist clever alternatives, typechecking in the caller's SCB appears as a viable approach to an actual problem. Although it is not always advantageous, it does have some appealing properties, and it can be a good choice.

5.2 Proof checking

Research on proof-carrying code develops the idea that mobile code should be accompanied by proofs that establish that the code satisfies logical properties. As a special case, the properties may represent basic guarantees such as memory-safety, which can also be obtained by typechecking. However, proof-carrying code is considerably more general. As suggested above, the properties may include application-specific safety properties, termination, and information-flow security properties. For example, a proof may guarantee that the code uses only certain limited resources, or that it does not leak pieces of private user data. Such properties may be attractive whether the receiver of the code is a resource-constrained personal smart-card or a busy database server.

Although the verification of proofs is typically simpler than their construction, it is not a trivial task. It is roughly as hard as typechecking (discussed in section 5.1), and in fact proof checking can be formulated as a kind of typechecking. In addition, proofs can be bulky, creating communication overhead. For example, a recent paper [14] that treats device-driver properties includes proof sizes, for instance up to 156KB of proof for a program of around 17KLOC. Other proof encodings are possible (e.g., [23]), and may lead to a reduction of proof sizes by an order of magnitude. While these encodings are both insightful and effective, they can lead to slower proof checking, and in any case the proofs often remain much larger than signed statements. For example, a proof for the hotjava code takes 354KB [23], substantially less than the code itself (2.75MB), but more than a thousand times the size of a signature; checking the proof took close to one minute on a 400MHz machine, much more than checking a signed statement.

Alternatively, with our approach, the SCB of the code producer could be responsible for checking the proof. The proof could be constructed outside the SCB, by whatever means, and given to the SCB with a cheap, local memory transfer, rather than network communication. The SCB could then transmit an assertion that the proof exists, in a certificate, rather than the proof itself. The consumer of the code would simply check the certificate rather than the proof. Leroy's concerns about off-card bytecode verification apply also to this scenario, though again the use of an SCB should mitigate them and offer some advantages.

To date, there is only limited experience in the deployment and use of proof-carrying code technology. Therefore any assessment of the use of SCBs in this context may remain rather speculative.
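The shift from "ship the proof" to "ship an assertion that the proof was checked" can be made concrete with a deliberately tiny stand-in. Real proof-carrying-code proofs are logical derivations about code; here the "proof" is merely a witness that a number is composite, and an HMAC over an invented demo key stands in for the SCB's public-key signature. What the sketch preserves is the size asymmetry: the witness stays local to the producer's SCB, and the consumer checks only a constant-size certificate.

```python
import hashlib
import hmac

# Hypothetical stand-in for the producer SCB's signing key.
SCB_KEY = b"demo-scb-key"

def scb_check_proof(n: int, witness: tuple) -> str:
    """Producer-side SCB: verify the (potentially bulky) proof locally, then
    emit a constant-size signed assertion that such a proof exists."""
    p, q = witness
    if not (1 < p < n and 1 < q < n and p * q == n):
        raise ValueError("proof does not check")
    claim = f"composite:{n}"
    return hmac.new(SCB_KEY, claim.encode(), hashlib.sha256).hexdigest()

def consumer_check(n: int, assertion: str) -> bool:
    """Code consumer: checks the signed assertion only, never the proof."""
    expected = hmac.new(SCB_KEY, f"composite:{n}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion)

cert = scb_check_proof(391, (17, 23))  # the witness never leaves the SCB
assert consumer_check(391, cert)
assert not consumer_check(392, cert)   # assertion does not transfer to other claims
```

The consumer's work is one signature verification, however large or slow the original proof was to check.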
Nevertheless, as for typechecking, this use of SCBs appears as a sensible and potentially attractive variant.

5.3 Certificate checking

For access control in distributed systems, the reference monitor that evaluates a request typically needs to consider digitally signed certificates and assemble evidence on whether the request should be granted. If the request comes from a source S and it is for an operation O on a target object T, the certificates may for example say that S is a member of a group G, that G is included in another group G', that all members of G' can perform O on objects owned by a principal P, and that P does in fact own T. Examples with chains of 5–6 certificates are not uncommon in some systems (e.g., [7, 9]).

The certificates may be obtained by a variety of methods (pushed or pulled); selecting the relevant certificates and assembling them into a proof can be difficult. Therefore, several systems have, to various extents, shifted the work of providing proofs to the sources of requests [26, 4, 5].
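The certificate chain in the example above (S is in G, G is included in G', members of G' may perform O on P's objects, P owns T) can be assembled mechanically. The sketch below is illustrative only: certificates are plain tuples rather than signed statements, and the function name is invented. It shows the search the source's SCB would perform before certifying that it has checked a proof; on success, the SCB would sign the compact grant so that the reference monitor checks one certificate instead of the whole chain.

```python
# The certificates of the running example, as unsigned tuples for illustration.
certs = [
    ("member", "S", "G"),       # S is a member of group G
    ("subgroup", "G", "G'"),    # G is included in group G'
    ("may", "G'", "O", "P"),    # members of G' may perform O on P's objects
    ("owns", "P", "T"),         # P owns the target object T
]

def scb_prove(certs, source, op, target):
    """Source-side SCB: search the certificates for a chain granting
    `source` the right to perform `op` on `target`."""
    groups = {c[2] for c in certs if c[0] == "member" and c[1] == source}
    changed = True
    while changed:  # close the group set under subgroup inclusion
        changed = False
        for c in certs:
            if c[0] == "subgroup" and c[1] in groups and c[2] not in groups:
                groups.add(c[2])
                changed = True
    owners = {c[1] for c in certs if c[0] == "owns" and c[2] == target}
    if any(c[0] == "may" and c[1] in groups and c[2] == op and c[3] in owners
           for c in certs):
        # This compact claim is what the SCB would sign and send, in place
        # of the certificate chain itself.
        return f"grant:{source}:{op}:{target}"
    return None

assert scb_prove(certs, "S", "O", "T") == "grant:S:O:T"
assert scb_prove(certs, "S", "O", "U") is None  # no chain for object U
```

Because the chain is checked locally, the individual certificates (including S's identity and group memberships) need not be revealed to the reference monitor, which is the privacy point developed below.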
Nevertheless, the checking of proofs remains in the reference monitor.

Using an SCB, we can go further: the source of a request need not present a pile of certificates or even a proof, but rather its SCB can provide a certificate that it has checked a proof. (In addition, the SCB should present certificates to establish its trustworthiness, and the reference monitor should check them, but these certificates may be trivial, and in any case they should not vary much from request to request.) Thus, the task of the reference monitor becomes simpler.

This approach could also have privacy advantages: the source's SCB need not reveal all the source's certificates—including the exact identity of the source and its group memberships—as those are processed locally. Private information about the source can thus be kept from the reference monitor, and also from any parties that somehow succeed in compromising the reference monitor, which may not have an SCB. Conversely, the reference monitor may be able to disclose its access-control policy to the source's SCB without making it public. (However, this disclosure is not essential: the SCB may provide only a partial proof if it does not know the access-control policy, so the approach applies in that case also.) Clearly, realizing this privacy advantage may require additional machinery, such as specifications of privacy properties that control the flow of certificates; the development of this machinery is perhaps interesting but beyond the scope of this paper.

While the explanation above concerns a reference monitor that evaluates a request, much the same applies to an on-line authority that issues certificates—for example, an authority that issues a certificate of membership in a group G to anyone who proves membership in another group G'. More generally, the protected environment of an SCB appears as an appealing place for certificate processing and manufacturing. With some care, its weak timeliness properties should be adequate for this application.

5.4 Virus confinement and communications censorship?

Preventing the spread of viruses is an eminently worthy application of SCBs. Because viruses can in general attack anti-virus software, it is attractive to run that software under the protection of SCBs. In particular, when two nodes communicate, either or both can use their SCBs to check and certify the absence of known viruses in the data they exchange.

One may ask, however, whether any negative applications of SCBs might make them unattractive overall. In particular, the same infrastructure that blocks viruses could well be used for censoring other kinds of contents. Fortunately, communications censorship—at least in the form described here—can be avoided. First, there may be legal protections against it. Hardware attacks on SCBs may also defeat censorship, though they negate protection against viruses at the same time. Finally, censorship may be avoided at the software level, since communications between consenting nodes
