操作系统(第二版)习题答案
《操作系统》第二版 徐宗元OS-习题答案
习题参考答案
1.6.3 选择题
1. (1) (5) (6) (7) (10)
2. A—(2) B—(1) C—(1) D—(4) E—(3)
3. A—(3) B—(4) C—(1) D—(3) E—(4)
4. A—(8) B—(9) C—(1) D—(5) E—(2)
5. A—(5) B—(2)
6. A—(2) B—(3) C—(4) E—(1)
7. A—(2) B—(1) C—(3) E—(4)
8. A—(2) B—(4) C—(3)
9. A—(4) B—(5)
10. A—(4) B—(2)
11. A—(3) B—(1) C—(1) D—(3) E—(4)
12. A—(3) B—(2) C—(4) D—(1) E—(2)
13. A—(2)
14. A—(1)
15. A—(3) B—(4)
16. A—(1)
17. A—(2) B—(4) C—(3) D—(1)
18. A—(3)
19. A—(4)
1.6.4 问答题
3. 答:批处理OS:目标是提高系统资源的利用效率。
系统自动地连续处理一批作业,用户不能直接干预作业执行。
没有多路性、独立性、交互性、及时性,系统要求可靠。
适合对处理结束时间要求不太严格、作业运行步骤比较规范、程序已经过考验的作业成批处理。
分时OS:目标是为了满足多个用户及时进行人-机交互的需要。
系统采用时间片轮转方式,多个用户同时在各自的终端上与系统进行交互式工作,系统对各用户请求及时响应。
有多路性(多个用户同时在各自的终端上工作)、独立性(用户感觉独占计算机)、交互性(用户能与系统进行广泛的人机对话)、及时性(系统对各用户请求及时响应),系统要求可靠。
适用于频繁交互的作业,如程序调试、软件开发等。
实时OS:目标是缩短系统的响应时间,对随机发生的外部事件作出及时响应并对其进行处理。
系统采用“事件驱动”方式,接收到外部信号后及时处理,并且要求在严格的时限内处理完接收的事件,实时性(快速的响应时间)和高度可靠性是实时OS最重要的设计目标。
操作系统第二版课后习题答案
操作系统是计算机科学中的重要领域,它负责管理计算机硬件和软件资源,为用户提供良好的使用体验。
在学习操作系统的过程中,课后习题是巩固和深化知识的重要方式。
本文将为大家提供操作系统第二版课后习题的答案,帮助读者更好地理解和掌握操作系统的知识。
第一章:引论1. 操作系统的主要功能包括进程管理、内存管理、文件系统管理和设备管理。
2. 进程是指正在执行的程序的实例。
进程控制块(PCB)是操作系统用来管理进程的数据结构,包含进程的状态、程序计数器、寄存器等信息。
3. 多道程序设计是指在内存中同时存放多个程序,通过时间片轮转等调度算法,使得多个程序交替执行。
4. 异步输入输出是指程序执行期间,可以进行输入输出操作,而不需要等待输入输出完成。
第二章:进程管理1. 进程调度的目标包括提高系统吞吐量、减少响应时间、提高公平性等。
2. 进程调度算法包括先来先服务(FCFS)、最短作业优先(SJF)、优先级调度、时间片轮转等。
3. 饥饿是指某个进程长时间得不到执行的情况,可以通过调整优先级或引入抢占机制来解决。
4. 死锁是指多个进程因为争夺资源而陷入无限等待的状态,可以通过资源预分配、避免环路等方式来避免死锁。
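上面列举的调度算法对周转时间的影响,可以用一段小程序直观地算出来。下面是一个极简的示意(假设所有作业同时到达、忽略切换开销;fcfs、sjf等函数名均为演示所设):

```python
def fcfs(bursts):
    """先来先服务:按提交顺序依次运行,返回各作业的周转时间。"""
    t, turnaround = 0, []
    for b in bursts:
        t += b
        turnaround.append(t)  # 同时到达时,周转时间等于完成时刻
    return turnaround

def sjf(bursts):
    """短作业优先:按运行时间从短到长排序后,再按FCFS执行。"""
    return fcfs(sorted(bursts))

jobs = [10, 6, 2]     # 三个同时到达的作业的运行时间
print(fcfs(jobs))     # [10, 16, 18],平均周转时间约14.7
print(sjf(jobs))      # [2, 8, 18],平均周转时间约9.3
```

可以看到,在作业同时到达的前提下,短作业优先明显降低了平均周转时间。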
第三章:内存管理1. 内存管理的主要任务包括内存分配、内存保护、地址转换等。
2. 连续内存分配包括固定分区分配、可变分区分配和动态分区分配。
3. 分页和分段是常见的非连续内存分配方式,分页将进程的地址空间划分为固定大小的页,分段将进程的地址空间划分为逻辑段。
4. 页面置换算法包括最佳置换算法、先进先出(FIFO)算法、最近最久未使用(LRU)算法等。
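不同置换算法的缺页次数可以用一段示意代码统计(count_faults等名字为演示所设;页面走向取自说明Belady异常的经典例子,并非书中原题):

```python
def count_faults(refs, frames, policy):
    """统计页面走向 refs 在 frames 个物理块下的缺页次数。policy 取 'FIFO' 或 'LRU'。"""
    mem = []        # 队首是将被淘汰的页:FIFO为最先调入页,LRU为最久未用页
    faults = 0
    for p in refs:
        if p in mem:
            if policy == 'LRU':          # LRU:命中时把该页移到"最近使用"一端
                mem.remove(p)
                mem.append(p)
            continue
        faults += 1
        if len(mem) >= frames:
            mem.pop(0)                   # 淘汰队首页面
        mem.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults(refs, 3, 'FIFO'))     # 9
print(count_faults(refs, 4, 'FIFO'))     # 10:块数增多缺页反而增加,即Belady异常
print(count_faults(refs, 3, 'LRU'))      # 10
```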
第四章:文件系统管理1. 文件是操作系统中用来存储和组织数据的逻辑单位,可以是文本文件、图像文件、音频文件等。
2. 文件系统的主要功能包括文件的创建、删除、读取、写入等操作。
3. 文件系统的组织方式包括层次目录结构、索引结构、位图结构等。
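文件的创建、写入、读取、删除这几个基本操作,可以用如下示意代码体会(路径和文件名均为演示所设):

```python
import os
import tempfile

# 在临时目录中演示文件的创建、写入、读取与删除
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w", encoding="utf-8") as f:   # 创建并写入
    f.write("hello, file system\n")

with open(path, "r", encoding="utf-8") as f:   # 读取
    print(f.read(), end="")

os.remove(path)                                # 删除
print(os.path.exists(path))                    # False
```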
操作系统(第二版)课后习题答案
1. 什么是操作系统?其主要功能是什么?
答:操作系统是控制和管理计算机系统内各种硬件和软件资源、有效组织多道程序运行的系统软件(或程序集合),是用户和计算机之间的接口。
2. 在某个计算机系统中,有一台输入机和一台打印机,现有两道程序投入运行,程序A、B同时运行,A略早于B。
A的运行轨迹为:计算50ms、打印100ms、再计算50ms、打印100ms,结束。
B的运行轨迹为:计算50ms、输入80ms、再计算100ms,结束。
试说明:(1)两道程序运行时,CPU是否空闲等待?若是,在那段时间段等待?(2)程序A、B是否有等待CPU的情况?若有,指出发生等待的时刻。
(原文此处为一幅0~300ms的两道程序运行时序图,图略。)
(1) CPU有空闲等待,在100ms~150ms期间。
(2) 程序A没有等待CPU;程序B发生等待的时间是180ms~200ms。
1. 设公共汽车上,司机和售票员的活动如下:
司机的活动:启动车辆;正常行车;到站停车。
售票员的活动:关车门;售票;开车门。
在汽车不断的到站、停车、行驶过程中,用信号量和P、V操作实现这两个活动的同步关系。
semaphore s1 = 0, s2 = 0;
cobegin
    司机(); 售票员();
coend

process 司机() {
    while (true) {
        P(s1);
        启动车辆; 正常行车; 到站停车;
        V(s2);
    }
}

process 售票员() {
    while (true) {
        关车门; V(s1);
        售票;
        P(s2); 开车门; 上下乘客;
    }
}

2. 设有三个进程P、Q、R共享一个缓冲区,该缓冲区一次只能存放一个数据,P进程负责循环地从磁带机读入数据并放入缓冲区,Q进程负责循环地从缓冲区取出P进程放入的数据进行加工处理,并把结果放入缓冲区,R进程负责循环地从缓冲区读出Q进程放入的数据并在打印机上打印。
请用信号量和P、V操作,写出能够正确执行的程序。
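这一题可以用三个信号量求解:empty表示缓冲区空闲,full_q表示缓冲区中是P放入的待加工数据,full_r表示缓冲区中是Q放入的加工结果。下面用Python的threading.Semaphore模拟P、V操作给出一种可行解法的示意(buffer、output等名字以及轮数N均为演示所设;"加工"用乘10代替,"打印"用追加到output列表代替):

```python
import threading

empty  = threading.Semaphore(1)  # 缓冲区空闲,P可放入
full_q = threading.Semaphore(0)  # 缓冲区中是P放入的原始数据
full_r = threading.Semaphore(0)  # 缓冲区中是Q放入的加工结果

buffer = [None]                  # 一次只能存放一个数据的缓冲区
output = []                      # 代替打印机,便于观察结果
N = 5                            # 演示5轮

def proc_p():
    for i in range(N):
        empty.acquire()          # P(empty)
        buffer[0] = i            # 模拟从磁带机读入数据
        full_q.release()         # V(full_q)

def proc_q():
    for _ in range(N):
        full_q.acquire()         # P(full_q)
        buffer[0] = buffer[0] * 10   # 模拟加工处理,结果放回缓冲区
        full_r.release()         # V(full_r)

def proc_r():
    for _ in range(N):
        full_r.acquire()         # P(full_r)
        output.append(buffer[0]) # 模拟在打印机上打印
        empty.release()          # V(empty)

threads = [threading.Thread(target=t) for t in (proc_p, proc_q, proc_r)]
for t in threads: t.start()
for t in threads: t.join()
print(output)                    # [0, 10, 20, 30, 40]
```

三个信号量强制每个数据严格按P→Q→R的次序流过单缓冲区,因此输出是确定的。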
linux操作系统(第二版)课后习题答案
在学习Linux操作系统的过程中,课后习题是非常重要的一部分。
通过做课后习题,我们可以更好地巩固所学的知识,加深对Linux操作系统的理解。
下面我将为大家总结一些常见的课后习题答案,希望对大家的学习有所帮助。
1. 什么是Linux操作系统?它有哪些特点?答:Linux操作系统是一种开源的Unix-like操作系统,具有多用户、多任务和多线程的特点。
它具有稳定性高、安全性好、性能优越等特点。
2. 请简要介绍Linux文件系统的组成结构。
答:Linux文件系统的组成结构包括根目录、用户目录、系统目录、设备文件、普通文件等。
其中根目录是整个文件系统的起点,用户目录是每个用户的个人目录,系统目录包括系统文件和程序文件,设备文件用于访问设备,普通文件包括文本文件、二进制文件等。
3. 请简要介绍Linux系统的启动过程。
答:Linux系统的启动过程包括硬件初始化、引导加载程序启动、内核初始化、用户空间初始化等步骤。
其中硬件初始化是指计算机硬件的自检和初始化,引导加载程序启动是指引导加载程序加载内核,内核初始化是指内核加载并初始化各种设备和服务,用户空间初始化是指启动系统的用户空间进程。
4. 请简要介绍Linux系统的文件权限管理。
答:Linux系统的文件权限管理包括文件所有者、文件所属组、文件权限等。
文件所有者是指文件的所有者,文件所属组是指文件所属的组,文件权限包括读、写、执行权限等。
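文件权限位的设置与读取可以用如下示意代码体会(用Python的os.chmod/os.stat演示,0o644等取值为演示所设):

```python
import os
import stat
import tempfile

# 创建一个临时文件,演示权限位的设置与读取
fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o644)               # rw-r--r--:所有者可读写,组和其他用户只读
mode = os.stat(path).st_mode
print(oct(stat.S_IMODE(mode)))      # 0o644
print(bool(mode & stat.S_IWGRP))    # False:组用户没有写权限
os.remove(path)
```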
5. 请简要介绍Linux系统的进程管理。
答:Linux系统的进程管理包括进程的创建、销毁、调度等。
进程的创建是指创建新的进程,进程的销毁是指销毁已有的进程,进程的调度是指对进程进行调度和管理。
通过以上课后习题的答案总结,我们可以更好地了解Linux操作系统的基本知识和常见操作。
希望大家在学习过程中多做课后习题,加深对Linux操作系统的理解,提高自己的操作技能。
计算机操作系统第二版答案(郁红英)
习题二
1. 操作系统中为什么要引入进程的概念?为了实现并发进程之间的合作和协调,以及保证系统的安全,操作系统在进程管理方面要做哪些工作?
答:(1) 为了从变化的角度动态地分析研究可以并发执行的程序,真实地反映系统的独立性、并发性、动态性和相互制约,操作系统中就不得不引入"进程"的概念;(2) 为了防止操作系统及其关键的数据结构受到用户程序有意或无意的破坏,通常将处理机的执行状态分成核心态和用户态;(3) 对系统中的全部进程实行有效的管理,其主要表现是对进程进行创建、撤销以及在某些进程状态之间的转换控制。
2. 试描述当前正在运行的进程状态改变时,操作系统进行进程切换的步骤。
答:(1)就绪状态→运行状态。
处于就绪状态的进程,具备了运行的条件,但由于未能获得处理机,故没有运行。
( 2)运行状态→就绪状态。
正在运行的进程,由于规定的时间片用完而被暂停执行,该进程就会从运行状态转变为就绪状态。
(3)运行状态→阻塞状态。
处于运行状态的进程,除了因为时间片用完而暂停执行外还有可能由于系统中的其他因素的影响而不能继续执行下去。
3.现代操作系统一般都提供多任务的环境,试回答以下问题。
(1)为支持多进程的并发执行,系统必须建立哪些关于进程的数据结构?答:为支持进程的并发执行,系统必须建立“进程控制块(PCB)”,PCB的组织方式常用的是链接方式。
(2) 为支持进程的状态变迁,系统至少应该提供哪些进程控制原语?
答:进程的创建与撤销原语、进程的阻塞与唤醒原语,以及进程的挂起与激活原语。
(3) 当进程的状态变迁时,相应的数据结构发生变化吗?
答:会发生变化。创建原语:建立进程的PCB,并将进程投入就绪队列;撤销原语:删除进程的PCB,并将进程从其所在队列中摘除;阻塞原语:将进程PCB中的状态从运行状态改为阻塞状态,并将进程投入阻塞队列;唤醒原语:将进程PCB中的状态从阻塞状态改为就绪状态,并将进程从阻塞队列上摘下,投入就绪队列。
4.什么是进程控制块?从进程管理、中断处理、进程通信、文件管理、设备管理及存储管理的角度设计进程控制块应该包含的内容。
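按这几个管理角度,PCB大致需要哪些字段,可以用一个极简的数据结构示意(字段名与归类均为笔者为演示所设,并非某个真实系统的PCB定义):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                          # 进程管理:进程标识
    state: str = "ready"              # 进程管理:ready / running / blocked
    program_counter: int = 0          # 中断处理:断点现场(PC)
    registers: dict = field(default_factory=dict)       # 中断处理:寄存器现场
    message_queue: list = field(default_factory=list)   # 进程通信:消息队列
    open_files: list = field(default_factory=list)      # 文件管理:打开文件表
    devices: list = field(default_factory=list)         # 设备管理:已分配设备
    page_table: dict = field(default_factory=dict)      # 存储管理:页表/段表信息

p = PCB(pid=1)
p.state = "running"     # 状态变迁时,修改的正是PCB中的状态字段
print(p.pid, p.state)
```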
操作系统(第二版)课后习题答案
故需要一次间接寻址,就可读出该数据
如果要求读入从文件首到263168Byte处的数据(包括该字节),读出过程为:首先根据直接寻址读出前10块;再读出一次间接索引指示的索引块1块;然后将该索引块中下标0~247对应的248个数据块全部读入即可。共读盘块数10+1+248=259块。
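上面的259块可以用一小段代码核算(假设与上文一致:盘块1KB,前10块直接寻址,其余数据块经一个一次间接索引块;blocks_to_read为演示所设的函数名):

```python
BLOCK = 1024      # 盘块大小1KB
DIRECT = 10       # 直接寻址的盘块数

def blocks_to_read(nbytes):
    """读入文件第0~nbytes字节共需读的盘块总数(数据块+间接索引块)。"""
    data_blocks = nbytes // BLOCK + 1      # 含第nbytes字节所在的数据块
    if data_blocks <= DIRECT:
        return data_blocks                 # 全部可直接寻址
    return data_blocks + 1                 # 另加1个一次间接索引块

print(blocks_to_read(263168))              # 259 = 10 + 248 + 1
```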
3. 某文件系统采用索引文件结构,设文件索引表的每个表目占用3Byte,存放盘块的块号,盘块的大小为512Byte。此文件系统采用直接、一次间接、二次间接、三次间接索引所能管理的最大
(1) |100-8|+|18-8|+|27-18|+|129-27|+|110-129|+|186-110|+|78-186|+|147-78|+|41-147|+|41-10|+|64-10|+|12-64|
  = 92+10+9+102+19+76+108+69+106+31+54+52 = 728
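上式的累加可以用一小段代码核算(设移动臂起始于100号柱面,访问次序按上式各项依次为8, 18, 27, 129, 110, 186, 78, 147, 41, 10, 64, 12;total_seek为演示所设的函数名):

```python
def total_seek(start, requests):
    """按给定访问次序(FCFS)计算移动臂移动的总距离。"""
    pos, total = start, 0
    for r in requests:
        total += abs(r - pos)   # 每次移动的柱面数
        pos = r
    return total

order = [8, 18, 27, 129, 110, 186, 78, 147, 41, 10, 64, 12]
print(total_seek(100, order))   # 728
```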
①FCFS(先来先服务法)
作业  到达时间  运行时间  开始时间  完成时间  周转时间  带权周转时间
1    8:00     120min   8:00     10:00    120min   1
2    8:50     50min    10:00    10:50    120min   2.4
3    9:00     10min    10:50    11:00    120min   12
4    9:50     20min    11:00    11:20    90min    4.5
平均周转时间T=(120+120+120+90)/4=112.5min,平均带权周转时间W=(1+2.4+12+4.5)/4=4.975
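上表的平均值可以用一小段代码核算(把时刻换算成以8:00为0点的分钟数;第1道作业的运行时间120min由其带权周转时间为1推得):

```python
# 四道作业:到达、运行、完成时间(分钟,以8:00为0点)
arrive = [0, 50, 60, 110]       # 8:00, 8:50, 9:00, 9:50
run    = [120, 50, 10, 20]
finish = [120, 170, 180, 200]   # 10:00, 10:50, 11:00, 11:20

turnaround = [f - a for f, a in zip(finish, arrive)]   # 周转时间=完成-到达
weighted   = [t / r for t, r in zip(turnaround, run)]  # 带权周转=周转/运行

print(turnaround)               # [120, 120, 120, 90]
print(sum(turnaround) / 4)      # 112.5
print(sum(weighted) / 4)        # 4.975
```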
②SJF(短作业优先法)
作业  到达时间  运行时间  开始时间  完成时间  周转时间  带权周转时间
1    8:00     120min   8:00     10:00    120min   1
3    9:00     10min    10:00    10:10    70min    7
4    9:50     20min    10:10    10:30    40min    2
2    8:50     50min    10:30    11:20    150min   3
平均周转时间T=(120+150+70+40)/4=95min,平均带权周转时间W=(1+3+7+2)/4=3.25
页面长度为4KB,虚地址空间共有( )个页面
3. 某计算机系统提供24位虚存空间,主存空间为2^18 Byte,采用请求分页虚拟存储管理,页面尺寸为1KB。假定应用程序产生虚拟地址(八进制),而此页面分得的块号为100(八进制),说明
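地址变换的计算方法可以用一小段代码示意。题目中的具体虚地址在此处缺失,下面用一个假设的八进制地址0o1234567代替,仅演示计算步骤(页面1KB即页内偏移占10位;块号八进制100即十进制64):

```python
PAGE = 1024                      # 页面尺寸1KB,页内偏移占10位

vaddr = 0o1234567                # 假设的虚拟地址(八进制),非题目原值
page_no = vaddr >> 10            # 高位部分为页号
offset  = vaddr & (PAGE - 1)     # 低10位为页内偏移
frame   = 0o100                  # 该页分得的物理块号(八进制100=十进制64)

paddr = (frame << 10) | offset   # 物理地址 = 块号拼接页内偏移
print(oct(page_no), oct(offset), oct(paddr))   # 0o516 0o567 0o200567
```

注意物理地址0o200567 = 65911,落在2^18 Byte的主存空间之内。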
《Linux操作系统(第2版)》课后习题答案
练习题
一、选择题
1. Linux最早是由计算机爱好者 B 开发的。
A. Richard Petersen  B. Linus Torvalds  C. Rob Pick  D. Linux Sarwar
2. 下列 C 是自由软件。
A. Windows XP  B. UNIX  C. Linux  D. Windows 2000
3. 下列 B 不是Linux的特点。
A. 多任务  B. 单用户  C. 设备独立性  D. 开放性
4. Linux的内核版本是 A 的版本。
A. 不稳定  B. 稳定的  C. 第三次修订  D. 第二次修订
5. Linux安装过程中的硬盘分区工具是 D 。
A. PQmagic  B. FDISK  C. FIPS  D. Disk Druid
6. Linux的根分区系统类型是 C 。
A. FAT16  B. FAT32  C. ext4  D. NTFS
二、填空题
1. GNU的含义是:GNU's Not UNIX。
2. Linux一般有3个主要部分:内核(kernel)、命令解释层(Shell或其他操作环境)、实用工具。
3. 安装Linux最少需要两个分区,分别是swap交换分区和/(根)分区。
4. Linux默认的系统管理员账号是root。
三、简答题(略)
1. 简述Red Hat Linux系统的特点,简述一些较为知名的Linux发行版本。
2. Linux有哪些安装方式?安装Red Hat Linux系统要做哪些准备工作?
3. 安装Red Hat Linux系统的基本磁盘分区有哪些?
4. Red Hat Linux系统支持的文件类型有哪些?
练习题
一、选择题
1. D 命令能用来查找在文件TESTFILE中只包含四个字符的行。
A. grep '' TESTFILE  B. grep '....' TESTFILE  C. grep '^$' TESTFILE  D. grep '^....$' TESTFILE
2. B 命令用来显示/home及其子目录下的文件名。
计算机操作系统第二版答案
习题一
1. 什么是操作系统?它的主要功能是什么?
答:操作系统是用来管理计算机系统的软、硬件资源,合理地组织计算机的工作流程,以方便用户使用的程序集合;其主要功能有进程管理、存储器管理、设备管理和文件管理功能。
2. 什么是多道程序设计技术?多道程序设计技术的主要特点是什么?
答:多道程序设计技术是把多个程序同时放入内存,使它们共享系统中的资源。特点:多道,即计算机内存中同时存放多道相互独立的程序;宏观上并行,是指同时进入系统的多道程序都处于运行过程中;微观上串行,是指在单处理机环境下,内存中的多道程序轮流占有CPU,交替执行。
3. 批处理系统是怎样的一种操作系统?它的特点是什么?
答:批处理操作系统是一种基本的操作系统类型。在该系统中,用户的作业被成批地输入到计算机中,然后在操作系统的控制下,用户的作业自动地执行。特点是:资源利用率高、系统吞吐量大、平均周转时间长、无交互能力。
4. 什么是分时系统?什么是实时系统?试从交互性、及时性、独立性、多路性和可靠性几个方面比较分时系统和实时系统。
答:分时系统:一个计算机和许多终端设备连接,每个用户可以通过终端向计算机发出指令,请求完成某项工作。在这样的系统中,用户感觉不到其他用户的存在,好像独占计算机一样。
实时系统:对外部输入的信息,实时系统能够在规定的时间内处理完毕并作出反应。
比较:交互性:实时系统具有交互性,但人与系统的交互仅限于访问系统中某些特定的专用服务程序。
操作系统实用教程(第二版)-OS习题答案
操作系统习题解答
1. 存储程序式计算机的主要特点是什么?
答:主要特点是以顺序计算为基础,根据程序规定的顺序依次执行每一个操作,控制部件根据程序对整个计算机的活动实行集中过程控制,即为集中顺序过程控制。
这类计算是过程性的,实际上这种计算机是模拟人们的手工计算的产物。
即首先取原始数据,执行一个操作,将中间结果保存起来;再取一个数,和中间结果一起又执行一个操作,如此计算下去。
在遇到多个可能同时执行的分支时,也是先执行完一个分支,然后再执行第二个分支,直到计算完毕。
2. 批处理系统和分时系统各具有什么特点?答:批处理系统是在解决人一机矛盾以及高速度的中央处理机和低速度的I/O设备这两对矛盾的过程中发展起来的。
它的出现改善了CPU和外设的使用情况,其特点是实现了作业的自动定序、自动过渡,从而使整个计算机系统的处理能力得以提高。
在多道系统中,若采用了分时技术,就是分时操作系统,它是操作系统的另一种类型。
它一般采用时间片轮转的办法,使一台计算机同时为多个任务服务。
对用户都能保证足够快的响应时间,并提供交互会话功能。
它与批处理系统之间的主要差别在于,分时系统是人机交互式系统,响应时间快;而批处理系统是作业自动定序和过渡,无人机交互,周转时间长。
3. 实时系统的特点是什么?一个实时信息处理系统和一个分时系统从外表看来很相似,它们有什么本质的区别呢?答:实时系统对响应时间的要求比分时系统更高,一般要求响应时间为秒级、毫秒级甚至微秒级。
将电子计算机应用到实时领域,配置上实时监控系统,便组成各种各样的专用实时系统。
实时系统按其使用方式不同分为两类:实时控制系统和实时信息处理系统。
实时控制是指利用计算机对实时过程进行控制和提供监督环境。
实时信息处理系统是指利用计算机对实时数据进行处理的系统。
实时系统大部分是为特殊的实时任务设计的,这类任务对系统的可靠性和安全性要求很高。
与分时系统相比,实时系统没有那样强的交互会话功能,通常不允许用户通过实时终端设备去编写新的程序或修改已有的程序。
操作系统第二版罗宇_课后答案
操作系统部分课后习题答案
1.2 操作系统以什么方式组织用户使用计算机?
答:操作系统以进程的方式组织用户使用计算机。用户所需要完成的各种任务,必须由相应的程序来表达;为了实现用户的任务,必须使具有相应功能的程序执行。而进程就是程序的运行,操作系统的进程调度程序决定CPU在各进程间的切换。操作系统为用户提供进程创建和结束等系统调用功能,使用户能创建新进程。操作系统在初始化后,会为每个可能的系统用户创建第一个用户进程,用户的其他进程则由父进程通过"进程创建"系统调用来创建。
1.4 早期监督程序(monitor)的功能是什么?
答:早期监督程序的功能是替代系统操作员的部分工作,自动控制作业的运行。监督程序首先把第一道作业调入主存,并启动该作业;运行结束后,再把下一道作业调入主存启动运行。它如同一个系统操作员,负责批作业的I/O,并自动根据作业控制说明书以单道串行的方式控制作业运行,同时在程序运行过程中通过提供各种系统调用,控制计算机资源的使用。
1.7 试述多道程序设计技术的基本思想。为什么采用多道程序设计技术可以提高资源利用率?
答:多道程序设计技术的基本思想是,在主存中同时保持多道程序,主机以交替的方式同时处理多道程序。从宏观上看,主机内同时保持和处理若干道已开始运行但尚未结束的程序;从微观上看,某一时刻处理机只运行某一道程序。
可以提高资源利用率的原因:任何一道作业的运行总是交替地串行使用CPU、外设等资源,即使用一段时间的CPU,然后使用一段时间的I/O设备;采用多道程序设计技术,加之对多道程序实施合理的运行调度,可以实现CPU和I/O设备的高度并行,从而大大提高CPU与外设的利用率。
1.8 什么是分时系统?其主要特征是什么?适用于哪些应用领域?
答:分时系统是以多道程序设计技术为基础的交互式系统。在此系统中,一台计算机与多台终端相连接,用户通过各自的终端和终端命令以交互的方式使用计算机系统。
现代操作系统(第二版)习题答案
MODERN OPERATING SYSTEMS SECOND EDITIONPROBLEM SOLUTIONSANDREW S. TANENBAUM Vrije Universiteit Amsterdam, The NetherlandsPRENTICE HALLUPPER SADDLE RIVER, NJ 07458SOLUTIONS TO CHAPTER 1 PROBLEMS1. An operating system must provide the users with an extended (i.e., virtual) machine, and it must manage the I/O devices and other system resources.2. Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It is commonly used to keep the CPU busy while one or more processes are doing I/O.3. Input spooling is the technique of reading in jobs, for example, from cards, onto the disk, so that when the currently executing processes are finished, there will be work waiting for the CPU. Output spooling consists of first copying printable files to disk before printing them, rather than printing directly as the output is generated. Input spooling on a personal computer is not very likely, but output spooling is.4. The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the CPU will be 100 percent busy. This of course assumes the major delay is the wait while data are copied. A CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).5. Second generation computers did not have the necessary hardware to protect the operating system from malicious user programs.6. It is still alive. For example, Intel makes Pentium I, II, and III, and 4 CPUs with a variety of different properties including speed and power consumption. All of these machines are architecturally compatible. They differ only in price and performance, which is the essence of the family idea.7. A 25×80 character monochrome text screen requires a 2000-byte buffer. 
The 1024 ×768 pixel 24-bit color bitmap requires 2,359,296 bytes. In 1980 these two options would have cost $10 and $11,520, respectively. For current prices, check on how much RAM currently costs, probably less than $1/MB.8. Choices (a), (c), and (d) should be restricted to kernel mode.9. Personal computer systems are always interactive, often with only a single user. Mainframe systems nearly always emphasize batch or timesharing with many users. Protection is much more of an issue on mainframe systems, as is efficient use of all resources.10. Every nanosecond one instruction emerges from the pipeline. This meansthe machine is executing 1 billion instructions per second. It does not matter at all how many stages the pipeline has. A 10-stage pipeline with 1 nsec per2 PROBLEM SOLUTIONS FOR CHAPTER 1stage would also execute 1 billion instructions per second. All that matters is how often a finished instructions pops out the end of the pipeline.11. The manuscript contains 80 × 50 × 700 = 2.8 million characters. This is, of course, impossible to fit into the registers of any currently available CPU and is too big for a 1-MB cache, but if such hardware were available, the manuscript could be scanned in 2.8 msec from the registers or 5.8 msec from the cache. There are approximately 2700 1024-byte blocks of data, so scanning from the disk would require about 27 seconds, and from tape 2 minutes 7 seconds. Of course, these times are just to read the data. Processing and rewriting the data would increase the time.12. Logically, it does not matter if the limit register uses a virtual address or a physical address. However, the performance of the former is better. If virtual addresses are used, the addition of the virtual address and the base register can start simultaneously with the comparison and then can run in parallel. If physical addresses are used, the comparison cannot start until the addition is complete, increasing the access time.13. Maybe. 
If the caller gets control back and immediately overwrites the data, when the write finally occurs, the wrong data will be written. However, if the driver first copies the data to a private buffer before returning, then the caller can be allowed to continue immediately. Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused, but this is tricky and error prone.14. A trap is caused by the program and is synchronous with it. If the program is run again and again, the trap will always occur at exactly the same position in the instruction stream. An interrupt is caused by an external event and its timing is not reproducible.15. Base = 40,000 and limit = 10,000. An answer of limit = 50,000 is incorrect for the way the system was described in this book. It could have been implemented that way, but doing so would have required waiting until the address + base calculation was completed before starting the limit check, thus slowing down the computer.16. The process table is needed to store the state of a process that is currently suspended, either ready or blocked. It is not needed in a single process system because the single process is never suspended.17. Mounting a file system makes any files already in the mount point directory inaccessible, so mount points are normally empty. However, a system administrator might want to copy some of the most important files normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being checked or repaired.PROBLEM SOLUTIONS FOR CHAPTER 1 318. Fork can fail if there are no free slots left in the process table (and possibly if there is no memory or swap space left). Exec can fail if the file name given does not exist or is not a valid executable file. Unlink can fail if the file to be unlinked does not exist or the calling process does not have the authority to unlink it. 19. 
If the call fails, for example because fd is incorrect, it can return −1. It can also fail because the disk is full and it is not possible to write the number of bytes requested. On a correct termination, it always returns nbytes.20. It contains the bytes: 1, 5, 9, 2.21. Block special files consist of numbered blocks, each of which can be read or written independently of all the other ones. It is possible to seek to any block and start reading or writing. This is not possible with character special files. 22. System calls do not really have names, other than in a documentation sense. When the library procedure read traps to the kernel, it puts the number of the system call in a register or on the stack. This number is used to index into a table. There is really no name used anywhere. On the other hand, the name of the library procedure is very important, since that is what appears in the program.23. Yes it can, especially if the kernel is a message-passing system.24. As far as program logic is concerned it does not matter whether a call to a library procedure results in a system call. But if performance is an issue, if a task can be accomplished without a system call the program will run faster. Every system call involves overhead time in switching from the user context to the kernel context. Furthermore, on a multiuser system the operating system may schedule another process to run when a system call completes, further slowing the progress in real time of a calling process.25. Several UNIX calls have no counterpart in the Win32 API:Link: a Win32 program cannot refer to a file by an alternate name or see it in more than one directory. 
Also, attempting to create a link is a convenient way to test for and create a lock on a file.Mount and umount: a Windows program cannot make assumptions about standard path names because on systems with multiple disk drives the drive name part of the path may be different.Chmod: Windows programmers have to assume that every user can access every file.Kill: Windows programmers cannot kill a misbehaving program that is not cooperating.4 PROBLEM SOLUTIONS FOR CHAPTER 126. The conversions are straightforward:(a) A micro year is 10−6 × 365× 24× 3600= 31.536 sec. (b) 1000 meters or 1 km.(c) There are 240 bytes, which is 1,099,511,627,776 bytes. (d) It is 6 × 1024 kg. SOLUTIONS TO CHAPTER 2 PROBLEMS1. The transition from blocked to running is conceivable. Suppose that a process is blocked on I/O and the I/O finishes. If the CPU is otherwise idle, the process could go directly from blocked to running. The other missing transition,from ready to blocked, is impossible. A ready process cannot do I/O or anything else that might block it. Only a running process can block.2. You could have a register containing a pointer to the current process table entry. When I/O completed, the CPU would store the current machine state in the current process table entry. Then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process table entry (the service procedure). This process would then be started up.3. Generally, high-level languages do not allow one the kind of access to CPU hardware that is required. For instance, an interrupt handler may be required to enable and disable the interrupt servicing a particular device, or to manipulate data within a process’stack area. Also, interrupt service routines must execute as rapidly as possible.4. There are several reasons for using a separate stack for the kernel. Two of them are as follows. 
First, you do not want the operating system to crash because a poorly written user program does not allow for enough stack space. Second, if the kernel leaves stack data in a user program’ s memory space upon return from a system call, a malicious user might be able to use this data to find out information about other processes.5. It would be difficult, if not impossible, to keep the file system consistent. Suppose that a client process sends a request to server process 1 to update a file. This process updates the cache entry in its memory. Shortly thereafter, another client process sends a request to server 2 to read that file. Unfortunately, if the file is also cached there, server 2, in its innocence, will return obsolete data. If the first process writes the file through to the disk after caching it, and server 2 checks the disk on every read to see if its cached copy is up-to-date, the system can be made to work, but it is precisely all these disk accesses that the caching system is trying to avoid.PROBLEM SOLUTIONS FOR CHAPTER 2 56. When a thread is stopped, it has values in the registers. They must be saved, just as when the process is stopped the registers must be saved. Timesharing threads is no different than timesharing processes, so each thread needs its own register save area.7. No. If a single-threaded process is blocked on the keyboard, it cannot fork.8. A worker thread will block when it has to read a Web page from the disk. If user-level threads are being used, this action will block the entire process, destroying the value of multithreading. Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.9. Threads in a process cooperate. They are not hostile to one another. If yielding is needed for the good of the application, then a thread will yield. After all, it is usually the same programmer who writes the code for all of them.10. 
User-level threads cannot be preempted by the clock uless the whole process’ quantum has been used up. Kernel-level threads can be preempted individually. In the latter case, if a thread runs too long, the clock will interrupt the current process and thus the current thread. The kernel is free to pick adifferent thread from the same process to run next if it so desires.11. In the single-threaded case, the cache hits take 15 msec and cache misses take 90 msec. The weighted average is 2/3×15+ 1/3 ×90. Thus the mean request takes 40 msec and the server can do 25 per second. For a multithreaded server, all the waiting for the disk is overlapped, so every request takes 15 msec, and the server can handle 66 2/3 requests per second.12. Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It just adds unnecessary complexity. As an example, consider a telephone directory assistance number (like 555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, th e entire database takes 64 megabytes, and can easily be kept in the server’ s memory to provide fast lookup.13. The pointers are really necessary because the size of the global variable is unknown. It could be anything from a character to an array of floating-point numbers. If the value were stored, one would have to give the size to create3global, which is all right, but what type should the second parameter of set3global be, and what type should the value of read3global be?14. It could happen that the runtime system is precisely at the point of blocking or unblocking a thread, and is busy manipulating the scheduling queues. This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching, since they might be in an inconsistent state. One solution is to set a flag when the runtime system is entered. 
The clock handler would see this and set its own flag,6 PROBLEM SOLUTIONS FOR CHAPTER 2then return. When the runtime system finished, it would check the clock flag, see that a clock interrupt occurred, and now run the clock handler.15. Yes it is possible, but inefficient. A thread wanting to do a system call first sets an alarm timer, then does the call. If the call blocks, the timer returns control to the threads package. Of course, most of the time the call will not block, and the timer has to be cleared. Thus each system call that might block has to be executed as three system calls. If timers go off prematurely, all kinds of problems can develop. This is not an attractive way to build a threads package.16. The priority inversion problem occurs when a low-priority process is in its critical region and suddenly a high-priority process becomes ready and is scheduled. If it uses busy waiting, it will run forever. With user-level threads, it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread run. There is no preemption. With kernel-level threads this problem can arise.17. Each thread calls procedures on its own, so it must have its own stack for the local variables, return addresses, and so on. This is equally true for user-level threads as for kernel-level threads.18. A race condition is a situation in which two (or more) processes are about to perform some action. Depending on the exact timing, one or the other goesfirst. If one of the processes goes first, everything works, but if another one goes first, a fatal error occurs.19. Yes. The simulated computer could be multiprogrammed. For example, while process A is running, it reads out some shared variable. Then a simulated clock tick happens and process B runs. It also reads out the same variable. Then it adds 1 to the variable. When process A runs, if it also adds one to the variable, we have a race condition.20. 
Yes, it still works, but it still is busy waiting, of course.21. It certainly works with preemptive scheduling. In fact, it was designed for that case. When scheduling is nonpreemptive, it might fail. Consider the case in which turn is initially 0 but process 1 runs first. It will just loop forever and never release the CPU.22. Yes it can. The memory word is used as a flag, with 0 meaning that no one is using the critical variables and 1 meaning that someone is using them. Put a 1 in the register, and swap the memory word and the register. If the register contains a 0 after the swap, access has been granted. If it contains a 1, access has been denied. When a process is done, it stores a 0 in the flag in memory. PROBLEM SOLUTIONS FOR CHAPTER 2 723. To do a semaphore operation, the operating system first disables interrupts. Then it reads the value of the semaphore. If it is doing a down and the semaphore is equal to zero, it puts the calling process on a list of blocked processes associated with the semaphore. If it is doing an up, it must check to see if any processes are blocked on the semaphore. If one or more processes are blocked, one of then is removed from the list of blocked processes and made runnable. When all these operations have been completed, interrupts can be enabled again.24. Associated with each counting semaphore are two binary semaphores, M, used for mutual exclusion, and B, used for blocking. Also associated with each counting semaphore is a counter that holds the number of up s minus the number of down s, and a list of processes blocked on that semaphore. To implement down, a process first gains exclusive access to the semaphores, counter, and list by doing a down on M. It then decrements the counter. If it is zero or more, it just does an up on M and exits. If M is negative, the process is put on the list of blocked processes. Then an up is done on M and a down is done on B to block the process. 
To implement up, first M is down ed to get mutual exclusion, and then the counter is incremented. If it is more than zero, no one was blocked, so all that needs to be done is to up M. If, however, the counter is now negative or zero, some process must be removed from the list. Finally, an up is done on B and M in that order.25. If the program operates in phases and neither process may enter the next phase until both are finished with the current phase, it makes perfect sense to use a barrier.26. With round-robin scheduling it works. Sooner or later L will run, and eventually it will leave its critical region. The point is, with priority scheduling, Lnever gets to run at all; with round robin, it gets a normal time slice periodically, so it has the chance to leave its critical region.27. With kernel threads, a thread can block on a semaphore and the kernel can run some other thread in the same process. Consequently, there is no problem using semaphores. With user-level threads, when one thread blocks on a semaphore, the kernel thinks the entire process is blocked and does not run it ever again. Consequently, the process fails.28. It is very expensive to implement. Each time any variable that appears in a predicate on which some process is waiting changes, the runtime system must re-evaluate the predicate to see if the process can be unblocked. With the Hoare and Brinch Hansen monitors, processes can only be awakened ona signal primitive.8 PROBLEM SOLUTIONS FOR CHAPTER 229. The employees communicate by passing messages: orders, food, and bags in this case. In UNIX terms, the four processes are connected by pipes. 30. It does not lead to race conditions (nothing is ever lost), but it is effectively busy waiting.31. If a philosopher blocks, neighbors can later see that he is hungry by checking his state, in test, so he can be awakened when the forks are available.32. 
The change would mean that after a philosopher stopped eating, neither of his neighbors could be chosen next. In fact, they would never be chosen. Suppose that philosopher 2 finished eating. He would run test for philosophers 1 and 3, and neither would be started, even though both were hungry and both forks were available. Similary, if philosopher 4 finished eating, philosopher 3 would not be started. Nothing would start him.33. Variation 1: readers have priority. No writer may start when a reader is active. When a new reader appears, it may start immediately unless a writer is currently active. When a writer finishes, if readers are waiting, they are all started, regardless of the presence of waiting writers. Variation 2: Writers have priority. No reader may start when a writer is waiting. When the last active process finishes, a writer is started, if there is one; otherwise, all the readers (if any) are started. Variation 3: symmetric version. When a reader is active, new readers may start immediately. When a writer finishes, a new writer has priority, if one is waiting. In other words, once we have started reading, we keep reading until there are no readers left. Similarly, once we have started writing, all pending writers are allowed to run.34. It will need nT sec.35. If a process occurs multiple times in the list, it will get multiple quanta per cycle. This approach could be used to give more important processes a larger share of the CPU. But when the process blocks, all entries had better be removed from the list of runnable processes.36. In simple cases it may be possible to determine whether I/O will be limiting by looking at source code. For instance a program that reads all its input filesinto buffers at the start will probably not be I/O bound, but a problem that reads and writes incrementally to a number of different files (such as a compiler) is likely to be I/O bound. 
If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU time used by a program, you can compare this with the total time to complete execution of the program. This is, of course, most meaningful on a system where you are the only user.

37. For multiple processes in a pipeline, the common parent could pass to the operating system information about the flow of data. With this information the OS could, for instance, determine which process could supply output to a process blocking on a call for input.

38. The CPU efficiency is the useful CPU time divided by the total CPU time. When Q ≥ T, the basic cycle is for the process to run for T and undergo a process switch for S. Thus (a) and (b) have an efficiency of T/(S + T). When the quantum is shorter than T, each run of T will require T/Q process switches, wasting a time ST/Q. The efficiency here is then T/(T + ST/Q), which reduces to Q/(Q + S), which is the answer to (c). For (d), we just substitute Q for S and find that the efficiency is 50 percent. Finally, for (e), as Q → 0 the efficiency goes to 0.

39. Shortest job first is the way to minimize average response time.
0 < X ≤ 3: X, 3, 5, 6, 9.
3 < X ≤ 5: 3, X, 5, 6, 9.
5 < X ≤ 6: 3, 5, X, 6, 9.
6 < X ≤ 9: 3, 5, 6, X, 9.
X > 9: 3, 5, 6, 9, X.

40. For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. For priority scheduling, B is run first. After 6 minutes it is finished. The other jobs finish at 14, 24, 26, and 30, for an average of 18.8 minutes.
If the jobs run in the order A through E, they finish at 10, 16, 18, 22, and 30, for an average of 19.2 minutes. Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of 14 minutes.

41. The first time it gets 1 quantum. On succeeding runs it gets 2, 4, 8, and 15, so it must be swapped in 5 times.

42. A check could be made to see if the program was expecting input and did anything with it. A program that was not expecting input and did not process it would not get any special priority boost.

43. The sequence of predictions is 40, 30, 35, and now 25.

44. The fraction of the CPU used is 35/50 + 20/100 + 10/200 + x/250. To be schedulable, this must be less than 1. Thus x must be less than 12.5 msec.

45. Two-level scheduling is needed when memory is too small to hold all the ready processes. Some set of them is put into memory, and a choice is made from that set. From time to time, the set of in-core processes is adjusted. This algorithm is easy to implement and reasonably efficient, certainly a lot better than, say, round robin without regard to whether a process was in memory or not.

46. The kernel could schedule processes by any means it wishes, but within each process it runs threads strictly in priority order. By letting the user process set the priority of its own threads, the user controls the policy but the kernel handles the mechanism.

47. A possible shell script might be

if [ ! -f numbers ]; then echo 0 > numbers; fi
count=0
while (test $count != 200 )
do
        count=`expr $count + 1`
        n=`tail -1 numbers`
        expr $n + 1 >>numbers
done

Run the script twice simultaneously, by starting it once in the background (using &) and again in the foreground. Then examine the file numbers. It will probably start out looking like an orderly list of numbers, but at some point it will lose its orderliness, due to the race condition created by running two copies of the script.
The race can be avoided by having each copy of the script test for and set a lock on the file before entering the critical area, and unlocking it upon leaving the critical area. This can be done like this:

if ln numbers numbers.lock
then
        n=`tail -1 numbers`
        expr $n + 1 >>numbers
        rm numbers.lock
fi

This version will just skip a turn when the file is inaccessible; variant solutions could put the process to sleep, do busy waiting, or count only loops in which the operation is successful.

SOLUTIONS TO CHAPTER 3 PROBLEMS

1. In the U.S., consider a presidential election in which three or more candidates are trying for the nomination of some party. After all the primary elections are finished, when the delegates arrive at the party convention, it could happen that no candidate has a majority and that no delegate is willing to change his or her vote. This is a deadlock. Each candidate has some resources (votes) but needs more to get the job done. In countries with multiple political parties in the parliament, it could happen that each party supports a different version of the annual budget and that it is impossible to assemble a majority to pass the budget. This is also a deadlock.

2. If the printer starts to print a file before the entire file has been received (this is often allowed to speed response), the disk may fill with other requests that can't be printed until the first file is done, but which use up disk space needed to receive the file currently being printed. If the spooler does not start to print a file until the entire file has been spooled it can reject a request that is too big. Starting to print a file is equivalent to reserving the printer; if the reservation is deferred until it is known that the entire file can be received, a deadlock of the entire system can be avoided. The user with the file that won't fit is still deadlocked of course, and must go to another facility that permits printing bigger files.

3.
The printer is nonpreemptable; the system cannot start printing another job until the previous one is complete. The spool disk is preemptable; you can delete an incomplete file that is growing too large and have the user send it later, assuming the protocol allows that.

4. Yes. It does not make any difference whatsoever.

5. Yes, illegal graphs exist. We stated that a resource may only be held by a single process. An arc from a resource square to a process circle indicates that the process owns the resource. Thus a square with arcs going from it to two or more processes means that all those processes hold the resource, which violates the rules. Consequently, any graph in which multiple arcs leave a square and end in different circles violates the rules. Arcs from squares to squares or from circles to circles also violate the rules.

6. A portion of all such resources could be reserved for use only by processes owned by the administrator, so he or she could always run a shell and programs needed to evaluate a deadlock and make decisions about which processes to kill to make the system usable again.

7. Neither change leads to deadlock. There is no circular wait in either case.

8. Voluntary relinquishment of a resource is most similar to recovery through preemption. The essential difference is that computer processes are not expected to solve such problems on their own. Preemption is analogous to the operator or the operating system acting as a policeman, overriding the normal rules individual processes obey.

9. The process is asking for more resources than the system has. There is no conceivable way it can get these resources, so it can never finish, even if no other processes want any resources at all.

10. If the system had two or more CPUs, two or more processes could run in parallel, leading to diagonal trajectories.

11. Yes. Do the whole thing in three dimensions.
The z-axis measures the number of instructions executed by the third process.

12. The method can only be used to guide the scheduling if the exact instant at which a resource is going to be claimed is known in advance. In practice, this is rarely the case.

13. A request from D is unsafe, but one from C is safe.

14. There are states that are neither safe nor deadlocked, but which lead to deadlocked states. As an example, suppose we have four resources: tapes, plotters, scanners, and CD-ROMs, as in the text, and three processes competing for them. We could have the following situation:

Has   Needs   Available
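The safe/unsafe distinction in problems 13 and 14 can be checked mechanically with a banker's-algorithm safety test. The resource table from problem 14 did not survive in this copy, so the numbers below are hypothetical single-resource values, chosen only to show one safe state and one unsafe state; they are not the book's data.

```python
def is_safe(available, allocation, need):
    """Banker's-algorithm safety check: return True if some ordering lets
    every process acquire its remaining need and run to completion."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can finish; it then releases everything it holds.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)

# Hypothetical example with one resource type and 2 units free:
print(is_safe([2], [[3], [1]], [[2], [1]]))  # True: both needs fit in turn
print(is_safe([1], [[3], [1]], [[2], [2]]))  # False: neither need fits
```

The second call illustrates an unsafe state in the sense of problem 14: no process is blocked yet, but no completion order can be guaranteed.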
计算机操作系统第二版答案(郁红英)
信号量、消息队列指针等。
为了设备管理,进程控制块的内容应该包括进程占有资源的情况。
5.假设系统就绪队列中有10个进程,这10个进程轮换执行,每隔300ms轮换一次,CPU在进程切换时所花费的时间是10ms,试问系统花在进程切换上的开销占系统整个时间的比例是多少?答:因为每隔300ms换一次进程,且每次进程切换所花费的时间是10ms,则系统花在进程切换上的开销占系统整个时间的比例是10/(300+10)≈3.2%。6.试述线程的特点及其与进程之间的关系。
答:(1)特点:线程之间的通信要比进程之间的通信方便得多;同一进程内的线程切换也因为线程的轻装而方便得多。
同时,线程也是被独立调度和分派的基本单位。(2)线程与进程的关系:线程和进程是两个密切相关的概念,一个进程至少拥有一个线程,进程根据需要可以创建若干个线程。
线程自己基本上不拥有资源,只拥有少量必不可少的资源(线程控制块和堆栈)。7.根据图2-18,回答以下问题。
(1)进程发生状态变迁1、3、4、6、7的原因。
答:1表示操作系统把处于创建状态的进程移入就绪队列;3表示进程请求I/O或等待某事件;4表示进程运行的时间片用完;6表示I/O完成或事件完成;7表示进程完成。
(2)系统中常常由于某一进程的状态变迁引起另一进程也产生状态变迁,这种变迁称为因果变迁。
下述变迁是否为因果变迁:3→2,4→5,7→2,3→6,试说明原因。
答:3→2是因果变迁,当一个进程从运行态变为阻塞态时,此时CPU空闲,系统首先到高优先级队列中选择一个进程。
4→5是因果变迁,当一个进程的时间片用完而回到就绪队列时,CPU空闲,系统首先到高优先级队列中选择进程,但如果高优先级队列为空,则从低优先级队列中选择一个进程。
7→2 是因果变迁,当一个进程运行完毕时,CPU空闲,系统首先到高优先级队列中选择一个进程。
3→6不是因果变迁。
一个进程的阻塞是由于自身的原因而发生的,和另一个进程所等待的事件是否到达没有因果关系。
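前面第5题的切换开销计算可以概括成一个小函数,下面是一段 Python 草稿(数值取自题目,仅作验算):

```python
def switch_overhead(run_ms: float, switch_ms: float) -> float:
    """每运行 run_ms 毫秒切换一次进程、每次切换耗时 switch_ms 毫秒时,
    进程切换开销占系统总时间的比例。"""
    return switch_ms / (run_ms + switch_ms)

ratio = switch_overhead(300, 10)
print(f"{ratio:.1%}")  # 3.2%
```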
习题二1.操作系统中为什么要引入进程的概念?为了实现并发进程之间的合作和协调,以及保证系统的安全,操作系统在进程管理方面要做哪些工作?答:(1)为了从变化的角度动态地分析研究可以并发执行的程序,真实地反映系统的独立性、并发性、动态性和相互制约性,操作系统中就不得不引入“进程”的概念;(2)为了防止操作系统及其关键的数据结构受到用户程序有意或无意的破坏,通常将处理机的执行状态分成核心态和用户态;对系统中的全部进程实行有效的管理,其主要表现是对一个进程进行创建、撤销以及在某些进程状态之间的转换控制。2.试描述当前正在运行的进程状态改变时,操作系统进行进程切换的步骤。
答:(1)就绪状态→运行状态。
处于就绪状态的进程,具备了运行的条件,但由于未能获得处理机,故没有运行。
(2)运行状态→就绪状态。
正在运行的进程,由于规定的时间片用完而被暂停执行,该进程就会从运行状态转变为就绪状态。
(3)运行状态→阻塞状态。
处于运行状态的进程,除了因为时间片用完而暂停执行外,还有可能由于系统中其他因素的影响而不能继续执行下去。
3.现代操作系统一般都提供多任务的环境,试回答以下问题。
(1)为支持多进程的并发执行,系统必须建立哪些关于进程的数据结构?答:为支持进程的并发执行,系统必须建立“进程控制块(PCB)”,PCB的组织方式常用的是链接方式。
(2)为支持进程的状态变迁,系统至少应该提供哪些进程控制原语?答:进程的阻塞与唤醒原语和进程的挂起与激活原语。
(3)当进程的状态变迁时,相应的数据结构发生变化吗?答:会发生变化。创建原语:建立进程的PCB,并将进程投入就绪队列。
撤销原语:删除进程的PCB,并将进程从其所在队列中摘除;阻塞原语:将进程PCB中进程的状态从运行状态改为阻塞状态,并将进程投入阻塞队列;唤醒原语:将进程PCB中进程的状态从阻塞状态改为就绪状态,并将进程从阻塞队列摘下,投入就绪队列中。
4.什么是进程控制块?从进程管理、中断处理、进程通信、文件管理、设备管理及存储管理的角度设计进程控制块应该包含的内容。
第1章一、填空1.计算机由硬件系统和软件系统两个部分组成,它们构成了一个完整的计算机系统。
2.按功能划分,软件可分为系统软件和应用软件两种。
3.操作系统是在裸机上加载的第一层软件,是对计算机硬件系统功能的首次扩充。
4.操作系统的基本功能是处理机(包含作业)管理、存储管理、设备管理和文件管理。
5.在分时和批处理系统结合的操作系统中引入“前台”和“后台”作业的概念,其目的是改善系统功能,提高处理能力。
6.分时系统的主要特征为多路性、交互性、独立性和及时性。
7.实时系统与分时以及批处理系统的主要区别是高及时性和高可靠性。
8.若一个操作系统具有很强的交互性,可同时供多个用户使用,则是分时操作系统。
9.如果一个操作系统在用户提交作业后,不提供交互能力,只追求计算机资源的利用率、大吞吐量和作业流程的自动化,则属于批处理操作系统。
10.采用多道程序设计技术,能充分发挥CPU 和外部设备并行工作的能力。
二、选择1.操作系统是一种B 。
A.通用软件B.系统软件C.应用软件D.软件包2.操作系统是对C 进行管理的软件。
A系统软件B.系统硬件C.计算机资源D.应用程序3.操作系统中采用多道程序设计技术,以提高CPU和外部设备的A 。
A.利用率B.可靠性C.稳定性D.兼容性4.计算机系统中配置操作系统的目的是提高计算机的B 和方便用户使用。
A.速度B.利用率C.灵活性D.兼容性5.C 操作系统允许多个用户在其终端上同时交互地使用计算机。
A.批处理B.实时C.分时D.多道批处理6.如果分时系统的时间片一定,那么D ,响应时间越长。
A.用户数越少B.内存越少C.内存越多D.用户数越多三、问答1.什么是“多道程序设计”技术?它对操作系统的形成起到什么作用?答:所谓“多道程序设计”技术,即是通过软件的手段,允许在计算机内存中同时存放几道相互独立的作业程序,让它们对系统中的资源进行“共享”和“竞争”,以使系统中的各种资源尽可能地满负荷工作,从而提高整个计算机系统的使用效率。
基于这种考虑,计算机科学家开始把CPU、存储器、外部设备以及各种软件都视为计算机系统的“资源”,并逐步设计出一种软件来管理这些资源,不仅使它们能够得到合理地使用,而且还要高效地使用。
具有这种功能的软件就是“操作系统”。
所以,“多道程序设计”的出现,加快了操作系统的诞生。
第2章一、填空1.进程在执行过程中有3种基本状态,它们是运行态、就绪态和阻塞态。
2.系统中一个进程由程序、数据集合和进程控制块(PCB)三部分组成。
3.在多道程序设计系统中,进程是一个动态概念,程序是一个静态概念。
4.在一个单CPU系统中,若有5个用户进程。
假设当前系统为用户态,则处于就绪状态的用户进程最多有4 个,最少有0 个。
注意,题目里给出的是假设当前系统为用户态,这表明现在有一个进程处于运行状态,因此最多有4个进程处于就绪态。
也可能除一个在运行外,其他4个都处于阻塞。
这时,处于就绪的进程一个也没有。
5.总的来说,进程调度有两种方式,即不可剥夺方式和剥夺方式。
6.进程调度程序具体负责中央处理机(CPU)的分配。
7.为了使系统的各种资源得到均衡使用,进行作业调度时,应该注意CPU忙碌作业和I/O忙碌作业的搭配。
8.所谓系统调用,就是用户程序要调用操作系统提供的一些子功能。
9.作业被系统接纳后到运行完毕,一般还需要经历后备、运行和完成三个阶段。
10.假定一个系统中的所有作业同时到达,那么使作业平均周转时间为最小的作业调度算法是短作业优先调度算法。
二、选择1.在进程管理中,当C 时,进程从阻塞状态变为就绪状态。
A.进程被调度程序选中B.进程等待某一事件发生C.等待的事件出现D.时间片到2.在分时系统中,一个进程用完给它的时间片后,其状态变为A 。
A.就绪B.等待C.运行D.由用户设定3.下面对进程的描述中,错误的是D 。
A.进程是动态的概念B.进程的执行需要CPUC.进程具有生命周期D.进程是指令的集合4.操作系统通过B 对进程进行管理。
A.JCB B.PCB C.DCT D.FCB 5.一个进程被唤醒,意味着该进程D 。
A.重新占有CPU B.优先级变为最大C.移至等待队列之首D.变为就绪状态6.由各作业JCB形成的队列称为C 。
A.就绪作业队列B.阻塞作业队列C.后备作业队列D.运行作业队列7.既考虑作业等待时间,又考虑作业执行时间的作业调度算法是A 。
A.响应比高者优先B.短作业优先C.优先级调度D.先来先服务8.作业调度程序从处于D 状态的队列中选取适当的作业投入运行。
A.就绪B.提交C.等待D.后备9.A 是指从作业提交系统到作业完成的时间间隔。
A.周转时间B.响应时间C.等待时间D.运行时间10.计算机系统在执行C 时,会自动从目态变换到管态。
A.P操作B.V操作C.系统调用D.I/O指令三、问答7.作业调度与进程调度有什么区别?答:作业调度和进程调度(即CPU调度)都涉及到CPU的分配。
但作业调度只是选择参加CPU竞争的作业,它并不具体分配CPU。
而进程调度是在作业调度完成选择后的基础上,把CPU真正分配给某一个具体的进程使用。
3.某系统有三个作业,它们的到达时间和运行时间分别为:作业1为8.8和1.5,作业2为9.0和0.4,作业3为9.5和1.0。系统确定在它们全部到达后,开始采用响应比高者优先调度算法,并忽略系统调度时间。
试问对它们的调度顺序是什么?各自的周转时间是多少?解:三个作业是在9.5时全部到达的。
这时它们各自的响应比如下:作业1的响应比=(9.5 – 8.8)/ 1.5 = 0.46作业2的响应比=(9.5 – 9.0)/ 0.4 = 1.25作业3的响应比=(9.5 – 9.5)/ 1.0 = 0因此,最先应该调度作业2运行,因为它的响应比最高。
它运行了0.4后完成,这时的时间是9.9。
再计算作业1和3此时的响应比:作业1的响应比=(9.9 – 8.8)/ 1.5 = 0.73作业3的响应比=(9.9 – 9.5)/ 1.0 = 0.40因此,第二个应该调度作业1运行,因为它的响应比最高。
它运行了1.5后完成,这时的时间是11.4。
第三个调度的是作业3,它运行了1.0后完成,这时的时间是12.4。
整个实施过程如下。
作业的调度顺序是2→1→3。
各自的周转时间为:作业1为2.6;作业2为0.9;作业3为2.9。
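上述“响应比高者优先”的调度过程可以用下面的 Python 草稿模拟。作业数据取自题目;注意这里按题解的口径,用“已等待时间 / 运行时间”计算响应比(只影响数值,不影响排序):

```python
def hrrn_schedule(jobs):
    """jobs: {作业名: (到达时刻, 运行时间)}。全部到达后按响应比高者优先调度,
    返回调度顺序和各作业的周转时间(完成时刻 - 到达时刻,保留一位小数)。"""
    clock = max(arrive for arrive, _ in jobs.values())  # 全部到达的时刻
    pending = dict(jobs)
    order, turnaround = [], {}
    while pending:
        # 响应比 = 已等待时间 / 运行时间(与题解口径一致)
        name = max(pending, key=lambda n: (clock - pending[n][0]) / pending[n][1])
        arrive, run = pending.pop(name)
        clock += run
        order.append(name)
        turnaround[name] = round(clock - arrive, 1)
    return order, turnaround

order, tat = hrrn_schedule({"作业1": (8.8, 1.5), "作业2": (9.0, 0.4), "作业3": (9.5, 1.0)})
print(order)  # ['作业2', '作业1', '作业3']
print(tat)    # {'作业2': 0.9, '作业1': 2.6, '作业3': 2.9}
```

输出与题解一致:调度顺序为2→1→3,周转时间分别为0.9、2.6、2.9。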
第3章一、填空1.将作业相对地址空间的相对地址转换成内存中的绝对地址的过程称为地址重定位。
2.使用覆盖与对换技术的主要目的是提高内存的利用率。
3.存储管理中,对存储空间的浪费是以内部碎片和外部碎片两种形式表现出来的。
4.地址重定位可分为静态重定位和动态重定位两种。
5.在可变分区存储管理中采用最佳适应算法时,最好按尺寸从小到大的顺序来组织空闲分区链表。
6.在分页式存储管理的页表里,主要应该包含页号和块号两个信息。
7.静态重定位在程序装入时进行,动态重定位在程序执行时进行。
8.在分页式存储管理中,如果页面置换算法选择不当,则会使系统出现 抖动 现象。
9.在请求分页式存储管理中采用先进先出(FIFO )页面淘汰算法时,增加分配给作业的块数时, 缺页中断 的次数有可能会增加。
10.在请求分页式存储管理中,页面淘汰是由于 缺页 引起的。
二、选择1.虚拟存储器的最大容量是由 B 决定的。
A .内、外存容量之和B .计算机系统的地址结构C .作业的相对地址空间D .作业的绝对地址空间2.采用先进先出页面淘汰算法的系统中,一进程在内存占3块(开始为空),页面访问序列为1、2、3、4、1、2、5、1、2、3、4、5、6。
运行时会产生 D 次缺页中断。
A .7B .8C .9D .10从图3-8中的“缺页计数”栏里可以看出应该选择D 。
1 2 3 4 1 2 5 1 2 3 4 5 6页面走向→ 3个内存块→缺页计数→图3-8 选择题2配图 3.系统出现“抖动”现象的主要原因是由于 A 引起的。
A .置换算法选择不当B .交换的信息量太大C .内存容量不足D .采用页式存储管理策略4.实现虚拟存储器的目的是 D 。
A .进行存储保护B .允许程序浮动C .允许程序移动D .扩充主存容量5.作业在执行中发生了缺页中断,那么经中断处理后,应返回执行 B 指令。
A .被中断的前一条B .被中断的那条C .被中断的后一条D .程序第一条6.在实行分页式存储管理系统中,分页是由 D 完成的。
A .程序员B .用户C .操作员D .系统7.下面的 A 页面淘汰算法有时会产生异常现象。
A .先进先出B .最近最少使用C .最不经常使用D .最佳8.在一个分页式存储管理系统中,页表的内容为: 若页的大小为4KB ,则地址转换机构将相对地址0转换成的物理地址是 A 。
A .8192B .4096C .2048D .1024注意,相对地址0肯定是第0页的第0个字节。
查页表可知第0页存放在内存的第2块。
现在块的尺寸是4KB,因此第2块的起始地址为8192。
故相对地址0所对应的绝对地址(即物理地址)是8192。
9.下面所列的存储管理方案中,A 实行的不是动态重定位。
A.固定分区B.可变分区C.分页式D.请求分页式10.在下面所列的诸因素中,不对缺页中断次数产生影响的是C 。
A.内存分块的尺寸B.程序编制的质量C.作业等待的时间D.分配给作业的内存块数三、问答2.叙述静态重定位与动态重定位的区别。
答:静态重定位是一种通过软件来完成的地址重定位技术。
它在程序装入内存时,完成对程序指令中地址的调整。
因此,程序经过静态重定位以后,在内存中就不能移动了。
如果要移动,就必须重新进行地址重定位。
动态重定位是一种通过硬件支持完成的地址重定位技术。
作业程序被原封不动地装入内存。
只有到执行某条指令时,硬件地址转换机构才对它里面的地址进行转换。
正因为如此,实行动态重定位的系统,作业程序可以在内存里移动。
也就是说,作业程序在内存中是可浮动的。
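动态重定位“程序可浮动”的道理可以用一个基址寄存器的示意草稿说明(这里的 MMU 类和数值都是为演示而假设的,并非某个真实系统):

```python
class MMU:
    """动态重定位示意:程序按原样装入内存,执行每条指令时,
    由硬件把指令中的相对地址加上基址寄存器的值得到绝对地址。"""
    def __init__(self, base):
        self.base = base       # 基址寄存器:程序装入内存的起始地址

    def relocate(self, rel_addr):
        return self.base + rel_addr

mmu = MMU(base=20000)
print(mmu.relocate(500))   # 20500
mmu.base = 36000           # 程序在内存中移动后,只需修改基址寄存器的内容
print(mmu.relocate(500))   # 36500
```

程序移动后指令本身不必修改,这正是动态重定位下作业可浮动的原因;静态重定位则要把指令中的地址全部改写一遍。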
3.一个虚拟地址结构用24个二进制位表示。
其中12个二进制位表示页面尺寸。
试问这种虚拟地址空间总共多少页?每页的尺寸是多少?答:如下图所示,由于虚拟地址中是用12个二进制位表示页面尺寸(即页内位移),所以虚拟地址空间中表示页号的也是12个二进制位。
这样,这种虚拟地址空间总共有:2^12 = 4096(页);每页的尺寸是:2^12 = 4096 = 4K(字节)。
3.设某作业的页面走向为:1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6。若采用最近最久未用(LRU)页面淘汰算法,作业在得到2块和4块内存空间时,各会产生出多少次缺页中断?如果采用先进先出(FIFO)页面淘汰算法时,结果又如何?解:(1)采用最近最久未用(LRU)页面淘汰算法,作业在得到2块内存空间时所产生的缺页中断次数为18次,如图3-10(a)所示;在得到4块内存空间时所产生的缺页中断次数为10次,如图3-10(b)所示。
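题中LRU的缺页次数可以用下面的模拟器核对(页面走向取自题目,18次与10次分别对应2块和4块内存的情形):

```python
def lru_faults(refs, frames):
    """按最近最久未用(LRU)页面淘汰算法统计缺页中断次数(内存块初始为空)。"""
    memory = []               # 列表尾部是最近使用的页,头部是最久未用的页
    faults = 0
    for page in refs:
        if page in memory:
            memory.remove(page)   # 命中:把该页移到“最近使用”端
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)     # 淘汰最久未用的页
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
print(lru_faults(refs, 2), lru_faults(refs, 4))  # 18 10
```

运行结果与题解一致:2块时缺页18次,4块时缺页10次。FIFO的情形同理,只需把淘汰策略换成“先进先出”。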