操作系统精髓与设计原理课后重点习题整理
操作系统--精髓与设计原理(第八版)第四章复习题答案
4.1 表3.5列出了在一个没有线程的操作系统中进程控制块的基本元素。
对于多线程系统,这些元素中哪些可能属于线程控制块,哪些可能属于进程控制块?这对于不同的系统来说通常是不同的,但⼀般来说,进程是资源的所有者,⽽每个线程都有它⾃⼰的执⾏状态。
关于表3.5中的每⼀项的⼀些结论如下:进程控制信息:调度和状态信息主要处于线程级;数据结构在两级都可出现;进程间通信和线程间通信都可以得到⽀持;特权在两级都可以存在;存储管理通常在进程级;资源信息通常也在进程级;进程标识:进程必须被标识,⽽进程中的每⼀个线程也必须有⾃⼰的ID。
处理器状态信息:这些信息通常只与进程有关。
4.2 请列出线程间的模式切换⽐进程间的模式切换开销更低的原因。
包含的状态信息更少。
4.3 在进程概念中体现出的两个独⽴且⽆关的特点是什么?资源所有权: 进程包括存放进程映像的虚拟地址空间;回顾第3章的内容可知,进程映像是程序、数据、栈和进程控制块中定义的属性集。
进程总具有对资源的控制权或所有权,这些资源包括内存、I/O通道、I/O设备和⽂件等。
操作系统提供预防进程间发⽣不必要资源冲突的保护功能。
调度/执⾏:进程执⾏时采⽤⼀个或多程序(见图1.5)的执⾏路径(轨迹),不同进程的执⾏过程会交替进⾏。
因此,进程具有执⾏态(运⾏、就绪等)和分配给其的优先级,是可被操作系统调度和分派的实体。
4.4 给出在单⽤户多处理系统中使⽤线程的四个例⼦。
前台和后台操作异步处理加速执⾏模块化程序结构。
4.5 哪些资源通常被⼀个进程中的所有线程共享?进程中的所有线程共享该进程的状态和资源,例如地址空间,⽂件资源,执⾏特权等。
4.6 列出用户级线程优于内核级线程的三个优点。
1. 由于所有线程管理数据结构都在一个进程的用户地址空间中,线程切换不需要内核模式特权,进程不需要为了线程管理而切换到内核模式,这节省了在两种模式间进行切换(从用户模式到内核模式;从内核模式返回用户模式)的开销。2. 调度可以是面向应用程序的:一个应用可能最适合简单的轮转调度算法,另一个可能最适合基于优先级的调度算法,调度算法可以针对应用进行裁剪,而不会干扰底层操作系统的调度程序。3. 用户级线程可以在任何操作系统上运行,不需要修改底层内核来支持,线程库是被所有应用程序共享的一组应用级实用程序。
操作系统课后重点习题整理
第一章
1.17 Define the essential properties of the following types of operating systems: 列出下列操作系统的基本特点:
a. Batch 批处理  b. Interactive 交互式  c. Time sharing 分时  d. Real time 实时  e. Network 网络  f. Distributed 分布式  g. 并行式  h. 集群式  i. 手持式
Answer:作业ch1-第四题(第六版答案)
a. Batch:相似需求的Job分批、成组地在计算机上执行,Job由操作员或自动Job程序装置装载;可以通过采用 buffering, off-line operation, spooling, multiprogramming 等技术使CPU和I/O不停忙来提高性能。批处理适合于需要极少用户交互的Job。
b. Interactive:由许多短交易组成,下一次交易的结果可能不可预知;需要响应时间短。
c. Time sharing:使用CPU调度和多道程序提供对系统的经济交互式使用,CPU快速地在用户之间切换;一般从终端读取控制,输出立即打印到屏幕。
d. Real time:在专门系统中使用,从传感器读取信息,必须在规定时间内作出响应以确保正确的执行。
e. Network:在通用OS上添加联网、通信功能,提供远程过程调用、文件共享。
f. Distributed:具有联网、通信功能,提供远程过程调用,提供多处理机的统一调度、统一的存储管理、分布式文件系统。
第二章
第六版2.3 What are the differences between a trap and an interrupt? What is the use of each function?
答:作业ch2-第二题(第六版答案)An interrupt是硬件产生的系统内的流的改变;A trap是软件产生的“中断”。
interrupt可以被I/O用来产生完成的信号,从而避免CPU对设备的轮询;A trap可以用来调用OS的例程或者捕获算术错误。
第七版2.3 讨论向操作系统传递参数的三个主要的方法。
操作系统--精髓与设计原理(第五版)第7章课后习题答案
7.1.如果使用动态分区方案,下图所示为在某个给定的时间点的内存配置:阴影部分为已经被分配的块;空白部分为空闲块。
接下来的三个内存需求分别为40MB,20MB和10MB。
分别使用如下几种放置算法,指出给这三个需求分配的块的起始地址。
a.首次适配 b.最佳适配 c.临近适配(假设最近添加的块位于内存的开始) d.最坏适配
答:a. 40M的块放入第2个洞中,起始地址是80M;20M的块放入第1个洞中,起始地址是20M;10M的块的起始地址是120M。
b. 40M、20M、10M的块的起始地址分别为230M、20M和160M。
c. 40M、20M、10M的块的起始地址分别为80M、120M和160M。
d. 40M、20M、10M的块的起始地址分别为80M、230M和360M。
7.2. 使用伙伴系统分配一个1MB的存储块。
a.利用类似于图7.6的图来说明按下列顺序请求和返回的结果:请求70;请求35;请求80;返回A;请求60;返回B;返回D;返回C。
b.给出返回B之后的二叉树表示。
答:a、b 两问的图示略(原图缺失)。
7.3. 考虑一个伙伴系统,在当前分配下的一个特定块地址为011011110000。
a. 如果块大小为4,它的伙伴的二进制地址为多少? b. 如果块大小为16,它的伙伴的二进制地址为多少?
答:a. 011011110100  b. 011011100000
7.4. 令buddy_k(x)为大小为2^k、地址为x的块的伙伴的地址,写出buddy_k(x)的通用表达式。
答:当 x mod 2^(k+1) = 0 时,buddy_k(x) = x + 2^k;当 x mod 2^(k+1) = 2^k 时,buddy_k(x) = x - 2^k。等价地,buddy_k(x) = x XOR 2^k(示例计算见7.5题答案后的代码)。
7.5. Fibonacci序列定义如下:F0=0,F1=1,F(n+2)=F(n+1)+F(n),n≥0。
a. 这个序列可以用于建立伙伴系统吗? b. 该伙伴系统与本章介绍的二叉伙伴系统相比,有什么优点?
答:a. 是。块的大小满足 F(n) = F(n-1) + F(n-2),可以据此进行分裂与合并。
b.这种策略能够比二叉伙伴系统提供更多不同大小的块,因而具有减少内部碎片的可能性。
但由于创建了许多没用的小块,会造成更多的外部碎片。
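针对7.3与7.4,可以用一小段Python验证伙伴地址的计算(仅为示意性草图,函数名为自拟):

```python
def buddy(x: int, k: int) -> int:
    """返回地址为 x、大小为 2^k 的块的伙伴地址:buddy_k(x) = x XOR 2^k。"""
    return x ^ (1 << k)

# 用7.3的数据验证:当前块地址为二进制 011011110000
addr = 0b011011110000
print(format(buddy(addr, 2), "012b"))  # 块大小 4 = 2^2,输出 011011110100
print(format(buddy(addr, 4), "012b"))  # 块大小 16 = 2^4,输出 011011100000
```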
7.6.在程序执行期间,每次取指令后处理器把指令寄存器的内容(程序计数器)增加一个字,但如果遇到会导致在程序中其他地址继续执行的转跳或调用指令,处理器将修改这个寄存器的内容。
[操作系统][操作系统-精髓与设计原理(第五版)]复习资料
第一章
计算机系统中状态寄存器和控制寄存器有哪些?PC和IR寄存器主要存放什么?
答:1、程序计数器 Program Counter (PC) 2、指令寄存器 Instruction Register (IR) 3、程序状态字 Program Status Word (PSW) 4、条件码 Condition Codes or Flags。PC寄存器主要存放的是下一条要执行的指令的地址;IR寄存器主要存放当前要执行的指令。
计算机中中断的处理过程是怎样的?
答:(1)处理器在响应中断前结束当前指令的执行;(2)处理器对中断进行测定,确定存在未响应的中断,并给提交中断的设备发送确认信号,确认信号允许该设备取消它的中断信号;(3)处理器需要为把控制权转移到中断处理程序中去做准备:首先,需要保存从中断点恢复当前程序所需要的信息,要求的最少信息包括程序状态字和保存在程序计数器中的下一条要执行的指令的地址,它们被压入系统控制栈中;(4)处理器把响应此中断的中断处理程序入口地址装入程序计数器中;(5)在这一点,与被中断程序相关的程序计数器和PSW已被保存到系统栈中;(6)中断处理程序现在可以开始处理中断,其中包括检查与I/O操作相关的状态信息或其他引起中断的事件,还可能包括给I/O设备发送附加命令或应答;(7)当中断处理结束后,被保存的寄存器值从栈中释放并恢复到寄存器中;(8)最后的操作是从栈中恢复PSW和程序计数器的值,其结果是下一条要执行的指令来自前面被中断的程序。
程序IO、中断IO、DMA中,DMA的优点是什么?原因是什么?
优点:实现高速外设和主存储器之间自动成批交换数据,尽量减少CPU干预。直接存储器访问技术DMA通过系统总线中的一个独立控制单元(DMA控制器),自动地控制成块数据在内存和I/O单元之间的传送。当处理器需要读写一整块数据的时候,它给DMA控制单元发送一条命令。处理器发送完命令后就可以处理其他的事情了,DMA控制器将自动管理数据的传送,当这个过程完成后,它会给处理器发一个中断,这样处理器只在开始传送和传送结束时关注一下就可以了,这大大提高了处理I/O的效能。
第二章
操作系统的类型有哪些?什么是分时操作系统?
答:类型:A.批处理操作系统 B.分时操作系统 C.实时操作系统 D.网络操作系统 E.分布式操作系统。
操作系统精髓与设计原理第五版习题与答案
第1章计算机系统概述1.1 列出并简要地定义计算机的四个主要组成部分。
主存储器,存储数据和程序;算术逻辑单元,能处理二进制数据;控制单元,解读存储器中的指令并且使他们得到执行;输入/输出设备,由控制单元管理。
1.2 定义处理器寄存器的两种主要类别。
用户可见寄存器:优先使用这些寄存器,可以使机器语言或者汇编语言的程序员减少对主存储器的访问次数。
对高级语言而言,由优化编译器负责决定把哪些变量应该分配给主存储器。
一些高级语言,如C语言,允许程序言建议编译器把哪些变量保存在寄存器中。
控制和状态寄存器:用以控制处理器的操作,且主要被具有特权的操作系统例程使用,以控制程序的执行。
1.3 一般而言,一条机器指令能指定的四种不同操作是什么?处理器-寄存器:数据可以从处理器传送到存储器,或者从存储器传送到处理器。
处理器-I/O:通过处理器和I/O模块间的数据传送,数据可以输出到外部设备,或者从外部设备输入数据。
数据处理:处理器可以执行很多关于数据的算术操作或逻辑操作。
控制:某些指令可以改变执行顺序。
1.4 什么是中断?中断:其他模块(I/O,存储器)中断处理器正常处理过程的机制。
1.5 多中断的处理方式是什么?处理多中断有两种方法。
第一种方法是当正在处理一个中断时,禁止再发生中断。
第二种方法是定义中断优先级,允许高优先级的中断打断低优先级的中断处理器的运行。
1.6 内存层次的各个元素间的特征是什么?存储器的三个重要特性是:价格、容量和访问时间。
1.7 什么是高速缓冲存储器?高速缓冲存储器是比主存小而快的存储器,用以协调主存跟处理器,作为最近储存地址的缓冲区。
1.8 列出并简要地定义I/O操作的三种技术。
可编程I/O:当处理器正在执行程序并遇到与I/O相关的指令时,它给相应的I/O模块发布命令(用以执行这个指令);在进一步的动作之前,处理器处于繁忙的等待中,直到该操作已经完成。
中断驱动I/O:当处理器正在执行程序并遇到与I/O相关的指令时,它给相应的I/O模块发布命令,并继续执行后续指令,直到后者完成,它将被I/O模块中断。
直接存储器访问(DMA):DMA模块控制主存和I/O模块之间的数据交换。处理器向DMA模块发送传送一整块数据的请求,只有在整块数据传送结束后,处理器才会被中断。
操作系统精髓与设计原理第五版 课后题答案
操作系统精髓与设计原理第五版课后题答案C HAPTER 2O PERATING S YSTEMO VERVIEWReview Questions2.1 Convenience: An operating system makes a computer more convenientto use. Efficiency: An operating system allows the computer systemresources to be used in an efficient manner. Ability to evolve: Anoperating system should be constructed in such a way as to permit theeffective development, testing, and introduction of new systemfunctions without interfering with service.2.5 The execution context, or process state, is the internal data by which theoperating system is able to supervise and control the process. Thisinternal information is separated from the process, because theoperating system has information not permitted to the process. Thecontext includes all of the information that the operating system needsto manage the process and that the processor needs to execute theprocess properly. The context includes the contents of the variousprocessor registers, such as the program counter and data registers. Italso includes information of use to the operating system, such as thepriority of the process and whether the process is waiting for thecompletion of a particular I/O event.Problems2.1 The answers are the same for (a) and (b). Assume that althoughprocessor operations cannot overlap, I/O operations can.1 Job: TAT = NT Processor utilization = 50%2 Jobs: TAT = NT Processor utilization = 100%4 Jobs: TAT = (2N – 1)NT Processor utilization = 100% 2.4 A system call is used by an application program to invoke a functionprovided by the operating system. Typically, the system call results intransfer to a system program that runs in kernel mode.C HAPTER 3P ROCESS D ESCRIPTION ANDC ONTROLReview Questions3.5 Swapping involves moving part or all of a process from main memoryto disk. When none of the processes in main memory is in the Ready state, the operating system swaps one of the blocked processes out onto disk into a suspend queue, so that another process may be brought into main memory to execute.3.10 The user mode has restrictions on the instructions that can be executedand the memory areas that can be accessed. This is to protect theoperating system from damage or alteration. In kernel mode, theoperating system does not have these restrictions, so that it canperform its tasks.Problems3.1 •Creation and deletion of both user and system processes. Theprocesses in the system can execute concurrently for informationsharing, computation speedup, modularity, and convenience.Concurrent execution requires a mechanism for process creation and deletion. The required resources are given to the process when it iscreated, or allocated to it while it is running. When the processterminates, the OS needs to reclaim any reusable resources.•Suspension and resumpti on of processes. In process scheduling, theOS needs to change the process's state to waiting or ready state when it is waiting for some resources. When the required resources areavailable, OS needs to change its state to running state to resume itsexecution.•Provision of mechanism for process synchronization. Cooperatingprocesses may share data. Concurrent access to shared data mayresult in data inconsistency. OS has to provide mechanisms forprocesses synchronization to ensure the orderly execution ofcooperating processes, so that data consistency is maintained.•Provision of mechanism for process communication. The processesexecuting under the OS may be either independent processes orcooperating processes. 
Cooperating processes must have the meansto communicate with each other.•Provision of mechanisms for deadlock handling. In amultiprogramming environment, several processes may compete fora finite number of resources. If a deadlock occurs, all waitingprocesses will never change their waiting state to running state again, resources are wasted and jobs will never be completed.3.3Figure 9.3 shows the result for a single blocked queue. The figurereadily generalizes to multiple blocked queues.C HAPTER 4P ROCESS D ESCRIPTION ANDC ONTROLReview Questions4.2 Less state information is involved.4.5 Address space, file resources, execution privileges are examples.4.6 1. Thread switching does not require kernel mode privileges becauseall of the thread management data structures are within the useraddress space of a single process. Therefore, the process does notswitch to the kernel mode to do thread management. This saves theoverhead of two mode switches (user to kernel; kernel back to user). 2.Scheduling can be application specific. One application may benefit most from a simple round-robin scheduling algorithm, while another might benefit from a priority-based scheduling algorithm. Thescheduling algorithm can be tailored to the application withoutdisturbing the underlying OS scheduler. 3. ULTs can run on anyoperating system. No changes are required to the underlying kernel to support ULTs. The threads library is a set of application-level utilities shared by all applications.4.7 1. In a typical operating system, many system calls are blocking. Thus,when a ULT executes a system call, not only is that thread blocked, but also all of the threads within the process are blocked. 2. In a pure ULT strategy, a multithreaded application cannot take advantage ofmultiprocessing. A kernel assigns one process to only one processor ata time. Therefore, only a single thread within a process can execute at atime.Problems4.2Because, with ULTs, the thread structure of a process is not visible to theoperating system, which only schedules on the basis of processes.C HAPTER 5C ONCURRENCY:M UTUALE XCLUSION ANDS YNCHRONIZATIONReview Questions5.1 Communication among processes, sharing of and competing forresources, synchronization of the activities of multiple processes, and allocation of processor time to processes.5.9 A binary semaphore may only take on the values 0 and 1. A generalsemaphore may take on any integer value.Problems5.2 ABCDE; ABDCE; ABDEC; ADBCE; ADBEC; ADEBC;DEABC; DAEBC; DABEC; DABCE5.5Consider the case in which turn equals 0 and P(1) sets blocked[1] totrue and then finds blocked[0] set to false. P(0) will then setblocked[0] to true, find turn = 0, and enter its critical section. P(1) will then assign 1 to turn and will also enter its critical section.C HAPTER 6C ONCURRENCY:D EADLOCK ANDS TARVATIONReview Questions6.2 Mutual exclusion. Only one process may use a resource at a time. Holdand wait. A process may hold allocated resources while awaitingassignment of others. No preemption. No resource can be forciblyremoved from a process holding it.6.3 The above three conditions, plus: Circular wait. A closed chain ofprocesses exists, such that each process holds at least one resourceneeded by the next process in the chain.Problems6.4 a. 0 0 0 00 7 5 06 6 2 22 0 0 20 3 2 0b. to d. Running the banker's algorithm, we see processes can finishin the order p1, p4, p5, p2, p3.e. 
Change available to (2,0,0,0) and p3's row of "still needs" to (6,5,2,2).Now p1, p4, p5 can finish, but with available now (4,6,9,8) neitherp2 nor p3's "still needs" can be satisfied. So it is not safe to grantp3's request.6.5 1. W = (2 1 0 0)2. Mark P3; W = (2 1 0 0) + (0 1 2 0) = (2 2 2 0)3. Mark P2; W = (2 2 2 0) + (2 0 0 1) = (4 2 2 1)4. Mark P1; no deadlock detectedReview Questions7.1 Relocation, protection, sharing, logical organization, physicalorganization.7.7 A logical address is a reference to a memory location independent ofthe current assignment of data to memory; a translation must be made to a physical address before the memory access can be achieved. A relative address is a particular example of logical address, in which the address is expressed as a location relative to some known point, usually the beginning of the program. A physical address, or absolute address, is an actual location in main memory.Problems7.6 a. The 40 M block fits into the second hole, with a starting address of80M. The 20M block fits into the first hole, with a starting address of 20M. The 10M block is placed at location 120M.40M 40M 60M 40M 40M 40M 30Mb. The three starting addresses are 230M, 20M, and 160M, for the 40M, 20M, and 10M blocks, respectively. 40M 60M 60M 40M 40M 40M 30Mc. The three starting addresses are 80M, 120M, and 160M, for the 40M,20M, and 10M blocks, respectively. C HAPTER 7M EMORY M ANAGEMENT7.12 a. The number of bytes in the logical address space is (216 pages) (210bytes/page) = 226 bytes. Therefore, 26 bits are required for the logical address.b. A frame is the same size as a page, 210 bytes.c. The number of frames in main memory is (232 bytes of mainmemory)/(210 bytes/frame) = 222 frames. So 22 bits is needed tospecify the frame.d. There is one entry for each page in the logical address space.Therefore there are 216 entries.e. In addition to the valid/invalid bit, 22 bits are needed to specify theframe location in main memory, for a total of 23 bits.30M40M40M60M40M40M40Md. The three starting addresses are 80M, 230M, and 360M, for the 40M,20M, and 10M blocks, respectively.C HAPTER 8V IRTUAL M EMORYReview Questions8.1 Simple paging: all the pages of a process must be in main memory forprocess to run, unless overlays are used. Virtual memory paging: not all pages of a process need be in main memory frames for the process to run.; pages may be read in as needed8.2 A phenomenon in virtual memory schemes, in which the processorspends most of its time swapping pieces rather than executinginstructions.Problems8.1 a. Split binary address into virtual page number and offset; use VPNas index into page table; extract page frame number; concatenateoffset to get physical memory addressb. (i) 1052 = 1024 + 28 maps to VPN 1 in PFN 7, (7 ⨯ 1024+28 = 7196)(ii) 2221 = 2 ⨯ 1024 + 173 maps to VPN 2, page fault(iii) 5499 = 5 ⨯ 1024 + 379 maps to VPN 5 in PFN 0, (0 ⨯ 1024+379 =379)8.4 a. PFN 3 since loaded longest ago at time 20b. PFN 1 since referenced longest ago at time 160c. Clear R in PFN 3 (oldest loaded), clear R in PFN 2 (next oldestloaded), victim PFN is 0 since R=0d. Replace the page in PFN 3 since VPN 3 (in PFN 3) is used furthestin the futuree. There are 6 faults, indicated by **4 0 0 0 *2*4 2*1**3 2VPN of pages in memory in LRU order 32143243434242241241243122Review Questions9.1 Long-term scheduling: The decision to add to the pool of processes tobe executed. 
Medium-term scheduling: The decision to add to thenumber of processes that are partially or fully in main memory.Short-term scheduling: The decision as to which available process willbe executed by the processor9.3 Turnaround time is the total time that a request spends in the system(waiting time plus service time. Response time is the elapsed timebetween the submission of a request until the response begins toappear as output.Problems9.1 Each square represents one time unit; the number in the square refersto the currently-running process.FCFS A A A B B B B B C C D D D D D E E E E E RR, q = 1 A B A B C A B C B D B D E D E D E D E E RR, q = 4 A A A B B B B C C B D D D D E E E E D E SPN A A A C C B B B B B D D D D D E E E E E SRT A A A C C B B B B B D D D D D E E E E E HRRN A A A B B B B B C C D D D D D E E E E E Feedback, q = 1 A B A C B C A B B D B D E D E D E D E EFeedback, q = 2i A B A A C B B C B B D D E D D E E D E EC HAPTER 9U NIPROCESSORS CHEDULINGA B C D ET a0 1 3 9 12T s 3 5 2 5 5 FCFS T f 3 8 10 15 20T r 3.00 7.00 7.00 6.00 8.00 6.20T r/T s 1.00 1.40 3.50 1.20 1.60 1.74 RR qT f 6.00 11.00 8.00 18.00 20.00= 1T r 6.00 10.00 5.00 9.00 8.00 7.60T r/T s 2.00 2.00 2.50 1.80 1.60 1.98RR qT f 3.00 10.00 9.00 19.00 20.00= 4T r 3.00 9.00 6.00 10.00 8.00 7.20T r/T s 1.00 1.80 3.00 2.00 1.60 1.88 SPN T f 3.00 10.00 5.00 15.00 20.00T r 3.00 9.00 2.00 6.00 8.00 5.60T r/T s 1.00 1.80 1.00 1.20 1.60 1.32SRT T f 3.00 10.00 5.00 15.00 20.00T r 3.00 9.00 2.00 6.00 8.00 5.60T r/T s 1.00 1.80 1.00 1.20 1.60 1.32 HRRT f 3.00 8.00 10.00 15.00 20.00NT r 3.00 7.00 7.00 6.00 8.00 6.20T r/T s 1.00 1.40 3.50 1.20 1.60 1.74FB qT f7.00 11.00 6.00 18.00 20.00= 1T r7.00 10.00 3.00 9.00 8.00 7.40T r/T s 2.33 2.00 1.50 1.80 1.60 1.85 FB T f 4.00 10.00 8.00 18.00 20.00q = 2i T r 4.00 9.00 5.00 9.00 8.00 7.00 T r/T s 1.33 1.80 2.50 1.80 1.60 1.819.16 a. Sequence with which processes will get 1 min of processor time:1 2 3 4 5 Elapsed timeA A A A A A A A A A A A A A BBBBBBBBCCDDDDDEEEEEEEEEEE1015192327303336384042434445The turnaround time for each process:A = 45 min,B = 35 min,C = 13 min,D = 26 min,E = 42 minThe average turnaround time is = (45+35+13+26+42) / 5 = 32.2 min b.Priority Job Turnaround Time3 4 6 7 9 BEACD99 + 12 = 2121 + 15 = 3636 + 3 = 3939 + 6 = 45The average turnaround time is: (9+21+36+39+45) / 5 = 30 min c.Job Turnaround TimeA B C D E 1515 + 9 = 24 24 + 3 = 27 27 + 6 = 33 33 + 12 = 45The average turnaround time is: (15+24+27+33+45) / 5 = 28.8 min d.RunningTimeJob Turnaround Time6 9 12 15 DBEA3 + 6 = 99 + 9 = 1818 + 12 = 3030 + 15 = 45The average turnaround time is: (3+9+18+30+45) / 5 = 21 minC HAPTER 10M ULTIPROCESSOR AND R EAL-T IMES CHEDULINGReview Questions10.1 Fine: Parallelism inherent in a single instruction stream. Medium: Parallelprocessing or multitasking within a single application. Coarse:Multiprocessing of concurrent processes in a multiprogrammingenvironment. Very Coarse: Distributed processing across network nodes toform a single computing environment. Independent: Multiple unrelatedprocesses.10.4 A hard real-time task is one that must meet its deadline; otherwise it willcause undesirable damage or a fatal error to the system. A soft real-timetask has an associated deadline that is desirable but not mandatory; it stillmakes sense to schedule and complete the task even if it has passed itsdeadline.Problems10.1 For fixed priority, we do the case in which the priority is A, B, C. Eachsquare represents five time units; the letter in the square refers to thecurrently-running process. 
The first row is fixed priority; the secondrow is earliest deadline scheduling using completion deadlines.A AB B A AC C A A B B A A C C A AA AB B AC C A C A A B B A A C C C A AFor fixed priority scheduling, process C always misses its deadline.10.4normal executionexecution in critical sectionT 1T 2T 3s locked by T 3s unlockeds locked by T 1Once T 3 enters its critical section, it is assigned a priority higher than T1. When T3 leaves its critical section, it is preempted by T 1.C HAPTER 11I/O M ANAGEMENT AND D ISK S CHEDULING Review Questions11.1 Programmed I/O: The processor issues an I/O command, on behalf of aprocess, to an I/O module; that process then busy-waits for theoperation to be completed before proceeding. Interrupt-driven I/O:The processor issues an I/O command on behalf of a process,continues to execute subsequent instructions, and is interrupted by the I/O module when the latter has completed its work. The subsequent instructions may be in the same process, if it is not necessary for that process to wait for the completion of the I/O. Otherwise, the process is suspended pending the interrupt and other work is performed. Direct memory access (DMA): A DMA module controls the exchange of data between main memory and an I/O module. The processor sends arequest for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.11.5 Seek time, rotational delay, access time.Problems11.1 If the calculation time exactly equals the I/O time (which is the mostfavorable situation), both the processor and the peripheral devicerunning simultaneously will take half as long as if they ran separately.Formally, let C be the calculation time for the entire program and let T be the total I/O time required. Then the best possible running timewith buffering is max(C, T), while the running time without buffering is C + T; and of course ((C + T)/2) ≤ max(C, T) ≤ (C + T). Source:[KNUT97].11.3 Disk head is initially moving in the direction of decreasing tracknumber:FIFO SSTF SCAN C-SCANNext track accessed Numberof trackstraversedNexttrackaccessedNumberof trackstraversedNexttrackaccessedNumberof trackstraversedNexttrackaccessedNumberof trackstraversed27 73 110 10 64 36 64 36129 102 120 10 41 23 41 23 110 19 129 9 27 14 27 14 186 76 147 18 10 17 10 17 147 39 186 39 110 100 186 17641 106 64 122 120 10 147 3910 31 41 23 129 9 129 1864 54 27 14 147 18 120 9120 56 10 17 186 39 110 10 Average 61.8 Average 29.1 Average 29.6 Average 38If the disk head is initially moving in the direction of increasing tracknumber, only the SCAN and C-SCAN results change:SCAN C-SCANNext track accessed Numberof trackstraversedNexttrackaccessedNumberof trackstraversed110 10 110 10120 10 120 10129 9 129 9147 18 147 18186 39 186 3964 122 10 17641 23 27 1727 14 41 1410 17 64 23 Average 29.1 Average 35.1Review Questions12.1 A field is the basic element of data containing a single value. A recordis a collection of related fields that can be treated as a unit by some application program.12.5 Pile: Data are collected in the order in which they arrive. Each recordconsists of one burst of data. Sequential file: A fixed format is used for records. All records are of the same length, consisting of the same number of fixed-length fields in a particular order. Because the length and position of each field is known, only the values of fields need to be stored; the field name and length for each field are attributes of the file structure. 
Indexed sequential file: The indexed sequential file maintains the key characteristic of the sequential file: records are organized in sequence based on a key field. Two features are added; an index to the file to support random access, and an overflow file. The index provides a lookup capability to reach quickly the vicinity of a desired record. The overflow file is similar to the log file used with a sequential file, but is integrated so that records in the overflow file are located by following a pointer from their predecessor record. Indexed file: Records are accessed only through their indexes. The result is that there is now no restriction on the placement of records as long as a pointer in at least one index refers to that record. Furthermore,variable-length records can be employed. Direct, or hashed, file: The direct file makes use of hashing on the key value.Problems12.1 Fixed blocking: F = largest integer B RWhen records of variable length are packed into blocks, data formarking the record boundaries within the block has to be added to separate the records. When spanned records bridge block boundaries, some reference to the successor block is also needed. One possibility is a length indicator preceding each record. Another possibility is a special separator marker between records. In any case, we can assume that each record requires a marker, and we assume that the size of a marker is about equal to the size of a block pointer [WEID87]. For spanned blocking, a block pointer of size P to its successor block may C HAPTER 12F ILE M ANAGEMENTbe included in each block, so that the pieces of a spanned record can easily be retrieved. Then we haveVariable-length spanned blocking: F=B-P R+PWith unspanned variable-length blocking, an average of R/2 will be wasted because of the fitting problem, but no successor pointer is required:Variable-length unspanned blocking: F=B-R2 R+P12.3 a. Indexedb. Indexed sequentialc. Hashed or indexed。
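上文第6章6.4题用到了银行家算法的安全性检查。下面是一个最小的Python草图(矩阵数据为自拟示例,并非题中的原始数据),演示“反复寻找 need ≤ work 的进程并回收其已分配资源”这一判定过程:

```python
def is_safe(available, need, alloc):
    """银行家算法安全性检查:若能找到让所有进程依次完成的顺序,则状态安全。"""
    work = list(available)
    finish = [False] * len(need)
    order = []
    while len(order) < len(need):
        progressed = False
        for i in range(len(need)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                # 进程 i 的剩余需求可被满足:让它运行完并释放其已分配资源
                work = [w + a for w, a in zip(work, alloc[i])]
                finish[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return False, order   # 剩余进程都无法满足,状态不安全
    return True, order

# 自拟示例数据(5个进程、3类资源),仅演示调用方式
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], need, alloc))   # (True, [1, 3, 4, 0, 2])
```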
《操作系统精髓与设计原理_第五版》练习题及答案(DOC)
第 1 章计算机系统概述1.1 、图 1.3中的理想机器还有两条I/O 指令:0011 = 从 I/O 中载入 AC0111 = 把 AC保存到 I/O 中在这种情况下, 12 位地址标识一个特殊的外部设备。
请给出以下程序的执行过程(按照图 1.4 的格式):1.从设备 5 中载入 AC。
2.加上存储器单元 940 的内容。
3.把 AC保存到设备 6 中。
假设从设备 5 中取到的下一个值为3940 单元中的值为 2。
答案:存储器( 16 进制内容):300:3005;301:5940;302:7006步骤 1:3005->IR;步骤 2:3->AC步骤 3:5940->IR;步骤 4:3+2=5->AC步骤 5:7006->IR:步骤 6:AC->设备 61.2 、本章中用 6 步来描述图 1.4 中的程序执行情况,请使用MAR和 MBR扩充这个描述。
答案: 1. a. PC 中包含第一条指令的地址300,该指令的内容被送入MAR中。
b. 地址为 300 的指令的内容(值为十六进制数1940)被送入 MBR,并且PC增 1。
这两个步骤是并行完成的。
c.MBR中的值被送入指令寄存器 IR 中。
2.a.指令寄存器 IR 中的地址部分( 940)被送入 MAR中。
b.地址 940 中的值被送入 MBR中。
c.MBR中的值被送入 AC中。
3. a. PC 中的值( 301)被送入 MAR中。
b. 地址为 301 的指令的内容(值为十六进制数5941)被送入 MBR,并且 PC增 1。
c.MBR中的值被送入指令寄存器 IR 中。
4.a.指令寄存器 IR 中的地址部分( 941)被送入 MAR中。
b.地址 941 中的值被送入 MBR中。
c.AC中以前的内容和地址为 941 的存储单元中的内容相加,结果保存到 AC中。
5.a. PC中的值( 302)被送入 MAR中。
b. 地址为 302 的指令的内容(值为十六进制数2941)被送入 MBR,并且 PC 增 1。
c. MBR中的值被送入指令寄存器 IR 中。
6. a. 指令寄存器 IR 中的地址部分(941)被送入 MAR 中。
《操作系统精髓与设计原理》习题第三章
3.10.1 关键术语
阻塞态:进程在某些事件发生之前不能执行,等待这种事件发生的状态。
退出态:操作系统从可执⾏进程组中释放出的进程,⾃⾝停⽌了,或者因某种原因被取消。
内核态:某些指令只能在特权状态下执⾏,⽽这种特权状态称为内核态。
⼦进程:由⼀个进程创建的进程,该进程的终⽌受⽗进程的影响。
中断:由外部事件引发进程挂起,CPU转⽽去处理发起中断的事件,并处理结束后恢复进程的执⾏。
模式切换:CPU由⽤户态和核⼼态之间相互切换。
新建态:进程创建时仅仅创建了对应的进程控制块⽽没有在内存中创建相应的映像,此时进程的代码和数据在外存中进程切换:在某⼀时刻,⼀个正在运⾏的进程被中断,操作系统指定另外⼀个进程为运⾏态,并把控制权交给它。
包括为前⼀个进程保存进程控制块和上下⽂信息,并把它们替换成第⼆个进程的。
交换:内存将⼀个内存中⼀个区域的内容与辅助存贮器中⼀个区域的内容互相交换的过程。
程序状态字:包含状态代码、执⾏模式,以及其他反应进程状态的信息的单个寄存器或寄存器组。
陷阱:转向某个指定地址的⾮编程的条件转移,是由硬件⾃动激活的,跳转发⽣的位置会被记录下来。
进程控制块:操作系统中进程信息的描述,是⼀个数据结构,包含有进程标识信息、处理器状态信息、进程控制信息等。
进程映像:⼀个进程的所有组成部分,包括程序、数据、栈和进程控制块。
进程:进程是进程实体的运⾏过程,是系统资源分配和调度的基本单位。
3.10.2复习题3.1什么是指令跟踪(轨迹)?(What is an instruction trace?)An instruction trace for a program is the sequence of instructions that execute for that process.⼀个进程运⾏指令的序列称作指令轨迹。
3.2通常哪些事件会导致创建⼀个进程?(What common events lead to the creation of a process?New batch job;interactive logon;created by OS to provide a service;spawned by existing process.新的批处理作业;交互登陆(终端⽤户登陆到系统);操作系统因为提供⼀项服务⽽创建;由现有的进程派⽣。
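对“由现有的进程派生”这一情形,可以在类Unix系统上用 fork 做一个最小演示(示意性代码,仅适用于提供 os.fork 的平台):

```python
import os

pid = os.fork()              # 现有进程派生出一个子进程
if pid == 0:
    # 子进程:打印自己与父进程的标识,然后进入退出态
    print("child pid =", os.getpid(), "parent =", os.getppid())
    os._exit(0)
else:
    os.waitpid(pid, 0)       # 父进程等待并回收子进程
    print("parent reaped child", pid)
```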
操作系统--精髓与设计原理(第八版)第九章复习题答案
9.1 简要描述三种类型的处理器调度。
长程调度:决定加入待执行的进程池中。
中程调度:决定加入部分或全部位于内存中的进程集合。
短程调度:决定下一个由处理器执行的就绪进程。
9.2在交互式操作系统中,通常最重要的性能要求是什么?响应时间9.3 周转时间和响应时间有何区别?周转时间指⼀个进程从提交到完成之间的时间间隔,包括实际执⾏时间和等待资源(包括处理器资源)的时间;响应时间指从提交⼀个请求到开始接收响应之间的时间间隔。
9.4 对于进程调度,较⼩的优先级值是表⽰较低的优先级还是表⽰较⾼的优先级?对于UNIX和许多其他操作系统中,优先级数值越⼤,表⽰的进程优先级越低。
某些系统如Windows的⽤法正好相反,即⼤数值表⽰⾼优先级。
9.5 抢占式调度和⾮抢占式调度有何区别?⾮抢占:在这种情况下,⼀旦进程处于运⾏状态,就会不断执⾏直到终⽌,进程要么因为等待I/O,要么因为请求某些操作系统服务⽽阻塞⾃⼰。
抢占:当前正运⾏进程可能被操作系统中断,并转换为就绪态。
⼀个新进程到达时,或中断发⽣后把⼀个阻塞态进程置为就绪态时,或出现周期性的时间中断时,需要进⾏抢占决策。
9.6 简单定义FCFS调度。
每个进程就绪后,会加⼊就绪队列。
当前正运⾏的进程停⽌执⾏时,选择就绪队列中存在时间最长的进程运⾏。
9.7 简单定义轮转调度。
这种算法周期性地产⽣时钟中断,出现中断时,当前正运⾏的进程会放置到就绪队列中,然后基于FCFS策略选择下⼀个就绪作业运⾏。
9.8 简单定义最短进程优先调度。
这是⼀个⾮抢占策略,其原则是下次选择预计处理时间最短的进程。
9.9 简单定义最短剩余时间调度。
最短剩余时间是在SPN中增加了抢占机制的策略。
在这种情况下,调度程序总是选择预期剩余时间最短的进程。
9.10 简单定义最⾼响应⽐优先调度。
当前进程完成或被阻塞时,选择R值最⼤的就绪进程。
调度决策基于对归一化周转时间的估计,其中响应比 R = (已等待时间 + 预计服务时间) / 预计服务时间。
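下面给出一个示意性的HRRN选择函数(Python草图,就绪队列用(进程名, 已等待时间, 预计服务时间)的元组列表表示,数据为自拟):

```python
def hrrn_pick(ready):
    """最高响应比优先:R = (已等待时间 w + 预计服务时间 s) / s,选 R 最大者。"""
    return max(ready, key=lambda p: (p[1] + p[2]) / p[2])[0]

# 三个就绪进程:A 已等6个单位、还需3个;B 已等1、还需1;C 已等10、还需8
print(hrrn_pick([("A", 6, 3), ("B", 1, 1), ("C", 10, 8)]))  # A 的 R=3.0 最大,输出 A
```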
操作系统精髓与设计原理-第8章复习题及习题解答
虚拟内存8.1 简单分页与虚拟分页有什么区别?简单分页:一个程序中的所有的页都必须在主存储器中程序才能正常运行,除非使用覆盖技术。
虚拟内存分页:不是程序的每一页都必须在主存储器的帧中来使程序运行,页在需要的时候进行读取。
8.2 解释什么是抖动。
抖动是虚拟内存方案中的一种现象:在这种现象中,处理器把大部分时间都用于交换页块,而不是执行指令。
8.3 为什么在使用虚拟内存时,局部性原理是至关重要的?可以根据局部性原理设计算法来避免抖动。
总的来说,局部性原理允许算法预测哪一个当前页在最近的未来是最少可能被使用的,并由此就决定候选的替换出的页。
8.4 哪些元素是页表项中可以找到的元素?简单定义每个元素。
帧号:用来表示主存中的页来按顺序排列的号码。
存在位(P):表示这一页是否当前在主存中。
修改位(M):表示这一页在放进主存后是否被修改过。
8.5 转移后备缓冲器的目的是什么?转移后备缓冲器(TLB)是一个包含最近经常被使用过的页表项的高速缓冲存储器。
其目的是加快地址转换:大多数情况下不必再到内存(或磁盘)中的页表去取页表项。
8.6 简单定义两种可供选择的页读取策略。
在请求式分页中,只有当访问到某页中的一个单元时才将该页取入主存。
在预约式分页中,读取的并不是页错误请求的页。
8.7 驻留集管理和页替换策略有什么区别?驻留集管理主要关注以下两个问题:(1)给每个活动进程分配多少个页帧。
(2)被考虑替换的页集是仅限在引起页错误的进程的驻留集中选择还是在主存中所有的页帧中选择。
页替换策略关注的是以下问题:在考虑的页集中,哪一个特殊的页应该被选择替换。
8.8 FIFO和Clock页替换算法有什么区别?时钟算法与FIFO算法很接近,除了在时钟算法中,任何一个使用位为一的页被忽略。
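下面用一个极简的Python草图示意Clock与FIFO的差别:指针环形扫描各帧,使用位为1的帧被跳过并清零(“第二次机会”),使用位为0的帧才被选为牺牲帧(变量与数据均为自拟):

```python
def clock_victim(use_bit, hand):
    """use_bit: 各帧的使用位列表;hand: 指针位置。返回(牺牲帧号, 新指针位置)。"""
    n = len(use_bit)
    while True:
        if use_bit[hand] == 0:       # 使用位为0:选中该帧替换
            return hand, (hand + 1) % n
        use_bit[hand] = 0            # 使用位为1:清零并跳过,相当于给它第二次机会
        hand = (hand + 1) % n

print(clock_victim([1, 0, 1, 1], 0))  # 跳过帧0并将其使用位清零,选中帧1 -> (1, 2)
```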
8.9 页缓冲实现的是什么?(1)被替换出驻留集的页不久又被访问到时,仍在主存中,减少了一次磁盘读写。
(2)被修改的页以簇的方式被写回,而不是一次只写一个,这就大大减少了I/O操作的数目,从而减少了磁盘访问的时间。
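配合8.4、8.5的页表项与TLB,下面的Python草图演示分页地址转换的基本步骤:把虚拟地址拆成页号与页内偏移,查页表得到帧号,再拼出物理地址(页大小假设为1KB,页表映射取自上文8.1题的数据):

```python
PAGE_SIZE = 1024                             # 假设页大小为 2^10 = 1KB

def translate(vaddr, page_table):
    """page_table: {虚拟页号: 帧号}。命中返回物理地址,缺页返回 None(触发页错误)。"""
    vpn, offset = divmod(vaddr, PAGE_SIZE)   # 拆分:虚拟页号 + 页内偏移
    if vpn not in page_table:
        return None                          # 该页不在主存中:页错误
    return page_table[vpn] * PAGE_SIZE + offset

page_table = {1: 7, 5: 0}                    # 页1在帧7,页5在帧0
print(translate(1052, page_table))           # 1052 = 页1偏移28 -> 7*1024+28 = 7196
print(translate(2221, page_table))           # 页2不在内存 -> None(页错误)
```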
操作系统--精髓与设计原理(第八版)第一章复习题答案
1.1 列出并简要定义计算机的四个组成部分。
处理器:控制计算机的操作,执⾏数据处理功能。
内存:也叫主存储器,存储数据和程序。
输⼊/输出模块:在计算机和外部环境之间移动数据。
系统总线:在处理器、内存和输⼊输出间提供通信的设施。
1.2 定义处理器寄存器的两种主要类别。
⽤户可见寄存器: 优先使⽤这些寄存器,可以使机器语⾔或者汇编语⾔的程序员减少对主存储器的访问次数。
对⾼级语⾔⽽⾔,由优化编译器负责决定把哪些变量应该分配给主存储器,⼀些⾼级语⾔,如C语⾔,允许程序⾔建议编译器把哪些变量保存在寄存器中。
控制和状态寄存器:⽤以控制处理器的操作,且主要被具有特权的操作系统例程使⽤,以控制程序的执⾏。
1.3 ⼀般⽽⾔,⼀条机器指令能指定的四种不同操作是什么?处理器-寄存器:数据可以从处理器传送到存储器,或者从存储器传送到处理器。
处理器-I/O:通过处理器和I/O模块间的数据传送,数据可以输出到外部设备,或者从外部设备输⼊数据。
数据处理:处理器可以执⾏很多关于数据的算术操作或者逻辑操作。
控制:某些指令可以改变执⾏顺序。
1.4 什么是中断?中断是指计算机运⾏过程中,出现某些意外情况需主机⼲预时,机器能⾃动停⽌正在运⾏的程序并转⼊处理新情况的程序,处理完毕后⼜返回原被暂停的程序继续运⾏。
1.5 多个中断的处理⽅式是什么?处理多中断有两种⽅法。
第⼀种⽅法是当正在处理⼀个中断时,禁⽌再发⽣中断。
第⼆种⽅法是定义中断优先级,允许⾼优先级的中断打断低优先级的中断处理器的运⾏。
1.6 内存层次各个元素间的特征是什么?存储器的三个重要特性是:价格,容量和访问时间。
并且各层次从上到下,每“位”价格降低,容量递增,访问时间递增。
1.7 什么是⾼速缓存?⾼速缓冲存储器是⽐主存⼩⽽快的存储器,⽤以协调主存跟处理器,作为最近储存地址的缓冲区。
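结合1.6、1.7的存储层次与高速缓存,可以用平均访存时间(AMAT)这一常用公式做一个小的数值示例(命中时间、缺失率、缺失代价均为自拟数字):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """平均访存时间 = 命中时间 + 缺失率 × 缺失代价。"""
    return hit_time + miss_rate * miss_penalty

# 自拟数字:cache命中1ns,缺失率5%,缺失后访问主存需60ns
print(amat(1.0, 0.05, 60.0))   # 4.0 ns,远小于每次都直接访问主存的60ns
```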
作业章节:1 3 4 9 7 8 11 12 章;第10章:多CPU调度、实时调度。
第1章作业:习题P25 1.3、1.8、1.9(更正印刷错误106)。
1.3 假设有一个32位微处理器,其32位的指令由两个域组成:第一个字节包含操作码,其余部分为一个直接操作数或一个操作数地址。
a、最大可直接寻址的存储器能力为多少(以字节为单位)?
b、如果微处理器总线具有如下特征,分析对系统速度的影响:(1)一个32位局部地址总线和一个16位局部数据总线,或者(2)一个16位局部地址总线和一个16位局部数据总线。
c、程序计数器和指令寄存器分别需要多少位?
答案(没必要全写,捡重点):(定长)指令32位,1字节操作码,则后3字节为立即数或内存地址。
(a) 最大可直接寻址:直接寻址是一种基本的寻址方法,其特点是在指令格式的地址字段中直接指出操作数在内存中的地址。
由于操作数的地址直接给出而不需要经过某种变换,所以称这种寻址方式为直接寻址方式。
24位地址可直接寻址 2^24 字节。
(b) 总线问题:地址总线32位、数据总线16位时,直接寻址存储器24位、总线32位,地址传送一次即可;但指令32位,操作数32位(因为是32位微处理器),要两次传送。地址总线16位、数据总线16位时,传送地址、传送指令/数据全部需要2次。
地址可视作:先行地址后列地址(c) PC和IR 至少:PC24位,IR8位一般:PC32bit IR 32bit更现实复杂情形:是否分段,使用段寄存器; 直接寻址中逻辑地址/位移/偏移offset,与有效地址effective address区别OS中,逻辑地址与物理地址1.8一个DMA模块从外部设备给存传送字节,传送速度为9600位每秒(b/s)。
处理器可以每秒100万次的速度取指令,由于DMA活动,处理器的速度将会减慢多少?答案:没必要全写捡重点看清楚题干:每秒100万次取指令,即1M/s取一次指令,不是100M!该CPU主频多少不知,是否使用cache不知,执行一条指令多少时钟周期不知,此题中无需知道还假设,此CPU只取指令要访问存,执行指令不需要读写数据,不访存. 还假设DMA一次访问存传送1个字节凭什么如此假设?9600b/s=1200B/s 即1s中要传送1200次,而原本CPU要1M次访存,现在因DMA要减少1200次,所以影响是1200/1M=0.12%1.9一台计算机包括一个CPU和一台I/O设备D,通过一条共享总线连接到主存储器M,数据总线的宽度为1个字。
CPU每秒最多可执行106条指令,平均每条指令需要5个机器周期,其中3个周期需要使用存储器总线。
存储器读/写操作使用1个机器周期。
假设CPU正在连续不断地执行后台程序,并且需要保证95%的指令执行速度,但没有任何I/O指令。
假设1个处理器周期等于1个总线周期,现在要在M和D之间传送大块数据。
a.若使用程序控制I/O,I/O每传送1个字需要CPU执行两条指令。
请估计通过D的I/O数据传送的最大可能速度。
b.如果使用DMA传送,请估计传送速度。
答案:没必要全写捡重点题干信息:多少位CPU不知,字长多少位不知,以处理器周期为单位,访问存(读1条指令读1字数据)要1周期,执行1指令需要5周期。
CPU每秒最多执行10^6条指令程序IO:传送1字要2条指令限制只能有5%的CPU处理用于IO程序IO:传送1字要2条指令限制只能有5%的CPU处理用于IO此限制下,1秒可执行用于IO的指令为5% * 10^6条指令而2条指令才可传送1字数据,所以每秒IO最大可传送的字为0.5*5%*10^6=25000字/秒DMA情形:最大速度:在CPU执行后台程序时,总共能找到多少周期可以利用。
1周期传送1字5%CPU处理能力,全部可用于DMA,可执行指令条数为5%*10^6,而1条有5周期,所以可传送字:5*5%*10^6 个字DMA情形:最大速度:在CPU执行后台程序时,总共能找到多少周期可以利用。
1周期传送1字最大吗?要见缝插针!后台程序执行时,执行1条指令共5个周期,但只在3个周期中访存,还有2个没有使用,DMA可用这两个周期DMA情形:最大速度:在CPU执行后台程序时,总共能找到多少周期可以利用。
1周期传送1字最后,DMA最大速度为:10^6(0.05 ×5 + 0.95 ×2) = 2.15 ×10^6 即2.15M字/秒第三章:进程描述与控制P103 3.5,3.14 P104 3.111、概念:交换(swapping):操作系统将存中进程的容或部分容写入硬盘,或反之的操作。
进程:具有一定独立功能的程序关于一个数据集合的一次运行活动。
2. 进程有哪三个基本状态?试说明状态转换的典型原因,图示。
(1)处于就绪状态的进程,当进程调度程序为之分配了处理机后,该进程就由就绪状态变为执行状态(2)正在执行的进程因发生某事件而无法执行,如暂时无法取得所需资源,则由执行状态转变为阻塞状态。
(3)正在执行的进程,如因时间片用完或被高优先级的进程抢占处理机而被暂停执行,该进程便由执行转变为就绪状态。
(2)状态转换1不会立即引起其他状态转换。
状态转换2必然立即引发状态转换1:状态转换2发生后,进程调度程序必然要选出一个新的就绪进程投入运行,该新进程可能是其他进程,也可能是刚从执行状态转换成就绪状态的那个进程。
状态转换3可能立即引发状态转换1:状态转换3发生后,若就绪队列非空,则进程调度程序将选出一个就绪进程投入执行。
状态转换4可能引发状态转换1:状态转换4发生后,若CPU空闲,并且没有其他进程竞争CPU,则该进程将被立即调度。
另外,状态转换4还可能同时引发状态转换1和2:若系统采用抢占调度方式,而新就绪的进程具备抢占CPU的条件(如其优先权很高),则它可立即得到CPU转换成执行状态,而原来正在执行的进程则转换成就绪状态。
3.5什么是交换,目的是什么:操作系统将存中进程的容或部分容写入硬盘,或反之的操作。
目的:将暂时无法运行的进程(阻塞状态)从存中移出,空出存,以便在存中装入尽可能多的可运行的进程。
3.14模式切换与进程切换是什么有什么区别是什么:为便于OS实现和管理,处理器一般支持两种(以上的)执行模式:用户态和核态。
OS在核态下运行,用户进程在用户态下运行。
从用户态到核态的改变或反之,称之为模式切换。
用户进程运行时,如处理器响应中断,进入中断处理程序,则由用户态进入核态;而中断返回后,从核态返回用户态。
区别:模式切换不一定会改变当前运行的进程的状态,而进程切换过程中必然会出现模式切换。
模式切换(对应的中断处理)保存/恢复的状态信息少,而进程切换需要保存/恢复的状态信息多。
3.11 中断A)中断如何支持多道程序设计:1.外部设备具备中断能力后,CPU才可能在外部设备开始工作到完成之间执行其它程序;(轮询方式中,外部设备工作完成之前,CPU一直循环测试外部设备的状态,不可能执行其它程序)2.利用中断方式,操作系统可以及时获得控制权,在多个程序之间选择调度。
B)中断如何支持错误处理:在硬件发生异常时,如奇偶校验错,掉电等,以特殊中断形式出现;程序中出现系统错误时,如除零,地址越界,(无访问权限的)非法访问,以特殊中断形式出现;程序中应用语义级的错误,也可以中断形式出现;各种错误统一地用中断方式处理,只要分别编制相应的中断处理程序即可,简化了硬件设计,也方便了用户程序开发。
C)对于单线程而言,说明一个能够引起中断并且导致进程切换的情景,另外说明能引起中断但没有进程切换的例子导致切换例:用户进程要求输入,则在启动外部设备工作后,用户进程进入阻塞状态,进行进程切换;不导致切换例:发生时钟中断,当前运行的用户进程的时间片还未用完,则继续执行。
(另,若不限定为外部中断,则用户进程执行系统调用时,发生中断,若系统调用不涉及IO,则不会发生进程切换)第四章线程对称处理SMP和微核概念:线程是进程中一个相对独立的执行流,是CPU调度的单位。
1、为什么要引入线程,多线程有何优点?操作系统引入线程后,可以简化并发程序的设计,方便在一个进程实现多个并行处理。
多线程的优点包括:实现进程并行处理;方便数据共享;降低了切换时的系统开销;提高了CPU的利用率;改善了程序的响应性。
2.比较TCB与PCB容。
TCB:线程标识,线程状态(运行,就绪,阻塞),处理器状态,(堆栈,私用数据段的)指针PCB: (TCB中没有的)进程的虚拟空间指针,文件等资源,进程的权限,进程间通信等。
3.比较用户级线程与核级线程的异同用户级:优点:不需要OS支持,调度方式灵活,开销小;缺点:不能并行,一线程阻塞,其它线程也不能运行。
核级:优点:可并行,一线程阻塞不会阻塞其它线程,缺点:创建切换开销相对大第九章:单处理器调度P291 9.1, 9.5, 9.109.1简要描述三种类型的处理器调度长程调度:决定加入到待执行的进程池中;中程调度:决定加入到部分或全部在主存中的进程集合中;短程调度:决定哪一个可用进程将被处理器执行。
9.5抢占式和非抢占式有什么区别非抢占:在这种情况下,一旦进程处于运行态,他就不断执行直到终止,或者为等待I/O或请求某些操作系统服务而阻塞自己。
抢占:当前正在运行的进程可能被操作系统中断,并转移到就绪态。
关于抢占的决策可能是在一个新进程到达时,或者在一个中断发生后把一个被阻塞的进程置为就绪态时,或者基于周期性的时间中断。
9.10简答定义最高响应比优先调度调度基于抢占原则并且使用动态优先级机制。
当一个进程第一次进入系统时,它被放置在RQ0。
当它第一次被抢占后并返回就绪状态时,它被防止在RQ1。
在随后的时间里,每当它被抢占时,它被降级到下一个低优先级队列中。
一个短进程很快会执行完,不会在就绪队列中降很多级。
一个长进程会逐级下降。
因此,新到的进程和短进程优先于老进程和长进程。
在每个队列中,除了在优先级最低的队列中,都使用简单的FCFS机制。
一旦一个进程处于优先级最低的队列中,它就不可能再降低,但是会重复地返回该队列,直到运行结束。
补充题:分析多级反馈算法(指出其目标,假设,容及效果)反馈调度算法分析:目标系统效率:减少平均等待时间,提高系统呑吐量公平:减少饥饿现象出现或减轻程度尽量减少系统开销反馈调度算法分析:假设,理由程序由CPU阵发期,IO阵发期交替构成程序完成一次IO后,紧接着可能是一个短暂的IO阵发期程序一开始,一般都是一个CPU阵发期程序运行时间有长有短;长时间运行没有结束的程序可能还需要很长时间才能结束反馈调度算法分析:容设置多个分成优先级不同的就绪队列,高优先级队列的时间片短,低优先级队列的时间片长。
(任一优先级的进程被调度运行时,时间片不会被抢占。