现代操作系统(中文第三版)习题答案精编版
11、一位校对人员注意到在一部将要出版的操作系统教科书手稿中有一个多次出 现的拼写错误。这本书大致有 700 页。每页 50 行,一行 80 个字符。若把文稿用 电子扫描,那么,主副本进入图 1-9 中的每个存储系统的层次要花费多少时间? 对于内存储方式,考虑所给定的存取时间是每次一个字符,对于磁盘设备,假定 存取时间是每次一个 1024 字符的盘块,而对于磁带,假设给定开始时间后的存 取时间和磁盘存取时间相同。
答:原稿包含 80*50*700 = 2800000 字符。当然,这不可能放入任何目前的 CPU 中,但是如果可能的话,在寄存器中只需 2.8ms,在 Cache 中需要 5.6ms,在内 存中需要 28ms,整本书大约有 2700 个 1024 字节的数据块,因此从磁盘扫描大 约为 27 秒,从磁带扫描则需 2 分钟 7 秒。当然,这些时间仅为读取数据的时间。 处理和重写数据将增加时间。
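下面给出一个验算上述结果的 C 小程序。其中各层的存取时间取图 1-9 的典型值(这里假设寄存器约 1ns/字符、Cache 约 2ns/字符、内存约 10ns/字符、磁盘每个 1KB 盘块约 10ms、磁带启动时间约 100s),仅作示意,具体数值以教材图 1-9 为准:

#include <stdio.h>

int main(void)
{
    long chars  = 80L * 50 * 700;           /* 原稿共 2,800,000 个字符 */
    long blocks = (chars + 1023) / 1024;    /* 约 2700 个 1KB 的盘块 */

    double reg_s   = chars * 1e-9;          /* 寄存器:约 1ns/字符 */
    double cache_s = chars * 2e-9;          /* Cache:约 2ns/字符 */
    double ram_s   = chars * 10e-9;         /* 内存:约 10ns/字符 */
    double disk_s  = blocks * 10e-3;        /* 磁盘:每块约 10ms */
    double tape_s  = 100.0 + disk_s;        /* 磁带:启动约 100s 后与磁盘同速 */

    printf("寄存器: %.1f ms\n", reg_s * 1e3);
    printf("Cache : %.1f ms\n", cache_s * 1e3);
    printf("内存  : %.1f ms\n", ram_s * 1e3);
    printf("磁盘  : %.1f s\n", disk_s);
    printf("磁带  : %.1f s\n", tape_s);
    return 0;
}

其输出与上面的 2.8ms、5.6ms、28ms、约 27 秒和约 2 分 7 秒的结论一致。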
现代操作系统(第三版)习题答案
(部分内容来源于网络,转载请注明出处)
目录
第一章 绪论
第二章 进程与线程
第三章 存储管理
第四章 文件系统
第五章 输入/输出
第六章 死锁
第七章 多媒体操作系统
第八章 多处理机系统
第九章 安全
第十章 实例研究 1:Linux
第十一章 实例研究 2:Windows Vista
第十二章 实例研究 3:Symbian 操作系统
第十三章 操作系统设计
7、下面的哪一条指令只能在内核态中使用?
a)禁止所有的中断。
b)读日期-时间时钟。
c)设置日期-时间时钟。
d)改变存储器映像。
答:选择(a)、(c)、(d)应该被限制在内核模式。
8、考虑一个有两个CPU的系统,并且每一个CPU有两个线程(超线程)。假设有三个程序P0、P1、P2,分别以运行时间5ms、10ms、20ms开始。运行这些程序需要多少时间?假设这三个程序都是100%限于CPU,在运行时无阻塞,并且一旦设定就不改变CPU。
答:取决于操作系统如何调度,可能需要20ms、25ms或30ms。如果P0和P1被调度到同一个CPU、P2在另一个CPU上,需要20ms;如果P0和P2在同一个CPU上,需要25ms;如果P1和P2在同一个CPU上,需要30ms;如果三个程序都在同一个CPU上,则需要35ms。
16、为什么在装载文件系统时,作为装配点(mount point)的目录通常是空的?
答:装载文件系统会使装配点目录中原有的文件不可访问,因此装配点通常是空的。不过,系统管理员可能希望把被装载目录中一些最重要的文件复制到装配点,使得他们在进行设备检查或修理时,可以在紧急情况下沿普通路径找到这些文件。
17、在一个操作系统中系统调用的目的是什么? 答:系统调用允许用户进程访问并执行操作系统内核中的功能,用户程序通过系统调用来使用操作系统提供的服务。
18、对于下列系统调用,给出引起失败的条件:fork、exec以及unlink。
答:如果进程表中没有空闲的槽(或者没有内存和交换空间),fork将失败。如果所给的文件名不存在,或者不是一个有效的可执行文件,exec将失败。如果将要解除链接的文件不存在,或者调用unlink的进程没有权限,则unlink将失败。
19、在count = write(fd, buffer, nbytes);调用中,能在count中而不是nbytes中返回值吗?如果能,为什么?
答:能。如果fd不正确,调用失败,将返回-1。同样,如果磁盘已满,调用也会失败,要求写入的字节数和实际写入的字节数可能不等。在正确终止时,总是返回nbytes。
20、有一个文件,其文件描述符是fd,内含下列字节序列:3,1,4,1,5,9,2,6,5,3,5。有如下系统调用:
lseek(fd, 3, SEEK_SET);
read(fd, &buffer, 4);
其中lseek将文件位置定位到字节3处。读操作完成之后,buffer中的内容是什么?
答:buffer中包含字节:1, 5, 9, 2。
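下面用一小段 C 代码演示第 19、20 题涉及的调用及其返回值检查(示意性片段,文件名 data.bin 为假设):

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    char buffer[4];
    int fd = open("data.bin", O_RDONLY);   /* "data.bin" 为假设的文件名 */
    if (fd < 0) {
        perror("open");                    /* 打开失败时 open 返回 -1 */
        return 1;
    }
    /* 对应第 20 题:先定位到偏移量 3,再读 4 个字节,得到 1,5,9,2 */
    lseek(fd, 3, SEEK_SET);
    ssize_t n = read(fd, buffer, sizeof(buffer));
    if (n > 0) {
        /* 对应第 19 题:write 失败(如 fd 非法或磁盘已满)时返回 -1,
           成功时返回实际写入的字节数 */
        ssize_t count = write(STDOUT_FILENO, buffer, (size_t)n);
        if (count < 0)
            perror("write");
    }
    close(fd);
    return 0;
}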
12、在用户程序进行一个系统调用,以读写磁盘文件时,该程序提供指示说明了所需要的文件、一个指向数据缓冲区的指针以及计数。然后,控制权转给操作系统,它调用相关的驱动程序。假设驱动程序启动磁盘并且直到中断发生才终止。在从磁盘读的情况下,很明显,调用者会被阻塞(因为文件中没有数据)。在向磁盘写时会发生什么情况?需要把调用者阻塞一直等到磁盘传送完成为止吗?
答:也许。如果调用者取回控制权并立即重写数据,那么当写操作最终发生时,写入的将是错误的数据。然而,如果驱动程序在返回之前首先将数据复制到一个专用的缓冲区,那么调用者可以立即继续执行。另一个可能性是允许调用者继续,并且在缓冲区可以再用时给它一个信号,但是这需要很高的技巧,而且容易出错。
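下面是一个示意性的 C 片段,说明"驱动程序在返回之前先把数据复制到专用缓冲区"这种做法;其中 start_disk_write、缓冲区大小等均为假设,这里仅用打印来模拟启动一次磁盘写:

#include <stdio.h>
#include <string.h>

#define BUF_SIZE 4096                 /* 假设的专用缓冲区大小 */

static char private_buf[BUF_SIZE];    /* 驱动程序内部的专用缓冲区 */

/* 假设的底层例程:真实驱动程序在这里启动磁盘传送,此处仅打印模拟 */
static void start_disk_write(const char *buf, int nbytes)
{
    printf("启动磁盘写,%d 字节\n", nbytes);
    (void)buf;
}

/* 先复制数据再启动磁盘,调用者无须等到传送完成即可继续执行 */
int driver_write(const char *user_buf, int nbytes)
{
    if (nbytes > BUF_SIZE)
        return -1;                    /* 简化处理:超长请求直接报错 */
    memcpy(private_buf, user_buf, nbytes);   /* 复制到专用缓冲区 */
    start_disk_write(private_buf, nbytes);   /* 启动磁盘传送 */
    return nbytes;                    /* 不必等待中断即可返回 */
}

int main(void)
{
    char data[] = "hello";
    driver_write(data, (int)sizeof(data));
    return 0;
}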
现代操作系统第三版中文答案
【篇一:操作系统课后答案】
思考与练习题
1. 什么是操作系统?它的主要功能是什么?
2. 什么是多道程序设计技术?多道程序设计技术的主要特点是什么?
3. 批处理系统是怎样的一种操作系统?它的特点是什么?
4. 什么是分时系统?什么是实时系统?试从交互性、及时性、独立性、多路性、可靠性等几个方面比较分时系统和实时系统。
5. 实时系统分为哪两种类型?
6. 操作系统的主要特征是什么?
7. 操作系统与用户的接口有几种?它们各自用在什么场合?
8. "操作系统是控制硬件的软件"这一说法确切吗?为什么?
9. 设内存中有三道程序 a、b、c,它们按 a→b→c 的先后顺序执行,它们进行"计算"和"I/O 操作"的时间如表 1-2 所示,假设三道程序使用相同的 I/O 设备。
(1) 试画出单道运行时三道程序的时间关系图,并计算完成三道程序要花多少时间。
(2) 试画出多道运行时三道程序的时间关系图,并计算完成三道程序要花多少时间。
10.将下列左右两列词连接起来形成意义最恰当的5对。
左列:DOS、OS/2、UNIX、Linux;右列:网络操作系统、自由软件、多任务、单任务。
11. 选择一个现代操作系统,查找和阅读相关的技术资料,写一篇关于操作系统如何进行内存管理、存储管理、设备管理和文件管理的文章。
答案1.答:操作系统是控制和管理计算机的软、硬件资源,合理地组织计算机的工作流程,以方便用户使用的程序集合。
2.答:把多个独立的程序同时放入内存,使它们共享系统中的资源。
1)多道,即计算机内存中同时放多道相互独立的程序。
2)宏观上并行,是指同时进入系统的多道程序都处于运行过程中。
3)微观上串行,是指在单道处理机环境下,内存中的多道程序轮流地占有cpu,交替执行。
3.答:批处理操作系统是一种基本的操作系统类型。
在该系统中用户的作业被成批地输入到计算机中,然后在操作系统的控制下,用户的作业自动的执行。
特点是:资源利用率高。
系统吞吐量大。
现代操作系统(第三版)答案
MODERNOPERATINGSYSTEMSTHIRD EDITION PROBLEM SOLUTIONSANDREW S.TANENBAUMVrije UniversiteitAmsterdam,The NetherlandsPRENTICE HALLUPPER SADDLE RIVER,NJ07458Copyright Pearson Education,Inc.2008SOLUTIONS TO CHAPTER1PROBLEMS1.Multiprogramming is the rapid switching of the CPU between multiple proc-esses in memory.It is commonly used to keep the CPU busy while one or more processes are doing I/O.2.Input spooling is the technique of reading in jobs,for example,from cards,onto the disk,so that when the currently executing processes arefinished, there will be work waiting for the CPU.Output spooling consists offirst copying printablefiles to disk before printing them,rather than printing di-rectly as the output is generated.Input spooling on a personal computer is not very likely,but output spooling is.3.The prime reason for multiprogramming is to give the CPU something to dowhile waiting for I/O to complete.If there is no DMA,the CPU is fully occu-pied doing I/O,so there is nothing to be gained(at least in terms of CPU utili-zation)by multiprogramming.No matter how much I/O a program does,the CPU will be100%busy.This of course assumes the major delay is the wait while data are copied.A CPU could do other work if the I/O were slow for other reasons(arriving on a serial line,for instance).4.It is still alive.For example,Intel makes Pentium I,II,and III,and4CPUswith a variety of different properties including speed and power consumption.All of these machines are architecturally compatible.They differ only in price and performance,which is the essence of the family idea.5.A25×80character monochrome text screen requires a2000-byte buffer.The1024×768pixel24-bit color bitmap requires2,359,296bytes.In1980these two options would have cost$10and$11,520,respectively.For current prices,check on how much RAM currently costs,probably less than$1/MB.6.Consider fairness and real time.Fairness requires that each process be allo-cated its resources in a fair way,with no process getting more than its fair share.On the other hand,real time requires that resources be allocated based on the times when different processes must complete their execution.A real-time process may get a disproportionate share of the resources.7.Choices(a),(c),and(d)should be restricted to kernel mode.8.It may take20,25or30msec to complete the execution of these programsdepending on how the operating system schedules them.If P0and P1are scheduled on the same CPU and P2is scheduled on the other CPU,it will take20mses.If P0and P2are scheduled on the same CPU and P1is scheduled on the other CPU,it will take25msec.If P1and P2are scheduled on the same CPU and P0is scheduled on the other CPU,it will take30msec.If all three are on the same CPU,it will take35msec.2PROBLEM SOLUTIONS FOR CHAPTER19.Every nanosecond one instruction emerges from the pipeline.This means themachine is executing1billion instructions per second.It does not matter at all how many stages the pipeline has.A10-stage pipeline with1nsec per stage would also execute1billion instructions per second.All that matters is how often afinished instruction pops out the end of the pipeline.10.Average access time=0.95×2nsec(word is cache)+0.05×0.99×10nsec(word is in RAM,but not in cache)+0.05×0.01×10,000,000nsec(word on disk only)=5002.395nsec=5.002395μsec11.The manuscript contains80×50×700=2.8million characters.This is,ofcourse,impossible tofit into the registers of any currently available CPU and is too big for a1-MB cache,but if such hardware were available,the manuscript could be scanned 
in2.8msec from the registers or5.8msec from the cache.There are approximately27001024-byte blocks of data,so scan-ning from the disk would require about27seconds,and from tape2minutes7 seconds.Of course,these times are just to read the data.Processing and rewriting the data would increase the time.12.Maybe.If the caller gets control back and immediately overwrites the data,when the writefinally occurs,the wrong data will be written.However,if the driverfirst copies the data to a private buffer before returning,then the caller can be allowed to continue immediately.Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused,but this is tricky and error prone.13.A trap instruction switches the execution mode of a CPU from the user modeto the kernel mode.This instruction allows a user program to invoke func-tions in the operating system kernel.14.A trap is caused by the program and is synchronous with it.If the program isrun again and again,the trap will always occur at exactly the same position in the instruction stream.An interrupt is caused by an external event and its timing is not reproducible.15.The process table is needed to store the state of a process that is currentlysuspended,either ready or blocked.It is not needed in a single process sys-tem because the single process is never suspended.16.Mounting afile system makes anyfiles already in the mount point directoryinaccessible,so mount points are normally empty.However,a system admin-istrator might want to copy some of the most importantfiles normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being repaired.PROBLEM SOLUTIONS FOR CHAPTER13 17.A system call allows a user process to access and execute operating systemfunctions inside the er programs use system calls to invoke operat-ing system services.18.Fork can fail if there are no free slots left in the process table(and possibly ifthere is no memory or swap space left).Exec can fail if thefile name given does not exist or is not a valid executablefile.Unlink can fail if thefile to be unlinked does not exist or the calling process does not have the authority to unlink it.19.If the call fails,for example because fd is incorrect,it can return−1.It canalso fail because the disk is full and it is not possible to write the number of bytes requested.On a correct termination,it always returns nbytes.20.It contains the bytes:1,5,9,2.21.Time to retrieve thefile=1*50ms(Time to move the arm over track#50)+5ms(Time for thefirst sector to rotate under the head)+10/100*1000ms(Read10MB)=155ms22.Block specialfiles consist of numbered blocks,each of which can be read orwritten independently of all the other ones.It is possible to seek to any block and start reading or writing.This is not possible with character specialfiles.23.System calls do not really have names,other than in a documentation sense.When the library procedure read traps to the kernel,it puts the number of the system call in a register or on the stack.This number is used to index into a table.There is really no name used anywhere.On the other hand,the name of the library procedure is very important,since that is what appears in the program.24.Yes it can,especially if the kernel is a message-passing system.25.As far as program logic is concerned it does not matter whether a call to a li-brary procedure results in a system call.But if performance is an issue,if a task can be accomplished 
without a system call the program will run faster.Every system call involves overhead time in switching from the user context to the kernel context.Furthermore,on a multiuser system the operating sys-tem may schedule another process to run when a system call completes, further slowing the progress in real time of a calling process.26.Several UNIX calls have no counterpart in the Win32API:Link:a Win32program cannot refer to afile by an alternative name or see it in more than one directory.Also,attempting to create a link is a convenient way to test for and create a lock on afile.4PROBLEM SOLUTIONS FOR CHAPTER1Mount and umount:a Windows program cannot make assumptions about standard path names because on systems with multiple disk drives the drive name part of the path may be different.Chmod:Windows uses access control listsKill:Windows programmers cannot kill a misbehaving program that is not cooperating.27.Every system architecture has its own set of instructions that it can execute.Thus a Pentium cannot execute SPARC programs and a SPARC cannot exe-cute Pentium programs.Also,different architectures differ in bus architecture used(such as VME,ISA,PCI,MCA,SBus,...)as well as the word size of the CPU(usually32or64bit).Because of these differences in hardware,it is not feasible to build an operating system that is completely portable.A highly portable operating system will consist of two high-level layers---a machine-dependent layer and a machine independent layer.The machine-dependent layer addresses the specifics of the hardware,and must be implemented sepa-rately for every architecture.This layer provides a uniform interface on which the machine-independent layer is built.The machine-independent layer has to be implemented only once.To be highly portable,the size of the machine-dependent layer must be kept as small as possible.28.Separation of policy and mechanism allows OS designers to implement asmall number of basic primitives in the kernel.These primitives are sim-plified,because they are not dependent of any specific policy.They can then be used to implement more complex mechanisms and policies at the user level.29.The conversions are straightforward:(a)A micro year is10−6×365×24×3600=31.536sec.(b)1000meters or1km.(c)There are240bytes,which is1,099,511,627,776bytes.(d)It is6×1024kg.SOLUTIONS TO CHAPTER2PROBLEMS1.The transition from blocked to running is conceivable.Suppose that a processis blocked on I/O and the I/Ofinishes.If the CPU is otherwise idle,the proc-ess could go directly from blocked to running.The other missing transition, from ready to blocked,is impossible.A ready process cannot do I/O or any-thing else that might block it.Only a running process can block.PROBLEM SOLUTIONS FOR CHAPTER25 2.You could have a register containing a pointer to the current process tableentry.When I/O completed,the CPU would store the current machine state in the current process table entry.Then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process table entry(the ser-vice procedure).This process would then be started up.3.Generally,high-level languages do not allow the kind of access to CPU hard-ware that is required.For instance,an interrupt handler may be required to enable and disable the interrupt servicing a particular device,or to manipulate data within a process’stack area.Also,interrupt service routines must exe-cute as rapidly as possible.4.There are several reasons for using a separate stack for the kernel.Two ofthem are as 
follows.First,you do not want the operating system to crash be-cause a poorly written user program does not allow for enough stack space.Second,if the kernel leaves stack data in a user program’s memory space upon return from a system call,a malicious user might be able to use this data tofind out information about other processes.5.If each job has50%I/O wait,then it will take20minutes to complete in theabsence of competition.If run sequentially,the second one willfinish40 minutes after thefirst one starts.With two jobs,the approximate CPU utiliza-tion is1−0.52.Thus each one gets0.375CPU minute per minute of real time.To accumulate10minutes of CPU time,a job must run for10/0.375 minutes,or about26.67minutes.Thus running sequentially the jobsfinish after40minutes,but running in parallel theyfinish after26.67minutes.6.It would be difficult,if not impossible,to keep thefile system consistent.Sup-pose that a client process sends a request to server process1to update afile.This process updates the cache entry in its memory.Shortly thereafter,anoth-er client process sends a request to server2to read thatfile.Unfortunately,if thefile is also cached there,server2,in its innocence,will return obsolete data.If thefirst process writes thefile through to the disk after caching it, and server2checks the disk on every read to see if its cached copy is up-to-date,the system can be made to work,but it is precisely all these disk ac-cesses that the caching system is trying to avoid.7.No.If a single-threaded process is blocked on the keyboard,it cannot fork.8.A worker thread will block when it has to read a Web page from the disk.Ifuser-level threads are being used,this action will block the entire process, destroying the value of multithreading.Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.9.Yes.If the server is entirely CPU bound,there is no need to have multiplethreads.It just adds unnecessary complexity.As an example,consider a tele-phone directory assistance number(like555-1212)for an area with1million6PROBLEM SOLUTIONS FOR CHAPTER2people.If each(name,telephone number)record is,say,64characters,the entire database takes64megabytes,and can easily be kept in the server’s memory to provide fast lookup.10.When a thread is stopped,it has values in the registers.They must be saved,just as when the process is stopped the registers must be saved.Multipro-gramming threads is no different than multiprogramming processes,so each thread needs its own register save area.11.Threads in a process cooperate.They are not hostile to one another.If yield-ing is needed for the good of the application,then a thread will yield.After all,it is usually the same programmer who writes the code for all of them. 
er-level threads cannot be preempted by the clock unless the whole proc-ess’quantum has been used up.Kernel-level threads can be preempted indivi-dually.In the latter case,if a thread runs too long,the clock will interrupt the current process and thus the current thread.The kernel is free to pick a dif-ferent thread from the same process to run next if it so desires.13.In the single-threaded case,the cache hits take15msec and cache misses take90msec.The weighted average is2/3×15+1/3×90.Thus the mean re-quest takes40msec and the server can do25per second.For a multithreaded server,all the waiting for the disk is overlapped,so every request takes15 msec,and the server can handle662/3requests per second.14.The biggest advantage is the efficiency.No traps to the kernel are needed toswitch threads.The biggest disadvantage is that if one thread blocks,the en-tire process blocks.15.Yes,it can be done.After each call to pthread create,the main programcould do a pthread join to wait until the thread just created has exited before creating the next thread.16.The pointers are really necessary because the size of the global variable isunknown.It could be anything from a character to an array offloating-point numbers.If the value were stored,one would have to give the size to create global,which is all right,but what type should the second parameter of set global be,and what type should the value of read global be?17.It could happen that the runtime system is precisely at the point of blocking orunblocking a thread,and is busy manipulating the scheduling queues.This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching,since they might be in an inconsistent state.One solution is to set aflag when the run-time system is entered.The clock handler would see this and set its ownflag, then return.When the runtime systemfinished,it would check the clockflag, see that a clock interrupt occurred,and now run the clock handler.PROBLEM SOLUTIONS FOR CHAPTER27 18.Yes it is possible,but inefficient.A thread wanting to do a system callfirstsets an alarm timer,then does the call.If the call blocks,the timer returns control to the threads package.Of course,most of the time the call will not block,and the timer has to be cleared.Thus each system call that might block has to be executed as three system calls.If timers go off prematurely,all kinds of problems can develop.This is not an attractive way to build a threads package.19.The priority inversion problem occurs when a low-priority process is in itscritical region and suddenly a high-priority process becomes ready and is scheduled.If it uses busy waiting,it will run forever.With user-level threads,it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread run.There is no preemption.With kernel-level threads this problem can arise.20.With round-robin scheduling it works.Sooner or later L will run,and eventu-ally it will leave its critical region.The point is,with priority scheduling,L never gets to run at all;with round robin,it gets a normal time slice periodi-cally,so it has the chance to leave its critical region.21.Each thread calls procedures on its own,so it must have its own stack for thelocal variables,return addresses,and so on.This is equally true for user-level threads as for kernel-level threads.22.Yes.The simulated computer could be multiprogrammed.For example,while process A is running,it reads out some shared variable.Then a 
simula-ted clock tick happens and process B runs.It also reads out the same vari-able.Then it adds1to the variable.When process A runs,if it also adds one to the variable,we have a race condition.23.Yes,it still works,but it still is busy waiting,of course.24.It certainly works with preemptive scheduling.In fact,it was designed forthat case.When scheduling is nonpreemptive,it might fail.Consider the case in which turn is initially0but process1runsfirst.It will just loop forever and never release the CPU.25.To do a semaphore operation,the operating systemfirst disables interrupts.Then it reads the value of the semaphore.If it is doing a down and the sema-phore is equal to zero,it puts the calling process on a list of blocked processes associated with the semaphore.If it is doing an up,it must check to see if any processes are blocked on the semaphore.If one or more processes are block-ed,one of them is removed from the list of blocked processes and made run-nable.When all these operations have been completed,interrupts can be enabled again.8PROBLEM SOLUTIONS FOR CHAPTER226.Associated with each counting semaphore are two binary semaphores,M,used for mutual exclusion,and B,used for blocking.Also associated with each counting semaphore is a counter that holds the number of up s minus the number of down s,and a list of processes blocked on that semaphore.To im-plement down,a processfirst gains exclusive access to the semaphores, counter,and list by doing a down on M.It then decrements the counter.If it is zero or more,it just does an up on M and exits.If M is negative,the proc-ess is put on the list of blocked processes.Then an up is done on M and a down is done on B to block the process.To implement up,first M is down ed to get mutual exclusion,and then the counter is incremented.If it is more than zero,no one was blocked,so all that needs to be done is to up M.If, however,the counter is now negative or zero,some process must be removed from the list.Finally,an up is done on B and M in that order.27.If the program operates in phases and neither process may enter the nextphase until both arefinished with the current phase,it makes perfect sense to use a barrier.28.With kernel threads,a thread can block on a semaphore and the kernel canrun some other thread in the same process.Consequently,there is no problem using semaphores.With user-level threads,when one thread blocks on a semaphore,the kernel thinks the entire process is blocked and does not run it ever again.Consequently,the process fails.29.It is very expensive to implement.Each time any variable that appears in apredicate on which some process is waiting changes,the run-time system must re-evaluate the predicate to see if the process can be unblocked.With the Hoare and Brinch Hansen monitors,processes can only be awakened on a signal primitive.30.The employees communicate by passing messages:orders,food,and bags inthis case.In UNIX terms,the four processes are connected by pipes.31.It does not lead to race conditions(nothing is ever lost),but it is effectivelybusy waiting.32.It will take nT sec.33.In simple cases it may be possible to determine whether I/O will be limitingby looking at source code.For instance a program that reads all its inputfiles into buffers at the start will probably not be I/O bound,but a problem that reads and writes incrementally to a number of differentfiles(such as a compi-ler)is likely to be I/O bound.If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU 
time used by a program,you can compare this with the total time to complete execution of the program.This is,of course,most meaningful on a system where you are the only user.34.For multiple processes in a pipeline,the common parent could pass to the op-erating system information about the flow of data.With this information the OS could,for instance,determine which process could supply output to a process blocking on a call for input.35.The CPU efficiency is the useful CPU time divided by the total CPU time.When Q ≥T ,the basic cycle is for the process to run for T and undergo a process switch for S .Thus (a)and (b)have an efficiency of T /(S +T ).When the quantum is shorter than T ,each run of T will require T /Q process switches,wasting a time ST /Q .The efficiency here is thenT +ST /QT which reduces to Q /(Q +S ),which is the answer to (c).For (d),we just sub-stitute Q for S and find that the efficiency is 50%.Finally,for (e),as Q →0the efficiency goes to 0.36.Shortest job first is the way to minimize average response time.0<X ≤3:X ,3,5,6,9.3<X ≤5:3,X ,5,6,9.5<X ≤6:3,5,X ,6,9.6<X ≤9:3,5,6,X ,9.X >9:3,5,6,9,X.37.For round robin,during the first 10minutes each job gets 1/5of the CPU.Atthe end of 10minutes,C finishes.During the next 8minutes,each job gets 1/4of the CPU,after which time D finishes.Then each of the three remaining jobs gets 1/3of the CPU for 6minutes,until B finishes,and so on.The fin-ishing times for the five jobs are 10,18,24,28,and 30,for an average of 22minutes.For priority scheduling,B is run first.After 6minutes it is finished.The other jobs finish at 14,24,26,and 30,for an average of 18.8minutes.If the jobs run in the order A through E ,they finish at 10,16,18,22,and 30,for an average of 19.2minutes.Finally,shortest job first yields finishing times of 2,6,12,20,and 30,for an average of 14minutes.38.The first time it gets 1quantum.On succeeding runs it gets 2,4,8,and 15,soit must be swapped in 5times.39.A check could be made to see if the program was expecting input and didanything with it.A program that was not expecting input and did not process it would not get any special priority boost.40.The sequence of predictions is 40,30,35,and now 25.41.The fraction of the CPU used is35/50+20/100+10/200+x/250.To beschedulable,this must be less than1.Thus x must be less than12.5msec. 
42.Two-level scheduling is needed when memory is too small to hold all theready processes.Some set of them is put into memory,and a choice is made from that set.From time to time,the set of in-core processes is adjusted.This algorithm is easy to implement and reasonably efficient,certainly a lot better than,say,round robin without regard to whether a process was in memory or not.43.Each voice call runs200times/second and uses up1msec per burst,so eachvoice call needs200msec per second or400msec for the two of them.The video runs25times a second and uses up20msec each time,for a total of 500msec per second.Together they consume900msec per second,so there is time left over and the system is schedulable.44.The kernel could schedule processes by any means it wishes,but within eachprocess it runs threads strictly in priority order.By letting the user process set the priority of its own threads,the user controls the policy but the kernel handles the mechanism.45.The change would mean that after a philosopher stopped eating,neither of hisneighbors could be chosen next.In fact,they would never be chosen.Sup-pose that philosopher2finished eating.He would run test for philosophers1 and3,and neither would be started,even though both were hungry and both forks were available.Similarly,if philosopher4finished eating,philosopher3 would not be started.Nothing would start him.46.If a philosopher blocks,neighbors can later see that she is hungry by checkinghis state,in test,so he can be awakened when the forks are available.47.Variation1:readers have priority.No writer may start when a reader is ac-tive.When a new reader appears,it may start immediately unless a writer is currently active.When a writerfinishes,if readers are waiting,they are all started,regardless of the presence of waiting writers.Variation2:Writers have priority.No reader may start when a writer is waiting.When the last ac-tive processfinishes,a writer is started,if there is one;otherwise,all the readers(if any)are started.Variation3:symmetric version.When a reader is active,new readers may start immediately.When a writerfinishes,a new writer has priority,if one is waiting.In other words,once we have started reading,we keep reading until there are no readers left.Similarly,once we have started writing,all pending writers are allowed to run.48.A possible shell script might beif[!–f numbers];then echo0>numbers;ficount=0while(test$count!=200)docount=‘expr$count+1‘n=‘tail–1numbers‘expr$n+1>>numbersdoneRun the script twice simultaneously,by starting it once in the background (using&)and again in the foreground.Then examine thefile numbers.It will probably start out looking like an orderly list of numbers,but at some point it will lose its orderliness,due to the race condition created by running two cop-ies of the script.The race can be avoided by having each copy of the script test for and set a lock on thefile before entering the critical area,and unlock-ing it upon leaving the critical area.This can be done like this:if ln numbers numbers.lockthenn=‘tail–1numbers‘expr$n+1>>numbersrm numbers.lockfiThis version will just skip a turn when thefile is inaccessible,variant solu-tions could put the process to sleep,do busy waiting,or count only loops in which the operation is successful.SOLUTIONS TO CHAPTER3PROBLEMS1.It is an accident.The base register is16,384because the program happened tobe loaded at address16,384.It could have been loaded anywhere.The limit register is16,384because the program contains16,384bytes.It could have been any 
length.That the load address happens to exactly match the program length is pure coincidence.2.Almost the entire memory has to be copied,which requires each word to beread and then rewritten at a different location.Reading4bytes takes10nsec, so reading1byte takes2.5nsec and writing it takes another2.5nsec,for a total of5nsec per byte compacted.This is a rate of200,000,000bytes/sec.To copy128MB(227bytes,which is about1.34×108bytes),the computer needs227/200,000,000sec,which is about671msec.This number is slightly pessimistic because if the initial hole at the bottom of memory is k bytes, those k bytes do not need to be copied.However,if there are many holes andmany data segments,the holes will be small,so k will be small and the error in the calculation will also be small.3.The bitmap needs1bit per allocation unit.With227/n allocation units,this is224/n bytes.The linked list has227/216or211nodes,each of8bytes,for a total of214bytes.For small n,the linked list is better.For large n,the bitmap is better.The crossover point can be calculated by equating these two formu-las and solving for n.The result is1KB.For n smaller than1KB,a linked list is better.For n larger than1KB,a bitmap is better.Of course,the assumption of segments and holes alternating every64KB is very unrealistic.Also,we need n<=64KB if the segments and holes are64KB.4.Firstfit takes20KB,10KB,18KB.Bestfit takes12KB,10KB,and9KB.Worstfit takes20KB,18KB,and15KB.Nextfit takes20KB,18KB,and9 KB.5.For a4-KB page size the(page,offset)pairs are(4,3616),(8,0),and(14,2656).For an8-KB page size they are(2,3616),(4,0),and(7,2656).6.They built an MMU and inserted it between the8086and the bus.Thus all8086physical addresses went into the MMU as virtual addresses.The MMU then mapped them onto physical addresses,which went to the bus.7.(a)M has to be at least4,096to ensure a TLB miss for every access to an ele-ment of X.Since N only affects how many times X is accessed,any value of N will do.(b)M should still be atleast4,096to ensure a TLB miss for every access to anelement of X.But now N should be greater than64K to thrash the TLB, that is,X should exceed256KB.8.The total virtual address space for all the processes combined is nv,so thismuch storage is needed for pages.However,an amount r can be in RAM,so the amount of disk storage required is only nv−r.This amount is far more than is ever needed in practice because rarely will there be n processes ac-tually running and even more rarely will all of them need the maximum al-lowed virtual memory.9.The page table contains232/213entries,which is524,288.Loading the pagetable takes52msec.If a process gets100msec,this consists of52msec for loading the page table and48msec for running.Thus52%of the time is spent loading page tables.10.(a)We need one entry for each page,or224=16×1024×1024entries,sincethere are36=48−12bits in the page numberfield.。
操作系统第三版习题答案
[图:程序 A、B 多道运行的时间关系图——横轴为时间(单位 ms,标注 50、100、130、200、230、280、380),分别画出输入设备、CPU、打印设备上程序 A 与程序 B 的活动区间。]
(2) CPU 有空闲等待,它发生在 100ms∼130ms 时间段内,此时间段内程序 A 与程序 B
都在进行 I/O 操作。
(3) 程序 A 无等待现象,程序 B 在 0ms∼50ms 时间段与 200ms∼230ms 时间段内有等待现象。
3、设三道程序,按照 A、B、C 优先次序运行,其内部计算和 I/O 操作时间由图给出:
A:C11=30ms、I12=40ms、C13=10ms
B:C21=60ms、I22=30ms、C23=10ms
C:C31=20ms、I32=40ms、C33=20ms
试画出按多道运行的时间关系图(忽略调度执行时间)。完成三道程序共花多少时间?比单道程序节省了多少时间?若处理器调度程序每次运行程序的转换时间花 1ms,试画出各程序状态转换的时间关系图。
解答:完成三道程序,抢占式花费时间是 190ms,非抢占式花费时间是 180ms,单道花费时间是 260ms,抢占式比单道节省时间 70ms。
单道程序运行时间:260ms,其中 A:30+40+10=80ms,B:60+30+10=100ms,C:20+40+20=80ms。
4、在单 CPU 和两台 I/O 设备(I1 和 I2)的多道程序设计环境下,同时投入三个作业运行。它们的执行轨迹如下:
Job1:I2(30ms)、CPU(10ms)、I1(30ms)、CPU(10ms)、I2(20ms)
Job2:I1(20ms)、CPU(20ms)、I2(40ms)
Job3:CPU(30ms)、I1(20ms)、CPU(10ms)、I1(10ms)
如果 CPU、I1 和 I2 都能并行工作,优先级从高到低为 Job1、Job2 和 Job3,优先级高的作业可以抢占优先级低的作业的 CPU,但是不抢占 I1 和 I2。试求:
(1) 每个作业从投入到完成分别需要多少时间。
(2) 从投入到完成 CPU 的利用率。
(3) I/O 设备的利用率。
答:(1) Job1、Job2、Job3 从投入到完成分别所需时间为 110ms、90ms、110ms。
(2) 从投入到完成,CPU 的利用率是 72.7%。
(3) I1 的利用率是 72.7%,I2 的利用率是 81.8%。(利用率的验算见下文的示意程序。)
5、在单 CPU 和两台 I/O 设备(I1 和 I2)的多道程序设计环境下,同时投入三个作业运行。它们的执行轨迹如下:
Job1:I2(30ms)、CPU(10ms)、I1(30ms)、CPU(10ms)
Job2:I1(20ms)、CPU(20ms)、I2(40ms)
Job3:CPU(30ms)、I1(20ms)
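以第 4 题为例,下面的 C 程序按照题目给出的执行轨迹累加各部件的忙碌时间,并直接算出 CPU 与两台 I/O 设备的利用率(作业全部完成的时刻取上面答案中的 110ms,仅作验算示意):

#include <stdio.h>

int main(void)
{
    /* 按第 4 题的执行轨迹累加各部件的忙碌时间(单位:ms) */
    int cpu_busy = (10 + 10) + 20 + (30 + 10);   /* Job1 + Job2 + Job3 = 80ms */
    int i1_busy  = 30 + 20 + (20 + 10);          /* 80ms */
    int i2_busy  = (30 + 20) + 40;               /* 90ms */
    int total    = 110;                          /* 三个作业全部完成的时刻 */

    printf("CPU 利用率: %.1f%%\n", 100.0 * cpu_busy / total);
    printf("I1 利用率 : %.1f%%\n", 100.0 * i1_busy / total);
    printf("I2 利用率 : %.1f%%\n", 100.0 * i2_busy / total);
    return 0;
}

运行结果约为 72.7%、72.7% 和 81.8%,与上面的答案一致。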
计算机操作系统第三版课后习题答案
第一章
1.设计现代OS的主要目标是什么?答:(1)有效性 (2)方便性 (3)可扩充性 (4)开放性。
2.OS的作用可表现在哪几个方面?答:(1)OS作为用户与计算机硬件系统之间的接口 (2)OS作为计算机系统资源的管理者 (3)OS实现了对计算机资源的抽象。
3.为什么说OS实现了对计算机资源的抽象?答:OS首先在裸机上覆盖一层I/O设备管理软件,实现了对计算机硬件操作的第一层次抽象;在第一层软件上再覆盖文件管理软件,实现了对硬件资源操作的第二层次抽象。
OS通过在计算机硬件上安装多层系统软件,增强了系统功能,隐藏了对硬件操作的细节,由它们共同实现了对计算机资源的抽象。
4.试说明推动多道批处理系统形成和发展的主要动力是什么?答:主要动力来源于四个方面的社会需求与技术发展:(1)不断提高计算机资源的利用率;(2)方便用户;(3)器件的不断更新换代;(4)计算机体系结构的不断发展。
5.何谓脱机I/O和联机I/O?答:脱机I/O 是指事先将装有用户程序和数据的纸带或卡片装入纸带输入机或卡片机,在外围机的控制下,把纸带或卡片上的数据或程序输入到磁带上。
该方式下的输入输出由外围机控制完成,是在脱离主机的情况下进行的。
而联机I/O 方式是指程序和数据的输入输出都是在主机的直接控制下进行的。
6.试说明推动分时系统形成和发展的主要动力是什么?答:推动分时系统形成和发展的主要动力是更好地满足用户的需要。
主要表现在:CPU 的分时使用缩短了作业的平均周转时间;人机交互能力使用户能直接控制自己的作业;主机的共享使多用户能同时使用同一台计算机,独立地处理自己的作业。
7.实现分时系统的关键问题是什么?应如何解决?答:关键问题是当用户在自己的终端上键入命令时,系统应能及时接收并及时处理该命令,在用户能接受的时延内将结果返回给用户。
解决方法:针对及时接收问题,可以在系统中设置多路卡,使主机能同时接收用户从各个终端上输入的数据;为每个终端配置缓冲区,暂存用户键入的命令或数据。
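作为示意,下面用 C 给出一个"为每个终端配置缓冲区、暂存用户键入的字符,攒成一行后再交给主机处理"的极简结构,其中终端数、缓冲区大小等均为假设值:

#include <stdio.h>

#define NTERM 8          /* 假设系统连接 8 个终端 */
#define TBUF  128        /* 每个终端的缓冲区暂存 128 个字符(假设值) */

struct terminal_buf {
    char data[TBUF];     /* 暂存用户键入的命令或数据 */
    int  count;          /* 当前已缓冲的字符数 */
};

static struct terminal_buf term[NTERM];

/* 多路卡收到某终端的一个字符时调用:先缓冲,收到换行后再交给主机处理 */
void on_char(int t, char c)
{
    if (term[t].count < TBUF)
        term[t].data[term[t].count++] = c;
    if (c == '\n') {
        printf("终端 %d 提交命令,共 %d 个字符\n", t, term[t].count);
        term[t].count = 0;   /* 交给主机后清空缓冲区 */
    }
}

int main(void)
{
    const char *cmd = "ls\n";
    for (int i = 0; cmd[i]; i++)
        on_char(0, cmd[i]);
    return 0;
}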
现代操作系统(原书第3版)部分课后答案-第4章
1.这些系统直接把程序载入内存,并且从word0(魔数)开始执行。
为了避免将header作为代码执行,魔数是一条branch指令,其目标地址正好在header之上。
按这种方法,就可能把二进制文件直接读取到新的进程地址空间,并且从0开始运行。
5.rename 调用不会改变文件的创建时间和最后修改时间;而先复制再删除原文件的做法会创建一个新文件,其创建时间和最后修改时间都会变为当前的系统时间。
另外,如果磁盘满,复制可能会失败。
10.由于这些被浪费的空间在分配单元(文件)之间,而不是在它们内部,因此,这是外部碎片。
这类似于交换系统或者纯分段系统中出现的外部碎片。
11.传输前的延迟是9ms,传输速率是2^23字节/秒,文件大小是2^13字节,故把一个文件从磁盘读入内存或从内存写回磁盘的时间都是 9ms + 2^13/2^23 s ≈ 9.977ms,复制一个文件(先读后写)总共需要 9.977×2 ≈ 19.954ms。
为了压缩8G磁盘,也就是2^20个文件,每个需要19.954ms,总共就需要20,923 秒。
因此,在每个文件删除后都压缩磁盘不是一个好办法。
12.因为在系统删除的所有文件都会以碎片的形式存在磁盘中,当碎片到达一定量磁盘就不能再装文件了,必须进行外部清理,所以紧缩磁盘会释放更多的存储空间,但在每个文件删除后都压缩磁盘不是一个好办法。
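下面的 C 程序按上述条件(8GB 磁盘、每个文件 2^13 字节、传输率 2^23 字节/秒、传输前延迟 9ms)重算一遍紧缩整个磁盘所需的时间,仅作验算示意:

#include <stdio.h>

int main(void)
{
    double seek_ms   = 9.0;                        /* 传输前的延迟 9 ms */
    double rate      = 1 << 23;                    /* 传输率:2^23 字节/秒 */
    double file_size = 1 << 13;                    /* 每个文件 2^13 字节 */
    double per_io_ms = seek_ms + file_size / rate * 1000.0;  /* 约 9.977 ms */
    double per_file  = per_io_ms * 2;              /* 先读出再写回,约 19.954 ms */
    long long files  = (8LL << 30) / (1 << 13);    /* 8GB / 8KB = 2^20 个文件 */

    printf("复制一个文件约 %.3f ms\n", per_file);
    printf("紧缩整个磁盘约 %.0f 秒\n", files * per_file / 1000.0);
    return 0;
}

输出约为每个文件 19.95ms、整盘约两万余秒,与上面的结论一致。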
15.由于1024KB = 2^20B, 所以可以容纳的磁盘地址个数是2^20/4 = 2^18个磁盘地址,间接块可以保存2^18个磁盘地址。
与 10 个直接的磁盘地址一道,最大文件有 262154 块。
由于每块为 1 MB,最大的文件是262154 MB。
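这一结果也可以用下面几行 C 代码按上述条件(每块 1MB、每个磁盘地址 4 字节、10 个直接地址加一个一次间接块)直接算出,仅作示意:

#include <stdio.h>

int main(void)
{
    long block_size = 1L << 20;                /* 每块 1 MB = 2^20 字节 */
    long addr_size  = 4;                       /* 每个磁盘地址 4 字节 */
    long indirect   = block_size / addr_size;  /* 一次间接块可存 2^18 个地址 */
    long direct     = 10;                      /* 10 个直接磁盘地址 */
    long max_blocks = direct + indirect;       /* 262,154 块 */

    printf("最大文件 = %ld 块,即 %ld MB\n", max_blocks, max_blocks);
    return 0;
}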
19.每个磁盘地址需要D位,且有F个空闲块,故空闲表需要DF位;采用位图法则需要B位。当DF<B时,空闲表占用的空间少于位图。当D=16时,可得F/B<1/D=6.25%,即只有当空闲空间所占比例少于6.25%时,空闲表才更省空间。
20.a) 1111 1111 1111 0000
b) 1000 0001 1111 0000
c) 1111 1111 1111 1100
d) 1111 1110 0000 1100
27.平均时间 T = 1ms×h + 40ms×(1−h) = (40−39h) ms。
28.磁盘转速为15000rpm(每分钟15000转),每转需要 60s/15000 = 0.004s = 4ms,平均旋转延迟为半转,即2ms;读取一个k字节的块所需要的时间T是平均寻道时间、平均旋转延迟和传送时间之和(计算示意见下面的程序)。
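下面给出一个按"平均寻道时间 + 平均旋转延迟 + 传送时间"计算读取 k 字节块所需时间的 C 函数示意;其中转速取上面的 15000 转/分,而寻道时间、每磁道容量等参数此处未给出,代码中的取值仅为占位示例:

#include <stdio.h>

/* 按"平均寻道时间 + 平均旋转延迟 + 传送时间"计算读取 k 字节块的时间(ms)。
   参数取值仅为示例,实际应代入题目给出的数据。 */
double block_read_ms(double seek_ms, double rpm, double track_bytes, double k)
{
    double rev_ms  = 60000.0 / rpm;             /* 旋转一周所需的时间 */
    double rot_ms  = rev_ms / 2.0;              /* 平均旋转延迟为半转 */
    double xfer_ms = k / track_bytes * rev_ms;  /* 读 k 字节所占的转动时间 */
    return seek_ms + rot_ms + xfer_ms;
}

int main(void)
{
    /* 例:15000 转/分,每转 4ms,平均旋转延迟 2ms;寻道时间 8ms 与每磁道 64KB 为假设值 */
    printf("T = %.3f ms\n", block_read_ms(8.0, 15000.0, 65536.0, 4096.0));
    return 0;
}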
操作系统课后习题答案第三版
一、名词解释
1、操作系统:是位于硬件层之上,所有其它软件之下的一个系统软件,是管理系统中的软、硬件资源,使其得以充分利用并方便用户使用的程序集合。
2、进程:具有一定独立功能的程序关于一个数据集合的一次运行活动。
3、线程:也称轻进程,是进程内的一个相对独立的执行流。
4、设备无关性:用户在使用设备时,选用逻辑设备,而不必面对一种设备一种接口。设备管理实现逻辑设备到物理设备的映射,这就是设备无关性。
5、数组多路通道:是指连接多台设备、同时为多台设备服务、每次输入/输出一个数据块的通道,这样的通道叫数组多路通道。
6、死锁:一组并发进程,因争夺彼此占用的资源而无法执行下去,这种僵局叫死锁。
7、文件系统:是指与文件管理有关的那部分软件、被管理的文件及管理所需的数据结构的总体。
8、并发进程:进程是一个程序段在其数据集合上的一次运行过程,而并发进程是可以与其它进程并发运行的进程。
9、临界区:是关于临界资源访问的代码段。
10、虚拟存储器:是一种扩大内存容量的设计技术,它把辅助存储器作为计算机内存储器的后援,这种实际上不存在的、被扩大了的存储器叫虚拟存储器。
11、动态重定位:在程序运行时,将逻辑地址映射为物理地址的过程叫动态重定位(见下面的示意代码)。
12、作业:用户要求计算机系统为其完成的计算任务的集合。
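动态重定位的核心是在每次访存时计算"物理地址 = 基址 + 逻辑地址"并做界限检查,下面用 C 给出一个极简示意,其中基址与限长的数值仅为假设:

#include <stdio.h>

/* 模拟重定位寄存器:基址与限长(数值仅为示例) */
static long base_reg  = 16384;
static long limit_reg = 8192;

/* 运行时把逻辑地址映射为物理地址,越界则返回 -1 */
long relocate(long logical)
{
    if (logical < 0 || logical >= limit_reg) {
        printf("地址越界: %ld\n", logical);
        return -1;
    }
    return base_reg + logical;
}

int main(void)
{
    printf("逻辑地址 100  -> 物理地址 %ld\n", relocate(100));
    printf("逻辑地址 9000 -> %ld\n", relocate(9000));
    return 0;
}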
13、中断:在程序运行过程中,出现的某种紧急事件,必须中止当前正在运行的程序,转去处理此事件,然后再恢复原来运行的程序,这个过程称为中断。
14、文件:具有符号名而且在逻辑上具有完整意义的信息项的有序序列。
15、进程互斥:两个或两个以上的进程,不同时进入关于同一组共享变量的临界区域,否则可能发生与时间有关的错误,这种现象叫互斥。
16、系统开销:指运行操作系统程序,对系统进行管理而花费的时间和空间。
17、通道:由通道独立控制完成I/O操作,全部完成后向CPU发出中断,CPU执行中断处理程序。
18、系统调用:使用户程序或系统程序在程序一级上请求操作系统为之服务的一种手段。
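例如,下面的 C 程序通过库函数 write 在程序一级请求操作系统完成一次输出,库函数内部再陷入内核执行相应的系统调用(示意性例子):

#include <unistd.h>

int main(void)
{
    /* write 是系统调用的库封装:参数依次为文件描述符、缓冲区、字节数 */
    const char msg[] = "hello, system call\n";
    if (write(STDOUT_FILENO, msg, sizeof(msg) - 1) < 0)
        return 1;    /* 调用失败时 write 返回 -1 */
    return 0;
}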
现代操作系统第三版课后答案1~6章
SOLUTIONS TO CHAPTER 1 PROBLEMS1.An operating system must provide the users with an extended (i.e., virtual) machine, and it must manage the I/O devices and other system resources.2.Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It is commonly used to keep the CPU busy while one or more processes are doing I/O.3. Input spooling is the technique of reading in jobs, for example, from cards, onto the disk, so that when the currently executing processes are finished,there will be work waiting for the CPU. Output spooling consists of first copying printable files to disk before printing them, rather than printing directly as the output is generated. Input spooling on a personal computer is not very likely, but output spooling is.4.The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the CPU will be 100 percent busy. This of course assumes the major delay is the wait while data are copied. A CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).5.Second generation computers did not have the necessary hardware to protect the operating system from malicious user programs.6.It is still alive. For example, Intel makes Pentium I, II, and III, and 4 CPUs with a variety of different properties including speed and power consumption. All of these machines are architecturally compatible. They differ only in price and performance, which is the essence of the family idea.7. A 25 ×80 character monochrome text screen requires a 2000-byte buffer. The 1024 ×768 pixel24-bit color bitmap requires 2,359,296 bytes. In 1980 these two options would have cost $10 and $11,520, respectively. For current prices, check on how much RAM currently costs, probably less than $1/MB.8.Choices (a), (c), and (d) should be restricted to kernel mode.9.Personal computer systems are always interactive, often with only a single user. Mainframe systems nearly always emphasize batch or timesharing with many users. Protection is much more of an issue on mainframe systems, as is efficient use of all resources.10.Every nanosecond one instruction emerges from the pipeline. This means the machine is executing 1 billion instructions per second. It does not matter at all how many stages the pipeline has. A 10-stage pipeline with 1 nsec per stage would also execute 1 billion instructions per second. All that matters is how often a finished instructions pops out the end of the pipeline.11.The manuscript contains 80 ×50 ×700 = 2.8 million characters. This is, of course, impossible to fit into the registers of any currently available CPU and is too big for a 1-MB cache, but if such hardware were available, the manuscript could be scanned in 2.8 msec from the registers or 5.8 msec from the cache. There are approximately 2700 1024-byte blocks of data, so scanning from the disk would require about 27 seconds, and from tape 2 minutes 7seconds. Of course, these times are just to read the data. Processing and rewriting the data would increase the time.12.Logically, it does not matter if the limit register uses a virtual address or a physical address. However, the performance of the former is better. 
If virtual addresses are used, the addition of the virtual address and the base register can start simultaneously with the comparison and then can run in parallel. If physical addresses are used, the comparison cannot start until the addition is complete, increasing the access time.13.Maybe. If the caller gets control back and immediately overwrites the data, when the write finally occurs, the wrong data will be written. However, if the driver first copies the data to a private buffer before returning, then the caller can be allowed to continue immediately. Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused, but this is tricky and error prone.14. A trap is caused by the program and is synchronous with it. If the program is run again and again, thetrap will always occur at exactly the same position in the instruction stream. An interrupt is caused by an external event and its timing is not reproducible.15.Base = 40,000 and limit = 10,000. An answer of limit = 50,000 is incorrect for the way the system was described in this book. It could have been implemented that way, but doing so would have required waiting until the address+ base calculation was completed before starting the limit check, thus slowing down the computer.16.The process table is needed to store the state of a process that is currently suspended, either ready or blocked. It is not needed in a single process system because the single process is never suspended.17.Mounting a file system makes any files already in the mount point directory inaccessible, so mount points are normally empty. However, a system administrator might want to copy some of the most important files normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being checked or repaired18.Fork can fail if there are no free slots left in the process table (and possibly if there is no memory or swap space left). Exec can fail if the file name give ndoes not exist or is not a valid executable file. Unlink can fail if the file to be unlinked does not exist or the calling process does not have the authority to unlink it.19.If the call fails, for example because fd is incorrect, it can return −1. It can also fail because the disk is full and it is not possible to write the number of bytes requested. On a correct termination, it always returns nbytes.20.It contains the bytes: 1, 5, 9, 2.21.Block special files consist of nu mbered blocks, each of which can be read or written independently of all the other ones. It is possible to seek to any block and start reading or writing. This is not possible with character special files.22.System calls do not really have names, other than in a documentation sense. When the library procedure read traps to the kernel, it puts the number of the system call in a register or on the stack. This number is used to index into a table. There is really no name used anywhere. On the other hand, the name of the library procedure is very important, since that is what appears in the program.23.Yes it can, especially if the kernel is a message-passing system.24.As far as program logic is concerned it does not matter whether a call to a library procedure results ina system call. But if performance is an issue, if a task can be accomplished without a system call the program will run faster. Every system call involves overhead time in switching from the user context to the kernel context. 
Furthermore, on a multiuser system the operating system may schedule another process to run when a system call completes,further slowing the progress in real time of a calling process.25.Several UNIX calls have no counterpart in the Win32 API:Link: a Win32 program can not refer to a file by an alternate name or see it in more than one directory. Also, attempting to create a link is a convenient way to test for and create a lock on a file.Mount and umount: a Windows program cannot make assumptions about standard path names because on systems with multiple disk drives the drive name part of the path may be different.Chmod: Windows programmers have to assume that every user can access every file. Kill: Windows programmers cannot kill a misbehaving program that is not cooperating.26.The conversions are straightforward:(a) A micro year is 10-6 ×365 ×24 ×3600 31.536 sec.(b) 1000 meters or 1 km.(c) There are 240bytes, which is 1,099,511,627,776 bytes.(d) It is 6 ×1024kg.SOLUTIONS TO CHAPTER 2 PROBLEMS1.The transition from blocked to running is conceivable. Suppose that a process is blocked on I/O and the I/O finishes. If the CPU is otherwise idle, the process could go d irectly from blocked to running. The other missing transition, from ready to blocked, is impossible. A ready process cannot do I/O or anything else that might block it. Only a running process can block.2.You could have a register containing a pointer to the current process table entry. When I/O completed, the CPU would store the current machine state in the current process table entry. Then it would go to the interrupt vector forthe interrupting device and fetch a pointer to another process table entry (the service procedure). This process would then be started up.3.Generally, high-level languages do not allow one the kind of access to CPU hardware that is required. For instance, an interrupt handler may be required to enable and disable the interrupt servicing a particular device, or to manipulate data within a process’ stack area. Also, interrupt service routines must execute as rapidly as possible.4.There are several reasons for using a separate stack for the kernel. Two of them are as follows. First, you do not want the operating system to crash because a poorly written user program does not allow for enough stack space. Second, if the kernel leaves stack data in a user program’s memory space upon return from a system call, a malicious user might be able to use this data to find out information about other processes.5.It would be difficult, if not impossible, to keep the file system consistent. Suppose that a client process sends a request to server process 1 to update a file. This process updates the cache entry in its memory. Shortly thereafter, another client process sends a request to server 2 to read that file. Unfortunately, if the file is also cached there, server 2, in its innocence, will return obsolete data. If thefirst process writes the file through to the disk after caching it, and server 2 checks the disk on every read to see if its cached copy is up-to-date, the system can be made to work, but it is precisely all these disk accesses that the caching system is trying to avoid6.When a thread is stopped, it has values in the registers. They must be saved, just as when the process is stopped the registers must be saved. Timesharing threads is no different than timesharing processes, so each thread needs itsown register save area.7.No. If a single-threaded process is blocked on the keyboard, it cannot fork.8. 
A worker thread will block when it has to read a Web page from the disk. If user-level threads are being used, this action will block the entire process, destroying the value of multithreading. Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.9.Threads in a process cooperate. They are not hostile to one another. If yielding is needed for the good of the application, then a thread will yield. After all, it is usually the same programmer who writes the code for all of them.er-level threads cannot be preempted by the clock uless the whole process’ quantum has been used up. Kernel-level threads can be preempted individually. In the latter case, if a thread runs too long, the clock will interrupt the current process and thus the current thread. The kernel is free to pick a different thread from the same process to run next if it so desires.11.In the single-threaded case, the cache hits take 15 msec and cache misses take 90 msec. The weighted average is 2/3 15 1 /3 90. Thus the mean request takes 40 msec and the server can do 25 per second. For a multithreaded server, all the waiting for the disk is overlapped, so every request takes15 msec, and the server can handle 66 2/3 requests per second.12.Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It just adds unnecessary complexity. As an example, consider a telephone directory assistance number (like 555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, the entire database takes 64 megabytes, and can easily be kept in the server’s memory to provide fast lookup.13.The pointers are really necessary because the size of the global variable is unknown. It could be anything from a character to an array of floating-point numbers. If the value were stored, one would have to give the size to create 3 global, which is all right, but what type should the second parameter of set 3 global be, and what type should the value of read 3 global be?14.It could happen that the runtime system is precisely at the point of blocking or unblocking a thread, and is busy manipulating the scheduling queues. This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching, since they might be in an inconsistent state. One solution is to set a flag when the runtime system is entered. The clock handler would see this and set its own flag,then return. When the runtime system finished, it would check the clock flag, see that a clock interrupt occurred, and now run the clock handle15.Yes it is po ssible, but inefficient. A thread wanting to do a system call first sets an alarm timer, then does the call. If the call blocks, the timer returns control to the threads package. Of course, most of the time the call will not block, and the timer has to be cleared. Thus each system call that might block has to be executed as three system calls. If timers go off prematurely, all kinds of problems can develop. This is not an attractive way to build a threads package.16.The priority inversion problem occurs when a low-priority process is in its critical region and suddenly a high-priority process becomes ready and is scheduled. If it uses busy waiting, it will run forever. With user-level threads, it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread run. There is no preemption. 
With kernel-level threads this problem can arise.17.Each thread calls procedures on its own, so it must have its own stack for the local variables, return addresses, and so on. This is equally true for user-level threads as for kernel-level threads.18. A race condition is a situation in which two (or more) processes are about to perform some action. Depending on the exact timing, one or the other goes first. If one of the processes goes first, everything works, but if another one goes first, a fatal error occurs.19.Yes. The simulated computer could be multiprogrammed. For example, while process A is running, it reads out some shared variable. Then a simulated clock tick happens and process B runs. It also reads out the same variable. Then it adds 1 to the variable. When process A runs, if it also adds one to the variable, we have a race condition20. Yes, it still works, but it still is busy waiting, of course.21. It certainly works with preemptive scheduling. In fact, it was designed for that case. When scheduling is nonpreemptive, it might fail. Consider the case in which turn is initially 0 but process 1 runs first. It will just loop forever and never release the CPU.22.Yes it can. The memory word is used as a flag, with 0 meaning that no one is using the critical variables and 1 meaning that someone is using them. Put a1 in the register, and swap the memory word and the register. If the register contains a 0 after the swap, access has been granted. If it contains a 1, access has been denied. When a process is done, it stores a 0 in the flag in memory23.To do a semaphore ope ration, the operating system first disables interrupts. Then it reads the value of the semaphore. If it is doing a down and the semaphore is equal to zero, it puts the calling process on a list of blocked processes associated with the semaphore. If it is doing an up, it must check to see if any processes are blocked on the semaphore. If one or more processes are blocked, one of then is removed from the list of blocked processes and made runnable. When all these operations have been completed, interrupts can be enabled again.24.Associated with each counting semaphore are two binary semaphores, M, used for mutual exclusion, and B, used for blocking. Also associated with each counting semaphore is a counter that holds the number of ups minus the number of downs, and a list of processes blocked on that semaphore. To implement down, a process first gains exclusive access to the semaphores, counter, and list by doing a down on M. It then decrements the counter. If it is zero or more, it just does an up on M and exits. If M is negative, the proc- ess is put on the list of blocked processes. Then an up is done on M and a down is done on B to block the process. To implement up, first M is downed to get mutual exclusion, and then the counter is incremented. If it is more than zero, no one was blocked, so all that needs to bedone is to up M. If, however, the counter is now negative or zero, some process must be removed from the list. Finally, an up is done on B and M in that order.25.If the program operates in phases and neither process may enter the next phase until both arefinished with the current phase, it makes perfect sense to use a barrier.26.With round-robin scheduling it works. Sooner or later L will run, and eventually it will leave its critical region. 
The point is, with priority scheduling, L never gets to run at all; with round robin, it gets a normal time slice periodically, so it has the chance to leave its critical region.27.With kernel threads, a thread can block on a semaphore and the kernel can run some other thread in the same process. Consequently, there is no problem using semaphores. With user-level threads, when one thread blocks on a semaphore, the kernel thinks the entire process is blocked and does not run it ever again. Consequently, the process fails.28.It is very expensive to implement. Each time any variable that appears in a predicate on which some process is waiting changes, the runtime system must re-evaluate the predicate to see if the process can be unblocked. With the Hoare and Brinch Hansen monitors, processes can only be awakened on a signal primitive.29.The employees communicate by passing messages: orders, food, and bags in this case. In UNIX terms, the four processes are connected by pipes.30.It does not lead to race conditions (nothing is ever lost), but it is effectively busy waiting.31.If a philosopher blocks, neighbors can later see that he is hungry by checking his state, in test, so he can be awakened when the forks are available.32.The change would mean that after a philosopher stopped eating, neither of his neighbors could be chosen next. In fact, they would never be chosen. Suppose that philosopher 2 finished eating. He would run test for philosophers 1 and 3, and neither would be started, even though both were hungry and both forks were available. Similary, if philosopher 4 finished eating, philosopher 3 would not be started. Nothing would start him.33.Variation 1: readers have priority. No writer may start when a reader is active. When a new reader appears, it may start immediately unless a writer is currently active. When a writer finishes, if readers are waiting, they are all started, regardless of the presence of waiting writers. Variation 2: Writers have priority. No reader may start when a writer is waiting. When the last active process finishes, a writer is started, if there is one; otherwise, all the readers (if any) are started. Variation 3: symmetric version. When a reader is active, new readers may start immediately. When a writer finishes, a new writer has priority, if one is waiting. In other words, once we have started reading, we keep reading until there are no readers left. Similarly, once we have started writing, all pending writers are allowed to run.34.It will need nT sec.35.If a process occurs multiple times in the list, it will get multiple quanta per cycle. This approach could be used to give more important processes a larger share of the CPU. But when the process blocks, all entries had better be removed from the list of runnable processes.36.In simple cases it may be possible to determine whether I/O will be limiting by looking at source code. For instance a program that reads all its input files into buffers at the start will probably not be I/O bound, but a problem that reads and writes increment ally to a number of different files (such as a compiler) is likely to be I/O bound. If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU time used by a program , you can compare this with total time to complete execution of the program. 
This is, of course, most meaningful on a system where you are the only user.

37. For multiple processes in a pipeline, the common parent could pass to the operating system information about the flow of data. With this information the OS could, for instance, determine which process could supply output to a process blocking on a call for input.

38. The CPU efficiency is the useful CPU time divided by the total CPU time. When Q ≥ T, the basic cycle is for the process to run for T and undergo a process switch for S. Thus (a) and (b) have an efficiency of T/(S + T). When the quantum is shorter than T, each run of T will require T/Q process switches, wasting a time ST/Q. The efficiency here is then T/(T + ST/Q), which reduces to Q/(Q + S), which is the answer to (c). For (d), we just substitute Q for S and find that the efficiency is 50 percent. Finally, for (e), as Q→0 the efficiency goes to 0.

39. Shortest job first is the way to minimize average response time.
0 < X ≤ 3: X, 3, 5, 6, 9.
3 < X ≤ 5: 3, X, 5, 6, 9.
5 < X ≤ 6: 3, 5, X, 6, 9.
6 < X ≤ 9: 3, 5, 6, X, 9.
X > 9: 3, 5, 6, 9, X.

40. For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. For priority scheduling, B is run first. After 6 minutes it is finished. The other jobs finish at 14, 24, 26, and 30, for an average of 18.8 minutes. If the jobs run in the order A through E, they finish at 10, 16, 18, 22, and 30, for an average of 19.2 minutes. Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of 14 minutes.

41. The first time it gets 1 quantum. On succeeding runs it gets 2, 4, 8, and 15, so it must be swapped in 5 times.

42. A check could be made to see if the program was expecting input and did anything with it. A program that was not expecting input and did not process it would not get any special priority boost.

43. The sequence of predictions is 40, 30, 35, and now 25.

44. The fraction of the CPU used is 35/50 + 20/100 + 10/200 + x/250. To be schedulable, this must be less than 1. Thus x must be less than 12.5 msec.

45. Two-level scheduling is needed when memory is too small to hold all the ready processes. Some set of them is put into memory, and a choice is made from that set. From time to time, the set of in-core processes is adjusted. This algorithm is easy to implement and reasonably efficient, certainly a lot better than, say, round robin without regard to whether a process was in memory or not.

46. The kernel could schedule processes by any means it wishes, but within each process it runs threads strictly in priority order. By letting the user process set the priority of its own threads, the user controls the policy but the kernel handles the mechanism.

47. A possible shell script might be:

if [ ! -f numbers ]; then echo 0 > numbers; fi
count=0
while (test $count != 200)
do
    count=`expr $count + 1`
    n=`tail -1 numbers`
    expr $n + 1 >> numbers
done

Run the script twice simultaneously, by starting it once in the background (using &) and again in the foreground. Then examine the file numbers. It will probably start out looking like an orderly list of numbers, but at some point it will lose its orderliness, due to the race condition created by running two copies of the script.
The race can be avoided by having each copy of the script test for and set a lock on the file before entering the critical area, and unlocking it upon leaving the critical area. This can be done like this:

if ln numbers numbers.lock
then
    n=`tail -1 numbers`
    expr $n + 1 >> numbers
    rm numbers.lock
fi

This version will just skip a turn when the file is inaccessible; variant solutions could put the process to sleep, do busy waiting, or count only loops in which the operation is successful.

SOLUTIONS TO CHAPTER 3 PROBLEMS

1. In the U.S., consider a presidential election in which three or more candidates are trying for the nomination of some party. After all the primary elections are finished, when the delegates arrive at the party convention, it could happen that no candidate has a majority and that no delegate is willing to change his or her vote. This is a deadlock. Each candidate has some resources (votes) but needs more to get the job done. In countries with multiple political parties in the parliament, it could happen that each party supports a different version of the annual budget and that it is impossible to assemble a majority to pass the budget. This is also a deadlock.

2. If the printer starts to print a file before the entire file has been received (this is often allowed to speed response), the disk may fill with other requests that can't be printed until the first file is done, but which use up disk space needed to receive the file currently being printed. If the spooler does not start to print a file until the entire file has been spooled, it can reject a request that is too big. Starting to print a file is equivalent to reserving the printer; if the reservation is deferred until it is known that the entire file can be received, a deadlock of the entire system can be avoided. The user with the file that won't fit is still deadlocked, of course, and must go to another facility that permits printing bigger files.

3. The printer is nonpreemptable; the system cannot start printing another job until the previous one is complete. The spool disk is preemptable; you can delete an incomplete file that is growing too large and have the user send it later, assuming the protocol allows that.

4. Yes. It does not make any difference whatsoever.

5. Yes, illegal graphs exist. We stated that a resource may only be held by a single process. An arc from a resource square to a process circle indicates that the process owns the resource. Thus a square with arcs going from it to two or more processes means that all those processes hold the resource, which violates the rules. Consequently, any graph in which multiple arcs leave a square and end in different circles violates the rules. Arcs from squares to squares or from circles to circles also violate the rules.

6. A portion of all such resources could be reserved for use only by processes owned by the administrator, so he or she could always run a shell and programs needed to evaluate a deadlock and make decisions about which processes to kill to make the system usable again.

7. Neither change leads to deadlock. There is no circular wait in either case.

8. Voluntary relinquishment of a resource is most similar to recovery through preemption. The essential difference is that computer processes are not expected to solve such problems on their own. Preemption is analogous to the operator or the operating system acting as a policeman, overriding the normal rules individual processes obey.

9. The process is asking for more resources than the system has.
There is no conceivable way it can get these resources, so it can never finish, even if no other processes want any resources at all.

10. If the system had two or more CPUs, two or more processes could run in parallel, leading to diagonal trajectories.

11. Yes. Do the whole thing in three dimensions. The z-axis measures the number of instructions executed by the third process.

12. The method can only be used to guide the scheduling if the exact instant at which a resource is going to be claimed is known in advance. In practice, this is rarely the case.

13. A request from D is unsafe, but one from C is safe.

14. There are states that are neither safe nor deadlocked, but which lead to deadlocked states. As an example, suppose we have four resources: tapes, plotters, scanners, and CD-ROMs, as in the text, and three processes competing for them. We could have the following situation:
计算机操作系统(第三版)完整课后习题答案
第一章1.设计现代OS的主要目标是什么?答:(1)有效性(2)方便性(3)可扩充性(4)开放性2.OS的作用可表现在哪几个方面?答:(1)OS作为用户与计算机硬件系统之间的接口(2)OS作为计算机系统资源的管理者(3)OS实现了对计算机资源的抽象3.为什么说OS实现了对计算机资源的抽象?答:OS首先在裸机上覆盖一层I/O设备管理软件,实现了对计算机硬件操作的第一层次抽象;在第一层软件上再覆盖文件管理软件,实现了对硬件资源操作的第二层次抽象。
OS 通过在计算机硬件上安装多层系统软件,增强了系统功能,隐藏了对硬件操作的细节,由它们共同实现了对计算机资源的抽象。
4.试说明推动多道批处理系统形成和发展的主要动力是什么?答:主要动力来源于四个方面的社会需求与技术发展:(1)不断提高计算机资源的利用率;(2)方便用户;(3)器件的不断更新换代;(4)计算机体系结构的不断发展。
5.何谓脱机I/O和联机I/O?答:脱机I/O 是指事先将装有用户程序和数据的纸带或卡片装入纸带输入机或卡片机,在外围机的控制下,把纸带或卡片上的数据或程序输入到磁带上。
该方式下的输入输出由外围机控制完成,是在脱离主机的情况下进行的。
而联机I/O方式是指程序和数据的输入输出都是在主机的直接控制下进行的。
6.试说明推动分时系统形成和发展的主要动力是什么?答:推动分时系统形成和发展的主要动力是更好地满足用户的需要。
主要表现在:CPU 的分时使用缩短了作业的平均周转时间;人机交互能力使用户能直接控制自己的作业;主机的共享使多用户能同时使用同一台计算机,独立地处理自己的作业。
7.实现分时系统的关键问题是什么?应如何解决?答:关键问题是当用户在自己的终端上键入命令时,系统应能及时接收并及时处理该命令,在用户能接受的时延内将结果返回给用户。
解决方法:针对及时接收问题,可以在系统中设置多路卡,使主机能同时接收用户从各个终端上输入的数据;为每个终端配置缓冲区,暂存用户键入的命令或数据。
操作系统第三版,课后答案
多道批处理系统:把多个作业同时放入内存,当某个作业因某种原因运行不下去时,系统就转向下一作业运行。
特点:1.多个作业同时存在于内存。
2.作业完成顺序与进入顺序无关。
3.作业由系统程序调入内存。
分时系统:作业直接进入内存。
不允许一个作业长期占有CPU.多个用户分时使用主机,每一用户分得一个时间片,用完这个时间片后操作系统将处理机分给另一用户,如此循环,每一用户可以周期性地获得CPU使用权,这样每一用户都有一种独占CPU的感觉。
分时系统的特征:多路性、独立性、及时性、交互性。
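下面用一段示意性的 C 代码模拟上述“按时间片轮转”的过程,帮助理解分时系统中 CPU 在各作业(用户)之间的轮流分配;其中作业的剩余运行时间、时间片长度等数值均为假设的示例,并非教材数据:

#include <stdio.h>

#define NJOBS   4
#define QUANTUM 2          /* 时间片长度(假设值) */

int main(void)
{
    int remaining[NJOBS] = { 5, 3, 8, 2 };   /* 各作业剩余运行时间(假设值) */
    int left = NJOBS;                        /* 尚未完成的作业数 */
    int clock = 0;

    while (left > 0) {
        for (int j = 0; j < NJOBS; j++) {
            if (remaining[j] <= 0)
                continue;                    /* 该作业已完成,跳过 */
            int run = remaining[j] < QUANTUM ? remaining[j] : QUANTUM;
            clock += run;                    /* 该作业占用 CPU 至多一个时间片 */
            remaining[j] -= run;
            printf("t=%2d: 作业%d 运行 %d 个单位,剩余 %d\n",
                   clock, j, run, remaining[j]);
            if (remaining[j] == 0)
                left--;                      /* 作业完成 */
        }
    }
    return 0;
}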
程序的并发执行是指:若干个程序同时在系统中执行,这些程序的执行在时间上是重叠的,一个程序的执行尚未结束,另一个程序的执行已经开始。
进程的定义:可并发执行的程序段,在某个数据集合上的一次执行过程。
进程是程序的一次执行。
进程是一个程序及其数据在处理机上顺序执行时所发生的活动。
进程是程序在一个数据集合上运行的过程,它是系统进行资源分配和调度的一个独立单位。
进程与程序的区别与联系:1.进程是动态的概念,程序是静态的概念;进程离开程序就失去了意义。
2.程序可永久保存,而进程具有短暂生命周期。
3.一个程序可对应于多个进程;4.进程更能真实地描述并发,而程序不能。
进程的状态及其转化:1.运行:该进程已占有了处理机,其程序正在执行。
2.就绪:该进程已准备好,占有了执行所需的除处理机之外的所有资源和条件。
3.阻塞:该进程正在等待系统中某事件发生(例如I/O操作的完成)。
进程的物理结构分为两部分:进程控制块(PCB)和进程体(程序部分、数据部分)。PCB 是为系统提供控制、管理进程信息的数据结构,是进程在系统中存在的唯一标识。
一个用户进程其实体存在于内存用户工作区。
其PCB存放于内存操作系统工作区。
PCB内容:进程标识符,处理机状态,进程调度信息,进程控制信息。
PCB的组织方式:链接方式,索引方式。
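结合上面列出的 PCB 内容和链接组织方式,下面用一个极简的 C 结构体给出示意(字段名称和字段集合均为举例假设,真实系统中的 PCB 要复杂得多):

/* 进程的三种基本状态 */
typedef enum { READY, RUNNING, BLOCKED } proc_state_t;

/* 进程控制块(PCB)的极简示意 */
typedef struct pcb {
    int           pid;        /* 进程标识符 */
    proc_state_t  state;      /* 进程当前状态 */
    unsigned long pc;         /* 处理机状态:程序计数器(示意) */
    unsigned long regs[8];    /* 处理机状态:通用寄存器(示意) */
    int           priority;   /* 进程调度信息:优先级 */
    void         *mem_info;   /* 进程控制信息:内存及资源信息(示意) */
    struct pcb   *next;       /* 链接方式:指向同一队列中的下一个 PCB */
} pcb_t;

按链接方式组织时,就绪队列、阻塞队列等都只是由 next 指针串起来的 PCB 链表。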
资源分配原则:子进程只能占有父进程所拥有的资源,撤消进程时,子孙进程全部随之撤消。
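下面给出一个最小的 UNIX 父子进程示例(假设在 POSIX 环境下编译运行),用来体会"子进程由父进程创建、父进程等待并回收子进程"的关系;它只是示意,不涉及上面所说的资源回收细节:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* 创建子进程 */
    if (pid < 0) {
        perror("fork");                 /* 进程表已满等原因导致创建失败 */
        exit(1);
    }
    if (pid == 0) {                     /* 子进程分支 */
        printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
        exit(0);                        /* 子进程结束 */
    }
    /* 父进程分支:等待子进程结束并回收它 */
    int status;
    waitpid(pid, &status, 0);
    printf("parent: child %d exited with status %d\n", pid, WEXITSTATUS(status));
    return 0;
}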
现代操作系统(原书第3版)部分课后答案-第3章
2.由题意得,读或写每个字节需要10/4 = 2.5ns,且128 MB = 2^27 字节,内存紧缩时,几乎整个内存都必须复制,也就是要求读出每一个内存字,然后重写到不同的位置。
因此,对于每个字节的压缩需要5ns。
故总共需要的时间为 2^27 * 5 ns = 671 ms 。
3.128 MB = 2^27 字节。设分配单元为 n 字节。对于位图,用于存储管理需要 2^27/(8n) 字节,故总共需要 2^27 + 2^27/(8n) = 2^27*(1+1/(8n)) 字节;对于链表,用于存储管理需要 2^27/2^16(64KB)= 2^11 个节点,每个节点需要 (32+16+16)/8 = 8 字节,故总共需要 2^27 + 2^11*8 = 2^27 + 2^14 = 2^27*(1+1/(8*2^10)) 字节;因此,当 n < 2^10 字节(即 1KB)时,位图 > 链表,应使用链表;当 n > 1KB 时,位图 < 链表,应使用位图。
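下面这段示意性的 C 程序按上述公式分别计算位图与链表两种方案的总开销,并验证分界点约在 n = 1KB 处(128MB、64KB 空闲区、8 字节节点等数值沿用题中的假设,仅作演示):

#include <stdio.h>

int main(void)
{
    const double mem   = 1 << 27;          /* 128 MB,单位:字节 */
    const double nodes = mem / (1 << 16);  /* 空闲区按 64KB 计,共 2^11 个节点 */
    const double list  = mem + nodes * 8;  /* 链表方案:内存 + 每节点 8 字节 */

    for (int n = 256; n <= 4096; n *= 2) { /* 分配单元大小 n(字节) */
        double bitmap = mem + mem / (8.0 * n);   /* 位图方案总开销 */
        printf("n=%4d: 位图=%.0f 字节, 链表=%.0f 字节, %s更省\n",
               n, bitmap, list, bitmap < list ? "位图" : "链表");
    }
    return 0;
}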
4.首次适配:20KB,10KB,18KB;最佳适配:12KB,10KB,9KB;最差适配:20KB,18KB,15KB;下次适配:20KB,18KB,9KB。
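以首次适配为例,下面的 C 代码在一组假设的空闲区上演示分配过程(空闲区依次取 10、4、20、18、7、9、12、15 KB,这只是该类习题常见的示例数据);最佳适配、最差适配等只需改变遍历和比较方式即可:

#include <stdio.h>

#define NHOLES 8

int main(void)
{
    /* 按内存地址顺序排列的空闲区大小(KB),仅为演示用的假设数据 */
    int hole[NHOLES] = { 10, 4, 20, 18, 7, 9, 12, 15 };
    int req[]        = { 12, 10, 9 };           /* 依次到来的分配请求(KB) */

    for (int i = 0; i < 3; i++) {
        int chosen = -1;
        for (int h = 0; h < NHOLES; h++) {      /* 首次适配:找第一个够大的空闲区 */
            if (hole[h] >= req[i]) { chosen = h; break; }
        }
        if (chosen < 0) {
            printf("请求 %dKB 失败\n", req[i]);
            continue;
        }
        printf("请求 %2dKB -> 使用 %2dKB 的空闲区\n", req[i], hole[chosen]);
        hole[chosen] -= req[i];                 /* 剩余部分仍留作空闲区 */
    }
    return 0;
}

在这组数据下,程序依次输出 20KB、10KB、18KB,与上面首次适配的结果一致。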
5.将虚拟地址划分为“虚拟页号 | 偏移量”两部分,结果如下:

虚拟地址    4KB 页大小(12 位偏移量)    8KB 页大小(13 位偏移量)
20000       100 | 111000100000           10 | 0111000100000
32768       1000 | 000000000000          100 | 0000000000000
60000       1110 | 101001100000          111 | 0101001100000

7.a)M的最小值是4096,才能使内层循环的每次执行时都引起TLB失效,N的值只会影响到X的循环次数,与TLB失效无关。
b)M的值应该大于4096才能在内层循环每次执行时引起TLB失效,但现在N 的值要大于64K,所以X会超过256KB。
9.页大小为8KB,所以页內地址为13位,故页框有19位,可表示的物理空间有2^19个页框。
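上面的地址划分可以用简单的除法和掩码运算得到。下面是一个示意性的 C 片段(以题中的 4KB 和 8KB 页大小、三个虚拟地址为例):

#include <stdio.h>

/* 打印虚拟地址 va 在页大小为 page_size(2 的幂)时的页号和页内偏移 */
static void split(unsigned long va, unsigned long page_size)
{
    unsigned long offset = va & (page_size - 1);   /* 低位:页内偏移 */
    unsigned long vpn    = va / page_size;         /* 高位:虚拟页号 */
    printf("va=%5lu, 页大小=%2luKB: 页号=%lu, 偏移=%lu\n",
           va, page_size / 1024, vpn, offset);
}

int main(void)
{
    unsigned long addrs[] = { 20000, 32768, 60000 };
    for (int i = 0; i < 3; i++) {
        split(addrs[i], 4096);     /* 4KB 页:12 位偏移 */
        split(addrs[i], 8192);     /* 8KB 页:13 位偏移 */
    }
    return 0;
}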
第三版操作系统部分课后答案
第1章1.答:所谓“多道程序设计”技术,即是通过软件的手段,允许在计算机内存中同时存放几道相互独立的作业程序,让它们对系统中的资源进行“共享”和“竞争”,以使系统中的各种资源尽可能地满负荷工作,从而提高整个计算机系统的使用效率。
基于这种考虑,计算机科学家开始把CPU、存储器、外部设备以及各种软件都视为计算机系统的“资源”,并逐步设计出一种软件来管理这些资源,不仅使它们能够得到合理地使用,而且还要高效地使用。
具有这种功能的软件就是“操作系统”。
所以,“多道程序设计”的出现,加快了操作系统的诞生。
2.答:拿操作系统来说,它是在裸机上加载的第一层软件,是对计算机硬件系统功能的首次扩充。
从用户的角度看,计算机配置了操作系统后,由于操作系统隐蔽了硬件的复杂细节,用户会感到机器使用起来更方便、容易了。
这样,通过操作系统的作用使展现在用户面前的是一台功能经过扩展了的机器。
这台“机器”不是硬件搭建成的,现实生活中并不存在具有这种功能的真实机器,它只是用户的一种感觉而已。
所以,就把这样的机器称为“虚拟机”。
3.答:在分时系统中,系统把CPU时间划分成许多时间片,每个终端用户可以使用由一个时间片规定的CPU时间,多个用户终端就轮流地使用CPU。
这样的效果是每个终端都开始了自己的工作,得到了及时的响应。
也就是说,“从宏观上看,多个用户同时工作,共享系统的资源”。
但实际上,CPU在每一时刻只为一个终端服务,即“从微观上看,各终端程序是轮流运行一个时间片”。
4.答:由于分布式系统的处理和控制功能是分布的,任何站点发生的故障都不会给整个系统造成太大的影响。
另外,当系统中的设备出现故障时,可以通过容错技术实现系统的重构,以保证系统的正常运行。
这一切都表明分布式系统具有健壮性。
5.答:基于嵌入式应用的多样化,嵌入式操作系统应该面向用户、面向产品、面向应用。
它必须有很强的适应能力,能够根据应用系统的特点和要求,灵活配置,方便剪裁,伸缩自如。
计算机操作系统习题答案(第三版)
计算机操作系统习题答案(第三版)(华仔整理)第一章os引论1. 设计现代OS的主要目标是什么?方便性,有效性,可扩充性和开放性.2. OS的作用可表现为哪几个方面?a. OS作为用户与计算机硬件系统之间的接口;b. OS作为计算机系统资源的管理者;c. OS作为扩充机器.3. 试说明推动多道批处理系统形成和发展的主要动力是什么?不断提高计算机资源利用率和系统吞吐量的需要;4. 何谓脱机I/O和联机I/O?a. 脱机输入输出方式(Off-Line I/O)是为了解决人机矛盾及CPU和I/O设备之间速度不匹配而提出的.它减少了CPU的空闲等待时间,提高了I/O速度.具体内容是将用户程序和数据在一台外围机的控制下,预先从低速输入设备输入到磁带上,当CPU需要这些程序和数据时,在直接从磁带机高速输入到内存,从而大大加快了程序的输入过程,减少了CPU等待输入的时间,这就是脱机输入技术;当程序运行完毕或告一段落,CPU需要输出时,无需直接把计算结果送至低速输出设备,而是高速把结果输出到磁带上,然后在外围机的控制下,把磁带上的计算结果由相应的输出设备输出,这就是脱机输出技术.b. 若这种输入输出操作在主机控制下进行则称之为联机输入输出方式.5. 试说明推动分时系统形成和发展的主要动力是什么?用户的需要.即对用户来说,更好的满足了人-机交互,共享主机以及便于用户上机的需求.6. 试说明实时任务的类型和实时系统的类型.a. 实时任务的类型按任务执行时是否呈现周期性来划分,分为周期性实时任务和非周期性实时任务;---根据对截止时间的要求来划分,分为硬实时任务和软实时任务;b. 通常把要同达行实时控制的系统统称为实时控制系统,把要求对信息进行实时处理的系统成为实时信息处理系统.7. 实现多道程序应解决哪些问题?a. 处理机管理问题;b. 内存管理问题;c. I/O设备管理问题;d. 文件管理问题;e. 作业管理问题.8. 试比较单道与多道批处理系统的特点及优缺点.a. 单道批处理系统是最早出现的一种OS,它具有自动性,顺序性和单道性的特点;---多道批处理系统则具有调度性,无序性和多道性的特点;b. 单道批处理系统是在解决人机矛盾及CPU和I/O设备之间速度不匹配的矛盾中形成的,旨在提高系统资源利用率和系统吞吐量,但是仍然不能很好的利用系统资源;---多道批处理系统是对单道批处理系统的改进,其主要优点是资源利用率高,系统吞吐量大;缺点是平均周转时间长,无交互能力.9. 实现分时系统的关键问题是什么?应如何解决?a. 关键问题:及时接收,及时处理;b. 对于及时接收,只需在系统中设置一多路卡,多路卡作用是使主机能同时接收用户从各个终端上输入的数据;---对于及时处理,应使所有的用户作业都直接进入内存,在不长的时间内,能使每个作业都运行一次.10 为什么要引入实时操作系统?更好地满足实时控制领域和实时信息处理领域的需要.11 OS具有哪几大特征?它的最基本特征是什么?a. 并发(Concurrence),共享(Sharing),虚拟(Virtual),异步性(Asynchronism).b. 其中最基本特征是并发和共享.12 内存管理有哪些主要功能?它们的主要任务是什么?a. 主要功能: 内存分配,内存保护,地址映射和内存扩充等.b. 内存分配的主要任务是为每道程序分配内存空间,提高存储器利用率,以减少不可用的内存空间,允许正在运行的程序申请附加的内存空间,以适应程序和数据动态增长的需要.---内存保护的主要任务是确保每道用户程序都在自己的内存空间中运行,互不干扰.---地址映射的主要任务是将地址空间中的逻辑地址转换为内存空间中与之对应的物理地址. ---内存扩充的主要任务是借助虚拟存储技术,从逻辑上去扩充内存容量.13 处理机管理具有哪些功能?它们的主要任务是什么?a. 进程控制,进程同步,进程通信和调度.b. 进程控制的主要任务是为作业创建进程,撤销已结束的进程,以及控制进程在运行过程中的状态转换.---进程同步的主要任务是对诸进程的运行进行调节.---进程通信的任务是实现在相互合作进程之间的信息交换.---调度分为作业调度和进程调度.作业调度的基本任务是从后备队列中按照一定的算法,选择出若干个作业,为它们分配必要的资源;而进程调度的任务是从进程的就绪队列中,按照一定的算法选出一新进程,把处理机分配给它,并为它设置运行现场,是进程投入运行.14 设备管理有哪些主要功能?其主要任务是什么?a. 主要功能: 缓冲管理,设备分配和设备处理,以及虚拟设备等.b. 主要任务: 完成用户提出的I/O请求,为用户分配I/O设备;提高CPU和I/O设备的利用率;提高I/O速度;以及方便用户使用I/O设备.15 文件管理有哪些主要功能?其主要任务是什么?a. 主要功能: 对文件存储空间的管理,目录管理,文件的读,写管理以及文件的共享和保护.b. 主要任务: 对用户文件和系统文件进行管理,以方便用户使用,并保证文件的安全性.16 试在交互性,及时性和可靠性方面,将分时系统与实时系统进行比较.a. 分时系统是一种通用系统,主要用于运行终端用户程序,因而它具有较强的交互能力;而实时系统虽然也有交互能力,但其交互能力不及前者.b. 实时信息系统对实用性的要求与分时系统类似,都是以人所能接收的等待时间来确定;而实时控制系统的及时性则是以控制对象所要求的开始截止时间和完成截止时间来确定的.c. 实时系统对系统的可靠性要求要比分时系统对系统的可靠性要求高.17 是什么原因使操作系统具有异步性特征?a. 程序执行结果是不确定的,即程序是不可再现的.b. 每个程序在何时执行,多个程序间的执行顺序以及完成每道程序所需的时间都是不确定的,即不可预知性.18 试说明在MS-DOS 3.X以前的版本中,其局限性表现在哪几个方面?a. 在寻址范围上,DOS只有1MB,远远不能满足用户需要.b. DOS试单用户单任务操作系统,不支持多任务并发执行,与实际应用相矛盾.19 MS-DOS由哪几部分组成?每部分的主要功能是什么?略.20 为什么Microsoft在开发OS/2时,选中了80286芯片?设计OS/2的主要目标之一是既能充分发挥80286处理器的能力,又能运行在8086处理器环境下开发的程序.因为在80286内部提供了两种工作方式: 实方式和保护方式,使得Intel 80286处理器不仅提供了多任务并发执行的硬件支持,而且还能运行所有在8086下编写的程序。
计算机操作系统第三版课后答案(整理)
计算机操作系统课后答案计算机操作系统【第一章】1. 设计现代OS的主要目标是什么?方便性,有效性,可扩充性和开放性.1 OS的作用可表现为哪几个方面?a. OS作为用户与计算机硬件系统之间的接口;b. OS作为计算机系统资源的管理者;c. OS作为扩充机器.1. 试说明推动多道批处理系统形成和发展的主要动力是什么?不断提高计算机资源利用率和系统吞吐量的需要;平均周转时间长,无交互能力.1实现分时系统的关键问题是什么?应如何解决?a. 关键问题:及时接收,及时处理;b. 对于及时接收,只需在系统中设置一多路卡,多路卡作用是使主机能同时接收用户从各个终端上输入的数据;---对于及时处理,应使所有的用户作业都直接进入内存,在不长的时间内,能使每个作业都运行一次.1为什么要引入实时操作系统?更好地满足实时控制领域和实时信息处理领域的需要.1OS具有哪几大特征?它的最基本特征是什么?a. 并发(Concurrence),共享(Sharing),虚拟(Virtual),异步性(Asynchronism).b. 其中最基本特征是并发和共享.1内存管理有哪些主要功能?它们的主要任务是什么?a. 主要功能: 内存分配,内存保护,地址映射和内存扩充等.b. 内存分配的主要任务是为每道程序分配内存空间,提高存储器利用率,以减少不可用的内存空间,允许正在运行的程序申请附加的内存空间,以适应程序和数据动态增长的需要.---内存保护的主要任务是确保每道用户程序都在自己的内存空间中运行,互不干扰.---地址映射的主要任务是将地址空间中的逻辑地址转换为内存空间中与之对应的物理地址. ---内存扩充的主要任务是借助虚拟存储技术,从逻辑上去扩充内存容量.1处理机管理具有哪些功能?它们的主要任务是什么?a. 进程控制,进程同步,进程通信和调度.b. 进程控制的主要任务是为作业创建进程,撤销已结束的进程,以及控制进程在运行过程中的状态转换.---进程同步的主要任务是对诸进程的运行进行调节.---进程通信的任务是实现在相互合作进程之间的信息交换.---调度分为作业调度和进程调度.作业调度的基本任务是从后备队列中按照一定的算法,选择出若干个作业,为它们分配必要的资源;而进程调度的任务是从进程的就绪队列中,按照一定的算法选出一新进程,把处理机分配给它,并为它设置运行现场,是进程投入运行.1设备管理有哪些主要功能?其主要任务是什么?a. 主要功能: 缓冲管理,设备分配和设备处理,以及虚拟设备等.b. 主要任务: 完成用户提出的I/O请求,为用户分配I/O设备;提高CPU和I/O设备的利用率;提高I/O速度;以及方便用户使用I/O设备.1文件管理有哪些主要功能?其主要任务是什么?a. 主要功能: 对文件存储空间的管理,目录管理,文件的读,写管理以及文件的共享和保护.b. 主要任务: 对用户文件和系统文件进行管理,以方便用户使用,并保证文件的安全性. 1试在交互性,及时性和可靠性方面,将分时系统与实时系统进行比较.a. 分时系统是一种通用系统,主要用于运行终端用户程序,因而它具有较强的交互能力;而实时系统虽然也有交互能力,但其交互能力不及前者.b. 实时信息系统对实用性的要求与分时系统类似,都是以人所能接收的等待时间来确定;而实时控制系统的及时性则是以控制对象所要求的开始截止时间和完成截止时间来确定的.c. 实时系统对系统的可靠性要求要比分时系统对系统的可靠性要求高.1是什么原因使操作系统具有异步性特征?a. 程序执行结果是不确定的,即程序是不可再现的.b. 每个程序在何时执行,多个程序间的执行顺序以及完成每道程序所需的时间都是不确定的,即不可预知性.1何为微内核技术?在为内核中通常提供了哪些功能?1、足够小的内核2、基于客户?服务器模式3、应用“机制与策略分离”原理4、采用面向对象技术2、功能:进程管理低级存储器管理中断和陷入处理2. 在操作系统中为什么要引入进程概念?它会产生什么样的影响?为了使程序在多道程序环境下能并发执行,并能对并发执行的程序加以控制和描述,而引入了进程概念.2计算机操作系统第三版课后答案(汤子瀛等著)影响: 使程序的并发执行得以实行.2. 试从动态性,并发性和独立性上比较进程和程序?a. 动态性是进程最基本的特性,可表现为由创建而产生,由调度而执行,因得不到资源而暂停执行,以及由撤销而消亡,因而进程由一定的生命期;而程序只是一组有序指令的集合,是静态实体.b. 并发性是进程的重要特征,同时也是OS的重要特征.引入进程的目的正是为了使其程序能和其它进程的程序并发执行,而程序是不能并发执行的.c. 独立性是指进程实体是一个能独立运行的基本单位,同时也是系统中独立获得资源和独立调度的基本单位.而对于未建立任何进程的程序,都不能作为一个独立的单位参加运行.2.试说明PCB的作用?为什么说PCB是进程存在的唯一标志?a. PCB是进程实体的一部分,是操作系统中最重要的记录型数据结构.PCB中记录了操作系统所需的用于描述进程情况及控制进程运行所需的全部信息.因而它的作用是使一个在多道程序环境下不能独立运行的程序(含数据),成为一个能独立运行的基本单位,一个能和其它进程并发执行的进程.b. 在进程的整个生命周期中,系统总是通过其PCB对进程进行控制,系统是根据进程的PCB 而不是任何别的什么而感知到该进程的存在的,所以说,PCB是进程存在的唯一标志.2. 试说明进程在三个基本状态之间转换的典型原因.a. 处于就绪状态的进程,当进程调度程序为之分配了处理机后,该进程便由就绪状态变为执行状态.b. 当前进程因发生某事件而无法执行,如访问已被占用的临界资源,就会使进程由执行状态转变为阻塞状态.c. 当前进程因时间片用完而被暂停执行,该进程便由执行状态转变为就绪状态.2. 在创建一个进程时,需完成的主要工作是什么?a. 操作系统发现请求创建新进程事件后,调用进程创建原语Creat();b. 申请空白PCB;c. 为新进程分配资源;d. 初始化进程控制块;e. 将新进程插入就绪队列.2.在撤消一个进程时,需完成的主要工作是什么?a. OS调用进程终止原语;b. 根据被终止进程的标志符,从PCB集合中检索出该进程的PCB,从中读出该进程的状态;c. 若被终止进程正处于执行状态,应立即中止该进程的执行,并设置调度标志为真;d. 若该进程还有子孙进程,还应将其所有子孙进程予以终止;第 3 页共16 页e. 将该进程所拥有的全部资源,或者归还给其父进程,或者归还给系统;f. 将被终止进程(它的PCB)从所在队列(或链表)中移出,等待其它程序来搜集信息. .2. 试从调度性,并发性,拥有资源及系统开销几个方面,对进程和线程进行比较.a. 在引入线程的OS中,把线程作为调度和分派的基本单位,而把进程作为资源拥有的基本单位;b. 在引入线程的OS中,不仅进程之间可以并发执行,而且在一个进程中的多个线程之间,亦可并发执行,因而使OS具有更好的并发性;c. 进程始终是拥有资源的一个独立单位,线程自己不拥有系统资源,但它可以访问其隶属进程的资源;d. 在创建,撤消和切换进程方面,进程的开销远远大于线程的开销.2. 什么是用户级线程和内核级线程?.a. 内核级线程是依赖于内核的,它存在于用户进程和系统进程中,它们的创建,撤消和切换都由内核实现;---用户级线程仅存在于用户级中,它们的创建,撤消和切换不利用系统调用来实现,因而与内核无关,内核并不知道用户级线程的存在.b. 
内核级线程的调度和切换与进程十分相似,调度方式采用抢占式和非抢占式,调度算法采用时间轮转法和优先权算法等,当由线程调度选中一个线程后,再将处理器分配给它;而用户级线程通常发生在一个应用程序的诸线程之间,无需终端进入OS内核,切换规则也较简单,因而,用户级线程的切换速度较快.---用户级线程调用系统调用和调度另一个进程执行时,内核把它们看作是整个进程的行为,内核级线程调用是以线程为单位,内核把系统调用看作是该线程的行为.---对于用户级线程调用,进程的执行速度随着所含线程数目的增加而降低,对于内核级线程则相反..为什么要在OS中引入线程使多个程序能够并发执行,以提高资源利用率和系统吞吐量,减少程序在并发执行时所付出的时空开销,是OS系统具有更好的并发性!当前有哪几种高级通信机制?共享存储器系统、消息传递系统、管道通信系统进程在运行存在哪两种形式的制约,并举例2.为什么进程在进入临界区之前应先执行“进入去”代码?推出前又要执行“退出区”代码?用于将临界区正被访问的标志恢复为未被访问的标志同步机构应遵循哪些基本准则?为什么?空闲让进忙则等待有限等待让权等待4计算机操作系统第三版课后答案(汤子瀛等著)2.. 什么是临界资源和临界区?a. 一次仅允许一个进程使用的资源成为临界资源.b. 在每个进程中,访问临界资源的那段程序称为临界区.2. 为什么进程在进入临界区之前,应先执行"进入区"代码,在退出临界区后又执行"退出区"代码?为了实现多个进程对临界资源的互斥访问,必须在临界区前面增加一段用于检查欲访问的临界资源是否正被访问的代码,如果未被访问,该进程便可进入临界区对资源进行访问,并设置正被访问标志,如果正被访问,则本进程不能进入临界区,实现这一功能的代码成为"进入区"代码;在退出临界区后,必须执行"退出区"代码,用于恢复未被访问标志. 2. 在生产者-消费者问题中,如果缺少了signal(full)或signal(empty),对执行结果会有何影响?生产者-消费者问题可描述如下:var mutex,empty,full: semaphore:=1,n,0;buffer: array[0,...,n-1] of item;in,out: integer:=0,0;beginparbeginproducer: beginrepeat..produce an item in nextp;..wait(empty);wait(mutex);buffer(in):=nextp;in:=(in+1) mod n;signal(mutex);/* ************** */signal(full);/* ************** */until false;endconsumer: beginrepeatwait(full);wait(mutex);nextc:=buffer(out);out:=(out+1) mod n;signal(mutex);第 5 页共16 页/* ************** */signal(empty);/* ************** */consume the item in nextc;until false;endparendend可见,生产者可以不断地往缓冲池送消息,如果缓冲池满,就会覆盖原有数据,造成数据混乱.而消费者始终因wait(full)操作将消费进程直接送入进程链表进行等待,无法访问缓冲池,造成无限等待.2.. 在生产者-消费者问题中,如果将两个wait操作即wait(full)和wait(mutex)互换位置;或者是将signal(mutex)与signal(full)互换位置结果会如何?var mutex,empty,full: semaphore:=1,n,0;buffer: array[0,...,n-1] of item;in,out: integer:=0,0;beginparbeginproducer: beginrepeat..produce an item in nextp;..wait(empty);wait(mutex);buffer(in):=nextp;in:=(in+1) mod n;/* ***************** */signal(full);signal(mutex);/* ***************** */until false;endconsumer: beginrepeat/* **************** */wait(mutex);wait(full);/* **************** */6计算机操作系统第三版课后答案(汤子瀛等著)nextc:=buffer(out);out:=(out+1) mod n;signal(mutex);signal(empty);consume the item in nextc;until false;endparendenda. wait(full)和wait(mutex)互换位置后,因为mutex在这儿是全局变量,执行完wait(mutex),则mutex赋值为0,倘若full也为0,则该生产者进程就会转入进程链表进行等待,而生产者进程会因全局变量mutex为0而进行等待,使full始终为0,这样就形成了死锁.b. 而signal(mutex)与signal(full)互换位置后,从逻辑上来说应该是一样的.2. 试修改下面生产者-消费者问题解法中的错误:producer:beginrepeat..producer an item in nextp;wait(mutex);wait(full); /* 应为wait(empty),而且还应该在wait(mutex)的前面*/buffer(in):=nextp;/* 缓冲池数组游标应前移: in:=(in+1) mod n; */signal(mutex);/* signal(full); */until false;endconsumer:beginrepeatwait(mutex);wait(empty); /* 应为wait(full),而且还应该在wait(mutex)的前面*/nextc:=buffer(out);out:=out+1; /* 考虑循环,应改为: out:=(out+1) mod n; */signal(mutex);/* signal(empty); */consumer item in nextc;until false;end第7 页共16 页2. 试利用记录型信号量写出一个不会出现死锁的哲学家进餐问题的算法.设初始值为1的信号量c[I]表示I号筷子被拿(I=1,2,3,4,...,2n),其中n为自然数.send(I):Beginif I mod 2==1 then{P(c[I]);P(c[I-1 mod 5]);Eat;V(c[I-1 mod 5]);V(c[I]);}else{P(c[I-1 mod 5]);P(c[I]);Eat;V(c[I]);V(c[I-1 mod 5]);}End答:FCFS进程调度算法:一种最简单的调度算法,比较有利于长作业(进程),而不利于短作业(进程)SPF进程调度算法:对短作业或短进程优先调度的算法。
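下面用一小段 C 代码对比 FCFS 与 SPF(短作业优先)在一组假设的作业运行时间下的平均周转时间,以体会 SPF 为什么对短作业有利;作业同时到达、运行时间取值等均为示例假设:

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* 作业同时到达时,平均周转时间 = 各作业完成时间之和 / 作业数 */
static double avg_turnaround(const int burst[], int n)
{
    double sum = 0, finish = 0;
    for (int i = 0; i < n; i++) {
        finish += burst[i];
        sum += finish;
    }
    return sum / n;
}

int main(void)
{
    int fcfs[] = { 8, 4, 9, 5 };            /* 按到达顺序的运行时间(示例) */
    int n = sizeof(fcfs) / sizeof(fcfs[0]);

    int spf[4];
    for (int i = 0; i < n; i++) spf[i] = fcfs[i];
    qsort(spf, n, sizeof(int), cmp_int);    /* SPF:按运行时间从短到长调度 */

    printf("FCFS 平均周转时间 = %.2f\n", avg_turnaround(fcfs, n));
    printf("SPF  平均周转时间 = %.2f\n", avg_turnaround(spf, n));
    return 0;
}

在这组示例数据下,FCFS 的平均周转时间为 16.75,SPF 为 14.00,可见短作业优先能降低平均周转时间。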
3、在早期计算机中,每个字节的读写直接由 CPU 处理(即没有 DMA),对于多 道程序而言这种组织方式有什么含义? 答:多道程序的主要原因是当等候 I/O 完成时 CPU 有事可做。如果没有 DMA,I/O 操作时 CPU 被完全占有,因此,多道程序无利可图(至少在 CPU 利用方面)。 无论程序作多少 I/O 操作,CPU 都是 100%的忙碌。当然,这里假定主要的延迟 是数据复制时的等待。如果 I/O 很慢的话,CPU 可以做其它工作。
5、缓慢采用 GUI 的一个原因是支持它的硬件的成本(高昂)。为了支持 25 行 80 列字符的单色文本屏幕应该需要多少视频 RAM? 对于 1024x768 像素 24 位色彩 位图需要多少视频 RAM? 在 1980 年($5/KB)这些 RAM 的成本是多少?现在它的 成本是多少? 答:25*80 字符的单色文本屏幕需要 2000 字节的缓冲器。1024*768 像素 24 位颜 色的位图需要 2359296 字节。1980 年这两种选择将分别地耗费$10 和$11520。而 对于当前的价格,将少于$1/MB。
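上述两个显存大小可以直接算出:25*80 个字符、每字符 1 字节即 2000 字节;1024*768 像素、每像素 24 位即 3 字节。下面是一段示意性的 C 代码(价格按题中 1980 年 $5/KB 的假设计算,仅作演示):

#include <stdio.h>

int main(void)
{
    long text_bytes   = 25 * 80 * 1;          /* 字符屏:每字符 1 字节 */
    long bitmap_bytes = 1024L * 768 * 24 / 8; /* 位图:每像素 24 位 = 3 字节 */

    double price_per_kb_1980 = 5.0;           /* 题中假设:1980 年 $5/KB */

    printf("文本显存: %ld 字节, 1980 年约 $%.0f\n",
           text_bytes, text_bytes / 1024.0 * price_per_kb_1980);
    printf("位图显存: %ld 字节, 1980 年约 $%.0f\n",
           bitmap_bytes, bitmap_bytes / 1024.0 * price_per_kb_1980);
    return 0;
}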
4、系列计算机的思想在 20 世纪 60 年代由 IBM 引入 System/360 大型机。现在这种思想已经消亡了还是继续活跃着? 答:它依然存在。例如,Intel 以速度和功耗等各种不同的属性来生产 Pentium I、II、III 和 4。所有这些机器的体系结构都是兼容的,仅仅是价格上的不同,这些都是家族思想的本质。
1、什么是多道程序设计? 答:多道程序设计技术是指在内存同时放若干道程序,使它们在系统中并发执行,共享系统中的各种资源。当一道程序暂停执行时,CPU 立即转去执行另一道程序。
2、什么是 SPOOLing? 读者是否认为将来的高级个人计算机会把 SPOOLing 作 为标准功能? 答:(假脱机技术)输入 SPOOLing 是把作业(例如从卡片)预先读入到磁盘上的技术,这样当当前执行的进程完成时,就有作业在等候 CPU。输出 SPOOLing 是在打印之前先把要打印的文件复制到磁盘,而非直接打印。个人计算机上的输入 SPOOLing 很少见,但输出 SPOOLing 非常普遍。