计算机操作系统第二版答案(郁红英)


现代操作系统(第二版)习题答案


MODERN OPERATING SYSTEMS, SECOND EDITION: PROBLEM SOLUTIONS
ANDREW S. TANENBAUM, Vrije Universiteit, Amsterdam, The Netherlands
PRENTICE HALL, UPPER SADDLE RIVER, NJ 07458

SOLUTIONS TO CHAPTER 1 PROBLEMS

1. An operating system must provide the users with an extended (i.e., virtual) machine, and it must manage the I/O devices and other system resources.

2. Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It is commonly used to keep the CPU busy while one or more processes are doing I/O.

3. Input spooling is the technique of reading in jobs, for example, from cards, onto the disk, so that when the currently executing processes are finished, there will be work waiting for the CPU. Output spooling consists of first copying printable files to disk before printing them, rather than printing directly as the output is generated. Input spooling on a personal computer is not very likely, but output spooling is.

4. The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the CPU will be 100 percent busy. This of course assumes the major delay is the wait while data are copied. A CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).

5. Second generation computers did not have the necessary hardware to protect the operating system from malicious user programs.

6. It is still alive. For example, Intel makes Pentium I, II, III, and 4 CPUs with a variety of different properties including speed and power consumption. All of these machines are architecturally compatible. They differ only in price and performance, which is the essence of the family idea.

7. A 25 × 80 character monochrome text screen requires a 2000-byte buffer. The 1024 × 768 pixel 24-bit color bitmap requires 2,359,296 bytes. In 1980 these two options would have cost $10 and $11,520, respectively. For current prices, check on how much RAM currently costs, probably less than $1/MB.

8. Choices (a), (c), and (d) should be restricted to kernel mode.

9. Personal computer systems are always interactive, often with only a single user. Mainframe systems nearly always emphasize batch or timesharing with many users. Protection is much more of an issue on mainframe systems, as is efficient use of all resources.

10. Every nanosecond one instruction emerges from the pipeline. This means the machine is executing 1 billion instructions per second. It does not matter at all how many stages the pipeline has. A 10-stage pipeline with 1 nsec per stage would also execute 1 billion instructions per second. All that matters is how often a finished instruction pops out the end of the pipeline.

11. The manuscript contains 80 × 50 × 700 = 2.8 million characters. This is, of course, impossible to fit into the registers of any currently available CPU and is too big for a 1-MB cache, but if such hardware were available, the manuscript could be scanned in 2.8 msec from the registers or 5.8 msec from the cache. There are approximately 2700 1024-byte blocks of data, so scanning from the disk would require about 27 seconds, and from tape 2 minutes 7 seconds. Of course, these times are just to read the data. Processing and rewriting the data would increase the time.
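As a cross-check of the prices quoted in answer 7 above (the $5 per KB 1980 RAM price is not stated there; it is inferred from the quoted totals):

    25 × 80 characters × 1 byte = 2000 bytes ≈ 1.95 KB, and 1.95 KB × $5/KB ≈ $10
    1024 × 768 pixels × 3 bytes = 2,359,296 bytes = 2304 KB, and 2304 KB × $5/KB = $11,520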
12. Logically, it does not matter if the limit register uses a virtual address or a physical address. However, the performance of the former is better. If virtual addresses are used, the addition of the virtual address and the base register can start simultaneously with the comparison and then can run in parallel. If physical addresses are used, the comparison cannot start until the addition is complete, increasing the access time.

13. Maybe. If the caller gets control back and immediately overwrites the data, when the write finally occurs, the wrong data will be written. However, if the driver first copies the data to a private buffer before returning, then the caller can be allowed to continue immediately. Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused, but this is tricky and error prone.

14. A trap is caused by the program and is synchronous with it. If the program is run again and again, the trap will always occur at exactly the same position in the instruction stream. An interrupt is caused by an external event and its timing is not reproducible.

15. Base = 40,000 and limit = 10,000. An answer of limit = 50,000 is incorrect for the way the system was described in this book. It could have been implemented that way, but doing so would have required waiting until the address + base calculation was completed before starting the limit check, thus slowing down the computer.

16. The process table is needed to store the state of a process that is currently suspended, either ready or blocked. It is not needed in a single process system because the single process is never suspended.

17. Mounting a file system makes any files already in the mount point directory inaccessible, so mount points are normally empty. However, a system administrator might want to copy some of the most important files normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being checked or repaired.

18. Fork can fail if there are no free slots left in the process table (and possibly if there is no memory or swap space left). Exec can fail if the file name given does not exist or is not a valid executable file. Unlink can fail if the file to be unlinked does not exist or the calling process does not have the authority to unlink it.

19. If the call fails, for example because fd is incorrect, it can return −1. It can also fail because the disk is full and it is not possible to write the number of bytes requested. On a correct termination, it always returns nbytes.

20. It contains the bytes: 1, 5, 9, 2.

21. Block special files consist of numbered blocks, each of which can be read or written independently of all the other ones. It is possible to seek to any block and start reading or writing. This is not possible with character special files.

22. System calls do not really have names, other than in a documentation sense. When the library procedure read traps to the kernel, it puts the number of the system call in a register or on the stack. This number is used to index into a table. There is really no name used anywhere. On the other hand, the name of the library procedure is very important, since that is what appears in the program.

23. Yes it can, especially if the kernel is a message-passing system.
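Answers 18 and 19 above describe how fork, exec, unlink, and write report failure. A minimal C sketch of the error-checking pattern those answers assume (the file name testfile.txt and the message written are invented for illustration):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("testfile.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {                       /* open, like fork/exec, signals failure with -1 */
            perror("open");
            return 1;
        }
        const char buf[] = "hello";
        ssize_t n = write(fd, buf, strlen(buf));
        if (n < 0)                          /* bad fd, full disk, etc. */
            perror("write");
        else if ((size_t)n != strlen(buf))  /* partial write: fewer than nbytes written */
            fprintf(stderr, "short write: %zd of %zu bytes\n", n, strlen(buf));
        close(fd);
        return 0;
    }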
24. As far as program logic is concerned, it does not matter whether a call to a library procedure results in a system call. But if performance is an issue, if a task can be accomplished without a system call the program will run faster. Every system call involves overhead time in switching from the user context to the kernel context. Furthermore, on a multiuser system the operating system may schedule another process to run when a system call completes, further slowing the progress in real time of a calling process.

25. Several UNIX calls have no counterpart in the Win32 API:
Link: a Win32 program cannot refer to a file by an alternate name or see it in more than one directory. Also, attempting to create a link is a convenient way to test for and create a lock on a file.
Mount and umount: a Windows program cannot make assumptions about standard path names because on systems with multiple disk drives the drive name part of the path may be different.
Chmod: Windows programmers have to assume that every user can access every file.
Kill: Windows programmers cannot kill a misbehaving program that is not cooperating.

26. The conversions are straightforward:
(a) A micro year is 10^-6 × 365 × 24 × 3600 = 31.536 sec.
(b) 1000 meters or 1 km.
(c) There are 2^40 bytes, which is 1,099,511,627,776 bytes.
(d) It is 6 × 10^24 kg.
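Spelling out the arithmetic behind conversions 26(a) and 26(c) (a 365-day year is assumed, as in the answer, and 2^40 is the binary reading of the prefix in the original exercise):

    micro year = 10^-6 × 365 × 24 × 3600 s = 10^-6 × 31,536,000 s = 31.536 s
    2^40 bytes = 1024^4 bytes = 1,099,511,627,776 bytes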
SOLUTIONS TO CHAPTER 2 PROBLEMS

1. The transition from blocked to running is conceivable. Suppose that a process is blocked on I/O and the I/O finishes. If the CPU is otherwise idle, the process could go directly from blocked to running. The other missing transition, from ready to blocked, is impossible. A ready process cannot do I/O or anything else that might block it. Only a running process can block.

2. You could have a register containing a pointer to the current process table entry. When I/O completed, the CPU would store the current machine state in the current process table entry. Then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process table entry (the service procedure). This process would then be started up.

3. Generally, high-level languages do not allow one the kind of access to CPU hardware that is required. For instance, an interrupt handler may be required to enable and disable the interrupt servicing a particular device, or to manipulate data within a process's stack area. Also, interrupt service routines must execute as rapidly as possible.

4. There are several reasons for using a separate stack for the kernel. Two of them are as follows. First, you do not want the operating system to crash because a poorly written user program does not allow for enough stack space. Second, if the kernel leaves stack data in a user program's memory space upon return from a system call, a malicious user might be able to use this data to find out information about other processes.

5. It would be difficult, if not impossible, to keep the file system consistent. Suppose that a client process sends a request to server process 1 to update a file. This process updates the cache entry in its memory. Shortly thereafter, another client process sends a request to server 2 to read that file. Unfortunately, if the file is also cached there, server 2, in its innocence, will return obsolete data. If the first process writes the file through to the disk after caching it, and server 2 checks the disk on every read to see if its cached copy is up-to-date, the system can be made to work, but it is precisely all these disk accesses that the caching system is trying to avoid.

6. When a thread is stopped, it has values in the registers. They must be saved, just as when the process is stopped the registers must be saved. Timesharing threads is no different than timesharing processes, so each thread needs its own register save area.

7. No. If a single-threaded process is blocked on the keyboard, it cannot fork.

8. A worker thread will block when it has to read a Web page from the disk. If user-level threads are being used, this action will block the entire process, destroying the value of multithreading. Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.

9. Threads in a process cooperate. They are not hostile to one another. If yielding is needed for the good of the application, then a thread will yield. After all, it is usually the same programmer who writes the code for all of them.

10. User-level threads cannot be preempted by the clock unless the whole process's quantum has been used up. Kernel-level threads can be preempted individually. In the latter case, if a thread runs too long, the clock will interrupt the current process and thus the current thread. The kernel is free to pick a different thread from the same process to run next if it so desires.

11. In the single-threaded case, the cache hits take 15 msec and cache misses take 90 msec. The weighted average is 2/3 × 15 + 1/3 × 90. Thus the mean request takes 40 msec and the server can do 25 per second. For a multithreaded server, all the waiting for the disk is overlapped, so every request takes 15 msec, and the server can handle 66 2/3 requests per second.

12. Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It just adds unnecessary complexity. As an example, consider a telephone directory assistance number (like 555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, the entire database takes 64 megabytes, and can easily be kept in the server's memory to provide fast lookup.

13. The pointers are really necessary because the size of the global variable is unknown. It could be anything from a character to an array of floating-point numbers. If the value were stored, one would have to give the size to create_global, which is all right, but what type should the second parameter of set_global be, and what type should the value of read_global be?

14. It could happen that the runtime system is precisely at the point of blocking or unblocking a thread, and is busy manipulating the scheduling queues. This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching, since they might be in an inconsistent state. One solution is to set a flag when the runtime system is entered. The clock handler would see this and set its own flag, then return. When the runtime system finished, it would check the clock flag, see that a clock interrupt occurred, and now run the clock handler.

15. Yes it is possible, but inefficient. A thread wanting to do a system call first sets an alarm timer, then does the call. If the call blocks, the timer returns control to the threads package. Of course, most of the time the call will not block, and the timer has to be cleared. Thus each system call that might block has to be executed as three system calls. If timers go off prematurely, all kinds of problems can develop. This is not an attractive way to build a threads package.
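Looking back at answer 11 above, the throughput figures come from the following arithmetic (15 msec, 90 msec, and the 2/3 hit fraction are the parameters given in the exercise):

    single-threaded: mean = (2/3) × 15 + (1/3) × 90 = 10 + 30 = 40 msec, so 1000/40 = 25 requests/sec
    multithreaded:   mean = 15 msec (disk waits overlapped), so 1000/15 ≈ 66 2/3 requests/sec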
16. The priority inversion problem occurs when a low-priority process is in its critical region and suddenly a high-priority process becomes ready and is scheduled. If it uses busy waiting, it will run forever. With user-level threads, it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread to run. There is no preemption. With kernel-level threads this problem can arise.

17. Each thread calls procedures on its own, so it must have its own stack for the local variables, return addresses, and so on. This is equally true for user-level threads as for kernel-level threads.

18. A race condition is a situation in which two (or more) processes are about to perform some action. Depending on the exact timing, one or the other goes first. If one of the processes goes first, everything works, but if another one goes first, a fatal error occurs.

19. Yes. The simulated computer could be multiprogrammed. For example, while process A is running, it reads out some shared variable. Then a simulated clock tick happens and process B runs. It also reads out the same variable. Then it adds 1 to the variable. When process A runs, if it also adds one to the variable, we have a race condition.

20. Yes, it still works, but it still is busy waiting, of course.

21. It certainly works with preemptive scheduling. In fact, it was designed for that case. When scheduling is nonpreemptive, it might fail. Consider the case in which turn is initially 0 but process 1 runs first. It will just loop forever and never release the CPU.

22. Yes it can. The memory word is used as a flag, with 0 meaning that no one is using the critical variables and 1 meaning that someone is using them. Put a 1 in the register, and swap the memory word and the register. If the register contains a 0 after the swap, access has been granted. If it contains a 1, access has been denied. When a process is done, it stores a 0 in the flag in memory.

23. To do a semaphore operation, the operating system first disables interrupts. Then it reads the value of the semaphore. If it is doing a down and the semaphore is equal to zero, it puts the calling process on a list of blocked processes associated with the semaphore. If it is doing an up, it must check to see if any processes are blocked on the semaphore. If one or more processes are blocked, one of them is removed from the list of blocked processes and made runnable. When all these operations have been completed, interrupts can be enabled again.

24. Associated with each counting semaphore are two binary semaphores, M, used for mutual exclusion, and B, used for blocking. Also associated with each counting semaphore is a counter that holds the number of ups minus the number of downs, and a list of processes blocked on that semaphore. To implement down, a process first gains exclusive access to the semaphores, counter, and list by doing a down on M. It then decrements the counter. If it is zero or more, it just does an up on M and exits. If it is negative, the process is put on the list of blocked processes. Then an up is done on M and a down is done on B to block the process. To implement up, first M is downed to get mutual exclusion, and then the counter is incremented. If it is more than zero, no one was blocked, so all that needs to be done is to up M. If, however, the counter is now negative or zero, some process must be removed from the list. Finally, an up is done on B and M in that order.
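A compact C sketch of the construction in answer 24, using POSIX sem_t objects purely as the two binary semaphores M and B; the explicit list of blocked processes in the answer is left to the sem_t wait queue, so this is only an approximation of the described mechanism:

    #include <semaphore.h>

    /* Counting semaphore built from two binary semaphores, as in answer 24. */
    typedef struct {
        sem_t M;          /* mutual exclusion on count, initialized to 1 */
        sem_t B;          /* blocking semaphore, initialized to 0 */
        int   count;      /* number of ups minus number of downs */
    } csem;

    void csem_init(csem *s, int value) {
        sem_init(&s->M, 0, 1);
        sem_init(&s->B, 0, 0);
        s->count = value;
    }

    void csem_down(csem *s) {
        sem_wait(&s->M);
        s->count--;
        if (s->count >= 0) {
            sem_post(&s->M);
        } else {              /* must block: up on M first, then down on B */
            sem_post(&s->M);
            sem_wait(&s->B);
        }
    }

    void csem_up(csem *s) {
        sem_wait(&s->M);
        s->count++;
        if (s->count <= 0)    /* someone is (or is about to be) blocked on B */
            sem_post(&s->B);  /* up on B, then on M, in that order */
        sem_post(&s->M);
    }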
25. If the program operates in phases and neither process may enter the next phase until both are finished with the current phase, it makes perfect sense to use a barrier.

26. With round-robin scheduling it works. Sooner or later L will run, and eventually it will leave its critical region. The point is, with priority scheduling, L never gets to run at all; with round robin, it gets a normal time slice periodically, so it has the chance to leave its critical region.

27. With kernel threads, a thread can block on a semaphore and the kernel can run some other thread in the same process. Consequently, there is no problem using semaphores. With user-level threads, when one thread blocks on a semaphore, the kernel thinks the entire process is blocked and does not run it ever again. Consequently, the process fails.

28. It is very expensive to implement. Each time any variable that appears in a predicate on which some process is waiting changes, the runtime system must re-evaluate the predicate to see if the process can be unblocked. With the Hoare and Brinch Hansen monitors, processes can only be awakened on a signal primitive.

29. The employees communicate by passing messages: orders, food, and bags in this case. In UNIX terms, the four processes are connected by pipes.

30. It does not lead to race conditions (nothing is ever lost), but it is effectively busy waiting.

31. If a philosopher blocks, neighbors can later see that he is hungry by checking his state, in test, so he can be awakened when the forks are available.

32. The change would mean that after a philosopher stopped eating, neither of his neighbors could be chosen next. In fact, they would never be chosen. Suppose that philosopher 2 finished eating. He would run test for philosophers 1 and 3, and neither would be started, even though both were hungry and both forks were available. Similarly, if philosopher 4 finished eating, philosopher 3 would not be started. Nothing would start him.

33. Variation 1: readers have priority. No writer may start when a reader is active. When a new reader appears, it may start immediately unless a writer is currently active. When a writer finishes, if readers are waiting, they are all started, regardless of the presence of waiting writers. Variation 2: writers have priority. No reader may start when a writer is waiting. When the last active process finishes, a writer is started, if there is one; otherwise, all the readers (if any) are started. Variation 3: symmetric version. When a reader is active, new readers may start immediately. When a writer finishes, a new writer has priority, if one is waiting. In other words, once we have started reading, we keep reading until there are no readers left. Similarly, once we have started writing, all pending writers are allowed to run.

34. It will need nT sec.

35. If a process occurs multiple times in the list, it will get multiple quanta per cycle. This approach could be used to give more important processes a larger share of the CPU. But when the process blocks, all entries had better be removed from the list of runnable processes.

36. In simple cases it may be possible to determine whether I/O will be limiting by looking at source code. For instance a program that reads all its input files into buffers at the start will probably not be I/O bound, but a problem that reads and writes incrementally to a number of different files (such as a compiler) is likely to be I/O bound.
If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU time used by a program, you can compare this with the total time to complete execution of the program. This is, of course, most meaningful on a system where you are the only user.

37. For multiple processes in a pipeline, the common parent could pass to the operating system information about the flow of data. With this information the OS could, for instance, determine which process could supply output to a process blocking on a call for input.

38. The CPU efficiency is the useful CPU time divided by the total CPU time. When Q ≥ T, the basic cycle is for the process to run for T and undergo a process switch for S. Thus (a) and (b) have an efficiency of T/(S + T). When the quantum is shorter than T, each run of T will require T/Q process switches, wasting a time ST/Q. The efficiency here is then T/(T + ST/Q), which reduces to Q/(Q + S), which is the answer to (c). For (d), we just substitute Q for S and find that the efficiency is 50 percent. Finally, for (e), as Q → 0 the efficiency goes to 0.

39. Shortest job first is the way to minimize average response time.
0 < X ≤ 3: X, 3, 5, 6, 9.
3 < X ≤ 5: 3, X, 5, 6, 9.
5 < X ≤ 6: 3, 5, X, 6, 9.
6 < X ≤ 9: 3, 5, 6, X, 9.
X > 9: 3, 5, 6, 9, X.

40. For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. For priority scheduling, B is run first. After 6 minutes it is finished. The other jobs finish at 14, 24, 26, and 30, for an average of 18.8 minutes. If the jobs run in the order A through E, they finish at 10, 16, 18, 22, and 30, for an average of 19.2 minutes. Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of 14 minutes.

41. The first time it gets 1 quantum. On succeeding runs it gets 2, 4, 8, and 15, so it must be swapped in 5 times.

42. A check could be made to see if the program was expecting input and did anything with it. A program that was not expecting input and did not process it would not get any special priority boost.

43. The sequence of predictions is 40, 30, 35, and now 25.

44. The fraction of the CPU used is 35/50 + 20/100 + 10/200 + x/250. To be schedulable, this must be less than 1. Thus x must be less than 12.5 msec.

45. Two-level scheduling is needed when memory is too small to hold all the ready processes. Some set of them is put into memory, and a choice is made from that set. From time to time, the set of in-core processes is adjusted. This algorithm is easy to implement and reasonably efficient, certainly a lot better than, say, round robin without regard to whether a process was in memory or not.

46. The kernel could schedule processes by any means it wishes, but within each process it runs threads strictly in priority order. By letting the user process set the priority of its own threads, the user controls the policy but the kernel handles the mechanism.
47. A possible shell script might be:

if [ ! -f numbers ]; then echo 0 > numbers; fi
count=0
while (test $count != 200)
do
    count=`expr $count + 1`
    n=`tail -1 numbers`
    expr $n + 1 >> numbers
done

Run the script twice simultaneously, by starting it once in the background (using &) and again in the foreground. Then examine the file numbers. It will probably start out looking like an orderly list of numbers, but at some point it will lose its orderliness, due to the race condition created by running two copies of the script. The race can be avoided by having each copy of the script test for and set a lock on the file before entering the critical area, and unlocking it upon leaving the critical area. This can be done like this:

if ln numbers numbers.lock
then
    n=`tail -1 numbers`
    expr $n + 1 >> numbers
    rm numbers.lock
fi

This version will just skip a turn when the file is inaccessible; variant solutions could put the process to sleep, do busy waiting, or count only loops in which the operation is successful.
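The ln trick in the second script works because creating a hard link fails if the target name already exists. A rough C equivalent of the same lock-file idea, using POSIX open() with O_CREAT | O_EXCL; the sleep-and-retry policy is one of the variants mentioned above, not part of the original script:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Try to take the lock by creating numbers.lock exclusively. */
    static int take_lock(void) {
        int fd = open("numbers.lock", O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd < 0)
            return -1;            /* someone else holds the lock */
        close(fd);
        return 0;
    }

    static void drop_lock(void) {
        unlink("numbers.lock");
    }

    int main(void) {
        for (int i = 0; i < 200; i++) {
            while (take_lock() != 0)
                sleep(1);         /* variant: sleep instead of skipping a turn */
            /* critical region: read the last number, append the next one */
            long n = 0;
            FILE *f = fopen("numbers", "r");
            if (f != NULL) {
                while (fscanf(f, "%ld", &n) == 1)
                    ;             /* n ends up holding the last number in the file */
                fclose(f);
            }
            f = fopen("numbers", "a");
            if (f != NULL) {
                fprintf(f, "%ld\n", n + 1);
                fclose(f);
            }
            drop_lock();
        }
        return 0;
    }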
SOLUTIONS TO CHAPTER 3 PROBLEMS

1. In the U.S., consider a presidential election in which three or more candidates are trying for the nomination of some party. After all the primary elections are finished, when the delegates arrive at the party convention, it could happen that no candidate has a majority and that no delegate is willing to change his or her vote. This is a deadlock. Each candidate has some resources (votes) but needs more to get the job done. In countries with multiple political parties in the parliament, it could happen that each party supports a different version of the annual budget and that it is impossible to assemble a majority to pass the budget. This is also a deadlock.

2. If the printer starts to print a file before the entire file has been received (this is often allowed to speed response), the disk may fill with other requests that can't be printed until the first file is done, but which use up disk space needed to receive the file currently being printed. If the spooler does not start to print a file until the entire file has been spooled it can reject a request that is too big. Starting to print a file is equivalent to reserving the printer; if the reservation is deferred until it is known that the entire file can be received, a deadlock of the entire system can be avoided. The user with the file that won't fit is still deadlocked of course, and must go to another facility that permits printing bigger files.

3. The printer is nonpreemptable; the system cannot start printing another job until the previous one is complete. The spool disk is preemptable; you can delete an incomplete file that is growing too large and have the user send it later, assuming the protocol allows that.

4. Yes. It does not make any difference whatsoever.

5. Yes, illegal graphs exist. We stated that a resource may only be held by a single process. An arc from a resource square to a process circle indicates that the process owns the resource. Thus a square with arcs going from it to two or more processes means that all those processes hold the resource, which violates the rules. Consequently, any graph in which multiple arcs leave a square and end in different circles violates the rules. Arcs from squares to squares or from circles to circles also violate the rules.

6. A portion of all such resources could be reserved for use only by processes owned by the administrator, so he or she could always run a shell and programs needed to evaluate a deadlock and make decisions about which processes to kill to make the system usable again.

7. Neither change leads to deadlock. There is no circular wait in either case.

8. Voluntary relinquishment of a resource is most similar to recovery through preemption. The essential difference is that computer processes are not expected to solve such problems on their own. Preemption is analogous to the operator or the operating system acting as a policeman, overriding the normal rules individual processes obey.

9. The process is asking for more resources than the system has. There is no conceivable way it can get these resources, so it can never finish, even if no other processes want any resources at all.

10. If the system had two or more CPUs, two or more processes could run in parallel, leading to diagonal trajectories.

11. Yes. Do the whole thing in three dimensions. The z-axis measures the number of instructions executed by the third process.

12. The method can only be used to guide the scheduling if the exact instant at which a resource is going to be claimed is known in advance. In practice, this is rarely the case.

13. A request from D is unsafe, but one from C is safe.

14. There are states that are neither safe nor deadlocked, but which lead to deadlocked states. As an example, suppose we have four resources: tapes, plotters, scanners, and CD-ROMs, as in the text, and three processes competing for them. We could have the following situation:
Has  Needs  Available
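Answers 13 and 14 rely on deciding whether a state is safe. A small C sketch of the usual safety check (banker's algorithm style); the matrices in main() are made-up numbers, not the Has/Needs/Available values of the text's example:

    #include <stdbool.h>
    #include <stdio.h>

    #define P 3   /* processes */
    #define R 4   /* resource types: tapes, plotters, scanners, CD-ROMs */

    /* Returns true if the state (has, needs, avail) is safe. */
    bool is_safe(int has[P][R], int needs[P][R], int avail[R]) {
        int work[R];
        bool done[P] = { false };
        for (int r = 0; r < R; r++) work[r] = avail[r];

        for (int finished = 0; finished < P; ) {
            bool progress = false;
            for (int p = 0; p < P; p++) {
                if (done[p]) continue;
                bool can_run = true;
                for (int r = 0; r < R; r++)
                    if (needs[p][r] > work[r]) { can_run = false; break; }
                if (can_run) {                    /* p can finish; reclaim what it holds */
                    for (int r = 0; r < R; r++) work[r] += has[p][r];
                    done[p] = true;
                    finished++;
                    progress = true;
                }
            }
            if (!progress) return false;          /* nobody can finish: unsafe */
        }
        return true;
    }

    int main(void) {
        /* Illustrative numbers only. */
        int has[P][R]   = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0} };
        int needs[P][R] = { {0,1,1,0}, {1,0,0,1}, {0,0,0,1} };
        int avail[R]    = { 1, 0, 0, 1 };
        printf("safe: %s\n", is_safe(has, needs, avail) ? "yes" : "no");
        return 0;
    }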

操作系统(第二版)习题答案


第1章一、填空1.计算机由硬件系统和软件系统两个部分组成,它们构成了一个完整的计算机系统。

2.按功能划分,软件可分为系统软件和应用软件两种。

3.操作系统是在裸机上加载的第一层软件,是对计算机硬件系统功能的首次扩充。

4.操作系统的基本功能是处理机(包含作业)管理、存储管理、设备管理和文件管理。

5.在分时和批处理系统结合的操作系统中引入“前台”和“后台”作业的概念,其目的是改善系统功能,提高处理能力。

6.分时系统的主要特征为多路性、交互性、独立性和及时性。

7.实时系统与分时以及批处理系统的主要区别是高及时性和高可靠性。

8.若一个操作系统具有很强的交互性,可同时供多个用户使用,则是分时操作系统。

9.如果一个操作系统在用户提交作业后,不提供交互能力,只追求计算机资源的利用率、大吞吐量和作业流程的自动化,则属于批处理操作系统。

10.采用多道程序设计技术,能充分发挥CPU 和外部设备并行工作的能力。

二、选择1.操作系统是一种B 。

A.通用软件B.系统软件C.应用软件D.软件包2.操作系统是对C 进行管理的软件。

A系统软件B.系统硬件C.计算机资源D.应用程序3.操作系统中采用多道程序设计技术,以提高CPU和外部设备的A 。

A.利用率B.可靠性C.稳定性D.兼容性4.计算机系统中配置操作系统的目的是提高计算机的B 和方便用户使用。

A.速度B.利用率C.灵活性D.兼容性5.C 操作系统允许多个用户在其终端上同时交互地使用计算机。

A.批处理B.实时C.分时D.多道批处理6.如果分时系统的时间片一定,那么D ,响应时间越长。

A.用户数越少B.内存越少C.内存越多D.用户数越多三、问答1.什么是“多道程序设计”技术?它对操作系统的形成起到什么作用?答:所谓“多道程序设计”技术,即是通过软件的手段,允许在计算机内存中同时存放几道相互独立的作业程序,让它们对系统中的资源进行“共享”和“竞争”,以使系统中的各种资源尽可能地满负荷工作,从而提高整个计算机系统的使用效率。

操作系统概念精要原书第二版答案


1.一个操作系统的三个主要目的是什么?答:三个主要目的是:为计算机用户提供一个方便、高效地在计算机硬件上执行程序的环境。

•根据需要分配计算机的单独资源来执行所需的任务。

分配过程应尽可能的公平和合理。

•作为一种控制程序,它主要具有两个功能: (1)监督用户程序的执行,防止计算机出现错误和不当使用;(2)管理I/O设备的操作和控制。

2我们强调了需要一个操作系统来充分使用计算硬件。

什么时候操作系统适合放弃这一原则并“浪费”资源?为什么这样的系统并不是真正的浪费呢?答:单用户系统应该最大限度地为用户使用该系统。

GUI可能会“浪费”CPU周期,但它却更优化了用户与系统的交互。

3程序员在为实时环境编写操作系统时必须克服的主要缺点是什么?答:主要的缺点是保持操作系统在实时系统的时间限制内。

如果系统在某个时间段内没有完成一个任务,则可能会导致整个系统的崩溃。

因此,在为实时系统编写操作系统时,作者必须确保他的调度方案不允许响应时间超过时间限制。

4记住操作系统的各种细节,考虑操作系统是否应该包括诸如浏览器和邮件程序等应用程序。

请分别论证它应该包括和不应该包括这些程序,并解释你的答案。

答:支持把流行应用程序包含在操作系统中的一个论点是:如果应用程序嵌入在操作系统中,它可能更好地利用内核的特性,因而比运行在内核之外的应用程序具有性能优势。

然而,反对在操作系统中嵌入应用程序的论点通常占上风:(1)应用程序就是应用程序,而不是操作系统的一部分;(2)在内核中运行所带来的任何性能好处都会被安全漏洞所抵消;(3)包含应用程序会导致操作系统臃肿。

5内核模式和用户模式之间的区别如何作为一种基本形式的保护(安全)?答:内核模式和用户模式之间的区别以以下方式提供了一种基本的保护形式。

某些指令只能在CPU处于内核模式时才能执行。

类似地,只有在程序处于内核模式时才能访问硬件设备,而只有在CPU处于内核模式时才能启用或禁用中断。

因此,CPU在用户模式下执行时的能力非常有限,从而加强了对关键资源的保护。

操作系统第二版课后习题答案


操作系统是计算机科学中的重要领域,它负责管理计算机硬件和软件资源,为用户提供良好的使用体验。

在学习操作系统的过程中,课后习题是巩固和深化知识的重要方式。

本文将为大家提供操作系统第二版课后习题的答案,帮助读者更好地理解和掌握操作系统的知识。

第一章:引论1. 操作系统的主要功能包括进程管理、内存管理、文件系统管理和设备管理。

2. 进程是指正在执行的程序的实例。

进程控制块(PCB)是操作系统用来管理进程的数据结构,包含进程的状态、程序计数器、寄存器等信息。

3. 多道程序设计是指在内存中同时存放多个程序,通过时间片轮转等调度算法,使得多个程序交替执行。

4. 异步输入输出是指程序执行期间,可以进行输入输出操作,而不需要等待输入输出完成。

第二章:进程管理1. 进程调度的目标包括提高系统吞吐量、减少响应时间、提高公平性等。

2. 进程调度算法包括先来先服务(FCFS)、最短作业优先(SJF)、优先级调度、时间片轮转等。

3. 饥饿是指某个进程长时间得不到执行的情况,可以通过调整优先级或引入抢占机制来解决。

4. 死锁是指多个进程因为争夺资源而陷入无限等待的状态,可以通过资源预分配、避免环路等方式来避免死锁。

第三章:内存管理1. 内存管理的主要任务包括内存分配、内存保护、地址转换等。

2. 连续内存分配包括固定分区分配、可变分区分配和动态分区分配。

3. 分页和分段是常见的非连续内存分配方式,分页将进程的地址空间划分为固定大小的页,分段将进程的地址空间划分为逻辑段。

4. 页面置换算法包括最佳置换算法、先进先出(FIFO)算法、最近最久未使用(LRU)算法等。
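下面给出一个示意性的C语言FIFO页面置换小例子,用来说明上面所列置换算法中最简单的一种(页框数和访问串均为随意举例,与任何教材习题无关):

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};          /* 示例访问串 */
        int n = sizeof(refs) / sizeof(refs[0]);
        int frame[FRAMES];
        int next = 0, loaded = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < loaded; j++)
                if (frame[j] == refs[i]) { hit = 1; break; }
            if (!hit) {                                  /* 缺页:按先进先出淘汰最早装入的页 */
                frame[next] = refs[i];
                next = (next + 1) % FRAMES;
                if (loaded < FRAMES) loaded++;
                faults++;
            }
        }
        printf("缺页次数 = %d\n", faults);
        return 0;
    }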

第四章:文件系统管理1. 文件是操作系统中用来存储和组织数据的逻辑单位,可以是文本文件、图像文件、音频文件等。

2. 文件系统的主要功能包括文件的创建、删除、读取、写入等操作。

3. 文件系统的组织方式包括层次目录结构、索引结构、位图结构等。

操作系统原理与应用(第2版)清大版第1章习题参考答案


第1章习题参考答案1、操作系统(OS)——是管理计算机系统资源(硬件和软件)的系统软件,它为用户使用计算机提供方便、有效和安全可靠的工作环境。

基本功能:处理器(处理机、CPU)管理、存储器管理、设备管理、文件管理、工作管理(系统交互与界面的有效利用)。

2、一个计算机系统是由硬件和软件两大部分组成。

硬件通常指诸如CPU、存储器、外设等这样一类用以完成计算机功能的各种部件。

计算机软件指为计算机编制的程序,加上执行程序时所需要的数据及说明使用该程序的文档资料。

计算机软件包括应用软件和系统软件两大部分。

3、批处理系统的主要特点是:多道、成批、处理过程中不需要人工干预。

分时系统的主要特点是:同时性、交互性、独立性、及时性。

实时系统的主要特点是:及时性、交互性、安全可靠性、多路性。

4、操作系统的不确定性,不是说操作系统本身的功能不确定,也不是说在操作系统控制下运行的用户程序结果不确定,而是说在操作系统控制下多个作业的执行次序和每个作业的执行时间是不确定的。

具体地说,同一批作业,两次或多次运行的执行序列可能是不同的。

如P1、P2、P3,第一次可能是P1、P2、P3;第二次可能是P2、P1、P3。

5、例如,老师在课堂上给学生讲课就是分时系统。

6、关于文件的所有操作就得到操作系统的服务。

7、网络系统软件中的主要部分是网络操作系统,有人也将它称为网络管理系统,它与传统的单机操作系统有所不同,它是建立在单机操作系统之上的一个开放式的软件系统,它面对的是各种不同的计算机系统的互连操作,面对各种不同的单机操作系统之间的资源共享,用户操作协调和与单机操作系统的交互,从而解决多个网络用户(甚至是全球远程的网络用户)之间争用共享资源的分配与管理。

8、良好的用户界面、树形结构的文件系统、字符流式文件、丰富的核外程序、对现有技术的精选和发展。

9、启动输入设备---接受输入数据---保存到内存---到CPU上运行---启动输出设备---在输出设备输出数据。

操作系统(第二版)课后习题答案

257 < 10+256,故需要一次间接寻址,即可读出该数据。

如果要求读入从文件首到263168Byte处的数据(包括这个数据),读出过程:首先根据直接寻址读出前10块;读出一次间接索引指示的索引块1块;将索引下标从0~247对应的数据块全部读入,即可。共读盘块数10+1+248=259块。
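按题解给出的数字可以这样核对(这里假定该题中盘块大小为1024Byte、索引表有10个直接块项,这是从题解本身的数字反推出来的,原题题面未在本文中给出):

    263168 / 1024 = 257,即第263168字节落在第257号数据块中(数据块从0编号)
    需要读的数据块为第0~257块,共258块 = 10(直接)+ 248(一次间接,索引下标0~247)
    再加上1块一次间接索引块,共需读盘 10 + 1 + 248 = 259 块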
3.某文件系统采用索引文件结构,设文件索引表的每个表目占用3Byte,存放盘块的块号,盘块的大小为512Byte。此文件系统采用直接、一次间接、二次间接、三次间接索引所能管理的最大文件是多大?
(1)|100-8|+|18-8|+|27-18|+|129-27|+|110-129|+|186-110|+|78-186|+|147-78|+|41-147|+|10-41|+|64-10|+|12-64|=728
①FCFS(先来先服务法)

作业  到达时间  运行时间  开始时间  完成时间  周转时间  带权周转时间
1     8:00      120min    8:00      10:00     120min    1
2     8:50      50min     10:00     10:50     120min    2.4
3     9:00      10min     10:50     11:00     120min    12
4     9:50      20min     11:00     11:20     90min     4.5

平均周转时间T=112.5min,平均带权周转时间W=4.975
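按上面恢复出的FCFS表计算两个平均值(表中作业1的运行时间和开始时间、作业2和作业4的带权周转时间是依据其余残存数据推算补齐的):

    平均周转时间 T = (120 + 120 + 120 + 90) / 4 = 112.5 min
    平均带权周转时间 W = (120/120 + 120/50 + 120/10 + 90/20) / 4 = (1 + 2.4 + 12 + 4.5) / 4 = 4.975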
②SJF(短作业优先法)

作业  到达时间  运行时间  开始时间  完成时间  周转时间
页面长度为4KB,虚地址空间共有(  )个页面
3.某计算机系统提供24位虚存空间,主存空间为2^18 Byte,采用请求分页虚拟存储管理,页面尺寸为1KB。假定应用程序产生虚拟地址(八进制),而此页面分得的块号为100(八进制),说明虚地址转换为物理地址的过程。
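原题中具体的虚拟地址数字在本文中已缺失,下面用一个假设的八进制地址演示该题要求的变换方法(地址4370只是举例,并非原题数据):

    页面尺寸 1KB = 2^10 Byte,故虚地址的低10位为页内偏移
    假设虚地址为 4370(八进制)= 100 011 111 000(二进制)
    页号 = 2,页内偏移 = 370(八进制)= 248(十进制)
    该页装入块号 100(八进制)= 64(十进制)的主存块,
    物理地址 = 块号 × 1024 + 偏移 = 200000(八进制)+ 370(八进制)= 200370(八进制)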

操作系统实用教程(第二版)-OS习题答案


操作系统习题解答

1. 存储程序式计算机的主要特点是什么?答:主要特点是以顺序计算为基础,根据程序规定的顺序依次执行每一个操作,控制部件根据程序对整个计算机的活动实行集中过程控制,即为集中顺序过程控制。

这类计算是过程性的,实际上这种计算机是模拟人们的手工计算的产物。

即首先取原始数据,执行一个操作,将中间结果保存起来;再取一个数,和中间结果一起又执行一个操作,如此计算下去。

在遇到多个可能同时执行的分支时,也是先执行完一个分支,然后再执行第二个分支,直到计算完毕。

2. 批处理系统和分时系统各具有什么特点?答:批处理系统是在解决人机矛盾以及高速度的中央处理机和低速度的I/O设备这两对矛盾的过程中发展起来的。

它的出现改善了CPU和外设的使用情况,其特点是实现了作业的自动定序、自动过渡,从而使整个计算机系统的处理能力得以提高。

在多道系统中,若采用了分时技术,就是分时操作系统,它是操作系统的另一种类型。

它一般采用时间片轮转的办法,使一台计算机同时为多个任务服务。

对用户都能保证足够快的响应时间,并提供交互会话功能。

它与批处理系统之间的主要差别在于,分时系统是人机交互式系统,响应时间快;而批处理系统是作业自动定序和过渡,无人机交互,周转时间长。

3. 实时系统的特点是什么?一个实时信息处理系统和一个分时系统从外表看来很相似,它们有什么本质的区别呢?答:实时系统对响应时间的要求比分时系统更高,一般要求响应时间为秒级、毫秒级甚至微秒级。

将电子计算机应用到实时领域,配置上实时监控系统,便组成各种各样的专用实时系统。

实时系统按其使用方式不同分为两类:实时控制系统和实时信息处理系统。

实时控制是指利用计算机对实时过程进行控制和提供监督环境。

实时信息处理系统是指利用计算机对实时数据进行处理的系统。

实时系统大部分是为特殊的实时任务设计的,这类任务对系统的可靠性和安全性要求很高。

与分时系统相比,实时系统没有那样强的交互会话功能,通常不允许用户通过实时终端设备去编写新的程序或修改已有的程序。

操作系统原理与实践教程(第二版)习题答案


第1章操作系统概论(1) 试说明什么是操作系统,它具有什么特征?其最基本特征是什么?解:操作系统就是一组管理与控制计算机软硬件资源并对各项任务进行合理化调度,且附加了各种便于用户操作的工具的软件层次。

现代操作系统都具有并发、共享、虚拟和异步特性,其中并发性是操作系统的最基本特征,也是最重要的特征,其它三个特性均基于并发性而存在。

(2) 设计现代操作系统的主要目标是什么?解:现代操作系统的设计目标是有效性、方便性、开放性、可扩展性等特性。

其中有效性指的是OS应能有效地提高系统资源利用率和系统吞吐量。

方便性指的是配置了OS后的计算机应该更容易使用。

这两个性质是操作系统最重要的设计目标。

开放性指的是OS应遵循世界标准规范,如开放系统互连OSI国际标准。

可扩展性指的是OS应提供良好的系统结构,使得新设备、新功能和新模块能方便地加载到当前系统中,同时也要提供修改老模块的可能,这种对系统软硬件组成以及功能的扩充保证称为可扩展性。

(3) 操作系统的作用体现在哪些方面?解:现代操作系统的主要任务就是维护一个优良的运行环境,以便多道程序能够有序地、高效地获得执行,而在运行的同时,还要尽可能地提高资源利用率和系统响应速度,并保证用户操作的方便性。

因此操作系统的基本功能应包括处理器管理、存储器管理、设备管理和文件管理。

此外,为了给用户提供一个统一、方便、有效的使用系统能力的手段,现代操作系统还需要提供一个友好的人机接口。

在互联网不断发展的今天,操作系统中通常还具备基本的网络服务功能和信息安全防护等方面的支持。

(4) 试说明实时操作系统和分时操作系统在交互性、及时性和可靠性方面的异同。

解:●交互性:分时系统能够使用户和系统进行人-机对话。

实时系统也具有交互性,但人与系统的交互仅限于访问系统中某些特定的专用服务程序。

●及时性:分时系统的响应时间是以人能够接受的等待时间为标准,而实时控制系统对响应时间要求比较严格,它是以控制过程或信息处理中所能接受的延迟为标准。

《Linux操作系统(第2版)》课后习题答案


练习题 一、选择题 1. Linux最早是由计算机爱好者 B 开发的。

A. Richard PetersenB. Linus TorvaldsC. Rob PickD. Linux Sarwar2. 下列 C 是自由软件。

A. Windows XPB. UNIXC. LinuxD. Windows 20003. 下列 B 不是Linux的特点。

A. 多任务B. 单用户C. 设备独立性D. 开放性4. Linux的内核版本是 A 的版本。

A. 不稳定B. 稳定的C. 第三次修订D. 第二次修订5. Linux安装过程中的硬盘分区工具是 D 。

A. PQmagicB. FDISKC. FIPSD. Disk Druid6. Linux的根分区系统类型是 C 。

A. FATl6B. FAT32C. ext4D. NTFS二、填空题1. GNU的含义是:GNU's Not UNIX。

2. Linux一般有3个主要部分:内核(kernel)、命令解释层(Shell或其他操作环境)、实用工具。

3. 安装Linux最少需要两个分区,分别是swap交换分区和/(根)分区。

4. Linux默认的系统管理员账号是root 。

三、简答题(略)1.简述Red Hat Linux系统的特点,简述一些较为知名的Linux发行版本。

2.Linux有哪些安装方式?安装Red Hat Linux系统要做哪些准备工作?3.安装Red Hat Linux系统的基本磁盘分区有哪些?4.Red Hat Linux系统支持的文件类型有哪些?练习题 一、选择题 1. D 命令能用来查找在文件TESTFILE中包含四个字符的行。A. grep '' TESTFILE  B. grep '....' TESTFILE  C. grep '^$' TESTFILE  D. grep '^....$' TESTFILE  2. B 命令用来显示/home及其子目录下的文件名。

操作系统第二版罗宇_课后答案


操作系统部分课后习题答案

1.2 操作系统以什么方式组织用户使用计算机?答:操作系统以进程的方式组织用户使用计算机。

用户所需要完成的各种任务必须由相应的程序来表达。

为了实现用户的任务,必须使具有相应功能的程序执行。

而进程就是程序的运行,操作系统的进程调度程序决定CPU在各进程间的切换。

操作系统为用户提供进程创建和结束等系统调用功能,使用户能够创建新进程。

操作系统在初始化后,可以为每个可能的系统用户创建第一个用户进程,用户的其他进程则可以由父进程通过“进程创建”系统调用来创建。

1.4 早期监督程序(monitor)的功能是什么?答:早期监督程序的功能是替代系统操作员的部分工作,自动控制作业的运行。

监督程序首先把第一道作业调入主存,并启动该作业。

运行结束后,再把下一道作业调入主存并启动运行。

它如同一个系统操作员,负责批作业的I/O管理,并自动根据作业控制说明书以单道串行的方式控制作业的运行,同时在程序运行过程中通过提供各种系统调用,控制计算机资源的使用。

1.7 试述多道程序设计技术的基本思想。

为什么采用多道程序设计技术可以提高资源利用率?答:多道程序设计技术的基本思想是,在主存中同时保持多道程序,主机以交替的方式同时处理多道程序。

从宏观来看,主机内同时保持和处理若干道已开始运行但尚未结束的程序。

从微观来看,某一时刻处理机只运行某一道程序。

可以提高资源利用率的原因:任何一道作业的运行总是交替地串行使用CPU、外设等资源,即使用一段时间的CPU,然后使用一段时间的I/O设备;由于采用多道程序设计技术,加之对多道程序实施合理的运行调度,就可以实现CPU和I/O设备的高度并行,从而大大提高CPU与外设的利用率。

1.8 什么是分时系统?其主要特征是什么?适用于哪些应用领域?答:分时系统是以多道程序设计技术为基础的交互式系统,在此系统中,一台计算机与多台终端相连接,用户通过各自的终端和终端命令以交互的方式使用计算机系统。

linux操作系统(第二版)课后习题答案


Linux操作系统是一种开源的操作系统,广泛应用于各个领域。

在学习Linux操作系统的过程中,课后习题是一个非常重要的部分,通过解答习题可以加深对知识点的理解和应用能力的提升。

本文将为大家提供一些关于Linux操作系统(第二版)课后习题的答案,希望能对大家的学习有所帮助。

一、选择题1. Linux操作系统最早由谁创建?答:Linus Torvalds2. Linux操作系统是哪种类型的操作系统?答:开源操作系统3. Linux操作系统的内核是?答:Linux内核4. Linux操作系统的特点是?答:稳定、安全、可定制性强5. Linux操作系统最早是为了什么目的而创建的?答:为了个人电脑而创建的二、判断题1. Linux操作系统只能运行在服务器上,不能用于个人电脑。

答:错误2. Linux操作系统的文件系统是大小写敏感的。

答:正确3. Linux操作系统只能使用命令行界面,不能使用图形界面。

答:错误4. Linux操作系统不支持多用户同时登录。

答:错误5. Linux操作系统没有商业公司支持,完全由志愿者维护。

答:错误三、填空题1. Linux操作系统的命令行界面称为______。

答:Shell2. Linux操作系统的默认Shell是______。

答:Bash3. Linux操作系统的配置文件一般存放在______目录下。

答:/etc4. Linux操作系统的进程管理工具是______。

答:ps5. Linux操作系统的软件包管理工具是______。

答:apt四、简答题1. 请简要介绍一下Linux操作系统的文件系统结构。

答:Linux操作系统的文件系统结构是由根目录/开始的,包括了多个目录和文件。

常见的目录包括/bin、/etc、/home、/usr等。

其中/bin存放了一些系统命令,/etc存放了系统的配置文件,/home存放了用户的主目录,/usr存放了系统的应用程序和文件。

计算机操作系统第二版答案(郁红英)


习题二 1.操作系统中为什么要引入进程的概念?为了实现并发进程之间的合作和协调,以及保证系统的安全,操作系统在进程管理方面要做哪些工作?答:(1)为了从变化的角度动态地分析研究可以并发执行的程序,真实地反映系统的独立性、并发性、动态性和相互制约,操作系统中就不得不引入“进程”的概念;(2)为了防止操作系统及其关键的数据结构受到用户程序有意或无意的破坏,通常将处理机的执行状态分成核心态和用户态;对系统中的全部进程实行有效的管理,其主要表现是对一个进程进行创建、撤销以及在某些进程状态之间的转换控制。2.试描述当前正在运行的进程状态改变时,操作系统进行进程切换的步骤。

答:(1)就绪状态→运行状态。

处于就绪状态的进程,具备了运行的条件,但由于未能获得处理机,故没有运行。

(2)运行状态→就绪状态。

正在运行的进程,由于规定的时间片用完而被暂停执行,该进程就会从运行状态转变为就绪状态。

(3)运行状态→阻塞状态。

处于运行状态的进程,除了因为时间片用完而暂停执行外还有可能由于系统中的其他因素的影响而不能继续执行下去。

3.现代操作系统一般都提供多任务的环境,试回答以下问题。

(1)为支持多进程的并发执行,系统必须建立哪些关于进程的数据结构?答:为支持进程的并发执行,系统必须建立“进程控制块(PCB)”,PCB的组织方式常用的是链接方式。

(2)为支持进程的状态变迁,系统至少应该供哪些进程控制原语?答:进程的阻塞与唤醒原语和进程的挂起与激活原语。

(3)当进程的状态变迁时,相应的数据结构发生变化吗?答:创建原语:建立进程的PCB,并将进程投入就绪队列。

撤销原语:删除进程的PCB,并将进程在其队列中摘除;阻塞原语:将进程PCB中进程的状态从运行状态改为阻塞状态,并将进程投入阻塞队列;唤醒原语:将进程PCB中进程的状态从阻塞状态改为就绪状态,并将进程从阻塞队列摘下,投入到就绪队列中。

4.什么是进程控制块?从进程管理、中断处理、进程通信、文件管理、设备管理及存储管理的角度设计进程控制块应该包含的内容。

答:(1)进程控制块是用来描述进程本身的特性、进程的状态、进程的调度信息及对资源的占有情况等的一个数据结构;(2)为了进程管理,进程控制块包括以下几方面。

a)进程的描述信息,包括进程标识符、进程名等。

b)进程的当前状况。

c)当前队列链接指针。

d)进程的家族关系。

为了中断处理,进程控制块的内容应该包括处理机状态信息和各种寄存器的内容。

为了进程通信,进程控制块的内容应该包括进程使用的信号量、消息队列指针等。

为了设备管理,进程控制块的内容应该包括进程占有资源的情况。

5.假设系统就绪队列中有10个进程,这10个进程轮换执行,每隔300ms轮换一次,CPU在进程切换时所花费的时间是10ms,试问系统花在进程切换上的开销占系统整个时间的比例是多少?答:因为每隔300ms换一次进程,且每个进程切换时所花费的时间是10ms,则系统花在进程切换上的开销占系统整个时间的比例是10/(300+10)≈3.2% 6.试述线程的特点及其与进程之间的关系。

答:(1)特点:线程之间的通信要比进程之间的通信方便的多;同一进程内的线程切换也因为线程的轻装而方便的多。

同时线程也是被独立调度和分派的基本单位;(2)线程与进程的关系:线程和进程是两个密切相关的概念,一个进程至少拥有一个线程,进程根据需要可以创建若干个线程。

线程自己基本上不拥有资源,只拥有少量必不可少的资源(线程控制块和堆栈)7.根据图2-18,回答以下问题。

(1)进程发生状态变迁1、3、4、6、7的原因。

答:1表示操作系统把处于创建状态的进程移入就绪队列;3表示进程请求I/O或等待某事件;4表示进程运行的时间片用完;6表示I/O完成或事件完成;7表示进程完成。

(2)系统中常常由于某一进程的状态变迁引起另一进程也产生状态变迁,这种变迁称为因果变迁。

下述变迁是否为因果变迁:3~2,4~5,7~2,3~6,试说明原因。

答:3→2是因果变迁,当一个进程从运行态变为阻塞态时,此时CPU空闲,系统首先到高优先级队列中选择一个进程。

4→5是因果变迁,当一个进程运行完毕时,此时CPU空闲,系统首先到高优先级队列中选择进程,但如果高优先级队列为空,则从低优先队列中选择一个进程。

7→2 是因果变迁,当一个进程运行完毕时,CPU空闲,系统首先到高优先级队列中选择一个进程。

3→6不是因果变迁。

一个进程阻塞时由于自身的原因而发生的,和另一个进程等待的时间到达没有因果关系。

(3)根据此进程状态转换图,说明该系统CPU调度的策略和效果。

答:当进程调度时,首先从高优先级就绪队列选择一个进程,赋予它的时间片为100ms。

如果高优先级就绪队列为空,则从低优先级就绪队列选择进程,并且赋予该进程的时间片为500ms。

这种策略一方面照顾了短进程,一个进程如果在100ms运行完毕它将退出系统,更主要的是照顾了I/O量大的进程,进程因I/O进入阻塞队列,当I/O完成后它就进入了高优先级就绪队列,在高优先级就绪队列等待的进程总是优于低优先级就绪队列的进程。

而对于计算量较大的进程,它的计算如果在100ms的时间内不能完成,它将进入低优先级就绪队列,在这个队列的进程被选中的机会要少,只有当高优先级就绪队列为空,才从低优先级就绪队列选择进程,但对于计算量大的进程,系统给予的适当照顾时间片增大为500ms。

8.回答以下问题。

(1)若系统中没有运行进程,是否一定没有就绪进程?为什么?答:是,因为当CPU空闲时,系统就会在就绪队列里调度进程,只有当就绪队列为空时,系统中才没有运行程序。

(2)若系统中既没有运行进程,也没有就绪进程,系统中是否就没有阻塞进程?解释。

答:不一定,当运行的程序都因为请求I/O或等待事件时而进入阻塞,系统中就没有就绪进程。

(3)如果系统采用优先级调度策略,运行的进程是否一定是系统中优先级最高的进程?为什么?答:不一定,若优先级高的进程进入阻塞状态时,而且优先级高的就绪队列里没有等待的进程,这时就会调度优先级低的就绪队列的进程。

9.假如有以下程序段,回答下面的问题。

S1: a=3-x;S2: b=2*a;S3: c=5+a;(1)并发程序执行的Bernstein条件是什么?答:若P1与P2可并发执行,当且仅当R(P1)∩W(P2)∪R(P2)∩W(P1)∪W(P1)∩W(P2)={}时才满足。

(2)试画图表示它们执行时的先后次序。

(3)利用Bernstein 条件证明,S1、S2和S3哪两个可以并发执行,哪两个不能。

答:R(s1)={x},W(s1)={a};R(s2)={a},W(s2)={b};R(s3)={a},W(s3)={c};(1).R(s1)∩W(s2)∪R(s2)∩W(s1)∪W(s1)∩W(s2)={a},则s1与s2不能并发执行;(2). R(s1)∩W(s3)∪R(s3)∩W(s1)∪W(s1)∩W(s3)={a},则s1与s3不能并发执行;(3). R(s2)∩W(s3)∪R(s3)∩W(s2)∪W(s2)∩W(s3)={},则s2与s3可以并发执行。

习题三 1.以下进程之间存在相互制约关系吗?若存在,是什么制约关系?为什么?(1)几个同学去图书馆借同一本书。

答:互斥关系;因为他们要借同一本书,不可能同时借到,所以互斥。

(2)篮球比赛中两队同学争抢篮板球。

答:互斥关系;因为争抢同一个篮板,存在互斥关系。

(3)果汁流水线生产中捣碎、消毒、灌装、装箱等各道工序。

答:同步关系;他们必须相互协作才能使进程圆满完成。

(4)商品的入库出库。

答:同步关系;因为商品出库可以为入库提供空间。

(5)工人做工与农民种粮。

答:没有制约关系。

2.在操作系统中引入管程的目的是什么?条件变量的作用是什么?答:用信号量可以实现进程的同步与互斥,但要设置许多信号量,使用大量的P、V操作,而且还要仔细安排P操作的排列次序,否则将会出现错误的结果或是死锁现象。

为了解决这些问题引进了管程;条件变量的作用是使进程不仅能被挂起,而且当条件满足且管程再次可用时,可以恢复该进程并允许它在挂起点重新进入管程。

3.说明P、V操作为什么要设计成原语。

答:用信号量S表示共享资源,其初值为1表示有一个资源。

设有两个进程申请该资源,若其中一个进程先执行P操作。

P操作中的减1操作由3条机器指令组成:取S送寄存器R;R-1送R;R送S。

若P操作不用原语实现,在执行了前述三条指令中的2条,即还未执行R送S时(此时S值仍为1),进程被剥夺CPU,另一个进程执行也要执行P操作,执行后S的值为0,导致信号量的值错误。

正确的结果是两个进程执行完P操作后,信号量S的值为-1,进程阻塞。
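下面给出与上述说明对应的一个C语言示意(仅为示意性质:disable_interrupts()、enable_interrupts()、block_current_process()、wakeup_one()都是假设存在的原语名称,用来表示题解中“P、V必须以原语方式不可中断地执行”这一点,并非某个真实库的接口):

    typedef struct {
        int value;            /* 信号量的值 */
        queue blocked;        /* 阻塞在该信号量上的进程队列(示意) */
    } semaphore;

    void P(semaphore *s) {
        disable_interrupts();                      /* 保证“取S、减1、写回”不被打断 */
        s->value--;                                /* 对应:取S送寄存器R;R-1送R;R送S */
        if (s->value < 0)
            block_current_process(&s->blocked);    /* 值为负则阻塞调用进程(示意) */
        enable_interrupts();
    }

    void V(semaphore *s) {
        disable_interrupts();
        s->value++;
        if (s->value <= 0)
            wakeup_one(&s->blocked);               /* 有进程在等待,唤醒其中一个(示意) */
        enable_interrupts();
    }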

4.设有一个售票大厅,可容纳200人购票。

如果厅内不足200人则允许进入,超过则在厅外等候;售票员某时只能给一个购票者服务,购票者买完票后就离开。

试问:(1)购票者之间是同步关系还是互斥关系?答:互斥关系。

(2)用P、V操作描述购票者的工作过程。

semaphore empty=200;   /* 大厅内还可容纳的人数 */
semaphore mutex=1;     /* 售票员一次只为一个购票者服务 */
void buyer()
{
    P(empty);          /* 厅内不足200人才可进入,否则在厅外等候 */
    进入大厅;
    P(mutex);          /* 等待售票员空闲 */
    购票;
    V(mutex);
    离开售票厅;
    V(empty);
}
5.进程之间的关系如图3-16所示,试用P、V操作描述它们之间的同步。

semaphore A,B,C,D,E,F,G=0;
{S1, V(A), V(B)};
{P(A), S2, V(C)};
{P(B), S3, V(D), V(E)};
{P(D), S4, V(F)};
{P(E), S5, V(G)};
{P(C), P(F), P(G), S6};
6.有4个进程P1、P2、P3、P4共享一个缓冲区,进程P1向缓冲区存入消息,进程P2、P3、P4从缓冲区中取消息,要求发送者必须等三个进程都取过本消息后才能发送下一条消息。

缓冲区内每次只能容纳一个消息,用P、V操作描述四个进程存取消息的情况。

答:
semaphore empty=3;                      /* 尚可执行的“取走”次数,初值3表示缓冲区可写 */
semaphore full2=0, full3=0, full4=0;    /* P2、P3、P4各自是否有消息可取 */
void P1()
{
    while(1)
    {
        P(empty); P(empty); P(empty);   /* 等三个进程都取过上一条消息 */
        向缓冲区存入消息;
        V(full2); V(full3); V(full4);   /* 通知三个进程取消息 */
    }
}
void P2()
{
    while(1) { P(full2); 从缓冲区取消息; V(empty); }
}
void P3()
{
    while(1) { P(full3); 从缓冲区取消息; V(empty); }
}
void P4()
{
    while(1) { P(full4); 从缓冲区取消息; V(empty); }
}
7.分析生产者——消费者问题中多个P操作颠倒引起的后果。
