Operating Systems: Answers and Explanations for Selected End-of-Chapter Exercises
Selected Operating Systems Exercises with Answers
Chapter 2: The Operating System's Run-Time Environment

2.2 Why do modern computers provide the two distinct machine states, user mode and supervisor mode? The Intel 80386 provides four different machine states (supervisor mode is further divided into three privilege levels). How do you understand this?
Answer: The two states exist so that privileged instructions (I/O, storage-management, and state-changing instructions) can be reserved for the operating system, protecting the system from errant user programs. The Intel 80386 divides supervisor mode, in which every instruction may be executed, into three privilege levels; together with user mode, which may execute only non-privileged instructions, this gives four machine states. Grouped by what the processor is permitted to do, these four levels still reduce to the two classical states, supervisor mode and user mode, so the design remains consistent with the usual model of processor working states.
2.6 What is the program status word, and what does it mainly contain?
Answer: How does the system know what state the processor is currently in, whether it may execute privileged instructions, and which instruction it is to execute next? To answer these questions, every computer provides a number of special registers: a dedicated register that points to the next instruction to be executed, called the program counter (PC), and a dedicated register that records the processor's status, called the program status word (PSW).
The processor status recorded in the PSW typically includes: the condition codes, which reflect the result of the instruction just executed; the interrupt mask, which indicates whether interrupts are allowed (some machines, such as the PDP-11, use an interrupt priority level instead); and the CPU working state, supervisor mode or user mode, which indicates whether the operating system or an ordinary user program is currently running on the CPU and therefore whether privileged instructions and other special rights may be used.
2.11 How does the CPU detect an interrupt event? What should be done once one is detected?
Answer: A mechanism capable of detecting interrupts, called the interrupt-scanning mechanism, is added to the processor's control unit. Normally, at the last moment of each instruction cycle it scans the interrupt register to check whether an interrupt signal has arrived.
If there is no interrupt signal, the next instruction is executed.
If an interrupt has arrived, the interrupt hardware places the contents of the interrupt trigger, encoded as specified, into the corresponding bits of the program status word PSW (bits 16 to 31 on the IBM PC); this value is called the interrupt code.
Once an interrupt event is detected, the corresponding interrupt-handling routine must be executed. The hardware first performs the following steps:
1. Push the processor's program status word PSW onto the stack.
2. Push the instruction pointer IP (the offset within the program's code segment) and the code-segment base register CS onto the stack, thereby saving the return address of the interrupted program.
3. Fetch the interrupt vector of the accepted interrupt request (it contains the IP and CS values of the interrupt handler), so that control can transfer to the interrupt-handling routine.
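The three hardware steps above can be modeled in a few lines. This is only an illustrative sketch: the register values, the vector number, and the handler address below are made up, and a real processor performs these steps in hardware rather than in software.

```python
class CPU:
    """Toy model of the hardware interrupt-entry sequence described above."""
    def __init__(self, vector_table):
        self.psw = 0x0200                    # arbitrary example status word
        self.cs, self.ip = 0x1000, 0x0042    # arbitrary example return address
        self.stack = []
        self.vectors = vector_table          # interrupt number -> (handler CS, handler IP)

    def interrupt_entry(self, int_no):
        self.stack.append(self.psw)              # step 1: save the PSW
        self.stack.append((self.cs, self.ip))    # step 2: save the interrupted program's return address
        self.cs, self.ip = self.vectors[int_no]  # step 3: load the handler address from the vector

cpu = CPU({0x21: (0xF000, 0x0100)})   # hypothetical vector entry for interrupt 0x21
cpu.interrupt_entry(0x21)
print(cpu.stack, hex(cpu.cs), hex(cpu.ip))
```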
Operating Systems (9th Edition): Answers and Explanations for Selected Practice Exercises
CHAPTER 9 Virtual Memory Practice Exercises9.1 Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs.Answer:A page fault occurs when an access to a page that has not beenbrought into main memory takes place. The operating system verifiesthe memory access, aborting the program if it is invalid. If it is valid, a free frame is located and I/O is requested to read the needed page into the free frame. Upon completion of I/O, the process table and page table are updated and the instruction is restarted.9.2 Assume that you have a page-reference string for a process with m frames (initially all empty). The page-reference string has length p;n distinct page numbers occur in it. Answer these questions for any page-replacement algorithms:a. What is a lower bound on the number of page faults?b. What is an upper bound on the number of page faults?Answer:a. nb. p9.3 Consider the page table shown in Figure 9.30 for a system with 12-bit virtual and physical addresses and with 256-byte pages. The list of freepage frames is D, E, F (that is, D is at the head of the list, E is second, and F is last).Convert the following virtual addresses to their equivalent physical addresses in hexadecimal. All numbers are given in hexadecimal. (A dash for a page frame indicates that the page is not in memory.)• 9EF• 1112930 Chapter 9 Virtual Memory• 700• 0FFAnswer:• 9E F - 0E F• 111 - 211• 700 - D00• 0F F - EFF9.4 Consider the following page-replacement algorithms. Rank these algorithms on a five-point scale from “bad” to “perfect” according to their page-fault rate. Separate those algorithms that suffer from Belady’s anomaly from those that do not.a. LRU replacementb. FIFO replacementc. Optimal replacementd. Second-chance replacementAnswer:Rank Algorithm Suffer from Belady’s anomaly1 Optimal no2 LRU no3 Second-chance yes4 FIFO yes9.5 Discuss the hardware support required to support demand paging. Answer:For every memory-access operation, the page table needs to be consulted to check whether the corresponding page is resident or not and whether the program has read or write privileges for accessing the page. These checks have to be performed in hardware. A TLB could serve as a cache and improve the performance of the lookup operation.9.6 An operating system supports a paged virtual memory, using a central processor with a cycle time of 1 microsecond. It costs an additional 1 microsecond to access a page other than the current one. Pages have 1000 words, and the paging device is a drum that rotates at 3000 revolutions per minute and transfers 1 million words per second. The following statistical measurements were obtained from the system:• 1 percent of all instructions executed accessed a page other than the current page.•Of the instructions that accessed another page, 80 percent accesseda page already in memory.Practice Exercises 31•When a new page was required, the replaced page was modified 50 percent of the time.Calculate the effective instruction time on this system, assuming that the system is running one process only and that the processor is idle during drum transfers.Answer:effective access time = 0.99 × (1 sec + 0.008 × (2 sec)+ 0.002 × (10,000 sec + 1,000 sec)+ 0.001 × (10,000 sec + 1,000 sec)= (0.99 + 0.016 + 22.0 + 11.0) sec= 34.0 sec9.7 Consider the two-dimensional array A:int A[][] = new int[100][100];where A[0][0] is at location 200 in a paged memory system with pages of size 200. 
A small process that manipulates the matrix resides in page 0 (locations 0 to 199). Thus, every instruction fetch will be from page 0. For three page frames, how many page faults are generated bythe following array-initialization loops, using LRU replacement andassuming that page frame 1 contains the process and the other twoare initially empty?a. for (int j = 0; j < 100; j++)for (int i = 0; i < 100; i++)A[i][j] = 0;b. for (int i = 0; i < 100; i++)for (int j = 0; j < 100; j++)A[i][j] = 0;Answer:a. 5,000b. 509.8 Consider the following page reference string:1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.How many page faults would occur for the following replacement algorithms, assuming one, two, three, four, five, six, or seven frames? Remember all frames are initially empty, so your first unique pages will all cost one fault each.•LRU replacement• FIFO replacement•Optimal replacement32 Chapter 9 Virtual MemoryAnswer:Number of frames LRU FIFO Optimal1 20 20 202 18 18 153 15 16 114 10 14 85 8 10 76 7 10 77 77 79.9 Suppose that you want to use a paging algorithm that requires a referencebit (such as second-chance replacement or working-set model), butthe hardware does not provide one. Sketch how you could simulate a reference bit even if one were not provided by the hardware, or explain why it is not possible to do so. If it is possible, calculate what the cost would be.Answer:You can use the valid/invalid bit supported in hardware to simulate the reference bit. Initially set the bit to invalid. O n first reference a trap to the operating system is generated. The operating system will set a software bit to 1 and reset the valid/invalid bit to valid.9.10 You have devised a new page-replacement algorithm that you thinkmaybe optimal. In some contorte d test cases, Belady’s anomaly occurs. Is the new algorithm optimal? Explain your answer.Answer:No. An optimal algorithm will not suffer from Belady’s anomaly because —by definition—an optimal algorithm replaces the page that will notbe used for the long est time. Belady’s anomaly occurs when a pagereplacement algorithm evicts a page that will be needed in the immediatefuture. An optimal algorithm would not have selected such a page.9.11 Segmentation is similar to paging but uses variable-sized“pages.”Definetwo segment-replacement algorithms based on FIFO and LRU pagereplacement schemes. Remember that since segments are not the samesize, the segment that is chosen to be replaced may not be big enoughto leave enough consecutive locations for the needed segment. Consider strategies for systems where segments cannot be relocated, and thosefor systems where they can.Answer:a. FIFO. Find the first segment large enough to accommodate the incoming segment. If relocation is not possible and no one segmentis large enough, select a combination of segments whose memoriesare contiguous, which are “closest to the first of the list” andwhich can accommodate the new segment. If relocation is possible, rearrange the memory so that the firstNsegments large enough forthe incoming segment are contiguous in memory. Add any leftover space to the free-space list in both cases.Practice Exercises 33b. LRU. Select the segment that has not been used for the longestperiod of time and that is large enough, adding any leftover spaceto the free space list. 
If no one segment is large enough, selecta combination of the “oldest” segments that are contiguous inmemory (if relocation is not available) and that are large enough.If relocation is available, rearrange the oldest N segments to be contiguous in memory and replace those with the new segment.9.12 Consider a demand-paged computer system where the degree of multiprogramming is currently fixed at four. The system was recently measured to determine utilization of CPU and the paging disk. The results are one of the following alternatives. For each case, what is happening? Can the degree of multiprogramming be increased to increase the CPU utilization? Is the paging helping?a. CPU utilization 13 percent; disk utilization 97 percentb. CPU utilization 87 percent; disk utilization 3 percentc. CPU utilization 13 percent; disk utilization 3 percentAnswer:a. Thrashing is occurring.b. CPU utilization is sufficiently high to leave things alone, and increase degree of multiprogramming.c. Increase the degree of multiprogramming.9.13 We have an operating system for a machine that uses base and limit registers, but we have modified the ma chine to provide a page table.Can the page tables be set up to simulate base and limit registers? How can they be, or why can they not be?Answer:The page table can be set up to simulate base and limit registers provided that the memory is allocated in fixed-size segments. In this way, the base of a segment can be entered into the page table and the valid/invalid bit used to indicate that portion of the segment as resident in the memory. There will be some problem with internal fragmentation.9.27.Consider a demand-paging system with the following time-measured utilizations:CPU utilization 20%Paging disk 97.7%Other I/O devices 5%Which (if any) of the following will (probably) improve CPU utilization? Explain your answer.a. Install a faster CPU.b. Install a bigger paging disk.c. Increase the degree of multiprogramming.d. Decrease the degree of multiprogramming.e. Install more main memory.f. Install a faster hard disk or multiple controllers with multiple hard disks.g. Add prepaging to the page fetch algorithms.h. Increase the page size.Answer: The system obviously is spending most of its time paging, indicating over-allocationof memory. If the level of multiprogramming is reduced resident processeswould page fault less frequently and the CPU utilization would improve. Another way toimprove performance would be to get more physical memory or a faster paging drum.a. Get a faster CPU—No.b. Get a bigger paging drum—No.c. Increase the degree of multiprogramming—No.d. Decrease the degree of multiprogramming—Yes.e. Install more main memory—Likely to improve CPU utilization as more pages canremain resident and not require paging to or from the disks.f. Install a faster hard disk, or multiple controllers with multiple hard disks—Also animprovement, for as the disk bottleneck is removed by faster response and morethroughput to the disks, the CPU will get more data more quickly.g. Add prepaging to the page fetch algorithms—Again, the CPU will get more datafaster, so it will be more in use. This is only the case if the paging action is amenableto prefetching (i.e., some of the access is sequential).h. Increase the page size—Increasing the page size will result in fewer page faults ifdata is being accessed sequentially. If data access is more or less random, morepaging action could ensue because fewer pages can be kept in memory and moredata is transferred per page fault. 
So this change is as likely to decrease utilizationas it is to increase it.10.1、Is disk scheduling, other than FCFS scheduling, useful in asingle-userenvironment? Explain your answer.Answer: In a single-user environment, the I/O queue usually is empty. Requests generally arrive from a single process for one block or for a sequence of consecutive blocks. In these cases, FCFS is an economical method of disk scheduling. But LOOK is nearly as easy to program and will give much better performance when multiple processes are performing concurrent I/O, such as when aWeb browser retrieves data in the background while the operating system is paging and another application is active in the foreground.10.2.Explain why SSTF scheduling tends to favor middle cylindersover theinnermost and outermost cylinders.The center of the disk is the location having the smallest average distance to all other tracks.Thus the disk head tends to move away from the edges of the disk.Here is another way to think of it.The current location of the head divides the cylinders into two groups.If the head is not in the center of the disk and a new request arrives,the new request is more likely to be in the group that includes the center of the disk;thus,the head is more likely to move in that direction.10.11、Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is currently serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in FIFO order, is86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130Starting from the current head position, what is the total distance (in cylinders) that the disk arm moves to satisfy all the pending requests, for each of the following disk-scheduling algorithms?a. FCFSb. SSTFc. SCANd. LOOKe. C-SCANAnswer:a. The FCFS schedule is 143, 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. The total seek distance is 7081.b. The SSTF schedule is 143, 130, 86, 913, 948, 1022, 1470, 1509, 1750, 1774. The total seek distance is 1745.c. The SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 130, 86. The total seek distance is 9769.d. The LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 130, 86. The total seek distance is 3319.e. The C-SCAN schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 4999, 86, 130. The total seek distance is 9813.f. (Bonus.) The C-LOOK schedule is 143, 913, 948, 1022, 1470, 1509, 1750, 1774, 86, 130. The total seek distance is 3363.12CHAPTERFile-SystemImplementationPractice Exercises12.1 Consider a file currently consisting of 100 blocks. Assume that the filecontrol block (and the index block, in the case of indexed allocation)is already in memory. Calculate how many disk I/O operations are required for contiguous, linked, and indexed (single-level) allocation strategies, if, for one block, the following conditions hold. In the contiguous-allocation case, assume that there is no room to grow atthe beginning but there is room to grow at the end. Also assume thatthe block information to be added is stored in memory.a. The block is added at the beginning.b. The block is added in the middle.c. The block is added at the end.d. The block is removed from the beginning.e. The block is removed from the middle.f. The block is removed from the end.Answer:The results are:Contiguous Linked Indexeda. 201 1 1b. 101 52 1c. 1 3 1d. 198 1 0e. 98 52 0f. 
0 100 012.2 What problems could occur if a system allowed a file system to be mounted simultaneously at more than one location?Answer:4344 Chapter 12 File-System ImplementationThere would be multiple paths to the same file, which could confuse users or encourage mistakes (deleting a file with one path deletes thefile in all the other paths).12.3 Why must the bit map for file allocation be kept on mass storage, ratherthan in main memory?Answer:In case of system crash (memory failure) the free-space list would notbe lost as it would be if the bit map had been stored in main memory.12.4 Consider a system that supports the strategies of contiguous, linked, and indexed allocation. What criteria should be used in deciding which strategy is best utilized for a particular file?Answer:•Contiguous—if file is usually accessed sequentially, if file isrelatively small.•Linked—if file is large and usually accessed sequentially.• Indexed—if file is large and usually accessed randomly.12.5 One problem with contiguous allocation is that the user must preallocate enough space for each file. If the file grows to be larger than thespace allocated for it, special actions must be taken. One solution to this problem is to define a file structure consisting of an initial contiguous area (of a specified size). If this area is filled, the operating system automatically defines an overflow area that is linked to the initialc ontiguous area. If the overflow area is filled, another overflow areais allocated. Compare this implementation of a file with the standard contiguous and linked implementations.Answer:This method requires more overhead then the standard contiguousallocation. It requires less overheadthan the standard linked allocation. 12.6 How do caches help improve performance? Why do systems not use more or larger caches if they are so useful?Answer:Caches allow components of differing speeds to communicate moreefficie ntly by storing data from the slower device, temporarily, ina faster device (the cache). Caches are, almost by definition, more expensive than the device they are caching for, so increasing the number or size of caches would increase system cost.12.7 Why is it advantageous for the user for an operating system to dynamically allocate its internal tables? What are the penalties to the operating system for doing so?Answer:Dynamic tables allow more flexibility in system use growth — tablesare never exceeded, avoiding artificial use limits. Unfortunately, kernel structures and code are more complicated, so there is more potentialfor bugs. The use of one resource can take away more system resources (by growing to accommodate the requests) than with static tables.Practice Exercises 4512.8 Explain how the VFS layer allows an operating system to support multiple types of file systems easily.Answer:VFS introduces a layer of indirection in the file system implementation. In many ways, it is similar to object-oriented programming techniques. System calls can be made generically (independent of file system type). Each file system type provides its function calls and data structuresto the VFS layer. A system call is translated into the proper specific functions for the ta rget file system at the VFS layer. The calling program has no file-system-specific code, and the upper levels of the system call structures likewise are file system-independent. The translation at the VFS layer turns these generic calls into file-system-specific operations.。
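The fault counts quoted for exercise 9.8 can be checked with a short simulation. The sketch below is not the textbook's code; it implements plain FIFO, LRU, and OPT (optimal) replacement and should reproduce the table above for one through seven frames.

```python
def page_faults(refs, frames, policy):
    """Count page faults under FIFO, LRU, or OPT replacement."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":               # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:
            if policy == "OPT":               # evict the page used farthest in the future
                future = refs[i + 1:]
                victim = max(memory, key=lambda p: future.index(p) if p in future else len(future) + 1)
                memory.remove(victim)
            else:                             # FIFO and LRU both evict the head of the list
                memory.pop(0)
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
for n in range(1, 8):
    print(n, [page_faults(refs, n, p) for p in ("LRU", "FIFO", "OPT")])
```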
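Similarly, the seek distances in exercise 10.11 follow directly from the schedules listed there. The sketch below is one possible way to recompute the FCFS, SSTF, SCAN, and LOOK totals (7081, 1745, 9769, and 3319 cylinders); C-SCAN and C-LOOK are left out to keep it short.

```python
def seek_distance(schedule):
    return sum(abs(b - a) for a, b in zip(schedule, schedule[1:]))

head, queue = 143, [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]

fcfs = [head] + queue

sstf, pending = [head], queue[:]
while pending:                                   # always serve the closest pending cylinder
    nxt = min(pending, key=lambda c: abs(c - sstf[-1]))
    sstf.append(nxt)
    pending.remove(nxt)

up = sorted(c for c in queue if c >= head)
down = sorted((c for c in queue if c < head), reverse=True)
look = [head] + up + down                        # sweep upward, reverse at the last request
scan = [head] + up + [4999] + down               # sweep upward to the last cylinder first

for name, sched in [("FCFS", fcfs), ("SSTF", sstf), ("SCAN", scan), ("LOOK", look)]:
    print(name, seek_distance(sched))
```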
Operating Systems (End-of-Chapter Exercise Answers, Chapters 1-8)
1.1 The main characteristic of a stored-program computer is centralized, sequential, procedural control: (1) procedural: it imitates step-by-step manual operation; (2) centralized control: everything is managed by the CPU; (3) sequential: execution is driven by the program counter.

1.2 a. Characteristics of batch systems: early batch processing uses a monitor program, and jobs move on automatically one after another until the whole batch has been processed; the distinguishing feature of offline batch processing is that the host computer and the satellite computers operate in parallel.

b. Characteristics of time-sharing systems: (1) Simultaneity. The many online users sharing one computer can each work on their own programs at their own terminals at the same time. (2) Exclusiveness. A time-sharing operating system uses round-robin time slices to make one computer serve many terminal users at once, and each user feels as if the computer were exclusively theirs; through time-sharing, the operating system turns one physical computer into several virtual computers. (3) Interactivity. The user and the computer carry on an interactive dialogue: the user types a command at the terminal, the system returns information on the screen (or printer), and this question-and-answer exchange continues until the whole task is finished.

c. Why time-sharing systems respond faster: the job turnaround time of a batch operating system is long, whereas a time-sharing operating system generally uses round-robin time slicing, with one computer connected to many terminals and serving many terminal users at once; such a system guarantees every user a sufficiently fast response time and provides interactive dialogue.

1.3 The essential difference between a real-time information-processing system and a time-sharing system: a real-time operating system aims to respond to external requests within strict time limits and to offer high reliability and integrity. Its main characteristic is that resource allocation and scheduling consider the real-time requirements first and efficiency only second; in addition, a real-time operating system should have strong fault tolerance. A time-sharing system works as follows: one host is connected to a number of terminals, each used by one user. Users submit command requests interactively; the system accepts each user's command, serves the requests using round-robin time slices, and returns the results on the terminal interactively, after which the user issues the next command based on the previous result. A time-sharing operating system divides CPU time into short segments called time slices and, taking the time slice as the unit, serves the terminal users in turn; because each user gets a time slice in turn, no user is aware of the others. A time-sharing system is characterized by multiplexing, interactivity, (apparent) exclusiveness, and timeliness.
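The round-robin time-slice idea in these answers can be illustrated with a small scheduler sketch. The burst lengths and the 100 ms quantum below are made-up example values, not figures from the exercises.

```python
from collections import deque

def round_robin(burst_ms, quantum=100):
    """Rotate a fixed time slice among users; return each user's finish time in ms."""
    ready = deque(enumerate(burst_ms))            # (user id, remaining CPU demand)
    clock, finish = 0, {}
    while ready:
        uid, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((uid, remaining - run))  # unfinished: back to the tail of the queue
        else:
            finish[uid] = clock
    return finish

print(round_robin([250, 120, 400]))   # three terminal users sharing one CPU
```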
Computer Operating Systems (Zhang Yaoxue): End-of-Chapter Exercise Answers and Explanations
Chapter 1: Introduction

1. What are the basic functions of an operating system?
Answer: The role of the operating system is to manage and control all the hardware and software resources in the computer system, to organize the computer's workflow rationally, and to provide users with a good working environment and a friendly interface. Its basic functions include processor management, storage management, device management, information management (file-system management), and the user interface.

2. What are batch, time-sharing, and real-time systems? What are the characteristics of each?
Answer: Batch processing system: the operator sorts the jobs submitted by users, groups a batch of jobs into one execution sequence, and a specially written monitor program processes them automatically one after another. Its main characteristics are: users use the computer offline, jobs are processed in batches, and multiple programs run together (multiprogramming).
Time-sharing system: the processor's running time is divided into very short time slices, and the processor is allocated to the processes in turn, one time slice at a time. Its main characteristics are interactivity, simultaneous use by multiple users, and independence.
Real-time system: it responds within the time limits allowed by the controlled object. Its main characteristics are that real-time information must be analyzed and processed faster than it enters the system, that safety and reliability are required, and that resource utilization is low.

3. What is the difference between multiprogramming and multiprocessing?
Answer: With multiprogramming, jobs are scheduled automatically and share the system's resources, but multiple jobs are not truly executed at the same time; a multiprocessing system is equipped with several CPUs and can genuinely execute several programs simultaneously. To use multiprocessing effectively, multiprogramming techniques must be adopted, whereas multiprogramming does not, in principle, require the support of a multiprocessing system.

6. Which hardware components are relevant when designing a computer operating system?
Answer: the arithmetic/logic unit, the control unit, memory, input devices, and output devices.

Chapter 2: Job Management and the User Interface

2. What parts make up a job, and what is the function of each?
Answer: A job consists of three parts: the program, the data, and the job control statements. The program and the data carry out the business processing the user requires, while the job control statements express the user's control intentions.

3. What job input methods are there, and what are the characteristics of each?
Answer: There are five job input methods: online input, offline input, direct coupling, SPOOLING (Simultaneous Peripheral Operations On-Line), and network input. Their characteristics are as follows: (1) Online input: the user and the system enter the job through an interactive session.
Computer Operating Systems: Chapter 9 Exercise Answers
I. Fill in the blanks
1. The MS-DOS operating system consists of BOOT, IO.SYS, MSDOS.SYS, and COMMAND.COM.
2. An MS-DOS process consists of three parts: the program (code, data, and stack), the program segment prefix (PSP), and the environment block.
3. MS-DOS offers users two ways to control the running of jobs: batch processing and command processing.
4. In MS-DOS storage management, every 16 bytes starting from address 0 form one "paragraph", which is the unit of storage allocation.
5. MS-DOS places a 16-byte area in front of every memory partition that holds the partition's size and usage information. This area is called the memory control block (MCB) of that partition.
6. MS-DOS has four storage regions: the conventional memory area, the upper memory area, the high memory area, and the expanded memory area.
7. The "cluster" is the unit in which MS-DOS allocates disk storage space; the number of sectors it contains must be a power of 2.
8. When a directory table contains only "." and "..", the directory is empty.
9. In MS-DOS, a file is opened by file name and afterwards accessed through a handle.
10. In MS-DOS, character devices are treated as device files.

II. Multiple choice
1. Regarding DOS, statement B is correct.
A. Internal and external commands are both memory-resident  B. Internal commands are memory-resident, external commands are not  C. Neither internal nor external commands are memory-resident  D. Internal commands are not memory-resident, external commands are
2. In memory, the program of a DOS process is stored together with D.
A. always the program segment prefix and the environment block  B. nothing  C. always the process's environment block  D. always the program segment prefix
3. The batch file that MS-DOS executes automatically at start-up is C.
A. CONFIG.SYS  B. MSDOS.SYS  C. AUTOEXEC.BAT  D.
4. Of the memory-allocation algorithms listed below, D is not one used by MS-DOS.
A. best fit  B. first fit  C. last fit  D. worst fit
5. In MS-DOS, the storage region from 1024 KB to 1088 KB is called the D area.
A. upper memory  B. extended memory  C. expanded memory  D. high memory
6. MS-DOS storage management manages A.
A. conventional memory  B. conventional and upper memory  C. conventional and extended memory  D. conventional and expanded memory
7. Of the common MS-DOS file extensions given below, B does not denote an executable file.
Operating Systems (2nd Edition): End-of-Chapter Exercise Answers
…so only one level of indirect addressing is needed, after which the data can be read.
If the data from the beginning of the file up to and including byte 263168 must be read in, the read proceeds as follows: first read the first 10 blocks through the direct pointers; then read the one index block pointed to by the single-indirect pointer; finally read in all the data blocks corresponding to index entries 0 through 247. The total number of disk blocks read is 10 + 1 + 248 = 259.
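A quick sketch of that count is shown below. The 1 KB data-block size and the 10 direct pointers are assumptions inferred from the 10 + 1 + 248 arithmetic above, not values stated in this fragment.

```python
BLOCK = 1024    # assumed data-block size in bytes
DIRECT = 10     # assumed number of direct pointers

def blocks_to_read(last_byte):
    data_blocks = last_byte // BLOCK + 1          # file blocks 0 .. the block holding last_byte
    direct = min(data_blocks, DIRECT)
    indirect_data = max(data_blocks - DIRECT, 0)  # data blocks reached via the single-indirect block
    index_blocks = 1 if indirect_data else 0
    return direct + index_blocks + indirect_data

print(blocks_to_read(263168))   # 10 direct + 1 index block + 248 data blocks = 259
```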
3. A file system uses an indexed file structure. Each entry of the file index table occupies 3 bytes and holds a disk-block number, and a disk block is 512 bytes. With direct, single-indirect, double-indirect, and triple-indirect indexing, the maximum file size this file system can manage is…

(1) |100-8| + |18-8| + |27-18| + |129-27| + |110-129| + |186-110| + |78-186| + |147-78| + |41-147| + |10-41| + |64-10| + |12-64| = 728
① FCFS (first-come, first-served)

Job   Arrival   Run time   Start   Finish   Turnaround   Weighted turnaround
1     8:00      120 min    8:00    10:00    120 min      1
2     8:50      50 min     10:00   10:50    120 min      2.4
3     9:00      10 min     10:50   11:00    120 min      12
4     9:50      20 min     11:00   11:20    90 min       4.5

Average turnaround time T = (120 + 120 + 120 + 90) / 4 = 112.5 min; average weighted turnaround W = (1 + 2.4 + 12 + 4.5) / 4 = 4.975.
② SJF (shortest job first)

Job   Arrival   Run time   Start   Finish   Turnaround
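A short sketch can recompute both schedules from the job parameters above; non-preemptive SJF is assumed. It reproduces the FCFS turnaround figures and yields the corresponding SJF rows and averages.

```python
jobs = {1: ("8:00", 120), 2: ("8:50", 50), 3: ("9:00", 10), 4: ("9:50", 20)}  # arrival, run (min)

def minutes(hhmm):
    h, m = map(int, hhmm.split(":"))
    return h * 60 + m

def schedule(use_sjf):
    """Run the jobs one at a time; pick by arrival order (FCFS) or by run time (SJF)."""
    arrive = {j: minutes(a) for j, (a, _) in jobs.items()}
    run = {j: r for j, (_, r) in jobs.items()}
    clock, done, turnaround = min(arrive.values()), set(), {}
    while len(done) < len(jobs):
        ready = [j for j in jobs if j not in done and arrive[j] <= clock]
        if not ready:                          # nothing has arrived yet: jump to the next arrival
            clock = min(arrive[j] for j in jobs if j not in done)
            continue
        j = min(ready, key=(lambda x: run[x]) if use_sjf else (lambda x: arrive[x]))
        clock += run[j]
        turnaround[j] = clock - arrive[j]
        done.add(j)
    return turnaround

for name, use_sjf in [("FCFS", False), ("SJF", True)]:
    t = schedule(use_sjf)
    print(name, t, "average =", sum(t.values()) / len(t))
```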
页面长度为4KB,虚地址空间共有土)个页面
3.某计算机系统提供24位虚存空间，主存空间为2^18Byte（256KB），采用请求分页虚拟存储管理，页面尺寸为1KB。假定应用程序产生虚拟地址（八进制），而此页面分得的块号为100（八进制），说明将该虚地址变换为主存物理地址的过程。
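变换过程可按下式推算（原题中的虚地址数值此处以V代替，示例数值为假设）：页面尺寸1KB=2^10B，故页内位移W=V mod 1024（八进制下即V mod 2000₈），页号P=V div 1024；该页被装入块号为100₈（=64）的主存块，故物理地址=块号×1024+W=64×1024+W=200000₈+W。例如，假设V=4052₈，则P=2、W=52₈，物理地址=200052₈。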
操作系统课后习题1-9答案
练习1：1.1-1.10题解见书。1.11 在有一台输入设备和一台输出设备的计算机系统上，运行有两道程序。
两道程序投入运行情况如下：程序1先开始运行，其运行轨迹为：计算50ms、输出100ms、计算50ms、输出100ms，结束；程序2后开始运行，其运行轨迹为：计算50ms、输入100ms、计算100ms，结束。
1.忽略调度时间，指出两道程序运行时，CPU是否有空闲？在哪部分空闲？2.指出程序1和程序2有无等待CPU的情况？如果有，发生在哪部分？题解：由题画出CPU利用图（图略）。由图可知：1.CPU有空闲，在100ms~150ms时间段是空闲的。
2.程序1无等待时间，而程序2在一开始的0ms~50ms时间段会等待。
1.12 在计算机系统上运行三道程序，运行次序为程序1、程序2、程序3。
程序3的运行轨迹为：计算60ms、输入30ms、计算20ms。
忽略调度时间，画出三道程序运行的时间关系图；完成三道程序共花多少时间？与单道程序比较，节省了多少时间？解答：三道程序运行，完成三道程序共花170ms。
与单道程序（260ms）比较，节省了90ms。
（始终按照1-2-3的次序，即程序1→程序2→程序3→程序1→程序2→（在程序3运行前会停10ms等待输入完成）程序3。
（如果不是按照程序1、2、3的次序完成则会有多种情况。）
1.13 在计算机系统上有两台输入/输出设备，运行两道程序。
程序1的运行轨迹为：计算10ms、输入5ms、计算5ms、输出10ms、计算10ms。
程序2的运行轨迹为：输入10ms、计算10ms、输出5ms、计算5ms、输出10ms。
在顺序环境下，先执行程序1，再执行程序2，求总的CPU利用率为多少？题解：由题画出CPU利用图（图略）。由图可知，在总共80ms的时间里，CPU忙碌40ms、空闲40ms，即：CPU利用率=40ms/80ms*100%=50%。1.14 一个计算机系统有足够的内存空间存放3道程序，这些程序有一半的时间在空闲等待I/O操作。
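若按此类题的常见后续问法求CPU利用率（后半问以原题为准），可如下推算：设每道程序等待I/O的概率p=1/2，内存中同时有3道程序，则CPU空闲（三道程序同时等待I/O）的概率为(1/2)³=1/8，故CPU利用率≈1-(1/2)³=87.5%。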
操作系统课后习题答案
操作系统课后习题答案第一章o引论1.设计现代OS的主要目标是什么方便性,有效性,可扩充性和开放性.2.OS的作用可表现为哪几个方面a.OS作为用户与计算机硬件系统之间的接口;b.OS作为计算机系统资源的管理者;c.OS作为扩充机器.4.试说明推动多道批处理系统形成和发展的主要动力是什么不断提高计算机资源利用率和系统吞吐量的需要;5.何谓脱机I/O和联机I/Oa.脱机输入输出方式(Off-LineI/O)是为了解决人机矛盾及CPU和I/O设备之间速度不匹配而提出的.它减少了CPU的空闲等待时间,提高了I/O速度.具体内容是将用户程序和数据在一台外围机的控制下,预先从低速输入设备输入到磁带上,当CPU需要这些程序和数据时,在直接从磁带机高速输入到内存,从而大大加快了程序的输入过程,减少了CPU等待输入的时间,这就是脱机输入技术;当程序运行完毕或告一段落,CPU需要输出时,无需直接把计算结果送至低速输出设备,而是高速把结果输出到磁带上,然后在外围机的控制下,把磁带上的计算结果由相应的输出设备输出,这就是脱机输出技术.b.若这种输入输出操作在主机控制下进行则称之为联机输入输出方式.6.试说明推动分时系统形成和发展的主要动力是什么用户的需要.即对用户来说,更好的满足了人-机交互,共享主机以及便于用户上机的需求.7.实现分时系统的关键问题是什么应如何解决a.关键问题:及时接收,及时处理;b.对于及时接收,只需在系统中设置一多路卡,多路卡作用是使主机能同时接收用户从各个终端上输入的数据;---对于及时处理,应使所有的用户作业都直接进入内存,在不长的时间内,能使每个作业都运行一次.8为什么要引入实时操作系统更好地满足实时控制领域和实时信息处理领域的需要.12试从交互性,及时性和可靠性方面,将分时系统与实时系统进行比较.a.分时系统是一种通用系统,主要用于运行终端用户程序,因而它具有较强的交互能力;而实时系统虽然也有交互能力,但其交互能力不及前者.b.实时信息系统对实用性的要求与分时系统类似,都是以人所能接收的等待时间来确定;而实时控制系统的及时性则是以控制对象所要求的开始截止时间和完成截止时间来确定的.c.实时系统对系统的可靠性要求要比分时系统对系统的可靠性要求高.13OS具有哪几大特征它的最基本特征是什么a.并发(Concurrence),共享(Sharing),虚拟(Virtual),异步性(Aynchronim).b.其中最基本特征是并发和共享.14处理机管理具有哪些功能它们的主要任务是什么a.进程控制,进程同步,进程通信和调度.b.进程控制的主要任务是为作业创建进程,撤销已结束的进程,以及控制进程在运行过程中的状态转换.---进程同步的主要任务是对诸进程的运行进行调节.---进程通信的任务是实现在相互合作进程之间的信息交换.---调度分为作业调度和进程调度.作业调度的基本任务是从后备队列中按照一定的算法,选择出若干个作业,为它们分配必要的资源;而进程调度的任务是从进程的就绪队列中,按照一定的算法选出一新进程,把处理机分配给它,并为它设置运行现场,是进程投入运行.15内存管理有哪些主要功能它们的主要任务是什么a.主要功能:内存分配,内存保护,地址映射和内存扩充等.b.内存分配的主要任务是为每道程序分配内存空间,提高存储器利用率,以减少不可用的内存空间,允许正在运行的程序申请附加的内存空间,以适应程序和数据动态增长的需要.---内存保护的主要任务是确保每道用户程序都在自己的内存空间中运行,互不干扰.---地址映射的主要任务是将地址空间中的逻辑地址转换为内存空间中与之对应的物理地址.---内存扩充的主要任务是借助虚拟存储技术,从逻辑上去扩充内存容量.16设备管理有哪些主要功能其主要任务是什么a.主要功能:缓冲管理,设备分配和设备处理,以及虚拟设备等.b.主要任务:完成用户提出的I/O请求,为用户分配I/O设备;提高CPU和I/O设备的利用率;提高I/O速度;以及方便用户使用I/O设备.17文件管理有哪些主要功能其主要任务是什么a.主要功能:对文件存储空间的管理,目录管理,文件的读,写管理以及文件的共享和保护.b.主要任务:对用户文件和系统文件进行管理,以方便用户使用,并保证文件的安全性.18是什么原因使操作系统具有异步性特征a.程序执行结果是不确定的,即程序是不可再现的.b.每个程序在何时执行,多个程序间的执行顺序以及完成每道程序所需的时间都是不确定的,即不可预知性.第二章2.试画出下面条语句的前趋图:S1:a=5-某;S2:b=a某某;S3:c=4某某;S4:d=b+c;S5:e=d+3.S1->S2->S4->S5......../......S33.程序并发执行为什么会产生间断性因为程序在并发执行过程中存在相互制约性.4.程序并发执行为什么会失去封闭性和可再现性因为程序并发执行时,多个程序共享系统中的各种资源,资源状态需要多个程序来改变,即存在资源共享性使程序失去封闭性;而失去了封闭性导致程序失去可再现性.5.在操作系统中为什么要引入进程概念它会产生什么样的影响为了使程序在多道程序环境下能并发执行,并能对并发执行的程序加以控制和描述,而引入了进程概念.影响:使程序的并发执行得以实行.6.试从动态性,并发性和独立性上比较进程和程序a.动态性是进程最基本的特性,可表现为由创建而产生,由调度而执行,因得不到资源而暂停执行,以及由撤销而消亡,因而进程由一定的生命期;而程序只是一组有序指令的集合,是静态实体.b.并发性是进程的重要特征,同时也是OS的重要特征.引入进程的目的正是为了使其程序能和其它进程的程序并发执行,而程序是不能并发执行的.c.独立性是指进程实体是一个能独立运行的基本单位,同时也是系统中独立获得资源和独立调度的基本单位.而对于未建立任何进程的程序,都不能作为一个独立的单位参加运行.7.试说明PCB的作用为什么说PCB是进程存在的唯一标志a.PCB是进程实体的一部分,是操作系统中最重要的记录型数据结构.PCB中记录了操作系统所需的用于描述进程情况及控制进程运行所需的全部信息.因而它的作用是使一个在多道程序环境下不能独立运行的程序(含数据),成为一个能独立运行的基本单位,一个能和其它进程并发执行的进程.b.在进程的整个生命周期中,系统总是通过其PCB对进程进行控制,系统是根据进程的PCB而不是任何别的什么而感知到该进程的存在的,所以说,PCB是进程存在的唯一标志.8.试说明进程在三个基本状态之间转换的典型原因.a.处于就绪状态的进程,当进程调度程序为之分配了处理机后,该进程便由就绪状态变为执行状态.b.当前进程因发生某事件而无法执行,如访问已被占用的临界资源,就会使进程由执行状态转变为阻塞状态.c.当前进程因时间片用完而被暂停执行,该进程便由执行状态转变为就绪状态.9.为什么要引入挂起状态该状态具有哪些性质a.引入挂起状态处于5中需要:终端用户的需要,父进程的需要,操作系统的需要,对换的需要和负荷调节的需要.b.处于挂起状态的进程不能接收处理机调度.10在进行进程切换时,所要保存的处理机状态信息主要有哪些a.进程当前暂存信息;b.下一条指令地址信息;c.进程状态信息;d.过程和系统调用参数及调用地址信息.11试说明引起进程创建的主要事件.a.用户登陆;b.作业调度;c.提供服务;d.应用请求.12试说明引起进程撤消的主要事件.a.正常结束;b.异常结束;c.外界干预;13在创建一个进程时,需完成的主要工作是什么a.操作系统发现请求创建新进程事件后,调用进程创建原语Creat();b.申请空白PCB;c.为新进程分配资源;d.初始化进程控制块;e.将新进程插入就绪队列.14在撤消一个进程时,需完成的主要工作是什么a.OS调用进程终止原语;b.根据被终止进程的标志符,从PCB集合中检索出该进程的PCB,从中读出该进程的状态;c.若被终止进程正处于执行状态,应立即中止该进程的执行,并设置调度标志为真;d.若该进程还有子孙进程,还应将其所有子孙进程予以终止;e.将该进程所拥有的全部资源,或者归还给其父进程,或者归还给系统;f.将被终止进程(它的PCB)从所在队列(或链表)中移出,等待其它程序来搜集信息.15试说明引起进程阻塞或被唤醒的主要事件是什么a.请求系统服务;b.启动某种操作;c.新数据尚未到达;d.无新工作可做.17.为什么进程在进入临界区之前,应先执行"进入区"代码,在退出临界区后又执行"退出区"代码为了实现多个进程对临界资源的互斥访问,必须在临界区前面增加一段用于检查欲访问的临
界资源是否正被访问的代码，如果未被访问，该进程便可进入临界区对资源进行访问，并设置正被访问标志；如果正被访问，则本进程不能进入临界区。实现这一功能的代码称为"进入区"代码；在退出临界区后，必须执行"退出区"代码，用于恢复未被访问标志。
18.同步机构应遵循哪些基本准则？为什么？a.空闲让进；b.忙则等待；c.有限等待；d.让权等待。
20.你认为整型信号量机制和记录型信号量机制，是否完全遵循了同步机构的四条准则？a.在整型信号量机制中，未遵循"让权等待"的准则。b.记录型信号量机制完全遵循了同步机构的"空闲让进、忙则等待、有限等待、让权等待"四条准则。
23.在生产者-消费者问题中，如果缺少了signal(full)或signal(empty)，对执行结果会有何影响？生产者-消费者问题可描述如下：
var mutex, empty, full: semaphore := 1, n, 0;
    buffer: array[0, …, n-1] of item;
    in, out: integer := 0, 0;
begin
  parbegin
    producer: begin
      repeat
        produce an item in nextp;
        wait(empty); wait(mutex);
        buffer(in) := nextp; in := (in+1) mod n;
        signal(mutex);
        signal(full);        /* 题中所指可能缺少的signal(full) */
      until false;
    end
    consumer: begin
      repeat
        wait(full); wait(mutex);
        nextc := buffer(out); out := (out+1) mod n;
        signal(mutex);
        signal(empty);       /* 题中所指可能缺少的signal(empty) */
        consume the item in nextc;
      until false;
    end
  parend
end
可见，生产者可以不断地往缓冲池送消息，如果缓冲池满，就会覆盖原有数据，造成数据混乱；而消费者始终因wait(full)操作将消费进程直接送入进程链表进行等待，无法访问缓冲池，造成无限等待。
24.在生产者-消费者问题中，如果将两个wait操作即wait(full)和wait(mutex)互换位置，或者是将signal(mutex)与signal(full)互换位置，结果会如何？
var mutex, empty, full: semaphore := 1, n, 0;
    buffer: array[0, …, n-1] of item;
    in, out: integer := 0, 0;
begin
  parbegin
    producer: begin
      repeat
        produce an item in nextp;
        wait(empty); wait(mutex);
        buffer(in) := nextp; in := (in+1) mod n;
        signal(full); signal(mutex);   /* 两个signal互换位置 */
      until false;
    end
    consumer: begin
      repeat
        wait(mutex); wait(full);       /* 两个wait互换位置 */
        nextc := buffer(out); out := (out+1) mod n;
        signal(mutex); signal(empty);
        consume the item in nextc;
      until false;
    end
  parend
end
wait(full)和wait(mutex)互换位置后，因为mutex是全局变量，消费者执行完wait(mutex)后mutex变为0；倘若此时full也为0，则该消费者进程就会转入进程链表进行等待，而生产者进程会因mutex为0而等待，使full始终为0，这样就形成了死锁。而signal(mutex)与signal(full)互换位置后，从逻辑上来说应该是一样的。
25.我们为某临界区设置一把锁W，当W=1时，表示关锁；W=0时，表示锁已打开。试写出开锁原语和关锁原语，并利用它们去实现互斥。
开锁原语：unlock(W): W=0;
关锁原语：lock(W): while (W==1) do no_op; W=1;
利用开关锁原语实现互斥：
var W: semaphore := 0;
begin
  parbegin
    process: repeat
      lock(W);
      critical section;
      unlock(W);
      remainder section;
    until false;
  parend
end
26.试修改下面生产者-消费者问题解法中的错误：
producer: begin
  repeat
    produce an item in nextp;
    wait(mutex);
    wait(full);              /* 应为wait(empty)，而且还应放在wait(mutex)的前面 */
    buffer(in) := nextp;     /* 缓冲池数组游标应前移：in := (in+1) mod n; */
    signal(mutex);           /* 此处还应补上signal(full); */
  until false;
end
consumer: begin
  repeat
    wait(mutex);
    wait(empty);             /* 应为wait(full)，而且还应放在wait(mutex)的前面 */
    nextc := buffer(out);
    out := out + 1;          /* 考虑循环使用缓冲池，应改为：out := (out+1) mod n; */
    signal(mutex);           /* 此处还应补上signal(empty); */
    consume item in nextc;
  until false;
end
27.试利用记录型信号量写出一个不会出现死锁的哲学家进餐问题的算法。设初始值为1的信号量c[I]表示I号筷子被拿（I=1,2,3,4,…,2n），其中n为自然数。哲学家I的进程可描述为：
Begin
  if I mod 2 == 1 then { P(c[I]); P(c[(I-1) mod 5]); Eat; V(c[I]); V(c[(I-1) mod 5]); }
  else { P(c[(I-1) mod 5]); P(c[I]); Eat; V(c[I]); V(c[(I-1) mod 5]); }
End
28.在测量控制系统中的数据采集任务，把所采集的数据送一单缓冲区；计算任务从该单缓冲中取出数据进行计算。试写出利用信号量机制实现两者共享单缓冲的同步算法。
int mutex=1; int empty=n; int full=0; int in=0; int out=0;
{ cobegin send(); obtain(); coend }
send(){
  while(1){
    collect data in nextp;
    wait(empty); wait(mutex);
    buffer(in)=nextp; in=(in+1) mod n;
    signal(mutex); signal(full);
  }
} // send
obtain(){
  while(1){
    wait(full); wait(mutex);
    nextc=buffer(out); out=(out+1) mod n;
    signal(mutex); signal(empty);
    calculate the data in nextc;
  }
} // obtain
29.画图说明管程由哪几部分组成？为什么要引入条件变量？管程由三部分组成：局部于管程的共享变量说明；对该数据结构进行操作的一组过程；对局部于管程的数据设置初始值的语句。（图见P59）因为调用wait原语后，使进程等待的原因有多种，为了区别它们，引入了条件变量。
30.如何利用管程来解决生产者-消费者问题？（见P60）
31.什么是AND信号量？试利用AND信号量写出生产者-消费者问题的解法。为解决并行所带来的死锁问题，在wait操作中引入AND条件，其基本思想是将进程在整个运行过程中所需要的所有临界资源，一次性地全部分配给进程，用完后一次性释放。解决生产者-消费者问题可描述如下：
var mutex, empty, full: semaphore := 1, n, 0;
    buffer: array[0, …, n-1] of item;
    in, out: integer := 0, 0;
begin
  parbegin
    producer: begin
      repeat
        produce an item in nextp;
        wait(empty); wait(1,2,3,…,n);     // 1,2,…,n为执行生产者进程除empty外其余的条件
        wait(mutex);
        buffer(in) := nextp; in := (in+1) mod n;
        signal(mutex); signal(full); signal(1,2,3,…,n);
      until false;
    end
    consumer: begin
      repeat
        wait(full); wait(k1,k2,k3,…,kn);  // k1,k2,…,kn为执行消费者进程除full外其余的条件
        wait(mutex);
        nextc := buffer(out); out := (out+1) mod n;
        signal(mutex); signal(empty); signal(k1,k2,k3,…,kn);
        consume the item in nextc;
      until false;
    end
  parend
end
33.试比较进程间的低级通信工具与高级通信工具。用户用低级通信工具实现进程通信很不方便，因为其效率低，通信对用户不透明，所有的操作都必须由程序员来实现。而高级通信工具则可弥补这些缺陷，用户可直接利用操作系统所提供的一组通信命令，高效地传送大量的数据。
第三章
1.高级调度与低级调度的主要任务是什么？为什么要引入中级调度？a.作业调度又称宏观调度或高级调度，其主要任务是按一定的原则对外存上处于后备状态的作业进行选择，给选中的作业分配内存、输入输出设备等必要的资源，并建立相应的进程，以使该作业的进程获得竞争处理机的权利。b.进程调度又称微观调度或低级调度，其主要任务是按照某种策略和方法选取一个处于就绪状态的进程，将处理机分配给它。c.引入中级调度是为了提高内存利用率和系统吞吐量，使暂时不能运行的进程不再占用宝贵的内存资源，而将它们调至外存上去等待。
7.选择调度方式和调度算法时，应遵循的准则是什么？a.面向用户的准则有周转时间短、响应时间快、截止时间的保证，以及优先权准则。b.面向系统的准则有系统吞吐量高、处理机利用率好、各类资源的平衡利用。
11.在时间片轮转法中，应如何确定时间片的大小？a.系统对响应时间的要求；b.就绪队列中进程的数目；c.系统的处理能力。
操作系统教程课后习题参考答案
习题一 1.设计操作系统的主要目的是什么?设计操作系统的目的是:(1)从系统管理人员的观点来看,设计操作系统是为了合理地去组织计算机工作流程,管理和分配计算机系统硬件及软件资源,使之能为多个用户所共享。
因此,操作系统是计算机资源的管理者。
(2)从用户的观点来看,设计操作系统是为了给用户使用计算机提供一个良好的界面,以使用户无需了解许多有关硬件和系统软件的细节,就能方便灵活地使用计算机。
2.操作系统的作用可表现在哪几个方面?(1) 方便用户使用:操作系统通过提供用户与计算机之间的友好界面来方便用户使用。
(2) 扩展机器功能:操作系统通过扩充硬件功能和提供新的服务来扩展机器功能。
(3) 管理系统资源:操作系统有效地管理系统中的所有硬件和软件资源,使之得到充分利用。
(4) 提高系统效率:操作系统合理组织计算机的工作流程,以改进系统性能和提高系统效率。
(5)构筑开放环境:操作系统遵循国际标准来设计和构造一个开放环境。
其含义主要是指:遵循有关国际工业标准和开放系统标准,支持体系结构的可伸缩性和可扩展性;支持应用程序在不同平台上的可移植性和互操作性。
3.试叙述脱机批处理和联机批处理工作过程(1)联机批处理工作过程用户上机前,需向机房的操作员提交程序、数据和一个作业说明书,后者提供了用户标识、用户想使用的编译程序以及所需的系统资源等基本信息。
这些资料必须变成穿孔信息,(例如穿成卡片的形式),操作员把各用户提交的一批作业装到输入设备上(若输入设备是读卡机,则该批作业是一叠卡片),然后由监督程序控制送到磁带上。
之后,监督程序自动输入第一个作业的说明记录,若系统资源能满足其要求,则将该作业的程序、数据调入主存,并从磁带上调入所需要的编译程序。
编译程序将用户源程序翻译成目标代码,然后由连接装配程序把编译后的目标代码及所需的子程序装配成一个可执行的程序,接着启动执行。
现代操作系统课后习题答案
第二章进程管理第一部分教材习题(P81)3、为什么程序并发执行会产生间断性特征?(P36)4、程序并发执行,为何会失去封闭性和可再现性?(P37)【解】程序在并发执行时,是多个程序共享系统中的各种资源,因而这些资源的状态将由多个程序来改变,致使程序的运行已失去了封闭性。
同时由于失去了封闭性,也将导致其再失去可再现性。
程序在并发执行时，由于失去了封闭性，程序经过多次执行后，其计算结果已与并发程序的执行速度有关，从而使程序的执行失去了可再现性。
5、在操作系统中为什么要引入进程概念?(P37)它会产生什么样的影响?【解】在操作系统中引入进程的概念,是为了实现多个程序的并发执行。
传统的程序不能与其他程序并发执行,只有在为之创建进程后,才能与其他程序(进程)并发执行。
这是因为并发执行的程序（即进程）是“停停走走”地执行，只有在为它创建进程后，在它停下时，方能将其现场信息保存在它的PCB中，待下次被调度执行时，再从PCB中恢复CPU现场并继续执行，而传统的程序却无法满足上述要求。
建立进程所带来的好处是使多个程序能并发执行,这极大地提高了资源利用率和系统吞吐量。
但管理进程也需付出一定的代价,包括进程控制块及协调各运行机构所占用的内存空间开销,以及为进行进程间的切换、同步及通信等所付出的时间开销。
6、试从动态性、并发性和独立性上比较进程和程序?(P37)【解】(1)动态性:进程既然是进程实体的执行过程,因此,动态性是进程最基本的特性。
动态性还表现为:“它由创建而产生,由调度而执行,因得不到资源而暂停执行,以及由撤消而消亡”。
可见,进程有一定的生命期。
而程序只是一组有序指令的集合,并存放在某种介质上,本身并无运动的含义,因此,程序是个静态实体。
(2)并发性:所谓进程的并发,指的是多个进程实体,同存于内存中,能在一段时间内同时运行。
并发性是进程的重要特征,同时也成为OS的重要特征。
引入进程的目的也正是为了使其程序能和其它进程的程序并发执行,而程序是无法并发执行的。
操作系统第五版费祥林_课后习题答案解析参考
第一章操作系统概论1、有一台计算机，具有1MB内存，操作系统占用200KB，每个用户进程各占200KB。
如果用户进程等待I/O的时间为80%，若增加1MB内存，则CPU的利用率提高多少？答：设每个进程等待I/O的百分比为P，则n个进程同时等待I/O的概率是P^n，当n个进程同时等待I/O期间CPU是空闲的，故CPU的利用率为1-P^n。
由题意可知，除去操作系统，内存还能容纳4个用户进程，由于每个用户进程等待I/O的时间为80%，故：CPU利用率=1-(80%)^4≈0.59。若再增加1MB内存，系统中可同时运行9个用户进程，此时：CPU利用率=1-(80%)^9≈0.87。故增加1MB内存使CPU的利用率提高了47%：87%/59%≈147%，147%-100%=47%。2、一个计算机系统，有一台输入机与一台打印机，现有两道程序投入运行，且程序A先开始做，程序B后开始运行。程序A的运行轨迹为：计算50ms、打印100ms、再计算50ms、打印100ms，结束。
程序B的运行轨迹为：计算50ms、输入80ms、再计算100ms，结束。
试说明：(1)两道程序运行时，CPU有无空闲等待？若有，在哪段时间内等待？为什么会等待？(2)程序A、B有无等待CPU的情况？若有，指出发生等待的时刻。
答：(1)两道程序运行的时间关系如下（图略）：程序运行期间，CPU存在空闲等待，时间为100ms至150ms之间（见图中有色部分）。(2)程序A无等待现象，但程序B有等待。
程序B的等待时间段为180ms至200ms间（见图中有色部分）。3、设有三道程序，按A、B、C优先次序运行，其内部计算与I/O操作时间由图给出。试画出按多道运行的时间关系图（忽略调度执行时间）。
完成三道程序共花多少时间？比单道运行节省了多少时间？若处理器调度程序每次进行程序转换花时1ms，试画出各程序状态转换的时间关系图。
操作系统课后习题答案(4~6章)
操作系统课后习题答案(4~6章)Chapter 41、存储管理主要研究的内容是:内存存储分配;地址再定位;存储保护;存储扩充的⽅法。
2、什么是虚拟存储器?实现虚存的物质基础是什么?虚存实际上是⼀个地址空间,它有OS产⽣的⼀个⽐内存容量⼤的多的“逻辑存储器”。
其物质基础是:⼀定容量的主存;⼤容量的辅存(外存)和地址变化机构(容量受计算机的地址位数限定)。
有3类虚存:分页式、分段式和段页式。
引入虚存的必要性：逻辑上扩充内存容量，实现小内存运行大作业的目的；可能性：由其物质基础保证。
3、某页式管理系统,主存容量为64KB,分成16块,块号为0,1,2,3,4……,15。
设某作业有4页,其页号为0,1,2,3。
被分别装入主存的2,4,1,6块。
试问：(1)该作业的总长度是多少字节？(2)计算出该作业每一页在主存中的起始地址。
(3)若给出逻辑地址[0,100]、[1,50]、[2,0]、[3,60],请计算出相应的内存地址。
解：(1)每块的长度=64KB/16=4KB；因为块与页面大小相等，每页容量=4KB；故作业的总长度为：4KB*4=16KB。
(2)因为页号为0,1,2,3的页分别装入主存的第2,4,1,6块中，即页表（PMT）为：0→2、1→4、2→1、3→6。所以，该作业的：第0页在内存中的起始地址为4K*2=8K；第1页在内存中的起始地址为4K*4=16K；第2页在内存中的起始地址为4K*1=4K；第3页在内存中的起始地址为4K*6=24K；
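为便于核对上面的换算，下面给出一个按本题页表进行地址变换的简短C示例（示意性质；页面4KB、页表{2,4,1,6}均取自题目数据）：

#include <stdio.h>

#define PAGE_SIZE 4096                   /* 页面（块）大小：4KB */
static const int pmt[4] = {2, 4, 1, 6};  /* 页表：页号 -> 主存块号 */

/* 将逻辑地址[页号,页内位移]变换为物理地址；越界时返回-1表示地址越界 */
static long translate(int page, int offset) {
    if (page < 0 || page > 3 || offset < 0 || offset >= PAGE_SIZE)
        return -1;
    return (long)pmt[page] * PAGE_SIZE + offset;
}

int main(void) {
    printf("%ld\n", translate(0, 100)); /* 8292 */
    printf("%ld\n", translate(1, 50));  /* 16434 */
    printf("%ld\n", translate(2, 0));   /* 4096 */
    printf("%ld\n", translate(3, 60));  /* 24636 */
    return 0;
}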
试回答:(1)给定段号和段内地址,完成地址变换过程。
(2)计算[0,430]、[1,10]、[2,500]、[3,400]的内存地址。
操作系统部分课后习题答案
操作系统部分课后习题答案第一章1、设计现代OS的主要目标就是什么?便利性,有效性,可扩充性与开放性。
2、OS的作用可表现在哪几个方面?(1)OS作为用户与计算机硬件系统之间的接口。
(2)OS作为计算机系统资源的管理者。
(3)OS实现了对计算机资源的抽象。
4、试说明推动多道批处理系统形成与发展的主要动力是什么？主要动力来源于四个方面的社会需求与技术发展：(1)不断提高计算机资源的利用率；(2)方便用户；(3)器件的不断更新换代；(4)计算机体系结构的不断发展。
7、实现分时系统的关键问题是什么？应如何解决？关键问题是当用户在自己的终端上键入命令时，系统应能及时接收并及时处理该命令。
在用户能接受的时延内将结果返回给用户。
解决办法：针对及时接收问题，可以在系统中设置多路卡，使主机能同时接收用户从各个终端上输入的数据，为每个终端配置缓冲区，暂存用户键入的命令或数据。
针对及时处理问题，应使所有的用户作业都直接进入内存，并且为每个作业分配一个时间片，允许作业只在自己的时间片内运行。
这样在不长的时间内，能使每个作业都运行一次。
12、试从交互性、及时性以及可靠性方面，将分时系统与实时系统进行比较。
（1）及时性。
实时信息处理系统对实时性的要求与分时系统类似，都是以人所能接受的等待时间来确定，而实时控制系统的及时性，是以控制对象所要求的开始截止时间或完成截止时间来确定的，一般为秒级到毫秒级，甚至有的要低于100微秒。
(2)交互性。
实时信息处理系统具有交互性，但人与系统的交互仅限于访问系统中某些特定的专用服务程序，不像分时系统那样能向终端用户提供数据与资源共享等服务。
（3）可靠性。
分时系统也要求系统可靠，但相比之下，实时系统则要求系统具有高度的可靠性。
因为任何差错都可能带来巨大的经济损失，甚至是灾难性后果，所以在实时系统中，往往都采取了多级容错措施保障系统的安全性及数据的安全性。
13、OS有哪几大特征？其最基本的特征是什么？并发性、共享性、虚拟性与异步性四个基本特征。
操作系统课后题及答案
第一章1.1在多道程序和分时环境中,多个用户同时共享一个系统,这种情况导致多种安全问题。
a. 列出此类的问题b.在一个分时机器中,能否确保像在专用机器上一样的安全度?并解释之。
Answer：a.窃取或者复制某用户的程序或数据；未经合理的记账（授权）而使用系统资源（CPU、内存、磁盘空间、外围设备）。b.应该不行，因为人类设计的任何保护机制都会不可避免地被另外的人所破译，而且很自信地认为程序本身的实现是正确的是一件困难的事。
1.4在下面举出的三个功能中,哪个功能在下列两种环境下,(a)手持装置(b)实时系统需要操作系统的支持?(a)批处理程序(b)虚拟存储器(c)分时Answer:对于实时系统来说,操作系统需要以一种公平的方式支持虚拟存储器和分时系统。
对于手持系统,操作系统需要提供虚拟存储器,但是不需要提供分时系统。
批处理程序在两种环境中都是非必需的。
1.10 中断(interrupt)的目的是什么？陷阱(trap)与中断的区别是什么？陷阱可以被用户程序(user program)有意地产生吗？如果可以，那目的是什么？Answer：中断是一种在系统内由硬件产生的控制流变化。
中断处理程序被调用来处理引起中断的事件；随后控制返回到被中断时的上下文和指令。
陷阱是软件产生的中断。
中断可以被用来标志I/O的完成，从而消除了设备轮询(device polling)的需要。
陷阱可以被用来调用操作系统的程序或者捕捉到算术错误。
1.13给出缓存(caches)十分有用的两个理由。
它们解决了什么问题？它们引起了什么问题？如果缓存可以被做成与想要缓存的设备一样大（例如，缓存像磁盘那么大），为什么不把它做得那么大，其限制的原因是什么？Answer：当两个或者更多的部件需要交换数据，而各部件以不同的速度完成数据传输时，缓存是十分有用的。
缓存通过在个组成部件之间提供一个中间速度的缓冲区来解决转换问题。
如果速度较快的设备在缓存中发现它所要的数据,它就不需要再等待速度较慢的设备了。
第一章1.设计现代OS的主要目标是什么?方便性,有效性,可扩充性和开放性。
2.OS的作用可表现在哪几个方面?(1)OS作为用户与计算机硬件系统之间的接口。
(2)OS作为计算机系统资源的管理者。
(3)OS实现了对计算机资源的抽象。
4.试说明推动多道批处理系统形成和发展的主要动力是什么？主要动力来源于四个方面的社会需求与技术发展：(1)不断提高计算机资源的利用率；(2)方便用户；(3)器件的不断更新换代；(4)计算机体系结构的不断发展。
7.实现分时系统的关键问题是什么？应如何解决？关键问题是当用户在自己的终端上键入命令时，系统应能及时接收并及时处理该命令。
在用户能接受的时延内将结果返回给用户。
解决方法:针对及时接收问题,可以在系统中设置多路卡,使主机能同时接收用户从各个终端上输入的数据,为每个终端配置缓冲区,暂存用户键入的命令或数据。
针对及时处理问题,应使所有的用户作业都直接进入内存,并且为每个作业分配一个时间片,允许作业只在自己的时间片内运行。
这样在不长的时间内,能使每个作业都运行一次。
12.试从交互性、及时性以及可靠性方面,将分时系统与实时系统进行比较。
(1)及时性。
实时信息处理系统对实时性的要求与分时系统类似，都是以人所能接受的等待时间来确定，而实时控制系统的及时性，是以控制对象所要求的开始截止时间或完成截止时间来确定的，一般为秒级到毫秒级，甚至有的要低于100微秒。
(2)交互性。
实时信息处理系统具有交互性,但人与系统的交互仅限于访问系统中某些特定的专用服务程序,不像分时系统那样能向终端用户提供数据和资源共享等服务。
(3)可靠性。
分时系统也要求系统可靠,但相比之下,实时系统则要求系统具有高度的可靠性。
因为任何差错都可能带来巨大的经济损失,甚至是灾难性后果,所以在实时系统中,往往都采取了多级容错措施保障系统的安全性及数据的安全性。
13.OS有哪几大特征?其最基本的特征是什么?并发性、共享性、虚拟性和异步性四个基本特征。
最基本的特征是并发性。
14.处理机管理有哪些主要功能？它们的主要任务是什么？处理机管理的主要功能是：进程管理、进程同步、进程通信和处理机调度。(1)进程管理：为作业创建进程，撤销已结束进程，控制进程在运行过程中的状态转换；(2)进程同步：为多个进程（含线程）的运行进行协调；(3)进程通信：用来实现在相互合作的进程之间的信息交换；(4)处理机调度：①作业调度：从后备队列中按照一定的算法，选出若干个作业，为它们分配运行所需的资源，首先是分配内存；②进程调度：从进程的就绪队列中，按照一定算法选出一个进程，把处理机分配给它，并设置运行现场，使进程投入执行。
15.内存管理有哪些主要功能?内存管理的主要功能有:内存分配、内存保护、地址映射和内存扩充。
内存分配:为每道程序分配内存。
内存保护:确保每道用户程序都只在自己的内存空间运行,彼此互不干扰。
地址映射:将地址空间的逻辑地址转换为内存空间与对应的物理地址。
内存扩充:用于实现请求调用功能、置换功能等。
16.设备管理有哪些主要功能?其主要任务是什么?主要功能有: 缓冲管理、设备分配和设备处理以及虚拟设备等。
主要任务: 完成用户提出的I/O请求、为用户分配I/O设备、提高CPU和I/O设备的利用率、提高I/O速度以及方便用户使用I/O设备。
17.文件管理有哪些主要功能?其主要任务是什么?文件管理主要功能:文件存储空间的管理、目录管理、文件的读/写管理和保护。
文件管理的主要任务:管理用户文件和系统文件、方便用户使用、保证文件安全性。
18.操作系统的异步性体现在三个方面:一是进程的异步性,进程以人们不可预知的速度向前推进。
二是程序的不可再现性,即程序执行的结果有时是不确定的。
三是程序执行时间的不可预知性,即每个程序何时执行,执行顺序以及完成时间是不确定的。
23.何谓微内核技术?在微内核中通常提供了哪些功能把操作系统中更多的成分和功能放到更高的层次,即用户模式中去运行,而留下一个尽量小的内核,用它来完成操作系统最基本的核心功能,称这种技术为微内核技术。
在微内核中通常提供了进程、线程管理、低级存储器管理、中断和陷入处理等功能。
第二章5.在操作系统中为什么要引入进程概念?它会产生什么样的影响?为了使程序在多道程序环境下能并发执行,并对并发执行的程序加以控制和描述,在操作系统中引入了进程概念。
影响: 使程序的并发执行得以实行。
6.试从动态性、并发性和独立性上比较进程和程序?(1)动态性是进程最基本的特性,表现为由创建而产生、由调度而执行,因得不到资源而暂停执行,由撤销而消亡。
进程有一定的生命期,而程序只是一组有序的指令集合,静态实体。
(2)并发性是进程的重要特征,同时也是OS的重要特征。
引入进程的目的正是为了使其程序能和其它进程的程序并发执行,而程序是不能并发执行的。
(3)独立性是指进程实体是一个能独立运行的基本单位,也是系统中独立获得资源和独立调度的基本单位。
对于未建立任何进程的程序,不能作为独立单位参加运行。
7.试说明PCB 的作用,为什么说PCB 是进程存在的惟一标志?PCB是进程实体的一部分,是操作系统中最重要的记录型数据结构。
作用是使一个在多道程序环境下不能独立运行的程序,成为一个能独立运行的基本单位,成为能与其它进程并发执行的进程。
OS是根据PCB对并发执行的进程进行控制和管理的。
8.试说明进程在三个基本状态之间转换的典型原因。
①就绪→执行：进程被调度程序选中，获得CPU资源；②执行→就绪：时间片用完而被暂停执行；③执行→阻塞：提出I/O请求（或等待某事件）而无法继续执行；④阻塞→就绪：I/O完成（或所等待的事件已发生）。13.在创建一个进程时，需完成的主要工作是什么？
14.(1)根据被终止进程标识符,从PCB集中检索出进程PCB读出该进程状态。
(2)若被终止进程处于执行状态,立即终止该进程的执行,置调度标志真指示该进程被终止后重新调度。
(3)若该进程还有子进程,应将所有子孙进程终止,以防它们成为不可控进程。
(4)将被终止进程拥有的全部资源,归还给父进程,或归还给系统。
(5)将被终止进程PCB 从所在队列或列表中移出,等待其它程序搜集信息。
15.试说明引起进程阻塞或被唤醒的主要事件是什么？a.请求系统服务；b.启动某种操作；c.新数据尚未到达；d.无新工作可做。16.进程在运行时存在哪两种形式的制约？并举例说明之。
(1)间接相互制约关系。
举例:有两进程A和B,如果A 提出打印请求,系统已把唯一的一台打印机分配给了进程B,则进程A只能阻塞,一旦B释放打印机,A才由阻塞改为就绪。
(2)直接相互制约关系。
举例:有输入进程A通过单缓冲向进程B提供数据。
当缓冲空时,计算进程因不能获得所需数据而阻塞,当进程A把数据输入缓冲区后,便唤醒进程B,反之,当缓冲区已满时,进程A 因没有缓冲区放数据而阻塞,进程B将缓冲区数据取走后便唤醒A。
17.为什么进程在进入临界区之前应先执行“进入区”代码，而在退出临界区后又要执行“退出区”代码？
如果未被访问,该进程便可进入临界区对资源进行访问,并设置正被访问标志;如果正被访问,则本进程不能进入临界区,实现这一功能的代码为"进入区"代码,在退出临界区后,必须执行"退出区"代码,用于恢复未被访问标志,使其它进程能再访问此临界资源。
18.同步机构应遵循的基本准则是：空闲让进、忙则等待、有限等待、让权等待。原因：为了实现诸进程互斥地进入自己的临界区。
23.在生产者消费者问题中,如果缺少了signal(full)或signal(empty),对执如果缺少signal(full),那么表明从第一个生产者进程开始就没有改变信号量full 值,即使缓冲池产品已满,但full值还是0,这样消费者进程执行wait(full)时认为缓冲池是空而取不到产品,消费者进程一直处于等待状态。
如果缺少signal(empty),在生产者进程向n个缓冲区投满产品后消费者进程才开始从中取产品,这时empty=0full=n,那么每当消费者进程取走一个产品empty 值并不改变,直到缓冲池取空了,empty值也是0,即使目前缓冲池有n个空缓冲区,生产者进程要想再往缓冲池中投放产品也会因为申请不到空缓冲区被阻塞。
24.在生产消费者问题中,如果将两个wait操作即wait(full)和wait(mutex)互换位置,或者将signal(mutex)与signal(full)互换位置,结果如何?将wait(full)和wait(mutex)互换位置后,可能引起死锁。
考虑系统中缓冲区全满时,若一生产者进程先执行了wait(mutex)操作并获得成功,则当再执行wait(empty)操作时,它将因失败而进入阻塞状态,它期待消费者进程执行signal(empty)来唤醒自己,在此之前,它不可能执行signal(mutex)操作,从而使试图通过执行wait(mutex)操作而进入自己的临界区的其他生产者和所有消费者进程全部进入阻塞状态,这样容易引起系统死锁。
若signal(mutex)和signal(full)互换位置后只是影响进程对临界资源的释放次序,而不会引起系统死锁,因此可以互换位置。
26.:producer:beginrepeat...producer an item in nextp;wait(mutex);wait(full); /* 应为wait(empty),而且还应该在wait(mutex)的前面 */ buffer(in):=nextp;/* 缓冲池数组游标应前移: in:=(in+1) mod n; */signal(mutex);/* signal(full); */until false;endconsumer:beginrepeatwait(mutex);wait(empty); /* 应为wait(full),而且还应该在wait(mutex)的前面 */ nextc:=buffer(out);out:=out+1; /* : out:=(out+1) mod n; */signal(mutex);/* signal(empty); */consumer item in nextc;until false; end27.试利用记录型信号量写出一个不会出现死锁的哲学家进餐问题的算法. Var chopstick:array[0,…,4] of semaphore;所有信号量均被初始化为1iRepeatWait(chopstick[i]);Wait(. chopstick[(i+1) mod 5]);...Ea.t ;...Signal(chopstick[i]);Signal(chopstick[(i+1) mod 5])Ea.t ;...Think;Until false;28.在测量控制系统中的数据采集任务时,把所采集的数据送往一单缓冲区;计算任务从该单缓冲区中取出数据进行计算。