Modern Operating Systems, 3e, part 01


Modern Operating Systems: PPT study slides

2) Device allocation
The basic task of device allocation is to assign, in response to a user's I/O request, the devices it needs, including any corresponding controllers and channels that may be required.
Data structures needed: the system device table, device control tables, controller control tables, channel control tables, and so on.
3) Device handling
Device-handling programs are also known as device drivers. Their basic task is usually to implement communication between the CPU and the device controllers: the CPU issues I/O commands to a device controller asking it to carry out a specified I/O operation, and the driver receives the interrupt requests sent back by the device controller and responds to and handles them promptly.
Before a job is submitted, a job-control language is used to write a job description (or job control cards), which is handed to the system together with the program and its data.
With the introduction of multiprogramming, batch systems have the following characteristics:
(1) Multiprogramming
(2) No fixed ordering
(3) Scheduling: between submission and completion a job passes through two levels of scheduling, namely job scheduling and process scheduling. Job scheduling selects one or more jobs from the backing job queue, according to some job-scheduling algorithm, and loads them into memory. Process scheduling …
Distributed operating systems:
● Distributed processing, distributed control
● Multiple tasks execute in parallel on multiple processing units
● Operation is transparent, and so is physical location
● Each site's resources are available for sharing across the whole system
● Strong fault tolerance, high reliability
1.3 Characteristics and Functions of Operating Systems
1.3.1 Characteristics of operating systems
1. Concurrence
The difference between concurrency and parallelism; programs versus processes
2. Sharing
Memory extension
● Demand loading. Allows a process to be started with only part of its program and data in memory. During execution, when the process finds that program or data it needs has not yet been loaded, it issues a request to the OS, which brings the needed part into memory so that execution can continue.
● Swapping. If there is no longer enough room in memory for the part that must be brought in, the system moves temporarily unused programs and data out to disk to free memory space, and then loads the needed part so the process can keep running.
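The two mechanisms can be illustrated with a toy simulation (a sketch only; page numbers and the FIFO replacement policy are illustrative, and real systems use smarter policies):

```python
# Toy illustration of demand loading and swapping: pages are brought into
# memory only when first touched, and when all frames are full the oldest
# page is "swapped out" to make room (FIFO replacement, for simplicity).
from collections import deque

def run(refs, frames):
    memory, faults = deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1                  # page fault: the OS loads the page
            if len(memory) == frames:
                memory.popleft()         # swap the oldest page out to disk
            memory.append(page)
    return faults

print(run([0, 1, 2, 0, 3, 0], frames=3))   # 5 faults
```

The process starts running with nothing loaded; every first touch of a page triggers a request to the OS, and eviction only happens once the frames are exhausted.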

Modern Operating Systems (Chinese, 3rd edition): selected exercise solutions

6. Several design goals arise when building an operating system, for example resource utilization, timeliness, robustness, and so on. Give two such design goals that may contradict one another.
Answer: Consider fairness and real time. Fairness requires that each process be allocated resources in a fair way, with no process getting more than its fair share. Real time, on the other hand, requires that resources be allocated on the basis of letting processes finish within specified deadlines. A real-time process may therefore get a disproportionate share of the resources.
11. A proofreader has noticed a repeated spelling error in the manuscript of an operating systems textbook that is about to go to press. The book has roughly 700 pages, each with 50 lines of 80 characters. If the text were scanned electronically, how long would it take for the master copy to be read in from each of the storage levels of Fig. 1-9? For the internal storage methods, take the given access time to be per character; for disk devices, assume the access time is per 1024-character block; and for tape, assume that after the given start-up time the access time is the same as for the disk.
cztqwan 2017-06-19
Answer: The manuscript contains 80 × 50 × 700 = 2,800,000 characters. This of course cannot fit into any current CPU's registers, but if it could, reading it would take only 2.8 ms from the registers, 5.6 ms from the cache, and 28 ms from main memory. The whole book comprises about 2700 1024-byte blocks, so scanning from disk takes roughly 27 seconds, and from tape 2 minutes 7 seconds. Of course, these times are only for reading the data; processing and rewriting it would take longer.
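These numbers can be reproduced with a short script; the per-level access times used below (1 ns registers, 2 ns cache, 10 ns RAM, 10 ms per disk block, 100 s tape start-up) are the Fig. 1-9 values the calculation assumes:

```python
# Scan times for a 700-page, 50-line, 80-character manuscript.
chars = 80 * 50 * 700                 # 2,800,000 characters
blocks = -(-chars // 1024)            # ceiling division: ~2735 blocks

reg_s   = chars * 1e-9                # 1 ns per character
cache_s = chars * 2e-9                # 2 ns per character
ram_s   = chars * 10e-9               # 10 ns per character
disk_s  = blocks * 10e-3              # 10 ms per 1024-byte block
tape_s  = 100 + disk_s                # 100 s start-up, then disk speed

print(chars, reg_s, cache_s, ram_s, round(disk_s), round(tape_s))
```

The disk and tape figures round to about 27 s and 127 s (2 min 7 s), matching the answer above.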
Modern Operating Systems (3rd edition): Exercise Solutions
(Parts of this content come from the Internet; please credit the source when reposting.)
Contents

Chapter 1: Introduction
Chapter 2: Processes and Threads
Chapter 3: Memory Management
Chapter 4: File Systems
Chapter 5: Input/Output
Chapter 6: Deadlocks
Chapter 7: Multimedia Operating Systems
Chapter 8: Multiple Processor Systems
Chapter 9: Security
Chapter 10: Case Study 1: Linux
Chapter 11: Case Study 2: Windows Vista
Chapter 12: Case Study 3: The Symbian Operating System
Chapter 13: Operating System Design

Modern Operating Systems (3rd edition): Answers


MODERN OPERATING SYSTEMS, THIRD EDITION: PROBLEM SOLUTIONS
Andrew S. Tanenbaum, Vrije Universiteit, Amsterdam, The Netherlands
Prentice Hall, Upper Saddle River, NJ 07458. Copyright Pearson Education, Inc. 2008

SOLUTIONS TO CHAPTER 1 PROBLEMS

1. Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It is commonly used to keep the CPU busy while one or more processes are doing I/O.

2. Input spooling is the technique of reading in jobs, for example, from cards, onto the disk, so that when the currently executing processes are finished, there will be work waiting for the CPU. Output spooling consists of first copying printable files to disk before printing them, rather than printing directly as the output is generated. Input spooling on a personal computer is not very likely, but output spooling is.

3. The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the CPU will be 100% busy. This of course assumes the major delay is the wait while data are copied. A CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).

4. It is still alive. For example, Intel makes Pentium I, II, and III, and 4 CPUs with a variety of different properties including speed and power consumption. All of these machines are architecturally compatible. They differ only in price and performance, which is the essence of the family idea.

5. A 25 × 80 character monochrome text screen requires a 2000-byte buffer. The 1024 × 768 pixel 24-bit color bitmap requires 2,359,296 bytes. In 1980 these two options would have cost $10 and $11,520, respectively. For current prices, check on how much RAM currently costs, probably less than $1/MB.

6. Consider fairness and real time. Fairness requires that each process be allocated its resources in a fair way, with no process getting more
than its fair share. On the other hand, real time requires that resources be allocated based on the times when different processes must complete their execution. A real-time process may get a disproportionate share of the resources.

7. Choices (a), (c), and (d) should be restricted to kernel mode.

8. It may take 20, 25, or 30 msec to complete the execution of these programs depending on how the operating system schedules them. If P0 and P1 are scheduled on the same CPU and P2 is scheduled on the other CPU, it will take 20 msec. If P0 and P2 are scheduled on the same CPU and P1 is scheduled on the other CPU, it will take 25 msec. If P1 and P2 are scheduled on the same CPU and P0 is scheduled on the other CPU, it will take 30 msec. If all three are on the same CPU, it will take 35 msec.

9. Every nanosecond one instruction emerges from the pipeline. This means the machine is executing 1 billion instructions per second. It does not matter at all how many stages the pipeline has. A 10-stage pipeline with 1 nsec per stage would also execute 1 billion instructions per second. All that matters is how often a finished instruction pops out the end of the pipeline.

10. Average access time
= 0.95 × 2 nsec (word is in the cache)
+ 0.05 × 0.99 × 10 nsec (word is in RAM, but not in the cache)
+ 0.05 × 0.01 × 10,000,000 nsec (word on disk only)
= 5002.395 nsec
= 5.002395 μsec

11. The manuscript contains 80 × 50 × 700 = 2.8 million characters. This is, of course, impossible to fit into the registers of any currently available CPU and is too big for a 1-MB cache, but if such hardware were available, the manuscript could be scanned in 2.8 msec from the registers or 5.8 msec from the cache. There are approximately 2700 1024-byte blocks of data, so scanning from the disk would require about 27 seconds, and from tape 2 minutes 7 seconds. Of course, these times are just to read the data. Processing and rewriting the data would increase the time.

12. Maybe. If the caller gets control back and immediately overwrites the data, when the write finally occurs, the wrong data will be written. However, if
the driver first copies the data to a private buffer before returning, then the caller can be allowed to continue immediately. Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused, but this is tricky and error prone.

13. A trap instruction switches the execution mode of a CPU from the user mode to the kernel mode. This instruction allows a user program to invoke functions in the operating system kernel.

14. A trap is caused by the program and is synchronous with it. If the program is run again and again, the trap will always occur at exactly the same position in the instruction stream. An interrupt is caused by an external event and its timing is not reproducible.

15. The process table is needed to store the state of a process that is currently suspended, either ready or blocked. It is not needed in a single-process system because the single process is never suspended.

16. Mounting a file system makes any files already in the mount-point directory inaccessible, so mount points are normally empty. However, a system administrator might want to copy some of the most important files normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being repaired.

17. A system call allows a user process to access and execute operating system functions inside the kernel. User programs use system calls to invoke operating system services.

18. Fork can fail if there are no free slots left in the process table (and possibly if there is no memory or swap space left). Exec can fail if the file name given does not exist or is not a valid executable file. Unlink can fail if the file to be unlinked does not exist or the calling process does not have the authority to unlink it.

19. If the call fails, for example because fd is incorrect, it can return −1. It can also fail because the disk is full and it is not possible to write the number of bytes requested. On a
correct termination, it always returns nbytes.

20. It contains the bytes: 1, 5, 9, 2.

21. Time to retrieve the file
= 1 × 50 ms (time to move the arm over track 50)
+ 5 ms (time for the first sector to rotate under the head)
+ 10/100 × 1000 ms (read 10 MB)
= 155 ms

22. Block special files consist of numbered blocks, each of which can be read or written independently of all the other ones. It is possible to seek to any block and start reading or writing. This is not possible with character special files.

23. System calls do not really have names, other than in a documentation sense. When the library procedure read traps to the kernel, it puts the number of the system call in a register or on the stack. This number is used to index into a table. There is really no name used anywhere. On the other hand, the name of the library procedure is very important, since that is what appears in the program.

24. Yes it can, especially if the kernel is a message-passing system.

25. As far as program logic is concerned, it does not matter whether a call to a library procedure results in a system call. But if performance is an issue, if a task can be accomplished without a system call the program will run faster. Every system call involves overhead time in switching from the user context to the kernel context. Furthermore, on a multiuser system the operating system may schedule another process to run when a system call completes, further slowing the progress in real time of a calling process.

26. Several UNIX calls have no counterpart in the Win32 API:
Link: a Win32 program cannot refer to a file by an alternative name or see it in more than one directory. Also, attempting to create a link is a convenient way to test for and create a lock on a file.
Mount and umount: a Windows program cannot make assumptions about standard path names because on systems with multiple disk drives the drive-name part of the path may be different.
Chmod: Windows uses access control lists.
Kill: Windows programmers cannot kill a misbehaving
program that is not cooperating.

27. Every system architecture has its own set of instructions that it can execute. Thus a Pentium cannot execute SPARC programs and a SPARC cannot execute Pentium programs. Also, different architectures differ in the bus architecture used (such as VME, ISA, PCI, MCA, SBus, ...) as well as the word size of the CPU (usually 32 or 64 bit). Because of these differences in hardware, it is not feasible to build an operating system that is completely portable. A highly portable operating system will consist of two high-level layers: a machine-dependent layer and a machine-independent layer. The machine-dependent layer addresses the specifics of the hardware, and must be implemented separately for every architecture. This layer provides a uniform interface on which the machine-independent layer is built. The machine-independent layer has to be implemented only once. To be highly portable, the size of the machine-dependent layer must be kept as small as possible.

28. Separation of policy and mechanism allows OS designers to implement a small number of basic primitives in the kernel. These primitives are simplified, because they are not dependent on any specific policy. They can then be used to implement more complex mechanisms and policies at the user level.

29. The conversions are straightforward:
(a) A microyear is 10^−6 × 365 × 24 × 3600 = 31.536 sec.
(b) 1000 meters or 1 km.
(c) There are 2^40 bytes, which is 1,099,511,627,776 bytes.
(d) It is 6 × 10^24 kg.

SOLUTIONS TO CHAPTER 2 PROBLEMS

1. The transition from blocked to running is conceivable. Suppose that a process is blocked on I/O and the I/O finishes. If the CPU is otherwise idle, the process could go directly from blocked to running. The other missing transition, from ready to blocked, is impossible. A ready process cannot do I/O or anything else that might block it. Only a running process can block.

2. You could have a register containing a pointer to the current process table entry. When I/O completed, the CPU would store the
current machine state in the current process table entry. Then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process table entry (the service procedure). This process would then be started up.

3. Generally, high-level languages do not allow the kind of access to CPU hardware that is required. For instance, an interrupt handler may be required to enable and disable the interrupt servicing a particular device, or to manipulate data within a process's stack area. Also, interrupt service routines must execute as rapidly as possible.

4. There are several reasons for using a separate stack for the kernel. Two of them are as follows. First, you do not want the operating system to crash because a poorly written user program does not allow for enough stack space. Second, if the kernel leaves stack data in a user program's memory space upon return from a system call, a malicious user might be able to use this data to find out information about other processes.

5. If each job has 50% I/O wait, then it will take 20 minutes to complete in the absence of competition. If run sequentially, the second one will finish 40 minutes after the first one starts. With two jobs, the approximate CPU utilization is 1 − 0.5². Thus each one gets 0.375 CPU minute per minute of real time. To accumulate 10 minutes of CPU time, a job must run for 10/0.375 minutes, or about 26.67 minutes. Thus running sequentially the jobs finish after 40 minutes, but running in parallel they finish after 26.67 minutes.

6. It would be difficult, if not impossible, to keep the file system consistent. Suppose that a client process sends a request to server process 1 to update a file. This process updates the cache entry in its memory. Shortly thereafter, another client process sends a request to server 2 to read that file. Unfortunately, if the file is also cached there, server 2, in its innocence, will return obsolete data. If the first process writes the file through to the disk after caching it, and server 2 checks the disk on every read to see
if its cached copy is up-to-date, the system can be made to work, but it is precisely all these disk accesses that the caching system is trying to avoid.

7. No. If a single-threaded process is blocked on the keyboard, it cannot fork.

8. A worker thread will block when it has to read a Web page from the disk. If user-level threads are being used, this action will block the entire process, destroying the value of multithreading. Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.

9. Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It just adds unnecessary complexity. As an example, consider a telephone directory assistance number (like 555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, the entire database takes 64 megabytes and can easily be kept in the server's memory to provide fast lookup.

10. When a thread is stopped, it has values in the registers. They must be saved, just as when the process is stopped the registers must be saved. Multiprogramming threads is no different than multiprogramming processes, so each thread needs its own register save area.

11. Threads in a process cooperate. They are not hostile to one another. If yielding is needed for the good of the application, then a thread will yield. After all, it is usually the same programmer who writes the code for all of them.
12. User-level threads cannot be preempted by the clock unless the whole process's quantum has been used up. Kernel-level threads can be preempted individually. In the latter case, if a thread runs too long, the clock will interrupt the current process and thus the current thread. The kernel is free to pick a different thread from the same process to run next if it so desires.

13. In the single-threaded case, the cache hits take 15 msec and cache misses take 90 msec. The weighted average is 2/3 × 15 + 1/3 × 90. Thus the mean request takes 40 msec and the server can do 25 per second. For a multithreaded server, all the waiting for the disk is overlapped, so every request takes 15 msec, and the server can handle 66 2/3 requests per second.

14. The biggest advantage is the efficiency. No traps to the kernel are needed to switch threads. The biggest disadvantage is that if one thread blocks, the entire process blocks.

15. Yes, it can be done. After each call to pthread_create, the main program could do a pthread_join to wait until the thread just created has exited before creating the next thread.

16. The pointers are really necessary because the size of the global variable is unknown. It could be anything from a character to an array of floating-point numbers. If the value were stored, one would have to give the size to create_global, which is all right, but what type should the second parameter of set_global be, and what type should the value of read_global be?

17. It could happen that the runtime system is precisely at the point of blocking or unblocking a thread, and is busy manipulating the scheduling queues. This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching, since they might be in an inconsistent state. One solution is to set a flag when the runtime system is entered. The clock handler would see this and set its own flag, then return. When the runtime system finished, it would check the clock flag, see that a clock interrupt
occurred, and now run the clock handler.

18. Yes, it is possible, but inefficient. A thread wanting to do a system call first sets an alarm timer, then does the call. If the call blocks, the timer returns control to the threads package. Of course, most of the time the call will not block, and the timer has to be cleared. Thus each system call that might block has to be executed as three system calls. If timers go off prematurely, all kinds of problems can develop. This is not an attractive way to build a threads package.

19. The priority inversion problem occurs when a low-priority process is in its critical region and suddenly a high-priority process becomes ready and is scheduled. If it uses busy waiting, it will run forever. With user-level threads, it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread to run. There is no preemption. With kernel-level threads this problem can arise.

20. With round-robin scheduling it works. Sooner or later L will run, and eventually it will leave its critical region. The point is, with priority scheduling, L never gets to run at all; with round robin, it gets a normal time slice periodically, so it has the chance to leave its critical region.

21. Each thread calls procedures on its own, so it must have its own stack for the local variables, return addresses, and so on. This is equally true for user-level threads as for kernel-level threads.

22. Yes. The simulated computer could be multiprogrammed. For example, while process A is running, it reads out some shared variable. Then a simulated clock tick happens and process B runs. It also reads out the same variable. Then it adds 1 to the variable. When process A runs, if it also adds one to the variable, we have a race condition.

23. Yes, it still works, but it still is busy waiting, of course.

24. It certainly works with preemptive scheduling. In fact, it was designed for that case. When scheduling is nonpreemptive, it might fail. Consider the case in which turn is
initially 0 but process 1 runs first. It will just loop forever and never release the CPU.

25. To do a semaphore operation, the operating system first disables interrupts. Then it reads the value of the semaphore. If it is doing a down and the semaphore is equal to zero, it puts the calling process on a list of blocked processes associated with the semaphore. If it is doing an up, it must check to see if any processes are blocked on the semaphore. If one or more processes are blocked, one of them is removed from the list of blocked processes and made runnable. When all these operations have been completed, interrupts can be enabled again.

26. Associated with each counting semaphore are two binary semaphores, M, used for mutual exclusion, and B, used for blocking. Also associated with each counting semaphore is a counter that holds the number of ups minus the number of downs, and a list of processes blocked on that semaphore. To implement down, a process first gains exclusive access to the semaphores, counter, and list by doing a down on M. It then decrements the counter. If it is zero or more, it just does an up on M and exits. If it is negative, the process is put on the list of blocked processes. Then an up is done on M and a down is done on B to block the process. To implement up, first M is downed to get mutual exclusion, and then the counter is incremented. If it is more than zero, no one was blocked, so all that needs to be done is to up M. If, however, the counter is now negative or zero, some process must be removed from the list. Finally, an up is done on B and M in that order.

27. If the program operates in phases and neither process may enter the next phase until both are finished with the current phase, it makes perfect sense to use a barrier.

28. With kernel threads, a thread can block on a semaphore and the kernel can run some other thread in the same process. Consequently, there is no problem using semaphores. With user-level threads, when one thread blocks on a
semaphore, the kernel thinks the entire process is blocked and does not run it ever again. Consequently, the process fails.

29. It is very expensive to implement. Each time any variable that appears in a predicate on which some process is waiting changes, the run-time system must re-evaluate the predicate to see if the process can be unblocked. With the Hoare and Brinch Hansen monitors, processes can only be awakened on a signal primitive.

30. The employees communicate by passing messages: orders, food, and bags in this case. In UNIX terms, the four processes are connected by pipes.

31. It does not lead to race conditions (nothing is ever lost), but it is effectively busy waiting.

32. It will take nT sec.

33. In simple cases it may be possible to determine whether I/O will be limiting by looking at source code. For instance, a program that reads all its input files into buffers at the start will probably not be I/O bound, but a problem that reads and writes incrementally to a number of different files (such as a compiler) is likely to be I/O bound. If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU time used by a program, you can compare this with the total time to complete execution of the program. This is, of course, most meaningful on a system where you are the only user.

34. For multiple processes in a pipeline, the common parent could pass to the operating system information about the flow of data. With this information the OS could, for instance, determine which process could supply output to a process blocking on a call for input.

35. The CPU efficiency is the useful CPU time divided by the total CPU time. When Q ≥ T, the basic cycle is for the process to run for T and undergo a process switch for S. Thus (a) and (b) have an efficiency of T/(S + T). When the quantum is shorter than T, each run of T will require T/Q process switches, wasting a time ST/Q. The efficiency here is then T/(T + ST/Q), which reduces to Q/(Q + S), which is the answer to (c). For
(d), we just substitute Q for S and find that the efficiency is 50%. Finally, for (e), as Q → 0 the efficiency goes to 0.

36. Shortest job first is the way to minimize average response time.
0 < X ≤ 3: X, 3, 5, 6, 9.
3 < X ≤ 5: 3, X, 5, 6, 9.
5 < X ≤ 6: 3, 5, X, 6, 9.
6 < X ≤ 9: 3, 5, 6, X, 9.
X > 9: 3, 5, 6, 9, X.

37. For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. For priority scheduling, B is run first. After 6 minutes it is finished. The other jobs finish at 14, 24, 26, and 30, for an average of 18.8 minutes. If the jobs run in the order A through E, they finish at 10, 16, 18, 22, and 30, for an average of 19.2 minutes. Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of 14 minutes.

38. The first time it gets 1 quantum. On succeeding runs it gets 2, 4, 8, and 15, so it must be swapped in 5 times.

39. A check could be made to see if the program was expecting input and did anything with it. A program that was not expecting input and did not process it would not get any special priority boost.

40. The sequence of predictions is 40, 30, 35, and now 25.

41. The fraction of the CPU used is 35/50 + 20/100 + 10/200 + x/250. To be schedulable, this must be less than 1. Thus x must be less than 12.5 msec.
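Two of the numerical answers above (solutions 37 and 41) can be reproduced in a few lines. Round robin with a very short quantum behaves like processor sharing: all active jobs advance at the same rate, so the shortest remaining job always finishes first. The job lengths A=10, B=6, C=2, D=4, E=8 minutes are those implied by solution 37's arithmetic:

```python
def rr_finish_times(bursts):
    """Finishing times under round robin with an infinitesimal quantum."""
    finishes, t, prev = [], 0.0, 0.0
    for i, r in enumerate(sorted(bursts)):
        t += (r - prev) * (len(bursts) - i)   # all still-active jobs share the CPU
        prev = r
        finishes.append(t)
    return finishes

f = rr_finish_times([10, 6, 2, 4, 8])   # jobs A..E of solution 37
print(f, sum(f) / len(f))               # [10.0, 18.0, 24.0, 28.0, 30.0] 22.0

# Solution 41: utilization 35/50 + 20/100 + 10/200 + x/250 must stay below 1.
u = 35/50 + 20/100 + 10/200
x_max = (1 - u) * 250
print(round(x_max, 1))                  # 12.5
```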
42. Two-level scheduling is needed when memory is too small to hold all the ready processes. Some set of them is put into memory, and a choice is made from that set. From time to time, the set of in-core processes is adjusted. This algorithm is easy to implement and reasonably efficient, certainly a lot better than, say, round robin without regard to whether a process was in memory or not.

43. Each voice call runs 200 times/second and uses up 1 msec per burst, so each voice call needs 200 msec per second, or 400 msec for the two of them. The video runs 25 times a second and uses up 20 msec each time, for a total of 500 msec per second. Together they consume 900 msec per second, so there is time left over and the system is schedulable.

44. The kernel could schedule processes by any means it wishes, but within each process it runs threads strictly in priority order. By letting the user process set the priority of its own threads, the user controls the policy but the kernel handles the mechanism.

45. The change would mean that after a philosopher stopped eating, neither of his neighbors could be chosen next. In fact, they would never be chosen. Suppose that philosopher 2 finished eating. He would run test for philosophers 1 and 3, and neither would be started, even though both were hungry and both forks were available. Similarly, if philosopher 4 finished eating, philosopher 3 would not be started. Nothing would start him.

46. If a philosopher blocks, his neighbors can later see that he is hungry by checking his state, in test, so he can be awakened when the forks are available.

47. Variation 1: readers have priority. No writer may start when a reader is active. When a new reader appears, it may start immediately unless a writer is currently active. When a writer finishes, if readers are waiting, they are all started, regardless of the presence of waiting writers. Variation 2: writers have priority. No reader may start when a writer is waiting. When the last active process finishes, a writer is started, if there is one; otherwise, all the readers (if
any) are started. Variation 3: symmetric version. When a reader is active, new readers may start immediately. When a writer finishes, a new writer has priority, if one is waiting. In other words, once we have started reading, we keep reading until there are no readers left. Similarly, once we have started writing, all pending writers are allowed to run.

48. A possible shell script might be:

if [ ! -f numbers ]; then echo 0 > numbers; fi
count=0
while (test $count != 200)
do
    count=`expr $count + 1`
    n=`tail -1 numbers`
    expr $n + 1 >> numbers
done

Run the script twice simultaneously, by starting it once in the background (using &) and again in the foreground. Then examine the file numbers. It will probably start out looking like an orderly list of numbers, but at some point it will lose its orderliness, due to the race condition created by running two copies of the script. The race can be avoided by having each copy of the script test for and set a lock on the file before entering the critical area, and unlocking it upon leaving the critical area. This can be done like this:

if ln numbers numbers.lock
then
    n=`tail -1 numbers`
    expr $n + 1 >> numbers
    rm numbers.lock
fi

This version will just skip a turn when the file is inaccessible; variant solutions could put the process to sleep, do busy waiting, or count only loops in which the operation is successful.

SOLUTIONS TO CHAPTER 3 PROBLEMS

1. It is an accident. The base register is 16,384 because the program happened to be loaded at address 16,384. It could have been loaded anywhere. The limit register is 16,384 because the program contains 16,384 bytes. It could have been any length. That the load address happens to exactly match the program length is pure coincidence.

2. Almost the entire memory has to be copied, which requires each word to be read and then rewritten at a different location. Reading 4 bytes takes 10 nsec, so reading 1 byte takes 2.5 nsec and writing it takes another 2.5 nsec, for a total of 5 nsec per byte compacted. This is a rate of 200,000,000 bytes/sec. To copy 128 MB (2^27 bytes, which is about 1.34 × 10^8 bytes), the
computer needs 2^27/200,000,000 sec, which is about 671 msec. This number is slightly pessimistic because if the initial hole at the bottom of memory is k bytes, those k bytes do not need to be copied. However, if there are many holes and many data segments, the holes will be small, so k will be small and the error in the calculation will also be small.

3. The bitmap needs 1 bit per allocation unit. With 2^27/n allocation units, this is 2^24/n bytes. The linked list has 2^27/2^16 or 2^11 nodes, each of 8 bytes, for a total of 2^14 bytes. For small n, the linked list is better. For large n, the bitmap is better. The crossover point can be calculated by equating these two formulas and solving for n. The result is 1 KB. For n smaller than 1 KB, a linked list is better. For n larger than 1 KB, a bitmap is better. Of course, the assumption of segments and holes alternating every 64 KB is very unrealistic. Also, we need n ≤ 64 KB if the segments and holes are 64 KB.

4. First fit takes 20 KB, 10 KB, 18 KB. Best fit takes 12 KB, 10 KB, and 9 KB. Worst fit takes 20 KB, 18 KB, and 15 KB. Next fit takes 20 KB, 18 KB, and 9 KB.

5. For a 4-KB page size the (page, offset) pairs are (4, 3616), (8, 0), and (14, 2656). For an 8-KB page size they are (2, 3616), (4, 0), and (7, 2656).

6. They built an MMU and inserted it between the 8086 and the bus. Thus all 8086 physical addresses went into the MMU as virtual addresses. The MMU then mapped them onto physical addresses, which went to the bus.

7. (a) M has to be at least 4,096 to ensure a TLB miss for every access to an element of X. Since N only affects how many times X is accessed, any value of N will do.
(b) M should still be at least 4,096 to ensure a TLB miss for every access to an element of X. But now N should be greater than 64K to thrash the TLB; that is, X should exceed 256 KB.

8. The total virtual address space for all the processes combined is nv, so this much storage is needed for pages. However, an amount r can be in RAM, so the amount of disk storage required is only nv − r. This amount is far more than is ever needed in practice because rarely will there be n processes
ac-tually running and even more rarely will all of them need the maximum al-lowed virtual memory.9.The page table contains232/213entries,which is524,288.Loading the pagetable takes52msec.If a process gets100msec,this consists of52msec for loading the page table and48msec for running.Thus52%of the time is spent loading page tables.10.(a)We need one entry for each page,or224=16×1024×1024entries,sincethere are36=48−12bits in the page numberfield.。

Modern Operating Systems (lecture slides)

Applications of modern operating systems

Mainframes and embedded systems run a wide variety of operating systems. On servers, Linux, UNIX, and Windows Server hold most of the market. In supercomputing, Linux has displaced Unix as the leading operating system: as of June 2012, Linux-based machines held 462 of the TOP500 slots, a 92% share. With the rise of smartphones, Android and iOS have become the two most popular mobile operating systems.

Group members: 关敏, 王鑫, 张宇, 程加昕, 程千桓

Modern Operating Systems
The operating system concept

• An operating system (OS) is the program that manages and controls a computer's hardware and software resources. It is the most basic system software, running directly on the bare machine; any other software can run only with the operating system's support. The OS sits at the interface between user and computer, and likewise between the hardware and other software. As the kernel and cornerstone of the computer system, it handles fundamental tasks such as managing and allocating memory, prioritizing requests for system resources, controlling input and output devices, operating the network, and managing the file system. Operating systems take many forms; the OS installed on a given machine may be anywhere from simple to complex.

• Drivers: the lowest layer, directly controlling and monitoring each class of hardware. Their job is to hide the hardware's details and present an abstract, uniform interface to the rest of the system.
• Kernel: the core of the operating system, usually running at the highest privilege level and providing the foundational, structural functionality.
• Interface libraries: special program libraries that wrap the system's basic services into programming interfaces (APIs) that applications can use; they are the layer closest to applications. The GNU C runtime library, for example, belongs here: it wraps various operating systems' internal interfaces into ANSI C and POSIX APIs.
• Periphery: everything outside the three parts above, typically components that provide specific higher-level services. In a microkernel design, for example, most system services, as well as the various daemons in UNIX/Linux, fall into this category.

Modern Operating Systems (English, 4th edition): course design

I. Course introduction

This is a required course designed for students majoring in computer science and technology.

The course gives an in-depth treatment of the basic principles, architectures, and models of modern operating systems and their implementation techniques, and on that foundation introduces the basic methods and strategies of operating system design, including process management, memory management, device management, file systems, and security.

Broad and deep, the course suits not only computer science and technology majors but also students in related fields.

It is likewise suitable for engineers who develop and maintain operating systems.

II. Textbook and references

Textbook
• Modern Operating Systems, 4th edition (English)

References
1. Operating System Concepts, 9th edition (Silberschatz)
2. Operating Systems: Internals and Design Principles, 9th edition (Stallings)
3. Operating Systems: Three Easy Pieces (Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau)
4. 操作系统概念与实现 (木鱼龙)

III. Schedule and content

The course comprises 16 class hours: 14 of lectures and 2 of lab work.

Course content

Week 1
• Basic concepts, history, and classification of operating systems.
• Basic OS architecture and its main components.

Week 2
• Process and thread management: process states, scheduling, and synchronization and mutual exclusion.
• Deadlock: causes, detection, and recovery.

Week 3
• Virtual memory management: virtual address spaces, paging, page replacement, and page-fault handling.
• Memory management basics: page tables and memory reclamation.

Week 4
• Device management basics: I/O models and device drivers.
• Strengths, weaknesses, and use cases of the various device-management schemes.

Week 5
• File system structure, I/O paths, and data structures.
• File and directory management policies, access permissions, and protection.

Modern Operating Systems (English, 3rd edition): course design

I. Course overview

This course gives students a deep understanding of the concepts, principles, design, and implementation of modern operating systems.

On completing the course, students will have mastered:
• basic operating system theory
• basic OS concepts and classification
• multiprocessing, multithreading, deadlock, and concurrency control
• operating system kernel design and implementation
• device management
• file management
• network management

II. Course content

1. Basic operating system theory. This part introduces fundamental OS concepts, functions, and classification.

Students will learn what operating systems do, how they are classified, and how they have evolved.

2. Multiprocessing, multithreading, deadlock, and concurrency control. This part covers these topics as they arise inside an operating system.

Students will study the concepts of processes and threads and their life cycles, the causes of deadlock and its remedies, the basic principles of concurrency control, and the common concurrency-control techniques.

3. Kernel design and implementation. This part covers how an operating system kernel is designed and built.

Students will study the kernel's components, basic kernel-design principles, implementation approaches, and scheduling algorithms.

4. Device management. This part covers device management within the operating system.

Students will study device classification, the basic principles of device management, and how to write device drivers.

5. File management. This part covers the operating system's file management.

Students will learn about file attributes, file system organization, and file storage management.

6. Network management. This part covers the operating system's networking.

Students will study basic network protocols, the network layering model, and basic network-management methods.

III. Reference texts

The reference texts are Operating System Concepts, 9th edition (English), and 现代操作系统, 3rd Chinese edition.

The English text is concise and clearly structured, making for easy reading; the Chinese text is detailed and wide-ranging, and friendlier for Chinese readers.

IV. Assessment

Assessment combines a final exam with a course design project.

The final exam counts for 70% of the grade, the course design for 30%.

The Opera system every hotelier should know (bilingual walkthrough)
Definition
Opera is a professional hotel-management software system that aims to provide a comprehensive solution for managing hotel operations.

Features
Highly integrated, modular design, friendly user interface, multilingual support, real-time data processing, and more.

History of the Opera system

Early stage: Opera was originally developed by the US company Micros and focused on hotel front-desk management.

Growth stage: as the hotel industry developed, Opera gradually broadened its scope to include finance, human resources, sales, and marketing modules.
Check-in and checkout walkthrough

Check-in: register the guest, enter their information, assign a room, and collect a deposit.

Charge entry and settlement: record the guest's charges, such as dining and laundry, and settle them all at departure.

Checkout: handle checkout, refund the deposit, and print the invoice so the guest leaves smoothly.

Notes: throughout check-in and checkout, attend to guest privacy, information security, and financial compliance to keep hotel operations safe and orderly.
Guest profile entry and maintenance

Profile entry: in Opera, entering a guest profile covers basic information (name, address, phone, etc.), reservation information (arrival date, room type, rate, etc.), and special requests or preferences. Accurate, complete information is essential for personalized service.

Profile maintenance: regularly updating profiles, including contact details, stay history, and special requests, helps the hotel know its guests and serve them more attentively; regulars and VIP guests can be given dedicated profiles.
Food-and-beverage management

Restaurant reservations and seating

Reservation management: Opera supports online booking; guests can reserve restaurant seats through the hotel's website or third-party platforms. The system records the guest's name, contact details, reservation time, party size, and other details for follow-up and service.

MOS-Ch12-e3: companion slides (PPT) for Modern Operating Systems by Andrew S. Tanenbaum

Communication in Symbian OS
Figure 12-4. Communication in Symbian OS has a block-oriented structure.
Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639
Removable Media
Features common to removable media: 1. All devices must be inserted and
Security in Symbian OS (1)
Steps when an application requires signing: 1. The software developer must obtain a
The Protocol Implementation Layer
• CSY Modules • TSY Modules • PRT Modules • MTMs

Modern Operating Systems (English, 3rd edition): teaching design

Introduction

Modern Operating Systems is a classic operating systems textbook that covers many of the field's fundamental concepts and techniques.

The third English edition is the most recent and improves and updates the material in places.

This teaching design offers instructors guidance for teaching the book's content effectively.

Textbook overview
• Title: Modern Operating Systems (3rd edition)
• Authors: Andrew S. Tanenbaum, Herbert Bos
• Year: 2018
• Publisher: Pearson

The book introduces the basic concepts, design ideas, and implementation techniques of modern operating systems.

Its main topics include process management, memory management, file systems, and network communication.

It explains operating system fundamentals in an accessible way while also covering recent techniques and trends.

Teaching goals
• Understand the basic concepts, design ideas, and implementation techniques of operating systems.
• Master the principal techniques of process management, memory management, file systems, and network communication.
• Develop students' analytical and problem-solving abilities and strengthen their programming practice.

Teaching content

Part 1: Operating system overview
• Basic concepts and evolution of operating systems.
• Composition and structure of an operating system.
• Functions and characteristics of an operating system.

Part 2: Process management
• The process concept and its characteristics.
• Process creation, termination, and scheduling.
• Interprocess communication and synchronization.

Part 3: Memory management
• Basic memory concepts and the memory hierarchy.
• Memory allocation and reclamation.
• Virtual memory and paged storage.

Part 4: File systems
• Basic file concepts and attributes.
• File organization and management.
• File system implementation and optimization.

Part 5: Network communication
• Basic networking concepts and communication techniques.
• The protocol stack and protocol layering.
• The TCP/IP family and application-layer protocols.

Teaching methods
• Lectures: a systematic presentation of the material gives students the foundations of OS concepts, principles, and techniques.
• Practice: guided programming exercises consolidate and deepen the course's key points.
• Research: students are encouraged to read operating systems papers and join research projects, sharpening their analytical and problem-solving skills.

Evaluation
• Exam: a closed-book written final tests mastery of the course content.
• Assignments: coursework consolidates and deepens the material through practice.
• Participation: the quality and quantity of each student's classroom discussion also weigh into the grade.

Modern Operating Systems: end-of-chapter answers

(Part 1: answers to the exercises in 计算机操作系统, 汤小丹 ed., 电子工业出版社, 2008.4.)

Chapter 1: Introduction

1.11 What are the major characteristics of an OS, and which are the most basic? Answer: the four basic characteristics are concurrency, sharing, virtualization, and asynchrony; the most basic are concurrency and sharing.

1.15 What are the main functions of processor management, and what are its principal tasks? Answer omitted; see p. 17.

1.22 (1) What advantages does a microkernel operating system have, and why does it have them? (2) What functions and features do modern operating systems add over traditional ones?

Chapter 2: Process description and control — exercises and answers omitted.

Chapter 3: Process synchronization and communication

3.9 In the producer-consumer problem, how would the result change if signal(full) or signal(empty) were missing? Answer: the resource semaphore full counts the occupied buffer slots and starts at 0; empty counts the free slots and starts at n. signal(full) belongs in the producer: if it is missing, the consumer blocks forever waiting and can never consume the data the producer produced. signal(empty) belongs in the consumer: if it is missing, the producer ends up blocked forever and cannot place produced data into the buffer.
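The pairing described in answer 3.9 can be seen in runnable form; Python's threading.Semaphore stands in here for the record-type semaphores, and the buffer size and item values are illustrative:

```python
import threading
import collections

N = 4                                   # buffer capacity
buf = collections.deque()
empty = threading.Semaphore(N)          # free slots, initially N
full = threading.Semaphore(0)           # occupied slots, initially 0
lock = threading.Lock()                 # mutual exclusion on buf

def producer(items):
    for item in items:
        empty.acquire()                 # wait(empty)
        with lock:
            buf.append(item)
        full.release()                  # signal(full): drop this and the
                                        # consumer blocks forever on full

def consumer(n, out):
    for _ in range(n):
        full.acquire()                  # wait(full)
        with lock:
            out.append(buf.popleft())
        empty.release()                 # signal(empty): drop this and the
                                        # producer eventually blocks on empty

out = []
items = list(range(10))
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items), out))
p.start(); c.start(); p.join(); c.join()
```

With one producer and one consumer over a FIFO buffer, the items arrive in order; removing either release() turns the run into the permanent blocking the answer describes.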

3.13 Using record-type semaphores, write a dining-philosophers algorithm that cannot deadlock.

Answer (approach 1): allow at most four philosophers to reach for their left chopstick at the same time. Then at least one philosopher is guaranteed to eat, and on finishing releases both chopsticks, letting more philosophers eat. The algorithm:

var chopstick: array [0..4] of semaphore := 1;
    room: semaphore := 4;
repeat
    wait(room);
    wait(chopstick[i]);
    wait(chopstick[(i+1) mod 5]);
    ... eat ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) mod 5]);
    signal(room);
    ... think ...
until false;

Chapter 4: Processor scheduling and deadlock

4.1 What are the main tasks of high-level and low-level scheduling? Why is medium-level scheduling introduced? Answer omitted; see p. 73.
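The algorithm in answer 3.13 translates almost line for line into Python; the round count and the meals tally are illustrative additions so a run can be checked. Because room admits at most four philosophers at a time, every run terminates without deadlock:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
room = threading.Semaphore(N - 1)       # at most four philosophers compete
meals = [0] * N
meals_lock = threading.Lock()

def philosopher(i, rounds):
    for _ in range(rounds):
        room.acquire()                  # wait(room)
        chopstick[i].acquire()          # wait(chopstick[i])
        chopstick[(i + 1) % N].acquire()
        with meals_lock:
            meals[i] += 1               # eat
        chopstick[(i + 1) % N].release()
        chopstick[i].release()
        room.release()                  # signal(room)

threads = [threading.Thread(target=philosopher, args=(i, 50)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
```

If room is removed, all five philosophers can each grab their left chopstick simultaneously and the classic circular wait becomes possible.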

Modern Operating Systems (review outline)

When process scheduling happens (p. 83); scheduling algorithms (pp. 85-90)
– first-come first-served, round-robin, priority scheduling, shortest job first, highest response ratio next
Memory management
No memory abstraction
One memory abstraction: address spaces
– swapping
– free-memory management: bitmaps, linked lists
Virtual memory (p. 106)
Page replacement algorithms (p. 113)
Segmentation (p. 131)
– file sharing (p. 158)
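Of the page replacement algorithms that outline entry points to (p. 113), FIFO is the simplest to sketch; the reference string below is the classic one exhibiting Belady's anomaly, where adding a frame increases the fault count:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for FIFO replacement on a reference string."""
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page in resident:
            continue                            # hit: nothing to do
        faults += 1
        if len(resident) == frames:
            resident.discard(order.popleft())   # evict the oldest resident page
        resident.add(page)
        order.append(page)
    return faults

# Classic Belady reference string: more frames can mean MORE faults.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

With 3 frames this string incurs 9 faults; with 4 frames it incurs 10.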
File systems
Disk space management
– block size
– keeping track of free blocks
File system performance (p. 172)
– caching
– block read-ahead
– reducing disk-arm motion
I/O
Principles of I/O hardware
– I/O devices
– device controllers
– memory-mapped I/O
– DMA
– interrupts

Principles of I/O software
– programmed I/O
– interrupt-driven I/O
– I/O using DMA

Introduction
What is an operating system?
– an extended machine
– a resource manager
History and functions of operating systems; the basic types of operating systems; the two interfaces an OS offers users
– the command interface
– the program interface
Processes and threads
The process concept; the static description of a process
– the PCB, associated program segments, and data-structure collections
Process states and their transitions
– ready, running, blocked

Processes and threads
Process mutual exclusion and synchronization
File systems
Files from the user's view: 1. file naming 2. file structure 3. file types 4. file access 5. file attributes 6. file operations

File systems
Directories
– single-level directory systems
– hierarchical directory systems
– path names
– directory operations

File systems
File system implementation
– file system layout
– implementing files:
contiguous allocation, linked-list allocation, file allocation tables, i-nodes
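Of the file-implementation schemes just listed (contiguous, linked-list, file allocation table, i-nodes), the FAT variant is easy to sketch: the linked list's next-block pointers move out of the data blocks into one in-memory table. The 8-entry table and the two block chains below are invented for illustration:

```python
# File-allocation-table sketch: each entry holds the number of the next
# block in the file's chain; EOF marks the last block, FREE an unused one.
FREE, EOF = -2, -1

def read_chain(fat, start):
    """Follow a file's block chain through the FAT, returning the block list."""
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]              # the table, not the block, stores the link
    return blocks

# A toy 8-entry FAT: file A occupies blocks 2 -> 5 -> 3, file B just block 6.
fat = [FREE, FREE, 5, EOF, FREE, 3, EOF, FREE]
```

Random access improves over plain linked allocation because the whole chain can be walked in memory without touching the disk.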
– indirect and direct constraints; mutual exclusion and synchronization
– implementation techniques: locking; interprocess communication
– critical regions
– mutual exclusion with busy waiting
– sleep and wakeup

Reflections on modern operating systems

In today's digital era, operating systems are the soul of computers and mobile devices, governing how they run and what they can do.

From the personal computers and smartphones we use every day to enterprise servers and industrial control systems, operating systems are everywhere and play a vital role.

Yet for most users the operating system is a backstage hero whose complexity and importance often go unnoticed.

This essay looks more closely at several aspects of modern operating systems.

The development of modern operating systems can be traced back to the 1950s and 1960s.

Early operating systems were relatively simple, mainly managing the computer's hardware resources and performing basic task scheduling.

As computer technology advanced rapidly, operating systems went through several major transformations and upgrades.

The systems familiar today, such as Windows, Mac OS, and Linux, and the mobile operating systems Android and iOS, offer extremely powerful and rich functionality.

From the user's perspective, an operating system first of all provides an intuitive, friendly user interface.

This lets us interact easily with computers and mobile devices for all kinds of work and entertainment.

Whether through a mouse click, a touch on a screen, or a voice command, the operating system responds quickly and translates the action into precise control of hardware and software.

The operating system also manages files and folders, letting us store, retrieve, and organize data conveniently.

On performance, modern operating systems strive for efficient resource management.

They must apportion CPU time, memory, disk storage, network bandwidth, and other resources sensibly so that multiple applications run smoothly at the same time without stuttering or crashing.

To achieve this, operating systems employ a battery of sophisticated algorithms and policies, such as process scheduling, memory paging, and cache management.

Continuous optimization of these techniques gives us faster response times and a smoother experience.

Security is an aspect modern operating systems cannot afford to neglect.

As networks spread and information technology is applied ever more widely, computers face growing security threats: viruses, malware, network attacks, and more.

The operating system must provide strong security mechanisms to protect users' privacy and data.

These include user authentication, permission management, encryption, firewalls, and intrusion detection systems.

Modern Operating Systems — 2

(a) Dispatcher thread. (b) Worker thread.
What if only a single thread were used? What would the advantages and disadvantages be?
A further question: with a single thread, could parallel operation still be achieved?

2019/3/29 — ECNU Operating Systems, Li Dong
Design trade-offs

The three ways to build a Web server.

Thread application 2: the Web server

The Web server's code skeleton
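The code skeleton this slide refers to is the dispatcher/worker outline; a minimal Python rendering follows. The queue-based hand-off and the result names ("page-N") are illustrative, not the book's exact code:

```python
import queue
import threading

requests = queue.Queue()
results = []
results_lock = threading.Lock()

def worker():
    # Worker thread: block until the dispatcher hands over a request,
    # then do the (possibly I/O-bound) work of producing the page.
    while True:
        req = requests.get()
        if req is None:                     # shutdown signal
            break
        with results_lock:
            results.append(f"page-{req}")   # stand-in for reading a web page

def dispatcher(incoming, n_workers):
    # Dispatcher: accept requests and queue them for the workers
    # (run here on the calling thread for brevity).
    for req in incoming:
        requests.put(req)
    for _ in range(n_workers):
        requests.put(None)                  # one shutdown signal per worker

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers: w.start()
dispatcher(range(8), len(workers))
for w in workers: w.join()
```

With a single thread, a blocked disk read would stall every request; the worker pool lets other requests proceed while one waits.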
A review of the process concept

What is a process?
– An execution instance of a program; the execution of a program in progress.
– It is independent of the other instances.
– It can create and run other processes.

What does a process contain?
– the program segment,
– the data segment,
– the process control block (PCB):
  process state, priority, and accounting information; program counter, register values, and stack pointer; open files, allocated devices, and so on.
The thread model

In a system without thread support, the process is the basic unit of both resource allocation and scheduling. In a system with threads, the process is the basic unit of resource allocation while the thread is the independent unit of scheduling, and the threads within a process share all of that process's resources. Shared information:

1. The process scheduler selects one process.
2. The process scheduler selects another process.

Modern Operating Systems (3rd edition) course design (2)

1. Preface

This document describes the course design for the third edition of Modern Operating Systems.

It explains the design's purpose, requirements, and content in detail, and closes with some potentially useful references.

2. Purpose
The course design for Modern Operating Systems, third edition, aims to:
• help students deeply understand the concepts and techniques of modern operating systems;
• provide practice: by designing and implementing operating system programs, students deepen their grasp of the material;
• develop students' programming and problem-solving abilities;
• encourage deeper research into operating systems, possibly pointing students toward future research directions.

3. Requirements
Students choose at least one topic related to modern operating systems and, by designing and implementing a corresponding program, deepen their understanding of the material.

The specific requirements are:

3.1 Choose a topic. Pick one or more of:
• process and thread management
• memory management
• file systems
• the input/output subsystem
• network communication

3.2 Choose an implementation language and platform. Students decide for themselves; a high-level language such as C/C++ or Java is recommended.

3.3 Design and implement the program. Based on the chosen topic, students design and implement a corresponding program. Its functionality and characteristics should relate closely to the topic, with full attention to efficiency, reliability, and extensibility.

4. References
Some potentially useful references:
• Tanenbaum A S. Modern Operating Systems (3rd ed.) [M]. Prentice Hall Press, 2008.
• Silberschatz A, Galvin P B, Gagne G. Operating System Concepts (9th ed.) [M]. Wiley, 2012.
• 戴宏平, 吴岳峰. 操作系统概念与实现 [M]. 电子工业出版社, 2005.
• Abraham S, Birman K P, Dolev D. Distributed Computing: Principles, Algorithms, and Systems (2nd ed.) [M]. Springer, 2017.

5. Closing remarks
Through this course design, every student should come away with a deeper understanding of the material.


Figure 1-8. (a) A quad-core chip with a shared L2 cache. (b) A quad-core chip with separate L2 caches.
Memory (1)
Figure 1-9. A typical memory hierarchy. The numbers are very rough approximations.
Figure 1-5. A multiprogramming system with three jobs in memory.
ICs and Multiprogramming(cont.)
•Spooling (Simultaneous Peripheral Operation On Line)
Input spooling is the technique of reading in jobs, for example from cards, onto the disk, so that when the currently executing processes are finished, there will be work waiting for the CPU. Output spooling consists of first copying printable files to disk before printing them, rather than printing directly as the output is generated.
•Timesharing
A kind of OS.
•Typical OSes: UNIX, Linux
Figure 1-3. (c) Operator carries input tape to 7094. (d) 7094 does computing. (e) Operator carries output tape to 1401. (f) 1401 prints output.
Transistors and Batch Systems (4)
Mainframe Operating Systems(3)
Timesharing
Timesharing systems allow multiple remote users to run jobs on the computer at once. E.g.: querying a big database.
Figure 1-4. Structure of a typical FMS job.
ICs and Multiprogramming
•Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It is commonly used to keep the CPU busy while one or more processes are doing I/O.
• Allow multiple programs to run at the same time
• Manage and protect memory, I/O devices, and other resources
• Includes multiplexing (sharing) resources in two different ways:
  • In time
  • In space
The Operating System as an Extended Machine
Figure 1-2. Operating systems turn ugly hardware into beautiful abstractions.
The Operating System as a Resource Manager
Server Operating Systems
• Run on servers Servers: Large PC workstation Mainframe Server OS serve multiple users at once over a network and allow the users to share hardware and software. Servers provide print service, file service, Web service Internet providers run server machines to support their customers and Websites use servers to store the Web pages and handle the incoming requests. Eg: Solaris, FreeBSD, Linux, Windows server 200X

Mainframe Operating Systems(2)
Batch system
A batch system is one that processes routine jobs without any interactive user present. E.g.: claim processing in an insurance company; sales reporting for a chain of stores.
Transaction processing system
A transaction processing system is one that handles large numbers of small requests. Each unit of work is small, but the system must handle hundreds or thousands per second. E.g.: check processing in a bank; airline reservations.
Memory (2)
Questions when dealing with a cache:
• When to put a new item into the cache.
• Which cache line to put the new item in.
• Which item to remove from the cache when a slot is needed.
• Where to put a newly evicted item in the larger memory.
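One concrete way to answer all four questions at once is an LRU cache: insert on every miss, evict the least recently used line (write-back of the evicted item to the larger memory is omitted below). A minimal sketch; the squaring "backing store" is illustrative:

```python
from collections import OrderedDict

class LRUCache:
    """Insert on every miss; evict the least recently used line."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()          # oldest entry first

    def access(self, key, load):
        if key in self.lines:               # hit: refresh recency
            self.lines.move_to_end(key)
            return self.lines[key]
        value = load(key)                   # miss: fetch from larger memory
        self.lines[key] = value             # when to insert: on every miss
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)  # which to remove: the LRU entry
        return value

cache = LRUCache(2)
loads = []                                  # records every real fetch
backing = lambda k: (loads.append(k), k * k)[1]
for k in [1, 2, 1, 3, 1, 2]:
    cache.access(k, backing)
```

On this access sequence, the two hits on key 1 never touch the backing store; only the four misses do.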
Personal Computers
•CP/M (Control Program for Microcomputers) (1974-1982)
•DOS(Disk Operating System)->MSDOS(1982-1995)
•Windows95 (16 bit system)(1995.8) •Windows98 (16 bit system)(1998.6) •Windows NT (32 bit system)(1993-1998 V3.1-V5.0Beta2) •Windows2000 (Windows NT5.0)(2000.2)
Transistors and Batch Systems (1)
Figure 1-3. An early batch system. (a) Programmers bring cards to 1401. (b)1401 reads batch of jobs onto tape.
Transistors and Batch Systems (2)
MODERN OPERATING SYSTEMS
Third Edition ANDREW S. TANENBAUM
Chapter 1 Introduction
What Is An Operating System (1)
A modern computer consists of:
• • • • • One or more processors Main memory Disks Printers Various input/output devices
Managing all these components requires a layer of software – the operating system
What Is An Operating System (2)
Figure 1-1. Where the operating system fits in.
History of Operating Systems
Generations: • • • • (1945–55) Vacuum Tubes (1955–65) Transistors and Batch Systems (1965–1980) ICs and Multiprogramming (1980–Present) Personal Computers

Disks
Figure 1-10. Structure of a disk drive.
I/O Devices
Figure 1-11. (a) The steps in starting an I/O device and getting an interrupt.
Buses
Figure 1-12. The structure of a large Pentium system
•Windows Me(Millennium edition)(another version of Win98 ,2000.9)
•Windows XP (upgraded version of Windows 2000)(2001) •Vista (2007.1)