Computer Organization and Design, Fifth Edition — Answer Collection


Bai Zhongying, Computer Organization Principles, 5th Edition — Chapter 4 Exercise Answers (detailed)

Chapter 4 exercise answers

1. The ASCII code is 7 bits. If the main-memory word length is designed to be 32 bits and the instruction length 12 bits, is this reasonable? Why?

Answer: No, it is not reasonable.

An instruction is best made a half word or a full word long; 16 bits would be a suitable choice here.

A character's ASCII code is 7 bits, so with a 32-bit memory word one location can hold four characters. That much is workable — accessing a single character merely takes a little extra time. But an instruction occupies at least one location, and a 12-bit instruction uses only 12 of the 32 bits, wasting the other 20. Since single-word instructions are usually the most numerous, the waste would be substantial, which makes the design unreasonable.
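A small sketch of the packing arithmetic above — a hypothetical helper that packs four 7-bit ASCII characters into one 32-bit word (the little-endian byte order is an assumption for illustration, not something the text specifies):

```python
# Pack four ASCII characters into one 32-bit memory word (illustrative).
def pack_word(chars: str) -> int:
    assert len(chars) == 4 and all(ord(c) < 128 for c in chars)
    word = 0
    for i, c in enumerate(chars):
        word |= ord(c) << (8 * i)  # one byte per character, byte 0 first
    return word

def unpack_word(word: int) -> str:
    return ''.join(chr((word >> (8 * i)) & 0x7F) for i in range(4))

w = pack_word("ABCD")
print(hex(w))            # 0x44434241
print(unpack_word(w))    # ABCD
# A 12-bit instruction stored in a 32-bit word wastes 32 - 12 = 20 bits:
print(32 - 12)           # 20
```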

2. Suppose a computer has a 32-bit instruction length, with two-operand, one-operand, and zero-operand instruction formats, and an instruction set of 70 instructions. Design an instruction format that meets these requirements.

Answer: The word length is 32 bits and there are 70 instructions, so the opcode needs at least 7 bits (2^6 = 64 < 70 ≤ 128 = 2^7).
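The opcode-width bound can be checked mechanically:

```python
import math

# Minimum opcode bits for n distinct instructions: ceil(log2(n)).
def opcode_bits(n_instructions: int) -> int:
    return math.ceil(math.log2(n_instructions))

print(opcode_bits(70))   # 7
# 7 bits encode up to 128 opcodes; 6 bits would give only 64 < 70.
print(2 ** 6, 2 ** 7)    # 64 128
```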

Two-operand format / one-operand format / zero-operand format (format diagrams omitted).

3. An instruction format is laid out as follows (bit fields 15–10 | 9–8 | 7–4 | 3–0); analyze the format and its addressing-mode characteristics.

Answer: The format and its addressing modes have the following characteristics: (1) It is a single-word, two-address instruction.

(2) The opcode field OP can specify 2^6 = 64 operations.

(3) Both source and destination are general-purpose registers (each field selects one of 16 registers), so this is an RR-type instruction: both operands are in registers.

(4) This structure is commonly used for register-to-register data transfers and arithmetic/logic instructions.

4. An instruction format is laid out as follows; analyze the format and its addressing-mode characteristics.

Bit fields (each of the two words): 15–10 | 9–8 | 7–4 | 3–0. Answer: The format and its addressing modes have the following characteristics: (1) It is a double-word, two-address instruction, used to access main memory.

(2) The opcode field OP can specify 2^6 = 64 operations.

(3) It is an RS-type instruction: one operand is in a general-purpose register (one of 16), and the other is in main memory.

The effective address is obtained by indexed addressing: it equals the contents of an index register (one of 16) plus the displacement.
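A minimal sketch of that indexed-addressing calculation — the register number, base value, and memory contents below are made-up illustrations, not values from the problem:

```python
# Indexed addressing: effective address = index register + displacement.
index_regs = {3: 0x1000}          # hypothetical: index register R3 holds 0x1000
memory = {0x1000 + 0x24: 42}      # hypothetical word at the effective address

def effective_address(index_reg: int, displacement: int) -> int:
    return index_regs[index_reg] + displacement

ea = effective_address(3, 0x24)
print(hex(ea))        # 0x1024
print(memory[ea])     # 42
```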

5. An instruction format is laid out as follows; analyze the format and its addressing-mode characteristics.

Bit fields: 15–12 | 11–9 | 8–6 | 5–3 | 2–0. Answer: (1) This is a single-word, two-operand instruction. Both the source and destination operands are specified by an addressing-mode field plus a register field; there are 8 registers and 8 addressing modes for each.

Depending on the addressing modes chosen, the instruction can be RR-type, RS-type, or SS-type. (2) OP is 4 bits, so there can be at most 16 operations.
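A decoder sketch for this field layout (the field positions follow the 15–12 | 11–9 | 8–6 | 5–3 | 2–0 split stated above; the sample encoding is a made-up value):

```python
# Decode the 16-bit format of problem 5: OP in bits 15-12,
# source mode/register in bits 11-9/8-6, destination mode/register in 5-3/2-0.
def decode(instr: int) -> dict:
    return {
        "op":       (instr >> 12) & 0xF,  # 4 bits -> up to 16 operations
        "src_mode": (instr >> 9) & 0x7,   # 3 bits -> 8 addressing modes
        "src_reg":  (instr >> 6) & 0x7,   # 3 bits -> 8 registers
        "dst_mode": (instr >> 3) & 0x7,
        "dst_reg":  instr & 0x7,
    }

# 0101 011 010 001 100 -> op 5, src mode 3 reg 2, dst mode 1 reg 4
print(decode(0b0101011010001100))
```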

Bai Zhongying, Computer Organization Principles (5th ed.) — Notes and Detailed Solutions to End-of-Chapter Exercises

Chapter 1: Overview of Computer Systems — 1.1 Review notes; 1.2 Detailed exercise solutions
Chapter 2: Arithmetic Methods and Arithmetic Units — 2.1 Review notes; 2.2 Detailed exercise solutions
Chapter 3: The Multi-Level Memory System — 3.1 Review notes; 3.2 Detailed exercise solutions
Chapter 4: The Instruction Set — 4.1 Review notes; 4.2 Detailed exercise solutions
Chapter 5: The Central Processing Unit — 5.1 Review notes; 5.2 Detailed exercise solutions
Chapter 6: Bus Systems — 6.1 Review notes; 6.2 Detailed exercise solutions
Chapter 7: External Storage and I/O Devices — 7.1 Review notes; 7.2 Detailed exercise solutions
Chapter 8: Input/Output Systems — 8.1 Review notes; 8.2 Detailed exercise solutions
Chapter 9: Parallel Organization and Architecture — 9.1 Review notes; 9.2 Detailed exercise solutions
Chapter 10: Course Experiment Design
Chapter 11: Comprehensive Course Design

Computer Organization Principles — Exercise Answers (ed. Bai Zhongying, 5th edition, multimedia textbook), Part 2

Chapter 2

1. 8-bit machine numbers (sign-magnitude 原码, ones' complement 反码, two's complement 补码):
(1) −35 = −0100011₂: [−35]原 = 10100011, [−35]反 = 11011100, [−35]补 = 11011101
(2) [127]原 = [127]反 = [127]补 = 01111111
(3) −127 = −1111111₂: [−127]原 = 11111111, [−127]反 = 10000000, [−127]补 = 10000001
(4) −1: [−1]原 = 10000001, [−1]反 = 11111110, [−1]补 = 11111111

2. Let [x]补 = a0.a1a2…a6; find the condition on the digits for x > −0.5.
If a0 = 0 then x ≥ 0, which certainly satisfies x > −0.5, and a1…a6 may be anything.
If a0 = 1 then x ≤ 0. Since −0.5 = −0.1₂ and [−0.5]补 = 1.100000, the condition x > −0.5 requires a1 = 1 with a2…a6 not all zero (if a2…a6 were all zero, x would equal −0.5 exactly).

3. A 32-bit floating-point format has an 8-bit exponent in excess (biased) code and a 23-bit two's-complement mantissa (sign bit Ms plus 22 magnitude bits M21…M0), with base 2.
(1) Largest number: E = 11111111, Ms = 0, M = 11…1 (all ones), written 1 11111111 0111…1; its value is 2^127 × (1 − 2^−22).
(2) Smallest (most negative) number: E = 11111111, Ms = 1, M = 00…0 (all zeros), written 1 11111111 1000…0; its value is −2^127.
(3) Normalized range:
positive maximum: E = 11…1, Ms = 0, M = 11…1, value 2^127 × (1 − 2^−22);
positive minimum: E = 00…0, Ms = 0, M = 100…0, value 2^−128 × 2^−1;
negative maximum (closest to 0): E = 00…0, Ms = 1, M = 011…1, value −2^−128 × (2^−1 + 2^−22);
negative minimum: E = 11…1, Ms = 1, M = 00…0, value −2^127.
So the normalized numbers lie in [−2^127, −2^−128 × (2^−1 + 2^−22)] ∪ [2^−128 × 2^−1, 2^127 × (1 − 2^−22)].

4. In the IEEE 754 standard, a normalized 32-bit float has the value x = (−1)^S × (1.M) × 2^(E−127).
(1) 27/64 = 0.011011₂ = 1.1011 × 2^−2; E = −2 + 127 = 125 = 0111 1101, S = 0, M = 1011 0000 0000 0000 0000 000. The encoding is 0 01111101 10110000000000000000000.
(2) −27/64 differs only in the sign bit: 1 01111101 10110000000000000000000.

5. Addition with double-sign (modified) two's complement:
(1) [x]补 = 00 11011, [y]补 = 00 00011; [x+y]补 = 00 11011 + 00 00011 = 00 11110 — no overflow, x + y = 11110.
(2) [x]补 = 00 11011, [y]补 = 11 01011; [x+y]补 = 00 00110 — no overflow, x + y = 00110.
(3) [x]补 = 11 01010, [y]补 = 11 11111; [x+y]补 = 11 01001 — no overflow, x + y = −10111.

6. Subtraction uses [x−y]补 = [x]补 + [−y]补:
(1) [x]补 = 00 11011, [−y]补 = 00 11111; the sum is 01 11010 — the two sign bits are 01, so there is positive overflow.
(2) [x]补 = 00 10111, [−y]补 = 11 00101; the sum is 11 11100 — no overflow, x − y = −00100.
(3) [x]补 = 00 11011, [−y]补 = 00 10011; the sum is 01 01110 — positive overflow.

7. Array multipliers (the sign is handled separately in the sign-magnitude case):
(1) [x]原 = 0 11011, [y]原 = 1 11111: |x| = 11011, |y| = 11111, |x| × |y| = 1101000101, so [x×y]原 = 1 1101000101. With a two's-complement array multiplier, [x]补 = 0 11011, [y]补 = 1 00001; the product sign is 1 and [x×y]补 = 1 0010111011.
(2) [x]原 = 1 11111, [y]原 = 1 11011: |x| = 11111, |y| = 11011, |x| × |y| = 1101000101, so [x×y]原 = 0 1101000101. In two's complement, [x]补 = 1 00001, [y]补 = 1 00101; the product sign is 0 and [x×y]补 = 0 1101000101.

8. Division by the alternating add/subtract (non-restoring) method:
(1) [x]原 = [x]补 = 0 11000, [−|y|]补 = 1 00001. Working through the shift-and-add/subtract steps produces the quotient bits q0…q5 = 011000, giving [x÷y]原 = 1.11000, i.e. x ÷ y = −0.11000, with remainder 0 11000 (scaled by 2^−5).
(2) [|x|]补 = 0 01011, [−|y|]补 = 1 00111. The same procedure gives x ÷ y = −0.01110, with remainder 0 00010 (scaled by 2^−5).

9. Floating-point addition and subtraction (align exponents, add mantissas, normalize):
(1) x = 2^−011 × 0.100101, y = 2^−010 × (−0.011110). Aligning x to the larger exponent gives [x]浮 = 2^−2 × 0.010010(1). Then x + y = 11.110100(1) before normalization, and after normalization x + y = 2^−4 × (−0.101110); x − y = 00.110000(1) before normalization, giving x − y = 2^−2 × 0.110001 after rounding the guard bit.
(2) x = 2^−101 × (−0.010110), y = 2^−100 × 0.010110. Aligning x gives mantissa 1.110101(0) at exponent −4. Then x + y = 00.001011, which normalizes to x + y = 2^−6 × 0.101100; x − y = 11.011111, which normalizes to x − y = 2^−4 × (−0.100001).

10. Floating-point multiplication and division:
(1) Ex = 0011, Mx = 0.110100; Ey = 0100, My = 0.100100. Ez = Ex + Ey = 0111, and Mx × My = 0.01110101; after normalization and rounding, x × y = 2^6 × 0.111011.
(2) Ex = 1110, Mx = 0.011010; Ey = 0011, My = 0.111100. Ez = Ex − Ey = 1110 + 1101 = 1011. Dividing the mantissas with [Mx]补 = 00.011010, [My]补 = 00.111100, [−My]补 = 11.000100 gives the quotient 0.110110, so x ÷ y = 2^−6 × 0.110110 with remainder 0.101100 × 2^−6.

11. For a 4-bit adder, Ci = AiBi + AiCi−1 + BiCi−1 = AiBi + (Ai + Bi)Ci−1 = AiBi + (Ai ⊕ Bi)Ci−1.
(1) Serial (ripple) carry: C1 = G1 + P1C0, C2 = G2 + P2C1, C3 = G3 + P3C2, C4 = G4 + P4C3, where G1 = A1B1, G2 = A2B2, G3 = A3B3, G4 = A4B4, and P1 = A1 ⊕ B1 (A1 + B1 also works), P2 = A2 ⊕ B2, P3 = A3 ⊕ B3, P4 = A4 ⊕ B4.
(2) Parallel (carry-lookahead): C1 = G1 + P1C0; C2 = G2 + P2G1 + P2P1C0; C3 = G3 + P3G2 + P3P2G1 + P3P2P1C0; C4 = G4 + P4G3 + P4P3G2 + P4P3P2G1 + P4P3P2P1C0.

12. (1) The 74181 forming the lowest four bits produces the carry C4 = Cn+4 = G + PCn = G + PC0, where C0 is the carry into bit 0, G = y3 + y2x3 + y1x2x3 + y0x1x2x3, and P = x0x1x2x3. Then C5 = y4 + x4C4 and C6 = y5 + x5C5 = y5 + x5y4 + x5x4C4.
(2) With a standard gate delay of T and an AND-OR-NOT gate delay of 1.5T, the carry passes through one inverter and two AND-OR-NOT levels on its way from C0 to C6, so the longest delay is T + 2 × 1.5T = 4T.
(3) The longest add time is measured from applying the operands to the ALU: the first 74181 contributes 3 AND-OR-NOT levels (producing x0, y0, and Cn+4), the second and third chips contribute 2 inverter levels and 2 AND-OR-NOT levels (the carry chain), and the fourth contributes the sum logic (one AND-OR-NOT level plus a half adder, taken as 3T). The total add time is t0 = 3 × 1.5T + 2T + 2 × 1.5T + 1.5T + 3T = 14T.

13. Excess-3 decimal adder: let the two excess-3 operand digits be Xi = Xi3Xi2Xi1Xi0 and Yi = Yi3Yi2Yi1Yi0, let a first binary addition produce the sum Si′ = Si3′Si2′Si1′Si0′ with carry Ci+1′, and let the corrected excess-3 sum be Si with carry Ci+1. When Ci+1′ = 1, Si = Si′ + 0011 and the carry Ci+1 is generated; when Ci+1′ = 0, Si = Si′ + 1101. A one-digit decimal adder cell is therefore a 4-bit binary adder followed by a correction adder controlled by the carry (circuit figure omitted).


Bai Zhongying, Computer Organization Principles, 5th Edition — Chapter 3 Exercise Answers (detailed)

Chapter 3 exercise answers

1. A memory has a 20-bit address and a 32-bit word length.
(1) How many bytes of information can it store?
(2) If it is built from 512K × 8-bit SRAM chips, how many chips are needed?
(3) How many address bits are needed for chip selection?
Answer: (1) The capacity is 2^20 × 32 / 8 = 4 MB. (2) The number of chips is (2^20 × 32) / (512K × 8) = 8. (3) Building a 32-bit word from 512K × 8 chips takes groups of 4 chips for word-width expansion, and then 2 such groups for capacity expansion.

So only one address bit — the highest one — is needed for chip selection.
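The capacity and chip-count arithmetic above can be written out directly:

```python
# Problem 1 of chapter 3: 20-bit address, 32-bit words, 512K x 8 SRAM chips.
ADDR_BITS, WORD_BITS = 20, 32

capacity_bytes = (2 ** ADDR_BITS) * WORD_BITS // 8
print(capacity_bytes // 2 ** 20, "MB")     # 4 MB

chip_bits = 512 * 1024 * 8
chips = (2 ** ADDR_BITS) * WORD_BITS // chip_bits
print(chips)                               # 8 chips (2 banks of 4)

# A 512K chip needs 19 internal address bits, so 20 - 19 = 1 bit selects the bank.
print(ADDR_BITS - 19)                      # 1
```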

2. A 64-bit machine uses semiconductor main memory with a 26-bit address. The maximum allowed main memory is built from 4M × 8-bit DRAM chips organized as memory modules.
(1) If each module is 16M × 64 bits, how many modules are needed?
(2) How many DRAM chips are in each module?
(3) How many DRAM chips does main memory need in total, and how does the CPU select among the modules?
Answer: (1) Modules needed = (2^26 × 64) / (16M × 64) = 4. (2) Chips per module = (16M × 64) / (4M × 8) = 32. (3) Total chips = (2^26 × 64) / (4M × 8) = 128. With 4 modules, the CPU selects a module using the two highest address bits, A25 and A24, through a 2:4 decoder; the remaining 24 address lines select a location within the module.

3. Build a 64K × 32-bit memory from 16K × 8-bit DRAM chips.
(1) Draw the logic diagram of the memory.

(2) Suppose the memory read/write cycle is 0.5 μs and the CPU accesses memory at least once every 1 μs.

Which refresh scheme is more appropriate? What is the maximum interval between two refreshes, and how long does it actually take to refresh every storage cell once?
Answer: (1) The memory needs (64K × 32) / (16K × 8) = 16 chips. Each group of 4 chips forms a 16K × 32 block (word-width expansion: within a group only the data lines differ, connecting to D0–D7, D8–D15, D16–D23, and D24–D31, with all other same-named pins wired in parallel). The low 14 address bits A0–A13 address the cells inside each chip, split into row and column addresses applied in two steps through pins A0–A6. Four such groups then expand the capacity, with the two high address bits A14 and A15 selecting one group through a 2:4 decoder.
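The chip-count and address-split arithmetic for this memory can be checked as follows:

```python
import math

# Problem 3: build a 64K x 32-bit memory from 16K x 8-bit DRAM chips.
total_bits = 64 * 1024 * 32
chip_bits = 16 * 1024 * 8
chips = total_bits // chip_bits
print(chips)                                  # 16 chips

width_groups = 32 // 8                        # 4 chips in parallel widen 8 -> 32 bits
depth_groups = (64 * 1024) // (16 * 1024)     # 4 groups extend 16K -> 64K words
print(width_groups, depth_groups)             # 4 4

internal_addr_bits = int(math.log2(16 * 1024))  # A0-A13 inside each chip
print(internal_addr_bits)                     # 14 (A14/A15 then select the group)
```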

Computer Organization and Design, Fifth Edition — Answers

About the book: "Computer Organization and Design" is a book published by China Machine Press in 2010; its author is David A. Patterson.

The book uses a MIPS processor to present computer hardware technology, pipelining, the memory hierarchy, I/O, and other fundamentals.

It also includes an introduction to the x86 architecture.

Synopsis: This best-selling computer organization text has been thoroughly updated to address the revolutionary change sweeping computer architecture: the move from uniprocessors to multicore microprocessors.

The ARM edition was published to reflect the importance of embedded systems to the computing industry across Asia, and it uses an ARM processor to discuss a real computer's instruction set and arithmetic.

ARM is the most popular instruction set architecture for embedded devices, of which roughly four billion are sold worldwide each year.

It adopts ARMv6 (the ARM11 family) as the primary architecture for presenting the instruction set and computer arithmetic.

It covers the revolutionary shift from sequential to parallel computing, with a new chapter on parallelism and sections in every chapter highlighting parallel hardware and software topics.

A new appendix, written by NVIDIA's chief scientist and director of architecture, introduces the emergence and importance of the modern GPU, giving the first in-depth description of this highly parallel, multithreaded, multicore processor optimized for visual computing.

It describes the Roofline model, a unique method for measuring multicore performance, and uses it with benchmarks to analyze the AMD Opteron X4, Intel Xeon 5000, Sun UltraSPARC T2, and IBM Cell.

It covers new material on flash memory and virtual machines.

It provides a wealth of thought-provoking exercises — more than 200 pages of them.

The AMD Opteron X4 and Intel Nehalem serve as running examples throughout "Computer Organization and Design: The Hardware/Software Interface" (English, 4th edition, ARM edition).

All processor performance examples have been updated with the SPEC CPU2006 suite.

Contents:
1 Computer Abstractions and Technology: 1.1 Introduction; 1.2 Below Your Program; 1.3 Under the Covers; 1.4 Performance; 1.5 The Power Wall; 1.6 The Sea Change: The Switch from Uniprocessors to Multiprocessors; 1.7 Real Stuff: Manufacturing and Benchmarking the AMD Opteron X4; 1.8 Fallacies and Pitfalls; 1.9 Concluding Remarks; 1.10 Historical Perspective and Further Reading; 1.11 Exercises
2 Instructions: Language of the Computer: 2.1 Introduction; 2.2 Operations of the Computer Hardware; 2.3 Operands of the Computer Hardware; 2.4 Signed and Unsigned Numbers; 2.5 Representing Instructions in the Computer; 2.6 Logical Operations; 2.7 Instructions for Making Decisions; 2.8 Supporting Procedures in Computer Hardware; 2.9 Communicating with People; 2.10 ARM Addressing for 32-Bit Immediates and More Complex Addressing Modes; 2.11 Parallelism and Instructions: Synchronization; 2.12 Translating and Starting a Program; 2.13 A C Sort Example to Put It All Together; 2.14 Arrays versus Pointers; 2.15 Advanced Material: Compiling C and Interpreting Java; 2.16 Real Stuff: MIPS Instructions; 2.17 Real Stuff: x86 Instructions; 2.18 Fallacies and Pitfalls; 2.19 Concluding Remarks; 2.20 Historical Perspective and Further Reading; 2.21 Exercises
3 Arithmetic for Computers: 3.1 Introduction; 3.2 Addition and Subtraction; 3.3 Multiplication; 3.4 Division; 3.5 Floating Point; 3.6 Parallelism and Computer Arithmetic: Associativity; 3.7 Real Stuff: Floating Point in the x86; 3.8 Fallacies and Pitfalls; 3.9 Concluding Remarks; 3.10 Historical Perspective and Further Reading; 3.11 Exercises
4 The Processor: 4.1 Introduction; 4.2 Logic Design Conventions; 4.3 Building a Datapath; 4.4 A Simple Implementation Scheme; 4.5 An Overview of Pipelining; 4.6 Pipelined Datapath and Control; 4.7 Data Hazards: Forwarding versus Stalling; 4.8 Control Hazards; 4.9 Exceptions; 4.10 Parallelism and Advanced Instruction-Level Parallelism; 4.11 Real Stuff: The AMD Opteron X4 (Barcelona) Pipeline; 4.12 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline, and More Pipelining Illustrations; 4.13 Fallacies and Pitfalls; 4.14 Concluding Remarks; 4.15 Historical Perspective and Further Reading; 4.16 Exercises
5 Large and Fast: Exploiting Memory Hierarchy: 5.1 Introduction; 5.2 The Basics of Caches; 5.3 Measuring and Improving Cache Performance; 5.4 Virtual Memory; 5.5 A Common Framework for Memory Hierarchies; 5.6 Virtual Machines; 5.7 Using a Finite-State Machine to Control a Simple Cache; 5.8 Parallelism and Memory Hierarchies: Cache Coherence; 5.9 Advanced Material: Implementing Cache Controllers; 5.10 Real Stuff: The AMD Opteron X4 (Barcelona) and Intel Nehalem Memory Hierarchies; 5.11 Fallacies and Pitfalls; 5.12 Concluding Remarks; 5.13 Historical Perspective and Further Reading; 5.14 Exercises
6 Storage and Other I/O Topics: 6.1 Introduction; 6.2 Dependability, Reliability, and Availability; 6.3 Disk Storage; 6.4 Flash Storage; 6.5 Connecting Processors, Memory, and I/O Devices; 6.6 Interfacing I/O Devices to the Processor, Memory, and Operating System; 6.7 I/O Performance Measures: Examples from Disk and File Systems; 6.8 Designing an I/O System; 6.9 Parallelism and I/O: Redundant Arrays of Inexpensive Disks; 6.10 Real Stuff: Sun Fire x4150 Server; 6.11 Advanced Topics: Networks; 6.12 Fallacies and Pitfalls; 6.13 Concluding Remarks; 6.14 Historical Perspective and Further Reading; 6.15 Exercises
7 Multicores, Multiprocessors, and Clusters: 7.1 Introduction; 7.2 The Difficulty of Creating Parallel Processing Programs; 7.3 Shared Memory Multiprocessors; 7.4 Clusters and Other Message-Passing Multiprocessors; 7.5 Hardware Multithreading; 7.6 SISD, MIMD, SIMD, SPMD, and Vector; 7.7 Introduction to Graphics Processing Units; 7.8 Introduction to Multiprocessor Network Topologies; 7.9 Multiprocessor Benchmarks; 7.10 Roofline: A Simple Performance Model; 7.11 Real Stuff: Benchmarking Four Multicores Using the Roofline Model; 7.12 Fallacies and Pitfalls; 7.13 Concluding Remarks; 7.14 Historical Perspective and Further Reading; 7.15 Exercises
Index
CD-ROM CONTENT
A Graphics and Computing GPUs: A.1 Introduction; A.2 GPU System Architectures; A.3 Scalable Parallelism — Programming GPUs; A.4 Multithreaded Multiprocessor Architecture; A.5 Parallel Memory System; A.6 Floating Point Arithmetic; A.7 Real Stuff: The NVIDIA GeForce 8800; A.8 Real Stuff: Mapping Applications to GPUs; A.9 Fallacies and Pitfalls; A.10 Concluding Remarks; A.11 Historical Perspective and Further Reading
B1 ARM and Thumb Assembler Instructions: B1.1 Using This Appendix; B1.2 Syntax; B1.3 Alphabetical List of ARM and Thumb Instructions; B1.4 ARM Assembler Quick Reference; B1.5 GNU Assembler Quick Reference
B2 ARM and Thumb Instruction Encodings
B3 Instruction Cycle Timings
C The Basics of Logic Design
D Mapping Control to Hardware
ADVANCED CONTENT; HISTORICAL PERSPECTIVES & FURTHER READING; TUTORIALS; SOFTWARE
About the author: David A. Patterson is a professor of computer science at the University of California, Berkeley.

Computer Organization and Design, Fifth Edition — Chapter 4 Solutions

Chapter 4 solutions

4.1
4.1.1 The signal values are: RegWrite = 0, MemRead = 0, ALUMux = 1 (Imm), MemWrite = 1, ALUop = ADD, RegMux = X, Branch = 0. ALUMux is the control signal for the multiplexer at the ALU input: 0 (Reg) selects the register-file output, 1 (Imm) selects the immediate from the instruction word. RegMux is the control signal for the multiplexer at the register-file write input: 0 (ALU) selects the ALU output, 1 (Mem) selects the memory output.

X值表示“不在乎”(不管信号是0还是1)4.1.2除了未使用的寄存器4.1.3的分支添加单元和写入端口:分支添加,寄存器的写入端口无输出:无(所有单元都产生输出)4.2 4.2.1第四条指令使用指令存储器,两个寄存器读端口,将Rd和Rs相加的ALU,数据存储器和寄存器中的写入端口。

4.2.2 None.

This instruction can be implemented with the existing functional blocks.

4.2.3 None.

This instruction can be implemented without adding new control signals.

It only requires changes to the control logic.

4.3
4.3.1 The clock cycle time is determined by the critical path, which for the given latencies is the path that produces the data value for a load instruction: I-Mem (instruction fetch), Regs (longer than the control), the Mux selecting the ALU input, the ALU, the data memory, and the Mux selecting the value from memory to write to a register.

The delay of this path is 400 ps + 200 ps + 30 ps + 120 ps + 350 ps + 30 ps = 1130 ps.
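The critical-path sum can be tallied directly from the stage latencies quoted above:

```python
# Load-instruction critical path from 4.3.1, in picoseconds.
path = {"I-Mem": 400, "Regs": 200, "ALU-input Mux": 30,
        "ALU": 120, "D-Mem": 350, "writeback Mux": 30}
total = sum(path.values())
print(total)          # 1130 ps
print(total + 300)    # 1430 ps with the 300 ps slower ALU on the same path
```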

With the slower ALU it is 1430 ps (1130 ps + 300 ps, since the ALU is on the critical path).

4.3.2 The speedup comes from the change in clock cycle time and the change in the number of cycles the program needs: the cycle count drops by 5%, but the cycle time is 1430 ps instead of 1130 ps, so the speedup is (1/0.95) × (1130/1430) ≈ 0.83 — we are actually slowing down.
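The same speedup calculation, spelled out:

```python
# Speedup from 4.3.2: 5% fewer cycles, but cycle time grows from 1130 to 1430 ps.
speedup = (1 / 0.95) * (1130 / 1430)
print(round(speedup, 2))   # 0.83 -- less than 1, so this change is a slowdown
```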

4.3.3 Cost is always the total cost of all components, not just those on the critical path, so the original processor's cost is I-Mem + Regs + Control + ALU + D-Mem + 2 adders + 3 Muxes = 1000 + 200 + 500 + 100 + 2000 + 2 × 30 + 3 × 10 = 3890. We compute cost relative to this baseline.

Computer Organization and Design, Fifth Edition — Chapter 5 Solutions

Chapter 5 solutions

5.1
5.1.1 4
5.1.2 I, J
5.1.3 A[I][J]
5.1.4 3596 = 8 × 800/4 × 2 − 8 × 8/4 + 8000/4
5.1.5 I, J
5.1.6 A(J, I)

5.2
5.2.1 (address, binary, tag, index, hit/miss):
3    0000 0011   tag 0,  index 3   M
180  1011 0100   tag 11, index 4   M
43   0010 1011   tag 2,  index 11  M
2    0000 0010   tag 0,  index 2   M
191  1011 1111   tag 11, index 15  M
88   0101 1000   tag 5,  index 8   M
190  1011 1110   tag 11, index 14  M
14   0000 1110   tag 0,  index 14  M
181  1011 0101   tag 11, index 5   M
44   0010 1100   tag 2,  index 12  M
186  1011 1010   tag 11, index 10  M
253  1111 1101   tag 15, index 13  M
5.2.2 (two-word blocks; address, binary, tag, index, hit/miss):
3    0000 0011   tag 0,  index 1  M
180  1011 0100   tag 11, index 2  M
43   0010 1011   tag 2,  index 5  M
2    0000 0010   tag 0,  index 1  H
191  1011 1111   tag 11, index 7  M
88   0101 1000   tag 5,  index 4  M
190  1011 1110   tag 11, index 7  H
14   0000 1110   tag 0,  index 7  M
181  1011 0101   tag 11, index 2  H
44   0010 1100   tag 2,  index 6  M
186  1011 1010   tag 11, index 5  M
253  1111 1101   tag 15, index 6  M
5.2.3 (address and Cache 1 tag; then index and hit/miss for Cache 1 (1-word blocks), Cache 2 (2-word blocks), and Cache 3 (4-word blocks)):
3    (tag 0)   C1: index 3 M;  C2: index 1 M;  C3: index 0 M
180  (tag 22)  C1: index 4 M;  C2: index 2 M;  C3: index 1 M
43   (tag 5)   C1: index 3 M;  C2: index 1 M;  C3: index 0 M
2    (tag 0)   C1: index 2 M;  C2: index 1 M;  C3: index 0 M
191  (tag 23)  C1: index 7 M;  C2: index 3 M;  C3: index 1 M
88   (tag 11)  C1: index 0 M;  C2: index 0 M;  C3: index 0 M
190  (tag 23)  C1: index 6 M;  C2: index 3 H;  C3: index 1 H
14   (tag 1)   C1: index 6 M;  C2: index 3 M;  C3: index 1 M
181  (tag 22)  C1: index 5 M;  C2: index 2 H;  C3: index 1 M
44   (tag 5)   C1: index 4 M;  C2: index 2 M;  C3: index 1 M
186  (tag 23)  C1: index 2 M;  C2: index 1 M;  C3: index 0 M
253  (tag 31)  C1: index 5 M;  C2: index 2 M;  C3: index 1 M
Cache 1 miss rate = 100%; Cache 1 total cycles = 12 × 25 + 12 × 2 = 324.
Cache 2 miss rate = 10/12 = 83%; Cache 2 total cycles = 10 × 25 + 12 × 3 = 286.
Cache 3 miss rate = 11/12 = 92%; Cache 3 total cycles = 11 × 25 + 12 × 5 = 335.
Cache 2 provides the best performance.
5.2.4 First we must compute the number of cache blocks in the initial cache configuration. For this, we divide 32 KiB by 4 (for the number of bytes per word) and again by 2 (for the number of words per block). This gives us 4096 blocks and a resulting index field width of 12 bits. We also have a word offset size of 1 bit and a byte offset size of 2 bits. This gives us a tag field size of 32 − 15 = 17 bits. These tag bits, along with one valid bit per block, will require 18 × 4096 = 73728 bits or 9216 bytes.
The total cache size is thus 9216 + 32768 = 41984 bytes.
The total cache size can be generalized to
totalsize = datasize + (validbitsize + tagsize) × blocks
where totalsize = 41984, datasize = blocks × blocksize × wordsize, wordsize = 4, tagsize = 32 − log2(blocks) − log2(blocksize) − log2(wordsize), and validbitsize = 1.
Increasing from 2-word blocks to 16-word blocks will reduce the tag size from 17 bits to 14 bits.
In order to determine the number of blocks, we solve the inequality:
41984 <= 64 × blocks + 15 × blocks
Solving this inequality gives us 531 blocks, and rounding to the next power of two gives us a 1024-block cache.
The larger block size may require an increased hit time and an increased miss penalty compared with the original cache. The smaller number of blocks may cause a higher conflict miss rate than the original cache.
5.2.5 Associative caches are designed to reduce the rate of conflict misses. As such, a sequence of read requests with the same 12-bit index field but a different tag field will generate many misses. For the cache described above, the sequence 0, 32768, 0, 32768, 0, 32768, …, would miss on every access, while a 2-way set-associative cache with LRU replacement, even one with a significantly smaller overall capacity, would hit on every access after the first two.
5.2.6 Yes, it is possible to use this function to index the cache. However, information about the five bits is lost because the bits are XOR'd, so you must include more tag bits to identify the address in the cache.

5.3
5.3.1 8
5.3.2 32
5.3.3 1 + (22/8/32) = 1.086
5.3.4 3
5.3.5 0.25
5.3.6 <index, tag, data>:
<000001, 0001, mem[1024]>
<000001, 0011, mem[16]>
<001011, 0000, mem[176]>
<001000, 0010, mem[2176]>
<001110, 0000, mem[224]>
<001010, 0000, mem[160]>

5.4
5.4.1 The L1 cache has a low write miss penalty while the L2 cache has a high write miss penalty. A write buffer between the L1 and L2 caches would hide the write miss latency of the L2 cache.
The L2 cache would benefit from write buffers when replacing a dirty block, since the new block would be read in before the dirty block is physically written to memory.
5.4.2 On an L1 write miss, the word is written directly to L2 without bringing its block into the L1 cache. If this results in an L2 miss, its block must be brought into the L2 cache, possibly replacing a dirty block which must first be written to memory.
5.4.3 After an L1 write miss, the block will reside in L2 but not in L1. A subsequent read miss on the same block will require that the block in L2 be written back to memory, transferred to L1, and invalidated in L2.
5.4.4 One in four instructions is a data read, and one in ten instructions is a data write. For a CPI of 2, there are 0.5 instruction accesses per cycle; 12.5% of cycles will require a data read, and 5% of cycles will require a data write.
The instruction bandwidth is thus (0.0030 × 64) × 0.5 = 0.096 bytes/cycle. The data read bandwidth is thus 0.02 × (0.13 + 0.050) × 64 = 0.23 bytes/cycle. The total read bandwidth requirement is 0.33 bytes/cycle. The data write bandwidth requirement is 0.05 × 4 = 0.2 bytes/cycle.
5.4.5 The instruction and data read bandwidth requirements are the same as in 5.4.4. The data write bandwidth requirement becomes 0.02 × 0.30 × (0.13 + 0.050) × 64 = 0.069 bytes/cycle.
5.4.6 For CPI = 1.5 the instruction throughput becomes 1/1.5 = 0.67 instructions per cycle. The data read frequency becomes 0.25/1.5 = 0.17 and the write frequency becomes 0.10/1.5 = 0.067.
The instruction bandwidth is (0.0030 × 64) × 0.67 = 0.13 bytes/cycle.
For the write-through cache, the data read bandwidth is 0.02 × (0.17 + 0.067) × 64 = 0.22 bytes/cycle. The total read bandwidth is 0.35 bytes/cycle.
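The cache-size arithmetic from 5.2.4 above (4096 blocks, 17 tag bits, 9216 bytes of overhead, 41984 bytes total) can be reproduced mechanically:

```python
import math

# Total size of the 32 KiB cache from 5.2.4: data plus per-block tag/valid overhead.
data_bytes = 32 * 1024
words_per_block, word_bytes, addr_bits = 2, 4, 32

blocks = data_bytes // (words_per_block * word_bytes)       # 4096 blocks
index_bits = int(math.log2(blocks))                         # 12
offset_bits = int(math.log2(words_per_block * word_bytes))  # 1 word + 2 byte = 3
tag_bits = addr_bits - index_bits - offset_bits             # 17
overhead_bytes = (tag_bits + 1) * blocks // 8               # +1 valid bit per block
print(blocks, tag_bits, overhead_bytes, data_bytes + overhead_bytes)
# 4096 17 9216 41984
```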
The data write bandwidth is 0.067 × 4 = 0.27 bytes/cycle.
For the write-back cache, the data write bandwidth becomes 0.02 × 0.30 × (0.17 + 0.067) × 64 = 0.091 bytes/cycle.

Address:  0  4  16  132  232  160  1024  30  140  3100  180  2180
Line ID:  0  0  1   8    14   10   0     1   9    1     11   8
Hit/miss: M  H  M   M    M    M    M     H   H    M     M    M
Replace:  N  N  N   N    N    N    Y     N   N    Y     N    Y

5.5
5.5.1 Assuming the addresses given are byte addresses, each group of 16 accesses will map to the same 32-byte block, so the cache will have a miss rate of 1/16. All misses are compulsory misses. The miss rate is not sensitive to the size of the cache or the size of the working set. It is, however, sensitive to the access pattern and block size.
5.5.2 The miss rates are 1/8, 1/32, and 1/64, respectively. The workload is exploiting temporal locality.
5.5.3 In this case the miss rate is 0.
5.5.4 AMAT for B = 8: 0.040 × (20 × 8) = 6.40; B = 16: 0.030 × (20 × 16) = 9.60; B = 32: 0.020 × (20 × 32) = 12.80; B = 64: 0.015 × (20 × 64) = 19.20; B = 128: 0.010 × (20 × 128) = 25.60. B = 8 is optimal.
5.5.5 AMAT for B = 8: 0.040 × (24 + 8) = 1.28; B = 16: 0.030 × (24 + 16) = 1.20; B = 32: 0.020 × (24 + 32) = 1.12; B = 64: 0.015 × (24 + 64) = 1.32; B = 128: 0.010 × (24 + 128) = 1.52. B = 32 is optimal.
5.5.6 B = 128

5.6
5.6.1 P1: 1.52 GHz; P2: 1.11 GHz
5.6.2 P1: 6.31 ns (9.56 cycles); P2: 5.11 ns (5.68 cycles)
5.6.3 P1: 12.64 CPI, 8.34 ns per instruction; P2: 7.36 CPI, 6.63 ns per instruction
5.6.4 6.50 ns (9.85 cycles) — worse
5.6.5 13.04
5.6.6 P1 AMAT = 0.66 ns + 0.08 × 70 ns = 6.26 ns; P2 AMAT = 0.90 ns + 0.06 × (5.62 ns + 0.95 × 70 ns) = 5.23 ns. For P1 to match P2's performance: 5.23 = 0.66 ns + MR × 70 ns, so MR = 6.5%.

5.7
5.7.1 The cache would have 24/3 = 8 blocks per way and thus an index field of 3 bits. The accesses (address, binary, tag, set, hit/miss):
3    0000 0011  tag 0,  set 1  M
180  1011 0100  tag 11, set 2  M
43   0010 1011  tag 2,  set 5  M
2    0000 0010  tag 0,  set 1  M
191  1011 1111  tag 11, set 7  M
88   0101 1000  tag 5,  set 4  M
190  1011 1110  tag 11, set 7  H
14   0000 1110  tag 0,  set 7  M
181  1011 0101  tag 11, set 2  H
44   0010 1100  tag 2,  set 6  M
186  1011 1010  tag 11, set 5  M
253  1111 1101  tag 15, set 6  M
Each miss installs its tag; the final two-way contents are T(1) = {0, 0}, T(2) = {11}, T(4) = {5}, T(5) = {2, 11}, T(6) = {2, 15}, T(7) = {11, 0}.
5.7.2 Since this cache is fully associative and has one-word blocks, the word address is equivalent to the tag. The only possible way for there to be a hit is a repeated reference to the same word, which doesn't occur for this sequence, so every access misses. Contents after each access:
3 M: 3; 180 M: 3, 180; 43 M: 3, 180, 43; 2 M: 3, 180, 43, 2; 191 M: 3, 180, 43, 2, 191; 88 M: 3, 180, 43, 2, 191, 88; 190 M: 3, 180, 43, 2, 191, 88, 190; 14 M: 3, 180, 43, 2, 191, 88, 190, 14; 181 M: 181, 180, 43, 2, 191, 88, 190, 14; 44 M: 181, 44, 43, 2, 191, 88, 190, 14; 186 M: 181, 44, 186, 2, 191, 88, 190, 14; 253 M: 181, 44, 186, 253, 191, 88, 190, 14.
5.7.3 With two-word blocks the tag is the word address divided by 2. Accesses (address → tag, hit/miss) and contents:
3 → 1 M: 1; 180 → 90 M: 1, 90; 43 → 21 M: 1, 90, 21; 2 → 1 H; 191 → 95 M: 1, 90, 21, 95; 88 → 44 M: 1, 90, 21, 95, 44; 190 → 95 H; 14 → 7 M: 1, 90, 21, 95, 44, 7; 181 → 90 H; 44 → 22 M: 1, 90, 21, 95, 44, 7, 22; 186 → 93 M: 1, 90, 21, 95, 44, 7, 22, 93; 253 → 126 M: 1, 90, 126, 95, 44, 7, 22, 93.
The final reference replaces tag 21 in the cache, since tags 1 and 90 had been re-used at time 3 and time 8 while 21 hadn't been used since time 2.
Miss rate = 9/12 = 75%.
This is the best possible miss rate, since there were no misses on any block that had been previously evicted from the cache. In fact, the only eviction was for tag 21, which is only referenced once.
5.7.4 L1 only: .07 × 100 = 7 ns; CPI = 7 ns / .5 ns = 14
Direct-mapped L2: .07 × (12 + 0.035 × 100) = 1.1 ns; CPI = ceiling(1.1 ns / .5 ns) = 3
8-way set-associative L2: .07 × (28 + 0.015 × 100) = 2.1 ns; CPI = ceiling(2.1 ns / .5 ns) = 5
Doubled memory access time, L1 only: .07 × 200 = 14 ns; CPI = 14 ns / .5 ns = 28
Doubled memory access time, direct-mapped L2: .07 × (12 + 0.035 × 200) = 1.3 ns; CPI = ceiling(1.3 ns / .5 ns) = 3
Doubled memory access time, 8-way set-associative L2: .07 × (28 + 0.015 × 200) = 2.2 ns; CPI = ceiling(2.2 ns / .5 ns) = 5
Halved memory access time, L1 only: .07 × 50 = 3.5 ns; CPI = 3.5 ns / .5 ns = 7
Halved memory access time, direct-mapped L2: .07 × (12 + 0.035 × 50) = 1.0 ns; CPI = ceiling(1.0 ns / .5 ns) = 2
Halved memory access time, 8-way set-associative L2:
In fact, the only eviction was for tag21, which is only referenced once.5.7.4 L1 only:.07 ϫ 100 ϭ 7 nsCPI ϭ 7 ns / .5 ns ϭ 14Direct mapped L2:.07 ϫ (12 ϩ 0.035 ϫ 100) ϭ 1.1 nsCPI ϭ ceiling(1.1 ns/.5 ns) ϭ 38-way set associated L2:.07 ϫ (28 ϩ 0.015 ϫ 100) ϭ 2.1 nsCPI ϭ ceiling(2.1 ns / .5 ns) ϭ 5Doubled memory access time, L1 only:.07 ϫ 200 ϭ 14 nsCPI ϭ 14 ns / .5 ns ϭ 28Doubled memory access time, direct mapped L2:.07 ϫ (12 ϩ 0.035 ϫ 200) ϭ 1.3 nsCPI ϭ ceiling(1.3 ns/.5 ns) ϭ 3Doubled memory access time, 8-way set associated L2:.07 ϫ (28 ϩ 0.015 ϫ 200) ϭ 2.2 nsCPI ϭ ceiling(2.2 ns / .5 ns) ϭ 5Halved memory access time, L1 only:.07 ϫ 50 ϭ 3.5 nsCPI ϭ 3.5 ns / .5 ns ϭ 7Halved memory access time, direct mapped L2:.07 ϫ (12 ϩ 0.035 ϫ 50) ϭ 1.0 nsCPI ϭ ceiling(1.1 ns/.5 ns) ϭ 2Halved memory access time, 8-way set associated L2:Chapter 5 Solutions S-11.07 ϫ (28 ϩ 0.015 ϫ 50) ϭ 2.1 nsCPI ϭ ceiling(2.1 ns / .5 ns) ϭ 55.7.5 .07 ϫ (12 ϩ 0.035 ϫ (50 ϩ 0.013 ϫ 100)) ϭ 1.0 nsAdding the L3 cache does reduce the overall memory access time, which is themain advantage of having a L3 cache. Th e disadvantage is that the L3 cache takesreal estate away from having other types of resources, such as functional units.5.7.6 Even if the miss rate of the L2 cache was 0, a 50 ns access time givesAMAT ϭ .07 ϫ 50 ϭ 3.5 ns, which is greater than the 1.1 ns and 2.1 ns given by theon-chip L2 caches. As such, no size will achieve the performance goal.5.85.8.11096 days26304 hours5.8.20.9990875912%5.8.3 Availability approaches 1.0. With the emergence of inexpensive drives,having a nearly 0 replacement time for hardware is quite feasible. However,replacing fi le systems and other data can take signifi cant time. Although a drivemanufacturer will not include this time in their statistics, it is certainly a part ofreplacing a disk.5.8.4 MTTR becomes the dominant factor in determining availability. However,availability would be quite high if MTTF also grew measurably. 
If MTTF is 1000times MTTR, it the specifi c value of MTTR is not signifi cant.5.95.9.1 Need to fi nd minimum p such that 2pϾϭ p ϩ d ϩ 1 and then add one.Th us 9 total bits are needed for SEC/DED.5.9.2 Th e (72,64) code described in the chapter requires an overhead of8/64ϭ12.5% additional bits to tolerate the loss of any single bit within 72 bits,providing a protection rate of 1.4%. Th e (137,128) code from part a requires anoverhead of 9/128ϭ7.0% additional bits to tolerate the loss of any single bit within137 bits, providing a protection rate of 0.73%. Th e cost/performance of both codesis as follows:(72,64) code ϭϾ 12.5/1.4 ϭ 8.9(136,128) code ϭϾ 7.0/0.73 ϭ 9.6Th e (72,64) code has a better cost/performance ratio.5.9.3 Using the bit numbering from section 5.5, bit 8 is in error so the valuewould be corrected to 0x365.5.10 Instructors can change the disk latency, transfer rate and optimal page size for more variants. Refer to Jim Gray’s paper on the fi ve-minute rule ten years later.5.10.1 32 KB5.10.2 Still 32 KB5.10.3 64 KB. Because the disk bandwidth grows much faster than seek latency, future paging cost will be more close to constant, thus favoring larger pages.5.10.4 1987/1997/2007: 205/267/308 seconds. (or roughly fi ve minutes)5.10.5 1987/1997/2007: 51/533/4935 seconds. (or 10 times longer for every 10 years).5.10.6 (1) DRAM cost/MB scaling trend dramatically slows down; or (2) disk $/ access/sec dramatically increase. 
(2) is more likely to happen due to the emerging fl ash technology.5.115.11.1TLB miss PT hitPF 11112466911741361 (last access 0)11322270TLB missPT hit 1 (last access 1)05174136 1 (last access 0)113139163TLB hit 1 (last access 1)05174 1 (last access 2)36 1 (last access 0)113345878TLB missPT hitPF1 (last access 1)051 (last access 3)8141 (last access 2)361 (last access 0)1134887011TLB missPT hit 1 (last access 1)05 1 (last access 3)814 1 (last access 2)36 1 (last access 4)1112126083TLB hit 1 (last access 1)05 1 (last access 3)814 1 (last access 5)36 1 (last access 4)11124922512TLB missPT miss 1 (last access 6)1215 1 (last access 3)814 1 (last access 5)36 1 (last access 4)11125.11.246690TLB miss PT hit111121741361 (last access 0)0522270TLB hit111121741361 (last access 1)05139160TLB hit111121741361 (last access 2)05345872TLB miss PT hit PF1 (last access 3)2131741361 (last access 2)05488702TLB hit1 (last access 4)2131741361 (last access 2)05126080TLB hit1 (last access 4)2131741361 (last access 5)05492253TLB hit1 (last access 4)2131741 (last axxess 6)361 (last access 5)5A larger page size reduces the TLB miss rate but can lead to higher fragmentationand lower utilization of the physical memory.5.11.3Two-way set associative4669101TLB missPT hitPF111120174113601 (last access 0)01312227000TLB missPT hit1 (last access 1)050174113601 (last access 0)013113916311TLB missPT hit1 (last access 1)0501 (last access 2)16113601 (last access 0)113134587840TLB missPT hitPF1 (last access 1)0501 (last access 2)1611 (last access 3)41401 (last access 0)1131488701151TLB missPT hit1 (last access 1)0501 (last access 2)1611 (last access 3)41401 (last access 4)512112608311TLB hit 1 (last access 1)050 1 (last access 5)161 1 (last access 3)4140 1 (last access 4)5121492251260TLB missPT miss1 (last access 6)61501 (last access 5)1611 (last access 3)41401 (last access 4)51214669101TLB miss PT hit PF11112010131136204932227000TLB miss PT hit1050101311362049313916303TLB miss PT 
hit1050101311362106334587820TLB miss PT hit PF121401013113621063488701123TLB miss PT hit121401013113621212312608303TLB miss PT hit121401013113621063492251230TLB miss PT miss13150101311362163All memory references must be cross referenced against the page table and the TLB allows this to be performed without accessing off -chip memory (in the common case). If there were no TLB, memory access time would increase signifi cantly.5.11.4 Assumption: “half the memory available” means half of the 32-bit virtual address space for each running application.Th e tag size is 32 Ϫ log 2(8192) ϭ 32 Ϫ 13 ϭ 19 bits. All five page tables would require 5 ϫ (2^19/2 ϫ 4) bytes ϭ 5 MB.5.11.5 In the two-level approach, the 2^19 page table entries are divided into 256 segments that are allocated on demand. Each of the second-level tables contain 2^(19Ϫ8) ϭ 2048 entries, requiring 2048 ϫ 4 ϭ 8 KB each and covering 2048 ϫ 8 KB ϭ 16 MB (2^24) of the virtual address space.Direct mappedIf we assume that “half the memory” means 2^31 bytes, then the minimum amount of memory required for the second-level tables would be 5 ϫ (2^31 / 2^24) * 8 KB ϭ 5 MB. Th e fi rst-level tables would require an additional 5 ϫ 128 ϫ 6 bytes ϭ 3840 bytes.Th e maximum amount would be if all segments were activated, requiring the use of all 256 segments in each application. Th is would require 5 ϫ 256 ϫ 8 KB ϭ10 MB for the second-level tables and 7680 bytes for the fi rst-level tables.5.11.6 Th e page index consists of address bits 12 down to 0 so the LSB of the tag is address bit 13.A 16 KB direct-mapped cache with 2-words per block would have 8-byte blocks and thus 16 KB / 8 bytes ϭ 2048 blocks, and its index fi eld would span address bits 13 down to 3 (11 bits to index, 1 bit word off set, 2 bit byte off set). 
As such, the LSB of the cache tag is address bit 14. The designer would instead need to make the cache 2-way associative to increase its size to 16 KB.

5.12
5.12.1 Worst case is 2^(43−12) entries, requiring 2^(43−12) × 4 bytes = 2^33 = 8 GB.
5.12.2 With only two levels, the designer can select the size of each page table segment. In a multi-level scheme, reading a PTE requires an access to each level of the table.
5.12.3 In an inverted page table, the number of PTEs can be reduced to the size of the hash table plus the cost of collisions. In this case, serving a TLB miss requires an extra reference to compare the tag or tags stored in the hash table.
5.12.4 It would be invalid if it was paged out to disk.
5.12.5 A write to page 30 would generate a TLB miss. Software-managed TLBs are faster in cases where the software can pre-fetch TLB entries.
5.12.6 When an instruction writes to VA page 200, an interrupt would be generated because the page is marked as read only.

5.13
5.13.1 0 hits
5.13.2 1 hit
5.13.3 1 hit or fewer
5.13.4 1 hit. Any address sequence is fine so long as the number of hits is correct.
5.13.5 The best block to evict is the one that will cause the fewest misses in the future. Unfortunately, a cache controller cannot know the future! Our best alternative is to make a good prediction.
5.13.6 If you knew that an address had limited temporal locality and would conflict with another block in the cache, it could improve the miss rate. On the other hand, you could worsen the miss rate by choosing poorly which addresses to cache.

5.14
5.14.1 Shadow page table: (1) VM creates page table, hypervisor updates shadow table; (2) nothing; (3) hypervisor intercepts page fault, creates new mapping, and invalidates the old mapping in the TLB; (4) VM notifies the hypervisor to invalidate the process's TLB entries. Nested page table: (1) VM creates new page table, hypervisor adds new mappings in the PA to MA table.
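The bit-field arithmetic in 5.11.6 generalizes to any direct-mapped cache; a small helper (hypothetical, not from the book) reproduces the numbers:

```python
import math

def cache_fields(cache_bytes, block_bytes):
    """Return (offset bits, index bits, LSB position of the tag)
    for a direct-mapped cache with the given sizes."""
    blocks = cache_bytes // block_bytes
    offset_bits = int(math.log2(block_bytes))   # byte + word offset
    index_bits = int(math.log2(blocks))
    tag_lsb = offset_bits + index_bits          # first address bit of the tag
    return offset_bits, index_bits, tag_lsb

# 16 KB direct-mapped cache, two 4-byte words (8 bytes) per block:
offset_bits, index_bits, tag_lsb = cache_fields(16 * 1024, 8)
```

This reproduces the answer's values: 3 offset bits, an 11-bit index spanning address bits 13 down to 3, and a tag whose LSB is address bit 14.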
(2) Hardware walks both page tables to translate VA to MA; (3) VM and hypervisor update their page tables, hypervisor invalidates stale TLB entries; (4) same as shadow page table.
5.14.2 Native: 4; NPT: 24 (instructors can change the levels of page table). Native: L; NPT: L×(L+2).
5.14.3 Shadow page table: page fault rate. NPT: TLB miss rate.
5.14.4 Shadow page table: 1.03. NPT: 1.04.
5.14.5 Combining multiple page table updates.
5.14.6 NPT caching (similar to TLB caching).

5.15
5.15.1 CPI = 1.5 + 120/10000 × (15+175) = 3.78
If the VMM performance impact doubles => CPI = 1.5 + 120/10000 × (15+350) = 5.88
If the VMM performance impact halves => CPI = 1.5 + 120/10000 × (15+87.5) = 2.73
5.15.2 Non-virtualized CPI = 1.5 + 30/10000 × 1100 = 4.80
Virtualized CPI = 1.5 + 120/10000 × (15+175) + 30/10000 × (1100+175) = 7.60
Virtualized CPI with half I/O = 1.5 + 120/10000 × (15+175) + 15/10000 × (1100+175) = 5.69
I/O traps usually require long periods of execution time that can be performed in the guest O/S, with only a small portion of that time needing to be spent in the VMM. As such, the impact of virtualization is less for I/O-bound applications.
5.15.3 Virtual memory aims to provide each application with the illusion of the entire address space of the machine. Virtual machines aim to provide each operating system with the illusion of having the entire machine at its disposal. Thus they both serve very similar goals, and offer benefits such as increased security. Virtual memory can allow many applications running in the same memory space to avoid having to manage keeping their memory separate.
5.15.4 Emulating a different ISA requires specific handling of that ISA's API. Each ISA has specific behaviors that will happen upon instruction execution, interrupts, trapping to kernel mode, etc. that therefore must be emulated. This can require many more instructions to be executed to emulate each instruction than were originally necessary in the target ISA.
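Each CPI value in 5.15.1 and 5.15.2 is the base CPI plus (events per 10,000 instructions) × (cycles per event). A quick sketch with the exercise's numbers (the helper function is my own, not the book's):

```python
def cpi(base, *events):
    """Base CPI plus the sum over (count per 10,000 instructions, cycles each)."""
    return base + sum(count / 10000 * cycles for count, cycles in events)

# 5.15.1: 120 traps per 10,000 instructions at 15 cycles each,
# plus 175 cycles of VMM overhead per trap.
virtualized = cpi(1.5, (120, 15 + 175))         # 3.78
doubled     = cpi(1.5, (120, 15 + 350))         # 5.88
halved      = cpi(1.5, (120, 15 + 87.5))        # 2.73

# 5.15.2: 30 I/O accesses per 10,000 instructions at 1100 cycles each.
native_io      = cpi(1.5, (30, 1100))                           # 4.80
virtualized_io = cpi(1.5, (120, 15 + 175), (30, 1100 + 175))    # 7.60 (7.605)
half_io        = cpi(1.5, (120, 15 + 175), (15, 1100 + 175))    # 5.69 (5.6925)
```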
This can cause a large performance impact and make it difficult to properly communicate with external devices. An emulated system can potentially run faster than on its native ISA if the emulated code can be dynamically examined and optimized. For example, if the underlying machine's ISA has a single instruction that can handle the execution of several of the emulated system's instructions, then potentially the number of instructions executed can be reduced. This is similar to the case with the recent Intel processors that do micro-op fusion, allowing several instructions to be handled by fewer instructions.

5.16
5.16.1 The cache should be able to satisfy the request since it is otherwise idle when the write buffer is writing back to memory. If the cache is not able to satisfy hits while writing back from the write buffer, the cache will perform little or no better than the cache without the write buffer, since requests will still be serialized behind writebacks.
5.16.2 Unfortunately, the cache will have to wait until the writeback is complete since the memory channel is occupied. Once the memory channel is free, the cache is able to issue the read request to satisfy the miss.
5.16.3 Correct solutions should exhibit the following features:
1. The memory read should come before memory writes.
2. The cache should signal "Ready" to the processor before completing the write.
Example (simpler solutions exist; the state machine is somewhat underspecified in the chapter): [state-machine diagram not recoverable from the source]

5.17
5.17.1 There are 6 possible orderings for these instructions.
Ordering 1: results (5,5)
Ordering 2: results (5,5)
Ordering 3: results (6,3)
Ordering 4: results (5,3)
Ordering 5: results (6,5)
Ordering 6: results (6,3)
If coherency isn't ensured and P2's operations take precedence over P1's: (5,2)
5.17.2 [answer table not recoverable from the source]
5.17.3 Best case: orderings 1 and 6 above, which require only two total misses. Worst case: orderings 2 and 3 above, which require 4 total cache misses.
5.17.4
Ordering 1: (3,3)
Ordering 2: (2,3)
Ordering 3: (2,3)
Ordering 4: (0,3)
Ordering 5: (0,3)
Ordering 6: (2,3)
Ordering 7: (2,3)
Ordering 8: (0,3)
Ordering 9: (0,3)
Ordering 10: (2,1)
Ordering 11: (0,1)
Ordering 12: (0,1)
Ordering 13: (0,1)
Ordering 14: (0,1)
Ordering 15: (0,0)
5.17.5 Assume B=0 is seen by P2 but does not precede A=1. Result: (2,0)
5.17.6 Write back is simpler than write through, since it facilitates the use of exclusive access blocks and lowers the frequency of invalidates. It prevents the use of write-broadcasts, but this is a more complex protocol. The allocation policy has little effect on the protocol.

5.18
5.18.1 Benchmark A:
AMAT_private = (1/32) × 5 + 0.0030 × 180 = 0.70
AMAT_shared = (1/32) × 20 + 0.0012 × 180 = 0.84
Benchmark B:
AMAT_private = (1/32) × 5 + 0.0006 × 180 = 0.26
AMAT_shared = (1/32) × 20 + 0.0003 × 180 = 0.68
The private cache is superior for both benchmarks.
5.18.2 Shared cache latency doubles for the shared cache.
Memory latency doubles for the private cache.
Benchmark A:
AMAT_private = (1/32) × 5 + 0.0030 × 360 = 1.24
AMAT_shared = (1/32) × 40 + 0.0012 × 180 = 1.47
Benchmark B:
AMAT_private = (1/32) × 5 + 0.0006 × 360 = 0.37
AMAT_shared = (1/32) × 40 + 0.0003 × 180 = 1.30
Private is still superior for both benchmarks.
5.18.3 [answer table not recoverable from the source]
5.18.4 A non-blocking shared L2 cache would reduce the latency of the L2 cache by allowing hits for one CPU to be serviced while a miss is serviced for another CPU, or allow for misses from both CPUs to be serviced simultaneously. A non-blocking private L2 would reduce latency assuming that multiple memory instructions can be executed concurrently.
5.18.5 4 times.
5.18.6 Additional DRAM bandwidth, dynamic memory schedulers, multi-banked memory systems, higher cache associativity, and additional levels of cache.
f. Processor: out-of-order execution, larger load/store queue, multiple hardware threads; Caches: more miss status handling registers (MSHR); Memory: a memory controller to support multiple outstanding memory requests.

5.19
5.19.1 srcIP and refTime fields. 2 misses per entry.
5.19.2 Group the srcIP and refTime fields into a separate array.
5.19.3 peak_hour (int status); // peak hours of a given status
Group srcIP, refTime and status together.
5.19.4 Answers will vary depending on which data set is used.
Conflict misses do not occur in fully associative caches. Compulsory (cold) misses are not affected by associativity. The capacity miss rate is computed by subtracting the compulsory miss rate and the fully associative miss rate (compulsory + capacity misses) from the total miss rate. The conflict miss rate is computed by subtracting the cold and the newly computed capacity miss rate from the total miss rate. The values reported are miss rate per instruction, as opposed to miss rate per memory instruction.
5.19.5 Answers will vary depending on which data set is used.
5.19.6 apsi/mesa/ammp/mcf all have such examples.
Example cache: 4-block caches, direct-mapped vs.
2-way LRU. Reference stream (blocks): 1 2 2 6 1.
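The AMAT values in 5.18.1 all come from one formula: the L2 access rate times the L2 latency, plus the miss rate times the memory latency. A quick check in Python (the function name and shape are my own, not the book's):

```python
def amat(l2_access_rate, l2_latency, miss_rate, mem_latency):
    """L2 contribution plus main-memory penalty, in cycles per instruction."""
    return l2_access_rate * l2_latency + miss_rate * mem_latency

# 5.18.1: one L2 access every 32 instructions; 5-cycle private vs.
# 20-cycle shared L2; 180-cycle main memory.
a_private = amat(1 / 32, 5, 0.0030, 180)    # ~0.70
a_shared  = amat(1 / 32, 20, 0.0012, 180)   # ~0.84
b_private = amat(1 / 32, 5, 0.0006, 180)    # ~0.26
b_shared  = amat(1 / 32, 20, 0.0003, 180)   # ~0.68
```

The same function with a 360-cycle memory or a 40-cycle shared L2 reproduces the 5.18.2 figures as well.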


Chapter 4 Solutions

4.1
4.1.1 The signal values are as follows:
RegWrite  MemRead  ALUMux   MemWrite  ALUop  RegMux  Branch
0         0        1 (Imm)  1         ADD    X       0
ALUMux is the control signal that controls the Mux at the ALU input; 0 (Reg) selects the output of the register file, and 1 (Imm) selects the immediate from the instruction word as the second ALU input.
RegMux is the control signal that controls the Mux at the Data input of the register file; 0 (ALU) selects the output of the ALU, and 1 (Mem) selects the output of memory.
An X value means "don't care" (it does not matter whether the signal is 0 or 1).
4.1.2 All except the unused registers.
4.1.3 Outputs that are not used: the branch Add unit and the Registers write port. No outputs: none (all units produce outputs).

4.2
4.2.1 The fourth instruction uses the instruction memory, both register read ports, the ALU to add Rd and Rs together, the data memory, and the write port in Registers.

4.2.2 None. This instruction can be implemented using the existing blocks.
4.2.3 None. This instruction can be implemented without adding new control signals. It only requires changes in the control logic.

4.3
4.3.1 The clock cycle time is determined by the critical path. For the given latencies, that is precisely the path that gets the data value for a load instruction: I-Mem (read the instruction), Regs (longer than Control), Mux (select the ALU input), ALU, data memory, and Mux (select the value from memory to be written into Registers). The latency of this path is 400 ps + 200 ps + 30 ps + 120 ps + 350 ps + 30 ps = 1130 ps. 1430 ps (1130 ps + 300 ps, the ALU is on the critical path).

4.3.2 The speedup comes from changes in both the clock cycle time and the number of clock cycles the program requires: the program needs 5% fewer cycles, but the cycle time is 1430 ps instead of 1130 ps, so the speedup is (1/0.95) × (1130/1430) = 0.83, which means we actually have a slowdown.

4.3.3 The cost is always the total cost of all components (not just those on the critical path), so the cost of the original processor is that of the I-Mem, Regs, Control, ALU, D-Mem, 2 Add units, and 3 Mux units, for a total cost of 1000 + 200 + 500 + 100 + 2000 + 2×30 + 3×10 = 3890.
We compute cost relative to this baseline. The performance relative to the baseline is the speedup computed previously, and the cost/performance relative to the baseline is as follows. New cost: 3890 + 600 = 4490. Relative cost: 4490/3890 = 1.15. Cost/performance: 1.15/0.83 = 1.39.
We are paying a higher cost for worse performance; the cost/performance is much worse than for the unmodified processor.
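The speedup and cost/performance arithmetic of 4.3.2 and 4.3.3 can be replayed directly (a worked check, not part of the original solution):

```python
# Speedup of the modified design (4.3.2): 5% fewer cycles, slower clock.
speedup = (1 / 0.95) * (1130 / 1430)                           # ~0.83, a slowdown

# Cost of the baseline datapath (4.3.3): I-Mem, Regs, Control, ALU,
# D-Mem, 2 Add units, 3 Mux units.
base_cost = 1000 + 200 + 500 + 100 + 2000 + 2 * 30 + 3 * 10    # 3890
new_cost = base_cost + 600                                     # 4490
relative_cost = new_cost / base_cost                           # ~1.15
cost_performance = relative_cost / speedup                     # ~1.39, worse
```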

4.4
4.4.1 The I-Mem takes longer than the Add unit, so the clock cycle time is equal to the I-Mem latency: 200 ps.
4.4.2 The critical path for this instruction goes through the instruction memory, then sign-extend and shift-left-2 to get the offset, then the Add unit to compute the new PC, and finally the Mux that selects this value instead of PC+4. Note that the path through the other Add unit is shorter, because the latency of the I-Mem is longer than the latency of the Add unit. We have: 200 ps + 15 ps + 10 ps + 70 ps + 20 ps = 315 ps.
4.4.3
Conditional and unconditional branches have the same long-latency path that computes the branch address. In addition, conditional branches have a long-latency path that goes through Registers, a Mux, and the ALU to compute the PCSrc condition. The critical path is the longer of these two paths; of them, the path through PCSrc has the longer delay: 200 ps + 90 ps + 20 ps + 90 ps + 20 ps = 420 ps.
4.4.4 PC-relative branches.
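The two path delays in 4.4.2 and 4.4.3 are simple sums of the per-unit latencies quoted above; as a sanity check:

```python
# 4.4.2: branch-target path: I-Mem, sign-extend, shift-left-2, Add, Mux.
jump_path = 200 + 15 + 10 + 70 + 20      # 315 ps

# 4.4.3: conditional-branch PCSrc path: I-Mem, Regs, Mux, ALU, Mux
# (the per-unit latencies differ because 4.4.3 uses a different latency set).
branch_path = 200 + 90 + 20 + 90 + 20    # 420 ps
```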

4.4.5 PC相对无条件分支指令。

我们在c部分看到,这不是条件分支的关键路径,它只在PC相关分支上需要。

注意,MIPS没有实际的无条件分支(bnezero、zero和Label扮演这个角色,所以不需要无条件分支操作码),所以对于MIPS,这个问题的答案实际上是“None”。

4.4.6 Of the two instructions (BNE and ADD), BNE has the longer critical path, and it determines the clock cycle time. Note that every path for ADD is shorter than or equal to the corresponding path for BNE, so changes in this unit's latency do not affect that. As a result, we only care about how the unit's latency affects the critical path of BNE. This unit (shift-left-2) is not on the critical path, so the only way for it to become critical is to increase its latency until the path through sign-extend, shift-left-2, and the branch-address Add unit becomes longer than the PCSrc path through Registers, the Mux, and the ALU. The Regs, Mux, and ALU portion of the path has a latency of 200 ps, while sign-extend, shift-left-2, and Add have a latency of 95 ps, so the latency of shift-left-2 must increase by 105 ps or more for it to affect the clock cycle time.
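The 105 ps figure is just the slack between the two paths that follow instruction fetch; as a sanity check:

```python
pcsrc_path = 90 + 20 + 90        # Regs + Mux + ALU = 200 ps
target_path = 15 + 10 + 70       # sign-extend + shift-left-2 + Add = 95 ps
slack = pcsrc_path - target_path # shift-left-2 may grow by 105 ps before mattering
```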

4.5
4.5.1 The data memory is used by LW and SW instructions, so the answer is: 25% + 10% = 35%.
4.5.2 The sign-extend circuit actually computes a result in every cycle.

相关文档
最新文档