AIX Performance Tuning


Case Study: Diagnosing and Resolving AIX Performance Degradation


Case study: AIX performance degradation (customer, date, problem keywords)
[Date handled] August 1, 2016
[Customer] Huaxia Credit Card
[Host] IBM 8205-E6D, four LPARs
[Operating system] AIX 7100-03-05
[Database] (not recorded)
[Business system] (not recorded)
[Keywords] LPAR, CPU folding, performance optimization.

Three keywords. [Handled by] System Integration — Liu Dangqi. [Problem description] Symptom: on AIX partitions using shared CPUs, Java-based applications may be delayed when system load is low, lengthening transaction execution times.

Analysis: the root cause is that when partition load is low, the CPU folding feature of AIX keeps only one virtual CPU unfolded, and all threads are scheduled onto the first hardware thread of that CPU.

Solution: CPU folding can be disabled through HMC/ASMI settings. Impact of disabling CPU folding: it turns off the kernel's automatic scheduling optimization for micro-partition environments; all VPs are dispatched to the hypervisor regardless of whether they carry real load; hypervisor latency increases, and physical resource affinity may also suffer.

Benefits of disabling CPU folding: for a partition whose sizing is near-ideal — for example, EC:VP is always kept no lower than 1:2 and the shared processor pool has never been constrained — disabling folding may yield a performance gain, mainly by reducing VPM management overhead and avoiding the latency of unfolding CPUs. Follow-up tracking showed a clear performance improvement.

About the feature: Virtual Processor Management, also known as processor folding (CPU folding), is a Power virtualization feature that controls the number of virtual processors (VPs) an LPAR actually uses.

With current AIX defaults, processor folding is enabled for micro-partitions (shared processor partitions) and disabled for dedicated processor LPARs.

Processor folding serves two main purposes: 1) energy saving — if all the VPs mapped to a physical core are folded, the PowerVM hypervisor can put that core into a low-power state.

AIX 5L Memory Performance Tuning: Monitoring Memory Usage with ps, sar, svmon, and vmstat


Monitoring an AIX system's memory usage with these commands, and then tuning memory performance accordingly, is basic work for any system administrator. Yet the most important part of tuning the memory subsystem does not involve the actual tuning work at all.

Before tuning your system, you must understand how the host actually behaves.

To do that, an AIX® administrator must know which tools to use and how to analyze the data he or she captures.

To reiterate a point made in other recent tuning articles: before you can tune a system correctly, you must first monitor the host, whether it runs in a logical partition (LPAR) or on its own physical server.

Many commands are available for capturing and analyzing data, so you need to know them and which is best suited to the task at hand.

After capturing the relevant data, you need to analyze the results.

Some problems look at first like central processing unit (CPU) problems, but on analysis can be correctly diagnosed as memory or I/O problems — provided you captured the data with the right tools and know how to analyze it.

Only after this work is done correctly should you consider making actual changes to the system.

Just as a doctor cannot treat an illness without knowing your history and current symptoms, you must diagnose a subsystem before tuning it.

Tuning the memory subsystem while the real bottleneck is CPU or I/O will not help, and may even harm the host's normal operation.

This article will help you understand the importance of doing the diagnostic work correctly.

As you will see, performance tuning is about much more than the tuning work itself.

Some of the tools you will learn about are generic monitoring tools provided by every version of UNIX; others were written specifically for AIX.

Some tools have been optimized for AIX Version 5.3, and some new tools were developed specifically for AIX 5.3.

The importance of generating baseline data cannot be overstated.

Tuxedo Performance Tuning Notes


Tuxedo 9.0 for AIX with an Oracle 10g XA connection. Posted by chinakkee, 2006-11-13 09:54.

System description: Tuxedo 9.0, installed in /opt/bea/tuxedo9.0; Oracle 10.2.0.1, installed in /u01/app/oracle.

1. Installing Tuxedo 9 for AIX
1) Create a user tuxedo in group bea.
2) Create /opt/bea as the Tuxedo installation directory:

$ mkdir /opt/bea
$ chown tuxedo.bea /opt/bea
$ chmod 770 /opt/bea
# bootinfo -K
64
$ sh tuxedo9_aix53_64.bin -i console
Preparing to install...
WARNING: /tmp does not have enough disk space!
Attempting to use /home/tuxedo for install base and tmp dir.
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
Preparing CONSOLE Mode Installation...
============================================
Choose Locale...
----------------
->1- English
CHOOSE LOCALE BY NUMBER: 1
============================================
(created with InstallAnywhere by Zero G)
============================================
Introduction
------------
BEA End User Clickwrap 001205
Copyright (c) BEA Systems, Inc.
All Rights Reserved.
DO YOU ACCEPT THE TERMS OF THIS LICENSE AGREEMENT? (Y/N): y
============================================
Choose Install Set
------------------
Please choose the Install Set to be installed by this installer.
->1- Full Install
2- Server Install
3- Full Client Install
4- Jolt Client Install
5- ATMI Client Install
6- CORBA Client Install
7- Customize...
ENTER THE NUMBER FOR THE INSTALL SET, OR PRESS <ENTER> TO ACCEPT THE DEFAULT: 1
============================================
Choose BEA Home
---------------
1- Create new BEA Home
2- Use existing BEA Home
Enter a number: 2
1- /opt/bea
Existing BEA Home directory: 1
============================================
Choose Product Directory
------------------------
1- Modify Current Selection (/opt/bea/tuxedo9.0)
2- Use Current Selection (/opt/bea/tuxedo9.0)
Enter a number: 2
============================================
Pre-Installation Summary
------------------------
Please Review the Following Before Continuing:
Product Name: Tuxedo 9.0
Install Folder: /opt/bea/tuxedo9.0
Link Folder: /home/tuxedo
Disk Space Information (for Installation Target):
Required: 386,803,702 bytes
Available: 2,625,392,640 bytes
PRESS <ENTER> TO CONTINUE:
============================================
Ready To Install
----------------
InstallAnywhere is now ready to install Tuxedo 9.0 onto your system at the following location: /opt/bea/tuxedo9.0
PRESS <ENTER> TO INSTALL:
============================================
Installing...
[==================|==================|==================]
============================================
Configure tlisten Service
-------------------------
Password: tuxedo
Verify Password: tuxedo
Password Accepted! Press "Enter" to continue.
============================================
SSL Installation Choice
------------------------
Would you like to install SSL Support?
->1- Yes
2- No
ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT: 2
============================================
License Installation Choice
---------------------------
Would you like to install your license now?
->1- Yes
2- No
ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT: 2
============================================
Installation Complete
---------------------
Congratulations. Tuxedo 9.0 has been successfully installed to: /opt/bea/tuxedo9.0
PRESS <ENTER> TO EXIT THE INSTALLER:

After installation, rename the license file to lic.txt and copy it to $TUXDIR/udataobj/.

2. Configuring Tuxedo 9 to connect to Oracle 10g
Prerequisites: the Oracle 10g client is installed on the Tuxedo host, a C compiler is available (it need not be Visual Age C/C++), and the user can connect to the Oracle database through sqlplus.
1) Oracle-side configuration:

sqlplus system@testcrm
SQL> @$ORACLE_HOME/rdbms/admin/xaview.sql
SQL> grant select on v$xatrans$ to public with grant option;
SQL> grant select on v$pending_xatrans$ to public with grant option;
SQL> grant select on EMP to scott;
SQL> GRANT SELECT ON DBA_PENDING_TRANSACTIONS TO scott;

Note: the scott account is locked by default; unlock it with "alter user scott account unlock;".

Some AIX Configuration Parameters


1. Remote clients can log in via "login" and "ftp" but not via "telnet":
1) Use "ps -ef" to check whether the telnetd process is running.
2) Check that the telnet port in /etc/services is 23; if not, change it to 23 and then run "refresh -s inetd".
2. Setting up a Chinese-language environment on AIX. There are two ways to use Chinese on AIX. The first is to select Chinese as the language while installing AIX, so the installed system displays Chinese automatically (not recommended — it is less flexible than the second method).

The second is to install AIX in English and set up the Chinese environment manually after the system starts:
1) Insert the first CD of the AIX installation media.
2) Run: smitty --> System Environments --> Manage Language Environment --> Change/Show Primary Language Environment --> Change/Show Cultural Convention, Language, or Keyboard. In the menu that follows, move the cursor to each of the fields Primary CULTURAL Convention, Primary LANGUAGE Translation, and Primary KEYBOARD, press <F4>, and choose "IBM-eucCN" from the pop-up list to set these fields to Simplified Chinese. After you press Enter, the system automatically installs the Chinese environment packages from the CD.

Once this completes, reboot the system; the interface will then be in Simplified Chinese.

When you need to type Chinese, switch input methods with the following keys. Versions before AIX 4.3.3: <Shift>+F1 through <Shift>+F4 switch among the Chinese input methods; the right <Alt> key switches back to English. AIX 4.3.3: Ctrl+F2 Intelligent ABC; Ctrl+F4 Pinyin; Ctrl+F5 Wubi; Ctrl+F6 Zhengma; Ctrl+F7 Biaoxingma; Ctrl+F9 internal code; Ctrl+F10 English half-width. AIX also ships two other Chinese locales, UTF-8 and GBK; they differ from IBM-eucCN in also covering Traditional Chinese characters.

AIX System Parameter Configuration


The AIX kernel is dynamic and most core parameters adjust themselves automatically, so after installation the parameters usually worth changing are the following.

1. Standalone environment
1) Maximum number of user logins, maxlogins. Size it according to the number of users; change it with "smitty chlicense". The parameter is recorded in the /etc/security/login.cfg file, and the change takes effect after a system reboot.

2) User limit parameters. These live in the /etc/security/limits file. You can set them to -1 (unlimited) by editing /etc/security/limits with vi; all changes take effect the next time the user logs in.

default:
    fsize = 2097151   --> change to -1
    core = 2097151
    cpu = -1
    data = 262144     --> change to -1
    rss = 65536
    stack = 65536
    nofile = 2000
3) Paging space. Check the paging space size: with less than 2 GB of physical memory it should be at least 1.5 times physical memory; above 2 GB it can be adjusted as appropriate.

Also, when creating paging spaces, spread them across different disks as far as possible to improve performance.

Use "smitty chps" to resize an existing paging space, or "smitty mkps" to add another one.
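The sizing rule above can be sketched as a small helper. The 1.5x threshold follows the text; the behavior above 2 GB is an assumption for illustration, since the text only says "adjust as appropriate".

```python
def recommended_paging_space_mb(phys_mem_mb: int) -> int:
    """Suggest a paging space size following the rule in the text:
    below 2 GB of RAM, use at least 1.5x physical memory; above 2 GB
    we assume matching physical memory as an illustrative default
    (an assumption, not an AIX-mandated rule)."""
    if phys_mem_mb < 2048:
        return int(phys_mem_mb * 1.5)
    return phys_mem_mb

print(recommended_paging_space_mb(1024))  # 1536
print(recommended_paging_space_mb(8192))  # 8192
```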

4) Kernel parameter configuration. Use "lsattr -El sys0" to check the values of parameters such as maxuproc, minpout, and maxpout.

maxuproc is the maximum number of processes per user. If the system runs DB2 or Oracle, raise it from the default of 128 to around 500. Raising maxuproc takes effect immediately; lowering it requires an AIX reboot.

When an application does heavy sequential reads and writes that hurt foreground response times, consider setting maxpout to 33 and minpout to 16, via "smitty chgsys".

5) File system space. In general, keep usage of /, /usr, /var, and /tmp below 80%, and give /tmp at least 300 MB. A full file system can stop the system from working properly; in particular, if a basic AIX file system such as / (the root file system) fills up, users cannot log in.

AIX 5.3 Host Performance Evaluation: Memory

Excerpt of `vmo -a` output:
lrubucket = 131072
maxclient% = 80
maxfree = 1088
maxperm = 4587812
maxperm% = 80
nokilluid = 0
npskill = 49152
npsrpgmax = 393216
npsrpgmin = 294912
npsscrubmax = 393216
defps = 1
force_relalias_lite = 0
framesets = 2
htabscale = n/a
kernel_heap_psize = 4096

Excerpt of `vmstat -v` output:
312417 pending disk I/Os blocked with no pbuf
0 paging space I/Os blocked with no psbuf
2878 filesystem I/Os blocked with no fsbuf

1.4.2 Using vmstat to determine memory usage
Look mainly at the memory, page, and faults columns of the vmstat output. See the CPU evaluation section earlier for a detailed explanation.
1.4.3 The svmon command
# svmon -G -i 2 2
If the system is paging out to paging space, it may be because the number of file pages in memory has dropped below maxperm, so some computational pages are also paged out to satisfy maxfree. In that case, consider lowering maxperm below the current numperm value to stop computational pages from being paged out. On 5.2 ML4 and later, another way to protect computational pages is to set the parameter lru_file_repage=0, which tells the VMM to prefer replacing file pages during page replacement.
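As a hedged sketch, the decision rule just described can be codified as a small helper; the parameter names follow the text, while the function's inputs and the `vmo` command string shown in its advice are illustrative assumptions.

```python
def page_steal_advice(numperm_pct: float, maxperm_pct: float,
                      paging_out: bool, aix_level_ge_52ml4: bool) -> str:
    """Suggest a VMM tuning action following the rule in the text:
    if the system pages out while the file cache (numperm) sits below
    maxperm, either lower maxperm below numperm or, on 5.2 ML4+,
    set lru_file_repage=0 so page replacement prefers file pages."""
    if not paging_out:
        return "no action"
    if numperm_pct < maxperm_pct:
        if aix_level_ge_52ml4:
            return "set lru_file_repage=0 (vmo -p -o lru_file_repage=0)"
        return "lower maxperm% below numperm%"
    return "paging not caused by file-cache pressure; investigate further"

print(page_steal_advice(45.0, 80.0, paging_out=True, aix_level_ge_52ml4=True))
```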

Common AIX Commands


AIX (Advanced Interactive eXecutive) is a UNIX operating system from IBM, widely used on enterprise-class servers.

This article introduces commonly used AIX commands to help readers better understand and use the operating system.

I. System administration commands
1. whoami: show the current login user name
2. hostname: show the host name
3. uname -a: display system information such as kernel version and hardware platform
4. uptime: show system uptime and load
5. date: show the current date and time
6. topas: monitor system performance in real time, including CPU utilization and memory usage
7. lparstat -i: display LPAR (logical partition) information, including partition configuration and resource utilization
8. lsdev: list devices
9. errpt: view the system error log for troubleshooting
10. ps -ef: list the current processes
11. mksysb: create a system backup
12. bootlist: set the boot order

II. File and directory commands
1. ls: list the files and subdirectories of the current directory
2. pwd: show the path of the current working directory
3. cd: change the working directory
4. mkdir: create a new directory
5. rm: delete a file or directory
6. cp: copy a file or directory
7. mv: move a file or directory
8. find: search for files by specified criteria
9. du: show disk usage of a directory or file
10. df: show file system usage
11. cat: display file contents
12. vi: edit a text file

III. User and permission commands
1. useradd: create a new user
2. userdel: delete a user
3. passwd: change a user's password
4. chuser: change user attributes
5. chown: change the owner of a file or directory
6. chmod: change the permissions of a file or directory
7. chgrp: change the group of a file or directory
8. groups: show the groups a user belongs to
9. su: switch user identity
10. visudo: edit the sudoers file to configure users' sudo privileges

AIX Interview Questions


When applying for AIX-related positions, interviewers often ask AIX questions to assess a candidate's technical ability and professional knowledge.

This article presents some common AIX interview questions with answers and explanations.

1. What is the AIX operating system? AIX (Advanced Interactive eXecutive) is a UNIX-based operating system developed by IBM.

It is designed for IBM Power Systems servers and is used mainly for enterprise applications and databases.

2. Briefly describe AIX's characteristics and strengths.

AIX has the following characteristics and strengths:
- High reliability: redundant design and robust error detection and recovery keep the system running stably.
- Strong scalability: support for multiprocessing and multithreading makes efficient use of hardware resources, meeting high-performance and growth needs.
- Good security: rich security features and mechanisms, such as access control, permission management, and authentication, protect the system and its data.
- Management and tuning: a full set of administration tools and performance-tuning mechanisms makes system management and optimization convenient.
- Broad compatibility: AIX is compatible with other UNIX-like systems and supports porting of many software packages and applications.

3. Explain how to create a file system in AIX.

In AIX, a JFS2 file system is normally created with the crfs command (mkfs can also lay a file system onto an existing logical volume).

For example, to create a 1 GB JFS2 file system in rootvg mounted at /data (the volume group, size, and mount point here are illustrative):```crfs -v jfs2 -g rootvg -a size=1G -m /data```

4. How do you check network interface status on AIX? Use the ifconfig command.

For example, to see the status of all interfaces:```ifconfig -a```This displays details for every network interface on the system, including the interface name, state, and IP address.

5. How do you view processes and their resource usage on AIX? Use the ps command.

For example, to list all processes:```ps -ef```This shows details for every process, including the process ID, parent process ID, start time, and command; to see per-process CPU and memory consumption, ps aux or topas is more convenient.

Optimizing AIX 7 Disk Performance, Part 1: Disk I/O Overview and Long-Term Monitoring Tools (sar, nmon, topas)


Introduction: a key part of disk I/O tuning is applying best practices before the system is built.

Because it is hard to move data around once the system is up and running, it is important to get this right when planning the disk and I/O subsystem environment.

That includes the physical architecture, the logical disk layout, and the logical volume and file system configuration.

When a system administrator hears of a possible disk contention problem, he or she turns first to iostat.

iostat, the I/O counterpart of using vmstat to report on memory, is a quick and crude way to get a first impression of how the I/O subsystem is behaving.

While running iostat is not an entirely unreasonable reaction, disk I/O should be thought about long before tuning becomes necessary.

If the disks were not configured correctly for the environment from the start, no amount of tuning will help.

It is also very important to understand disk I/O in detail and how it relates to AIX® and your System p™ hardware.

For disk I/O tuning, AIX-specific tools and utilities help more than generic UNIX® commands and tools, because their job is to optimize the native AIX disk I/O subsystem.

In this article we define and introduce the AIX I/O stack and relate it to the physical and logical sides of disk performance.

The article covers direct, concurrent, and asynchronous I/O: what they are, how to enable them, and how to monitor and tune them.

It also introduces the long-term monitoring tools you should use to help tune the system.

You may be surprised to hear that iostat is not our recommended tool for long-term statistics gathering.

The article discusses the support and changes in the AIX 7 beta, including changes in how the various subsystems are configured.

The main changes in AIX 7 further simplify the operation and configuration of many I/O subsystems, an improvement process that began with AIX 6.

As a result, many I/O subsystems no longer need to be enabled and configured at all.

AIX Performance Tuning — Niu Xinzhuang


AIX performance tuning: system resources (physical and logical), system bottlenecks, performance concepts, the steps of program execution, and the tuning process. The notes below are excerpted from the instructor guide of IBM course AU187 (AIX 5L System Administration III).

Figure 1-6. Program Execution Hierarchy (AU187.0)

Notes:

Introduction
In the graphic above, the left side represents hardware entities that are loosely matched to the appropriate operating system entity on the right side. A program must go from the lowest level of being stored on disk, to the highest level being the processor running program instructions.

Hardware hierarchy overview
When a program runs, it makes its way up the hardware and operating system hierarchies, more or less in parallel. Each level on the hardware side is scarcer and more expensive than the one below it. There is contention for resources among programs and time spent in transition from one level to the next. Usually, the time required to move from one hardware level to another consists primarily of the latency of the lower level, that is, the time from the issuing of a request to the receipt of the first data.

© Copyright IBM Corp. 2000, 2006. Course materials may not be reproduced in whole or in part.

Disks are the slowest hardware operation
By far the slowest operation that a running program does (other than waiting on a human keystroke) is to obtain code or data from a disk. Disk operations are necessary for read or write requests for programs. System tuning activities frequently turn out to be hunts for unnecessary disk I/O or searches for disk bottlenecks, since disk operations are the slowest operations. For example, can the system be tuned to reduce paging? Is one disk too busy, causing higher seek times because it has multiple filesystems which have a lot of activity?

Real memory
Random Access Memory (RAM) access is fast compared to disk, but much more expensive per byte. Operating systems try to keep program code and data that are in use in RAM. When the operating system begins to run out of free RAM, it needs to make decisions about what types of pages to write out to disk. Virtual memory is the ability of a system to use disk space as an extension of RAM to allow for more efficient use of RAM.

Paging and page faults
If the operating system needs to bring a page into RAM that has been written to disk or has not been brought in yet, a page fault occurs, and the execution of the program is suspended until the page has been read in from disk. Paging is a normal part of the operation of a multi-processing system. Paging becomes a performance issue when free RAM is short and pages which are in memory are paged out and then paged back in again, causing process threads to wait for slower disk operations. How virtual memory works will be covered in another unit of this course.

Translation Lookaside Buffers (TLBs)
One of the ways that programmers are insulated from the physical limitations of the system is the implementation of virtual memory. The programmer designs and codes the program as though the memory were very large, and the system takes responsibility for translating the program's virtual addresses for instructions and data into real addresses that are needed to get the instructions and data from RAM. Since this address-translation process is time-consuming, the system keeps the real addresses of recently accessed virtual memory pages in a cache called the Translation Lookaside Buffer (TLB). As long as the running program continues to access a small set of program and data pages, the full virtual-to-real page-address translation does not need to be redone for each RAM access. When the program tries to access a virtual-memory page that does not have a TLB entry, called a TLB miss, dozens of processor cycles, called the TLB-miss latency, are required to perform the address translation.

Caches
To minimize the number of times the program has to experience the RAM latency, systems incorporate caches for instructions and data. If the required instruction or data is already in the cache (a cache hit), it is available to the processor on the next cycle (that is, no delay occurs); otherwise, a cache miss occurs. If a given access is both a TLB miss and a cache miss, both delays occur consecutively. Depending on the hardware architecture, there are two or three levels of cache, usually called L1, L2, and L3. If a particular storage reference results in an L1 miss, then L2 is checked. If L2 generates a miss, then the reference goes to the next level, either L3, if it is present, or RAM.

Pipeline and registers
A pipelined, superscalar architecture allows for the simultaneous processing of multiple instructions, under certain circumstances. Large sets of general-purpose registers and floating-point registers make it possible to keep considerable amounts of the program's data in registers, rather than continually storing and reloading the data.

Operating system hierarchy overview
The operating system works on a thread level. When a user requests the execution of a program, AIX performs a number of operations to transform the executable program on disk to a running program. First, the directories in the user's current PATH must be scanned to find the correct copy of the program. Then the system loader (not to be confused with ld, the binder) must resolve any external references from the program to shared libraries.
Finally, the system branches to the entry point of the program, and the resulting page fault causes the program page that contains the entry point to be brought into RAM.

Interrupt handlers
The mechanism for notifying the operating system that an external event has taken place is to interrupt the currently running thread and transfer control to an interrupt handler (FLIH or SLIH). Before the interrupt handler can run, enough of the general-purpose registers must be saved to ensure that the system can restore the context of the running thread after interrupt handling is complete.

Threads
A thread is the current execution state of a single instance of a program. In AIX, access to the processor and other resources is allocated on a thread basis, rather than a process basis. Multiple threads can be created within a process by the application program. Those threads share the resources owned by the process within which they are running.

Waiting threads
Whenever an executing thread makes a request that cannot be satisfied immediately, such as an I/O operation (either explicit or as the result of a page fault) or the granting of a lock, it is put in a Wait state until that request is complete. Normally, this results in another set of TLB and cache latencies, in addition to the time required for the request itself. However, it also allows other threads which are ready to run to gain access to the CPU. When a thread is replaced by another thread on a CPU, this is a context switch. A context switch can also occur when a thread finishes its timeslice or, as stated above, it must wait for a resource. Whenever a context switch occurs, there may be additional latencies due to cache misses. Context switches are a normal function of a multi-processing system, but an abnormally high rate of context switches could be a symptom of a performance problem.

Dispatchable threads
When a thread is dispatchable, but not actually running, it is put in a run queue to run on an available CPU. If a CPU is available, it will run right away; otherwise it must wait.

Currently dispatched thread
The scheduler chooses the thread that has the strongest claim to use the processor. When the thread is dispatched, the logical state of the processor is restored to what was in effect when the thread was last interrupted.

Effect of the use of cache, RAM, and paging space on program performance
Access times for the various components increase exponentially as you move away from the processor. For example, a one-second access time for a 1 GHz CPU would be the equivalent of 100 seconds for an L3 cache access, 6 minutes for a RAM access, and 115 days for a local disk access! As you can see, the closer you are to the core, the faster the access is. Understanding how to reduce the more expensive (performance-wise) bottlenecks is key to performance management.

Figure 1-8. Performance Analysis Flowchart (AU187.0)

Notes:

Tuning is a process
The flowchart in the visual above can be used for performance analysis, and it illustrates that tuning is an iterative process. We will be following this flowchart throughout our course. The starting point for this flowchart is the Normal Operations box. The first piece of data you need is a performance goal. Only by having a goal, or a set of goals, can you tell if there is a performance problem. The goals may be something like a specific response time for an interactive application or a specific length of time in which a batch job needs to complete.
Tuning without a specific goal could in fact lead to the degradation of system performance. Once you decide there is a performance problem and you analyze and tune the system, you must then go back to the performance goals to evaluate whether more tuning needs to occur.

Additional tests
The additional tests that you perform at the bottom right of the flowchart relate to the four previous categories of resource contention. If the specific bottleneck is well hidden, or you missed something, then you must keep testing to figure out what is wrong. Even when you think you've found a bottleneck, it's a good idea to do additional tests to identify more detail or to make sure one bottleneck is not masquerading as another. For example, you may find a disk bottleneck, but in reality it's a memory bottleneck causing excessive paging.

Figure 1-9. Performance Analysis Tools (AU187.0)

Notes:

CPU analysis tools
CPU metrics analysis tools include:
- vmstat, iostat, sar, lparstat and mpstat, which are packaged with bos.acct
- ps, which is in bos.rte.control
- cpupstat, which is part of bos.rte.commands
- gprof and prof, which are in bos.adt.prof
- time (built into the various shells) or timex, which is part of bos.acct
- emstat and alstat, emulation and alignment tools from bos.perf.tools
- netpmon, tprof, locktrace, curt, splat, and topas, which are in bos.perf.tools
- trace and trcrpt, which are part of bos.sysmgt.trace
- truss, which is in bos.sysmgt.serv_aid
- smtctl, which is in bos.rte.methods
- Performance Toolbox tools such as xmperf and 3dmon, which are part of perfmgr

Memory subsystem analysis tools
Some of the memory metric analysis tools are:
- vmstat, which is packaged with bos.acct
- lsps, which is part of bos.rte.lvm
- topas, svmon and filemon, which are part of bos.perf.tools
- Performance Toolbox tools such as xmperf and 3dmon, which are part of perfmgr
- trace and trcrpt, which are part of bos.sysmgt.trace
- lparstat, which is part of bos.acct

I/O subsystem analysis tools
I/O metric analysis tools include:
- iostat and vmstat, which are packaged with bos.acct
- lsps, lspv, lsvg, lslv and lvmstat, which are in bos.rte.lvm
- lsattr and lsdev, which are in bos.rte.methods
- topas, filemon, and fileplace, which are in bos.perf.tools
- Performance Toolbox tools such as xmperf and 3dmon, which are part of perfmgr
- trace and trcrpt, which are part of bos.sysmgt.trace

Network subsystem analysis tools
Network metric analysis tools include:
- lsattr and netstat, which are part of bos.net.tcp.client
- nfsstat and nfs4cl, which are part of bos.net.nfs.client
- topas and netpmon, which are part of bos.perf.tools
- ifconfig, which is part of bos.net.tcp.client
- iptrace and ipreport, which are part of bos.net.tcp.server
- tcpdump, which is part of bos.net.tcp.server
- Performance Toolbox tools such as xmperf and 3dmon, which are part of perfmgr
- trace and trcrpt, which are part of bos.sysmgt.trace

AIX 5L V5.3 enhancements to analysis tools
Several changes were made in AIX 5L V5.3 to the analysis tools, at different maintenance levels of V5.3. The tprof command has a new -E option to enable event-based profiling, and the new -f option allows you to set the sampling frequency for event-based profiling. There were updates to PMAPI, including updates to pmlist, and there are two new commands for hardware analysis: hpmcount and hpmstat.
These are not covered in this course. The topas command has a new -D panel for disk analysis. There are new commands for obtaining statistics specific to a logical partition. These give statistics for POWER Hypervisor activity or for tracking real CPU utilization in a simultaneous multi-threading or shared processor (Micro-Partition) environment. A new register was added, called the Processor Utilization Resource Register (PURR), to track logical and virtual processor activity. Commands such as sar and topas will automatically use the new PURR statistics when in a simultaneous multi-threading or shared processor (Micro-Partition) environment, and you will see new columns reporting partition statistics in those environments. Trace-based commands now have new hooks for viewing PURR data. Some commands, such as lparstat, mpstat, and smtctl, are new for AIX 5L V5.3 and work in a partitioned environment. The AIX 5L Virtualization Performance Management course covers all differences in performance analysis and tuning in a partitioned environment.

Figure 1-10. Performance Tuning Process (AU187.0)

Notes:

Overview
Performance tuning is one aspect of performance management. The definition of performance tuning sounds simple and straightforward, but it's actually a complex process. Performance tuning involves managing your resources. Resources could be logical (queues, buffers, etc.) or physical (real memory, disks, CPUs, network adapters, etc.). Resource management involves the various tasks listed here. We will examine each of these tasks later. Tuning always must be done based on performance analysis. While there are recommendations as to where to look for performance problems, what tools to use, and what parameters to change, what works on one system may not work on another.
So there is no cookbook approach available for performance tuning that will work for all systems. The wheel graphic in the visual above represents the phases of a more formal tuning project. Experiences with tuning may range from the informal to the very formal, where reports and reviews are done prior to changes being made. Even for informal tuning actions, it is essential to plan, gather data, develop a recommendation, implement, and document.

Figure 1-11. Performance Tuning Tools (AU187.0)

Notes:

CPU tuning tools
CPU tuning tools include:
- nice, renice, and setpri modify priorities. nice and renice are in the bos.rte.control fileset; setpri is a command available with the perfpmr package.
- schedo (schedtune in AIX 5L V5.1) modifies scheduler algorithms (in the bos.perf.tune fileset).
- bindprocessor binds processes to CPUs (in the bos.mp fileset).
- chdev modifies certain system tunables (in the bos.rte.methods fileset).
- bindintcpu can bind an adapter interrupt to a specific CPU (in the devices.chrp.base.rte fileset).
- procmon is in bos.perf.gtools.

Memory tuning tools
Memory tuning tools include:
- vmo and ioo (vmtune in AIX 5L V5.1) for various VMM, file system, and LVM parameters (in the bos.perf.tune fileset)
- chps and mkps modify paging space attributes (in the bos.rte.lvm fileset)
- fdpr rearranges basic blocks in an executable so that memory footprints become smaller and cache misses are reduced (in the perfagent.tools fileset)
- chdev modifies certain system tunables (in the bos.rte.methods fileset)

I/O tuning tools
I/O tuning tools include:
- vmo and ioo modify certain file system and LVM parameters (in the bos.perf.tune fileset; use vmtune prior to AIX 5L V5.2)
- chdev modifies system tunables such as disk and disk adapter attributes (in the bos.rte.methods fileset)
- migratepv moves logical volumes from one disk to another (in the bos.rte.lvm fileset)
- lvmo displays or sets pbuf tuning parameters (in the bos.rte.lvm fileset)
- chlv modifies logical volume attributes (in the bos.rte.lvm fileset)
- reorgvg moves logical volumes around on a disk (in the bos.rte.lvm fileset)

Network tuning tools
Network tuning tools include:
- no modifies network options (in the bos.net.tcp.client fileset)
- nfso modifies NFS options (in the bos.net.nfs.client fileset)
- chdev modifies network adapter attributes (in the bos.rte.methods fileset)
- ifconfig modifies network interface attributes (in the bos.net.tcp.client fileset)

CPU
When localizing a CPU performance problem, start with statistics that monitor CPU utilization.
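The advice above — start from CPU utilization statistics — can be sketched portably. The following example computes the us/sy/id/wa percentages from two samples of Linux's /proc/stat; AIX tools such as sar and vmstat report the same breakdown. The helper function and file path are assumptions for illustration, not part of any AIX tool.

```python
import os
import time

def cpu_percentages(sample1, sample2):
    """Given two /proc/stat 'cpu' samples (lists of jiffy counters in the
    order user, nice, system, idle, iowait, ...), return the us/sy/id/wa
    percentages over the interval -- the same columns vmstat and sar show."""
    delta = [b - a for a, b in zip(sample1, sample2)]
    total = sum(delta)
    us = 100.0 * (delta[0] + delta[1]) / total   # user + nice
    sy = 100.0 * delta[2] / total                # system
    idle = 100.0 * delta[3] / total
    wa = 100.0 * delta[4] / total if len(delta) > 4 else 0.0
    return us, sy, idle, wa

def read_cpu_line():
    # First line of /proc/stat: aggregate counters across all CPUs
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

if __name__ == "__main__" and os.path.exists("/proc/stat"):
    s1 = read_cpu_line()
    time.sleep(1)
    s2 = read_cpu_line()
    us, sy, idle, wa = cpu_percentages(s1, s2)
    print(f"us={us:.1f}% sy={sy:.1f}% id={idle:.1f}% wa={wa:.1f}%")
```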

Migration Plan from AIX to Linux


When migrating from AIX (Advanced Interactive eXecutive) to Linux, a series of steps is needed to ensure data integrity and system continuity.

Below is a basic migration plan.

Assessment and planning: fully assess the AIX system, including hardware, operating system version, applications, and data. Draw up a migration plan covering the target Linux system's configuration, application compatibility, and data conversion.

Hardware and system preparation: decide on the target Linux system's hardware configuration and confirm compatibility with the AIX system. Install and configure the Linux operating system, making sure it has the necessary performance and features.

Data migration: use appropriate tools or methods to move the data on the AIX system to Linux. This may include files, databases, configuration files, and so on. Verify the data's integrity and accuracy to make sure nothing is lost or corrupted.

Application migration: assess whether the applications on the AIX system can run on Linux. If they can, install and configure the necessary dependencies. If an application cannot run directly on Linux, you may need to find a replacement or plan custom development.

Testing and validation: before going to production, test thoroughly in a test environment to confirm the Linux system's functionality and performance match the AIX system. Run integration tests on the applications to confirm they are compatible with the new environment.

Production deployment: once the testing phase passes without problems, migrate the system to production. Monitor the new system's performance and stability to make sure everything runs normally.

Post-migration tuning and monitoring: optimize the Linux system's configuration as needed to improve performance and reliability. Monitor the system regularly to ensure stable operation and handle problems promptly.

Documentation and training: update all relevant documentation, including detailed steps for system configuration and for application and data migration. Provide the necessary training to help users become familiar with the new Linux system.

Ongoing maintenance and support: provide continuing system maintenance and support as needed, including security updates and performance monitoring. Re-evaluate the system periodically and consider whether further optimization or improvement is needed.

Backup strategy: establish and implement an effective backup strategy to keep data safe and recoverable. Test the backup-restore procedure regularly so the system can be restored quickly when needed.

Security considerations: throughout the migration, follow security best practices, including data encryption, access control, and firewall configuration.

Research on AIX System Performance Analysis and Optimization


Research on AIX System Performance Analysis and Optimization. Author: Zhang Mingdong. Source: Informatization Construction, 2015, No. 6. Abstract: As enterprise informatization deepens, minicomputers are used ever more widely; how to exploit a minicomputer's performance fully, reduce hardware investment, and keep enterprise application systems running efficiently, stably, and reliably has therefore become an important research topic.

Taking the AIX operating system on IBM minicomputers as the platform, this paper studies performance analysis and optimization methods for AIX from three angles: CPU, memory, and disk I/O.

Keywords: AIX; performance analysis; performance optimization. Introduction: AIX is a UNIX-like operating system that IBM developed on the basis of AT&T UNIX System V; it runs on IBM's proprietary minicomputers built around the Power family of processors.

IBM minicomputers are now widely used in government, enterprises, banking, and securities, and many mission-critical business systems use them as servers. How to make the fullest and most balanced use of a minicomputer's resources for a given business system's characteristics and requirements, increasing throughput and reducing response time, is therefore a topic we must study.

This paper takes the AIX operating system on IBM minicomputers as the platform and studies AIX performance analysis and optimization methods from the CPU, memory, and disk I/O perspectives.

1 CPU

1.1 CPU performance analysis. The hardware platform AIX runs on is the Power CPU.

The Power CPU is a RISC processor designed by IBM, used mainly in minicomputer platforms for the server market.

The CPU is the central brain of a running system; its importance goes without saying.

AIX offers many tools and commands for CPU performance analysis, each with its own characteristics.

We mainly use the vmstat command to monitor and analyze CPU data, and use that analysis to schedule CPU resources sensibly, exploiting CPU performance fully and resolving CPU bottlenecks.

Sample vmstat output is shown in Figure 1.

Figure 1: vmstat output. Besides CPU load, vmstat also reports activity for virtual memory, kernel threads, physical memory, and traps (faults).

Whether the CPU is the bottleneck for the whole system is determined mainly by five columns of data: r, us, sy, id, and wa.
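As a hedged sketch, the rule of thumb for those five columns can be expressed as a small parser. The 80%, 25%, and run-queue thresholds are common heuristics assumed for illustration, not values taken from the paper.

```python
def analyze_vmstat_cpu(line: str, ncpus: int = 1) -> list:
    """Flag possible CPU bottlenecks from one vmstat data line using the
    r, us, sy, id, wa columns described in the text. Column positions
    follow AIX vmstat's default layout: r is the first field and the
    last four fields are us, sy, id, wa."""
    fields = line.split()
    r = int(fields[0])
    us, sy, idle, wa = (int(x) for x in fields[-4:])
    findings = []
    if us + sy > 80:        # heuristic threshold (assumption)
        findings.append("CPU busy: us+sy > 80%")
    if wa > 25:             # heuristic threshold (assumption)
        findings.append("high I/O wait: check disks/memory")
    if r > 10 * ncpus:      # heuristic threshold (assumption)
        findings.append("long run queue: CPU contention")
    return findings or ["CPU looks healthy"]

# kthr memory page faults cpu -- one sample data line
sample = "2 0 278845 4642 0 0 0 0 0 0 120 300 200 45 40 10 5"
print(analyze_vmstat_cpu(sample, ncpus=2))
```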

AIX IBM Minicomputer File Systems (Slide Deck)


The sar command
A system activity reporter that collects, reports, and saves system activity information, used for historical performance analysis.
Approaches to diagnosing common performance problems
CPU problems: first check whether CPU utilization is too high; if so, analyze further which process or thread is responsible and whether there is resource contention.
Memory problems: check whether memory utilization is too high and whether there are memory leaks. Also watch virtual memory use, such as whether paging activity is frequent.
Part 5: AIX IBM Minicomputer Network Configuration and Optimization
Network connection selection and configuration steps
1. Choose a connection type: based on actual needs, choose a suitable network connection type, such as LAN, WAN, or VPN.
2. Configure network parameters: set the IP address, subnet mask, default gateway, and so on, to ensure normal network communication.
3. Configure network services: as needed, configure services such as DNS, DHCP, and FTP to provide a convenient network application environment.
Disk problems: disk I/O is often one of the system's bottlenecks. Watch read/write throughput, IOPS, wait times, and possible disk contention.
Network problems: latency and packet loss can drag down system performance. Watch bandwidth, latency, and packet-loss rate, and check whether the network configuration is reasonable.
Providing system resource optimization recommendations: recommendations for CPU, memory, and disk optimization.
Contents
- AIX IBM minicomputer overview
- AIX IBM minicomputer hardware components
- AIX IBM minicomputer operating system and software support
- AIX IBM minicomputer file systems in detail
- AIX IBM minicomputer network configuration and optimization
- AIX IBM minicomputer performance monitoring and tuning

Linux System Performance Tuning Scripts


Linux is a widely used operating system; its open-source nature lets users customize and optimize it freely.

To improve system performance, we can use scripts for tuning.

This article introduces some common Linux performance-tuning scripts to help you optimize your system and raise its performance.

I. Scripts for detecting performance bottlenecks. 1. vmstat script: vmstat is a common performance analysis tool that reports the system's virtual memory, process, disk, and CPU activity.

By scripting vmstat to run continuously over a period and writing the results to a log file, we can analyze where the system's performance bottlenecks lie and take appropriate optimization measures.
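A minimal sketch of such a logging script follows; the log path and sampling interval are arbitrary choices, and the script falls back to a marker line if vmstat is not installed.

```python
import datetime
import shutil
import subprocess

def log_vmstat_sample(path: str, interval: int = 1, count: int = 2) -> str:
    """Append one timestamped vmstat sample to a log file for later
    bottleneck analysis. Falls back gracefully when vmstat is absent
    (the path and interval here are illustrative assumptions)."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    if shutil.which("vmstat"):
        out = subprocess.run(["vmstat", str(interval), str(count)],
                             capture_output=True, text=True).stdout
    else:
        out = "vmstat not installed\n"
    with open(path, "a") as f:
        f.write(f"=== sample taken {stamp} ===\n{out}")
    return out

log_vmstat_sample("/tmp/vmstat.log")
```

Run from cron (for example every five minutes) to accumulate the history the text recommends.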

2. top 脚本:top 是一个交互式的进程查看工具,可以实时显示系统的进程状态、CPU 使用率、内存使用情况等。

编写脚本将 top 的输出结果保存到日志文件中,可以帮助我们了解系统中的资源占用情况,找出性能瓶颈。

二、优化系统资源的脚本1. 清理内存脚本:Linux系统会将一部分内存用于缓存,而过多的缓存会影响系统的性能。

编写脚本可以定期清理不必要的缓存,释放内存资源,提高系统的响应速度。

2. 禁用不必要的服务脚本:在Linux系统中,可能会存在一些不需要的服务,默认情况下这些服务都会启动,占用系统资源。

编写脚本可以检测并禁用这些不必要的服务,从而释放系统资源,提升性能。

三、优化磁盘写入性能的脚本1. IO调度算法脚本:Linux系统中提供了多种IO调度算法,可以根据实际需求选择适合的算法来优化磁盘的读写性能。

编写脚本可以自动设置合适的IO调度算法,提高磁盘的性能。

2. Disk read/write cache script: in Linux, I/O performance can be improved by adjusting the size of the disk read/write cache.

A script can set an appropriate cache size automatically, speeding up reads and writes and so lifting overall performance.

IV. Scripts for optimizing network performance

1. Maximum-open-files script: in Linux, the number of files each process may open is limited.

If the system runs many processes at once and each opens many files, performance may suffer.
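The current limits can be inspected with `ulimit`; the `limits.conf` lines in the comments are a common Linux form, and the value 65536 is illustrative only:

```shell
#!/bin/sh
# Show the per-process open-file limits for the current shell.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "open files per process: soft=$soft hard=$hard"
# Raising them persistently (Linux, /etc/security/limits.conf; illustrative):
#   *   soft   nofile   65536
#   *   hard   nofile   65536
```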

AIX Performance Analysis and Optimization

AIX performance analysis

I. Analyzing the CPU.

Check whether usr% + sys% exceeds 90%.

You can use nmon, vmstat, topas, or sar. Taking nmon as an example: run nmon, press h for the main menu, then press C (c = CPU by processor) to view CPU state. As the figure shows, the CPU shares consumed by user% and sys% can be read off clearly.

The figure above is from a test machine running no applications, so its state is good.

If the CPU shows I/O wait, memory or I/O may be a bottleneck; the memory-analysis and I/O-analysis sections below cover this.

Tip: if CPU consumption is high, use topas (which also shows user% and sys% directly) to see which process is taking the CPU, and judge whether that consumption is reasonable and serves the business.

From that conclusion, decide whether to tune parameters, add CPUs, or take other measures.
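When topas or nmon is unavailable, a portable `ps`-based sketch can list the heaviest CPU consumers (column 3 of `ps aux` is %CPU on both AIX and Linux, though output details differ by OS):

```shell
#!/bin/sh
# List the top-N processes by %CPU; default N is 5.
top_cpu_hogs() {
    ps aux | awk 'NR > 1 {print $3, $11}' | sort -rn | head -n "${1:-5}"
}
top_cpu_hogs 3
```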

II. Analyzing memory.

Judge from the actual situation whether memory consumption is reasonable.

Check whether computational pages are being paged in and whether paging-space use keeps climbing.

In the nmon main menu press m (m = Memory & Paging) to watch memory state. After observing for a while, the example shows 7072 MB of memory (8 GB installed) with %Free at 78.2%, which is ample.

Paging Space In is 0 and Paging Space %used is 1.6%, with no sustained growth.

To check for memory leaks, run svmon -P <pid> and record the value of the "work process private" entry.

Repeat the command after an interval and compare: if "work process private" has grown noticeably, there may be a memory leak.
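The comparison step can be sketched as a small shell function. Extracting the "work process private" figure from `svmon -P` output is left aside because the field position varies by AIX release; the 10% growth threshold is an arbitrary illustration:

```shell
#!/bin/sh
# Flag growth of a process's private working storage between two readings.
flag_growth() {
    before=$1; after=$2; threshold=${3:-10}    # threshold in percent
    [ "$before" -eq 0 ] && before=1            # avoid division by zero
    pct=$(( (after - before) * 100 / before ))
    if [ "$pct" -gt "$threshold" ]; then
        echo "possible leak: private pages grew ${pct}%"
        return 1
    fi
    echo "growth ${pct}% is within threshold"
}
flag_growth 1000 1050    # prints: growth 5% is within threshold
```

Run it against readings taken minutes or hours apart; one-off growth is normal, steady growth across many samples is the leak signature.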

Next, find the corresponding application and resolve the problem by updating it or applying patches.

If that still does not solve it, contact the application vendor.

Tip: if memory consumption is high, use svmon -Pns to examine in detail the processes with large or abnormal memory footprints, judge whether this is reasonable, and decide whether physical memory needs to be added.

III. I/O analysis.

a. Check whether the system shows I/O wait; if it does, there may be an I/O performance problem.
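Flagging overloaded disks from an iostat-style table can be sketched with awk; the 70% `%tm_act` threshold is a common rule of thumb rather than an AIX default, and the sample values are made up:

```shell
#!/bin/sh
# Scan (disk, %tm_act) pairs and report any disk busier than 70%.
busy=$(awk '$2 >= 70 {print $1 " busy at " $2 "%"}' <<'EOF'
hdisk0   6.5
hdisk14  40.5
hdisk15  90.5
EOF
)
echo "${busy:-no busy disks}"
```

On a live system the table would come from `iostat` output instead of the embedded sample; a busy disk plus idle neighbors suggests rebalancing logical volumes across spindles.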

Tips for Using Paging Space in AIX

1. Sizing the paging space: in AIX the default paging-space size is 2 GB.

If more space is needed to support additional processes and applications, the paging space can be enlarged with chps, or further paging spaces added with mkps (active paging spaces are listed in /etc/swapspaces).

2. Monitoring paging-space use: the vmstat command can monitor paging activity on an AIX system.

vmstat shows current memory use, paging activity, CPU utilization, and so on.

3. Optimizing paging-space use: in AIX, paging is an important factor in system performance.

To maximize performance, the following measures help: a. Avoid over-reliance on paging space: paging costs considerable CPU time and I/O, so heavy use of paging space should be avoided.

b. Use fast disks or solid-state drives: placing paging space on fast disks or SSDs speeds up paging and improves performance.

c. Tune the page-replacement algorithm: performance can be improved by adjusting how pages are selected for replacement.

For example, an LRU (least recently used) policy can optimize paging behavior.

d. Keep physical memory sufficient: to avoid excessive paging, make sure the system has enough RAM.

If physical memory is short, add more.

e. Allocate memory sensibly: sensible allocation reduces the demand for paging.

For example, when assigning memory to an application, consider how much it actually needs and avoid over-allocation.

Server Performance Tuning in Practice: Raising a Server's Processing Capacity

After long service and continual expansion, gradually declining server performance becomes a problem for many enterprises and organizations.

As business volume and user numbers grow, the server's processing capacity becomes critical.

This article introduces some tuning practices that help raise it.

I. Hardware tuning

1. Inspect and maintain server hardware regularly: the hardware's normal operation is vital to performance.

Check the state of fans, disks, memory, and other components regularly to make sure they run normally.

Replace failed hardware promptly to improve the server's stability and performance.

2. Upgrade the hardware configuration: if the server's configuration is already at its limits, consider upgrading to lift performance.

Memory can be added, faster processors fitted, higher-capacity disks installed, and so on, to meet a larger load.

II. Operating-system tuning

1. Kernel tuning: configuring kernel parameters appropriately can improve server performance.

For example, adjust the TCP window size, the maximum number of file handles, and the send and receive buffer sizes to fit the server's particular situation.
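On Linux these kernel parameters live under sysctl; a sketch of writing them to a config file follows. The values are purely illustrative, not recommendations, and the file name is arbitrary; on AIX the analogous knobs are set with `no -p -o ...` instead:

```shell
#!/bin/sh
# Write a sysctl fragment with example network and file-handle settings.
conf=/tmp/99-server-tuning.conf
cat > "$conf" <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
fs.file-max = 2097152
EOF
echo "wrote $(wc -l < "$conf") settings to $conf"
# Apply with: sysctl -p "$conf"   (root required)
```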

2. Disable unnecessary services and processes: by default the operating system starts many services and processes that may aid functionality but cost performance.

Disabling the unnecessary ones frees system resources and raises processing capacity.

III. Database tuning

1. Optimize SQL queries: poorly performing SQL is one of the main causes of database slowdown.

Query performance can be improved by optimizing statements, adding indexes, avoiding full-table scans, and so on.

2. Partitioning and table splitting: if tables have grown too large, consider partitioning or splitting them.

Spreading data across partitions or tables reduces the amount scanned per query and improves efficiency.

IV. Application tuning

1. Code optimization: optimizing application code can markedly improve server performance.

Avoid excessive looping, reduce function calls, avoid repeated computation, and so on; all of these improve code performance.

2. Concurrency control: concurrency is a common performance bottleneck.

Optimizing concurrency control raises the server's concurrent processing capacity.

For example, designing database transactions carefully and using thread pools avoids conflicts and resource contention and improves concurrent throughput.

V. Network tuning

1. Bandwidth: configuring network bandwidth appropriately improves the server's network performance.

AIX Performance Tuning (slides)

Managing jobs: foreground jobs via batch, nice, and /etc/security/limits; background jobs via renice.

Basic performance analysis (flowchart): check the CPU with sar -u; if utilization is high, the system may be CPU-limited. Otherwise check the run-queue length with sar -q; a persistently long queue also points at the CPU. Next check memory with vmstat; heavy paging suggests a memory limit. Finally check the disks with iostat; if load is unbalanced across disks, balance it, otherwise the system may be disk/SCSI-limited.

Limits of performance tuning: measurements are often uncertain; tuning methods are not always reliable; there is no standard procedure or exhaustive manual; requirements conflict; usage is unpredictable; system interactions are complex; resource demands are sensitive.

Characterizing the workload: classes include workstation, multi-user environment, and server; consider the type and level of demand, packaged software versus in-house applications, and real-world measurements; there is no universal benchmark.

CPU utilization (sar -u): the syntax is

    # sar [options] interval number

for example:

    # sar -u 60 3
    AIX NODE 2 3 00000211 07/06/99
              %usr  %sys  %wio  %idle
    08:25:11    48    52     0      0
    08:26:10    63    37     0      0
    08:27:12    59    41     0      0
    Average     56    44     0      0

IBM TRAINING®A26AIX Performance TuningJaqui LynchLas Vegas, NVAIX Performance TuningUpdated Presentation will be at:/papers/pseries-a26-aug06.pdfJaqui LynchSenior Systems EngineerMainline Information SystemsAgenda•AIX v5.2 versus AIX v5.3•32 bit versus 64 bit •Filesystem Types•DIO and CIO•AIX Performance Tunables •Oracle Specifics •Commands•ReferencesNew in AIX 5.2•P5support•JFS2•Large Page support (16mb)•Dynamic LPAR•Small Memory Mode–Better granularity in assignment of memory to LPARs •CuOD•xProfiler•New Performance commands–vmo, ioo, schedo replace schedtune and vmtune •AIX 5.1 Status–Will not run on p5 hardware–Withdrawn from marketing end April 2005–Support withdrawn April 2006AIX 5.3•New in5.3–With Power5 hardware•SMT•Virtual Ethernet•With APV–Shared Ethernet–Virtual SCSI Adapter–Micropartitioning–PLMAIX 5.3•New in5.3–JFS2 Updates•Improved journaling•Extent based allocation•1tb filesystems and files with potential of 4PB•Advanced Accounting•Filesystem shrink for JFS2•Striped Columns–Can extend striped LV if a disk fills up•1024 disk scalable volume group–1024 PVs, 4096 LVs, 2M pps/vg•Quotas•Each VG now has its own tunable pbuf pool–Use lvmo commandAIX 5.3•New in5.3–NFSv4 Changes•ACLs–NIM enhancements•Security•Highly available NIM•Post install configuration of Etherchannel and Virtual IP –SUMA patch tool–Last version to support 32 bit kernel–MP kernel even on a UP–Most commands changed to support LPAR stats–Forced move from vmtune to ioo and vmo–Page space scrubbing–Plus lots and lots of other things32 bit versus 64 bit•32 Bit•Up to 96GB memory •Uses JFS for rootvg •Runs on 32 or 64 bit hardware •Hardware all defaults to 32 bit•JFS is optimized for 32 bit• 5.3 is last version of AIX with 32 bit kernel •64 bit•Allows > 96GB memory •Current max is 256GB (arch is 16TB) except 590/595 (1TB & 2TB)•Uses JFS2 for rootvg •Supports 32 and 64 bit apps•JFS2 is optimized for 64 bitFilesystem Types•JFS•2gb file max unless BF •Can use with DIO •Optimized for 32 bit •Runs on 
32 bit or 64 bit •Better for lots of small file creates and deletes •JFS2•Optimized for 64 bit •Required for CIO •Can use DIO•Allows larger file sizes •Runs on 32 bit or 64 bit •Better for large files and filesystemsGPFSClustered filesystemUse for RACSimilar to CIO –noncached, nonblocking I/ODIO and CIO•DIO–Direct I/O–Around since AIX v5.1–Used with JFS–CIO is built on it–Effectively bypasses filesystem caching to bring data directlyinto application buffers–Does not like compressed JFS or BF (lfe) filesystems•Performance will suffer due to requirement for 128kb I/O –Reduces CPU and eliminates overhead copying data twice–Reads are synchronous–Bypasses filesystem readahead–Inode locks still used–Benefits heavily random access workloadsDIO and CIO•CIO–Concurrent I/O–Only available in JFS2–Allows performance close to raw devices–Use for Oracle dbf and control files, and online redo logs,not for binaries–No system buffer caching–Designed for apps (such as RDBs) that enforce writeserialization at the app–Allows non-use of inode locks–Implies DIO as well–Benefits heavy update workloads–Not all apps benefit from CIO and DIO –some arebetter with filesystem caching and some are saferthat wayPerformance Tuning•CPU–vmstat, ps, nmon•Network–netstat, nfsstat, no, nfso•I/O–iostat, filemon, ioo, lvmo•Memory–lsps, svmon, vmstat, vmo, iooNew tunables•Old way–Create rc.tune and add to inittab•New way–/etc/tunables•lastboot•lastboot.log•Nextboot–Use –p –o options–ioo–p –o options–vmo–p –o options–no –p –o options–nfso–p –o options–schedo-p –o optionsTuneables1/3•minperm%–Value below which we steal from computational pages -default is 20%–We lower this to something like 5%, depending on workload•Maxperm%–default is 80%–This is a soft limit and affects ALL file pages (including those in maxclient)–Value above which we always steal from persistent–Be careful as this also affects maxclient–We no longer tune this –we use lru_file_repage instead–Reducing maxperm stops file caching affecting 
programs that are running•maxclient–default is 80%–Must be less than or equal to maxperm–Affects NFS, GPFS and JFS2–Hard limit by default–We no longer tune this –we use lru_file_repage instead•numperm–This is what percent of real memory is currently being used for caching ALL file pages •numclient–This is what percent of real memory is currently being used for caching GPFS, JFS2 and NFS •strict_maxperm–Set to a soft limit by default –leave as is•strict_maxclient–Available at AIX 5.2 ML4–By default it is set to a hard limit–We used to change to a soft limit –now we do notTuneables2/3•maxrandwrt–Random write behind–Default is 0 –try 32–Helps flush writes from memory before syncd runs•syncd runs every 60 seconds but that can be changed–When threshhold reached all new page writes are flushed to disk–Old pages remain till syncd runs•Numclust–Sequential write behind–Number of 16k clusters processed by write behind•J2_maxRandomWrite–Random write behind for JFS2–On a per file basis–Default is 0 –try 32•J2_nPagesPerWriteBehindCluster–Default is 32–Number of pages per cluster for writebehind•J2_nRandomCluster–JFS2 sequential write behind–Distance apart before random is detected•J2_nBufferPerPagerDevice–Minimum filesystem bufstructs for JFS2 –default 512, effective at fs mountTuneables3/3•minpgahead, maxpgahead, J2_minPageReadAhead & J2_maxPageReadAhead–Default min =2 max = 8–Maxfree–minfree>= maxpgahead•lvm_bufcnt–Buffers for raw I/O. 
Default is 9–Increase if doing large raw I/Os (no jfs)•numfsbufs–Helps write performance for large write sizes–Filesystem buffers•pv_min_pbuf–Pinned buffers to hold JFS I/O requests–Increase if large sequential I/Os to stop I/Os bottlenecking at the LVM–One pbuf is used per sequential I/O request regardless of the number of pages–With AIX v5.3 each VG gets its own set of pbufs–Prior to AIX 5.3 it was a system wide setting•sync_release_ilock–Allow sync to flush all I/O to a file without holding the i-node lock, and then use the i-node lock to do the commit.–Be very careful –this is an advanced parameter•minfree and maxfree–Used to set the values between which AIX will steal pages–maxfree is the number of frames on the free list at which stealing stops (must be >=minfree+8)–minfree is the number used to determine when VMM starts stealing pages to replenish the free list–On a memory pool basis so if 4 pools and minfree=1000 then stealing starts at 4000 pages– 1 LRUD per pool, default pools is 1 per 8 processors•lru_file_repage–Default is 1 –set to 0–Available on >=AIX v5.2 ML5 and v5.3–Means LRUD steals persistent pages unless numperm< minperm•lru_poll_interval–Set to10–Improves responsiveness of the LRUD when it is runningNEW Minfree/maxfree•On a memory pool basis so if 4 pools andminfree=1000 then stealing starts at 4000pages•1 LRUD per pool•Default pools is 1 per 8 processors•Cpu_scale_memp can be used to changememory pools•Try to keep distance between minfree andmaxfree<=1000•Obviously this may differvmstat -v•26279936 memory pages•25220934 lruable pages•7508669 free pages• 4 memory pools•3829840 pinned pages•80.0 maxpin percentage•20.0 minperm percentage•80.0 maxperm percentage•0.3 numperm percentage All filesystem buffers•89337 file pages•0.0 compressed percentage•0 compressed pages•0.1 numclient percentage Client filesystem buffers only•80.0 maxclient percentage•28905 client pages•0 remote pageouts scheduled•280354 pending disk I/Os blocked with no pbuf LVM 
–pv_min_pbuf •0 paging space I/Os blocked with no psbuf VMM –fixed per page dev •2938 filesystem I/Os blocked with no fsbuf numfsbufs•7911578 client filesystem I/Os blocked with no fsbuf•0 external pager filesystem I/Os blocked with no fsbuf j2_nBufferPerPagerDevice •Totals since boot so look at 2 snapshots 60 seconds apart•pbufs, psbufs and fsbufs are all pinnedno -p -o rfc1323=1no -p -o sb_max=1310720no -p -o tcp_sendspace=262144no -p -o tcp_recvspace=262144no -p -o udp_sendspace=65536no -p -o udp_recvspace=655360nfso -p -o nfs_rfc1323=1nfso -p -o nfs_socketsize=60000nfso -p -o nfs_tcp_socketsize=600000vmo -p -o minperm%=5vmo -p -o minfree=960vmo -p -o maxfree=1088vmo -p -o lru_file_repage=0vmo -p -o lru_poll_interval=10ioo -p -o j2_maxPageReadAhead=128ioo -p -o maxpgahead=16ioo -p -o j2_maxRandomWrite=32ioo -p -o maxrandwrt=32ioo -p -o j2_nBufferPerPagerDevice=1024ioo -p -o pv_min_pbuf=1024ioo -p -o numfsbufs=2048ioo -p -o j2_nPagesPerWriteBehindCluster=32Increase the following if using raw LVMs (default is 9)Ioo –p –o lvm_bufvnt=12Starter Set of tunablesNB please test these before putting intoproduction vmstat -IIGNORE FIRST LINE -average since bootRun vmstat over an interval (i.e. 
vmstat 2 30)System configuration: lcpu=24 mem=102656MB ent=0kthr memory page faults cpu---------------------------------------------------------------------------r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec 56 1 18637043 7533530 0 0 0 0 0 0 4298 24564 986698 2 0 0 12.00 100.057 1 18643753 7526811 0 0 0 0 0 0 3867 25124 9130 98 2 0 0 12.00 100.0System configuration: lcpu=8 mem=1024MB ent=0.50kthr memory page faults cpu------------------------------------------------------------------------------r b p avm fre fi fo pi po fr sr in sy cs us sy id wa pc ec1 1 0 170334 968 96 163 0 0 190 511 11 556 662 1 4 90 5 0.03 6.81 1 0 170334 1013 53 85 0 0 107 216 7 268 418 02 92 5 0.02 4.4Pc = physical processors consumed –if using SPPEc = %entitled capacity consumed –if using SPPFre may well be between minfree and maxfreefr:sr ratio 1783:2949 means that for every 1783 pages freed 2949 pages had to be examined. ROT was 1:4 –may need adjustingTo get a 60 second average try: vmstat 60 2Memory and I/O problems•iostat–Look for overloaded disks and adapters•vmstat•vmo and ioo(replace vmtune)•sar•Check placement of JFS and JFS2 filesystems and potentially the logs•Check placement of Oracle or database logs•fileplace and filemon•Asynchronous I/O•Paging•svmon–svmon-G >filename•nmon•Check error logsioo Output•lvm_bufcnt= 9•minpgahead= 2•maxpgahead= 8•maxrandwrt = 32 (default is 0)•numclust= 1•numfsbufs= 186•sync_release_ilock= 0•pd_npages= 65536•pv_min_pbuf= 512•j2_minPageReadAhead = 2•j2_maxPageReadAhead = 8•j2_nBufferPerPagerDevice = 512•j2_nPagesPerWriteBehindCluster = 32•j2_maxRandomWrite = 0•j2_nRandomCluster = 0vmo OutputDEFAULTS maxfree= 128 minfree= 120 minperm% = 20 maxperm% = 80 maxpin% = 80 maxclient% = 80 strict_maxclient = 1 strict_maxperm = 0OFTEN SEEN maxfree= 1088 minfree= 960 minperm% = 10 maxperm% = 30 maxpin% = 80 Maxclient% = 30 strict_maxclient = 0 strict_maxperm = 0numclient and numperm are both 29.9So numclient-numperm=0 aboveMeans filecaching use is 
probably all JFS2/NFS/GPFSRemember to switch to new method using lru_file_repageiostatIGNORE FIRST LINE -average since bootRun iostat over an interval (i.e. iostat2 30)tty: tin tout avg-cpu: % user % sys % idle % iowait physc% entc0.0 1406.0 93.1 6.9 0.0 0.012.0 100.0Disks: % tm_act Kbps tps Kb_read Kb_wrtn hdisk1 1.0 1.5 3.0 0 3hdisk0 6.5 385.5 19.5 0 771hdisk14 40.5 13004.0 3098.5 12744 13264 hdisk7 21.0 6926.0 271.0 440 13412 hdisk15 50.5 14486.0 3441.5 13936 15036 hdisk17 0.0 0.00.00 0iostat–a AdaptersSystem configuration: lcpu=16 drives=15tty: tin tout avg-cpu: % user % sys % idle % iowait0.4 195.3 21.4 3.3 64.7 10.6Adapter: Kbps tps Kb_read Kb_wrtnfscsi1 5048.8 516.9 1044720428 167866596Disks: % tm_act Kbps tps Kb_read Kb_wrtn hdisk6 23.4 1846.1 195.2 381485286 61892408 hdisk9 13.9 1695.9 163.3 373163554 34143700 hdisk8 14.4 1373.3 144.6 283786186 46044360 hdisk7 1.1 133.5 13.8 628540225786128 Adapter: Kbps tps Kb_read Kb_wrtnfscsi0 4438.6 467.6 980384452 85642468Disks: % tm_act Kbps tps Kb_read Kb_wrtn hdisk5 15.2 1387.4 143.8 304880506 28324064 hdisk2 15.5 1364.4 148.1 302734898 24950680 hdisk3 0.5 81.4 6.8 3515294 16043840 hdisk4 15.8 1605.4 168.8 369253754 16323884 iostat-DExtended Drive Reporthdisk3 xfer: %tm_act bps tps bread bwrtn0.5 29.7K 6.8 15.0K 14.8Kread: rps avgserv minserv maxserv timeouts fails29.3 0.1 0.1784.5 0 0write: wps avgserv minserv maxserv timeouts fails133.6 0.0 0.3 2.1S 0 0 wait: avgtime mintime maxtime avgqsz qfull0.0 0.00.2 0.0 0iostat Otheriostat-A async IOSystem configuration: lcpu=16 drives=15aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait150 0 5652 0 12288 21.4 3.3 64.7 10.6Disks: % tm_act Kbps tps Kb_read Kb_wrtnhdisk6 23.4 1846.1 195.2 381485298 61892856hdisk5 15.2 1387.4 143.8 304880506 28324064hdisk9 13.9 1695.9 163.3 373163558 34144512iostat-m pathsSystem configuration: lcpu=16 drives=15tty: tin tout avg-cpu: % user % sys % idle % iowait0.4 195.3 21.4 3.3 64.7 10.6Disks: % tm_act Kbps tps Kb_read 
Kb_wrtnhdisk0 1.6 17.0 3.7 1190873 2893501Paths: % tm_act Kbps tps Kb_read Kb_wrtnPath0 1.6 17.0 3.7 1190873 2893501lvmo•lvmo output••vgname= rootvg(default but you can change with –v)•pv_pbuf_count= 256–Pbufs to add when a new disk is added to this VG •total_vg_pbufs= 512–Current total number of pbufs available for the volume group.•max_vg_pbuf_count= 8192–Max pbufs that can be allocated to this VG•pervg_blocked_io_count= 0–No. I/O's blocked due to lack of free pbufs for this VG •global_pbuf_count= 512–Minimum pbufs to add when a new disk is added to a VG •global_blocked_io_count= 46–No. I/O's blocked due to lack of free pbufs for all VGslsps–a(similar to pstat)•Ensure all page datasets the same size although hd6 can be bigger -ensure more page space than memory–Especially if not all page datasets are in rootvg–Rootvg page datasets must be big enough to hold the kernel •Only includes pages allocated (default)•Use lsps-s to get all pages (includes reserved via early allocation (PSALLOC=early)•Use multiple page datasets on multiple disks –Parallelismlsps outputlsps-aPage Space Physical Volume Volume Group Size %Used Active Auto Typepaging05 hdisk9 pagvg01 2072MB 1 yes yes lvpaging04 hdisk5 vgpaging01 504MB 1 yes yes lvpaging02 hdisk4 vgpaging02 168MB 1 yes yes lvpaging01 hdisk3 vgpagine03 168MB 1 yes yes lvpaging00 hdisk2 vgpaging04 168MB 1 yes yes lvhd6 hdisk0 rootvg512MB 1 yes yes lvlsps-sTotal Paging Space Percent Used3592MB 1%Bad Layout aboveShould be balancedMake hd6 the biggest by one lp or the same size as the others in a mixedenvironment like thisSVMON Terminology•persistent–Segments used to manipulate files and directories •working–Segments used to implement the data areas of processesand shared memory segments•client–Segments used to implement some virtual file systems likeNetwork File System (NFS) and the CD-ROM file system•/infocenter/pseries/topi c/com.ibm.aix.doc/cmds/aixcmds5/svmon.htmsvmon-Gsize inuse free pin virtualmemory 26279936 18778708 7501792 
3830899 18669057pg space 7995392 53026work pers clnt lpagepin 3830890 0 0 0in use 18669611 80204 28893 0In GB Equates to:size inuse free pin virtualmemory 100.25 71.64 28.62 14.61 71.22pg space 30.50 0.20work pers clnt lpagepin 14.61 0 0 0in use 71.22 0.31 0.15 0General Recommendations•Different hot LVs on separate physical volumes•Stripe hot LV across disks to parallelize•Mirror read intensive data•Ensure LVs are contiguous–Use lslv and look at in-band % and distrib–reorgvg if needed to reorg LVs•Writeverify=no•minpgahead=2, maxpgahead=16 for 64kb stripe size•Increase maxfree if you adjust maxpgahead•Tweak minperm, maxperm and maxrandwrt•Tweak lvm_bufcnt if doing a lot of large raw I/Os•If JFS2 tweak j2 versions of above fields•Clean out inittab and rc.tcpip and inetd.conf, etc for things that should not start–Make sure you don’t do it partially–i.e. portmap is in rc.tcpip and rc.nfsOracle Specifics•Use JFS2 with external JFS2 logs(if high write otherwise internal logs are fine)•Use CIO where it will benefit you–Do not use for Oracle binaries•Leave DISK_ASYNCH_IO=TRUE in Oracle•Tweak the maxservers AIO settings•If using JFS–Do not allocate JFS with BF (LFE)–It increases DIO transfer size from 4k to 128k–2gb is largest file size–Do not use compressed JFS –defeats DIOTools•vmstat –for processor and memory•nmon–/collaboration/wiki/display/WikiPtype/nmon–To get a 2 hour snapshot (240 x 30 seconds)–nmon-fT-c 30 -s 240–Creates a file in the directory that ends .nmon•nmon analyzer–/collaboration/wiki/display/WikiPtype/nmonanalyser–Windows tool so need to copy the .nmon file over–Opens as an excel spreadsheet and then analyses the data•sar–sar-A -o filename 2 30 >/dev/null–Creates a snapshot to a file –in this case 30 snaps 2 seconds apart •ioo, vmo, schedo, vmstat–v•lvmo•lparstat,mpstat•Iostat•Check out Alphaworks for the Graphical LPAR tool•Many many moreOther tools•filemon–filemon -v -o filename -O all–sleep 30–trcstop•pstat to check async I/O–pstat-a | grep aio| 
wc–l•perfpmr to build performance info forIBM if reporting a PMR–/usr/bin/perfpmr.sh300lparstatlparstat-hSystem Configuration: type=shared mode=Uncapped smt=On lcpu=4 mem=512 ent=5.0 %user %sys %wait %idle physc%entc lbusy app vcsw phint%hypv hcalls0.0 0.5 0.0 99.5 0.00 1.0 0.0 -1524 0 0.5 154216.0 76.3 0.0 7.7 0.30 100.0 90.5 -321 1 0.9 259Physc–physical processors consumed%entc–percent of entitled capacityLbusy–logical processor utilization for system and userVcsw–Virtual context switchesPhint–phantom interrupts to other partitions%hypv-%time in the hypervisor for this lpar–weird numbers on an idle system may be seen/infocenter/pseries/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds3/lparstat.htmmpstatmpstat–sSystem configuration: lcpu=4 ent=0.5Proc1Proc00.27%49.63%cpu0cpu2cpu1cpu30.17%0.10% 3.14%46.49%Above shows how processor is distributed using SMTAsync I/OTotal number of AIOs in usepstat–a | grep aios| wc–lOr new way is:ps–k | grep aio| wc-l4205AIO max possible requestslsattr –El aio0 –a maxreqsmaxreqs4096 Maximum number of REQUESTS TrueAIO maxserverslsattr –El aio0 –a maxserversmaxservers 320 MAXIMUM number of servers per cpu TrueNB –maxservers is a per processor setting in AIX 5.3Look at using fastpathFastpath can now be enabled with DIO/CIOSee Session A23 by Grover Davidson for a lot more info on Async I/OI/O Pacing•Useful to turn on during backups (streaming I/Os)•Set high value to multiple of (4*n)+1•Limits the number of outstanding I/Osagainst an individual file•minpout–minimum•maxpout–maximum•If process reaches maxpout then it issuspended from creating I/O untiloutstanding requests reach minpoutNetwork•no –a & nfso-a to find what values are set to now•Buffers–Mbufs•Network kernel buffers•thewall is max memory for mbufs•Can use maxmbuf tuneable to limit this or increase it–Uses chdev–Determines real memory used by communications–If 0 (default) then thewall is used–Leave it alone–TCP and UDP receive and send buffers–Ethernet adapter attributes•If change 
send and receive above then also set it here–no and nfso commands–nfsstat–rfc1323 and nfs_rfc1323netstat•netstat–i–Shows input and output packets and errors foreach adapter–Also shows collisions•netstat–ss–Shows summary info such as udp packets droppeddue to no socket•netstat–m–Memory information•netstat–v–Statistical information on all adaptersNetwork tuneables•no -a•Using no–rfc1323 = 1–sb_max=1310720(>= 1MB)–tcp_sendspace=262144–tcp_recvspace=262144–udp_sendspace=65536(at a minimum)–udp_recvspace=655360•Must be less than sb_max•Using nfso–nfso-a–nfs_rfc1323=1–nfs_socketsize=60000–nfs_tcp_socketsize=600000•Do a web search on “nagle effect”•netstat–s | grep“socket buffer overflow”nfsstat•Client and Server NFS Info •nfsstat–cn or –r or –s–Retransmissions due to errors•Retrans>5% is bad–Badcalls–Timeouts–Waits–ReadsUseful Links• 1. Ganglia–• 2. Lparmon–/tech/lparmon• 3. Nmon–/collaboration/wiki/display/WikiPtype/nmon• 4. Nmon Analyser–/collaboration/wiki/display/WikiPtype/nmonanalyser • 5. Jaqui's AIX* Blog–Has a base set of performance tunables for AIX 5.3 /blosxomjl.cgi/• 6. vmo command–/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds6/vmo.htm •7. ioo command–/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds3/ioo.htm •8. vmstat command–/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds3/ioo.htm •9. lvmo command–/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds3/ioo.htm •10. eServer Magazine and AiXtra–/•Search on Jaqui AND Lynch•Articles on Tuning and Virtualization•11. 
Find more on Mainline at:–/ebrochureQuestions?Supplementary SlidesDisk Technologies•Arbitrated–SCSI20 or 40 mb/sec–FC-AL 100mb/sec–Devices arbitrate for exclusive control–SCSI priority based on address •Non-Arbitrated–SSA80 or 160mb/sec–Devices on loop all treated equally–Devices drop packets of data on loopAdapter Throughput-SCSI100%70%Bits Maxmby/s mby/s Bus DevsWidth •SCSI-15 3.588•Fast SCSI10788•FW SCSI20141616•Ultra SCSI201488•Wide Ultra SCSI 4028168•Ultra2 SCSI402888•Wide Ultra2 SCSI80561616•Ultra3 SCSI1601121616•Ultra320 SCSI3202241616•Ultra640 SCSI6404481616•Watch for saturated adaptersCourtesy of /terms/scsiterms.htmlAdapter Throughput-Fibre100%70%mbit/s mbit/s•13393•266186•530371• 1 gbit717• 2 gbit1434•SSA comes in 80 and 160 mb/secRAID Levels•Raid-0–Disks combined into single volume stripeset–Data striped across the disks•Raid-1–Every disk mirrored to another–Full redundancy of data but needs extra disks–At least 2 I/Os per random write•Raid-0+1–Striped mirroring–Combines redundancy and performanceRAID Levels•RAID-5–Data striped across a set of disks–1 more disk used for parity bits–Parity may be striped across the disks also–At least 4 I/Os per random write(read/write to data and read/write toparity)–Uses hot spare technology。
