DataLoad V4 Users Guide


Storage Manager 2018 R1 Installation Guide

Storage Manager 2018 R1 Installation Guide: Notes, Cautions, and Warnings. NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2017 – 2018 Dell Inc. or its subsidiaries.

All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.

Other trademarks may be trademarks of their respective owners.

2018 - 11, Rev. D

Contents:
About This Guide: revision history; audience; related publications (Storage Manager documentation, Storage Center documentation, FluidFS Cluster documentation); Dell TechCenter; contacting Dell
1 Introduction to Storage Manager: Storage Manager components; management compatibility; software and hardware requirements (Data Collector, Storage Manager Virtual Appliance, Storage Manager Client, Unisphere and Unisphere Central, Server Agent); default ports used by Storage Manager (Data Collector ports, client ports, Server Agent ports); IPv6 support
2 Planning and Preparation: choosing a data storage method; gathering required installation information; recording database information; preparing the database (Microsoft SQL Server, MySQL)
3 Installing and Configuring the Data Collector: installing the Data Collector; migrating an existing Data Collector to a new Data Collector; migration requirements
4 Installing and Configuring the Storage Manager Virtual Appliance: virtual appliance requirements for vSphere; deploying the virtual appliance; deploying the virtualization-based application solution; configuring the virtual appliance; configuring the virtual appliance as the primary Data Collector; migrating an existing Data Collector to the virtual appliance; migration requirements; post-migration steps
5 Installing and Configuring the Storage Manager Client: connecting to the Storage Manager application page; installing the Storage Manager Client on Windows; installing the Storage Manager Client on Linux; connecting to the Data Collector with the client; adding Storage Centers to Storage Manager; next steps
6 Updating Storage Manager Software: updating the Storage Manager Data Collector; updating the Storage Manager Client; updating the Storage Manager Server Agent; updating the Storage Manager virtual appliance

About This Guide: This guide describes how to install and configure the Storage Manager components.

Setting up Veeam on the Dell DR4000 Disk Backup Appliance

Setting up Veeam on theDell™ DR4000 Disk Backup ApplianceA Dell Technical White Paper© 2012 Dell Inc. All Rights Reserved. Dell and the Dell logo, and other Dell names and marks are trademarks of Dell Inc. in the US and worldwide. Veeam is a trademark of Veeam.Table of contentsExecutive Summary (4)1Install and Configure the DR4000 (5)2Configure the Backup Server (12)3Set up Veeam (14)4Set up the DR4000 Cleaner (19)5Monitoring Dedupe, Compression & Performance (20)Executive SummaryThis paper provides information about how to set up the Dell DR4000 as a backup to disk target forVeeam® Backup & Replication™ software. This paper is a quick reference guide and does not include allDR4000 deployment best practices.See the DR4000 documentation other data management application best practices whitepapers foradditional information.1Install and Configure the DR40001.Rack and cable the DR4000 appliance, and power it on.2.Log into iDRAC using the default address 192.168.0.1, user name root, and the password calvin.unch the virtual console.4.Once the virtual console is open, log in to the system as user administrator and the passwordSt0r@ge! (The “0” in the password is the numeral zero.).5.Set the user-defined networking preferences.6.View the summary of preferences and confirm that it is correct.7.Log into the DR4000 administrator console, using the IP address you just provided for theDR4000, user administrator and the password St0r@ge! (The “0” in the password is the numeral zero.).8.Join the DR4000 to Active Directory.a.Select Active Directory in the tree on the left hand side of the dashboard.b.Enter your Active Directory credentials.9.Create and mount the container.a.Select Containers in the tree on the left side of the dashboard, and then click the Create linkat the top of the page.b.Enter a Container Name and select the Enable CIFS check box.c.Select the client access credentials preferred.For improved security, Dell recommends adding IP addresses for the following (Not all environments will have all components):-Backup console (Veeam Server)-Hyper-V hosts (on-host proxy for Hyper-V environments)s -Off-host proxies (for Hyper-V environments)-Backup proxies (for vSphere environments)d.Click Create a New Container.e.Confirm that the container was added.f.Click Edit, and write down the container path, which you will use later to target the DR4000.g.Click Cancel to exit.2Configure the Backup Server1.Log into the media server and click on Start My Computer.2.Click Map network drive.3.For Folder, enter the path to the container on the DR4000.4.Select the Reconnect at logon check box.5.When prompted, enter the DR4000 login credentials.The DR4000 container is now mounted to your backup server.3Set up Veeam1.Open the Veeam Backup & Replication console.2.Click the Backup Infrastructure section, right-click on Backup Repositories, and select AddBackup Repository.3.Enter a name that is self-documenting, such as “CIFS-DR4000-Deduplication,” to indicate thedevice, protocol and feature of the repository.4.Select Shared folder.5.For Shared folder, enter in the name of the DR4000 (or TCP/IP address used above) and the sharename, and then click Next.6.Configure the repository wizard to note that the DR4000 is a deduplication target. Click theadvanced button and select the additional option for “Aligning backup file data blocks” (fixedlength write). Optionally select the decompress value. 
The decompress option will make theDR4000 do all of the compression.All other options for the new repository wizard are independent of anything related to theDR4000.7.Create a new backup job in the Veeam Backup console, for either Hyper-V or vSphere; the optionsspecific to the DR4000 are the same.8.Select one or more virtual machines, folders, datastores, resource pools, vApps, SCVMM clusters,etc. for the backup.9.Ensure that the repository for this job is the DR4000.10.On the storage wizard of the backup job, click the Advanced button to check a number ofimportant settings.11.Ensure the backup job is running in Incremental mode, and avoid the rollback transformationoption. This is a default for new jobs.12.Change the compression engine to Dedupe-friendly in the storage tab of the advanced settings.The deduplication option to local target will perform deduplication by Veeam at 1 MB, landing on the DR4000 for additional deduplication and compression.All other options are independent of the backup target.4Set up the DR4000 CleanerOnce all the backup jobs are setup the DR4000 cleaner must be scheduled. The DR4000 cleaner shouldrun at least 6 hours per week when backups are not taking place, generally after a backup job hascompleted.Performing scheduled disk space reclamation operations are recommended as a method for recoveringdisk space from system containers in which files were deleted as a result of deduplication.5Monitoring Dedupe, Compression & Performance After backup jobs have run the DR4000 will track Capacity, Storage Savings and Throughput on theDR4000 dashboard. This information is valuable in understanding the benefits the DR4000.21 Setting up Veeam on the Dell™ DR4000 Disk Backup Appliance。
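The drive-mapping step in the "Configure the Backup Server" section can also be done from a command prompt. The snippet below is a minimal sketch; the drive letter, DR4000 host name, container name, and account are illustrative placeholders rather than values from this paper:

rem map the DR4000 CIFS container as a persistent network drive on the backup server
net use Z: \\dr4000\veeam-container /persistent:yes /user:dr4000\administrator
rem verify that the share is mounted
net use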

DataLoad Usage Instructions at a Glance

DataLoad Usage Instructions. 1. Installation. Download path: install the latest DataLoad version found in the DataLoad folder, DataLoad_setup_v4.3.9.exe. Accepting the default settings throughout the installation is fine.

2. About DataLoad. DataLoad is a tool for manipulating the data and commands in Oracle Applications and other software by sending pre-defined data and commands to the target program. In other words, DataLoad feeds prepared data into Oracle or other applications according to a user-defined template.

The following example shows what using DataLoad looks like in practice.

As shown in Figure 1, when filling in this form manually the basic operations are: select the first Category text box, enter the data, press TAB to jump to the next text box, enter the next value, press TAB again to move on, and so on.

What DataLoad does is record this series of keyboard operations in a spreadsheet ahead of time (for example pressing Tab, Enter, or Shift, or mouse actions), and then run them as a batch so the data is loaded into the system in bulk.

Figure 2 shows a DataLoad template that has already been defined.

Each row represents the sequence of operations for one record, each column represents a single basic operation, the cells highlighted in blue are commands, and the white cells hold the data to be entered.

Figure 1. 3. Operating Instructions. This part explains in detail how the Asset Categories data above is imported through DataLoad.

First open the Asset Categories window into which the data will be entered (see Figure 3), and then open DataLoad from the desktop (see Figure 4).
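To make the template layout concrete, here is a purely illustrative sketch of a few template rows. The column arrangement, the TAB and ENT command cells, and the sample category values are assumptions for illustration and are not copied from the template in Figure 2:

Cell value       Command   Cell value            Command
COMPUTER-PC      TAB       Desktop computers     ENT
COMPUTER-LT      TAB       Laptop computers      ENT

Each row alternates between data cells (white in the template) and command cells (blue in the template) such as TAB and ENT, so replaying the rows reproduces the keystrokes a user would type by hand.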

IBM TotalStorage DS4000 Storage Server and Storage Expansion Enclosure Quick Start Guide
3. Cable the Storage Area Network
5. Managing the DS4000 storage server
After installation is completed, launch the storage management software to discover the hosts and storage subsystem automatically. For assistance, use the online help provided with the client.
GC26-7738-00 (part number 25R0400)
Before you begin
Before you begin the installation procedure, unpack the carton and verify that your shipment contains all the required components. Be sure to read the Safety Guide first, and review the other documentation provided with your shipment, before attempting to install any components.
1. Capture a profile of the subsystem. Verify that the current controller, ESM and drive firmware match the latest versions on the DS4000 storage support Web site, and if required, download any updated firmware from the Web site. Review the readme files that are included in each firmware package for any special download requirements or sequence dependencies before downloading and installing that particular firmware.

HighGo Database V4 Administration Manual

HighGo Database V4 Administrator's Manual. Contents: Chapter 1 System Installation; Chapter 2 Server Administration (2.1 Server Configuration, 2.2 Managing Databases, 2.3 Backup and Recovery, 2.4 Monitoring Database Activity, 2.5 Monitoring Disk Usage, 2.6 Stored Procedures, 2.7 Large Objects). Chapter 1 System Installation: installation and configuration of the software, trial use, and uninstallation are covered in detail in the Installation Manual and are not repeated here.

Chapter 2 Server Administration. 2.1 Server Configuration. There are many configuration parameters that affect the behavior of the database system.

The first section of this chapter describes how to interact with configuration parameters.

Subsequent sections discuss each parameter in detail.

2.1.1 Setting Parameters. Parameter names and values: all parameter names are case-insensitive.

Every parameter takes a value of one of five types: boolean, string, integer, floating point, or enumerated.

The type of a parameter value determines the syntax for setting the parameter: • Boolean: values can be written as on, off, true, false, yes, no, 1, 0 (all case-insensitive), or any unambiguous prefix of one of these.

• String: in general, enclose the value in single quotes, doubling any single quotes that appear within the value.

However, the quotes can usually be omitted if the value is a simple number or identifier.

• Numeric (integer or floating point): a decimal point is permitted only for floating-point parameters.

Do not use thousands separators.

Quotes are not required.

• Numeric with unit: some numeric parameters describing amounts of memory or time have an implicit unit, such as kilobytes, blocks (typically 8 KB), milliseconds, seconds, or minutes.

If such a parameter is given without a unit, it uses the default unit defined by HighGo DB, which can be found by querying pg_settings.unit.

To avoid ambiguity, a unit can be specified explicitly in the parameter value, for example the time value '120ms', which will be converted to the parameter's actual unit.

Note that to use this feature the value must be written as a string (in quotes).

Unit names are case-sensitive, and there can be whitespace between the numeric value and the unit.

• Valid memory units are kB (kilobytes), MB (megabytes), GB (gigabytes), and TB (terabytes).

The multiplier for memory units is 1024, not 1000.
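As a quick illustration of these rules, the following statements show how a value with an explicit unit must be quoted, how a plain number is accepted in the parameter's default unit, and how that default unit can be looked up in pg_settings. This is a minimal sketch; shared_buffers, work_mem, statement_timeout, and enable_seqscan are standard PostgreSQL-family parameters assumed to be present in HighGo DB as well.

-- set a time parameter with an explicit unit; the quoted string form is required
SET statement_timeout = '120ms';
-- set a memory parameter with a plain number, interpreted in the parameter's default unit
SET work_mem = 4096;
-- look up the default unit and current value of a parameter
SELECT name, setting, unit FROM pg_settings WHERE name = 'shared_buffers';
-- boolean parameters accept on/off/true/false/yes/no/1/0, case-insensitively
SET enable_seqscan = off;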

Kylin Server Operating System V4 Solr Software Adaptation Manual

Kylin Server Operating System V4 Solr Software Adaptation Manual. Tianjin Kylin Information Technology Co., Ltd., June 2019.

Contents: 1 Overview (1.1 System overview; 1.2 Environment overview; 1.3 Solr introduction; 1.4 Solr advantages; 1.5 Solr architecture and principles: 1.5.1 full-text search, 1.5.2 index creation and search process); 2 Solr Software Adaptation (1) download and extract Solr, 2) download Tomcat, 3) prepare dependency packages, 4) configure SOLRHOME, 5) configure Tomcat); 3 Verifying the Solr Deployment.

1 Overview. 1.1 System Overview. The Kylin server operating system is aimed mainly at server applications in key national industries such as integrated military electronic information systems, finance, and electric power. It emphasizes key technical strengths such as high security, high availability, efficient data processing, and virtualization, and provides rich, efficient, secure, and reliable features built for critical business. It is compatible with complete server products from mainstream domestic vendors such as Great Wall, Lenovo, Inspur, Huawei, and Sugon, with major domestic databases such as Dameng, Kingbase, Shentong, and GBase, and with domestic middleware from CVIC SE, Kingdee, TongTech, and others. It meets the performance, security, and scalability requirements that server workloads place on an operating system in the era of virtualization, cloud computing, and big data, and is an independently controllable server operating system featuring high security, high availability, high reliability, and high performance.

1.2 Environment Overview. Server model: Great Wall Qingtian DF720 server; CPU type: Phytium FT-2000+ processor; operating system version: Kylin-4.0.2-server-sp2-2000-19050910.Z1; kernel version: 4.4.131; Solr version: 7.7.2.

1.3 Solr Introduction. Solr is a standalone, enterprise-grade search application server that provides a web-service-like API.

Users can submit XML documents in a specified format to the search server via HTTP requests to build the index, and can issue search requests via HTTP GET and receive results in XML format.

Solr is a high-performance full-text search server based on Lucene and developed in Java 5.

It extends Lucene with a richer query language, is configurable and extensible, optimizes query performance, and provides a complete administration interface, making it an excellent full-text search engine.
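The adaptation steps listed in the table of contents (download and extract Solr, download Tomcat, prepare dependency packages, configure SOLRHOME, configure Tomcat, then verify) can be summarized as a shell sketch. The commands below are illustrative assumptions, not taken from the manual: the installation paths, the Tomcat version, and the use of a JVM system property to point the web application at SOLRHOME would all need to be adjusted for a real deployment.

# 1) extract Solr and Tomcat (Solr version matches the manual's environment; paths are illustrative)
tar xzf solr-7.7.2.tgz -C /opt
tar xzf apache-tomcat-8.5.40.tar.gz -C /opt
# 2) deploy the Solr web application into Tomcat
cp -r /opt/solr-7.7.2/server/solr-webapp/webapp /opt/apache-tomcat-8.5.40/webapps/solr
# 3) prepare dependency packages (logging jars shipped with Solr)
cp /opt/solr-7.7.2/server/lib/ext/*.jar /opt/apache-tomcat-8.5.40/webapps/solr/WEB-INF/lib/
# 4) configure SOLRHOME and point the webapp at it via a JVM system property
cp -r /opt/solr-7.7.2/server/solr /opt/solrhome
echo 'JAVA_OPTS="$JAVA_OPTS -Dsolr.solr.home=/opt/solrhome"' >> /opt/apache-tomcat-8.5.40/bin/setenv.sh
# 5) start Tomcat and verify the deployment through the Solr admin interface
/opt/apache-tomcat-8.5.40/bin/startup.sh
curl http://localhost:8080/solr/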

Installing an Operating System on a Dell Server from an SD Card (iDRAC Use vFlash)

iDRAC Use vFlash使用iDRAC 6 vFlash 功能upload image 文件,通过upload 的image 文件进行系统安装或修复,实验为如何Upload及设置启动的实验,要使用vFlash功能要求iDRAC 为Enterprise带DELL OEM SD内存卡,否则不能正常使用实验环境:T310服务器(iDRAC带vFlash 功能、DELL OEM 1G SD)IP: 10.10.10.124客户端:windows 2003 IP: 10.10.10.141ISO文件:BootCDXP.ISO (WinPE)、ploplinux-4.2.2-x64.iso (LinuxPE)Dos98.img (DOS)、 Perc6.imgTools:racadm.exe下载链接:iDRAC 用户指南:实验说明:iDRAC 6 version 1.9或以上,vFlash中的SD卡必须为DELL OEM的,否则无法进行以下的实验,(如下图)使用不是DELL OEM SD的显示,无法进行实验;另实验内容只适用于Power Edge 服务器(不包含M1000E的M系列刀片机)只测试Image文件上传至vFlash 中的SD作引导启动;实验中:上传Image后类型为CD(只读),软盘(读写),硬盘(读写)以下是实验步骤Upload CD1.首先要登入 iDRAC Web 页面,默认用户名:root, 密码:calvin ;2.进入vFlash 界面,如第一次使用请在SD卡中初始化及启用vFlash功能;如下图3.完成以上步骤后,点击“从映像创建”进行Image文件上传;如下图4.点击上图的“应用”上传文件进程开始运行,并请等待至提示“已完成创建分区”的提示出现;如下图5.完成创建分区后点击“管理”,选中已附加并应用;如下图6.在完成应用后,可进行使用已上传的Image 文件进行启动,可以选择在iDRAC中的设置,设置为第一引导设备,选择一次性引导;如下图或开机时按F11,通过BIOS Boot Manager 的文件启动Image文件;如下图以下启动后的将以ISO文件启动;如下图7.附加后如正常进入操作系统,可在直接中以光驱形式显示,要取消则在iDRAC中取消附加;参考步骤5Upload Fopply1.Upload Fopply first boot;注:上传为软盘只能使用于First Boot ,不能用于安装操作系统过程中加载驱动,如加载阵列控制卡驱动;2.从“映像创建”文件为dos98.img,标签为DOS;如下图3.请确保上传完成100%;如下图4.将文件附加;如下图5.如显示如下断开提示,新附加的介质,iDRAC会中断所有已附加的介质并重新加载,故选择确定即可;如下图6.参考步骤6方式以附加的文件方式启动;7.启动后的显示;如下图8.在系统中的显示为可移设备而不是为A盘;如下图Upload HDD1.上传Image文件作为磁盘分区,标签为HDD,类型为硬盘;如下图2.上传完成;3.附加,新附加的介质,iDRAC会中断所有已附加的介质并重新加载,故选择确定即可;如下图4.在系统的显示为可移动磁盘,而不是本地磁盘;如下图命令模式上传ISO1.以命令模式上传ISO 到vFlash中;工具:racadm.exe命令参数:vflashpartition 子命令(create | delete | status | list)-i (索引1-16的数字)-o (卷标,六个字母或数字,不能带空格)-e (类型:Floppy、cddvd、HDD)-t(类型:empty 创建空白分区,image:使用上传的image创建分区)empty - 创建空白分区。
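For reference, the vflashpartition parameters listed above (-i index, -o label, -e type, -t empty or image) can be combined into a single racadm command line. The sketch below is only an illustration: it assumes remote racadm against the iDRAC at 10.10.10.124 with the default credentials mentioned earlier and a hypothetical CIFS share holding the Perc6.img file, and the exact syntax should be checked against the iDRAC racadm command reference.

rem create a vFlash partition from an uploaded image (index 1, label HDD1, type HDD)
racadm -r 10.10.10.124 -u root -p calvin vflashpartition create -i 1 -o HDD1 -e HDD -t image -l //10.10.10.141/share/Perc6.img -u shareuser -p sharepass
rem list existing vFlash partitions and check their status
racadm -r 10.10.10.124 -u root -p calvin vflashpartition list
racadm -r 10.10.10.124 -u root -p calvin vflashpartition status -a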

Lenovo NetApp DM Series Storage Platform Overview
LENOVO OFFERING OVERVIEW (unified storage): the DM family spans all-flash models (DM7000F, DM5000F) for maximum performance, latency-sensitive apps, and mission-critical workloads, and hybrid models (DM7000H, DM5000H, DM3000H) for cost effectiveness. Related all-flash platforms shown on the slides: A700s, A700, Apollo (NVMe). Headline figures quoted on the slides include 384 SSDs, 480 HDDs/SSDs, and 275K IOPS.

Expansion shelves: DM120S; DM240S (2U24, 24 x 2.5" drives); DM600S (4U60, 60 x 3.5" or 2.5" drives, high-density offering).

DM7000F SPECIFICATION OVERVIEW (active-active controllers, unified storage): Lenovo DM7000F, 3U form factor, 256 GB cache; the specification table also lists base host ports per system, maximum host connectivity ports, drive expansion ports, maximum drives, expansion support (DM120S, DM240S, DM600S), and system performance (100% 8 KB random read @ 1 ms).

IBM System Storage N series AIX Host Utilities 6.0 Quick Start Guide

IBM System Storage N series AIX Host Utilities6.0Quick Start GuideThis guide is for experienced AIX users.It provides the basic information required to get the AIX Host Utilities installed and set up on an AIX host.The steps listed are for a typical installation and cover multiple AIX environments,including MPIO and PowerVM.While many steps are common to all environments,some steps apply only to a specific environment.If you are not an experienced AIX user,see the AIX Host Utilities Installation and Setup Guide.That document provides detailed steps and examples.It also includes troubleshooting information as well as instructions for optional tasks,such as setting up a SAN boot LUN.For information about installing and setting up third-party hardware and applications,such as host bus adapters,see the documentation that accompanies those products.Note:Occasionally there are known problems that can affect your system setup.Read the AIX Host Utilities Release Notes before you install the Host Utilities.The Release Notes are updated whenever an issue is found and might contain the information that was discovered after this guide wasproduced.Task1:Make sure the prerequisites for installing and setting up the Host Utilities have been metTo install the Host Utilities and set up your system,you must perform tasks on both the host and the storage system.In some cases,the tasks you perform vary depending on your environment;for example, the protocol you are using and whether you are using AIX MPIO for multipathing.Note:The N series interoperability matrix website contains the most current information about supported environments for the Host Utilities.1.Verify that your host system is correct,including:v Host operating system version,technology levels,and appropriate updatesv HBAs and drivers,model and version,or software initiatorv Volume management and multipathing,if you are using multipathingv PowerVM,if you are using PowerVMThe Host Utilities support both PowerVM vSCSI and PowerVM NPIV.2.Verify that your storage system is set up correctly,which includes having:v The correct version of Data ONTAP installed.v The appropriate license for the protocol on which your environment runs.v Appropriate HBAs or software initiators set up to work with the host as needed by your protocol.v ALUA enabled on all platforms that support it.Note:Starting with Data ONTAP8.0,ALUA is the default igroup.For FC environments where ALUA is not supported,the AIX Host Utilities provide the dotpaths utility.v Working volumes and qtrees(if desired)set up.3.(FC environments)If you are using a fabric connection,verify that the switch is set up correctly,which includes having the switch:v Cabled according to the instructions in the SAN Configuration Guide(called Fibre Channel and iSCSI Configuration Guide in Data ONTAP8.1and earlier)for your version of Data ONTAP.vZoned appropriately,using the supported zoning technique in single initiator zoning from a host's initiator's standpoint.v Powered on in the correct order:switch,disk shelves,storage systems,and then the host.Note:For information about supported topologies,see the SAN Configuration Guide(called Fibre Channel and iSCSI Configuration Guide in Data ONTAP8.1and earlier)for your version of DataONTAP.4.(FC direct-attached environment)If you use a direct-attached,FC configuration,set the media typeof the target HBA on the storage system to loop.Use the fcp config command to stop the HBA,set it to loop(fcp config adapter mediatype loop), and then restart the adapter.You can use 
the default HBA settings for the initiator HBA in the host,but you must configure the5.Confirm that the host and the storage system can communicate.Task2:Install and set up the Host UtilitiesTo install and set up the Host Utilities,you must get the software package for your Host Utilities environment and the software package for the SAN Toolkit.After you install these packages,you might need to perform additional configuration steps for your environment.You must be logged on as root to install or uninstall the Host Utilities.Note:(PowerVM vSCSI environments)If you are using PowerVM with vSCSI,you must switch into the OEM setup shell to install the Host Utilities on the VIO server:1.Log on to the host as padmin.2.Enter the command:oem_setup_envComplete the following steps to get,install,and set up the Host Utilities:1.Download the compressed file containing the Host Utilities from N series support website at/storage/support/nseries/.2.Uncompress the file and get the SAN Toolkit software package(Ontap.SAN_toolkit)for allenvironments and the host settings software package for your environment:Host Utilities environment Software packageAIX MPIO,PowerVM MPIO/Ontap.MPIO_Host_Utilities_KitSAN Tool Kit SAN_Tool_Kit/Ontap.SAN_toolkit You can use the zcat and tar commands to uncompress the file and extract the software;for example: zcat ibm_aix_host_utilities_6.0.tar.Z|tar-xvf-3.Install or upgrade the existing Host Utilities SAN Toolkit software package and the host settingssoftware package that you extracted.2From the directory containing the extracted software packages,use one of the following methods to install the two software packages:v SMITUse the smitty install command to start this program.Go to the Software Installation andMaintenance screen and choose the Install and Update Software option.v The command installp-aXYd FileSet_Path_NameRegardless of which method you use,you must execute it twice to install both the SAN Toolkitsoftware package and the host settings software package.(PowerVM NPIV environments)If you have a PowerVM NPIV environment running FC,you must install the SAN Toolkit on each VIO client.Doing this lets you run the sanlun utility on each VIO client.Note:If you have a PowerVM vSCSI environment,the Host Utilities is only installed on the VIO server and not the VIO client.4.(FC environments)Enable AIX Fast I/O Failure by setting the fc_err_recov parameter to fast_fail.a.For each host HBA(fcsci X where X is the HBA number),enable the AIX Fast I/O Failure featureby entering the following command:chdev-l fscsiX-a fc_err_recov=fast_fail-Pb.After you have set these values on each host HBA,reboot the system to enable the change to takeplace.e the command lsattr-El fscsiX to verify that the AIX Fast I/O Failure feature and DynamicTracking are enabled.Task3:Set up access between the host and the LUNs on the storage systemTo complete the Host Utilities setup,you must ensure your host discovers and can work with the LUNs on the storage system.The steps you perform differ depending on your environment.You must be logged in as root administrator to execute the Host Utilities commands.(PowerVM environments only)When you are setting access between the host and LUNs,the commands you enter in a PowerVM environment vary depending on whether you are running PowerVM vSCSI or PowerVM NPIV.If you are using PowerVM vSCSI and VIO servers,you must use the padmin login and the commands appropriate for that login to configure and discover LUNs.When you use the Host Utilities commands, you must become 
root by entering the oem_setup_env command.If you are using PowerVM NPIV,you run all the commands on the VIO client.This is the same as running AIX MPIO.1.Create and map igroups and LUNs.You must create at least one igroup and one LUN,and then map the LUN to the igroup.The lun setup command helps you through this process.For details about creating an igroup and LUNs,see the SAN Administration Guide(called Data ONTAP Block Access Management Guide for iSCSI and FC in Data ONTAP8.1and earlier)for your version of Data ONTAP.2.(FC)If your environment is running the FC protocol and supports ALUA,make sure ALUA isenabled.With Data ONTAP8.0and later,ALUA is automatically enabled when you create an igroup in an environment using FC.To determine whether ALUA is enabled,enter the command:igroup show-v igroup_nameIf ALUA is not enabled,enable it by entering the following command:igroup set igroup_name alua yes3.Discover the new LUNs by entering the appropriate command for your environment.3AIX MPIO and PowerVM NPIV environments cfgmgrPowerVM vSCSI environments cfgdev4.Verify that the host has discovered the LUNs by displaying all the AIX disks.(AIX MPIO and all PowerVM environments)Write down the hdisk instance numbers so you can supply them when you perform the path configurations.AIX MPIO and PowerVM NPIV environments lsdev-Cc diskPowerVM vSCSI environments lsdev-type disk5.(AIX MPIO and all PowerVM environments)Display information about your environment.AIX MPIO and PowerVM NPIV environments lsattr-El hdisk_namePowerVM vSCSI environments only lsdev-dev hdisk_name-attr6.(PowerVM vSCSI environments)Become root by entering the following command:oem_setup_env7.(AIX MPIO and all PowerVM FC environments)In FC environments,set the path priorities.ALUA handles this automatically.If ALUA is not supported and you are using an MPIO environment, you can use the dotpaths utility that comes with the Host Utilities to set the path priorities.If you enter dotpaths without any options,it sets the priority for all Data ONTAP LUNs.8.(PowerVM vSCSI environments)If you have LUNs presented to a VIO server from multiplethird-party storage vendors,make sure that all the LUNs use the same maximum transfer size.Enter the following command:lsattr-El hdisk_name-a max_transfer9.Display information about the LUNs and the HBAs by using the command sanlun.For example,toverify that the host has discovered the LUNs,enter the command:sanlun lun showIf you are using multipathing,you can display information about the primary and secondary paths available to the LUNs by entering the following command:sanlun lun show-pNote:The sanlun command displays path information for each LUN;however,it only displays the native multipathing policy.To see the multipathing policy for other vendors,you must usevendor-specific commands.WebsitesIBM maintains pages on the World Wide Web where you can get the latest technical information and download device drivers and updates.The following web pages provide N series information:v A listing of currently available N series products and features can be found at the following web page: /storage/nas/v The IBM System Storage N series support website requires users to register in order to obtain access to N series support content on the web.To understand how the N series support web content is organized and navigated,and to access the N series support website,refer to the following publicly accessible web page:/storage/support/nseries/This web page also provides links to AutoSupport information as well as other 
important N series product resources.v IBM System Storage N series products attach to a variety of servers and operating systems.To determine the latest supported attachments,go to the IBM N series interoperability matrix at the following web page:/systems/storage/network/interophome.html4v For the latest N series hardware product documentation,including planning,installation and setup, and hardware monitoring,service and diagnostics,see the IBM N series Information Center at the following web page:/infocenter/nasinfo/nseries/index.jspNA210-05812_A0,Printed in USA©Copyright IBM Corporation2012.US Government Users Restricted Rights–Use,duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.GI13-2806-01。
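The host-side commands scattered through the tasks above can be collected into one short sequence. This is a minimal sketch for an AIX MPIO host with FC HBAs; the HBA instance numbers (fscsi0, fscsi1) and the LUN name (hdisk2) are placeholders, and each command is taken from the steps described above.

# enable AIX Fast I/O Failure on each FC HBA, then reboot for the change to take effect
chdev -l fscsi0 -a fc_err_recov=fast_fail -P
chdev -l fscsi1 -a fc_err_recov=fast_fail -P
# verify the HBA attributes after the reboot
lsattr -El fscsi0
# discover the newly mapped LUNs and list the disks
cfgmgr
lsdev -Cc disk
lsattr -El hdisk2
# set path priorities on all Data ONTAP LUNs if ALUA is not supported
dotpaths
# show LUN-to-HBA mapping and multipathing information
sanlun lun show
sanlun lun show -p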

IBM PC Server Training Handout

Contents: 1. IBM Product Study (1.1 IBM PC Server product introduction; 1.2 IBM PC Server official website; 1.3 IBM PC Server common manuals; 1.4 IBM PC Server common technologies; 1.5 IBM PC Server OS installation; 1.6 IBM PC Server common configurations; 1.7 IBM PC Server common software); 2. IBM PC Server Common Fault Diagnosis and Solutions (2.1 determining the status; 2.2 fault localization; 2.3 resolving common faults).

1. IBM Product Study. 1.1 IBM PC Server Product Introduction. 1. IBM Netfinity: generally speaking, the IBM x Series represents IBM's PC servers, and IBM's x Series concept originated with Netfinity, which is now rarely seen.

Geoframe Software Basics

Geoframe软件基本知识Greoframe 软件基本知识一,Greoframe 4.03 模块简介(2002.8 )P 包——测井包G 包——地质包可视化 seismic 数据管理:工区管理 project 过程 process 数据管理 Data Reservion——油藏包工具包 G 包中——Greology office 地质办公室 well composite 单井综合柱状图well pix 测井曲线名井地层对比储层参数集总 (储层统计) Ressum Cross section 油藏剖面的绘制井校正,测井数据.岩芯数据等编辑及函数运算 P 包中-------well edit ELANplus 测井解释 Rockcell 岩性分类多井岩性/岩相判别 Petrostat Geoplot 多井交会图显示,数据标准化与曲线关系拟合LithoQuick Look.二.Geoframe 4.03 工区管理(一).建工区及工区备份工区建立有两种方式:1.产生新工区,2.恢复工区(从本机或其他机备份数据工区的概念:小(20 兆). 中. 大(120 兆) 工区放井与 2D 导航数据在 oracle 数据库中,如果工区选择小的话,当井与 2D 导航数据增大时,它会自动扩容,建立表空间,作分散管理,这样工区的运行速度将变慢. 工区的内容: ----stand lone 多用户 ----share project(只放井的工区,只能读井的数据)不能完成井的解释任务 ----subproject(Geofame project)工区工区管理 Storage Edit Delete Merge Backup Recover 系统设置数据放在何处改变工区参数不合理的地方删除工区合并工区,通常情况不用,拷贝其他版本.不用的代码备份工区恢复工区 Accessrights Rebuild zndexes 自动做整个工区,提高速度,一般先要作备份工区备份 Backup-----Acrchive*.gfa 完全备份产生的*log 文件有些有用的信息Fast-backup*.gfb 增量备份 recover 恢复增量备份恢复完全备份选择性备份 1 解释输出(在 Seismic 中 Data manage) 2 Data Save(well Data)将整个油田井数据备份 3 2D Seismic 校正好的 Class 包括加载定义一起备份 4 在CP 备份整个 CP 目录,作一个 tar 文件(包括网格文件)从工区输入 (CP 是个单独模块,可以放在任何一个工区下运行) 使用命令备份remoto—gen---save.csh 选择性备份 restore:remite—recover.csh 关于基准面的问题: (Data Manager Users Guide---AppendixA) ER (elevation Reference) EIT (elebation at Time zere) 建工区给工区名——确认口令——从 DBA 选工区的大小. 类型——DBA 注册——连接数据库分配空间分开管理(解释地震速度 CPS 缺省)——创建工区的参数(米制) 地震基准面 North American 27 clake1866 zone 代理很重要 OTM zone number——19(二)地震数据的加载与备份 1 SEGX Dump: 带是否可读测线名字道头位置shotpoint 的数及在道头位置,增量是正是负采样率是否正确(必须是用微秒) 样点数是否正确有无坐标及在道头位置 EBCDIC (Extended Binary coded Decimal znterchange code)计算机对符号的代码 3200bytes reel header 卷头400bytes 240 240…… line 头 trace1 trace2 二进制二进制道头 one at the end of each file two EOT's mark on the end of tape End of File mark End of Tape mark (EOT) 2 SEGY formate line header 1 Sample rate 4000 microseconds=4ms 2 Number of Sample pertrace 3 Data Sample formate code 1—IBM32—bit 浮点 (4byte per sample) 2—IBM32—bit 整数 (4byte persample) 3—IBM16—bit 整数 (2byte per sample) 4—32bit point with gain(4byte per sample) Sample rate(ms) *(number of Sample)=length of the trace 4 1000 4000 Full Dump (完全 Dump) 2D 定义 * loc line CDP x y 3D 定义 3 个点面元大小 3 在 seismic 中 Data Manager (详细见中的数据管理) 1 地震 seY 输入输出 2 删除.拷贝.层位.断层输入输出 (缺省格式) 3 产生.改变 class,重新命名测线,产生 Time.Slice 切片 4 二维测站,删除等5 surface 管理可以转换为层位提供 IESX—UTIL 恢复工区 cd IESXPROG 定义格式 line read 表达式 expression trace read6 2D 测线加入中带桩号(找到对应关系) (三)数据加载及备份井数据 name checkshots OWI marker API log dat bottom location hole condition index elevation information flow direction deviation surrey 3 井数据加载井数据备份 ASCII 加载 Data Load 在井的各数管理中备份(打印输出) Data Save 1 关于 KB 的问题 KB 不是补正高,它是井到 project reference 之间 SCB Checkshot depth datumn USP 处理针对哪一个面做的加井时 well—Borehol—加 elevation—Reference ASCII 加载 (控制文件 Data——ASCII Load 批加载) Wu-asciiWu-galoaden*.ctl(放在缺省目录下) 2 (1) 井位加载文件: 井名 x ywell.ctl g25 Name well 1 OWI Borehole 1 Location x 2 Location y 3 (单井加入) (2) 井曲线 (log data) 原文件: 井深 las1 las2 控制文件 log.ctl DT us/m 2 RHOB g/cm3 3 GR gAPI 4 (3) 井分层 (well marker data ) (多井加入) 原文件 : 井名分层测深 g23 23 ^^^^^ g23 32 ^^^^^ g24 23 ^^^^^ g32 32 ^^^^^ 控制文件: (marker.ctl): Borhole-name start-wam Depth/Time Name (4) 井斜 (well deviation survey) 原文件: 井名 MD 垂深东分北分控制文件 MD m1 MD m2 DX m3 (单井加入) DY m4 1 2 3 (marker ram) MD TVD DX DY m1 m2 m3 m4 在 Borehole UIM (1)多井装入 (5) 井的 Checkshot (well checkshot survey) 原文件 TVD Time(s) 控制文件 TVD m (checkshot.ctl) TWOTIM s 装入给出是给出井名 (一个工区允许有多个 checkshot,也可把速度按这种方式装入) (6) 离散文件装入 (Scatter set Data) 原文件 X Y Z (如平均速度) 控制文件 X 1 Y 2 Property 3 UWI (Unique well Identifier) 井名(最长 40) API (the American Petroleum Institute) Well name 
属性用于区分多个版本,别各针对井,也用于家载 DLIS 数据 DLIS (Digital Log Interchange Standard) LAS (Log Interchange Standard) Well composed of Multiple Boreholes Elevation 是到参考面 datumis 垂直距离 MD (Measured Depth) True Vertical Depth (TVD) Two way Time (TWT) (需要做校正) Borehole working Datum Wellcheckshot Datum Well Deviation Datum Well Marker Datum Depth Grid 工区工作 datum 3.井数据 Data Load 加载 Data Load *.gf66 *.Lis *.Dlis 对油田井打包后加入,加的是二进制的文件,只看到文件的头, (preview 可以查看) 如果想选择性加入一些曲线,则——Library Filter option 1 不过滤. 单个文件 DLIS 4. 井数据备份 (井的数据直接在数据库) 1 将井的各种数据在各数据相应的管理窗口备份选择所要输出内容 (小打印机)输出选上 well symbol (SPACE)空格 5 2 Data save——打包 Archive *.gf66 所有井数据二进制测井曲线 DLIS ASCII x . y . marker 在 Data—general—右键—弹出油田—弹出油田有关井信息打开 TTC 转输在 Data save 下井数据会自动显示如果选 ASCII—no Depth index muti borehole and producer 在LIS 格式下 File number 1 代表每口井的数 2 3 每口井输出后需要 Apply 单口井输出 marker—ASCII 中—well marker 3 在 CPS 中——井数据可以打包(四)常用一些命令 (gf-db-admin.fm) 1. 在 Geoframe xterm window 启动proman Q 2. $CL-f 1 $CL-f-project name 查看谁在用计算机解锁 2 $CL-f project project password 3. gf-users 查看本机工区运行情况 client___ID process 4. gf__accounts 查看机本机名字,分配的 oracle gf__accounts-o (每个工区的所有元) gf__project-users free space project__passproj-changepasswd 允许 DBA 或用户来改变它们工区的口令 proj__delete工区工区口令 __update 三合成记录制作,时深关系建立 (synthetrcs) seismic---IESX---Application Synthetics seismic---synthetic sonic logs 选井---post---Time/Depth check shots 声波慢度两者任选其一 velocity 给初始速度值 coefficient 1 不用 checkshot,时深关系从声波中提取,可用correction 2 用 checkshot. (校正声波用 checkshot)选 checkshotpost---sonic scale marker Reflection 曲线刻度分层 wavelet seismic correlation Auxiliary log 子波地震(井旁道) 其中子波可以在 Tools---通过井旁道提,对子波编辑,分析,输入输出 Tools---漂移及局部拉伸在子波分析---Time varying 在面版 seismic 右键弹出选 borehole---在井柱上选---appearance---出现窗口 borehole appearance 可贴上合成记录(直接在合成记录的窗口) Tools—Backshif 在声波,marker 上漂.虽在时域工作互校正了时深关系 (整体漂移) stretch-sqeeze 局部拉伸选停泊点 anchor 停泊 2 键选点 1 键拉伸 relocity surey 编辑 checkshot 后校正声波曲线 wavelet Extract Analytical Edit Time varying 产生时变子波 Import/export 合成记录完成后三键---Save 存在合成记录---在地震解释在 Boreholeappearance 中完成合成记录定义---update well (更新井集)再贴在剖面上(表现一道或多道) 在 Tools---Synthetic---上下键记住时间值,回到合成记录中,重新做合成记录四 .地震资料综合解释 Interpretation: 2D/3D 综合解释,底图 Basemap Data Manager: 数据管理 Computation Manager 属性计算,方差体 Mistie Analysis 自动追踪 Automatic Picking IESX Surface Slice Synthetics 合成记录制作 Geoviz 可视化 Geoviz expore Basic TD 时深转换Interpetation Model Manager 解释模型管理 (一) 解释模型管理启动Interpretation ----选 Model 创建 Model Model Manager ----Populate 合并 7 ----两种方式 1 Assign 全拷(会覆盖相同的层位) 2 Clone 拷贝(选择性拷贝,不会覆盖相同的层位) 选 Source Model 对要拷贝层位,断层选择 (二)自动追踪 Automatic Picking---ASAP AutoPix 算法不一样 ASAP(Automatic Seismic Area Picken) ----在 Edit 定义一个 Seismic Area Path 多边形 ----数据种子类 Tracking option Area 断层边界追踪方法----先作 Advanced 追踪的数据在 Horizon Function----Attrebute Erase 删除最好先备份在解释窗口 Areal----Horizon Copy 备份下换成另一个名字,再做自动追踪或删除(三)数据管理 Data Manage 在 IESX 放的数据 3D/2D 测线工线 Surface(层位,断层) 解释(属性,断层 Cuts, Contacts) 网格文件在 Geoframe 中所有的3D 层位属性存在数据库以网格形式 2D 层位属性以线数据存在 Geoframe 下列属性自动产生:Time Snap-Criteria Interpretation-origin Pick-Quality (地震解释,ASAP,AutoPix) 逆断层由 Horizon Patches 进行管理(1,2,3)LCS( Local Coordinate System) IESX ing-f/dev/rmt/on fsf/ 绕过另一个文件解锁 $CL 大写 $IESXPROG/Da-clear 全称作切割线输出,先产生2D,后输出工具 IESX cd $IESXPROG ies_Util 1 12 22 42 44 2 终止退出其中有的恢复工区 Path----定义多边形----输出层位(控制范围) 收敛 (四)属性,方差体产生选测站,选要运行的程序----输出文件放在 DataManangers----Interpretation Data 找到可以删除删除 (五)常见问题 1. 优先级管理 ---Baspman---user---preferme---presence1,2,3---3—2—1切 ----surveys 2. 手工加的 marker 显示不出来.在 Source 中加入GF-Loader 3. 输入层位必须是缺省 5 项记录 X Y line CDP Time 4. 
将 IESX 中的层位按 X,Y,Z 输出,作为离散文件加入,将离散文件网格 Gridding,再将网格文件还原成 Surface (Data Manage---解释---Surface 完成 5. 属性产生与删除, 在数据管理下解释数据中删除 6. 移动底图上粘贴顺序 7. 斜线的切割 8. 数据加载中 Raviable fix 9. 计算井斜-----倾角问题 10. 加入深度域的地震数据,在剖面显示中 User—改变显示状态 Metric 缺省的是 11.User----refrenence----Fault(从 Segment---(变成) Symbol)能把断层的闭合点显示大一些五.Geoviz. 可视化 (相干数据体) Geoviz explore Data Manage—Geoviz—只有显示功能 Visualization File—Volume Load (三维数据体加载) 看有没有空洞,颜色是否均匀去掉不要的东西—点黑如 Box Map 等对地震 ----在 Tools—Modify 下修改颜色网格层位/断层体Tools—Volume—Pan—解释(如加断层)必须在+状态解释右键退出解释状态Fault Surface/Volume (产生断层面) 解释完后可以可以产生断层面,以后的断层解释可用 XN GT R O 外推限制不限制 Shift+P 去掉立体框 Voxel Picking—种子体检测 Shift+1 键点种子点 Covert Detected Volume—产生层位体—输出层面存顶,底 1.井为空井 (0,0) 2.空 2D,3D Surface 3.2D 导航数据出错出现故障而不能找到数据 4.速度异常 Tool—Modif—修改层位—Coloning—Chang Attribute—选 Grid ( 产生各种属性数据体的时间网)--Modify Object—Modify Surface—Coloning Grid—改变颜色 9 Geofram 4.0 版本软件使用手册第一部分基础知识一,系统介绍 Geofram 4.0 系统是schlumber 公司针对解决测井,地震,油藏, 公司针对解决测井,地震,油藏, 地质以及综合研究等问题开发的较为完整的集成软件. 所示, 地质以及综合研究等问题开发的较为完整的集成软件.如图 1-1 所示,本等问题开发的较为完整的集成软件软件共划分井眼地质 ( geolog ) 岩石物理 ,油藏描述(reservoir) (petrophysics) 油藏描述(reservoir),地震 petrophysics)油藏描述(reservoir), , seismic) 可视化地震(visualization) ,可视化地震(visualization)和工 ( seismic ) 可视化地震 (visualization) 和工 , 具(utility)等六个部分.系统管理配备 3 个基 (utility)等六个部分. 等六个部分本管理工具:项目管理工具( mandge) 本管理工具:项目管理工具(project mandge) , 工作流程管理器( manage) 工作流程管理器(process manage)及数据管理 manage) 该系统具有如下特色: .该系统具有如下特色器(data manage) 该系统具有如下特色: . 关系型数据库管理: *基于 Oracle 关系型数据库管理:使得解释参数,描述信息及离散,连续采样的信息数据, 参数,描述信息及离散,连续采样的信息数据, 样的信息数据均以数据指针信息存放在项目数据库的目录中. 均以数据指针信息存放在项目数据库的目录中. 数据入库后不需知道数据以什么样格式存放再什么地方,只进行聚焦即可得到快速查询. 什么地方,只进行聚焦即可得到快速查询. 可靠的安全性:设置了用户存储权限, *可靠的安全性:设置了用户存储权限,使得数据管理更为方便; 得数据管理更为方便;图 1-1 *极大提高工作效率:合理的项目配置,减少不必要的数据重复,使数极大提高工作效率:合理的项目配置,减少不必要的数据重复, 据深度,单位,坐标系统等转换更为方便,省时; 据深度,单位,坐标系统等转换更为方便,省时; *强大的可操作性:所有程序军采用窗口,菜单选项交互运行,加大可强大的可操作性:所有程序军采用窗口,菜单选项交互运行, 视化程序. 视化程序. 二,系统登陆当在界面敲入用户名和密码之后, 所视界面, 当在界面敲入用户名和密码之后 , 出现如图 1-2 所视界面 , 选择版本系统, 所示界面, GeoFram 4.0.3 进入 GeoFram4.0 版本系统,出现图 1-3 所示界面,选择其中一个用户,敲入密码 ( pas sword 图 1-2 图 1-3 ),当系统确认后在单击Application manager 即可完成系统的完全登陆. 即可完成系统的完全登陆. 所示界面.可选择进行下一步工作. 出现如图 1-4 所示界面.可选择进行下一步工作. 11 三,数据加载进入系统后, 管理器, 进入系统后,单击系统界面中的 Data manager 管理器,出现数据管图 1-4 图 1-5 理主窗口(如图 1-4) 理主窗口( . 窗口, 选择 loaders and unloaders 出现如图 1-5 窗口, 其中ASCIL Load 表数据加载; 数据加载; 表示数据存示 ASCIL 数据加载;data load 为 Dlis 数据加载;data save 表示数据存储.其具体操作方法如下: 其具体操作方法如下: a,ASCII LOAD ASCII load 模块主要用来加载文本数据,具体步骤如下: 具体步骤如下: 将 ASCII LOAD 模块调入工作流程管理器中,单击鼠标左键,则可掉入主窗口( 器中,单击鼠标左键,则可掉入主窗口(如图1-6) : 图 1-6 1) 2) 3) 中选择所加数据文件名称; 在 input file 中选择所加数据文件名称; 中填入控制文件名; 在 control file 中填入控制文件名; 在 create file 中可编写所加载的数据.选择所需要的曲线; 载的数据.选择所需要的曲线; 4) 按 run 键即可完成 ascii 数据的加载. 的加载. 模块. b , data load 模块 . 本模块主要用 LIS,DLIS,BIT, 等格式的途是将LIS,DLIS,BIT,LA716 等格式的数据加入数据库中.具体步骤如下: 数据加入数据库中.具体步骤如下: 在 Geofram 的 P 包中,将 Data load 包中, 图 1-7 模块调入工作流程管理器,并用鼠标左键双击, 模块调入工作流程管理器,并用鼠标左键双击,则可调入 Data load 主窗口(如图 1-5) : ,在对话框中假如你想要加入的数据文件名称(LIS, 1) 在 input file 对话框中假如你想要加入的数据文件名称(LIS, , DLIS,BIT, 等格式) DLIS,BIT,LA716 等格式) ; 2) 在targetr field 中假如油田名称; ,在 , 中假如油田名称; ,在中输入井名; 3) 在 target well 中输入井名; , ,在中输入井眼名称; 4) 在 target borehole 中输入井眼名称; , ,在中输入公司名(可选) 5) 在 producer 中输入公司名(可选) , ; ,点击按纽,即可将所要数据加入数据库中. 6) 点击 run 按纽,即可将所要数据加入数据库中. , 13 资料预处理( edit) 第二部分资料预处理(well edit) 因为各种井况,仪器,人为等因素造成测井原始资料深度不统一,因为各种井况,仪器,人为等因素造成测井原始资料深度不统一, 局部出现畸点等,因此在室内对其进行修改和编辑非常重要, 局部出现畸点等,因此在室内对其进行修改和编辑非常重要,Geofram 系统软件就提供这样一个方便快洁的编辑模块——well 模块, 统软件就提供这样一个方便快洁的编辑模块——well edit 模块,我门可——以利用它来队原始数据资料进行处理.其工作步骤如下: 以利用它来队原始数据资料进行处理.其工作步骤如下: 一, 进入编辑窗口, 如图 2-1 所示进入 p 包后直接选择 Welledit 进入编辑窗口, 模块即可. 模块即可. 窗口. 
二,点击 Welledit 右键出现图 2-2 窗口.其中 inspect 表示聚焦选择井文件, 表示运行程序表示退出该模块, 井文件,run 表示运行程序,exit 表示退出该模块,abort 表示中断目前运行方式. 运行方式. 图2-1 图 2-2 图 2-3 编辑模块主界面. 所示, 三,聚焦数据后即可进入welledit 编辑模块主界面.如图 2-3 所示, 模块, 然后根据需要分别点击Edit ,depth match 和 splice 模块,可以根据需求进行井编辑,其中部分快捷功能键用途说明如下: 求进行井编辑,其中部分快捷功能键用途说明如下: 各快洁键使用说明如下: 各快洁键使用说明如下: 第三部分 Utility PlotUtility Plot 是 Geofram 系统中的一个交会图工具模块,主要功能是进行 XY 交会 XY- 值图,直方图, 图,XY-Z 值图,直方图,玫瑰图等, 瑰图等,主要窗口如图 3-1. 该模块使用方法如下: 该模块使用方法如下: 1, 点击 Data Focus 模块进行数据聚焦,即选择交会曲线所在位置; 模块进行数据聚焦,即选择交会曲线所在位置; 15 图 3-1 选项中分别输入所交会曲线的起止深度; 2, 在 Top 和 Bottom 选项中分别输入所交会曲线的起止深度; 选项中选择所需要图形类别, 3, 在 Application Type 选项中选择所需要图形类别, 分别为 Cross Plot 值图) Historm(直方图 Polar(极坐标图直方图) 极坐标图) (交会图),Z Plot (Z 值图),Historm(直方图),Polar(极坐标图)等, 交会图) 现以交会图为例进行说明. 现以交会图为例进行说明. 4, 单击 Presentation File 按钮所示图版, 出现图 3-2 所示图版,依据需要选择所需要的类型; 要选择所需要的类型; 5, 单击 Overlay File 进入图 3-3 所示图版, 所示图版,依据需要选择所需要的类型; 要的类型; 所对应曲线; 6, 选择 X,Y 所对应曲线; Run,进入图形显示窗口( ,即可完成本次交会工作 7, 单击 Run,进入图形显示窗口(图 3-4) 即可完成本次交会工作. ,即可完成本次交会工作. 注:完成以上步骤后可以根据需要对所选择曲线进行优选编辑, 需要对所选择曲线进行优选编辑, 回归. 回归. 图 3-2 图 3-3 图 3-4 第四部分常规测井资料处理技术(PVP) 常规测井资料处理技术(PVP) 对于常规资料处理模块, 两种, 对于常规资料处理模块,目前只有 PVP 和 ElanPlus 两种,在这两种模块之中,前者所需参数相对较少,应用层次明显,做法相对简单; 块之中,前者所需参数相对较少,应用层次明显,做法相对简单;而后者所需参数多,计算复杂,因此, 处理模块进行说明. 所需参数多,计算复杂,因此,本文主要对 PVP 处理模块进行说明. 第五部分一电成像的处理流程特殊资料处理(声,电成像) 特殊资料处理( 电成像) 该套软件可以处理三大测井公司的多系列电成像及井周成像资料, 电成该套软件可以处理三大测井公司的多系列电成像及井周成像资料, FMI,ARI,STARII,EMI, USI,CBIL,CAST. 像包括 FMI,ARI,STARII,EMI,井周成像包括 USI,CBIL,CAST.对于这些测井系列,其处理原理和过程基本一致, 些测井系列,其处理原理和过程基本一致,其处理框图见图 1. 17 数据的转换与加载对于斯伦贝谢的测井资料是先数据加载, 对于斯伦贝谢的测井资料是先数据加载,再通过 BHGeol Formatter 将原始数据包解压,快慢道数据分开,检查数据的完整性. 原始数据包解压,快慢道数据分开,检查数据的完整性.而对于另外两家 Converter 进行数据转换, 测井公司的资料则是先利用 BHGeol Converter 进行数据转换,再将数据加载到数据库中. 载到数据库中. 磁场及加速度数据的质量检查, 2) GPIT Survey 磁场及加速度数据的质量检查,GPIT 的正确与否是进行余下处理的关键, 进行余下处理的关键,正确的磁场资料和加速度资料其 X 和 Y 轴的分量应为中心的圆或弧. 是以 O 为中心的圆或弧. 对数据进行预处理, BorEID 对数据进行预处理,并形成静态图象资料对数据进行环境,加速度及死电极的校正, 对数据进行环境,加速度及死电极的校正,校正后的数据进行均衡处理,输出静态平衡的图象, 输出静态平衡的图象,该图象反映了整个测量井段的微电阻率的变化. 该图象反映了整个测量井段的微电阻率的变化. BorScal 微电阻率成像测井资料的刻度成像测井的微电阻率与微球测的浅电阻有一定的差别, 用浅电阻率冲 ( 成像测井的微电阻率与微球测的浅电阻有一定的差别, 洗带)刻度成像的微电阻率,使其比较准确的代表井眼周围的地层.该模刻度成像的微电阻率,使其比较准确的代表井眼周围的地层. 块只对定量计算裂缝参数时才有用, 且只适用于斯伦贝谢的成像测井资料. 块只对定量计算裂缝参数时才有用, 且只适用于斯伦贝谢的成像测井资料. BorDip 自动计算构造倾角或沉积倾角用相关对比法自动计算地层倾角.可以改变计算的窗长和步长分别求用相关对比法自动计算地层倾角. 取构造倾角和沉积倾角. 取构造倾角和沉积倾角. BorNor 图象的动态加强输出的静态图象是全井段大范围电阻率变化之间的颜色刻度, 输出的静态图象是全井段大范围电阻率变化之间的颜色刻度,由于有限的颜色资源,使静态图象不能清晰的反映井眼地质特征. 限的颜色资源,使静态图象不能清晰的反映井眼地质特征.图象的动态加强是在每0 . 5 米配一次色,这种图象可用于识别岩层中各种细微的特征米配一次色, 和构造变化. 和构造变化. BorView BorView 人机交互解释这是微电阻率扫描成像测井最关键的一步,通过人机交互解释, 关键的一步,通过人机交互解释,识别出资料上反映的各种地质特征如沉积构造,裂缝等特征, 沉积构造,裂缝等特征,并进行详细的计算及统计. 的计算及统计. 8)Data Save 数据输出将处理解释的各项结果存盘输出,并对裂缝进行定量计算. 并对裂缝进行定量计算. 19 二偶极子横波的处理流程该软件只处理斯伦贝谢的偶极子横波资料.该软件只处理斯伦贝谢的偶极子横波资料.其处理流程比较复杂, 其处理流程比较复杂,包括资料的预处理和解释处理. 资料的预处理和解释处理. (一)资料的预处理其预处理流程见图 2 主要分以下几个步骤: 主要分以下几个步骤: 1,DataLoad 数据的加载读取声波波形数据,对输入的数据进行解压, 处理,2,SWP 读取声波波形数据,对输入的数据进行解压,预处理,输出一些后处理所必需的输入参数,产生波形数据集; 些后处理所必需的输入参数,产生波形数据集; 计算出时差/时间平面上的所有相似系数值, 3,STC 计算出时差/时间平面上的所有相似系数值,通过峰值的最大相关性在平面上形成一个等高线图,在等高线图上最大相关峰值给出了时差关性在平面上形成一个等高线图, 和时间. 和时间. Sonic Labelling 声波标定计算输出的峰值矩阵中进行峰值标定,求出相应的纵, 在 STC 计算输出的峰值矩阵中进行峰值标定,求出相应的纵,横及斯通利波时差. 通利波时差. Label Editor 标定声波的编辑对提取的声波时差进行编辑. 对提取的声波时差进行编辑. 对提取的纵波,横波及斯通利波时差进行井眼补偿校正. 6,MBHC 对提取的纵波,横波及斯通利波时差进行井眼补偿校正. Vp/Vs,泊松比以及纵波,横波及斯通利波的传播时 7,SPP 用来计算 Vp/Vs,泊松比以及纵波,横波及斯通利波的传播时间. (二)解释处理偶极横波资料可以用来进行地层评价, 偶极横波资料可以用来进行地层评价,进行地层各向异性分析, 进行地层各向异性分析,岩石机械强度分析,目前我们主要有以下几个方面的应用. 械强度分析,目前我们主要有以下几个方面的应用. 斯通利波的处理应用流程具体的流程见图 3. 
Sonic Fracture 利用斯通利波指示裂缝的张开度斯通利波在遇到裂缝时, 斯通利波在遇到裂缝时,其透射波和反射波会发生变化, 其透射波和反射波会发生变化,利用计算的透射系数与反射系数的结合, 的结合,可以提高裂缝评价的效果, 缝评价的效果,反映裂缝的有效性,反射裂缝的有效性, 系数的大小在一定程度上反映了裂缝的张开度, 的张开度,但目前还没有岩心资料来标定,因此还只是半定量的分析. 量的分析. permeability 2) Stoneley permeability 利用斯通利波计算地层渗透率斯通利波的传播受各种因素的影响, 其中包括骨架渗透率和开口裂缝. 斯通利波的传播受各种因素的影响, 其中包括骨架渗透率和开口裂缝. 因此斯通利波的时差, 幅度衰减等异常往往和地层渗透性有很好的相关性. 因此斯通利波的时差, 幅度衰减等异常往往和地层渗透性有很好的相关性. 根据毕奥模型,利用测量的斯通利波时差与计算的弹性的斯通利波的时差根据毕奥模型, 21 来计算渗透率, 来计算渗透率,在计算弹性斯通利波时差时关键的参数是泥浆时差. 在计算弹性斯通利波时差时关键的参数是泥浆时差.另外, 另外, 利用纵波, 横波时差与地层密度合成的理论上的斯通利波时差(DTSTE)与实利用纵波, 横波时差与地层密度合成的理论上的斯通利波时差(DTSTE)与实(DTSTE) 测的斯通利波时差的差值,可以作为流体移动的指数. 测的斯通利波时差的差值,可以作为流体移动的指数. 3)Sonic Waveform Energy 利用该模块计算斯通利波的微差能量判断地层裂缝的有效性.当井壁有有效裂缝存在时,则钻井液沿着裂缝判断地层裂缝的有效性.当井壁有有效裂缝存在时, 流进或流出,从而消耗斯通利波的能量,使其幅度降低, 流进或流出,从而消耗斯通利波的能量,使其幅度降低,反之在无效缝处就不会产生能量衰减, 就不会产生能量衰减,但还要注意泥饼的影响, 泥饼的影响,因泥饼要阻止流体在裂缝中流动,微差能量越大, 裂缝中流动,微差能量越大,说明裂缝的有效性越好. 裂缝的有效性越好. 偶极横波的处理应用流程( 偶极横波的处理应用流程 ( 见。

Common Software Used in Geology

1. Geological drawing, vectorization, and CAD software. 1. GeoMap 3.2 geological drawing software package. Version: 3.2. Platform: Windows 98/NT/2000/XP. Description: GeoMap 3.2 is suitable for producing many kinds of drawings: geological plan maps (structure maps, contour maps, sedimentary facies maps, geological maps, and so on), cross sections (geological sections, well-log curve plots, seismic sections, lithologic columns, well correlation sections, and so on), statistical charts, ternary diagrams, geographic maps, and engineering plans (road distribution maps, pipeline routing maps, and so on).

The GeoMap geological mapping system is widely used in petroleum exploration and development, geology, coal, forestry, agriculture, and other fields, and is currently one of the more widely used CAD packages for petroleum geology in China.

Related software includes the following specialized mapping systems: the GeoCon reservoir connectivity map generation system, the GeoCol comprehensive geological column editing system, the GeoMapD reservoir development mapping system, the GeoStra stratigraphic correlation editing system, the GeoMapBank online graphic and document database management system, the GeoReport geological multimedia presentation system, and the OE target evaluation software.

2. MAPGIS. Version: 6.5. Platform: Windows 98/NT/2000/XP. Description: graphics vectorization and editing software and a large tool-type geographic information system that supports effective acquisition, integrated management, comprehensive spatial analysis, and visualization of multi-source geoscience data such as numeric data, text, maps, and remote-sensing images.

It can produce complex geological maps of publication quality and supports management of massive seamless map databases as well as efficient spatial analysis.

It has powerful graphic editing capabilities.

3. NDS well-log curve vectorization. Version: 4.16. Platform: Windows 98/NT/2000. Description: well-log curve vectorization, including NDSlog, NDSmap, and related tools. 4. SDI CGM Editor. Version: 2.00.50. Platform: Windows. Description: a CGM drawing tool, including graphic conversion and map mosaicking.

Compared with Larson CGM Studio it has the following advantages: 1. Larson imports an existing CGM file as a whole and cannot modify it; 2. hotspots added in Larson cannot jump between objects within the same file.

SDI CGM Editor can do both.

5. SDI CGM Office. Version: 2.00.50. Platform: Windows. Description: displays CGM v1 - v4, ATA, CGM+, PIP, WebCGM, dwg/dxf, pdf, ps, hpgl, plt, emf, tiff, jpeg, png, bmp, and xwd files.

OpenNI_UserGuide

User GuideTable of ContentsLicense Notice (4)Overview (4)Natural Interaction (4)What is OpenNI? (4)Abstract Layered View (5)Concepts (6)Modules (6)Production Nodes (7)Production Node Types (8)Production Chains (9)Capabilities (11)Generating and Reading Data (12)Generating Data (12)Reading Data (12)Mock Nodes (13)Sharing Devices between Applications and Locking Nodes (13)Licensing (13)General Framework Utilities (14)Recording (14)Production Node Error Status (15)Backwards Compatibility (15)Getting Started (15)Supported Platforms (15)Main Objects (16)The Context Object (16)Metadata Objects (16)Configuration Changes (16)Data Generators (17)User Generator (18)Creating an empty project that uses OpenNI (18)Basic Functions: Initialize, Create a Node and Read Data (19)Enumerating Possible Production Chains (20)Understanding why enumeration failed (21)Working with Depth, Color and Audio Maps (22)Working with Audio Generators (23)Recording and Playing Data (24)Recording (24)Playing (25)Node Configuration (26)Configuration Using XML file (27)Licenses (28)Log (28)Production Nodes (29)Global Mirror (29)Recordings (29)Nodes (29)Queries (30)Configuration (31)Start Generating (33)Building and Running a Sample Application (33)NiSimpleRead (34)NiSimpleCreate (34)NiCRead (34)NiSimpleViewer (34)NiSampleModule (35)NiConvertXToONI (35)NiRecordSynthetic (35)NiViewer (35)NiBackRecorder (36)Troubleshooting (38)Glossary (38)License NoticeOpenNI is written and distributed under the GNU Lesser General Public License which means that its source code is freely-distributed and available to the general public.You can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.OpenNI is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details: </licenses/>. OverviewNatural InteractionThe term Natural Interaction (NI) refers to a concept where Human-device interaction is based on human senses, mostly focused on hearing and vision. Human device NI paradigms render such external peripherals as remote controls, keypads or a mouse obsolete. Examples of everyday NI usage include:∙Speech and command recognition, where devices receive instructions via vocal commands. ∙Hand gestures, where pre-defined hand gestures are recognized and interpreted to activate and control devices. For example, hand gesture control enables users to manage living room consumer electronics with their bare hands.∙Body Motion Tracking, where full body motion is tracked, analyzed and interpreted for gaming purposes.What is OpenNI?OpenNI (Open Natural I nteraction) is a multi-language, cross-platform framework that defines API s for writing applications utilizing Natural Interaction. OpenNI APIs are composed of a set of interfaces for writing NI applications. The main purpose of OpenNI is to form a standard API that enables communication with both:∙Vision and audio sensors (the devices that ‘see’ and ‘hear’ the figures and their surroundings.)∙Vision and audio perception middleware (the software components that analyze the audio and visual data that is recorded from the scene, and comprehend it). 
For example, software that receives visual data, such as an image, returns the location of the palm of a handdetected within the image.OpenNI supplies a set of APIs to be implemented by the sensor devices, and a set of APIs to be implemented by the middleware components. By breaking the dependency between the sensor and the middleware, OpenNI’s API enables a pplications to be written and ported with no additional effort to operate on top o f different middleware modules (“write once, deploy everywhere”). OpenNI's API also enables middleware developers to write algorithms on top ofraw data formats, regardless of which sensor device has produced them, and offers sensor manufacturers the capability to build sensors that power any OpenNI compliant application. The OpenNI standard API enables natural-interaction application developers to track real-life (3D) scenes by utilizing data types that are calculated from the input of a sensor (for example, representation of a full body, representation of a hand location, an array of the pixels in a depth map and so on). Applications can be written regardless of the sensor or middleware providers. OpenNI is an open source API that is publicly available at .Abstract Layered ViewFigure 1 below displays a three-layered view of the OpenNI Concept with each layer representing an integral element:∙Top: Represents the software that implements natural interaction applications on top of OpenNI.∙Middle: Represents OpenNI, providing communication interfaces that interact with both the sensors and the middleware components, that analyze the data from the sensor.∙Bottom: Shows the hardware devices that capture the visual and audio elements of the scene.ConceptsModulesThe OpenNI Framework is an abstract layer that provides the interface for both physical devices and middleware components. The API enables multiple components to be registered in the OpenNI framework. These components are referred to as modules, and are used to produce and process the sensory data. Selecting the required hardware device component, or middleware component is easy and flexible.The modules that are currently supported are:Sensor Modules∙3D sensor∙RGB camera∙IR camera∙Audio device (a microphone or an array of microphones)Middleware components∙Full body analysis middleware: a software component that processes sensory data and generates body related information (typically data structure that describes joints,orientation, center of mass, and so on).∙Hand point analysis middleware: a software component that processes sensory data and generates the location of a hand point∙Gesture detection middleware: a software component that identifies predefined gestures (for example, a waving hand) and alerts the application.∙Scene Analyzer middleware: a software component that analyzes the image of the scene in order to produce such information as:o The separation between the foreground of the scene (meaning, the figures) and the backgroundo The coordinates of the floor planeo The individual identification of figures in the scene.ExampleThe illustration below displays a scenario in which 5 modules are registered to work with an OpenNI installation. Two of the modules are 3D sensors that are connected to the host. 
The other three are middleware components, including two that produce a person’s full-body data, and one that handles hand point tracking.Modules, whether software or actual devices that wish to be OpenNI compliant, must implement certain interfaces.Production NodesOpenNI defines Production Nodes, which are a set of units that have a productive role in the process of creating the data required for Natural Interaction based applications. Each production node can use other lower level production nodes (read their data, control their configuration and so on), and be used by other higher level nodes, or by the application itself. ExampleThe application wants to track the motion of a human figure in a 3D scene. This requires a production node that produces body data, or, in other words, a user generator. This specific user generator obtains its data from a depth generator. A depth generator is a production node that is implemented by a sensor, which takes raw sensory data from a depth sensor (for example, a stream of X frames per second) and outputs a depth map."Meaningful"3D data is defined as data that can comprehend, understand and translate the scene. Creating meaningful 3D data is a complex task. Typically, this begins by using a sensor device that produces a form of raw output data. Often, this data is a depth map, where each pixel is represented by its distance from the sensor. Dedicated middleware is then used to process this raw output, and produce a higher-level output, which can be understood and used by the application.Common examples of higher level output are as described and illustrated below:∙The location of a user’s hand.The output can be either the center of the palm (often referred to as ‘hand point’) or the finger tips.∙The identification of a figure within the scene.The output is the current location and orientation of the joints of this figure (often referred to as ‘body data’).∙The identification of a hand gesture (for example, waving).The output is an alert to the application that a specific hand gesture has occurred.Production Node TypesEach production node in OpenNI has a type and belongs to one of the following categories:∙Sensor-related Production Nodes∙Middleware Related Production NodesThe production node types that are currently supported in OpenNI are:Sensor Related Production Nodes∙Device: A node that represents a physical device (for example, a depth sensor, or an RGB camera). The main role of this node is to enable device configuration.∙Depth Generator: A node that generates a depth-map. This node should be implemented by any 3D sensor that wishes to be certified as OpenNI compliant.∙Image Generator: A node that generates colored image-maps. This node should be implemented by any color sensor that wishes to be certified as OpenNI compliant∙IR Generator: A node that generates IR image-maps. This node should be implemented by any IR sensor that wishes to be certified as OpenNI compliant.∙Audio Generator: A node that generates an audio stream. 
This node should be implemented by any audio device that wishes to be certified as OpenNI compliant.Middleware Related Production Nodes∙Gestures Alert Generator: Generates callbacks to the application when specific gestures are identified.∙Scene Analyzer: Analyzes a scene, including the separation of the foreground from the background, identification of figures in the scene, and detection of the floor plane.The Scene Analyzer’s main output is a labeled depth map, in which each pixel holds a label that states whether it represents a figure, or it is part of the background.∙Hand Point Generator: Supports hand detection and tracking. This node generates callbacks that provide alerts when a hand point (meaning, a palm) is detected, and when a hand point currently being tracked, changes its location.∙User Generator: Generates a representation of a (full or partial) body in the 3D scene.For recording purposes, the following production node types are supported:∙Recorder: Implements data recordings∙Player: Reads data from a recording and plays it∙Codec: Used to compress and decompress data in recordingsProduction ChainsAs explained previously, several modules (middleware components and sensors) can be simultaneously registered to a single OpenNI implementation. This topology offers applications the flexibility to select the specific sensor devices and middleware components with which to produce and process the data.What is a production chain?In the Production Nodes section, an example was presented in which a user generator type of production node is created by the application. In order to produce body data, this production node uses a lower level depth generator, which reads raw data from a sensor. In the example below, the sequence of nodes (user generator => depth generator), is reliant on each other in order to produce the required body data, and is called a production chain.Different vendors (brand names) can supply their own implementations of the same type of production node.Example:Brand A provides an implementation (a module) of user generator middleware. Brand B provides separate middleware that implements a user generator. Both generators are available to the application developer. OpenNI enables the application to define which modules, or production chain, to use. The OpenNI interface enumerates all possible production chains according to the registered modules. The application can then choose one of these chains, based on the preference for a specific brand, component, or version and so on, and create it.Note: An application can also be non-specific, and request the first enumerated production chain from OpenNI.Typically, an application is only interested in the top product node of each chain. This is the node that outputs the required data on a practical level, for example, a hand point generator. OpenNI enables the application to use a single node, without being aware of the production chain beneath this node. For advanced tweaking, there is an option to access this chain, and configure each of the nodes.For example, if we look at the system illustration that was presented earlier, it described multiple registered modules and devices. 
Once an application requests a user generator, OpenNI returns the following four optional production chains to be used to obtain body data:The above illustration shows a scenario in which the following modules were registered to OpenNI:∙Two body middleware components, each being different brands.∙Two 3D sensors, each being two different brandsThis illustration displays the four optional production chains that were found for this implementation. Each chain represents a possible combination of a body middlewarecomponent and a 3D sensor device. OpenNI offers the application the option to choose from the above four production chain alternatives.CapabilitiesThe Capabilities mechanism supports the flexibility of the registration of multiple middleware components and devices to OpenNI. OpenNI acknowledges that different providers may have varying capabilities and configuration options for their production nodes, and therefore, certain non-mandatory extensions are defined by the OpenNI API. These optional extensions to the API are called Capabilities, and reveal additional functionality, enabling providers to decide individually whether to implement an extension. A production node can be asked whether it supports a specific capability. If it does, those functions can be called for that specific node. OpenNI is released with a specific set of capabilities, with the option of adding further capabilities in the future. Each module can declare the capabilities it supports. Furthermore, when requesting enumeration of production chains, the application can specify the capabilities that should be supported as criteria. Only modules that support the requested capability are returned by the enumeration.Currently supported capabilities:∙Alternative View:Enables any type of map generator (depth, image, IR) to transform its data to appear as if the sensor is placed in another location (represented by anotherproduction node, usually another sensor).∙Cropping: Enables a map generator (depth, image, IR) to output a selected area of the frame as opposed to the entire frame. When cropping is enabled, the size of the generated map is reduced to fit a lower resolution (less pixels). For example, if the map generator is working in VGA resolution (640x480) and the application chooses to crop at 300x200, the next pixel row will begin after 300 pixels. Cropping can be very useful for performance boosting.∙Frame Sync: Enables two sensors producing frame data (for example, depth and image) to synchronize their frames so that they arrive at the same time.∙Mirror: Enables mirroring of the data produced by a generator. Mirroring is useful if the sensor is placed in front of the user, as the image captured by the sensor is mirrored, so the right hand appears as the left hand of the mirrored figure.∙Pose Detection: Enables a user generator to recognize when the user is posed in a specific position.∙Skeleton: Enables a user generator to output the skeletal data of the user. This data includes the location of the skeletal joints, the ability to track skeleton positions and the usercalibration capabilities.∙User Position: Enables a Depth Generator to optimize the output depth map that is generated for a specific area of the scene.∙Error State: Enables a node to report that it is in "Error" status, meaning that on a practical level, the node may not function properly.∙Lock Aware: Enables a node to be locked outside the context boundary. 
Generating and Reading Data

Generating Data
Production nodes that also produce data are called Generators, as discussed previously. Once these are created, they do not immediately start generating data, to enable the application to set the required configuration. This ensures that once the object begins streaming data to the application, the data is generated according to the required configuration. Data Generators do not actually produce any data until specifically asked to do so. The xn::Generator::StartGenerating() function is used to begin generating. The application may also want to stop the data generation without destroying the node, in order to store the configuration, and can do this using the xn::Generator::StopGenerating() function.

Reading Data
Data Generators constantly receive new data. However, the application may still be using older data (for example, the previous frame of the depth map). As a result of this, any generator should internally store new data, until explicitly requested to update to the newest available data. This means that Data Generators "hide" new data internally, until explicitly requested to expose the most updated data to the application, using the UpdateData request function. OpenNI enables the application to wait for new data to be available, and then update it using the xn::Generator::WaitAndUpdateData() function.

In certain cases, the application holds more than one node, and wants all the nodes to be updated. OpenNI provides several functions to do this, according to the specifications of what should occur before the UpdateData occurs:
∙ xn::Context::WaitAnyUpdateAll(): Waits for any node to have new data. Once new data is available from any node, all nodes are updated.
∙ xn::Context::WaitOneUpdateAll(): Waits for a specific node to have new data. Once new data is available from this node, all nodes are updated. This is especially useful when several nodes are producing data, but only one determines the progress of the application.
∙ xn::Context::WaitNoneUpdateAll(): Does not wait for anything. All nodes are immediately updated.
∙ xn::Context::WaitAndUpdateAll(): Waits for all nodes to have new data available, and then updates them.
The above four functions exit after a timeout of two seconds. It is strongly advised that you use one of the xn::Context::Wait[…]UpdateAll() functions, unless you only need to update a specific node. In addition to updating all the nodes, these functions have the following additional benefits:
∙ If nodes depend on each other, the function guarantees that the "needed" node (the lower-level node generating the data for another node) is updated before the "needing" node.
∙ When playing data from a recording, the function reads data from the recording until the condition is met.
∙ If a recorder exists, the function automatically records the data from all nodes added to this recorder.
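A minimal sketch of this wait-and-update loop, again using the OpenNI 1.x C++ wrapper. The depth generator is used as the node that drives the application's progress; error handling and shutdown are omitted, and the loop condition is a placeholder rather than code from the original guide.

xn::Context context;
context.Init();

xn::DepthGenerator depth;
depth.Create(context);

context.StartGeneratingAll();              // nodes only produce data from this point on

while (true)                               // loop until the application decides to stop
{
    // Block until the depth node has a new frame, then update every node
    context.WaitOneUpdateAll(depth);

    const XnDepthPixel* pDepthMap = depth.GetDepthMap();
    // ... process the newly exposed frame ...
}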
Mock Nodes
OpenNI provides a mock implementation for nodes. A mock implementation of a node does not contain any logic for generating data. Instead, it allows an outside component (such as an application, or another node implementation) to feed it configuration changes and data. Mock nodes are rarely required by the application, and are usually used by player nodes to simulate actual nodes when reading data from a recording.

Sharing Devices between Applications and Locking Nodes
In most cases, the data generated by OpenNI nodes comes from a hardware device. A hardware device can usually be set to more than one configuration. Therefore, if several applications all using the same hardware device are running simultaneously, their configurations must be synchronized. However, usually, when writing an application, it is impossible to know what other applications may be executed simultaneously, and as such, synchronization of the configuration is not possible. Additionally, sometimes it is essential that an application use a specific configuration, and no other.

OpenNI has two modes that enable multiple applications to share a hardware device:
∙ Full Sharing (default): In this mode, the application declares that it can handle any configuration of this node. The OpenNI interface enables registering to callback functions for any configuration change, so the application can be notified whenever a configuration changes (by the same application, or by another application using the same hardware device).
∙ Locking Configuration: In this mode, an application declares that it wants to lock the current configuration of a specific node. OpenNI will therefore not allow "Set" functions to be called on this node. If the node represents a hardware device (or anything else that can be shared between processes), it should implement the "Lock Aware" capability, which enables locking across process boundaries.
Note: When a node is locked, the locking application receives a lock handle. Other than using this handle to unlock the node, the handle can also be used to change the node configuration without releasing the lock (so that the node configuration will not be "stolen" by another application).

Licensing
OpenNI provides a simple licensing mechanism that can be used by modules and applications. An OpenNI context object, which is an object that holds the complete state of applications using OpenNI, holds a list of currently loaded licenses. This list can be accessed at any stage to search for a specific license. A license is composed of a vendor name and a license key. Vendors who want to use this mechanism can utilize their own proprietary format for the key.

The license mechanism is used by modules, to ensure that they are only used by authorized applications. A module of a particular vendor can be installed on a specific machine, and only be accessible if the license is provided by the application using the module. During the enumeration process, when OpenNI searches for valid production chains, the module can check the licenses list. If the requested license is not registered, the module is able to hide itself, meaning that it will return zero results and therefore not be counted as a possible production chain.

OpenNI also provides a global registry for license keys, which are loaded whenever a context is initialized. Most modules require a license key from the user during installation. The license provided by the user can then be added to the global license registry, using the niLicense command-line tool, which can also be used to remove licenses. Additionally, applications sometimes have private licenses for a module, meaning that this module can only be activated using this application (preventing other applications from using it).

General Framework Utilities
In addition to the formal OpenNI API, a set of general framework utilities is also published, intended mainly to ease portability over various architectures and operating systems.
The utilities include:
∙ A USB access abstraction layer (provided with a driver for Microsoft Windows)
∙ Certain basic data type implementations (including list, hash, and so on)
∙ Log and dump systems
∙ Memory and performance profiling
∙ Events (enabling callbacks to be registered to a specific event)
∙ Scheduling of tasks
These utilities are available to any application using OpenNI. However, they are not part of standard OpenNI, and as such, backwards compatibility is only guaranteed to a certain extent.

Recording
Recordings are a powerful debug tool. They enable full capture of the data and the ability to later stream it back so that applications can simulate an exact replica of the situation to be debugged. OpenNI supports recordings of the production node chain: both the entire configuration of each node, and all data streamed from a node. OpenNI has a framework for recording data and for playing it back (using mock nodes). It also comes with the nimRecorder module, which defines a new file format (".ONI") and implements a Recorder node and a Player node for this format.

Production Node Error Status
Each production node has an error status, indicating whether it is currently functional. For example, a device node may not be functional if the device is disconnected from the host machine. The default error state is always OK, unless an Error Status capability is implemented. This capability allows the production node to change its error status if an error occurs. A node that does not implement this capability always has a status of "OK".

An application can check the error status of each node, although it mostly only needs to know whether any node has an error status, and is less interested in which node (other than for user notification purposes). In order to receive notifications about a change in the error status of a node, the application can register to a callback that will alert it of any change in a node's error status. OpenNI aggregates the error statuses of all the nodes together into a single error status, called the Global Error Status. This makes it easier for applications to find out about the current state of a node or nodes. A global error status of XN_STATUS_OK means that all the nodes are OK. If only one node has an error status, that error status becomes the global error status (for example, if one sensor is disconnected, the OpenNI global error status is XN_STATUS_DEVICE_NOT_CONNECTED). If more than one node has an error status, the global error status is XN_STATUS_MULTIPLE_NODES_ERROR. In such a situation, the application can review all nodes and check which one has an error status, and why.

Backwards Compatibility
OpenNI declares full backwards compatibility. This means that every application developed over any version of OpenNI can also work with every future OpenNI version, without requiring recompilation. On a practical level, this means that each computer should ideally have the latest OpenNI version installed on it, or failing that, the latest OpenNI version required by any of the applications installed on that computer. In order to achieve this, we recommend that the application installation should also install OpenNI.

Getting Started

Supported Platforms
OpenNI is available on the following platforms:
∙ Windows XP and later, for 32-bit only
∙ Linux Ubuntu 10.10 and later, for x86

Main Objects

The Context Object
The context is the main object in OpenNI. A context is an object that holds the complete state of applications using OpenNI, including all the production chains used by the application.
The same application can create more than one context, but the contexts cannot share information. For example, a middleware node cannot use a device node from another context. The context must be initialized once, prior to its initial use. At this point, all plugged-in modules are loaded and analyzed. To free the memory used by the context, the application should call the shutdown function.

Metadata Objects
OpenNI Metadata objects encapsulate a set of properties that relate to specific data, alongside the data itself. For example, a typical property of a depth map is the resolution of the map (the number of pixels on both the X and Y axes). Each generator that produces data has its own specific metadata object. In addition, the metadata objects play an important role in recording the configuration of a node at the time the corresponding data was generated. Sometimes, while reading data from a node, an application changes the node configuration. This can cause inconsistencies that may cause errors in the application, if not handled properly.

Example: A depth generator is configured to produce depth maps in QVGA resolution (320x240 pixels), and the application constantly reads data from it. At some point, the application changes the node output resolution to VGA (640x480 pixels). Until a new frame arrives, the application may encounter an inconsistency where calling the xn::DepthGenerator::GetDepthMap() function returns a QVGA map, but calling the xn::DepthGenerator::GetMapOutputMode() function reports that the current resolution is VGA. This can result in the application assuming that the depth map that was received is in VGA resolution, and therefore trying to access nonexistent pixels.

The solution is as follows: each node has its own metadata object, which records the properties of the data at the time it was read. In the above case, the correct way to handle the data would be to get the metadata object, and read both the real data (in this case, a QVGA depth map) and its corresponding resolution from this object.

Configuration Changes
Each configuration option in the OpenNI interfaces comprises the following functions:
∙ A Set function for modifying the configuration.
∙ A Get function for providing the current value.
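Continuing the sketches above (with context and depth created as shown there), the following OpenNI 1.x C++ fragment illustrates the metadata-based approach recommended for the QVGA/VGA example: the pixel data and the resolution are read from the same xn::DepthMetaData object, so they always describe the same frame. This is an illustrative sketch, not text from the original guide.

xn::DepthMetaData md;

context.WaitOneUpdateAll(depth);
depth.GetMetaData(md);                   // snapshot of the frame plus its properties

const XnDepthPixel* pMap = md.Data();    // the depth map that was actually captured
XnUInt32 xRes = md.XRes();               // resolution recorded with that frame
XnUInt32 yRes = md.YRes();
// xRes and yRes always match pMap, even if GetMapOutputMode() already reports a new mode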

VMware Data Recovery Installation and Configuration


29. Confirm the configuration. If the configuration is correct, type "y" and press Enter; the system automatically restarts the network service so that the new network settings take effect.
30. In the vCenter Server management interface, click Home, then click "VMware Data Recovery" in the Solutions and Applications section.
31. Enter the VMware Data Recovery address or domain name and click "Connect".

VMware Data Recovery Installation and Configuration
VMware Data Recovery creates backups of virtual machines without interrupting their use or the data and services they provide. Data Recovery manages existing backups and deletes them once they become obsolete. It also supports deduplication to remove redundant data.
Data Recovery is built on the VMware vStorage APIs for Data Protection. It integrates with VMware vCenter Server, allowing you to schedule backup jobs centrally. Because of this integration, virtual machines can still be backed up even when they are moved with VMware VMotion or VMware DRS. Data Recovery uses a virtual machine appliance and a client plug-in to manage and restore backups. The backup appliance is provided in Open Virtualization Format (OVF). The Data Recovery plug-in requires the VMware vSphere Client. Backups can be stored on any virtual disk supported by VMware ESX. You can use a storage area network (SAN), a network-attached storage (NAS) device, or Common Internet File System (CIFS)-based storage such as Samba. All virtual machine backups for an appliance are stored in a deduplication store. VMware Data Recovery supports the Volume Shadow Copy Service (VSS), which provides a backup infrastructure for certain Windows operating systems.

EBS DataLoad Template for Importing User Responsibilities


EBSDataload User Responsibility Template

1. Overview
EBSDataload is a data loading tool used to import data from external sources into the EBS system. The tool provides a simple and efficient way to import large volumes of data for subsequent processing and analysis in EBS. The purpose of this template is to help users understand and correctly use the EBSDataload tool. Before using the tool, make sure you have the necessary technical knowledge and operating privileges.

2. Preparation
2.1. Install and configure EBSDataload
Before you start using EBSDataload, you need to install and configure the tool. Follow these steps:
1. Download the EBSDataload installation package and extract it to a local folder.
2. Run the installer and follow the prompts to complete the installation.
3. Configure the EBSDataload connection information, including the database connection and source system connections. See Appendix A for detailed configuration instructions.

2.2. Prepare the data source
Before importing data, prepare the data source to be imported. Make sure the data source meets the following requirements:
• The data source should be structured data, such as a CSV file, Excel workbook, or database table.
• The data source should contain the correct fields and data types.
• The data source should match the target system, with correct field names and field order.

2.3. Prepare the data mapping
Before importing data, define the field mapping between the data source and the target system. Follow these steps:
1. Identify the entities and fields in the target system, such as tables and columns.
2. Based on the structure of the data source, determine how each field maps to the target system.
3. Write a data mapping file that maps the fields in the data source to the fields in the target system. See Appendix B for detailed mapping instructions.

3. Usage
3.1. Configure the load parameters
Before importing data with EBSDataload, configure the load parameters. Load parameters control the behavior and options of the import process. Follow these steps:
1. Open the EBSDataload tool.
2. Select the "Configuration" tab in the tool interface.
3. Configure the load parameters, including the source file path, target table name, and error-handling policy. See Appendix C for detailed parameter descriptions.

3.2. Execute the data import
Once the load parameters are configured, you can execute the data import.

LightDB Database Operations and Maintenance Manual


LightDB Database Operations and Maintenance Manual

1 Preface
This document is the routine operations and maintenance manual for Hundsun's enterprise database LightDB. It is mainly a guide to the operations commonly needed in day-to-day maintenance.

2 LightDB Standalone
2.1 Why does the GUI installer not appear? Is a command-line installation mode supported?
When the GUI installation screen does not appear, there are generally two causes: the Linux system is missing the dependency packages required by the GUI program, or the DISPLAY environment variable is not set correctly on Linux (or Xmanager - Passive is not running correctly on Windows). If these conditions cannot be met, you can use the command-line installation mode. LightDB supports command-line installation; compared with the GUI installation it differs only in the installation wizard and is otherwise identical.

2.2 View the LightDB installation, instance, and archive directories
ls $LTHOME          # view the installation directory
ls $LTDATA          # view the instance directory
ls $LTHOME/archive  # view the archive directory

2.3 Which logs does LightDB include?
Database logs, located in the $LTDATA/log directory.
ltcluster logs, located under $LTHOME/etc/ltcluster/; present only in the high-availability edition.
keepalived logs, located under /var/log/; in addition, $LTHOME/etc/keepalived/keepalived_lightdb.log contains the heartbeat log of keepalived checking LightDB. keepalived only needs to be enabled in the high-availability edition.

2.4 View the latest database log
The LightDB database log path is $LTDATA/log/, and log files are named in the format lightdb-yyyy-mm-dd_hhmmss.log. Use this naming to find the most recent log file, then use the tail command to follow the last lines of it, for example:
tail -fn10 lightdb-yyyy-mm-dd_hhmmss.log

2.5 View error messages in the database log
Error messages in the LightDB log carry the ERROR or FATAL tag, so these keywords can be used to filter error lines from the log file.
# view the current error log once
cat lightdb-yyyy-mm-dd_hhmmss.log | grep -E 'ERROR|FATAL'
# monitor the latest error log in real time
tail -fn10 lightdb-yyyy-mm-dd_hhmmss.log | grep -E 'ERROR|FATAL'

2.6 Check whether slow-query logging is enabled; enable and disable it
In LightDB the slow-query log is configured in two places: the database itself and the auto_explain extension. Use show to view both parameters.

IBM Storage Shelf and Compute Nodes Connection Guide


Cable the compute nodes to the storage shelf (start: compute nodes; end: storage shelf):
3. Connect the dark blue SAS cable: into the dark blue port (SAS0) in PCIe slot 2 in Node0, and into the dark blue port in the top IO module (PORT 0).
4. Connect the light blue SAS cable: into the light blue port (SAS1) in PCIe slot 3 in Node0, and into the light blue port in the bottom IO module (PORT 0).
5. Connect the dark red SAS cable: into the dark red port (SAS1) in PCIe slot 2 in Node1, and into the dark red port in the top IO module (PORT 1).
6. Connect the light red SAS cable: into the light red port (SAS0) in PCIe slot 3 in Node1, and into the light red port in the bottom IO module (PORT 1).

Cable the compute nodes to the expansion shelf (start: compute nodes; end: expansion shelf):
7. Connect the dark blue SAS cable: into the dark blue port (SAS0) in PCIe slot 2 in Node1, and into the dark blue port in the top IO module (PORT 0).
8. Connect the light blue SAS cable: into the light blue port (SAS1) in PCIe slot 3 in Node1, and into the light blue port in the bottom IO module (PORT 0).
9. Connect the dark red SAS cable: into the dark red port (SAS1) in PCIe slot 2 in Node0, and into the dark red port in the top IO module (PORT 1).
10. Connect the light red SAS cable: into the light red port (SAS0) in PCIe slot 3 in Node0, and into the light red port in the bottom IO module (PORT 1).
Note: These cables are included as part of the Oracle Database Appliance shipment.

Node interconnect (start: Compute Node0; end: Compute Node1):
1. Connect the green SFP+ cable: into the green port (PORT 2) in PCIe slot 1 at each end.
2. Connect the yellow SFP+ cable: into the yellow port (PORT 1) in PCIe slot 1 at each end.
Note: These cables are included as part of the Oracle Database Appliance shipment.

Connect the optional storage expansion shelf to Oracle Database Appliance X7-2-HA.

Preparing to Deploy Oracle Database Appliance X7-2-HA
You can also scan the Quick Response Code with your mobile device to read the documentation.

Connect the Power and Public Network Cables
Important: Follow the instructions on Page 1 to cable the server nodes, storage system(s) and interconnect before proceeding. On both nodes, connect:
A. Power to the power supply unit (PSU).
B. (Optional) Ethernet to network management for Oracle Integrated Lights Out Manager (Oracle ILOM).
C. (Optional) On Node0 only, connect a peripheral to USB.

For more information about Oracle Database Appliance, go to Oracle Technology Network: /technetwork/server-storage/engineered-systems/database-appliance/index.html
For more information about deployment, go to: /goto/oda/docs

H. The Configuration Type window opens. Make your selection for each configuration option and click to continue.
I. Enter the requested information on the remaining windows.
Note: Select Custom to configure options that have default values in Typical configurations, such as:
• Normal disk redundancy
• NTP servers
• Oracle ILOM
• Additional network interfaces
• Oracle Auto Service Requests (Oracle ASR)
• Size of the /cloudfs file system (default is 50 GB)
The deployment takes about 1 hour to finish.

VNXe Installation and Initial Configuration


VNXe Installation and Initial Configuration
Digital China Storage Services Division (Wang Lei, AV)
Note: This document uses the VNXe3100 as an example to describe VNXe installation and initialization. Because a VNXe deployment has a fairly large set of prerequisites, the document describes in detail how to configure DNS and NTP, as well as the iSCSI settings on the server side.
September 2011, Wang Lei

Preface
What distinguishes the VNXe3100 installation from other arrays is that the waiting times are somewhat longer. This may be specific to the model, or it may apply to the whole series; having had little exposure to other models, this point is not entirely certain. For the VNXe3100, allocating storage space and upgrading both take quite a long time, and the experience has been similar across several units. Be sure to explain this clearly to the customer, especially customers who have worked with the CX series and will make comparisons; if it is not explained in advance, the customer generally will not understand. Some customers will schedule the work for an afternoon, planning for the installation to finish that day; in those cases it is even more important to explain that the installation takes a long time, so the customer is prepared.
When creating storage space, be sure to agree with the customer on how the RAID groups will be built and how the storage space will be allocated, because once it has been configured, changing it means waiting all over again.
VNXe initialization requires DNS and NTP. If the customer environment provides them, it is best to use the customer's environment, which is more convenient. If it does not, this document describes in detail how to set up DNS and NTP on your own PC; both procedures have been tested personally, and should be helpful during installation. In addition, the VNXe3100 uses the iSCSI protocol, so the customer's servers must support iSCSI; on Windows Server 2003 servers we usually have to install it ourselves. This can be prepared by the customer, or we can install the corresponding iSCSI initiator on the server.

Contents
1. Preparing the installation environment — Installing the Connection Utility
2. Initializing the device
3. Completing the configuration wizard — Configuring DNS; Configuring NTP
4. Registering the license
5. Upgrading the firmware (microcode)
6. Configuring iSCSI
7. Configuring hosts
8. Creating storage space
9. Configuring iSCSI on the server
10. Discovering the disks on the server

1. Preparing the installation environment
Before installing the VNXe3100, confirm a number of required prerequisites.


Introduction to DataLoad
DataLoad V4 is a Windows program that enables the user to take control of and manipulate other applications. Although DataLoad is aimed at forms based applications, and in particular Oracle Applications, it can be used with any program running in the Windows environment, including Java based software. Previous versions of this tool were written as an Excel macro and V3.x will continue to be developed in this format. For reasons of improved performance, reliability and functionality, however, the software was re-written as a standalone Windows application. This standalone version is numbered 4.X.

Every Oracle Applications implementation requires data and setup information to be loaded in the database before the system goes live. Furthermore, there are often requirements to regularly load or amend data in the system once it has gone live. Oracle provides open interfaces into the Applications that are normally used for this purpose. The "DataLoad" or "Cut and Paste" tool was developed to automate the manipulation of the Application's forms by simulating key presses, providing an alternative method for entering or manipulating data. Control of the forms not only includes the sending of data to the form fields, but also the issuing of any commands which can be accessed from the menus, by keystroke combinations, or from button presses. Thus, data can be entered, amended or deleted, forms can be opened or closed, and changes saved or rejected, for example. All data is entered through Oracle forms, so there are no support implications in using this tool and data is validated by the application in the normal way.

Frequently Asked Questions
Q. Can I use DataLoad with Oracle Applications 11i?
A. DataLoad works with any Windows based applications and, as long as 11i continues to support Copy & Paste, etc, there will be no issues using DataLoad with 11i. A number of changes have been made to shortcuts in 11i, therefore the commands.dat file does need to be updated to reflect these changes. The revised commands file is available to download.
Q. How do I toggle a checkbox?
A. Use a single space in the cell or the *SB command.
Q. How do I select an item from a pick drop down list?
A. Values can be selected by the first letter of the value. Where two values begin with the same letter, repeating the letter moves down the list. For example:
Column containing I would select Indirect
Column containing C would select Contract
Column containing CC would select Capital
Q. Some keystrokes are sent to forms but data isn't. Why?
A. This happens when Copy and Paste isn't working in the NCA forms, which may be caused by two different problems. First, when your applet is unsigned, i.e. your windows have yellow bars at their base, copy and paste doesn't work. Run 'appscert.bat' to create the 'identitydb.obj' file. Second, Copy & Paste won't work in flexfields until 11.0.3 and patch 857097 have been applied.
Q. The DataLoad spreadsheet isn't big enough. How do I make it bigger?
A. DataLoad versions prior to V4.1 had a fixed grid size, but inserting rows or columns using "Edit, Insert…" where data has already been entered would add rows or columns to the grid. V4.1 and beyond have grids whose size is limited only by the PC's memory.
Q. How do I send non-printing keys like the function keys (F1…F12) to the target window?
A. First, precede your data with a \ (backslash) to indicate you want to send keystrokes. Then use the appropriate character code - see Sending Keystrokes.
Q. What is the latest version of DataLoad?
A. V4.1.0.1 (standalone) and V3.2 (macro) are the latest versions.
Q. When I start the load the first 1 or 2 fields are missed, but this only happens at the start. Why?
A. This appears to be a forms issue. You have to work around this by adding keystrokes at the start of your load to compensate for it.
Q. Why isn't the window I want listed in the Window Name drop down box?
A. Prior to V4.1 only running Oracle NCA forms were listed in this list. If you want to use a window which isn't Oracle NCA, or an Oracle form which isn't currently running, type the window name in this box manually. V4.1 and beyond lists all active windows.
Q. Why doesn't the Oracle pick list respond when I send data to it?
A. You must send data to a pick list as keystrokes, not the default DataLoad cut and paste. Prefix your data with a \ (backslash) to force DataLoad to simulate keystrokes.
Q. Will DataLoad start automatically when I double click on a .dld or tab delimited file?
A. Yes, if you configure Windows to do this. Double click on a .dld file (or your tab delimited file) and in the "Open With" dialog box click on the "Other" button and select dataload.exe in the directory where it was unloaded. Enter a description for .dld files and press OK.
Q. I am using 10SC and commands like *PB and *SAVE won't work. Why?
A. Before V4.1 the built in commands were scripted for NCA and wouldn't work with the different keyboard shortcuts used in 10SC and other applications. You could, however, write powerful keyboard control statements in DataLoad which give you access to all combinations of keyboard characters. Use these to access new or unsupported commands. See Sending Keystrokes. V4.1 introduced "Command groups" that allow you to specify which command definitions should be used.
Q. I have a large amount of data that I need to send as keystrokes, not using Copy & Paste. How can I quickly prefix this data with the necessary '\'?
A. DataLoad has functionality to change single or multiple cells to and from keystroke and Copy & Paste cells. Use "Convert To…" from the Edit or grid pop-up menus to access this functionality. See Convert To.
Q. Can I change the names of the command groups in V4.1, or add or delete groups?
A. Yes. The commands.dat file holds the command definitions in a TAB delimited format. Simply edit this file and you can have as many command groups as you want with names of your choice.
Q. Can I use DataLoad with Windows 2000?
A. DataLoad has not been tested with Windows 2000 and, until there is greater takeup of this OS and it is certified by Oracle for use as a client OS with Applications, there won't be any specific testing. However, if the backward compatibility of 2000 with NT is as good as has been claimed, there should be no issues running DataLoad with 2000.

DataLoad New Features
This section lists the new features and fixes included in each version of DataLoad.

Version 4.1.0.1
Bug Fixes
∙ Delays failed when used with decimal values.
∙ *SB command was incorrectly defined.

Version 4.1
Version 4.1 contains a large number of enhancements and new features.
Version 4.0.2
New Features
∙ Automatically colour grid cells according to their contents:
- Data Cells
- Keystroke Cells
- Command Cells
∙ New load commands:
*FI Find
*FA Find all
*CW(window) Change window to window
*QE Query enter
*QR Query run
*CL Clear field
*IR Insert record
∙ Fill command (right, left, up, down), like Excel.
∙ Columns can be automatically sized to fit the cell text.
∙ Column widths are now saved in the .dld file.
Bug Fixes
∙ DataLoad failed to open .dld files which used a decimal point separator different to the current system default.
∙ TProgressBar property out of range error occasionally occurred at the start of a load.
∙ Couldn't Copy & Paste to the Window Name box.
∙ There were a number of problems when selecting cells using the keyboard, especially when adjusting a selection made using the mouse.
∙ Deleting, inserting or clearing rows or columns didn't always turn on saving.

Version 4.0.1.1
Bug Fixes
∙ A file name passed on the command line to DataLoad didn't open and caused an error.
∙ Hourglass option wasn't saved in the .dld file.
∙ The "NEW" command didn't reset any changes in the Delays and Options forms.
∙ The "NEW" command didn't reset columns & rows to their default sizes.

Version 4.0.1
New Features
∙ Mobile progress bar indicates load progress.
∙ Title form changed to allow editing of all titles in one go.
∙ Double clicking on header row opens title editing.
∙ DataLoad can wait for a user defined duration while the cursor is hourglass.
∙ File save icon & menu item greyed out when there is nothing to save.
∙ DataLoad prompts to save the spreadsheet if it has changed and you are closing the spreadsheet.
∙ Function to clear single, multiple or all titles.
∙ Full Windows 98 compatibility.
Bug Fixes
∙ Paste sometimes caused divide by zero errors.
∙ Double clicking on the fixed row or column caused problems.
∙ On NT the grid selection was lost after the load completed.
∙ The load duration did not display correctly on all PCs.
∙ Security rules demo spreadsheet failed to set include/exclude properly.
∙ Multiple line pastes occasionally lost the final line.
∙ Copy added an extraneous carriage return to the end of text copied to clipboard.

Version 4.0
∙ DataLoad implemented as a standalone Windows 32 bit application.
- Improved performance, reliability and stability.
- Dedicated DataLoad interface.
- No dependency on Excel.
- Vastly increased scope of potential future functionality.
∙ Control of delays increased and improved, including access to low level delays.
∙ DataLoad file format introduced to save setup information as well as data.
∙ Documentation re-written as a Windows help file.
∙ Control of keystrokes made easier and the keys available to the user increased.

Version 3.2
∙ New load commands:
*FI Find
*FA Find all
*CW(window) Change window to window
*QE Query enter
*QR Query run
*CL Clear field
*IR Insert record
∙ AutoTabbing feature added with customisable no tab prefix.

Minor Changes to Version 3.1
9-Feb-99, 3.1b: *BM added for block menu.
29-Jan-99, 3.1a: *NF & *PF use menus not TAB key.

Version 3.1
∙ *ST command updated for fields where the cursor isn't at the start.
∙ Added DoEvents in SendExtKey sub to prevent extended keys being left 'down'.
∙ Added DoEvents to prevent first char sent to a form getting lost.
∙ Error handling added.
∙ Form activation improved:
- Allows looping until a valid name is entered or Cancel is pressed.
∙ Name cells can be used to define:
- the form to be activated.
- a delay after every TAB.
- a delay after every ENT.
- a delay after every *SAVE or *SP.
- a delay after every cell.
- a delay after every data load.
- report on timing statistics.
∙ Loading can be halted with the Esc key.
∙ *NR & *PR use menus not arrow keys.
∙ Documentation rewritten.
∙ Demo macro amended to work with NCA.
∙ Example Excel data files supplied.

Version 3.0
∙ Rewritten to work with NCA
- Key sending mechanism changed.
- Key mapping changed.
- Extra commands added.

Version 2.3
∙ Save command added.

Version 2.2
∙ Single spacebar added.

Version 2.0
∙ Extra navigation paths included.
∙ Improved performance.

Version 1.0
∙ Initial release.

NB. Versions 1.0 and 2.0 were written by Martin Birch, while versions 2.2 and 2.3 were written by Luther Armstrong.

DataLoad for Dummies
DataLoad works great if you can figure out what you are supposed to do to get it working. Basically, what you are trying to do is get data into an Oracle table, and DataLoad is a HUGE time saver. Here's what you need to do, step by step.
1. Open the Oracle form you want to load data into (for example, the Asset Fiscal Years form).
2. Begin by manually entering the data you want in the form, starting with the Fiscal Year Name box. Write down your keystrokes on a piece of paper. For example: Fa Fiscal Year TAB Fa Fiscal Year TAB 01-Sep-50 TAB 31-Aug-51 TAB 1951 TAB. Notice that after that last tab the form automatically moves to the next record.
3. Now you are ready to use the DataLoad program.
4. Open the DataLoad version 4.1.0.1 program.
5. Make sure the form you want to load data into is open.
6. Click the drop down box for WINDOW NAME and select "Asset Fiscal Years" or the form name you are using. The form name should appear in the window as long as you have that form open on your PC.
7. In the DESCRIPTION box type a description, for example FAXSURFYR 11.0.2 (form name & version).
8. Click the drop down box for COMMAND GROUP and select NCA (or 11i).
9. Click in the first cell of the grid and type your first keystroke: FA FISCAL YEAR.
10. Move to the next cell and type your next keystroke: TAB.
11. Continue until you have typed all the keystrokes you wrote down in Step 2 above.
12. Click the SEND DATA TO FORM button.
13. Click OK to "Load all cells".
14. Watch DataLoad load your data into your form.
15. Save your form.
Note: if some of your data is in an Excel spreadsheet you can Copy (Ctrl C) and Paste (Ctrl V) from Excel to DataLoad.
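For illustration only, the DataLoad grid produced by the steps above might be laid out as follows, one record per row with each value or keystroke in its own cell. The fiscal-year values beyond the first row and the closing *SAVE command are assumptions added for this sketch; confirm the field order against your own Fiscal Years form, and if you use *SAVE make sure the NCA (or 11i) command group is selected.

Fa Fiscal Year   TAB   Fa Fiscal Year   TAB   01-Sep-50   TAB   31-Aug-51   TAB   1951   TAB
Fa Fiscal Year   TAB   Fa Fiscal Year   TAB   01-Sep-51   TAB   31-Aug-52   TAB   1952   TAB   *SAVE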
