华为FusionInsight LibrA方案白皮书


华为FusionInsight LibrA 2.8 技术主打胶片


(图:关系型数据库发展时间线,70年代至2012年)
● RSI 1979-1983,ORACLE 1983-
● NCR 1990-,Teradata 2009-
● ORACLE Exadata x1~x6(2008):软硬件一体机,共享磁盘架构,最大支持8机柜x8计算节点;从x6开始主推Oracle Exadata数据库云平台
● ORACLE Exadata x7(2017~)
● 趋势:云化,与Hadoop互通,企业级内核,更大的集群规模
FusionInsight LibrA 在行业中的位置


(图:数据库产品按场景定位:传统OLTP数据库、混合型场景(Oracle Exadata)、大数据、云服务(Aliyun HybridDB、Amazon Redshift、Huawei DWS))

关系型数据库发展谱系(图,时间轴约1990~2012):
● Vertica 2005-、AsterDB 2007-
● MySQL 1994-、MySQL 5.5 2009~、MariaDB 2009-、SkySQL 2010-
● 云服务:AWS、Aliyun、HEC
➢ 关系型理论之初诞生 ➢ 商业数据库主流 ➢ 大部分闭源商业数据库选择的道路
使用说明
文档名称:华为FusionInsight LibrA 技术主打胶片(全球,通用)
目的:作为一线向客户讲解LibrA主打胶片的素材,用于客户技术层早期拓展交流
受众:客户战略规划部、技术规划部、业务运营部等
关键信息:1. 数据库的发展线路图 2. FusionInsight概述 3. FusionInsight LibrA基础功能介绍 4. FusionInsight LibrA的竞争力特性 5. FusionInsight LibrA的典型场景 6. FusionInsight LibrA的规划

FusionSphere虚拟化套件存储虚拟化技术白皮书


华为FusionSphere 6.5.0虚拟化套件存储虚拟化技术白皮书

目录
1 简介/Introduction
2 解决方案/Solution
2.1 FusionSphere存储虚拟化解决方案(架构描述、特点描述)
2.2 存储虚拟化的磁盘文件解决方案(厚置备磁盘技术、厚置备延时置零磁盘技术、精简置备磁盘技术、差分磁盘技术)
2.3 存储虚拟化的业务管理解决方案(磁盘文件的写时重定向技术、磁盘文件的存储热迁移、磁盘文件高级业务)
2.4 存储虚拟化的数据存储扩容解决方案(功能设计原理)
2.5 存储虚拟化的数据存储修复解决方案(功能设计原理)

1 简介/Introduction
存储设备的能力、接口协议等差异性很大,存储虚拟化技术可以将不同存储设备进行格式化,将各种存储资源转化为统一管理的数据存储资源,可以用来存储虚拟机磁盘、虚拟机配置信息、快照等信息。

用户对存储的管理更加同质化。

虚拟机磁盘、快照等数据均以文件的形式存放在数据存储上,所有业务操作均可以转化成对文件的操作,操作更加直观、便捷。

基于存储虚拟化平台提供的众多存储业务,可以提高存储利用率,获得更好的可靠性和可维护性,带来更好的业务体验和用户价值。

华为提供基于主机的存储虚拟化功能,用户不需要再关注存储设备的类型和能力。

存储虚拟化可以将存储设备进行抽象,以逻辑资源的方式呈现,统一提供全面的存储服务。

可以在不同的存储形态,设备类型之间提供统一的功能。

2 解决方案/Solution
2.1 FusionSphere存储虚拟化解决方案
FusionSphere存储虚拟化平台能够屏蔽存储设备差异,统一封装为文件级操作接口,并在虚拟化层提供了丰富的存储业务功能。
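下面用一段概念性的Python代码示意“屏蔽后端差异、统一封装为文件级操作接口”的思路。其中的类名和方法均为假设,并非FusionSphere的实际实现或对外接口:

```python
import os
from abc import ABC, abstractmethod

class Datastore(ABC):
    """统一的数据存储抽象:对上层只暴露文件级读写操作(概念示意)。"""

    @abstractmethod
    def read(self, path: str, offset: int, size: int) -> bytes: ...

    @abstractmethod
    def write(self, path: str, offset: int, data: bytes) -> None: ...

class LocalDirDatastore(Datastore):
    """以本地目录模拟一种后端存储;SAN/NAS/分布式存储等后端可按同样接口各自实现。"""

    def __init__(self, root: str):
        self.root = root

    def read(self, path: str, offset: int, size: int) -> bytes:
        with open(os.path.join(self.root, path), "rb") as f:
            f.seek(offset)
            return f.read(size)

    def write(self, path: str, offset: int, data: bytes) -> None:
        full = os.path.join(self.root, path)
        mode = "r+b" if os.path.exists(full) else "w+b"
        with open(full, mode) as f:
            f.seek(offset)
            f.write(data)

def copy_disk_file(src: Datastore, dst: Datastore, path: str, size: int) -> None:
    """示意:虚拟机磁盘文件在两个数据存储之间按块拷贝(可视为热迁移的极简模型)。"""
    chunk = 4 * 1024 * 1024
    for off in range(0, size, chunk):
        dst.write(path, off, src.read(path, off, min(chunk, size - off)))
```

上层业务(虚拟机磁盘、快照等)只依赖统一的文件级接口,更换底层存储形态时业务逻辑无需修改,这正是存储虚拟化带来同质化管理体验的原因。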

FusionInsight HD技术白皮书


华为FusionInsight HD技术白皮书

目录
1 简介
1.1 FusionInsight概述
1.2 FusionInsight HD组件介绍
2 重点组件介绍
2.1 集群管理Manager
2.2 分布式文件系统HDFS
2.3 统一资源管理和调度框架YARN(Yarn、Superior Scheduler)
2.4 分布式批处理引擎MapReduce
2.5 分布式数据库HBase
2.6 数据仓库组件Hive
2.7 分布式内存计算引擎Spark(Spark、CarbonData)
2.8 交互式SQL引擎Elk
2.9 全文检索组件Solr
2.10 全文检索组件Elasticsearch
2.11 批量数据集成Loader
2.12 实时数据采集Flume
2.13 流式事件处理Storm(Storm、StreamCQL)
2.14 流处理引擎Flink
2.15 分布式高速缓存Redis
2.16 分布式消息队列Kafka
2.17 作业编排与调度Oozie
2.18 数据集成入口Hue
2.19 多租户
2.20 安全增强
2.21 可靠性增强
2.22 滚动重启、滚动升级与滚动补丁

1 简介
1.1 FusionInsight概述
FusionInsight是华为企业级大数据存储、查询、分析的统一平台,能够帮助企业快速构建海量数据信息处理系统,通过对巨量信息数据实时与非实时的分析挖掘,发现全新价值点和企业商机。

FusionInsight解决方案由产品(FusionInsight HD、FusionInsight LibrA、FusionInsight Athena)和操作运维系统(FusionInsight Manager)、数据使能服务(数据集成开发工具、实时决策平台),以及私有云服务(HDS大数据服务、ADS数据库服务、RDS数据库服务)构成。

华为FusionInsight大数据方案介绍


(图:以数据分析师为核心,围绕商业理解、数据科学、技术实践不断迭代;平台与算法是数据分析师的工具)
大数据应用挑战
● 数据分析师:传统分析方法面临大数据的挑战,需要保证海量数据分析的及时性、效率和实时应用;当前技能要求高,需要业务驱动的一站式甚至One-Click的闭环解决方案。
● 数据集成工程师:需要开放、统一的数据处理平台(Hadoop),支持混合负载,稳定、可靠、安全,高效、高可扩展;数据来源包括第三方数据、微信、微博,以及刷卡事件等流式数据。
数据价值发现是一个系统工程,数据分析师是不可替代的
以业务问题为出发点,围绕商业理解-数据科学-技术实践才能形成系统的数据价值发现,数据分析师是核心角色,平台/算法都是他的工具。
商业理解:分解业务问题、理解数据;数据科学:数据方法体系、算法和工具;技术实践:大数据相关平台技术。
(图:Google大数据架构1.0,网页搜索时代)GFS(分布式文件系统)、Chubby(分布式协同),实现分布式存储+查询+批处理;网页搜索应用驱动Google建立低成本高扩展的文件系统、支持K/V网页数据的查询、以批处理构建索引。
(图:Google大数据架构2.0,社交网络时代,2010)新增Dremel交互式分析,支撑BI/Analytics、Search Page Indexing、Google+等应用。
(图:传统数据平台与大数据平台的数据负载特征对比)传统数据平台的典型负载如Travel Sky Ticket Booking、Core Banking System;大数据平台的典型负载如搜索、社交、IOT;对比维度包括复杂度、数据模型、并发量、访问量。
在大数据和移动互联网时代,传统企业的数据规模和访问量快速增长,使其在技术选择上向互联网公司靠齐。

华为FusionStorage技术白皮书


华为FusionStorage技术白皮书
1 执行摘要/Executive Summary
本文以存储技术的发展趋势为切入点,结合用户需求,从高性能、高可靠、高扩展、易管理、兼容性等方面详细介绍了华为公司FusionStorage产品的功能及特点,旨在突出FusionStorage产品独有的亮点、应用场景以及为客户带来的价值。

2 简介/Introduction
虚拟化与云计算技术正在引领IT技术的发展方向,越来越多的企业采用虚拟化与云计算技术来构建新一代IT系统,以提升IT系统的资源利用率,并在保证服务级别水平的前提下降低成本;同时帮助业务更加具有敏捷性,加速新业务的上线时间。

然而,虚拟化与云计算技术的广泛应用也给后端的存储系统提出更加严峻的挑战。

如:需要存储系统能够承载更多的业务、更高的性能与可靠性、更好的扩展性、保证关键业务服务级别水平并降低成本等。

华为分布式存储软件FusionStorage采用创新的分布式软件架构,以高性能、高可靠、高扩展为其设计理念,充分满足企业未来业务需求,帮助其IT系统转型以更快更好地应对日益激烈的竞争环境,实现与客户的共同成长。

3 解决方案/Solution
随着企业面临的竞争环境越来越激烈、新业务上线时间要求越来越短,其IT系统需要从传统的成本中心转变为提升企业竞争力的利器,帮助企业实现商业成功。

作为存放企业数据资产的存储系统,不但要满足业务所需要的高性能、高可靠等基本诉求,更要满足未来业务的发展、提升业务的敏捷性,帮助业务更快更好地适应竞争环境的需要。

从IT业界发展来看,以下技术趋势正在影响存储行业的发展:
● 虚拟化技术的广泛应用:虚拟机技术给服务器带来更高的利用率、给业务带来更便捷的部署,降低了TCO,因而在众多行业得到了广泛的应用。

与此同时,虚拟机应用给存储带来以下挑战:
第一,相比传统的物理服务器方式,单个存储系统承载了更多的业务,存储系统需要更强劲的性能来支撑;
第二,采用共享存储方式部署虚拟机,单个卷上可能承载几十甚至上百个虚拟机,导致卷IO呈现更多的随机特征,这对传统的Cache技术提出挑战;
第三,单个卷承载多个虚拟机业务,要求存储系统具备协调虚拟机访问竞争的能力,保证对QoS要求高的虚拟机获取到足够资源以实现性能目标;
第四,单个卷上承载较多的虚拟机,需要卷具有很高的IO性能,这对传统受限于固定硬盘的RAID技术提出挑战;
第五,虚拟机的广泛使用,需要更加高效的技术来提高虚拟机的部署效率,加快新业务的上线时间。
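针对上面第三点提到的虚拟机IO访问竞争与QoS保证,下面给出一个概念性的Python示意,用“按权重分配IO机会”的方式说明QoS协调的基本思路。这只是一个假设性的简化模型,并非FusionStorage的实际调度算法:

```python
from typing import Dict, List, Optional

class VmIoScheduler:
    """概念示意:按权重在共享同一卷的多台虚拟机之间分配IO机会,
    使QoS要求高(权重大)的虚拟机优先获得资源。"""

    def __init__(self, weights: Dict[str, int]):
        self.weights = weights                    # 虚拟机 -> 权重(假设值)
        self.served = {vm: 0 for vm in weights}   # 已调度的IO数

    def pick_next(self, pending: Dict[str, int]) -> Optional[str]:
        """在还有待处理IO的虚拟机中,选择“已服务量/权重”最小的一台。"""
        candidates = [vm for vm, n in pending.items() if n > 0]
        if not candidates:
            return None
        return min(candidates, key=lambda vm: self.served[vm] / self.weights[vm])

    def dispatch(self, pending: Dict[str, int], budget: int) -> List[str]:
        """在一个调度周期内(budget个IO额度)给出调度顺序。"""
        order = []
        for _ in range(budget):
            vm = self.pick_next(pending)
            if vm is None:
                break
            pending[vm] -= 1
            self.served[vm] += 1
            order.append(vm)
        return order

# 用法示意:vm1为高QoS虚拟机(权重4),vm2、vm3为普通虚拟机(权重1)
sched = VmIoScheduler({"vm1": 4, "vm2": 1, "vm3": 1})
print(sched.dispatch({"vm1": 100, "vm2": 100, "vm3": 100}, budget=12))
# vm1约获得2/3的IO机会,与其权重占比一致
```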

FusionInsight LibrA技术白皮书


FusionInsight LibrA技术白皮书

目录
1 FusionInsight LibrA产品简介(产品定位、应用场景、技术特点)
2 FusionInsight LibrA软件架构
3 FusionInsight LibrA支持平台和技术指标(软件要求、硬件及本地PC要求、技术指标)
4 FusionInsight LibrA核心技术(Share-nothing架构、数据分布式存储、数据分区、数据并行导入、全并行的数据查询处理、向量化执行和行列混合引擎、工作负载管理、高可靠事务处理、线性扩容、分析查询HDFS数据、三方工具兼容、跨集群数据处理)
5 FusionInsight LibrA工具(客户端工具:Data Studio、gsql;管理、监控工具;备份恢复工具)
6 FusionInsight LibrA对外接口

1 FusionInsight LibrA产品简介
1.1 产品定位
FusionInsight LibrA是企业级的大规模并行处理关系型数据库。

FusionInsight LibrA采用MPP(Massive Parallel Processing)架构,支持行存储与列存储,提供PB(Petabyte,2的50次方字节)级别数据量的处理能力。
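为便于理解“行存储与列存储”的差异,下面用一小段Python示意同一张表在两种布局下的组织方式。这只是概念演示,与LibrA的内部实现无关:

```python
# 行存储:同一行的各列连续存放,适合点查询和整行读写
rows = [
    {"id": 1, "city": "SZ", "amount": 10.0},
    {"id": 2, "city": "SH", "amount": 25.5},
    {"id": 3, "city": "SZ", "amount": 7.2},
]

# 列存储:同一列的值连续存放,适合扫描、聚合类分析,且更利于压缩
columns = {
    "id":     [1, 2, 3],
    "city":   ["SZ", "SH", "SZ"],
    "amount": [10.0, 25.5, 7.2],
}

# 分析型查询(如 SELECT SUM(amount) WHERE city='SZ')只需读取两列
total = sum(a for c, a in zip(columns["city"], columns["amount"]) if c == "SZ")
print(total)  # 17.2
```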

FusionInsight LibrA在核心技术上跟传统数据库相比有巨大优势,可以解决很多行业用户的数据处理性能问题,可以为超大规模数据管理提供高性价比的通用计算平台,并可用于支撑各类数据仓库系统、BI(Business Intelligence)系统和决策支持系统,统一为上层应用的决策分析等提供服务。
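由于LibrA兼容标准SQL并提供gsql、Data Studio等客户端工具(见上文目录),上层应用可以通过常见的数据库驱动访问。下面是一个假设性的Python访问示例:假定存在PostgreSQL兼容的连接协议并使用psycopg2驱动,其中的主机名、端口、库名、表名均为示意值,并非实际环境配置:

```python
import psycopg2  # 假设:可通过PostgreSQL兼容协议连接LibrA协调节点

conn = psycopg2.connect(
    host="libra-cn.example.com",   # 协调节点地址(示意)
    port=25308,                    # 端口为示意值
    dbname="analytics",
    user="dbadmin",
    password="***",
)
with conn, conn.cursor() as cur:
    # 标准ANSI SQL的分析型查询(表名、列名均为示意)
    cur.execute("""
        SELECT region, SUM(sales_amount) AS total
        FROM dws_sales
        GROUP BY region
        ORDER BY total DESC
    """)
    for region, total in cur.fetchall():
        print(region, total)
conn.close()
```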

华为FusionInsight解决方案介绍

海量数据从哪里来-机器
● Boeing:飞机每个引擎3分钟产生1TB数据,波音787 6小时飞行产生240TB数据
● CERN:LHC对撞产生1PB/s的数据;SKA:2015年存储需要1EB
● 云化IDC建设催生了数据大集中
● Facebook:每天产生50TB的日志数据,衍生分析数据超过100TB
50%的企业已经投资和使用大数据,33%的企业正在规划如何利用大数据,我们看到大数据领域的持续投资,大数据即将步入成熟发展阶段
跨过概念,进入实践,空间迅猛发展
大数据已经在领先企业获得落地,并产生效果
典型案例:互联网(Google大脑)、金融(VISA信用卡可疑交易识别),以及运营商、零售等行业。
(图:Gartner 2013年技术成熟度曲线)
各国政府积极推动数据开放与利用:
● G8:八国集团发布了《G8开放数据宪章》,提出要加快推动数据开放和利用,通过紧密合作最大限度地促进增长和利益,减少风险。
● 欧盟:力推《数据价值链战略计划》,用大数据改造传统治理模式,降低公共部门成本,并促进经济增长和就业增长。
● 英国:政府发布《英国数据能力发展战略规划》,旨在利用数据产生商业价值、提振经济增长,承诺2015年之前开放交通、天气、医疗方面的核心数据库。
● 日本:安倍内阁正式公布新IT战略《创建最尖端IT国家宣言》,将开放大数据作为IT国家战略的核心。
● 中国:2015年3月的两会上,李克强总理明确表态,政府应该尽量公开非涉密的数据,以便利用这些数据更好地服务社会,也为政府决策和监管服务。

华为FusionInsight安全技术白皮书

华为 FusionInsight 2.5
安全技术白皮书
文档版本:01
发布日期:2015-06-30
华为技术有限公司
版权所有 © 华为技术有限公司 2015。 保留一切权利。 非经本公司书面许可,任何单位和个人不得擅自摘抄、复制本文档内容的部分或全部,并不得以任何形式传 播。
商标声明
和其他华为商标均为华为技术有限公司的商标。 本文档提及的其他所有商标或注册商标,由各自的所有人拥有。
华为技术有限公司
地址:深圳市龙岗区坂田华为总部办公楼 邮编:518129
网址:
目录
1 简介
4 应用安全
4.1 身份鉴别和认证
4.2 用户和权限管理
4.3 Web应用安全
4.4 数据库加固
4.5 审计安全
4.6 口令安全
4.6.1 口令保存
4.6.2 口令规则

FusionCube超融合平台技术白皮书


华为FusionCube HCI超融合平台技术白皮书
前言
概述
本文档介绍了华为FusionCube 3.2 虚拟化超融合基础设施(FusionCube Hyper-converged Virtualization Infrastructure,以下简称FusionCube 3.2 HCI)的产品价值、产品架构、高性能、线性扩展、系统安全以及系统可靠性。

借助本手册,您可以全面了解FusionCube 产品。

读者对象
本文档主要适用于以下工程师:
● 营销工程师
● 技术支持工程师
● 维护工程师
符号约定
在本文中可能出现下列标志,它们所代表的含义如下。

“注意”不涉及人身伤害。

目录
前言
1 产品概述
2 产品价值
3 产品架构
3.1 FusionSphere场景架构(架构、典型配置、组网、工作原理)
3.2 VMware场景架构(架构、典型配置、组网、工作原理)
4 分布式存储
4.1 架构概述
4.2 关键业务流程(数据路由、IO路径、Cache机制)
4.3 存储管理(存储集群管理、存储服务化)
4.4 数据冗余(多副本、Erasure Code)
4.5 特性介绍(SCSI/iSCSI块接口、快照、共享卷快照、一致性快照、链接克隆、多资源池、QoS、存储双活、存储异步复制)
5 硬件设备平台
5.1 机架服务器平台(RH1288 V3、RH2288H V3、RH5885H V3、1288H V5、2288H V5、2488 V5、2488H V5)
5.2 E9000刀片服务器平台(E9000机框、E9000刀片、高性能交换板)
5.3 高密服务器平台X6800/X6000(X6800机框、X6800服务器节点、X6000机箱、X6000服务器节点)
6 安装部署和运维管理
6.1 自动化部署(FusionCube Builder、系统初始化、设备自动发现)
6.2 统一运维管理(业务发放管理、一键式运维、Call Home)
7 性能和可扩展性
7.1 系统高性能(分布式SSD Cache加速:Read/Write Cache、大块Pass Through;硬件加速)
7.2 线性扩展(存储平滑扩容、性能线性扩展、一键式扩容)
7.3 FusionCube分布式存储相对于传统SAN的性能优势(更高的性能、线性Scale-up/Scale-out、大池POOL、SSD Cache vs SSD Tier)
8 系统可靠性
8.1 数据可靠性(块存储集群可靠性、数据一致性、数据冗余保护、快速数据重建、数据存储多路径)
8.2 硬件可靠性
8.3 系统亚健康增强
8.4 备份与恢复
8.5 容灾恢复(双活解决方案、异步复制解决方案)
9 系统安全
9.1 系统安全威胁
9.2 总体安全框架(网络安全;应用安全:权限管理、Web安全、数据库加固、日志管理;主机安全:操作系统加固)

1 产品概述
随着数据不断增长以及互联网业务的兴起,新兴业务激增,业务数据呈几何倍数增长,传统服务器+存储的架构已经无法很好地满足业务发展需求,分布式、云化技术应运而生。

华为数据中心3.0架构白皮书


Technical White PaperHigh Throughput Computing Data Center ArchitectureThinking of Data Center 3.0 AbstractIn the last few decades, data center (DC) technologies have kept evolving fromDC 1.0 (tightly-coupled silos) to DC 2.0 (computer virtualization) to enhance dataprocessing capability. Emerging big data analysis based business raiseshighly-diversified and time-varied demand for DCs. Due to the limitations onthroughput, resource utilization, manageability and energy efficiency, current DC2.0 shows its incompetence to provide higher throughput and seamlessintegration of heterogeneous resources for different big data applications. Byrethinking the demand for big data applications, Huawei proposes a highthroughput computing data center architecture (HTC-DC). Based on resourcedisaggregation and interface-unified interconnects, HTC-DC is enabled withPB-level data processing capability, intelligent manageability, high scalability andhigh energy efficiency. With competitive features, HTC-DC can be a promisingcandidate for DC3.0.ContentsEra of Big Data: New Data Center Architecture in Need 1⏹Needs on Big Data Processing 1⏹DC Evolution: Limitations and Strategies 1⏹Huawei’s Vision on Future DC 2DC3.0: Huawei HTC-DC 3⏹HTC-DC Overview 3⏹Key Features 4Summary 6June 2014ERA OF BIG DATA: NEW DATA CENTER ARCHITECTURE IN NEED⏹Needs on Big Data Processing During the past few years, applications which arebased on big data analysis have emerged, enrichinghuman life with more real-time and intelligentinteractions. Such applications have proven themselvesto become the next wave of mainstream of onlineservices. As the era of big data approaches, higher andhigher demand on data processing capability has beenraised. Being the major facilities to support highlyvaried big data processing tasks, future data centers(DCs) are expected to meet the following big datarequirements (Figure 1):▪PB/s-level data processing capability ensuring aggregated high-throughput computing, storage and networking; ▪Adaptability to highly-varied run-time resource demands; ▪Continuous availability providing 24x7 large-scaled service coverage, and supporting high-concurrency access; ▪ Rapid deployment allowing quick deployment and resource configuration for emerging applications.⏹ DC Evolution: Limitations and StrategiesDC technologies in the last decade have been evolved (Figure 2) from DC 1.0 (with tightly-coupled silos) to current DC 2.0 (with computer virtualization). Although data processing capability of DCs have been significantly enhanced, due to the limitations on throughput, resource utilization, manageability and energy efficiency, current DC 2.0 shows its incompetence to meet the demands of the future:Figure 2. DC Evolution- Throughput: Compared with technological improvement in computational capability of processors, improvement in I/O access performance has long been lagged behind. With the fact that computing within conventional DC architecture largely involves data movement between storage and CPU/memory via I/O ports, it is challenging for current DC architecture to provide PB-level high throughput for big data applications. The problem of I/O gap is resulted from low-speed characteristics of conventional transmission and storage mediums, and also from inefficient architecture design and data access mechanisms.To meet the requirement of future high throughput data processing capability, adopting new transmission technology (e.g. optical interconnects) and new storage medium can be feasible solutions. 
But a more fundamental approach is to re-design DC architecture as well as data access mechanisms for computing. If data access in computing process can avoid using conventional I/O mechanism, but use ultra-high-bandwidth network to serve as the new I/O functionality, DC throughput can be significantly improved.Figure 1. Needs Brought by Big DataJune 2014- Resource Utilization:Conventional DCs typically consist of individual servers which are specifically designed for individual applications with various pre-determined combinations of processors, memories and peripherals. Such design makes DC infrastructure very hard to adapt to emergence of various new applications, so computer virtualization technologies are introduced accordingly. Although virtualization in current DCs help improve hardware utilization, it cannot make use of the over-fractionalized resource, and thus making the improvement limited and typically under 30%1,2. As a cost, high overhead exists with hypervisor which is used as an essential element when implementing computer virtualization. In addition, in current DC architecture, logical pooling of resources is still restricted by the physical coupling of in-rack hardware devices. Thus, current DC with limited resource utilization cannot support big data applications in an effective and economical manner.One of the keystones to cope with such low utilization problem is to introduce resource disaggregation, i.e., decoupling processor, memory, and I/O from its original arrangements and organizing resources into shared pools. Based on disaggregation, on-demand resource allocation and flexible run-time application deployment can be realized with optimized resource utilization, reducing Total Cost of Operation (TCO) of infrastructure.- Manageability: Conventional DCs only provide limited dynamic management for application deployment, configuration and run-time resource allocation. When scaling is needed in large-scaled DCs, lots of complex operations still need to be completed manually.To avoid complex manual re-structuring and re-configuration, intelligent self-management with higher level of automation is needed in future DC. Furthermore, to speed up the application deployment, software defined approaches to monitor and allocate resources with higher flexibility and adaptability is needed.- Energy Efficiency: Nowadays DCs collectively consume about 1.3% of all Array global power supply3. As workload of big data drastically grows, future DCswill become extremely power-hungry. Energy has become a top-lineoperational expense, making energy efficiency become a critical issue in greenDC design. However, the current DC architecture fails to achieve high energyefficiency, with the fact that a large portion of energy is consumed for coolingother than for IT devices.With deep insight into the composition of DC power consumption (Figure3), design of each part in a DC can be more energy-efficient. To identify andeliminate inefficiencies and then radically cut energy costs, energy-savingdesign of DC should be top-to-bottom, not only at the system level but also atFigure 3. DC Power Consumption the level of individual components, servers and applications.Huawei’s Vision on Future DCIn Huawei’s vision, to support future big data applications, future DCs should be enabled with the following features:- Big-Data-Oriented: Different from conventional computing-centric DCs, data-centric should be the key design concept of DC3.0. 
Big data analysis based applications have highly varied characteristics, based on which DC 3.0 should provide optimizedmechanisms for rapid transmission, highly concurrent processing of massive data, and also for application-diversified acceleration.- Adaptation for Task Variation: Big data analysis brings a booming of new applications, raising different resource demands that vary with time. In addition, applications have different need for resource usage priority. To meet such demand variation with highadaptability and efficiency, disaggregation of hardware devices to eliminate the in-rack coupling can be a key stone. Such a methodenables flexible run-time configuration on resource allocation, ensuring the satisfactory of varied resource demand of different applications.- Intelligent Management: DC 3.0 involves massive hardware resource and high density run-time computation, requiring higher intelligent management with less need for manual operations. Application deployment and resource partitioning/allocation, even system diagnosis need to be conducted in automated approaches based on run-time monitoring and self-learning. Further, Service Level Agreement (SLA) guaranteeing in complex DC computing also requires a low-overhead run-time self-manageable solution.1./index.cfm?c=power_mgt.datacenter_efficiency_consolidation2./system-optimization/a-data-center-conundrum/3./green/bigpicture/#/datacenters/infographicsJune 2014- High Scalability: Big data applications require high throughput low-latency data access within DCs. At the same time, extremely high concentration of data will be brought into DC facilities, driving DCs to grow into super-large-scaled with sufficient processing capability. It is essential to enable DCs to maintain acceptable performance level when ultra-large-scaling is conducted.Therefore, high scalability should be a critical feature that makes a DC design competitive for the big data era.- Open, Standard based and Flexible Service Layer: With the fact that there exists no unified enterprise design for dynamical resource management at different architecture or protocol layers, from IO, storage to UI. Resources cannot be dynamically allocated based on the time and location sensitive characteristics of the application or tenant workloads. Based on the common principles of abstraction and layering, open and standard based service-oriented architecture (SOA) has been proven effective and efficient and has enabled enterprises of all sizes to design and develop enterprise applications that can be easily integrated and orchestrated to match their ever-growing business and continuous process improvement needs, while software defined networking (SDN) has also been proven in helping industry giants such as Google to improve its DC network resource utilization with decoupling of control and data forwarding, and centralized resource optimization and scheduling. To provide competitive big data related service, an open, standard based service layer should be enabled in future DC to perform application driven optimization and dynamic scheduling of the pooled resources across various platforms.- Green: For future large-scale DC application in a green and environment friendly approach, energy efficient components, architectures and intelligent power management should be included in DC 3.0. The use of new mediums for computing, memory, storage and interconnects with intelligent on-demand power supply based on resource disaggregation help achieving fine-grained energy saving. 
In addition, essential intelligent energy management strategies should be included: 1) Tracking the operational energy costs associated with individual application-related transactions; 2) Figuring out key factors leading to energy costs and conduct energy-saving scheduling; 3) Tuning energy allocation according to actual demands; 4) Allowing DCs to dynamically adjust the power state of servers, and etc.DC3.0: HUAWEI HTC-DCHTC-DC OverviewTo meet the demands of high throughput in the big data era, current DC architecture suffers from critical bottlenecks, one of which is the difficulty to bridge the I/O performance gap between processor and memory/peripherals. To overcome such problem and enable DCs with full big-data processing capability, Huawei proposes a new high throughput computing DC architecture (HTC-DC), which avoids using conventional I/O mechanism, but uses ultra-high-bandwidth network to serve as the new I/O functionality. HTC-DC integrates newly-designed infrastructures based on resource disaggregation, interface-unified interconnects and a top-to-bottom optimized software stack. Big data oriented computing is supported by series of top-to-bottom accelerated data operations, light weighted management actions and the separation of data and management.Figure 4. Huawei HTC-DC ArchitectureFigure 4 shows the architecture overview of HTC-DC. Hardware resources are organized into different pools, which are links up together via interconnects. Management plane provides DC-level monitoring and coordination via DC Operating System (OS), while business-related data access operations are mainly conducted in data plane. In the management plane, a centralized ResourceJune 2014Figure 5. Hardware Architecture of Huawei HTC-DC Management Center (RMC) conducts global resource partitioning/allocation and coordination/scheduling of the related tasks, with intelligent management functionalities such as load balancing, SLA guaranteeing, etc. Light-hypervisor provides abstract of pooled resources, and performs lightweight management that focuses on execution of hardware partitioning and resource allocation but not get involved in data access. Different from conventional hypervisor which includes data access functions in virtualization, light-hypervisor focuses on resource management, reducing complexity and overhead significantly. As a systematical DC3.0 design, HTC-DC also provides a complete software stack to support various DC applications. A programming framework with abundant APIs is designed to enable intelligent run-time self-management.Key FeaturesResource Disaggregated Hardware SystemFigure 5 illustrates the hardwarearchitecture of HTC-DC, which isbased on completely-disaggregatedresource pooling. The computing poolis designed with heterogeneity. Eachcomputing node (i.e. a board) carriesmultiple processors (e.g., x86, Atom,Power and ARM, etc.) for application-diversified data processing. Nodes inmemory pool adopt hybrid memorysuch as DRAM and non-volatilememory (NVM) for optimized high-throughput access. In I/O pool,general-purposed extension (GPU,massive storage, external networking,etc.) can be supported via differenttypes of ports on each I/O node. Eachnode in the three pools is equippedwith a cloud controller which canconduct diversified on-board manage-ment for different types of nodes. Pooled Resource Access Protocol (PRAP)To form a complete DC, all nodes in the three pools are interconnected via a network based on a new designed Pooled Resource Access Protocol (PRAP). 
To reduce the complexity of DC computing, HTC-DC introduces PRAP which has low-overhead packet format, RDMA-enabled simplified protocol stack, unifying the different interfaces among processor, memory and I/O. PRAP is implemented in the cloud controller of each node to provide interface-unified interconnects. PRAP supports hybrid flow/packet switching for inter-pool transmission acceleration, with near-to-ns latency. QoS can be guaranteed via run-time bandwidth allocation and priority-based scheduling. With simplified sequencing and data restoring mechanisms, light-weight lossless node-to-node transmission can be achieved.With resource disaggregation and unified interconnects, on-demand resource allocation can be supported by hardware with fine-granularity, and intelligent management can be conducted to achieve high resource utilization (Figure 6). RMC in the management plane provides per-minute based monitoring, on-demand coordination and allocation over hardware resources. Required resources from the pools can be appropriately allocated according to the characteristics of applications (e.g. Hadoop). Optimized algorithm assigns and schedules tasks on specific resource partitions where customized OSs are hosted. Thus, accessibility and bandwidth of remote memory and peripherals can be ensured within the partition, and hence end-to-end SLA can be guaranteed. Enabled with self-learning mechanisms, resource allocation and management in HTC-DC requires minimal manual operation, bringing intelligence and efficiency.June 2014Figure 6. On-demand Resource Allocation Based on DisaggregationHuawei Many-Core Data Processing UnitTo increase computing density, uplift data throughput and reduce communication latency, Huawei initializes Data Processing Unit (DPU, Figure 7) which adopts lightweight-core based many-core architecture, heterogeneous 3D stacking and Through-Silicon Vias (TSV) technologies. In HTC-DC, DPU can be used as the main computing component. The basic element of DPU is Processor-On-Die (POD), which consists of NoC, embedded NVM, clusters with heavy/light cores, and computing accelerators. With software-defined technologies, DPU supports resource partitioning and QoS-guaranteed local/remote resource sharing that allow application to directly access resources within its assigned partition. With decoupled multi-threading support, DPU executes speculative tasks off the critical path, resulting in enhanced overall performance. Therefore static power consumptions can be significantly reduced. Especially, some of the silicon chip area can be saved by using the optimal combinations of the number of synchronization and execution pipelines, while maintaining the same performance.Figure 7. Many-Core ProcessorNVM Based StorageEmerging NVM (including MRAM or STT-RAM, RRAM and PCM, etc.) has been demonstrated with superior performance over flash memories. Compared to conventional storage mediums (hard-disk, SSD, etc.), NVM provides more flattened data hierarchy with simplified layers, being essential to provide sufficient I/O bandwidth. In HTC-DC, NVMs are employed both as memory and storage. NVM is a promising candidate for DRAM replacement with competitive performance but lower power consumption. 
When used as storage, NVM provides 10 times higher IOPS than SSD4, bringing higher data processing capability with enhanced I/O performance.Being less hindered by leakage problems with technology scaling and meanwhile having a lower cost of area, NVM is being explored extensively to be the complementary medium for the conventional SDRAM memory, even in L1 caches. Appropriately tuning of selective architecture parameters can reduce the performance penalty introduced by the NVM to extremely tolerable levels while obtaining over 30% of energy gains.54./global/business/semiconductor/news-events/press-releases/detail?newsId=129615.M. Komalan et.al., “Feasibility exploration of NVM based I-cache through MSHR enhancements”, Proceeding in DATE’14June 2014Optical InterconnectsTo meet the demand brought by big data applications, DCs are driven to increase the data rate on links (>10Gbps) while enlarging the scale of interconnects (>1m) to host high-density components with low latency. However due to non-linear power consumption and signal attenuation, conventional copper based DC interconnects cannot have competitive performance with optical interconnects on signal integrity, power consumption, form factor and cost6. In particular, optical interconnect has the advantage of offering large bandwidth density with low attenuation and crosstalk. Therefore a re-design of DC architecture is needed to fully utilize advantages of optical interconnects. HTC-DC enables high-throughput low-latency transmission with the support of interface-unified optical interconnects. The interconnection network of HTC-DC employs low-cost Tb/s-level throughput optical transceiver and co-packaged ASIC module, with tens of pJ/bit energy consumption and low bit error rate for hundred-meter transmission. In addition, with using intra/inter-chip optical interconnects and balanced space-time-wavelength design, physical layer scalability and the overall power consumption can be enhanced. Using optical transmission that needs no signal synchronization, PRAP-based interconnects provide higher degree of freedom on topology choosing, and is enabled to host ultra-large-scale nodes.DC-Level Efficient Programming FrameworkTo fully exploit the architectural advantages and provide flexible interface for service layer to facilitate better utilization of underlying hardware resource, HTC-DC provides a new programming framework at DC-level. Such a framework includes abundant APIs, bringing new programming methodologies. Via these APIs, applications can issue requests for hardware resource based on their demands. Through this, optimized OS interactions and self-learning-based run-time resource allocation/scheduling are enabled.In addition, the framework supports automatically moving computing operations to near-data nodes while keeping data transmission locality. DC overhead is minimized by introducing topology-aware resource scheduler and limiting massive data movement within the memory pool.In addition, Huawei has developed the Domain Specific Language (HDSL) as part of the framework to reduce the complexity of programming in HTC-DC for parallelism. HDSL includes a set of optimized data structures with operations (such as Parray, parallel processing the data in array) and a parallel processing library. One of the typical applications of HDSL is for graph computing. HDSL can enable efficient programming with competitive performance. 
Automated generation of distributed codes is also supported.SUMMARYWith the increasing growth of data consumption, the age of big data brings new opportunities as well as great challenges for future DCs. DC technologies have been evolved from DC 1.0 (tightly-coupled server) to DC 2.0 (software virtualization) with the data processing capability being largely enhanced. However, the limited I/O throughput, energy inefficiency, low resource utilization and limited scalability of DC 2.0 become the bottlenecks to fulfill big data application demand. Therefore, a new, green and intelligent DC 3.0 architecture fitting different resource demands of various big-data applications is in need.With the design avoiding data access via conventional I/O but using ultra-high-bandwidth network to serve as the new I/O functionality, Huawei proposes HTC-DC as a new generation of DC design for future big data applications. HTC-DC architecture enables a DC to compute as a high throughput computer. Based on the resource disaggregated architecture and interface-unified PRAP network, HTC-DC integrates many-core processor, NVM, optical interconnects and DC-level efficient programming framework.Such a DC ensures PB-level data processing capability, supporting intelligent management, being easy and efficient to scale, and significantly saves energy.HTC-DC architecture is still being developed. Using Huawei’s cutting-edge technologies, HTC-DC can be a promising candidate design for the future, ensuring a firm step for DCs to head for the big data era.■6.“Silicon Photonics Market & Technologies 2011-2017: Big Investments, Small Business”, Yole Development, 2012Copyright © Huawei Technologies Co., Ltd. 2014. All rights reserved.No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.All other trademarks and trade names mentioned in this document are the property of their respective holders.NoticeThe information in this document may contain predictive statements including, without limitation, statements regarding the future financial and operating results, future product portfolio, new technology, etc. There are a number of factors that could cause actual results and developments to differ materially from those expressed or implied in the predictive statements. Therefore, such information is provided for reference purpose only and constitutes neither an offer nor an acceptance. Huawei may change the information at any time without notice.Huawei Technologies Co., Ltd.Address: Huawei Industrial BaseBantian, LonggangShenzhen 518129People's Republic of ChinaWebsite: Tel: 0086-755-28780808。

FusionInsight大数据解决方案白皮书


让数据慧说话,让企业更智能
华为FusionInsight大数据解决方案概述
华为FusionInsight大数据解决方案,快速集成结构化、半结构化和非结构化等多种数据,支持离线分析、实时流处理、实时检索、交互查询等各种数据处理能力,针对政府、金融、运营商、公共安全等数据密集型行业的客户需求,打造了敏捷、智慧、融合的大数据解决方案,让客户可以更快、更准、更稳地从各类繁杂无序的海量数据中发现价值,助力政府高效治理和企业卓越经营。

FusionInsight大数据平台包括HD数据底座、数据使能工具(DLF、RTD)与数据服务HDS。

2017年10月,IDC发布的《IDC MarketScape:中国大数据管理平台厂商评估,2017》报告中,华为FusionInsight 大数据平台位居领导者象限第一。

2017~2019年华为FusionInsight大数据连续3年入围Gartner Magic Quadrant for Data Management Solutions for Analytics,中国区厂商排名第一。

政务互联网+政务服务:一号一窗一网,数据多跑路,群众少跑腿,流程审批效率提升50%以上;个人或者企业办事只跑1次,提高效率和民生满意度。

城市IOC:城市运行实况直播,城市服务可视化;有效地利用数据,提升政府决策能力。

智慧海关:基于实时大数据技术,结合物流、税收、检疫风险规则、参数、模型;构建实时风控平台,缩短通关时间,提升关税征收准确性,提升查验率和查获率。

金融智慧营销:提升客户洞察能力,提高获客、挽客率和客户满意度;优化营销资源配置,提升人均销售业绩和效益。

智慧风控:信用卡全流程数据化运营,提升实时风控、实时征信、精准获客、分期预测、催收风控能力。

公共安全警务大数据:融合不同警种和各级单位数据,由“事后打”向“事前防”转变,从汗水警务向智慧警务演进,实现协同研判和作战,提升办案效率。

视频大数据:应用和算法平台解耦;支持千亿级多维数据秒级检索,提升案件研判效率。

FusionInsight LibrA企业级数据仓库白皮书


PB级企业数据仓库
FusionInsight LibrA是采用Shared-nothing架构的MPP系统,它是由众多拥有独立且互不共享CPU、内存、存储等系统资源的逻辑节点组成。

在这样的系统架构中,业务数据被分散存储在多个物理节点上,数据分析任务被推送到数据所在位置就近执行,通过控制模块的协调,并行地完成大规模的数据处理工作,实现对数据处理的快速响应。
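下面用一段概念性的Python代码示意上述Shared-nothing/MPP的处理过程:数据按分布列哈希到各数据节点,聚合任务被下推到各节点本地并行执行,最后由协调模块合并部分结果。其中的节点数、数据和函数名均为假设,仅用于说明原理:

```python
from collections import defaultdict

NUM_NODES = 4  # 假设4个数据节点(示意值)

def node_of(key) -> int:
    """按分布列的哈希值决定行存放在哪个数据节点(概念示意)。"""
    return hash(key) % NUM_NODES

# 1. 业务数据按分布列(此处为客户ID)分散存储到各节点的本地存储
local_rows = [[] for _ in range(NUM_NODES)]
orders = [(101, "SZ", 10.0), (202, "SH", 25.5), (303, "SZ", 7.2), (404, "BJ", 3.3)]
for cust_id, city, amount in orders:
    local_rows[node_of(cust_id)].append((city, amount))

# 2. 聚合任务被推送到数据所在节点“就近执行”,各节点并行产出部分结果
def local_aggregate(rows):
    part = defaultdict(float)
    for city, amount in rows:
        part[city] += amount
    return part

partials = [local_aggregate(rows) for rows in local_rows]  # 实际系统中由各节点并行执行

# 3. 协调模块合并各节点的部分聚合结果,返回最终结果
final = defaultdict(float)
for part in partials:
    for city, amount in part.items():
        final[city] += amount
print(dict(final))  # 例如 {'SZ': 17.2, 'SH': 25.5, 'BJ': 3.3}
```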

产品特性
简单易用
● 应用快速迁移/上线,兼容标准ANSI SQL,提供快速数据迁移工具。
● 简化运维,提供可视化集群部署、运维管理工具。
● 快速升级:版本升级性能提升,可在2小时内完成,且与数据库元数据和用户数据的数量无关。
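安全
● SSL安全连接,RFC5802口令登录认证,三权分立的用户权限管理模式,日志、文件敏感信息mask处理、完善的日志与审计等功能,全方位安全保证机制,为数据安全保驾护航。
● 通过了华为网络安全实验室ICSL认证,该认证是遵从英国当局颁布的网络安全标准设立的。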

弹性扩展
● Shared-nothing架构,具备超强的Scale-out横向扩展能力。
● 1024物理节点,扩展线性比为1。
● 基于通用X86架构,扩容成本低。

无缝对接Hadoop
● 对应用透明,完全支持SQL 2003标准访问Hadoop原生数据。
● 高性能交互查询:支持向量引擎访问HDFS存储层,支持高效率查询ORC文件,Smart Scan技术减少网络数据交换(ORC列式访问的思路参见下文示例)。
● 与华为大数据生态(Hadoop)体系无缝连接。
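作为补充,下面给出一个使用开源pyarrow库读取ORC文件并做列裁剪的小示例,用来说明ORC列式格式“只扫描查询所需列”的特点。该示例仅演示ORC格式本身的用法(文件路径为示意),并非LibrA访问HDFS数据的实际实现:

```python
import pyarrow.orc as orc

# 打开一个ORC文件(路径为示意),只读取查询涉及的两列
table = orc.ORCFile("warehouse/sales/part-0000.orc").read(
    columns=["region", "sales_amount"]
)

# 转为pandas做一个简单聚合,体会列裁剪带来的IO节省(需要安装pandas)
df = table.to_pandas()
print(df.groupby("region")["sales_amount"].sum())
```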

极致性能,高效交互SQL分析
● 列存向量化引擎,利用向量化+SIMD提供极致分析性能(向量化执行的思路参见下文示例)。
● 并行Bulk Load加载技术,旁路协调节点,数据节点直连,充分利用各节点的计算能力及网络带宽。
● 大规模集群通讯技术,集群网络通信分层解耦,实现数据中心内大规模节点的高通量无阻塞通信。
● 弹性集群:通过NodeGroup技术支持一套集群划分为多个逻辑子集群,不同NodeGroup之间计算资源可以弹性共享,满足子集群业务峰值时的计算需求。
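下面用NumPy给出一个向量化执行思路的示意:列存场景下同一列的数据连续存放,可以一次对整列完成过滤与聚合(底层可利用SIMD指令),与逐行解释执行形成对比。数据为随机生成的示例,并非LibrA内核代码:

```python
import numpy as np

# 列存:一列的值连续存放为数组(100万行示例数据)
amount = np.random.rand(1_000_000) * 100
city_id = np.random.randint(0, 10, size=1_000_000)

def row_at_a_time():
    """逐行解释执行的风格:逐行判断、逐行累加。"""
    total = 0.0
    for i in range(len(amount)):
        if city_id[i] == 3:
            total += amount[i]
    return total

def vectorized():
    """向量化执行的风格:一次对整列做筛选和求和。"""
    return amount[city_id == 3].sum()

print(row_at_a_time(), vectorized())  # 结果一致,向量化版本通常快一到两个数量级
```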

电信级高可靠
● 无单节点故障:全组件HA,无单点故障。数据节点HA + Handoff技术,协调节点多活,GTM全局事务节点HA。
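● 扩容业务不中断:基于NodeGroup技术的在线扩容,扩容过程中支持数据增、删、改、查,及DDL操作(Drop/Truncate/Alter table)。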

华为室内数字化面向5G演进白皮书(2023)


华为室内数字化面向5G演进白皮书随着5G的逐步普及和推广,越来越多的人开始关注5G技术在室内应用中的具体实现。

针对这一问题,华为发布了一份名为“华为室内数字化面向5G演进白皮书”的文献,可以为人们提供更具体的答案。

该白皮书主要分为以下几个方面。
第一部分:数字化室内建设的必要性
数字化室内建设的必要性越来越受到人们的重视。

越来越多的室内场所开始采用数字化技术,以提高服务效率和用户体验。

数字化室内建设对于5G技术的发展也至关重要。

第二部分:数字化室内建设的发展趋势
数字化室内建设的发展趋势主要包括智能化、自动化和网络化。

智能化是数字化室内建设的关键,自动化和网络化则可以提高服务效率和用户体验。

数字化室内建设的发展将在不久的将来推动5G技术的进一步发展。

第三部分:数字化室内建设的关键技术
数字化室内建设的关键技术包括无线网络、传感器、云计算、人工智能、虚拟现实和区块链等。

这些技术共同构成了数字化室内建设的基础。

第四部分:数字化室内建设的应用场景
数字化室内建设的应用场景包括智能办公、智慧运维、智能零售、智能医疗、智慧城市等。

这些场景都可以通过数字化技术实现自动化和智能化,以提高服务效率和用户体验。

第五部分:数字化室内建设对5G发展的影响
数字化室内建设对5G发展的影响主要体现在两个方面:一是数字化室内建设可以为5G技术提供更广阔的应用场景;二是数字化室内建设可以为5G技术的网络规划和优化提供更多数据支持。

总之,“华为室内数字化面向5G演进白皮书”详细阐明了数字化室内建设的必要性、发展趋势、关键技术、应用场景以及对5G发展的影响。

该白皮书为人们提供了更加具体的思路和方向,可以为数字化室内建设和5G技术的进一步发展提供更加有力的支持。

华为FusionSphere 5.1北向接口SDK技术白皮书(服务器虚拟化)


华为FusionSphere 5.1北向接口SDK技术白皮书
文档版本:V2.0
发布日期:2015-05-30
版权所有 © 华为技术有限公司 2015。

保留一切权利。

非经本公司书面许可,任何单位和个人不得擅自摘抄、复制本文档内容的部分或全部,并不得以任何形式传播。

商标声明
和其他华为商标均为华为技术有限公司的商标。

本文档提及的其他所有商标或注册商标,由各自的所有人拥有。

注意您购买的产品、服务或特性等应受华为公司商业合同和条款的约束,本文档中描述的全部或部分产品、服务或特性可能不在您的购买或使用范围之内。

除非合同另有约定,华为公司对本文档内容不做任何明示或暗示的声明或保证。

由于产品版本升级或其他原因,本文档内容会不定期进行更新。

除非另有约定,本文档仅作为使用指导,本文档中的所有陈述、信息和建议不构成任何明示或暗示的担保。

华为技术有限公司
地址:深圳市龙岗区坂田华为总部办公楼 邮编:518129
网址:
前言
概述
本文档介绍FusionSphere产品北向开放接口SDK技术。

读者对象
本文档主要适用于以下工程师:公司MKT、行销、渠道商,在项目拓展中使用。
符号约定
在本文中可能出现下列标志,它们所代表的含义如下。

“注意”不涉及人身伤害。

“说明”
修改记录
修改记录累积了每次文档更新的说明。

最新版本的文档包含以前所有文档版本的更新内容。

文档版本V1.0 (2015-05-30)第一次正式发布。

目录
前言
1 开放能力总览
1.1 文档介绍
1.2 整体结构
1.3 配套版本
2 开放集成场景
2.1 被第三方云管理系统集成(集成场景概述、典型应用场景)
2.2 被第三方备份软件集成(集成场景概述、典型应用场景)
2.3 被第三方防病毒软件等安全产品集成(集成场景概述、典型应用场景)
2.4 被CloudStack集成(集成场景概述、典型应用场景)
2.5 命令行运维
3 开放能力清单
3.1 FusionManager云管理北向接口SDK(开放接口清单)
3.2 FusionCompute云操作系统北向接口SDK(开放接口清单)
3.3 FusionCompute虚拟磁盘管理接口SDK(Backup&Restore虚拟磁盘管理接口、FusionCompute虚拟机备份相关接口SDK)
3.4 FusionCompute云操作系统PowerShell命令行
4 术语表

1 开放能力总览
1.1 文档介绍
FusionSphere解决方案旨在向其用户提供IaaS层服务,相应的,FusionSphere对外开放了IaaS接口,供用户更为灵活地使用IaaS层服务。
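北向接口通常以REST方式被第三方云管理、备份、防病毒等系统调用(对应上文第2章的各集成场景)。下面给出一个纯示意的Python调用骨架,其中的URL、端口、字段名均为假设,并非FusionSphere北向接口的实际定义,具体请以官方接口参考文档为准:

```python
import requests

BASE = "https://fm.example.com:7443"   # FusionManager地址与端口(均为示意)

def login(user: str, password: str) -> str:
    """示意:向北向接口认证,获取后续调用所需的token(URL与字段名为假设)。"""
    resp = requests.post(f"{BASE}/rest/login",
                         json={"user": user, "password": password},
                         verify=False, timeout=30)
    resp.raise_for_status()
    return resp.json()["token"]

def list_vms(token: str) -> dict:
    """示意:查询虚拟机列表,供第三方云管理系统做资源纳管(URL为假设)。"""
    resp = requests.get(f"{BASE}/rest/vms",
                        headers={"X-Auth-Token": token},
                        verify=False, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = login("admin", "***")
    for vm in list_vms(token).get("vms", []):
        print(vm.get("name"), vm.get("status"))
```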


