ESXi Multi-NIC Setup


ESXi Multi-VLAN Configuration



http://08180701.blog.51cto.com/478869/410652
Background: a VMware ESXi server with a single NIC needs to carry multiple VLANs.
Solution: first set the physical switch port facing the VMware ESXi server to trunk mode, then create separate virtual switches (port groups) on the ESXi host, giving each one a VLAN ID that matches the corresponding VLAN number on the physical switch.

Finally, attach each virtual machine's NIC to the corresponding virtual switch. (From "Running multiple VLANs over a single NIC on a VMware ESXi server", 2010-10-26.)
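For reference, a minimal sketch of the same idea from the ESXi command line on newer builds; the port-group names, VLAN numbers, and the vSwitch0 name below are made-up examples, and the VLAN tag is applied per port group:

esxcli network vswitch standard portgroup add --portgroup-name "VLAN10" --vswitch-name vSwitch0   # port group for VLAN 10
esxcli network vswitch standard portgroup set --portgroup-name "VLAN10" --vlan-id 10
esxcli network vswitch standard portgroup add --portgroup-name "VLAN20" --vswitch-name vSwitch0   # port group for VLAN 20
esxcli network vswitch standard portgroup set --portgroup-name "VLAN20" --vlan-id 20

The physical switch port facing the host must still be configured as a trunk carrying VLANs 10 and 20.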

VMware ESXi 6.0 Settings Guide


Contents: 1. Health check; 2. Time configuration; 3. DNS and routing; 4. Virtual machine startup and shutdown options; 5. Network settings.

1. Health check. On the host's Configuration tab, click Health Status to see the health of each hardware component; any hardware warnings or faults show up here.

2. Time configuration. On the Configuration tab, click Time Configuration to see the current time and NTP (Network Time Protocol) information. Click Properties in the upper right to change the time and NTP settings; under the NTP Options you can set NTP parameters and add NTP servers as needed.
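If shell/SSH access is enabled, roughly the same NTP setup can be scripted; a sketch assuming a generic pool server (pool.ntp.org is a placeholder, and the exact service commands can differ between ESXi releases):

echo "server pool.ntp.org" >> /etc/ntp.conf                                  # add an NTP server
esxcli network firewall ruleset set --ruleset-id ntpClient --enabled true    # open the NTP client firewall rule
/etc/init.d/ntpd restart                                                     # restart the NTP daemon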

3. DNS and routing. On the Configuration tab, click DNS and Routing to see the current host name, DNS servers, and default gateway. Click Properties in the upper right to change the host name, DNS, and default gateway.
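The equivalent from the ESXi shell, for reference; the hostname, DNS server, and gateway below are placeholder values:

esxcli system hostname set --host esxi01 --domain example.local   # host name and domain
esxcli network ip dns server add --server 192.168.1.10            # add a DNS server
esxcfg-route 192.168.1.254                                        # set the default gateway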

4. Virtual machine startup and shutdown options. On the Configuration tab, click Virtual Machine Startup/Shutdown to see each VM's startup and shutdown options. This setting controls whether a VM is allowed to start and stop automatically with the host: when the host shuts down, the VM is powered off automatically, and when the host boots, the VM either starts automatically or stays powered off, according to this configuration.

For frequently used virtual machines it is recommended to start them together with the host.

5. Network settings. 1. On the Configuration tab, click Networking to see the network setup. In this example there is one standard switch, vSwitch0, carrying a VM Network port group, a Management Network port group, and four physical adapters, two of which are not connected; the host management IP address is visible under Management Network.

All powered-on virtual machines and the Management Network reach the outside through those four physical adapters.

2. Clicking Add Networking in the upper right opens a dialog with two connection types: (1) Virtual Machine (VM Network) — ports used by all virtual network adapters, comparable to the downlink ports of a physical switch.

This type is mainly used by the VMs created on the ESXi host; the network chosen earlier when creating a VM refers to exactly this.
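To cross-check what the Networking view shows, the same information can be listed from the ESXi shell; these are read-only commands and make no changes:

esxcli network vswitch standard list              # standard vSwitches, their uplinks and port groups
esxcli network vswitch standard portgroup list    # all port groups
esxcli network ip interface ipv4 get              # VMkernel (management) IP addresses
esxcli network nic list                           # physical adapters and link state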

ESXi / Windows: How Multi-Core vCPU Settings Work


Physical CPU (i.e. socket): an actual CPU package; for example, 2.

Cores: how many cores one CPU has; for example, 8.

Hyper-threading: Intel's hyper-threading technology, which presents each core as an additional logical core.

So a hypervisor such as VMware ESXi ends up with this many logical CPUs: physical CPUs (sockets) × cores × 2 = 2 × 8 × 2 = 32. Linux places no limit on the number of physical CPUs (sockets).

Windows 10 Pro supports at most 2 sockets, i.e. 2 physical CPUs.

On Windows 10, if ESXi presents 16 vCPUs, the guest can therefore use at most two sockets (two physical CPUs),

which means each virtual socket should be configured with 8 cores.

That gives exactly 2 sockets.
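As a quick cross-check, the host's own counts can be read from the ESXi shell; a minimal sketch (field names may vary slightly between ESXi builds):

esxcli hardware cpu global get    # reports CPU Packages, CPU Cores and CPU Threads
# e.g. 2 packages x 8 cores per package x 2 threads per core (hyper-threading) = 32 logical CPUs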

Setting the Number of Cores per CPU in a Virtual Machine: A How-to GuideWhen creating virtual machines, you should configure processor settings for them. With hardware virtualization, you can select the number of virtual processors for a virtual machine and set the number of sockets and processor cores. How many cores per CPU should you select for optimal performance? Which configuration is better: setting fewer processors with more CPU cores or setting more processors with fewer CPU cores? This blog post explains the main principles of processor configuration for VMware virtual machines. TerminologyFirst of all, let’s go over the definitions of the terms you should know when configuring CPU settings for to help you understand the working principle. Knowing what each term means allows you to avoid confusion about the number of cores per CPU, CPU cores per socket, and the number of CPU cores vs speed.A CPU Socket is a physical connector on the motherboard to which a single physical CPU is connected. A motherboard has at least one CPU socket. Server motherboards usually have multiple CPU sockets that support multiple multicore processors. CPU sockets are standardized for different processor series. Intel and AMD use different CPU sockets for their processor families.A CPU (central processing unit, microprocessor chip, or processor) is a computer component. It is the electronic circuitry with transistors that is connected to a socket. A CPU executes instructions to perform calculations, run applications, and complete tasks. When the clock speed of processors came close to the heat barrier, manufacturers changed the architecture of processors and started producing processors with multiple CPU cores. To avoid confusion between physical processors and logical processors or processor cores, some vendors refer to a physical processor as a socket.A CPU core is the part of a processor containing the L1 cache. The CPU core performs computational tasks independently without interacting with other cores and external components of a “big” processor that are shared among cores. Basically, a core can be considered as a small processor built into the main processor that is connected to a socket. Applications should support parallel computations to use multicore processors rationally.Hyper-threading is a technology developed by Intel engineers to bring parallel computation to processors that have one processor core. The debut of hyper-threading was in 2002 when the Pentium 4 HT processor was released and positioned for desktop computers. An operating system detects a single-core processor with hyper-threading as a processor with two logical cores (not physical cores). Similarly, a four-core processor with hyper-threading appears to an OS as a processor with 8 cores. The more threads run on each core, the more tasks can be done in parallel. Modern Intel processors have both multiple cores and hyper-threading. Hyper-threading is usually enabled by default and can be enabled or disabled in BIOS. AMD simultaneous multi-threading (SMT) is the analog of hyper-threading for AMD processors.A vCPU is a virtual processor that is configured as a virtual device in the virtual hardware settings of a VM. A virtual processor can be configured to use multiple CPU cores. 
A vCPU is connected to a virtual socket.CPU overcommitment is the situation when you provision more logical processors (CPU cores) of a physical host to VMs residing on the host than the total number of logical processors on the host.NUMA (non-uniform memory access) is a computer memory design used in multiprocessor computers. The idea is to provide separate memory for each processor (unlike UMA, where all processors access shared memory through a bus). At the same time, a processor can access memory that belongs to other processors by using a shared bus (all processors access all memory on the computer). A CPU has a performance advantage of accessing own local memory faster than other memory on a multiprocessor computer.These basic architectures are mixed in modern multiprocessor computers. Processors are grouped on a multicore CPU package or node. Processors that belong to the same node share access to memory modules as with the UMA architecture. Also, processors can access memory from the remote node via a shared interconnect. Processors do so for the NUMA architecture but with slower performance. This memory access is performed through the CPU that owns that memory rather than directly.NUMA nodes are CPU/Memory couples that consist of a CPU socket and the closest memory modules. NUMA is usually configured in BIOS as the node interleaving or interleaved memory setting.An example. An ESXi host has two sockets (two CPUs) and 256 GB of RAM. Each CPU has 6 processor cores. This server contains two NUMA nodes. Each NUMA node has 1 CPU socket (one CPU), 6 Cores, and 128 GB of RAM.always tries to allocate memory for a VM from a native (home) NUMA node. A home node can be changed automatically if there are changes in VM loads and ESXi server loads.Virtual NUMA (vNUMA) is the analog of NUMA for VMware virtual machines. A vNUMA consumes hardware resources of more than one physical NUMA node to provide optimal performance. The vNUMA technology exposes the NUMA topology to a guest operating system. As a result, the guest OS is aware of the underlying NUMA topology for the most efficient use. The virtual hardware version of a VM must be 8 or higher to use vNUMA. Handling of vNUMA was significantly improved in VMware vSphere 6.5, and this feature is no longer controlled by the CPU cores per socket value in the VM configuration. By default, vNUMA is enabled for VMs that have more than 8 logical processors (vCPUs). 
You can enable vNUMA manually for a VM by editing the VMX configuration file of the VM and adding theline numa.vcpu.min=X, where X is the number of vCPUs for the virtual machine.CalculationsLet’s find out how to calculate the number of physical CPU cores, logical CPU cores, and other parameters on a server.The total number of physical CPU cores on a host machine is calculated with the formula:(The number of Processor Sockets) x (The number of cores/processor) = The number of physical processor cores*Processor sockets only with installed processors must be considered.If hyper-threading is supported, calculate the number of logical processor cores by using the formula:(The number of physical processor cores) x (2 threads/physical processor) = the number of logical processorsFinally, use a single formula to calculate available processor resources that can be assigned to VMs:(CPU sockets) x (CPU cores) x (threads)For example, if you have a server with two processors with each having 4 cores and supporting hyper-threading, then the total number of logical processors that can be assigned to VMs is2(CPUs) x 4(cores) x 2(HT) = 16 logical processorsOne logical processor can be assigned as one processor or one CPU core for a VM in VM settings.As for virtual machines, due to hardware emulation features, they can use multiple processors and CPU cores in their configuration for operation. One physical CPU core can be configured as a virtual CPU or a virtual CPU core for a VM.The total amount of clock cycles available for a VM is calculated as:(The number of logical sockets) x (The clock speed of the CPU)For example, if you configure a VM to use 2 vCPUs with 2 cores when you have a physical processor whose clock speed is 3.0 GHz, then the total clock speed is 2x2x3=12 GHz. If CPU overcommitment is used on an ESXi host, the available frequency for a VM can be less than calculated if VMs perform CPU-intensive tasks.LimitationsThe maximum number of virtual processor sockets assigned to a VM is 128. If you want to assign more than 128 virtual processors, configure a VM to use multicore processors.The maximum number of processor cores that can be assigned to a single VM is 768 in vSphere 7.0 Update 1. A virtual machine cannot use more CPU cores than the number of logical processor cores on a physical machine.CPU hot add. If a VM has 128 vCPUs or less than 128 vCPUs, then you cannot use the CPU hot add feature for this VM and edit the CPU configuration of a VM while a VM is in the running state.OS CPU restrictions. If an operating system has a limit on the number of processors, and you assign more virtual processors for a VM, the additional processors are not identified and used by a guest OS. Limits can be caused by OS technical design and OS licensing restrictions. Note that there are operating systems that are licensed per-socket and per CPU core (for example, ).CPU support limits for some operating systems:Windows 10 Pro – 2 CPUsWindows 10 Home – 1 CPUWindows 10 Workstation – 4 CPUsWindows Server 2019 Standard/Datacenter – 64 CPUsWindows XP Pro x64 – 2 CPUsWindows 7 Pro/Ultimate/Enterprise - 2 CPUsWindows Server 2003 Datacenter – 64 CPUsConfiguration RecommendationsFor older vSphere versions, I recommend using sockets over cores in VM configuration. At first, you might not see a significant difference in CPU sockets or CPU cores in VM configuration for VM performance. Be aware of some configuration features. 
Remember about NUMA and vNUMA when you consider setting multiple virtual processors (sockets) for a VM to have optimal performance.If vNUMA is not configured automatically, mirror the NUMA topology of a physical server. Here are some recommendations for VMs in VMware vSphere 6.5 and later:When you define the number of logical processors (vCPUs) for a VM, prefer the cores-per-socket configuration. Continue until the count exceeds the number of CPU cores on a single NUMA node on the ESXi server. Use the same logic until you exceed the amount of memory that is available on a single NUMA node of your physical ESXi server.Sometimes, the number of logical processors for your VM configuration is more than the number of physical CPU cores on a single NUMA node, or the amount of RAM is higher than the total amount of memory available for a single NUMA node. Consider dividing the count of logical processors (vCPUs) across the minimum number of NUMA nodes for optimal performance.Don’t set an odd number of vCPUs if the CPU count or amount of memory exceeds the number of CPU cores. The same applies in case memory exceeds the amount of memory for a single NUMA node on a physical server.Don’t create a VM that has a number of vCPUs larger than the count of physical processor cores on your physical host.If you cannot disable vNUMA due to your requirements, don’t enable the vCPU Hot-Add feature.If vNUMA is enabled in vSphere prior to version 6.5, and you have defined the number of logical processors (vCPUs) for a VM, select the number of virtual sockets for a VM while keeping the cores-per-socket amount equal to 1 (that is the default value). This is because the one-core-per-socket configuration enables vNUMA to select the best vNUMA topology to the guest OS automatically. This automatic configuration is optimal on the underlying physical topology of the server. If vNUMA is enabled, and you’re using the same number of logical processors (vCPUs) but increase the number of virtual CPU cores and reduce the number of virtual sockets by the same amount, then vNUMA cannot set the best NUMA configuration for a VM. As a result, VM performance is affected and can degrade.If a guest operating system and other software installed on a VM are licensed on a per-processor basis, configure a VM to use fewer processors with more CPU cores. For example, Windows Server 2012 R2 is licensed per socket, and Windows Server 2016 is licensed on a per-core basis.If you use CPU overcommitment in the configuration of your VMware virtual machines, keep in mind these values: 1:1 to 3:1 – There should be no problems in running VMs3:1 to 5:1 – Performance degradation is observed6:1 – Prepare for problems caused by significant performance degradationCPU overcommitment with normal values can be used in test and dev environments without risks.Configuration of VMs on ESXi HostsFirst of all, determine how many logical processors (Total number of CPUs) of your physical host are needed for a virtual machine for proper work with sufficient performance. Then define how many virtual sockets with processors (Number of Sockets in vSphere Client) and how many CPU cores (Cores per Socket) you should set for a VM keeping in mind previous recommendations and limitations. The table below can help you select the needed configuration.If you need to assign more than 8 logical processors for a VM, the logic remains the same. To calculate the number of logical CPUs in , multiply the number of sockets by the number of cores. 
For example, if you need to configure a VM to use 2 processor sockets, each with 2 CPU cores, then the total number of logical CPUs is 2*2=4. It means that you should select 4 CPUs in the virtual hardware options of the VM in vSphere Client to apply this configuration.

Let me explain how to configure CPU options for a VM in VMware vSphere Client. Enter the IP address of your in a web browser, and open VMware vSphere Client. In the navigator, open Hosts and Clusters, and select the virtual machine that you want to configure. Make sure that the VM is powered off to be able to change the CPU configuration. Right-click the VM, and in the context menu, hit Edit Settings to open virtual machine settings. Expand the CPU section in the Virtual Hardware tab of the Edit Settings window.

CPU. Click the drop-down menu in the CPU row, and select the total number of logical processors needed for this VM. In this example, I select 4 logical processors for the Ubuntu VM (blog-Ubuntu1).

Cores per Socket. In this row, click the drop-down menu, and select the needed number of cores for each virtual socket (processor).

CPU Hot Plug. If you want to use this feature, select the Enable CPU Hot Add checkbox. Remember the limitations and requirements.

Reservation. Select the guaranteed minimum allocation of CPU clock speed (frequency, MHz, or GHz) for a virtual machine on an ESXi host or cluster.

Limit. Select the maximum CPU clock speed for a VM processor. This frequency is the maximum for the virtual machine even if it is the only VM running on an ESXi host or cluster with more free processor resources. The set limit applies to all virtual processors of a VM. If a VM has 2 single-core processors and the limit is 1000 MHz, then both virtual processors together work at a total clock speed of 1000 MHz (500 MHz for each processor).

Shares. This parameter defines the priority of resource consumption by virtual machines (Low, Normal, High, Custom) on an ESXi host or resource pool. Unlike the Reservation and Limit parameters, the Shares parameter is applied to a VM only if there is a lack of CPU resources within an ESXi host, resource pool, or DRS cluster. Available options for the Shares parameter: Low – 500 shares per virtual processor; Normal – 1000 shares per virtual processor; High – 2000 shares per virtual processor; Custom – set a custom value. The higher the Shares value, the larger the amount of CPU resources provisioned for a VM within an ESXi host or a resource pool.

Hardware virtualization. Select this checkbox to enable . This option is useful if you want to run a VM inside a VM for testing or educational purposes.

Performance counters. This feature is used to allow an application installed within the virtual machine to be debugged and optimized after measuring CPU performance.

Scheduling Affinity. This option is used to assign a VM to a specific processor. The entered value can be like this: “0, 2, 4-7”.

I/O MMU. This feature allows VMs to have direct access to hardware input/output devices such as storage controllers, network cards, and graphics cards (rather than using emulated or paravirtualized devices). I/O MMU is also called Intel Virtualization Technology for Directed I/O (Intel VT-d) and AMD I/O Virtualization (AMD-V). I/O MMU is disabled by default. Using this option is deprecated in vSphere 7.0.
If I/O MMU is enabled for a VM, the VM cannot be migrated with and is not compatible with snapshots, memory overcommit, suspended VM state, physical NIC sharing, and .If you use a standalone ESXi host and use VMware Host Client to configure VMs in a web browser, the configuration principle is the same as for VMware vSphere Client.If you connect to vCenter Server or ESXi host in and open VM settings of a vSphere VM, you can edit the basic configuration of virtual processors. Click VM > Settings, select the Hardware tab, and click Processors. On the following screenshot, you see processor configuration for the same Ubuntu VM that was configured before in vSphere Client. In the graphical user interface (GUI) of VMware Workstation, you should select the number of virtual processors (sockets) and the number of cores per processor. The number of total processor cores (logical cores of physical processors on an ESXi host or cluster) is calculated and displayed below automatically. In the interface of vSphere Client, you set the number of total processor cores (the CPUs option), select the number of cores per processor, and then the number of virtual sockets is calculated and displayed.Configuring VM Processors in PowerCLIIf you prefer using the command-line interface to configure components of VMware vSphere, use to edit the CPU configuration of VMs. Let’s find out how to edit VM CPU configuration for a VM which name is Ubuntu 19 in Power CLI. The commands are used for VMs that are powered off.To configure a VM to use two single-core virtual processors (two virtual sockets are used), use the command:get-VM -name Ubuntu19 | set-VM -NumCpu 2Enter another number if you want to set another number of processors (sockets) to a VM.In the following example, you see how to configure a VM to use two dual-core virtual processors (2 sockets are used):$VM=Get-VM -Name Ubuntu19$VMSpec=New-Object -Type VMware.Vim.VirtualMachineConfigSpec -Property @{ "NumCoresPerSocket" = 2}$VM.ExtensionData.ReconfigVM_Task($VMSpec)$VM | Set-VM -NumCPU 2Once a new CPU configuration is applied to the virtual machine, this configuration is saved in the VMX configuration file of the VM. In my case, I check the Ubuntu19.vmx file located in the VM directory on the datastore (/vmfs/volumes/datastore2/Ubuntu19/). Lines with new CPU configuration are located at the end of the VMX file.numvcpus = "2"cpuid.coresPerSocket = "2"If you need to reduce the number of processors (sockets) for a VM, use the same command as shown before with less quantity. For example, to set one processor (socket) for a VM, use this command:get-VM -name Ubuntu19 | set-VM -NumCpu 1The main advantage of using Power CLI is the ability to configure multiple VMs in bulk. is important and convenient if the number of virtual machines to configure is high. Use VMware cmdlets and syntax of Microsoft PowerShell to create scripts.ConclusionThis blog post has covered the configuration of virtual processors for VMware vSphere VMs. Virtual processors for virtual machines are configured in VMware vSphere Client and in Power CLI. The performance of applications running on a VM depends on the correct CPU and memory configuration. In VMware vSphere 6.5 and later versions, set more cores in CPU for virtual machines and use the CPU cores per socket approach. If you use vSphere versions older than vSphere 6.5, configure the number of sockets without increasing the number of CPU cores for a VM due to different behavior of vNUMA in newer and older vSphere versions. 
Take into account the licensing model of software you need to install on a VM. If the software is licensed using a per-CPU model, configure more cores per CPU in VM settings. When using virtual machines in VMware vSphere, don’t forget about . Use NAKIVO Backup & Replication to back up your virtual machines, including VMs that have multiple cores per CPU. Regular backup helps you protect your data and recover the data in case of a .


Installing and Configuring pfSense on ESXi


pfSense installation (2016-04-01). ESXi 6 network setup; the configuration below is the final result.

In the ESXi network configuration, choose Add Networking.

Select Virtual Machine, click Next, untick all the physical NIC options, and click Next.

On the next step change the network label to VM LAN; the built-in ESXi network can be renamed VM WAN to make the internal and external networks easy to tell apart.

Click Finish.

Attach the virtual machines to the networks just created.

Assign the two virtual NICs to VM WAN and VM LAN respectively.

Configure the other virtual machines the same way.

With the networking in place, install the pfSense server.

Download the latest version; any mirror in the download list will do.

https:///download/ 2. Create the virtual machine: choose New Virtual Machine in ESXi, set the guest OS to Other -> FreeBSD (64-bit); 1 CPU, 1 GB of RAM, and an 8 GB disk are enough.

Press Enter to load the files from the installation CD.

Choose 99 to install to the local hard disk.

Select the last entry to start the installation, then choose the first option, Quick/Easy Install.

If you want a customized installation, choose the second option instead. Select OK and press Enter to continue; when the installation finishes, choose Reboot, and remember to remove the installation CD after rebooting.

After the reboot, the console menu appears:

0) Logout (SSH only) 1) Assign interfaces 2) Set interface(s) IP address 3) Reset webConfigurator password 4) Reset to factory defaults 5) Reboot system 6) Halt system 7) Ping host 8) Shell 9) pfTop 10) Filter logs 11) Restart webConfigurator 12) pfSense developer shell 13) Upgrade from console 14) Enable Secure Shell (sshd). Choose 2 to set the WAN and LAN IP addresses; enter 1 or 2 to pick WAN or LAN and press Enter.

Enter the LAN IP address, 172.16.10.110, and press Enter; set the WAN side the same way with its own IP address.

Enter the subnet mask, here 16 (a /16), and press Enter; do the same for the WAN side.

When asked for the upstream/WAN IP, just press Enter to skip it; likewise press Enter to skip the IPv6 prompt. When asked whether to enable DHCP, choose n and configure it later from the web interface.

Press Enter when the configuration is complete to continue.

Use the IP address set above for web management.

Open the web interface at http://172.16.10.110; the username is admin and the password is pfsense.

Configuring NAT Internet Access on a VMware ESXi Server


When using VMware Workstation we often give virtual machines NAT networking; compared with bridged mode, NAT lets the VMs share the host's network without needing IP addresses of their own.

On ESXi, networking is handled by vSwitches, so there is no built-in NAT option.

In practice, though, we regularly run short of addresses, for example one public IP but a whole stack of VMs that need Internet access.

In that case a software router does the job.

First look at the network before any changes. In vSphere Client select the host, then click Configuration -> Networking on the right: there is one virtual switch, vSwitch0, forming the VM Network, and it is bound to the host's physical NIC vmnic0, so this network is connected to the outside world.

Four virtual machines are connected to this network.

For these four VMs to get online right now, each needs its own IP in that subnet.

To share a single connection instead, we add an internal network, say 10.10.10.*, and then route requests from that subnet out to the external network.

First create the internal network on the host: still on the Networking page, click Add Networking... and choose a Virtual Machine network. The next step is the important one: create a new virtual switch but do not tie it to a physical NIC, so untick vmnic1; the preview below will show that no adapters are attached.

We do this because traffic from this network should be forwarded onto the VM Network rather than leaving through a physical NIC of its own.

In the next step, give it a name, for example NAT Network.
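For reference, the same internal-only switch and port group can be created from the ESXi shell; a sketch using the port-group name from this walkthrough (the vSwitch name vSwitch1 is an assumption, and no uplink is added, which is exactly what keeps the network internal):

esxcli network vswitch standard add --vswitch-name vSwitch1                                     # new vSwitch with no physical uplink
esxcli network vswitch standard portgroup add --portgroup-name "NAT Network" --vswitch-name vSwitch1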

Next, build the software router; its job is to connect the two networks and forward requests from the internal network to the external one.

I recommend pfSense. It is distributed as an OVA file; in vSphere Client choose File -> Deploy OVF Template... to deploy it. The process is straightforward, so no screenshots here.

After deployment, edit the VM's settings. As a router it must have two network adapters: define adapter 1 as the external side and attach it to VM Network, and adapter 2 as the internal side attached to NAT Network. Also note down the MAC addresses of these two adapters; they will be needed later.
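One way to read those MAC addresses without opening the VM's settings is from the ESXi shell (the world ID below is a placeholder you look up first):

esxcli network vm list                       # lists powered-on VMs and their World IDs
esxcli network vm port list -w <world-id>    # per-vNIC port group and MAC address for that VM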

ESXi + pfSense with a Public IP: Port Forwarding to Internal Servers


There are many ways to expose an internal server's ports; ngrok and frp have been covered before, and today we do it with ESXi + pfSense.

0. Prerequisites: an ESXi server (version 5.5 or later) with two NICs; one public IP (a fixed IP is best); a pfSense image. 1. Topology and planning: (1) the internal subnet is 192.168.0.0/24 and the office router's gateway is 192.168.0.1; (2) the ESXi server's two NICs: one connects to the internal switch, the other to the external line (for example the fiber modem), with the ESXi management address at 192.168.0.151; (3) pfSense is installed on ESXi and uses both NICs; its internal side stays on the existing subnet, 192.168.0.0/24, with the internal gateway address 192.168.0.234; (4) a VM on ESXi at 192.168.0.219 runs a web service (nginx). 2. ESXi network configuration: create two vSwitches, one per NIC. Note that for the external-facing NIC the IP settings must match the actual upstream network; in this environment the uplink is a telecom fiber modem, so automatic (DHCP) addressing is used.

The internal-facing vSwitch can be given an IP range, for example 192.168.0.1-192.168.0.127.

If an internal NIC was configured earlier, simply reuse it; no extra configuration is needed.

The resulting topology: vSwitch0 is the internal side, connected to the internal switch, with the VM network "VM Network"; vSwitch1 is the external side, connected to the fiber modem, with the VM network "VM_DX". 3. Install pfSense: create a new VM on ESXi; 2 vCPUs, 2 GB of RAM, and an 8 GB disk are plenty. Give it two NICs, one on VM Network and the other on VM_DX, mount the pfSense image in the CD/DVD drive, and power it on to install.

Accept the defaults all the way through; at the end check that the NICs map to the right networks, and manually set the internal IP to 192.168.0.234. If the NIC assignment and configuration are correct, pfSense can then be managed from an internal PC via 192.168.0.234 (username admin, password pfsense). 4. Configure the port forward for the internal server in pfSense: go to Firewall -> NAT, click Add, and add a mapping rule. Here port 80 of 192.168.0.219 is mapped to port 9090 on the public IP. If internal clients should also be able to reach it through the public IP, enable hairpinning, i.e. set NAT reflection to NAT + Proxy.

VMware ESXi Configuration


Just append a 1 to the name to tell it apart. There is a bug here: after entering the IP address and netmask, closing the settings screen produces an error.
For example, my netmask is a 23-bit 255.255.254.0 and the IP is 192.168.1.11, but with the gateway set to 192.168.0.254 it reports that the gateway is not in the same subnet. Ignore it and keep going. Once the NIC has been added, go back to Networking on the Configuration screen and click Properties to finish setting the gateway.
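If the error keeps getting in the way and shell access is available, the default gateway can also be set from the command line (addresses taken from the example above):

esxcfg-route 192.168.0.254    # set the default gateway
esxcfg-route -l               # list routes to verify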
Windows Server 2008 R2
Hyper-V Server 2008 and ESXi are both hypervisor host systems, not the VMware Workstation or Virtual PC we use day to day. ESXi and Hyper-V are complete systems in their own right. By comparison, VMware Workstation and similar products are just applications running on top of a host OS (Linux or Windows), so their features and performance depend on that OS, whereas ESXi and Hyper-V are themselves the host system: ESXi is derived from Linux, Hyper-V from Windows. These two host systems provide nothing beyond hosting, and both need a separate management system to manage them (which is the drawback of the free products).
I experimented a bit. Hyper-V is a hassle: it has no free client tool comparable to VMware Infrastructure Client 2.5 for managing the host. After several hours on Microsoft's site, so far I only know it can be managed remotely with SCVMM 2008 or with the Hyper-V manager built into Windows 2008 x86-64 (the Vista x86-64 version works too). SCVMM 2008 offers a 180-day trial; I only wanted the remote VM management tool, the VMM Administrator Console, to manage an already-installed Hyper-V host, but it also requires joining a domain.



ESXi Multi-NIC Network Configuration


Environment: 1) two routers connected to different networks; 2) a server with four NICs: vmnic0, vmnic1, vmnic2, vmnic3; 3) ESXi 6.5 with guest virtual machines. Test setup: router 1 is 192.168.0.1 / 255.255.254.0 and router 2 is 192.168.100.1 / 255.255.254.0. Goal: when creating virtual machines on the ESXi host, be able to choose freely between the two routed networks. Steps:
1) Plug router 1's LAN port into the server's vmnic0 (the first port) and router 2's LAN port into vmnic1 (the second port).
2) On the ESXi console press F2, log in as root, open Configure Management Network, then Network Adapters, and select the NIC to use for the management interface.
3) Choose the port/NIC to enable.
4) Give the ESXi host a fixed IP, 192.168.0.2, then browse to https://192.168.0.2 and log in.
5) Go to Networking -> Virtual switches, edit the default switch, and remove every uplink except vmnic0. (If you do not, the default vSwitch and port group can already carry both ports; removing them here keeps the two networks clearly separated.)
6) Add a new virtual switch named vSwitch1 with vmnic1 (the second port) as its uplink.
7) Under Port groups, add a new port group named PC Network with VLAN ID 0 on the newly created vSwitch1; this creates the network for the second port (an equivalent shell sketch follows this list).
8) When creating a virtual machine, choosing the default VM Network puts it on the first port's network, and choosing PC Network puts it on the second port's network.
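Steps 5-7 can also be done from the ESXi shell; a sketch using the same names as above (vSwitch1, vmnic1, PC Network):

esxcli network vswitch standard add --vswitch-name vSwitch1                                   # new virtual switch
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic1       # bind the second physical port
esxcli network vswitch standard portgroup add --portgroup-name "PC Network" --vswitch-name vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name "PC Network" --vlan-id 0       # VLAN ID 0 = untagged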

VMware vSphere ESXi Installation and Configuration (DCUI)


VMware vSphere ESXi Installation and Configuration (DCUI), Chongqing Heike Information Technology Co., Ltd., Technical Services. Contents: 1. Introduction to ESXi; 2. Preparing the installation image source; 3. Installing from the CD image; 4. gPXE network-boot installation (4.1 Overview; 4.2 Installing the Linux components; 4.3 Disabling the firewall and SELinux; 4.4 Configuring the DHCP server; 4.5 Configuring the TFTP server; 4.6 Copying files to the TFTP server directory; 4.7 Copying files to the FTP server directory; 4.8 Verifying the gPXE boot); 5. ESXi host configuration (DCUI); 6. Summary.

About this document: "VMware vSphere ESXi Installation and Configuration (DCUI)" covers the two common ways of installing an ESXi host in the VMware vSphere virtualization solution and introduces the DCUI (Direct Console User Interface).
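As a rough sketch of the gPXE prerequisites listed in sections 4.2-4.5, on the Linux helper host (package and service names assume a RHEL/CentOS system with systemd; older releases use service/chkconfig and xinetd for tftp, and the actual DHCP/TFTP configuration is described in the guide itself):

yum install -y dhcp tftp-server vsftpd        # DHCP, TFTP and FTP components
systemctl disable --now firewalld             # disable the firewall
setenforce 0                                  # disable SELinux now (also set SELINUX=disabled in /etc/selinux/config)
systemctl enable --now dhcpd tftp.socket vsftpd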

ESXi NIC Setup


Workflow: 1) before installing the virtual machine, configure the virtual switch and port group as described below; 2) when installing the VM, add a second network adapter (on the newly created port group) alongside the default one; 3) after the VM is created, go to /etc/sysconfig/network-scripts and find the newly created ifcfg-ens224; 4) configure the network and restart networking to get online.

ESXi 6.5 & 7.0: adding a second NIC to a VM. When the host's ports connect to different gateways or subnets and the VMs should be able to reach those different segments through multiple NICs at the same time, everything can be done through the UI. On the host, add a virtual switch and a port group: first confirm which physical adapter corresponds to each port (the link status is shown on the host's networking page) and pick the network adapter to connect.

Under Networking -> Virtual switches, ESXi has a built-in virtual switch, vSwitch0, with one uplink, vmnic0, and two port groups: Management Network, used for management (one of these is enough), and VM Network, used by the VMs. Create a new virtual switch: name it vSwitch1, keep the default MTU of 1500, pick the other physical port to bind (for example vmnic1) as uplink 1, leave link discovery at the default mode (Listen) and protocol (Cisco Discovery Protocol, CDP), and leave the three security options at their default (Reject).

Then create the port group: under Networking -> Port groups, add a port group named VM Network 2, keep the default VLAN ID of 0, select the newly created vSwitch1 as its virtual switch, and leave all the security settings inherited.

Add the NIC to the VM: shut the VM down first, open its details page, click Edit, and in the Edit settings page click Add network adapter. On the new network adapter row select VM Network 2 and keep everything else at the defaults: Connect ticked, adapter type VMXNET3, MAC address Automatic (shown as all zeros).

After saving, power the VM on, connect to a shell, and ifconfig should show the newly created NIC. Note its name, for example ens224; an ifcfg file for it needs to be added under /etc/sysconfig/network-scripts/, in this example ifcfg-ens224:

# copy the existing config file as a starting point
cp ifcfg-ens192 ifcfg-ens224
# edit it
vi ifcfg-ens224
# change the contents to the following
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=172.31.13.7        # change this
NETMASK=255.255.255.0
GATEWAY=172.31.13.1       # change this
DNS1=114.114.114.114
DNS2=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens224               # change this
UUID=44bdb213-d3ff-4e3a-8b9e-0262a5a1f522    # delete this line
DEVICE=ens224             # change this
ONBOOT=yes

After editing, restart networking and the change shows up in ifconfig:
systemctl restart network

ESXi 6.0: adding a static route. First enable the ESXi SSH service (Configuration -> Security Profile -> Services, start SSH), log in as an administrator, and run:
~ # esxcli network ip route ipv4 add --gateway 192.168.20.59 --network 10.8.0.0/24
# check that the route was added correctly
~ # esxcfg-route -l
For ESXi 5.0 and earlier, run:
esxcfg-route -a 192.168.100.0/24 192.168.0.1
# or
esxcfg-route -a 192.168.100.0 255.255.255.0 192.168.0.1
Because these versions cannot persist the route configuration yet, add the route command to /etc/rc.local.

ESXi 6.7: adding the Realtek driver after installation. With ESXi 6.7 (VMware-VMvisor-Installer-6.7.0-8169922.x86_64.iso) installed, enable SSH, connect, and run:
# check your network cards:
lspci -v | grep "Class 0200" -B 1
0000:00:1f.6 Ethernet controller Network controller: Intel Corporation Ethernet Connection I219-LM [vmnic0]
Class 0200: 8086:156f
--
0000:6c:00.0 Ethernet controller Network controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
Class 0200: 10ec:8168
The Realtek NIC is visible, but there is no usable driver for vendor ID 10ec:8168. You can search the VMware Compatibility Guide (filter on "IO Devices"), or download and install the driver with:
# allow ESXi to use Community Supported VIBs
esxcli software acceptance set --level=CommunitySupported
esxcli network firewall ruleset set -e true -r httpClient
esxcli network firewall ruleset set -e true -r dns
# install net55-r8168
esxcli software vib install -d https://vibsdepot.v-front.de -n net55-r8168
If the driver download and installation are successful:
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: Realtek_bootbank_net55-r8168_8.045a-napi
VIBs Removed:
Then type reboot to restart the ESXi server.

Building an ESXi installation image that includes the Realtek driver. Software needed: the NIC driver (download the Offline Bundle version), PowerCLI (must be installed), and the ESXi update zip package; note that you need the zip, not the ISO, and downloading it requires a customer account.

Installing PowerCLI from the ZIP package: unpack the ZIP downloaded in the previous step.
# list the module paths
> $env:PSModulePath
C:\Users\millt\Documents\WindowsPowerShell\Modules;C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules
# pick one of the paths, put all the unpacked files there, then run the command below (change Path to the path you used)
> Get-ChildItem -Path 'C:\Program Files\WindowsPowerShell\Modules' -Recurse | Unblock-File
# check that the modules are installed
> Get-Module VMware* -ListAvailable
# lower the execution policy, otherwise later commands will error out
> Set-ExecutionPolicy Unrestricted
# import the module
> Import-Module VMware.DeployAutomation
# list the commands
> Get-DeployCommand
Or install from the command line:
> Install-Module -Name VMware.PowerCLI

Build process. First create a software depot from the two bundles:
Add-EsxSoftwareDepot "C:\7_KIT\VMW\net55-r8168-8.045a-napi-offline_bundle.zip", "C:\7_KIT\VMW\VMware-ESXi-6.7.0-8169922-depot.zip"
See what image profiles exist:
Get-EsxImageProfile
Then create a new image profile by cloning an existing one, and set its acceptance level to CommunitySupported (the driver being added is community-signed):
New-EsxImageProfile -CloneProfile ESXi-6.7.0-8169922-standard -name ESXi-6.7.0-8169922-standard-RTL8111 -Vendor Razz
Set-EsxImageProfile -ImageProfile ESXi-6.7.0-8169922-standard-RTL8111 -AcceptanceLevel CommunitySupported
List the driver packages that have been added to the depot:
Get-EsxSoftwarePackage | Where {$_.Vendor -eq "Realtek"}
Using the package name shown above, add it to the image profile:
Add-EsxSoftwarePackage -ImageProfile ESXi-6.7.0-8169922-standard-RTL8111 -SoftwarePackage net55-r8168
Now an ISO containing the driver can be exported:
Export-EsxImageProfile -ImageProfile ESXi-6.7.0-8169922-standard-RTL8111 -ExportToIso -filepath C:\7_KIT\VMW\VMware-ESXi-6.7.0-8169922-RTL8111.iso

ESXi Installation, Configuration, and NIC Passthrough


I took quite a few wrong turns when setting up NIC passthrough during the ESXi installation, and many of the posts I read did not explain it clearly. The two posts referenced here give very detailed steps; there was one small problem in the middle that I ran into, so I am writing it down here.

1. Installing ESXi. This follows the article by 思想就是武器 (original link: /ESXI6-5-%E5%AE%89%E8%A3%85%E8%AF%A6%E7%BB%86%E6%AD%A5%E9%AA%A4.html). Because the official ESXi image supports very few NICs and drivers, the commonly needed drivers have already been bundled in.

You can install directly from my repackaged ISO.

ESXi-6.7.0-8169922-standard-8111-igb-xahci.iso (password: kgfcj), a build for 3455 boards. VMware-VMvisor-Installer-6.7.0-8169922.x86_64 is the original image, and ESXi-6.7.0-8169922-standard-customized is the version with the added drivers. Next, write the ESXi ISO to a USB stick with UltraISO.

Then boot the target machine from the USB stick.

The installer screen appears.

Choose the first entry and press Enter.

The ESXi installer loads... press Enter to continue.

Press F11 to continue.

Select the disk to install to and press Enter to continue.

Set the ESXi login password and press Enter to confirm.

Press F11 to start the installation.

Choose the first option to reboot, and remember to pull the USB stick out.

2. Configuring the ESXi network. With the installation done, press F2 to enter the configuration screen.

Select Configure Management Network to enter the network configuration.

(If there are two NICs, use Network Adapters to choose which one acts as the management interface.) Select IPv4 Configuration to configure the IP address.

Choose the third option, a static IP address, then enter your IP address, subnet mask, and gateway below it, and press Enter to confirm.

Press Esc to exit.

Choose Y to save the network configuration.

ESXi setup is complete. To access ESXi, open a browser on another computer on the LAN and enter the IP address you just set.
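For completeness, the same management-network settings can also be applied from the ESXi shell once it is reachable; a sketch in which the addresses below are placeholders standing in for "your IP address, subnet mask and gateway":

esxcli network ip interface ipv4 set -i vmk0 -I 192.168.1.100 -N 255.255.255.0 -t static   # static IP on the management vmkernel port
esxcfg-route 192.168.1.1                                                                   # default gateway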

ESXi Configuration Guide


ESXi Configuration Guide, ESXi 4.0 / vCenter Server 4.0 (ZH_CN-000114-00). The latest technical documentation is available on the VMware website at /cn/support/, which also provides the latest product updates.


Contents (abridged): About This Document; 1. Introduction to ESXi Configuration. Networking: 2. Introduction to Networking; 3. Basic Networking with vNetwork Standard Switches; 4. Basic Networking with vNetwork Distributed Switches; 5. Advanced Networking (IPv6, networking policies, DNS and routing, MAC addresses, TCP Segmentation Offload and jumbo frames, NetQueue, VMDirectPath Gen I); 6. Networking Best Practices, Scenarios, and Troubleshooting. Storage: 7. Introduction to Storage; 8. Configuring ESXi Storage (local SCSI, Fibre Channel, iSCSI, VMFS datastores, NAS, diagnostic partitions); 9. Managing Storage (multipathing, thin provisioning); 10. Raw Device Mapping. Security: 11. Security for ESXi Systems; 12. Securing an ESXi Configuration; 13. Authentication and User Management; 14. Security Deployments and Recommendations. Host Profiles: 15. Managing Host Profiles.

About this document: this manual, the ESXi Configuration Guide, explains how to configure networking for ESXi, including how to create virtual switches and ports and how to set up networking for virtual machines, VMotion, and IP storage.


ESXi Dual-NIC, Dual-IP Setup
A machine in our server room died recently, so we bought a new one (a Dell 730: two CPUs, 20 cores in total, 64 GB of RAM, three 4 TB disks).

Virtualization is mature these days and genuinely easier to run and maintain, so we decided to deploy with vSphere.

Deploying the machine was straightforward; because the old physical server had a dual-IP setup, virtualization raised one small snag, mainly because I had not understood the virtual switch concept at first.

1. Network setup
Step 1: Click Configuration -> Networking. By default there is only one virtual switch.

All physical ports are aggregated into this virtual switch for redundancy.

That is why, when I first configured the VMs' IP addresses, the VMs could ping each other but could not reach the physical switch.

Step 2: Add a virtual switch.
Click Add Networking.
Click Next, select the ESXi physical port, and create a new virtual switch.
The result after adding it:
2. Point each VM's virtual NIC at the right network
Select the VM and click Edit Virtual Machine Settings.
Choose the virtual switch created in step 1.
With this configuration in place, the VM can ping the physical switch normally, and that is all it takes.

3. Dual-IP configuration for Windows VMs
Windows is simple: configure each NIC with its own gateway.

In the advanced settings, change the next-hop metric from automatic to a fixed value and you are done.

4. Dual-IP configuration for Linux VMs
Ubuntu's dual-IP setup is a little more involved: unlike RHEL, the Ubuntu installer does not activate every connected NIC; it activates only one, and the other has to be set up by hand.

1. Find the NICs
sudo lshw -C network
2. Edit /etc/network/interfaces and add the new NIC's configuration
vi /etc/network/interfaces and change the contents as follows
auto eth0
iface eth0 inet static
address 192.168.4.213
netmask 255.255.255.0
auto eth1
iface eth1 inet static
address 58.200.200.15
netmask 255.255.255.128
gateway 58.200.200.1
3. Add routes
After the steps above, each network can be pinged from its own side.

But when accessed from outside, only one of the two IPs is reachable.

cat /etc/iproute2/rt_tables
# reserved values
255 local
254 main
253 default
252 net0
251 net1
0 unspec
#
# local
#
#1 inr.ruhep
[root@localhost ~]#
Use ip route to add a default route for each table:
ip route add 127.0.0.0/8 dev lo table net1
ip route add default via 172.16.8.1 dev eth0 src 172.16.8.11 table net1
ip rule add from 172.16.8.11 table net1
ip route add 127.0.0.0/8 dev lo table net0
ip route add default via 10.120.6.1 dev eth1 src 10.120.6.78 table net0
ip rule add from 10.120.6.78 table net0
ip route flush table net1
ip route flush table net0
After this, both IPs are reachable from outside, and if one path goes down the other keeps working.

5. Run the script automatically at boot
This used to be as simple as putting the commands in rc.local, but the freshly installed Ubuntu 16.04.3 no longer ships rc.local, so, following other write-ups, I handled it as follows.

First create a systemd service for it:
1. sudo vi /etc/systemd/system/rc-local.service
[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local
[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99
# The SysVStartPriority line can be dropped; the boot log seems to report it as ignored anyway.

[Install]
WantedBy=multi-user.target
2. sudo systemctl enable rc-local.service
Then just edit /etc/rc.local in the usual format.

Finally, remember to chmod +x /etc/rc.local
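For reference, a minimal sketch of what /etc/rc.local could contain here, simply re-applying the policy routes from the example above at boot:

#!/bin/sh
# /etc/rc.local - restore the dual-IP policy routing (addresses copied from the example above)
ip route add 127.0.0.0/8 dev lo table net1
ip route add default via 172.16.8.1 dev eth0 src 172.16.8.11 table net1
ip rule add from 172.16.8.11 table net1
ip route add 127.0.0.0/8 dev lo table net0
ip route add default via 10.120.6.1 dev eth1 src 10.120.6.78 table net0
ip rule add from 10.120.6.78 table net0
exit 0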
And that is it; very satisfying.

Sharing this in the hope that it saves you a few detours.
