Introduction
NSX-T is the newest flavor of NSX available. Previously, most people familiar with NSX have been working with NSX-V (NSX for vSphere), and while there is a lot to like about that product, it does have its limitations. NSX-T aims to solve some of those problems as it evolves. I am just starting to work with NSX-T and, while I am by no means an expert, my goal is to document my experience with the product so that others can see what it can do. First, a little about the deployment I am looking to build:
NSX-T Components
1x NSX Manager (KVM)
3x NSX Controllers (KVM)
2x NSX Edges (KVM)
2x Compute Hosts (ESXi)
Network Deployment
The drawing above shows what we will have by the end of this post. We already have a KVM host configured using Fedora 23. We have configured two openvswitch bridges with physical ethernet uplinks going to a physical Cisco switch. The Cisco switch has two VLANs for this lab so far:
Vlan 10 – 10.10.1.0/24 – Used for lab management of virtual network infrastructure
Vlan 20 – 10.10.20.0/24 – Used for TEPs that will participate in the overlay (Geneve, in the case of NSX-T)
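For reference, the two OVS bridges on the KVM host were created along these lines. This is only a sketch — the uplink NIC names (eth1, eth2) and the overlay bridge name (vsOverlay) are placeholders; substitute whatever your host actually uses:

```shell
# Management bridge, uplink carries VLAN 10
ovs-vsctl add-br vsMgmt
ovs-vsctl add-port vsMgmt eth1

# Overlay/TEP bridge (hypothetical name), uplink carries VLAN 20
ovs-vsctl add-br vsOverlay
ovs-vsctl add-port vsOverlay eth2

# Confirm both bridges and their uplinks exist
ovs-vsctl show
```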
We have a network XML file that will be used to create a network entity in libvirtd. The network in libvirtd links to the openvswitch (OVS) bridge vsMgmt. This allows us to specify that libvirtd network in the VM's configuration file so that, when the VM starts up, its virtual NIC (VIF) is dynamically added to the OVS bridge. The file looks like this:
vsMgmt.xml
<network>
<name>vsMgmt</name>
<forward mode='bridge'/>
<bridge name='vsMgmt'/>
<virtualport type='openvswitch'/>
</network>
Create network in libvirtd
Once you have the vsMgmt.xml file created with the information above, you need to define the network in libvirtd. The steps are below:
virsh net-define vsMgmt.xml
virsh net-start vsMgmt
virsh net-autostart vsMgmt
The net-define command creates the network in libvirtd from the XML file. net-start starts the network so that VMs spun up can use it. net-autostart ensures that the network starts again whenever the libvirtd service starts (nice to have when the host reboots). Note that net-define takes the XML file name, while net-start and net-autostart take the network name itself.
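To confirm the network is defined, active, and set to autostart, you can list all libvirtd networks:

```shell
virsh net-list --all
# Expected to show something along these lines:
#  Name     State    Autostart   Persistent
#  vsMgmt   active   yes         yes
```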
Prepare the NSX Manager for installation
Source: Install NSX Manager on KVM
You first have to prepare the NSX Manager qcow2 image. Using guestfish, we set things like the NSX Manager role, passwords, SSH, IP info, and more. We create an XML file with all of the attributes. Below is an example taken right from the VMware docs; the link above will take you right to them.
<?xml version="1.0" encoding="UTF-8"?>
<Environment
    xmlns="http://schemas.dmtf.org/ovf/environment/1"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:oe="http://schemas.dmtf.org/ovf/environment/1">
  <PropertySection>
    <Property oe:key="nsx_role" oe:value="nsx-manager"/>
    <Property oe:key="nsx_allowSSHRootLogin" oe:value="True"/>
    <Property oe:key="nsx_cli_passwd_0" oe:value="<password>"/>
    <Property oe:key="nsx_dns1_0" oe:value="192.168.110.10"/>
    <Property oe:key="nsx_domain_0" oe:value="corp.local"/>
    <Property oe:key="nsx_gateway_0" oe:value="192.168.110.1"/>
    <Property oe:key="nsx_hostname" oe:value="nsx-manager1"/>
    <Property oe:key="nsx_ip_0" oe:value="192.168.110.19"/>
    <Property oe:key="nsx_isSSHEnabled" oe:value="True"/>
    <Property oe:key="nsx_netmask_0" oe:value="255.255.255.0"/>
    <Property oe:key="nsx_ntp_0" oe:value="192.168.110.10"/>
    <Property oe:key="nsx_passwd_0" oe:value="<password>"/>
  </PropertySection>
</Environment>
Make sure to edit the template above with your own values. I used 10.10.1.200/24 as the IP and subnet mask, with the gateway being 10.10.1.1. Be sure to set a password that meets the complexity requirements. IF IT DOES NOT, THE PASSWORD WILL NOT BE APPLIED ON BOOT AND YOU WILL BE UNABLE TO LOG IN!
Take note of the password specs below:
At least eight characters
At least one lower-case letter
At least one upper-case letter
At least one digit
At least one special character
At least five different characters
No dictionary words
No palindromes
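As a quick sanity check, the mechanical rules above (length and character classes) can be verified with a small POSIX-sh function before you bake the password into the image. This is just a helper sketch — it does not cover the dictionary-word or palindrome rules:

```shell
#!/bin/sh
# Check a candidate password against the mechanical NSX-T rules:
# length, lower/upper/digit/special, at least five distinct characters.
check_passwd() {
    p=$1
    [ "${#p}" -ge 8 ] || { echo "too short"; return 1; }
    case $p in *[a-z]*) ;; *) echo "needs lower-case"; return 1;; esac
    case $p in *[A-Z]*) ;; *) echo "needs upper-case"; return 1;; esac
    case $p in *[0-9]*) ;; *) echo "needs digit"; return 1;; esac
    case $p in *[!a-zA-Z0-9]*) ;; *) echo "needs special char"; return 1;; esac
    # count distinct characters (one per line, deduplicated)
    distinct=$(printf '%s' "$p" | fold -w1 | sort -u | wc -l)
    [ "$distinct" -ge 5 ] || { echo "needs 5 distinct chars"; return 1; }
    echo "ok"
}
```

Run it as `check_passwd 'YourPassword1!'` and only proceed if it prints `ok`.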
*** Important note ***
You MUST define the nsx_role value. With the NSX-T unified appliance, a role is not assigned by default. If you do not set this and boot the VM from the qcow2 image, the NSX Manager will not come up correctly. The issue number is 1944678, and the link to the bug is here: NSX-T 2.0 Release Notes
To apply the values using guestfish, use the following command (here, guestinfo is the local XML file we created above):
sudo guestfish --rw -i -a nsx-manager.qcow2 upload guestinfo /config/guestinfo
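Optionally, you can read the file back out of the image to confirm the upload worked. This is just a sanity check, not part of the official procedure:

```shell
# Mount the image read-only and print the uploaded guestinfo file
sudo guestfish --ro -i -a nsx-manager.qcow2 cat /config/guestinfo
```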
The image is now ready to use.
Create NSX Manager VM
Now that we have the image prepared, we will use libvirtd to create the VM. I used the following values:
virt-install --import \
--name nsx-manager1 \
--ram 16384 \
--vcpus 4 \
--network network=vsMgmt,model=vmxnet3 \
--disk path=/vmstorage/NSX-T/manager/nsx-manager.qcow2,format=qcow2 \
--nographics
We create a VM that has 16 gigs of RAM, 4 virtual CPUs, and a connection to the management network. Notice that in the network statement we specified the vsMgmt network we created earlier in the post. This dynamically associates the vnet interface with the OVS management bridge named vsMgmt.
Once the command is executed, we should see the vm created:
virsh list
Output:
 Id    Name            State
----------------------------------
 1     nsx-manager1    running
We can also look at the virtual interfaces (VIF) created and what network it is connected to:
virsh domiflist nsx-manager1
Output:
Interface  Type    Source   Model     MAC
-------------------------------------------------------
vnet0      bridge  vsMgmt   vmxnet3   52:54:00:52:fd:19
vnet0 was dynamically created and assigned to the vsMgmt network. We should be able to see this in OVS:
ovs-vsctl show
Output:
<-- extra output omitted -->
    Bridge vsMgmt
        Port "vnet0"
            Interface "vnet0"
Confirmation
After about 5-10 minutes, we should be able to see a login screen for the NSX Manager. I used 10.10.1.200, so we will navigate to https://10.10.1.200
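Rather than refreshing the browser, you could poll the UI from the shell until it answers. This is a hypothetical helper, not part of the NSX-T tooling — the -k flag lets curl accept the manager's self-signed certificate:

```shell
#!/bin/sh
# Poll a URL until curl can reach it, or give up after a timeout.
# Usage: wait_for_ui URL TIMEOUT_SECONDS INTERVAL_SECONDS
wait_for_ui() {
    url=$1
    timeout=$2
    interval=$3
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        if curl -ksf -o /dev/null "$url"; then
            echo "up"
            return 0
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    echo "timed out"
    return 1
}

# Example: wait up to 10 minutes, checking every 15 seconds
# wait_for_ui https://10.10.1.200 600 15
```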
Looks good so far... Let's log in with the username admin and the password we specified above. It should have been set when we applied the values using guestfish.
If we got here, then we are good to go! Next will be creating the controllers, joining them to the management plane, and then creating a control cluster.
Until next time,
AJ