id | title | summary | description | solution |
---|---|---|---|---|
KB7753 | Nutanix Objects - "dial http://127.0.0.1:2082/msp/cluster/list : connection Refused" error when deploying object store | Nutanix Objects v1.x deployment failure - "dial http://127.0.0.1:2082/msp/cluster/list : connection Refused" | Creating an object store fails, and the following error message is displayed:
type[CREATE]:Get http://127.0.0.1:2082/msp/cluster/list: dial tcp 127.0.0.1:2082: connect: connection refused
The /home/nutanix/data/logs/aoss_service_manager.out log file on the Prism Central VM contains errors similar to the following indicating connection to the Microservices Platform (MSP) cluster on port 2082 is not successful:
time="2019-07-02T00:03:27-07:00" level=error msg="Get failed." error="Get http://127.0.0.1:2082/msp/cluster/list: dial tcp 127.0.0.1:2082: connect: connection refused"
Executing mspctl cluster_list from a Prism Central VM may not list any clusters:
nutanix@PCVM:~$ mspctl cluster list
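As an additional check (illustrative, not part of the original article), you can verify whether the msp_controller service is running on the Prism Central VM, for example:
nutanix@PCVM:~$ genesis status | grep msp_controller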
These errors typically occur when the MSP cluster is unresponsive. MSP is a back-end infrastructure microservice platform for Nutanix Objects. Typically, users do not need to access this platform unless directed by Nutanix Support. This platform is managed by a service called msp_controller in Prism Central. There may be numerous reasons why MSP is unresponsive, such as:
- Deployment of an object store was attempted immediately after the enablement of Nutanix Objects completed, and the msp_controller service is not yet fully initialized.
- The docker service is crashing on a node in a Prism Central scale-out environment.
- MSP-related pods are crashing on some nodes. | Nutanix Objects depends on the MSP cluster, and deployment will fail if MSP is not healthy. Wait a few minutes after the deployment has failed and resume the deployment from the Prism Central UI. If the deployment still fails, the cause of the MSP cluster being unresponsive needs to be investigated and resolved. Contact Nutanix Support https://portal.nutanix.com/#/page/cases/form?targetAction=new to troubleshoot issues with the MSP cluster. Once MSP is in a healthy state, resume the object store deployment from the Nutanix Objects UI to allow the deployment to proceed. |
KB5338 | How to inject storage VirtIO driver if it was not installed before migration to AHV | Failure to install VirtIO drivers prior to converting physical-to-virtual (P2V), or migrating from a different Hypervisor platform (ESX / Hyper-V) to AHV may result in a Blue Screen of Death (BSOD) for inaccessible storage (boot device). | Failure to install VirtIO drivers prior to converting physical-to-virtual (P2V), or migrating from a different Hypervisor platform (ESX / Hyper-V) to AHV may result in a Blue Screen of Death (BSOD) for inaccessible storage (boot device), showing stop error 0x0000007B.
Sample error text:
0x0000007B
This happens because the Windows Operating System does not have the appropriate drivers to read the disk that has the operating system installed on it. With AHV, you need to install the VirtIO drivers, available from the Nutanix Portal in the AHV / VirtIO https://portal.nutanix.com/page/downloads?product=ahv&bit=VirtIO download section. | Typically, in preparation for performing your P2V or migration, you would want to install the VirtIO MSI package before shutting down the target and starting the migration. This will pre-load all of the required drivers and prevent this BSOD from occurring. However, if this was not done, you can do a post P2V / Migration driver injection to recover the VM.
First, mount both the Windows ISO and the downloaded VirtIO driver CD/DVDs for the VM that fails to boot.
To do this, you will need to download the VirtIO package referenced above and upload that along with your Windows ISO into your Nutanix image store. How this is done depends on whether you are using Prism Element (PE) or Prism Central (PC).
For Prism Element (PE), instructions are available here https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide:wc-image-configure-acropolis-wc-t.html.
For Prism Central (PC), instructions are available here https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-images-upload-from-workstation-pc-t.html.
Once the ISO images are uploaded into the Prism Image repository, go to the VM configuration (for the affected VM) and mount them both as CD-ROM drives for your guest VM to use. NOTE: Depending on the Windows installation and the Windows version, you might not need to boot from the Windows ISO and can launch the Windows Recovery Environment (winRE) instead. This could happen by rebooting the VM multiple times. More details are available in the Microsoft docs below:
https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/windows-recovery-environment--windows-re--technical-reference?view=windows-11
https://support.microsoft.com/en-us/windows/recovery-options-in-windows-31ce2444-7de3-818c-d626-e3b5a3024da5
First, you will attach the VirtIO drivers (although the order does not matter). Keep in mind that the name of the VirtIO ISO will likely be different.
Next, add an additional CD-ROM for the Windows ISO, or boot into the Windows Recovery Environment (winRE) as described above.
Keep in mind that the name of the Windows ISO will likely differ in your environment.
Once both CD-Roms (VirtIO and Windows) are mounted, save the configuration.
Next, power on the guest VM. You will see something similar to the following. Press any key to boot from the Windows ISO that you mounted. You may not be able to get to the AHV console quickly enough to catch this the first time through, so reboot or press Ctrl+Alt+Del as needed to get to this screen, then press any key.
If you boot from the Windows CD-Rom, you should see the following screens:
Ultimately, you want to land on the CMD prompt. From here, you will first mount the drive and then inject the VirtIO driver.
First, type "wmic logicaldisk get caption" to get the list of mounted disk drives. Identify the drive letter holding our VirtIO drivers.
In this case, they are mounted as "D:" drive.
Navigate to the driver directory. It should look similar to the following image:
Once in the VirtIO driver directory on the VirtIO CD-Rom drive, load the SCSI driver with the following command:
drvload vioscsi.inf
Now run "wmic logicaldisk get caption" again (you can up-arrow to it). You should see additional drives mounted:
For UEFI-based VMs, even after loading the SCSI driver, additional drives may not be seen because Windows does not assign drive letters. In this case, follow the steps at the bottom of this KB to assign a drive letter. Typically, you would expect that C: has the Windows installation on it. However, in this example, C: is empty, and the Windows installation is on F:\. It might vary in your environment. Look for the drive that contains the "Program Files", "Windows", and "Users" directories. This will usually be your OS install directory.
Make a note of this drive letter:
From the existing directory (you should still be in the VirtIO driver directory), type the following command to inject the VirtIO driver into the Windows installation.
dism /image:{drive_letter_from_above}:\ /add-driver /driver:vioscsi.inf
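For example, if the Windows installation was found on the F: drive as in the example above, the command would be:
dism /image:F:\ /add-driver /driver:vioscsi.inf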
You should see the following:
Now, exit everything and reboot the guest VM.
If everything went right, you should now boot into the operating system.
After you log in, run the Nutanix VirtIO for Windows MSI installer to add the Nutanix VirtIO Balloon Driver and Nutanix VirtIO Ethernet Adapter to the VM. Please note: In some scenarios, VirtIO drivers may be removed after OS generalization (sysprep). Refer to KB-5436 http://portal.nutanix.com/kb/5436
An alternative method is to attempt to power on the VM with IDE disks and install the drivers, then convert back to SCSI disks (if VirtIO was uninstalled by mistake). Perform the following steps to test starting a Windows VM with IDE disks instead of SCSI after migrating to the Nutanix cluster:
1. Clone the disks to be IDE type. Note: By default, Move creates the VM with SCSI disks.
2. Once the VM is up, install the VirtIO drivers and apply the network configuration.
3. Clone the IDE disks back to SCSI type. VMs with IDE disks take a long time to start and operate.
The following procedure is an example of starting a Windows 32-bit VM after migrating to the Nutanix cluster.
Note: You can also delete the scsi.0 disk by powering off the VM and following step 9, as scsi.0 is the original boot device when Move migrated the VM.
Log on to the Prism web console and locate the boot device for the VM. Navigate to VM > Update > "Set Boot Priority" to see the current boot disk. Alternatively, use the aCLI command vm.get <VM_name>:
nutanix@cvm$ acli vm.get winserver1
Power off the virtual machine. Clone the disk as an IDE disk type. For example, the boot device of the VM is scsi.0. You can clone it as ide.1 because ide.0 is used for the CD-ROM device. The vm-name is winserver1. The below command creates a new disk with the next available index. In this example, it is ide.1 because the CD-ROM is using ide.0:
nutanix@cvm$ acli vm.disk_create vm-name bus=ide clone_from_vmdisk=vm-disk
nutanix@cvm$ acli vm.disk_create winserver1 bus=ide clone_from_vmdisk=vm:winserver1:scsi.0
Log on to the Prism web console and select drive ide.1 as the boot device for the VM.
Power on the virtual machine. The virtual machine starts successfully and the OS should boot up with ide.1. Install the VirtIO drivers and apply the network configuration. Power off the virtual machine. Clone the disk ide.1 as a SCSI disk. The below command creates a new disk with the next available index, i.e. scsi.1, automatically:
nutanix@cvm$ acli vm.disk_create vm-name bus=scsi clone_from_vmdisk=vm-disk
nutanix@cvm$ acli vm.disk_create winserver1 bus=scsi clone_from_vmdisk=vm:winserver1:ide.1
Log on to the Prism web console and select the newly created SCSI device as the boot device for the VM. In this example, it is scsi.1. Delete the IDE disk (in this example, ide.1) using the following command:
nutanix@cvm$ acli vm.disk_delete vm-name disk_addr=disk
nutanix@cvm$ acli vm.disk_delete winserver1 disk_addr=ide.1
Power on the VM. Scsi.1 is used as the boot device.
Procedure for assigning a drive letter
Run "diskpart" command to enter the diskpart command line.Run "list volume" command to display a list of volumes on Windows.Identify the volume on which Windows is installed, based on the size of the volume or filesystem type, and select that volume using the following command:
select volume <volume_number>
Assign unused drive letters to the selected volume.
assign letter=<drive_letter>
Run "exit" to exit the diskpart command line.
NOTE: If you are not able to boot from IDE, then there is an issue with the operating system. Please engage the vendor of your OS to troubleshoot further. |
KB8641 | Dell PowerEdge 730 foundation failed | Foundation might fail on Dell PowerEdge 730 due to unrecognized backplane model on the Node | Foundation on Dell PowerEdge-730 fails with the below error:
20190701 22:38:41 ERROR Failed to generate hardware_config.json. Error:
Trying to run Foundation on the node with versions 3.9.x or 4.x also fails with the same error. | The issue is with the backplane model on the node. Nutanix supports only models 0JJTTK, 0DTCVP, 0FWXJR and 0CDVF9 with the DELL PowerEdge-730, but not 0Y97XNA00. You can check the backplane model using the below commands:
root@AHV# ipmitool fru print
For ESX Host
root@ESX# /ipmitool fru print
For Hyper-V Host
nutanix@cvm$ winsh
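The backplane model appears as a part number field in the FRU output. As an optional filter (illustrative only; field names may vary by platform), you can narrow the output, for example:
root@AHV# ipmitool fru print | grep -i "part number"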
The solution is to ask DELL to replace the backplane in the node. |
KB6510 | Create Network (IPAM) with Nutanix Api v2 | null | To create a network using Nutanix API v2, send a POST call to the URL https://cluster_ip:9440/api/nutanix/v2.0/networks/. The call body parameter template is shown below:
{
In this example, we will create a network with the below parameters:
name: nuran_net_160
vlan: 160
network address: 10.XX.XXX.0
prefix length (mask): 24 (255.255.255.0)
default gateway: 10.XX.XXX.1
dns: 8.X.X.8, 4.X.X.4
dhcp pool: 10.XX.XXX.10 - 10.XX.XXX.20
Here is how the body parameters will look in our case:
{
If the call is successful, the response will include the created network UUID.
{
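As a generic illustration (not part of the original article), the same POST call can be sent with curl; the credentials and the JSON body file below are placeholders:
curl -k -u admin:<password> -X POST -H "Content-Type: application/json" -d @network_body.json https://cluster_ip:9440/api/nutanix/v2.0/networks/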
Python:
from nutanixv2api import *
Note: Keep the script and nutanixv2api.py (https://github.com/noodlesss/nutanix_v2_api/blob/master/nutanixv2api.py) in the same directory so that its contents are importable. | null |
KB7597 | Imaging with Phoenix fails for Dell ESXi 6.5 U1, U2 and U3 images | Dell-specific ESXi 6.5 images contain a network driver with a defect that causes Phoenix installation to fail. This article describes the issue and potential workarounds. | When trying to run Phoenix on a node after imaging using a Dell-specific version of ESXi 6.5 (U1, U2 and U3), the imaging may fail during the firstboot script execution with the following message in the first_boot.log:
2019-06-07 20:30:28,604 FATAL Fatal exception encountered:
This error is due to a problematic NIC driver (i40en) not providing the max link speed. The driver is seen for the Intel Corporation Ethernet Controller X710 NIC.
[root@Failed-Install:~] esxcfg-nics -l | This issue is driver-specific and not necessarily specific to a version of Phoenix/Foundation or AOS.
In this case, the reported 1.1.0, 1.5.6 and 1.7.11 versions of the i40en driver encountered the issue, and any image using these driver versions may experience this issue. Potentially, there could be other versions of this driver that might have the same issue as well. As a workaround, disable the i40en driver before running Phoenix by following the steps below:
Disable the driver with the following command from the ESXi host.
[root@localhost:~]esxcli system module set --enabled=false --module=i40en
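Optionally (an illustrative check not in the original steps), verify the module's enabled state before rebooting:
[root@localhost:~]esxcli system module list | grep i40en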
Reboot the host and confirm that the i40e driver is not being used.
[root@localhost:~]esxcfg-nics -l
Retry Phoenix. |
KB8541 | Re-enabling Metro PD may fail with the error 'Attempt to lookup attributes of path /XXX/XXX...... failed with NFS error 2' | If a customer mistakenly registers a VM from the .snapshot folder, re-enabling Metro will fail, and in some scenarios it can cause a storage outage. | After an unplanned Metro failover, a user may need to remove one or more VMs from the inventory, then register them back with vCenter. In some scenarios, when browsing the datastore, searching for a VM name will display files from both the live folder and the .snapshot folder, so a user may register more than one VM using a .vmx file that is located under the .snapshot folder. If the customer re-enables Metro, Stargate will continuously fail with the nfs_attr->has_vdisk_name check, for example, because the reference file vmware.log in reference snapshot 22 has changed. It was recreated when the VM was registered from the snapshot.
I1106 13:59:39.563139 3771 cerebro_replicate_op.cc:947] Completed replication of the file path /Datastore04_A0/.snapshot/24/788732942135150362-1520778172451043-18532924/SD0151E/vmware.log
At same time Cerebro will log the following errors :
W1106 13:59:56.082300 30275 replicate_file_op.cc:711] [work_id: 18556140 local meta_opid: 18541980 remote meta_opid: 17196681] Attempt to request stargate to replicate the file path /Datastore04_A0/.snapshot/24/788732942135150362-1520778172451043-18532924/SD0151E/vmware.log completed with error kRetry
The above FATALs in Stargate and errors in Cerebro will continue until Cerebro aborts the replication when it tries to look up the attributes of a file path that does not exist in the source reference snapshot but exists in the remote reference snapshot.
W1106 14:20:32.217676 30484 replicate_file_op.cc:1252] [work_id: 18597530 local meta_opid: 18541980 remote meta_opid: 17196681] Stargate requested abort of master's meta op: Attempt to lookup attributes of path /Datastore04_A0/.snapshot/22/788732942135150362-1520778172451043-18481322/SD00E35/vmware-50.log failed with NFS error 2
Below are corresponding lookup failures in Stargate:
I1106 14:20:32.217082 32498 cerebro_replicate_op.cc:1001] Looking up reference file's attributes /Datastore04_A0/.snapshot/22/788732942135150362-1520778172451043-18481322/SD00E35/vmware-50.log | The above issue can cause an outage if a customer registers more than one VM from the .snapshot folder and the impacted VMs are running on different nodes. Also, never try to delete missing files from the reference snapshot on the remote site, as this will prevent Cerebro from aborting the replication and increase the chances of another Stargate hitting the FATALs. Since the reference snapshots on both sites are no longer identical, the safest way to re-establish the Metro relationship is to resync from scratch, which means deleting the current Metro configuration and recreating it.
1. Delete the Metro configuration on both sites.
2. Clean up the standby container by deleting all files and the container itself.
3. Recreate the container on the standby site.
4. Recreate the Metro configuration, which will replicate all data from scratch.
If a customer is not willing to replicate all data from scratch, open an ONCALL to fix the reference snapshot discrepancy between the two sites. |
KB6315 | Prism Central shows message: "There is a licensing violation for Flow. To be compliant, please license now." for Flow Network Security | This KB article describes actions that can be taken to remove the banner in Prism Central about Flow Network Security (FNS) microsegmentation feature license violation. | The following error can be displayed in Prism Central after Flow Network Security (FNS) microsegmentation feature trial has expired:
There is a licensing violation for Flow. To be compliant, please license now. | If you want to continue using Flow Network Security (FNS), install a valid Flow license. Refer to the Licensing documentation on the Support Portal https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Licensing. If you are not planning to continue using FNS, open the Prism Central Settings menu, click on Microsegmentation (under the Flow group) and click on the red Disable Microsegmentation button. Note: Any existing Security Policies will no longer be enforced, and VMs that were protected under these policies will no longer be protected after disabling FNS. |
KB10117 | Image upload will fail if the container has # in the name | Image upload will fail if the container has # in the name | If an image upload is attempted to a container that has the symbol # in the name, the upload will fail and the following error will be generated:
"Could not access the URL nfs://127.0.0.1/%23ISO_Images/.acropolis/image/106ac31b-6236-4534-a2ae-808c08bb912f: Failed to parse the file size"
| That happens because URLs cannot contain the # character, and it gets automatically replaced with %23. This creates an inconsistency between the container name and the URL. To resolve the issue, rename the container to remove the # from the container name and retry the image upload. |
KB8159 | Enable debug level logging for VirtIO network adapter driver (netkvm) | This KB describes how to increase the debug level for the in-guest (Windows) NIC | In situations where additional detailed log information is needed for in-guest network operations, the Nutanix VirtIO Ethernet Adapter's logging level needs to be increased. Some examples where this may be useful include (not an exhaustive list):
- Checking Acropolis operations against in-guest VM network operations
- Debugging dropped/error packets to/from the guest VM
- Monitoring network traffic flow in the guest VM
- Checking the connection status of the Nutanix VirtIO Ethernet Adapter | NetKVM supported logging levels
NetKVM driver supports the logging levels from 0 to 6:
0 - Basic configuration and unload trace, critical errors.
1 - Warnings, corner cases.
2 - Network packet flow.
3 - More verbose trace of packets.
4 - VirtIO library, DPC.
5 - ISR trace.
6 - Registers read/write.
Enabling and collecting NetKVM debug logs
To enable the logging
1. Inside the Windows VM, open Device Manager, locate Network Adapters and right-click on the "Nutanix VirtIO Ethernet Adapter" (NOTE: if there are multiple vNICs, pick the one for which debug logging is needed), go to the "Advanced" tab, and locate the "Logging.Level" property.
2. Change the value from the default "0" to the logging level of interest and save the change by clicking the 'OK' button. A logging level of "2" should be sufficient for most troubleshooting scenarios.
NOTE: Increasing the debug log level to 6 will generate substantial log files. It is hard to predict how fast a file will grow as it depends on many variables, so ensure that the guest VM has available space and that logging is constrained to a few hours, typically less than 12 hours. Lab tests have indicated that a 15-minute capture generates between 10 and 20 MB of log files, depending on the debug log level enabled.
To collect debug output
To log the output generated to a file, you can use DebugView from Microsoft Sysinternals: https://docs.microsoft.com/en-us/sysinternals/downloads/debugview. DebugView is an application that enables monitoring debug output on the local system and can also extract kernel-mode debug output generated before a crash from Windows crash dump files if DebugView was capturing at the time of the crash. DebugView needs to be running in elevated mode, so start it as 'Run as administrator'. Once DebugView is started, configure it as demonstrated below:
- Configure DebugView to log output to a text file: File -> Log to File As.
- Configure DebugView to show UVM clock time, as by default it shows only relative time: Options -> Clock Time (or the Ctrl+T keyboard shortcut).
- Configure DebugView to capture Kernel events only, to reduce the amount of noise in the DebugView output.
Examples
Ping 8.8.8.8
The following example is based on analyzing a capture from a Windows 10 guest VM doing a single ping to 8.8.8.8
00006313 73.57194519 ip_version 4, ipHeaderSize 20, protocol 1, iplen 60, L2 payload length 74
No IP addresses will show up, but as an example, we can see IPv4, and protocol 1 indicates ICMP.
DHCP IP address release and renew
The following example is based on IP address renew from Windows VM by running:
C:\> ipconfig /release && ipconfig /renew
NetKVM debug logs:
00001773 4:21:16 AM [ParaNdis6_OidRequest] OID type 1, id 0x10118(OID_GEN_NETWORK_LAYER_ADDRESSES) of 6
Line 1773 shows that the list of IP addresses on the interface was changed (IP address released), and lines 1824 - 1826 show that a new address was obtained. More details on the OID_GEN_NETWORK_LAYER_ADDRESSES OID can be found at https://docs.microsoft.com/en-us/windows-hardware/drivers/network/oid-gen-network-layer-addresses.
Additional information
Most of the lines contain the function names of the VirtIO code, so if needed one can search the code to see what it is doing. A list of OIDs along with their descriptions can be found at the Microsoft developer portal: https://docs.microsoft.com/en-us/windows-hardware/drivers/network/ |
KB14223 | Data Lens Ransomware Detection is flagging files that are not ransomware | Temp files created by MS Office (like Excel .xlb) may be created in a way that appears to mimic a ransomware attack. | Application Files like that of MS Office Suite and Chrome files show up as ransomware-infected files.
The way these application files get written to and renamed mimics that of the ransomware audit event pattern, and the final result of the file is a binary encrypted file. That is the reason why these files are getting flagged off. | To view the detected attacks and list out the files, you can use the following section of the Data Lens User Guide https://portal.nutanix.com/page/documents/details?targetId=Data-Lens:ana-analytics-view-threat-details-t.html.
You can export the list in a .csv format and review the File Name column to see the extensions being flagged. Here is a workaround to exclude certain file name/extension patterns from ransomware detection:
1. Go to the Ransomware UI. Click Update Blocked Signature List.
2. Enter the required patterns in the text box. This can be a comma-separated list. Click on the search icon.
3. The patterns will be shown as "Not available in the Block List". Click on Add to add them to the Block list. Let the task for updating the list complete. This will take a few minutes.
4. Once the task is complete, enter the patterns in the text box, then click on the search icon again.
5. The patterns will be shown as "Added to the Block list". Click on Remove to remove them from the Block list. Let the update task complete. Once it is complete, the problem should be solved.
Operations on the Files will be blocked briefly while the patterns are in the Block list.
Note: In Data Lens version DL2023.2, the ransomware detection module has been enhanced to reduce false positives by ignoring common system files and certain application files. |
KB12491 | Security mitigation for Apache log4j2 vulnerability on Witness VM 6.x (CVE-2021-44228) | This article provides mitigation steps for Apache log4j2 vulnerability on Witness VM 6.x | A critical vulnerability in Apache Log4j2 (CVE-2021-44228) has been publicly disclosed that may allow for remote code execution in Witness VM 6.x release.Witness VM 6.x has an inactive version of log4j2 present which is not instantiated nor listening for connections.For more details refer to the Security Advisory #23 https://download.nutanix.com/alerts/Security_Advisory_0023.pdf. | In order to mitigate the Apache log4j2 vulnerability, SSH into the Witness VM as user nutanix and execute the following command to remove the affected binary.
nutanix@CVM:~$ USE_SAFE_RM=no rm -rf ~/adonis
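To confirm the removal (an optional, illustrative check), verify that the directory no longer exists; the command should return "No such file or directory":
nutanix@CVM:~$ ls -ld ~/adonis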
|
""cluster status\t\t\tcluster start\t\t\tcluster stop"": ""esxcfg-route -l"" | null | null | null | null |
KB10490 | EXT4-fs (loop0): warning: mounting fs with errors, running e2fsck is recommended | This article explains how to resolve FS errors on /tmp or loop0 in CVM | Sometimes, the /tmp or loop0 device on CVM can get soft filesystem corruption. This article explains how to resolve such cases.The "dmesg -T" command on CVMs will show the following output:
Fri Jul 17 12:12:48 2020: EXT4-fs (loop0): warning: mounting fs with errors, running e2fsck is recommended
NCC will report the following warning:
Detailed information for fs_inconsistency_check:
Do note that the error is for loop0 and not for sdX devices. This KB should be used for loop0 or /tmp corruption only. If the FS errors have been generated for a long period of time, /tmp may get into a read-only state. This will in turn cause services like ntpd to not start.
nutanix@CVM:~$ sudo journalctl -u ntpd
nutanix@CVM:~$ touch /tmp/1
| The filesystem corruption on /tmp can be fixed using the following steps.
1. Put the CVM (on which the /tmp filesystem error is seen) into Maintenance Mode. Refer to KB-4639 http://portal.nutanix.com/kb/4639.
2. Unmount the /tmp filesystems.
nutanix@CVM:~$ sudo umount /tmp -l; sudo umount /var/tmp
3. Run Filesystem check.
nutanix@CVM:~$ sudo e2fsck -fy /root/filesystems/tmpvol.bin
4. Mount the /tmp back.
nutanix@CVM:~$ sudo mount -a
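Optionally (an illustrative check not in the original steps), confirm that /tmp is mounted read-write again:
nutanix@CVM:~$ mount | grep /tmp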
5. Exit the CVM (on which the /tmp filesystem error is seen) from Maintenance Mode. Refer to KB-4639 http://portal.nutanix.com/kb/4639. |
""Form Factor"": ""VMD node (Hot-plug capable)\n\t\t\tAHV: No CVM reboot | no Hypervisor reboot | no script needs to be runESXi: No CVM reboot | no script needs to be run\n\t\t\t\t\t\tNon-VMD node\n\t\t\tAHV: CVM will auto reboot upon the NVMe drive “add and partition” workflow | no Hypervisor reboot |
KB6228 | Citrix Director plugin "Unable to connect to the host" | Citrix Director plugin for AHV fails to make the connection with the Nutanix cluster and fails with error "Unable to connect to the host" | Issue: The Citrix Director plugin for AHV fails to make a connection with the Nutanix cluster and fails with the error "Unable to connect to the host". Cause: ICMP traffic between Citrix Director and the Nutanix cluster is blocked. | Verify that the server on which Citrix Director is installed has connectivity to the Virtual IP of the AHV cluster via port 9440. Disable the proxy if there is a proxy configured in Internet Explorer on the DDC server. Run the following command from PowerShell:
Test-NetConnection <Cluster VIP> -Port 9440
Example of successful connection:
PS C:\Users\Administrator> Test-NetConnection <VIP> -Port 9440
If the connection is not successful, verify the firewall and ensure there is no proxy server configured in the browser configuration. Note:
Currently, AHV Host Integration in Citrix Director is not implemented, hence it is expected that Status shows as 'Not available'. Only VM stats are currently integrated.
|
KB6995 | Nutanix DRaaS - A130179 - Replication is stuck because of long running replication tasks | It has been observed in the field that replication can get stuck due to long-running replication tasks. | Nutanix Disaster Recovery as a Service (DRaaS) is formerly known as Xi Leap. We have seen issues where replication gets stuck because of long-running replication tasks. This leads to the inability to delete old local recovery points due to running Cerebro tasks. This issue can happen when one of the VM vdisks is inadvertently deleted while the replication is ongoing; the follow-up recovery point then references the "old" recovery point and the tasks get stuck. You can confirm this by launching an instance to Nutanix DRaaS from the on-prem PC, where you can see the following alert in DRaaS.
When this issue occurs, attempts to delete VM Recovery Points will fail as a consequence of the long-running Cerebro tasks. [
{
"Description": "Cause of failure",
"Data Protection Tasks are not progressing": "Replication Tasks are running for long"
},
{
"Description": "Resolutions",
"Data Protection Tasks are not progressing": "Abort the long running tasks"
},
{
"Description": "Impact",
"Data Protection Tasks are not progressing": "System Indicator"
},
{
"Description": "Alert ID",
"Data Protection Tasks are not progressing": "A130179"
},
{
"Description": "Alert Title",
"Data Protection Tasks are not progressing": "Data Protection Tasks are not progressing"
},
{
"Description": "Alert Message",
"Data Protection Tasks are not progressing": "Multiple tasks are not making progress on PE: because of long running replication tasks"
}
] | Do not attempt to abort the tasks without consulting with Nutanix Support as this may lead to data inconsistencies.
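For reference only (an illustrative command, not from the original article), the long-running replication tasks can be listed from a CVM, for example:
nutanix@CVM:~$ ecli task.list include_completed=0 component_list=cerebro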
If you need assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. You can also contact the Support Team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/. |
KB4399 | ESXi: How to configure LSI HBA passthrough | Circumstances can cause the PCI address of the HBA to change, leading to CVM booting issues.
Here is how to detect and correct these issues. | After BIOS or ESXi updates, the LSI HBA might get a different PCI address after the host reboots.
An error similar to the following is generated when the CVM starts:
Insufficient resources. One or more devices (pciPassthru0) required by VM NTNX-CVM are not available on host host01.local.
For both the newer and older ESXi versions, the vmware.log for the CVM generates the following:
2019-04-27T17:11:24.748Z| vmx| I125: PCIPassthru: Failed to register device 0000:03:00.0 error = 0x13
2019-04-27T17:11:24.748Z| vmx| I125: Msg_Post: Error
2019-04-27T17:11:24.748Z| vmx| I125: [msg.pciPassthru.createAdapterFailedDeviceNotFound] Device 003:00.0 was not found.
2019-04-27T17:11:24.748Z| vmx| I125: ----------------------------------------
2019-04-27T17:11:24.748Z| vmx| I125: Vigor_MessageRevoke: message 'msg.pciPassthru.createAdapterFailedDeviceNotFound' (seq 2236) is revoked
2019-04-27T17:11:24.748Z| vmx| I125: Module DevicePowerOn power on failed.
Below is an example of the vmkernel log during the issue.
CVM booted without the boot device:
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: AddAlias: Not commiting alias vmhba2 for busAddress s00000002.00
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: AddAliasByBusAddress: No actual device present for configured device PCI address 's00000003.00' # <===== !!!!! No phys HBA
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: LoadAliases: deleted alias vmhba3 for busAddress s00000003.00 (new branch) # <===== !!!!! Removed
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: AddAlias: skipping matching alias vmhba2 for pci device s00000002.00 with assigned alias vmhba2
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: LoadAliases: deleted old branch for alias vmhba3, device 00000:003:00.0 # <===== !!!!!
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: InheritVmklinuxAliases: not a vmnic alias vmhba2
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: RemoveDevice: Removed alias for ancestorBusAddress: pci#s00000003.00#0 # <===== !!!!!
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: ADD event for bus=pci addr=s00000001.00 id=1000009715d90808010700.
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: Found driver pciPassthru for device bus=pci addr=s00000001.00 id=1000009715d90808010700.
CVM booted after re-seating the HBA and found the boot device correctly:
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: AddAlias: Not commiting alias vmhba2 for busAddress s00000002.00
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: AddAlias: skipping matching alias vmhba2 for pci device s00000002.00 with assigned alias vmhba2
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: GetAliasInternal: created new alias vmhba3 for device s00000001.00 # <===== !!!!! New device added
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: InheritVmklinuxAliases: not a vmnic alias vmhba3
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: InheritVmklinuxAliases: not a vmnic alias vmhba2
2019-04-27T17:11:24.748Z vmkdevmgr[66193]: ADD event for bus=pci addr=s00000001.00 id=1000009715d90808010700. # <===== !!!!!
| For the newer ESXi version
Check the current PCI address for the LSI adapter:
[root@esxi:~] lspci | grep -i lsi
0000:01:00.0 Serial Attached SCSI controller: Avago (LSI Logic) Fusion-MPT 12GSAS SAS3008 PCI-Express [vmhba2]
In the vmkernel.log, you see that the CVM tries to access 0000:03:00.0, which is no longer valid. Go to the ESXi host entry in vCenter where the problematic CVM is and select Configure > Hardware > ALL PCI DEVICES. Search for the LSI PCI device and select the checkbox next to it (it will be in a disabled state). Select TOGGLE PASSTHROUGH.
Reboot the ESXi host for the changes to take effect. Start the CVM.
For older ESXI version
Check the current PCI address for the LSI adapter:
[root@esxi:~] lspci | grep -i lsi
0000:01:00.0 Serial Attached SCSI controller: Avago (LSI Logic) Fusion-MPT 12GSAS SAS3008 PCI-Express [vmhba2]
In the vmkernel.log, the CVM tries to access 0000:03:00.0, which is no longer valid. Go to the ESX host entry in vCenter where the problem CVM is and select Configure > Hardware > PCI Devices. On the right, click CONFIGURE PASSTHROUGH. Locate the LSI PCI device and select the checkbox next to it.
Reboot the ESXi host for the changes to take effect. Go to the CVM Settings. Select the LSI device. Select the Remove button (small X at the end of the device entry):
Add and select the newly passthrough-enabled LSI device.
Start the CVM.
Notes:
If you encounter a CVM in a reboot loop with "mdadm main: Failed to get exclusive lock on mapfile" after rebooting the host, refer to KB-6226 https://portal.nutanix.com/kb/6226. VMware KB 1010789 https://kb.vmware.com/s/article/1010789 describes this configuration. |
KB16269 | NKE - Certificate rotation fails to rotate ETCD DVP certificate | NKE certificate rotation fails when rotating the ETCD DVP certificate | Certificate rotation tasks initiated in NKE will fail, reporting that the etcd service restart failed.
2024-02-22T20:53:40.13Z etcd_configure.go:469: [ERROR] Error while rotating the etcd cert for etcd node: karbon-xxx-yyyy-etcd-0, err: Operation timed out: Failed to configure with SSH: Failed to run command: on host(xx.xx.xx.xx:22) cmd(systemctl restart etcd.service) error: "Process exited with status 1"
Logging into one of the etcd nodes you may notice the etcd service is in activating state but reports etcd.service failed.
[root@karbon-xxxx-yyyy-etcd-0 docker-plugin-certs]# systemctl status etcd.service
There will be no docker volumes listed
[root@karbon-xxxx-yyyy-etcd-0 nutanix]# docker volume ls
Checking the docker volume plugin certificate in one of the etcd nodes, you will notice the certificate has expired.
[root@karbon-xxxx-yyyy-etcd-0 ~]# openssl x509 -in /etc/docker-plugin-certs/cert -noout -dates
| To resolve the issue, follow the steps below.
Step1: Download the script https://download.nutanix.com/kbattachments/16269/karbon_extract_cert.py to extract the new certificate from the failed task into the ~/tmp folder on one of the Prism Central VMs. The md5sum is 5d494bfd5426af6ea73f42260ff5ab44.
Step2: List the failed certificate rotation tasks with the command below. Before running the below commands, ensure you have logged in to karbonctl.
nutanix@NTNX-PCVM:~$ export CLUSTER_NAME=<karbon_cluster_name>
In the above command, replace <karbon_cluster_name> with the name of the Karbon cluster that needs to be fixed.
Step3: Run the script to extract the certificate from the above task UUID.
nutanix@NTNX-PCVM:~$ python ~/tmp/karbon_extract_cert.py --uuid <task_uuid> --dvp > ~/tmp/dvp_cert.pem
In the above command, replace <task_uuid> with the UUID taken from the output of Step2. An example output will look like below; this output includes a certificate and a private key.
nutanix@NTNX-PCVM:~$ cat ~/tmp/dvp_cert.pem
Run the below commands to extract the cert file and the private key separately
nutanix@NTNX-PCVM:~$ sed -n '1,/END CERT/p' ~/tmp/dvp_cert.pem > ~/tmp/cert
Step4: Copy the certificate into one of the master nodes in the Karbon cluster under the /home/nutanix/ folder by running the following commands. First, extract the private key to use for scp with the below commands:
nutanix@NTNX-PCVM:~$ ~/karbon/karbonctl cluster ssh script --cluster-name ${CLUSTER_NAME} > ${CLUSTER_NAME}.sh
The above command will show example output like below; from the output, copy the private key path to use in the next command:
nutanix@NTNX-PCVM:~$ sh ${CLUSTER_NAME}.sh -s
With the private key from the above output, run the below command to copy the files into one of the Karbon master nodes:
nutanix@NTNX-PCVM:~$ scp -i <private_key_file_path> ~/tmp/{cert,key} nutanix@<master_node_ip>:/home/nutanix/
In the above command, replace <master_node_ip> with the IP address of one of the master nodes.
Step5: SSH to the master node where you copied the files in the previous command. Run the following commands to replace the certificates in /etc/docker-plugin-certs and /var/nutanix/etc/docker-plugin-certs/ with the new certificate copied into the /home/nutanix/ directory.
CAUTION: Executing the below commands will crash the cluster since the ETCD service is shut down. Inform the customer that there will be a short downtime during the procedure.
[nutanix@xxxx-yyyy-master-1 nutanix]$ sudo su
Step6: Confirm the certificate is valid with the below command
[root@xxxx-yyyy-master-1 nutanix]# for i in $(cat etcd_ips.txt) ;do echo ===$i===; ssh $i 'openssl x509 -in /etc/docker-plugin-certs/cert -noout -dates' ;done
Step7: Disable the docker plugin and re-enable it.
[root@xxxx-yyyy-master-1 nutanix]# for i in $(cat etcd_ips.txt) ;do echo ===$i===; ssh $i 'docker plugin disable nutanix:latest -f && docker plugin enable nutanix:latest' ;done
Step8: Confirm you are able to list docker volumes with the below command,
root@xxxx-yyyy-master-1 nutanix]# for i in $(cat etcd_ips.txt) ;do echo ===$i===; ssh $i 'docker volume ls' ;done
You should see some etcd volumes listed in the above command output. If you do not see any etcd volumes in the output, reach out to an STL or Karbon SME for further assistance.
Step9: Start the etcd service.
[root@xxxx-yyyy-master-1 nutanix]# for i in $(cat etcd_ips.txt) ;do echo ===$i===; ssh $i 'systemctl start etcd' ;done
Step10: Re-initiate the certificate rotation task again from Prism Central VM ssh session.
nutanix@NTNX-PCVM:~$ ~/karbon/karbonctl cluster certificates rotate-cert --cluster-name ${CLUSTER_NAME} --skip-health-checks
Step11: Check the status of cert rotation and confirm it is successful
nutanix@NTNX-PCVM:~$ ~/karbon/karbonctl cluster certificates rotate-cert status --cluster-name ${CLUSTER_NAME} --skip-health-checks
|
KB7181 | Aborting Move VM Tasks | This KB describes how to abort/cancel migration plans manually if the Abort option in the UI fails. | This KB has been merged with the contents of KB 5560. It covers aborting/cancelling Move VM tasks for Move versions prior to 3.x as well as 3.x and above.
PLEASE NOTE: This issue is fixed in the latest Move 3.5.1. If you encounter this issue in Move 3.5.1 and above, please reach out to Move Engineering on Slack channel #tc_move. | For Move versions 3.x or later, follow the below steps:
1. Login to Move VM through SSH using admin / nutanix/4u (default password)
2. Switch to the root user.
[admin@nutanix-move ~]$ rs
3. Log in to the postgres shell. This can be done with the below command:
root@nutanix-move ~]# postgres-shell
4. Then invoke the Postgres-cli using below command
root@nutanix-move ~]# psql -d datamover
5. List up the UUID of migration tasks
Run following query on psql tool:
datamover=# SELECT mpuuid,name FROM migrationplans; (please don't forget the last semi-colon ';')
(sample)
6. Remove target (aborted) task from task list. Run following SQL command on psql tool:
datamover=# DELETE FROM migrationplans WHERE mpuuid='<migration_plan_UUID>'; (please don't forget the single quotes and the last semi-colon ';')
7. Exit the psql tool. Run the following command in the psql tool:
datamover=# \q
8. Exit the postgres-shell (ctrl+d)
For Move versions prior to 3.x, perform the below steps:
The aborting task can be deleted from the task list. Please follow the steps below:
1. Login to Move VM through SSH using admin / nutanix/4u (default password)
2. Switch to the root user.
[admin@nutanix-move ~]$ rs
3. Start psql tool
Run following command:
[admin@nutanix-move ~]$ sudo -u admin psql datamover
4. List up the UUID of migration tasks
Run following query on psql tool:
datamover=# SELECT mpuuid,name FROM migrationplans; (please don't forget the last semi-colon ';')
Pick target task and UUID for the next step.
5. Remove target (aborted) task from task list.
Run following SQL command on psql tool:
datamover=# DELETE FROM migrationplans WHERE mpuuid='<migration_plan_UUID>'; (please don't forget the single quotes and the last semi-colon ';')
6. Exit psql tool
Run following command on psql tool :
datamover=# \q
After this, proceed to the source cluster and target cluster for cleanup so that no extra space is consumed on the containers. On the source cluster, remove all the snapshots created by Move for that VM using vCenter / Hyper-V Manager. The snapshots will be named Move-Snap-x (where x is the number of the snapshot, starting 0, 1, 2 and so on). |
KB13918 | Remote connection update task may get stuck or fail on PE and PC | Stuck remote_connection_intentful task while deleting/updating/resetting the remote connection. PC shows as "Disconnected" in PE UI | Stuck remote_connection_intentful task while deleting/updating/resetting the remote connection:Scenario 1 : Stuck task on PE :
nutanix@CVM:~$ ecli task.list include_completed=0
Stuck task on PC :
nutanix@PCVM:~$ ecli task.list include_completed=0
Remote connection task on PE fails with error message - ENTITY_NOT_FOUND
nutanix@CVM:~$ nuclei remote_connection.reset_pe_pc_remoteconnection
nutanix@CVM:~$ ecli task.get d597e84e-78e4-4e3b-b638-1393665b6a68
On PC, create_remote_connection_intentful connection fails with message
"ACCESS_DENIED: No permission to access the resource. https://<PE>:9440/v3/remote_connections"
nutanix@PCVM:~$ ecli task.list
nutanix@PCVM:~$ ecli task.get 64930a16-9a20-425a-acf0-6498cd33aa84
On PE, aplos_engine logs report similar error message - ENTITY_NOT_FOUND for remote connection task.
nutanix@CVM:~$ grep "d60c549a-e1ed-582b-bae9-b6becb00a15b" ~/data/logs/aplos_engine.out
Prism Central cluster is registered in PE, according to ncli and nuclei:
nutanix@CVM:~$ ncli multicluster get-cluster-state
But the Remote Connection is absent. Please use the following command on AOS 6.5 and below:
nutanix@CVM:~$ nuclei remote_connection.list_all
And use the following on AOS 6.7 or above.
nuclei cluster_connection.list_all
The same behavior is seen from the PC side. Editing affected PE resources from PC may produce the error:
INVALID_ARGUMENT: Given input is invalid.
Scenario 2 : PC is listed as Disconnected in the PE UI, while all other workflows appear to be working as expected.
Task stuck on PE :
nutanix@CVM:~$ ecli task.list include_completed=0
Task stuck on PC :
nutanix@PCVM:~$ ecli task.list include_completed=0
Task on PC is stuck at 0% without any progress :
nutanix@PCVM:~$ ecli task.get 468cdcb6-d417-415f-bd1f-7e0e898e690c
aplos_engine.out on PE will have the below signatures :
2023-04-18 19:07:30,302Z INFO intent_spec.py:182 <98c85d20> [a272e7a3-2165-4a87-9e58-331a3e483476] [None] Found intent spec with UUID f41bd3c7-7e2f-53b2-868a-367de5b172fa and cas_value 1609
Scenario 3: At PC, the Reset command fails:
<nuclei> remote_connection.reset_pe_pc_remoteconnection
Also, remote connection is false on ncli multicluster get-cluster-state
<ncli> multicluster get-cluster-state
PE task fails with the message "ENTITY_NOT_FOUND" :
<ergon> task.list status_list=kFailed component_list=aplos
And the PC task fails with the message "ACCESS_DENIED":
<ergon> task.list component_list=aplos
At PE, the following traceback can be observed in aplos_engine.out:
2023-12-08 16:27:29,034Z INFO remote_connection_api.py:234 response from rc {"state": "ERROR", "code": 404, "message_list": [{"reason": "ENTITY_NOT_FOUND", "message": "remote_connection : 6a2b8358-3408-5018-9147-710399eb8655 does not exist."}], "kind": "remote_connection", "api_version": "3.1"}
Validate if the connection with the PC is listed:
For AOS 6.5 and below, use:
nutanix@CVM~$ nuclei remote_connection.list_all
For 6.7 and above:
nutanix@CVM~$ nuclei cluster_connection.list_all
| Workaround: Scenario 1 :
Make sure there is no problem with connectivity between PE and PC on port 9440 (see KB 6970 https://portal.nutanix.com/kb/6970). If the parent task (update_remote_connection_intentful) is stuck on PE, then restart aplos and aplos_engine on PE to clear the task:
nutanix@CVM:~$ allssh genesis stop aplos aplos_engine && cluster start
Download create_rc_v1.py script to ~/bin directory on PCVM and verify md5sum: https://download.nutanix.com/kbattachments/13918/create_rc_v1.py https://download.nutanix.com/kbattachments/13918/create_rc_v1.py
nutanix@PCVM:~$ cd bin/
Run the script and provide Prism Element UUID
nutanix@PCVM:~/bin$ python create_rc_v1.py --uuid <pe-uuid>
Note: When we do reset_remote_connection from PE, it creates a parent task on PE and a child task on PC. The child task on PC gets stuck in the running state. This script finishes the task on PC while automatically creating the RC on PE by completing the parent task.
Scenario 2 : NOTE: Consult with a Sr. SRE Specialist/MSP SME/STL/Devex prior to executing workarounds. Ensure requested data is captured and uploaded to ENG-578992 https://jira.nutanix.com/browse/ENG-578992. If there are any doubts, open a Tech Help for STL guidance.
PE authenticates with an internal user to PC for some workflows. When this user is incorrect or missing from PC, remote_connection workflows will fail, and PE may show the PC as "Disconnected" in the PE UI.
Use the following for further confirmation.
On PC, download the get_users_info_zk_repo.py https://download.nutanix.com/kbattachments/13971/get_users_info_zk_repo.py script into the ~/tmp directory and confirm the md5sum matches.
nutanix@PCVM:~$ cd ~/tmp
nutanix@PCVM:~/tmp$ wget https://download.nutanix.com/kbattachments/13971/get_users_info_zk_repo.py
nutanix@PCVM:~/tmp$ md5sum get_users_info_zk_repo.py
625731557a2dad7045e5779d7081a1be get_users_info_zk_repo.py
Create a directory for data collection and execute the script, collecting the output.
nutanix@PCVM:~$ mkdir ~/tmp/analysis
nutanix@PCVM:~$ python ~/tmp/get_user_info_zk_repo.py > ~/tmp/analysis/get_user_info_from_pc.dump
Search the get_users_info_from_pc.dump file for the UUID of the impacted PE.
nutanix@PCVM:~$ cat ~/tmp/analysis/get_user_info_from_pc.dump |grep <pe_cluster_uuid>
nutanix@PCVM:~$
Typically, we will see that the user entry does still exist in IDF.
nutanix@PCVM:~$ idf_cli.py get-entities --guid abac_user_capability > ~/tmp/analysis/idf_abac_user_capability.dump
nutanix@PCVM:~$ cat ~/tmp/analysis/idf_abac_user_capability.dump |grep <pe_cluster_uuid>
str_value: "<pe_cluster_uuid>"
Collect a full logbay bundle from PC and PE. In addition, collect all available prism_gateway.log logs on PC and PE. Include all of the data from the analysis directory in your sharepath on Diamond. Add the diamond sharepath to ENG-578992 https://jira.nutanix.com/browse/ENG-578992 for further investigation into the root cause. Once the above logs are collected, proceed with the following workaround.
If PC is NOT IAMv2 enabled:
Confirm PC does not have IAMv2 enabled, zknode iam_v2_enabled should not exist.
nutanix@PCVM:~$ zkls /appliance/logical/iam_flags
On PE, download the getZkCreds.py https://download.nutanix.com/kbattachments/13918/getZkCreds.py script to ~/tmp directory to gather the internal user info and validate the md5sum
nutanix@CVM:~$ cd ~/tmp
Execute getZkCreds.py and enter the PC UUID when prompted. This will provide a username and password output. Note: the AttributeError at the end can be safely ignored.
nutanix@CVM:~/tmp$ python getZkCreds.py
On PC, download the script named updateZkUserRepo.py https://download.nutanix.com/kbattachments/13918/updateZkUserRepo.py and validate the md5sum.
nutanix@PCVM:~$ cd ~/tmp
Execute the updateZkUserRepo.py script and enter the username and password when prompted,
nutanix@PCVM:~/tmp$ python updateZkUserRepo.py
It is recommended to try the old script (mentioned above) first. If it fails with the below error in the mercury logs, then try the v2 script.
E20240404 16:21:02.352823Z 40568 iamv2_interface_manager.cc:641] Request to: https://iam-proxy.ntnx-base:8445/api/iam/authn/v1/oidc/token failed , http status: 401, response body: {"message":"Invalid credentials","code":7}
If you see the above error in the mercury logs, try updateZkUserRepo_v2.py https://download.nutanix.com/kbattachments/13918/updateZkUserRepo_v2.py using the same process: execute the updateZkUserRepo_v2.py script and enter the username and password when prompted. Confirm PE no longer shows PC as "Disconnected".
If PC is IAMv2 enabled: There are two approaches to update the password. Regardless of the approach, you need to start with the below steps to confirm that IAMv2 is enabled and retrieve the required information before proceeding with the password update.
Confirm PC does have IAMv2 enabled, zknode iam_v2_enabled should exist.
nutanix@PCVM:~$ zkls /appliance/logical/iam_flags
Execute the same workflow used for non-IAMv2 enabled PC clusters mentioned in the section "If PC is NOT IAMv2 enabled". Once complete, the following will walk through removing the old user and the disable/enable workflow in IAMv2. Use kubectl to get the master cape pod.
nutanix@PCVM:~$ sudo kubectl get pods -A -l role=master
Connect to PostgresDB of the cape pod and use the following output to delete the user from IAMv2 DB (everything in bold should be executed).
nutanix@PCVM:~$ sudo kubectl exec -it cape-gnnw-65f74d79c9-p45tm -n ntnx-base bash <--- enter cape master pod
On PE, download the getZkCreds.py https://download.nutanix.com/kbattachments/13918/getZkCreds.py script to ~/tmp directory to gather the internal user info and validate the md5sum
nutanix@CVM:~$ cd ~/tmp
Execute getZkCreds.py and enter the PC UUID when prompted. This will provide a username and password output. Note: the AttributeError at the end can be safely ignored.
nutanix@CVM:~/tmp$ python getZkCreds.py
Approach 1: With this approach, we will first try to retrieve the hash of the PE user password and manually update the IAMv2 DB with the new hash. Be extremely cautious when updating the database manually.
On PC, download a script to generate the PE user's password hash from this link https://download.nutanix.com/kbattachments/13918/gen_password_hash_v1.py and verify its md5 checksum.
nutanix@PCVM:~$ wget https://download.nutanix.com/kbattachments/13918/gen_password_hash_v1.py
Execute the script and enter the PE user's credentials retrieved from the PE with getZkCreds.py script as instructed above.
nutanix@PCVM:~$ PYTHONPATH=~/bin python gen_password_hash_v1.py
Approach 2:
If the first approach has not helped, it will be required to delete the user from the IAMv2 DB and then to disable/re-enable IAMv2 to migrate the user back into IAMv2. This will result in a period where AD authentication may not work while the IAMv2 migration is taking place. Ensure the customer is comfortable with this or plan a time to execute these steps.
Disable IAMv2.
nutanix@PCVM:~$ zkls /appliance/logical/iam_flags
Get MSP cluster UUID.
nutanix@PCVM:~$ mspctl cluster list
Delete iam-bootstrap deployment using the MSP cluster UUID obtained above.
nutanix@PCVM:~$ mspctl application -u <msp_cluster_uuid> delete iam-boostrap -f /home/docker/msp_controller/bootstrap/services/IAMv2/iam-bootstrap.yaml
Enable iam-bootstrap deployment. This will trigger the deployment workflow to enable IAMv2 and migrate the users from /appliance/physical/userrepository to IAMv2 DB.
nutanix@PCVM:~$ mspctl application -u <msp_cluster_uuid> apply iam-boostrap -f /home/docker/msp_controller/bootstrap/services/IAMv2/iam-bootstrap.yaml
Once you see migration_complete in /appliance/logical/iam_flags zknode, migration is complete.
nutanix@PCVM:~$ zkls /appliance/logical/iam_flags
Confirm PE no longer shows PC as "Disconnected".
Scenario 3: This scenario can be fixed by following the workaround for Scenario 1. |
KB6925 | CVM fails to power on with error message "The virtual machine cannot be powered on because virtual nested paging is not compatible with PCI passthru in VMware ESX 6.0.0." | In rare cases, particularly after ESXi updates, a CVM (Controller VM) may fail to boot with the error message "The virtual machine cannot be powered on because virtual nested paging is not compatible with PCI passthru in VMware ESX 6.0.0." | In rare cases, particularly after ESXi updates, a Nutanix CVM (Controller VM) may fail to boot with the error message "The virtual machine cannot be powered on because virtual nested paging is not compatible with PCI passthru in VMware ESX 6.0.0."
In vmware.log, you see the following messages or similar:
2019-02-05T16:22:17.337Z| vmx| I125: Msg_Post: Error | On the ESXi host that owns the affected CVM, use KB-4399 http://portal.nutanix.com/kb/4399 to verify that all basic LSI passthrough settings are correct.
If the issue still persists after going through KB-4399 http://portal.nutanix.com/kb/4399, continue.
The setting that enables the nesting of hypervisors should be disabled globally, but if there is a requirement to have this enabled at the global level, then skip the "Global" section below and go straight to the "CVM Only" section.
Global
SSH to the ESXi host and edit /etc/vmware/config
[root@host] vi /etc/vmware/config
For ESXi version 5.1 and below (or VMs with hardware versions 7 or earlier), look for the following setting. This could appear in upper or lower case.
vhv.allow = "TRUE"
Change this setting to:
vhv.allow = "false"
For ESXi above 5.1, look for the following setting. This could appear in upper or lower case.
vhv.enable = "TRUE"
Change this setting to:
vhv.enable = "false"
Save the file and exit.
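To confirm the change (an optional, illustrative check not in the original steps), you can inspect the setting, for example:
[root@host] grep -i vhv /etc/vmware/config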
CVM Only
SSH to the ESXi host and edit /vmfs/volumes/{datastore}/ServiceVM_Centos/ServiceVM_Centos.vmx. Add the line:
vhv.enable = "false"
Save the file and exit.
The CVM should now power on normally. |
KB9924 | Nutanix Self-Service - Steps to increase IDF memory | This KB article has the steps to increase IDF memory for the VM migration to succeed. | Nutanix Self-Service (NSS) is formerly known as Calm. When a VM gets migrated to Calm as a Single-VM application, a number of IDF entities are created pertaining to the newly created Calm application. As these entities are created in IDF, the IDF memory requirements increase. If a large number of VMs are getting migrated, or if the setup is already heavily loaded with IDF memory usage, then to avoid crashes in IDF due to insufficient memory availability, it is required to increase the IDF memory. This KB article has the steps to increase IDF memory so the VM migration can go through. | "WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit" Step 1: Get the amount of IDF memory to be increased using the below command on PC
nutanix@pcvm$ docker exec -it calm bash
To know how much memory to increase use the below command
sh /home/calm/bin/get_vmmigration_memory.sh
Step 2: Increase IDF memory
Set the insights gflag with the new memory value:
Create the file /home/nutanix/config/insights_server.gflags if it does not exist.
To this file, add the following:
--insights_rss_share_mb=<existing value> + <Value obtained from the above script>
The existing value depends on the PC type:
Small PC - 5.5 GB
Large PC - 21 GB
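As an illustration only (the 1024 MB figure below is hypothetical), if the deployment is a Small PC (5.5 GB, i.e. 5632 MB) and the script reports that an additional 1024 MB is required, the resulting line in /home/nutanix/config/insights_server.gflags would be:
--insights_rss_share_mb=6656
Gflag changes generally take effect only after the insights_server service is restarted; confirm the appropriate restart procedure for the PC version in use before applying the change.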
After increasing continue with the migration. |
KB11466 | Objects - Objects Deployment post CMSP enablement with IAM service is not healthy | If the deployment of the objects cluster is performed after CMSP is enabled, it might result in the deployment failure. | Problem Signatures
You are running PC 2021.5, but Objects Manager is on a version < 3.2
You have enabled Objects and then enabled CMSP in Prism Central prior to deploying your objects cluster
The object store deployment will fail with the error: IAM service is not healthy
Service Manager logs (in Prism Central: /home/nutanix/data/logs/aoss_service_manager.out) would have the below signature:
time="2021-04-01 10:25:26Z" level=info msg="IAM endpoint: iam.ntnx-base.oss-pn1.qa.nutanix.com:5553" file="poseidon_utils.go:124"
| This issue is fixed in Objects Manager 3.2.
To resolve the issue, upgrade Objects Manager and start a new deployment:
Delete the existing failed deployment in UI
Upgrade Objects Manager via LCM to 3.2 or higher
Re-attempt the deployment.
As a workaround, restart Objects Manager service and resume Objects Deployment:
SSH into a PCVM
Run the below command to stop Objects Service Manager (UI and other manageability aspects will be impacted for a short duration)
PCVM$: allssh genesis stop aoss_service_manager
Start the service again from the PCVM:
PCVM$: cluster start
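To verify that the service has come back up before resuming (a sketch using the same PCVM session):
PCVM$: genesis status | grep aoss_service_manager
The service should be listed with a set of process IDs; an empty list indicates it has not started yet.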
Resume deployment in UI. |
KB15502 | Nearsync could experience issues after upgrading AOS to 6.5.x or higher if one of the limits in the PD configuration maximums is not complied with. | Nearsync might be malfunctioning after upgrading AOS to 6.5 or higher if one of the limits in the PD configuration maximums is not respected. | After upgrading the AOS version to 6.5.x, NearSync may malfunction due to the PD configuration maximums enforced in the newer AOS versions.
Symptoms : To pinpoint the issue please proceed with the following steps:
The following/similar alerts will be detected from the cluster/PD:
Above alert "failing PD snapshot" is raised when 20 full snapshots are taken in less than 1 hour. In cerebro.INFO logs we can see 20 full snapshots are taken in few minutes.
13:42:11.924722Z opid = 1358749697, = SnapshotProtectionDomain, Creation time 20230831-13:42:08-GMT+0000, Duration = 3, take_full=true Error kNoError,
The 21st snapshot fails with error kAutonomousNearSyncSnapshotFailed
13:54:05.485801Z opid = 1358757487, = SnapshotProtectionDomain, Creation time 20230831-13:54:01-GMT+0000, Duration = 4, take_full=true Error kAutonomousNearSyncSnapshotFailed
Then a new alert about transitioning out of nearsync is raised
W20230831 13:54:05.482661Z 18819 protection_domain.cc:24228] notification=ProtectionDomainNearSyncTransitionOut protection_domain_name=P1-CSPN-01-R_1596015005090 rpo_string=1 hour(s) reason=Disabling nearsync due to snapshot failures (is_resolve_notification=false)
Also in the cerebro.INFO logs, we can observe that replications of type "async" are occurring every few minutes, whereas there should be one asynchronous replication every hour.
05:45:27.272854Z opid = 1358618844, = Replicate, Creation time 20230831-05:45:02-GMT+0000, Duration = 24, replicated_bytes=0 replication_type=async,
Minutely replications with type "near_sync" and with opcode ReplicateEntitiesMetaOp are not happening. Most of SnapshotProtectionDomain metaops are of type "Full", no incremental snapshots.
13:51:28.734894Z opid = 1358756113, = SnapshotProtectionDomain, Creation time 20230831-13:51:24-GMT+0000, Duration = 3, take_full=true Error kNoError,
Check the number of VMs in near-sync PD using the 2020 page or from prism UI. You will find more than 10 VMs, which is the max allowed as per Nutanix Configuration Maximums:
| To fix the issue, please follow these steps:
Make sure that you are following the Nutanix Configuration Maximums https://portal.nutanix.com/page/documents/configuration-maximum/list?software=Disaster%20Recovery%20-%20Protection%20Domain&version=6.5
Split NearSync PDs so that each contains a maximum of 10 VMs (see the sketch below). Note that in newer AOS versions, attempting to add more than 10 VMs to a NearSync PD is not allowed. |
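To move VMs between protection domains, the standard ncli commands can be used (a sketch only; the PD and VM names are placeholders, and PD membership should be verified afterwards):
nutanix@cvm$ ncli pd unprotect name=<source_pd_name> vm-names=<vm_name>
nutanix@cvm$ ncli pd protect name=<new_pd_name> vm-names=<vm_name>
nutanix@cvm$ ncli pd ls name=<new_pd_name>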
KB12537 | Troubleshooting Nutanix DRaaS connectivity with Multiple VPN gateways | Troubleshoot On-Prem connectivity to Nutanix DRaaS when using multiple VPN gateways | Nutanix Disaster Recovery as a Service (DRaaS) is formerly known as Xi Leap.
You will note that the replication from one or more clusters is not working and the customer has multiple VPN gateways for redundancy purposes.
At the Xi portal, everything looks fine:
VPN gateway Status is On at all VPN gateways
The IPSec status is Connected
EBGP status is Established
| Check the routes on the affected cluster. To check the routes, run:
route -n
Or
ip route
Note: You may also compare the output with a working cluster if you have one.
The output should return one route to the Xi public IP using the VPN gateway as the router.
Example:
Kernel IP routing table
If the route to Xi is missing, you can manually create it by running:
allssh sudo ip route add <Xi_GATEWAY_SUBNET> via <VPN_VM_IP>
Where:
<Xi_GATEWAY_SUBNET> -> Obtained from the VPN Summary on the Xi portal, in the format shown there: IP/Mask. For the example above, it would be 206.80.158.136/29.
<VPN_VM_IP> -> The IP used by the VPN VM on the internal network.
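Once the route is added, you can confirm which gateway the CVM will use for a given Xi address (a sketch; <Xi_IP> is any IP inside the Xi gateway subnet):
nutanix@cvm$ ip route get <Xi_IP>
The output should show the route going via the VPN VM IP.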
After adding the route, the replication should start to work.
In regards to ENG-413171, "LB routes are not advertised in tab-eth0 when CMSP is enabled": if CMSP is enabled, the workaround is to add a route in PC as mentioned in the Jira:
sudo ip route add table tab-eth0 <Xi_GATEWAY_SUBNET> via <VPN_VM> |
KB5179 | SCOM Exception: The remote server returned an error: (401) Unauthorized. | null | When you configure the Nutanix cluster discover SCOM template, you might receive the Cluster connection successful validation from the Nutanix Cluster Configuration and Verified from the IPMI Account Configuration.
However, when you apply the configuration, nothing is discovered. From the SCOM server Operations Manager event log, you can observe the following:
Following is the error message:
Nutanix.x.x.x.x.ClusterDiscovery.ps1 : Cluster Discovery Error: | This issue occurs because you do not have the local administrator rights. When you are running the cluster discovery, you need to run the SCOM dashboard as the administrator.Right-click the SCOM dashboard and run it as the administrator. This should resolve the Unauthorized message. |
KB11461 | LCM inventory is taking longer time to complete on large clusters | LCM inventory is taking longer time to complete on large clusters | LCM run inventory is taking longer time to complete on large clusters which are running with huge workloads.
Example: An inventory on a 48-node ESXi cluster took close to 9 hours to complete successfully.
Prechecks ran for 20 minutes and the inventory op for 7 hours 52 minutes. The inventory op had 500+ subtasks. | Investigate the issue to understand what task is stuck and why. There can be multiple areas where the LCM inventory task can take time - it is important to identify the cause in each individual case.
Steps to triage:
Understand the stuck task from ecli task.list
Example :
nutanix@CVM:10.x.x.x:~$ ecli task.get 5359be40-9bf7-465c-a573-7734ccb70675
In the above example, the task causing the delay was "Product Meta compatibility check".
It took approximately 4 hours 20 minutes to complete the Product Meta compatibility check.
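To identify which subtask is currently running during a long inventory, listing only uncompleted tasks is usually the quickest approach (a sketch):
nutanix@CVM:~$ ecli task.list include_completed=false
Take the UUID of the long-running LCM subtask from this output and inspect it with "ecli task.get <task_uuid>" as shown above.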
Based on the above observation, proceed to create a TH/ONCALL to provide appropriate relief to the customer, and then decide whether to create an ENG for the Engineering team to work on any noticed design improvements.
With ENG-395726 https://jira.nutanix.com/browse/ENG-395726 (LCM-2.6) LCM inventories have been made considerably faster. Kindly upgrade to the latest LCM version. |
""cluster status\t\t\tcluster start\t\t\tcluster stop"": ""ethtool -i VMNic Name"" | null | null | null | null |
KB13243 | Nutanix Self-Service: Creating VM fails with error "Script execution failed with status 1!" | This article discusses an issue with launching VM/App on Self-Service (formerly Calm). | Nutanix Self-Service is formerly known as Calm.
Deploying VM/App from a Market Place throws the following error under section "Allowed to deploy new desktop"
Script execution failed with status 1!
The styx logs (/home/docker/nucalm/log/styx.log) will show the following entry:
[2022-06-13 00:54:59.920495Z] INFO [styx:74330:DummyThread-148] [:][cr:5817f497-eaf3-413f-8542-ae475f896838][pr:][rr:5817f497-eaf3-413f-8542-ae475f896838] calm.lib.model.store.base.patch_spec_dict:692 [:::] Patching key username. debug_data is {'env_value': u'calm_setup', 'editables_whitelist': ['username', 'secret', 'passphrase'], 'always_patch_keys': ['account_uuid'], 'should_patch_platform_dep_fields': False, 'spec_value': u'[email protected]', 'platform_depenedent_fields': []}
| From the UI error and the log snippet, it is clear that a user authentication error is causing the issue. The user in question can be found in the logs (here, [email protected]).
The issue can be caused if the credentials https://portal.nutanix.com/page/documents/details?targetId=Self-Service-Admin-Operations-Guide:nuc-app-mgmt-cred-account-settings-c.html given for the user in the Blueprint/Marketplace are not working (either due to an update on the AD side or expiry). Fix the issue with the credentials to resolve it.
KB13800 | VSS snapshot fails with error "cannot snapshot file in recycle bin" in Hyper-V clusters | Hyper-V VSS snapshot during any Backup process on Nutanix fails as it tries to snapshot files in recycle bin as well. | It has been discovered in the field that backup operations using VSS fail on Hyper-V clusters upgrading to AOS 5.18.x and above.
Identification:
Backup fails with error messages in Stargate logs below:VSS snapshot attempts to take a snapshot of the files in recycle bin causing snapshots to fail.
E20220901 22:35:18.772078Z 40571 shadow_copy_agent.cc:1274] Attempted to make an unsupported NfsSnapshotGroup call
| Root cause:
This is caused due to recycle bin feature being enabled on AOS 5.18.x and higher versions. As the VSS snapshot tries to backup the contents of the recycle bin it fails.
Workaround:
The workaround for this is to disable recycle bin and clear any existing contents.Follow Recycle Bin in AOS /articles/Knowledge_Base/Recycle-Bin-in-AOS to disable the recycle bin feature (Engage a STL via TH if you require assistance for this activity).
Note: Starting with AOS 5.20.2 and 6.0.1, the RecycleBin cleanup option is available in Prism UI as well.
Note: Disabling the RecycleBin does not imply cleanup; follow the KB noted above to disable and clear the data.
IMPORTANT: Once the customer upgrades to a version with the fix, the recycle bin needs to be re-enabled.
KB7751 | Hyper-V: Upgrade: Manual Upgrade Process | It may be required to perform a manual upgrade for a Nutanix Hyper-V cluster due to issues or scenarios where the 1-click automation may not be supported or may not function as expected. | Purpose:
It may be required to perform a manual upgrade (or perform specific steps) for a Nutanix Hyper-V cluster due to issues or scenarios where the 1-click automation may not be supported or may not function as expected.
Known Scenarios:
The customer is using SCVMM (System Center Virtual Machine Manager) with Logical Switches. Currently, 1-click upgrades of Hyper-V do not support upgrading with Logical Switches.
The customer is unable to upgrade using the 1-click method due to extended fail-over attempts.
Non-standard partition configurations
No access to a qualified Windows ISO
1-click fails due to VLAN settings in the host(s) and CVM(s)
The customer is using LACP (with LBFO NIC teaming) on the host networking
1-click upgrade fails and manual intervention is needed to resume or complete the upgrade.
Pre-Upgrade Information and Recommendations:
It is recommended that the customer has a workstation/server running Windows 10 or Server 2016 with Hyper-V/Failover Cluster and SCVMM.
If running a lower version of SCVMM, you must first upgrade or install SCVMM. For example, SCVMM 2012 cannot manage Windows Server 2016 Hyper-V and Failover Cluster.
It is recommended that the customer installs the Remote Server Administration Tools (RSAT) https://www.hammer-software.com/how-to-install-remote-server-administration-tools-rsat-on-windows-server-2012-using-server-manager/ on a workstation or server before beginning this process.
It is recommended that the customer uses a system that is located within the same location as the Nutanix Hyper-V cluster. Java and IPMI/iDRAC will be required to virtually mount ISO(s). This should not be done over VPN or wireless.
During the upgrade process, do not deploy any new virtual machines until the upgrade is completed.
The customer should download and install the latest 2016 Rollup Patches to ensure the CredSSP patch is applied:
https://support.microsoft.com/en-us/help/4000825/windows-10-windows-server-2016-update-history
Use the below KB to attempt to resolve upgrade issues before manually imaging the host.
KB 6506 https://portal.nutanix.com/kb/6506
If you are upgrading Hyper-V to 2022, ensure that you have migrated virtual Switches from LBFO to SET before upgrade. Refer to KB-11220 https://portal.nutanix.com/kb/11220 for details. | It is recommended that you read through the entire process thoroughly before attempting this process.
If you are unsure about any steps, please reach out to a senior Nutanix resource.
Upgrade Cluster Preparations Steps:
Stage ISO.
Windows 2016
Windows 2019
Phoenix
For G4/G5 Nodes, test the SATADOM write speed. If lower than 50 MB/s, we may need to consider SATADOM replacement before upgrading. The install of Windows and/or Phoenix may be extremely slow or may fail. (Hyper-V 2019 is not supported on G4 or lower). Please refer to KB-3252 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0320000004H02CAE to measure SATADOM write speed.Check Windows Host Patches. If the Windows hosts are too far out of date, i.e. greater than six months, you may experience issues with live migration between nodes.
allssh winsh get-hotfix
Run an NCC Health Check and resolve any cluster issues before upgrading.
ncc health_checks run_all
Document cluster details.
ncli cluster info
Document host details.
ncli host ls
Please repeat the below steps until all nodes have been upgraded.
Manual Upgrade Steps:
Find the host id from Preparation Step 6 (ncli host ls).
Example:
Id : 00000000-a8c0-e2dd-3a91-9999999999::6
Put the host into maintenance mode by running the below on any CVM.
ncli host edit id=<Host ID> enable-maintenance-mode=true
Example:
ncli host edit id=6 enable-maintenance-mode=true
SSH to the CVM of the host you are imaging and power it off. Wait until CVM is fully shut down before moving to the next step.
cvm_shutdown -P now
Pause the node and choose "Drain Roles" in Failover Cluster Manager.
Select host > Right-click host > Pause > Drain Roles
If the cluster is using SCVMM, put the host into Maintenance Mode.
Select host > Right-click host > Start Maintenance Mode
Evict node from Failover Cluster Manager.
If the cluster is using SCVMM, remove host.
VMs and Services > Select host > Right-click host > Remove
Log in to the IPMI of the host, plug in the Windows 2016 ISO into the virtual media, and reset the power on the host.
In Active Directory, under Computers, right click the host and choose "Reset Account".
Refer to KB 5591 for Windows installation steps.
Once the installation completes, mount a Phoenix ISO into the IPMI's virtual media and reset the power on the host. Once you get to the Phoenix screen, choose "Configure Hypervisor".
Once the Phoenix process completes, log in to the host with user: Administrator and password: nutanix/4u
Run the first boot script manually via PowerShell.
D:\firstboot.bat
Note: This will cause a reboot. Keep running the firstboot script until you see the first_boot_successful file under the D:\markers directory.
You can monitor the first boot script by opening another PowerShell console and running the below.
gc c:\program files\nutanix\logs\first_boot.log -wait
Once you see the first_boot_successful marker, configure the below items using the sconfig utility via PowerShell.
sconfig
Computer Name (perform the reboot when prompted, then configure the rest of the below items)
Enable Remote Desktop
Configure IP address for External Switch (the interface that does not have the 192.168.5.2 IP)
DNS
Time Zone
Join the domain
If the host/CVM is using a VLAN, use the below to configure in PowerShell.
Set Host VLAN:
set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "ExternalSwitch" -Access -VlanID <VLAN #>
Set CVM VLAN:
set-VMNetworkAdapterVlan -Access -VMNetworkAdapterName "External" -VMName <CVM Name> -VlanID <VLAN #>
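To confirm that both VLAN assignments took effect, the current VLAN configuration can be listed from the host (a sketch):
Get-VMNetworkAdapterVlan -ManagementOS
Get-VMNetworkAdapterVlan -VMName <CVM Name>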
If needed, configure the NIC Team to use LACP in Server Manager.
Server Manager > Local Server > NIC Teaming > Right-click NetAdapterTeam > Properties
Take host out of maintenance mode via another CVM.
ncli host edit id=<Host ID> enable-maintenance-mode=false
Rejoin host to Failover Cluster Manager
Right-click Nodes > Add > Browse > Select host
Note: Do not run the validation test and do not add all eligible storage to the cluster.
From host’s PowerShell console, run the below to configure the CVM for Mixed FC Mode.
Add-RetainV8CvmTask
Test VM Migrations from Failover Cluster Manager.Verify cluster is up via any CVM.
cs | grep -v UP
Adjust CVM memory if needed via Failover Cluster Manager.
If SCVMM is used, add the host.
If Logical Switch is used, migrate the standard switch to a logical switch.
Note: If the logical switch is named External Switch, it is possible to migrate the standard switch to a logical switch. Otherwise, you will need to use an alternative method to convert to a logical switch.
Cluster Finalization Steps:
Run below via PowerShell to upgrade Failover Cluster functional level:
update-clusterfunctionallevel -cluster <cluster name>
From PowerShell, verify Failover Cluster Version:
get-cluster -name <cluster name> | fl *
Optional Step:
Configure windows Cluster Aware Updating:
See the Nutanix Hyper-V Administration Guide or KB for Alternative CAU configuration: https://portal.nutanix.com/page/documents/details?targetId=HyperV-Admin-AOS-v5_19%3Ahyp-cau-intro-c.html
References:
https://portal.nutanix.com/page/documents/details?targetId=HyperV-Admin-AOS-v5_10:HyperV-Admin-AOS-v5_10
https://docs.microsoft.com/en-us/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade
https://docs.microsoft.com/en-us/powershell/module/virtualmachinemanager/remove-scvmhost?view=systemcenter-ps-2019
KB5118 | AOS Upgrade to 5.5.x Is Unresponsive - Genesis Is Unresponsive in a Loop | Any incorrect or malformed NFS whitelist IP address or netmask created in an AOS version earlier than 5.5 and still present in Zeus configuration at the time the upgrade to AOS 5.5.x is triggered, will not pass the new network validation and will lead to a genesis crash loop. | Symptoms
Upgrading AOS to 5.5 by using the Prism web console is not making any progress.
Genesis is unresponsive in a loop according to the genesis.out file. The following error message is displayed before Genesis becomes unresponsive.
AddrFormatError: invalid IPNetwork
The full stack trace in ~/data/logs/genesis.out is similar to the following.
2017-12-19 13:52:53 INFO helper.py:139 Using salt firewall framework
There is an incorrect NFS whitelist netmask address as the following.
$ zeus_config_printer | grep -i nfs_subnet
Root Cause
From AOS 5.5, a new network validation exists in the code during address creation at the nCLI and Prism level.
Note: Any incorrect or malformed NFS whitelist IP address or netmask created in an AOS version earlier than 5.5, and still present in the Zeus configuration at the time the upgrade is triggered, does not pass the new network validation and can cause Genesis to become unresponsive.
This condition is currently not detected in the AOS upgrade pre-checks.
If your CVMs (Controller VMs) are in the up state and you can still access nCLI , remove the incorrect NFS whitelist entry.
nutanix@cvm$ ncli cluster remove-from-nfs-whitelist ip-subnet-mask=<IP/Netmask>
Restart Genesis on the cluster.
nutanix@cvm$ allssh genesis restart
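After the restart, the same check used for identification can be re-run to confirm that no malformed entries remain (a sketch reusing the command shown above):
nutanix@cvm$ zeus_config_printer | grep -i nfs_subnet
Any remaining entries should show a valid IP address and netmask.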
If the cluster is in the down state and you cannot use Prism or nCLI to remove the entry, contact Nutanix Support to edit the cluster configuration manually.
Note: You may also see the incorrect whitelist IP address or netmask configured on a container level.
Retrieve the ID of the incorrect container (the Id is after the :: sign of the following output)
nutanix@cvm$ ncli ctr ls
Example output.
Id : 000539a5-ea6c-b225-0000-000000002af8::194309
194309 is the container ID.
Remove the whitelist
nutanix@cvm$ ncli ctr remove-from-nfs-whitelist id=<container_id> ip-subnet-mask=<IP address/Netmask>
Replace container_id with the ID of the container. Replace IP address/Netmask with the IP address and Netmask. Note: For errors in genesis log:
Warning: command.py Timeout executing sudo_wrapper /usr/bin/salt-call state.sls security/CVM/iptables concurrent=True --retcode-passthrough: 10 secs elapsed
To resolve the errors above, disable iptables and restart Genesis on the CVMs:
nutanix@cvm$ for i in $(svmips); do ssh $i sudo service iptables stop; genesis restart ; done
|
KB3764 | ESXi reboot fails with error "No hypervisor found" | ESXi reboot after BIOS upgrade files with error "No hypervisor found." This article describes how to troubleshoot and resolve this issue. |
Attempt to reboot the ESXi host after BIOS Upgrade or after successfully installing the ESXi fails with the below error:
BANK5: not a VMware boot bank
This issue occurs due to a prior partition table present in the local datastore (SATADOM).
Reinstalling ESXi fails with the below error:
Error: Both the primary and backup GPT tables are corrupt. Try making a fresh table or use appropriate tools to recover partitions.
Below is a screenshot for reference as well:
| Resolution is to boot the node from a Live Linux CD/DVD ISO of your choice, delete all the partitions of the local datastore, relabel the local datastore, create a partition table with one partition, and then re-install ESXi.
Below are the detailed steps that need to be followed one by one:
Download a Live Linux CD/DVD ISO. A Live CD is a complete bootable computer installation including an operating system https://en.wikipedia.org/wiki/Operating_system that runs in a computer's memory, rather than loading from a hard disk drive.
To know more about Live CDs, click here https://en.wikipedia.org/wiki/Live_CD.
Live Linux CD/DVD ISOs can be downloaded from here https://www.pendrivelinux.com/live-cd-repository/ if the customer does not have one.
Open the IPMI console of the node and then Plug-In the Live Linux ISO file:
On the IPMI console, go to Remote Control > Console Redirection
Click Launch Console
On the console window, go to Virtual Media > Virtual Storage
Set Logical Drive Type to ISO File
Click Open Image and select the downloaded Live Linux ISO file
Click Plug In
Reset (reboot) the node.
Once booted from the Live Linux ISO, enter into the shell prompt of the Linux OS. Every Linux distribution has a different way to enter into the shell prompt.
Run the command lsscsi to find the device name of the local datastore. Note down the device name of the local datastore.
Then run the below command to proceed with the deletion of existing partitions of the device (local datastore).
# fdisk /dev/X -----> Replace X with the device name of the local datastore.
Delete all the existing partitions of the local datastore.
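Within the fdisk prompt, partitions are removed interactively. A minimal sketch of the keystrokes (repeat 'd' for every partition listed by 'p'):
p -----> print the current partition table
d -----> delete a partition; repeat for each remaining partition
w -----> write the changes and exit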
Create a new partition with gpt label using the below command:
# parted /dev/X mklabel gpt mkpart P1 ext4 1MiB 8MiB ---> Replace X with the device name of the local datastore.
Once done, verify the partition using the below command:
# fdisk /dev/X -l -----> Replace X with the device name of the local datastore.
Plug Out the Live Linux ISO and Plug In the ESXi installer ISO and then reset (reboot) the node.
Proceed with ESXi installation. Select “Install ESXi, overwrite VMFS datastore” while installing ESXi. This should complete successfully.
|
KB16661 | Failure to add node to preprovisioned cluster due to "No available hosts" | Failure to add node to preprovisioned cluster due to "No available hosts" | null | If you are trying to add nodes to a DKP preprovisioned cluster, you may encounter a scenario where the node is stuck being provisioned.
Checking the logs for the cappp-controller-manager logs, you might see messages like the following:
20XX-XX-XXTXX:XX:XX.XXXZ ERROR controller.preprovisionedmachine Reconciler error {"reconciler group": "infrastructure.cluster.konvoy.d2iq.io", "reconciler kind": "PreprovisionedMachine", "name": "cluster-control-plane-xxxxx", "namespace": "default", "error": "no available hosts"...
To troubleshoot further, ensure that there are available hosts in the associated PreprovisionedInventory objects that are not already provisioned as PreprovisionedMachines:
kubectl get preprovisionedinventory -A -o yaml
kubectl get preprovisionedmachines -A
Ensure that all control plane and worker node addresses are unique, and correspond to hosts that you have preprovisioned.
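To see why a specific machine has not been matched to an inventory host, describing the object usually surfaces the relevant events (a sketch; the object name and namespace are placeholders):
kubectl describe preprovisionedmachine <machine-name> -n <namespace>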
|
KB3777 | ESXi CVM memory showing 100 percent | ESXi CVM memory running 100 percent | In ESXi the CVM memory utilization status might show %100 but when you check within the CVM there are free memory available.Symptoms:On vSphere Client you'll see the status of the CVM as shown below:On ESXi using ESXTOP with the "m" option you will see similar to this output depending on the memory size allocated to the CVM.When you reboot or start the CVM, memory utilization will jump to %100 percent straight away.Increasing or reducing the memory will give you the same results.When you remove the memory reservation on the CVM and turn it on, you will get the following error: | The problem is due to Latency-Sensitivity feature that was introduced in vSphere 5.5.To fix the issue, Latency Sensitivity needs to be disabled or set to Normal on the CVM. This setting is only available in vSphere Web Client.For more information about this feature, you can refer to this technical whitepaper Deploying Extremely Latency-Sensitive Applications in VMware vSphere 5.5 https://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf |
KB15363 | NCC Health Check: smtp_server_security_check | The NCC health check smtp_server_security_check checks if the SMTP server's outbound email setting uses SSL or TLS security mode. | The NCC check check_smtp_server_security confirms whether the SMTP Server in PC is configured to use the recommended security mode (STARTTLS or SSL) for secure snapshots.
The check was introduced in NCC 5.0.0 and PC.2024.1. This is applicable only on PC.2024.1 or above. This check doesn't run on a CVM.
The check_smtp_server_security returns the following statuses:
PASS: If the security mode configured for SMTP server settings is either STARTTLS or SSL and Secure Snapshots (Approval Policy) is enabled.
PASS: If Secure Snapshots (Approval Policy) is not enabled.
WARN: If the security mode configured for SMTP server settings is NONE.
Running the NCC Check
It can be run as part of the complete NCC check by running:
nutanix@pcvm$ ncc health_checks run_all
or individually as:
nutanix@pcvm$ ncc health_checks system_checks check_smtp_server_security
This check is scheduled to run every 24 hours, by default.
This check will generate an alert A6221 after 1 failure across scheduled intervals.
Sample output
For Status: PASS
Running : health_checks system_checks check_smtp_server_security
For Status: WARN
Running : health_checks system_checks check_smtp_server_security
Output messaging
Description: Check SMTP Server Security Setting on PC
Causes of failure: The SMTP Server is not configured securely on PC.
Resolutions: Change the security configuration of the SMTP Server on PC.
Impact: The SMTP Server security may get compromised.
Alert ID: A6221
Alert Title: The SMTP Server is not configured securely.
Alert Smart Title: The SMTP Server is not configured securely.
Alert Message: SMTP server on PC has not been configured with the recommended security setting. It is recommended to configure the SMTP server to use STARTTLS or SSL Security Mode.
 | It is strongly advised to configure the SMTP server to utilize STARTTLS or SSL Security Mode.
Approval Policy (Secure Snapshots) relies on email communication for approval requests. Failure to secure the SMTP server with these protocols may lead to the compromise of approval requests and the unauthorized manipulation of snapshots. This could potentially expose critical data to interception or tampering by malicious entities.
Refer to Configuring an SMTP Server (Prism Central) in the Prism Central guide https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2022_6:mul-smtp-server-configure-pc-t.html to set the SMTP security mode.
KB6692 | Configuration error in event logs of BMC (G6 platforms) | Support should not replace the node on the basis of configuration errors in event logs | You may see configuration errors in the IPMI SEL logs for G6 platforms. We have observed two symptoms when this SEL is observed.
Issue 1: No impact to system operation, but a Configuration Error in SEL events.
IPMI SEL logs will have this entry:-
0x04 0xce 0x00 0x6f 0x04 0xff 0xff # Unknown #0x00 Unknown
Web UI SEL logs will show the following line with Configuration error of processor:
Issue2: Kernel Error below along with Configuration Error in the SEL. We need to reboot the node to recover from PSOD
IPMI SEL logs will have this entry:-
0x04 0xce 0x00 0x6f 0x04 0xff 0xff # Unknown #0x00 Unknown
Web UI SEL logs will show the following line with Configuration error of processor:
| The error signature from the decode of registers, it is CPU cache correctable error.Please collect the below logs for both issues and open a case with Nutanix Support to confirm this instance.
NCC log collector bundle
Gather NCC log bundle using the following plugin and component list for a 4-hour window around when the problem started. Have it start 1 hour prior to the start of the problem and continue for 3 hours after:
nutanix@cvm$ ncc log_collector --start_time=2018/11/03-02:00:00 --end_time=2018/11/03-06:00:00 run_all
CPU model
AHV
root@host# cat /proc/cpuinfo
ESXi
root@host# esxcli hardware cpu list
Hyper-V
nutanix@cvm$ allssh "winsh systeminfo"
c. TS dump from BMC |
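The SEL entries referenced above can also be pulled directly from the hypervisor with ipmitool (a sketch for an AHV host; other hypervisors provide equivalent tooling):
root@host# ipmitool sel elist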
KB13846 | Alerts/Tasks not loading anymore after click to 'Overview' subnavigation link in Alerts and Tasks pages | Alerts/Tasks not loading anymore after click to 'Overview' sub-navigation link in Alerts and Tasks pages. This prevents the PE administrator from viewing alerts. | In Prism Element UI of AOS 6.5.1, 6.5.1.5, or 6.1, Alerts and/or Tasks page may not load and may be stuck with "Loading..." message after clicking on 'Overview' sub-navigation link in the 'Alerts' and 'Tasks' pages.Identification:
In the Prism Element UI, after navigating to the 'Alerts' or 'Tasks' page, the page does not load, and the 'Overview' sub-navigation link is highlighted.
The web browser Developer tools (opened with F12 in most browsers) 'Console' view will show a backtrace similar to the following:
TasksPageView-jG04R.79ce7bdb9007c342cf87.prism-pe.js:1
Note: After the user clicks on 'Overview' sub-navigation link once, the last open page will get stored in user session info, and subsequent navigation to 'Alerts' or 'Tasks' page will try to load 'Overview' automatically, and will hang again. | The issue has been fixed in AOS 6.5.2 and above. If the upgrade is not immediately possible, please, follow the steps described in the workaround section.Workaround:To workaround the problem, clear cookies for the Prism website in the browser; Once cookies are cleared the default view type will be reset to 'entities', and the UI will not try to load 'Overview' view type anymore until the next click to 'Overview' sub-navigation link on 'Alerts' or 'Tasks' view. |
KB16225 | Replication network testing with DR Replication Simulation Tool | DR Replication Simulation Tool is for running network tests in different pathways in a DR environment. | Prerequisites
AOS version >= 6.8 for the source site.
No requirements for the remote sites.
DR Replication Simulation Tool helps you debug network issues and gives a potential idea of network delays to expect in the DR workflow.The tool provides you with a utility that can debug:
Connection issue in Control Pathway to remote site:
Detect if communication is working on the control pathway using a single ping.
Connection issue in Data Pathways to remote site:
Detect if communication is working on the data pathway.
Find out the average network latency of data transfer during replication by doing a series of ping tests in a pipelined fashion.
Compatibility status between local and remote site
Check if some compatibility issues can cause problems in replicating the data.
Basic information about the remote site.
The tool is present under ~/bin/ directory and named dr_replication_simulation_tool.py
Usage
nutanix@NTNX-CVM:~/bin$ python dr_replication_simulation_tool.py -a
Arguments
Printing Interpretation Index
At the end of the simulation, one could print a list of common error details that can be encountered during simulation, by providing the option -i / --index.
python dr_replication_simulation_tool.py -i
-a, --all : Simulate on all available remote sites
-ps PKT_SIZE, --pkt_size PKT_SIZE : Ping Packet Size for the data-handle (data pathway) test. (default = 1048) Min: 0, Max: 100000
-np NUM_OF_PKTS, --num_of_pkts NUM_OF_PKTS : Number of packets to be sent for the data-handle (data pathway) test. (default = 10) Min: 1, Max: 100000
-mo MAX_OUTSTANDING_PKTS, --max_outstanding_pkts MAX_OUTSTANDING_PKTS : Maximum number of outstanding packets for the data-handle (data pathway) test. (default = 10) At any point of time, 'mo' number of pings on a particular data handle could be incomplete. If there are more pings (as per 'np') to be sent, it will be done only after some of the pings get complete, so that 'mo' is not violated. Min: 1, Max: 100
-i, --index : Print index to interpret common issues at the end.
 | Running the simulation
To run the simulation, one needs to provide the -a / --all while running the script, -np / --num_of_pkts, -ps / --pkt_size, -mo / --max_outstanding_pkts can be provided additionally if not running with default values.
The tool for each site that is known to Cerebro, will return the following:
ctrlInterfaceTestResults: It will contain the following info regarding the test done on the control pathway:
latency: latency of single ping to remote side control Interface.
remoteName: <remoteName>
rpcStatus: rpc status of the control pathway test.
rpcErrorDetail: Error details in case of failure.
dataInterfaceTestResults: It will contain the following info regarding tests done on data pathways on each data handle found on the remote site.
interfaceHandle: <data interface handle, ex: ip:2009>
avgLatency: average latency for np packets of ps size sent to this data handle.
remoteName: <remoteName>
ctrlVerified: Whether the interfaceHandle was fetched using the Query to the control interface or not. If it is false, then it means that at the time of query to the control interface the request was not complete due to either cerebro being down during the request or the request was timeout (only for the query, not the test).
rpcStatus: rpc status of the data pathway test.
rpcErrorDetail: Error details in case of failure.
compatibility status: It will contain the following info regarding the test done on the control pathway:
remoteClusterName: <remoteClusterName>
compressionAlgorithm: details about whether source and target compression algorithms are the same or not
remoteName: <remoteName>
remoteAzAddress: remote Availability Zone Address in case of Entity Centric remote site.
remoteSiteType: either kEntityCentric or kLegacy.
Sample outputs
Default Parameters
nutanix@NTNX-CVM:~/bin$ python dr_replication_simulation_tool.py -a
Default Parameters with remote stargate down
nutanix@NTNX-CVM:~/bin$ python dr_replication_simulation_tool.py -a |
KB14396 | Prism Central - "Virtual Switch API Fetch Error" when accessing a Subnet in PC UI | UI error "Virtual Switch API Fetch Error" seen when clicking on subnets in the Subnets dashboard in Prism Central | When navigating to the Subnets dashboard in Prism Central and clicking on a subnet, the error message "Virtual Switch API Fetch Error" appears. This error will appear for every subnet when clicking on it. This behavior is cosmetic in nature and will not prevent configuration updates for any subnets via CLI.Scenario 1To further identify if this issue is being hit, you can check for the following via CLI:
Open Developer Tools and look for the following API call in the Network tab. A working call will have a UUID value instead of undefined after "/virtual-switches".
https://<PC IP or fqdn>:9440/api/networking/v2.a1/dvs/virtual-switches/undefined?proxyClusterUuid=<UUID>Request Method: GET
Filter the network traffic for "api/networking/v2.a1/dvs/virtual-switches" and then click through the values under "Name" which should be the API calls. The call will be found under "Headers" in the "General" section as the "Request URL". The "Status Code" will show as 500 there, as well.Switch to the "Response" tab for the API call you're looking at and confirm you see the following response.
{"message":"Invalid UUID string specified undefined "}
No impact has been seen when encountering this in the field. If there is any impact, such as an inability to update the subnet or updates failing, consider a scenario in which this issue may be combined with another issue; further investigation will be needed.
Scenario 2
Following the same steps from scenario 1, when you switch to the Response tab you see:
{
Issue observed on pc.2023.4 | Scenario 1
This issue is resolved in pc.2022.9 or higher. Please upgrade to latest release from Support portal.
Scenario 2
This issue is currently tracked under ENG-624827. It happens because the system fails to retrieve the distributed_virtual_switch(es), as their protos do not contain logical_timestamp. Adonis uses logical_timestamp to generate the etag, and if it is missing, it does not create such an attribute. However, despite the error, it is possible to update the subnet just fine. Basically, it throws when it tries to resolve the VS UUID into a name.
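Until the fix is applied, subnet details can still be reviewed from the CLI of the Prism Element cluster that owns the subnet, for example (a sketch; the network name is a placeholder):
nutanix@cvm$ acli net.list
nutanix@cvm$ acli net.get <network_name>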
|
KB7671 | SSH to Kubernetes VM does not work - Permission denied | This article describes an issue where SSH to Nutanix Kubernetes Engine does not work. | Nutanix Kubernetes Engine is formerly known as Karbon or Karbon Platform Services.Accessing an NKE Kubernetes node via SSH requires downloading an ephemeral certificate via the NKE UI or generating it the karbonctl utility. This certificate expires after 24 hours and must be re-downloaded. When the certificate expires, or if the certificate is valid but there is another, underlying issue, SSH may fail.
Typical error messages when you try to access the Kubernetes VMs via SSH are as follows:
Trying to connect to x.x.x.x
Using the SSH script from Nutanix Kubernetes Engine UI or via karbonctl may produce a similar message:
$ ./<nke_ssh_script.sh>
Accessing the nodes using the script from an OpenSSH client v7.8 may fail.Accessing the nodes using the script from an OpenSSH client v8.8 or higher may fail.Accessing the nodes using the script from an OpenSSH client v7.2p2 may fail. | Ensure you have validated the below points:
As VMs are locked down by design as a security feature, you need to download the SSH script from Nutanix Kubernetes Engine UI. Follow instructions under Accessing Locked Nodes in the NKE Guide https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Nutanix%20Kubernetes%20Engine%20(formerly%20Karbon).The ephemeral certificate expires in 24 hours. This, again, is by design. Download the SSH script again from the UI.
You can alternatively access via CLI. Use the karbonctl CLI in Prism Central (PC) VM accessed via SSH:
Access the Prism Central VM via SSH using the nutanix user.
Log in to the Nutanix Kubernetes Engine CLI using the following command:
nutanix@PCVM$ karbon/karbonctl login --pc-username admin
Find the UUID of the Kubernetes cluster.
nutanix@PCVM$ karbon/karbonctl cluster list
Using karbonctl, you download the SSH script and use the UUID noted from the above command output.
nutanix@PCVM$ karbon/karbonctl cluster ssh script --cluster-uuid UUID > nke_node_ssh.sh
Run the nke_node_ssh.sh generated from the above command, where the NKE Kubernetes VM IP can be noted in output step 2.3.
nutanix@PCVM$ sh nke_node_ssh.sh
Certain SSH clients like Mac OS's terminal may have OpenSSH client version 7.8 that has compatibility issues with the sshd running in the Kubernetes VMs. Try obtaining the script from karbonctl in the PC and running it in the PC itself to check if it is working. You could also try accessing from a different Linux client. If access via PC works, then most likely, it is an SSH issue of the client machine. You may need to upgrade the SSH client on your workstation and retry.OpenSSH version 8.8 or higher disables RSA signatures with the SHA-1 hash algorithm by default, which causes these OpenSSH clients to fail authentication to the Kubernetes VMs. For client environments where these signatures are allowed, see the OpenSSH documentation for more information on re-enabling [email protected] as an accepted key type.When loading the certificate during authentication, OpenSSH client version 7.2p2 may not correctly append "-cert" onto the name of the certificate filename. Thus, instead of the client looking for <filename>-cert.pub as expected, the client looks for <filename>.pub. Upgrade to a newer version of the OpenSSH client to resolve this issue. Alternatively, the following workaround may be used:
Edit the downloaded SSH script. For example, if the SSH script was saved using the filename nke_node_sh.sh, edit the nke_node_sh.sh file.Locate the following section in the script:
if [ -z "$cluster_uuid" ]; then
Remove "-cert" from the cert_file paths. After the change, this section of the script will look like the following:
if [ -z "$cluster_uuid" ]; then
Save the changes. This workaround must be applied each time the SSH script is downloaded from the UI or saved via karbonctl.
|
""Title"": ""Hyper-V: TCP checksum mismatch causing Nutanix Cluster Service crashes"" | null | null | null | null |
KB7812 | NX-G6/G7: Nutanix BMC and BIOS Manual Upgrade Guide (BMC 7.10 and BIOS 42.600 and higher) | This article describes the upgrade procedure for BMC 7.10 and BIOS 42.600 and higher. As these are signed firmware versions, the upgrade procedure is different from previous releases. | This is the manual upgrade procedure for upgrading G6/G7 nodes to BMC v7.10, BIOS v42.600, and higher which are signed firmware versions. As a result, there is a deviation from the standard firmware upgrade procedure to transition from Unsigned to Signed firmware versions. LCM (Life Cycle Manager) support for upgrading to these versions and higher on the G6 nodes was added to LCM 2.2.9803+.
The minimum BMC version needs to be at v6.49 for the BMC upgrade to v7.xx. If you want to upgrade to 6.49, refer to KB 2896 http://portal.nutanix.com/kb/2896
The minimum BIOS version needs to be at PB21.003 or PU21.000 for the BIOS upgrade to v4x.xxx. If you want to upgrade to PB21.003/PU21.000, refer to KB 2905 http://portal.nutanix.com/kb/2905
You need to upgrade BMC to v7.xx before upgrading BIOS to 4x.xxx. Note: It is not possible to downgrade BIOS and BMC Firmware once upgraded.
Recommended practices:
Use LCM as a vehicle to upgrade to the latest version.
The IPMI WebUI upgrade procedure is recommended to be started from a workstation that is on the same network as the target hosts, since a problematic or slow network can cause WebUI session disconnections during the upgrade.
The upgrade procedure requires a node reboot. Make sure the node VMs are evacuated and in any necessary maintenance mode state.
Please note that Nutanix always recommends that firmware is upgraded to the latest release listed below.
Firmware binary locations:
BMC
BMC-7.14.1 (bridge release to 7.15 and later releases)
Prerequisite for this upgrade = BMC: 6.49
BMC Binary file: https://download.nutanix.com/kbattachments/7812/NX-G7-714-01-20231206.bin
MD5: 3b6209429f709d962d53825e57fe666a
Platforms - All G6/G7 platforms
BMC-7.15 (latest release)
Prerequisite for this upgrade = BMC: 7.14.1
BMC Binary file: https://download.nutanix.com/kbattachments/7812/NX-G7-715-00-20231207.bin
MD5: 5cbad8c4db1cfbc6d62c3362e5291b69
Platforms - All G6/G7 platforms
BIOSLatest BIOS releases:
BIOS-PB80.001 binary (latest release for G6/G7 DPT: NX-1065-G6/G7, NX-3060-G6/G7, NX-8035-G6/G7)
Prerequisite for this upgrade: BMC 7.15
BIOS binary: https://download.nutanix.com/kbattachments/7812/BIOS_X11DPTB-0962_20230819_PB80.001_STDsp.bin
MD5: 0fce3d8afaa51be81acf5d65a57aef86
Build date: 08/19/2023
BIOS-PU80.001 binary (latest release for G6/G7 DPU: NX-3155G-G6/G7, NX-3170-G6/G7, NX-5155-G6, NX-8150-G7, NX-8155-G6/G7)
Prerequisite for this upgrade: BMC 7.15
BIOS binary: https://download.nutanix.com/kbattachments/7812/BIOS_X11DPU-091C_20230819_PU80.001_STDsp.bin
MD5: b0a108d1955ed3a2b581defe2f2b1fdf
Build date: 08/19/2023
BIOS-PW80.001 binary (latest release for G6/G7 SPW: NX-1175S-G6/G7)
Prerequisite for this upgrade: BMC 7.15
BIOS binary: https://download.nutanix.com/kbattachments/7812/BIOS_X11SPW-0953_20230819_PW80.001_STDsp.bin
MD5: d1d5d169192814711a7c4342ee8c949b
Build Date: 08/19/2023
BIOS-PV90.002 binary (latest release for G7 SDV: NX-1120S-G7)
Prerequisite for this upgrade: BMC 7.15
BIOS binary: https://download.nutanix.com/kbattachments/7812/PV90.002.bin
MD5: fc6d561517a9346e2d979a91ef23fd1e
Build Date: 01/08/2024
Previous BIOS releases:
BIOS-PB70.002 binary (latest release for G6/G7 DPT: NX-1065-G6/G7, NX-3060-G6/G7, NX-8035-G6/G7)
Prerequisite for this upgrade: BMC 7.13
BIOS binary: https://download.nutanix.com/kbattachments/7812/BIOS_X11DPTB-0962_20221207_PB70.002_STDsp.bin
MD5: 2b0a6d61b41bb2e10a944d1ce4f01c3e
Build date: 12/07/2022
BIOS-PU70.002 binary (latest release for G6/G7 DPU: NX-3155G-G6/G7, NX-3170-G6/G7, NX-5155-G6, NX-8150-G7, NX-8155-G6/G7)
Prerequisite for this upgrade: BMC 7.13
BIOS binary: https://download.nutanix.com/kbattachments/7812/BIOS_X11DPU-091C_20221208_PU70.002_STDsp.bin
MD5: cab534c1e833f5202949e1050e755971
Build date: 12/08/2022
BIOS-PW70.002 binary (latest release for G6/G7 SPW: NX-1175S-G6/G7)
Prerequisite for this upgrade: BMC 7.13
BIOS binary: https://download.nutanix.com/kbattachments/7812/BIOS_X11SPW-0953_20221208_PW70.002_STDsp.bin
MD5: 0d75dc0e683b3a3177d6eda0dd64ea4e
Build Date: 12/08/2022
BIOS-PV50.001 binary (latest release for G7 SDV: NX-1120S-G7)
Prerequisite for this upgrade: BMC 7.11
BIOS binary: https://download.nutanix.com/kbattachments/7812/NUTANIX_X11SDV_TP8F_20211021.bin
MD5: 8196f1486d883ee9332576a0db100f74
Build Date: 10/21/2021
| BMC Upgrade Process:Note: BMC 7.15 resolves critical vulnerabilities in BMC IPMI firmware. For more details, please check Nutanix Security Advisory-30 https://download.nutanix.com/alerts/Security_Advisory_0030.pdf. The fix will not allow the upload of any unsafe config files. This means any previous saved config file from BMC version 7.13.x and earlier can not be used and restored in version 7.15 and later. The 7.14.1 version serves as the bridge version to support both the legacy config file (<= 7.13.x) and the safe config file (>= 7.15.x). For the BMC upgrade from 7.14.1 to 7.15, ensure to strictly follow the additional guidance in the KB.
Log on to the IPMI WebUI as the administrative user.
[Required for BMC upgrade 7.14.1 to 7.15 only]: Go to Maintenance > IPMI configuration, and click the "Save" button to save the IPMI Configuration file.
[Required for BMC upgrade 7.14.1 to 7.15 only]: Go to Maintenance > Factory Default, select "Remove current settings but preserve user configurations", then click "Restore". Keep the configuration file in a safe place. Wait for 120 seconds, then log on to the IPMI WebUI as the administrative user.
Select Maintenance > Firmware Update.
Select Enter Update Mode, and then click 'OK' on the pop-up window.
On the Firmware Upload page, click 'Choose File' and browse to the location where you have downloaded the BMC binary firmware file. Select the file, and click Open.
Click the 'Upload Firmware' option to upload the firmware.
The firmware upload may take a few minutes. The bottom-left corner of the browser will display the upload progress. Do not navigate away from this page until the upload is complete.
Note: In some instances with a 1Gbps IPMI interface, the BMC upload may time out. To resolve the issue, refer to KB 4722: BMC/BIOS firmware upload process on IPMI GUI, times out https://portal.nutanix.com/kb/4722.
When the upload is complete, the following screen is displayed. Note: The different BMC upgrade scenarios have different requirements at this step. Only choose the one correct action option from below:
For any Signed BMC firmware version upgrade (BMC 7.XX to BMC 7.XX), ensure that all the boxes are checked as shown below before clicking 'Start Upgrade'.
From Unsigned firmware (BMC version < 7.00) to Signed firmware (BMC version >= 7.00). Uncheck all the boxes as shown below before clicking 'Start Upgrade'. This step is required for the unsigned-to-signed BMC firmware upgrade (BMC 6.XX to BMC 7.XX).
Click 'Start Upgrade'.
Once the upgrade completes, the dialog box below will be displayed. Click 'OK'. The BMC automatically restarts and loads the new firmware image.
Wait for 120 seconds in order to let the IPMI complete the initialization. Log on to the IPMI WebUI as the administrative user again and verify the new BMC firmware is loaded. (Firmware Revision field). Note: Beginning from BMC 7.08, the IPMI default password has changed to the Board Serial Number. More details can be found from the Portal BMC-BIOS Release Note https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-BMC-BIOS:Release-Notes-BMC-BIOS: Release Notes | G6 And G7 Platforms: BMC 7.08 https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-BMC-BIOS:bmc-Release-Notes-G6-G7-Platforms-BMC-v7_08-r.html
[Required for BMC upgrade 7.14.1 to 7.15 only]: Go to Maintenance > IPMI configuration, then click on the "Choose File" button. Select the previously saved configuration file. Click the "Reload" button to restore the configuration. Wait for 120 seconds, then log on to the IPMI WebUI as the administrative user. In case the need to restore the IP address configuration from the host, use the procedure Configuring the Remote Console IP Address (Command Line) https://portal.nutanix.com/page/documents/details?targetId=Advanced-Setup-Guide-AOS-v5_10:ipc-remote-console-ip-address-reconfigure-cli-t.html from the NX and SX Series Hardware Administration Guide https://portal.nutanix.com/page/documents/details?targetId=Hardware-Admin-Guide:ipc-ipmi-ip-addr-change-t.html.
After you reset the BMC to the factory default settings, you might not be able to ping the IPMI IP address. This issue occurs because in Network Settings, the IPMI interface is reverted to Dedicated and the customer is not using the dedicated port. To resolve this issue, refer to KB 1091: Common BMC and IPMI Utilities and Examples http://portal.nutanix.com/kb/1091. To resolve this issue for ESXi, see KB 1486: How to re-configure IPMI using ipmitool https://portal.nutanix.com/kb/1486. NOTE: If the latest version of BMC Firmware is not displayed after following the above steps, you may have encountered a rare condition that requires the node to be reseated. Try clearing the browser cache and reload the IPMI WEB UI. If the issue still exists, engage Nutanix Support https://portal.nutanix.com to have the node reseated.
BIOS Upgrade Process:
BIOS Upgrade requires a NODE REBOOT. Follow KB 2905 https://portal.nutanix.com/kb/2905 for checking cluster health and graceful CVM shutdown procedure before rebooting the node.
Log on to the IPMI WebUI as the administrative user.Go to Maintenance > BIOS Update.
On the BIOS Upload page, click 'Choose File', browse to the location where you have downloaded the BIOS binary firmware file, select the file, and click Open. Note: Ensure you choose the correct Firmware File for your Node type (DPT/DPU) as indicated in the Firmware download portion of this document.
Click Upload BIOS to upload the BIOS firmware.
The firmware upload may take a few minutes. The bottom-left corner of the browser displays the upload progress. Do not navigate away from this page until the upload is complete
After the upload is complete, the following screen is displayed. Do not change any settings on this screen. Press “Start Upgrade” to start the BIOS update.
Following are the screens showing upgrade process status:
When updating is finished, press “Yes” on the pop-up screen below to reboot the system. Note: A Reboot is necessary for the new BIOS to take effect. If you hit 'Cancel', the node will not automatically reboot and the new BIOS will be loaded on the next reboot.
Click OK when the following dialog box appears.
BIOS upgrade is now complete.
Verification:
For BIOS upgrade verification, wait for the node to boot completely and then log in to the IPMI WebUI and check the versions listed on the "System" screen which is the default screen after logging in.
X11DPU
X11DPT
NX-1175S-G6
Dependencies:
After upgrading BIOS and BMC, also upgrade NCC to version 3.10.0 or higher to support the new SEL log messages. For the latest NCC release, see the Downloads > Tools & Firmware https://portal.nutanix.com/#/page/static/supportTools section of the Support Portal https://portal.nutanix.com. |
KB13567 | IAM - Unable to log into PC - Permission Denied | Customer is unable to login into Prism Central with IAMv2 enabled. | The customer is unable to login into Prism Central with IAMv2 enabled and the customer is redirected to the HTTP 403 Error: Permission Denied Page.
Note: See KB 9696 for IAM Troubleshooting ( https://portal.nutanix.com/kb/000009696 )
Note: The scenario encountered in this KB was seen on PC Version pc.2022.4.0.1 but has not been isolated to that version.
Note: Please attach any cases believed to be related to this KB to ENG-492695 and upload the requested logs to diamond.
The following signature from the mercury logs was observed:
Iam session cookie validation for cookie:
Example:
E20220803 20:33:15.963450Z 179195 request_processor_handle_v3_api_op.cc:1326] <HandleApiOp: op_id: 1180 | type: GET | base_path: /v3/users | external | XFF: 10.51.132.50> Iam session cookie validation for cookie: | 1. Collect Mercury logs:
Enable Debug Logging:
nutanix@PCVM:~$ allssh curl http://0:9444/h/gflags?v=4
Attempt to log into Prism Central
Disable Debug Logging:
nutanix@PCVM:~$ allssh curl http://0:9444/h/gflags?v=0
Upload Mercury Logs to Diamond
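The mercury log files typically reside under /home/nutanix/data/logs on the PCVM; as an example (exact file names may vary by PC version), they can be listed before collection with:
nutanix@PCVM:~$ ls -l /home/nutanix/data/logs/mercury*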
2. Save Keys to a file
Save Keys
nutanix@PCVM:~$ curl -k https://iam-proxy.ntnx-base:8443/api/iam/authn/v1/oidc/keys >> /home/nutanix/data/logs/iam-keys.txt
Upload iam-keys.txt to Diamond.
3. Collect HAR logs
See KB 5761
https://portal.nutanix.com/kb/000005761
Upload HAR logs to Diamond
4. Collect POD logs
Collect log bundle using logbay
nutanix@PCVM:~$ logbay collect -t aoss,msp -O run_all=true --duration=-6h
Upload logs to Diamond
5. Attach the case to ENG-492695
6. Add Diamond Sharepath to an ENG Comment. |
KB3670 | How to set MAC address for VM NIC on AHV cluster | This article describes how to set custom MAC address for AHV VM NIC. | In some cases, it may be required to change the MAC address of an AHV VM NIC. By default MAC addresses are dynamically generated during NIC addition. | Things to consider:
NIC MAC address can only be configured via aCLI.
It is not possible to change the MAC address of an existing NIC. A NIC has to be deleted and then added back with the new MAC address.
Nutanix AHV assigns NIC MAC addresses from the range 50:6B:8D:00:00:00 - 50:6B:8D:FF:FF:FF.
A statically configured MAC will persist on a Protection Domain migration but not on any clone or recover-from-snapshot operation.
A VM's MAC address cannot be "52:6b:8d:00:00:00" as it is reserved for the ARP refresh mechanism. (Periodic ARP request packets with the source MAC "52:6b:8d:00:00:00" are sent on the tap, forcing the VM to send an ARP reply packet. This process aids in relearning valid IPs. Subsequently, the packet is dropped in the eBPF program to prevent flooding due to an unknown MAC.)
Run the following command on any CVM in the cluster to add a new NIC with a customized MAC address:
nutanix@CVM~$ acli vm.nic_create <vm name> network=<network name> mac=<new MAC address>
If the Network name is not working or not present, use the Network UUID from acli vm.get <VM-NAME> or acli net.list:
nutanix@CVM~$ acli vm.nic_create <vm name> network=<Network-UUID> mac=<new MAC address>
The ":" symbol must be used as separator. If no separator is specified or other separator symbols are used, then addition will fail with a "Invalid MAC address" error. This address can be outside the Nutanix range if desired.
Run the following command on any CVM in cluster to check MAC address of NIC:
nutanix@CVM~$ acli vm.get <vm name>
An example output for testvm:
nutanix@CVM~$ acli vm.get testvm
VM NIC address information can also be found in the Prism UI.
To delete a VM NIC, find its MAC address in the acli vm.get <vm name> command output and then run the following command on any CVM in the cluster:
nutanix@CVM~$ acli vm.nic_delete <vm name> <NIC MAC address> |
KB7865 | NCC Health Check: file_server_licensing_check | The NCC health check file_server_licensing_check checks if one or more cloned Nutanix Files file servers are in grace period of free licensing. | The NCC health check file_server_licensing_check introduced in NCC version 3.9.0 checks if one or more cloned Nutanix Files file servers are within the grace period of free licensing.
The check will PASS if there are no file servers, or if there are file servers and no clones, or if each clone is within the grace period threshold. More information on file server cloning can be found in the Nutanix Files Guide https://portal.nutanix.com/#/page/docs/details?targetId=Files-v35:fil-file-server-clone-c.html#nconcept_a3f_q3p_xz.
The check will show WARN when the file server usage on the cluster exceeds the allowed licensed capacity. This check also validates a violation of the licensed file-server capacity in the cluster. Since 1 TiB of usage is free, the alert is not raised when total file-server usage is less than 1 TiB, whether or not a license is installed. The cluster file-server usage is the sum of the logical usage of each file server on the cluster, accounting for both live and snapshot data.
Note: This check will only run on AOS version 5.11.1 or later.
Running the NCC check
This check can be run as part of a complete NCC health check:
nutanix@cvm$ ncc health_checks run_all
You can also run this check separately:
nutanix@cvm$ ncc health_checks fileserver_checks fileserver_cvm_checks file_server_licensing_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
Sample Output
For Status: INFO
Running : health_checks fileserver_checks fileserver_cvm_checks file_server_licensing_check
For Status: PASS
Running : health_checks fileserver_checks fileserver_cvm_checks file_server_licensing_check
For Status: WARN
file_server_licensing_check
Output messaging
[
{
"Check ID": "Check for licensing of cloned file servers."
},
{
"Check ID": "One or more cloned file servers are within the grace period"
},
{
"Check ID": "Alerts will stop after the end of the grace period."
},
{
"Check ID": "Cloned file servers will be considered for licensing after the end of the grace period"
},
{
"Check ID": "Cloned file server(s) in grace period: {fs_clone_msg}"
},
{
"Check ID": "File server clone grace period check"
},
{
"Check ID": "The check is scheduled to run every 7 days, by default."
},
{
"Check ID": "This check will generate an alert after 1 failure."
},
{
"Check ID": "160086"
},
{
"Check ID": "Check for Files license capacity violation"
},
{
"Check ID": "Files license capacity for the cluster is non-compliant."
},
{
"Check ID": "Apply for a new license, Increase Files license capacity."
},
{
"Check ID": "The cluster does not have enough File license capacity."
},
{
"Check ID": "The cluster does not have enough File license capacity."
},
{
"Check ID": "Files License Invalid"
},
{
"Check ID": "This check is scheduled to run every day, by default."
},
{
"Check ID": "This check will generate an alert after 1 failure."
},
{
"Check ID": "160153"
},
{
"Check ID": "File Server License Under Usage Check"
},
{
"Check ID": "The Files license capacity for the cluster is underused."
},
{
"Check ID": ""
},
{
"Check ID": "Files license capacity is being wasted."
},
{
"Check ID": "Cluster is under-utilizing the Files license capacity."
},
{
"Check ID": "Files License Under Usage"
},
{
"Check ID": "This check is scheduled to run every day, by default."
},
{
"Check ID": "This check will generate an alert after 1 failure."
}
] | If this check returns INFO and generates a corresponding alert, it will indicate the remaining days outside of the 90-day free test period for the cloned file server.
Nutanix Files is licensed on used capacity - "Used TiB" of data, including snapshots and clones (just the deltas). After 90 days, file server clone usage will start counting toward the used capacity to be licensed.
See the Nutanix Licensing Guide https://portal.nutanix.com/#/page/docs/details?targetId=Licensing-Guide:Licensing-Guide section on how to license Nutanix Files.
Note: Cloned File Server instances will not stop working, and licensing alerts will not impact production workloads.
Nutanix recommends either bringing the cluster file-server usage under the licensed cluster capacity OR increasing the licensed file-server capacity for the cluster by installing/upgrading the license.
For the alert "Cluster is under-utilizing the Files license capacity. Usage: 0.00 TiB, Allowed license capacity: 1.0 TiB": This alert is by design to promote Nutanix Files usage and is triggered every six months by default. The alert is a placeholder for Nutanix Files awareness and is strictly informational in nature. No action is needed if there is a plan to increase Files usage in the next 6 months to a year. If Files is not deployed in the cluster and you have a third-party NAS in the environment, please check out Nutanix Files at https://www.nutanix.com/products/files
To hide the alert, click "Resolve". Once marked as resolved, the alert will not show again for at least the next six months.
KB13611 | Alert - A130382 - SynchronousReplicationPausedOnVM | Investigating SynchronousReplicationPausedOnVM issues on a Nutanix cluster. | This Nutanix article provides the information required for troubleshooting the alert A130382 - SynchronousReplicationPausedOnVM for your Nutanix cluster.
Alert Overview
The alert A130382 - SynchronousReplicationPausedOnVM is generated when the recovery site set up for synchronous replication becomes unreachable or there is a network connection problem between the source and recovery sites.
Sample Alert
Block Serial Number: 16SMXXXXXXXX
Output messaging
[
{
"Check ID": "Synchronous Replication Paused on VM"
},
{
"Check ID": "Recovery site {secondary_site_name} configured for synchronous replication is unreachable or the network connectivity between sites has issues"
},
{
"Check ID": "Resume synchronous replication for the VM. If the issue still persists, please reach out to Nutanix Support"
},
{
"Check ID": "Data protection will be impacted. Any updates on the VM will only be performed locally and will not be synchronized to the recovery site until the replication is resumed"
},
{
"Check ID": "A130382"
},
{
"Check ID": "Synchronous Replication Paused"
},
{
"Check ID": "Synchronous Replication is paused for entity '{vm_name}' syncing to '{secondary_site_name}'"
}
] | [] |
KB14067 | Foundation - HPE nodes foundation stuck in phoenix mode | HPE node Foundation fails at "Powering off nodes" and the nodes remain stuck in Phoenix. | Foundation fails on HPE SPP versions earlier than 2022.09.01. The node's log shows the error "computer system.v1_13_0 is not a valid system type." For standalone Foundation, check the logs from the GUI using the Log button; if Foundation is running on the cluster, find the CVM running the foundation service and check the foundation logs on that node.
allssh "genesis status | grep -i foundation "
nutanix@cvm: less /home/nutanix/data/logs/foundation/<date-time>/node_172.xx.xx.xx.log | To resolve the issue, follow the recommendations below:
Ensure you are using Foundation 5.3.2 or later.
Upgrade HPE SPP to 2022.09.01 or newer.
Enable IPMI/DCMI over LAN (refer to KB-5494 https://portal.nutanix.com/kb/5494), and then re-attempt the foundation process.
KB16418 | NDB | Deletion of VM does not occur if Provisioning operation fails at step " Activate Time Machine for database" | The VM is not deleted if the provisioning operation fails at the "Activate Time Machine for database" step of database registration. | If the provisioning of a database with a new DBServer fails at the 'Activate Time Machine for database ...' step, the resulting VM will not be deleted during the rollback process. This is because the step to deregister and delete the DBServer fails, preventing the VMs from being deleted.
Below is an example of such a scenario:
TASK [rollback : Wait for all the DbServer delete ops to finish] *************** | Engineering is aware of this issue and working to resolve it in a future NDB release. Until then, if the issue is encountered, follow the workaround.
Workaround:
In the event of such failures, the virtual machine (VM) will always be retained and not automatically deleted. If required, the VMs can be deleted manually.
KB9728 | NCC Health Check: nearsync_stale_staging_area_check | NCC 3.10.1. The NCC health check nearsync_stale_staging_area_check warns you if there is stale staging data from a NearSync restore operation which is older than 24 hours old and which exceeds 10 GB in size. | The NCC health check nearsync_stale_staging_area_check warns you if there is stale staging data from a NearSync restore operation which is older than 24 hours old and which exceeds 10 GB in size. Under normal circumstances, NearSync operations will clean up this data for you before it goes stale and this check was added to warn if this has not happened as expected. The impact to this check failing is that you may see higher than usual space utilization on the affected containers until this space is manually cleaned up.
Running NCC Check
You can run this check as a part of the complete NCC health checks
nutanix@cvm:~$ ncc health_checks run_all
Or you can run this check individually
nutanix@cvm:~$ ncc health_checks data_protection_checks protection_domain_checks nearsync_staging_area_stale_files_check
Sample Output
Check Status: PASS
Running : health_checks data_protection_checks protection_domain_checks nearsync_staging_area_stale_files_check
Check Status: WARN
Running : health_checks data_protection_checks protection_domain_checks nearsync_staging_area_stale_files_check
Output messaging
[
{
"110272": "Check for stale files in Nearsync staging area.",
"Check ID": "Description"
},
{
"110272": "Nearsync temporary files in staging area were not properly cleaned up.",
"Check ID": "Causes of failure"
},
{
"110272": "Stale files in the temporary staging area need to be removed manually.",
"Check ID": "Resolutions"
},
{
"110272": "Cluster storage capacity is unnecessarily reduced. Cluster may run out of space.",
"Check ID": "Impact"
},
{
"110272": "This check is scheduled by default to run once every day",
"Check ID": "Schedule"
},
{
"110272": "This check does not generate an alert",
"Check ID": "Number of failures to alert"
}
] | If you are seeing this check warning, you should open a case with Nutanix Support for help with this issue. Once the support case is open, Nutanix Support can help with confirming this check failure and help with manually cleaning up the stale data, if required. Since this is a manual process, we currently only suggest doing this process with the guidance of Nutanix Support. There are specific improvements which should help prevent stale staging data in AOS versions 5.10.10+, 5.15+, and 5.17.1+. If you are running an AOS version which is below these versions, it is highly suggested that you plan to upgrade for long-term fixes for this issue. Upgrading from a lower version to a version with these improvements will not help with any stale data which existed prior to the upgrade, but it will help prevent new staging data from becoming stale. |
KB4423 | AOS Upgrade or CVM Memory Upgrade Stalls on VMware Cluster | When upgrading from AOS 5.0.x or earlier release to AOS 5.1/5.5/5.6, Controller VM memory is increased by 4GB by the finish script. In some cases, the script fails to re-configure the Controller VM, causing the upgrade process to stall. Similar issues can occur when upgrading CVM memory through Prism in 5.1.x or later releases. | Please note: The KB article below is only applicable to clusters running AOS versions below 5.5.4. The issue listed below is resolved in all later AOS versions.
This article is divided into 2 parts and applies to
1. Controller VMs (CVMs) in ESXi clusters.
2. CVMs with 32 GB of memory or higher are not affected.
3. For nodes with ESXi hypervisor hosts with a total physical memory of 64 GB, the Controller VM is upgraded to a maximum of 28 GB.
4. With total physical memory greater than 64 GB, the existing Controller VM memory is increased by 4 GB up to 32 GB. The Prism Web Console Guide https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v5_5:wc-cluster-nos-upgrade-wc-t.html#task_bks_ctk_sn describes this memory increase introduced in AOS 5.1.
Problem # 1 - Firewall Rule Issue
Fixed with AOS 5.1.1
Identification
The issue occurs around the ESXi firewall and is fixed in AOS 5.1.1 and above. Script logs are on the host in the following location: /scratch/update_cvm_memory_via_vcenter.log
When AOS is upgraded to 5.1 or any subsequent minor or major release, Controller VM memory is increased by 4 GB by the finish script. In some cases pre-AOS 5.5.4, the script fails to re-configure the CVM, causing the upgrade process to stall.
When this stall happens, the ~/data/logs/finish.out log file will contain the following errors. If this is encountered during a memory-only upgrade of the CVM through Prism, you will find similar entries in the ~/data/logs/genesis.out log.
ERROR esx.py:1726 Cannot update memory via vcenter
Viewed from the Prism web console, the upgrade task progress shows errors if the process has stalled, and you might see a few alerts as below:
Orion service is down
Cluster_Config service is down
Problem # 2 - Certificate Issue
Fixed with AOS 5.5.3
Identification
In some cases, the AOS upgrade fails when the CVM memory upgrade fails due to a CertificateError. When the vCenter server certificate is defined using FQDN, you will get an error stating "hostname 'vcenter IP' doesn't match 'vcenter fqdn'". You will get the same error in finish.out and genesis.out. Additionally, you will see the below error on the ESXi host where the upgrade failed:
root@host# cat /scratch/update_cvm_memory_via_vcenter.log
Example:
2017-12-27 02:43:21,767 Error CertificateError("hostname 'x.x.x.x' doesn't match '<host_fqdn>'",) occurred while reconfiguring the CVM memory
Problem # 3 - ESXi DNS issues
Identification
Checking the genesis logs on the CVM for which the memory upgrade task is running exhibits the below error signature.
2020-07-22 17:51:46 INFO cluster_manager.py:3896 Master 10.1.100.31 did not grant shutdown token to my ip 10.1.100.32, trying again in 30 seconds.
Checking the /scratch/update_cvm_memory_via_vcenter.log logs on the ESXi host reports the below errors.
2020-07-22 09:39:46,641 Error Error gaierror(-2, 'Name or service not known') occurred while reconfiguring the CVM memory occurred while updating memory via Vcenter
Problem # 4 - Port 443 to vCenter blocked
Identification
Checking the genesis logs on the CVM for which the memory upgrade task is running exhibits the below error signature.
2021-07-14 04:14:26,072Z ERROR 71982928 rm_helper.py:472 Memory update either going on or failed, target memory 50331648 kb current memory 33554432 kb
Checking the /scratch/update_cvm_memory_via_vcenter.log logs on the ESXi host reports the below errors.
2021-07-14 08:24:16,308 Error TimeoutError(110, 'Connection timed out') occurred while reconfiguring the CVM memory
Trying to connect to port 443 of vCenter from the host results in a timeout
[root@ESX:~] nc -zv <vCenter IP> 443
Problem # 5 - unrecognized character in password from user connecting to vCenter
Identification
The following message can be found in /home/nutanix/data/logs/genesis.out log on the CVM:
nutanix@CVM:~$ grep -A 5 -B 5 -i "update memory via vcenter" data/logs/genesis.out
Problem #6 - vCenter login password changing in the middle of the upgrade task
When the CVM memory reconfigure task starts, Prism requests vCenter login details from the user. If this password changes before the CVM memory reconfigure task is completed on all the CVMs, the task will fail with the following error message.
In the example below, the CVM memory reconfigure task failed on the 16th CVM in an 18-node cluster. The following logs are taken from genesis.out on the 16th CVM, which failed to re-configure the task.
2022-02-22 21:25:09,673Z ERROR 91881584 esx.py:2024 Exception (vim.fault.InvalidLogin) {
In the example above, the customer used an Active Directory account where the password changes every few hours. The password changed in the middle of the memory upgrade. Any attempt to restart the task will fail immediately until the vCenter login information is updated. | Problem # 1 - Firewall Rule Issue
Fixed with AOS 5.1.1
Workaround for older versions
1. Record the current status of the following firewall rule from the local ESXi host to the CVM which is stuck in the upgrade:
root@host# esxcli network firewall ruleset list | grep -i httpClient
2. Log on to the ESXi host (by using an SSH connection) running the CVM affected by the problem and issue the following command:
root@host# esxcli network firewall ruleset set -e true -r httpClient
3. Wait for the memory reconfiguration to be attempted again. This might take a minute or so and should be successful.
4. If the upgrade process has been stalled for an extended period of time, you might need to log on to the CVM and restart the genesis service with this command:
nutanix@cvm$ genesis restart
5. After the upgrade completes on the node, return the firewall setting for httpClient back to its original state as noted in Step 1.
If it was set to "false", this is done with the following command:
root@host# esxcli network firewall ruleset set -e false -r httpClient
If the above steps do not resolve your issue, kindly reach out to Nutanix Support https://www.nutanix.com/support-services/product-support
Problem # 2 - Certificate Issue
Fixed with AOS 5.5.3
Workaround for older versions
Update vCenter info with fully qualified hostname instead of IP address. If the issue was encountered while FQDN is staged in get_vcenter_info, try updating to the IP address instead.
nutanix@cvm$ get_vcenter_info
Once the upgrade is completed, revert the changes (i.e. re-register vCenter with the IP address from Prism or by using the "get_vcenter_info" script).
Problem # 3 - ESXi DNS issues
Verify the ESXi host DNS settings and make sure the ESXi host can resolve the vCenter server by its domain name. This can be fixed by manually editing /etc/resolv.conf on the ESXi hosts to point to a correct DNS server that is working for the CVM and able to resolve the vCenter server name.
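If preferred, the DNS configuration can also be checked and updated from the ESXi shell; for example (x.x.x.x is a placeholder for a working DNS server, and syntax may vary slightly by ESXi version):
root@host# esxcli network ip dns server list
root@host# esxcli network ip dns server add --server=x.x.x.x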
Problem # 4 - Port 443 to vCenter blocked
Allow traffic from all ESXi hosts to port 443 of the vCenter IP.
Problem # 5 - unrecognized character in password from user connecting to vCenter
This issue is caused by unrecognized characters in the vCenter password, such as "é" and "à". To work around this issue, update the vCenter password (it may be reverted following the completion of maintenance), then run get_vcenter_info followed by cluster restart_genesis. After approximately 5 minutes, the CVM memory update task should resume.
Problem #6 - vCenter login password changing in the middle of the upgrade task
If the wrong IP/hostname/credentials for vCenter were used, you can stage these again using the get_vcenter_info CLI script. Log in to any one CVM in this cluster and run the command below. Answer the prompts.
If you initially supplied the hostname for vCenter when prompted for this in Prism, try using the vCenter IP address this time. Alternatively, if you initially used the vCenter IP address, try using the FQDN with the script below. Using one or the other can sometimes get around DNS/certificate issues.
nutanix@CVM:~$ get_vcenter_info
Once the script tells you that the credentials were successfully saved, restart Genesis to get the upgrade to start again.
nutanix@CVM:~$ cluster restart_genesis
|
KB9909 | Nutanix Files - Deployment network check fails and UVMs update properties stuck on loading to get information from vCenter | Nutanix Files deployment fails on network check, UVMs unable to get update/details from vCenter. | Under certain conditions where HTTP protocol/port 80 is blocked, customers may run into the following issues:
The deployment of Nutanix Files does not detect the network configurations from vCenter and cannot proceed if the Network pre-req check has failed.
Properties of UVMs from Prism are stuck on loading while trying to fetch information from vCenter.
vCenter connection check fails, mentioning port 80:
nutanix@cvm:~$ ncc health_checks hypervisor_checks check_vcenter_connection
Running : health_checks hypervisor_checks check_vcenter_connection
| Consider opening the required ports. The following is the list of firewall ports that must be open to successfully access the Nutanix cluster.
vCenter remote console: 443, 902, 903 from both the user host and vCenter
vCenter from Prism web console: 443, 80
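As a quick connectivity check, these ports can be tested from a CVM; for example (replace <vcenter-ip> with the vCenter address; nc availability may vary by AOS version):
nutanix@cvm:~$ nc -zv <vcenter-ip> 80
nutanix@cvm:~$ nc -zv <vcenter-ip> 443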
The requirement to use Best Practices for connection using extensions is mentioned in VMware KB: https://kb.vmware.com/s/article/2004305
Port requirements: Ports and Protocols - Files https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Files |
KB3209 | Prism Central alerts - Cluster is running out of storage capacity in approximately 0 days | The Capacity Runway uses n+1 configuration, which means it takes out 1 node capacity during total calculation since that capacity is required for node recovery in case of a node failure. In case of a node failure, a 3-node cluster does not have sufficient capacity to rebuild, hence the 0 days estimation. | Prism Central displays a critical alert on the Prism Central Alerts page:
Cluster <name_of_cluster> is running out of storage capacity in approximately 0 days
On the Prism Element page of the cluster, no alerts are displayed. Alerts displayed are for storage capacity, and also for CPU and memory. | This alert is displayed on a 3-node cluster managed by Prism Central. Prism Central Storage Runway calculates the total amount of available storage space using the n+1 configuration as part of Prism Central Capacity Runway.
The Capacity Runway uses an n+1 configuration, which means it takes out 1 node's capacity during the total calculation since that capacity is required for node recovery in case of a node failure. In this case, since the cluster has only 3 nodes, the cluster does not have the n+1 configuration. If there is a node failure, a 3-node cluster does not have sufficient capacity to rebuild, hence the 0 days estimation.
So, if you see the red dotted lines set at approximately the n+1 configuration, that is the total storage capacity of all nodes minus 1 node (1 node less is used in the calculation to provide fault tolerance). The alert is generated when usage hits the threshold percentage, which is calculated against (total storage - storage of 1 node) rather than the full total storage. You can disable this setting by going to Prism Central > Configuration > Capacity Configurations and selecting None. The alert will not be generated anymore.
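As a purely hypothetical illustration of this calculation (the numbers below are examples only): on a 3-node cluster where each node contributes 10 TiB, the runway threshold is computed against 30 TiB - 10 TiB = 20 TiB rather than the full 30 TiB, so the runway can show 0 days even though the cluster itself is not full.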
Refer to Prism Central Guide: Updating Capacity Configurations https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc:mul-capacity-sizing-configure-pc-t.html for more details.
|
KB8091 | CVM won't power on after maintenance on ESXi Host | Unable to power on the CVM after maintenance on the ESXi host or after increasing the CVM memory. The power-on task in the vCenter server completes, but the CVM does not power up | Checking the ESXi host's /var/log/hostd.log for ServiceVM_Centos, we see that the VM is marked to be in an Invalid state and transitions to VM_STATE_OFF right after the power-on attempt.
2019-08-31T02:16:17.572Z info hostd[F381B70] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/5b9a731a-7ede90c2-8e9d-e4434b0f3974/ServiceVM_Centos/ServiceVM_Centos.vmx opID=HB-SpecSync-host-369169@41564-5901fb2f-d1-6bcd user=vpxuser:vpxuser] State Transition (VM_STATE_ON -> VM_STATE_RECONFIGURING)
Checking the vmware.log on under /vmfs/volumes/NTNX-local-datastore/ServiceVM_Centos/vmware.log shows below errors.
2019-08-31T02:16:05.482Z| vcpu-0| I125: Transitioned vmx/execState/val to poweredOn
You can also verify that there are multiple vmmcores and vmx-zdump.xxx under the same VM directory.
[root@hqidvntxvh09:/vmfs/volumes/5b9a731a-7ede90c2-8e9d-e4434b0f3974/ServiceVM_Centos] ls -la | We can check the amount of memory assigned to the CVM and confirm with the customer whether it was increased recently. In this scenario, the memory assigned is 36 GB.
[root@hqidvntxvh09:/vmfs/volumes/5b9a731a-7ede90c2-8e9d-e4434b0f3974/ServiceVM_Centos] less ServiceVM_Centos.vmx | grep -i mem
1st issue - This error occurs whenever the CVM requests memory that the host cannot provide. In such a case, the CVM will not power on.
Verify if there are any other VMs running on this host, leaving less free memory available to power on the CVM.
Put DRS into Manual mode from Fully Automated and migrate some VMs manually out of this host to free up some memory on the host to power on the CVM.
Once the VMs are migrated, you can use the ESXi command line to power on the CVM.
Run below command to get the VMID of the CVM
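As an example, the VMID can typically be obtained and the CVM powered on from the ESXi shell as follows (example commands; verify the VMID returned for your CVM before powering it on):
[root@ESX:~] vim-cmd vmsvc/getallvms | grep -i CVM
[root@ESX:~] vim-cmd vmsvc/power.on <vmid>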
Once the CVM is powered on, you can change the DRS back to fully automated for ESXi host to migrate the VMs back to this Esxi host.
NOTE - In case the CVM does not power on manually, check KB 5528 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000LIlICAW (CVM unable to power on during AOS upgrade or maintenance) to update the memory affinity value in the CVM's vmx file. Make sure to schedule time to make the change on all remaining CVMs in the cluster one by one, gracefully, ensuring data resiliency between each change. These updates should be standard for clusters imaged with Foundation 4.1+ via ENG-89817 https://jira.nutanix.com/browse/ENG-89817.
2nd issue - It was later identified that on an ESXi host, a pinned VM's memory cannot be more than or equal to half of a NUMA node. ENG-117837 https://jira.nutanix.com/browse/ENG-117837 - CVM fails to power on with IOMMU error after increasing memory to 64 GB on VMware ESXi: if physical memory is 128 GB, the CVM cannot be 64 GB. Thus, in cases where physical memory is 128 GB, CVM memory is limited to 32 GB to avoid the CVM getting stuck.
This issue is fixed in AOS versions 5.10.5 and 5.11. Upgrade to these versions to fix the issue.
KB13452 | AHV Citrix Director Plugin: Install error “Citrix Director is not installed. Please install Citrix Director to proceed" | AHV Citrix Director not installing due to error “Citrix Director is not installed. Please install Citrix Director to proceed" even though Citrix Director is installed. | When installing the AHV Citrix Director plugin, the installer looks for the below registry keys in the below location.
SOFTWARE\Wow6432Node\Citrix\DesktopDirector
New versions of Citrix Director (i.e. Director from Citrix Virtual Apps and Desktops 7 2206) do not create registry keys in the above location, which causes the below error.
Citrix Director is not installed. Please install Citrix Director to proceed
Use the below steps to create the needed registry keys as a workaround. | Open regedit
Expand HKEY_LOCAL_MACHINE > SOFTWARE > WOW6432Node > Citrix
Right-click Citrix > New > Key
Name it "DesktopDirector"
In the right pane, add new string values. Right-click an empty area, then click New > String Value
Create the following string values and right-click on each, then click Modify and add the following information:
CONFIGTOOL: C:\inetpub\wwwroot\Director\tools\DirectorConfig.exe
INSTALLLOCATION: C:\inetpub\wwwroot\Director\
URL: http://localhost/Director
Sample screenshot:
http://localhost/Director
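As a possible alternative to creating the values by hand in regedit, equivalent entries could be added from an elevated Command Prompt; for example (paths assume a default Citrix Director installation and are shown for illustration only):
reg add "HKLM\SOFTWARE\WOW6432Node\Citrix\DesktopDirector" /v URL /t REG_SZ /d "http://localhost/Director" /f
reg add "HKLM\SOFTWARE\WOW6432Node\Citrix\DesktopDirector" /v CONFIGTOOL /t REG_SZ /d "C:\inetpub\wwwroot\Director\tools\DirectorConfig.exe" /f
The INSTALLLOCATION value can be created the same way.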
|
KB12999 | Nutanix Files: "Invalid IP format" error when configuring NFS exceptions | When configuring NFS exports with clients that have exceptions (read/write | read-only) to the default access behaviour via the Files Console, the error "invalid IP format" is returned. This issue is fixed in Nutanix Files 4.1.0. | When configuring NFS exports with clients that have exceptions (read/write | read-only) to the default access behaviour via the Files Console, the below error is seen:
invalid IP format
| This issue is fixed in Nutanix Files 4.1.0. Upgrade to the latest Nutanix Files version or follow the steps below for the workaround.
From any CVM run the below to identify the current clients with read-write and note the UUID and Share UUID.
ncli fs list-all-fs-shares | grep -B15 -A11 "/<share name>"
Example:
nutanix@CVM:~$ ncli fs list-all-fs-shares | grep -B15 -A11 "/My_NFS_Standard"
Run the below command to update the IPs of the client with read-write. Use the UUID and Share UUID from the above command.
ncli fs edit-share <client-with-read-only-access or client-with-read-write-access or client-with-no-access>=<comma-separated client identifiers> uuid=<UUID from step 1> share-uuid=<Share UUID from step 1>
Client identifier formats:
Absolute IPs (for example 129.144.0.0)
The hostname of the client (for example client.domain.com)
CIDR format for specifying subnets (for example 10.0.0.0/24)
Netgroups (for example @it_admin)
Wildcards (for example clients*.domain.com)
Example:
ncli fs edit-share client-with-read-write-access=ghi.123.com,jkl*.123.com,10.0.0.0/24 uuid=681870b1-fa3b-42a5-9f59-ca8bce8d1f20 share-uuid=7389490b-d653-4793-af2b-f806f70c4765
Run the below command to verify your change.
ncli fs list-all-fs-shares | grep -B15 -A11 "/<share name>"
Example:
nutanix@NTNX-19FM6H130137-C-CVM:~$ ncli fs list-all-fs-shares | grep -B15 -A11 "/My_NFS_Standard" |
KB4409 | LCM: Life Cycle Manager Troubleshooting Guide | This article lists the most commonly encountered issues with respect to LCM. There are separate KB articles created for individual LCM issues. Present article is helpful in general LCM troubleshooting. | LCM Introduction
Life Cycle Manager (LCM) is the 1-click upgrade process for firmware and software on Nutanix clusters. This feature can be accessed through the Prism UI. Select the LCM entity from the pull-down list after the cluster name.
LCM performs two operations:
Inventory – detects what can be managed on a cluster as well as conducts pre-checks (KB-10847).
Update – performs an update to a certain version.
Versioning scheme
LCM 1.4 is versioned as follows: <major version>.<minor version>.<build number>.
This makes the version string strictly increasing. For example, the latest LCM version, as of July 26, 2018, is 1.4.1810.
URL
The LCM framework uses a pre-configured URL to identify the current updates available. This URL is auto-configured through the framework but can be changed to enable disconnected/dark site workflows. It is not recommended to manually edit the URL if the cluster is not in a dark site.
The current default URL is:
http://download.nutanix.com/lcm/3.0
Relevant logs
Here is the master list of LCM logs.
genesis.out
One of the most important logs when it comes to troubleshooting LCM. Genesis logs LCM's interactions with AOS.
~/data/logs/genesis.out
lcm_ops.out
This is the module's inventory/update operation log. This log file is created only on LCM leader(s).
~/data/logs/lcm_ops.out
lcm_wget.log
Logs the download from a URL to the cluster.
~/data/logs/lcm_wget.log
Foundation logs
Logs of the boot into Phoenix and boot out of Phoenix operations. Relevant logs only on the LCM leader. Attach these logs if the module involves booting into the Phoenix.
~/data/logs/foundation/*
ergon.out
These are general task logs. Relevant when you are troubleshooting stuck tasks.
~/data/logs/ergon.out
dellaum.log
Dell-specific log to track firmware. These logs are located on the host.
AHV Host:
/var/log/dell/dellaum.log
ESXi Host:
/scratch/log/dell/dellaum.log
PowerTools.log
Dell's specific log to track interaction with the Dell Update Manager. The logs are located on the host.
AHV Host:
/var/log/dell/PowerTools.log
ESXi Host:
/scratch/log/dell.PowerTools.log
PTAgent.config
Dell PTAgent configuration. Log locations differ between the hypervisors.
AHV:
PTA 1.7.x: /opt/dell/DellPTAgent/bin/PTAgent.config
PTA 1.8: /opt/dell/DellPTAgent/cfg/PTAgent.config
ESX:
Pre-PTA 1.7-4.x: /scratch/dell/DellPTAgent/bin/PTAgent.config
PTA 1.7.4+: /scratch/dell/config/PTAgent.config
Lenovo_support
When a node is stuck in Phoenix, collect the log folder
/var/log/Lenovo_support
Notable logs
Command line support
The following command-line options are available for LCM:
Finding the LCM leader:
AOS >= 5.5:
nutanix@CVM:~$ lcm_leader
This prints the IP of the LCM leader.
Get LCM Configuration:
Shows current LCM version:
nutanix@CVM:~$ cat ~/cluster/config/lcm/version.txt
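On recent LCM versions, the framework configuration (including the configured update source URL) can also be printed with the configure_lcm utility; for example (availability and options may vary by LCM/AOS version):
nutanix@CVM:~$ ~/cluster/bin/lcm/configure_lcm -p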
Get LCM upgrade status ( KB 7914 https://portal.nutanix.com/kb/7914):
LCM >= 2.2R2(2.2.8381):
nutanix@CVM:~$ lcm_upgrade_status
The following files exist in the CVM (Controller VM) for LCM to function:
LCM framework
/home/nutanix/cluster/lib/py/nutanix_lcm_framework.egg
Infrastructure interfaces
/home/nutanix/cluster/lib/py/nutanix_infrastructure_python.egg
Scripts to run LCM operations
/home/nutanix/cluster/bin/lcm/
Script to find LCM leader
/home/nutanix/cluster/bin/lcm_leader
Certificate for verifying LCM operations
/home/nutanix/cluster/config/lcm/public.pem
Local LCM version (should be consistent across the cluster)
/home/nutanix/cluster/config/lcm/version.txt
Dark site workflow
Dark site bundles are available on the Nutanix Portal https://portal.nutanix.com. To obtain the latest dark site bundle, navigate to Downloads > under Essential Tools > LCM and look for 'LCM Framework Bundle'. The contents of this bundle should be placed in an HTTP server and made reachable to the PE (Prism Element). Detailed steps are found in Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details/?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_3%3ASetting%20Up%20a%20Local%20Web%20Server.
LCM firmware bundles for dark site
As of LCM version 2.3.1, Nutanix has separate payloads for server firmware. Examples of these payloads are NX, Dell, Fujitsu, and Inspur. These payloads/binaries must be downloaded separately and then extracted to the /release/builds folder on the Dark Site webserver. More details can be reviewed in the LCM Firmware Updates https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_3:LCM%20Firmware%20Updates in the Life Cycle Manager Dark Site Guide.
Beginning with LCM 2.4.1.1, LCM allows direct ingestion of LCM bundles. If you are at a dark site, use this feature to fetch an update bundle and upload it directly to Prism, without setting up a local webserver. Currently, LCM only supports direct upload for Prism Element. For Prism Central, use a local web server.
For troubleshooting LCM Dark site direct upload (no webserver) - refer to KB-10450 http://portal.nutanix.com/kb/10450.
Two ways of upgrading LCM:
Upgrading through LCM. This is done automatically when running the "Perform Inventory" from the LCM page.
Upgrading AOS. If the version of LCM packaged with AOS is greater than the LCM version on the cluster, the AOS upgrade process upgrades LCM as well.
Recommended upgrade path for LCM
Nutanix recommends upgrading to the latest LCM version applicable before running operations to avoid encountering fixed issues. For instructions on how to upgrade to the latest LCM, follow Opening LCM in the LCM Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide:Life-Cycle-Manager-Dark-Site-Guide.[
{
"Log path": "~/data/logs/genesis.out\n~/data/logs/lcm_ops.out",
"Notes": "Inventory & Upload Operations"
},
{
"Log path": "~/data/logs/foundation/last_session.log",
"Notes": "Workflows Involving Phoenix"
}
] | Below are the best practices for troubleshooting a failure in LCM.
Download failures
LCM framework requires connectivity to the URL to perform its operations. If a download failure occurs, the error below is displayed:
Operation Failed. Reason: Failed to download manifest sign file
Triage processLook at the lcm_wget.log file across the cluster to see if there are errors while downloading contents from the URL. Refer to KB-14130 http://portal.nutanix.com/kb/14130 for more details.
Precheck failures
As part of every operation, LCM performs certain prechecks to ensure the cluster is healthy at the start of the operation. This ensures LCM does not bring down the cluster. These prechecks are similar to the checks run for Host Disruptive activities in AOS (SATADOM Break-fix, traditional one-click upgrade).
Starting with AOS 5.6.1 and 5.5.3, LCM displays the pre-check failure reason in the UI (and as part of the task view).
Operation failed. Reason: Pre-check 'test_hypervisor_test' failed ('Failure reason: Failed to obtain ssh client and hypervisor type.')
Prior versions of AOS will identify the pre-check that failed but not provide a reason. The error message:
Operation failed. Reason: Pre-chek 'test_cluster_status' failed ('precheck test_cluster_status failed')
Triage process:
Identify the LCM leader.
Open genesis.out and find the last instance of the failure message shown in the UI: in this case, "Pre-check 'test_cluster_status' failed".
Look at the preceding logs for failure reasons. One example is highlighted here where, above the failure line, we see "No services are running on x.x.x.x":
2018-07-26 18:41:01 INFO zookeeper_service.py:554 Zookeeper is running as leader
Please refer to the KB for the identified pre-check failure to resolve the issue.
LCM framework troubleshooting
To troubleshoot issues with the LCM framework, look for ERROR messages in genesis.out and lcm_ops.out. Below is the list of benign messages that may be displayed during LCM operations:
LcmRecoverableError: Recoverable error encountered during LCM operation.
LcmUnrecoverableError: Error encountered during/after an update operation.
LcmDowloaderError: Error encountered during downloads.
LcmSchedulerError: Errors with LCM’s scheduler (deprecated soon)
LcmOpsError: LCM Operation failed to run in the environment.
LcmRelinquishLeadership: Event that the current leader is relinquishing leadership.
LcmLeadershipChange: Event denoting a change of leadership in LCM.
XC platform troubleshooting
LCM uses the PowerTools suite to perform updates for all firmware except SATADOMs. The SATA DOM module is shared between NX and XC clusters. Contact Dell for any issues that occur during LCM updates. Known issues for XC Platform:
iSM - iDRAC Service Module (A service that interfaces with iDRAC to schedule updates).
Dell Update Manager (PTAgent) - REST API used by LCM to perform firmware management.
Hardware Entities – a firmware payload for the different hardware in the XC cluster.
Cleanup failed iDRAC jobs.
If the iDRAC Job Queue contains failed jobs, it is recommended that the jobs be deleted before attempting a new update. PTAgent 1.7-4.r39fb0c9 and above fixes this issue.
Update failures due to PTAgent connectivity
The LCM module interacts with the PTAgent installed on the same host to perform operations. The PTAgent is configured here:
AHV (PTAgent 1.8 and above)
/opt/dell/DellPTAgent/cfg/PTAgent.config
AHV (PTAgent 1.7-4)
/opt/dell/DellPTAgent/cfg/PTAgent.config
ESXi
/scratch/dell/config/PTAgent.config
The following fields are configured by default to perform LCM operations:
rest_ip=192.168.5.1:8086
If LCM fails to connect to the PTAgent, raise the issue with Dell. KB-6134 https://portal.nutanix.com/kb/6134 provides a detailed troubleshooting guide for Lenovo Platforms.
Other common issues/scenarios
Phoenix-Based updates
The following modules need the node to boot into Phoenix to perform firmware upgrades:
SATADOM Host Boot Device
M2 Host Boot Device
NVMe Drives
SSD/HDD Drives
BIOS
BMC
LACP/LAG
Upgrade to the latest Foundation (available at Downloads: Foundation https://portal.nutanix.com/#/page/Foundation) for LACP/LAG setups. This is applicable for ESXi, AHV, and Hyper-V.
Disk/HBA updates
Bad disks could cause LCM updates to fail. Refer to KB 5352 https://portal.nutanix.com/kb/5352 for details.
Note: Disk and HBA updates were revoked from LCM 1.4. Perform Inventory and upgrade to LCM 1.4.1561 or later.
LCM upgrade may fail due to the failure to remove the token
An LCM upgrade may fail due to the failure to remove the shutdown token. Refer to KB 7071 https://portal.nutanix.com/kb/7071.
LCM inventory or upgrade may also fail if there are filesystem errors on the /home partition of any CVM. Ensure that the NCC check below is passed and dmesg on all CVM's do not show any signs of filesystem errors. If filesystem issues exist on the /home partition, you may have to follow the rescue CVM workflow to get it resolved.
nutanix@CVM$ ncc health_checks system_checks fs_inconsistency_check
Dark site firmware not reflecting correctly after an inventory operation completes
When running and completing an LCM inventory, the latest available firmware for the nodes in the cluster is not reflected.
Verify that the firmware version is not the latest.
Verify that the latest Firmware Bundle for the relevant servers in the environment is extracted to the correct location as per LCM Firmware Updates https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_3:LCM%20Firmware%20Updates in the Life Cycle Manager Dark Site Guide.
If the Dark Site web server is not configured correctly or not accessible, the below will be seen in the genesis.out log on the LCM leader (lcm_leader):
DEBUG:Current Source URL is: http://x.x.x.x/release/builds/nx-builds/gen11/bios/, parent directory is /release/builds/nx-builds/gen11 and the current directory is bios
Another example of firmware not being available is due to foundation version dependency violation, as seen in genesis.out log on the LCM leader (lcm_leader), for example with the installed LCM version being 4.5.4.1 and the required / valid versions being 5.0, 4.6.2 and 4.6.1:
INFO product_meta_utils.py:1712 Stage 2: Checking requirements for kFirmware(HBAs:LSI Logic SAS3008 HBA AOC on X10:smc_gen_10) available version MPTFW-16.00.10.00-IT of entity 4ff31952-2da7-4f5c-bb53-f9e6284a2f76
Correct or verify the LCM Dark Site Web server per the Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide:Life-Cycle-Manager-Dark-Site-Guide. |
KB6902 | False positive Disk failure alert on Lenovo Platforms | This article describes an issue where false alerts were observed on Seagate drives on Lenovo platforms. | NOTE: This issue has been observed on Seagate drives only.
We have observed false positive disk failure alerts for Seagate drives (e.g. ST2400MM0129) on Lenovo platforms.
The alert message would be similar to the following: Drive xx with serial xxxxxx and model ST2400MM0129 in drive bay 10 on Controller VM x.y.z.10 has failed.
Stargate will mark a disk offline once it sees delays in responses to I/O requests to the disk. Previously, this required that a user manually run smartctl checks against the disk and manually mark the disk back online if the smartctl checks passed. Hades automates this behavior. Once Stargate marks a disk offline, Hades will automatically remove the disk from the data path and run the smartctl checks against it. If the checks pass, Hades will then automatically mark the disk online and return it to service. But if the disk gets marked offline 3 times within an hour, then Hades gives up and leaves the disk unmounted.
In this case, when the smartctl command issued by Hades for the short smartctl test returns code 4, AOS treats it as a failure even though the hardware is not actually bad. The command is then issued two more times, and the return messages from those two new attempts show that the test is in progress.
You may find a log snippet in hades.out like this:
nutanix@cvm$ tail -F /home/nutanix/data/logs/hades.out | grep -C3 smartctl
2018-11-11 07:08:13 WARNING disk.py:879 smartctl returned non-zero code 4. | This issue has been fixed in AOS 5.10.7 and 5.11.1. Please upgrade your cluster's AOS version to resolve the issue. |
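If desired, the drive health can also be confirmed manually with smartctl from the CVM, similar to what Hades runs; for example (illustrative commands only; replace /dev/sdX with the affected device, and note that some controllers may require additional smartctl device-type options):
nutanix@cvm$ sudo smartctl -t short /dev/sdX
nutanix@cvm$ sudo smartctl -a /dev/sdX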
KB4302 | Adding a New Metadata Disk to an Existing Cloud Controller VM in AWS | This article describes how to attach an additional metadata disk to your cloud Controller VM in AWS | Note: Nutanix has announced End Of Life for the Azure Cloud Connect feature. For more information, refer to the Azure Cloud Connect End of Life bulletin https://download.nutanix.com/misc/AzureCloudConnectEOLNotification.pdf. To manage existing deployments, refer to KB 10959 http://portal.nutanix.com/kb/10959.
If the metadata disk attached to your cloud Controller VM becomes full, you can attach an additional metadata disk.
Note: You cannot take EBS snapshots for the newly added metadata disk. | Perform the following procedure to attach an additional metadata disk to your cloud Controller VM in AWS.
Start the Dynamic Ring Changer service on the cloud Controller VM.
Set the cloud_start_dynamic_ring_changer gflag to true in the /home/nutanix/genesis.gflags file.
--cloud_start_dynamic_ring_changer=true
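For example, the line could be appended to the file as follows (or added with any text editor):
nutanix@cvm$ echo "--cloud_start_dynamic_ring_changer=true" >> /home/nutanix/genesis.gflags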
Restart Genesis and start the cluster.
nutanix@cvm$ genesis restart
The Dynamic Ring Changer service is now started on the Controller VM.
Add the new metadata disk.
From the AWS console, add a new EBS volume of size 300 GB and type General Purpose SSD (gp2), and attach the EBS volume to the instance.
Run the following command on the cloud Controller VM.
nutanix@cvm$ sudo fdisk -l
Verify if the new drive has been added.
Prepare and mount the disk.
nutanix@cvm$ sudo cluster/bin/prepare_replace_disks -d /dev/<d>
The disk is displayed in the output of the df -h command.
Determine the ID of the disk.
nutanix@cvm$ ncli disk ls
Mark the disk to be not used for storage.
nutanix@cvm$ ncli sp edit name=DoNotUse-sp add-disk-ids=<disk_id>
Replace <disk_id> with the ID of the disk.
The disk is now displayed in zeus_config with the field contains_metadata set to true, which means that the disk is successfully added as a metadata disk.
Stop the Dynamic Ring Changer service on the cloud Controller VM.
Set the cloud_start_dynamic_ring_changer gflag to false in the /home/nutanix/genesis.gflags file.
--cloud_start_dynamic_ring_changer=false
Run the following commands.
nutanix@cvm$ genesis restart
|
""Verify all the services in CVM (Controller VM) | start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": ""You could even do a “dry-run” to see what the result would be before installing updates or upgrading drivers"" | null | null | null |
KB14995 | Prism Central upgrade may fail if PCVM is improperly moved to a new cluster | The only supported method to relocate a Prism Central VM (PCVM) cluster is by using Prism Central Disaster Recovery feature. If the PCVM is migrated using another method such as shared-nothing vMotion, then subsequent upgrades of Prism Central can fail. | The only supported method to relocate Prism Central (PC) from one cluster to another is using the Prism Central Disaster Recovery feature and performing a planned failover. Any other method used to relocate the PCVM(s) will cause subsequent upgrade workflows of Prism Central to fail.
Typically, the upgrade workflow fails at 30%, and an error like below may be seen in pre-upgrade.out log file:
nutanix@PCVM>less /home/nutanix/data/logs/pre-upgrade.out | If it is necessary to relocate the PCVM(s), use PC DR features to set up replication of the Prism Central deployment on the target cluster. Refer to Prism Central Guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:mul-cluster-pcdr-introduction-pc-c.html for instructions.
If you have relocated Prism Central using an unsupported method, such as "shared-nothing" vMotion, it may be possible to recover from this situation. Use the same unsupported method to relocate Prism Central back to the original cluster. |
KB4562 | How to recreate and define CVM XML on AHV host | KB article to outline the recreation of CVM .xml file in the event that it is corrupted or missing | This KB was created from a specific corner case in which a CVM.xml file was corrupted, and cleaned up with FSCK after a DC power event. Note that there could be alternate scenarios in which the CVM.xml file can be missing/corrupted. Use this KB article for guidance on the recreation of the .xml file.
Scenario : Customer experiences a datacenter power outage. Upon restoration of power, 2 of 3 nodes booted correctly and hypervisor/CVM came online. The 3rd node booted and reported a file system inconsistency issue similar to the below.
Checking filesystems
Subsequent reboots would not allow the boot of the CVM, and a manual FSCK was run against (in this case) /dev/sda1. After FSCK was run against /dev/sda1, it appears that the NTNX-<BLOCK_SERIAL>-<NODE>-CVM.xml file was removed and/or corrupted.
Symptoms of this behaviour:
The command "virsh list --all" does not report a CVM in the VM list.AHV host directory /var/run/libvirt/qemu does not have a NTNX-<BLOCK_SERIAL>-<NODE>-CVM.xml file.svmboot.iso is present on the host at /var/lib/libvirt/NTNX-CVM/
Note: If an AHV host has a full disk, it may appear that the CVM is missing. Locate what is consuming space and reboot the host and the CVM should start normally. Prism should display a node crash alert prior to this occurring.
root@AHV# df -h | The steps below are only applicable for AHV versions older than 20230302.100173.
Starting with AHV 20230302.100173, the CVM is automatically recreated on every host reboot using configuration from the /etc/nutanix/config/cvm_config.json file. Updating the CVM XML on AHV won't guarantee changes will persist across a host reboot or upgrade.
STEPS: The following solution was used to recover the CVM and bring it back online.
Navigate to /root directory. A "NTNX-CVM.xml" file will be located here. Note that this is a "vanilla" CVM configuration. The following fields will likely need to be modified to match the previous NTNX CVM configuration.
<name>NTNX-<BLOCK_SERIAL>-<NODE Location>-CVM</name> <!--Modify to match the hostname as needed, Name must start with NTNX- and end with -CVM. Only letters, numbers, and "-" are supported in the name.-->
*Note: The CVM's virtual memory allocation value in KiB should be configured to be consistent with the other CVMs in the cluster and can be found in any other working CVM's current running XML configuration file on its corresponding AHV host at: /var/run/libvirt/qemu/NTNX-<BLOCK_SERIAL>-<NODE>-CVM.xml *Note: Additional .XML fields may require modification based on the customer deployment (i.e network interfaces).
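For example, the memory and vCPU values currently in use by a healthy CVM can be checked from its AHV host like this (example command; adjust the CVM name to match your environment):
root@AHV# virsh dumpxml NTNX-<BLOCK_SERIAL>-<NODE>-CVM | grep -E -i "<memory|<vcpu"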
Make a copy of the NTNX-CVM.xml from the /root directory.
root@AHV# cp NTNX-CVM.xml NTNX-<BLOCK_SERIAL>-<NODE>-CVM.xml
Edit/modify the copy of the .xml file to suit the previous CVM instance's configuration. If the cluster is down or Prism is not reachable, you can find the <BLOCK_SERIAL>-<NODE> information by running this command from the host CLI: "ls -lah /var/log/libvirt/qemu"
root@AHV# vi NTNX-<BLOCK_SERIAL>-<NODE>-CVM.xml
Make a copy of the NTNX-CVM.xml from the /root directory to /var/run/libvirt/qemu
root@AHV# cp /root/NTNX-<BLOCK_SERIAL>-<NODE>-CVM.xml /var/run/libvirt/qemu/
Check the CVM's name from the following output.
root@AHV# virsh list --all
NOTE: If the CVM still shows up under the 'virsh list --all' output above, undefine the above CVM's entity (Step #6 and #7) so that we can define a new one using the edited XML file. Otherwise, proceed to Step #8.
Un-define the above CVM's entity so that we can define a new one using the edited XML file.
root@AHV# virsh undefine NTNX-xxx-xxx-CVM ====> record from virsh list --all
If the undefine attempt fails with 'failed to get domain' error check for empty space in the CVM name
root@AHV# virsh list --all
Add the empty space in quotes while running virsh undefine
root@AHV# virsh undefine " NTNX-xxx-xxx-CVM"
Verify that the CVM's entry has disappeared now from "virsh list --all" output.
root@AHV# virsh list --all
Note: If the un-define command is not executed properly (or not executed at all), defining the CVM using the newly created xml file might give the following error. In this case, run the un-define command again.
root@AHV# virsh define NTNX-xxx-xxx-CVM.xml
Define the CVM using the modified XML file.
You will see the following message after the Domain is successfully defined for the CVM:
[root@AHV ~]# virsh define /var/run/libvirt/qemu/NTNX-xxx-xxx-CVM.xml
Note: Please ensure that the virsh define NTNX-xxx-xxx-CVM.xml command is run from /var/run/libvirt/qemu/ directory. Else, you may run into the following error:
[root@AHV ~]# virsh start NTNX-xxx-xxx-CVM
Start the CVM.
root@AHV# virsh start NTNX-xxx-xxx-CVM
Enable autostart so the CVM boots up with the host.
root@AHV# virsh autostart NTNX-xxx-xxx-CVM
If the CVM requires a VLAN tag, do the following: Log into the CVM.
root@AHV# ssh [email protected]
To change the VLAN on the CVM, run the following command with the VLAN tag (marked X) you would like to configure:
nutanix@cvm$ change_cvm_vlan <vlan tag>
|
KB15527 | File Analytics with CAC Authentication | Instructions on how to enable CAC for File Analytics. | If Prism Central is using CAC authentication, it can be enabled for File Analytics v3.3 and later. | Note: These instructions can be found in the README on the File Analytics VM (FAVM) in /opt/nutanix/analytics/generate_fa_certs/
Pre-requisites:
Minimum version: File Analytics 3.3.0
File Analytics already deployed/upgraded. If a File Analytics deployment or upgrade is failing due to CAC authentication being enabled on PE/PC, disable CAC auth during the deployment or upgrade.
Part 1.
Verify the Python version on the PE CVM:
nutanix@cvm$ python --version
If the Python version is 3, ensure the File Analytics version is >= 3.4.0.1.
If all fileservers are managed by Prism Element, then execute the following steps on Prism Element.
Copy the file from the FAVM- /opt/nutanix/analytics/generate_fa_certs/generate_file_analytics_cert.py to /home/nutanix/bin/ on Prism Element.
nutanix@favm$ scp -r /opt/nutanix/analytics/generate_fa_certs/generate_file_analytics_cert.py nutanix@<cvm-ip-address>:/home/nutanix/bin/generate_file_analytics_cert.py
If the CVM has Python 3, copy the file from the FAVM - /opt/nutanix/analytics/generate_fa_certs/py3_prism_fs_avm_pb2.py to /home/nutanix/bin/ on Prism Element.
nutanix@favm$: scp -r /opt/nutanix/analytics/generate_fa_certs/py3_prism_fs_avm_pb2.py nutanix@<cvm-ip-address>:/home/nutanix/bin/py3_prism_fs_avm_pb2.py
If the CVM has Python 2, copy the file from the FAVM - /opt/nutanix/analytics/generate_fa_certs/prism_fs_avm_pb2.py to /home/nutanix/bin/ on Prism Element.
nutanix@favm$ scp -r /opt/nutanix/analytics/generate_fa_certs/prism_fs_avm_pb2.py nutanix@<cvm-ip-address>:/home/nutanix/bin/prism_fs_avm_pb2.py
Login to the CVM and execute the script as follows:
nutanix@CVM$: cd /home/nutanix/bin/
The script will generate the required certs and copy them to the FA VM as well. The script will also perform a cleanup of all files once they are copied to the FA VM successfully.
Delete the python script:
nutanix@CVM$: /bin/rm generate_file_analytics_cert.py
If fileservers are deployed or managed via Prism Central, execute the steps below on Prism Central CLI with major modifications.
Verify the Python version on the PCVM:
nutanix@PCVM$ python --version
If the Python version is 3, ensure the File Analytics version is >= 3.4.0.1.
Copy the file from the FAVM /opt/nutanix/analytics/generate_fa_certs/generate_file_analytics_cert.py to /home/nutanix/bin/ on Prism Central:
nutanix@favm$ scp -r /opt/nutanix/analytics/generate_fa_certs/generate_file_analytics_cert.py nutanix@<pc-vm-ip-address>:/home/nutanix/bin/generate_file_analytics_cert.py
If the PCVM has Python 3, copy the file from the FAVM /opt/nutanix/analytics/generate_fa_certs/py3_prism_fs_avm_pb2.py to /home/nutanix/bin/ on Prism Central.
nutanix@favm$: scp -r /opt/nutanix/analytics/generate_fa_certs/py3_prism_fs_avm_pb2.py nutanix@<pc-vm-ip-address>:/home/nutanix/bin/py3_prism_fs_avm_pb2.py
If the PCVM has Python 2, copy the file from the FAVM /opt/nutanix/analytics/generate_fa_certs/prism_fs_avm_pb2.py to /home/nutanix/bin/ on Prism Central.
nutanix@favm$ scp -r /opt/nutanix/analytics/generate_fa_certs/prism_fs_avm_pb2.py nutanix@<pc-vm-ip-address>:/home/nutanix/bin/prism_fs_avm_pb2.py
Login to PCVM using the nutanix user and execute the script as follows:
nutanix@PCVM$: cd /home/nutanix/bin/
When executed on PC, the script does not copy the certs on the FA VM, as it cannot fetch the FA VM IP address from each Prism Element. So this step needs to be executed manually for each deployed FA VM, as follows:
nutanix@PCVM$: scp ca.pem FileAnalytics.key FileAnalytics.pem nutanix@<fa-vm-ip>:/mnt/containers/config/prism_signed_certs/
Cleanup files as follows:
nutanix@PCVM$: /bin/rm generate_file_analytics_cert.py prism_fs_avm_pb2.py ca.pem FileAnalytics.key FileAnalytics.pem FileAnalytics.csr
Part 2. After part 1 is completed, the second part must be executed via the FAVM CLI.
Log in to the FAVM CLI with the nutanix user. Execute the following script on FA VM versions 3.3.0.2 / 3.4.0.1 or higher:
nutanix@FAVM$: python3 /opt/nutanix/analytics/bin/configure_fa_for_cert_auth.py
Once the script execution completes, the File Server subscription should work seamlessly. |
KB10289 | Karbon 2.0.1 to 2.2 - cni config uninitialized after host reboot post os-image upgrade | cni config uninitialized after host reboot post os-image upgrade | Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services. Symptoms:
Karbon hosts were created pre-2.0.1, Karbon has been upgraded to a version later than 2.0.1, and a host (os-image) upgrade has been performed recently. After a reboot, the host stays in NotReady state with the following errors observed in node events and kubelet logs:
Sep 29 09:55:00 karbon-fr-couvkub301-e0a5d0-k8s-master-0 hyperkube[48435]: E0929 09:55:00.608936 48435 pod_workers.go:191] Error syncing pod 5cba7145-1c83-4af2-9a23-959a5abc8855 ("kube-dns-d868d5f4b-sj59h_kube-system(5cba7145-1c83-4af2-9a23-959a5abc8855)"), skipping: network is not ready: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
[root@karbon-fr-couvkub301-e0a5d0-k8s-master-0 nutanix]# kubectl describe nodes | grep KubeletNotReady
/etc/cni/net.d/10-flannel.conflist exists, but it's missing the cniVersion parameter
[root@karbon-fr-couvkub301-e0a5d0-k8s-master-0 nutanix]# cat /etc/cni/net.d/10-flannel.conflist
[root@karbon-fr-couvkub301-e0a5d0-k8s-master-0 nutanix]# diff /etc/cni/net.d/10-flannel.conflist /var/nutanix/etc/cni/net.d/10-flannel.conflist
Root cause: The os-image upgrade completed successfully and the CNI config was restored successfully from /var/nutanix/etc/cni/net.d/10-flannel.conflist. The problem is that the code is missing the logic to update the "cniVersion" field in the cni-conf.json portion of the kube-flannel-cfg configmap. During a reboot, this conf.json takes over and overrides the value in /etc/cni/net.d/10-flannel.conflist | To fix the ongoing CNI issue:
Edit the /etc/cni/net.d/10-flannel.conflist on the affected node and add the missing "cniVersion": "0.3.1"
from
{
to
{
Then restart kubelet: systemctl restart kubelet-worker (or kubelet-master).
Keep in mind that the above workaround will be reverted during the next host reboot. To permanently fix the issue:
Get the flannel configmap yaml
[nutanix@karbon-fr-couvkub301-e0a5d0-k8s-master-0 ~]$ kubectl get configmaps -n kube-system kube-flannel-cfg -o yaml > flannel-cfg.yaml
Add "cniVersion": "0.3.1", below the name field in the cni-conf.json portion of the above obtained yaml
[nutanix@karbon-fr-couvkub301-e0a5d0-k8s-master-0 ~]$ cat flannel-cfg.yaml
Apply the yaml again:
[nutanix@karbon-fr-couvkub301-e0a5d0-k8s-master-0 ~]$ kubectl apply -f flannel-cfg.yaml
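To verify the permanent fix after the next reboot (or after restarting kubelet), confirm that the node reports Ready and that the restored conflist now carries the version field (a simple verification sketch):
[nutanix@karbon-fr-couvkub301-e0a5d0-k8s-master-0 ~]$ kubectl get nodes
[nutanix@karbon-fr-couvkub301-e0a5d0-k8s-master-0 ~]$ grep cniVersion /etc/cni/net.d/10-flannel.conflist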
|
KB6224 | Prism Central: Error message "Showing XX of YY permissions in role. To view all permissions please use the API." | This article describes the Prism Central error message "Showing XX of YY permissions in role. To view all permissions please use the API" and how to resolve it. | In Prism Central > Administration > Roles page, when attempting to Update, Manage or Clone a Role, you might see a warning banner:
Showing-XX-of-YY-permissions-in-role-To-view-all-permissions-please-use-the-API
Where:
XX would be the number of permissions the UI is displaying; and YY would be the number of permissions that the role actually holds.
| This warning banner is seen when the Role has been created using API and it contains permissions that the UI does not have access to.
The UI has only a subset of the total permissions available via the API.
To proceed, use the API to update/manage/clone such roles created by the API. If you proceed with the operation in the UI, it is possible that the extra permissions added via API will be removed.
Below is an example workflow utilizing the Prism Central REST API Explorer to gather the required data points then format the JSON appropriately to add a permission to a custom role via the API.
1: First navigate to your Prism Central's "REST API Explorer"
2: To gather your custom Role's UUID and also some relevant permissions' UUIDs, you can use a single endpoint, https://<PCIPAddress>:9440/api/nutanix/v3/roles/list, passing it JSON similar to that provided below to list 100 or so roles and their permissions. From the response body returned, we can find our custom role's UUID and the various permissions' UUIDs assigned to other pre-existing roles. 2a: Example JSON for the get_entities_request value field.
{
3: Select "Try it out!" then scroll to view the "Response Code" and "Response Body". 3a: "Response Code" should be 200. 3c: The "Response Body" will return a long JSON body. Below is a truncated example with the relevant data points we will need to format the JSON to update our custom role.
{
3d: From this output we have been able to gather the following required data points to format our JSON required to send to the https://<PCIPAddress>:9440/api/nutanix/v3/roles/<CustomRoleUUID> endpoint.
Our Custom Role UUID:
4: Format the JSON required to update our custom role and pass it to the https://<PCIPAddress>:9440/api/nutanix/v3/roles/<CustomRoleUUID> endpoint. Specifically for our test case, we're passing it to https://<PCIPAddress>:9440/api/nutanix/v3/roles/5e7ab2cc-784b-4a22-9b39-4938cf087c42. 4a: Below is the example JSON for updating our role with the desired new Snapshot_Virtual_Machine permission. 4ai: Please note these permissions' UUIDs are unique per deployment and so need to be gathered separately for each unique Prism Central instance. 4aii: Below, the new desired Snapshot_Virtual_Machine permission is added at the bottom, while all of the role's other pre-existing permissions are still included.
{
4b: In the API Explorer, scroll to the PUT explorer endpoint and expand it. 4c: Paste the custom role's UUID into the UUID field. 4d: Paste the formatted JSON body into the "body" field. 4e: Scroll down and select "Try it out!" 4f: Scroll down to review the "Response Code" and "Response Body". 4di: The response code should be 202. 4dii: If the response body returns the below message, update the JSON "spec_version" to match the requested version, then re-submit the call.
The entity you are trying to update might have been modified. Please retrieve the entity again before you update. spec version mismatch: specified version <JSONSpecifiedVersion>, requested <RequestedVersionNumber>"
4diii: If the response body returns the below message then navigate back to your Prism Central WebUI re-login, navigate back to this API Explorer endpoint, re-fill the required fields, and finally re-submit the call.
Authentication required.
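Note: As an alternative to the API Explorer, the same PUT can be issued with curl from any workstation. This is only a sketch - the PC address, role UUID, credentials, and the role_update.json file (containing the JSON body from step 4a) are placeholders:
curl -k -u admin -X PUT -H "Content-Type: application/json" -d @role_update.json "https://<PCIPAddress>:9440/api/nutanix/v3/roles/<CustomRoleUUID>"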
5: Now navigate back to your Prism Central WebUI and view the role. The desired permission should be visible. |
KB8351 | Network changes on an AHV host can lead to an outage due to extended zookeeper leader election time | Network changes on an AHV host can lead to an outage due to extended zookeeper leader election time | A network change such as updating bridge configuration on AHV hosts can lead to a cluster outage due to network partitioning and multiple zookeeper elections happening. Below is a depiction of the events of ONCALL-5773 and how a network change impacted the whole cluster:
Customer enabled LACP time fast on a node of the cluster when LACP was already active by using the following command. The CVM had previously been set in maintenance mode:
ovs-vsctl set port br0-up other_config:lacp-time=fast (On AHV) all good, no issues, and manage_ovs show_uplinks shows that all is good:
When trying to update the uplinks to use only 10g ports we noticed the issue, CVM and host were pingable but no ssh was possible
manage_ovs --interfaces 10g update_uplinks
1. Logged onto IPMI and executed the following on AHV host, which returned network connectivity:
ovs-vsctl set port br0-up lacp=active
2. After this, connectivity to the CVM was restored which was in maintenance mode. Exited CVM from maintenance mode:
ncli host edit enable-maintenance-mode=false id=CVMID
3. After that, cluster status reported all services UP, and fault-tolerance was again OK.
nodetool -h 0 ring returned all good (Normal) on all CVMs
This is the root cause analysis provided by engineering:
10.22.0.153 zk2 # DON'T TOUCH THIS LINE myid=2
At the moment of the network change on ZK3, the node ZK3 was the leader at that point in time which was in maintenance mode. So when network changes were made ZK leader was shutdown. Node .172 was not seeing the other ZK2 and ZK1 and it issued shutdown as there were no followers.
At this moment ZK1 and ZK2 started to report exception about missing the leader:
2019-03-02 23:57:16,839 - WARN [QuorumPeer[myid=2]0.0.0.0/0.0.0.0:9876:Follower@117] - Exception when following the leader
Since we have a network partition, there are 2 elections being held at the same time for a new leader.
Here the issue is that ZK3 (.172), which was in maintenance mode and where we had done the network changes, got elected again. Due to the network partition we see two elections:
ZK1 and ZK3
2019-03-02 23:57:17,629 - INFO [QuorumPeer[myid=1]0.0.0.0/0.0.0.0:9876:FileTxnLog$FileTxnIterator@691] - Last read zxid from the transaction log file is 150006794c
ZK2 and ZK3
2019-03-02 23:57:18,669 - INFO [QuorumPeer[myid=2]0.0.0.0/0.0.0.0:9876:FileTxnLog$FileTxnIterator@691] - Last read zxid from the transaction log file is 150006794c
Immediately after this election both ZK1 and ZK2 were unable to reach ZK3.
2019-03-02 23:57:26,909 - WARN [QuorumPeer[myid=2]0.0.0.0/0.0.0.0:9876:Learner@275] - Unexpected exception, tries=0, connecting to zk3/10.22.0.172:2888
So a new election will now take place between
ZK1 and ZK2
2019-03-02 23:58:03,013 - INFO [QuorumPeer[myid=2]0.0.0.0/0.0.0.0:9876:FileTxnLog$FileTxnIterator@604] - Reading transaction log file : /home/nutanix/data/zookeeper/version-2/log.1500050812
And this time ZK2 has been elected as leader, and in the meantime ZK3 will start looking to join the election.
We lost ZK at around 2019-03-02 23:57:16,839 and the new leader election completed at around 2019-03-02 23:58:05,909, which is long enough for a cluster-wide ZK outage that had a cascading effect on the cluster services. | ENG-226742 https://jira.nutanix.com/browse/ENG-226742 has been filed to relinquish Zookeeper leadership from a node when the CVM is set to maintenance mode to avoid unnecessary zookeeper migrations.
KB1976 | RMA: Return Instructions (Mexico) | This KB shows the printout sent to the customer when a replacement part is requested. | Below is the return instruction letter provided with a replacement part. Spanish and English translations are listed. | AVISO IMPORTANTEReferente a la Devolución de Productos
Este envío contiene equipos designados como reemplazo de acuerdo al RMA de Reemplazo Avanzado. Bajo los términos de la política de RMAs de Nutanix, el equipo defectuoso debe ser devuelto a Nutanix dentro de un plazo no mayor a 7 días.Favor de seguir los pasos abajo descritos para devolver su equipo a Nutanix de manera correcta: INSTRUCCIONES DE RETORNO
Se solicita que tome el mismo cuidado en empaquetar los equipos defectuosos así como Nutanix tomo para enviárselos. Favor de utilizar el mismo embalaje en que le llegó su reemplazo para preparar su envío; esto incluye la caja, bolsa antiestática, sello y protectores. Cerrar y sellar adecuadamente a la caja en la que se retornará la parte defectuosa para asegurar un envío sin apretura prematura.Cada envío debe presentar en el paquete la siguiente información, si se devuelven paquetes múltiples, cada uno tiene que tener su información correspondiente:
El número de RMA proporcionado por Nutanix.
Esta información asegurara que su devolución será recibida y acreditada correctamente y no se le cobrara por la falta de devolución del equipo(s).
Utilice la guía de retorno incluida en el Kit para regresar la parte defectuosa. Favor de contactar el almacén de Nutanix/ Estafeta para coordinar la recolección de la(s) parte(s) en los siguientes teléfonos: 01800 378 2338 ó 0155 52708300.
La dirección de envío (disponible en su guía de retorno anexa a este embarque) es: Nutanix C/O Agencia Aduanal MinerAllende 709Zona CentroNuevo Laredo, Tamaulipas Mexico 88000Tel: (867) 7135968At´n: Jesus PerezRecuerde que tiene 7 días para solicitar su recolección, de otro modo, la guía de retorno será inválida.Si el número de cliente es requerido, usted puede utilizar el 0000057.Si tiene algún problema con el uso de la guía de retorno, por favor contacte al almacén de Nutanix/ Estafeta en el teléfono +52 (55) 58619500 ext 42317 o vía correo electrónico a [email protected]
AVISO IMPORTANTE: Guardar y archivar la información de quien recogió la(s) parte(s) – nombre, fecha, hora y firma. Será información requerida para dar seguimiento en caso de demora o pérdida del envío.
Si el equipo llegara dañado por no seguir estas instrucciones descritas anteriormente, al cliente se le podría cobrar el costo total del equipo.Si la devolución del equipo no se hace dentro del tiempo estipulado de 7 días, al cliente se le podría cobrar el costo total del equipo.
English Translation:
IMPORTANT INFORMATIONRegarding Return Shipping
The parts contained in this shipment are replacement parts provided under an Advance Replacement RMA. Under the terms and conditions of the Nutanix RMA policy, these replaced parts must be returned to Nutanix within 7 days.Please follow the below return process to return your defective unit. INSTRUCTIONS FOR RETURN
Please treat the return as you would expect us to treat products sent to you. To achieve this we ask that you use the Packing Material from this advance replacement to return your defective part (Packaging material includes the Box, ESD Bag, Foam Inserts and Protectors). Make sure that the following information is clearly noted on each returned unit(s):
Nutanix RMA number.
This will ensure that your defective part is received correctly and that you are not billed in error for non return of the defective part.
Use the inserted return waybill to ship the defective unit back to the Nutanix warehouse within 7 business days. Please contact the below Nutanix warehouse to coordinate pick up of part(s): 01800 378 2338, 0155 52708300.
The Delivery Address (Available in the return waybill attached) is: Nutanix C/O Agencia Aduanal MinerAllende 709Zona CentroNuevo Laredo, Tamaulipas Mexico 88000Tel: (867) 7135968At´n: Jesus Perez
Remember that you have 7 days to ask for the defective pickup, otherwise, the return waybill will be invalid.If a Customer number is required, you can use 0000057.If you have any trouble using the return waybill, please contact to: Nutanix c/o Estafeta Mexicana at the telephone number 52 (55) 58619500 ext 42317 or via email to: [email protected]
IMPORTANT: Please be sure to get a proof of pick up from the driver with name, date, time and signature. This information may be needed in the event of mis-shipment or loss.
Product damaged as a result of inadequate packaging may result in invoicing for the full value of the goods. If you fail to return the defective part within 7 days, you may be invoiced the full value of the part. |
KB16470 | HTTP Proxy - Error: Unable to resolve FQDN as hostname with current DNS settings | Prism and NCLI will return the unable to resolve X as hostname with current DNS settings when attempting to modify the http-proxy whitelist if an existing FQDN entry is no longer available / does not resolve against the DNS servers | Prism and NCLI will return "Error: Unable to resolve <FQDN> as hostname with current DNS settings" when attempting to modify (add/remove) an entry from the http-proxy via Prism UI or ncli.A common example of this issue would be after a cluster was decommissioned and all records were scrubbed client-side, but the FQDN still exists in the Prism Central http-proxy whitelist.This issue is caused by an existing record or "target" existing within the http-proxy whitelist and the A record no longer exists in the DNS servers.The bug was first reported in AOS 5.19 and confirmed to work in 5.15. ENG-392529 https://jira.nutanix.com/browse/ENG-392529 was created in April 2021 and is currently Unresolved with a workaround.
What to look for
ncli
nutanix@NTNX-CVM:~$ ncli http-proxy add-to-whitelist target-type=ipv4_address target=xx.xx.xx.xx
Prism | 1. Get a list of the existing targets on the http-proxy whitelist
nutanix@NTNX-PCVM:~$ ncli http-proxy get-whitelist
Optionally grep for the base domain and get all results - customer should have some input on which are no longer in use
nutanix@NTNX-PCVM:~$ ncli http-proxy get-whitelist | grep "example"
2. Get a list of current DNS server(s)
nutanix@NTNX-PCVM:~$ ncli cluster get-name-servers
3. Use `dig` to resolve the FQDN with customer DNS server(s)
nutanix@NTNX-PCVM:~$ dig @10.134.80.20 example.com
You will notice the ANSWER SECTION is missing, indicating there is no response. 4. Use `nslookup` to resolve the FQDN with customer DNS server(s)
nutanix@NTNX-PCVM:~$ nslookup internal.domain.example.com
6. Ping the FQDN and confirm it does not resolve / ping
nutanix@NTNX-PCVM:~$ ping internal.domain.example.com
5. Add a loopback record to the Prism Leader hosts file, pointing the FQDN to the gateway of the cluster (sudo is required)
nutanix@NTNX-PCVM:~$ sudo vi /etc/hosts
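An example of the temporary record to add before saving (the gateway IP below is a placeholder; substitute your cluster's gateway and the FQDN being removed):
xx.xx.xx.1 internal.domain.example.com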
Use :wq to exit vi (write, quit). 6. Confirm the FQDN now resolves
nutanix@NTNX-PCVM:~$ ping internal.domain.example.com
7. Remove the offending whitelist target with ncli
nutanix@NTNX-PCVM:~$ ncli http-proxy delete-from-whitelist target=internal.domain.example.com
8. Remove the entry from /etc/hosts with the same steps as Step 5 |
KB7345 | Hyper-V Highly Available VM's disappeared from Hyper-V manager and Prism | the steps about how to fix the problem that if the VM's on the Prism and Hyper-V Manager disappeared after a Storage downtime. | We have seen situations where the VM's on the Prism and Hyper-V Manager disappeared after a Storage downtime. Even the PowerShell command Get-VM does not show any VM.I would like to discuss one specific situation here where the complete Storage was down for some time. After the Nutanix Storage is brought back up, there are no user VM's listed in Prism or Hyper-V manager. | Let's have a look at the Cluster Resource policies page to understand the issue. Each Virtual Machine in a Failover cluster is contained in an entity called as Cluster Group. The Virtual Machine Cluster group contains cluster resources as below:1. Virtual Machine Resource - Corresponds to the actual virtual machine2. Virtual Machine Configuration resource - Corresponds to the xml file of the VM. 3. (Optional) Cluster Disk resource. If the VM is created on a shared Cluster Disk. The Virtual Machine Configuration resource is responsible for talking to the Virtual machine management service and registering the xml file of the Virtual Machine with VMMS so the VM can be managed via Hyper-V manager. If the resource is Offline/Failed, you will not see the VM in Hyper-V manager. This was the reason why all the VM's disappeared from Hyper-V manager. Now let's take a look at why this configuration file resources was in offline/failed state. When we have a look at the Policies tab of the properties of the Virtual Machine Configuration resource, we see below: This is the configuration option for how Hyper-V virtual machine response to resource failure. So, when the Nutanix cluster was down, the configuration file (xml) file which resides on Nutanix containers was unavailable, and hence the Virtual Machine Configuration file resource failed. According to the above policies settings, the failover cluster will try to restart the resource on the same node once in next 15 minutes. If the resource failed again within 15 minutes (which it will if the Nutanix cluster is down for more than 15 minutes), the failover cluster will move the entire Virtual Machine group to a different Cluster node and try to start the resources. The Failover Cluster will continue to perform the above step until the resource has failed to come online on all the nodes of the cluster. In this case, the complete cycle of bringing the resource online will restart only after 1 hour. In this duration of 1 hour, if the Nutanix cluster is up and running, the VM configuration resource will not start automatically till the 1 hour is completed from the last failure. In such cases, the VM configuration file can be started manually from the Failover Cluster Manager followed by the VM Power On. |
KB9220 | Changing ownership of applications, blueprints and users to other projects in Calm not supported. | Changing ownership of VMs, applications and blueprints between projects in Calm is not supported. | Nutanix Self-Service is formerly known as Calm.If a VM or an application or a blueprint is moved from one project to another, the entity does not appear as part of a new project on the Calm UI to any user.
The VM's configuration shows that the VM is correctly associated with the new project on the CLI. You will see a project_reference field in the output of the "vm.get" command:
nuclei vm.get <vm name>
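To jump straight to the project association in the output, the command can be piped through grep (a convenience sketch; the exact output layout may vary by version):
nuclei vm.get <vm name> | grep -A 2 project_reference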
If the entity (VM, application, blueprint) is moved back to the original project, it is visible to the users again on the Calm UI.
Root Cause
The VM project can be changed using Prism Central, and that works as expected. However, the Calm application, which created the VM, is not updated to reflect the change. This leads to inconsistency in the UI. For example, VM is created in Project A using Calm. Then it is moved to Project B using Prism Central. Users in Project B can see the VM in Prism Central but not the underlying application in Calm. Users in Project A can see the application in Calm but not the VM in Prism Central. | Nutanix Engineering is aware of the issue and is working on a solution. In the meantime, a workaround is available. Re-deploy the entity in the new or correct project for the entity to appear correctly. |
KB4806 | Dell Support Portal Registration Guide | Dell Support Portal Registration Guide: This article describes how you can register for the Nutanix OEM portal access. | The following article describes how to register for the Nutanix OEM portal access. | Perform the following procedure to register the Nutanix OEM portal access.
In the address bar of a web browser, type: https://my.nutanix.com https://my.nutanix.com and press Enter.Click Sign up now.Type the values of First Name, Last Name, Email, Password in the fields and click Submit. Note: Follow to the specified password complexity requirements when you are creating the password. A confirmation page is displayed and you will receive an email from [email protected] after you have successfully completed the registration process.
Following is an example of the email
Hi First Name,
Click the link provided in the email to confirm the registration process. A message briefly appears in the browser confirming your email address and the Nutanix portal opens in the browser.Type the Email and Password that you used to register and click the arrow in the Password field.The Nutanix portal welcome page appears.Look for the Support Portal tile and click the Launch button in the tile. Note: You must first activate the account before you create Nutanix cases.In the Activation Required dialog box that appears, enter the Business Email and Support4Dell in the Serial Number or Service Tag fields and click Activate. The Activation Complete! screen appears after you have successfully completed the activation.Click the box next to I have read and agreed to the above terms and conditions to accept the terms. The Support Portal page appears. Note: The Create a New Case option is available if the registration process is successfully completed. |
KB3629 | How to generate memory dump for hung Windows VM on AHV | This article describes how to generate a memory dump for a VM running on AHV. | For troubleshooting unresponsive VMs Nutanix support can request to generate a memory dump for analysis. This article describes how to generate a memory dump on Windows VMs. | The only way to generate a memory dump from hung VM is to inject NMI interrupt into a Windows guest and triggering Windows to bugcheck.
Follow the instructions described in the following Microsoft KB article to configure the VM for crash on NMI: https://docs.microsoft.com/en-US/windows/client-management/generate-kernel-or-complete-crash-dump https://docs.microsoft.com/en-US/windows/client-management/generate-kernel-or-complete-crash-dump
Note: Starting from Windows 8 and Windows Server 2012 OS is configured to crash on NMI by default, so no additional configuration required.
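For older guests (for example, Windows 7 / Windows Server 2008 R2), the Microsoft article essentially boils down to setting the following registry value inside the guest and rebooting. This is a sketch; confirm the exact value against the linked article for your OS version:
C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v NMICrashDump /t REG_DWORD /d 1 /f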
To send NMI from the AHV host to the VM, run the following commands:
On any CVM, run the following command and note the VM UUID and the AHV host where the VM is running.
nutanix@cvm$ acli vm.list | grep <VM name>
ssh to the host and run the following command to send the NMI.
[root@AHV ~]# virsh inject-nmi <UUID_obtained_from_the_acli_command_above>
The VM will bugcheck (crash / BSOD) and reboot once the memory dump is written. By default, the memory dump is saved in the following location inside the VM:
%SYSTEMDRIVE%\Windows\memory.dmp
Collecting Memory dump from Linked Clone Desktops: When working with Linked Clones, aka Non-Persistent Desktops, virtual machines are reverted back to the original base snapshot when a user logs out. By default, Windows generates the kernel dump on the same partition as the OS. This results in the dump file not being created, as the system disk is reverted to the base snapshot as part of this technology. To collect a memory dump from a Non-Persistent desktop, the registry settings of the Virtual Machine must be updated to redirect the dump location to persistent storage so the files can be collected.
Redirecting the dump location to persistent storage:
1. Add a new IDE disk to the affected VM. The drive should be larger than the total RAM allocated to the VM (min 256 MB larger).
2. Initialize the drive from Disk Management.
3. Format it and assign a drive letter.
4. Go to Control Panel > System and Security > Advanced system settings > Advanced > Settings (under Startup and Recovery)
Under System failure, configure the Dump file: location to the newly created disk
5. Go to Control Panel >System and Security> Advanced system settings> Advanced > Settings (under Performance)
Click the "Change..." button in the Virtual Memoryuncheck “automatically manage paging file size for all drives”.Select the newly created drive letter
Select Custom size: radio button, and set the values as follows:
Initial size (MB): System RAM size (converted to MB) + 256 MB
Maximum size (MB): System RAM size (converted to MB) + 256 MB
6. Use Registry Editor to configure DedicatedDumpFile and DumpFile values to use newly added disk as described in Microsoft: How to Configure Dedicated Memory Dump https://blogs.msdn.microsoft.com/ntdebugging/2010/04/02/how-to-use-the-dedicateddumpfile-registry-value-to-overcome-space-limitations-on-the-system-drive-when-capturing-a-system-memory-dump/
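As a sketch of the registry values described in the linked Microsoft article (the D: drive letter is an assumption for the newly added disk; verify the value types against the article):
C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v DedicatedDumpFile /t REG_SZ /d "D:\DedicatedDumpFile.sys" /f
C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v DumpFile /t REG_EXPAND_SZ /d "D:\memory.dmp" /f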
Known issues: Nutanix VirtIO releases older than 1.1.4 have an issue in the vioscsi driver that prevents memory dump creation. You may see progress stuck at 0% while the OS tries to write the dump to disk. Please upgrade Nutanix VirtIO to version 1.1.4 or newer. If VirtIO cannot be upgraded, the following workaround can be used:
Add new IDE disk to affected VM. Size needs to be determined based on the need, but 15-20 Gb should be more than enough for kernel dumps.Format it and assign drive letter.Configure DedicatedDumpFile and DumpFile options to use newly added disk as described in https://blogs.msdn.microsoft.com/ntdebugging/2010/04/02/how-to-use-the-dedicateddumpfile-registry-value-to-overcome-space-limitations-on-the-system-drive-when-capturing-a-system-memory-dump/ https://blogs.msdn.microsoft.com/ntdebugging/2010/04/02/how-to-use-the-dedicateddumpfile-registry-value-to-overcome-space-limitations-on-the-system-drive-when-capturing-a-system-memory-dump/ |
KB17145 | Unable to remove VLAN from Self-Service Application | Subnet is unable to be removed from the environment, as it is still referenced by the application, despite having been removed. | Attempting to remove an unnecessary subnet from a Self Service Environment is unsuccessful due to the subnet still being referenced in an existing Application.Navigating to that Application, it does not show the subnet in question as present.
In this scenario, the user had a VM created from that Application, and had removed the subnets directly from the VM itself in Prism Central, and subsequently added the desired subnet. The UI reflects the new subnet, however, the old subnet cannot be removed from the Environment as it is still reporting being referenced.Additional information on Self-Service Application Management is available in the Portal Documentation. Calm-Admin-Operations-Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Calm-Admin-Operations-Guide-v3_5_1%3Anuc-projects-ahv-environment-configuration-t.html&a=c5da0ba437fc1a509de8694dad3288fa7120ca3826b5858e8ac4673c97d3f0aadafb736cbc21b4ea | Confirmation:
Get the application UUID from the GUI. Get the subnet UUID from nuclei subnet.list:
nutanix@PCVM:~$ nuclei subnet.list
2024/07/16 14:16:32 ZK : Initiating connection to server X.X.X.30:9876
2024/07/16 14:16:32 ZK : Connected to X.X.X.30:9876
2024/07/16 14:16:32 ZK : Authenticating connection 0x0
2024/07/16 14:16:32 nuclei is attempting to connect to Zookeeper
2024/07/16 14:16:32 ZK : Authenticated: id=0x39027b85677527b, timeout=20000
"Total Entities : 4"
"Length : 4"
"Offset : 0"
"Entities :"
Name UUID State
Option1 40badfee-db07-4f66-9ee3-304430b920d9 COMPLETE
Option2 cddcd424-cd14-454e-b819-efbac3b65569 COMPLETE
Option3 6d4bb417-9646-4187-8308-d5f54b901dec COMPLETE
Option4 cfc66b78-3b0e-4fa2-a6a2-60ae90915401 COMPLETE
In Prism Central or Standalone Self-Service VM, enter cshell.
nutanix@NTNX-x-x-x-30-A-PCVM:~$ docker exec -it nucalm bash
*******************************************************************
You are logged in to the calm container
DO NOT EDIT ANY FILES: THIS MIGHT CORRUPT CALM DATA OR SERVICES
Commands for container:
activate - Activates calm virtual env
code - Changes directory to calm code directory
home - Changes directory to calm home, containing conf files
status - Prints calm service status
********************************************************************
Have Fun!
[root@ntnx-x-x-x-30-a-pcvm /]#
Run the `activate` command.
[root@ntnx-x-x-x-30-a-pcvm /]# activate
(venv) [root@ntnx-x-x-x-30-a-pcvm /]#
Enter the cshell terminal by running `cshell.py`
2024-07-16 14:55:49,398Z INFO db.py:944 Loading IDF template files
2024-07-16 14:55:49,609Z INFO db.py:978 Loading of IDF template files done
Calm Shell
----------------------------
Available objects:
model : calm.lib.model
session : Session Object (flush to push items in db)
In [1]:
Run the following to confirm that the Application is still referencing the subnet UUID. Note: In cshell, the command prompts are "In [1]:", "In [2]", instead of "$". So after each line starting with "In [#]:" hit enter. If the commands have an output, it will appear as "Out [#]:"
In [1]: app = model.Application.get_object('<application-uuid-from-step-2>')
In [2]: app.name
Out[2]: u'Application_One'
In [3]: app_pf = app.active_app_profile_instance
In [4]: dep = app_pf.deployments
In [5]: dep
Out[5]: [<Deployment: Deployment object>]
In [6]: dep = dep[0]
In [7]: sub = dep.substrate
In [8]: sub
Out[8]: <NutanixSubstrate: NutanixSubstrate object>
In [10]: sub.subnet_references
Out[10]:
[u'40badfee-db07-4f66-9ee3-304430b920d9',
u'cddcd424-cd14-454e-b819-efbac3b65569']
We can see that these two references match subnets Option1 and Option2.
If the output of [10] includes the subnet UUID gathered in step 2, this is a match for CALM-46208 https://jira.nutanix.com/browse/CALM-46208 and requires Engineering to remove or replace the stale subnet reference. Type `exit` to exit the cshell environment. |
""Title"": ""Unable to create VMs in SSP on AOS 5.1.1.1 and 5.1.1.2"" | null | null | null | null |
KB6429 | Pre-Upgrade Check: test_supported_multihome_setup | test_supported_multihome_setup checks if the cluster is a multihomed setup. | This is a pre-upgrade check that checks if the cluster is a multihomed setup. If Zookeeper/Nutanix services are using eth0 for communication, then the check would pass. If they are using eth2, then this is not supported, and the check would fail.
Note: This pre-upgrade check runs on Prism Element (AOS) as well as Prism Central during upgrades. | See table below for the failure message you are seeing in the UI, some further details about the failure message, and the actions to take to resolve the issues.
[
{
"Failure message seen in the UI": "Zookeeper servers are configured on eth2. This is an unsupported multihome setup. Contact Nutanix Support for assistance in upgrade.",
"Details": "Zookeeper/Nutanix services are using eth2 for communication, and this is unsupported",
"Action to be taken": "These clusters with multihomed configurations might require assistance in the upgrade process. Contact Nutanix Support."
},
{
"Failure message seen in the UI": "Failure while finding CVMs which have multihome setup",
"Details": "Software is unable to find if the cluster is having a multihome setup",
"Action to be taken": "Collect the NCC log bundle and contact Nutanix Support."
}
] |
KB13245 | PC - Some LDAP users cannot login to Prism Central due to "Server not reachable" error | Some AD users cannot login to Prism Central, they get "Server not reachable" error when they redirected back to the login screen. | Scenario 1: Some Active Directory users cannot login to Prism Central due to "Server not reachable" shown right after they try to login with LDAP credentials. Note: "Server not reachable" is a generic error message and you need to confirm the problem in aplos log.On Prism Central in /home/nutanix/data/logs/aplos.out logs we can see below error signature when a user tries to login:
2022-05-05 09:57:08,349Z INFO athena_auth.py:134 Basic user authentication for user [email protected]
User might be in error state in Prism Central or the same user is listed twice in "nuclei user.list" output:
nutanix@pcvm:~$ nuclei user.get 3bd8575b-8684-5c1e-9145-31d575b76a20
Scenario 2: "Server not reachable" shown due to duplicate user groups. Here aplos.out shows:
2023-02-06 05:06:38,539Z WARNING auth_util.py:113 Error during user group validation/ user entity update. Inaccurate data
Where
active_directory_group_name is the name of the group in Active Directory
ou is a list of any organisational units where the Active Directory group may sit
sub_domain is a subdomain that might be in use in Active Directory
domain_name is the name of the organisation domain
tld is the name of the top level domain
For example: 'cn=cluster_admins,ou=it-dept,dc=apac,dc=nutanix,dc=com'. Duplicate UUIDs for the above AD group:
0f2b6436-3409-45be-9200-259bbc992000
23e0b2ce-9a38-4252-a10e-92f315141273
nutanix@NTNX-10-0-35-202-A-PCVM:~/data/logs$ nuclei user_group.list
Note the UUID returning an ERROR. | This issue may happen if a customer has IDP and LDAP (AD) authentication providers configured. If IDP (e.g. ADFS) is configured to return User Principal Name (UPN) in NameID claim, it may lead to user_uuid collision for two user accounts (with "kDIRECTORY_SERVICE" and "kIDP" types) of abac_user_capability type due to the logic how Aplos generates user_uuid UUID. For example:
Entity GUID: 07806de4-ef0d-5842-b073-57fb073b7092
The issue was reported to Nutanix engineering, but due to shift to IAMv2, it has low priority and due to a rare nature may not be fixed in IAMv1. If your customer is hitting the issue, engage STL (via TH) or DevEx. If there are duplicate groups, and one is reporting an error as above, then this group can be removed. In this example, it would be:
nuclei user_group.delete 0f2b6436-3409-45be-9200-259bbc992000
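After the deletion, the group list can be re-checked to confirm the erroring entry is gone (same command used earlier in the identification steps):
nutanix@PCVM:~$ nuclei user_group.list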
Note: If there are any aplos.out errors in collecting user group data, it may be due to role mappings not being configured:
2023-03-29 22:39:47,936Z ERROR auth_util.py:318 User is not allowed to access the system without access control policy.
From the PCVM command line when logged in as "nutanix" user, verify there are no role mappings.
nutanix@pcvm$ ncli authconfig ls-role-mappings name=<domain_name>
Configure role mappings in Prism Central. Log in as the "admin" user, and from Settings > Users and Roles > Role Mappings, select the appropriate type of mapping (user, group or ou) and enter the details.
|
""ISB-100-2019-05-30"": ""Title"" | null | null | null | null |
KB16376 | Nutanix Files - SmartDR Planned Failback stuck at 50% for max 4 hours leaving shares inaccessible | Nutanix Files - SmartDR Planned Failback stuck at 50% for hours with 'Planned FailOver Failback for Files' on PC minerva tasks | During smart DR failback operation, Prism central task may get stuck on 50% for maximum of 4 hours, then failing.During this time, file server shares are in read-only mode on both sites.Verification :
Checking PC task on the remote site, ~/data/logs/files_manager_service.out in PCVM:
nutanix@NTNX-A-PCVM:~$ ecli task.list include_completed=0 Task UUID Parent Task UUID Component Sequence-id Type Status
Checking tasks on remote File server FSVM
nutanix@NTNX-A-FSVM:~$ ecli task.list include_completed=0
Tasks seemed to be stuck
nutanix@NTNX-A-FSVM:~$ ecli task.get b1073bd4-7872-4fe9-42e9-f10dd10cbff4
Checking the replication jobs on the remote site, there were 2 running share replication tasks:
nutanix@NTNX-A-FSVM:~$ afs job.list
Observed that the replication tasks were progressing but very slow that's why the parent task on prism central was not reflecting progress. | Checking the network bandwidth using KB-13052 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000LZLwSAO showed that much data needed to be replicated with a limited bandwidth.The problem is that a reverse replication policy was not created initially when failover was performed, hence by doing a failback all the new written data on the remote site had to be replicated back to the main site before the failback succeeds.In our example case, based on BW calculation with respect to share data size it needed 8 hours to complete the replication.We have a guardrail in which we don't allow the failback operation to take more than 4 hours: https://portal.nutanix.com/page/documents/details?targetId=Files-Manager-v4_3:fil-fm-dr-failback-c.html https://portal.nutanix.com/page/documents/details?targetId=Files-Manager-v4_3:fil-fm-dr-failback-c.html
If the Files Manager does not complete a failback operation during the timeout window of 4 hours, the operation is unsuccessful
During this time, all the shares are in Read-only mode. |
KB3709 | NCC Health Check: switch_interface_stats_collector | The NCC health check switch_interface_stats_collector verifies if interface statistics are being collected from the network switches. This check runs only on AHV node clusters. | The NCC health check switch_interface_stats_collector verifies if interface statistics are being collected from the network switches.
The check fails if interface statistics are not being collected from the upstream switch.
When you add a switch configuration in Prism, if SNMP and LLDP connectivity are available, the interface statistics are updated in Arithmos and then published to the entity database (EDB). If publishing to EDB fails, use this check to verify if the interface statistics are successfully updated in the database from Arithmos.
Note: This check runs only on AHV node clusters.
Running the NCC Check
Run the NCC check as part of the complete NCC Health Checks:
nutanix@cvm$ ncc health_checks run_all
Or run the check separately:
nutanix@cvm$ ncc health_checks network_checks switch_checks run_all
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check does not generate an alert.
Sample output
For Status: PASS
Running : health_checks network_checks switch_checks run_all
For Status: FAIL
Running : health_checks network_checks switch_checks run_all
Output messaging
[
{
"Description": "Switch interface stats collected."
},
{
"Description": "Check switch config and LLDP/CDP settings."
},
{
"Description": "This check is scheduled to run every 2 minutes, by default."
}
] | Steps to verify if the configurations are in place:
Verify if LLDP or CDP is enabled on the upstream switch connecting to Nutanix hosts.
From the AHV host, verify if the LLDP or CDP packets are getting discovered by either running packet capture to see if the packets are received, or use the following command.
[root@AHV-Host~]# lldpctl
Verify if the SNMP access to the switch management IP addresses is correctly configured in Prism.
For more information, see Configuring Network Switch Information https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v55:wc-system-network-switches-wc-t.html#ntask_o3v_2ww_ds.
Example
Run the following commands from the Controller VM to verify the SNMP access.
nutanix@cvm$ sudo snmpwalk -v3 -l authPriv -u <User> -a SHA -A <Password> -x DES -X <Password> <Switch MGMT IP> ifname
In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/. |
KB15051 | AHV Hosts non-schedulable or VM connectivity issues caused by duplicate Flow Network Security rule entries in table=6 | On clusters running AHV, hosts can become non-schedulable or non connected due to bloated vNIC's metadata in OVS table 6 in br0.local
Duplicate entries in table 6 can cause the overflow of resubmit actions of broadcast in taps leading to VM connectivity issues | Affected AOS versions are from 6.1.2 to 6.5.2.7.Scenario 1Symptoms observed include reconciliation delays/failures during acropolis leadership changes triggered by common maintenance operation (upgrades, CVM memory reconfig, ...), which can lead to HA Event and unexpected UVMs reboot.Identification:
We may see some AHV hosts showing connected: false; for example, host .68 as below:
nutanix@CVM:~$ acli host.list
Use the command below to list affected hosts:
Note: In the below command, vlan_tci is the VLAN ID for the intended traffic; we need to convert the VLAN ID from decimal to hexadecimal (VLAN 3903 = 0x0f3f).
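If needed, the decimal-to-hexadecimal conversion can be done directly from any CVM shell (a quick aid, not part of the original procedure):
nutanix@CVM:~$ printf '0x%04x\n' 3903   # prints 0x0f3f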
nutanix@CVM:~$ hostssh "ovs-ofctl dump-flows br0.local table=6 | grep 'vlan_tci=0x0f3f' | sed 's/:/\n/g'| sort|uniq -c"
We can dump flows on br0.local table=6 using the below command and look for any flows listed with multiple entries, which indicates duplicated vNIC metadata. In the below example, we see that flow 0x1506b is a duplicate entry and is getting resubmitted 358 times:
nutanix@CVM:~$ hostssh "ovs-ofctl dump-flows br0.local table=6 | grep 'vlan_tci=0x0f3f' | sed 's/:/\n/g'| sort|uniq -c"
We may also see ovs.py tasks piling up on the affected hosts. Use the command below to confirm:
root@AHV:~$ ps -auxx | grep ovs.py | wc -l
nutanix@CVM:~$ hostssh "ps -auxx | grep ovs.py | wc -l"
This issue is likely to be seen in environments where customers have VMs with a kTrunk NIC, as such a config will cause rules for each VLAN to be configured for the vNIC on the hosts.
Scenario 2: Only on Flow Network Security enabled clusters, ARP broadcast traffic for VMs within the same VLAN may get dropped on some of the AHV hosts in the cluster, leading to VM connectivity issues. Identification:
For the rule in question we can check the table=6 and find if there are multiple duplicate entries, for example below Flow Network Security rule 0x506b is the duplicate entry and getting resubmitted 893 timesWe need to perform dump-flows on br1.local(Used for VM traffic) table=6 using below command and look for any flows listed with multiple entries means duplicated metadata vNICs.Note : In below command vlan_tci is the vlan ID for the intended traffic, we need to convert the Vlan ID from Decimal to Hexadecimal Vlan 164 = 0x00a4
root@AHV:~$ ovs-ofctl dump-flows br1.local table=6 |grep vlan_tci=0x00a4 | sed 's/:/\n/g'| sort|uniq -c| head -n 10
Duplicate entries in table 6 can cause an overflow of resubmit actions for broadcasts on the taps, where we can hit the OVS limit of more than 4096 resubmits, which is where OVS can drop the traffic. We will also see the below error signatures in the AHV ovs-vswitchd.log logs to confirm if we are hitting this issue:
nutanix@CVM:~$ hostssh "grep -i recirc /var/log/openvswitch/ovs-vswitchd* |grep 4096 | tail"
OVS dropping the traffic after 4096 resubmit operations is an expected in OVS and loop prevention mechanism in OVS where it allows only 4096 resubmission in a rule and then stops processing the rule.Reason why only some of the hosts maybe affected by this issue is because of the amount of resubmits happening on the each hosts. Any Vlan with re-submission greater than 815(First 2 lines in each host) will result in this issue which is why Host.137 and Host.138 will hit this issue.
nutanix@CVM:~$ hostssh "ovs-ofctl dump-flows br1.local table=6 |grep vlan_tci=0x00a4 | sed 's/:/\n/g'| sort|uniq -c |head -n 5"
Explanation:
OVS table 6 is required for Flow L2 isolation. This table is used only when Flow Network Security (Microsegmentation) is configured and isolation policies are in place. However, this table is programmed for all vNIC operations (add / remove). Due to defect ENG-534330, the vlan table 6 can grow to be very large with a small set of vNICs, if added / removed repeatedly.There is a higher probability to observe this bloat on clusters running UVMs with Trunk vNICs | Permanent fix :This issue is caused by https://jira.nutanix.com/browse/ENG-534330 https://jira.nutanix.com/browse/ENG-534330, and an upgrade to AOS 6.5.3 or newer is needed to fix this issue permanently.Workaround for Scenario 1Check if the Flow Network Security is enabled on this cluster
nutanix@CVM:~$ microsegmentation_cli microseg.get
Example: Flow Network Security is enabled:
nutanix@CVM:~$ microsegmentation_cli microseg.get
Example: Flow Network Security is disabled:
nutanix@CVM:~$ microsegmentation_cli microseg.get
If Flow Network Security is disabled, skip this step. If Flow Network Security is enabled, put the impacted host in Maintenance Mode and make sure all VMs are migrated off before proceeding.
Clear table 6 on the host
root@AHV:~$ ovs-ofctl del-flows br0.local table=6
Kill ovs.py processes. After a few minutes, the host should be back to the schedulable state.
root@AHV:~$ for i in $(ps aux | grep ovs.py | awk '{print $2}'); do kill -9 $i ; done
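After a few minutes, host schedulability can be re-checked from any CVM using the same command used during identification:
nutanix@CVM:~$ acli host.list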
Workaround for Scenario 2: To provide a workaround for the VM connectivity issue, affected VMs can be migrated out of the affected hosts to other hosts where we see a lower number of resubmits on the particular Flow Network Security rule. |
""ISB-100-2019-05-30"": ""Title"" | null | null | null | null |
KB16077 | How to create an NX Core quote | null | This guide will allow you to create a quote for a "NX Core" model at cost, fully decoupled from the software. Licensing through this type of quotation should be quoted based on the number of Cores into the configuration. | For step-by-step instructions, click here https://drive.google.com/file/d/1ZWh2XybcrZudwWXIo-979A1Vj7tJ508k/view?usp=sharing. Point of contact: [email protected] and [email protected] |
KB1283 | Mellanox QSFP-SFP+ Adapter (QSA) support note | This article describes the Mellanox QSFP to SFP+ Adapter (QSA). | The QSA is a mechanical QSFP to SFP+ Adapter. It is strictly a mechanical adapter. See here http://www.mellanox.com/pdf/prod_cables/QSA.pdf for the product brief. This connector allows an SFP+ cable to attach to the QSFP cage (with low insertion loss).
Note: Everything here is in the Electrical Domain.
Once the signal gets to the SFP+, you can either have passive copper, active copper, SR-Optics or LR-Optics. SFP+ SR-Optics are analogous to LC-SR in modal dispersion and use 850nm LASERs. SFP+ LR is LC-LR in modal dispersion and uses 1310nm LASERs. I believe that SFP+ uses the same optical cable diameter as LC at 1.25mm.
If the customer already has LC cables and wants to know if they can use them with the QSA, the answer is no, because they do not fit mechanically. You need the SFP+ adapter first.
They could, of course, get one of the following products as an alternative (but probably not HP as they might be vendor locked) and then plug that into the Mellanox QSA transceiver and that would allow them to connect their existing LC cables to this SFP+.
http://www.cdw.com/shop/products/HP-X132-10G-SFP-LC-SR-Transceiver/1677936.aspx http://www.cdw.com/shop/products/HP-X132-10G-SFP-LC-SR-Transceiver/1677936.aspx
http://www.amazon.com/Prosafe-10GBASE-SR-Sfp-Lc-Gbic/dp/B0030L6ATQ http://www.amazon.com/Prosafe-10GBASE-SR-Sfp-Lc-Gbic/dp/B0030L6ATQ | null |
KB10573 | Alert - 130352 - VolumeGroupProtectionMightFailPostRecovery | This alert reports that the Recovery of the Volume Group might fail if the Recovery site does not support multisite protection. |
Note: This alert will be retired in a future release. Make sure you are running the latest version to avoid retired alerts.
This Nutanix article provides information required for troubleshooting the alert VolumeGroupProtectionMightFailPostRecovery for your Nutanix cluster.
Alert overview: The alert VolumeGroupProtectionMightFailPostRecovery is generated when the remote cluster to which replication happens does not support multisite protection.
Sample alert
Block Serial Number: 16SMXXXXXXXX
Output messaging
[
{
"Check ID": "Volume Group Protection might fail post Recovery"
},
{
"Check ID": "The remote cluster to which replication is happening does not support multisite protection."
},
{
"Check ID": "Upgrade the remote Prism Element to AOS 5.19 or above, which supports multisite protection."
},
{
"Check ID": "After recovery to the specified remote site, Volume Group protection might fail."
},
{
"Check ID": "A130352"
},
{
"Check ID": "Volume Group Protection Might Fail Post Recovery"
},
{
"Check ID": "After recovery to remote availability zone {remote_availability_zone_name}, Volume Group protection might fail because remote cluster {remote_cluster_name} does not support multisite protection."
}
] | If the Remote (Recovery) cluster is running an AOS version that is less than 5.19 (STS) and using a multisite configuration, it is highly advisable to upgrade the cluster to 5.19 (STS) or above. Information on Long-Term Support (LTS) or Short-Term Support (STS) releases can be found in KB-5505 http://portal.nutanix.com/kbs/5505.If the above steps do not resolve the issue, contact Nutanix Support http://portal.nutanix.com. |
KB13565 | Nutanix Files - Quota limit usage is not calculate correctly | Nutanix Files - Quota limit usage is not calculate correctly. | Identity Mapping (IDMAP) is the mechanism with mapping between Windows security identifiers (SIDs) to Unix/Linux UIDs and GIDs.For further information on IDMAP, please see this link. https://www.samba.org/~ab/output/htmldocs/Samba3-HOWTO/idmapper.htmlThe UID for specific user is generated based on range number, rangesize, and so on.Each FSVM creates autorid.tdb independently and it assigns the range number for each domains. When winbindd is started, it initiates a request to scan the trusted domain list. Based on the order of the responses, winbindd will allocate range numbers for each domains. If there is a difference in the order of responses, the range numbers can be different on each FSVM.This may cause generated UID on each FSVM mismatch. As a result, the user's individual usage does not reflect the actual usage and quota limit usage is not calculate correctly.For example, we can see the SID for a testuser10 when run the Get-ADUser command with AD joined Windows machine.
PS C:\Users\Administrator> Get-ADUser testuser10
We can see the SID for testuser10 on FSVMs and it's matching.
nutanix@NTNX-xx-xx-xx-131-A-FSVM:~$ allssh 'sudo wbinfo -n testuser10'
However, the UID is different on one of the FSVMs.
nutanix@NTNX-xx-xx-xx-xx-A-FSVM:~$ allssh 'sudo wbinfo -S S-1-5-21-3481467373-1474693003-1853401515-1119'
Since we query the actual usage, we can see quota limit can be exceeded.
nutanix@NTNX-xx-xx-xx-132-A-FSVM:~$ afs quota.get_user_quota HOME03 [email protected]
This is because the TLD contains folders and files that have two different UIDs and GIDs.
nutanix@NTNX-xx-xx-xx-132-A-FSVM:~$ ls -la /zroot/shares/a1a5db4d-4692-48fd-8ed0-ba8b6619fe95/:e1a82f51-74a3-4869-958d-b2d597d3ebc9/6d12e831-1ea3-4b57-977c-125a81c4a4f6/testuser10.V6
Also, quota usage report is not correct.
nutanix@NTNX-xx-xx-xx-131-A-FSVM:~$ allssh 'afs quota.user_report testuser10'
| Nutanix Engineering is aware of the issue and will add support for the distributed autorid feature in Nutanix Files 4.2. However, this feature is enabled only when deploying a new file server on Nutanix Files 4.2 and is disabled on an upgraded file server by default. Engineering will provide steps to fix autorid.tdb and a script to correct UIDs.
KB14198 | UpdateVmDbState task can often be seen on AHV clusters | UpdateVmDbState task can often be seen on AHV clusters | UpdateVmDbState task can often be seen on AHV clusters. Depending on AOS versions, it can be found on the Tasks page or in the "VM tasks" tab in Prism Element. This task often appears for VMs that have NGT enabled. | This internal task does not affect the functionality and can be safely ignored.
""Title"": ""Cluster Expansion fails if Nutanix cluster is configured to use vLANs"" | null | null | null | null |
KB10111 | Alert - A130124 - Metro availability Pre-Checks failed | Investigating MetroAvailabilityPrechecksFailed issues on a Nutanix cluster. | This Nutanix article provides the information required for troubleshooting the alert MetroAvailabilityPrechecksFailed for your Nutanix cluster.
Alert Overview
The Metro Availability Prechecks Failed alert can be generated if a Metro availability operation could not be started.
Sample alert
Block Serial Number: 16SMXXXXXXXX
Output Messaging
[
{
"130124": "Metro availability operation failed",
"Check ID": "Description"
},
{
"130124": "Check the alert message for the reason of failure",
"Check ID": "Causes of failure"
},
{
"130124": "Resolve the issue as stated in the alert message and retry the Metro operation. If the issue persists contact Nutanix Support.",
"Check ID": "Resolutions"
},
{
"130124": "Metro availability operation could not be started.",
"Check ID": "Impact"
},
{
"130124": "A130124",
"Check ID": "Alert ID"
},
{
"130124": "Metro Availability Prechecks Failed",
"Check ID": "Alert Title"
},
{
"130124": "Prechecks for {operation} failed for the protection domain {protection_domain_name} to the remote site {remote_name}. Reason: {reason}.",
"Check ID": "Alert Message"
}
] | The failure arises from an operation at the host level, so a single specific cause cannot be pre-determined. If the check fails, consider engaging Nutanix Support.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, refer to KB 2871 https://portal.nutanix.com/kb/2871.
Collect the NCC health check bundle via CLI using the following command:
nutanix@cvm$ ncc health_checks run_all
Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, refer to KB 2871 https://portal.nutanix.com/kb/2871.
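As a hedged example (the log path is standard, but the exact message text may vary by AOS version), the {reason} string reported in the alert can often be located in the Cerebro logs across the CVMs:
# Search recent Cerebro logs on all CVMs for Metro precheck failure messages.
nutanix@cvm$ allssh 'grep -i "precheck" ~/data/logs/cerebro*INFO* 2>/dev/null | tail -5'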
Attaching Files to the Case
Attach the files at the bottom of the support case on the support portal. If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/1294.
Requesting Assistance
If you need assistance from Nutanix Support, add a comment to the case on the support portal asking for Nutanix Support to contact you. You can also contact the Support Team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers.
Closing the Case
If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case.
KB8223 | NCC Health Check: key_manager_checks | Introduced in NCC 3.9.3, the health checks sed_key_availability_check and sw_encryption_key_availability_check will check if the cluster disk passwords and the software encryption keys (respectively) are missing in the key management server configured on the Nutanix cluster. Additionally, local_key_manager_unsafe_mode_check will check for configurations where Local Key Manager is configured on a cluster with less than 3 nodes. | The NCC health checks sed_key_availability_check and sw_encryption_key_availability_check will check if the cluster disk passwords and the software encryption keys (respectively) are missing in the key management server configured on the Nutanix cluster. These plugins were introduced in NCC 3.9.3. The NCC health check local_key_manager_unsafe_mode_check will check for configurations where Local Key Manager is configured on a cluster with less than 3 nodes. This plugin was introduced in NCC 4.0.0.
These three checks are not hypervisor-specific and will only run on clusters running AOS 5.6+ (i.e. clusters that support the software encryption feature).
Running the NCC Check
The checks can be run as part of a complete NCC by running:
nutanix@cvm$ ncc health_checks run_all
Or individually as:
nutanix@cvm$ ncc health_checks key_manager_checks sed_key_availability_check
nutanix@cvm$ ncc health_checks key_manager_checks sw_encryption_key_availability_check
nutanix@cvm$ ncc health_checks key_manager_checks local_key_manager_unsafe_mode_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run once a week and will generate an alert after a single failure.
Interpreting the check results
1. sed_key_availability_check
If the check results in a PASS, the KMS has all the required disk keys. No action needs to be taken.
Running : health_checks key_manager_checks sed_key_availability_check
If one or more disk keys go missing from the KMS, the check will result in a FAIL.
Running : health_checks key_manager_checks sed_key_availability_check
2. sw_encryption_key_availability_check
If the check results in a PASS, the KMS has all the required keys. No action needs to be taken.
Running : health_checks key_manager_checks sw_encryption_key_availability_check
If one or more encryption keys go missing from the KMS, the check will result in a FAIL.
Running : health_checks key_manager_checks sw_encryption_key_availability_check
3. local_key_manager_unsafe_mode_check
If the check results in a PASS, the local KMS is configured correctly. No action needs to be taken.
Running : health_checks key_manager_checks local_key_manager_unsafe_mode_check
If local KMS is used on a 1-node or 2-node PE (Prism Element) cluster, the master key is not stored securely, and the check will result in a FAIL.
Running : health_checks key_manager_checks local_key_manager_unsafe_mode_check
Output messaging
sed_key_availability_check
sw_encryption_key_availability_check
local_key_manager_unsafe_mode_check
[
{
"101071": "Check to see if the KMS server actually has the expected passwords for the SED disks.",
"Check ID": "Description"
},
{
"101071": "If a 3rd party KMS gets restored from a backup or other management operation incorrectly, it may have missing keys.",
"Check ID": "Causes of failure"
},
{
"101071": "Contact Nutanix and 3rd party key manager support.",
"Check ID": "Resolutions"
},
{
"101071": "Cluster will have data unavailability if rebooted.",
"Check ID": "Impact"
},
{
"101071": "A101071",
"Check ID": "Alert ID"
},
{
"101071": "SED keys from {kms_name} are unavailable",
"Check ID": "Alert Smart Title"
},
{
"101071": "SED Keys Unavailable",
"Check ID": "Alert Title"
},
{
"101071": "SED keys for one or more disks are not available from external key manager {kms_name}.",
"Check ID": "Alert Message"
},
{
"101071": "111075",
"Check ID": "Check ID"
},
{
"101071": "Check to see if the KMS server actually has the expected passwords for the container encryption keys.",
"Check ID": "Description"
},
{
"101071": "If a 3rd party KMS gets restored from a backup or other management operation incorrectly, it may have missing keys.",
"Check ID": "Causes of failure"
},
{
"101071": "Contact Nutanix and 3rd party key manager support.",
"Check ID": "Resolutions"
},
{
"101071": "Cluster will have data unavailability if rebooted.",
"Check ID": "Impact"
},
{
"101071": "A111075",
"Check ID": "Alert ID"
},
{
"101071": "SW Encryption Keys from {kms_name} are unavailable",
"Check ID": "Alert Smart Title"
},
{
"101071": "SW encryption keys unavailable for cluster",
"Check ID": "Alert Title"
},
{
"101071": "SW encryption keys for one or more containers are not available from external key manager {kms_name}.",
"Check ID": "Alert Message"
},
{
"101071": "111078",
"Check ID": "Check ID"
},
{
"101071": "Local Key Manager is configured on a cluster with less than 3 nodes.",
"Check ID": "Description"
},
{
"101071": "Local Key Manager needs at least 3 nodes in a cluster to securely store its master key.",
"Check ID": "Causes of failure"
},
{
"101071": "Register to Prism Central and switch to Prism Central based key manager or switch to an External Key Manager.",
"Check ID": "Resolutions"
},
{
"101071": "Local Key Manager's master key is not stored securely.",
"Check ID": "Impact"
},
{
"101071": "A111078",
"Check ID": "Alert ID"
},
{
"101071": "Cluster's local KMS is running in unsafe mode",
"Check ID": "Alert Title"
},
{
"101071": "Cluster's local KMS is configured on a cluster with less than 3 nodes.",
"Check ID": "Alert Message"
}
] | For details on configuring data-at-rest encryption, see AOS Security Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v5_17:wc-security-data-encryption-wc-c.html.
Investigating a FAIL
1. For sed_key_availability_check and sw_encryption_key_availability_check
Engage the third-party Key Management Software's support team to understand why the keys are missing and recover them as soon as possible. Avoid rebooting or shutting down anything in the Nutanix infrastructure until this is resolved. If the keys are not recoverable, or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/.
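Once the key manager vendor confirms the keys have been restored, it may help (as a hedged suggestion, reusing the commands shown earlier in this article) to re-run only the failing checks and confirm the keys are reachable again before performing any reboot or maintenance:
nutanix@cvm$ ncc health_checks key_manager_checks sed_key_availability_check
nutanix@cvm$ ncc health_checks key_manager_checks sw_encryption_key_availability_check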
2. For local_key_manager_unsafe_mode_check
Nutanix recommends registering this cluster to Prism Central and switching to the Prism Central-based key manager, or switching to an External Key Manager.
KB15099 | NX-G8 node - Kernel Panic due to CPU internal execution error | This article describes a situation where an NX-G8 node crashes frequently due to a kernel panic caused by processor errors. | It has been seen in the field that NX-G8 nodes may experience kernel panics and crash frequently. Here are the steps to identify the issue:
Check whether crash dumps have been generated under /var/crash on the AHV host:
[root@AHV ~]# ls -lahtr /var/crash/
Check the vmcore-dmesg.txt file in these directories; errors like the following could be seen:
[351614.650736] {1}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 1
or
[ 325.289622] mce: [Hardware Error]: CPU 10: Machine Check Exception: 5 Bank 0: f2000040000f040a
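As a hedged shortcut (plain grep usage, not part of the original steps), all saved crash dumps on the host can be scanned at once for these hardware-error signatures:
# Scan every saved crash dump for hardware error / MCE lines.
root@AHV# grep -iE "Hardware Error|Machine Check Exception" /var/crash/*/vmcore-dmesg.txt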
Check the IPMI SEL log; processor configuration errors are displayed:
root@AHV # ipmitool sel list
Download the System Crash Dump file (Cdump.txt) from IPMI:
Search for ppin in Cdump.txt and check whether the value is "0x42a513c0e7d51607":
"crash_data": {
| Note: HW engineering identified the ppin value 0x42a513c0e7d51607 as a CPU internal execution error.
FaultCPU= ICX
If the BIOS version is WU20.005, upgrade to the latest version WU40.002 and see if the issue still happens. If the node still crashes after the upgrade, collect HW logs:
KB-2893 https://portal.nutanix.com/kb/2893 Use of the collect_oob_logs.sh script and TS Dump collection from BMC
KB-9528 https://portal.nutanix.com/kb/9528 Hardware: Failure Analysis - 7 Day HW Log Collection Requirement
KB-9529 https://portal.nutanix.com/kb/9529 HW: Additional data collection for specific part types before removing for replacement
Replace the node and initiate a Failure Analysis (FA). |
KB6734 | AHV host management network troubleshooting | This KB provides troubleshooting commands to aid in situations where the AHV Host Management network has connectivity issues. | This article is intended to aid SREs or customers in situations where only the AHV management network is reported to have connectivity issues, with no issues reported for the User VMs or the CVM. This may be visible via Alerts in Prism, the host being marked degraded, or intermittent ping drops to the Management IP. An AHV upgrade might also get stuck on the "Installing bundle on the hypervisor" step due to a host connectivity issue.
By default, AHV Host Management is represented by port br0 on the bridge also named br0 in the ovs-vsctl show output. Below is an example of the port representation:
Bridge "br0" | Confirm if the AHV management IP is reachable by pinging from its own CVM and also another CVM. If the issue is seen only from the external CVMs, then the problem most likely has something to do with the external network (including AHV configuration like teaming modes). Perhaps point 2 and 3 are of use here. If you notice connectivity issues from the host's CVM to the host external IP, then all points will apply.First, understand the VLAN used. In above, port 'br0' and port 'vnet0' are on the same VLAN. Port br0 represents the AHV management port (external IP) and vnet0 represents the CVM's external network. (For additional info, refer to KB 2090 http://portal.nutanix.com/kb/2090.)Check the team modes. For example, in below configuration, br0 bridge has an active-backup mode. If LACP is used, check the AHV network doc and verify customer's switch configuration (Refer to KB 3263 http://portal.nutanix.com/kb/3263):
nutanix@cvm:~$ manage_ovs show_uplinks
4. Verify that no NIC rx/tx errors are observed. If you see an increase in rx/tx errors, troubleshoot that issue first. Refer to KB 1381 http://portal.nutanix.com/kb/1381.
5. Verify you are not hitting other known issues (KBs 3261 http://portal.nutanix.com/kb/3261, 1381 http://portal.nutanix.com/kb/1381).
6. Confirm that no IP address (e.g. static or DHCP) is configured on the AHV host physical interfaces eth0, eth1, eth2, eth3, since these are being used within bonds. You can check this with the following commands (on the AHV hosts):
root@AHV# ifconfig eth0 ; ifconfig eth1 ; ifconfig eth2 ; ifconfig eth3
7. If the problem is only noticed on one host or a subset of hosts, you could compare the configuration by running this on any CVM and validating whether you see a difference between the working and non-working hosts:
root@AHV# head -n 20 /etc/sysconfig/network-scripts/ifcfg-*
8. If active-backup is used, you could change the active slave and check. You could also compare with a working host (if there is one) and check whether the active slaves are on different switches.
Example: to check and change the active slave interface on the non-working node:
root@AHV# ovs-appctl bond/show # note the "active slave"
If it is eth2, try changing it to the other member in the bond, for example eth0.
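A hedged sketch of that change is shown below. The bond name is assumed to be "br0-up" here; confirm the actual bond name from the "ovs-appctl bond/show" output above before running it.
# Temporarily switch the active slave of the br0 bond, then confirm the change took effect.
root@AHV# ovs-appctl bond/set-active-slave br0-up eth0
root@AHV# ovs-appctl bond/show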
9. We can isolate where the packets are reaching, or not reaching, within the AHV host using tcpdump.
The example MAC address 01:02:03:04:05:06 here should be replaced by the MAC address in use by the AHV br0 interface, which can be identified with:
root@AHV# ifconfig br0
Here are several different tcpdump captures on different interfaces within the path on the AHV host:
root@AHV# tcpdump -s 96 -vni br0 ether host 01:02:03:04:05:06 and not tcp
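As a hedged addition (the interface name eth2 is only an example; use the active slave identified earlier), capturing the same traffic on the physical uplink shows whether packets are reaching the host from the switch at all:
# Capture the same host MAC traffic on the active physical uplink.
root@AHV# tcpdump -s 96 -vni eth2 ether host 01:02:03:04:05:06 and not tcp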
10. It may be informative to examine the L2 fdb of the br0 bridge by capturing the output of the following command when ping is working, and again when it has stopped working (on the AHV host):
root@AHV# ovs-appctl fdb/show br0
|
KB13725 | cassandra-token-range-skew-fixer start/stop can prevent node from being added back to metadata ring | cassandra-token-range-skew-fixer start/stop can prevent node from being added back to metadata ring | After starting a cassandra-token-range-skew-fixer operation as per the instructions in the KB 1342 https://portal.nutanix.com/kb/1342, the operation may be aborted by the user using the "ncli cluster cassandra-token-range-skew-fixer stop" command.
Under some circumstances, after the operation is aborted, the Cassandra state changes may not handled gracefully, leaving Cassandra crashing on the node to be added, and dynamic ring changer not being able to add the node back to the Cassandra ring.Reasons currently seen for the ring aborts which have triggered this issue are:
A performance issue introduced by the ring change, as warned in the KB 1342 https://portal.nutanix.com/kb/1342Node being added as part of ring skew has a conflicting token with another node already in the ring - ENG-503895 https://jira.nutanix.com/browse/ENG-503895
Identifying the issue:
Check the /home/nutanix/.bash_history and /home/nutanix/.nutanix_history to confirm whether there have been cassandra-token-range-skew-fixer start/stop operations:
nutanix@cvm$ allssh "grep cassandra-token-range-skew-fixer /home/nutanix/.*history"
There will be a dyn_ring_change_info entry in zeus_config for the node which has not been added back to the ring:
nutanix@cvm$ zeus_config_printer |grep -A13 dyn
/home/nutanix/data/logs/dynamic_ring_changer.INFO logs on the node performing the ring change indicate the node addition cannot complete due to no token being assigned yet:
F20221005 09:39:10.941969Z 321241 dynamic_ring_change.cc:3882] Node add cannot proceed as no token has been assigned yet.
/home/nutanix/data/logs/cassandra_monitor.INFO on the node that can't be added to the ring will log the following repetitive FATAL message:
F20221005 09:39:10.941969Z 432111 cassandra_ring_fixer.cc:1964] Check failed: config_proto.has_ring_fix_state()
In /home/nutanix/data/logs/cassandra_monitor.FATAL on the node that can't be added, messages about the ring_fix_state process crashing can be observed:
F0314 cassandra_ring_fixer.cc:1782] Check failed: ring_fix_state.svm_and_token().size() > 0 (0 vs. 0))
Note: From the CVM, use the "nodetool -h0 ring" command to see which node is not listed, or map the service_vm_id_being_added from step 4 to the node using the output of "svmips -d" | Workaround:
Cancel the node addition using the script from KB 5747 https://portal.nutanix.com/kb/5747:
nutanix@cvm$ /home/nutanix/cluster/bin/cancel-node-addition.py --node_ip=<insert_limbo_cvm_ip_address>
Check that Cassandra on the node that is not part of the ring is stable - this is indicated by the absence of the FATAL messages in the cassandra_monitor.INFO logs mentioned in the description. Then add the node back to the Cassandra ring using the Prism UI or ncli:
nutanix@cvm$ ncli host enable-metadata-store id=<svm ID of the node to be added> |
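As a hedged verification step (using the same nodetool command referenced in the description), confirm the node is listed in the metadata ring and reported as Up/Normal once the addition completes:
nutanix@cvm$ nodetool -h0 ring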
KB15995 | Nutanix Files: FSVM is consistently experiencing crashes, resulting in degraded performance | An FSVM is consistently experiencing crashes, resulting in degraded performance, and Stargate shows disk IO errors due to a network problem. | File Server Virtual Machines (FSVMs) may randomly crash. Crash dumps keep being generated on an FSVM when it is stunned due to high IO wait times (seen in mpstat). Below is an example of what the core dump contained in this scenario:
root@NTNX-xx-xx-xx-xx-A-FSVM:/home/nutanix/data/cores/127.0.0.1-2023-03-30-19:20:44# gunzip vmcore-dmesg.txt.gz
When the FSVM goes into a hung state, high IO wait times are seen in mpstat.
nutanix@NTNX-10-24-xx-xx-A-FSVM:~$ less data/logs/sysstats/mpstat.INFO
During the FSVM hung state, check the Stargate logs for timeout errors on egroup read and write operations:
nutanix@NTNX-19FM6H130137-A-CVM:10.24.xx.xx:~$ less data/logs/stargate.ERROR | grep -i "ktimeout"
In the QEMU logs of the affected FSVM, only the following entries are seen during the time of the issue:
2023-10-12T18:02:46.610426Z frodo[27669]: LOG frodo/vdev_scsi.c:vdev_scsi_vq_start():183: Started VQ 0
Disk read/write delays are logged on the CVM hosting the affected FSVM:
nutanix@NTNX-18SM6Hxxxx-C-CVM:10.1.xx.xx:~/data/logs$ allssh 'less ~/data/logs/curator.INFO | grep -i "Stargate reports it can handle"'
JukeBox errors continue to appear in the Stargate logs:
nutanix@NTNX-18SM6H3xxx-A-CVM:10.1.xx.xx 6:~/data/logs$ allssh 'grep -i "Sending NFS3ERR_JUKEBOX" ~/data/logs/stargate.*'
A large number of TCP retransmissions is observed in the cluster, all pointing to the CVM of the host on which the affected FSVM is running:
nutanix@NTNX-18Sxxx-C-CVM:10.1.xx.xx :~/tmp$ allssh "netstat -s | egrep -i 'csum|Tcp:|segment'"
| The issue stems from a network problem on the underlying host that disrupts Stargate operations, resulting in packet drops for IO operations. These packet drops lead to elevated IO wait times for the FSVM, ultimately causing it to enter a crash loop. You will observe consistent packet drops for the affected CVM:
nutanix@NTNX-18SM6xxxx-C-CVM:10.1.xx.xx :~/tmp$ allssh ' dmesg -T | grep "IPTables Packet Dropped"| tail'
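As a hedged isolation step (the interface names eth0-eth3 are assumptions; list the actual uplinks with "manage_ovs show_uplinks" first), checking the physical NIC counters on the affected host can help pin the packet loss to a cable, SFP, or switch port:
# Dump error/drop counters for each physical NIC on the AHV host.
root@AHV# for nic in eth0 eth1 eth2 eth3; do echo "== $nic =="; ethtool -S $nic | egrep -i "crc|err|drop"; done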
To address the problem, isolate the issue at the underlying host and CVM (Controller VM) levels to further investigate and resolve the packet drops. Refer to ONCALL-16303 https://jira.nutanix.com/browse/ONCALL-16303 and ONCALL-16385 https://jira.nutanix.com/browse/ONCALL-16385 for insights into the specific nature of the problem.
KB9138 | LCM on INSPUR - BIOS-BMC Compatibility matrix | BMC can only be upgraded from specific BMC versions. The BIOS upgrade is dependent on the BMC version. | BMC can only be upgraded from specific BMC versions. The BIOS upgrade is dependent on the BMC version.
Inmerge 900 series:
[
{
"Update version": "BMC 4.25.7",
"Compatibility": "Upgrade from previous versions 4.25.2, 4.25.5 and 4.25.6"
},
{
"Update version": "BIOS 4.1.13",
"Compatibility": "Upgrade is dependent on BMC 4.25.7 and supported from BIOS version 4.1.11"
},
{
"Update version": "Update version",
"Compatibility": "Compatibility"
},
{
"Update version": "BMC 4.04.1",
"Compatibility": "Upgrade from previous 3.19.0, 4.3.0"
},
{
"Update version": "BIOS 4.1.9",
"Compatibility": "Upgrade from previous version 4.1.6, BMC should be any of the following versions 3.19.0,4.3.0,4.04.1"
}
] |