KB12744
NUS card missing on the license portal or the message “No compatible AOS clusters were found” is displayed when attempting to apply Nutanix Unified Storage licenses
A gap between the availability of Nutanix Unified Storage (NUS) licenses and the software that supports them results in a license violation until a software update is available. The products remain fully functional.
The new Nutanix Unified Storage portfolio provides customers with more flexibility and simplicity than ever before. To achieve this, the portfolio introduces a new set of licenses that entitle customers to use the various products. The new Unified Storage licenses will be available to purchase from Feb 15, 2022. To accept the new licenses, Prism Central 2022.3 and AOS 6.1.1 or later are required. AOS 6.1.1 is currently scheduled for GA between March and April 2022. This leaves a short gap during which Nutanix Unified Storage licenses can be purchased but cannot be applied. When attempting to apply Nutanix Unified Storage licenses during this period, if a compatible Prism Central instance is not found, the NUS card will not appear in the license portal. If a compatible Prism Central instance is found but a compatible AOS cluster is not, the message “No compatible AOS clusters were found” is displayed when Nutanix Unified Storage is selected. In either case, the products remain fully functional and continue to operate normally.
Updating to Prism Central 2022.3 and AOS 6.1.1 or later, when available, will enable customers to apply the Nutanix Unified Storage licenses. Invalid license violation warnings will no longer be displayed.
KB13938
File Analytics - Scenarios in which unexpected Read events may be seen
This article describes scenarios that generate unexpected Read events in File Analytics (FA) when navigating to a Files share.
In the FA console -> Audit Trails -> Search by User or Client IP, you may notice unexpected Read Operations events. Customers confirm that they never opened those files, and the FSVM and FAVM logs do not include such Read events. The scenarios below can create unexpected Read operations events in the FA Audit Trails when a user merely browses the Nutanix Files share without opening the reported files.
Scenario 1: MIME or file type check (4K read request) when browsing to a Files share directory.
Scenario 2: The Files share folder contains a Windows link file (.lnk suffix).
Scenario 3: MS Windows Explorer has the "Creator" column enabled.
Scenario 4: MS Windows Explorer is in the "Details" view, especially for files with .exe, .msc, or .avi suffixes. A Wireshark packet capture shows that a 4096-byte read request is issued for these files when browsing to the Files share folder.
Scenario 5: MS Windows Explorer reads the first 4096 bytes and 3195 bytes from the end of the file, both as the logged-in user, regardless of the view (List, Details, or Small icons). A lab-test Wireshark packet capture is attached for reference:
Scenario 1: Fixed in Files (AFS) 4.0 via ENG-392213 https://jira.nutanix.com/browse/ENG-392213, which filters out read requests smaller than 4K.
Scenario 2: This is expected MS Windows behavior.
Scenario 3: This is expected MS Windows behavior. ENG-340736 https://jira.nutanix.com/browse/ENG-340736 reported this scenario.
Scenario 4: This is expected MS Windows behavior. There is no way to differentiate between user-initiated and system-initiated reads in this case.
Scenario 5: ENG-518787 https://jira.nutanix.com/browse/ENG-518787 will enhance FA by adding more criteria to identify and filter out reads from the end of the file, preventing this unexpected Read event. However, being 100% accurate will be difficult.
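The Scenario 1 mitigation (filtering out read requests smaller than 4K) can be sketched as follows. This is a minimal illustration only; the event structure and field names are hypothetical, not the actual FA audit schema.

```python
MIN_AUDITED_READ_BYTES = 4096  # reads below this are treated as browse-time probes

def filter_probe_reads(events):
    """Keep non-Read events and Read events of at least 4 KiB.

    Hypothetical sketch of the sub-4K filtering described above;
    the real audit pipeline uses its own event schema.
    """
    return [
        e for e in events
        if e["op"] != "Read" or e["bytes"] >= MIN_AUDITED_READ_BYTES
    ]

events = [
    {"op": "Read", "path": "a.lnk", "bytes": 1024},    # browse-time probe: dropped
    {"op": "Read", "path": "b.docx", "bytes": 65536},  # genuine user read: kept
    {"op": "Write", "path": "c.txt", "bytes": 10},     # non-Read event: kept
]
kept = filter_probe_reads(events)
```

Only the sub-4K Read is filtered out; writes and larger reads still reach the Audit Trail.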
KB7646
Modify One-node backup cluster snapshot retention
This KB explains how one-node backup clusters retain expired snapshots.
Nutanix clusters keep one expired snapshot to avoid a full replication, by default. For one-node backup clusters ( SNRT https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-v6_7:wc-cluster-replication-single-node-c.html), the logic is different and the retention policy is to retain 5 expired snapshots, by default. Refer to ROBO Deployment and Operations https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2083-ROBO-Deployment:single-node-backup-target.html guide for more information on one-node backup targets.
If you do not wish to retain 5 snapshots in your one-node backup cluster, engage Nutanix Support at https://portal.nutanix.com and reference this knowledge base article.
KB5890
Identifying the issue - All disks are offline and all services are down on a CVM with Medusa ERROR: Cassandra gossip failed
This KB describes the Cassandra gossip failure seen when the SAS addresses do not match those in hardware_config.json.
This is a scenario in which:
The CVM is up, but all services on that specific CVM are down, reporting Medusa ERROR: Cassandra gossip failed.
All disks are offline on that specific CVM.
Cluster status shows all CVMs as up, but services may be down on the affected CVM: nutanix@CVM:~$ cluster status|grep -v UP
Hades might be in a crash loop as follows: ERROR hades_utils.py:48 Failed to get Hades config for svm id 9xxxxx with error no node
The output of list_disks might return the following stack trace: Traceback (most recent call last):
Check whether the SAS addresses match those in hardware_config.json (collect hardware_config.json from that specific node): sudo ~/cluster/lib/lsi-sas/lsiutil -s
If the SAS addresses do not match the ones in hardware_config.json, follow the procedure below:
Download the Phoenix ISO from the Nutanix Support Portal https://portal.nutanix.com.
Reboot the host to Phoenix using IPMI's KVM console.
Run the following command to make sure that Phoenix has rebuilt the md-arrays: mdadm --assemble --scan
Create temporary mount point folders: mkdir /mnt/rootA; mkdir /mnt/rootB
Mount the CVM boot partitions: mount /dev/md0 /mnt/rootA; mount /dev/md1 /mnt/rootB
Create the new hardware_config.json with the right SAS addresses using the layout_finder.py script: python /phoenix/layout/layout_finder.py local # For Phoenix 4.4.x and later: python /usr/bin/layout_finder.py local
Copy the new hardware_config.json to both boot partitions: cp hardware_config.json /mnt/rootA/etc/nutanix; cp hardware_config.json /mnt/rootB/etc/nutanix
Unmount the Phoenix ISO and reboot to the host: python /phoenix/reboot_to_host.py
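The mismatch check in the Description reduces to comparing two address sets. A minimal sketch follows; the JSON layout and field names are illustrative assumptions, not the actual hardware_config.json schema:

```python
import json

def find_mismatches(reported, configured):
    """Return SAS addresses that appear in only one of the two sources."""
    return sorted(set(reported) ^ set(configured))

# Illustrative config snippet (not the real hardware_config.json layout)
hardware_config = json.loads(
    '{"node": {"sas_addresses": ["500304801f2a3b00", "500304801f2a3b01"]}}'
)
# Addresses as reported by the controller, e.g. parsed from lsiutil output
reported = ["500304801f2a3b00", "500304801f2a3bff"]

mismatch = find_mismatches(reported, hardware_config["node"]["sas_addresses"])
# A non-empty result means hardware_config.json must be regenerated as above.
```

The symmetric difference surfaces both the stale configured address and the new reported one.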
KB15538
LCM Inventory setup failed. Reason: None
Unable to initiate inventory operation: Failed to perform the operation as Request to run LCM inventory failed with root task UUID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx and error message (Inventory setup failed. Reason: [None])
In rare cases, you may see the following error message when attempting to run an LCM inventory:
Unable to initiate inventory operation: Failed to perform the operation as Request to run LCM inventory failed with root task UUID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx and error message (Inventory setup failed. Reason: [None])
This was found when the LCM version is lower than 2.6.1 and LCM automatically attempts to update itself to the latest version (2.6.1); the error above then appears and the LCM process fails. In some cases, there is no lcm_ops.out log at all, because the LCM inventory never initiated. In other cases, an lcm_ops.out log generated before the issue may exist, but since LCM inventory does not initialize, it is unlikely to contain any entries related to this issue. The best signature for this issue is therefore the error message above. Once the Enable HTTPS box was unchecked and the LCM upgrade and subsequent inventory completed, the lcm_ops.out log was created.
Workaround: Go to LCM > Settings, de-select the Enable HTTPS check box, and click Save. This will allow LCM to update to the latest version and then allow the inventory process to complete. If the above workaround does not resolve your issue, or if there are any reservations about using this solution, consult the Life Cycle Manager Troubleshooting Guide - KB-4409 https://portal.nutanix.com/kb/000004409. Since no logs were collected the last time this issue was resolved, please collect logs and file either a Tech Help or an ENG if this issue is positively identified.
KB13078
How to configure IPMI Active Directory Authentication for G8 NX- platforms
This document describes how to set up Active Directory based authentication (LDAP) for IPMI access on G8 series NX- Platform
This document describes how to set up Active Directory based authentication (LDAP) for IPMI access on the G8 series NX- Platform.
The Active Directory authentication method on G8 differs from G5/G6/G7 (prior to G8, see KB-2860 https://portal.nutanix.com/kb/2860), so you must configure the Remote Group (DistinguishedName) in the IPMI. Also, due to a known issue, login may fail if the user is a member of multiple large groups, so it is recommended that the user bind with a single group.
Gather the DistinguishedName of the user's group, using the PowerShell command below or the GUI on the Active Directory server. Example: PS C:\Users\Administrator> Get-ADGroup -Filter '*' | where-object {$_.distinguishedname -like "*powerusers*"} | select name,SamAccountName, DistinguishedName
Prerequisites: Log in to the IPMI with the ADMIN account and make sure the settings below are configured properly. "Date and Time" (under Configuration > BMC Setting). "DNS server IP" (under Configuration > Network > Advanced Setting).
Perform the following steps to set up Active Directory authentication (LDAP) for IPMI access:
Log in to the IPMI with the ADMIN account.
Expand "Configuration" > "Account Services" and select "Directory Service" in the right pane.
Under "Setting", enable "Active Directory".
Under "Active Directory" > "Server Address", click "Add", enter the Active Directory information (Prefix, IP or Domain, and Port), and click "Save".
Under "Active Directory" > "Rules", click "Add", enter the rules (Role, Remote User (name), and Remote Group (DistinguishedName)), and click "Save". Note: Make sure to copy the full DistinguishedName string to the Remote Group.
Click "Submit" to save the above settings.
Log back in as the Active Directory user configured above.
Known issues: Due to a known issue in Supermicro BMC (8.0.6 and 8.1.7), if the DistinguishedName and sAMAccountName are not the same, the user will fail to log in. It is recommended to change the sAMAccountName to match the DistinguishedName. The Active Directory user password must be shorter than 19 characters. Both issues are fixed in BMC 8.1.9.
Use LCM to upgrade the BMC to the latest version or follow KB-12447 https://portal.nutanix.com/kb/12447 to upgrade the BMC manually.
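The BMC 8.0.6/8.1.7 known issue above can be pre-checked by comparing the CN component of the user's DistinguishedName with the sAMAccountName. A minimal sketch (sample account names are hypothetical):

```python
def cn_from_dn(dn):
    """Extract the CN value from an LDAP DistinguishedName string."""
    for part in dn.split(","):
        key, _, value = part.strip().partition("=")
        if key.upper() == "CN":
            return value
    return None

# Hypothetical user: login fails on affected BMC versions when these differ
dn = "CN=jsmith,OU=PowerUsers,DC=corp,DC=example,DC=com"
sam_account_name = "jsmith"
login_will_work = cn_from_dn(dn) == sam_account_name
```

If the values differ, rename the sAMAccountName to match the CN, or upgrade the BMC to 8.1.9 or later as described above.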
KB16156
Unable to update Distributed vSwitch due to orphaned VM
The DVS cannot be synced because an orphaned VM is configured to use the vSwitch.
In vCenter, if a VM is labelled as orphaned, it means that the VM storage is no longer connected to the host. The VM cannot be edited or reconfigured until the storage is reconnected to the host. At the host level, this also means that if the VM is configured to use port groups on the Distributed vSwitch, the vSwitch will not allow updates as the VM is actively using the port groups. To resolve this, we must remove the VM from the port groups on the vSwitch, however, since the VM is orphaned, we cannot edit it as its storage is inaccessible.
In vCenter, take a screenshot of the VM inventory confirming the name of the orphaned VM; that way, if the VM must be re-registered to the inventory, you know its name. Remove the VM from inventory. This makes the port groups on the vSwitch no longer in use and allows the vSwitch to sync. To add the VM back to the inventory, browse the datastore on the host, select the VMX file, and add the VM to the inventory.
KB16742
Cisco UCS - Imaging via Foundation Central may fail with Error "Failed to receive the first heart beat of the node"
This KB article describes an issue where imaging via Foundation Central may fail with the error "Failed to receive the first heart beat of the node", based on the Foundation Central UI and logs.
Identification of the issue:
1. The following message is seen on the UI: Failed to receive the first heartbeat of the node
2. In the Foundation Central logs (/home/nutanix/data/logs/foundation_central.out), you would see error messages like: 2024-04-17T22:08:08.721Z [Alloc='16 MiB' Sys='41 MiB' NumGC=179] hmp.go:819 [DEBUG] [Deployment="c3889af7-88cb-48b4-56b0-4a1c6566e8b0" HMType="INTERSIGHT"] [10/10] Polling for the first heart beat task
This issue happens due to an incorrect LACP configuration.
Steps to configure LACP:
Make sure there is no LACP configuration on the switch side, then perform node imaging using FC with LACP.
Enable LACP in the Cisco N9K switch configuration.
Use Nutanix KB https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0320000004H0vCAE to enable LACP fallback on the switch side after the deployment and to enable/disable/verify LACP on AHV hosts.
In the Prism UI, change the virtual switch bond type to 'Active-Active' (which is LACP) under "Network configuration > Virtual switch > Uplink Configuration > Bond Type".
KB16703
AHV upgrade may fail with the "Could not back up /etc/selinux/targeted/contexts/files/file_contexts.local.bin: Got passed new contents for sum" error while running Puppet.
AHV upgrade fails with the "Could not back up /etc/selinux/targeted/contexts/files/file_contexts.local.bin: Got passed new contents for sum" error while running Puppet.
AHV upgrade fails with the "Could not back up /etc/selinux/targeted/contexts/files/file_contexts.local.bin: Got passed new contents for sum" error while running Puppet.We see Puppet failure in the /var/log/upgrade_config.log: 24 Mar 04:20:46 INF Running puppet on upgrade The "Could not back up /etc/selinux/targeted/contexts/files/file_contexts.local.bin: Got passed new contents for sum" error is seen in the /var/log/upgrade_config-puppet.log: 2024-03-24 04:20:51 +0000 Puppet (err): Could not back up /etc/selinux/targeted/contexts/files/file_contexts.local.bin: Got passed new contents for sum {sha256}fb831c8731ffee193d5c42f0837b12c41737ccf0106f26c45af6493543def47e
Nutanix Engineering is investigating this issue - please collect the full log bundle covering the upgrade failure timeframe and post an update to ENG-647460 https://jira.nutanix.com/browse/ENG-647460.
KB13061
Reverse Cluster conversion on ESXi 7.0u3/8.0.c fails with Prism showing error "Finalizing conversion operations" at 80%
This KB describes an issue where reverse cluster conversion from AHV to ESXi 7.0u3/8.0.1c fails. The original conversion from ESXi to AHV completes successfully.
Scenario 1: Cluster conversion on ESXi 7.0u3 build 19193900 (Jan 2022) failed with AOS 5.20.2. Prism shows the error "Finalizing conversion operations" at 80%. If you are hitting this issue, you would see:
Conversion stuck with the message "Finalizing conversion operations" at 80%.
Virtual switch configuration deployment failed for vs0. This switch is created when the cluster is converted from AHV to ESXi.
All services on the node are down.
The following message in ~/data/logs/genesis.out: Failed to read zknode /appliance/logical/genesis/node_shutdown_priority_list with error 'no node'
The convert_cluster_status command would show: nutanix@cvm$ allssh convert_cluster_status
Scenario 2: If you are hitting this issue, you would see:
Cluster conversion to ESXi 8.0.1c failed with AOS 6.7.1. Prism shows the error "Finalizing conversion operations" at 80%.
Genesis log (~/data/logs/genesis.out) shows the error "Failed to complete shutdown token request".
Reverse conversion is stuck at 80%.
Hosts are UEFI Secure Boot enabled.
Cause: Reverse conversion for UEFI Secure Boot enabled hosts is not supported.
Resolution: To recover, follow the steps in the Solution section.
For AOS 6.7 and later, follow the steps below under Nutanix Support's supervision:
Log in to vCenter.
Search for the CVM (Controller VM) that was supposed to be up and running. If it is in an orphaned state, select 'Remove the vm from inventory'.
Log in (using SSH) to the host for which the conversion got stuck.
Go to the scratch directory: cd ~/scratch
Execute the command: ./reregister_cvm When this command runs, no output is printed to the terminal, and it takes about 6 minutes to complete.
To confirm that the command executed successfully, open the log file 'reregister_cvm.log', go to the last line (if you are using vi, Shift+g), and make sure no stack trace is present.
Go to vCenter. The CVM will be powered on within the next 1 or 2 minutes. If not, power it on manually.
After following the above steps, the conversion should continue. If any other CVM gets stuck on the same issue, repeat the above steps.
For AOS 5.20.4 and below, follow the steps below under Nutanix Support's supervision:
After seeing a failure on the cluster, log in to VMware vCenter.
Exit the host from maintenance mode.
Power on the CVM manually.
The conversion task is picked up automatically and starts on the node.
If the task gets stuck on any other node, follow steps 1 to 3 again to complete the conversion from AHV to ESXi.
The fix for this issue is released in AOS 5.20.4 and above.
KB14720
NDB - Listing and changing the TimeZone settings in NDB
How to list or change the TimeZone settings on an NDB server.
The NDB web interface does not provide a way to list or change the configuration of the TimeZone configured on the server. To list the current settings and to change them, use the era-server CLI.
Log in to the NDB server using SSH as the era user. If this is a High Availability NDB installation, ensure you are on the NDB server and not on one of the HA proxies.
Enter the era-server shell: [era@localhost ~]$ era-server
List the current timezone: era-server > era_server list timezone
To change the setting, run: era-server > era_server set timezone=<value>
Note: The time zone name can be obtained from the Timezone List https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
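The <value> passed to the set command must be a valid name from the tz database that the Wikipedia list above mirrors. Outside of NDB, a candidate name can be sanity-checked with the standard Python zoneinfo module:

```python
from zoneinfo import available_timezones

def is_valid_tz(name):
    """True if the name exists in the IANA tz database on this system."""
    return name in available_timezones()

valid = is_valid_tz("America/New_York")   # a real tz database identifier
invalid = is_valid_tz("Eastern Time")     # a display name, not an identifier
```

Validating before running era_server set avoids applying a name the server would reject.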
KB15420
LCM Inventory operation failing with Failed to get public key from zknode
This KB covers an issue where the LCM Inventory operation fails with "Failed to get public key from zknode" and Direct Upload fails with the "Could not read V2 public-key from Zookeeper" error signature.
Scenario 1: An LCM inventory operation on a connected-site cluster fails with the following error signatures:
Failed to get public key from zknode
Could not read V2 public-key from Zookeeper
ERROR 86770800 ergon_utils.py:903 Failing task [1169ebfd-7ea7-4367-5871-ece0bf92ef15] kRunning -> kFailed, Reason: Operation Failed. Reason: Could not read V2 public-key from Zookeeper Logs have been collected and are available to download on 10.220.198.17 at /home/nutanix/data/log_collector/lcm_logs__10.220.198.17__2023-08-16_08-03-25.943279.tar.gz
Checking ~/data/logs/genesis.out on the LCM leader node, you will see the following signatures: 2023-08-16 08:03:22,072Z INFO 68294608 time_manager.py:408 Syncing time with upstream NTP servers: time.hermanmiller.com time.google.com DEBUG: Using connected site helper
Checking the ZK node /appliance/logical/lcm/public_key, we see no zknode created: nutanix@CVM:~$ zkcat /appliance/logical/lcm/public_key
Scenario 2: LCM Direct Upload fails with the following error: Operation Failed. Reason: LcmRecoverableError('Could not read V2 public-key from Zookeeper')
2023-08-26 05:52:19,910Z ERROR 53686448 framework.py:961 Exception LcmRecoverableError('Could not read V2 public-key from Zookeeper',) occurred while checking for intent to upload bundle Traceback (most recent call last):
Checking the ZK node /appliance/logical/lcm/public_key, we see no zknode created: nutanix@CVM:~$ zkcat /appliance/logical/lcm/public_key
Check if a manual upgrade of the LCM framework was done using the deploy_framework.py script steps in KB 000014042 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000H4qzSAC
Scenario 1:
Cause: The issue is caused by a race condition in which LCM proceeds without creating the public_key zknode and verifying the master manifest. This issue is resolved in LCM-2.7. Upgrade to LCM-2.7 or a higher version - Release Notes | Life Cycle Manager Version 2.7 https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-LCM:Release-Notes-LCM. If you are using LCM for the upgrade at a dark site or a location without Internet access, upgrade to the latest LCM build (LCM-2.7 or higher) using the Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_7:Life-Cycle-Manager-Dark-Site-Guide-v2_7
Workaround if the LCM version is lower than 2.7: From ~/data/logs/genesis.out on the LCM leader, we see the leader created the catalog tasks to download the master manifest after detecting that a master manifest update was required. But right after creating this catalog task, genesis restarted on this node for some other underlying reason, and LCM leadership moved to another node in the cluster. 2023-08-16 08:02:50,735Z INFO 95617072 lcm_ergon.py:563 Done monitoring tasks [UUID('3081928d-11d0-44fe-8d34-92bf1a315314')]. Completed tasks: 1, Pending tasks: 0!
Tracing ~/data/logs/genesis.out on the new genesis leader node, we see LCM detected that no master manifest update was required, because LCM relies on the catalog for this check. To check whether a master manifest update is available, LCM iterates over the master manifest items in the catalog and matches v2_signature with the portal v2 sign file. As the master manifest item had already been downloaded by the CatalogCreate task created by the previous genesis leader node, no master manifest update was detected, and LCM skipped creating the public_key zknode and verifying the master manifest.
2023-08-16 08:03:14,727Z WARNING 86770800 catalog_utils.py:94 Could not find manifest with known uuid 220fbd24-94c5-4f94-8593-4eb1d49288ff.Attempting to find a newer instance in catalog.!
Workaround steps: Identify and fix the underlying cause of the genesis service crash loop on the cluster. Perform catalog and CPDB cleanup per KB-9658 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000XmaXCAS, then start another LCM inventory, which should resolve the issue.
Scenario 2: The issue occurs only when deploy_framework.py was used to upgrade the LCM framework but an LCM inventory was not run before attempting the Direct Upload operation. This issue is tracked under ENG-598132 https://jira.nutanix.com/browse/ENG-598132.
Workaround steps: Perform another LCM inventory operation before attempting the Direct Upload operation. This updates the /appliance/logical/lcm/public_key ZK node and resolves the Direct Upload issue. nutanix@CVM:~$ zkcat /appliance/logical/lcm/public_key
KB5487
NCC Health Check: ngt_installer_version_check
The NCC health check ngt_installer_version_check is run against the VMs that have NGT (Nutanix Guest Tools) installed and compares the version of NGT on the VM to the one on the cluster. If the NGT version on the cluster is higher than the one installed on the VM, the check generates a warning.
The NCC health check ngt_installer_version_check compares the version of NGT (Nutanix Guest Tools) installed on the VMs with the NGT version on the cluster. If the NGT version on the cluster is higher than the one installed on the VM, this check generates a warning. For this check to work, the Nutanix Guest Tools service on the cluster must be able to communicate with the Nutanix Guest Tools service in the UVM. Running the NCC check The check can be run as part of the complete NCC check by running: nutanix@cvm$ ncc health_checks run_all Or individually as: nutanix@cvm$ ncc health_checks ngt_checks ngt_installer_version_check As of NCC 3.0, you can also run the checks from the Prism web console Health page: Select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every 14 days by default and generates an alert after 1 failure. Sample outputs For status: PASS The check returns a PASS for the following conditions: UVMs do not have NGT installed. NGT version on the UVM is the same as the NGT version on the cluster. Running : health_checks ngt_checks ngt_installer_version_check For status: WARN If there is an NGT version mismatch between the UVM and the cluster, and the NGT version on the cluster is higher than the one on the UVM, the check returns a WARN with a message as shown below. Detailed information for ngt_installer_version_check: Output messaging [ { "Description": "Checks whether VMs have the latest NGT version installed." }, { "Cause of failure": "The NGT version on VM is not the latest" }, { "Resolution": "Upgrade the NGT version on the VM to the latest." }, { "Impact": "This NGT update contains bug fixes and improvements to enable simultaneous upgrades of NGT on multiple VMs when the next NGT upgrade is available." }, { "Alert ID": "A600101" }, { "Alert Title": "NGT Update Available" }, { "Alert Message": "NGT on vm_name should be upgraded. It has improvements to allow simultaneous upgrades of NGT on multiple VMs when the next NGT upgrade is available." } ]
To resolve the issue, upgrade the NGT version on the VMs to the latest version available on the cluster, either using Prism Central https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-ngt-pc-upgrade-t.html or Prism Element https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide:wc-ngt-upgrade-r.html. The NGT software on the cluster is bundled into AOS. When AOS is updated, the NGT software on the cluster is also updated to whatever version was bundled with the new AOS. Note that during AOS updates, only the NGT on the cluster is updated. The NGT on the user VMs is not updated automatically. They must be manually updated using the procedure in the links above.
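The check's core comparison (warn only when the cluster's bundled NGT is newer than the guest's) can be sketched as a dotted-version comparison. This is an illustration only; the version strings are examples, not real NGT releases:

```python
def parse_version(version):
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def ngt_update_available(cluster_ngt, vm_ngt):
    """Warn condition: the cluster bundles a newer NGT than the VM runs."""
    return parse_version(cluster_ngt) > parse_version(vm_ngt)

behind = ngt_update_available("2.1.5", "2.0.0")  # VM behind the cluster: WARN
same = ngt_update_available("2.1.5", "2.1.5")    # versions match: PASS
```

Tuple comparison handles multi-digit components correctly ("2.10.0" sorts above "2.9.0"), which naive string comparison would get wrong.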
KB13729
Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment
This KB article describes the set of security capabilities tailored for virtualized environments. AHV provides software implementation for Unified Extensible Firmware Interface (UEFI), Secure Boot, and Trusted Platform Module (TPM) that are used by guest OSs such as Windows 11 Operating System (OS).
Microsoft's latest release of Windows requires an AHV environment to support UEFI, Secure Boot, and TPM. Note that the Microsoft Windows 11 https://www.microsoft.com/en-us/windows/windows-11-specifications documentation does not distinguish between Windows running on a bare-metal machine and Windows running in a hypervisor environment such as AHV, ESXi, or Hyper-V. Windows is unaware of the underlying hypervisor's virtual implementation of UEFI, Secure Boot, and TPM, which does not require these features to be available on the hardware.
UEFI: An open specification that defines the software interface between an operating system and a hardware or hypervisor platform. For more information on UEFI, see the UEFI organization https://uefi.org. UEFI is required to leverage advanced security capabilities such as Secure Boot. UEFI effectively replaces the traditional BIOS implemented in most hardware and hypervisor platforms, such as AHV or ESXi, booting a guest OS such as Windows. The AHV UEFI implementation does not require UEFI to be enabled on the hardware.
Secure Boot: A security feature within UEFI whose specifications are defined by the UEFI organization. Secure Boot is designed to ensure that what is being booted is trusted and that the boot binaries have not been tampered with, preventing unauthorized software, such as malware, from taking control of your system. Guest VM Secure Boot on AHV does not require Secure Boot to be enabled on the hardware. Note that if Secure Boot is enabled on the hardware side, the hardware ensures that the AHV, ESXi, or Hyper-V boot binaries are trusted and not tampered with. For more information on Secure Boot, see Secure Boot Support for VMs https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:vm-secure-boot-uvm-c.html.
TPM: Provides enhanced security and privacy by handling encryption operations that protect data. The Trusted Computing Group https://trustedcomputinggroup.org/resource/trusted-platform-module-tpm-summary/ defines the specification for the TPM.
Hardware-based TPM: If Windows is running on a bare-metal machine (no hypervisor), it leverages the hardware implementation of the TPM, a dedicated physical TPM chip on the motherboard of a server.
Hypervisor virtual TPM (vTPM): In a virtual environment using either AHV or ESXi as the hypervisor, the physical TPM chip on a server's motherboard cannot scale across multiple guest systems. Instead, Windows leverages the virtual TPM (vTPM), a software implementation provided by AHV or ESXi that emulates the TPM specifications. The vTPM does not use the physical TPM chip on the server.
The following use cases apply to UEFI, Secure Boot, and/or TPM technologies:
Windows 11 - Uses UEFI, Secure Boot, and TPM. Earlier versions of Windows optionally use UEFI, Secure Boot, and TPM.
Windows Credential Guard - For information on Windows Credential Guard, see Windows Defender Credential Guard Support in AHV https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-windows-defender-credential-guard-support-c.html.
Microsoft BitLocker - For information on Microsoft BitLocker, see the BitLocker overview topic in the Microsoft Technical Documentation.
Linux - Most current versions of the various Linux distributions can optionally support UEFI and Secure Boot.
The following table describes the actions that you need to perform to enable UEFI, Secure Boot, and TPM support for Windows running on AHV: For information on system requirements to host Windows 11 OS, see Windows 11 System Requirements topic in Microsoft Technical Documentation.[ { "Actions": "Enable UEFI", "Documentation Reference": "Creating a VM (AHV)" }, { "Actions": "Enable Secure Boot", "Documentation Reference": "Creating a VM (AHV)" }, { "Actions": "Enable TPM", "Documentation Reference": "You can enable TPM using aCLI only and not through the Prism element web console. For details, see: Securing AHV VMs with Virtual Trusted Platform Module (vTPM)" } ]
KB15961
Unable to login to Nutanix Portal "Forbidden. You do not have permission to access this page."
When attempting to access "my.nutanix.com" the error "Forbidden. You do not have permission to access this page." is returned.
The attempt to access the "my.nutanix.com" page results in the error "Forbidden. You do not have permission to access this page." (my.nutanix.com/page/error/403).
Email [email protected] for assistance. Include the login name used.
KB9214
HYCU backups taking long time to complete due to phase of Volume Group creation
null
HYCU AOS backups on ESXi (or AHV) can take hours per VM to complete due to the volume group creation phase, as reported by HYCU. This KB describes which logs to check on the HYCU appliance when troubleshooting this issue, with a real example of the logs involved. Note: In this example, HYCU logs are in the UTC timezone and CVM logs are in the EST timezone. Follow the steps below to identify the issue: 1) Have the customer log in to the HYCU appliance GUI. Select the backup job that took a long time to complete. Click on "View Report". In the report, search for the "Create volume group" task: Name: Create volume group As seen in the snippet above, the HYCU report shows that the volume group creation phase took ~2 hours. The volume group contains a vDisk for each vDisk of the VM (three vDisks in this example). 2) In practice, a volume group creation on Nutanix storage usually takes only a few seconds (this can be verified from the Prism tasks page). To find the task UUID of this volume group creation, grep for the VG UUID (obtained in step 1): nutanix@NTNX-CVM:~/data/logs$ cat acropolis.out | grep 418ebec6-cdf6-4f06-81f1-83833705a892 3) Task details from the backup job regarding the VG creation: nutanix@NTNX-CVM:~$ ecli task.get a0ecc412-bd46-4e1f-acb5-12bc8379fc75 4) From step 1, we see that HYCU mounts the VG to three devices, which appear as tgt0 (/dev/sdp), tgt1 (/dev/sdl), and tgt2 (/dev/sdw). In the Stargate logs, there was about a 40-minute delta between mounting each of these: nutanix@NTNX-CVM$ allssh 'zgrep "hycu-iscsi-prefix-418ebec6-cdf6-4f06-81f1-83833705a892" /home/nutanix/data/logs/*stargate* |egrep "Creating state for|Added LUN|Adding virtual target"' This can happen when there is a resource constraint on the HYCU appliance while multiple VM backups run concurrently. 5) The next step is to check the "Grizzly" logs of HYCU, which can be obtained from the HYCU appliance GUI. 
During same timeframe as seen in step 4, grizzly.log of HYCU will look something like this: 2020-03-24T13:53:14.369 INFO @4C40C New iSCSI database record [targetname: iqn.2010-06.com.nutanix:hycu-iscsi-prefix-418ebec6-cdf6-4f06-81f1-83833705a892-tgt0 portal: <dsip>:3260] created. (com.comtrade.ntx.target.storage.IScsiDeviceStorageController createNewIScsiDbRecord) 2020-03-24T14:26:04.363 INFO @4C40C New iSCSI database record [targetname: iqn.2010-06.com.nutanix:hycu-iscsi-prefix-418ebec6-cdf6-4f06-81f1-83833705a892-tgt1 portal: <dsip>:3260] created. (com.comtrade.ntx.target.storage.IScsiDeviceStorageController createNewIScsiDbRecord) 2020-03-24T15:14:19.269 INFO @4C40C New iSCSI database record [targetname: iqn.2010-06.com.nutanix:hycu-iscsi-prefix-418ebec6-cdf6-4f06-81f1-83833705a892-tgt2 portal: <dsip>:3260] created. (com.comtrade.ntx.target.storage.IScsiDeviceStorageController createNewIScsiDbRecord) 6) Compare iscsi operation logs on Stargate and Hycu:Stargate.INFO log: I0324 09:51:45.213445 19165 iscsi_server.cc:2552] Checking 7 sessions for iqn.2017-01.com.comtrade:d58962ee-7d47-49ef-b2bc-337c0059347e, params iscsi_client_id { iscsi_initiator_name: "iqn.2017-01.com.comtrade:d58962ee-7d47-49ef-b2bc-337c0059347e" client_uuid: "\223\274\010\320F\224DD\273\266\t\n\3617?\001" } iscsi_target_name: "iqn.2010-06.com.nutanix:hycu-iscsi-prefix-046133c5-6721-4f0b-ac0b-523de82d6287" iscsi_target_name: "iqn.2010-06.com.nutanix:hycu-iscsi-prefix-2ac53a20-8442-4e6b-981e-e80b3c44eb5e" iscsi_target_name: "iqn.2010-06.com.nutanix:hycu-iscsi-prefix-418ebec6-cdf6-4f06-81f1-83833705a892" target_params { num_virtual_targets: 32 } target_params { num_virtual_targets: 32 } target_params { num_virtual_targets: 32 } HYCU grizzly log: 2020-03-24T13:53:14.369 INFO @4C40C New iSCSI database record [targetname: iqn.2010-06.com.nutanix:hycu-iscsi-prefix-418ebec6-cdf6-4f06-81f1-83833705a892-tgt0 portal: <dsip>:3260] created. 
(com.comtrade.ntx.target.storage.IScsiDeviceStorageController createNewIScsiDbRecord) The Stargate log timestamps match the HYCU logs, showing that Stargate responds immediately. 7) In this example, HYCU was configured to back up 10 concurrent VMs, each consisting of 3 vDisks, totaling 30 concurrent operations. Various operations were taking a long time to complete on HYCU because the HYCU controller was becoming resource-bound. HYCU uses one Nutanix volume group per VM and mounts one vDisk per vDisk of the VM (3 in this case). These are treated as /dev/sd<x> devices on HYCU. When comparing the iSCSI operations run from HYCU to the corresponding Stargate logs, we confirmed that Stargate was performing the actions immediately, further indicating that HYCU's internal threads were stepping over each other.
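As a rough aid (not part of the official KB workflow), the gap between successive tgtN "created" entries in grizzly.log can be computed with a short shell sketch. For illustration, the sample entries shown above are written to a temporary file; point `log` at the real grizzly.log instead.

```shell
# Hypothetical helper: measure the delay between successive volume group
# target creations in HYCU's grizzly.log (format as in the sample above).
vg="418ebec6-cdf6-4f06-81f1-83833705a892"   # example VG UUID from this KB
log=$(mktemp)                               # replace with path to grizzly.log
cat > "$log" <<'EOF'
2020-03-24T13:53:14.369 INFO @4C40C New iSCSI database record [targetname: iqn.2010-06.com.nutanix:hycu-iscsi-prefix-418ebec6-cdf6-4f06-81f1-83833705a892-tgt0 portal: <dsip>:3260] created.
2020-03-24T14:26:04.363 INFO @4C40C New iSCSI database record [targetname: iqn.2010-06.com.nutanix:hycu-iscsi-prefix-418ebec6-cdf6-4f06-81f1-83833705a892-tgt1 portal: <dsip>:3260] created.
2020-03-24T15:14:19.269 INFO @4C40C New iSCSI database record [targetname: iqn.2010-06.com.nutanix:hycu-iscsi-prefix-418ebec6-cdf6-4f06-81f1-83833705a892-tgt2 portal: <dsip>:3260] created.
EOF

deltas=$(
  grep "hycu-iscsi-prefix-${vg}-tgt" "$log" | awk '{print $1}' | {
    prev=""
    while read -r ts; do
      cur=$(date -d "$ts" +%s)   # GNU date parses the ISO-8601 timestamp
      if [ -n "$prev" ]; then
        echo "delta: $(( (cur - prev) / 60 )) min"
      fi
      prev=$cur
    done
  }
)
echo "$deltas"
```

With the sample entries, the output shows gaps of roughly half an hour or more between targets, which is the pattern this KB describes.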
If you see logs matching the above scenario, consider engaging HYCU support. HYCU support should be able to help the customer reduce the number of concurrent operations (based on their analysis) in order to mitigate this issue. For example, in this case HYCU helped the customer reduce the number of concurrent operations from 10 down to 3; with each VM having 3 vDisks, this created 9 concurrent operations instead of 30. After this change, the HYCU controller no longer peaked its resources, and backups completed in a timely manner.
KB8417
Alert - A1122 - NonSelfEncryptingDriveInserted
Investigating NonSelfEncryptingDriveInserted issues on a Nutanix cluster
This Nutanix article provides the information required for troubleshooting the alert NonSelfEncryptingDriveInserted for your Nutanix cluster. For an overview of alerts, including who is contacted and where parts are sent when an alert case is raised, see KB 1959 http://portal.nutanix.com/kb/1959. IMPORTANT: Keep your contact details and Parts Shipment Address for your nodes in the Nutanix Support Portal https://portal.nutanix.com/#/page/assets up-to-date for timely dispatch of parts and prompt resolution of hardware issues. Out-of-date addresses may result in parts being sent to the wrong address/person, which results in delays in receiving the parts and getting your system back to a healthy state. Alert Overview The Alert - A1122 - NonSelfEncryptingDriveInserted occurs when a non-self-encrypting drive (non-SED) is added or replaced on a cluster that has data encryption enabled using self-encrypting drives (SEDs) - drive-based encryption. The problematic drive serial and slot can be determined from the alert_msg (see Sample Alert below) or in Prism > Hardware. Refer to the Security Guide https://portal.nutanix.com/#/page/docs/list?type=software&filterKey=software&filterVal=Security&reloadData=false https://portal.nutanix.com/#/page/docs/list?type=software&filterKey=software&filterVal=Security&reloadData=false under the Data-at-Rest Encryption section for more details on securing data using encryption. Note: A non-protected cluster can contain both SED and standard drives, but Nutanix does not support a mixed cluster when protection is enabled. All the disks in a protected cluster must be SED drives. Sample Alert ID : 156871xxx293xxx:6:0005xxx-xxxx-xxxx-0000-xxxxxxxxxxx
Troubleshooting Check the Prism web console for the disk alert. Log on to the Prism web console. From the web console, select Hardware > Diagram. Hover the mouse over the problematic drive and a pop-up will be displayed with the error message. Follow the steps under Resolving the Issue #resolving_the_case to fix it. If the disk shows as normal, or if this alert was due to scheduled maintenance, follow the steps under Closing the Case #closing_the_case. For deeper HDD troubleshooting, refer to KB 1203 https://portal.nutanix.com/#page/kbs/details?targetId=kA0600000008WzECAU. If you need further assistance or if the steps above do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com https://portal.nutanix.com/. Resolving the Issue Once the new or replaced disk is identified as a NonSelfEncryptingDrive (standard drive) and the cluster has encryption enabled, it needs to be replaced with an SED drive similar to the other existing drives in the cluster. If it is Nutanix hardware, proceed with collecting additional information and requesting assistance to replace the drive. Provide the following details in the support case comments and ensure your contact details and Parts Shipment Address are accurate against your assets in the Support Portal http://portal.nutanix.com/#/page/assets. For non-Nutanix hardware, check with the appropriate vendor to replace it with the correct drive. Block Serial number, Node Serial number, Drive Model and Capacity, Contact name, Contact phone, Contact email address, Delivery address, City, Postal code, Country, Field Engineer required*: Yes/No, Any special instructions needed for the delivery or Field Engineer. *Field Engineers (FE) are available for customers with Direct Support contracts with Nutanix. If you require a Field Engineer, ensure that security clearance is done for the FE before arrival. Inform Nutanix if you need anything to obtain approval. 
For Next Business Day contracts, the need for a dispatch needs to be confirmed by Nutanix Support and created before 3:00 PM local time in order to be delivered the next business day. Check "When can I expect a replacement part or Field Engineer to arrive" in the Support FAQs https://www.nutanix.com/support-services/product-support/faqs/ for more details. Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 http://portal.nutanix.com/kb/2871. Gather the following information and attach them to the support case. NCC output file ncc-output-latest.log. Refer to KB 2871 http://portal.nutanix.com/kb/2871 for details on running NCC and collecting this file. Disk details: Log on to the Prism web console. From the web console, select Hardware > Diagram. Click on the failed drive. Get the DISK DETAILS on the lower left of the screen. Attach the files at the bottom of the support case page on the Support Portal. Requesting Assistance To request assistance from Nutanix Support, add a comment to the case asking them to contact you. If you need urgent assistance, contact Support by calling one of the Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/. You can also press the Escalate button in the case and explain the urgency in the comments, and then Nutanix Support will contact you within an hour. Closing the Case If this alert was due to scheduled maintenance or this KB resolved your issue, inform Nutanix to close the case by clicking on the Thumbs Up icon next to the KB Number in the initial case email. You can also update the support case saying you want to close the case and Nutanix Support will close the case.
KB6389
Nutanix Files - Preupgrade Check failure "Failed to transfer afs software bits"
The following KB explains the reasons for errors during the pre-upgrade checks executed at the time of a Nutanix Files upgrade, along with basic troubleshooting to help you resolve issues before contacting Nutanix Support.
Nutanix Files pre-upgrade checks fail with the following error message: All File Server Upgrades completed, Failed to transfer afs software bits
The failure occurs when the software is about to be copied to the Nutanix Files storage container. You might see the following error message in the UI: Transferring AFS software bits to the container <container_name> failed. Please check the container access attributes and retry the upgrade. This occurs because the file server storage container is inaccessible due to wrong uid, gid, or mode bits. Note: An alert is raised during pre-upgrade checks, but for releases earlier than AOS 5.6, transferring software bits to the container fails without any alerts. Troubleshooting If the storage container is inaccessible, Nutanix recommends executing the following NCC check to validate the health of the Nutanix Files cluster. Resolve any existing issues with Nutanix Files, because other Nutanix Files issues might also lead to the pre-upgrade failure. ncc health_checks fileserver_checks run_all If the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/.
KB6242
ESX host high cpu utilization caused by stale python processes
ESX hosts may have an issue with high CPU utilization caused by many stale python processes running on the host.
ESX hosts may run into an issue with excessive CPU utilization caused by many stale python processes. Checking esxtop, you will see many python processes consuming an excessive amount of CPU. It is normal to see python processes running on the host, but it will be fairly obvious when there is an abnormal number of these processes consuming very high CPU. The following ps command can be run on the ESXi host to view these processes: root@ESX# ps -Tcjstv | grep get_one_time_password You can also check the CVM ~/data/logs/acropolis.out logs, which should contain messages as shown below when this process has not exited cleanly: WARNING command.py:152 Timeout executing ssh -q -o CheckHostIp=no -o ConnectTimeout=15 -o StrictHostKeyChecking=no -o TCPKeepAlive=yes -o UserKnownHostsFile=/dev/null -o PreferredAuthentications=publickey [email protected] USER=vpxuser python /get_one_time_password.py: 30 secs elapsed The user is unable to log in from the CVM to its host via ssh [email protected]. The error message is: nutanix@CVM:~$ ssh [email protected] nc confirms that the port is open and a connection can be established: nutanix@CVM:~$ nc -v 192.168.5.1 22 The user is unable to log in to the host externally, e.g., via a PuTTY session. The session is constantly closed, or the user is kicked out of the session. The ESXi host reports excessive CPU load in vCenter:
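As a quick triage aid (a minimal sketch, not an official Nutanix tool), the stale helper processes from the ps output above can be counted and flagged past a threshold; the threshold of 10 here is an arbitrary example value.

```shell
# Hypothetical sketch: count processes whose command line matches a pattern
# and warn when the count exceeds an example threshold.
count_procs() {
  # pgrep -f matches the full command line; -c prints the count
  # (pgrep prints 0 and exits non-zero when nothing matches).
  pgrep -fc "$1" || true
}

n=$(count_procs "get_one_time_password")
if [ "${n:-0}" -gt 10 ]; then
  echo "WARNING: $n stale get_one_time_password processes found"
fi
```

On a healthy host the count should be low; a large, growing count matches the symptom described in this KB.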
Refer to ISB-113 https://confluence.eng.nutanix.com:8443/display/STK/ISB-113-2020%3A+ESXi+hostd+and+kernel+issues+triggering+out-of-memory+condition+in+SSH+Resource+Group for a detailed explanation of the stale python processes, CPU usage spikes, how to handle live issues, and resolution details.
KB6345
NCC check summary number mismatch between ncc command line and Prism UI
The number of NCC checks in the summary report after running “ncc health_checks run_all” differs between the Prism UI and the ncc command-line report.
The NCC checks can be run from the command line using the ncc command "ncc health_checks run_all". They can also be run from Prism using the Health page > Actions > "Run NCC Checks" task. Both give summary reports at the end of their run. Below is a sample of a summary report from the ncc command. +-----------------------+ In Prism, the report can be viewed by going to the Tasks page, looking for the recent "Health check" task and clicking on the "Succeeded" or "Failed" link under the Status column. A pop-up "View Summary" window will appear showing the report. On the upper right side of this window is a link called "Download Output". Clicking it will download a text file containing a summary and detailed report. The summary portion is similar to the output of the ncc command. The number of NCC checks in the command line summary report is different from the View Summary window in Prism. The total number of checks performed, Failed checks, Warnings, etc. are different on both reports. The number of checks in Prism is more than the command line.
This is valid and expected behavior. The ncc command line reports on the number of NCC plugins while the Prism View Summary window reports on individual NCC checks. An NCC plugin can have multiple checks on it. Because the command line reports on plugins, the numbers would be less than Prism, which reports on individual checks. For example, the NCC plugin hostname_resolution_check is one plugin, which has four individual checks, as follows: Host FQDN resolutionNSC(Nutanix Service Center) server FQDN resolutionNTP server FQDN resolutionSMTP server FQDN resolution
KB13483
OVA export from Prism Central fails with "NfsError: ^X"
OVA export from Prism Central fails with "NfsError"
In some cases, AHV VMs cannot be exported to OVA from PC.On Prism Central VM, the ~/data/logs/metropolis.out shows the OVA export task failed with NfsError error: E0703 01:03:11.503519Z 15799 vm_export_task.go:486] OVA package create subtask failed with err code 4: Traceback (most recent call last): On CVMs, the log ~/data/logs/anduril.out shows the same error: 2022-07-03 01:02:41,552Z ERROR ergon_base_task.py:473 Task OvaCreateTask failed with message: Traceback (most recent call last): Looking at the failed task on PE, it shows the same NfsError error below nutanix@cvm:~$ ecli task.get 6ba075d4-b143-4d36-7880-b12fb0aef594
This has been found to be due to ENG-323989 https://jira.nutanix.com/browse/ENG-323989, where delays occur due to oplog limits for writes and a single-threaded workflow for reads. The workaround is to apply the Anduril gflag max_ova_read_retries. Consult an STL/Sr. SRE, since the value of the gflag can range from 10 to 200 depending on the scenario. In ONCALL-13369, a value of 10 was enough, but in case 01403982 it was set to 200. Refer to KB-1071 https://portal.nutanix.com/kb/1071 on how to set gflags.
KB8465
Alert - A801003 - VpneBgpDown - eBGP session between VPN gateway peers down
Investigating the VpneBgpDown Alert on a Nutanix cluster.
This Nutanix article provides the information required for troubleshooting the alert A801003 - VpneBgpDown for your Nutanix cluster.Note: Nutanix DRaaS is formerly known as Xi Leap. Alert Overview The A801003 - VpneBgpDown alert occurs when eBGP session between the on-prem VPN gateway and Xi VPN gateway is down. This will affect connectivity between on-prem and Xi clusters. Sample alert Block Serial Number: 16SMXXXXXXXX Output messaging [ { "Check ID": "The eBGP session between the on-prem VPN gateway and Xi VPN gateway is down." }, { "Check ID": "The eBGP session between the on-prem VPN gateway and Xi VPN gateway is down." }, { "Check ID": "Follow the on-prem VPN device vendor's troubleshooting steps. If you suspect that the problem is not with the on-prem VPN device, please contact Nutanix support." }, { "Check ID": "VPN connectivity between Xi and the on-prem datacenter will be impacted because routes cannot be exchanged." }, { "Check ID": "A801003" }, { "Check ID": "eBGP session between on-prem and Xi datacenter is down" }, { "Check ID": "eBGP session between on-prem and Xi datacenter is down. Detailed error text: error_detail." } ]
Troubleshooting and resolving the issue Review the error details provided in the alert message. Ensure that the following ports are open between the on-premises and Xi VPN gateway devices: IKEv2: Port number 500 of the payload type UDP. IPSec: Port number 4500 of the payload type UDP. BGP: Port number 179 of the payload type TCP. Note: To check network connectivity over the aforementioned ports, inspect the network firewall configuration and verify that traffic between the gateway devices is allowed over the specified ports. Make sure that the correct eBGP password is configured on the on-premises VPN gateway. Follow the on-prem VPN device vendor's documentation to troubleshoot eBGP connectivity issues. If you are using the Nutanix VPN VM and need further assistance, or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Collect additional information and attach it to the support case. Collecting additional information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Run NCC health checks and collect the output file. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871. Collect the NCC log bundle using the following command. For more information, see KB 6691 https://portal.nutanix.com/kb/6691. nutanix@cvm$ logbay collect --aggregate=true Attaching files to the case To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
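As a quick connectivity sketch (assuming a Linux shell with bash; `<peer-gateway-ip>` is a placeholder for the remote VPN gateway), the BGP TCP port can be probed with bash's built-in /dev/tcp. Note that UDP 500/4500 (IKEv2/IPsec) cannot be reliably probed this way; verify those via firewall rules or packet captures.

```shell
# Hypothetical helper: test whether a TCP port on a remote host accepts
# connections, with a 3-second timeout.
check_tcp_port() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if check_tcp_port "<peer-gateway-ip>" 179; then
  echo "TCP 179 reachable"
else
  echo "TCP 179 NOT reachable"
fi
```

A failed check points at a firewall or routing problem between the gateways rather than at BGP configuration itself.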
KB14522
Slow start time of Siemens NX application on AHV due to FlexLM network query
Slow start time of Siemens NX application on AHV (about 60 seconds) while the same applications start three times faster on ESXi (about 20 seconds), caused by a FlexLM query to an unavailable APIPA address
After installing Siemens NX application on a Windows 10 VM running on AHV, launching the application takes much longer compared to the same application on a Windows 10 VM running on ESXi. On AHV, the application takes about 60 seconds from clicking the start menu icon to the application's main window appearing, compared to about 20 seconds on ESXi.During troubleshooting, it was found that the application uses FlexLM for licensing. On AHV, FlexLM will try to contact a licensing server at IP 169.254.169.254. This connection attempt will time out and cause a delay in starting up the application. This can be seen using ProcMon:Meanwhile on ESXi, FlexLM will recognize that it is running on a VM but not in a cloud environment, and skip contacting that IP address.
The suggested workaround is to block access to 169.254.169.254 in Windows Firewall so that accessing the IP fails immediately instead of waiting to time out.This is documented in the FlexNet Publisher Knowledge Base: FNP calling IP address 169.254.169.254 https://community.flexera.com/t5/FlexNet-Publisher-Knowledge-Base/FNP-calling-IP-address-169-254-169-254/ta-p/3069
KB15144
NKE cluster scrubbing failed
NKE clusters PreUpgradeChecks fail with message "NKE cluster scrubbing failed".
Trying to upgrade Kubernetes or node OS-image might fail during the PreUpgradeChecks phase with the error message: PreUpgradeChecks: preupgrade checks failed: unable to accept the request. NKE cluster scrubbing failed, check error logs and rectify. The failing precheck prevents the upgrade from starting.
Prism Central hosts the karbon_core service. This service manages NKE clusters and performs a scrub operation on all registered NKE clusters when the service starts or is restarted. The scrub operations ensure all NKE cluster nodes are configured properly; for instance, if there are any proxy-related changes on the PE cluster hosting the NKE nodes, the karbon_core service pushes the new proxy configuration to the NKE nodes and services. Scrub operations for a specific NKE cluster might not complete properly or might fail for various reasons, for example, if the NKE cluster is not healthy. The error message is logged to the karbon_core.out file on the PCVM. This log file should be inspected to understand why scrubbing has failed: nutanix@PCVM:~$ less /home/nutanix/data/logs/karbon_core.out Note: On a scale-out PC, the "karbon_core.out" log file should be inspected on the karbon_core service leader. Execute the below command to find the NKE/karbon_core service leader on a scaled-out PC deployment: nutanix@PCVM:~$ allssh "grep 'The chosen Leader' data/logs/karbon_core.out" For example, the karbon_core service on PCVM "x.x.x.103" is the service leader: nutanix@PCVM:~$ allssh "grep 'The chosen Leader' data/logs/karbon_core.out" Check the karbon_core.out log file to see why the scrub operation has failed for the affected NKE cluster. Most of the time, this issue is not noticed until a K8s or OS-image upgrade is attempted. 
By that point, it may have been a long time since the karbon_core service was last restarted, and the logs may have already been rotated. To resolve this issue, after inspecting the karbon_core.out log file for possible causes: Ensure the NKE cluster is reported as healthy, either in the PC WebUI or using the karbonctl command from the PCVM: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-clusters-tab-r.html https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-clusters-tab-r.html Or via karbonctl: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-logon-karbonctl-t.html https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-logon-karbonctl-t.html nutanix@PCVM:~$ karbon/karbonctl cluster health get --cluster-name <AFFECTED_CLUSTER_NAME> Restart the karbon_core service on all PCVMs; wait 10 minutes until a new scrubbing operation completes, then retry the upgrade task: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-service-restart-t.html https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_8:top-service-restart-t.html If the above does not resolve the issue, contact Nutanix Support http://portal.nutanix.com, collect karbon_core logs from Prism Central, and attach them to the case for further analysis. See KB-12133 https://portal.nutanix.com/kb/12133 for instructions on how to collect karbon/NKE logs.
KB3493
How to expand a cluster when IPv6 or multicast is disabled
This article provides recommendations for expanding a cluster when IPv6 or multicast is disabled.
The expand cluster feature leverages the Nutanix Discovery Protocol (NDP) to discover new nodes and determine their configuration. It uses multicast destination addresses (IPv4 224.0.0.251 / IPv6 ff02::1). When the discover-node process is initiated, the CVM (Controller VM) sends out multicast packets. The foundation process on a non-configured CVM (new node) receives these discovery packets and responds with a message containing information about its hardware and software configuration. If IPv6 is not enabled on the network or multicast is disabled, the expand cluster process cannot determine the new node configuration, and hence the node does not get listed in the Expand cluster UI. Manual discovery of nodes is supported with certain caveats for ESXi and Hyper-V. For an ESXi or Hyper-V cluster, manual host discovery requires that the target node has the same hypervisor type and version. In addition, the node's AOS version must be lower than or equal to the cluster's. If the above conditions are not met, IPv6 needs to be enabled on the ESXi/Hyper-V clusters; refer to Expanding a Cluster. https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-cluster-expand-wc-t.html
Choose one of the following options: Upgrade to AOS 6.5 or later and retry expand cluster operation. Manual discovery of HCI nodes is supported in AOS 6.5 or later. You need to provide the CVM IP of the new node. Refer to Expanding a cluster https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_5:wc-cluster-expand-wc-t.html.Enable IPv6 in your network and retry to expand cluster operation.
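Before enabling IPv6 in the network, you can quickly check whether an interface already carries an IPv6 address. A minimal sketch (eth0 is an example interface name, not a Nutanix-specific one):

```shell
# Hypothetical pre-check: does the interface have any IPv6 address
# (including link-local) that IPv6-based node discovery could use?
ipv6_enabled_on() {
  ip -6 addr show dev "$1" 2>/dev/null | grep -q "inet6"
}

if ipv6_enabled_on eth0; then
  echo "IPv6 present on eth0"
else
  echo "No IPv6 address on eth0 - IPv6 node discovery may fail"
fi
```

This only confirms local IPv6 configuration; multicast forwarding still needs to be permitted by the switches between the nodes.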
KB10962
Post PC-DR steps when Recovery Plan Jobs are in running state
This article provides the steps to follow after a PC-DR event, if there had been Recovery Plan Jobs running before launching disaster recovery of PC to a replica PE.
After a PC-DR, you may find the following validation error while running a Recovery Plan Job: “The maximum number of VMs that can be recovered concurrently in an Availability Zone is <number of entities based on recovery>. When executing Recovery Plans concurrently, make sure that the sum of the entities across those Recovery Plans does not exceed that number.” You may see Recovery Plan Jobs executed before the PC-DR event stuck in a running state. See the Nutanix Disaster Recovery Guide https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:Disaster-Recovery-DRaaS-Guide for information on DR limitations https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-limitations-pc-r.html and best practices https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-recommendations-pc-r.html. Example screenshot of validation error: You may also experience issues protecting/unprotecting VM(s) and other DR operations post PC-DR. A script needs to be executed on the Prism Central VM to get the magneto service out of recovery mode.
SummaryAfter performing PC DR: Identify if any Recovery Plan Jobs (RPJs) are stuck in running state and abort them.Execute the script to take magneto service out of recovery mode.Resume operations. Step 1: Find all the Recovery Plan Jobs that are in a Running state, by logging into Recovered PCVM and running the below command: nutanix@PCVM:~$ nuclei recovery_plan.list The output will show something similar to: nutanix@PCVM:~$ nuclei recovery_plan.list Get the Recovery Plan Job details. Use the Recovery Plan UUID found in the previous step (a). nutanix@PCVM:~$ recovery_plan_util --recovery_plan_uuid=<recovery_plan_uuid> --action=list_jobs The output will be similar to: nutanix@PCVM:~$ recovery_plan_util --recovery_plan_uuid=6ce321c6-042d-4324-8de2-b8b4bb3efc7c --action=list_jobs Once the Recovery Jobs are identified, proceed and abort the ones in running state from step (b), using the command below: nutanix@PCVM:~$ recovery_plan_util --recovery_plan_job_uuid=<recovery_plan_job_uuid> --action=abort_execution The output will look similar to: nutanix@PCVM:~$ recovery_plan_util --recovery_plan_job_uuid=9f65090c-f140-4d11-86c3-b7898f2f9691 --action=abort_execution Repeat the previous command to abort all the Recovery Plan Jobs (RPJ) in the Running state found earlier before proceeding to the next step.After running the command to abort the jobs, restart the magneto service to update the entity count for the aborted RPJs. nutanix@PCVM:~$ genesis stop magneto; cluster start Note: In case an RPJ was in a running state before PC-DR and had to be aborted manually after PC-DR, the following recommendations can be followed depending on the operation type of the RPJ. If Test Failover was running, then: Clean up test VMs. The operation can be done on the Recovery Plan to delete the Test Failover VMs that were recovered. 
If Unplanned Failover was running, then: VMs that are not restored have to be manually restored from the snapshots, and reconfiguration has to be done.VMs that got restored have to be reconfigured depending on what configuration is missing for each VM.OR clean up the VMs that got restored, and Unplanned Failover can be triggered on the Recovery Plan again. If Planned Failover was running, then: For Migrate VMs, manually verify the configuration and reconfigure with missing config parameters.VMs that are not migrated can be migrated by triggering Planned Failover on the Recovery Plan again. All the possible reconfigurations that exist for the VMs are: vNIC attachmentCategory attachmentStatic IP configurationNGT reconfigurationIf there is any script execution configured within the VM, this has to be done manually. Step 2: The script below needs to be executed on the recovered Prism Central VM to take magneto service out of recovery mode and for future DR operations to succeed. Note: After the script execution, there might be some VMs that need to be recovered from Async snapshots. These are those that have their SyncRep state broken (due to a number of issues) in the middle of a failover. If these VMs are not present on either side after the PC-DR happens, they have lost their latest state as preserved by SyncRep; therefore, these VMs must be recovered from async snapshots. SSH into the PCVM and go to /home/nutanix/bin directory. Then download the pc_dr_sync_rep_cleanup.py script. 
nutanix@PCVM:~$ cd /home/nutanix/bin The script can be run as follows: nutanix@PCVM:~/bin$ python pc_dr_sync_rep_cleanup.py Normal Operation: nutanix@pcvm:~/bin$ python pc_dr_sync_rep_cleanup.py Script run without PC DR: nutanix@pcvm:~/bin$ python pc_dr_sync_rep_cleanup.py Script run in print only mode: nutanix@pcvm:~/bin$ python pc_dr_sync_rep_cleanup.py print-stretch-params Script run after a previous crashed execution: nutanix@pcvm:~/bin$ python pc_dr_sync_rep_cleanup.py Script run when a migration was in progress: nutanix@pcvm:~/bin$ python pc_dr_sync_rep_cleanup.py Graceful exit if no sync rep VMs found: nutanix@pcvm:~/bin$ python pc_dr_sync_rep_cleanup.py After running the above script, make sure sync_rep_recovery_mode is set to Completed. zkcat /appliance/logical/prism/pcdr/recover/sync_rep_recovery_mode Monitor the sync_rep_recovery_mode zknode for couple of days. zkcat /appliance/logical/prism/pcdr/recover/sync_rep_recovery_mode If it switches back to InProgress, contact Nutanix Support by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/.
KB7880
All disks on a node show red blinking LEDs
All disks on a node show red blinking LEDs
A customer reports an issue where all of the disks on a node have red blinking LEDs, as shown in the picture below. All disks on one node of an NX-8035-G6 show red blinking LEDs. The disks are mounted and accessible, there are no alerts in Prism, and all disks pass the smartctl test. Firmware is the same as on the other nodes in the cluster.
Below is the sequence of steps to check before replacing the chassis/node.

1. Toggle the individual drive LEDs on/off from Prism to help identify whether they were enabled by the customer for any maintenance. Check the status from Prism -> Hardware -> Diagram -> Node -> click each disk and use the toggle-LED option towards the bottom right of the diagram.
2. Power cycle the node (cold): power off the node, wait 10 minutes to drain the power out of the system, and power on the node again.
3. Perform a manual BMC update (via the web UI, then reset to factory defaults per the usual workflow).
4. Perform a BIOS update (LCM suggested; this will also reboot the host, which was already done above).
5. Upgrade the HBA and disk firmware to the latest supported version. If already on the latest version, re-flash the HBA and disk firmware.
6. Reseat the node in the same slot (Refer to KB 3182 - Reseating a Node in a Cluster http://portal.nutanix.com/kb/3182).
7. Move the node to any other free slot in the same chassis (Refer to KB 3182 - Reseating a Node in a Cluster http://portal.nutanix.com/kb/3182) to see whether the issue follows the node to the new slot or stays with the old slot.

As the final solution in this case, the node was moved to a different slot on a different chassis and the LED blinking stopped, which concluded the chassis was faulty. Send a chassis replacement to the customer and mark the node for FA.
KB10210
Move Windows7 VM stuck on the bootup page for the prism console
null
When using Move 3.6.0/3.6.2 to migrate a Windows 7 VM (Gen1) from a Hyper-V cluster (2012 R2) to an AHV cluster (5.15.2), the Windows 7 VM powers on successfully on the first boot on the AHV cluster, but gets stuck on the following page on subsequent reboots. The VM has actually started normally, and it is possible to log in through Windows RDP. The issue does not occur on newly installed Windows 7 VMs.
Removing the remote access software (Remote Access for VSPACE SERVERS https://www.ncomputing.com/) from the Windows 7 VM running on the AHV cluster resolved the stuck startup screen.
KB13025
Bulk NGT install failing on Linux VMs
Bulk NGT installations on Linux VMs through Prism Central failing with an "Internal Error" message
Bulk NGT installation on Linux VMs from Prism Central is failing with "Internal Error Message".

Scenario 1:
One possible trigger for this scenario is when the customer configured their UVM to display additional messages during user login. One example we encountered in the field is when the customer has configured a timeout for idle users. Linux system administrators can configure this to automatically log out users without activity for a set amount of time. More information can be found under the "Reaping idle users" section of this article https://wiki.centos.org/HowTos/OS_Protection.

To verify if we encounter this scenario, we need to check the audit logs on the Linux VM (on the CVM they are stored under /home/log/audit/audit.log), look for the command strings logged against the NGT install user, and decode those. Then we need to check if any of those commands failed.

Below is a sample log snippet. "scptad" is the user specified for NGT installation.

Oct 13 15:34:11 <host> audispd: node=<host>.ad.customer.com type=USER_CMD msg=audit(1634153651.233:532): pid=4786 uid=1001 auid=1001 ses=4 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='cwd="/home/scptad" cmd=49646C652075736572732077696C6C2062652072656D6F766564206166746572203135206D696E75746573202D4C204E5554414E49585F544F4F4C53 terminal=pts/1 res=failed'

We can decode the string after "cmd=" to find out which messages were logged:

echo "49646C652075736572732077696C6C2062652072656D6F766564206166746572203135206D696E75746573202D4C204E5554414E49585F544F4F4C53=" | xxd -r -p; echo

In this case, we see the "Idle users will be removed after 15 minutes" message that breaks the NGT install.

We can parse all audit logs for failed commands with the following script:

for i in $(egrep "cmd=.*res=failed" /var/log/audit/audit.* | awk -F"cmd=" '{print $2}' | awk '{print $1}'); do echo $i | xxd -r -p; echo; done

You may need to adjust the path to point to the directory with the correct audit logs.
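As an aside, the xxd decoding shown above can be tried safely on any Linux workstation. A minimal sketch follows; the sample hex string is made up for illustration (it simply encodes the text "echo hello") and is not taken from a real audit.log:

```shell
#!/bin/sh
# Sample hex string as it would appear after "cmd=" in audit.log.
# This one is illustrative only and encodes the text "echo hello".
HEXCMD="6563686f2068656c6c6f"

# xxd -r -p reverses a plain hex dump back into the original bytes,
# which is the same mechanism the one-liners in this article rely on.
echo "$HEXCMD" | xxd -r -p; echo
```

Running this prints the decoded command text, confirming the decode pipeline works before pointing it at real audit logs.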
For failed NGT installs, we should look for messages containing "NUTANIX_TOOLS". Additionally, we can find the ISO mount failure logged in anduril.out:

23005 2021-10-13 19:03:15,117Z INFO ngt_utils.py:443 Installing NGT on Linux UVM: 10.79.28.9

Scenario 2:
Check whether the NGT ISO extraction to /mnt results in the guest filesystem filling up. You can find the log snippet in the Linux user VM's guest_agent_monitor.ERROR logs under NGT. Also, check whether there are any quotas set on these mount points:

logs/guest_agent_monitor.ERROR

Verify the available space on the UVM /mnt filesystem:

nutanix@NTNX-CEREAL-C-CVM:A.B.C.D:~$ df -h /mnt
Scenario 1: Ask the customer to remove any banners or additional login messages from displaying when a user logs in to the system.

Scenario 2: NGT ISO extraction occupies 521 MB on the /mnt filesystem, so at least this amount of space should be available on the filesystem. A workaround is to mount the CD-ROM and install the package directly from the CD-ROM device.
KB10204
NCC Health Check: esx_product_locker_setting_check
The NCC health check esx_product_locker_setting_check checks for ESXi ProductLocker settings.
The NCC health check esx_product_locker_setting_check checks for ESXi ProductLocker settings. This check will generate a FAIL if the host contains a ProductLocker symlinked to any container. This check was introduced in NCC 4.1.0.

Running the NCC Check
Run this check as part of the complete NCC Health Checks:

nutanix@cvm$ ncc health_checks run_all

Or run this check separately:

nutanix@cvm$ ncc health_checks hypervisor_checks esx_product_locker_setting_check

You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.

This check is scheduled to run every day.

Sample output
For Status: PASS

Running : health_checks hypervisor_checks esx_product_locker_setting_check

For Status: FAIL

Detailed information for esx_product_locker_setting_check:

For Status: INFO

Detailed information for esx_product_locker_setting_check:

Output messaging
- Check description: Check ESXi ProductLocker settings
- Cause of failure: The host contains a ProductLocker location that is not properly configured
- Resolution: Manually configure the ProductLocker location. Review KB 10204.
- Impact: CVMs appear to be stunned or frozen periodically on an ESXi cluster, potentially causing instability of cluster services and VMs. This issue can affect multiple CVMs at once, causing a complete cluster outage.
Open an SSH connection to one of the CVMs as the "nutanix" user and execute the below command to identify the /productLocker symlink:

[nutanix@CVM] hostssh 'ls -latr / | grep ^l'

Example output:

[nutanix@CVM] hostssh 'ls -latr / | grep ^l'

Map the container:

[nutanix@CVM] hostssh 'ls -latr /vmfs/volumes'

In the above example, we can see the /productLocker symlink is pointing to a non-local datastore. You may see the following ESXi setting changed to the path seen below in the String Value line:

[root@ESXi] esxcli system settings advanced list -o /UserVars/ProductLockerLocation

Note: If the productLocker String Value points to a Metro Protected datastore, engage Nutanix Support for remediation. If Metro Availability is disabled, do not re-enable it and engage Nutanix Support immediately. Re-enabling Metro in this state can cause CVMs to lock up and restart, leading to a potential outage.

In order to resolve this issue, reconfigure the ESXi hosts to point the productLocker back to the local SATADOM (the default config). For more information on this, you can refer to the below VMware article https://kb.vmware.com/s/article/2129825. Ensure the change is effective. In case the above-mentioned steps do not resolve the issue, engage Nutanix Support https://portal.nutanix.com/page/home, gather the output of "ncc health_checks run_all" and attach it to the support case.
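The `ls ... | grep ^l` pattern above works because long listings prefix symbolic links with the letter 'l'. Here is a minimal, self-contained illustration against a throwaway directory (the productLocker link name is recreated locally for demonstration, not read from a real ESXi host):

```shell
#!/bin/sh
# Demonstrate how "ls -la | grep ^l" isolates symlinks, as used against "/"
# on the ESXi hosts above. A temp directory stands in for the host filesystem.
TMP=$(mktemp -d)
cd "$TMP"
mkdir datastore
ln -s datastore productLocker   # locally created link for illustration

# Entries whose mode string starts with 'l' are symbolic links.
ls -la | grep '^l'
```

The single matching line shows the link and its target (productLocker -> datastore), which is exactly the information used above to decide whether the symlink points at a non-local datastore.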
KB15460
File Analytics - Manage File Analytics option gives error while fetch the FA details
Manage File Analytics option gives Error while fetching File Analytics VM
You can encounter an issue where clicking the "Manage File Analytics" tab in your Prism Element (PE) at the File Server section results in a loading pop-up that eventually gives an error message "Error while fetching File Analytics VM."This problem can also cause the file analytics software version to not display on the Life Cycle Management (LCM) page after performing an inventory in a configured cluster.
It typically indicates a problem with the communication or configuration between your Prism Element and the File Analytics VM. If you're experiencing an issue with the above symptoms, engage Nutanix Support https://portal.nutanix.com/ for assistance.
KB13923
Uploaded AOS installer is not recognized when the installer file name is not in a valid format during the foundation process
AOS installer file is not recognized with the message "No files have been added" during the foundation procedure, although the file is successfully uploaded into CVM or Foundation VM. This KB explains the symptom and workaround to address the issue.
In Step 4 "AOS" of the Foundation procedure, where an AOS installer file can be manually uploaded, the file cannot be selected for use even though it uploaded successfully, and the message "No files have been added" is shown if the file name is not in a valid format. For issue identification, see the steps below.

1. During the Foundation process, select the AOS installer file by clicking on "Manage AOS Files" in Step 4 "AOS" and start uploading the installer file into the Foundation VM.
2. Confirm the upload is progressing with no issues.
3. The upload is nearly complete.
4. After the upload completes successfully, the message "No files have been added" is shown. If you see this message, Foundation cannot proceed any further because no AOS installer file can be selected.
5. Compare the md5sum of the AOS installer file uploaded to the Foundation VM with the one on the portal and make sure they are exactly the same.

MD5SUM of the uploaded installer file:

nutanix@NTNX-19FM6H240085-D-CVM:~/foundation/nos$ ls

MD5SUM of the installer file on the portal:

6. Confirm the POST and GET API calls below in the http.access log file have completed successfully with response code 200 (the http.access log file can be found under the foundation folder):

2022-11-14 01:47:38,639Z INFO ::ffff:10.140.26.151 [2022-Nov-14 01:38] POST /foundation/upload?installer_type=nos&filename=nutanix_installer_package-release-fraser-6.5.1.6-stable-7c84485a320ec3a2552e8894822c9b6933ad8c91-x86_64.gz 200 OK HTTP/1.1 Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36
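The checksum comparison in step 5 can be scripted so it fails loudly on a mismatch. Below is a minimal sketch; the file and expected hash are placeholders (a stand-in file containing the text "test", whose md5 is the well-known 098f6bcd4621d373cade4e832627b4f6), not a real AOS bundle:

```shell
#!/bin/sh
# Compare a file's md5sum against the value published on the portal.
# The file below is a stand-in; with a real bundle, point FILE at the
# uploaded installer and EXPECTED at the portal's published md5.
FILE=$(mktemp)
printf 'test' > "$FILE"
EXPECTED="098f6bcd4621d373cade4e832627b4f6"   # md5 of the string "test"

ACTUAL=$(md5sum "$FILE" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH: got $ACTUAL"
fi
rm -f "$FILE"
```

Prints "checksum OK" when the hashes match, making the comparison unambiguous compared to eyeballing two long hex strings.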
The issue can happen due to an invalid format of the installer file name. In the screen capture below, for example, the file name does not have ".tar" in it, which is the main cause of the issue. It can be fixed by renaming the file to the correct format. ENG-511350 https://jira.nutanix.com/browse/ENG-511350 has been filed to enhance error handling.
KB15760
Loading the Admin Center and Switching between Apps freezes for AD admin users
Customers logged in with AD admin users may experience screen freeze or stuck loading page when attempting to access the Admin Center or Switching between Apps in Prism Central.
Customers with role mappings referencing duplicated group entries in IDF and IAM DB(Cape) might face issues accessing the Admin center page and switching between Apps in PC.The AD admin users had a role mapping that wasn't migrated to IAMv2, even though logs indicate all groups were migrated. The customer then recreated the group through the UI. This triggers the problem due to concurrent updates."Unable to create or update user capability" error signatures seen in aplos.out: nutanix@NTNX-xx-x-x-21-A-PCVM:~$ grep -i "Unable to create or update user capability" data/logs/aplos.out | tail -50 "Unable to create or update user capability" error signatures seen in styx.log: nutanix@NTNX-xx-x-x-21-A-PCVM:~/data/logs$ allssh 'sudo grep "[email protected]" /home/docker/domain_manager/log/styx.log' Duplicate entry: The same name is defined multiple times with a different distinguished_name in CAPE: name | distinguished_name | uuid To get the above output, start at step 10 from KB 13487 http://portal.nutanix.com/kb/13487 and use "select name,distinguished_name,uuid from user_group;" for your SQL query. Additionally, there might be a third (or 4th) stale user_group entry in IDF which is not seen in nuclei or CAPE (See the output above). Stale user_group UUID 0e5aa4d0-bf91-49c7-b971-6147f980e881 entry in IDF nutanix@NTNX-xx-x-x-21-A-PCVM:~/data/logs$ idf_cli.py get-entities --guid user_group | grep -A 3 group_name | grep str_value nuclei user_group list only shows us 11 entries. Note how 0e5aa4d0-bf91-49c7-b971-6147f980e881 isn't present in the output. nutanix@NTNX-xx-x-x-21-A-PCVM:~/data/logs$ nuclei user_group.list = 11 entries
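As a quick triage aid, duplicated group names like the CAPE rows above can be surfaced with sort/uniq once the name column has been extracted. A minimal sketch over made-up sample names (not real directory data) follows:

```shell
#!/bin/sh
# Print each group name that appears more than once in a list.
# The names below are made up; in practice the list would come from
# the CAPE/IDF queries shown above.
printf '%s\n' 'admins' 'devops' 'admins' 'qa' |
    sort | uniq -d    # uniq -d emits one line per duplicated value
```

Feeding the extracted name column through this pipeline immediately highlights which groups have multiple entries and therefore need closer inspection of their distinguished_name values.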
As a result of existing duplicate user_group entities with the same DN, aplos will constantly try to update abac_user_capability and may hit a race condition due to the same cas value on 2 IDF updates. This issue is resolved in the pc.2023.3 version through ENG-569735 https://jira.nutanix.com/browse/ENG-569735.However, for customers who upgraded their PC to the pc.2023.3 version and still have duplicate user_group entries, the issue will persist until the groups are manually deleted through nuclei. Workaround: Delete the UUIDs of the groups with incorrect distinguished_name entriesGroup UUID 0e5aa4d0-bf91-49c7-b971-6147f980e881, which was present in IDF but not in CAPE, can safely be deleted through nuclei as well <nuclei> user_group.delete <UUID-of-the-stale-and-duplicate-entries>
KB13622
NDB legacy DR replication failing due to snapshot naming
This KB talks about NDB replications failing due to snapshot naming in legacy DR
Nutanix Database Service (NDB) based snapshots may fail if the snapshot created is named as per Entity Centric snapshots and not as per Legacy DR snapshots.

Legacy DR snapshots are named "<remote_cluster_name>:<snap_uuid>".
Entity Centric snapshots are named "remote_<remote_site_ip>_uuid:<snap_id>".

This issue is caused when a Legacy DR snapshot is replicated to the remote site and the snapshot name is in the format "remote_<remote_site_ip>_uuid:<snap_id>" instead of "<remote_cluster_name>:<snap_uuid>". This happens if Leap was configured before Legacy DR, and the Entity Centric remote name is picked up because the corresponding remote name object in the Zeus config is listed before the remote name corresponding to the Legacy DR entry.

Identification
In the following example, on cluster 1, there are 2 entries for cluster 2: one for Entity Centric and a second one for Legacy DR.

nutanix@cvm:~$ cerebro_cli list_remotes
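The two naming conventions can be told apart with a simple pattern match. Below is a minimal shell sketch; the classify helper and the sample names are illustrative, and the glob patterns only approximate the formats described above:

```shell
#!/bin/sh
# Classify a snapshot name as Entity Centric or Legacy DR based on the
# two formats described above. Helper and sample names are illustrative.
classify() {
    case "$1" in
        remote_*_uuid:*) echo "entity-centric" ;;  # remote_<ip>_uuid:<snap_id>
        *:*)             echo "legacy" ;;          # <remote_cluster_name>:<snap_uuid>
        *)               echo "unknown" ;;
    esac
}

classify "remote_10.0.0.5_uuid:1234"   # entity-centric form
classify "clusterB:abcd-ef01"          # legacy form
```

Piping a list of snapshot names through such a helper makes it easy to spot replicated snapshots that unexpectedly carry the Entity Centric form.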
To fix the issue, remote_name object for Legacy DR should be listed before remote_name for Entity Centric. To achieve this, follow the steps below. Unprotect VMs from Leap Protection Policies. Make a note of the VMs in protection policies to add back later. Disconnect Availability Zone. Reconnect availability zone and re-protect the VMs in protection policies.
KB12468
Acropolis service crash due to large number of volume groups
Acropolis service crashing due to a very large number of volume groups present.
Following an AOS upgrade when a large quantity of volume groups is present on a cluster, the acropolis service can become unstable and enter a crash loop due to failure to complete master initialization within the specified timeout. When this occurs, one or more of the following crash signatures could be generated in acropolis.out:

...

It is worth noting that the quantity of volume groups which triggers this issue appears to vary based on the upgrade path followed. For example:

Upgrade path: AOS 5.20.X -> 6.5.X | Quantity of VGs present when issue was observed: 3.5k+

Other symptoms may include:
- Failure of VM and VG management operations. VMs not being able to power on.
- Issues with Rubrik backups referencing metadata errors (See KB 12004 https://nutanix.my.salesforce.com/kA07V000000LWwa).
- Long delays in VG publishing tasks.
The acropolis instability can be eased by deleting any extraneous volume groups on the system, preferably to below the thresholds noted above for the upgrade path in question. If removal of volume groups is infeasible and acropolis is in a crash loop, you may need to tune one or more of the following acropolis gflags in order to allow acropolis master initialization to complete.

For guidance on working with gflags, please refer to KB-1071 https://portal.nutanix.com/kb/1071.

WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit

- acropolis_master_init_timeout_sec (default 900)
- acropolis_vg_task_arithmos_timeout_sec (default 15)

For guidance on specific values, please consult with a Senior SRE or STL before making any changes. The following ONCALLs can be referred to regarding values utilized in other scenarios where relief was provided: ONCALL-11272, ONCALL-11318, ONCALL-11837, ONCALL-12381, ONCALL-14760, ONCALL-15350.

Thresholds and limitations for volume groups were being discussed under ENG-408837 https://jira.nutanix.com/browse/ENG-408837. Currently, we do not enforce any hard limit on the number of volume groups one can create, though based on ENG-461479 https://jira.nutanix.com/browse/ENG-461479 it appears that a limit of 2000 volume groups is targeted once the Castor service is enabled in AOS 6.6.
KB15387
NDB - Archive WAL files are present in /tmp in postgres ha node
Archive WAL files are present in /tmp in postgres ha node due to Log Catchup operation leaving behind temporary files
Archive WAL files are noticed in /tmp on a Postgres HA node, e.g.:

[postgres@pg_node1 tmp]$

Reviewing the above files, they can be read like text files, while actual archive WAL files are in binary format.
The Log Catchup operation determines the start and end times from the PostgreSQL WAL files. To determine the timestamps, it dumps the WAL file contents into the /tmp directory one by one until it gets the required information, after which it deletes the dump files. However, in one of the workflows it leaves behind one dumped WAL file. These dumped files are in plain text, and such a file will not be used during a PITR-based restore. So these files can be deleted as long as no Log Catchup operation is currently running on that DB server.
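Cleanup of such leftovers can be scripted with find. Below is a minimal sketch run against a throwaway directory; the `*.dump` file-name pattern is a placeholder, so confirm the real leftover file names (and that no Log Catchup is running) before deleting anything on an actual DB server:

```shell
#!/bin/sh
# Delete leftover plain-text dump files from a directory while keeping
# everything else. Uses a temp dir and a placeholder "*.dump" pattern;
# verify the real names before running anything like this on a DB server.
WORK=$(mktemp -d)
touch "$WORK/000000010000000A0000002F.dump" "$WORK/keep.me"

find "$WORK" -maxdepth 1 -type f -name '*.dump' -delete
ls "$WORK"
```

Restricting the pattern and depth, as above, keeps the deletion from touching unrelated files in the same directory.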
KB13522
Single SSD Repair fails on G8 node stating "Rescue CVM did not come up"
An out-of-date utility package is causing the "Repair Disk" feature in Prism and the single_ssd_repair script in the CLI to fail on all G8 series nodes.
Due to an out-of-date utilities package in the Rescue ISO, disks on G8 series nodes cannot be inventoried within the Rescue Shell used by the SSD Repair features. This causes tasks for the Prism-based "Repair Disk" feature https://portal.nutanix.com/page/documents/details?targetId=Boot-Metadata-Drive-Replacement-Single-SSD-Platform-G8:Boot-Metadata-Drive-Replacement-Single-SSD-Platform-G8 and the manual CLI-based single_ssd_repair script to fail at the "sm_bring_up_cvm" phase. Only G8 nodes are affected by this issue. Before proceeding further, confirm that the node you are working on is an NX-xxxx-G8 series model. You may go to Prism GUI > Hardware > Diagram for that information. After the SSD Repair is initiated, the task labelled "Repair CVM Bootdisk" will eventually fail, as shown in the following example: You will also see that the task is stuck on the "sm_bring_up_cvm" phase for more than 30 minutes, as seen in the output of the "ssd_repair_status" command. The below was run from a working Controller VM (CVM) in the same cluster: nutanix@CVM$ ssd_repair_status The task will eventually fail, with the following output seen in the genesis.out log on the Genesis Leader CVM. To determine which CVM is the Genesis Leader, run the following command: nutanix@CVM$ panacea_cli show_leaders | grep genesis SSH to the CVM mentioned above and run this command to obtain the relevant lines from genesis.out log: nutanix@CVM$ grep ssd_breakfix /home/nutanix/data/logs/genesis.out If you open a VM Console to the CVM undergoing repair, you may see the following output stating "No suitable SVM boot disk found": Imaging SVM... The above message indicates that the repair process could not detect a valid SSD drive on which to install AOS. The rescue shell cannot detect any drives at all. From the rescue shell inside the CVM console, running the lsscsi command returns no disks. Also, the lspci command gives the following error: # lspci
The issue is fixed in the AOS 6.5.3 release; upgrade AOS to the latest version. If you have encountered this problem while recovering from a single SSD failure and have confirmed that the model undergoing repair is the NX-1065(N)-G8, contact Nutanix Support https://portal.nutanix.com/ and cite this KB article (KB-13522) in the issue description. After applying the workaround, upgrade AOS to the latest version.
KB10237
Nutanix Disaster Recovery(Leap) NearSync Transitioning out due to stale Legacy Remote Site config
Nutanix Disaster Recovery NearSync may transition in and out of NearSync due to a stale legacy remote site which cannot be garbage collected.
Description: Due to incorrect handling of remote site types in both Cerebro and Stargate (as seen in ONCALL-9517 https://jira.nutanix.com/browse/ONCALL-9517 / TH-4770 https://jira.nutanix.com/browse/TH-4770), the following scenarios can be observed:
- Cerebro is unable to garbage collect the legacy remote site because it was wrongly associated with a NearSync CG that was actually associated with a kEntityCentric remote site.
- Stargate picks up the wrong remote site type (kLegacy) and fails to replicate. Due to this, VMs keep transitioning in and out of NearSync.

Identification:
NearSync replication fails with the following signature in Stargate logs:

/home/nutanix/data/logs/stargate.INFO

E1008 03:48:45.844962 13496 cg_controller_lws_replicate_op.cc:866] NS-REPLICATION cg_id=XXXXX:YYYYY:ZZZZZ operation_id=10677837 lws_id=1 LWS metadata replicate finished with error kRetry

And on the receiving cluster's Cerebro master (cerebro_cli get_master_location) logs, the following can be observed:

/home/nutanix/data/logs/cerebro.INFO

W1008 03:53:41.418490 3837 cerebro_master.cc:6533] Fetch replication target request sent by unknown cluster: 3488227162339459002 1601970999538576 of type: kLegacy

cerebro_cli list_remotes shows the old legacy remote site config which is to be removed but has not been cleaned up yet:

nutanix@CVM:~$ cerebro_cli list_remotes

Listing remote legacy sites does not list the remote site which is to be removed but not yet garbage collected:

nutanix@CVM:~$ ncli rs ls
Change the RPO for the Protection Policy from NearSync (1 - 15 minutes) to 1 Hour RPO to allow for Legacy Remote Site to be garbage collected.Check that cerebro_cli no longer lists the legacy Remote Site which had a status of "to_remove: true" nutanix@CVM:~$ cerebro_cli list_remotes Change the RPO for the Protection Policy back to NearSync (1 - 15 minutes)
KB14145
[Objects] S3 application(s) may perform poorly due to Envoy load balancer VMs running out of iptables netfilter connection limits in its conntrack table
During a rapid connection open/close workload, the LB VMs were running out of iptables netfilter connection limits in their conntrack tables.
S3 application queries might time out or get terminated. Verify the following error signatures to identify whether the cluster is hitting the issue described in this document.

Follow the steps below to get the Envoy IPs:
1. SSH to the PCVM.
2. Run the following command from the PCVM to get the object cluster list:

nutanix@PCVM:~$ mspctl cluster list

3. Obtain the envoy/load balancer IP with the following step. If you have multiple MSP clusters, choose the correct Objects cluster to obtain the information:

nutanix@PCVM:~$ mspctl cluster get test2

4. SSH to the Envoy VM:

nutanix@PCVM:~$ mspctl lb ssh x.x.x.179

5. Verify whether the established connection count is going above 1000:

[nutanix@test2-xxx-envoy-0 ~]$ sudo netstat -plan | grep :80 | grep -i established | wc -l

6. Verify the dmesg log to see if the connections are getting dropped:

[nutanix@test2-xxx-envoy-0 ~]$ dmesg -T | less
The issue has been fixed in MSP 2.4.4, bundled with PC 2023.3; an upgrade to that version is suggested. If you see the described symptoms, follow the workaround below, and engage an Objects SME before applying the workaround in the customer's environment.

Workaround:
Follow the steps below for all the Envoy VMs.

Step 1: Scale up all the Envoy VMs to 8 GiB memory and 4 vCPUs from aCLI. SSH to one of the CVMs where the Objects cluster is deployed and run the below command:

nutanix@CVM:~$ acli vm.update envoy-vm-name memory=8 num_vcpus=4

Step 2: After scaling up all the Envoy VMs, increase the max_connections count to 1500. SSH back to each Envoy VM to edit the /etc/envoy/envoy.yaml file on all the Envoy VMs and add these 4 lines for ports 80 and 443. Before updating the file, take a backup copy:

[nutanix@test2-xxx-envoy-0 ~]$ cp /etc/envoy/envoy.yaml /etc/envoy/envoy.yaml.backup
[nutanix@test2-xxx-envoy-0 ~]$ vi /etc/envoy/envoy.yaml

Step 3: Add net.netfilter.nf_conntrack_max=327680 in /etc/sysctl.conf, save the changes and exit, then run sudo sysctl -p to persist the change. Before proceeding with the modification, take a backup of the file:

[nutanix@test2-xxx-envoy-0 ~]$ cp /etc/sysctl.conf /etc/sysctl.conf.backup
[nutanix@test2-xxx-envoy-0 ~]$ vi /etc/sysctl.conf

Step 4: Restart the envoy service:

[nutanix@test2-xxx-envoy-0 ~]$ sudo systemctl restart envoy
KB16833
Nutanix Files: Adding a Nutanix Files Cluster to Rubrik Security Cloud
This KB provides the general guidelines to add a Nutanix Cluster to Rubrik Security Cloud
The details required to add a Nutanix cluster to Rubrik Security Cloud are provided in the following screenshot:
Please refer to the following guidelines for filling in the required details:
- Rubrik Cluster: Select your respective Rubrik cluster from the drop-down menu.
- System Name: Select "Nutanix File Server".
- IP or Hostname: Add the File Server FQDN as the hostname. The following steps can be followed to get the file server domain name details: Launch Prism > navigate to the File Server page > select the file server > look at the details section on the bottom left of the screen.
- API Username and Password: Add the REST API username and password from the Files console. Steps to get the API user details: Launch Prism > navigate to the File Server page > select the file server > click on "Launch Files Console". The File Server console will open in a new window. Select "Configuration" > Manage Roles > REST API user access. (Nutanix Files User Guide https://portal.nutanix.com/page/documents/details?targetId=Files:fil-file-server-rest-api-access-t.html) Ensure that the above REST API user is authorized using these steps https://portal.nutanix.com/page/documents/details?targetId=Files-v4_4:fil-files-api-authorize-user-t.html before adding the user to Rubrik Security Cloud.
KB14555
Nutanix Objects or PC with enabled CMSP | iscsi failed to mount the target volumes due to 0-sized iscsi db files present on the nodes
iscsiadm might fail to mount the target volumes on the Kubernetes node due to the presence of an empty iscsi db file. The observed error: "iscsiadm: Could not stat /var/lib/iscsi/nodes//,3260,-1/default to delete node: No such file or directory."
In environments requiring mounting volumes via the Nutanix CSI driver or Nutanix DVP plugin, CSI/DVP may fail to mount an iSCSI volume if the iSCSI file-based database contains a malformed 0-sized file.

Nutanix CSI and Nutanix DVP rely on iscsiadm to mount volumes, and iscsiadm maintains a file-based database. It was observed in the field that 0-sized files in the iscsi database may cause failures to mount volumes. Technically, the problem may affect any environment that uses Nutanix CSI or DVP, such as NKE (formerly Karbon), Prism Central, and Objects. In the sample scenario below, the problem was observed on Objects nodes.

Identification:
The problem will cause failures to mount iSCSI volumes via Nutanix CSI or DVP and may manifest in various ways; the symptoms below are provided as examples only.

iscsiadm discovery executed manually shows errors similar to:

root@objects-master-0:~# iscsiadm -m discoverydb -t sendtargets -p <dsip>:3260 -I default --discover

iscsi database files in /var/lib/iscsi/nodes/ contain 0-sized 'default' files similar to:

root@objects-master-0:~# sudo find /var/lib/iscsi/nodes/* -maxdepth 2 -type f -ls

Considering the sample scenario with an affected Objects node:

Pods may be stuck in the ContainerCreating state due to the CSI driver being unable to mount the iscsi volume:

nutanix@objects-master-0:~$ kubectl get pod -A | grep -v Running

etcd may fail to start due to DVP being unable to mount the volume, with symptoms similar to:

nutanix@objects-master-0:~$ sudo journalctl -fu etcd

The same scenario can be found on PC with CMSP enabled, since the same stack is used. ETCD/Registry can be down with the following error message:

nutanix@NTNX-A-PCVM:~$ sudo systemctl status etcd

iscsiadm discovery executed manually shows errors similar to:

nutanix@NTNX-PCVM:~$ sudo /sbin/iscsiadm -m discoverydb -t st -p <DSIP>:3260 --discover
Workaround:
1. Remove the 0-sized iscsi database files identified in the "Identification" section:

root@objects-master-0:~# sudo rm /var/lib/iscsi/nodes/iqn.2010-06.com.nutanix\:<target-id>\:nutanix-docker-volume-plugin-tgt0/<disp>\,3260\,1/default

2. Re-run discovery so the files are regenerated:

root@objects-master-0:~# iscsiadm -m discoverydb -t sendtargets -p <disp>:3260 -I default --discover

3. Confirm the previously 0-sized files have been recreated, are no longer empty, and contain a valid config:

root@objects-master-0:~# ls -ltra /var/lib/iscsi/nodes/iqn.2010-06.com.nutanix\:<target-id>\:nutanix-docker-volume-plugin-tgt0/<disp>\,3260\,1/default

Previously failing DVP/CSI iscsi mounts are expected to complete successfully.
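The 0-sized-file check from the Identification section generalizes to any directory. Here is a self-contained sketch using a temp directory in place of /var/lib/iscsi/nodes (the file names and record content are illustrative):

```shell
#!/bin/sh
# List zero-sized files under a directory, mirroring the malformed-file
# check above. A temp dir stands in for /var/lib/iscsi/nodes.
WORK=$(mktemp -d)
printf 'node.startup = automatic\n' > "$WORK/default"   # healthy record
: > "$WORK/empty-default"                               # malformed, 0-sized

find "$WORK" -maxdepth 1 -type f -size 0   # prints only the empty file
```

Only the empty file is printed, which is the same property the identification step exploits: any 'default' file reported by `find ... -size 0` is a candidate for removal and regeneration.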
KB3051
Adding Linked Child Account on Portal
This article describes the Linked Child Account or "Login As" feature within the Nutanix Portal and how to enable it.
What is a Linked Child Account?
If at any time a reseller or partner requires access to the account of an end customer within the Nutanix Portal, they can submit a request to enable the Linked Child Account feature. Once the Linked Child Account is added to a partner's account, the partner can switch to the linked account after logging in to the portal. Click the Login As menu item under the user menu as shown in the following screenshot. Choose Linked Account at the top of the page.

Once enabled, the following operations can be performed by the reseller or partner on behalf of the customer:
- Accessing customer asset information (My Products > Installed Base)
- Viewing and claiming customer licenses (My Products > Licenses)
- Opening and viewing support cases for customer assets (Support > Open Case / View Cases)
Authorization to access a Linked Child Account To enable the Linked Child Account feature, the reseller or partner must first obtain written authorization from the customer in the form of an email or letter granting access to their account. The following information should be included in the authorization: Name of the partner account and the email address of the portal user who requires access. Name of the customer account and the email address of the person who authorized access. Send an email to [email protected] This alias will auto-create a Non-Technical Portal case and route it directly to the CSA team for review. Once this authorization has been submitted to Nutanix, the Linked Child Account feature can be enabled.
KB5717
NCC Health Check: recovery_plan_entity_limit_check / recovery_plan_vm_limit_check
The NCC health check recovery_plan_entity_limit_check / recovery_plan_vm_limit_check raises a critical alert if the number of VMs in a recovery plan exceeds the maximum limit.
Note: From NCC 4.3.0 onwards, recovery_plan_vm_limit_check has been renamed to recovery_plan_entity_limit_check. The NCC health check recovery_plan_entity_limit_check / recovery_plan_vm_limit_check raises a critical alert if the number of entities in a recovery plan exceeds the maximum limit. This check is executed from Prism Central paired with an availability zone. This check was introduced in NCC 3.6. Running the NCC check This check can be run as part of the complete NCC check by running: nutanix@cvm$ ncc health_checks run_all Or individually as: nutanix@cvm$ ncc health_checks draas_checks recovery_plan_checks recovery_plan_vm_limit_check From NCC 4.3.0 and above, use the following command for the individual check: nutanix@cvm$ ncc health_checks draas_checks recovery_plan_checks recovery_plan_entity_limit_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every day, by default. This check will generate an alert after 1 failure. Sample output For status: FAIL Running : health_checks draas_checks recovery_plan_checks recovery_plan_vm_limit_check From NCC 4.3.0 and above Running : health_checks draas_checks recovery_plan_checks recovery_plan_entity_limit_check Output messaging From NCC 4.3.0 and above Note: The check may FAIL even when the number of VMs is within the specified limit if the entities/VMs were recently deleted and the protection rule schedule has not yet triggered. Wait for the protection rule schedule to run again so that it marks the entities/VMs for deletion, then re-run the check; the alert should auto-resolve.[ { "Check ID": "Checks if the VM count exceeds the threshold in the Recovery Plan." }, { "Check ID": "Number of VMs in the Recovery Plan exceeds the limit." }, { "Check ID": "Reduce the number of VMs in the Recovery Plan." }, { "Check ID": "VM Recovery prone to failure." 
}, { "Check ID": "A300414" }, { "Check ID": "Number of VMs in Recovery Plan exceeds the threshold" }, { "Check ID": "Number of VMs in the Recovery Plan recovery_plan_name exceeds the threshold" }, { "Check ID": "Maximum number of VMs in a recovery plan should not exceed max_vm_count. Recovery Plan recovery_plan_name have vm_count VMs." }, { "Check ID": "300414" }, { "Check ID": "Checks if the entity count exceeds the threshold in the Recovery Plan." }, { "Check ID": "Number of entities in the Recovery Plan exceeds the limit." }, { "Check ID": "Reduce the number of entities in the Recovery Plan." }, { "Check ID": "Entity Recovery prone to failure." }, { "Check ID": "A300414" }, { "Check ID": "Number of entities in Recovery Plan exceeds the threshold" }, { "Check ID": "Number of entities in the Recovery Plan {recovery_plan_name} exceeds the threshold" }, { "Check ID": "Maximum number of entities in a recovery plan should not exceed {max_entity_count}. Recovery Plan recovery_plan_name have {entity_count} entities." } ]
The NCC check reports the Recovery Plan name on FAIL if the number of entities in a Recovery Plan exceeds the threshold. The total number of entities supported in a recovery plan is:
AHV to AHV = 200
ESX to ESX = 200
AHV to Xi = 200
ESX to Xi = 100
Note: Each recovery plan has a limit of 500 entities. If you plan to keep one entity per recovery plan, the limit is 500 entities. Nutanix is aware of the issue, and a fix has been applied in the pc.2023.4.0.3 and pc.2024.2 releases. To resolve this alert, reduce the number of entities associated with the Recovery Plan. If the Recovery Plan is associated with multiple categories, you can separate the categories into separate Recovery Plans. If the Category associated with the recovery plan exceeds the threshold, separate the VMs associated with this Category into multiple categories and, subsequently, multiple recovery plans. Note: If a new category is created to resolve the issue, ensure that the category is associated with a protection policy so the entities are replicated. A Category associated with a Recovery Plan should have a protection policy associated with it, so the entities are snapshotted and replicated based on the protection policy schedule. If you are unable to resolve the issue, engage Nutanix Support https://portal.nutanix.com. Additionally, gather the output of the command below and attach it to the case. nutanix@cvm$ ncc health_checks run_all
KB14069
Cassandra is in forwarding state after CVM wiped by phoenix
Cassandra might get stuck in the forwarding state after the CVM was wiped by Phoenix instead of using the reconfigure option
After wiping the CVM with Phoenix and adding it back to the cluster with the boot_disk_replace script, the CVM is stuck in forwarding mode, and the cassandra_keyspace_cf_check ( KB 4274 https://portal.nutanix.com/kb/000004274) health check will fail with the following message: FAIL: No metadata disk found on this node nutanix@NTNX-CVM: :~$ zeus_config_printer | grep dyn_ring_change_info -A9 For the data disks, repartition and add works fine; however, for the SSD it is correctly detected that this is a single-SSD system, as seen in /home/nutanix/data/logs/hades.out: 2022-11-01 12:19:47,424Z INFO Dummy-19 disk_manager.py:1540 Single SSD system The cluster then tries to repartition this SSD as a non-boot (data) disk, which requires a cleanup of all partitions: 2022-11-01 12:19:47,615Z INFO Dummy-19 disk_manager.py:1670 Repartitioning regular (non-boot) disk Since the CVM is still actively using the root and home partitions on this disk, the repartition operation fails: 2022-11-01 12:19:47,636Z ERROR Dummy-19 disk_manager.py:1272 Failed to wipe partition and add GPT partition table on disk /dev/sda, ret True, stdout , stderr Error: Partition(s) on /dev/sda are being used. Note that AOS does not automatically re-add disks if they are not empty; the repartition-and-add procedure must be initiated manually.
To fix this issue, use the commands below for the SSD: nutanix@cvm:~$ sudo cluster/bin/hades stop This issue is reported in ENG-409047 https://jira.nutanix.com/browse/ENG-409047 and is fixed in AOS 6.1 and above.
KB16788
Unable to configure Recovery Plan over Stretched Network with Traffic Tromboning
This article explains why a DR recovery plan cannot be configured in pc.2023.3 over a stretched network with traffic tromboning.
Problem Description Configuring the DR Recovery Plan fails over the stretched network with Traffic Tromboning. Symptom As of May 2024, configuring the Recovery Plan fails with an "Invalid network settings" message over the stretched network with Traffic Tromboning. About Traffic Tromboning: Traffic Tromboning represents one of the shapes of the network traffic. In this article, it applies to traffic between two sites. Note: The above diagrams are from Flow Virtual Networking Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Virtual-Networking-Guide-vpc_2023_4:ear-flow-nw-traffic-troboning-hair-pinning-to-one-side-pc-r.html. Refer to this document for details.[ { "WithoutTromboning": "WithTromboning", "This is a typical configuration where the VMs use the nearest default gateway in each site.\nThe VMs in AZ01 use the default gateway (192.168.1.1) in AZ01.The VMs in AZ02 use the default gateway (192.168.1.1) in AZ02.": "With this configuration, we can limit the egress traffic to only one gateway for various purposes (audit, security, etc.)This configuration is achieved by assigning a dummy IP address to the default gateway in either of the sites.In this example:\nA dummy IP address, 192.168.1.129, is assigned to the default gateway in AZ02, to make it unreachable intentionally.The VMs in AZ01 use the default gateway (192.168.1.1) in AZ01.The VMs in AZ02 use the default gateway (192.168.1.1) in AZ01, too.\n The current problem:\nThis configuration is available over the Stretched Network. However, the workflow for creating a Recovery Plan cannot accept the dummy IP address for the default gateway, which is different from the one on the other site." } ]
Nutanix is aware of the issue and is working on a solution. Workaround: There is no workaround available. The "Non-stretch networks" option allows recovery plan configuration to be saved. However, the VMs take the dummy default gateway value during failover.
KB12621
Flow Network Security Visualisation not displaying inbound traffic to a VM
Flow Network Security Visualisation may not display inbound traffic to a VM if the source of the packet is another VM running on same cluster which has a different outbound security policy applied to it.
Flow Network Security visualisation might not display inbound traffic to a VM if the source of the packet is another VM running on the same cluster that has a different outbound security policy applied to it. The expected traffic will instead be visible as outbound traffic in the security policy applied to the source VM, rather than as inbound traffic on the affected destination VM. Flow Network Security visualisation prioritises matching packets based on the source IP/VM first. Only if there is no match based on source IP/VM are packets matched based on destination IP/VM. If the source of the packet is a VM with another security policy applied to it, that security policy takes priority for visualisation, and no further visualisation of that traffic occurs. For example, consider two VMs: VM1 in SecurityPolicy-A. VM2 in SecurityPolicy-B. SecurityPolicy-A is in Monitor mode; there are no specific inbound or outbound rules in this policy. SecurityPolicy-B is in Enforce mode, with inbound and outbound rules configured as an allowed list only. VM1 to VM2 is not allowed in SecurityPolicy-B. When VM1 tries to ping VM2, the packets will be dropped, but Flow Network Security visualisation will not display the traffic, and the policy-hit-log will not log the packets in SecurityPolicy-B. Instead, this traffic is matched first based on the source IP/VM, so SecurityPolicy-A is applied to these packets, and the traffic is visualised in SecurityPolicy-A.
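The match order described here (source policy first, destination policy only as a fallback) can be illustrated with a minimal sketch; the function and its labels are hypothetical, not part of the Flow API:

```shell
# Hypothetical model of the visualisation match order: a packet is attributed
# to the source VM's policy first, and only falls back to the destination VM's
# policy when the source has none. An empty string means "no policy applied".
visualising_policy() {
  local src_policy="$1" dst_policy="$2"
  if [ -n "$src_policy" ]; then
    echo "$src_policy"      # source match wins; destination policy never sees it
  elif [ -n "$dst_policy" ]; then
    echo "$dst_policy"
  else
    echo "none"
  fi
}

# VM1 (SecurityPolicy-A, Monitor) pings VM2 (SecurityPolicy-B, Enforce):
visualising_policy "SecurityPolicy-A" "SecurityPolicy-B"   # -> SecurityPolicy-A
```

This is why the dropped ping appears (if at all) under SecurityPolicy-A and never under SecurityPolicy-B's inbound view.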
This visualisation behaviour is by design. When investigating network traffic behaviour through Flow Network Security, be sure to check if the source and destination traffic fall under multiple security policies and remember to check the source outbound traffic in addition to the destination inbound traffic.
KB11621
VMs deleted after registered on the remote site during a PD migration with stuck "delete_vm_intentful" in src site
VMs get deleted after being registered on the remote site during a PD migration, with stuck "delete_vm_intentful" tasks on the source site
After a PD migration, VMs are registered on the DR site and then deleted within a couple of minutes. In this case, after the PD migration, "delete_vm_intentful" tasks are stuck on the source cluster: nutanix@CVM:x.x.x.89:~$ ecli task.list include_completed=false After a VM is registered on the remote cluster, delete tasks are triggered for these VMs, which can be seen in the Uhura logs on the remote site: 2021-06-15 00:24:32 INFO notify.py:325 notification=VmDeleteAudit vm_name=BHHQCTRX25 message=Deleted VM {vm_name} The "delete_vm_intentful" tasks on the source are created by Aplos to clean up its DB and remove the entries of the VMs that were migrated as part of the PD migration. This DB update task is not part of the PD migration itself and is triggered for any v3 API task on a VM. Aplos calls Anduril to perform these tasks; Anduril then needs to create a task and therefore contacts the Ergon service. Per the current workflow, Anduril asks Zookeeper for a healthy CVM IP to create an Ergon task and keeps that IP in its cache. Anduril only asks Zookeeper for a new IP after it receives a transport error from the one saved in its cache. Anduril sends no-op operations to check the connection with the cached IP, so as long as that IP is replying, it will not ask for a new one and will keep using that CVM IP to perform its tasks. This is where the problem lies: in this case, when Anduril communicated with the Ergon service to create the cleanup task, that Ergon instance was running on a CVM that was no longer part of the source cluster and had been moved to the remote cluster without changing the CVM IP. As a result, the "delete" tasks were created on the remote cluster, deleting the VMs after they were registered.
This problem can be tested another way: if we create a test VM on the source cluster using "nuclei" (which is again a v3 API task), the VM will be created on the remote cluster instead. This happens because nuclei calls Aplos, and Aplos in turn calls Anduril, so the workflow remains the same. For example, running the command below on the source cluster will create the test VM on the remote site: nutanix@cvm$ nuclei vm.create testvm For this KB to exactly match the issue, the following conditions must be met: A few of the nodes (or even just a single node) on the remote cluster were part of the source cluster at some point. The IP addresses were kept the same when the node/nodes were removed from the source cluster and then added to the remote cluster.
This issue is resolved in: AOS 6.5.X family (LTS): AOS 6.5.4 AOS 6.7.X family (STS): AOS 6.7 Upgrade AOS to versions specified above or newer. Workaround We need to restart the Anduril service across the cluster so that its cache is cleared and it requests a new IP: nutanix@cvm$ allssh "genesis stop anduril ; cluster start"
KB13409
Stale vCenter entry in vpxd.cfg causing failure to register Prism Element to vCenter
Stale vCenter entry in vpxd.cfg causing failure to register Prism Element to vCenter
If the vCenter Appliance hostname is changed after installation then the vCenter server config (vpxd.cfg) may contain a stale entry pointing to the old hostname. This would cause the Prism Element "Register vCenter" workflow to fail with an error "Unable to register extension". This also means that any VM operations will not be possible. Validate network connectivity using (ping/netcat) between vCenter and CVM/hosts for ports 80 and 443. This issue is not related to network unreachability. nutanix@cvm:~$ allssh "nc -zv <vcenter IP> 80" The Management server will be listed in ncli but shows "Registered: False" nutanix@cvm:~$ ncli managementserver list-management-server-info Uhura does not show the vCenter as registered either: nutanix@cvm:~$ acli uhura.network.list While trying to register vCenter from PE, uhura.out shows error: Unable to register extension 2022-06-24 06:35:00,455Z INFO vcenter_extension.py:110 Generating private key and certificate for vCenter Extension The vCenter MOB page would not show com.nutanix.UUID extension under the extension list however the FindExtension option on MOB page may show the extension UUID. Verify that the serverIP field on the vpxa.cfg on all ESXi hosts is the same and is the IP of the required vCenter: nutanix@cvm:~$ hostssh "grep 'serverIp' /etc/vmware/vpxa/vpxa.cfg" From /var/log/vmware/vpxd/vpxd.log we see Nutanix extension registration failed with “Fault cause: vmodl.fault.SecurityError”. 
2023-04-24T15:17:59.272Z info vpxd[11905] [Originator@6876 sub=MoExtensionMgr opID=5b47aa14] Registering unrestricted extension with extensionKey = com.nutanix.<cluster uuid> by user: VSPHERE.LOCAL\Administrator In /var/log/vmware/lookupsvc/lookupserver-default.log we can see error message “ Operation create is not permitted for user {Name: vpxd-<vCenter owner UUID>, Domain: vsphere.local” [2023-04-24T15:17:59.328Z pool-2-thread-38 INFO com.vmware.identity.token.impl.SamlTokenImpl] SAML token for SubjectNameId [value=vpxd-<Vsphere-Owner-UUID>@vsphere.local, format=http://schemas.xmlsoap.org/claims/UPN [schemas.xmlsoap.org]] successfully parsed from Element
If the vCenter was renamed after installation, the vCenter configuration (/etc/vmware-vpx/vpxd.cfg) may still contain a stale entry referring to the old vCenter name. If the owner has been updated, this UUID may be incorrect as well. This stale entry causes the failure to register Prism Element with vCenter. Check the owner UUID to verify that it matches the outputs above: /usr/lib/vmware-vmafd/bin/vmafd-cli get-machine-id --server-name localhost Verify that the <name> parameter under the <SolutionUser> tag points to the current vCenter name in vpxd.cfg: <solutionUser> Engage VMware support to validate whether the solutionUser field is correct and update it if required. Note: The vpxd.cfg file should only be updated by a VMware support engineer. Once the solution user is updated, try to register PE with vCenter again and ensure VM operations are successful.
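A hypothetical helper for the comparison above: extract the <name> recorded under <solutionUser> in vpxd.cfg and check it against the machine ID returned by vmafd-cli. Whether <name> embeds the machine ID varies by deployment, so treat a "stale" result only as a hint and let VMware support confirm before any change:

```shell
#!/usr/bin/env bash
# Sketch, not a supported tool: VPXD_CFG is overridable for testing; on a real
# vCenter it would be /etc/vmware-vpx/vpxd.cfg, and the machine ID would come
# from: /usr/lib/vmware-vmafd/bin/vmafd-cli get-machine-id --server-name localhost
VPXD_CFG="${VPXD_CFG:-/etc/vmware-vpx/vpxd.cfg}"

solution_user_name() {
  # pull the first <name>...</name> value out of the config
  sed -n 's:.*<name>\(.*\)</name>.*:\1:p' "$VPXD_CFG" | head -n1
}

check_solution_user() {
  local machine_id="$1" cfg_name
  cfg_name=$(solution_user_name)
  if printf '%s' "$cfg_name" | grep -q "$machine_id"; then
    echo "match"
  else
    echo "stale"   # vpxd.cfg may still reference the old vCenter identity
  fi
}
```

Usage would be `check_solution_user "$(vmafd-cli get-machine-id --server-name localhost)"`; remember that vpxd.cfg itself must only be edited by VMware support.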
KB13449
Nutanix Database Service | Debugging Windows DB Server VM Provisioning Failures
This article describes debugging Windows DB server VM provisioning failures.
Note: Nutanix Database Service (NDB) was formerly known as Era. Scenario 1: Sysprep fails when antivirus software is installed Sysprep of a VM could fail when antivirus software is installed in the system. Operation logs /home/era/era_base/logs/drivers/sqlserver_database/create_dbserver/<operation_id>.log or /home/era/era_base/logs/drivers/sqlserver_database/provision/<operation_id>.log: [2022-08-01 10:10:33,189] [140408303626048] [INFO ] [0000-NOPID],Got an successful status code File "/usr/local/lib/python3.6/site-packages/nutanix_era/era_drivers/common/utils/Cluster.py", line 633, in clone_db_server From sysprep logs /home/era/era_base/logs/drivers/sqlserver_database/create_dbserver/<operation_id>_syspre_logs/Panther/setupact.log or /home/era/era_base/logs/drivers/sqlserver_database/provision/<operation_id>_syspre_logs/Panther/setupact.log: SclRegProcessKeyByHandle@525 : (c0000022): Failed to process reg key or one of its descendants: [\REGISTRY\MACHINE\SOFTWARE\Sentinel Labs] Scenario 2: Joining a workgroup/domain fails If the software profile hostname is longer than 15 characters, joining a workgroup gets stuck. This is identified by checking sysprep_task.log from the zip file generated by the sysprep failure. If the last line is the workgroup join, the issue is identified. See sample output below: 2022-07-27 12:04:02,859 p=4408 u=Administrator | powershell.exe -command "Add-Computer -WorkGroupName 'WORKGROUP'" Another situation where this issue can be seen is a domain join failure. In this case, there is a file called sysprep_task_domain.log, and the domain join failure would be present in this log file. Below are some of the reasons for a domain join failure: The custom domain OU path is incorrect. The credentials provided do not have the necessary permission to join the domain. The credentials are incorrect. Another VM with the same name already exists in the domain.
Scenario 3: Network connectivity When there is no network connectivity between the Era server/agent and the DB server VM, the WinRM connection from the Era server/agent fails with the following message: Failed to connect to host Scenario 4: Disabling WinRM as part of a GPO policy When the DB server is added to the domain and an active GPO policy disables remote WinRM connections to the newly provisioned DB server, provisioning fails with WinRM connectivity failures. Scenario 5: Missing VirtIO installation When the software profile does not include VirtIO (a collection of drivers), the DB provisioning process cannot add NICs to the DB VMs. This causes the provisioning process to fail. Here is the error message in /home/era/era_base/logs/drivers/sqlserver_database/create_dbserver/<operation_id>.log: [2023-07-05 01:36:07,589] [140265328723776] [INFO ] [0000-NOPID],'Database provisioning failure. Failed to Provision Database Server VM. Reason: "Failed to clone db server. Reason: \'Failed Sample screenshot: Scenario 6: Locked service account When the Era worker service account is locked in the domain, DB provisioning fails with the following error message in /home/era/era_base/logs/drivers/sqlserver_database/create_dbserver/<operation_id>.log: [2023-06-16 16:14:08,818] [139738442479424] [INFO ] [0000-NOPID],Domain user xxxxxx.local\xxnx not part of local Administrator group The domain user in the above operation log is the SQL Service Startup Account used when provisioning the DB Server: Go to Active Directory -> Users and Computers and search for this account. You may see that this account is locked.
Resolution 1: Sysprep fails when antivirus software is installed Antivirus software should be disabled on the software profiles prior to provisioning. Disable or delete the antivirus software service (see the antivirus software's documentation on how to do this). Take a new software profile (see Creating a Software Profile https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Era-User-Guide:era-era-creating-a-software-profile-sql-server-t.html in the NDB User Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Era-User-Guide:Nutanix-Era-User-Guide). Perform the provision DB server operation (see SQL Server Database Provisioning https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Era-User-Guide:era-era-database-provisioning-sql-server-c.html in the NDB User Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Era-User-Guide:Nutanix-Era-User-Guide). Resolution 2: Joining a workgroup/domain fails Change the hostname of the source VM to 15 characters or fewer, or engage Nutanix Support https://portal.nutanix.com to skip joining a workgroup as part of the provision. If the sysprep_task.log does not have more than 4-5 lines of output, this is a different issue with the same error message. In this case, engage Nutanix Support https://portal.nutanix.com for assistance. To resolve a domain join failure: Correct the OU path. Provide different credentials. Delete the existing entry with the same name, or provision with a new name. Resolution 3: Network connectivity Ensure that the Era server/agent can connect to the subnet where the VM is being deployed. Resolution 4: Disabling WinRM as part of GPO policy Enable the WinRM remote connection to the host via GPO. Resolution 5: Missing VirtIO installation Install VirtIO in the gold image DB VM, then create a new software profile from it.
For the VirtIO installation, refer to Nutanix VirtIO for Windows https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:vm-vm-virtio-ahv-c.html. Resolution 6: Locked service account Unlock the service account in the domain by ticking the "Unlock account" checkbox in the account properties.
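Scenario 2 hinges on the 15-character Windows computer-name (NetBIOS) limit; a minimal pre-provisioning sketch (the function name is illustrative, not an NDB tool):

```shell
# Hypothetical pre-flight check: Windows computer names (NetBIOS) are limited
# to 15 characters, which is why longer software-profile hostnames get stuck
# at the workgroup/domain join step.
valid_windows_hostname() {
  local name="$1"
  if [ "${#name}" -le 15 ]; then
    echo "ok"
  else
    echo "too-long"   # shorten the source VM hostname before taking the profile
  fi
}

valid_windows_hostname "SQLGOLD01"               # -> ok
valid_windows_hostname "SQL-GOLD-IMAGE-TEMPLATE" # -> too-long
```

Running this against the gold-image hostname before creating the software profile avoids the stuck "Add-Computer" step described in Scenario 2.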
KB9581
Unable to login after IPMI password changed.
A recently changed password will not work when you try to validate it by logging into the IPMI console or running commands via IPMI tools using the new password.
A recently changed IPMI password does not work when logging into the IPMI console or running ipmitool commands. A script or command used to change the passwords for all hosts in the cluster may not work as expected. Common causes include using passwords longer than the supported character limit or using unsupported special characters in the password.
Changing the IPMI password and the password requirements are documented here https://portal.nutanix.com/page/documents/details?targetId=Hardware-Admin-Guide:har-password-change-ipmi-t.html BMC versions older than 3.40 did not support all of the special characters (KB-1737 http://portal.nutanix.com/kb/1737). Verify the BMC version using the following command: nutanix@CVM $ hostssh "ipmitool mc info | grep -i firmware" The current IPMI password character limit is 20, but IPMI will limit the password to 16 characters if set through the "ipmitool" command without specifying the password length option (KB-7702 http://portal.nutanix.com/kb/7702). In the following example, the last value <20> is the password length. If no length is entered, it defaults to 16 and truncates the password string. [root@AHV]# ipmitool user set password 2 MyVeryLongPasswordXY 20 IPMI Password Length by Platform (KB-3231 http://portal.nutanix.com/kb/3231) To reset the IPMI password to default, first confirm the ADMIN user ID is 2 by running the following command on the affected hosts: [root@AHV]# ipmitool user list Then reset the password to the default "ADMIN" and confirm you can log in to the IPMI console: [root@AHV]# ipmitool user set password 2 ADMIN Note: If the password is not updated, you can try to disable and then enable the ADMIN user: [root@AHV]# ipmitool user disable 2 Note: For ESXi the command is /ipmitool The password can be set cluster-wide using the following for-loop from one of the CVMs. Always use single quotes with passwords that include special characters: nutanix@CVM $ for i in `ipmiips`; do echo $i; ipmitool -I lanplus -H $i -U ADMIN -P ADMIN user set password 2 'C0mpl3xP@$$w0rd'; done Note: Special Password Requirements - Required password length: 8 to 20 characters - Password cannot be the reverse of the user name - Password must include characters from at least 3 of the listed character classes - Allowed character classes - a - z - A - Z - 0 - 9 - Special characters
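The special password requirements listed above can be pre-validated before touching the BMC; a minimal sketch, assuming the rules as stated (8-20 characters, not the reverse of the user name, characters from at least 3 of the 4 classes):

```shell
# Hypothetical pre-validation only; it does not talk to the BMC or ipmitool.
valid_ipmi_password() {
  local user="$1" pass="$2" classes=0
  local len=${#pass}
  if [ "$len" -lt 8 ] || [ "$len" -gt 20 ]; then echo "bad-length"; return; fi
  if [ "$pass" = "$(echo "$user" | rev)" ]; then echo "reverse-of-user"; return; fi
  case "$pass" in *[a-z]*) classes=$((classes+1));; esac   # lowercase
  case "$pass" in *[A-Z]*) classes=$((classes+1));; esac   # uppercase
  case "$pass" in *[0-9]*) classes=$((classes+1));; esac   # digits
  case "$pass" in *[!a-zA-Z0-9]*) classes=$((classes+1));; esac  # specials
  if [ "$classes" -ge 3 ]; then echo "ok"; else echo "need-more-classes"; fi
}

valid_ipmi_password ADMIN 'C0mpl3xPwd'   # -> ok (lower + upper + digit)
```

Checking candidates this way before running the cluster-wide for-loop avoids silently setting a password the BMC will later reject or truncate.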
KB15980
Prism crashes due to SIGABRT - terminate called after throwing an instance of 'std::bad_weak_ptr' - what(): bad_weak_ptr
Investigation into Prism Crashing due to SIGABRT - terminate called after throwing an instance of 'std::bad_weak_ptr' - what(): bad_weak_ptr
Prism is flagged as dead in zookeeper_monitor.INFO: I20230827 17:00:59.871104Z 5947 cluster_state.cc:662] 172.20.33.53:9080 is dead When Prism was dead on the above dates, it was due to a SIGABRT, as seen in prism.out: terminate called after throwing an instance of 'std::bad_weak_ptr' what(): bad_weak_ptr Note that the timestamp of the SIGABRT matches the Prism dead entry in zookeeper_monitor.INFO: $ date -d @1693155658
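The epoch in the last command can be converted deterministically in UTC to confirm it lines up, within a second, with the 17:00:59Z "dead" entry:

```shell
# Convert the SIGABRT epoch to a UTC timestamp (GNU date).
# -u avoids timezone-dependent output when comparing against the Z-suffixed log.
date -u -d @1693155658 '+%Y-%m-%d %H:%M:%S'   # -> 2023-08-27 17:00:58
```

Without -u, the output depends on the local timezone, which can make the correlation with the zookeeper_monitor timestamp misleading.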
This issue is resolved in: AOS 6.6/6.6.1 Advise the customer to upgrade. If the SIGABRT event and stack traces do not all exactly match the signatures above, this may be a different issue and requires further investigation.
KB6084
NCC ERR: Invalid Raw SEL log
Customers may report this on OEM platforms (specifically Lenovo) while running the NCC check ipmi_sel_check.
Part of the ipmi_sel_check is to check for OCP-related events. Power supply events are parsed using the raw SEL list. NCC expects power supply events to follow the signatures below, based on Supermicro platforms:
ESXi/AHV:
POWER_SIGNATURE = "0x04 0x08"
REG_POWER_ISSUE_SIG = "0xff"
REG_POWER_RESTORE_SIG = "0xef"
OCP_POWER_ISSUE_SIG = "0x10"
A line must have at least 7 columns; otherwise, the Invalid Raw SEL log error is raised. If the raw SEL entry starts with 0x04 0x08, the 6th column is checked: it should be 0xff, 0xef or 0x10. The OCP check logic runs after this. If the raw SEL entry starts with 0x04 0x08 and the 6th column does not match one of the three values above, the Invalid Raw SEL log error is raised.
Hyper-V:
POWER_SIGNATURE = "04 08"
REG_POWER_ISSUE_SIG = "ff"
REG_POWER_RESTORE_SIG = "ef"
OCP_POWER_ISSUE_SIG = "10"
A line must have 16 columns; otherwise, the Invalid Raw SEL log error is raised. If the 10th and 11th columns equal 04 08 (the power signature), the 13th column is expected to be ff, ef or 10. The OCP check logic runs after this. If the entry matches the power signature and the 13th column does not match one of the three values above, the Invalid Raw SEL log error is raised.
Per the discussion in the ENG ticket below, the fix scopes this check to NX platforms only and is included in NCC 3.6.2. If the customer is running a non-Nutanix platform, ask them to upgrade NCC; this check result can then be ignored. Note: As of 29/8/2018, testing is not fully done. Please check the ENG ticket and update NCC if the fix is in place. If NCC has been upgraded to 3.6.2 on a non-Nutanix platform, or the issue occurs on NX platforms, further investigation must be performed. Get the raw SEL list entries matching the power signature on the node with the failure entries: AHV: ssh [email protected] "ipmitool sel save ipmi_sel_raw.log > /dev/null" ;echo '----------Reading sel raw file for powersupply ----------'; ssh [email protected] 'grep -E ^"0x04" ipmi_sel_raw.log' VMware: ssh [email protected] "/ipmitool sel save ipmi_sel_raw.log > /dev/null" ;echo '----------Reading sel raw file for powersupply ----------'; ssh [email protected] 'grep -E ^"0x04" ipmi_sel_raw.log' Hyper-V: (Please match the 10th and 11th columns for "04 08"; the script below just searches generically for 04 08) winsh ipmiutil sel -r |grep "04 08" Once you have the power-signature raw SEL entries, check the logic described in the problem section and validate why the entry is hitting the Invalid Raw SEL issue.
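The ESXi/AHV column logic from the problem section can be sketched as a tiny classifier (the function name and result labels are illustrative, not NCC internals):

```shell
# Hypothetical re-implementation of the ESXi/AHV rule described above:
# a raw SEL line needs at least 7 columns, and if it starts with the power
# signature "0x04 0x08" the 6th column must be 0xff, 0xef or 0x10.
classify_raw_sel() {
  local line="$1"
  set -- $line                          # split the entry into columns
  if [ "$#" -lt 7 ]; then echo "invalid"; return; fi
  if [ "$1" = "0x04" ] && [ "$2" = "0x08" ]; then
    case "$6" in
      0xff|0xef|0x10) echo "power-event" ;;
      *)              echo "invalid" ;;   # would raise "Invalid Raw SEL log"
    esac
  else
    echo "other"
  fi
}
```

Feeding the grep output from the commands above through such a classifier shows exactly which entry trips the error.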
KB15991
NCC check idf_db_to_db_sync_heartbeat_status_check fails due to stale Nutanix Files entries
NCC check idf_db_to_db_sync_heartbeat_status_check reports an error due to stale Nutanix Files entries
After a Prism Central restore operation, the NCC check idf_db_to_db_sync_heartbeat_status_check fails with the below error: Detailed information for idf_db_to_db_sync_heartbeat_status_check: These UUIDs are minerva protection policy UUIDs, as we can confirm from the below output: nutanix@PCVM:~$ links --dump 'http://0:2027/all_entities?type=minerva_protection_policy' The PC restore operation added these policies to the zknode due to a known issue: nutanix@PCVM:~$ zkls /appliance/physical/files_clusterexternalstate
A fix is available in Nutanix Files 4.4.x / FM 4.4 as per ENG-614746 https://jira.nutanix.com/browse/ENG-614746. As a workaround, it is possible to manually remove the minerva policy UUIDs from the zknode with the steps below. WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit. Take note of the affected UUIDs and remove the zknode with: nutanix@PCVM:~$ zkrm /appliance/physical/files_clusterexternalstate/<minerva_policy_uuid> As an example, from the above NCC output, we should remove the below UUIDs: nutanix@PCVM:~$ zkrm /appliance/physical/files_clusterexternalstate/57a51848-b1b2-45b9-7f69-2225703649ef Note: removed zknodes will get recreated on every PCVM reboot or insights_server service restart unless Files Manager on the PCVM is updated to version 4.4 or above.
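A dry-run loop that prints the zkrm commands for review before running them by hand; the UUID list reuses the example from this article, and the echo keeps this a preview rather than a destructive action:

```shell
# Hypothetical dry-run helper for the workaround above. Populate stale_uuids
# with the UUIDs reported by the NCC check, then run each printed command
# manually after review (zkrm is only available on the PCVM).
ZK_PATH="/appliance/physical/files_clusterexternalstate"
stale_uuids="57a51848-b1b2-45b9-7f69-2225703649ef"

for uuid in $stale_uuids; do
  echo "zkrm ${ZK_PATH}/${uuid}"
done
```

Previewing the commands first avoids typos against the zknode path, which matters given the warning above about Zookeeper alterations.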
KB9925
How to delete all data sent to Nutanix by Pulse
This article describes the process to request the deletion of data that was collected by Pulse on the Nutanix side.
Sometimes it may be required to delete the data collected by Nutanix Pulse for various legal or security compliance reasons. Even though Nutanix does not collect any personal data via Pulse, we understand that it may be required to delete the data anyway, and we respect that right. It is important to note that if Pulse is disabled, it will not be possible for Nutanix Support to provide proactive support. Nutanix will not be able to receive alerts from the cluster and will not know which software versions are affected. It will also be impossible to collect logs from the cluster remotely, so if a log collection is required for root cause analysis or troubleshooting, the logs will have to be uploaded to a support case manually. That can increase the case resolution time.
Here is the procedure to request the Pulse data deletion:
1. Disable Pulse on the cluster. In Prism Element, go to Settings > Pulse > de-select Enable and click Save.
2. Open a case with Nutanix Support and ask to delete all Pulse data.
KB16011
Nutanix DR | Approval Policy changes with SecureSnapshot
This article describes how to add or remove users from an approval policy in SecureSnapshot.
The purpose of the SecureSnapshot feature is to protect snapshots/recovery points from a malicious attacker who compromises an administrator ID and deletes or modifies the snapshot configuration. Currently, in AOS 6.8 and pc.2024.1, there is a limitation in the SecureSnapshot feature that does not allow managing users in an approval policy once it is created, which may be required in case of organizational changes or approvers leaving or becoming unavailable. This limitation is expected to be resolved in a future version of AOS and PC. If you need to add or remove users in the approval policy, please reach out to Nutanix Support.
The SRE needs to ensure that the request to add or remove a user in the approval policy is authentic. The SRE needs to send the email below and receive approval from a majority of the approvers in the approval policy before adding or removing a user from the policy. If the customer claims no other approvers are available, involve the account team to make sure the request is authentic. Dear {CustomerContactName1, CustomerContactName2, CustomerContactName3}, Once you get approval via email from a majority of the approvers, follow the steps in the internal section of the KB to download the scripts to manage the users.
KB7258
Hypervisor upgrade stuck at 0% for pre-upgrade checks on Single node cluster
Hypervisor upgrade stuck at 0% pre-upgrade checks on Single node cluster with no progress or warning of what might be stalling it.
A 1-click hypervisor upgrade on a single-node cluster can get stuck at 0% during pre-upgrade checks, leaving the customer with no indication of why it is stuck. host_preupgrade.out only states:
2019-12-17 17:56:27 INFO zookeeper_session.py:131 host_preupgrade is attempting to connect to Zookeeper
After identifying Zookeeper as the issue, zookeeper.out shows:
2019-12-17 20:03:05,665 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9876:ZooKeeperServer@996] - Client host_preupgrade is attempting to establish a new connection with version kDelayedPingResponse at /x.x.x.x:33356
The x.x.x.x in bold above is a DNS entry configured in Prism that is causing the pre-upgrade checks to appear stalled.
This issue can occur if any of the configured DNS servers is unreachable or down. Check the Name Server configuration in Prism, remove the non-pingable DNS server, and then refresh the Prism Element screen. Once that has been cleared, you will see the task is no longer running. Go back to the Settings menu and kick off the hypervisor upgrade again; it should succeed after that. On case 00809268, with a single-node AHV cluster, we found that DNS was pingable but NTP was not. Removing the non-pingable NTP server allowed the upgrade to proceed.
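The manual reachability check can be scripted. The sketch below is a hypothetical helper (the check_servers name and server list are made up; ping flags may need adjusting for your OS) that classifies each configured DNS/NTP server before the upgrade is retried.

```shell
# Hypothetical helper mirroring the manual check above: classify each
# configured DNS/NTP server as reachable or not before retrying the upgrade.
check_servers() {
  local s
  for s in "$@"; do
    if ping -c 1 -W 2 "$s" >/dev/null 2>&1; then
      echo "OK          $s"
    else
      echo "UNREACHABLE $s   <- remove this entry in Prism before retrying"
    fi
  done
}

# Example: substitute the Name Servers / NTP Servers shown in Prism Settings.
check_servers 127.0.0.1
```

Any server reported UNREACHABLE should be removed from the Prism Name Server / NTP Server configuration before the upgrade is started again.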
KB14796
Performance | HPE Gen-10 Cascade/Intel M50CYP Using Gen-10 Plus Ice-Lake Systems Experience Degraded Performance With Sleep C-States Enabled
It has been observed that Nutanix clusters running on HPE or Intel hardware may experience performance degradation due to CPUs entering sleep C-States regardless of BIOS settings. This KB provides diagnostics that help identify if this issue is occurring and the steps required to resolve it.
So far, this issue has been seen on HPE Gen-10 Cascade and Intel M50CYP systems; however, it may impact other systems using Gen-10 Plus Ice-Lake CPUs. When sleep C-States (C2 and lower) are active, they can introduce operational delay when threads are scheduled on physical CPU cores operating at reduced power. In a Nutanix environment, for example, the following steps are required to write data to disk:
1. App -> DB
2. DB -> OS Kernel/Filesystem
3. OS Filesystem -> OS storage device driver
4. OS storage device -> Hypervisor data path
5. Hypervisor data path service -> Hypervisor network thread(s)
6. Hypervisor network threads -> CVM X kernel
7. CVM kernel -> CVM datapath process (stargate)
8. Stargate on CVM X -> Stargate on CVM Y (replica creation for RF 2/3)
9. Write to physical media: Stargate on CVM X -> local disk; Stargate on CVM Y -> local disk
10. Stargate on CVM Y -> Stargate on CVM X (write acknowledgment)
11. CVM X kernel -> Stargate on CVM X (write acknowledgment)
To trace the acknowledgment back to the App/DB/OS, reverse the steps from Step 7 back to Step 1.
At each step in the process, delay can be introduced by sleep C-States. In particular, in each step from 6 to 11, this delay will be seen as IO latency at the higher layers (Hypervisor / UVM OS / App).
For example, the graphs below show Read and Write latency for IO being generated by a database application (NOTE: Y-Axis is scaled). Every operation is represented by a dot. Observe that for both reads and writes, there are two clusters of operation latencies:
- One cluster at the very bottom of each graph, representing operations that took < 1 ms
- Clusters of operations at higher latencies: over 1 ms for Reads, over 5 ms for Writes
It was determined that the "gap" between low and higher latency operations was primarily due to processor delay - not queuing within stargate. As a point of reference from a different database application benchmark, the variance in latency with C-States enabled vs.
disabled can also be seen by looking at these test runs: With C-States disabled, average latency (in red) is reduced. More significantly, latency outliers (in blue), seen in the 99/99.9/99.99th percentiles, are reduced dramatically. A simpler test that demonstrates the effect that C-States have on latency (particularly for write operations) is to check network response times between nodes.
================== X.X.X.193 =================
As seen above, average latency between CVMs is ~0.5 ms, with maximum latency approaching 1 ms. With C-States disabled:
================== X.X.X.193 =================
Average latency drops to ~0.2 ms, while maximum latency between nodes is ~0.6 ms.
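The average/maximum figures quoted above can be pulled out of raw ping output with a short parser. The sketch below runs over illustrative sample ping lines, not output captured from the cluster in question:

```shell
# Parse ping output and report average and maximum round-trip time.
pingstats() {
  grep -o 'time=[0-9.]*' | cut -d= -f2 |
    awk '{ s += $1; if ($1 > m) m = $1 }
         END { if (NR) printf "avg=%.3f ms max=%.3f ms\n", s / NR, m }'
}

# Sample ping lines (illustrative only):
pingstats <<'EOF'
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.512 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.201 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.934 ms
EOF
```

On a live system, the same function can be fed directly, e.g. `ping -c 100 <peer CVM IP> | pingstats`, to compare latency before and after the C-State change.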
This issue is resolved in: AOS 6.5.X family (LTS): AHV 20220304.441, which is bundled with AOS 6.5.4AOS 6.7.X family (STS): AHV 20230302.1011, which is bundled with AOS 6.7.1 Upgrade both AOS and AHV to versions specified above or newer.To verify that sleep C-States are being entered, SSH into an AHV host and execute the cpupower command: [root@AHV-1 ~]# cpupower monitor -m Idle_Stats -i 5 The output above shows data for the CPU in PKG (socket) 0. Each PKG Core has 2 CPUs associated with it: “Normal” CPU cores 0-17“Hyperthreaded” CPU cores 36-53 Time spent in the “C6” state indicates that those CPU cores are entering a low-power state. Nutanix has published KB-8129 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000CrLqCAK which provides the steps required to enable C-State management in AHV. This configuration will override the incorrect behavior and prevent the host CPU from entering lower C-States. For other hypervisor vendors, consult their documentation for disabling low-power C-States or enabling Maximum Performance mode.
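Scanning the cpupower output for cores that spent time in C6 can be automated with a small filter. The column layout below is an illustrative sample, not the exact cpupower monitor format; adjust the field index ($4 here) to match the real header on your host.

```shell
# Hypothetical filter: flag any CPU whose C6 residency column is non-zero,
# indicating that sleep C-States are being entered.
flag_c6() {
  awk 'NR > 1 && $4 + 0 > 0 { printf "CPU %s: %.1f%% in C6\n", $1, $4 }'
}

# Sample monitor output (illustrative column layout):
flag_c6 <<'EOF'
CPU  C1   C2   C6
0    1.2  0.0  0.0
1    0.4  0.0  23.7
2    0.0  0.0  0.0
EOF
```

Any CPU listed by the filter confirms that low-power states are active and that the C-State management steps from KB-8129 should be applied.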
KB7084
NCC Hardware Info: show_hardware_info
The NCC Hardware Collector is a module within NCC whose purpose is to periodically scan and inventory detailed information about all hardware components on each node in the cluster. This information is cached and is readily available on demand for immediate reference during hardware triage events, e.g. DIMM/drive replacements.
The NCC Hardware Collector is a module within NCC whose purpose is to periodically scan and inventory detailed information about all hardware components on a particular host in the cluster. This information is cached and is readily available on demand for immediate reference during hardware triage events, e.g. DIMM/drive replacements. Use the show_hardware_info command to display this cached information for the CVM (Controller VM) it is run on:
nutanix@cvm$ ncc hardware_info show_hardware_info
To display hardware information of an offline CVM, the following command can be run from any other working CVM in the cluster:
nutanix@cvm$ ncc hardware_info show_hardware_info --cvm_ip=<Offline CVM IP>
Modern NCC versions support the ability to display hardware info for all CVMs. Use the following command:
nutanix@cvm$ ncc hardware_info show_hardware_info --cvm_ip=cluster
Note: With the --cvm_ip=cluster flag, the output of all nodes is shown from every CVM. The output of all the nodes should be printed only once; however, with 4 nodes, for example, 16 outputs will be displayed. This is fixed in NCC 4.5.0.
The above commands will generate a log file, which will be stored in:
/home/nutanix/data/hardware_logs
The show_hardware_info command is scheduled to run every day, by default. The NCC Hardware Collector also includes the command update_hardware_info, which automatically refreshes data every day, by default.
nutanix@cvm$ ncc hardware_info update_hardware_info
This command can be run manually, for example, after a component replacement to refresh the cached information. Then, run the show_hardware_info command afterwards to show the updated information. Similar to the show_hardware_info command, the update_hardware_info command can be run simultaneously on all CVMs as well.
nutanix@cvm$ ncc hardware_info update_hardware_info --cvm_ip=cluster
Note: On NX-3060/1065/8035-G8 platforms, some Memory Temperature readings may show as 0.0 degrees C.
This is a false positive and it can be ignored. Nutanix is targeting to fix this bug in the next NCC major release. The DIMM info can be checked from the host manually by the below command: AHV [root@ahv]# ipmitool sensor list ESXi [root@esxi]# /ipmitool sensor list Hyper-V PS C:\> ipmiutil.exe sensor list
Below are the data that are captured by NCC show_hardware_info command: Nutanix Product Info Manufacturer Product Name Product Part Number Configured Serial Number Chassis Bootup State Manufacturer Serial Number Temperature Thermal State Version Node Module Node Position Bootup state Host name Hypervisor type Manufacturer Product Name Serial Number Temperature Thermal State Version BIOS Information Release Date Revision Rom size Vendor Version BMC Device ID Device Available Device Revision Firmware Revision IPMI version Manufacturer Manufacturer ID Product ID Storage Controller Location BIOS version Firmware Version Manufacturer Product Part Number Serial Number Physical Memory Array Bank Configured Slots Max size Max Slots Total Installed Size System Power Supply Location Manufacturer Max Power Capacity Product Part Number Revision Serial Number Status Processor Information Socket Information Core Count Core Enabled Current Speed External Clock Id L1 Cache Size L2 Cache Size L3 Cache Size Max Speed Signature Status Temperature Thread Count Type Version Voltage Memory Module Location Bank Connection Capable Speed Current Speed Enabled Size Installed Size Manufacturer Product Part Number Serial number Temperature Type NIC Location Device Name Driver Name Firmware version MAC Address Manufacturer Sub device Sub vendor Driver Version SSD Location Capacity Firmware Version Hypervisor Disk Manufacturer Power on hours Product Part number Serial Number HDD Location Capacity Firmware Version Hypervisor Disk Power on hours Product Part Number Rotation Rate Serial Number FAN Location Rpm Status GPU Class Device Revision Slot Sub device Sub Vendor Vendor Note: If any of the components is not present in the node, you will not see the output for that component.
KB16979
vCenter server user authentication failed alert on Prism Central
vCenter Server User Authentication Failed alert on the PC for vCenter monitoring may occur in one of the scenarios when the vCenter user authentication is changed but not updated for the vCenter monitoring configuration.
Prism Central reports the alert below:
vCenter Server xx.xx.xx.xx User Authentication Failed for user <domain>\<username>
This alert could be triggered by a change in user authentication for vCenter or vCenter monitoring. Even after re-registering vCenter with the username in the alert or a different user, the issue persists because the old username is still stored in the IDF on the PC.
Troubleshooting
Ensure there are no issues with vCenter connectivity and registration. To troubleshoot vCenter connectivity and registration problems, refer to KB-3815 https://portal.nutanix.com/kb/3815 and KB-8527 https://portal.nutanix.com/kb/8527.
Ensure there was no user lockout despite recurring alerts for failed vCenter authentication.
Ensure that vCenter monitoring is configured on Prism Central, as this alert is only raised when it is configured. Refer to Application Instance Details View (vCenter) for pc.2023.3 https://portal.nutanix.com/page/documents/details?targetId=Intelligent-Operations-Guide-vpc_2023_3:mul-explore-integration-details-view-vcenter-pc-r.html, Application Instance Details View (vCenter) for 2023.1.0.1 https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2023_1_0_1:mul-explore-integration-details-view-vcenter-pc-r.html.
Check if re-registering vCenter using the username in the alert or another user resolves the alert. If the alert persists, this could be because the old username is still stored in the IDF on the PC. To resolve this alert, follow the workaround given in the solution section.
Delete vCenter monitoring from the PC and re-configure it with the correct username and password.
For PC versions 2023.3 and newer
Refer to the 'Modifying Application Monitoring' section in the linked documents below for instructions on deleting vCenter monitoring. Multiple links for different PC versions are provided for reference.
Modifying Application Monitoring for 2024.1 https://portal.nutanix.com/page/documents/details?targetId=Intelligent-Operations-Guide-vpc_2024_1:mul-integration-instance-modify-pc-t.html
Modifying Application Monitoring for 2023.4 https://portal.nutanix.com/page/documents/details?targetId=Intelligent-Operations-Guide-vpc_2023_4:mul-integration-instance-modify-pc-t.html
Modifying Application Monitoring for 2023.3 https://portal.nutanix.com/page/documents/details?targetId=Intelligent-Operations-Guide-vpc_2023_3:mul-integration-instance-modify-pc-t.html
You can also find the steps below on how to delete vCenter monitoring and configure it:
1. Log in to Prism Central as the admin user.
2. At the top, by the hamburger menu, select "Intelligent Operations".
3. Select 'Monitoring Configurations'.
4. Select the monitored vCenter and delete it.
5. Now you can configure vCenter for Application monitoring. Refer to the Application Instance Details View (vCenter). https://portal.nutanix.com/page/documents/details?targetId=Intelligent-Operations-Guide-vpc_2023_3:mul-app-discovery-authenticatiing-vcenter-pc-t.html
For pc.2023.1.0.2 and older versions
Refer to the 'Modifying Application Monitoring' section in the linked documents below for instructions on deleting vCenter monitoring. Multiple links for different PC versions are provided for reference.
Modifying Application Monitoring for 2023.1.0.1 https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2023_1_0_1:mul-integration-instance-modify-pc-t.html
Modifying Application Monitoring for 2022.9 https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2022_9:mul-integration-instance-modify-pc-t.html
Modifying Application Monitoring for 2022.6 https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2022_6:mul-integration-instance-modify-pc-t.html
You can also find the steps below on how to delete vCenter monitoring and configure it:
1. Log in to Prism Central as the admin user.
2. Select "Prism Operations" and then choose "Monitoring Configurations" to load the monitoring page.
3. Select 'Monitoring Configurations'.
4. Select the monitored vCenter and delete it.
5. Now you can configure vCenter for Application monitoring. Refer to the Application Instance Details View (vCenter). https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2023_1_0_1:mul-app-discovery-authenticatiing-vcenter-pc-t.html
KB12697
Prism Central eth0 resets to DHCP settings after Upgrade / Reboot
After an upgrade or reboot a Prism Central VM may lose eth0 configuration and reset to DHCP.
After an upgrade of Prism Central or a reboot of the VM, eth0 may lose its ifcfg-eth0 configuration and reset back to DHCP. If you re-configure the ifcfg-eth0 file, the configuration will again be lost on reboot. This issue was tracked by ENG-419625, which appears to be a regression of ENG-352043.
Solution: Upgrade to pc.2022.1. The issue is resolved and delivered in pc.2022.1 according to the Release Notes https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-Prism-Central-vpc_2022_1:top-resolved-issues-r.html.
Workaround: If the upgrade is not possible, the current workaround steps are below.
1. Find the cloud-init version on the PC with: rpm -qa | grep cloud-init
2. For cloud-init version < 18.5-3:
Open the cloud.cfg file: sudo vim /etc/cloud/cloud.cfg
Change the line: datasource_list
To: datasource_list: [ None]
3. For cloud-init version >= 18.5-3:
*Do not* make the step 2 changes on a PC with cloud-init version >= 18.5-3.
Verify that a file /etc/cloud/cloud.cfg.d/99_network_config.cfg is created with the below content:
network: {config: disabled}
If the file is missing, run the following:
$ sudo -i
4. Manually configure ifcfg-eth0 with the correct IP.
5. Restart the PCVM to verify the configuration does not clear again.
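Because the workaround differs on either side of cloud-init 18.5-3, a version comparison like the hypothetical helper below (using sort -V) can decide which branch applies. The workaround_for name and the echoed strings are illustrative.

```shell
# Hypothetical helper: decide which workaround branch applies for a given
# cloud-init version string, e.g. the "18.5-3" reported by rpm.
workaround_for() {
  local ver="$1" pivot="18.5-3"
  # sort -V puts the smaller version first; if the pivot sorts first (or is
  # equal), the installed version is >= 18.5-3.
  if [ "$(printf '%s\n' "$ver" "$pivot" | sort -V | head -1)" = "$pivot" ]; then
    echo "use /etc/cloud/cloud.cfg.d/99_network_config.cfg"
  else
    echo "set datasource_list: [ None ] in /etc/cloud/cloud.cfg"
  fi
}

workaround_for "18.5-3"   # newer branch
workaround_for "0.7.9"    # older branch
```

In practice, the version argument would come from parsing the output of `rpm -qa | grep cloud-init` on the PCVM.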
KB17139
Objects UI not accessible due to aplos service certificate expired
Customers may experience an issue where the aplos service certificate has expired, resulting in the Objects UI being stuck at loading
Customers may experience an issue where the aplos service certificate has expired, resulting in the Objects UI being stuck at loading. This affects all PC versions.
Identification
1. The Objects UI is stuck at loading, as shown below.
2. aplos is unable to query the aoss service manager using certificate-based authentication. The aplos log on the PC (/home/nutanix/data/logs/aplos.out) contains the below signature. The key error is "Invalid X509 Certificate":
2024-07-09 03:42:56,462Z WARNING aoss_service_manager_utils.py:109 AOSS service manager readiness check returned SERVICE_ENABLING
3. The athena log (/home/nutanix/data/logs/athena.out) complains that it is unable to find a valid certification path:
ERROR 2024-07-09T04:00:22,483Z Thread-1 athena.authentication_connectors.CertificateAuthenticator.trustCertificate:370 certificate verifying failed on building certification path check {}
4. The aplos certificate is expired:
nutanix@PCVM:~$ sudo openssl x509 -text -in /home/certs/APLOSService/APLOSService.crt
5. An expired Objects Service Manager certificate can also cause a similar issue. Refer to KB11368 https://portal.nutanix.com/kb/11368 for more details.
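A quick way to confirm expiry without reading the certificate dates by eye is openssl's -checkend flag, which exits non-zero once the certificate has expired (or will expire within the given number of seconds). The path below is the APLOS example from this article; the same check applies to other service certificates.

```shell
# Check whether a certificate has already expired. "-checkend 0" exits 0
# while the cert is still valid, non-zero once it has expired.
CERT=${CERT:-/home/certs/APLOSService/APLOSService.crt}
if openssl x509 -checkend 0 -noout -in "$CERT" >/dev/null 2>&1; then
  echo "certificate still valid: $CERT"
else
  echo "certificate expired or unreadable: $CERT"
fi
```

Run it as the nutanix user on the PCVM (with sudo if the certificate file is not world-readable).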
The service certificates are valid for two years from the installation or the last upgrade of Prism Central. If the customer is running an end-of-support-life Prism Central, all the service certificates could have expired. Upgrade Prism Central to a newer version to resolve the issue.
KB15805
NDB | Database Server VM registration fails with error “Unsuppored configuration: OS or DB_SOFTWARE drives must not used as DATA Drives”
Database Server registration fails in NDB with error “Unsuppored configuration: OS or DB_SOFTWARE drives must not used as DATA Drives”
When attempting to register a database server VM to NDB, the operation fails with the below error:
'Unsuppored configuration: OS or DB_SOFTWARE drives must not used as DATA Drives'
The messages below are also seen in the DB server eraconnection.log (dbserveripaddress/logs/common/eraconnection.log):
eraconnection.log
[2023-10-05 14:42:02,052] [139645047797632] [INFO ] [0000-NOPID],Request :{'id': '7a18d797-xxxx-4d5c-ae89-yyyy03ff0adc', 'status': '4', 'percentageComplete': '5', 'message': "LV '/dev/mapper/vg_test' contains both DATA (on dir {'/ora123/ixxx/oradata/data'}) and SOFTWARE entities. Ensure that the database files and the software home/inventory are on distinct logical or physical volumes.", 'type': 'register_database'}
This pre-check was introduced starting with NDB 2.5.2.
Having the OS or database software and the data drives on the same physical/logical volume is not a supported configuration. Ensure that the following requirement is met before registering a database server VM with NDB: OS or database software disks must be distinct from the data disks. For general requirements, see the respective Database Management Guides.
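The pre-check's rule can be illustrated with a small filter: given entity-to-device pairs (on a real VM the device backing a directory can be found with `df --output=source <dir>`), flag any volume that hosts both DATA and SOFTWARE entities. The function name and the input pairs below are a made-up example.

```shell
# Illustrative check of the rule above: flag any device/LV that hosts both
# DATA and SOFTWARE entities, which would fail the NDB pre-check.
find_shared_lv() {
  awk '{ seen[$2] = seen[$2] " " $1 }
       END { for (d in seen)
               if (seen[d] ~ /DATA/ && seen[d] ~ /SOFTWARE/)
                 print "conflict on " d ":" seen[d] }'
}

# Sample entity-to-device mapping (illustrative only):
find_shared_lv <<'EOF'
SOFTWARE /dev/mapper/vg_test
DATA /dev/mapper/vg_test
DATA /dev/mapper/vg_data
EOF
```

Any "conflict" line indicates a layout that must be restructured onto distinct volumes before registration.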
KB14186
Nutanix Files - Manual DNS verification fails even if the DNS entries are properly created
Attempting to perform a Manual DNS verification operation may fail even if DNS entries are properly created.
A manual DNS verification in Nutanix Files version 4.2 may fail even if the records are correctly created.When clicking on the "Verify" button, it keeps loading until it eventually fails without returning an evident error.When opening the developer tools, you will see the verify-dns-entries API call returning a 400 error.
This issue has been identified and fixed. To get the fix, upgrade to Nutanix Files 4.2.1 or higher.
KB10893
Nutanix Kubernetes Engine - Forwarded logs are missing fields
Logs forwarded from Nutanix Kubernetes Engine clusters are missing kubernetes.* fields on remote Elasticsearch
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services. A customer using Karbon 2.2 enables log forwarding on deployed Kubernetes clusters. On the remote Elastic endpoint, the kubernetes.* fields are missing from Kubernetes clusters running fluent-bit image v1.5.3 or v1.6.10. When checking Kibana running locally on the Kubernetes cluster, the kubernetes.* fields are present. The behavior is not seen on earlier deployed Kubernetes clusters running fluent-bit version 0.13.2. Below are two screenshots showing the difference:
- A message in Kibana without the kubernetes.* fields
- A message in Kibana with the kubernetes.* fields
The fix is present in Karbon 2.2.2. The customer should upgrade to the latest version of Karbon. Before applying the workaround, please engage with your local Karbon SME.
Check the Kubernetes cluster to confirm the issue is related to this KB:
- Verify that the Karbon Kubernetes cluster has log forwarding enabled:
nutanix@PCVM:~$ karbonctl cluster log-forward get --cluster-name test
- Verify the versions of the fluent-bit images:
[nutanix@karbon-test-149498-k8s-master-0 ~]$ kubectl -n ntnx-system get pod -l k8s-app=fluent-bit-logging -o yaml |grep -i 'image:.*fluent'
- Verify that in the configmap for 'all the namespaces', the tag property from the [INPUT] configuration file does not match the Kube_Tag_Prefix property from the [FILTER] configuration file.
[INPUT]
[nutanix@karbon-test-149498-k8s-master-0 ~]$ kubectl -n ntnx-system get cm fluent-bit-config -o yaml |egrep -v "apiVersion|Regex"|egrep -i -C4 'Path /var/log/containers/\*\.log'
[FILTER]
[nutanix@karbon-test-149498-k8s-master-0 ~]$ kubectl -n ntnx-system get cm fluent-bit-config -o yaml |egrep -v "apiVersion|Regex"|egrep -C4 "FILTER" |egrep -C3 "toforward.*"
The actual workaround, until a future release of Karbon containing the fix, is to edit the fluent-bit-config configmap in the ntnx-system namespace. Change the Kube_Tag_Prefix property value in the [FILTER] configuration file to match the value of the tag property from the [INPUT] configuration file.
To edit the configmap, use the following command:
kubectl -n ntnx-system edit configmap fluent-bit-config
Below is the working snippet from the configmap:
kubernetes-all-namespaces-filter.conf: |
After editing the configmap, restart the fluent-bit pods so they pick up the new configuration:
kubectl -n ntnx-system delete pod -l k8s-app=fluent-bit-logging
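The mismatch the workaround fixes can be sketched as a simple prefix check: the [FILTER] Kube_Tag_Prefix must begin with the literal part of the [INPUT] tag (the text before the '*'), otherwise the kubernetes filter cannot map records back to pods. The function name and values below are illustrative, not taken from the actual configmap.

```shell
# Hypothetical sanity check: does the [FILTER] Kube_Tag_Prefix start with the
# literal prefix of the [INPUT] tag (the part before the '*')?
check_tag_prefix() {
  local tag="$1" kube_tag_prefix="$2"
  local literal="${tag%%\**}"    # e.g. "toforward." from "toforward.*"
  case "$kube_tag_prefix" in
    "$literal"*) echo "OK: prefix $kube_tag_prefix matches tag $tag" ;;
    *)           echo "MISMATCH: prefix $kube_tag_prefix vs tag $tag" ;;
  esac
}

check_tag_prefix "toforward.*" "kube.var.log.containers."       # broken combination
check_tag_prefix "toforward.*" "toforward.var.log.containers."  # after the workaround
```

A MISMATCH result corresponds to the broken configmap state; edit Kube_Tag_Prefix as described above until the check passes.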
KB4198
Override default stargate behavior for disk cloning on ESXi (thick provisioned to thin provisioned)
Customers might need to override default Stargate behavior (thick provisioned disks become thin after cloning) for accountability / management reasons.
As KB1591 describes (Virtual disk provisioning types in VMware with Nutanix storage /articles/Knowledge_Base/Virtual-disk-provisioning-types-in-VMware-with-Nutanix-storage), Stargate by default will create a thin provisioned disk after a cloning operation (assuming the clone is deployed to the same container), even if the source disk is thick provisioned. The reasoning behind this is to avoid wasting space, and also because there is no performance gain from thick provisioning due to the structure of the NDFS filesystem. Some customers might request overriding this default behavior for management or accountability reasons. This KB describes the changes needed to accomplish this.
"WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines ( https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit)" Customer running 5.x: Apply the following gflag ( Working with GFlags /articles/Knowledge_Base/Working-with-GFlags) in all cluster nodes: --stargate_nfs_adapter_allow_reserve_space_on_clone=true This requires a rolling restart of stargate. Ensure cluster is healthy before restarting stargate on a rolling fashion to avoid any outage. Verify there are no recent FATALs, run following command. #date; allssh ls -ltr /home/nutanix/data/logs/*.FATAL Verify all services are up and running: #cluster status | grep -v UP Verify all nodes are part of the metadata store: #nodetool -h0 ring Apply gflags via aegis page and rolling restart stargate on all CVMs. After stargate has been restarted in one CVM, ensure service is not FATAling before proceeding to the next one. #genesis stop stargate; cluster start; allssh ls -tlr /home/nutanix/data/logs/stargate.FATA Check if gflags have been applied successfully: #allssh links -dump http://0:2009/h/gflags | grep reserve With this gflag in place, the following behavior is expected: Cloning thick disk in the same container -> Clone has a thick disk.Cloning a thin disk in the same container -> Clone has a thin disk.Cloning a thick disk in a different container -> Clone has a thick disk.Cloning a thin disk in a different container -> Clone has a thin disk. Customer running 4.7.x: Currently there is a bug in the code for this gflag on 4.7.x branch where cloning a VM with a thin disk will become thick after cloning. The code-fix from 5.0.x will be backported into AOS >= 4.7.5. 
Do not apply the gflag to a cluster running an AOS version below the version with the fix. More information in the following ENG tickets:
https://jira.nutanix.com/browse/ENG-79292
https://jira.nutanix.com/browse/ENG-35952
KB13541
Nutanix Files - Async DR - Protection Domain failover operation fails to deactivate the PD when the container has been renamed
Nutanix Files - Async DR - PD Failback fails to deactivate the Files PD during a failover operation if the Files container name is not in the form "Nutanix_<fs_name>_ctr" and when FSM version = 2.2.0.
During a PD failover or failback operation, the deactivate step constructs the container name using the FS name. Due to a software defect in FSM version 2.2.0, the code incorrectly assumes that the container name will always be in the form "Nutanix_<fs_name>_ctr"; however, the container name may have been manually changed by the administrator. Therefore, if the container name is different from the expected "Nutanix_<fs_name>_ctr", the failover or failback PD operation will fail. This typically occurs during the failback step of a failover-and-then-failback sequence.
Symptoms
Example of the failed FileServerPdDeactivate task:
<ergon> task.get 28e820e0-3dac-4995-6f4c-f10dddfbc1f4
<ergon> task.list component_list=minerva_cvm
In minerva_cvm.log on the Minerva leader CVM (afs info.get_leader), we see that instead of container Nutanix_testfs_ctr_dr, we are trying to delete container Nutanix_testfs_ctr:
2022-08-24 00:08:57,882Z INFO 88455120 file_server_misc.py:1386 Deleting files [u'NTNX-testfs-1.iso'] in container Nutanix_testfs_ctr_dr:
To confirm the FSM version:
nutanix@NTNX-CVM:~$ cat minerva/version.txt
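The defect boils down to a string assumption that can be sketched in a few lines: FSM 2.2.0 derives the container name from the file server name and never checks the real container name. The names below are examples based on the log output above.

```shell
# Sketch of the assumption the FSM 2.2.0 code makes: it derives the container
# name as Nutanix_<fs_name>_ctr and fails if the real container was renamed.
fs_name="testfs"
expected_ctr="Nutanix_${fs_name}_ctr"
actual_ctr="Nutanix_testfs_ctr_dr"   # e.g. the container on the DR site

if [ "$expected_ctr" = "$actual_ctr" ]; then
  echo "container name matches: deactivate will target $actual_ctr"
else
  echo "name mismatch: code targets $expected_ctr but container is $actual_ctr"
fi
```

Comparing the real container name against the derived "Nutanix_<fs_name>_ctr" form is a quick way to predict whether a cluster is exposed to this defect before upgrading to FSM 2.2.2.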
This issue has been resolved in FSM 2.2.2 and later. No workaround is approved to resolve the issue without an upgrade.
KB6017
NGT installation, removal, upgrade, or an update error on Windows VMs: 0x80070643 Fatal
Resolve issues with NGT upgrades when uninstall MSI is missing from system drive or C:\ drive.
The errors below are seen after an AOS upgrade or when re-installing Nutanix Guest Tools (NGT) on a Windows VM. Prism Alert for the VM: NGT on <VM_name> should be upgraded. NGT Setup errors: The feature you are trying to use is on a network resource that is unavailable 0x80070643 - Fatal error during installation The older version of Nutanix Guest Tools Infrastructure Components Package 2 cannot be removed. Contact your technical support group. If the usual methods of uninstalling NGT from Windows VMs described below do not work, continue to the Solution section of this article for registry key removal. Check if NGT is mounted as Drive D:\ and attempt to uninstall. If this does not work, proceed with the next steps. Note the UUID of the affected VM from the output of the command 'ncli vm ls' Unmount, remove and delete NGT from the VM by running the following commands: ncli ngt unmount vm-id=<VM-UUID> Uninstall NGT on the affected VM either from the Add/Remove Programs or from PowerShell: Note: The examples below use "Nutanix Guest Tools Infrastructure Components Package 2". You may have a different package number installed, for example, "Nutanix Guest Tools Infrastructure Components Package1" (instead of "2"). You can check the package version from the NGT Setup errors earlier in this article, or by issuing the following command as an Administrator: PS C:\Users\Administrator> Get-WmiObject -Class win32_product | Where-Object {$_.name -Like "Nutanix*"} Check if NGT is still installed and the version of NGT (can be compared to another VM). Check this after every step to see if it is still present. 
PS C:\Users\Administrator> Get-Package -Name "Nutanix Guest Tools Infrastructure Components Package 2"
Sample output:
PS C:\Users\Administrator> Get-Package -Name "Nutanix Guest Tools Infrastructure Components Package 2"
PS C:\Users\Administrator> $app = Get-wmiobject -class win32_product -filter "Name = 'Nutanix Guest Tools Infrastructure Components Package 2'"
Sample output:
PS C:\Users\Administrator> $app = Get-wmiobject -class win32_product -filter "Name = 'Nutanix Guest Tools Infrastructure Components Package 2'"
PS C:\Users\Administrator> Get-Package -Name "Nutanix Guest Tools Infrastructure Components Package 2" | Uninstall-Package -Force
Notes:
In certain scenarios, you might see two packages for NGT. You will have to perform the above steps for both packages.
IdentifyingNumber : {FFC8323A-3A1E-4D94-9CFE-C1DB6F9D4EB8}
In a few cases, the packages will be removed just by following the above steps; you will then not see the registry keys mentioned in the steps below, and those steps will not be required.
Open regedit (Windows key + R, then enter regedit), navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall, and click on the directory that matches the IdentifyingNumber of NGT. The IdentifyingNumber can be taken from the previous Get-wmiobject command (without "$app="). Then copy the UninstallString and enter it in PowerShell.
If the above does not uninstall NGT, continue to the Solution section below.
Another scenario could be that NGT fails to install/update with the error below:
The specified account already exists
Follow the Additional options section below to perform stale entry cleanup.
Note: If NGT setup fails with "Nutanix Guest Tools already installed" but there is no option to uninstall NGT from Programs, try uninstalling Nutanix Guest Agent from PowerShell:
PS C:\Windows\system32> Get-Package -Name "Nutanix Guest Agent"
Attempt NGT installation after the above is completed.
Additional options Before doing a registry cleanup for NGT, attempt the following options: Clone the VM and install NGT on the cloned VM.Contact Microsoft to remove NGT using Microsoft internal tools (Fixit, for example). Registry cleanup Take a snapshot of the VM before modifying the registry. In Prism > VM tab > click on the VM in question > Take Snapshot. Do not omit this step.Navigate to the Registry key: Click Start, then click on Run, enter regedit in the Open field and click HKEY_LOCAL_MACHINE\Software. Right-click on the Software key and select the Export option.Create a backup of the Software registry key on the Desktop in case of a problem. In the dialog box where you can select a directory and name for the file to which you will export, set the Save in the field to Desktop. For the File name, enter Software_backup and use the Save button. If you would like to revert to the backup, double-click Software_backup file to restore from the saved registry key.Delete the Registry key. Navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall. Search for the NGT IdentifyingNumber noted from Get-wmiObject command above, right-click and choose Delete. To remove all references to NGT (in the above example, "6FC36A2F-..."), go back to regedit, navigate to HKEY_LOCAL_MACHINE\Software > Right-click > Find... > paste the identifying number from the get-wmiObject command. Remove all other references. Press F3 to find the next reference until none is found. Entries can be found under Installer\Dependencies, Installer\Products, etc.Additional clean-up of the Registry may be needed in case you still see stale instances of old NGT installation. Remove from HKEY_LOCAL_MACHINE\Software. Search "Nutanix Guest" and remove all the references under the search. Example of registry tree / Key that should be removed. 
Even if found on SourceList or InstallProperties Key, remove the parent Key as below: Example of a single registry string to be removed when it is part of a directory with other unrelated registry key strings: The same steps above must be repeated for VSS (Nutanix VSS Modules 1.1.0) and NGA (Nutanix Guest Agent) if installing NGT fails again, returning an error about these packages. Once removed, unmount/remove and delete NGT from the nCLI using the steps above if not performed already. Then, proceed with mounting and re-installing NGT. If the installation fails with the error below, refer to KB-7136 https://portal.nutanix.com/kb/7136 to manually copy the contents of the config folder: The system cannot find the file specified. cmd.exe /c net start "Nutanix Guest Agent"
KB13478
Stargate fatal because of epoll thread stuck in TcpConnection::FlushPendingWritesInternal
Stargate service crashes because an epoll thread is stuck in a TCP connection on a CVM.
In rare instances, random Stargate service crashes have been seen on a CVM with the FATAL signature below. In most cases, there is no service impact and no cluster-wide storage outage. However, there is a chance of VM restarts on the local AHV host because of an iSCSI timeout. Symptoms The Stargate service restarts with the error "Watch dog fired" at the first instance of the issue. F20220604 06:00:06.762014Z 14133 stargate.cc:1499] Watch dog fired: event timeout (time of last alarm handler: 1654322386760883) A Stargate core file is generated as part of the Stargate FATAL, reporting that epoll threads are busy in various stages of TcpConnection ===== epoll_1/14162 ===== Stargate might report that multiple TCP connection threads are stuck in the "waiting for quiescing callbacks" state for 20 seconds before the watchdog alarm handler triggers a Stargate service restart. I20220604 05:59:44.282284Z 14150 tcp_connection.cc:666] TcpConnection with fd 499, conn_id 626 waiting for quiescing callbacks peer info 172.27.84.28:46030:TCP Note: As a general rule of thumb, eliminate any network unreachability, communication issues between CVMs, and hardware issues on the CVM.
Solution Upgrade AOS to a version that contains the fix for ENG-480410 https://jira.nutanix.com/browse/ENG-480410: 6.5.3.5 or later on LTS, or 6.7 or later on STS.
KB13726
[JPKB] Power operations performed from Prism or the API may not work on Windows VMs
Power operations may not work on Windows VMs running on AHV because a screen saver is enabled or because of how the VM is configured to respond to ACPI requests.
This KB is the Japanese version of KB 5828. Refer to KB 5828 for the latest content. When power operations such as a guest reboot or guest shutdown are performed from Prism or via the API against a Windows VM running on AHV, these actions may not work. No error message is shown in Prism or in the logs, and a subsequent attempt at the same power operation may complete normally. This behavior is seen more often in VDI environments than in typical server virtualization use cases. When a guest reboot or guest shutdown operation is performed, the Acropolis service sends an ACPI shutdown/reboot command to the virtual machine. How these commands are handled is determined by the OS running on the virtual machine. There are typical cases in which a Windows VM cannot handle the ACPI command: If a screen saver is enabled on the VM, Windows only stops the screen saver when it receives the ACPI command, without performing the actual shutdown/reboot. If a subsequent ACPI command is sent before the screen saver becomes active again, the shutdown/reboot is performed. If the Windows power button setting is not set to Shut down, Windows may ignore the ACPI shutdown command. Refer to the Microsoft documentation https://learn.microsoft.com/en-us/windows-hardware/customize/power-settings/power-button-and-lid-settings-power-button-action. The Power service may be disabled on the Windows VM. This OS behavior can also affect VM operations when an AHV host transitions into maintenance mode. When Acropolis detects a VM that cannot be live-migrated (for example, an agent VM), it tries to shut the VM down by sending an ACPI command. If 30 seconds pass without the OS reacting to the ACPI shutdown command, Acropolis powers the VM off.
The following are the steps to change the power button behavior and disable the screen saver on Windows Server 2016. Check the documentation of your OS version for the exact steps. For VDI, the golden image must be updated and the user VMs redeployed from it to apply the new settings. Case 1 Go to Settings > Personalization > Lock screen and select Screen saver settings. In the Screen Saver Settings window, set the screen saver to (None) and click OK. Open Control Panel and go to System and Security > Power Options (or go to Settings > System > Power & sleep and click Additional power settings). Click Change plan settings next to the currently active preferred plan. Set Turn off the display: to Never. Click Save changes. Case 2 Open Control Panel and go to System and Security > Power Options (or go to Settings > System > Power & sleep and click Additional power settings). In the Power Options window, click Choose what the power buttons do in the left pane. In the System Settings window, set When I press the power button: to Shut down and click Save changes. Case 3 If the Power service is disabled, the following message is displayed in Power Options: Your power plan information isn't available. The RPC server is unavailable. Click the Windows Start button, search for Services, and open the application that appears. Checking the properties of Power in the list of services shows that the Startup type is Disabled and the Service status is Stopped. Change the Startup type to Automatic and click Apply or OK to apply the setting. Click Start to start the service so that the Service status: changes to Running. The options will now be displayed on the Power Options page of Control Panel. Case 4 Even when the Windows OS receives the ACPI call but does not process the shutdown/reboot command with the settings above, forced shutdown via the power button can be enabled by following the steps below. Note: Enable this feature with caution, because the VM will be shut down without the option to save unsaved work. Open a Command Prompt as administrator. Enter the following command at the Command Prompt and press Enter. powercfg -attributes SUB_BUTTONS 833a6b62-dfa4-46d1-82f8-e09e34d029d6 -ATTRIB_HIDE Open Control Panel and go to System and Security > Power Options (or go to Settings > System > Power & sleep and click Additional power settings). In the Power Options window, go to Change plan settings > Change advanced power settings for the currently active plan, set all the options under Power buttons and lid to On, and click OK to save. From then on, the Windows VM will shut down forcibly when it receives an ACPI shutdown command. Note: The visibility of the forced-shutdown-by-power-button setting can be reverted with the following command (replace -ATTRIB_HIDE with +ATTRIB_HIDE). This command only changes the visibility setting, not the actual configuration. powercfg -attributes SUB_BUTTONS 833a6b62-dfa4-46d1-82f8-e09e34d029d6 +ATTRIB_HIDE Note: The visibility of the forced-shutdown-by-power-button setting can also be changed with the following registry key or group policy. [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\4f971e89-eebd-4455-a8de-9e59040e7347\833a6b62-dfa4-46d1-82f8-e09e34d029d6]
KB10395
Alert - A801109 - L2StretchLocalIfConflict
Investigating L2StretchLocalIfConflict issues on Prism Central.
This Nutanix article provides the information required for troubleshooting the alert L2StretchLocalIfConflict on Prism Central where Advanced Networking (Flow) is enabled. Alert overview The L2StretchLocalIfConflict alert is raised if the local VPN interface IP address is conflicted with one of these: Gateway IP address of subnet in a local or remote availability zone.The IP address allocated to a VM in a local or remote availability zone.DHCP pool of subnet in a local or remote availability zone. Sample alert Block Serial Number: 16SMXXXXXXXX Output messaging [ { "Check ID": "Local VPN interface IP involved in Layer-2 subnet extension is in use in peer AZ." }, { "Check ID": "A vNIC in peer AZ was assigned the same IP address as VPN interface IP used in the local AZ." }, { "Check ID": "Resolve the IP conflict by ensuring the VPN interface IP is not used for any user VMs in any of AZs involved in the Layer-2 subnet extension." }, { "Check ID": "Some VMs in the subnets involved in Layer-2 subnet extension will be unable to communicate with other VMs in peer AZ." }, { "Check ID": "A801109" }, { "Check ID": "Local VPN interface IP involved in Layer-2 subnet extension is in use in peer AZ." }, { "Check ID": "Local VPN interface IP involved in Layer-2 subnet extension" }, { "Check ID": "Local VPN interface IP involved in Layer-2 subnet extension" } ]
Resolving the issue Ensure that the local VPN interface IP address is not used for the gateway IP address, VM, or DHCP pool in local and remote availability zones. If you need further assistance or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Collecting additional information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB-2871 https://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB-2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB-6691 https://portal.nutanix.com/kb/6691. nutanix@cvm$ logbay collect --aggregate=true Attaching files to the case To attach files to the case, follow KB-1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
KB5797
NCC Health Check: conflicting_floating_ip_check_plugin
The NCC health check conflicting_floating_ip_check_plugin checks if there are multiple VM NICs associated with Nutanix Disaster Recovery (DR) recovery plans that are assigned the same floating IP.
Note: Nutanix Disaster Recovery (DR) is formerly known as Leap. Nutanix DRaaS is formerly known as Xi Leap. The NCC health check conflicting_floating_ip_check_plugin checks if there are multiple VM NICs associated with recovery plans that are assigned the same floating IP. This check is executed from the Prism Central (PC) paired with a Nutanix DRaaS Tenant. Running the NCC check The NCC check can be run as part of the complete NCC check by running: nutanix@cvm$ ncc health_checks run_all Or individually as: nutanix@cvm$ ncc health_checks draas_checks recovery_plan_checks conflicting_floating_ip_check_plugin You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every hour, by default. This check will generate an alert after 1 failure. Sample output For status: WARN The NCC check will WARN if there are multiple VM NICs associated with recovery plans with the same Floating IP. Detailed information for conflicting_floating_ip_check_plugin: Output messaging [ { "Check ID": "Checks if VMs which are part of different Recovery Plan have same Floating IPs." }, { "Check ID": "VMs belonging to different Recovery Plans are assigned the same Floating IP." }, { "Check ID": "Update the Recovery Plans to ensure that a Floating IP address is to be assigned to only one VM.\t\t\t\t\t\tFrom NCC 5.0 onwards.\t\t\tUpdate the Recovery Plans to ensure that a Floating IP in one external subnet is not assigned to multiple VMs." }, { "Check ID": "Floating IP assignment post VM recovery may fail." }, { "Check ID": "A300416" }, { "Check ID": "The same floating IP is associated with multiple VMs belonging to different Recovery Plans." }, { "Check ID": "Same Floating IP is associated with multiple VMs belonging to different Recovery Plans." }, { "Check ID": "Same Floating IPs should not be assigned to multiple VMs and should not be part of multiple Recovery Plans. 
Floating IP floating_ip is assigned to VMs {alert_msg}." } ]
In order to resolve the issue, from Prism Central, go to the Recovery Plan page and review the recovery plans and the associated network configuration. Ensure that each Floating IP is associated with only one VM NIC. Correct any duplicate assignment. In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Additionally, gather the following command output and attach it to the support case: nutanix@cvm$ ncc health_checks run_all
KB3086
After Apply LSI SAS FW Update SAS Address Is Showing All Zeros
If the SAS FW update is disrupted in any way, it is possible that the 'SAS Address' from the sas3flsh -list command will show all zeros.
If the SAS FW update is disrupted in any way, it is possible that the 'SAS Address' from the sas3flsh -list command will show all zeros as shown below.
To resolve this issue, you will need to obtain the SAS address via one of the two following methods (unless you previously recorded the entire address per KB2902 https://portal.nutanix.com/kb/2902): Boot the CVM, go to /home/log and run: sudo grep sas_addr * | less If you cannot find the address via step 1, the node has to be physically pulled so the 16-digit SAS Address can be read off the adapter card that sticks out of the motherboard; it usually starts with '5003048'. Once that is done, boot back into the DOS ISO and rerun the original procedure using the last 9 digits obtained via one of the two above methods: SMC3008T.bat If the reflash does not work, pass the address manually via the commands below: sas3flsh -o -sasaddhi <Upper 7 digits> Once the update is complete, confirm the updated address via sas3flsh -list and reboot.
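The split of the 16-digit SAS address into the upper 7 digits (for -sasaddhi) and the remaining 9 digits (what the SMC3008T.bat procedure asks for) can be sketched in shell. The sample address below is hypothetical, chosen only to match the '5003048' prefix mentioned above:

```shell
# Split a 16-digit SAS address into the upper 7 digits (for -sasaddhi)
# and the lower 9 digits (used when rerunning SMC3008T.bat).
sas_upper() { printf '%s' "$1" | cut -c1-7; }
sas_lower() { printf '%s' "$1" | cut -c8-16; }

# Hypothetical address for illustration only:
sas_upper 5003048001234567   # prints: 5003048
sas_lower 5003048001234567   # prints: 001234567
```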
KB15666
Flow Network Security (FNS) PE auto-upgrade fails during node addition/expansion on a kBackplane + kService on NS-enabled-cluster with error ValueError: list.remove(x): x not in list
On clusters where the backplane network and Service network segmentation is enabled, if a cluster expansion is performed, the flow service on the newly expanded node may fail to start and cause genesis to crash on the new node.
After a cluster expansion in clusters where Network Segmentation (Backplane Network) and service network segmentation are enabled, genesis may crash on the newly added node due to failure to upgrade the Flow version on the new CVM.The following traceback may be observed: 2023-10-11 13:20:17,120Z INFO 99331984 flow_upgrade_helper.py:357 Upgrading Flow on added node from 1.0.1 to 3.1.0 version You can further confirm by verifying the Flow versions on all the nodes nutanix-CVM~$ allssh cat /home/nutanix/flow/flow_version.txt
Note: Consult with a Senior SRE or a Support Tech Lead (STL) before following these steps. Follow the steps below to upgrade Flow Network Security (FNS) PE on the newly added CVM. Note: These steps can also be followed if the CVM has not yet been added to the cluster. Use Genesis on the new CVM's CLI to stop the flow service nutanix@cvm:~$ genesis stop flow Change directory to /usr/local/nutanix/ (where the "flow" binaries exist) nutanix@cvm:~$ cd /usr/local/nutanix/ Make a backup of the current FNS PE version 'flow' directory in the 'nutanix' user's temporary directory nutanix@cvm:~$ cp -r flow/ /home/nutanix/tmp/ Change directory to the 'nutanix' user's home directory nutanix@cvm:~$ cd /home/nutanix Use SCP to copy the newer FNS PE bundle to the new CVM from an existing CVM in the cluster that is running the intended target version required to bring the new CVM's FNS PE version into line nutanix@cvm:~$ scp nutanix@<IP_of_a_CVM_with_correct_FNS-PE_verion>:/usr/local/nutanix/flow/installer/nutanix-flow-*.tar.xz /home/nutanix/ Use TAR to extract the correct version FNS PE 'flow' bundle over the top of the incorrect version on the new CVM nutanix@cvm:~$ tar -xf /home/nutanix/nutanix-flow-*.tar.xz -C /usr/local/nutanix/ Restart the Genesis service on the new CVM nutanix@cvm:~$ genesis restart Use the 'cluster start' command to trigger the 'flow' service to start again, this time using the new version binaries that were replaced in the earlier steps. 
nutanix@cvm:~$ cluster start Wait a few minutes for the Flow service to start. Confirm that the FNS PE Flow version was upgraded nutanix@cvm:~$ allssh cat /home/nutanix/flow/flow_version.txt Confirm that Genesis is no longer crashing on the new node by monitoring that the PIDs for the 'flow' service remain stable nutanix@cvm:~$ genesis status Also monitor the genesis.out log file nutanix@cvm:~$ tail -f /home/nutanix/data/logs/genesis.out Finally, check the cluster status for the affected CVM and confirm that its services are 'UP' nutanix@cvm:~$ cluster status | grep -v UP
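The version comparison implied above (deciding whether a node's flow_version.txt is behind the rest of the cluster) can be sketched with GNU sort's version ordering. This is a minimal helper, assuming sort -V is available, as it is on CVMs:

```shell
# Succeeds (exit 0) when version $1 sorts strictly lower than $2,
# e.g. the 1.0.1 -> 3.1.0 case shown in the genesis log excerpt above.
version_lt() {
  [ "$1" != "$2" ] &&
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.0.1 3.1.0 && echo "flow upgrade needed"
```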
KB15555
NDB - Staging Snapshot replication operation fails with "'NoneType' object is not subscriptable" error
Staging Snapshot replication operation fails with the "'NoneType' object is not subscriptable" error.
Note: Nutanix Database Service (NDB) was formerly known as Era. The Staging Snapshot replication operation fails with the following error message: 'NoneType' object is not subscriptable This issue can be seen if an Update Time Machine SLA operation has previously failed or timed out.
Workaround Get the Time Machine name for which the staging snapshot replication operation fails. The time machine name can be fetched from the alerts page of the NDB UI, where the alert for the failed operation can be seen. Go to the NDB operations page and select the Nutanix cluster for which the staging snapshot replication operation is failing. Select the timeline to be “1 month”, Status as “Completed”, Name as “Replicate Snapshot”, and select the “Show System Operations” checkbox. Enter the time machine name fetched in Step 1 (along with STAGING in the brackets) in the Search by entity/operation ID search bar. There should be no such operation listed on the NDB UI. If there is even a single operation listed, then contact Nutanix Support http://portal.nutanix.com and do not go ahead with the steps listed in this KB. Go to the Time Machines Page on the NDB UI and choose the time machine identified in step 1. Choose the Data Access Management section on the Time Machine page.Click on the Table tab of the Data Access Management section. Select the replication cluster from the list of replication clusters associated with the Time machine for which the replication operation is failing and click the Remove button. Select “Yes” to remove the replication cluster from the Time Machine. NDB will automatically trigger two “Perform Curation” operations to clean up the entities created for the removed replication cluster. Wait for them to be completed. The status of the operations will be visible on the NDB operations page, and the “Show System Operations” checkbox should be selected to view the operations. Once both operations are complete, the replication cluster must be re-added to the time machine. To do this, go to the “Data Access Management” section of the time machine and click on the “Table” tab as done till Step 4 above. After that, use the “Add” button. 
Select the replication cluster removed earlier, choose the SLA you want to associate with the replication cluster and click on the “Add” button. Note that this step should be done only when the system is stable (the NDB should not be overloaded with too many operations) and the operations are completed successfully. This will trigger an Update Time Machine SLA task. You can monitor the task's status from the NDB operations page. Once the operation is successful, the Time Machine snapshots and database logs will begin the replication to the secondary cluster.
KB17055
Determine who deleted a VM on AHV
In some instances an administrator may need to know who deleted a VM as well as other data useful for VM recovery.
Administrators are sometimes tasked with determining the actions of their users. One such instance is an unplanned VM removal.
If you are using Prism Central, you can look up the VM delete task and determine the user who requested it. See the Prism Central Infrastructure Guide for more details: https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2024_1:mul-explore-entities-activity-pc-c.html If you do not have access to Prism Central, you can SSH into a CVM and issue the following command to determine the user who deleted the VM. Replace The_VM_Name with the actual name of the deleted VM. nutanix@cvm:~$ grep -i VmDeleteAudit /home/nutanix/data/logs/consolidated_audit.log|grep '"vm_name":"The_VM_Name"' The following is an example of the output: {"affectedEntityList":[{"entityType":"vm","name":"The_VM_Name","uuid":"531d0b16-2288-4fc2-8d1e-e9569792baec"}],"alertUid":"VmDeleteAudit","classificationList":["UserAction"],"clientIp":"10.22.26.16","creationTimestampUsecs":"1719010077422550","defaultMsg":"Deleted VM The_VM_Name","opEndTimestampUsecs":"1719010077419545","opStartTimestampUsecs":"1719010076172939","operationType":"Delete","originatingClusterUuid":"00061b3e-3e2a-6a54-72f0-ac1f6b3fecf9","params":{"delete_snapshots":"true","vm_name":"The_VM_Name"},"recordType":"Audit","sessionId":"675360ea-64af-4b0c-a10a-1559ffe6a7ef","severity":"Audit","userName":"joe-user","uuid":"47a64d39-fd86-4ac1-b33e-8b8c5859b386"} This output contains everything you need to determine who deleted the VM. If there is hope of recovering the VM, have this information ready when you call Nutanix Support about an accidentally deleted VM.
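The userName field can be pulled out of the audit line with a simple sed filter when no JSON tooling is at hand. A sketch only; the sample line below is abbreviated from the output shown above:

```shell
# Extract the "userName" value from a VmDeleteAudit JSON line.
audit_user() { sed -n 's/.*"userName":"\([^"]*\)".*/\1/p'; }

line='{"alertUid":"VmDeleteAudit","params":{"vm_name":"The_VM_Name"},"userName":"joe-user"}'
printf '%s\n' "$line" | audit_user   # prints: joe-user
```

In practice you would pipe the grep command shown above into audit_user instead of the sample variable.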
KB10124
Creating a migration plan in Move fails with "Not able to find File on VM."
Creating a migration plan fails with "Not able to find File on VM." Upgrade Move to the latest version.
A migration plan fails during an attempt to migrate a VM, with the following error message in the Move user interface: "Not able to find File on VM <vmname>. Got error 'Got error while listing files at path [C:\Users\admin\AppData\Local\Temp\vmware]. Error is [ServerFaultCode: File <unspecified filename> was not found]'" Open an SSH connection to the Move VM. Inspect /var/log/xtract-vm/srcagent.log for the validation task events: I1008 10:05:19.074355 7 invcontroller.go:479] [VC: 10.10.10.200] VMs processed count=15 After some time, it fails: I1008 10:33:23.720378 7 uvmcontroller.go:142] [VM:testvm|MOID:vm-6338] Resetting context after failed operation retry 2 with retryCtxTimeout: 3m20s In the above error message, the IP address Move is trying to reach (192.168.10.2) is different from the one configured by the user (10.10.10.2). At the same time, multiple vmkernel interfaces are present on the ESXi host, and two of them have the "Management Traffic" option enabled.
To resolve the issue, upgrade Move to version 3.7.1 or later.
KB14014
Inactive account. Please contact [email protected]
Review this article if you are unable to access the Nutanix support portal and are observing the error "Inactive account. Please contact [email protected]".
If you have encountered the error "Inactive account. Please contact [email protected]" and are not able to access the support portal, send an email to the specified address. The email should include the login name that was used. A support team member will then be able to assist. Example:
Email [email protected] for further troubleshooting as specified in the error message, and a support team member will assist. The email should include the login name that was used and, if possible, a screenshot of the issue including the URL.
KB15265
Linux VM static ip resetting to dhcp upon VM reboot
The network configuration file may change after rebooting a Linux VM moved from AWS to AHV: all configuration vanishes and "BOOTPROTO" changes to "dhcp" mode.
After moving VMs from the cloud (AWS) to Nutanix (AHV), Windows VMs may move without any issue and their network configuration works fine, but Linux VMs lose their network configuration when the VM reboots: all configuration vanishes and "BOOTPROTO" changes to "dhcp" mode. **ETH configuration before reboot** [root@linux~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 **ETH configuration after reboot** [root@linux~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
In this case, the cloud-init service is responsible for wiping the network configuration during boot. The ifcfg-eth0 file also states the following: # Created by cloud-init on instance boot automatically, do not edit. As a workaround, disable cloud-init to ensure that it does not do anything on subsequent boots. To disable cloud-init, create the empty file /etc/cloud/cloud-init.disabled. During boot, the operating system's init system will check for the existence of this file. If it exists, cloud-init will not be started. $ touch /etc/cloud/cloud-init.disabled Assign the IP address after disabling cloud-init, then reboot the VM to confirm the configuration persists.
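The workaround above can be scripted. A minimal sketch, parameterized with a root prefix so it can be exercised outside a real VM; on the VM itself you would call it with no argument (and with root privileges):

```shell
# Create the cloud-init kill switch: <root>/etc/cloud/cloud-init.disabled.
# When this marker file exists, the init system skips starting cloud-init.
disable_cloud_init() {
  root="${1:-}"
  mkdir -p "${root}/etc/cloud"
  touch "${root}/etc/cloud/cloud-init.disabled"
}
```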
KB6313
Move for VMs migration is stuck on Err="ServerFaultCode: A specified parameter was not correct: changeId"
A Move for VMs migration fails to delete the snapshot during Data Syncing.
When creating a migration plan for VMs in Move, the Data Seeding part proceeds successfully, but the Data Syncing part fails with: 2018-10-03T09:47:18.706485+00:00 E esxprovider.go:1372] [Err="ServerFaultCode: A specified parameter was not correct: changeId", Location="/home/hudsonb/workspace/workspace/hermes-production-build/go/src/srcagent/esxprovider.go:1463", Snapshot="XTRACT-VMSnap-1", VM="KLHKHQSRV751"] Failed to query snapshot. (error=0x3008) This usually happens when the last snapshot is created before the VM is powered off and CBT on the VM fails to delete the snapshot, which means this is not a Move issue. The migration does not fail, but the srcagent in the backend will keep trying to delete the snapshot using CBT. You can keep checking the xtract-vms-srcagent.log for this error.
Check if CBT is enabled on the VM and on all the controllers of the VM using KB 4820 https://portal.nutanix.com/kb/4820 Check if there are any backups scheduled for that VM, as there might be a CBT error if a backup is in progress. Check that the host resources are sufficient: if host memory utilization exceeds 80%, you will face this issue. To confirm, SSH into the ESXi host the VM resides on and navigate to /vmfs/volumes/<datastore where the VM resides>/<VM folder> to check vmware.log. If you see the log: 2018-10-10T13:51:10.550Z| vcpu-0| I120: DISKLIB-CBT : Creating cbt node 3c10ab8f-cbt failed with error Cannot allocate memory (0xbad0014, Out of memory). migrate the VM to another ESXi host, discard the migration, and re-create a new plan; you will then be able to migrate the VM. Alternatively, power off VMs on the host that are not needed, then discard the old plan and re-create a new migration plan. You will be able to migrate the VM successfully this time. Refer to VMware KB 2114076 https://kb.vmware.com/s/article/2114076 for the memory usage threshold.
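The 80% memory threshold mentioned in step 3 is simple arithmetic. The helper below is a sketch with used/total values passed in for illustration; on a real ESXi host the numbers would come from esxtop or similar tooling:

```shell
# Succeeds (exit 0) when used/total memory exceeds the threshold (% , default 80).
mem_over_threshold() {
  used_mb=$1; total_mb=$2; threshold=${3:-80}
  pct=$(( used_mb * 100 / total_mb ))
  echo "${pct}% used"
  [ "$pct" -gt "$threshold" ]
}

mem_over_threshold 90000 100000   # prints: 90% used
```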
KB4335
Role-mapping for AD groups does not work under PRISM central
Users who are members of the configured AD groups are unable to log in to Prism Central.
A customer configured role mapping for their AD groups in Prism Central; however, users cannot log in to Prism Central with any account that is a member of the configured groups. The Prism login page responds with an error message that the user does not have enough permissions to log in, even though the group role mapping is assigned the Cluster Admin role, and the user can log in without any issues when mapped to a role individually rather than through the AD group.
The above issue can be caused by any of the following situations: When the AD group is a member of multiple domains and the customer has defined the LDAP address of their AD using port 389 instead of 3269. For more info on this issue, reference KB-2066 - Unable to Log In to the Prism web console using Group LDAP authentication https://portal.nutanix.com/kb/2066. Prism matches the AD group name using case-sensitive checks. So, if the group name defined under the role mapping in Prism uses a different letter case than the group name defined in AD, Prism will fail to perform the name mapping for the group. Note: Also ensure that the user adds the @domain_name to the username when logging in to Prism Central.
KB8564
NCC Health Check: robo_readonly_state_check
The NCC health check robo_readonly_state_check examines the ROBO cluster to determine if it is in a Read Only state. 
The NCC health check robo_readonly_state_check examines the ROBO cluster to determine if it is in a Read Only state. Prior to running this check, upgrade NCC to the latest version.Running the NCC CheckYou can run this check as part of the complete NCC Health Checks. nutanix@cvm$ ncc health_checks run_all Or you can run this check separately. nutanix@cvm$ ncc health_checks robo_checks robo_readonly_state_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.This check is scheduled to run every 300 seconds. For Status: PASS Running /health_checks/robo_checks/robo_readonly_state_check on all nodes [ PASS ] For Status: FAIL Running /health_checks/robo_checks/robo_readonly_state_check on all nodes [ FAIL ] For Status: ERROR Running /health_checks/robo_checks/robo_readonly_state_check on all nodes [ ERROR ] Output messaging This health check is introduced in NCC-4.0.0 [ { "Check ID": "Check if the robo cluster current state is Read Only" }, { "Check ID": "The cluster is running in single node mode and has only one functional SSD" }, { "Check ID": "Make sure there are at least 2 working SSDs, and let the cluster transition out of read-only mode" }, { "Check ID": "Data resiliency is degraded, and user VM writes are blocked" } ]
If the check reports a failure, follow the steps below to troubleshoot: Ensure the reported CVM is UP and reachable over the network. Check there are no hardware issues reported by any other NCC check. Check the health status of SSD drives on the cluster. In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com. Additionally, gather the following command output and attach it to the support case: nutanix@cvm$ ncc health_checks run_all
KB3715
Nutanix Files - Creating / Updating the FSVM VM-to-VM Affinity Rules on ESXi
An alert is raised if VM-to-VM affinity rules are not created for File Server on ESXi. To set VM-to-VM affinity rules on ESXi, you should enable Distributed Resource Scheduler (DRS) on the cluster. If the DRS rule is not set when you are creating or updating File Server, you can update the rules by following the instructions described in this article.
An alert is raised if VM-to-VM affinity rules are not created for File Server on ESXi. An error message like the following is displayed if the VM-to-VM affinity rules are not created while creating File Server on ESXi. task.get 3a3e6a34-abcd-486a-8179-d681183bf1a5 An alert similar to the following is raised if the VM-to-VM affinity rules are not created. Failed to create VM-to-VM anti-affinity rule for rule_name. rule_name is replaced with the name of the affinity rule.
To check if the cluster services are up and running, run the commands below: Run cluster status and check if all the CVMs, FSVMs, and services are UP: cluster status | grep -v UP Run file_server_status_check on any CVM to determine the issues that would be flagged by NCC. nutanix@cvm$ ncc health_checks fileserver_checks fileserver_cvm_checks file_server_status_check If any of the cluster services are down, consider engaging Nutanix Support https://portal.nutanix.com/. To set VM-to-VM affinity rules on ESXi, you must enable Distributed Resource Scheduler (DRS) on the cluster. If the DRS rule is not set when you are creating or updating the File Server, you can update the rules by using the following procedure: Log on to a Controller VM with SSH. Identify the UUID of the File Server and note down the File Server name: ncli> file-server list Run the following command on any CVM to create a DRS rule if it does not exist for the cluster: afs infra.create_anti_affinity_rule <fs_name> Replace fs_name with the name of the File Server obtained in step 2. Update the DRS rule if it does not have the correct File Server VMs (FSVMs). afs infra.update_anti_affinity_rule <fs_name> Replace fs_name with the name of the File Server obtained in step 2.
KB16734
Creating SQL Quorum on Nutanix File share
Failure observed when adding Nutanix File Share as Quorum disk in SQL Server Cluster due to incorrect share name or permissions.
Adding a Nutanix File Share as Quorum disk in SQL Server Cluster will fail due to permission issues on the share.
Troubleshooting steps: 1. Confirm that the selected share has DFS disabled. In the following example, the share name is Quorum. Refer to KB 11210 https://portal.nutanix.com/kb/11210 for more information about DFS and Witness Shares. <afs> share.dfs Quorum status 2. In SQL Server, the Configure Cluster Quorum Wizard indicates an error stating 'Method failed with unexpected error code 67'. Error code 67 translates to "network name cannot be found". 3. In the Wireshark trace, Nutanix Files responds with STATUS_BAD_NETWORK_NAME whenever the client tries to connect to the Nutanix share. 4. In the SMB client logs, we see that negotiation succeeds, and later on, we can see NT_STATUS_BAD_NETWORK_NAME. Confirm the correct File Server name is used while configuring the SQL Quorum disk as a Nutanix file share. 2024-03-18 11:52:06.430395Z 2, 215427, smb2_negprot.c:607 smbd_smb2_request_process_negprot 5. Grant the SQLCLUSTER$ computer account full permissions on the Nutanix file share.
KB15392
PC 1-click DR - pc_backup_sync_check fails due to incorrect Prism Central VM uuid
PC 1-click DR backups may fail when the Prism Central VM was recovered on the source cluster previously.
Customers may receive an NCC alert reporting that PC DR backups are failing for the Prism Central VM, as shown below: Detailed information for pc_backup_sync_check: Accessing the Prism Central Settings -> Prism Central Management page does not show the PC DR status; no details about PC DR are displayed. Checking the API call shows a 404 for the URL https://<pe_ip>:9440/api/nutanix/v3/prism_central, as shown below. Checking the /home/nutanix/adonis/logs/prism-service.log file, you will notice a "No Vm with the given uuids [<uuid>] not found" error: 2023-08-25 19:49:31,614Z ERROR [scheduling-1] [ade326d88dc90b1c,ade326d88dc90b1c] BackupScheduler:highFrequencyBackupSchedule:168 Aborting backing up data. Unable to add backup, the following error encountered: [No Vm with the given uuids [3527f294-b1a0-41d0-a3b2-912ae417fbef] found. Aborting.] Checking the aplos.out log file, you will see the below stack trace for the /api/nutanix/v3/prism_central call: 23-08-26 01:16:13,258Z ERROR prism_central_util.py:399 Could not find VM details with uuid : 3527f294-b1a0-41d0-a3b2-912ae417fbef Checking the UUID of the Prism Central VM in the Prism Element cluster shows a different UUID (the below command should be executed on PE; replace <pc_vm_name> with the name the customer gave to the PC VM): nutanix@NTNX-CVM:~$ acli vm.list | grep <pc_vm_name> For ESXi environments: nutanix@NTNX-CVM:~$ acli uhura.vm.list | grep <pc_vm_name> Checking the zeus_config_printer output on the PC, the uhura_uvm_uuid shows a different UUID: nutanix@NTNX-PCVM:~$ zeus_config_printer | grep uhura As the commands above show, the PC VM UUID on the Prism Element cluster is 9ebc55ef-9840-4bc6-aef6-add6eedc7c2a, but in the prism-service.log and aplos.out files the UUID is 3527f294-b1a0-41d0-a3b2-912ae417fbef. In this case, the customer had recovered the Prism Central VM on the source cluster a few weeks earlier as part of troubleshooting another problem.
Because the Prism Central VM was restored and overwritten, it received a new UUID in the Prism Element cluster, but the UUID in the Prism Central Zeus configuration still pointed to the old PC VM UUID, causing both the backups and the API calls to fail.
Pre-requisites: All the symptoms in the description section match the issue. The customer recovered the Prism Central VM on the source cluster, after which the issue started happening. NOTE: The customer should have restored the PC VM on the source PE cluster, not the destination PE cluster. Resolution: To resolve the issue, follow the steps below. Step 1: The following steps should only be executed under the guidance of a Staff SRE or Support Tech Lead. Update the uhura_uvm_uuid with the UUID found for the Prism Central VM in the Prism Element cluster. To find the UUID of the Prism Central VM, run the below command on the PE where the PC is hosted, replacing <pc_vm_name> with the name the customer gave to the PC VM: nutanix@NTNX-CVM:~$ acli vm.list | grep <pc_vm_name> For ESXi environments: nutanix@NTNX-CVM:~$ acli uhura.vm.list | grep <pc_vm_name> Step 2: Update the uhura_uvm_uuid with the UUID from Step 1 in the Prism Central VM Zeus database using the edit-zeus command. Step 3: Restart the adonis service: nutanix@NTNX-PCVM:~$ genesis stop adonis;cluster start Confirm the backups work now. The Prism Central Settings -> Prism Central Management page should now show the backup status.
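Before editing Zeus, it helps to confirm the mismatch mechanically. The sketch below uses the sample UUID strings from this article (not live cluster output); on a real cluster, the two input lines would come from "acli vm.list | grep <pc_vm_name>" on PE and "zeus_config_printer | grep uhura" on the PC.

```shell
# Sketch with sample data: extract and compare the PC VM UUID known to
# Prism Element against the uhura_uvm_uuid stored in the PC Zeus config.
pe_line='PC-VM  9ebc55ef-9840-4bc6-aef6-add6eedc7c2a'
zeus_line='uhura_uvm_uuid: "3527f294-b1a0-41d0-a3b2-912ae417fbef"'

uuid_re='[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}'
pe_uuid=$(printf '%s\n' "$pe_line" | grep -oE "$uuid_re")
zeus_uuid=$(printf '%s\n' "$zeus_line" | grep -oE "$uuid_re")

if [ "$pe_uuid" = "$zeus_uuid" ]; then
  echo "UUIDs match: Zeus references the current PC VM"
else
  echo "UUID mismatch: PE=$pe_uuid PC-Zeus=$zeus_uuid (symptoms in this article apply)"
fi
```

If the two UUIDs match, this article does not apply and another cause should be investigated.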
KB7357
How to create long living tokens to integrate CI/CD pipeline in Nutanix Kubernetes Engine
How to create long living tokens to integrate CI/CD pipeline.
Nutanix Kubernetes Engine is formerly known as Karbon or Karbon Platform Services. The default Nutanix Kubernetes Engine token is valid for only 24 hours, which makes it difficult to integrate external components, such as a CI/CD pipeline, with a Kubernetes cluster deployed by Nutanix Kubernetes Engine.
Nutanix recommends using the Nutanix Kubernetes Engine API to regenerate the kubeconfig before the 24-hour expiry and integrating that process into your CI/CD workflow. See Nutanix Dev https://www.nutanix.dev/reference/karbon/Intro/ for reference. Alternatively, you can use a Nutanix Kubernetes Engine service account. This is not recommended as it requires an experienced administrator. In the procedure below, we are going to create a service account for Jenkins integration. For the sake of simplicity, admin privilege is assigned via ClusterRole. More restricted access can be assigned using RBAC. Create a service account: $ kubectl create serviceaccount jenkins Create a role binding based on the permission needed by the application: $ cat <<EOF | kubectl create -f - Extract the service account token: $ kubectl get secrets $(kubectl get serviceaccounts jenkins -o jsonpath='{.secrets[].name}') -o jsonpath='{.data.token}' | base64 -d Clusters running Kubernetes > 1.24: Starting with Kubernetes 1.24, Kubernetes no longer generates Secrets automatically for ServiceAccounts. This means the command from step 3 will return an empty output on k8s > 1.24. If the cluster is running Kubernetes 1.24 or above, use the alternative step 3a: 3a. Create a service account token secret or create the token manually for the ServiceAccount. Option 1: Create a secret of type service-account-token: $ cat <<EOF | kubectl create -f - Then extract the token: $ kubectl get secrets jenkins -o jsonpath='{.data.token}' | base64 -d Option 2: Create a token manually for the service account: $ kubectl create token jenkins --duration=999999h Download a new kubeconfig file from Karbon. Update the token in the kubeconfig file with the token generated in the steps above. Use the modified kubeconfig in the Jenkins pipeline to integrate the k8s cluster.
Important Notes: Nutanix recommends using kubeconfig from Rest API or UI for User logins.If you have to use Service accounts, limit or restrict the use of service integration like CI/CD pipeline. Avoid using service accounts for general-purpose tasks.Distributing long-lived tokens to users may introduce secret sprawl, thus, administrators must ensure that the tokens are not used for unintended purposes.
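Before wiring the extracted token into a kubeconfig, it can be sanity-checked by decoding the JWT payload to confirm which service account it belongs to. The sketch below constructs a hypothetical token locally so it is self-contained; in practice, substitute the real token extracted in step 3.

```shell
# Sketch: decode the payload of a (locally constructed, hypothetical) JWT
# service-account token to confirm the account name before use in CI/CD.
payload='{"iss":"kubernetes/serviceaccount","sub":"system:serviceaccount:default:jenkins"}'
token="eyJhbGciOiJSUzI1NiJ9.$(printf '%s' "$payload" | base64 | tr -d '\n=').sig"

# A JWT is header.payload.signature; take the middle segment.
seg=$(printf '%s' "$token" | cut -d. -f2)
# Restore the base64 padding that JWT encoding strips.
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
decoded=$(printf '%s' "$seg" | base64 -d)
echo "$decoded"
```

The "sub" claim in the decoded payload should name the service account created earlier (here, jenkins).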
KB14520
VM management tasks fail with error 'Command failed: /sbin/ip tuntap add dev tap0 mode tap multi_queue: Object "tuntap" is unknown, try "ip help"'
VM management tasks fail when AHV is not compatible with AOS.
AOS 6.5.2 and above are not compatible with AHV-20170830.x. Customers using an older AHV version while upgrading AOS through the Prism 1-click software upgrade process can run into VM manageability issues due to the incompatibility. VM management tasks such as VM power-on and host maintenance mode fail with the following error: Error received: Operation failed: NetworkError: OVS error: host-ip:x.x.x.x connected:True fn:create_local_port error:Command failed: /sbin/ip tuntap add dev tap0 mode tap multi_queue: Object "tuntap" is unknown, try "ip help". The AOS version of the cluster is 6.5.2 or above: nutanix@cvm$ cat /etc/nutanix/release_version The AHV version on the hosts is 20170830.x, which can be verified with the command below: nutanix@cvm$ hostssh uname -a
This issue is resolved in: AOS 6.5.x family (LTS): 6.5.3 AOS 6.5.3 introduces a capability to provide backward compatibility on a certain AHV VM network management workflow to avoid the reported VM management error in this article. However, AHV must still be upgraded to meet AOS compatibility per the Compatibility and Interop Matrix https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix.
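The AHV build check from the description above can be sketched offline. The uname string and build number below are sample values (not from a real host); on a cluster, the input would come from "hostssh uname -a".

```shell
# Sketch (sample uname output, hypothetical build number): extract the AHV
# build string and flag the 20170830.x family, which AOS 6.5.2+ rejects.
line='Linux NTNX-AHV 4.4.77-1.el6.nutanix.20170830.184.x86_64 #1 SMP x86_64 GNU/Linux'
build=$(printf '%s\n' "$line" | grep -oE '20[0-9]{6}\.[0-9]+' | head -n1)

case "$build" in
  20170830.*) echo "AHV build $build is NOT compatible with AOS 6.5.2+; upgrade AHV" ;;
  *)          echo "AHV build $build: verify against the Compatibility and Interop Matrix" ;;
esac
```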
KB2997
NCC Health Check: ldap_config_check
The NCC health check ldap_config_check examines the Cluster LDAP Configuration.
The NCC health check ldap_config_check examines the Cluster LDAP Configuration available on the Controller VM (CVM) to ensure that the cluster connects properly to the LDAP server. Running the NCC check: Run this check as part of the complete NCC health checks: nutanix@cvm$ ncc health_checks run_all Or run this check separately: nutanix@cvm$ ncc health_checks system_checks ldap_config_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is not scheduled to run on an interval. This check does not generate an alert. Sample Output For status: PASS nutanix@cvm$ ncc health_checks system_checks ldap_config_check For status: FAIL nutanix@cvm$ ncc health_checks system_checks ldap_config_check For status: WARN (Available in >= NCC 4.1.0) Microsoft enables LDAP signing and binding by default. Non-secure LDAP Communication to Prism will break after this update is installed on the customer's AD domain controller. To catch non-secured LDAP, the check provides the below message: nutanix@cvm$ ncc health_checks system_checks ldap_config_check nutanix@cvm$ ncc health_checks system_checks ldap_config_check For status: INFO nutanix@cvm$ ncc health_checks system_checks ldap_config_check Output messaging: Description: Check LDAP Configuration. Causes of failure: LDAP is not correctly configured in the Cluster. Resolutions: Review KB 2997. Impact: Directory users may be unable to log in properly.
If the test result is a FAIL status or WARN status, the cluster function is not impacted, and some users may not be able to log in to the Prism Web Console. If the check reports a FAIL status, do the following: Verify that the DNS can resolve the LDAP address: nutanix@cvm$ ncli authconfig ls Verify that the configured LDAP Port is open. LDAP Ports Port 389 (LDAP). Use this port number (in the form of the following URL) when the configuration is a single domain, single forest, and does not use SSL. ldap://ad_server.mycompany.com:389 Port 636 (LDAPs). Use this port number (in the form of the following URL) when the configuration is a single domain, single forest, and uses SSL. Ensure that all Active Directory Domain Controllers have installed SSL certificates. ldaps://ad_server.mycompany.com:636 Port 3268 (LDAP - Global Catalog). Use this port number (in the form of the following URL) when the configuration is multiple domains, single forest, and does not use SSL. ldap://ad_server.mycompany.com:3268 Port 3269 (LDAPs - Global Catalog). Use this port number (in the form of the following URL) when the configuration is multiple domains, single forest, and uses SSL. ldaps://ad_server.mycompany.com:3269 Verifying if the ports are open: From any machine in your environment, use the nmap (or a similar) utility. [root@localhost]# nmap -p 389,636,3268,3269 x.x.x.x If the check reports a WARN status, do the following: Verify that the Role Mappings are configured. If the Role Mappings are not configured, everyone in the directory service can log in. Follow ncli authconfig manual https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v6_7:acl-ncli-authconfig-auto-r.html.Refer to KB-9031 http://portal.nutanix.com/kb/9031 to resolve Microsoft LDAP Channel Binding and Signing issues. If the check reports an INFO status, do the following: Verify that the LDAP is configured. 
Refer to Security Guide: Configuring Authentication https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v6_7:wc-security-authentication-wc-t.html.Verify that you can ping the LDAP IP. If you can't ping it, the Firewall in your environment might be blocking ICMP packets.Verify that the LDAP is set through a Hostname and not an IP Address because an IP Address is a single point of failure.If LDAP is configured with IP and the Server is changed, then the configuration on Prism needs to be updated.If LDAP is configured with FQDN, then changing the IP of the LDAP Server should not matter and does not require updates to Prism Configuration. (since the DNS server will resolve the new IP for the LDAP server) If the above steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com.
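If nmap is not available, the same port-reachability check can be approximated with bash's built-in /dev/tcp redirection. This is a sketch: ldap.example.com is a placeholder for the directory server, and "closed or filtered" covers both refused connections and firewalled ports.

```shell
# Sketch: probe the standard LDAP/LDAPS ports without nmap, using /dev/tcp.
check_port() {
  # $1 = host, $2 = port; /dev/tcp is a bash feature, hence the bash -c.
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed or filtered"
  fi
}

for p in 389 636 3268 3269; do
  check_port ldap.example.com "$p"
done
```

A "closed or filtered" result for the port your configuration uses points at a firewall or service problem rather than a Prism misconfiguration.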
KB7587
No foundation service item “foundation: []” in the output of “genesis status” after rescue CVM.
No foundation service item “foundation: []” in the output of “genesis status” after rescue CVM.
No foundation service item “foundation: []” in the output of “genesis status” after rescuing the CVM. Other services are normal. The output of "genesis status | grep foundation" is empty. Sample output of "genesis status": 2019-06-06 15:33:31.728211: Services running on this node:
Upgrade Foundation to the latest version. Workaround: Confirm the Foundation status: nutanix@cvm$ ~/foundation/bin/foundation_service status If the above command is successful, it indicates that this component is normal. If so, kill the foundation process. Confirm the Foundation version: nutanix@cvm$ cat ~/foundation/foundation_version If foundation is not seen in "genesis status", then ~/data/locks/foundation should not exist: nutanix@cvm$ ls -rtl ~/data/locks/foundation Create the file ~/data/locks/foundation and give it the same permissions as other files in the ~/data/locks directory; after that, it should appear in "genesis status": nutanix@cvm$ touch ~/data/locks/foundation
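The lock-file part of the workaround can be sketched safely in a temporary directory (a stand-in for /home/nutanix/data/locks): create the missing "foundation" lock file and copy the mode from an existing sibling lock, which is what the touch/chmod steps above accomplish.

```shell
# Sketch: recreate a missing lock file with the same mode as its siblings,
# using a temp dir instead of the real ~/data/locks directory.
locks=$(mktemp -d)                       # stand-in for ~/data/locks
touch "$locks/genesis"                   # stand-in for an existing lock file
chmod 644 "$locks/genesis"

if [ ! -e "$locks/foundation" ]; then
  touch "$locks/foundation"
  # Copy the mode from the existing lock (GNU chmod --reference).
  chmod --reference="$locks/genesis" "$locks/foundation"
fi

mode=$(stat -c %a "$locks/foundation")
echo "foundation lock created with mode $mode"
rm -rf "$locks"
```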
KB15725
vGPU VM fails to start with "Failed to get VM UUID from QEMU command-line" error
VMs with vGPU may fail to start on the latest hardware platforms like G9 or HX665 V3.
VMs with vGPU may fail to start on clusters running AOS 6.5.4, 6.5.4.5, 6.7.0.5, 6.7.0.6 with bundled AHV releases and using the latest hardware platforms like G9 or HX665 V3.When the VM is being started, the NVIDIA vGPU manager parses the QEMU command line and looks for the "-uuid" field. In some cases, this field is located after 1024 symbols, which is not supported by the NVIDIA vGPU manager and results in VM power-on failure. The main reason why the command line becomes longer than 1024 symbols is the increased length of "-cpu" argument string. The following error can be found in the /var/log/libvirt/qemu/<VM UUID>.log log file: 2023-10-24T02:08:44.605714Z qemu-kvm: warning: This feature depends on other features that were not requested: CPUID.8000000AH:EDX.pfthreshold [bit 12] VM UUID can be found in the output of the acli vm.list | grep <VM Name> command.The following errors can be found in the /var/log/messages: root@AHV# grep 'nvidia-vgpu-mgr\|vmiop' /var/log/messages | grep -v audisp Check the CPU flag length on the host to confirm the issue: root@AHV# lscpu | grep Flags
This issue is resolved in: AOS 6.5.X family (LTS): AHV 20220304.478, which is bundled with AOS 6.5.5AOS 6.7.X family (STS): AHV 20230302.1011, which is bundled with AOS 6.7.1 Please upgrade both AOS and AHV to versions specified above or newer.
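The CPU flags check from the description above can be run as a quick length measurement. This is a sketch reading /proc/cpuinfo directly (equivalent to the "lscpu | grep Flags" check); the 1024-character threshold is the limit the vGPU manager parses, and a very long flags string is what inflates the QEMU "-cpu" argument.

```shell
# Sketch: measure the CPU flags string length, which drives the length of
# the QEMU "-cpu" argument that can push "-uuid" past 1024 characters.
flags=$(grep -m1 -E '^(flags|Features)' /proc/cpuinfo | cut -d: -f2-)
len=${#flags}
echo "CPU flags length: ${len} characters"

if [ "$len" -gt 1024 ]; then
  echo "Flags alone exceed 1024 characters: vGPU VMs may hit this power-on failure"
fi
```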
KB6957
Backup job fails with an SQL VSS Writer error after installing ACT! Software
The customer has issues with app-consistent snapshots taken by Veeam software, while normal snapshots work well. SQL Server is installed on the VM, and its VSS writers are in a failed state (vssadmin list writers).
If you have ACT! software installed, a Veeam Agent for Microsoft Windows job or a Veeam Backup & Replication job with Application-aware processing fails with the following: Failed to create snapshot. Error code -2147212300. 'Backup job failed. At the same time, the following errors appear in the Windows Application log: Log Name: Application Event logs also show this error during the snapshot: Create Snapshot for Transaction ID: [1168:1548964873543247] failed with error: [Create Snapshot operation failed as Call to function: [CreateSnapshot] in the dll: [C:\Program Files\Nutanix\VSS\NutanixVSSRequestor.dll] failed with error: [Windows Error 0x80042302]]
Perform the following steps in order to fix the issue: Open SQL Server Configuration Manager.Navigate to SQL Server Services.Select an appropriate instance (e.g., SQL Server(ACT7)).Change Log On from Built-in account: Local System to Built-in account: Local Service
KB9892
NCC check marvell_boss_card_status_check fails on ESXi 7.0 on Dell platforms
This article describes an issue where NCC check marvell_boss_card_status_check fails on ESXi 7.0 on Dell platforms.
NCC check marvell_boss_card_status_check may report below ERR status on ESXi 7.0 based Nutanix clusters on Dell platforms (both PowerEdge and XC). Sample NCC output: Running : health_checks hardware_checks disk_checks marvell_boss_card_status_check Running the following curl command on the CVM (Controller VM) results in HTTP error 403: nutanix@cvm$ curl -k https://192.168.5.1:8086/api/PT/v1/host/adapters 2020-08-14 23:44:36 INFO esx_utils.py:155 [10gbe_check] Attempting to connect to host with IP x.x.x.x
To resolve the NCC check failure, perform the following steps: For Dell XC: Perform LCM inventory https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Guide:top-lcm-inventory-t.html and verify that the LCM version is 2.3.4.Rerun NCC Health Checks and verify it completes successfully. For Dell PowerEdge: Log in to any CVM and run the following curl command. This will set the rest_auth_enabled value to “False” on all the nodes in the cluster. nutanix@cvm$ hostssh /opt/dell/DellPTAgent/tools/pta_cfg set rest_auth_enabled=False rest_ip=https://192.168.5.1:8086 [192.168.5.1] idrac_pass_change_interval_days=60 Sample output: nutanix@cvm$ hostssh /opt/dell/DellPTAgent/tools/pta_cfg set rest_auth_enabled=False rest_ip=https://192.168.5.1:8086 [192.168.5.1] idrac_pass_change_interval_days=60 To verify the value is set correctly, run the following command: nutanix@cvm$ hostssh /opt/dell/DellPTAgent/tools/pta_cfg get rest_auth_enabled rest_ip idrac_pass_change_interval_days Sample output: nutanix@cvm$ hostssh /opt/dell/DellPTAgent/tools/pta_cfg get rest_auth_enabled rest_ip idrac_pass_change_interval_days Stop PTAgent on every host in the cluster: nutanix@cvm$ hostssh /etc/init.d/DellPTAgent stop For newer versions of DellPTAgent: nutanix@cvm$ hostssh systemctl stop DellPTAgent Start PTAgent on every host in the cluster: nutanix@cvm$ hostssh /etc/init.d/DellPTAgent start For newer versions of DellPTAgent: nutanix@cvm$ hostssh systemctl start DellPTAgent Verify that both iSM and PTAgent are running: nutanix@cvm$ hostssh /etc/init.d/DellPTAgent status For newer versions of DellPTAgent: nutanix@cvm$ hostssh systemctl status DellPTAgent Sample outputs: nutanix@cvm$ hostssh /etc/init.d/DellPTAgent status nutanix@cvm$ hostssh /etc/init.d/dcism-netmon-watchdog status Re-run NCC Health Checks and verify it completes successfully.
KB14698
Space Accounting | Prism shows different space usage for a VM than the Guest OS shows
This KB explains why the space usage reported in a VM might be different from what is shown in Prism for this VM.
For other Space Accounting issues not covered in this article, please take a look at the Space Accounting | General Troubleshooting article https://portal.nutanix.com/kb/14475.Based on the configuration of your guest OS and hypervisor and the workload within the guest OS, you may encounter a scenario where the disk space usage reported within the guest OS differs from what is reported in Prism Element. This article explains why this can happen and what can be done to address the discrepancy.As an example, let's create a test VM with a single disk and look at the space usage reported in Prism Element:Within the UVM, we see the following space usage: [root@localhost ~]# df -h As we can see, the combined space usage on this VM's disk is relatively close to what Prism Element reports.Now, in the VM, let's add two files of 10 GiB each. > dd if=/dev/urandom of=test1 bs=10M count=1024 Prism Element shows the expected increase:As well as the VM: [root@localhost ~]# df -h / Let's remove a 10 GiB file within the VM and check the space usage in Prism Element.As we can see, the Logical Usage has remained the same.But according to the guest VM, space usage has gone down by 10 GiB: [root@localhost ~]# df -h / This is caused by the fact that AOS doesn't know when a guest VM deletes data from a Virtual Disk. The guest will update its filesystem and show the deleted space as available but from an AOS perspective, the data has not been freed, so we still count it as Logical Usage.We can further illustrate how this behavior increases the space reporting discrepancy by deleting the remaining 10 GiB file on the guest VM and creating additional test files with random data. 
Within the guest, we're now showing 6.8 GiB in use which is a 5 GiB increase from the point we started: [root@localhost ~]# df -h / But the Logical Usage reported in Prism Element has further increased:Over time and with multiple VMs this can result in a significant divergence between what's reported by the VMs versus what's reported by the cluster.
For the guest VMs to properly notify AOS of deleted blocks, we need to leverage SCSI UNMAP functionality. By issuing the SCSI UNMAP command, a host application or OS specifies that a given range of storage is no longer in use and can be reclaimed. When the guests send UNMAP commands to the Nutanix storage layer, Prism accurately reflects the amount of available storage as AOS Storage background scans complete. If the guest doesn’t send UNMAP commands, freed space isn’t made available for other guests and doesn’t display as available in Prism.SCSI UNMAP fully supported:AHV:There's no additional configuration needed for AHV as AHV VM vDisks and VGs support the SCSI UNMAP command. Best practices https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2029-AHV:scsi-unmap.html for Linux and Windows Operating Systems are documented in the AHV solutions documentation.Nutanix Volumes:Any client connecting to Nutanix Volumes can leverage the SCSI UNMAP functionality. Best practices https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2049-Nutanix-Volumes:scsi-unmap.html for Linux and Windows Operating Systems are documented in the Nutanix volumes documentation.Hyper-V:SMB storage supports the functionality.SCSI UNMAP not supported:ESXi:NFS datastores do not support the SCSI UNMAP command. This limitation is specific to the NFS protocol and impacts all VMs running on NFS storage, regardless of the platform. Guest VMs need additional configuration steps to reclaim space on the Nutanix storage layer properly. One way to achieve this from within the VM is to use a program that fills its filesystem(s) with zeros after deleting files and then deletes the zeros again. Since the Nutanix storage layer doesn't write zeros and only references them in metadata, it will consider the blocks free and update the Logical Usage accordingly.
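On a Linux guest, whether UNMAP can actually reach the storage layer can be checked from sysfs before running fstrim: a non-zero discard_granularity means the virtual disk accepts discards. This is a sketch; output depends on the devices visible to the guest.

```shell
# Sketch: report which block devices advertise discard (UNMAP) support.
# Where supported, running fstrim in the guest lets AOS reclaim freed space.
check_discard() {
  found=0
  for f in /sys/block/*/queue/discard_granularity; do
    [ -e "$f" ] || continue
    found=1
    dev=$(printf '%s\n' "$f" | cut -d/ -f4)
    gran=$(cat "$f")
    if [ "$gran" -gt 0 ]; then
      echo "$dev: discard supported (granularity ${gran} bytes); fstrim can reclaim space"
    else
      echo "$dev: discard not supported"
    fi
  done
  [ "$found" -eq 1 ] || echo "no block devices visible in /sys/block"
}
check_discard
```

For devices that report support, a periodic "fstrim" (or mounting with the discard option) keeps the Prism Logical Usage in line with the guest's view.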
KB14749
Enabling MSP fails after upgrading PC above 2022.9 using dark site LCM
This article covers a scenario where enabling MSP fails after upgrading PC to 2022.9 or higher using dark site LCM due to symlinks not being preserved during extraction.
Microservices Platform (CMSP) 2.4.3 enablement may fail in a dark site scenario if the MSP dark site bundle is extracted on the customer web server without preserving symlinks. Symlinks may not be preserved depending on the software and method used to extract the MSP dark site bundle. This is usually observed if the MSP bundle is extracted on Windows-based web servers. If symlinks are not preserved, MSP enablement may fail with a 404 error during an attempt to download a file from the /builds/msp-builds/msp-services/docker.io/nutanix/ path. /home/nutanix/data/logs/msp_controller.out: 2023-03-24T14:16:10.2Z msp_registry.go:1057: [DEBUG] [msp_cluster=prism-central] AirGap Enabled. Url is http://darksite.server.tld/release Cause: The MSP 2.4.3 LCM bundle is distributed with symlinks inside the tar.gz pointing from builds/msp-builds/msp-services/464585393164.dkr.ecr.us-west-2.amazonaws.com/nutanix-msp/ to builds/msp-builds/msp-services/docker.io/nutanix/ If the symlinks are not preserved during extraction, it will result in a 404 error.
This issue is resolved by upgrading PC to 2023.1.0.2 and MSP to 2.4.3.2. Upgrade to PC 2023.1.0.2 and deploy the MSP 2.4.3.2 bundle in the dark site. Workaround: As a workaround, copy the contents of builds/msp-builds/msp-services/464585393164.dkr.ecr.us-west-2.amazonaws.com/nutanix-msp/ to builds/msp-builds/msp-services/docker.io/nutanix/ in the web server file system, or recreate the symlink if the OS/file system and web server allow it.
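On a Linux web server, the workaround can be sketched as below. The directory paths come from this article; the placeholder file name and the temporary document root are assumptions for the sake of a self-contained example.

```shell
# Sketch: recreate the missing symlink from the ECR path to the docker.io
# path (falling back to a copy), inside a temp dir standing in for the
# web server document root.
root=$(mktemp -d)
src="$root/builds/msp-builds/msp-services/464585393164.dkr.ecr.us-west-2.amazonaws.com/nutanix-msp"
dst="$root/builds/msp-builds/msp-services/docker.io/nutanix"

mkdir -p "$src" "$(dirname "$dst")"
touch "$src/example-image.tar"        # hypothetical placeholder for an MSP image file

# Prefer a symlink; fall back to copying when symlinks are not allowed.
ln -s "$src" "$dst" 2>/dev/null || { mkdir -p "$dst"; cp -r "$src/." "$dst/"; }

result=$(ls "$dst/")
echo "files served from docker.io path: $result"
rm -rf "$root"
```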
KB11912
PD-based application-consistent snapshot is failing with the error "Quiescing guest VM(s) failed or timed out"
PD-based application-consistent snapshot is failing with the error "Quiescing guest VM(s) failed or timed out"
While taking an application-consistent snapshot for a particular VM, it fails with the below error message: VSS snapshot failed for the VM's XXX protected by the XXX in the snapshot XXXX because Quiescing guest VM(s) failed or timed out In the NGT master log, we can notice the 'Unable to find VSS transaction id' error: ERROR C:\Program Files\Nutanix\python\bin\vss_guest_rpc_client.py:128 Failed to execute abort_vm_snapshot operation, kDataProtectionError: Failed to abort VSS snapshot operation due to Nutanix Data Protection service error, error detail: Unable to find VSS transaction id Cerebro master log: 22:32:08.643589 25221 snapshot_consistency_group_sub_op.cc:5617] <parent meta_opid: 95514186, CG: NARUTO - ConfigMgr 2012 R2 Primary Site Server>: Skipping File: /ctr-prd/.acropolis/vmdisk/622148a5-419e-41a7-98ab-ca6e68d29b3d, File bytes: -1[Skip reason: 2] guest_agent_service.logs: INFO C:\Program Files\Nutanix\python\bin\rpc_service_windows.py:417 Received VssQuiesceVm request for snapshot with uuid 5ea71171-032d-45d5-9e4a-fcc1f4360b61, txn_id 149:1565933545979007
This problem occurs when Cerebro is able to take the snapshot, but VSS takes more than 30 seconds to issue the finish_vm_snapshot call to Cerebro. Cerebro waits for only 30 seconds (FLAGS_cerebro_vss_finish_timeout_secs) after taking the snapshot; after that, it drops the transaction id and takes a crash-consistent snapshot. This issue is generally seen on highly busy VMs. However, before modifying the gflag setting, verify the following: - Confirm that NGT is enabled (true), VSS Snapshot is set to true, and Communication Link Active is set to true when you run the command "ncli ngt list". - Re-install NGT. - Make sure there are no errors in the following commands (to be executed in the UVM): vssadmin list providers and vssadmin list writers. - Disable antivirus temporarily and then check that you can take an application-consistent snapshot.
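When correlating the Cerebro and guest-agent logs, the key question is whether finish_vm_snapshot arrived more than 30 seconds after the snapshot. The sketch below uses sample timestamps (not taken from the log excerpts above) to show the arithmetic.

```shell
# Sketch with sample timestamps: compute the gap between the snapshot and
# the guest's finish_vm_snapshot call, and compare it to the 30-second
# cerebro_vss_finish_timeout_secs default.
snap='2021-08-10 22:32:08'     # time Cerebro took the snapshot (sample)
finish='2021-08-10 22:32:45'   # time finish_vm_snapshot arrived (sample)

delta=$(( $(date -d "$finish" +%s) - $(date -d "$snap" +%s) ))

if [ "$delta" -gt 30 ]; then
  echo "finish arrived ${delta}s after the snapshot: exceeds the 30s default, crash-consistent fallback expected"
else
  echo "finish arrived ${delta}s after the snapshot: within the timeout"
fi
```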
KB12581
Nutanix Files - Dark Site Files Deployment fails with "NFS: CREATE of / failed with NFS3ERR_INVAL(-22)"
Resolving an issue where dark site deployment fails for Nutanix Files.
This problem occurs exclusively when UI upload is used. During deployment of Nutanix Files in Dark Site Environment, the below error is displayed on the AFS leader in minerva_cvm logs: MinervaException: Image conversion failed qemu-img: nfs://127.0.0.1/Nutanix_<>_ctr/: error while converting raw: Failed to create file: creat call failed with "NFS: CREATE of / failed with NFS3ERR_INVAL(-22)" To verify if the issue matches the one described in the article, follow the steps below: Identify the AFS leader CVM: nutanix@NTNX-CVM:~$ afs info.get_leader SSH into AFS leader.The error below is found in /home/nutanix/data/logs/minerva_cvm.log: 2021-10-31 10:28:34,597Z INFO 49535600 uvm.py:1606 Running cmd ['/usr/local/nutanix/bin/qemu-img', 'convert', '-p', '-f', 'qcow2', '-O', 'raw
Workaround: To resolve the issue, use the command-line approach (ncli) to upload the image and proceed with Files deployment in the dark site. Remove the old image which failed via the UI. Substitute the 'name' with the exact version to be removed: nutanix@NTNX-CVM:~$ ncli software remove name=4.0.0.1 software-type=afs Check that the software repository is empty after removal: nutanix@NTNX-CVM:~$ ncli software ls software-type=afs Download the Nutanix Files https://portal.nutanix.com/page/downloads?product=afs upgrade from the Nutanix Support Portal – both the 'json' and 'qcow2' files. Use 'wget' or WinSCP to move both downloaded files to "/home/nutanix/tmp/" on any CVM. Ensure the file names remain exactly as shown on the Portal. Once you confirm that all failed software has been removed, proceed with the below command to upload the same software using ncli: nutanix@NTNX-CVM:~$ ncli software upload software-type=afs meta-file-path=/home/nutanix/tmp/afs-4.0.0-metadata.json file-path=/home/nutanix/tmp/afs-4.0.0.qcow2 Once the upload is complete, proceed with the deployment using the Prism console.
KB10911
Prism Central - After upgrade to PC 2021.1.0.1 or fresh deployment of 2021.1.0.1, the version shown in UI is pc.2020.11
After upgrading a PC instance to 2021.1.0.1, the UI still shows pc.2020.11 in the Version field
Prism Central UI incorrectly shows version pc.2020.11 even though the deployment is running pc.2021.1.0.1. Scenarios: 1. After a fresh 1-click deployment of Prism Central pc.2021.1.0.1. 2. An attempt to upgrade to Prism Central pc.2021.1.0.1 with LCM or 1-Click shows no version available.
1. Confirm in the command line that the current version shows the correct version, pc.2021.1.0.1: nutanix@PCVM:~$ ncli cluster info The "Cluster Full Version" string matches the output in zeus_config_printer: nutanix@PCVM:~$ zeus_config_printer | grep el7.3-release-euphrates-5.19-stable- 2. To work around the problem, switch Envoy off and back on: nutanix@PCVM:~$ python /home/nutanix/ikat_proxy/replace_apache_with_envoy.py disable Note: Rebooting the PCVM does not fix the problem.
KB9539
How to clear BIOS password
This KB has information where BIOS of a node has a password set which is lost or forgotten, and also a way to reset the password.
If a password is set for the BIOS on a server node and is lost or forgotten, there is no way to clear the password from the BIOS screen. This article provides one of the few solutions that may be utilized to clear the BIOS password.
This solution requires powering off and dismounting the server node from the rack. Plan accordingly to execute these tasks. Power off and disconnect all cables from the node. Make sure to mark the cables to put them back in the same ports. Before powering off the node, perform the node shutdown pre-checks https://portal.nutanix.com/page/documents/details?targetId=Chassis-Node-Replacement-Platform-NX3170G6%3Abre-node-shutdown-precheck-t.html.Put the CVM and host in maintenance mode. This ensures the guest VMs on the node are migrated to the other nodes in the cluster.See KB 4639 https://portal.nutanix.com/kb/4639 on placing the CVM and host in maintenance mode.Shut down the node. Shutting Down a Node in a Cluster (vSphere Web Client) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Node-Replacement-Platform-NX3170G6:vsp-node-shutdown-vsphere-t.html Shutting Down a Node in a Cluster (vSphere Command Line) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Node-Replacement-Platform-NX3170G6:vsp-node-shutdown-vsphere-cli-t.html Shutting Down a Node in a Cluster (AHV) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Node-Replacement-Platform-NX3170G6:ahv-node-shutdown-ahv-t.html Shutting Down a Node in a Cluster (Hyper-V) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Node-Replacement-Platform-NX3170G6:hyp-node-shutdown-hyperv-t.html Dismount the node from the rack.Locate the CMOS battery on the motherboard.Remove the battery.After a few seconds, put the battery back.Rackmount the node, reconnect the cables in their original order, and power it on. At this point, the BIOS settings are reset and the BIOS password is cleared. Follow KB 4639 to bring the node and host out of maintenance mode.