KB16052
AAG DB provision might fail if Active Directory Sites and Services configuration is inaccurate
AAG DB provision might fail if Active Directory Sites and Services configuration is inaccurate, especially if customer's AD infrastructure has multiple domain controller servers across multiple sites.
The Always On Availability Groups (AAG) feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. In the Always On Availability Groups provisioning workflow, NDB prestages the VCO object for the AAG listener. The VCO object must be enabled and brought up while the availability group is created. When the AD infrastructure contains multiple domain controller servers across different geographical sites, replicating computer objects takes an extended time. The failure below occurs when the NDB AAG provisioning workflow accesses the VCO after pre-staging: if an inter-site lookup of the object is performed before replication is complete, the access fails randomly on several domain controller servers. The symptom can be confirmed if the failover clustering logs show entries similar to the one below. [Verbose] 00001550.000016ec::2023/10/25-14:24:03.136 INFO [RES] Network Name: [NNLIB] FindSuitableDCNew - objectName wf1ndbtest32-AG, username - WF1NDBTEST32$, firstChoiceDCName - \\ To isolate this issue from the NDB side, create an AAG listener from the Microsoft SSMS (SQL Server Management Studio) console. You will see the same error.
You can configure an intra-site lookup instead of an inter-site lookup of the computer objects, since intra-site replication is meant to happen immediately. The intra-site lookup can be configured by adding subnets against a specific site inside the Active Directory Sites and Services configuration. These subnets refer to the subnets defined in the vLAN consumed in the Network profile to provision the VMs in NDB. Refer to the below documents for more information on AD intra-site lookups: Microsoft Learn Windows Server: Active Directory Replication Concepts https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/replication/active-directory-replication-concepts Microsoft Learn Windows Server: What Is Active Directory Replication Topology? https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc775549(v=ws.10) In the Active Directory Sites and Services MMC, right-click Subnets and select New Subnet. In the New Object – Subnet window, type the subnet, e.g. 192.x.x.x/24. In the Select a site object for this prefix option, select the preferred site for the subnet, e.g. siteA. Click OK to finish. To update network firewall rules and allow all required ports, follow Nutanix Software Type: Ports and Protocols https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=NDB%20%28SQL%20Server%29. To check and open the firewall ports required for Active Directory communication with the provisioned DB Servers, follow Microsoft Learn Windows Server: How to configure a firewall for Active Directory domains and trusts https://learn.microsoft.com/en-us/troubleshoot/windows-server/identity/config-firewall-for-ad-domains-and-trusts. If the above steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com/.
KB13251
NC2 - hibernate/resume stuck due to DRC actions on CVM without S3 access
This article describes an issue where, if hibernation is started and the CVM loses access to the S3 bucket, the hibernation task gets stuck without any progress.
Starting with AOS 6.0.1, NC2 on AWS offers the capability to hibernate/resume the cluster to/from an AWS S3 bucket. An issue has been identified where, if hibernation is started and access to the S3 bucket is lost or blocked, the task will get stuck without showing any progress. Before proceeding with the solution in this article, it is advisable to do the following: 1. Check access to the bucket. To obtain the bucket name, go to the cluster from MCM and under "Cloud Resources" find the S3 Bucket name. nutanix@cvm:~$ allssh "aws s3 ls s3://nutanix-clusters-hb-0005e17d-6ccc-cafe-e443-13809dcb53c0" 2. If any of the above fails, check with the customer whether they have made any changes to the bucket from the AWS console to block access. For example, the customer could have modified the bucket so that only they can make changes; this will have to be resolved. There are a number of bucket changes that can block access, and the customer will need to verify this from the AWS console. 3. If the customer is able to restore access to the bucket from AWS, there is no reason to follow the solution in this article. 4. If the customer is unable to restore access, proceed with the solution in this article. In addition to checking the S3 access above, the following steps can be taken to identify the issue further. Looking at progress_monitor_cli --fetchall we see the following stuck task: progress_task_list { Error messages will be seen in dynamic_ring_changer.INFO: E20220525 12:27:25.543635Z 17596 hibernate_resume_manifest_manager.cc:430] PutObject failed for metadata hibernate global manifest file kHttpError Verify the S3 access on the acting/orchestrator DRC. To identify the acting DRC, check dyn_ring_change_info in zeus: zeus_config_printer | grep -A14 dyn_ring_change In this case CVM ID 14 is the acting DRC.
"WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines ( https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit)"To resolve this, the dynamic ring changer operation should be moved to another CVM with S3 access.Stop the current acting DRC (Dynamic Ring Changer) from the CVM without S3 access genesis stop dynamic_ring_changer Using edit-zeus, remove the entire ring operation from zeus with edit-zeus: dyn_ring_change_info { Wait until a new DRC has picked up the operation. From the output above you can see that the CVM ID doing service_vm_id_doing_hibernate_resume is 14 service_vm_id_doing_hibernate_resume: 14 When the DRC has picked up a new operation, you can see that the CVM_ID will change from 14 (in this case) to another CVM_IDAfter DRC has a new CVM ID, start the old DRC using โ€œcluster startโ€. DRC should remain down on the original CVM until a new one is elected to prevent it from coming back to the same CVM.
KB6150
Intel Network Adapter X520 SFP not detected after Foundation
Intel Network Adapter X520 SFP not detected after Foundation
After Foundation completes, nodes will detect the Intel Network Adapter X520: HOST# lspci |grep -i ethernet However, not all NIC ports are visible to the host (the driver may even unload the module and all ports if an incompatible SFP is used in every port): CVM$ manage_ovs show_interfaces The following messages will be seen in dmesg: [22.605600] ixgbe 0000:19:00.0: failed to load because an unsupported SFP+ or QSFP module type was detected. This issue is not with the Intel Network Adapter X520 itself; it is caused by the SFP inserted into it. Check which SFP modules are being used, e.g. Cisco SFP-10g, Juniper SFPP-10G, etc. It is common for Cisco UCSC servers to come with Cisco SFP optical modules even when using Intel NICs.
As per documentation https://www.intel.com.au/content/www/au/en/support/articles/000005528/network-and-i-o/ethernet-products.html from Intel, "Other brands of SFP+ optical modules will not work with the Intel® Ethernet Server Adapter X520 Series". Cisco is very particular about supported hardware. For the case of the UCS-C240M5SX: Check that the adapter is listed in the CIMC under Network Adapters. For the Intel Network Adapter X520, the PID is N2XX-AIPCI01: PCI Check the UCS-C240M5SX spec sheet http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/c240m5-sff-specsheet.pdf: Table 16 (10G NIC Interoperability with Cables/Optics) confirms the only supported PIDs are UCS-SFP-1WSR and UCS-SFP-1WSL. Once the correct SFPs are inserted in the Network Adapter, you should be able to see the NICs: CVM$ manage_ovs show_interfaces In this case, the SFP UCS-SFP-1WSR (SFP+, 10GE, SR Optical, 850nm) is inserted; run ethtool -m ethX to get more SFP details: [root@host ~]# ethtool -m eth2 This also applies to the Intel® Ethernet Server Adapter X710 Series. Intel NICs need to use Intel-branded SFP+ optical modules or those provided by Nutanix, for example, as part of your sales order.
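As a quick field check, the vendor of the installed SFP can be read from the `Vendor name` line of `ethtool -m` output. A hedged sketch against an inline sample (the vendor string below is illustrative; on the host, pipe real `ethtool -m ethX` output instead):

```shell
# Sketch: pull the SFP vendor from ethtool -m style output. A non-Intel,
# non-Nutanix vendor string on an X520/X710 explains the "unsupported SFP+"
# dmesg error described above. The sample text is illustrative only.
sample='	Vendor name                               : CISCO-FINISAR'
vendor=$(printf '%s\n' "$sample" | awk -F': ' '/Vendor name/ {print $2}' | xargs)
echo "SFP vendor: $vendor"
```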
KB13551
DR Cloud Connect - Error while setting up AWS as Cloud Connect Target: "Failed to fetch time on AWS server. Please very DNS/nameserver setting."
AWS remote site connection SSL errors due to firewall/IDS/IPS.
The Cloud Connect feature helps you back up and restore copies of virtual machines and files to and from an on-premises cluster and a Nutanix Controller VM located on an Amazon Web Services (AWS) cloud. The Nutanix Controller VM is created on an AWS cloud in a geographical region of your choice. While trying to set up AWS as a remote site on Prism Element, on the Remote Site Settings tab we get the following error: "ERROR Failed to fetch time on AWS server. Please very DNS/nameserver setting." We verified that the AWS credentials had been entered correctly. The same error is observed while trying to list appliances using janus_cli: nutanix@NTNX-A-CVM:xx.xx.xx.xx:~$ janus_cli netcat connectivity to aws.amazon.com over 443 and 80 is successful. nutanix@NTNX-A-CVM:xx.xx.xx.xx:~$ nc -zv aws.amazon.com 443 In cerebro.INFO logs, we observed that the RPC call to ListRegions fails with the following error trace: E20220726 09:59:18.686529Z 12643 list_regions_op.cc:114] ListRegions rpc failed, error: kUnknown In janus.out logs, SSL errors while trying to connect to the AWS server are logged continuously: 2022-07-26 09:59:17,940Z ERROR aws_server.py:193 Failed to fetch time on AWS server, please check nameserver settings. Error: SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618)') We verified the DNS configuration as well as name resolution using nslookup from the CVM. No issues were seen with the NTP server/cluster time. Using tcpdump, we could see issues in the transport layer due to the presence of an unknown CA certificate. We used wget/curl to try to fetch a file from a bucket on AWS (download any existing file in the bucket or create a new dummy bucket along with a file), and could clearly see the IDS blocking traffic. nutanix@NTNX-A-CVM:xx.xx.xx.xx:- wget https://abcbucket.s3.region.amazonaws.com/testfile
The customer needs to work with their security team to ensure communication between the cluster IPs and AWS is not being affected by external IDS/IPS/firewall devices.
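One way to spot the TLS interception described above is to inspect the issuer of the certificate actually presented to the CVM. A hedged sketch that classifies an issuer line; the sample issuer and vendor name are illustrative, and on a CVM you would instead feed the output of `openssl s_client -connect aws.amazon.com:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer`:

```shell
# Sketch: flag a suspicious certificate issuer. "ExampleFirewallVendor" is a
# hypothetical re-signing CA name; replace the sample with real openssl output.
issuer='issuer=C = US, O = ExampleFirewallVendor, CN = SSL-Inspection-CA'
case "$issuer" in
  *Amazon*|*DigiCert*) echo "issuer looks like a public CA" ;;
  *) echo "unexpected issuer - possible TLS interception: $issuer" ;;
esac
```

An issuer naming a security appliance rather than a public CA supports the tcpdump finding of an unknown CA certificate.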
KB4502
NX Hardware [Memory] – Alert - A1052 - RAMFault
Investigating RAMFault issues on a Nutanix cluster.
This article provides the information required for troubleshooting the alert RAMFault for your Nutanix cluster. Alert Overview The RAMFault alert occurs when the amount of physical memory detected in a node is less than the amount of memory installed in a node. This situation arises if: A DIMM has failed in the node. A DIMM has been physically removed from the node. Sample Alert Block Serial Number: 16SMxxxxxxxx
If this is a known issue where memory has been deliberately removed from the node or if a DIMM has failed, then you can run the following command to update the configuration with the increased or decreased memory. If memory has been increased: nutanix@cvm:$ ncc health_checks hardware_checks ipmi_checks ipmi_sensor_threshold_check If memory has been decreased/removed: nutanix@cvm:$ ncc health_checks hardware_checks ipmi_checks ipmi_sensor_threshold_check --use_rpc=0 --installed_memory_gb=<total memory installed on the node currently> If a DIMM failure has caused the alert, update Nutanix Support https://portal.nutanix.com with the information below along with your understanding of the current status, so they can assist with resolving the alert, including dispatching new memory DIMMs if required. Note: DIMM replacements or break-fix activities can also trigger this alert, which may create an unneeded support case. Currently, this check runs once every minute and generates an alert if 10 failures occur (or every 10 minutes). With NCC 3.5, this check runs once every hour and generates an alert if 24 failures occur. Troubleshooting Complete the following steps: Upgrade NCC. For information about upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Use NCC to verify the DIMM configuration (available since NCC 2.2.5). This information is cached, so it can be run even against a CVM (Controller VM) or node that is down. NOTE: As the information is cached once every 24 hours, you may need to run "ncc hardware_info update_hardware_info" if a replacement was made less than 24 hours before you logged in to the cluster. 
Command: For local node: nutanix@cvm:$ ncc hardware_info show_hardware_info OR: For gathering cached information from an offline node: nutanix@cvm:$ ncc hardware_info show_hardware_info --cvm_ip=<CVM_IP_ADDRESS> Sample Output: nutanix@cvm:$ ncc hardware_info show_hardware_info If you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com. Collect additional information and attach them to the support case. Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691. nutanix@cvm$ logbay collect --aggregate=true Attaching Files to the Case To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
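When passing --installed_memory_gb after a DIMM change, the current total can be summed from the per-DIMM sizes the host reports (for example via `dmidecode -t memory` on the hypervisor; note that some platforms report sizes in MB rather than GB). A hedged sketch against sample output, with illustrative DIMM sizes:

```shell
# Sketch: sum populated DIMM sizes in GB. Replace the sample with real
# dmidecode -t memory output from the host; empty slots report "No Module Installed".
sample='Size: 32 GB
Size: 32 GB
Size: No Module Installed'
total_gb=$(printf '%s\n' "$sample" | awk '$1 == "Size:" && $3 == "GB" {sum += $2} END {print sum+0}')
echo "Installed memory: ${total_gb} GB"
```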
KB11000
NCC Health Check: async_and_paused_vms_in_recovery_plan_check/async_and_paused_entities_in_recovery_plan_check
The NCC health check async_and_paused_vms_in_recovery_plan_check, introduced in NCC 4.2.0, is used when a witness-configured Recovery Plan has async and/or paused VMs. The check is expected to fail if a Recovery Plan has async or paused entities.
The NCC health check async_and_paused_vms_in_recovery_plan_check, introduced in NCC 4.2.0, is used when a witness-configured Recovery Plan has async and/or paused VMs. The check is expected to fail if a Recovery Plan has async or paused entities. Running the NCC check: This check can be run as part of a complete NCC health check: nutanix@cvm$ ncc health_checks run_all You can also run this check separately: nutanix@cvm$ ncc health_checks data_protection_checks witness_checks async_and_paused_vms_in_recovery_plan_check From NCC 4.6.0 and above, use the following command for the individual check: ncc health_checks data_protection_checks witness_checks async_and_paused_entities_in_recovery_plan_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled by default to run every 6 hours. From NCC 4.4 onwards, the check raises an alert. Sample Output: For Status: PASS health_checks hypervisor_checks data_protection_checks witness_checks async_and_paused_vms_in_recovery_plan_check From NCC 4.6.0 and above: health_checks hypervisor_checks data_protection_checks witness_checks async_and_paused_entities_in_recovery_plan_check For Status: WARN Running : health_checks data_protection_checks witness_checks async_and_paused_vms_in_recovery_plan_check From NCC 4.6.0 and above: Running : health_checks data_protection_checks witness_checks async_and_paused_entities_in_recovery_plan_check Output messaging From NCC 4.6.0 and above: [ { "110458": "Check if a Recovery Plan with Witness configured has asynchronously protected VMs or has VMs with Synchronous Replication paused", "Check ID": "Description" }, { "110458": "Recovery Plan with Witness configured has asynchronously protected VMs or has VMs with Synchronous Replication paused", "Check ID": "Cause of failure" }, { "110458": "Ensure all the 
VMs that are part of the Witness configured Recovery Plan are Synchronously protected", "Check ID": "Resolutions" }, { "110458": "The entities will not be managed by Witness", "Check ID": "Impact" }, { "110458": "This is an event triggered alert", "Check ID": "Schedule" }, { "110458": "A110458", "Check ID": "Alert ID" }, { "110458": "Check if a Recovery Plan with Witness configured has asynchronously protected VMs or has VMs with Synchronous Replication paused", "Check ID": "Alert Title" }, { "110458": "The Witness configured Recovery Plan '{recovery_plan_name}' has asynchronously protected VMs or has VMs with Synchronous Replication paused", "Check ID": "Alert Message" }, { "110458": "110458", "Check ID": "Check ID" }, { "110458": "Check if the Recovery Plan configured with Witness has asynchronously protected entities or has entities with Synchronous Replication paused.", "Check ID": "Description" }, { "110458": "Recovery Plan configured with Witness has asynchronously protected entities or has entities with Synchronous Replication paused.", "Check ID": "Cause of failure" }, { "110458": "Ensure all the entities that are part of the Witness configured Recovery Plan are Synchronously protected.", "Check ID": "Resolutions" }, { "110458": "The entities will not be managed by Witness", "Check ID": "Impact" }, { "110458": "This is an event triggered alert", "Check ID": "Schedule" }, { "110458": "A110458", "Check ID": "Alert ID" }, { "110458": "Check if the Recovery Plan configured with Witness has asynchronously protected entities or has entities with Synchronous Replication paused", "Check ID": "Alert Title" }, { "110458": "The Recovery Plan '{recovery_plan_name}' configured with Witness has following entities with synchronous replication paused '{paused_entity_name}'\t\t\tThe Recovery Plan '{recovery_plan_name}' configured with Witness has following asynchronously replicated entities '{async_vm_names}'.", "Check ID": "Alert Message" } ]
If the check fails, follow the below troubleshooting steps: Remove VMs not protected by SyncRep from the Recovery Plan. Create a new Recovery Plan for those VMs, or make sure to synchronously protect the identified VMs in a Protection Policy. In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Additionally, gather the following command output and attach it to the support case: nutanix@cvm$ ncc health_checks run_all
KB2994
HW: LSI 3008 Firmware Manual Upgrade Guide
LSI 3008 controller firmware releases prior to PH09 have known deficiencies that can cause system hangs - Applies to: NX-8150-G3
It was determined that there are significant firmware deficiencies in the LSI 3008 firmware releases prior to PH09. They have been determined to be one cause of system hangs seen on the 8150-G3. The current recommended FW version for the NX-8150-G3 is PH14. The LSI disk controller firmware upgrade addresses drive instability, failures, and high IO await times/disk latency without corresponding IO. Most of our LSI 3008 platforms initially shipped with PH06; some of the 8150-G3s shipped with PH05. It is OK to upgrade directly from any interim PH0x release to PH14. The bootable ISO file link below includes the firmware image and command tool needed to update the LSI SAS3008 disk controller. Make sure you download the correct ISO file and use it for the correct node type; check the driver version as well when updating to PH14. AOC Nodes: NX-8150-G3 ISO: https://s3.amazonaws.com/ntnx-sre/LSI_PH14/lsi3008-AOC-2U2N-PH140000.iso Checking the current driver version from a CVM: If the driver version is 15.00.00.00 (AOS based), the firmware version should be PH14. nutanix@CVM$ modinfo mpt3sas LSI release notes that document node hang and disk issues: If the customer specifically requests the release notes for the LSI firmware, you can provide the following PDFs to the customer. The Phase 7 release notes specifically identify the cause of and firmware fix for the LSI controller and host lockup issue, which is also included in Phase 10. Phase7_FW_GCA_Release.pdf https://s3.amazonaws.com/ntnx-sre/G4-LSI+release+notes/Phase7_FW_GCA_Release.pdf ID: SCGCQ00747472 (Port Of Defect SCGCQ00735541) Headline: IOP: Config space read during startup can cause firmware lockup Description Of Change: Changed code to not enable critical interrupts until after firmware enables the config space registers. 
Issue Description: If the host reads certain config space registers before firmware has been able to enable the config registers, firmware may get into an infinite loop and the controller will become unresponsive. Steps To Reproduce: The host should boot the system and release the controller from reset. Within about 600ms, the host should attempt to read the MSIx register (offset 0xC0) repeatedly. Firmware will lock up and the heartbeat LED will stop. Phase_10.00.03_ReleaseNote.pdf https://s3.amazonaws.com/ntnx-sre/G4-LSI+release+notes/Phase_10.00.03_ReleaseNote.pdf Phase 14.0.0.0 Release Notes http://images.45drives.com/Firmware/LSI9305/docs/LINUX_RH_SL_OEL_CTX_MPT_GEN3_PHASE14.0-15.00.00.00-1.pdf
Steps to manually update the LSI FW: 1) Verify the version of the LSI before the upgrade. a) Hyper-V: From the CVM of the node: winsh "cd \Program?Files; cd Nutanix\Utils ; .\lsiutil.exe 0" b) ESXi or AHV: From the CVM of the node: sudo /home/nutanix/cluster/lib/lsi-sas/lsiutil 0 Example output: nutanix@CVM$ winsh "cd \Program?Files; cd Nutanix\Utils ; .\lsiutil.exe 0" 2) Verify the driver version. If the driver version is 15.00.00.00 (AOS based), the firmware version should be PH14. modinfo mpt3sas Example: nutanix@CVM$ modinfo mpt3sas 3) Follow the Verifying the Cluster Health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-cluster-health-verify-t.html chapter to make sure that the cluster can tolerate a node being down. Do not proceed if the cluster cannot tolerate a node failure. 4) Verify that the below NCC hardware checks pass when running the below NCC command: disk_online_check, boot_raid_check, disk_smartctl_check ncc health_checks hardware_checks run_all 5) Put the host into maintenance mode and ensure that all VMs are migrated off the host once complete. ESXi: Right-click the host in vCenter and put it into maintenance. AHV: acli host.enter_maintenance_mode <host in question> 6) SSH into the CVM of the host in question and power it down gracefully with the below command. cvm_shutdown -P now 7) Go to the IPMI UI and launch the Console Redirection. 8) Click on Virtual Storage under the Virtual Media dropdown. 9) Select "ISO File" under Logical Drive Type and click Open Image to select the ISO. Ensure you have selected the correct .iso for the HW model. 10) Click Plug In and then click OK. 11) Click "Set Power Reset" under the Power Control drop-down. 12) When using the PH14 ISO, once the host boots, the ISO will be mounted and the LSI firmware upgrade performed automatically. Once it completes, it will display the below. 13) Click on Virtual Storage under the Virtual Media dropdown. 
14) Click Plug Out and click OK. 15) Click "Set Power Reset" under the Power Control drop-down. 16) Once the host comes back up into the hypervisor, verify that the host is out of maintenance and the CVM is started. ESXi: Right-click the host in vCenter, take it out of maintenance, and start the CVM if needed. AHV: From a working CVM 17) Follow Verifying the Cluster Health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-cluster-health-verify-t.html to make sure cluster resiliency is OK. Downgrading the LSI 3008 firmware: Manually downgrading the LSI firmware should not be performed unless it is absolutely necessary. The firmware can be downgraded using the same manual ISO process as upgrading the firmware. Please make sure that you use the correct ISO for the node type you are applying the firmware to. To downgrade, the use of the ISO process is necessary, as it requires the firmware to be applied in a UEFI boot environment.
KB16291
Nutanix Files - File Server Clone operation fails as external interfaces are updated before checking NVM RPC server is UP
File Server clone operations fail as external interfaces are updated before checking NVM RPC server is UP.
The following log signature is observed in minerva_nvm.log (/home/nutanix/data/logs/minerva_nvm.log) on the minerva_cvm leader for the failed restore task. Find the minerva_cvm leader first: nutanix@CVM:~$ afs info.get_leader Search the log file: nutanix@CVM:~$ less minerva_cvm.log | grep -B6 -i "File-server restore task failed on FSVM" The exact failure later in the same minerva_nvm log shows the clone operation failed while updating the external interface of the cloned FSVM: 2024-02-13 03:45:41,870Z ERROR 99694416 nvm_utils.py:1254 One of external network interface updates failed. We also see messages in the Minerva log about IP addresses not being set for eth1: nutanix@CVM:~$ allssh 'zgrep "No IP address found for interface" ~/data/logs/minerva_nvm*'
This issue is resolved in File Server version 5.0. If this scenario is encountered in an earlier File Server version, contact Nutanix Support http://portal.nutanix.com/ for assistance.
KB4481
Upload of whitelist fails with error "Upload whitelist operation failed with error: Invalid whitelist. Missing field 'last_modified"
In Prism, uploading the whitelist fails with error "Upload whitelist operation failed with error: Invalid whitelist. Missing field 'last_modified"
In Prism, uploading the whitelist fails with the following error. Upload whitelist operation failed with error: Invalid whitelist. Missing field 'last_modified
The ISO whitelist is a different file from the AOS metadata JSON file; this error appears when the wrong file is uploaded. Download the ISO whitelist from https://portal.nutanix.com/#/page/Foundation
KB15637
Nutanix Kubernetes Engine - How to configure etcdctl in an NKE Kubernetes cluster
The etcdctl command may be used to list etcd members, check member health, and list member status, among other operations; however, etcdctl requires endpoint and certificate variables be passed via CLI or environment variable, or the command will fail. This article explains how to specify these variables.
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon. In an etcd VM deployed as part of an NKE Kubernetes cluster, the etcdctl command provides a way to interact with the etcd datastore. etcdctl may be used for operations such as viewing etcd cluster members and checking member health; however, etcdctl commands will fail unless certain variables are passed either on the CLI or set as environment variables. These variables consist of: Endpoints, which are a combination of the protocol, etcd VM IP address, and etcd port. For example: https://<etcd IP>:2379 Certificates, which are the files, with full path, to the certificate, CA certificate, and key files. Following are the certificate files: Certificate: /var/nutanix/etc/etcd/ssl/peer.pem CA Certificate: /var/nutanix/etc/etcd/ssl/ca.pem Key: /var/nutanix/etc/etcd/ssl/peer-key.pem If a command, such as etcdctl member list, is executed without the variables set, the command will fail: [nutanix@etcd ~]$ sudo etcdctl member list
The variables may either be exported as environment variables or set on the CLI at runtime. To include the variables on the CLI, specify --endpoints, --key, --cert, and --cacert as shown in the following: [nutanix@etcd ~]$ sudo etcdctl member list --endpoints=https://<etcd-0 IP>:2379,https://<etcd-1 IP>:2379,https://<etcd-2 IP>:2379 --key /var/nutanix/etc/etcd/ssl/peer-key.pem --cert /var/nutanix/etc/etcd/ssl/peer.pem --cacert /var/nutanix/etc/etcd/ssl/ca.pem For example: [nutanix@etcd ~]$ sudo etcdctl --write-out=table member list --endpoints=https://10.100.50.48:2379,https://10.100.50.53:2379,https://10.100.50.50:2379 --key /var/nutanix/etc/etcd/ssl/peer-key.pem --cert /var/nutanix/etc/etcd/ssl/peer.pem --cacert /var/nutanix/etc/etcd/ssl/ca.pem Note: In the above example, --write-out=table is an optional parameter, but it typically makes the output easier to view. To specify the variables as Linux environment variables, execute the following: [nutanix@etcd ~]$ export ETCDCTL_CACERT=/var/nutanix/etc/etcd/ssl/ca.pem Then run the etcdctl command without the need to specify the certificates or endpoints: [nutanix@etcd ~]$ sudo -E etcdctl --write-out=table member list Note: Without sudo, the etcdctl command will fail with a "permission denied" error. When executing etcdctl with sudo as the nutanix user, the -E argument may need to be specified, as shown in the example above, to preserve the environment variables. Once the current SSH or console session is exited, the variables must be reconfigured before etcdctl will work.
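etcdctl v3 recognizes the ETCDCTL_ENDPOINTS, ETCDCTL_CACERT, ETCDCTL_CERT, and ETCDCTL_KEY environment variables, corresponding to the CLI flags shown above. A sketch of a full export set (the endpoint IPs are the example values from this article; substitute your own etcd VM IPs):

```shell
# Sketch: export all four etcdctl connection variables once per session.
# IPs are the example etcd VMs from this article; adjust to your cluster.
export ETCDCTL_ENDPOINTS="https://10.100.50.48:2379,https://10.100.50.53:2379,https://10.100.50.50:2379"
export ETCDCTL_CACERT=/var/nutanix/etc/etcd/ssl/ca.pem
export ETCDCTL_CERT=/var/nutanix/etc/etcd/ssl/peer.pem
export ETCDCTL_KEY=/var/nutanix/etc/etcd/ssl/peer-key.pem
# Then run, preserving the environment through sudo with -E:
#   sudo -E etcdctl --write-out=table member list
```

Remember that the exports are lost when the session ends, so they must be re-run in each new SSH or console session.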
KB14395
Prism Central GUI page does not load as expected.
Users are not able to log in to the Prism Central GUI; instead of the normal GUI login page, some lines with directory names are listed
Users are not able to log in to the Prism Central GUI; instead of the normal GUI login page, some lines with directory names are listed. Example of a blank page or directories listed instead of the expected GUI login page:
This issue is observed when the contents of the directory /home/apache/www/console have been removed or modified. Follow these steps to identify whether the files have been modified: Search the history events of the relevant PC VM for events related to this directory: nutanix@CVM$ panacea_cli show_bash_history | egrep "===|www" Check the contents of the /home/apache/www/console directory and compare it to a working PC instance if necessary. After confirming that the contents of this directory were removed or there are missing files, proceed to the next step to resolve it. Otherwise, this KB does not apply, so do not proceed further. If you need help understanding the root cause, collect the logs and engage EE assistance with a TH. To resolve this, copy all the contents of the directory /home/apache/www/console from a working PC VM (running the same version as the affected one) to the /home/apache/www/console/ directory on the affected PC VM, then refresh the page. Note: The copy can be done using scp or any other method; for example, SCP from the affected PCVM to another working PCVM of the same version: root@PCVM: scp -r root@<IP-address-of-Working-PC-same-version>:/home/apache/www/console /home/apache/www/ Verify the files are correctly copied under /home/apache/www/console: root@PCVM:/home/apache/www/console# ls -l Once this is completed, the GUI should show normally again.
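To confirm that the affected directory actually differs from a healthy PC VM before copying, a single checksum per directory tree is enough to compare. A hedged sketch, demonstrated on a temporary directory; on each PC VM you would point fingerprint_dir at /home/apache/www/console and compare the two hashes:

```shell
# Sketch: reduce a directory tree to one hash so two PC VMs can be compared.
# Demonstrated on a temp dir; on the PC VMs, run against /home/apache/www/console.
fingerprint_dir() {
  find "$1" -type f -exec md5sum {} + | sort | md5sum | awk '{print $1}'
}
demo=$(mktemp -d)
printf '<html></html>\n' > "$demo/index.html"
fingerprint_dir "$demo"
```

Matching hashes mean the file contents are identical; differing hashes confirm the console directory has been altered on the affected VM.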
KB13487
Prism Central: SAML authenticated users getting 403 Access Denied error
When there are changes to the SAML provider (e.g. Okta), some users' UUIDs may still be tied to the old SAML provider.
When there are changes to the SAML provider (e.g. Okta), some users' UUIDs may still be tied to the old SAML provider. This will cause these users to get a 403 Access Denied error when trying to log in to Prism Central. The below fix is only for MSP-enabled clusters.
1) Run the below command in a PC VM to identify the user's UUID. nutanix@PCVM:~$ nuclei user.list count=1000 | grep -i <user name> Example: nutanix@NTNX-172-23-22-254-A-PCVM:~$ nuclei user.list count=1000 | grep -i [email protected] 2) Use the UUID above and run the below command in a PC VM to verify that the user has access control policies configured in Prism Central. nutanix@PCVM:~$ nuclei user.get <UUID> Example: nutanix@PCVM:~$ nuclei user.get 12536138-cc35-5437-bd2a-00cb10d271fd 3) Put aplos into debug mode on the PE cluster being accessed. nutanix@cvm:~$ allssh 'echo "--debug=True" > ~/config/aplos.gflags' 4) Run the below command to tail the aplos logs for the user in question on the PE cluster being accessed. tail -F ~/data/logs/aplos.out | grep -i <user name> 5) Request the affected user to log in again. 6) Verify if you are seeing the below signature while tailing aplos.out on the PE cluster being accessed. 2022-07-26 20:04:14,469Z ERROR iamv2_auth_info.py:201 User [email protected] [372c9de3-82c3-5934-8bc6-6304afd88b75] is not allowed to access the system without access control policy. 7) Verify if the user's UUID from nuclei differs from what is in aplos.out. Example: nuclei: 12536138-cc35-5437-bd2a-00cb10d271fd 8) Take aplos out of debug on the PE cluster being accessed. nutanix@cvm:~$ allssh 'rm ~/config/aplos.gflags' 9) Run the below command on a PC VM to identify the master cape pod. sudo kubectl -n ntnx-base get pods --field-selector=status.phase=Running --selector=pg-cluster=cape,role=master -o jsonpath='{.items[*].metadata.name}' 10) Run the below command on a PC VM to drop into the kubectl bash prompt. nutanix@PCVM:~$ sudo kubectl -n ntnx-base exec -it cape-699df8fb6d-zln57 bash 11) Drop into the postgres CLI. psql 12) Set the role to iam-user-authn. postgres=> set role='iam-user-authn'; 13) Run the below command on each UUID for the user (nuclei and aplos.out) to identify the connector_id for each UUID. Replace <UUID> with the actual user UUID. 
postgres=> select uuid, email_id, type, active, username, connector_id from iam_user where uuid='<UUID>'; Example: postgres=> select uuid, email_id, type, active, username, connector_id from iam_user where uuid='372c9de3-82c3-5934-8bc6-6304afd88b75'; 14) Once you identify the connector_id associated with each UUID, run the below command to get the details of the connector. Replace <UUID> with the actual connector UUID. postgres=> select uuid, domain, name, description from connector where uuid = '<UUID>'; Example: postgres=> select uuid, domain, name, description from connector where uuid = 'b9f59404-cd28-5c40-ba15-6454244379e6'; 15) Identify the user UUID associated with the incorrect connector_id and use the below command to delete that user. Replace <uuid> with the actual user UUID. postgres=> DELETE from iam_user where uuid='<uuid>'; 16) Exit out of the Postgres DB and the kubectl bash prompt to get back to the PC VM CLI. exit 17) From the PC VM, delete the user that is associated with the incorrect connector_id from nuclei. Replace <UUID> with the actual user UUID. nutanix@PCVM:~$ nuclei user.delete <UUID> 18) Create a new user using the correct connector UUID. Replace <connector UUID> with the actual connector UUID and replace <username> with the actual username. nuclei user.create spec.resources.identity_provider_user.identity_provider_reference.kind=identity_provider spec.resources.identity_provider_user.identity_provider_reference.uuid=<connector UUID> username=<username> Example: nuclei user.create spec.resources.identity_provider_user.identity_provider_reference.kind=identity_provider spec.resources.identity_provider_user.identity_provider_reference.uuid=b9f59404-cd28-5c40-ba15-6454244379e6 [email protected]
KB13468
Nutanix Move | Disk access error: Cannot connect to the host
When migrating from ESXi, a plan may fail with the error "Disk access error: Cannot connect to the host". srcagent.log and diskreader_vddk601.log should be checked to investigate the cause. An expired SSL certificate may be the cause of the error.
A migration plan may fail at the start with the following error. Disk access error: Cannot connect to the host It indicates that Move cannot connect to the ESXi for some reason.
First, Move must be able to communicate with vCenter Server on port 443, ESXi hosts on ports 902 and 443, and AHV on port 9440. If there is no problem there, check /opt/xtract-vm/logs/srcagent.log for the cause. The log may show the following error. server: PrepareForMigration for taskid 5707cdd6-230c-4bad-9874-59c5927c2fdc completed with error [Location="/hermes/go/src/vddk/vddk601.go:115", Msg="Cannot connect to the host", VDDKErrorCode="0x4650"] VDDK error (error=0x1000) If a VDDK error is included, check diskreader_vddk601.log for details on VDDK behavior. Here is an example. I0725 08:10:45.798581 7 diskreaderserviceimpl.go:255] server: entering PrepareForAccess : xxxxxxxx If the following appears in the log, it indicates that the SSL certificate for the hostname (vCenter or ESXi) has expired. MemoryDbMappingsExpire: Expiring SSL ID mapping, hostname x.x.x.x SSL ID ae:f9:2d:9a:f2:66:8e:c8:7b:f9:21:bf:bf:bb:62:2c:6e:46:fb:a4 As a workaround, if the affected hostname is vCenter, use the ESXi host directly as the migration source instead of vCenter. If the affected hostname is ESXi, disabling SSL verification on the ESXi side or regenerating the certificate may resolve the issue.
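Outside of Move, a generic way to check whether a host's certificate has expired is openssl. The sketch below demonstrates the check locally against a freshly generated self-signed certificate; the commented s_client invocation is how the same check could be pointed at a real vCenter or ESXi host (hostname is a placeholder).

```shell
# Against a live host this would look like:
#   echo | openssl s_client -connect <vcenter-or-esxi>:443 2>/dev/null \
#        | openssl x509 -noout -enddate
# Local demonstration using a freshly generated self-signed certificate:
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
# -checkend 0 exits 0 while the certificate is still valid at this moment
if openssl x509 -in "$dir/cert.pem" -noout -checkend 0 >/dev/null; then
  status=valid
else
  status=expired
fi
echo "certificate is $status"
```

An "expired" result against the real host would corroborate the MemoryDbMappingsExpire signature above.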
KB12399
Nutanix Files: Troubleshooting CFT third-party backups
How to troubleshoot incremental third-party backups in Nutanix Files
There are cases when we need to work in parallel with third-party backup vendors (Commvault in this example) in Nutanix Files to troubleshoot the backup process. In this scenario, Commvault's incremental backup is not able to find any changed files and is not backing up anything. The Commvault log shows the following snippet with 0 files modified. 1448 3040 05/26 11:48:03 50853 CScanVolumePerformanceCounter::LogAllThreadPerformanceCounters(321) - ----------------- Total Volume Time Taken ------------------ On the Nutanix Files side, we should check the snapshot process and the backup tasks in two important logs: minerva_ha_dispatcher.log and minerva_nvm.log. These are the two files that contain information about the backup process and snapshots taken on the AFS side. 2021-05-26 02:41:04,702 INFO 80723456 minerva_task_util.py:2339 Snapshot create Task called The next log that needs to be checked is minerva_nvm.log. 2021-05-26 02:46:16,767 WARNING 91707088 rpc_service.py:947 Diff file/s are invalid in comparison to markerdiff_path: /zroot/shares/1978f6aa-7a65-4f09-8b26-8096cfa73fc0/:79c8a3c4-d8d7-46e3-b8fc-b90869455ccb/backup_diff/45c4976e-42a7-4ca2-a82b-1965106f2a85/d452c954-83ef-4f10-be4e-3b0f07b989f4_80147ca3-613a-4dd9-8b0c-f1e4ebc41113 The output above is seen when there has been no change since the last backup.
1. Check ergon tasks for the diff marker. This task is initiated to determine whether any change was made to the files since the last snapshot. In the example below, all the tasks have succeeded. ecli task.list operation_type_list=SnapshotDiffMasterTask 2. If one of the tasks is failing, check minerva_nvm.log, where we should see an RPC failure and an error about failing to get the diff marker. 2021-06-09 12:05:47,795 INFO 67237680 rpc_service.py:818 Snapshot diff Get Change List RPC called with arg: url_uuid: "b9d46f97-3b4e-4f6c-8d66-acd702a34fc1" 3. You can also manually check the diff from the afs prompt in the FSVM. afs bk.list_diff format_true=true 4. Using the UUID you get from step 3, run: nutanix@NTNX-172-16-2-142-A-FSVM:~$ allssh 'zgrep "XXXX-XXXXX-XXXXX-XXXX-XXXXXX" data/logs/minerva_ha_dispatcher.log*' The output will show the diff marker. (example output from minerva_ha_dispatcher.log) 5. In the example above, there is no output from the RPC call, which means there is no change between the snapshots. Meanwhile, the example below shows the output when there is a difference and the backup will initiate. 6. In cases when there are changes, we should expect to see the following output. 2021-06-03 12:03:01,243 INFO 10591408 rpc_service.py:818 Snapshot diff Get Change List RPC called with arg: url_uuid: "0d7a7041-32a2-4290-9141-8f9e7bb03999"
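When triaging step 1, it can help to count how many diff-marker tasks have failed. The sketch below parses a fabricated sample in the style of the ecli output (task UUIDs and statuses are made up; real output comes from a CVM):

```shell
# Fabricated sample in the style of
# 'ecli task.list operation_type_list=SnapshotDiffMasterTask' output.
tasks='c0ffee01 SnapshotDiffMasterTask kSucceeded
c0ffee02 SnapshotDiffMasterTask kFailed
c0ffee03 SnapshotDiffMasterTask kSucceeded'

# Count failed diff-marker tasks; a non-zero count means minerva_nvm.log
# should be checked as described in step 2.
failed=$(printf '%s\n' "$tasks" | grep -c kFailed)
echo "failed SnapshotDiffMasterTask count: $failed"
```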
KB15299
3rd party backups may fail when Container vdisk migration is in progress
3rd party backups may fail when Container vdisk migration is in progress
3rd party VM backups may fail when Container vdisk migration is in progress. For example, Cohesity VM backup runs may fail with the error: Unknown snapshot state string On the Nutanix cluster, a new task to take a snapshot is created around the same time as the backup task and may fail with the error: error_code": 148, "error_detail": "Container vdisk migration is in progress for some entities For example: Consider the below VM in question nutanix@CVM:~$ acli vm.list | grep "uservm" The VM in question does not have any non-completed tasks nutanix@CVM:~$ ecli task.list entity_list=vm:b8c76564-2ac2-482c-9ee7-a57994cd3d6d limit=4000 The last backup task (snapshot) failed nutanix@CVM:~$ ecli task.get 8e56b362-b591-545f-9be3-117ad245531c Additionally, the current non-completed tasks will show "ContainerVdiskMigrate" in progress: nutanix@CVM:~$ ecli task.list include_completed=false limit=4000
Backups should either be re-run manually or scheduled to run after the container vdisk migration tasks have completed successfully. The migration tasks can be verified to be running by either checking the Prism > Tasks page or from the command prompt: ecli task.list include_completed=false limit=4000 For more information about vDisk Migration Across Storage Containers, refer to the AHV Administration Guide: AOS 6.6 https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_6:ahv-vdisk-migration-c.html AOS 6.5 https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_5:ahv-vdisk-migration-c.html
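A pre-backup gate could poll the task list until no ContainerVdiskMigrate tasks remain before starting the backup. The sketch below is a local illustration only: list_tasks is a mock standing in for the real ecli command on a CVM, and the mock makes the migration "finish" on the third poll.

```shell
# 'list_tasks' mocks 'ecli task.list include_completed=false limit=4000'.
attempt=0
list_tasks() {
  attempt=$((attempt + 1))
  # mock: migration tasks disappear on the third poll
  if [ "$attempt" -lt 3 ]; then echo "deadbeef ContainerVdiskMigrate kRunning"; fi
}

polls=0
tmp=$(mktemp)
while :; do
  list_tasks > "$tmp"                 # avoid a pipeline so 'attempt' persists
  grep -q ContainerVdiskMigrate "$tmp" || break
  polls=$((polls + 1))
  # real usage would 'sleep 60' here before polling again
done
rm -f "$tmp"
echo "migration drained after $polls waits"
```

Only once the loop exits would the backup be (re-)triggered.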
KB13851
Alert - A160159 & A160160 - File Server Volume Group Configuration Checks
Two alerts to verify that a File Server's Volume Group configuration is present and correctly configured.
This Nutanix article provides the information required for troubleshooting the alert file_server_vg_check for your Nutanix Files cluster.Alert overview The file_server_vg_check is generated when a Nutanix Files Server volume group configuration is missing or inconsistent. Sample alert Block Serial Number: 23SMXXXXXXXX Output messaging [ { "Check ID": "Checks if the File Server VG configuration is consistent or not." }, { "Check ID": "File Server VG configuration may be inconsistent." }, { "Check ID": "Refer to KB article 13851. Contact Nutanix support if the issue persists or assistance is needed." }, { "Check ID": "File Server shares may not be accessible." }, { "Check ID": "A160159" }, { "Check ID": "File Server VG configuration inconsistent" }, { "Check ID": "File server {file_server_name} : Configuration of VGs {vg_list} is not proper." }, { "Check ID": "A160160" }, { "Check ID": "File Server VG attach configuration missing" }, { "Check ID": "File Server VG attach configuration missing in FSVM IDF" }, { "Check ID": "Refer to KB article 13851. Contact Nutanix support if the issue persists or if assistance is needed." }, { "Check ID": "File Server share may become unavailable." }, { "Check ID": "A160160" }, { "Check ID": "File Server VG attach configuration missing" }, { "Check ID": "{message}" } ]
Troubleshooting This alert is triggered when there is a change in the required Volume Group (VG) for Files. Since this VG is set up with Files and should not be altered, this alert is often an indicator of another issue. Resolving the IssueCheck Prism for any other alerts, correcting them as possible. If no other issues are present, consider engaging Nutanix Support https://portal.nutanix.com. Collect additional information and attach them to the support case. Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Run a complete NCC health_check on the cluster. See KB 2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691. nutanix@cvm$ logbay collect --aggregate=true Attaching Files to the Case To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
KB14037
Nutanix Files - 3rd Party incremental backup fails for Nutanix File Share
3rd party incremental backups fail on home shares hosted on Nutanix Files
When using 3rd party backup software, you may observe that incremental snapshots fail for certain file shares, while full snapshots complete. For example, on HYCU, you will see the following signature of task failures: Name: Home share On the cluster, you will see many SnapshotDiffIncrementalTask failures, as below <ergon> task.list status_list=kFailed Additionally, you will see a diff_url error in the aplos.out logs, as below 2022-09-07 04:20:35,226Z INFO file_server.py:98 {'565b1b35-6adb-4c15-b09c-d2260770cbab': [{'dns_name': u'NTNX-AFS-1', 'ip_address': u'10.XX.XX.XX', 'uuid': '6640c093-83a5-49a4-ac81-0b83649d07f2', 'name': u'NTNX-AFS-1'}, {'dns_name': u'NTNX-AFS-3', 'ip_address': u'10.ZZ.ZZ.ZZ', 'uuid': '939d8b26-e0e8-43ab-8b24-531a5bb2b77e', 'name': u'NTNX-AFS-3'}, {'dns_name': u'NTNX-AFS-2', 'ip_address': u'10.YY.YY.YY', 'uuid': 'deccc302-b689-40cb-9a48-2b0d9d44d68e', 'name': u'NTNX-AFS-2'}]}
This is a known issue due to a potential ZFS leak and is fixed in Files 4.2.1. Please upgrade to Files 4.2.1 or later.
KB13366
LCM upgrades fail with "Stream Timeout" when using Dark Site local web server
If the local web server is not available during LCM operations then upgrades and inventory will fail with the message "Stream Timeout"
LCM Inventory and upgrades using a local web server for Dark Site upgrades fail with the red banner message "Stream Timeout".This message appears if there is a problem accessing the web server. To confirm, check the genesis.out log on the LCM leader.To find the LCM leader, log on to the CVM as the "nutanix" user: nutanix@CVM:~$ lcm_leader SSH to this node, and view the genesis.out nutanix@cvm$ less ~/data/logs/genesis.out From the CVM command prompt, connection can also be generally tested by: nutanix@cvm$ nc -z -v aa.bb.cc.dd 80
Ensure the Dark Site web server at aa.bb.cc.dd is active and reachable from the CVM IP addresses.Upload the required update bundles per the Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=LCM
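In addition to the nc check above, a quick probe can confirm the Dark Site web server answers HTTP requests for the expected path. A minimal sketch, assuming curl is available; the URL below is a placeholder on which nothing listens, so the probe reports it unreachable:

```shell
# Minimal reachability probe for the Dark Site web server URL configured in
# the LCM settings (placeholder URL below; substitute the real server/path).
check_url() {
  if curl -sf -m 5 -o /dev/null "$1"; then echo reachable; else echo UNREACHABLE; fi
}
result=$(check_url "http://127.0.0.1:1/release")
echo "$result"
```

Running check_url against the real server URL from each CVM would quickly isolate per-node connectivity or firewall problems.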
KB14264
Curator scans failing intermittently with Medusa error kResponseTooLong
Curator Full, Partial and Selective scans can fail intermittently with Medusa error kResponseTooLong on AOS versions 6.5.x
SymptomsIn AOS 6.5.x releases, Curator Full, Partial and Selective Scans can fail intermittently with Medusa error kResponseTooLong and get marked as kCanceled in the 2010 page. Note: Regular I/O workflow on the cluster is not impacted by this issue.Verification Check the last successful scans to see the timestamps of recently completed scans and confirm that recent scans have been failing. nutanix@CVM:~$ date Review 2010 page on the Curator master CVM to confirm that scans are not completing with a kCanceled status: Checking one of the failed MapReduce jobs for these Canceled scans shows a Failed (kMedusaError) status: Curator logs also report "Message too long" errors on CVMs nutanix@CVM:~$ grep 'Message too long' ~/data/logs/curator.INFO If the above message is seen, validate that there are no "Protocol Violation" errors in Curator logs nutanix@CVM:~$ grep 'Protocol Violation' ~/data/logs/curator.INFO Note: If you see logs with this "Protocol Violation" error, check KB 12690 https://portal.nutanix.com/kb/12690 for potential matches. This issue is observed only on a cluster that has high vDisk fragmentation. This can be verified with the following command: nutanix@CVM:~$ curator_cli get_vdisk_fragmentation_info As seen in the example below, the high numbers in the [64, inf) buckets for the number of regions and the number of zeroed regions indicate high fragmentation for a vDisk. +----------------------------------------------------------------------------------------------------------------+ Curator logs on multiple CVMs report "kResponseTooLong" errors - Key Signature nutanix@CVM:~$ grep kResponseTooLong ~/data/logs/curator.INFO Check the Cassandra row sizes across all CVMs and make sure that the Max Row size (displayed in bytes) is not very high (< 64 MB) nutanix@CVM:~$ allssh 'links --dump http:0:8081/mbean?objectname=org.apache.cassandra.db%3Atype%3DColumnFamilies%2Ckeyspace%3Dmedusa_vdiskblockmap%2Ccolumnfamily%3Dvdiskblockmap | grep RowSize'
WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit and KB-1071 https://portal.nutanix.com/kb/1071. Solution This issue was tracked under ENG-526808 https://jira.nutanix.com/browse/ENG-526808 and is fixed in AOS 6.5.3. An AOS upgrade to the fixed version or later is recommended to resolve this issue. If an upgrade is not possible, follow the workaround below to apply gflags and provide short-term relief for this issue. Workaround To provide immediate relief for these scan failures, the following Curator gflag needs to be applied: --medusa_large_message_limit_bytes=134217728 This increases the medusa_large_message_limit_bytes value from the default 64 MB to 128 MB. Workaround step-by-step: Check the current Gflag value on all CVMs: nutanix@CVM:~$ allssh 'links -dump http:0:2010/h/gflags | grep medusa_large_message_limit_bytes' Follow KB 1071 https://portal.nutanix.com/kb/1071 and update this Curator gflag to the aforementioned value, and make it so that it is not persistent across AOS upgrades. Note: Ensure that the customer upgrades to the fix version as soon as it is available. 
Example: nutanix@CVM:~$ ~/serviceability/bin/edit-aos-gflags --service=curator After modifying the gflag value, proceed to restart the Curator service on all CVMs: nutanix@CVM:~$ allssh genesis stop curator; cluster start Check that new Gflags values have been applied: nutanix@CVM:~$ allssh 'links -dump http:0:2010/h/gflags | grep medusa_large_message_limit_bytes' Start a Curator Full scan in the cluster: nutanix@CVM:~$ curl http://$(curator_cli get_master_location | grep Using | awk '{print $4}')/master/api/client/StartCuratorTasks?task_type=2 With these new gflags, confirm that scans are completing successfully on the cluster. Note: If Curator scans continue to fail even with this updated gflag, reach out to a Senior SRE or a Support Tech Lead (STL) for further assistance.
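The gflag value 134217728 is exactly 128 MB; the sketch below parses a fabricated line in the style of the 2010 gflags page dump and confirms the arithmetic (the real check is the allssh/links command shown above):

```shell
# Fabricated gflag line in the style of the 2010 gflags page dump.
line='--medusa_large_message_limit_bytes=134217728'
value=${line#*=}
mb=$((value / 1024 / 1024))
echo "limit: ${mb} MB"     # 134217728 bytes = 128 MB
```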
KB8291
Unmount NGT stuck tasks - Safely Deleting Hung or Queued Tasks
Unmounting NGT may fail, resulting in a queued NGT task, Safely Deleting Hung or Queued Tasks
It is possible that ejecting the NGT ISO from a guest VM will fail, resulting in a queued task that will never complete. nutanix@cvm$ ecli task.list include_completed=false
Attempt to identify the root cause of the hung unmount NGT tasks PRIOR TO deleting them. Collect a full log bundle from the task create time "create_time_usecs", which can be found in "ecli task.get <task-uuid>". RCA will not be possible if the logs have rolled over or are unavailable from the initial time that the tasks hung. Review the Ergon service and associated logs to identify any issues around the time of failure for the hung task. If required, consult with a Sr. SRE or open up a TH for additional assistance with RCA, provided that logs are available from the time the task initially got stuck. Link this KB to your case.
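The create_time_usecs field reported by "ecli task.get" is microseconds since the epoch; converting it to a readable UTC timestamp helps scope which log bundle to collect. A minimal sketch with a fabricated value, assuming GNU date:

```shell
# Convert ecli's create_time_usecs (microseconds since epoch) to UTC time.
create_time_usecs=1666666666000000        # fabricated example value
epoch_s=$((create_time_usecs / 1000000))
ts=$(date -u -d "@${epoch_s}" '+%Y-%m-%d %H:%M:%S')   # GNU date syntax
echo "task created at: $ts UTC"
```

Collect logs covering this timestamp; if they have already rolled over, RCA will not be possible.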
KB11688
Re-enabling bridge_chain on AHV when Flow Network Security or IPFIX is in use may require additional actions
When re-enabling bridge_chain on AHV after it has previously been disabled whilst either Flow Network Security or IPFIX features were in use, a service restart may be required to refresh commit rules in dmx.
Re-enabling bridge_chain on an AHV cluster may display the following message: nutanix@cvm$ manage_ovs enable_bridge_chain This is because when re-enabling bridge_chain on AHV after it has previously been disabled whilst either Flow Network Security (FNS) or IPFIX features were in use, a service restart may be required to refresh commit rules in dmx. If the cluster uses IPFIX, then a restart of the 'acropolis' service is required. If the cluster is using Flow Network Security (Microseg), then: AOS >= 6.0: A restart of the 'microsegmentation' service is requiredAOS < 6.0: A restart of the 'acropolis' service is required.
Review the cluster's health and ensure resiliency is good. Follow the AHV Administration Guide / Verifying the Cluster Health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-cluster-health-verify-t.html before restarting any services. If there are any issues with the cluster health, first solve those before continuing. If you are unable to solve the issues, then do not restart any services and engage Nutanix Support https://portal.nutanix.com/ for further assistance. When re-enabling bridge chaining on AHV with IPFIX enabled: Contact Nutanix Support https://portal.nutanix.com/ for further assistance with restarting the 'acropolis' AOS service. When re-enabling bridge chaining on AHV with Flow Network Security enabled on AOS >= 6.0: Restart the 'microsegmentation' service from any Controller VM (CVM) in the cluster as the nutanix user as follows: allssh genesis stop microsegmentation; cluster start When re-enabling bridge chaining on AHV with Flow Network Security enabled on AOS < 6.0: Contact Nutanix Support https://portal.nutanix.com/ for further assistance with restarting the 'acropolis' AOS service. Note: Restarting the Acropolis service on AHV clusters should be done with caution as it may lead to an inability to manage user VMs.
KB11729
Powered-off VMs on AHV 20170830.x may disappear from the VM list in Prism
Powered-off VMs on AHV may disappear from the VM list in Prism
A VM may disappear from the VM list in the Prism GUI after shutting it down. The VM won't be listed in ncli and acli as well: nutanix@cvm:~$ ncli vm ls name=<VM_name> The problem is seen on clusters running AHV 20170830.x and OVS versions older than 2.8.x.To confirm that you are hitting this issue, run the following commands from any Controller VM in the cluster to check the running AHV and OVS versions: nutanix@cvm:~$ hostssh "cat /etc/nutanix-release" To confirm that the VM is still part of the cluster, run the following command from any Controller VM in the cluster: nutanix@cvm:~$ acli vm.get <VM_name or VM_UUID>
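To pick out which hosts run the affected 20170830.x AHV family, the /etc/nutanix-release strings gathered by the hostssh command above can be pattern-matched. A minimal sketch over fabricated release strings (real values come from the hosts):

```shell
# Fabricated '/etc/nutanix-release' values for two hypothetical hosts.
affected=0
for ver in el6.nutanix.20170830.453 el7.nutanix.20201105.1234; do
  case "$ver" in
    *20170830*) affected=$((affected + 1)); echo "$ver: affected AHV family" ;;
    *)          echo "$ver: not affected" ;;
  esac
done
echo "hosts on affected family: $affected"
```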
If the VM is present, run the following command to power on the VM: nutanix@cvm:~$ acli vm.on <VM_name or VM_UUID> Perform an LCM https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Guide-v2_4:Life-Cycle-Manager-Guide-v2_4 inventory and upgrade to the latest LTS/STS AOS and AHV release to prevent this issue from occurring. If you are encountering this issue with newer versions of AHV and OVS, please engage Nutanix Support at https://portal.nutanix.com/
KB12786
Prism Central AD authentication fails due to unexpected security policy change on the AD server
It is possible that secure sites using ID Based Security have modified LDAP to work over port 389 and to use the simple bind authentication method to fix FEAT-13069. This requires an exception in any STiG that might be applied, and might not be allowed in default Active Directory configurations. It is likely any IT admin might notice this at a later date, and "fix" it by disabling port 389 and simple bind authentication, causing AD authentication to suddenly stop working. The steps in this KB should help diagnose any situation in which port 389 is needed for LDAP instead of the usually recommended ldaps port 636.
Because ID Based Security requires LDAP to use port 389 (see FEAT-13069), and because we only use simple bind for LDAP authentication, users must modify the security policies in Active Directory to allow ID Based Security to work correctly. AD authentication will suddenly stop working if a new STiG was applied and this exception was not added to the STiG, or if a well-meaning IT admin saw it as a vulnerability and "fixed" it by reverting the policy. Setting the LDAP URI to ldaps://x.x.x.x might fix authentication, but it will break the ID Based Security functionality described in FEAT-13069.
Log examples are not available. One way to investigate is to run Wireshark on the AD controller (configure LDAP in Prism to point only to the IP of the AD controller), then capture all traffic to Prism Central's VIP. In the Wireshark filter, you can type "ip.addr==x.x.x.x", where "x.x.x.x" is the VIP for the Prism Central cluster. In the packet trace, you will see the simple bind request coming from Prism Central, and a response from Active Directory which says, "The server requires binds to turn on integrity checking if SSL\TLS are not already active on the connection." Another short response might simply be "strongAuthRequired". Example bind request: Example AD server response: Until FEAT-13069 can be fixed, it is necessary to disable secure LDAP in the URL, which is likely how it was originally configured. Now that it is broken, the security policies have been tightened in AD. You can temporarily get LDAP authentication working by changing ldap:// to ldaps:// as follows: Go to the "gear" menu. Look for "Authentication". Edit the existing directory entry. The user should already be familiar with this, having had to change it to port 389 from the defaults. Try "ldaps://" in the "Directory URL". As noted above, secure LDAP could break something like ID Based Security if it was previously set to non-secure LDAP. Usually it is disabled for a reason, and something changed. The change that requires port 389 was made in the AD server's security policies. Retrace what has been done in Active Directory. If a STiG was applied, go over the settings and ensure the STiGs were applied in the correct order. In one case, the customer found the issue was that a STiG designed to relax the policies was not applied in the correct order.
KB12046
Cluster admin AD users are not allowed to pause/resume synchronous replications
This article discusses an issue wherein AD users who are Cluster Admins are not allowed to pause/resume synchronous replications.
Cluster admin AD users are not allowed to pause/resume synchronous replications. The options "Pause Synchronous Replication" and "Resume Synchronous Replication" are not shown for such users. This option (VM Summary > More > Pause Synchronous Replication) is available for User Admin.
Ideally, the Pause/Resume Synchronous Replication option should be available for the Cluster Admin role as well. This is tracked under ENG-380946 and is fixed in PC.2021.9 (QA verified).
KB9227
Stuck Aplos tasks "create_vm_snapshot_intentful" & "delete_vm_snapshot_intentful"
Stuck Aplos tasks "create_vm_snapshot_intentful" & "delete_vm_snapshot_intentful" goes orphaned due to intent_specs getting deleted
NOTE: For both scenarios, before taking any action, ensure to run diag.get_specs and check for matching specs for every create/delete vm_snapshot_intentful task. Scenario 1: Stuck Aplos tasks "create_vm_snapshot_intentful" & "delete_vm_snapshot_intentful" go orphaned due to intent_specs getting deleted. This issue happens due to a bug where Aplos does not mark tasks as failed but deletes their intent specs when these tasks encounter exceptions due to Ergon/IDF not being in the correct state. To identify such stuck/orphaned tasks in queued/running state with "intent_specs" deleted, run the following commands: 1. Identify the stuck tasks: nutanix@cvm:~$ ecli task.list include_completed=0 limit=10000 Example output: nutanix@cvm:~$ ecli task.list include_completed=0 limit=10000 2. Check that the intent_specs are missing: nutanix@cvm:~$ nuclei Example output: nutanix@cvm:~$ nuclei If "No matching specs found" is observed, then this is the issue where the Aplos task has encountered an exception and its "intent_spec" has been deleted. 
Scenario 2: Stuck tasks due to multiple concurrent requests coming to Aplos for the same snapshot. The expectation is to receive a single request for each snapshot UUID identifier, but it was observed that some 3rd party backup software might send more than one request for the same snapshot UUID. If Aplos gets multiple concurrent requests for the same snapshot in a short timeframe, it could result in invalid intent spec transitions, for example: Change from kRunning back to kPending (in ~/data/logs/aplos.out): nutanix@cvm:~$ grep -i 'changing state from kRunning to kPending' ~/data/logs/aplos.out Example: aplos.out:2021-11-17 10:38:16,877Z INFO intent_spec.py:151 [8e2ab0b9-7d35-45ed-b430-203de0dc67cd] For intent spec with UUID f513a7f3-e59c-5767-a081-ece2b0a1088e, changing state from kRunning to kPending Unable to transition from kPending to kRunning (in ~/data/logs/aplos_engine.out): nutanix@cvm:~$ grep -i 'Spec State change: kPending -> kRunning' ~/data/logs/aplos_engine.out Example: aplos_engine.out:2021-11-17 09:06:55,928Z WARNING intent_spec.py:702 <50285838> [2ba726f8-2fa1-4e89-9465-bef92456063c] Spec State change: kPending -> kRunning is not allowed when task uuids are diferent. Such incorrect transitions may result in stuck tasks. In this scenario, some of the tasks might have their intent_specs populated. Scenario 3: Multiple stuck tasks for create_vm_snapshot_intentful and delete_vm_snapshot_intentful are seen in kQueued state, and all of these tasks have intent specs. We also do not see any spec change in the logs, unlike scenario 2. The Entity ID from the task lines up with the message "Invoked reap_orphans" in ~/data/logs/aplos_engine.out and is also seen in the list of potentially orphaned specs. 1. Get the entity id from the task: nutanix@cvm:~$ ecli task.get <Task UUID> Example: nutanix@cvm:~$ ecli task.get 0eda3be5-b439-4841-b2e4-575febabecf3 2. Compare this to the "Invoked reap_orphans" message in ~/data/logs/aplos_engine.out; it will match. 
It will also be seen in the list of potentially orphaned specs. Enable debug flags for aplos without restarting the service (KB-15230 https://portal.nutanix.com/kb/15230) to check for the task being reported in "potentially orphaned specs". 2022-01-28 16:15:51,607Z INFO intent_spec_watcher.py:168 intent spec watch is triggered for spec with uuid 8d697bf4-87f1-45ce-8c48-bba4bdad543c
NOTE: These scripts can't be used for any other type of stuck task that's missing its intent_spec. DO NOT abort a task that has intent_specs unless Engineering approves it. You can attempt an RCA of the stuck tasks PRIOR TO deleting them by: Collecting a full log bundle from the task create time "create_time_usecs", which can be found in "ecli task.get <task-uuid>". RCA will not be possible if the logs have rolled over or are unavailable from the initial time that the tasks hung. Review the Ergon service and associated logs to identify any issues around the time of failure for the hung tasks. If required, consult with a Sr. SRE or open up a TH for additional assistance with RCA, provided that logs are available from the time the task initially got stuck. Scenario 1: None of the "create_vm_snapshot_intentful" or "delete_vm_snapshot_intentful" tasks have intent_specs populated. According to Engineering, we cannot abort orphaned tasks as their "intent_specs" would not exist. In such cases, the only solution is to delete those tasks after verifying the cause of failure. As part of TH-3899 and TH-7519, we confirmed we can delete running/queued create_vm_snapshot_intentful and delete_vm_snapshot_intentful tasks using the following scripts: 1. Navigate to ~/bin on any CVM: nutanix@CVM:~$ cd ~/bin 2. Download the correct script depending on what tasks are running/queued and add execution permissions to it: a. All create_vm_snapshot_intentful tasks in the running and queued state: nutanix@CVM:~/bin$ wget -O marktaskfailed_cintentful.sh https://download.nutanix.com/kbattachments/9227/marktaskfailed_cintentful.sh b. All delete_vm_snapshot_intentful tasks in the queued state: nutanix@CVM:~/bin$ wget -O marktaskfailed_dintentful.sh https://download.nutanix.com/kbattachments/9227/marktaskfailed_dintentful.sh NOTE: 5.10.X doesn't support the same syntax to list the kQueued and kRunning tasks as in the script provided. 
You will need to grep the kQueued and kRunning entries from the task list. If unsure how to do this, consult with a Staff SRE or DevEx. Scenario 2: For create_vm_snapshot_intentful or delete_vm_snapshot_intentful tasks with missing intent_specs: Abort the stuck tasks with missing intent_specs using the steps from Scenario 1. If one or more of "create_vm_snapshot_intentful" or "delete_vm_snapshot_intentful" have intent_specs populated: Engage an STL via Tech-Help so Engineering can review whether it is possible and safe to cancel such a task. Scenario 3: Issue reported in ENG-450148 https://jira.nutanix.com/browse/ENG-450148, which is resolved as of AOS >= 6.6. Workaround for AOS < 6.6: Restarting Aplos/Aplos_engine on the leader will get all the kQueued tasks to cycle through and complete. nutanix@cvm:~$ genesis stop aplos aplos_engine ; cluster start
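On AOS 5.10.x, where the script's task-listing syntax is unsupported, the stuck kQueued/kRunning tasks can be filtered out of a plain task list dump. Below is a minimal, illustrative sketch: the task-list format, UUIDs, and column layout are fabricated assumptions (not actual `ecli task.list` output), so adapt the grep/awk fields to what your AOS version actually prints.

```shell
#!/bin/sh
# Illustrative only: filter kQueued/kRunning snapshot-intentful tasks from a
# saved task list dump. The sample below is fabricated, NOT real ecli output.
cat > /tmp/task_list.txt <<'EOF'
8d697bf4-87f1-45ce-8c48-bba4bdad543c  Aplos  create_vm_snapshot_intentful  kQueued
1b2c3d4e-0000-1111-2222-333344445555  Aplos  delete_vm_snapshot_intentful  kRunning
9f8e7d6c-0000-1111-2222-333344445555  Aplos  create_vm_snapshot_intentful  kSucceeded
EOF
# Keep only the stuck snapshot-intentful tasks and print their UUIDs
grep -E 'create_vm_snapshot_intentful|delete_vm_snapshot_intentful' /tmp/task_list.txt \
  | grep -E 'kQueued|kRunning' \
  | awk '{print $1}' > /tmp/stuck_tasks.txt
cat /tmp/stuck_tasks.txt
```

The resulting UUID list can then be reviewed (and RCA'd) before any deletion is attempted.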
}
KB1941
HW: Disk Debugging Guide
Internal KB - This article gives guidance on how to debug disk related issues
WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit and KB-1071 https://portal.nutanix.com/kb/1071. General Debugging Tips When debugging disk issues, do your best to read through the logs. Logs can be cryptic, but spending a little time reading through them will save a lot of pain later, and will often point you in the right direction even when they do not contain obviously relevant information. If you do not understand what a log message means, do not be afraid to ask. Spend time trying to understand and sift through dmesg and /var/log/messages (newer versions of AOS store the messages log in /home/log/messages). This is re-iterated throughout the KB because these two sources contain a lot of information. Below are common errors with appropriate debugging steps:
"Could not select metadata disk" (seen during a cluster start) What does this error message mean? If Genesis is starting services and detects that no metadata disk has been chosen on the local node, Genesis will try to pick a metadata disk from the disks that are mounted in /home/nutanix/data/stargate-storage/disks. Thus, the error message means that Genesis was unable to select a disk from the set of disks mounted at /home/nutanix/data/stargate-storage/disks. Reasons why this error occurs and how to debug: A) No disks mounted at /home/nutanix/data/stargate-storage/disks. Disks are mounted by the script /usr/local/nutanix/bootstrap/bin/mount_disks when the node boots. This script logs to /usr/local/nutanix/bootstrap/log/mount.log. None of the disks on the system have partitions, or none of the partitions are formatted. There are several ways to debug this: use fdisk -l to list the disks and their partition tables, or run ls /dev/sd* to list the disks and their partitions. In addition, you can run blkid against each partition to determine whether a filesystem is present. If your system fits this description, then the node might not have been properly prepared through the factory process. To work around this, you can prepare the disks manually: Run: sudo /home/nutanix/cluster/bin/repartition_disks -d <list of disks that need to be partitioned> Run: sudo /home/nutanix/cluster/bin/clean_disks -p <list of partitions created by step 1 to format> Run: sudo /home/nutanix/cluster/bin/mount_disks The mount_disks script was never run. This scenario is not as likely, but if you suspect that it occurred, you can check the console log for the CVM. This log file is present in the /tmp directory on KVM and in the folder of the CVM on ESXi. You can also check /usr/local/nutanix/start.d/rc.local to verify that the script is referenced, and /etc/rc.local, which should point to the former. 
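The blkid check above can be scripted. The sketch below is illustrative only: it runs against a saved blkid dump (the device names and UUIDs are sample data) rather than live devices, and flags partitions that report no filesystem TYPE and therefore still need formatting.

```shell
#!/bin/sh
# Illustrative: from a saved "blkid" dump, list partitions that report no
# filesystem TYPE= and therefore still need clean_disks/formatting.
# Device names and UUIDs below are sample data, not from a real node.
cat > /tmp/blkid.out <<'EOF'
/dev/sda1: UUID="11111111-aaaa-bbbb-cccc-000000000001" TYPE="ext4"
/dev/sda4: UUID="11111111-aaaa-bbbb-cccc-000000000004" TYPE="ext4"
/dev/sdb1: PARTUUID="22222222-aaaa-bbbb-cccc-000000000001"
EOF
# Any line without TYPE= has no recognized filesystem on that partition
awk -F: '!/TYPE=/ {print $1}' /tmp/blkid.out > /tmp/unformatted.txt
cat /tmp/unformatted.txt
```

On a live CVM you would capture `sudo blkid > /tmp/blkid.out` first and then run the same awk pass.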
B) Genesis could not find any suitable disks for hosting metadata mounted at /home/nutanix/data/stargate-storage/disks. Check if any SSDs are mounted at /home/nutanix/data/stargate-storage/disks. Note that the type of SSD depends on the model type of the node, but common SSDs you will want to look for are Intel PCI SSDs (will usually appear as /dev/mapper/dm0p1) or some form of SATA SSD (will usually appear as /dev/sd<letter sequence>4). Notes about device mappers: When an Intel PCI SSD is attached to a node, it is exposed through 200 GB block devices that appear in /dev. We create a device mapper so that we can bundle the block devices together into one logical block device and stripe writes across all of them. To debug issues with the Intel PCI SSD, you can use the isdct utility located under /home/nutanix/cluster/lib/intel_ssd (needs to be run with sudo). If everything looks okay with the SSD, but there is no device mapper present, no partitions present, or the partitions are not formatted, run: sudo /home/nutanix/cluster/bin/initialize_device_mapper The disks are present in the tombstone list. Here you will want to run zeus_config_printer and check for disks in the disk_tombstone_list, which contains disk serial numbers. You can get the serial numbers of disks by running udevadm info -q all -n <dev node> | grep SERIAL. If you find local disks in the disk_tombstone_list, you can remove them by running edit-zeus --editor <your favorite text editor>. Alternatively, the same activity can be done via ncli. The command in question is a hidden command and can be accessed by dropping into an ncli shell with -h=true: "ncli -h=true" Once you've entered the shell, you'll need to identify the tombstoned disks and get their serial-number(s). 
After the serial number(s) have been found, list and remove the tombstone entry:
ncli disk list-tombstone-entries
ncli disk remove-tombstone-entry serial-number=<3XAMPL3>
Note that this case usually occurs if you remove a node and then add it back. Debugging whether a disk has gone bad: For all SCSI/SAS disks, sudo smartctl -a <dev node> will provide a good amount of information about a drive and will give an indication of whether it has failed. lsscsi and lspci are also useful for checking whether the drives are present on the node. Intel PCI SSDs: As noted above, the utility for this drive is called isdct and is located under /home/nutanix/cluster/lib/intel_ssd (needs to be run with sudo). If all else fails, check dmesg or the messages log for errors about the drive. Again, dmesg and messages are your friends.
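When reading `smartctl -a` output, the raw values of a few attributes (Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable) are usually the quickest failure indicators. The sketch below is illustrative: it parses a saved smartctl dump whose attribute table is fabricated sample data, and simply surfaces those attributes for review.

```shell
#!/bin/sh
# Illustrative: extract the failure-indicating SMART attributes from a saved
# "smartctl -a" dump. The attribute table below is fabricated sample data.
cat > /tmp/smart.out <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       24
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
EOF
# Print "attribute raw_value" pairs; non-zero raw values warrant a closer look
awk '/Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {print $2, $NF}' /tmp/smart.out > /tmp/smart_summary.txt
cat /tmp/smart_summary.txt
```

Against a live drive you would capture `sudo smartctl -a /dev/sdX > /tmp/smart.out` first; the awk pass stays the same.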
KB16401
Flow Virtual Networking (FVN) VPN/Network Gateway Troubleshooting
This article provides basic troubleshooting steps to diagnose a Nutanix Flow Network Gateway
As per the Flow Networking Guide https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Flow%20Virtual%20Networking, this is the function of a network gateway: "A network gateway connects two networks together, and can be used in both VLAN and VPC networks on AHV. In other words, you can extend the routing domain of a VLAN network or that of a VPC using a connection between two gateways, one local and one remote. A network gateway pair (local and remote) may host one service, such as VPN, VXLAN, or BGP, that provides connectivity between the local and remote networks." Note that the same 'Gateway' appliance is used in different SDN contexts: across on-prem FVN VPC/VPN/VTEP/BGP deployments, and also via NC2 and Xi more purely as a VPN appliance to extend cloud platform connectivity. Therefore, we might see terminology such as 'Flow Gateway', 'VPN Gateway' and 'Network Gateway' being used. While the naming/terminology can be specific to the deployment type, for the purpose of troubleshooting these are effectively the same thing, deployed from the same image using LCM. One caveat: an NC2/Xi-based VPN Gateway can be deployed at the on-prem side on either ESXi or AHV, since it does not need the VPC/BGP/VTEP integration offered by FVN, whereas when FVN via on-prem PC deploys a Network Gateway for external connectivity, only AHV is supported. See the "Creating a network gateway" section in the Flow Networking Guide http://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Flow%20Virtual%20Networking for a step-by-step procedure for deploying a network gateway. After deployment, the on-prem Prism Central GUI shows the network gateway status under "Network & Security / Connectivity." If the gateway status is down, follow the recommendations in this KB to diagnose the problem.
Verify the communication on the necessary ports for Flow Virtual Networking (FVN) from Prism Central (PC) to AHV nodes and vice versa: See the required port connectivity in the Portal documentation https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Flow%20Virtual%20Networking Note the following ports relevant to the gateway deployment: PC, or, to be specific, the Advanced Network Controller (ANC) running in PC, requires communication to the AHV nodes on port TCP 6653. AHV nodes must be able to reach the ANC on port TCP 6652. AHV nodes must be able to reach the DNS service running on Prism Central on port UDP 53. The ANC must be able to reach the gateway public IP on port TCP 8888. Check the overall health of the Flow Virtual Networking components. See KB-16283 https://portal.nutanix.com/kb/16283 for more details. As per that KB, make sure the AHV nodes are connected to the control plane on Prism Central: hostssh "ovn-appctl connection-status" To get the gateway status information, log in to Prism Central as the nutanix user via SSH. Use the command "nuclei vpn_gateway.list" to get the gateway UUID: nutanix@PCVM:~$ nuclei vpn_gateway.list Type "nuclei vpn_gateway.get <UUID>" to get details about the gateway status. Note the "operational_status" section: nutanix@PCVM:~$ nuclei vpn_gateway.get aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa The following are possible causes for the network gateway status down scenario: PC can't ping the gateway's public IP. In this case, the detail under operational_status is this: Gateway is unreachable. Unable to ping the network gateway For this scenario, follow general networking connectivity steps to diagnose the connectivity issue from PC to the gateway's public IP. Note that two IPs are in the output of the "nuclei vpn_gateway.get" command above. 
One is in the range of 100.64.x.x, which is for internal management purposes, and the other is denoted as "public_ip", the static IP specified during the gateway configuration. Prism Central cannot connect to the REST server on the gateway on port TCP 8888. In this case, we see the error mentioned previously: The REST server on the network gateway is down. Use the netcat command from the PC to the gateway IP to confirm if port 8888 is open: nc -zv X.X.X.204 8888 Log in to the gateway VM (see the procedure in the internal comments) to confirm it is listening on port 8888. Try the same nc command above locally on the gateway VM. There may be other causes that trigger this error. Nutanix Engineering is currently working to improve the information provided by this message. See NET-15115 https://jira.nutanix.com/browse/NET-15115. Note that this is for informational purposes only. Avoid mentioning this to customers. NTP configuration issues at the VPN/Network Gateway appliance level: Service ntp is down If the configured NTP servers are not reachable by the VPN/Network Gateway, the above error message is reported. Connect to the gateway VM directly and use the vyos commands in the internal comments section to check the NTP data and connectivity/reachability status. Between PC 2022.3 and 2023.1.0.1, the VPN/Network Gateway inherited the DNS/NTP config from the Prism Element DNS/NTP configuration. To resolve the above error in these PC versions, ensure the PE DNS/NTP are appropriately configured and verify reachability from the VPN/Network Gateway appliance, which may sit in a different/restricted subnet/VLAN compared to the CVM/AHV hosts. 
Since PC 2023.1.0.1, the VPN/Network Gateway defaults to hardcoded DNS/NTP servers (8.8.8.8 / time.google.com) in an effort to avoid issues caused by purposeful VPN/Network Gateway isolation from the management plane (CVM/AHV hosts, etc.). However, if these external services are also not reachable from the VPN/Network Gateway, the above error message may also be displayed. To fix the issue, follow KB-1071 (working with Gflags; consult a Sr. SRE/Support resource before making gflag edits) and set the "Atlas" service gflag on PC as follows (a single edit on one PCVM will apply to all PCVMs in a scale-out PC configuration): --vpn_use_pe_ntp_dns_servers=true For example: nutanix@pcvm$ ~/serviceability/bin/edit-aos-gflags --service=atlas Find the gflag and change "false" to "true": vpn_use_pe_ntp_dns_servers : Use the NTP/DNS configuration of the PE for the Network Gateway configuration. : bool :: false <-- change "false" to "true" Write/Quit (save): #############################################################Writing the following gflags to zookeeper for atlas:vpn_use_pe_ntp_dns_servers: true#############################################################If these gflags are not desired, please edit them before restarting services.Services may require restart to use updated gflag values. Restart the "Atlas" service on PC via CLI (works for single or scale-out PC; stops and starts Atlas on one PCVM at a time): allssh "genesis stop atlas && cluster start" Verify the gflag is set correctly via PC CLI: allssh "curl -s http://0:2060/h/gflags | grep vpn_use_pe_ntp_dns_servers"... After this, all gateways will be set to use the PC/PE-configured NTP and name servers. As above, validate via the appliance CLI using the vyos commands in the internal section. Nutanix Engineering is aware of this issue and is working on a more robust solution via NET-15450 https://jira.nutanix.com/browse/NET-15450.
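The port checks described earlier in this article (TCP 6652/6653 between the ANC and AHV nodes, TCP 8888 to the gateway) can be wrapped in a small helper when nc is not installed, using bash's built-in /dev/tcp. This is an illustrative sketch; the host and port in the example are placeholders (192.0.2.10 is a reserved documentation address), not a real gateway.

```shell
#!/bin/bash
# Illustrative: TCP reachability probe using bash's /dev/tcp, for environments
# where nc is not available. Returns 0 if the port accepts a connection.
check_port() {
  local host="$1" port="$2"
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}
# Example: probe the gateway REST port (host/port are placeholders)
if check_port 192.0.2.10 8888; then
  echo "8888 open" > /tmp/portcheck.txt
else
  echo "8888 closed/unreachable" > /tmp/portcheck.txt
fi
cat /tmp/portcheck.txt
```

Run the same helper from the PCVM against each required host/port pair and record which probes fail before digging into firewall rules.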
KB13646
How to link accounts in the Nutanix Support Portal
This article describes the process for linking accounts in the Nutanix support portal.
The linked accounts feature provides Nutanix Partners/ASPs access to their customers' portal accounts to open and view cases or manage assets and licenses on their behalf. This feature can also be used on customer accounts, specifically in scenarios where there are multiple subsidiary accounts of the same organization. Any time a link is created, the users under the parent entity will be able to see the child accounts, but the children will not see the parent. A partner account can be linked to various end-customer accounts, but the link cannot be created from a customer account to a partner account. Once the accounts are linked, all of the users in the parent account will be able to access the child account using the "Login As" option.
Creating a Linked Account Upon receiving a request to link accounts, please obtain written approval from the account owners of both accounts and save this in the related case. Please describe the complete parent-to-child relationship in the ticket as well. Parent - Child relationship: Parent Account = Partner/ASP account or End Customer account. Linked Account = End Customer account (Child account). Expiration Date = Based on the latest support contract end date of the End Customer. Entering an expiration date is mandatory. Without an expiration date, users won't be able to access the linked account because the "Login As" option will not appear. Example Screenshot: Accessing Linked Accounts After the link has been created, log in to the support portal under the parent account and select the "Login As" button from the drop-down menu at the top right of the screen. Example: Upon clicking the "Login As" option, the user will get a drop-down listing the linked accounts at the top left of the screen. Example: By selecting "Exit view" from the top right, the user can return to their primary account.
KB10754
Alert - A130355 - VolumeGroupRecoveryPointReplicationFailed
This Nutanix article provides the information required for troubleshooting the alert VolumeGroupRecoveryPointReplicationFailed for your Nutanix cluster.
Alert Overview The VolumeGroupRecoveryPointReplicationFailed alert is generated when the cluster detects any issues that prevent the replication of the Volume Group Recovery Point. Sample Alert Block Serial Number: 16SMXXXXXXXX From NCC 4.6.3 onwards:
Check ID: 130355
Description: Volume Group Recovery Point replication failed
Impact: Volume Group Recovery Point will not be replicated to the recovery location. This may impact the RPO.
Alert ID: A130355
Alert Title: Volume Group Recovery Point Replication Failed
Alert Smart Title: Failed to replicate recovery point created at: {recovery_point_create_time} UTC of the volume group: {volume_group_name} to the recovery location: {availability_zone_physical_name}
Cause #1: Network connectivity issues between the Primary and the Recovery Availability Zone
Resolution #1: Check the network connection between the Primary and the Recovery Availability Zone
Cause #2: Data Protection and Replication service is not working as expected. The service could be down
Resolution #2: Please contact Nutanix Support.
Cause #3: Volume Group migration process is in progress
Resolution #3: Retry the Recovery Point replication operation after the migration is complete
Cause #4: Virtual IP address has not been configured on the remote cluster
Resolution #4: Configure the Virtual IP address and then retry the Recovery Point replication operation
Cause #5: Remote clusters are unhealthy
Resolution #5: For a manually initiated Volume Group Recovery Point replication, retry again. For a scheduled Volume Group Recovery Point replication, ensure all the remote clusters are healthy, then wait for the next scheduled Recovery Point replication
Cause #6: Replication target site may not support Volume Group Recovery Points.
Resolution #6: The AOS version of the target cluster should be upgraded to version 6.1 or higher.
Cause #7: Nutanix DRaaS Remote Availability Zone does not support Volume Groups.
Resolution #7: Remove Volume Group(s) from Categories which are configured in the Protection Policy of this Volume Group or remove Volume Groups from the Protection Policy if protected explicitly.
Troubleshooting and Resolving the Issue 1. Check the network connection between the Primary and the Recovery Availability Zone. Log in to the Primary or Destination Prism Central. Go to Administration -> Availability Zones -> Make sure that the Availability Zone is reachable. Alternatively, log in to the Prism Central console and run the command: nutanix@pcvm$ nuclei remote_connection.health_check_all 2. Log in to Prism Central and check if there are any Recovery Point migration tasks in a running state. 3. Log in to the remote cluster and confirm that the cluster has a Virtual IP configured: In Prism Element -> click on the cluster name (top left-hand corner). Verify that the Cluster Virtual IP Address is configured. 4. Ensure the AOS version on the cluster is 6.1 or higher. 5. Nutanix DRaaS does not support Volume Groups; hence, remove Volume Group(s) from Categories which are configured in the Protection Policy of this Volume Group or remove Volume Groups from the Protection Policy if protected explicitly. 6. Log in to the remote cluster and make sure that the health of the cluster is OK. Run a full NCC: ncc health_checks run_all If there is anything concerning in the NCC results, contact Nutanix Support if you require assistance resolving the cluster health issues. If you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com https://portal.nutanix.com. Collect additional information and attach it to the support case. Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871. Collect a Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691. 
nutanix@cvm$ logbay collect --aggregate=true If the logbay command is not available (NCC versions prior to 3.7.1, AOS 5.6, 5.8), collect the NCC log bundle instead using the following command: nutanix@cvm$ ncc log_collector run_all Attaching Files to the Case Attach the files at the bottom of the support case on the support portal.If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/1294.
KB8675
Cannot plug out the Phoenix (or other) ISO from the IPMI
Sometimes, when one of the users mounts an ISO on the IPMI of the host, it is kept mounted and cannot be unplugged from a different workstation.
Sometimes, when one of the users mounts an ISO on the IPMI of the host, it is kept mounted and cannot be unplugged from a different workstation. This can happen when, for example, the person who mounted the ISO forgot to unmount it and left the office. Their colleagues may then find themselves in a situation where the host keeps rebooting into the ISO image instead of the hypervisor. When logging in to the IPMI and launching the Java console, we can see the following error when clicking on the Plug Out button: Exist an effective Connect from others
To release the ISO device mount, reboot the IPMI unit. To do that, log in to the IPMI interface and go to Maintenance - Unit Reset. The Unit Reset simply reboots the IPMI interface. It does not reboot the host and is an absolutely safe thing to do.
KB16595
Updating vCenter Server TLS Certificate Thumbprint in DKP
Updating vCenter Server TLS Certificate Thumbprint in DKP
When using DKP to deploy Kubernetes clusters in a vSphere environment with self-signed certificates, the TLS thumbprint must be trusted https://docs.d2iq.com/dkp/2.4/vsphere-quick-start#id-(2.4)vSphereQuickStart-CreatetheDKPclusterdeploymentYAML, otherwise the cluster-api vSphere provider won't be able to communicate with the vCenter API. If you have a DKP cluster running and the vCenter appliance is patched or upgraded and the TLS thumbprint changes as a consequence, some controllers won't be able to communicate with the vCenter API, and actions like persistent volume creation/deletion won't be possible. Here is an example of the type of event logged by the vsphere-csi-controller when it is using an outdated TLS thumbprint: {"level":"error","time":"2023-01-04T18:46:10.304026066Z","caller":"service/driver.go:157","msg":"failed to run the driver. Err: +Post \"https://10.0.0.9:443/sdk\": host \"10.0.0.9:443\" thumbprint does not match \"69:3B:BB:FD:BC:F0:83:A3:9D:2D:49:3A:B1:08:07:E8:7E:AC:C8:03\"","TraceId":"eb60d75f-7cd4-44d4-8968-d1587dc280fe","stacktrace":"sigs.k8s.io/vsphere-csi-driver/v2/pkg/csi/service.(*vsphereCSIDriver).Run\n\t/build/pkg/csi/service/driver.go:157\nmain.main\n\t/build/cmd/vsphere-csi/main.go:89\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:225"} Some customers have reached out asking which DKP objects must be updated in order to avoid the aforementioned issues. Below we describe the cluster-api objects where the TLS thumbprint must be updated to avoid disrupting the cluster life-cycle: The first object that refers to the TLS thumbprint is the vspherecluster. The thumbprint can be updated with the following command: kubectl patch vspherecluster <CLUSTER_NAME> --type=merge -p '{"spec": {"thumbprint": "<TLS_THUMBPRINT>"}}' The vspheremachinetemplate objects, both control-plane and worker, refer to the TLS thumbprint, but these are immutable objects. Because of this, there is no reason to patch them. 
The secret vsphere-config-secret in the vmware-system-csi namespace is mounted as a volume and used by the vSphere CSI driver. To patch the secret, use the command below. Remember that the value of csi-vsphere.conf must be base64 encoded. kubectl patch secret vsphere-config-secret -n vmware-system-csi --type='json' -p='[{"op" : "replace" ,"path" : "/data/csi-vsphere.conf" ,"value" : "<BASE64 Encoded>"}]' Lastly, the vsphere-cloud-config configmap in the kube-system namespace must be updated as well. The information in this configmap is used by vsphere-cloud-controller-manager. To update the TLS thumbprint, patch the configmap with the following command: kubectl --kubeconfig <CLUSTER_NAME>-workload.conf patch cm vsphere-cloud-config -n kube-system --type=merge -p '{"data": {"vsphere.conf": "global:\n secretName: cloud-provider-vsphere-credentials\n secretNamespace: kube-system\n thumbprint: <TLS_THUMBPRINT>\nvcenter:\n <vCenter_Address>:\n datacenters:\n - 'dc1'\n secretName: cloud-provider-vsphere-credentials\n secretNamespace: kube-system\n server: '<vCenter_Address>'\n thumbprint: <TLS_THUMBPRINT>\n" }}'
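Two helper steps for the patches above: computing the SHA-1 thumbprint of a certificate with openssl, and base64-encoding csi-vsphere.conf for the secret patch. The sketch below generates a throwaway self-signed certificate so it is self-contained; against a live vCenter you would instead pipe `openssl s_client -connect <vCenter>:443` into the same `openssl x509 -fingerprint` command. The hostname and the csi-vsphere.conf contents here are placeholders.

```shell
#!/bin/sh
# Illustrative: (1) compute a certificate's SHA-1 thumbprint, (2) base64-encode
# a csi-vsphere.conf for the secret patch. The certificate is a throwaway
# self-signed one so the sketch runs anywhere openssl is present.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/vc.key -out /tmp/vc.crt \
  -days 1 -subj "/CN=vcenter.example.local" 2>/dev/null
# Thumbprint in the AA:BB:... form expected by the patches above
openssl x509 -in /tmp/vc.crt -noout -fingerprint -sha1 | tee /tmp/vc.thumbprint
# Base64-encode the CSI config on a single line, as required for secret data
printf '[Global]\nthumbprint = "AA:BB"\n' > /tmp/csi-vsphere.conf
base64 -w0 /tmp/csi-vsphere.conf > /tmp/csi-vsphere.b64
cat /tmp/csi-vsphere.b64
```

The content of /tmp/csi-vsphere.b64 is what goes into the `"value"` field of the `kubectl patch secret` command above.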
KB11930
Steps to analyze and troubleshoot sporadic increases in SSD utilization
The KB should contain the steps to analyze and troubleshoot sporadic increases in SSD utilization caused by VM I/O (heavy writes), especially if those alerts happened in the past and have been resolved since
This internal KB explains the steps to analyze and troubleshoot sporadic increases in SSD utilization caused by VM I/O (heavy writes). Alert description: The NCC check Alert ID A1005 checks the following conditions (check interval 2700 seconds = 45 minutes): the space usage over 90% must be true for at least 10 x 45 minutes, or 5 x 45 minutes for over 95% usage. Whenever disk utilization passes a certain threshold, Curator normally fixes this automatically via jobs like ILM https://portal.nutanix.com/kb/3569 and Disk Balancing https://portal.nutanix.com/kb/2416. However, there may be times when certain VMs hosted on a node have heavier than normal reads or writes, and Curator may not be able to catch up with these jobs before the thresholds for the alert are met. Below are the steps to gather the data from the cluster to determine the cause: 1. Collect a Panacea report for the period from when the alert started to when it resolved to identify which disks triggered the alert. Alternatively, we can use Curator to figure it out: a. Note the Curator Master location: curator_cli get_master_location b. List the outlier disks: nutanix@CVM:~$ links -dump http://<Curator_Master_IP>:2010/master/tierusage | egrep "Disk Id|Outlier" In this example we saw the following outlier disks: | Rack Id |Service VM| Disk Id | Disk | Disk | Disk Usage |Size|Usage|Zone of|Cumulative Usage| c. Using the Disk ID from the outlier disks' output, run the following to check the SSD usage and look for trends: nutanix@CVM:~$ for i in 97 98 ; do echo && echo disk_id: $i; sampling=300; arithmos_cli master_get_time_range_stats entity_type=disk entity_id=$i field_name=storage.usage_bytes start_time_usecs=`date +%s -d "4 days ago"`000000 end_time_usecs=`date +%s`000000 sampling_interval_secs=$sampling | perl -ne 'if(/start_time_usecs: (\d+)000000/) { print "Start time: ",localtime($time=$1)." ($time)\n"; } elsif(/value_list: (.*)/) { print localtime($time)." 
($time): $1\n"; }; $time+='$sampling';'; done >> cvm_ssd_usage.txt 2. Collect the below one-liners to find out what is happening with each VM's bandwidth and IOPS. This collects the last 24 hours (with a sampling rate of 10 minutes - 600 seconds) around the time the alert was seen. All commands below need to be run from one of the CVMs. a. For all VMs: nutanix@CVM:~$ ncli vm list >~/tmp/vms.txt b. For the VMs' bandwidth for the last 24 hours: nutanix@CVM:~$ for vm_id in `ncli vm list | grep "Id" | grep -v "Hypervisor" | sed 's/.*:://'`; do echo "===== $vm_id ====="; sampling=600; arithmos_cli master_get_time_range_stats entity_type=vm field_name=controller_io_bandwidth_kBps entity_id=$vm_id start_time_usecs=`date +%s -d "24 hours ago"`000000 end_time_usecs=`date +%s`000000 sampling_interval_secs=$sampling | perl -ne 'if(/start_time_usecs: (\d+)000000/) { print "Start time: ",localtime($time=$1)." ($time)\n"; } elsif(/value_list: (.*)/) { print localtime($time)." ($time): $1\n"; }; $time+='$sampling';'; echo; done >~/tmp/bandwidth.txt c. To collect the VMs' write IOPS for the last 24 hours: nutanix@CVM:~$ for vm_id in `ncli vm list | grep "Id" | grep -v "Hypervisor" | sed 's/.*:://'`; do echo "===== $vm_id ====="; sampling=600; arithmos_cli master_get_time_range_stats entity_type=vm field_name=controller_num_write_iops entity_id=$vm_id start_time_usecs=`date +%s -d "24 hours ago"`000000 end_time_usecs=`date +%s`000000 sampling_interval_secs=$sampling | perl -ne 'if(/start_time_usecs: (\d+)000000/) { print "Start time: ",localtime($time=$1)." ($time)\n"; } elsif(/value_list: (.*)/) { print localtime($time)." ($time): $1\n"; }; $time+='$sampling';'; echo; done >~/tmp/write_iops.txt d. 
To collect the VMs' read IOPS for the last 24 hours: nutanix@CVM:~$ for vm_id in `ncli vm list | grep "Id" | grep -v "Hypervisor" | sed 's/.*:://'`; do echo "===== $vm_id ====="; sampling=600; arithmos_cli master_get_time_range_stats entity_type=vm field_name=controller_num_read_iops entity_id=$vm_id start_time_usecs=`date +%s -d "24 hours ago"`000000 end_time_usecs=`date +%s`000000 sampling_interval_secs=$sampling | perl -ne 'if(/start_time_usecs: (\d+)000000/) { print "Start time: ",localtime($time=$1)." ($time)\n"; } elsif(/value_list: (.*)/) { print localtime($time)." ($time): $1\n"; }; $time+='$sampling';'; echo; done >~/tmp/read_iops.txt e. To collect the controller bandwidth for the last 4 days (to see trends): nutanix@CVM:~$ for host_id in $(ncli host list | grep Id | awk -F ":" '{print $4}'); do echo Host_ID: $host_id; sampling=300; arithmos_cli master_get_time_range_stats entity_type=node entity_id=$host_id field_name=controller_write_io_bandwidth_kBps start_time_usecs=`date +%s -d "4 days ago"`000000 end_time_usecs=`date +%s`000000 sampling_interval_secs=$sampling | perl -ne 'if(/start_time_usecs: (\d+)000000/) { print "Start time: ",localtime($time=$1)." ($time)\n"; } elsif(/value_list: (.*)/) { print localtime($time)." ($time): $1\n"; }; $time+='$sampling';'; echo; done >~/tmp/controller_bandwidth.txt The goal of collecting the above data is to figure out which VMs are "hotspots" producing heavier than usual reads and writes on the node where the SSDs are filling up. Heavier than normal reads can cause up-migrations from the HDD tier to the SSD tier in hybrid environments. Heavier than normal writes land on the SSDs directly. By figuring out which VMs have above-normal reads and writes, we can then decide how to move forward. You can either inspect the collected data visually or export it into spreadsheets to create graphs.
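Rather than eyeballing the raw text files, a quick awk pass can surface the peak sample per VM. The sketch below is illustrative: it runs against a fabricated sample in the same "===== vm_id =====" / timestamped-value layout the one-liners above produce, and the VM IDs and values are made up.

```shell
#!/bin/sh
# Illustrative: report the peak sample per VM from the bandwidth/IOPS text
# files collected above. Sample data below is fabricated.
cat > /tmp/bandwidth.txt <<'EOF'
===== 000a-vm1 =====
Start time: Mon Aug 16 10:00:00 2021 (1629108000)
Mon Aug 16 10:00:00 2021 (1629108000): 5234
Mon Aug 16 10:10:00 2021 (1629108600): 91200
===== 000b-vm2 =====
Start time: Mon Aug 16 10:00:00 2021 (1629108000)
Mon Aug 16 10:00:00 2021 (1629108000): 120
EOF
# Track the max value seen between "=====" separators, print "vm_id peak"
awk '/^=====/ {if (vm) print vm, max; vm=$2; max=0; next}
     /\): /   {v=$NF+0; if (v>max) max=v}
     END      {if (vm) print vm, max}' /tmp/bandwidth.txt > /tmp/peaks.txt
cat /tmp/peaks.txt
```

Sorting the result (`sort -k2 -rn /tmp/peaks.txt`) puts the hotspot candidates at the top; the same pass works unchanged on write_iops.txt and read_iops.txt.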
There are multiple options to mitigate the issue. Discuss with the customer the load of these VMs and explain that the load on these SSDs is a result of the heavy read or write pattern.
Decrease the NCC health check frequency from every 1 hour to every 2 hours. This helps avoid the alerts, especially if space usage goes down some time later once Curator takes care of the problem via ILM or Disk Balancing.
Increase the alert threshold. Again, this is useful only for situations where Curator just needs time to catch up. For example, increase the Warning threshold from 90% to 92%.
In cases where multiple VMs are heavy, migrate some of the VMs to another host to split the workload.
If the customer is willing to modify the heavy VMs, they can consider using Load Balanced Volume Groups https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_5:ahv-enabling-load-balancing-vdisks-volume-group-t.html instead of a single vDisk. This way, the Volume Group has multiple vDisks hosted on multiple CVMs, effectively distributing the workload.
If the customer does not want to make any of the modifications above and the issue is SSD utilization, ILM can also be made more aggressive by modifying the gflags below. NOTE: Make sure an STL is consulted before making any of these modifications, as this will increase work for both Curator and Chronos. In certain cases, increasing such workloads may negatively impact the cluster: == Try to set this gflag to 65% (default 75%) == --curator_tier_usage_ilm_threshold_percent=65 == And this to 25 (default 15%): == --curator_tier_free_up_percent_by_ilm=25 Set them via: /home/nutanix/serviceability/bin/edit-aos-gflags --service=curator --all_future_versions Verify that these gflags have the desired effect; on a large cluster the resulting Chronos jobs may run for a long time, but they should still help.
That being said, data will be migrated from SSD to HDD earlier than usual, but if the concern is space on the SSD tier, this can be a good workaround.
KB15780
Nutanix Files - Files crashing due to DSIP unavailable
A Nutanix Files server may experience a crash if the data services IP (DSIP) is not available for an extended period of time. This KB shows an example of investigating such a situation.
Nutanix Files relies heavily on communication with the Nutanix cluster storage over the data services IP (DSIP). If such communication does not succeed within the current limit of 240 seconds, FSVMs may crash. Below is an example of the crash dumps created after an FSVM crash: nutanix@NTNX-A-FSVM:~/data/cores/127.0.0.1-2023-08-28-18:06:43$ ll The vmcore dump files may contain signatures similar to the below: [3577900.015260] WARNING: [FS]: slow zio[6]: pool=zpool-NTNX-as-cbv-nas-01-66d1572d-72fb-49ee-ab0e-1fb9d4189539-fa5d6ad9-3184-4a0e-98ad-4f9056005890 zio=ffff94ac8a555ca0 cur_ts=3577900014321415, zio_[no]wait_ts=3577674219945087, vdev_queue_ts=3577674219945165 delta=225794376250 total(Q+IO)=0 io=0 path=/dev/sdr1, devid=scsi-1NUTANIX_NFS_16_0_888_b5f176c3_b7fc_4836_b344_a6114bab797d-part1 physpath=ip-10.0.254.49:3260-iscsi-iqn.2010-06.com.nutanix:fa5d6ad9-3184-4a0e-98ad-4f9056005890-tgt5-lun-0 meta disk=1 last=3577653801606545 type=2 priority=3 flags=0x184880 stage=0x80000 pipeline=0xb80000 pipeline-trace=0x80001 objset=459 object=0 level=2 blkid=0 offset=6511218688 size=4096 error=0 scsi_state=0 tgt= They may also contain a signature similar to the below, with "Couldn't find the portal" and DSIP "physpath=ip-10.0.254.49:3260": [3577907.720899] ERROR: [FS]: get_current_portal at 0: Disk=/dev/sdbl1 Couldn't find the portal, rc=-107 The FSVM crashes 240 seconds later: [3578165.768177] WARNING: [FS]: Slow txg_wait_synced, pool=zpool-NTNX-as-cbv-nas-01-66d1572d-72fb-49ee-ab0e-1fb9d4189539-02d4b708-c5ac-44af-90eb-4b6240cd084a sync txg=191420
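The DSIP involved can be pulled straight out of the slow-zio signature. The sketch below uses an abbreviated copy of the warning above as sample input; the interpretation of `delta` as nanoseconds is an assumption, but it is consistent with the 240-second limit mentioned above:

```shell
# Abbreviated copy of the slow-zio warning shown above (sample input)
line='WARNING: [FS]: slow zio[6]: delta=225794376250 io=0 path=/dev/sdr1, physpath=ip-10.0.254.49:3260-iscsi-iqn.2010-06.com.nutanix:fa5d6ad9-tgt5-lun-0'

# DSIP the FSVM could not reach
dsip=$(printf '%s\n' "$line" | sed -n 's/.*physpath=ip-\([0-9.]*\):.*/\1/p')

# How long the zio was stuck, assuming delta is in nanoseconds
delta_ns=$(printf '%s\n' "$line" | sed -n 's/.*delta=\([0-9]*\).*/\1/p')
echo "DSIP $dsip unreachable, zio stuck ~$((delta_ns / 1000000000)) s (crash threshold: 240 s)"
```

Running the same sed over a real vmcore excerpt quickly tells you which DSIP to investigate on the cluster side.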
The following analysis shows why DSIP 10.0.254.49 was not available. CVM .14, which was the NFS master and hosting DSIP 10.0.254.49, was experiencing a low-memory condition, visible via alerts in the cluster: nutanix@NTNX-B-CVM:10.0.254.17:~$ ncli alert history duration=90 |grep 'Main memory usage in' -B1 -A4 In the meminfo systats, a drastic drop in CVM memory usage can be seen: #TIMESTAMP 1693270349 : 08/29/2023 12:52:29 AM A few seconds later, services stopped responding and peers reported them as dead, for instance here in cassandra_monitor.INFO: I20230829 01:00:30.392879Z 19824 zeus_health_ops.cc:751] GetAllHealthyOp(967280)[cassandra watch_cbks: 1 watch_id:4]: 10.0.253.9:9161 with peer_id 84 and incarnation id 396 has been found dead As expected, in genesis the HA route was injected: 2023-08-29 01:00:38,538Z INFO 51728080 ha_service.py:902 Stargate on node 10.0.253.9 is down At this point, DSIP 10.0.254.49 should have been re-hosted on the new NFS master, but because CVM .14 was completely hung, the IP could not be released, and the attempt to rehost it on a different CVM failed with an IP address conflict: ID : 10c34cb7-ac58-46c6-b720-689c4e1e51ad We checked MAC 50:6b:8d:fa:dd:e5 - it belonged to CVM .14, which was the NFS master: ================== 10.0.254.14 ================= Only after CVM .14 was rebooted did DSIP 10.0.254.49 move to CVM .16, which became the new NFS master: ================== 10.0.254.16 ================= Once the DSIP was re-hosted by a new CVM, Files started operating as expected.
KB15814
MSP Controller upgrade failing on Scale-Out PCVMs with VLAN enabled CMSP
After upgrading to pc.2023.3 version, MSP Controller upgrade can fail on Scale-Out PCVMs if CMSP was deployed with VLANs instead of default VXLANs
Problem Description: After upgrading to pc.2023.3, the MSP Controller upgrade may fail on Scale-Out PCVMs if CMSP was deployed with VLANs instead of the default VXLANs, due to a race condition between the eth2 NIC removal operation and IAMv2 infra availability. This is a race condition between the IAMv2 infra being ready to process AuthN/AuthZ and msp_controller trying to remove eth2 nics via the v3/vms API to aplos:
PCVMs deployed with VLAN nics instead of the default VXLAN interfaces have extra eth2 nics.
During the upgrade to pc.2023.3, the CMSP upgrade CTRLUPGRADE step attempts to remove these eth2 nics from the PCVMs.
To trigger the nic removal, the msp_controller service sends a v3/vms API call to the PCVM aplos.
This can fail due to aplos failing to process v3 APIs at this point, because the authorization request to themis is failing.
Authorization fails because themis is unable at that point to communicate with the cape PG database, the cape leader pod not having started yet.
Cape and IAMv2 become available a couple of minutes later, but msp_controller has already failed CTRLUPGRADE, and the MSP upgrade fails, leaving the PCVM not operational.
The msp_controller CTRLUPGRADE step should wait for IAMv2 to be operational before doing the eth2 NIC removal, as IAMv2 is necessary to process v3/vms API calls.
Symptoms:
The admin user gets 403 when logging in to the Prism Central GUI.
All registered clusters show in the Prism Element GUI that Prism Central is disconnected.
ncli commands do not work.
Identification: ncli will show the following "500" error on the Prism Central VM: nutanix@pcvm:~$ ncli
HTTP 500 Internal Server Error
/home/apache/ikat_access_logs/prism_proxy_access_log.out logs will have the following "500" messages: /home/apache/ikat_access_logs/prism_proxy_access_log.out:[2023-11-07T19:25:25.465Z] "GET /api/nutanix/v3/vms/f6cf8666-ce00-458a-a95f-af05e1a3b393 HTTP/1.1" 500 - 0 195 5155 5155 "159.144.50.86" "Go-http-client/1.1" "7aea52b5-f27d-434a-ba9e-6e14cbf8e986" "xx.yy.zz.vv" "xx.yy.zz.vv:9444"
/home/nutanix/data/logs/mercury.out logs will have the following "403" errors: E20231107 19:25:25.323364Z 46726 request_processor_handle_iamv2_cookie_op.cc:1894] <HandleIAMv2CookieOp: op_id: 819202> Authentication failed with error Request to Aplos failed with response code 403E20231107 19:25:25.044744Z 46727 cookie_utils.cc:256] <HandleIgwCookieOp: op_id: 819285> Could not validate HMAC Computed HMAC: LZ4XOGF/BdFRUZUqRFAY8OBp5XSIcKlfiDn9N+TgSPc= Received HMAC value: fKhl1LIDy8QHq8C7vLeps0Uaji888A+kzaYNzYM7DOY= Base api path: /v3/directory_servicesE20231107 19:25:25.231593Z 46727 request_processor.cc:1335] API request with id: 66352_49_ kInvalidCookieE20231107 19:25:25.285645Z 46726 request_processor_authenticate_op.cc:316] <AuthenticateOp: op_id: 820046> Failed to get header X-Ntnx-Remote-Jwt: Header X-Ntnx-Remote-Jwt occurs 0 timesE20231107 19:25:25.285705Z 46726 request_processor_authenticate_op.cc:316] <AuthenticateOp: op_id: 820046> Failed to get header X-Federated-Iamv2-Id-Token: Header X-Federated-Iamv2-Id-Token occurs 0 timesE20231107 19:25:25.285722Z 46726 request_processor_authenticate_op.cc:316] <AuthenticateOp: op_id: 820046> Failed to get header X-Ntnx-Api-Key: Header X-Ntnx-Api-Key occurs 0 timesE20231107 19:25:25.285740Z 46726 request_processor_authenticate_op.cc:386] <AuthenticateOp: op_id: 820046> No IAM auth headers in the requestE20231107 19:25:25.350976Z 46726 request_processor_authenticate_op.cc:290] <AuthenticateOp: op_id: 820046> Error getting versions response from Aplos while authenticating request. 
Response code FORBIDDENE20231107 19:25:25.351122Z 46727 request_processor_handle_iamv2_cookie_op.cc:1894] <HandleIAMv2CookieOp: op_id: 820045> Authentication failed with error Request to Aplos failed with response code 403 /home/nutanix/data/logs/aplos.out logs will have the following "401" and "AUTHENTICATION_REQUIRED" errors: ERROR auth.py:100 Traceback (most recent call last): File "build/bdist.linux-x86_64/egg/aplos/lib/auth/auth.py", line 90, in session_authenticate File "build/bdist.linux-x86_64/egg/aplos/lib/auth/athena_auth.py", line 116, in authenticate File "build/bdist.linux-x86_64/egg/aplos/lib/auth/athena_auth.py", line 101, in _validate_authentication_requestBasicAuthRequiredError: {'api_version': '3.1','code': 401,'message_list': [{'details': 'Basic realm="Intent Gateway Login Required"', 'message': 'Authentication required.', 'reason': 'AUTHENTICATION_REQUIRED'}],'state': 'ERROR'} /home/nutanix/data/logs/aplos.out logs will have the following "500" and "ENTITY_READ_ERROR" errors:: 2023-11-07 19:25:25,496Z INFO capability_tracker.py:249 Starting capability tracker: aplos_vm_plugin...2023-11-07 19:25:30,599Z ERROR vms_uuid.py:115 Error while fetching VM GET response: kInternalError: Authorization failed: Post "https://iam-proxy.ntnx-base:8445/api/iam/authz/v1/authorize": read tcp xx.yy.zz.vv:33462->xx.yy.zz.vv:8445: read: connection reset by peer2023-11-07 19:25:30,614Z ERROR resource.py:253 Traceback (most recent call last): File "build/bdist.linux-x86_64/egg/aplos/intentgw/v3_pc/api/resource.py", line 251, in dispatch_request File "/usr/local/nutanix/lib/py/Flask_RESTful-0.3.8-py2.7.egg/flask_restful/__init__.py", line 583, in dispatch_request resp = meth(*args, **kwargs) File "build/bdist.linux-x86_64/egg/aplos/lib/access_control/audit_util.py", line 38, in wrapper File "build/bdist.linux-x86_64/egg/aplos/intentgw/v3_pc/api/resource.py", line 106, in wrapper File "build/bdist.linux-x86_64/egg/aplos/intentgw/v3_pc/validators.py", line 186, in 
wrapper File "build/bdist.linux-x86_64/egg/aplos/intentgw/v3_pc/api/resource.py", line 81, in wrapper File "build/bdist.linux-x86_64/egg/aplos/intentgw/v3_pc/validators.py", line 177, in wrapper File "build/bdist.linux-x86_64/egg/aplos/lib/utils/log.py", line 142, in wrapper File "build/bdist.linux-x86_64/egg/aplos/intentgw/v3_pc/api/vms_uuid.py", line 128, in getApiError: {'api_version': '3.1','code': 500,'kind': 'vm','message_list': [{'message': 'vm : f6cf8666-ce00-458a-a95f-af05e1a3b393 could not be read.', 'reason': 'ENTITY_READ_ERROR'}],'state': 'ERROR'} mspctl controller info will have the following failure message: nutanix@pcvm:~$ mspctl controller infoLeader : xx.yy.zz.vvUpgrade State : FailedUpgrade Message : error deleting vm nic: failed to update VM: failed to Get VM: Vm in failed state, retry of put also failed /home/nutanix/data/logs/msp_controller.out logs will have the following "500" and "ENTITY_READ_ERROR" errors for the nic deletion process: 2023-11-07T19:25:25.315Z vm_network.go:128: [INFO] [msp_cluster=a89a-npc-be:CTRLUPGRADE] Deleting nics ([]*types.VmNic) (len=1 cap=1) {2023-11-07T19:25:25.315Z helper.go:418: [DEBUG] [msp_cluster=a89a-npc-be:CTRLUPGRADE] Trying update operation of VM f6cf8666-ce00-458a-a95f-af05e1a3b3932023-11-07T19:25:25.315Z helper.go:558: [DEBUG] [msp_cluster=a89a-npc-be:CTRLUPGRADE] GetVmStatus with retry called for f6cf8666-ce00-458a-a95f-af05e1a3b3932023-11-07T19:25:30.621Z helper.go:678: [ERROR] [msp_cluster=a89a-npc-be:CTRLUPGRADE] Couldn't get specified VM: f6cf8666-ce00-458a-a95f-af05e1a3b393, Error Error code: 500, messages "[{\"message\":\"vm : f6cf8666-ce00-458a-a95f-af05e1a3b393 could not be read.\",\"reason\":\"ENTITY_READ_ERROR\"}]"2023-11-07T19:28:32.025Z helper.go:566: [WARN] [msp_cluster=a89a-npc-be:CTRLUPGRADE] Could not fix error state of VM in %!s(int=5) retries2023-11-07T19:28:32.025Z helper.go:432: [ERROR] [msp_cluster=a89a-npc-be:CTRLUPGRADE] Failed to get status for uuid: 
f6cf8666-ce00-458a-a95f-af05e1a3b393. Error: failed to Get VM: Vm in failed state, retry of2023-11-07T19:28:32.025Z helper.go:512: [ERROR] [msp_cluster=a89a-npc-be:CTRLUPGRADE] failed to update vm f6cf8666-ce00-458a-a95f-af05e1a3b393 : failed to update VM: failed to Get VM: Vm in failed state, retry2023-11-07T19:28:32.025Z helper.go:955: [ERROR] [msp_cluster=a89a-npc-be:CTRLUPGRADE] Error deleting nics: "failed to update VM: failed to Get VM: Vm in failed state, retry of put also failed" You will see that the eth2 nics were not removed: nutanix@pcvm:~$ allssh 'ifconfig eth2'================== xx.yy.zz.vv =================eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet xx.yy.zz.vv netmask 255.255.254.0 broadcast xx.yy.zz.vv inet6 fe80::526b:8dff:fe94:e2b6 prefixlen 64 scopeid 0x20<link> ether 50:6b:8d:94:e2:b6 txqueuelen 1000 (Ethernet) RX packets 100628 bytes 8910128 (8.4 MiB) RX errors 0 dropped 7061 overruns 0 frame 0 TX packets 6012 bytes 502153 (490.3 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0================== xx.yy.zz.vv =================eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet xx.yy.zz.vv netmask 255.255.254.0 broadcast xx.yy.zz.vv inet6 fe80::526b:8dff:fef8:900f prefixlen 64 scopeid 0x20<link> ether 50:6b:8d:f8:90:0f txqueuelen 1000 (Ethernet) RX packets 95836 bytes 8180889 (7.8 MiB) RX errors 0 dropped 6917 overruns 0 frame 0 TX packets 4115 bytes 253938 (247.9 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0================== xx.yy.zz.vv =================eth2: error fetching interface information: Device not found iam-themis logs will show connectivity issues to cape pg database: {"log":"{\"application_name\":\"iam-themis\",\"file_name\":\"/go/src/github.com/nutanix-core/iam-themis/services/server/storage/handler/database_handler.go:29\",\"function_name\":\"github.com/nutanix-core/iam-themis/services/server/storage/handler.dbInit\",\"line_no\":29,\"message\":\"Unable to connect to postgres 
failed to perform migrations: creating migration table: dial tcp xx.yy.zz.vv:5432: connect: connection refused\",\"namespace\":\"ntnx-base\",\"pod_name\":\"iam-themis-7ff9b876-v28d7\",\"port_no\":\"5558\",\"severity\":\"fatal\",\"timestamp\":\"2023-11-07T19:22:43Z\"}\n","stream":"stdout","time":"2023-11-07T19:22:43.631443483Z"}{"log":"{\"application_name\":\"iam-themis\",\"file_name\":\"/go/src/github.com/nutanix-core/iam-themis/services/server/storage/handler/database_handler.go:29\",\"function_name\":\"github.com/nutanix-core/iam-themis/services/server/storage/handler.dbInit\",\"line_no\":29,\"message\":\"Unable to connect to postgres failed to perform migrations: creating migration table: dial tcp xx.yy.zz.vv:5432: connect: connection refused\",\"namespace\":\"ntnx-base\",\"pod_name\":\"iam-themis-7ff9b876-v28d7\",\"port_no\":\"5558\",\"severity\":\"fatal\",\"timestamp\":\"2023-11-07T19:23:36Z\"}\n","stream":"stdout","time":"2023-11-07T19:23:36.520800742Z"}{"log":"{\"application_name\":\"iam-themis\",\"file_name\":\"/go/src/github.com/nutanix-core/iam-themis/services/server/storage/handler/database_handler.go:29\",\"function_name\":\"github.com/nutanix-core/iam-themis/services/server/storage/handler.dbInit\",\"line_no\":29,\"message\":\"Unable to connect to postgres failed to perform migrations: creating migration table: dial tcp xx.yy.zz.vv:5432: connect: connection refused\",\"namespace\":\"ntnx-base\",\"pod_name\":\"iam-themis-7ff9b876-v28d7\",\"port_no\":\"5558\",\"severity\":\"fatal\",\"timestamp\":\"2023-11-07T19:25:03Z\"}\n","stream":"stdout","time":"2023-11-07T19:25:03.524317854Z"} Checking the cape pods will show that the cape leader started and IAMv2 became available after a couple of minutes than msp_controller which is around "19:31:23". 
However, msp_controller tried processing v3/vms API calls to remove the eth2 nic at around "19:25:25", as per the msp_controller.out and aplos.out logs above: nutanix@NTNX-159-144-50-85-A-PCVM:~$ sudo kubectl logs -n ntnx-base cape-hjze-6bcd8ff595-j8pv8 database | head
Tue Nov 7 19:31:23 UTC 2023 INFO: postgres-ha pre-bootstrap starting...
Tue Nov 7 19:31:23 UTC 2023 INFO: pgBackRest auto-config disabled
Tue Nov 7 19:31:23 UTC 2023 INFO: PGHA_PGBACKREST_LOCAL_S3_STORAGE and PGHA_PGBACKREST_INITIALIZE will be ignored if provided
Tue Nov 7 19:31:23 UTC 2023 INFO: Defaults have been set for the following postgres-ha auto-configuration env vars: PGHA_DEFAULT_CONFIG, PGHA_BASE_BOOTSTRAP_CONFIG, PGHA_BASE_PG_CONFIG
Tue Nov 7 19:31:23 UTC 2023 INFO: Defaults have been set for the following postgres-ha env vars: PGHA_PATRONI_PORT
Tue Nov 7 19:31:23 UTC 2023 INFO: Defaults have been set for the following Patroni env vars: PATRONI_NAME, PATRONI_RESTAPI_LISTEN, PATRONI_RESTAPI_CONNECT_ADDRESS, PATRONI_POSTGRESQL_LISTEN, PATRONI_POSTGRESQL_CONNECT_ADDRESS
Tue Nov 7 19:31:23 UTC 2023 INFO: Setting postgres-ha configuration for database user credentials
Tue Nov 7 19:31:23 UTC 2023 INFO: Setting 'pguser' credentials using file system
Tue Nov 7 19:31:23 UTC 2023 INFO: Setting 'superuser' credentials using file system
Tue Nov 7 19:31:23 UTC 2023 INFO: Setting 'replicator' credentials using file system
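The race window is visible directly from the two timestamps in the logs above. As a quick sanity check, GNU date can confirm how far apart the failed nic removal and the cape leader startup were:

```shell
# msp_controller attempted the eth2 nic removal at 19:25:25 (msp_controller.out),
# while the cape leader (and hence IAMv2) only came up at 19:31:23 (cape pod logs)
nic_attempt=$(date -u -d '2023-11-07 19:25:25' +%s)
cape_up=$(date -u -d '2023-11-07 19:31:23' +%s)
echo "IAMv2 became available $((cape_up - nic_attempt)) s after the nic removal failed"
```

The same arithmetic against your own log timestamps confirms whether the nic removal preceded IAMv2 availability, which is the defining condition of this race.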
Workaround: If it is confirmed that the problem was caused by the race condition described above, restarting msp_controller to switch the leader will retrigger CTRLUPGRADE, which should then finish successfully. Find the msp_controller leader: nutanix@pcvm:~$ panacea_cli show_leaders | grep -i msp SSH to the msp_controller leader PCVM IP and restart the service to move the msp_controller leadership to another PCVM: nutanix@pcvm:~$ genesis stop msp_controller ; cluster start
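Before applying the workaround, the failure signature can be matched programmatically. The sketch below parses a hypothetical copy of the `mspctl controller info` fields shown in the identification section (the sample text is an assumption, not live output):

```shell
# Hypothetical copy of the `mspctl controller info` fields shown above
info='Leader : 10.0.0.1
Upgrade State : Failed
Upgrade Message : error deleting vm nic: failed to update VM'

# Extract the upgrade state and match the nic-deletion failure message
state=$(printf '%s\n' "$info" | sed -n 's/^Upgrade State *: *//p')
if [ "$state" = "Failed" ] && printf '%s\n' "$info" | grep -q 'error deleting vm nic'; then
  echo "signature matches - restart msp_controller on the leader to retrigger CTRLUPGRADE"
fi
```

If the state is Failed but the message differs, this KB's workaround may not apply and further investigation is needed.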
KB10544
LCM inventory failing since httpd service failing to start
In some corner cases, the httpd service may be in an error state, causing LCM inventory to be stuck and Prism not to load.
LCM auto inventory fails continuously, and the following is seen in the lcm_op.trace file: 2020-11-12 02:53:37,359 {"leader_ip": "10.162.17.20", "event": "Inventory operation enqueued", "root_uuid": "80bdad30-aa64-49aa-8461-1337831ed92d"} lcm_ops.out on the affected CVMs has the following entry: 2020-11-12 04:56:28 INFO metric_entity.py:1494 (10.162.17.34, inventory, 4b1fa538-719f-4106-b9a8-ac43bd9b5a4f) Exception report: {'error_type': 'LcmStagingError', 'kwargs': {'ip_addr': u'10.162.17.34', 'host_type': 0, 'err_msg': 'Took too long to download from https://10.162.17.29:9440/file_repo/99c3873b-d9a7-4d18-938a-44aff29bfb75', 'step': 'Transfer', 'env': 'host', 'catalog_item': '318cd8fa-ad41-42c6-9f93-f7fc1caabb95'}} Further up in lcm_ops.out on these CVMs, the following can be seen: 2020-11-12 04:56:06 INFO download_utils.py:976 (10.162.17.34, inventory, 4b1fa538-719f-4106-b9a8-ac43bd9b5a4f) Updating file /scratch/tmp/lcm_staging/99c3873b-d9a7-4d18-938a-44aff29bfb75 with size -1 an When running netcat against the problematic node (in this case .29), connections are refused from all CVMs in the cluster: nutanix@NTNX-118KS13-A-CVM:10.162.17.20:~/data/logs$ allssh nc -vz 10.162.17.29 9440 The same command works for a healthy CVM in the cluster: nutanix@NTNX-118KS13-A-CVM:10.162.17.20:~/data/logs$ allssh nc -vz 10.162.17.28 9440 prism.out on the affected CVMs: Nov 11, 2020 2:31:33 PM org.apache.catalina.startup.ClassLoaderFactory validateFile httpd service status on the affected node: nutanix@NTNX-6HFCCP2-A-CVM:10.162.17.30:~/data/logs$ sudo service httpd status Here, httpd is failing to start due to "Invalid argument: AH01185: worker slotmem_create failed": 2020-11-18T11:01:04.569477-07:00 NTNX-6HFCCP2-A-CVM Proxy[21134]: [Wed Nov 18 11:01:04.568826 2020] [proxy_balancer:emerg] [pid 21105:tid 140095854151808] (22)Invalid argument: AH01185: worker slotmem_create failed
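This specific failure mode can be confirmed by grepping captured httpd status or journal output for the AH01185 signature. The sketch below demonstrates the match on a sample line copied from the excerpt above:

```shell
# Sample line copied from the httpd failure shown above
logline='[proxy_balancer:emerg] [pid 21105:tid 140095854151808] (22)Invalid argument: AH01185: worker slotmem_create failed'

# Match the slotmem_create failure signature
if printf '%s\n' "$logline" | grep -q 'AH01185: worker slotmem_create failed'; then
  echo "stale slotmem segments suspected on this CVM"
fi
```

If the signature is absent, the httpd failure likely has a different cause and this KB does not apply.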
To resolve this issue, regenerate the shared memory segments for httpd. For more information about SHM files, see: http://publib.boulder.ibm.com/httpserv/manual24/mod/mod_slotmem_shm.html
NOTE: This workaround needs to be performed on EACH CVM where httpd is failing to start with the error signature.
1) Stop ssl_terminator - otherwise the service will continue to restart httpd: genesis stop ssl_terminator
2) Stop httpd: sudo systemctl stop httpd
3) Remove the .SHM files from the "/etc/httpd/run" directory: rm -rf /etc/httpd/run/slotmem-shm* Example: root@NTNX-6HFCCP2-A-CVM:10.162.17.30:/etc/httpd/run# ls
4) Start the httpd service: sudo systemctl start httpd
5) Start ssl_terminator: cluster start
At this point, httpd should be up, running, and servicing requests. Check the httpd status to confirm its operational state via: sudo service httpd status
KB7554
Critical: Cluster Service: Aplos is down on the Controller VM
Upgrading the LCM framework leads to a restart of the Aplos service. If this alert is raised after the LCM framework upgrade, you can ignore it after confirming Aplos stability.
Upgrading the Life Cycle Manager (LCM) framework through an LCM inventory involves a restart of the Aplos service. This planned service restart is done by LCM to refresh the backend table schema and is expected. You may see one or more alerts within a few minutes of each other after the LCM framework update, likely one alert per Controller VM (CVM) in the cluster. Critical : Cluster Service: aplos is down on the Controller VM x.x.x.x
The above-described symptoms of the Aplos down alert with an LCM update are fixed in NCC 4.2.0. Upgrade NCC to the latest version to avoid these alerts. In case you have further questions, consider engaging Nutanix Support at https://portal.nutanix.com/. However, if your cluster is running an NCC version older than 4.2.0, use the solution below. This issue is primarily seen with LCM auto-upgrade (when auto-inventory is enabled) on clusters running NCC versions older than 4.2.0. These alerts should not appear after updating to the latest NCC version. If you are seeing an Aplos alert after an LCM framework update (which gets initiated by an LCM inventory), you can likely ignore the alert and mark it as resolved after confirming that Aplos is up and stable. See below for details on how to confirm this. If you need help confirming whether an LCM framework update was involved, use the following command to search the LCM logs (/home/nutanix/data/logs/lcm_op.trace) for lines mentioning an LCM framework update. In the example below, there were framework updates at around 10:01 am on 8/19/2020 and 4:54 pm on 9/7/2020. Correlate these times with the times seen in the Prism Alerts page to help determine whether this may be the cause. There may be some delay (~20 minutes) between the LCM update occurring and the alert being raised. nutanix@cvm$ allssh 'grep "Updating LCM framework" /home/nutanix/data/logs/lcm_op.trace' To check the current status of cluster services and confirm that Aplos was properly restarted after the LCM upgrade completed, use the cluster status command. In the following example, the output of the cluster status command is filtered to highlight services that are not UP. Since no services appear as DOWN in the output below, all services are currently up.
nutanix@cvm$ cluster status | grep -v UP You can further check that Aplos is not in a crashing state on a particular CVM by running the following command and ensuring that the process IDs beside aplos and aplos_engine are not changing frequently. Use Ctrl+C to stop the command. Note that "watch -d genesis status" is not a reliable way to confirm the stability of the "cluster_health" and "xtrim" services, since these services spawn new temporary process IDs as part of their normal functioning. The new temporary process IDs, or the change in process ID count, in the "watch -d genesis status" output may give the impression that "cluster_health" and "xtrim" are crashing when in reality they are not. Rely on the NCC health check report or review the logs of the "cluster_health" and "xtrim" services to ascertain whether they are crashing. nutanix@cvm$ watch -d genesis status Assuming these alerts correspond with an LCM framework update and Aplos is up and stable on all CVMs, you can mark the alerts as resolved in Prism and ignore them. These alerts should only be raised once per CVM; if they continue to reoccur after you resolve them, they are likely caused by a different issue. If you are unsure whether the alerts are caused by this issue, consider engaging Nutanix Support https://portal.nutanix.com/. For reference, troubleshooting steps to check whether you are hitting this issue: Multiple alerts will be seen for most or all CVMs in a cluster indicating the Aplos service is down. For example: Message : Cluster Service: aplos is down on the Controller VM 192.168.1.101. The LCM version (2.2 in the snippet below) and the framework update, as seen in /home/nutanix/data/logs/lcm_op.trace, will show the framework update just prior to the alert. nutanix@cvm$ zkcat /appliance/logical/lcm/config Upon investigating, cluster status confirms that Aplos is UP on all CVMs.
aplos.out will show that it was rotated to a new file at the time of the alert or shortly after. nutanix@cvm$ ls -latr ~/data/logs/aplos.out
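The ~20-minute correlation window described above can be checked with simple epoch arithmetic. The two timestamps below are hypothetical stand-ins for an "Updating LCM framework" line from lcm_op.trace and the alert time from Prism:

```shell
# Hypothetical timestamps: framework update (lcm_op.trace) vs. Aplos-down alert (Prism)
update_ts=$(date -u -d '2020-09-07 16:54:00' +%s)
alert_ts=$(date -u -d '2020-09-07 17:10:00' +%s)

gap_min=$(( (alert_ts - update_ts) / 60 ))
if [ "$gap_min" -ge 0 ] && [ "$gap_min" -le 20 ]; then
  echo "alert raised ${gap_min} min after the framework update - likely benign"
fi
```

An alert falling outside this window (or with no framework update at all) points to a different root cause.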
KB2922
No 10GigE network devices found error when running Phoenix
While running Phoenix (or Foundation), there are some corner cases where it fails with the error "No 10GigE network devices found".
While running Phoenix (or Foundation), there are some corner cases where it fails with the error "No 10GigE network devices found" even though the platform (newer than the NX-1020) does have 10 GbE interfaces. The lspci -nn command reports the 10 GbE interface driver properly loaded.
These errors have been spotted in the field in some cases where customers have connected GBIC transceivers to the 10 GbE interfaces. This can cause ethtool to report 10/100/1000 speeds inaccurately, causing Phoenix / Foundation to fail with the error shown in the description. However, there are officially qualified GBIC transceivers, so do not simply work around this issue without collecting proper data to feed back to Engineering if the transceiver is an officially qualified model. This is especially important for platforms such as the NX-3175, where there are no 1 GbE ports. In cases where the platform has 1 GbE ports, this Phoenix check is designed to ensure that the hardware in the node is healthy, and it has identified several DoA nodes / NICs according to Engineering. In order to proceed with the installation, the customer must physically remove the GBIC transceiver(s) from the node and power cycle it. Afterwards, ethtool should report the proper speeds and work as expected. There have been some instances where, even after physically removing the GBIC transceiver, ethtool still reports 10/100/1000 speeds, so the Phoenix install will not proceed further. If that is the case, and it is verified that the 10 GbE drivers are loaded and the GBIC is removed from the node, then follow the steps below to force Phoenix to continue installing. BEAR IN MIND THAT THIS IS A WORKAROUND THAT SHOULD BE USED AS A LAST RESORT.
Note: If a Foundation VM is being used, minimum_reqs.py can be modified by doing the following: Create a file on the Foundation VM within the path /home/nutanix/foundation named minimum_reqs.py. Add minimum_reqs.py with any necessary changes. Add minimum_reqs.py to phoenix_override.tar.gz using the command 'tar -czvf phoenix_override.tar.gz minimum_reqs.py' run from within /home/nutanix/foundation.
1. From the Phoenix console, edit the minimum_reqs.py script.
2. Find the line 'if "10000baseT" in info or "40000base" in info' under the function CheckNics.
3. Change "10000baseT" to "1000baseT".
4. Save the file and run Phoenix again from the command line (./phoenix). Proceed with the installation as usual.
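The manual edit in the steps above can also be done with sed. The sketch below demonstrates it on a stand-in copy of the relevant line rather than the real file (the exact line content may differ between Phoenix versions):

```shell
# Stand-in for the line inside CheckNics in minimum_reqs.py
echo 'if "10000baseT" in info or "40000base" in info:' > /tmp/minimum_reqs_line.py

# Relax the 10GbE requirement to 1GbE, as described in step 3
sed -i 's/10000baseT/1000baseT/' /tmp/minimum_reqs_line.py
cat /tmp/minimum_reqs_line.py
```

On the actual Phoenix console, the same substitution would target minimum_reqs.py itself; verify the change with grep before rerunning ./phoenix.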
KB14919
Alert - A130168 - VMNotReachable
Investigating VMNotReachable alerts on a Nutanix cluster
This Nutanix article provides the information required for troubleshooting the alert A130168 - VMNotReachable for your Nutanix cluster. Alert Overview The alert A130168 - VMNotReachable is generated when the NGT service on the VM is either not reachable or unstable. Sample Alert Block Serial Number: 16SMXXXXXXXX Output messaging:
Description: NGT on the VM is not reachable.
Cause of failure: The communication link to Nutanix Guest Tools on the VM is not working.
Resolution: Restart the Nutanix Guest Tools service in the VM.
Impact: Crash-consistent snapshot will be taken instead of application-consistent snapshot, and static IP address preservation/mapping will fail.
Alert ID: A130168
Alert Title: NGT on VM is not reachable
Alert Message: The communication link to Nutanix Guest Tools on the VM {vm_name} is not working.
Troubleshooting the Issue: Nutanix Guest Tools (NGT) is a software bundle installed inside User Virtual Machines (UVMs) to enable advanced VM management functionality via the Nutanix platform. This alert is generated when the NGT service running on the user VM is unreachable, paused, or unstable. Checking the Nutanix Guest Tools service status will show the communication link as inactive, as shown below. Refer to KB 13784 https://portal.nutanix.com/kb/13784 for details on checking NGT CVM connectivity. nutanix@cvm$ ncli ngt list vm-names=<VM_name> Resolving the issue: To resolve an inactive communication link, refer to KB 3868 https://portal.nutanix.com/kb/000003868. If the steps in the above KB do not resolve this issue, refer to the Nutanix Guest Tools Troubleshooting Guide https://portal.nutanix.com/kb/000003741 to troubleshoot scenario-specific issues. If you need assistance, or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com. Collect additional information and attach it to the support case. Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 http://portal.nutanix.com/kb/2871. Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 http://portal.nutanix.com/kb/2871. Collect a Logbay bundle using the following command. For more information on Logbay, see KB 6691 http://portal.nutanix.com/kb/6691. nutanix@cvm$ logbay collect --aggregate=true Attaching Files to the Case To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
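The field to look for in the `ncli ngt list` output is the communication link state. The sketch below parses a hypothetical captured copy of that output (the sample field names and values are assumptions modeled on the command's usual format):

```shell
# Hypothetical capture of `ncli ngt list vm-names=<VM_name>`
ngt_out='    VM Name                   : testvm
    NGT Enabled               : true
    Communication Link Active : false'

# Extract the communication link state
link=$(printf '%s\n' "$ngt_out" | sed -n 's/^ *Communication Link Active *: *//p')
if [ "$link" = "false" ]; then
  echo "NGT communication link inactive - follow KB 3868"
fi
```

Looping this over the VMs listed in the alert quickly separates VMs with a genuinely inactive link from transient reporting noise.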
KB16231
How to manually verify an account in RAFT.
This article describes how to verify an account manually in RAFT
This article describes how to verify a customer account manually. This may be needed when the customer did not receive a verification email or when the one they received has expired.
Log on to RAFT http://raft.nutanix.com and, under the "Manage" drop-down, select the "Manage users" option. Search using the email ID of the user facing the issue, and in the "Actions" section you will see an option to verify the user account. Click "Verify" and confirm; the account will be verified. Inform the customer that the account has been verified and that they can try logging in. No other verification is needed from their end.
KB6306
NCC Health Check: protection_rule_max_entities_per_category_check / protection_rule_max_vms_per_category_check
Raises an alert when the number of entities in a category that is associated with a protection policy is greater than the limit for the paired configuration.
NOTE: From NCC 4.3.0 onwards, protection_rule_max_vms_per_category_check has been renamed to protection_rule_max_entities_per_category_check. The NCC check protection_rule_max_entities_per_category_check / protection_rule_max_vms_per_category_check checks if the number of VMs associated with a category linked to a Protection Policy exceeds the maximum number of entities that can be recovered by a recovery plan. This check is executed from the PC paired with an Availability Zone. The check can be run as part of the complete NCC check by running ncc health_checks run_all or individually as nutanix@cvm:~$ ncc health_checks draas_checks protection_policy_checks protection_rule_max_vms_per_category_check From NCC 4.3.0 and above, use the following command for the individual check: nutanix@cvm:~$ ncc health_checks draas_checks protection_policy_checks protection_rule_max_entities_per_category_check This check is scheduled to run every hour and raises an alert if the condition is not met. Sample Output Check Status: PASS Running : health_checks draas_checks protection_policy_checks protection_rule_max_vms_per_category_check From NCC 4.3.0 and above Running : health_checks draas_checks protection_policy_checks protection_rule_max_entities_per_category_check Check Status: FAIL The check returns a FAIL if the number of entities associated with a category linked to a Protection Policy exceeds 200 entities. 
( the below check was run from a setup that was AHV to AHV) Detailed information for protection_rule_max_vms_per_category_check: From NCC 4.3.0 and above Detailed information for protection_rule_max_entities_per_category_check: Output messaging From NCC 4.3.0 and above [ { "110402": "Checks if the VM count for a category specified in Protection Policy exceeds the maximum allowed limit.", "Check ID": "Description" }, { "110402": "Number of VMs for the specified categories in the Protection Policy exceeds the limit.", "Check ID": "Causes of failure" }, { "110402": "Reduce the protected VM count for the specified categories in the Protection Policy.", "Check ID": "Resolutions" }, { "110402": "Specified category will not be considered for the recovery as the Recovery Plan supports categories with limited number of VMs.", "Check ID": "Impact" }, { "110402": "A110402", "Check ID": "Alert ID" }, { "110402": "Number of VMs for categories in the Protection Policy protection_rule_name exceeds the maximum allowed limit.", "Check ID": "Alert Smart Title" }, { "110402": "Protection Policy Max VMs per Category Check Failed.", "Check ID": "Alert Title" }, { "110402": "Maximum number of VMs for a category in a Protection Policy should not exceed max_vm_count. 
\" Following categories exceeds VMs limit : category_name", "Check ID": "Alert Message" }, { "110402": "110402", "Check ID": "Check ID" }, { "110402": "Checks if the entity count for a category specified in Protection Policy exceeds the maximum allowed limit.", "Check ID": "Description" }, { "110402": "Number of entities for the specified categories in the Protection Policy exceeds the limit.", "Check ID": "Causes of failure" }, { "110402": "Reduce the protected entity count for the specified categories in the Protection Policy.", "Check ID": "Resolutions" }, { "110402": "Specified category will not be considered for the recovery as the Recovery Plan supports categories with limited number of entities.", "Check ID": "Impact" }, { "110402": "A110402", "Check ID": "Alert ID" }, { "110402": "Number of entities for categories in the Protection Policy protection_rule_name exceeds the maximum allowed limit.", "Check ID": "Alert Smart Title" }, { "110402": "Protection Policy Max entities per Category Check Failed.", "Check ID": "Alert Title" }, { "110402": "Maximum number of entites for a category in a Protection Policy should not exceed max_entity_count. \" Following categories exceeds entities limit : category_name", "Check ID": "Alert Message" } ]
The below limits apply to the category in a paired configuration:

Paired Configuration - Limit
AHV to AHV - 200
ESX to ESX - 200
AHV to Xi Leap - 200
ESX to Xi Leap - 100

Resolution:
1. Note the Protection Policy identified in the alert.
2. From Prism Central, load the Protection Policies page. If executing this on the Leap tenant, go to Explore -> Protection Policies. If executing this on an on-prem cluster, go to Dashboard -> Policies -> Protection Policies.
3. Select and click the Protection Policy identified in the NCC check.
4. In the Protection Policy page that loads, note down the categories associated with the Protection Policy. Close the window.
5. Load the Categories page. If executing this on the Leap tenant, go to Explore -> Categories. If executing this on an on-prem cluster, go to Dashboard -> Virtual Infrastructure -> Categories.
6. Identify the category that was noted down in step 4 and review the "Assigned Entities" column.
7. If the number of entities in the "Assigned Entities" column exceeds the limit for the paired configuration, click the category and, in the page that loads, click the "VMs" link. This loads the VM page filtered by VMs associated with this category.
8. Update the category for some of the VMs so that the number of VMs associated with this category does not exceed the limit for the paired configuration. You could either create a new category and associate some VMs with the new category, or associate some of the VMs with other existing categories.

In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/. Additionally, gather the following command output and attach it to the support case: ncc health_checks run_all
KB15486
NCC Health Check: invalid_node_population_check
The NCC health check invalid_node_population_check detects presence of Node D in the chassis of NX-1065-G9 (invalid population of Node D).
The NCC health check invalid_node_population_check detects the presence of Node D in the chassis of NX-1065-G9 (invalid population of Node D). When run manually, it provides additional concise summary information to assist you with resolution and identification. This check raises an alert when a failure condition is detected. Running the NCC check This check can be run as part of the complete NCC check with the following command: nutanix@cvm$ ncc health_checks run_all This check can be run individually as follows: nutanix@cvm$ ncc health_checks hardware_checks invalid_node_population_check You can also run the checks from the Prism Web Console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every 24 hours by default and generates an alert after 1 failure. Sample Outputs For status: PASS Running : health_checks hardware_checks invalid_node_population_check For status: FAIL Running : health_checks hardware_checks invalid_node_population_check Output messaging The DDR5 DIMMs in G9 platforms require increased power, with increased heat generation. To ensure the thermal and power consumption consistently meet the requirement in the chassis, the NX-1065-G9 can be configured with up to 3 nodes per chassis. Adding a 4th node into the chassis and exceeding the power budget may result in node failures and cluster outage. The NX-1065-G9 chassis is provided with an empty node tray installed in the Node D slot. Never remove the empty node tray from the Node D slot; it provides the airflow control essential for the thermal control of the NX-1065-G9 chassis.[ { "Check ID": "Check to detect presence of Node D in the chassis of NX-1065-G9" }, { "Check ID": "Presence of Node D is detected in the chassis of NX-1065-G9 may exceed the power budget and can subsequently cause cluster outage" }, { "Check ID": "Please refer to KB 15486 for more information." }, { "Check ID": "May result in node failures and cluster outage." 
}, { "Check ID": "A106089" }, { "Check ID": "Invalid Population of Node D in the chassis of NX-1065-G9" }, { "Check ID": "Presence of Node D detected in the chassis of NX-1065-G9" } ]
Remove Node D from the chassis as soon as possible. Introducing a 4th node per chassis and exceeding the power budget will result in node failures and cluster outage. The power supply may be sufficient and no issue may be detected initially. However, if one of the PSUs fails and the power draw exceeds 2200W, nodes will fail and a cluster outage may follow. If a PSU failure is detected, check and troubleshoot the issue using KB-7386 https://portal.nutanix.com/kb/000007386.

Remove a node from the cluster:
1. Remove the host from the Prism UI https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_7:wc-removing-node-pc-c.html
2. Once the host is removed using the Prism UI, you can physically remove the node https://portal.nutanix.com/page/documents/list?type=hardware.

For further assistance, please engage Nutanix Support https://portal.nutanix.com/.
KB14937
Self-Service UI shows incorrect cost and cost/hr for Applications whose VM configuration was updated directly via vCenter
This KB describes a behaviour where incorrect cost per hour is shown for an application.
Background: Self-Service (formerly known as Calm) uses Beam Showback to keep track of the cost and cost/hr for applications based on the amount of memory, storage, and vCPU assigned to the VMs. To know more, refer to the Showback https://portal.nutanix.com/page/documents/details?targetId=Self-Service-Admin-Operations-Guide:nuc-showback-overview-c.html section of the Self-Service Administration and Operations Guide https://portal.nutanix.com/page/documents/details?targetId=Self-Service-Admin-Operations-Guide:Self-Service-Admin-Operations-Guide. When a new application is created from a Blueprint, it deploys the VMs/services configured in the blueprint. Based on the vCPU, memory, and storage configured for the VMs, along with the cost per hour configured in the Self-Service Showback settings, Self-Service contacts Beam via APIs with all the VM information, and Beam keeps track of the application's overall cost. The same is displayed on the Application page as shown below: Cost calculation workflow in Self-Service: For example, let us consider an application that has 1 VM with the following configuration and cost in the Self-Service backend: 16 GB Memory ($0.01 per GB per hour), 10 vCPUs ($0.01 per vCPU per hour), 100 GB Storage ($0.003 per GB per hour). The total application's per-hour cost would turn out to be (16 * $0.01) + (10 * $0.01) + (100 * $0.003) = $0.56/hr. Identification: If the VM configuration is updated in such a way that the updated configuration is not pushed to Beam Showback, Beam will continue to return the older value in the cost/hr calculation workflow. One known scenario was observed in ONCALL-14717 https://jira.nutanix.com/browse/ONCALL-14717, where the customer updated the configuration of the VM directly through vCenter; in these cases, the updated config was not pushed to Beam and hence an incorrect cost was returned.
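The per-hour arithmetic above can be sketched in a few lines. This is an illustrative sketch only; the rates and the VM configuration are the example values from this article, not values fetched from Self-Service or Beam:

```python
# Illustrative sketch of the Showback cost/hr arithmetic described above.
# Rates and VM configuration are the example values from this article,
# not values read from Self-Service or Beam.
MEMORY_RATE_PER_GB_HR = 0.01    # $ per GB of memory per hour
VCPU_RATE_PER_VCPU_HR = 0.01    # $ per vCPU per hour
STORAGE_RATE_PER_GB_HR = 0.003  # $ per GB of storage per hour

def app_cost_per_hour(memory_gb, vcpus, storage_gb):
    """Return the per-hour cost for one VM, following the example formula."""
    return (memory_gb * MEMORY_RATE_PER_GB_HR
            + vcpus * VCPU_RATE_PER_VCPU_HR
            + storage_gb * STORAGE_RATE_PER_GB_HR)

# The example VM: 16 GB memory, 10 vCPUs, 100 GB storage
print(round(app_cost_per_hour(16, 10, 100), 2))  # 0.56
```

An application with multiple VMs would simply sum this value over all its VMs.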
It is recommended not to update the VM configuration of Self-Service applications directly via vCenter. An improvement has been raised (tracked under CALM-35643 https://jira.nutanix.com/browse/CALM-35643) to push the config to Showback as part of the platform sync operation whenever the VM config is updated from vCenter. As a workaround, to update the Beam Showback calculation, follow the steps below: Soft delete the application from the Self-Service UI and again brownfield https://portal.nutanix.com/page/documents/details?targetId=Self-Service-Admin-Operations-Guide:nuc-brownfield-applications-tab-usage-c.html it with the VM. This workflow creates a new entity in Showback with the updated configuration. Alternatively, under the Application's Manage tab, first Stop the application and then Start the application. This will push the updated configuration to Showback. Note: Customers should be made aware that their previous cost data might be lost when the workaround is applied. This should not be a concern in most cases, as the previous cost calculation reflected a stale state; the cost shown after applying the workaround accounts for all the VM config changes.
KB15766
Move: Hyper-V - This VM is configured with Standard checkpoints.
When migrating the VM from a Hyper-V source provider, the following warning may appear if the VM is configured with standard Hyper-V checkpoints: This VM is configured with Standard checkpoints. For Hyper-V host OS version Windows Server 2016 and later, it is recommended to configure VM with Production Checkpoints for better performance and application consistency.
When migrating the VM from a Hyper-V source provider, the following warning may appear if the VM is configured with standard Hyper-V checkpoints: This VM is configured with Standard checkpoints. For Hyper-V host OS version Windows Server 2016 and later, it is recommended to configure VM with Production Checkpoints for better performance and application consistency. Please follow KB 15766 to change the checkpoint type on the VM.
The warning message was added in Nutanix Move 5.1.1 and later versions. If the Microsoft Hyper-V Server version is 2016 or higher, Nutanix recommends configuring the checkpoints as production checkpoints for better migration performance. For the procedure to change checkpoints to production checkpoints, refer to the Microsoft Learn documentation: Changing the Checkpoint Type http://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/checkpoints#changing-the-checkpoint-type
KB11010
Adding Nutanix Objects as Primary Storage with Veritas Enterprise Vault
This article describes the steps to add Nutanix Objects (S3) as Primary Storage with Veritas Enterprise Vault.
This article describes the steps to add Nutanix Objects (S3) as Primary Storage with Veritas Enterprise Vault. Versions Affected: Nutanix Objects 3.1 and above Prerequisite: If using a self-signed certificate, add the Nutanix Objects self-signed CA certificate on the Enterprise Vault Servers. Refer to KB-10953 http://portal.nutanix.com/kb/10953 for more information.
Adding Nutanix Objects as Primary Storage Enterprise Vault 14.1 and later supports Nutanix Objects (S3) as Primary Storage for data archiving. The below steps describe adding Nutanix Objects (S3) to a Vault Store Partition on Veritas Enterprise Vault. Launch the Enterprise Vault Administration Console. Navigate to the required vault store in the Vault Store Group. Expand the vault store, right-click on "Partitions" and select "New" -> "Partition". Note: Create a new Vault Store Group and Vault Store https://www.veritas.com/content/support/en_US/doc/115743999-142064348-0/id-SF390534065-142064348 first if not already created. Provide the required details on the โ€œNew Partitionโ€ window and click "Next". Select "Nutanix Objects (S3)" in the โ€œStorage Typeโ€ menu and click โ€œNextโ€. Provide the following details in the Nutanix Objects (S3) connections settings window: Access Key IDSecret Key IDService hostname (Example: objtest.nutanixbd.local)Bucket nameBucket region: us-east-1Bucket access type: path/virtualStorage class: S3 Standard Click on "Test" to test the connection to Nutanix Objects. "Nutanix Objects (S3) connection test succeeded" is displayed if the connection is successful. Click "OK" and click "Next". Select the option โ€œWhen archived files exist on the storageโ€ and the appropriate scan interval for securing the archived items. Note: The option โ€œWhen archived files are replicated on the storageโ€ is not supported by Nutanix Objects. Verify the Vault Store Partition summary and click โ€œFinishโ€ Nutanix Objects has now been added to the Vault Store Partition as the Primary Storage for data archival.
KB14661
Prism Central - After upgrade to 2023.1.0.1 users unable to login, including local admin
This article describes a situation where, after a PC upgrade from 2022.6.X / 2022.9.X to pc.2023.1.0.1, users receive a 403 message after a successful login (with correct credentials).
The following scenario is possible:
1. The customer has PC 2022.6.X with IAM 3.6 enabled.
2. The customer upgrades to pc.2023.1.0.1.
Users may see a 403 error after Prism Central is upgraded to pc.2023.1.0.1, including when logging in with the local admin account.
Identification: After the upgrade to pc.2023.1.0.1, services start. At this point, prism_gateway expects to see a "users" field in the response sent by iam-authn, but MSP is still running IAM 3.6 and sends back a "result" field, causing auth failures. prism_gateway.log has the following exceptions: ERROR 2023-04-14 16:02:06,372Z http-nio-127.0.0.1-9081-exec-8 [] auth.commands.GetLoggedInUser.doPostprocess:123 Error while retrieving the logged in user Checking within SourceGraph: this is a failure in GetLoggedInUser caused by a NullPointerException in getUserType(). The issue is caused by the reply from IAM missing the "users" field. The reply is incorrect because iam-authn is running with an incorrect/old version image: nutanix@PCVM:$ sudo kubectl get pods -n ntnx-base
Explanation: MSP is upgraded, and as part of the MSP upgrade, an IAM upgrade to 3.11 is triggered. The IAM images upgrade may take a long time as it pulls updated images from the internet. If the connection is slow, it can take significant time. Once the IAM pods upgrade is completed, iam-authn will send back the "users" field and auth will start to work. Until IAM is upgraded to 3.11, the local admin is unable to log in and ncli does not work.
Note: To find the MSP version matching the PC version, check: PC-MSP-IAM branch mapping https://confluence.eng.nutanix.com:8443/display/IAM20/PC-MSP-IAM+branch+mapping
Check if the IAM upgrade is running or has failed in ~/data/logs/msp_controller.out. If it is still running, give it more time: 2023-04-17T09:34:09.754Z base_services.go:596: [INFO] [msp_cluster=prism-central] svc IAMv2 current version 3.6.0.1658839862 < spec version 3.11.0.1675810195, upgrade required Alternatively, check the upgrade state from mspctl. Good result: nutanix@PCVM:~$ mspctl controller info Impacted result (error messages may vary): nutanix@PCVM:~$ mspctl controller info If the upgrade of MSP components failed, restart the msp_controller service to re-trigger the IAM upgrade: genesis stop msp_controller; cluster start
KB11815
Prism Central services down after upgrade
Prism Central upgrade failure
The Prism Central VM did not reboot after the upgrade. The customer force-rebooted the PC. After the reboot, services are down. + Upgrade history nutanix@NTNX-179-114-61-188-A-PCVM:~/data/logs$ cat ~/config/upgrade.history + Genesis status nutanix@NTNX-179-114-61-188-A-PCVM:~$ genesis status + Genesis restart nutanix@NTNX-179-114-61-188-A-PCVM:~$ genesis restart + Checking config_home_dir_marker file nutanix@NTNX-179-114-61-188-A-PCVM:~$ cat /tmp/config_home_dir.log + The file was present nutanix@NTNX-179-114-61-188-A-PCVM:~$ ls -al /home/nutanix/prism/security + httpd service status nutanix@NTNX-179-114-61-188-A-PCVM:~/data/logs$ sudo systemctl -l status httpd
The log signature resembles ENG-203789 https://jira.nutanix.com/browse/ENG-203789 / KB 7906, where the script was failing in config_home_dir. That issue has been resolved in 5.11, but the customer was running 5.19. We could not RCA the issue. Please do not reboot the PCVM if you see a similar issue. Engage engineering via TH or ONCALL.
KB8671
How to determine which M.2 device failed on the node
This KB describes how to find which M.2 drive has failed when a S.M.A.R.T. error occurs on an M.2 device on the node during boot-up.
Sometimes a S.M.A.R.T. error for an M.2 device is reported while the node is booting up. In this situation, it is hard to determine which device (Port-0 or Port-1) has failed, causing the hypervisor/CVM to fail to boot. In most cases, Port-1 is the failed device, but rarely Port-0 also fails, even if it is blank without any boot files.
Review the previous message of the S.M.A.R.T. error. The number following I-SATA indicates which device has failed. I-SATA0 : INTEL SSDCSKJB240G7 0 indicates that the Port-0 drive has failed. I-SATA1 : INTEL SSDCSKJB240G7 1 indicates that the Port-1 drive has failed. Contact Nutanix Support http://portal.nutanix.com to request a replacement.
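For scripted environments, the failed port can be pulled out of the S.M.A.R.T. message programmatically. This is a minimal sketch assuming the message format shown above; the helper name and regex are ours, not a Nutanix tool:

```python
import re

def failed_m2_port(smart_message):
    """Return the M.2 port number (0 or 1) named in an I-SATA S.M.A.R.T. line,
    or None if the line contains no I-SATA reference."""
    match = re.search(r"I-SATA(\d+)", smart_message)
    return int(match.group(1)) if match else None

print(failed_m2_port("I-SATA0 : INTEL SSDCSKJB240G7"))  # 0 -> Port-0 drive failed
print(failed_m2_port("I-SATA1 : INTEL SSDCSKJB240G7"))  # 1 -> Port-1 drive failed
```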
KB16500
Prism Central scheduled reports are not getting generated for non-UTC timezones
Prism Central scheduled reports are not being generated when a non-UTC time zone is specified in the report config.
Scenario 1: Upon reviewing the generated reports within the Prism Central tab, it was discovered that no reports had been generated at the scheduled times when a non-UTC time zone was configured. This has been identified as a known issue with PC version 2023.4. The following errors can be observed in the vulcan log: I0321 11:07:49.535836Z 64179 authz.go:292] Trying to Perform report_configs_put operation on 79454f11-cbf9-4814-455a-787ed7d1a5fc resource Scenario 2: With multiple schedules (for example, daily, weekly, monthly), the same report may be generated multiple times in different time zones (UTC and the time zone set on the PC). This issue is observed with reports created prior to the PC upgrade to pc.2023.4. In the vulcan logs, we can observe a similar error as highlighted above.
Nutanix Engineering is aware of the issue and a fix has been integrated into pc.2024.1. This problem is tracked in ENG-645986 https://jira.nutanix.com/browse/ENG-645986.
KB10687
Unable to add Hyper-V hosts to move appliance
Customers may see different errors when trying to add Hyper-V hosts due to an authentication issue.
When we try to install the Move agent on Hyper-V hosts automatically using Move, we get the following error on the web page. Error: Move HyperV agent automatic installation failed: Powershell command .\'move-agent-installer.exe' --o='install' --ip='10.240.157.202' --servicemd5='cdf7d8b792da9bea077818a9bad770ec' --certmd5='3ff37b6b0280ec7d3c27b1d7898e35c3' --keymd5='02ca26b4e009287d12d1280149e5ef16' --u='[email protected]' --p= **** failed. For Manual installation please refer: https://portal.nutanix.com/page/documents/details/?targetId=Nutanix-Move-v3_7:v37-deploy-nt-service-t.html On the host, we can see that the MD5 for the service file is correct by running the command [$md5 = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider; $hash = [System.BitConverter]::ToString($md5.ComputeHash([System.IO.File]::ReadAllBytes('C:\Users\adm-XX90XX37\Nutanix\Move\3.7.0/move-service.exe'))); echo $hash] Output Matches cdf7d8b792da9bea077818a9bad770ec In the move-agent-installer.exe.XXXXXXX.XXXXXX-IN903537.log.INF log on the host, we see the same error [The account name is invalid or does not exist, or the password is invalid for the account name specified.] We can install the agent with the local admin account without any issues; however, the VMs then show as not migratable due to the following error (in case the source VMs are on a Nutanix cluster too): The VM has disk(s) attached which are not actually present on HyperV host. Even when we run the install from the host using PowerShell, we get the same error. C:\Users\XXX_indvdi>move-agent-installer.exe -o install -ip 10.240.157.202/ -u [email protected]
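The same MD5 comparison shown in the PowerShell command above can be done with a short Python sketch; compare the digest against the servicemd5 value passed to the installer. The file path in the comment is an example, not a fixed location:

```python
import hashlib

def file_md5(path):
    """Return the hex MD5 digest of a file, reading it in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    return md5.hexdigest()

# Example (hypothetical path): compare the result against the servicemd5
# value from the move-agent-installer.exe command line.
# file_md5(r"C:\Users\<user>\Nutanix\Move\3.7.0\move-service.exe")
```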
Use the steps in KB-7932 http://portal.nutanix.com/kb/7932 to remove the failed installation before trying the solutions given below.

ISSUE 1: When installing the Move agent using the [email protected] format, or just the username, it may not authenticate in some domains.
Solution: From the Move appliance, use the domain account in domain\username format to install the agent on the host.

ISSUE 2: Even when using the domain\username format, it may fail.
Solution: Make sure the user is not logged in with a temporary profile. To confirm, check the C:\Users directory and verify that there is a directory with the domain username. If the directory is missing, try with a user that has previously logged in to the host.

ISSUE 3: Even when the directory for the user is present on the host and domain\username is being used, it may fail for other reasons.
Workaround:
1. Use the local admin account to add the agent automatically from the Move appliance page.
2. Connect to the host Services MMC (services.msc).
3. Find the Nutanix Move Agent service, right-click -> Properties -> Logon Account -> change it to a domain account.
4. Go to the Move page and change the login account to the same domain account.
5. Refresh the inventory; the VMs will now show as migratable.
KB13280
Nutanix DRaaS - Cannot recover VM from this Recovery Point - The VM has delta disks
Cannot recover VM from this Recovery Point - The VM has delta disks
Nutanix Disaster Recovery as a Service (DRaaS) is formerly known as Xi Leap.VMs being protected to Xi from a Nutanix-ESXi cluster cannot be recovered in Xi with the following error: Cannot recover VM from this Recovery Point Checking the cerebro logs on the on-prem cluster, it shows app-consistent snapshots are attempted via the NGT VSS capability: I20220508 17:18:27.548004Z 26223 snapshot_protection_domain_meta_op.cc:1685] <meta_opid: 28222699 parent meta_opid: 28222691 , PD: pd_1645467506830816_5>: Create application consistent snapshot for protection domain But the VSS capability is either not enabled in NGT (Nutanix Guest Tools) for this VM or there is any other issue with the NGT that prevents taking such a snapshot: I20220508 17:18:28.964233Z 26223 snapshot_consistency_group_sub_op.cc:4792] <parent meta_opid: 28222699, CG: cg_1645467506830816_5>: NGT VSS capability is not enabled for VM 501002a6-d5ea-52bf-1ae1-f1db72310a68 Because of that, Cerebro falls back to hypervisor snapshot to provide App consistency: I20220508 17:18:28.964270Z 26223 snapshot_consistency_group_sub_op.cc:4404] <parent meta_opid: 28222699, CG: cg_1645467506830816_5>: VSS snapshot is not supported on the VM with ID : 501002a6-d5ea-52bf-1ae1-f1db72310a68, falling back to old style of app consistent snapshot. If you check the ESXi datastores on the on-prem cluster, you can confirm that VMware snapshots (delta disks) are being generated by the Protection Policy (these are not manually created by the user), in the .snapshot directory. For example: [root@bordeaux01:~] find /vmfs/volumes/ -iname "*delta.vmdk" | grep TestVM
The issue is happening because the customer has app-consistent snapshots enabled in the Protection Policy, but VSS snapshots are not working at the Nutanix level, so Cerebro falls back to creating VMware snapshots (delta disks). These snapshots cannot be recovered in Xi, as Xi clusters are running AHV. To solve this issue, check the NGT settings for each of the affected VMs from a CVM: ncli ngt list vm-names=<vm-name-1>,<vm-name-2> Option 1: If the customer wants app-consistent snapshots to replicate to Xi from an ESXi cluster, the following conditions must be met: The VMs must have NGT installed. The NGT communication link for the VMs must be active. The VSS capability must be enabled in the NGT settings of each VM. Troubleshoot as needed if the above conditions are not met. For more details, check KB-3741 https://portal.nutanix.com/kb/3741. Option 2: If the customer does not need or want app-consistent snapshots, the setting can be disabled in the Protection Policy. Then, after a new snapshot is replicated to Xi, recovery of the VM should succeed.
KB16255
Node Removal Stuck - Possible Scenarios
This KB lists various possible problems during Node removal and how to resolve them.
Overview

This KB lists various possible problems during node removal and how to resolve them. Scenarios are many, and with the aid of this generic troubleshooting KB you should be able to identify which one you have hit in a case and then proceed to the specific break-fix KB describing the solution.

NOTE: There is a separate KB-16236 https://portal.nutanix.com/kb/16236 for disk removal stuck scenarios. Please do not mix the two. Node removal might get stuck due to incomplete data migration for its disks, but then the root cause would lie in the stuck disk rather than in the node-level component acks for services like Zookeeper, Acropolis, or Mantle.

Node set "kToBeRemoved"

Start the troubleshooting by finding the node_status: kToBeRemoved in the zeus_config_printer output.

Note: Beginning with AOS 6.1 (FEAT-12753 https://jira.nutanix.com/browse/FEAT-12753), the system might perform multiple node removals in parallel (maximum 4). In prior releases, only 1 node could be removed at a time. There are several prerequisites to even start the node removal. These are documented in the PE Web Console Guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_7:wc-prerequisites-node-remove-pc-c.html. Some extra prerequisites allow for multinode removal.

Node Removal Status

For node removal slow or completely stuck, the node_removal_ack is the source of truth. We can see the current progress in the node_removal_ack via zeus_config_printer. The component acks form a hexadecimal 6-digit set.
Each one of these is set to 1 when the node is OK to be removed from the corresponding component's perspective:

kCuratorOkToRemove = 0x000001 (1)
kCassandraOkToRemove = 0x000010 (16)
kZookeeperOkToRemove = 0x000100 (256)
kAcropolisOkToRemove = 0x001000 (4096)
kMantleOkToRemove = 0x010000 (65536)
kGenesisOkToRemove = 0x100000 (1048576)
kAllOkToRemove = 0x111111 (1118481)

If your NodeRemovalAck is not listed above, please reference this Confluence page to translate it: https://confluence.eng.nutanix.com:8443/display/STK/Node+Removal+Status

Code reference for NodeRemovalAck: https://sourcegraph.ntnxdpro.com/ntnxdb-fraser-6.7.1/-/blob/zeus/zeus/configuration.proto?L595

For example: node_removal_ack=1118464=0x111100 in hexadecimal. This means the Genesis, Mantle, Acropolis, and Zookeeper services gave their ack for node removal, while the Curator and Cassandra processes have not.

nutanix@CVM:~$ zeus_config_printer | grep -B 4 node_removal_ack

In order to convert the decimal status shown in zeus_config to hexadecimal for checking which component acks are in place, you can use the bc command as follows:

echo "obase=16; 1118464" | bc

Look for the bits that are missing, the zeros.

Note: Cassandra is always the last service to acknowledge and waits for the other services to complete their acks. If we see any situation where one or two services and Cassandra have not acknowledged, troubleshoot it from the other services' perspective. For example, if we see that the ack has not been sent for Zookeeper and Cassandra, troubleshoot Zookeeper issues first.
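The bc conversion and the bit table above can be combined into a short helper that names the missing acks directly. This is a convenience sketch, not a Nutanix utility; the bit values are taken from the table above:

```python
# Decode a decimal node_removal_ack value into the list of components
# that have NOT yet acknowledged the removal, using the bit values from
# the component ack table above. Convenience sketch only.
COMPONENT_BITS = {
    "Curator":   0x000001,
    "Cassandra": 0x000010,
    "Zookeeper": 0x000100,
    "Acropolis": 0x001000,
    "Mantle":    0x010000,
    "Genesis":   0x100000,
}

def pending_acks(node_removal_ack):
    """Return the components whose ack bit is still 0."""
    return [name for name, bit in COMPONENT_BITS.items()
            if not node_removal_ack & bit]

print(hex(1118464))           # 0x111100
print(pending_acks(1118464))  # ['Curator', 'Cassandra'] -> still waiting
print(pending_acks(1118481))  # [] -> all components acked (kAllOkToRemove)
```

Run it with the decimal value shown by zeus_config_printer to see at a glance which services are holding up the removal.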
WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit and KB-1071 https://portal.nutanix.com/kb/1071. A detailed overview of the node removal process for each component, and the "GrepSheet" for related log signatures is available in the below Confluence page: https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=250338088 https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=250338088 This KB lists specific scenarios and handling guidance for Node removal issues seen on supported AOS versions. For EOL section, see Internal Comments. Node removal slow or stuck - Possible Scenarios Current Releases: AOS 6.5.x and later [ { "#": "8.", "Component Acks /\t\t\tnode_removal_ack": "", "Description and Symptoms": "Node removal stuck due to slow NearSync Oplog draining\n\n\t\t\tSSD(s) showing data_migration_status = 273 (0x0111)", "Handling Guidance": "Refer to KB 6707. If KB-6707 did not help, this requires an ONCALL.\n\n\t\t\tIf the customer cannot disable --> re-enable NearSync, refer to ONCALL-14576 & TH-10277\n\n\t\t\tCollect the following details and proceed to open an ONCALL:\t\t\t a. Details of the SSD that is in the process being removed.\t\t\t b. List of all Oplog episodes in the stuck disk (from medusa) \n\n\t\t\tallssh 'cat data/logs/curator.* | grep \"Egroups for removable disk\"'\nmedusa_printer --lookup egid --egroup_id | head -n 50\n\n\t\t\tc. 
PD details on which the NearSync is enabled", "ENG": "ENG-289449\t\t\tnot fixed\t\t\t\t\t\tFEAT-6057\t\t\twent GA in AOS 6.6" }, { "#": "10.", "Component Acks /\t\t\tnode_removal_ack": "69905\t\t\t(0x011111)", "Description and Symptoms": "Node removal is stuck because the Curator cannot unlock the SED drives on the removed node.\t\t\t\t\t\tNCLI shows Host Status is DETACHABLE: \n\n\t\t\tnutanix@NTNX-CVM:~$ ncli host get-rm-status\n Host Id : 8a1f4d27-eb16-418b-ae08-23e6f9751c7c\n Host Status : DETACHABLE\n\t\t\tTo confirm the issue check for the following signature in Curator logs - \"Genesis at xx.xx.xx.xx failed with error 4 when trying to clean password-protected drives\" \n\n\t\t\tE0923 23:30:27.789451 10257 curator_config_helper_ops.cc:1072] \n Genesis at xx.xx.xx.xx failed with error 4 when trying to clean password protected drives\nE0923 23:30:30.795256 7624 rpc_client.cc:506] \n Transport error reported by bottom half while trying to send RPC with rpc_id=675617721668337972, detail=Http connection error", "Handling Guidance": "In a Nutanix cluster with self-encrypting drives (SED), after all the data on the removed node is replicated to other nodes, as the last step it will issue RPC calls to genesis service on the removed node for cleaning and unlocking the drives permanently. If the node that is getting removed is shut down or not reachable over the network, removal operation can be stuck forever. \n\n\t\t\tThis last step is not tracked in the node_removal_ack field so it looks as if the removal is complete however it does not finish.\n\n\t\t\tPower on the node that is getting removed. Once genesis is up Curator will be able to complete the RPC and the node removal will be finished immediately.\n\n\t\t\tENG-255923 is open to track the SED step in the node_removal_ack field to aid with the troubleshooting. Duped to ENG-74094 introducing the alert A1106 - CannotRemovePasswordProtectedDisks. 
See KB 16375 for more details on the alert.\n\n\t\t\tIf it is not possible to bring the powered the node being removed back on then open an ONCALL to get assistance from engineering to complete the removal manually.", "ENG": "ENG-74094\t\t\tfixed in AOS 6.1.1, check added in NCC-4.3.0" }, { "#": "21.", "Component Acks /\t\t\tnode_removal_ack": "1052929\t\t\t(0x101101)", "Description and Symptoms": "Multiple nodes are down, unreachable, and physically removed from the site\n\n\t\t\tNodes are listed in Zeus as \"to_remove: true\" but the data_migration_status and node_removal_ack fields are not populated. There are no node removal tasks.\t\t\tnode_removal_ack 1052929 (0x101101) indicates that node removal is waiting on Mantle service ack.\n\n\t\t\tmantle log on a leader:\n\n\t\t\tI20221111 02:17:54.921808Z 18798 mantle_util.cc:369] Unable to connect to 10.xxx.xxx.214:9880 errno : 115\nE20221111 02:17:54.921823Z 18798 mantle_server_rotate_mk_op.cc:122] All Mantle servers are not up, cannot rotate Master Key\nE20221111 02:17:54.921994Z 18796 mantle_server.cc:3072] Mantle error:kNoQuorum\nE20221111 02:17:54.922042Z 18796 mantle_server.cc:3362] Unable to do Master Key rotation to -1 error: kNoQuorum", "Handling Guidance": "We're hitting the ENG-371980 as multiple nodes are unreachable. Engage engineering via Oncall to flip the node_removal_ack value in zeus for Mantle (after confirming everything else is fine).\n\n\t\t\tAfter this, Cassandra should also acknowledge the node-removal and node get removed from the configuration.\t\t\t\t\t\tNote:\t\t\tBy acking Mantle bit, the node removal is completed. However, in case multiple nodes are down, Mantle may still be in a crash loop because the master key rotate op cannot get any update, and it does not also update the \"Master key node list\".\t\t\tIf 3 nodes go down, we have to manually ack the mantle for 2 out of those 3, but the 3rd node does not hit the bug and can complete mantle_server_rotate_mk_op. 
This updates the \"Master key node list\" and \"Master key version\" gets updated as expected.\t\t\tSo do not leave in the cluster that does not complete the rotate op, such as removing only 1 out of 3 nodes. \t\t\t\t\t\t\"Master key node list\" and \"Master key version\" can be confirmed on page 9880.\n\n\t\t\tnutanix@CVM$ links --dump http:0:9880\n Mantle Server\n\n โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”\n โ”‚Start Date โ”‚20220829-08:40:10-GMT+0900 โ”‚\n โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค\n โ”‚Build Version โ”‚el7.3-release-euphrates-5.20.4-stable-2e5bbbf3d397df65357d6bec23daf7d4558df792 โ”‚\n โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค\n โ”‚Build Last Commit Dateโ”‚2022-03-18 07:48:34 +0000 โ”‚\n 
โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค\n โ”‚Master โ”‚Yes โ”‚\n โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค\n โ”‚Prism Central โ”‚5886cb82-ef3b-40ab-9039-7bbe9dc5212f โ”‚\n โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค\n โ”‚Master key version โ”‚3 โ”‚\n โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค\n โ”‚Master key setup time โ”‚20230220-01:59:21-GMT+0900 โ”‚\n 
โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค\n โ”‚Master key threshold โ”‚5 โ”‚\n โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค\n โ”‚ โ”‚128b7bd6-807e-4233-a50b-43165130206c d373eece-5ff0-4e8a-9205-72761ef768ca 3c1ac990-0f59-4cf6-b4b5-79a313beec98 โ”‚\n โ”‚Master key node list โ”‚4c0634cf-c9d6-41c1-a992-cc72460bef5f c8c7a650-75ab-4fc7-b138-648e2eb8b875 5df7f161-0623-4ee1-8bd7-e9a8dc2ea85f โ”‚\n โ”‚ โ”‚eab64466-837a-481f-95c9-dc4a71a55b22 f7830d24-0c40-475e-baf0-6029b3ce66d1 596a3696-77da-4b2c-a1ee-5666f6cc8269 โ”‚\n โ”‚ โ”‚5708d6b1-5d18-411b-b6c9-ba76efddeb50 โ”‚\n โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜", "ENG": "ENG-371980\t\t\tpending" }, { "#": "22.", "Component Acks /\t\t\tnode_removal_ack": "1118209\t\t\t(0x111001)", "Description and Symptoms": "Node removal is stuck at the Zookeeper phase due to inactive 
zookeeper_monitor prevents zk-server duties from being migrated\n\n\t\t\tIf ZK cannot find another CVM in the cluster to take over ZK server duties, you will find log messages similar to the ones below on the Zeus Leader CVM.\t\t\tNote: You can find the ZeusLeader designated in the output of the \"cluster status\" command.\n\n\t\t\t# grep migration ~/data/logs/zookeeper_monitor.INFO\nI20221207 23:57:10.595769Z 19429 cluster_state.cc:700] Checking if we need to migrate Zookeeper\nI20221207 23:57:10.595885Z 19429 cluster_state.cc:2293] Getting a migration target when the migration source is xx.xx.xx.174\nW20221207 23:57:10.595901Z 19429 cluster_state.cc:904] \n Cannot migrate Zookeeper on node xx.xx.xx.174 which is marked to be removed as a migration target was not found", "Handling Guidance": "Refer to KB 16391 for further guidance and instructions.", "ENG": "ENG-524531\t\t\tpending\t\t\t\t\t\tENG-526014\t\t\tpending" }, { "#": "24.", "Component Acks /\t\t\tnode_removal_ack": "", "Description and Symptoms": "Node removal is stuck as the cluster runs out of space in the SSD tier\n\n\t\t\tIn the stargate.INFO logs you will see messages with kDiskSpaceUnavailable errors, for example: \n\n\t\t\tE0406 13:01:12.379101 30672 vdisk_micro_vblock_writer_op.cc:606] \n vdisk_id=64761062 operation_id=1213502080 Assign extent group for vdisk block 312456 failed with error kDiskSpaceUnavailable\nW0406 13:01:12.379123 30672 vdisk_distributed_oplog.cc:2914] \n vdisk_id=64761062 inherited_episode_sequence=-1 ep_seq_base=119336 VDisk oplog micro vblock write op 1213502080 failed with error kDiskSpaceUnavailable\n\n\t\t\tPrism UI will show very high latency for User VMs (thousands of milliseconds)\n\n\t\t\tGuest OS on User VMs may crash (BSOD on Windows), or re-mount the filesystem as read-only (Linux).\n\n\t\t\tCurator logs on the leader will show messages like the following for the disk ID of the HDDs:\n\n\t\t\tI0406 13:10:40.125768 31503 curator_execute_job_op.cc:2751] 
ExtentGroupsToMigrateFromDisk[50] = 193076", "Handling Guidance": "The fastest way to recover from the full storage situation is to Cancel the node removal.\n\n\t\t\tConsult with a Support Tech Lead (STL) / EE to engage Engineering via an ONCALL", "ENG": "" }, { "#": "27.", "Component Acks /\t\t\tnode_removal_ack": "", "Description and Symptoms": "AHV node to be removed is stuck at \"EnteringMaintenanceMode\"\n\n\t\t\tThe acli command indicate that the node.51 which is being removed got stuck at \"EnteringMaintenanceMode\" state:\n\n\t\t\tnutanix@CVM:~$ acli host.list\nHypervisor Hypervisor Host UUID Node state Connected Node type Schedulable Hypervisor CVM IP\nIP DNS Name Name\nx.y.z.51 x.y.z.51 44dfcc72-... EnteringMaintenanceMode True Hyperconverged False AHV x.y.z.51\n\n\t\t\tVmMigrate task failed due to all hosts in not schedulable state.", "Handling Guidance": "Refer to KB 14161 for further guidance and instructions.", "ENG": "ENG-518297\t\t\tfixed in AOS 6.8" }, { "#": "28.", "Component Acks /\t\t\tnode_removal_ack": "", "Description and Symptoms": "The cluster was not upgraded for more than 2 years; this caused all service level certs to expire on nodes.\t\t\tIn the below scenario, the last cluster upgraded on Fri, 11 Feb 2022\n\t\t\tFri, 11 Feb 2022 23:05:00 el7.3-release-fraser-6.0.2.5-stable-9d63d78985bddf0aab3ef9c772eb4d7550013704\n\t\t\t\t\t\tBelow is the output of the certificate validity, which has been expired\n\t\t\tnutanix@NTNX-CVM:~$ sudo openssl x509 -enddate -noout -in /home/certs/MantleService/MantleService.crt\nnotAfter=Feb 11 20:23:46 2024 GMT\n\nnutanix@NTNX-CVM:~$ openssl verify -CAfile /home/certs/root.crt -untrusted /home/certs/ica.crt /home/certs/MantleService/MantleService.crt\n/home/certs/MantleService/MantleService.crt: CN = MantleService, O = Nutanix\nerror 10 at 0 depth lookup:certificate has expired\nOK\n\t\t\t \n\n\t\t\tAs part of this, mantle was trying to fetch the master key parts from all the nodes, but mantle communication 
was failing across the nodes with kNetworkError because mantle is the only one that uses grpc from intra cluster communication.\t\t\t \n\n\t\t\t/home/nutanix/data/logs/mantle.out.20240325-152529Z:E0326 16:46:48.658254041 2076 {{ssl_transport_security.cc:1233]}} Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed.\n/home/nutanix/data/logs/mantle.out.20240325-152529Z:E0326 16:47:16.669887041 2075 {{ssl_transport_security.cc:1233]}} Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed.\n\n\t\t\tI20240327 03:49:18.329051Z 1942 {{mantle_server_fetch_mk_op.cc:143]}} Sending an rpc to 3386bcc0-a092-4f5d-91d2-c5b1045082a1 to fetch master key\nI20240327 03:49:18.329084Z 1942 {{mantle_server_fetch_mk_op.cc:143]}} Sending an rpc to 7a573771-c34e-41f6-ba73-2aedbe0c45fb to fetch master key\nI20240327 03:49:18.329123Z 1942 {{mantle_server_fetch_mk_op.cc:143]}} Sending an rpc to ba912f2e-bf2f-4917-a731-05caf9426721 to fetch master key\nI20240327 03:49:18.329205Z 1942 {{mantle_server_fetch_mk_op.cc:143]}} Sending an rpc to 3e782c2a-894f-41e5-8a9b-fe02f9bfb198 to fetch master key\nI20240327 03:49:18.329237Z 1942 {{mantle_server_fetch_mk_op.cc:143]}} Sending an rpc to 7160b6a6-c6cc-4f27-a27e-0d36d14cefda to fetch master key\nI20240327 03:49:18.329262Z 1942 {{mantle_server_fetch_mk_op.cc:156]}} Received a reply from 3386bcc0-a092-4f5d-91d2-c5b1045082a1 with error kNetworkError\nW20240327 03:49:18.329268Z 1942 {{mantle_server_fetch_mk_op.cc:245]}} Node 3386bcc0-a092-4f5d-91d2-c5b1045082a1 failed with error kNetworkError\nI20240327 03:49:18.329281Z 1942 {{mantle_server_fetch_mk_op.cc:248]}} outstanding: 4\n\n\t\t\t\t\t\tThis caused the node removal to get stuck on the mantle ack,\n\n\t\t\tTo solve this, we deleted the expired service level certs on the affected nodes and restarted genesis, which will regenerate the 
certs.\n\n\t\t\t$ find /home/certs/ -name \"*.crt\" ! -name root.crt ! -name ica.crt -exec rm {} \\;\n\n$ genesis restart\n\n\t\t\tAfter this new certs were generated with updated expiry time.\n\n\t\t\tnutanix@NTNX-CVM:~$ sudo openssl x509 -enddate -noout -in /home/certs/MantleService/MantleService.crt\nnotAfter=Mar 27 03:41:48 2026 GMT\n\n\t\t\tAfter this, the mantle was restarted on these nodes so that the mantle could reload certs; this restored mantle communication. After this master key rotation was completed, node removal was also completed.\n\t\t\tNote: As part of this service level certs are generated, which will expire after 2 years, expecting that there will be at least one upgrade under 2 years.", "Handling Guidance": "", "ENG": "" } ]
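The expired-certificate sweep described in the last scenario can be screened with a small script before deleting anything. A minimal bash sketch, assuming openssl is available and /home/certs follows the layout shown above (the function name and the 30-day warning window are illustrative):

```shell
#!/usr/bin/env bash
# Report service-level certs under a directory that are expired or expire
# within 30 days, excluding root.crt and ica.crt as in the scenario above.
check_service_certs() {
  local certdir="$1"
  local crt
  while IFS= read -r crt; do
    # `openssl x509 -checkend N` exits non-zero if the cert expires within N seconds
    if ! openssl x509 -checkend $((30*24*3600)) -noout -in "$crt" >/dev/null 2>&1; then
      echo "EXPIRING/EXPIRED: $crt"
      openssl x509 -enddate -noout -in "$crt"
    fi
  done < <(find "$certdir" -name '*.crt' ! -name root.crt ! -name ica.crt)
}
```

For example, `check_service_certs /home/certs` on each CVM. This is a read-only check; do not remove certificates or restart services without following the handling guidance above.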
KB2060
Failure to upgrade the CVM memory through vCenter
The following article explains the procedure to upgrade the CVM memory through vCenter after failure.
After increasing the memory size of a CVM, it fails to start with the following error: "Failed to start the virtual machine." The virtual machine properties display the correct memory size.
Perform the following steps to resolve the issue. Connect via SSH to the ESXi host running the CVM. Under the local datastore, open the ServiceVM_Centos.vmx CVM configuration file and look for the following line: sched.mem.min = "16384" Check the following field (at the top of the vmx file): memsize = "16384" If sched.mem.min and sched.mem.minSize have a lower value than the memsize field, edit the file so that sched.mem.min and sched.mem.minSize match the memsize value. After making these modifications, the CVM will start correctly.
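As a quick sanity check before editing, the three vmx fields can be compared programmatically. A minimal bash sketch (the function name is illustrative; the key names follow the steps above):

```shell
#!/usr/bin/env bash
# Compare memsize against the sched.mem.min / sched.mem.minSize reservations
# in a CVM .vmx file and report any reservation lower than memsize.
check_vmx_mem() {
  local vmx="$1"
  local memsize min key
  memsize=$(sed -n 's/^memsize = "\([0-9]*\)"/\1/p' "$vmx")
  for key in sched.mem.min sched.mem.minSize; do
    min=$(sed -n "s/^${key} = \"\([0-9]*\)\"/\1/p" "$vmx")
    if [ -n "$min" ] && [ "$min" -lt "$memsize" ]; then
      echo "$key ($min) is lower than memsize ($memsize); update it to $memsize"
    fi
  done
}
```

For example, `check_vmx_mem /vmfs/volumes/<local-datastore>/ServiceVM_Centos/ServiceVM_Centos.vmx` (path is an example) prints one line per field that needs to be raised, and nothing when the file is consistent.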
KB2263
NCC Health Check: check_storage_access
The NCC health check check_storage_access verifies if the storage is accessible from the host and whether essential configurations are in place on the Nutanix cluster.
The NCC health check check_storage_access verifies if the storage is accessible from the host and whether a few essential configurations are in place on the Nutanix cluster. This check was designed specifically for Hyper-V clusters, but starting from NCC 3.6, it runs on ESXi clusters as well. Hyper-V This check verifies the following configuration on the Hyper-V cluster or hosts: The storage cluster name is mapped to 192.168.5.2 on the host. Storage is accessible by the storage cluster name on the host. If Kerberos authentication for SMB is not enabled, SPNs for the Nutanix storage cluster must not be set. If Kerberos authentication is not enabled, RequireSecuritySignature must be set to False in the SMB client configuration on the host. ESXi This check verifies the following configuration on ESXi: The datastore is mapped to the ESXi host using the internal 192.168.5.2 IP address. The datastore and container names are identical. No external datastores are mapped to the ESXi hosts. The datastore is mounted on all hosts. The check logic excludes RF1 containers. From 4.6.0, the check reports a datastore that is not mounted on all hosts only when a VM is present on that datastore; if no VM is present, the check passes even if the datastore is not mounted on all hosts. Running the NCC Check Run the NCC check as part of the complete NCC Health Checks: nutanix@cvm$ ncc health_checks run_all Or run the check_storage_access check separately: nutanix@cvm$ ncc health_checks hypervisor_checks check_storage_access You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
Sample output For Status: PASS Running /health_checks/hypervisor_checks/check_storage_access on the node [ PASS ] For Status: WARN Running : health_checks hypervisor_checks check_storage_access For Status: FAIL Case 1The following names are not mapped to 192.168.5.2 in C:\Windows\System32\Drivers\Etc\Hosts: nutanix-clusternutanix-cluster.YOURDOMAIN.COM Running /health_checks/hypervisor_checks/check_storage_access on all nodes [ FAIL ] Case 2The storage is not accessible. Kerberos is enabled in the storage cluster, and the following SPN entries are present in AD: cifs/nutanix-clustercifs/nutanix-cluster.YOURDOMAIN.COM Running /health_checks/hypervisor_checks/check_storage_access on all nodes [ FAIL ] Case 3The storage is accessible via IP address but not name. Kerberos is enabled in the storage cluster, and the following SPN entries are present in AD: cifs/nutanix-clustercifs/nutanix-cluster.YOURDOMAIN.COM Running /health_checks/hypervisor_checks/check_storage_access on all nodes [ FAIL ] Case 4The storage is not accessible. Kerberos is not enabled in the storage cluster, but the SMB client configuration requires the security signatures to be enabled. Running /health_checks/hypervisor_checks/check_storage_access on all nodes [ FAIL ] Case 5. ESXiDatastore is not mapped to 192.168.5.2 IP address: Running /health_checks/hypervisor_checks/check_storage_access on all nodes [ FAIL ] Case 6. ESXiDatastore and container names are not identical: Running /health_checks/hypervisor_checks/check_storage_access on all nodes [ FAIL ] Note: If the names are different, then the stats for the VMs residing on this datastore would be missing on Prism. Case 7. ESXiA non-Nutanix datastore is mounted to any of the ESXi servers and triggers a fail of the check (since NCC 4.1.0): Running /health_checks/hypervisor_checks/check_storage_access on all nodes [ FAIL ] Case 8. 
ESXi HostDatastore is available in Zookeeper but not mounted on ESXi host Running : health_checks hypervisor_checks check_storage_access Case 9. ESXi Host SRA/SRM protected containerAfter performing a failover with SRM, it is expected to see a WARN message when running NCC because unmounting the SRM-protected container from the source is the normal SRM workflow. If you notice this WARN for an SRM-protected container it is safe to ignore the message. For Status: INFOCase 10. ESXi Hosts Metro setup with local containers not in metro The check may return an INFO for Metro / ESX setup (2 Nutanix/ESX clusters in 1 single ESX cluster in vCenter), where local containers with registered VMs are used on 1 side of the metro but not on the other side.Below output would be seen on the side where the container is not in use and doesn't exist: Detailed information for check_storage_access: Output messaging [ { "Check ID": "Check if storage is accessible from the host" }, { "Check ID": "Storage is not properly configured on the host." }, { "Check ID": "Review KB 2263." }, { "Check ID": "All storage I/O might go to a single node." }, { "Check ID": "A106463" }, { "Check ID": "Unable to access storage from the host." }, { "Check ID": "Unable to access storage from the host." }, { "Check ID": "Unable to access storage from the host. Ensure no external storage is connected." }, { "Check ID": "This check is scheduled to run every 48 hours." }, { "Check ID": "This check will generate a Warning alert after 1 failure" }, { "Check ID": "106449" }, { "Check ID": "Check if storage is accessible from the host" }, { "Check ID": "Storage is not properly configured on the host." }, { "Check ID": "Review KB 2263." }, { "Check ID": "All storage I/O might go to a single node." }, { "Check ID": "This check is not scheduled to run on an interval." }, { "Check ID": "This check does not generate an alert." 
}, { "Check ID": "106478" }, { "Check ID": "Check if external datastore(s) is connected to a host" }, { "Check ID": "Non-supported configuration." }, { "Check ID": "Unmount/Disconnect external datastores from host(s). For more details review KB 2263" }, { "Check ID": "Non-supported configuration" }, { "Check ID": "A106478" }, { "Check ID": "This is a non supported configuration." }, { "Check ID": "Found external datastores connected to host(s)" }, { "Check ID": "Found external datastores external_datastores are connected to ESX host host_ip. Please note this is an unsupported configuration and may impact cluster operations." }, { "Check ID": "This check is not scheduled to run on an interval." }, { "Check ID": "This check will generate a Warning alert after 1 failure" } ]
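Two of the ESXi-side conditions (datastore mounted from 192.168.5.2, volume name identical to the exported share name) can be screened from the output of `esxcli storage nfs list`. A minimal bash/awk sketch, assuming the column layout shown in the sample output later in this article and datastore names without spaces (the function name is illustrative):

```shell
#!/usr/bin/env bash
# Screen `esxcli storage nfs list` output (on stdin) for datastores that are
# not mounted from 192.168.5.2 or whose volume name differs from the share.
# Assumes columns: Volume Name, Host, Share, ... and names without spaces.
screen_nfs_list() {
  awk 'NR > 2 {
    vol = $1; host = $2; share = $3
    sub(/^\//, "", share)          # drop the leading "/" of the share path
    if (host != "192.168.5.2")
      print vol ": mounted from " host " instead of 192.168.5.2"
    if (vol != share)
      print vol ": volume name differs from share name " share
  }'
}
```

For example, `esxcli storage nfs list | screen_nfs_list` on each ESXi host prints one line per mismatch and nothing when the mappings are consistent; flagged non-Nutanix entries correspond to the external-datastore condition (Case 7).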
A disabled Metro Availability protection domain can trigger the following result: FAIL: Failed to get ESXi storage information from hosts:... The Metro container is in read-only status, causing ESXi commands to hang and the check to timeout. A more detailed explanation is provided in KB-8283 http://portal.nutanix.com/kb/8283. If the Metro configuration in Prism matches the described scenario, follow KB-8283 http://portal.nutanix.com/kb/8283 to fix the problem. [ { "Summary": "Resolution for Case 1", "Action": "Probable Cause: Storage cluster name is not mapped to 192.168.5.2 on the host\n\n\t\t\t\n\t\t\t\tConnect to the Hyper-V host which failed the test and edit the file C:\\Windows\\System32\\drivers\\etc\\hosts:\n\n\t\t\t\tPS C:\\> cd C:\\Windows\\System32\\drivers\\etc\nPS C:\\Windows\\System32\\drivers\\etc> notepad hosts\n\t\t\t\t\n\t\t\t\tRemove all leading # from the entries for your cluster or add the cluster name to map to the IP:\n\n\t\t\t\t# Copyright (c) 1993-2009 Microsoft Corp.\n#\n# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.\n# 127.0.0.1 localhost\n# ::1 localhost\n192.168.5.2 nutanix-cluster\n192.168.5.2 nutanix-cluster.YOURDOMAIN.COM" }, { "Summary": "Resolution for Case 2", "Action": "Probable Cause: Nutanix cluster is stopped, Stargate is failing on CVMs (Controller VMs) *or* Kerberos is enabled and the required SPNs are not created in Active Directory\n\n\t\t\tCheck overall cluster status with the following commands:\n\t\t\t\tnutanix@cvm$ cluster status\nnutanix@cvm$ ncc health_checks run_all\nnutanix@cvm$ ncli smb-server get-configuration\n\n\t\t\t\tNOTE: In the newer release, the command \"ncli smb-server get-configuration\" may not exist and you will get the error \"Invalid Command\". 
So try:\n\n\t\t\t\tnutanix@cvm$ ncli smb-server get-kerberos-status\n\t\t\t\tIf the cluster is not running, verify if the cluster or any of the CVM was stopped/down for maintenance.\n\t\t\t\tIf confirmed that this cluster can be started, use the command:\n\n\t\t\t\tnutanix@cvm$ cluster start\n\t\t\t\tIf the SPNs are missing in Active Directory, use the \"Active Directory Users and Computers\" tool. Click on View --> Advanced Features.\n\t\t\t\tRight-click on the Nutanix cluster SMB object and set the correct SPNs under the \"Attribute Editor\" --> \"servicePrincipalName\".\n\n\t\t\t\tThe entries should be made line by line and NOT separated by commas." }, { "Summary": "Resolution for Case 3", "Action": "Probable Cause: Kerberos authentication failure *or* Kerberos is disabled, and SPNs are still set in Active Directory\n\n\t\t\tCheck if Kerberos is possibly failing due to NTP/time difference in your environment; see KB 1656.\n\n\t\t\tIf the administrator and the SPNs disable Kerberos are still set (for example, as a leftover from previous tests), remove the SPNs using the Resolution for Case 2 described above." 
}, { "Summary": "Resolution for Case 4", "Action": "Probable Cause: Kerberos authentication is disabled on the Nutanix cluster, but Hyper-V host SMB configuration is set to \"RequireSecuritySignature=True\"\n\n\t\t\tCheck the Hyper-V host SMB configuration.\n\t\t\t\tPS C:\\> Get-SmbClientConfiguration\nConnectionCountPerRssNetworkInterface : 4\nDirectoryCacheEntriesMax : 16\nDirectoryCacheEntrySizeMax : 65536\nDirectoryCacheLifetime : 10\nEnableBandwidthThrottling : True\nEnableByteRangeLockingOnReadOnlyFiles : True\nEnableLargeMtu : True\nEnableMultiChannel : True\nDormantFileLimit : 1023\nEnableSecuritySignature : True\nExtendedSessionTimeout : 1000\nFileInfoCacheEntriesMax : 64\nFileInfoCacheLifetime : 10\nFileNotFoundCacheEntriesMax : 128\nFileNotFoundCacheLifetime : 5\nKeepConn : 600\nMaxCmds : 50\nMaximumConnectionCountPerServer : 32\nOplocksDisabled : False\nRequireSecuritySignature : True\nSessionTimeout : 60\nUseOpportunisticLocking : True\nWindowSizeThreshold : 1\n\t\t\t\tIf the option \"RequireSecuritySignature\" is enabled (True), either disable it with the command below or also enable it on Nutanix storage for parity (see Case 3):\n\t\t\t\tPS C:\\> set-SmbClientConfiguration -RequireSecuritySignature $False" }, { "Summary": "Resolution for Cases 5&6", "Action": "Execute the following command in ESXi to verify datastore configurations\n\n\t\t\troot@esxi$ esxcli storage nfs list\n\t\t\tSample output:\n\n\t\t\tVolume Name Host Share Accessible Mounted Read-Only isPE Hardware Acceleration\n---------------------- ---------- ------- -------- ----- -------------- ------- -----------------------------\nNTX-PRD-Main-Container 192.168.5.2 /NTX-PRD-Main-Container true true false false Supported\nNTX-PROTECT-CONTAINER 192.168.5.2 /NTX-PROTECT-CONTAINER true true false false Supported\nNTX-PROD-PLACEHOLDER 192.168.5.2 /NTX-PROD-PLACEHOLDER true true false false Supported\n\n\t\t\tIf the concerned datastore is in the local cluster, then make sure that each datastore 
is only mapped to 192.168.5.2.\t\t\tAlso, the Volume name and share name should be identical.\t\t\tIf the datastore presented to vCenter is external to the local cluster, then ignore the check and upgrade your NCC to the latest version available in the Nutanix portal. Run the NCC check again after the NCC upgrade.\t\t\tFailure of this check may cause ESXi upgrades pre-checks to fail. Make sure datastore/Container names are identical.\t\t\tIf the steps mentioned above do not resolve the issue, consider engaging Nutanix Support." }, { "Summary": "Resolution for Case 7", "Action": "While there are situations where a datastore is mounted temporarily (Backup vendors sometimes use this method to restore VMs), we advise the customer to unmount permanently connected datastores as these\t\t\texternally connected non-Nutanix datastores could affect the stability of the overall cluster. Every IO goes through the single vmkernel within ESXi. If there is an unpredicted issue with these datastores, it could affect the uptime and overall stability of the Nutanix cluster." }, { "Summary": "Resolution for Case 8", "Action": "The WARN will be raised if a datastore is created on the Nutanix cluster side but missing from the ESXi hosts datastores list.\t\t\tExecute the following command from any CVM to verify the container exists in the Nutanix cluster:\n\t\t\tnutanix@cvm$ ncli ctr list\n\t\t\tExecute the following command to verify if the container is missing from ESXi host:\n\n\t\t\troot@esxi$ esxcli storage nfs list\nroot@esxi$ df -h\n\n\n\t\t\tIf the container exists in the Nutanix cluster but is not mounted/created on particular ESXi hosts, either mount it on the ESXi hosts or delete it from the Nutanix cluster if it's not being used.\t\t\tRefer to Modifying a Storage Container | Prism Web Console Guide to delete or mount the container on ESXi hosts.\n\n\t\t\tIf the container reported at the alert has the format