A “Log Disk Exhaustion” alert means the /storage/log partition on your vCenter Server Appliance (VCSA) is nearly full, typically past 75% capacity. This partition stores logs for all core vCenter services, including SSO, the VMware Directory Service, and vpxd.
Once it fills up completely, vCenter stops writing logs and shuts down critical services to prevent data corruption. You lose access to the vSphere Client and can no longer manage your ESXi hosts until the space is freed or expanded.
Identifying the source of the bloat is the first step toward a permanent fix. Here are the six most common reasons the vCenter storage log fills up:
A known issue in vCenter 7.0 and 8.0 involves the VMware Authentication Framework Daemon (vmafdd).
A registry mismatch causes the service to ignore log rotation rules, leading to a single massive log file that can consume the entire partition.
In vCenter 8.0 versions prior to Update 3, support bundles generated for troubleshooting are sometimes not automatically purged. These bundles are large compressed files that can quickly trigger log disk exhaustion on VCSA if multiple bundles accumulate.
In vCenter 7.0 environments older than Update 3c, a certificate validation failure can trigger a runaway logging loop. The system repeatedly generates pod-startup.log files, filling the disk with redundant error messages rapidly.
High-traffic environments or those with frequent API calls may see significant growth in localhost_access.log and catalina.log files.
These are located within the Single Sign-On (SSO) and Lookup Service directories and can fail to rotate correctly in older 6.x and 7.x builds.
The default 10 GB allocation for /storage/log is often insufficient for large-scale environments with hundreds of hosts or thousands of VMs. High logging verbosity or rapid object churn can push usage past the limit faster than expected.
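To catch this growth before it trips the alert, a small shell check can be run periodically (for example from cron). The function below is a sketch, not VMware tooling: the mount point and limit are placeholder arguments, and on a VCSA you would point it at /storage/log.

```shell
# check_usage MOUNT LIMIT: warn (and return non-zero) when the mount's
# usage percentage is at or above LIMIT. Sketch to adapt; the mount
# point and limit here are assumptions, not VMware defaults.
check_usage() {
  mount=$1
  limit=$2
  # df -P emits one stable POSIX-format line per filesystem; column 5
  # is the usage percentage, e.g. "42%".
  usage=$(df -P "$mount" | awk 'NR==2 {sub(/%/, "", $5); print $5}')
  if [ "$usage" -ge "$limit" ]; then
    echo "WARNING: $mount at ${usage}% (limit ${limit}%)"
    return 1
  fi
  echo "OK: $mount at ${usage}%"
}

# On a VCSA this would be: check_usage /storage/log 75
check_usage / 101   # a limit of 101% can never trip, so this prints OK
```

A non-zero return code makes it easy to wire the check into cron or a monitoring agent.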
vCenter uses an embedded PostgreSQL database to store inventory and configuration data. Environments following STIG hardening may enable pgaudit for this database.
If misconfigured, every database transaction gets recorded, filling the log partition very quickly.
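If pgaudit is required by your hardening baseline, the usual fix is to narrow what it records rather than disable it. The snippet below is a hedged sketch of the relevant postgresql.conf settings, not VMware's shipped configuration; the exact audit classes your STIG baseline mandates may differ.

```conf
# Audit only schema and role changes instead of every statement.
# 'all' records every transaction and can flood /storage/log.
pgaudit.log = 'ddl, role'
# Avoid logging full statement parameters, which bloats each entry.
pgaudit.log_parameter = off
```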
Before deleting files or expanding disks, identify exactly which files are consuming the space. Acting without a clear diagnosis risks removing critical system data or missing a recurring underlying issue.
Take a VM-level snapshot or a fresh backup of the VCSA before performing any disk operations or file deletions. Resizing disks without a safety net can cause partition table corruption and leave the VCSA unbootable. If vCenter is part of a Linked Mode group, review the restore implications before proceeding.
If the UI is still accessible, log in to the vCenter Appliance Management Interface (VAMI) at https://<vcenter-fqdn>:5480 and check the disk usage reported for /storage/log. A critical health status or usage above 75% confirms the alert.
If the VAMI is unresponsive, log in to the VCSA via SSH as the root user. If you land in the Appliance Shell, type shell to switch to the Bash interface.
Run the following command to view disk usage in a readable format:
df -h
To filter for partitions at or above 78% usage, run:
df -h | awk '0+$5 >= 78 {print}'
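The 0+$5 expression forces awk to treat the percentage column (a string such as 82%) as a number, which also filters out the header line. A quick self-contained illustration against sample df output (the device names and sizes below are made up for the demo):

```shell
# Pipe sample df -h output (header plus three rows) through the same
# filter; only the row at or above the threshold survives.
printf '%s\n' \
  'Filesystem              Size  Used Avail Use% Mounted on' \
  '/dev/sda3                11G  2.1G  8.2G  21% /' \
  '/dev/mapper/log_vg-log   10G  8.3G  1.2G  82% /storage/log' \
  '/dev/mapper/seat_vg-seat 25G   12G   12G  50% /storage/seat' |
awk '0+$5 >= 78 {print $6}'
# prints: /storage/log
```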
Once you have confirmed /storage/log is the problem, run the following command to list the 20 largest files and directories:
du -ah /storage/log/ | sort -h -r | head -n 20
Look for unusually large files (several GBs) or a high volume of small, repeated log files.
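To see how the pipeline ranks entries, you can rehearse it on a throwaway directory first; the directory and file names below are invented for the demo:

```shell
# Build a scratch tree with one large and one small log, then rank
# entries largest-first the same way as on the VCSA.
tmp=$(mktemp -d)
mkdir -p "$tmp/vpxd" "$tmp/sso"
head -c 1048576 /dev/zero > "$tmp/vpxd/vpxd.log"   # 1 MiB
head -c 4096    /dev/zero > "$tmp/sso/sso.log"     # 4 KiB
du -ah "$tmp" | sort -h -r | head -n 4             # largest entries first
rm -rf "$tmp"
```

The 1 MiB file and its parent directory sort above the 4 KiB file, which is exactly the pattern to look for under /storage/log.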
The -h flag in sort may not be supported on all VCSA builds. If the command returns an error, use sort -r instead and compare raw byte values.
A partition can also behave as if it is full even when free space is available. This happens when it runs out of inodes, the index nodes the filesystem uses to track individual files. Millions of tiny log files can exhaust inodes without filling the disk by size.
Check inode usage with:
df -i
If IUse% for /storage/log is at or near 100%, inode exhaustion is the likely cause.
If Step 4 revealed high inode usage, run this command to find which directory holds the most files:
find /storage/log -type d -exec sh -c "echo -n '{}: '; ls -1 '{}' | wc -l" \; | sort -n -k 2
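The same idea can be rehearsed on a scratch tree; the variant below passes each directory as a positional argument to sh -c, which avoids the quoting pitfalls of embedding {} in the command string. Directory and file names are illustrative only:

```shell
# Scratch tree: one quiet directory, one noisy one. Count files per
# directory and sort by count; the noisiest directory lands last.
tmp=$(mktemp -d)
mkdir -p "$tmp/quiet" "$tmp/noisy"
touch "$tmp/quiet/a.log"
for i in 1 2 3 4 5; do touch "$tmp/noisy/pod-startup-$i.log"; done
# Passing the directory as "$1" avoids quoting issues with {} inside
# the sh -c string.
find "$tmp" -type d -exec sh -c 'printf "%s: " "$1"; ls -1 "$1" | wc -l' _ {} \; |
sort -n -k 2
rm -rf "$tmp"
```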
Once you have identified the cause, you can resolve the issue by either purging unnecessary log data or increasing the available storage. Choose the method based on what your diagnosis reveals.
Several instances of log disk exhaustion in vCenter 7 and 8 are caused by software bugs that prevent logs from rotating or compressing correctly. Check the table below before proceeding — if your version matches a known issue, apply the corresponding fix first.
| vCenter Version | Known Issue | Fix |
|---|---|---|
| vCenter 6.0 before U3 | cloudvm-ram-size.log rotation broken | Upgrade to latest 6.0/6.5 build |
| vCenter 7.0 before U1 | SSO logs not compressed | Upgrade to 7.0 U1 or later |
| vCenter 7.0 before U3c | pod-startup.log runaway | KB workaround or upgrade to 7.0 U3c |
| vCenter 7.0 / 8.0 before U1 | vmafdd.log registry mismatch | Registry fix (KB 318575) |
| vCenter 8.0 before U3 | Support bundles not auto-deleted | Manual cleanup or upgrade to 8.0 U3 |
This method provides immediate relief by removing old, rotated log archives. Only delete compressed or numbered archive files (ending in .gz, .zip, or .log.1). Do not delete active log files ending in .log.
find /storage/log -name "*.gz" -mtime +7 -type f -delete
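Because find ... -delete is irreversible, it is worth previewing the match list first. The demo below rehearses both steps on a scratch directory with made-up file names; touch -d to backdate a file assumes GNU coreutils, which the VCSA's Photon OS base provides.

```shell
tmp=$(mktemp -d)
touch -d '10 days ago' "$tmp/vpxd-1.log.gz"   # old archive: should match
touch "$tmp/vpxd-2.log.gz"                    # fresh archive: should be kept
# Dry run: -print instead of -delete lists what would be removed.
find "$tmp" -name '*.gz' -mtime +7 -type f -print
# Real deletion, same predicates:
find "$tmp" -name '*.gz' -mtime +7 -type f -delete
ls "$tmp"                                     # only the fresh archive remains
rm -rf "$tmp"
```

Running the -print form against /storage/log first lets you sanity-check the match list before committing to -delete.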
If the du output from your diagnosis showed large files under /storage/log/vc-support-bundles/, remove them with:
rm -rf /storage/log/vc-support-bundles/*
After clearing space, restart the vCenter services:
service-control --stop --all && service-control --start --all
If services fail to restart because the disk is still too full, delete additional .gz files manually until usage drops below 95%, then retry.
If the environment has simply outgrown its default 10 GB allocation, expanding the virtual disk is the more sustainable fix.
The virtual disk backing /storage/log is usually VMDK 5 on the appliance, but verify this in the VAMI or by running lsblk in the shell before making any changes. After increasing the disk size in the vSphere Client, run the built-in expansion script:
/usr/lib/applmgmt/support/scripts/autogrow.sh
Then run df -h again to confirm that /storage/log reflects the new size.
A VM-level snapshot is a good starting point, but it is not a substitute for a proper backup. Snapshots are stored on the same datastore as the VCSA and can be invalidated by the same storage issues you are trying to fix. For a production vCenter environment, a dedicated backup solution gives you a reliable, independent recovery point before you touch any disk or file.
i2Backup is an enterprise backup solution that supports agentless VMware VM backup using native virtualization APIs, with no agents to install on the VCSA itself.
Before You Proceed
Take a full backup of the VCSA with i2Backup by info2soft before resizing any disk or deleting any files. If something goes wrong during the fix, a clean recovery point means the difference between a quick restore and a full vCenter rebuild.
Q1: Is it safe to delete files in /storage/log?
It depends on which files you delete. Compressed archive files ending in .gz, .zip, or .log.1 are safe to remove. Active log files ending in .log should not be touched — deleting them can disrupt running services or cause vCenter to behave unexpectedly.
Q2: Can I fix log disk exhaustion without SSH access?
In most cases, no. The VAMI provides visibility into disk usage but does not offer tools to delete log files or resize partitions directly.
SSH access is required for the cleanup and expansion steps covered in this guide. If SSH is disabled, enable it temporarily through the VAMI under Access > Edit before proceeding.
Q3: What is the difference between Log Disk and SEAT Disk exhaustion?
The /storage/log partition stores service logs generated by vCenter components. The SEAT (Statistics, Events, Alarms, and Tasks) disk, typically /storage/seat, stores historical performance data and event records.
Both can fill up independently. The symptoms look similar, but the fix targets a different partition and a different set of files.
Q4: Why is /storage/archive always at 100%? Is that normal?
Yes. The /storage/archive partition is designed to operate at or near 100% capacity. vCenter uses it as a staging area for log archiving, and the system manages its contents automatically. A full /storage/archive is not an alert condition and does not require any action.
Q5: Will expanding the disk cause downtime?
VCSA supports hot disk expansion, so you can increase the virtual disk size without shutting down the appliance. However, downtime can still occur if the partition is already completely full and services have stopped.
In that case, clear enough space first using Method 1 before attempting the expansion.
Q6: What if vCenter is already inaccessible (503 error)?
If the vSphere Client returns a 503 error and SSH is unresponsive, the appliance may have stopped most services due to a full disk. Access the VCSA through its VM console by connecting to the ESXi host's IP directly, bypassing vCenter.
Log in via the console, free up space by deleting .gz files, then restart services with service-control --start --all. If the disk is too full to write anything, you may need to expand the VMDK from the ESXi host level before the appliance can recover.
Log disk exhaustion on vCenter is a recoverable problem, but it needs to be addressed quickly. A full /storage/log partition will take your management plane offline, even if your VMs continue to run.
The fix follows a straightforward sequence: take a backup, diagnose which files are consuming the space, check whether a known version-specific bug applies to your environment, then either clean up the log files or expand the partition. For environments that have outgrown the default 10 GB allocation, expanding the disk is the more lasting solution.
Once the issue is resolved, consider setting up a regular backup schedule for the VCSA with a dedicated tool like i2Backup to ensure you always have a clean recovery point before your next maintenance window.