SharePoint storage has a habit of creeping up quietly—often without any obvious single cause. Recently, while working with a customer, I noticed their tenant was approaching its SharePoint storage limit far faster than expected. Rather than immediately pushing the problem back to site owners with the usual “decide what to delete” conversation, I wanted to see if there were any low-effort, low-risk optimizations available at the platform level.
It turns out there is a very effective one hiding in plain sight: file versioning.
Even in a brand-new tenant, SharePoint is configured to keep up to 500 major versions per file, and the default cleanup behavior is set to Manual. In environments with active collaboration, this can result in hundreds of gigabytes being consumed by old file versions alone—often without anyone realizing it.
In 2024, Microsoft introduced an alternative called Intelligent Versioning, which automatically trims older versions using a tiered, age-based retention model. For most users, this change is completely transparent in day-to-day work, yet it can significantly reduce storage consumption. The catch? While you can enable it safely at tenant level, it only applies to newly created sites. To get the real benefits, existing sites require a bit of PowerShell.
Enabling Intelligent Versioning
The setting can be found in the SharePoint Admin Center under Settings > Version history, where the version history limit can be switched to Automatic.
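If you prefer PowerShell, the same tenant-level default can be read and flipped from the SPO Management Shell. A minimal sketch, assuming a recent Microsoft.Online.SharePoint.PowerShell module (older builds don’t expose these settings):

# Inspect the current tenant-level versioning defaults
Get-SPOTenant | Select-Object EnableAutoExpirationVersionTrim, MajorVersionLimit, ExpireVersionsAfterDays

# Make Intelligent Versioning the default for newly created sites
Set-SPOTenant -EnableAutoExpirationVersionTrim $true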

When the Automatic version limit is in effect, SharePoint uses a built-in tiered retention algorithm based on version age. In simple terms, the older a version is, the less frequently it’s kept. Here is a summary of the default intelligent retention logic, with a rough worked example after the table:
| Age of File Version | Retention by Automatic Cleanup |
| --- | --- |
| 0–30 days old | Keep all versions. Every saved version from the last 30 days is preserved (up to 500 versions). This ensures you can track all recent changes in detail. |
| 31–60 days old | Keep hourly versions. For versions in this range, the system prunes away some duplicates, aiming to retain roughly one version per hour of edit activity. In practice, if multiple versions were saved within the same hour, only the latest from that hour might be kept. |
| 61–180 days old (2–6 mo.) | Keep daily versions. Versions older than two months get further thinned out to about one per day, preserving a daily snapshot of the file’s state. |
| Over 180 days old | Keep weekly versions. Very old versions (beyond ~6 months) are trimmed to approximately one per week, maintaining a weekly snapshot over long periods. |
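To make the table concrete, here’s a rough back-of-the-envelope sketch (my own reading of the tiers above, not an official formula) estimating how many versions survive for a hypothetical one-year-old file:

# Assumptions (mine, for illustration): 8 edits/day spread over 4 distinct hours,
# 5 working days per week, and the file is exactly one year old
$tier1 = (30 / 7 * 5) * 8        # 0-30 days old: every version kept
$tier2 = (30 / 7 * 5) * 4        # 31-60 days old: roughly one per edit-hour
$tier3 = (120 / 7 * 5)           # 61-180 days old: roughly one per day
$tier4 = (365 - 180) / 7         # over 180 days old: roughly one per week

$kept  = [int]($tier1 + $tier2 + $tier3 + $tier4)
$total = [int](365 / 7 * 5 * 8)
"About $kept of $total versions kept (~$([math]::Round($kept / $total * 100))%)"

Under those assumptions, roughly four out of five versions eventually get trimmed; the actual bytes saved depend on file sizes and edit patterns, not just version counts.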
To me, this seems like a reasonable approach that shouldn’t impact the majority of users in their daily work. Unfortunately, Microsoft doesn’t make this a one-click option across all your sites, since the tenant-wide setting shown above only applies to newly created sites (I’d wager SharePoint storage bills account for a good chunk of revenue). The upside is that you can safely enable it tenant-wide without causing any disruption, but to reap the full benefits we need to do some PowerShell.
Enabling it on existing sites
Since the customer had additional backup software in place, they had no problem with me enabling this across all their sites using a small PowerShell script I sourced from here, with some heavy tweaking of my own to handle 429 throttling errors.
Caution! Don’t just run this script without first verifying that you have valid backups in place. It WILL delete versions that are otherwise unrecoverable. Understand what you are doing, and start small by testing a couple of sites before rolling it out to everything.
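If you want a what-if view before anything is deleted, newer builds of the SPO module can queue an expiration report for a site. The cmdlet names and parameters below are my understanding of the current module (and both URLs are placeholders), so verify with Get-Help before relying on them:

# Queue a report of which versions *would* be trimmed - nothing is deleted
$site   = 'https://REPLACETHISWITHYOURS.sharepoint.com/sites/TestSite'
$report = "$site/Shared Documents/VersionReport"
New-SPOSiteFileVersionExpirationReportJob -Identity $site -ReportUrl $report

# Check on the report job afterwards
Get-SPOSiteFileVersionExpirationReportJobProgress -Identity $site -ReportUrl $report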
# Note: Connect-SPOService needs the tenant *admin* URL (the -admin suffix matters)
$SharepointAdminURL = 'https://REPLACETHISWITHYOURS-admin.sharepoint.com'
# Import module only if not already loaded
if (-not (Get-Module -Name 'Microsoft.Online.SharePoint.PowerShell')) {
    Write-Verbose 'Importing Microsoft.Online.SharePoint.PowerShell module...' -Verbose
    Import-Module Microsoft.Online.SharePoint.PowerShell -UseWindowsPowerShell -Scope Global -DisableNameChecking
    Write-Verbose 'Module imported successfully.' -Verbose
} else {
    Write-Verbose 'Microsoft.Online.SharePoint.PowerShell module is already loaded, skipping import.' -Verbose
}
# Connect to SPO only if not already connected
try {
    $null = Get-SPOTenant -ErrorAction Stop
    Write-Verbose "Already connected to SharePoint Online ($SharepointAdminURL), skipping sign-in." -Verbose
} catch {
    Write-Verbose "Not connected to SharePoint Online. Initiating sign-in to $SharepointAdminURL..." -Verbose
    Connect-SPOService -Url $SharepointAdminURL -UseSystemBrowser $true
    Write-Verbose 'Connected to SharePoint Online successfully.' -Verbose
}
# Proactive per-request pacing delay (milliseconds) to reduce the chance of hitting throttle limits
# Increase this value if you are still seeing frequent 429 responses
$RequestPacingMs = 300
# Helper function to invoke SPO cmdlets with retry logic for HTTP 429 throttling
function Invoke-SPOWithRetry {
    param(
        [Parameter(Mandatory)]
        [scriptblock]$ScriptBlock,
        [int]$MaxRetries = 10,
        [int]$InitialDelaySeconds = 10,
        [int]$MaxDelaySeconds = 300
    )
    [int]$attempt = 0
    while ($true) {
        try {
            return & $ScriptBlock
        } catch {
            $statusCode = $null
            [int]$retryAfterSeconds = 0
            if ($_.Exception -is [System.Net.WebException] -and $_.Exception.Response) {
                $statusCode = [int]$_.Exception.Response.StatusCode
                # Respect the Retry-After header when present - SharePoint sets this on 429 responses
                $retryAfterHeader = $_.Exception.Response.Headers['Retry-After']
                if ($retryAfterHeader -and [int]::TryParse($retryAfterHeader, [ref]$retryAfterSeconds)) {
                    # Header parsed successfully; $retryAfterSeconds is now set
                } else {
                    $retryAfterSeconds = 0
                }
            } elseif ($_.Exception.Message -match '429|throttl') {
                $statusCode = 429
            }
            if ($statusCode -eq 429 -and $attempt -lt $MaxRetries) {
                $attempt++
                if ($retryAfterSeconds -gt 0) {
                    # Use the server-specified delay, adding a small jitter to avoid synchronized retries
                    $jitter = Get-Random -Minimum 1 -Maximum 6
                    $delay = $retryAfterSeconds + $jitter
                    Write-Warning "SharePoint throttling (HTTP 429) - Retry-After header: ${retryAfterSeconds}s. Waiting ${delay}s before retry (attempt $attempt of $MaxRetries)..."
                } else {
                    # Exponential backoff with jitter as fallback when no Retry-After header is present
                    $baseDelay = [math]::Min($InitialDelaySeconds * [math]::Pow(2, $attempt - 1), $MaxDelaySeconds)
                    $jitter = Get-Random -Minimum 1 -Maximum ([math]::Max(2, [int]($baseDelay * 0.2)))
                    $delay = [int]($baseDelay + $jitter)
                    Write-Warning "SharePoint throttling (HTTP 429) - No Retry-After header. Exponential backoff: waiting ${delay}s (attempt $attempt of $MaxRetries)..."
                }
                Start-Sleep -Seconds $delay
            } else {
                throw
            }
        }
    }
}
Write-Verbose 'Retrieving all SharePoint Online sites...' -Verbose
$allsites = Invoke-SPOWithRetry -ScriptBlock { Get-SPOSite -Limit All }
$allsitescount = $allsites.Count
Write-Verbose "Found $allsitescount site(s) total." -Verbose
# Get Batch Delete Status
Write-Verbose 'Collecting file version batch delete job progress for all sites...' -Verbose
$list = New-Object 'Collections.Generic.List[psobject]'
[int]$i = 0
foreach ($siteUrl in $allsites.Url) {
    try {
        $progress = Invoke-SPOWithRetry -ScriptBlock {
            Get-SPOSiteFileVersionBatchDeleteJobProgress -Identity $siteUrl |
                Select-Object Url, Status, FilesProcessed, StorageReleasedInBytes,
                @{ Name = 'ReleasedGB'; Expression = { [math]::Round(((($_.StorageReleasedInBytes / 1000) / 1000) / 1000), 2) } }
        }
        $list.Add($progress)
    } catch {
        Write-Warning "Failed to retrieve batch delete job progress for $siteUrl : $_"
    }
    $i++
    Write-Progress -Activity 'Collecting batch delete job status' -Status "$i of $allsitescount : $siteUrl" -PercentComplete (($i / $allsitescount) * 100)
    Start-Sleep -Milliseconds $RequestPacingMs
}
Write-Progress -Activity 'Collecting batch delete job status' -Completed
$totalReleasedGB = ($list.ReleasedGB | Measure-Object -Sum).Sum
Write-Output "Total storage released across all sites: $totalReleasedGB GB"
# Show a breakdown of statuses found
$statusSummary = $list | Group-Object -Property Status | Select-Object Name, Count
Write-Verbose 'Site status breakdown:' -Verbose
$statusSummary | ForEach-Object { Write-Verbose " $($_.Name): $($_.Count)" -Verbose }
# Order cleanup of 'NoRequestFound' sites
$sitesNeedingCleanup = $list | Where-Object -Property Status -EQ 'NoRequestFound'
[int]$nocleanupcount = $sitesNeedingCleanup.Count
Write-Verbose "$nocleanupcount site(s) with status 'NoRequestFound' will have auto-expiration version trim enabled and a batch delete job submitted." -Verbose
[int]$ii = 0
foreach ($siteUrl in $sitesNeedingCleanup.Url) {
    try {
        Invoke-SPOWithRetry -ScriptBlock { Set-SPOSite -Identity $siteUrl -EnableAutoExpirationVersionTrim $true -Confirm:$false }
        Invoke-SPOWithRetry -ScriptBlock { New-SPOSiteFileVersionBatchDeleteJob -Identity $siteUrl -Automatic -Confirm:$false }
        $ii++
        Write-Progress -Activity 'Enabling auto-expiration version trim' -Status "$ii of $nocleanupcount : $siteUrl" -PercentComplete (($ii / $nocleanupcount) * 100)
        Write-Verbose "[$ii/$nocleanupcount] Enabled auto-expiration and submitted batch delete job for: $siteUrl" -Verbose
    } catch {
        Write-Warning "Failed to configure auto-expiration or submit batch delete job for $siteUrl : $_"
    }
    Start-Sleep -Milliseconds $RequestPacingMs
}
Write-Progress -Activity 'Enabling auto-expiration version trim' -Completed
Write-Verbose "Done. Successfully processed $ii of $nocleanupcount site(s)." -Verbose
This will iterate across all sites, enabling automatic version trimming and kicking off a batch deletion job for each. Depending on the size of your environment this may take quite some time to complete, but after a few days expect your SharePoint usage to drop by a fairly decent amount.
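To spot-check an individual site afterwards, the same progress cmdlet used in the script works fine on its own (the site URL is a placeholder):

# One-off check of trim progress and storage released for a single site
Get-SPOSiteFileVersionBatchDeleteJobProgress -Identity 'https://REPLACETHISWITHYOURS.sharepoint.com/sites/TestSite' |
    Select-Object Url, Status, FilesProcessed, StorageReleasedInBytes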
To give some perspective, I’ve personally seen a single site with an original size of ~414GB drop to ~392GB from this change alone. Not all sites are equal though; the savings depend heavily on the number of files and their historic usage. In total, for one customer it cleared up 4.8TB within 24 hours.
Conclusion
I strongly recommend that anyone with SharePoint storage problems consider enabling Intelligent Versioning through the methods above.
That said, this is not a change to make blindly. Automatic version trimming permanently removes older file versions, so you should always confirm that you have appropriate backup solutions in place and fully understand how retention and compliance policies (for example, in Microsoft Purview) may override or block trimming behavior.
