Adversaries may delete a large number of files within a short time window to destroy data or erase evidence of their activity. SOC teams should proactively hunt for this behavior in Azure Sentinel to identify potential data destruction or sabotage early.
KQL Query
// Tune these to your environment: deleteThreshold is the minimum number of
// distinct files deleted, deleteWindow the aggregation window.
let deleteThreshold = 3;
let deleteWindow = 10m;
union
StorageFileLogs,
StorageBlobLogs
| where StatusText =~ "Success"
| where OperationName =~ "DeleteBlob" or OperationName =~ "DeleteFile"
// CallerIpAddress may include a port suffix (e.g. "10.0.0.1:443"); strip it.
| extend CallerIpAddress = tostring(split(CallerIpAddress, ":", 0)[0])
| summarize dcount(Uri) by bin(TimeGenerated, deleteWindow), CallerIpAddress, UserAgentHeader, AccountName
| where dcount_Uri >= deleteThreshold
| project TimeGenerated, IPCustomEntity=CallerIpAddress, UserAgentHeader, FilesDeleted=dcount_Uri, AccountName
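The default threshold and window are starting points; in environments with routine bulk maintenance, a common first tuning step is to exclude known hosts before aggregating. A minimal sketch, where the allowlisted IP values are illustrative placeholders, not recommendations:

```kusto
// Sketch only: exclude known maintenance hosts before aggregating.
// The IP addresses below are hypothetical placeholders.
let deleteThreshold = 3;
let deleteWindow = 10m;
let knownMaintenanceIPs = dynamic(["10.0.0.4", "10.0.0.5"]);
union StorageFileLogs, StorageBlobLogs
| where StatusText =~ "Success"
| where OperationName =~ "DeleteBlob" or OperationName =~ "DeleteFile"
| extend CallerIpAddress = tostring(split(CallerIpAddress, ":", 0)[0])
| where CallerIpAddress !in (knownMaintenanceIPs)
| summarize dcount(Uri) by bin(TimeGenerated, deleteWindow), CallerIpAddress, UserAgentHeader, AccountName
| where dcount_Uri >= deleteThreshold
```

Raising deleteThreshold reduces noise at the cost of missing smaller deletion bursts; the right value depends on your normal deletion volume.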
id: 85e16874-72aa-4ebe-b36e-e45f8ba50f79
name: Azure Storage Mass File Deletion
description: |
  'Detect mass file deletion events within Azure File and Blob storage. deleteWindow controls
  the period of time in which the deletions must occur, while deleteThreshold controls how many
  distinct files must be deleted within that window. The query aggregates on a per-IP-address
  basis, so it will only detect mass deletions originating from a single IP address.'
requiredDataConnectors: []
tactics:
  - Impact
relevantTechniques:
  - T1485
tags:
  - Ignite2021
query: |
  let deleteThreshold = 3;
  let deleteWindow = 10m;
  union
  StorageFileLogs,
  StorageBlobLogs
  | where StatusText =~ "Success"
  | where OperationName =~ "DeleteBlob" or OperationName =~ "DeleteFile"
  | extend CallerIpAddress = tostring(split(CallerIpAddress, ":", 0)[0])
  | summarize dcount(Uri) by bin(TimeGenerated, deleteWindow), CallerIpAddress, UserAgentHeader, AccountName
  | where dcount_Uri >= deleteThreshold
  | project TimeGenerated, IPCustomEntity=CallerIpAddress, UserAgentHeader, FilesDeleted=dcount_Uri, AccountName
entityMappings:
  - entityType: IP
    fieldMappings:
      - identifier: Address
        columnName: IPCustomEntity

False Positive Scenarios
Scenario: Scheduled Backup Job Cleanup
Description: A legitimate scheduled job (e.g., Azure Data Factory, Azure Backup, or third-party tools like Veeam) performs a bulk deletion of old backup files as part of a retention policy.
Filter/Exclusion: Check for job names or execution contexts associated with backup systems (e.g., AzureBackupJob, VeeamBackup, or RetentionPolicy). Use job_name or caller fields in the event logs.
Scenario: User-Initiated File Cleanup via PowerShell or CLI
Description: An admin or user runs a script (e.g., PowerShell, Azure CLI, or AzCopy) to delete a large number of files during routine maintenance or data organization.
Filter/Exclusion: Filter by user context (e.g., user_principal_name), script names (e.g., Cleanup-Storage.ps1), or command-line arguments (e.g., az storage blob delete). Use user or process_name fields.
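For tooling-driven cleanups like this, the UserAgentHeader column already surfaced by the query is often the most direct pivot available in StorageBlobLogs and StorageFileLogs. A hedged sketch, where the user-agent substrings are assumptions to verify against what your tooling actually emits:

```kusto
// Sketch: suppress deletions from known admin tooling by user agent.
// The substrings below are assumptions; confirm the exact values
// your tools write to UserAgentHeader before relying on them.
union StorageFileLogs, StorageBlobLogs
| where StatusText =~ "Success"
| where OperationName =~ "DeleteBlob" or OperationName =~ "DeleteFile"
| where not(UserAgentHeader has_any ("AzCopy", "azure-cli"))
```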
Scenario: Azure DevOps Pipeline Artifact Cleanup
Description: A CI/CD pipeline (e.g., Azure DevOps, GitHub Actions) deletes old build artifacts from Azure Blob Storage as part of the pipeline’s cleanup process.
Filter/Exclusion: Filter by pipeline name or job ID (e.g., AzureDevOpsPipeline, BuildCleanupJob). Use pipeline_name or job_id fields in the event logs.
Scenario: Azure Storage Lifecycle Management Policy Execution
Description: Azure’s built-in Lifecycle Management policy automatically deletes files based on defined rules (e.g., delete after 30 days).
Filter/Exclusion: Check for lifecycle_management or policy_name in the event metadata. Use policy_name or action_type fields to identify policy-driven deletions.
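Rather than hardcoding each of the exclusions above into the query, they can be consolidated into a Microsoft Sentinel watchlist so analysts can maintain them without editing the rule. A sketch, assuming a hypothetical watchlist with alias StorageCleanupAllowlist containing a CallerIp column:

```kusto
// Sketch: drive IP exclusions from a watchlist instead of query literals.
// The watchlist alias and column name here are hypothetical.
let allowedIPs = _GetWatchlist('StorageCleanupAllowlist') | project CallerIp;
union StorageFileLogs, StorageBlobLogs
| where StatusText =~ "Success"
| where OperationName =~ "DeleteBlob" or OperationName =~ "DeleteFile"
| extend CallerIpAddress = tostring(split(CallerIpAddress, ":", 0)[0])
| where CallerIpAddress !in (allowedIPs)
```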