Cloud Waste Hunter
GCP Persistent Disk Storage

Unattached Persistent Disks

Compute Engine Persistent Disks keep accruing charges after instance deletion or rebuild when they remain detached beyond a conservative review window and carry no explicit retention signal.

This detector is part of the GCP Orphaned and Stale Resources category guide; see that guide for broader cleanup planning.

Potential savings

$10 to $1,800 / month

$120 to $21,600 / year

Detector ID
gcp-persistent-disk-unattached
Service
Persistent Disk
Category
Storage
Published
Mar 18, 2026
Updated
Apr 3, 2026

The problem

Persistent Disk storage is billed independently from the VM that once used it. When rebuilds, migrations, or teardown workflows leave disks behind, teams keep paying for provisioned storage even though no workload is using it.

Why it happens

  • VM deletion workflows do not always clean up non-boot disks.
  • Teams keep detached disks temporarily for rollback and forget to revisit them.
  • Project-level disk inventory is rarely reviewed as often as running instances.

What this means for cost

Estimated monthly

$10 to $1,800/mo

Estimated annual

$120 to $21,600/yr

Left untouched, this waste pattern recurs at $10 to $1,800 per month, or roughly $120 to $21,600 over a full year.
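A quick back-of-envelope check makes these ranges concrete. The rate below is illustrative (pd-balanced is around $0.10/GiB-month in many US regions at the time of writing); always verify against current GCP pricing for your disk type and location:

```shell
# Illustrative cost estimate for one detached 500 GiB pd-balanced disk.
# The per-GiB rate is an assumption, not detector output.
SIZE_GB=500
RATE_CENTS_PER_GB=10                        # ~$0.10/GiB-month, expressed in cents
MONTHLY_CENTS=$((SIZE_GB * RATE_CENTS_PER_GB))
echo "~\$$((MONTHLY_CENTS / 100))/month"    # prints ~$50/month
```

A handful of forgotten disks at this size lands squarely in the monthly range quoted above.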

How to detect unattached persistent disks

The strongest signal is a persistent disk with no current VM attachment, no explicit keep-style label, and enough age to look like cleanup drift rather than a fresh operational change.

List disks and check whether a VM is using them:

gcloud compute disks list \
  --format="table(name,zone,sizeGb,type,status,creationTimestamp,users.list():label=USERS,labels.list():label=LABELS)"

Disks with no USERS value are detached candidates. Review age, labels, and project context before deciding whether the disk is real waste or temporary retention.
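The same screening can be pushed into a gcloud filter expression so only likely candidates come back. This is a sketch, not the detector itself: the label key `keep` is an assumption, and the gcloud call (shown commented out) needs a configured project to run:

```shell
# Build a filter for disks that are detached, ready, older than 14 days,
# and missing a keep-style label. The cutoff date math is portable;
# GNU date first, BSD date as the fallback.
CUTOFF=$(date -u -d '14 days ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null \
  || date -u -v-14d +%Y-%m-%dT%H:%M:%SZ)
FILTER="-users:* AND status=READY AND creationTimestamp<'${CUTOFF}' AND -labels.keep:*"
echo "$FILTER"
# gcloud compute disks list --filter="$FILTER" \
#   --format="table(name,zone,sizeGb,type,creationTimestamp)"
```

`-users:*` matches disks with no attachment; adapt the label clause to whatever retention convention your projects actually use.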

Large SSD or balanced disks in old dev, migration, or preview projects are usually the fastest wins.

What this detector actually checks

Cloud Waste Hunter keeps this page aligned to the current implementation boundary. A finding requires all of the following:

  • a supported Persistent Disk type
  • zero attached users
  • a ready state
  • at least 14 days of age unless the disk is a live-test fixture
  • no explicit keep-style retention label

That means this detector is not a generic “all old disks” report. It is a conservative review queue for detached storage that is already outside a running workload and lacks an obvious retention signal.
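The criteria above can be sketched as a single predicate. The thresholds and label rule come from this page; the function shape and argument order are purely illustrative:

```shell
# Minimal sketch of the finding predicate: succeeds (exit 0) only when
# every condition from the criteria list holds.
is_finding() {
  users="$1"; status="$2"; age_days="$3"; labels="$4"
  [ -z "$users" ] || return 1               # zero attached users
  [ "$status" = "READY" ] || return 1       # ready state
  [ "$age_days" -ge 14 ] || return 1        # minimum review age
  case "$labels" in
    *keep*) return 1 ;;                     # keep-style label suppresses the finding
  esac
  return 0
}
```

Any single failing condition suppresses the finding, which is what keeps the queue conservative.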

How to fix unattached persistent disks

Use a staged cleanup flow:

  1. Confirm the disk is not part of an intended rollback plan.
  2. Snapshot if needed.
  3. Delete the disk when ownership and retention are clear.

gcloud compute disks snapshot my-disk --zone us-central1-a --snapshot-names my-disk-predelete
gcloud compute disks delete my-disk --zone us-central1-a

Longer term, enforce labels and expiry review on detached disks so they do not linger indefinitely.
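One lightweight way to enforce that review is to stamp detached disks with a review-by date label. The label key `review-by` is an assumption for illustration, not something the detector reads; the date computation runs anywhere, while the labeling call (commented out) needs a configured project:

```shell
# Hypothetical labeling sketch: compute a review-by date 30 days out so
# expiry reviews have something concrete to query on.
REVIEW_BY=$(date -u -d '+30 days' +%Y-%m-%d 2>/dev/null \
  || date -u -v+30d +%Y-%m-%d)              # GNU date, then BSD fallback
echo "review-by=${REVIEW_BY}"
# gcloud compute disks add-labels my-disk --zone us-central1-a \
#   --labels=review-by="${REVIEW_BY}"
```

Disks past their review-by date then become easy to sweep with a label filter instead of a fresh investigation each time.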

Caveats and overlap boundaries

Detached does not automatically mean disposable. Rollback holds, forensics, migration buffers, and temporary recovery windows can all justify keeping a disk. The detector also does not assess snapshots, Hyperdisk volumes, or generic storage-policy drift.

If the broader storage review uncovers retention problems rather than detached block devices, continue into GCS Buckets Without Lifecycle Policies. For the wider stale-resource workflow, continue into the GCP Orphaned and Stale Resources guide.

How Cloud Waste Hunter helps

Cloud Waste Hunter identifies detached persistent disks with likely low business value, estimates their recurring storage cost from type, size, and location, and helps teams batch review them safely. The current implementation stays conservative by requiring a detached state, a minimum review age, and no explicit keep-style retention label before it opens a finding.

FAQ

Should I snapshot an unattached disk before deleting it?

If the owner or recovery requirement is unclear, taking a snapshot first is often the safest path before final deletion.

Why does this detector wait before flagging a detached disk?

The implementation uses a conservative 14-day review threshold so fresh operational detach events do not look the same as long-lived cleanup drift.

Do keep or retention labels suppress the finding?

Yes. v1 intentionally skips disks with explicit keep-style labels so review stays focused on detached disks that do not already carry an obvious retention signal.

Related detectors

These detectors cover similar resource families or cost behaviors and make good follow-on reviews during cleanup.