The problem
S3 access logging is useful, but it needs a dedicated destination. When a bucket writes access logs back into itself, the deliveries themselves generate further log entries about the logs, so logging stops being a clean audit trail and becomes a self-referential storage problem that is harder to reason about and easier to miss.
Why it happens
- Teams enable server access logging quickly and reuse the same bucket name for convenience.
- Logging infrastructure is added after the bucket already exists, and the destination decision is skipped.
- Separate logging buckets are planned later but never created.
What this means for cost
The exact impact depends on resource size, retention age, and region, but this pattern is usually worth reviewing because it compounds quietly over time.
How to detect an S3 access logging loop
The key signal is simple: the bucket’s logging destination points back to the same bucket that is generating the logs.
Cloud Waste Hunter flags an S3 bucket when its access logging destination bucket is the same bucket. The current implementation does not try to infer the eventual storage impact. It simply identifies the self-targeted logging configuration so operators can fix it before it becomes a larger hygiene problem.
Review the bucket logging configuration:
aws s3api get-bucket-logging --bucket my-bucket
If TargetBucket matches my-bucket, the bucket is writing access logs back to itself and should be corrected.
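The same check is easy to automate across many buckets. Below is a minimal sketch in Python that applies the rule to the parsed JSON output of get-bucket-logging; the function name and the example bucket names are illustrative, not part of any real API.

```python
import json

def is_self_logging(bucket_name: str, logging_response: dict) -> bool:
    """Return True when a bucket's access-log destination is the bucket itself."""
    # `aws s3api get-bucket-logging` returns an empty object when logging is off.
    enabled = logging_response.get("LoggingEnabled")
    if not enabled:
        return False
    return enabled.get("TargetBucket") == bucket_name

# Example response shape, as returned by `aws s3api get-bucket-logging`:
response = json.loads(
    '{"LoggingEnabled": {"TargetBucket": "my-bucket", "TargetPrefix": "logs/"}}'
)
print(is_self_logging("my-bucket", response))     # True: self-targeted loop
print(is_self_logging("other-bucket", response))  # False: separate destination
```

A bucket with logging disabled is not flagged; the detector only fires on an enabled configuration whose target is the source bucket.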
Why this matters
This pattern is risky because it blurs the line between source data and access-log destination. Even before the cost impact becomes obvious, it makes storage behavior harder to reason about and weakens the operational separation that logging setups are supposed to provide.
It is also a strong sign that logging was enabled quickly without a dedicated bucket design, which usually means retention and ownership need review too.
How to fix an S3 access logging loop
Move access logging to a dedicated bucket:
aws s3api put-bucket-logging \
  --bucket my-app-bucket \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "my-access-logs",
      "TargetPrefix": "app/"
    }
  }'
After that, grant the S3 log delivery service permission to write to the logging bucket, and make sure the bucket has its own lifecycle policy and access model so log growth stays controlled.
How this differs from nearby detectors
CloudWatch Log Groups Without Retention is about missing expiration policy on logs that already exist. The AWS Logging Cost Optimization guide covers the broader family of logging hygiene issues. This detector is narrower: it catches self-targeted S3 access logging configuration.
How Cloud Waste Hunter helps
Cloud Waste Hunter can highlight the exact bucket whose access-log destination points back to itself, giving teams a fast fix for a subtle but high-signal S3 logging mistake. For related logging drift, continue with the AWS Logging Cost Optimization guide.
FAQ
Does this detector prove runaway spend already exists?
No. It proves the logging target is the same bucket, which is a risky setup that should be corrected whether or not the current storage impact is already large.
What is the safe pattern for S3 access logging?
Use a separate destination bucket dedicated to logs, ideally with its own prefixing, retention, and access controls.
How is this different from S3 lifecycle detectors?
Lifecycle detectors focus on retained data or unfinished uploads. This detector focuses on a logging configuration that can create unnecessary data in the first place.