It happens. You commit a key to a public repo. You paste it into a screenshot. You type it into the wrong terminal window. The key is out. Now what?
The next hour matters. Here's the playbook.
Minute 0–5: Revoke
Don't try to figure out what happened first. Don't check if anyone used it. Just revoke.
Go to the provider's dashboard (OpenAI, Anthropic, whatever). Find the leaked key. Revoke or delete it. The button is usually labeled clearly. Once it's revoked, any further use of that key fails. The bleeding stops.
If you have a vault: revoke the affected credential at the vault. Same effect, faster, and you don't have to log into multiple provider dashboards if the key was used across providers.
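If you find yourself doing this more than once, it's worth scripting. Here's a minimal sketch, assuming a vault (or provider) that exposes a key-management API over HTTP. The route, payload, and response are hypothetical, so treat this as the shape of the automation, not a drop-in:

```python
# Minimal revocation sketch, assuming a hypothetical vault API.
# The route and auth scheme below are made up; real vaults and
# providers differ, and the dashboard button is usually faster
# for a one-off incident.
import os
import urllib.request

VAULT_URL = os.environ["VAULT_URL"]            # your vault's API base (assumed)
ADMIN_TOKEN = os.environ["VAULT_ADMIN_TOKEN"]  # an admin token, NOT the leaked key

def revoke(credential_id: str) -> None:
    """Revoke one credential at the vault; any further use of it fails."""
    req = urllib.request.Request(
        f"{VAULT_URL}/credentials/{credential_id}/revoke",  # hypothetical route
        method="POST",
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        print(f"revoked {credential_id}: HTTP {resp.status}")

revoke("cred_leaked_0042")  # hypothetical credential id
```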
Minute 5–15: Issue a replacement
Generate a new key (or new vault credential) immediately. Update the apps and projects that were using the old one. Restart whatever needs restarting.
This is the part where pre-existing setup pays off. If your apps fetch keys from a vault, you update one place and everything keeps running. If your apps each have their own copy of the key in their .env file, you're hunting through every project. Same time pressure, much more work.
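Here's the shape of the vault pattern, again assuming a hypothetical vault HTTP API (the route and response shape are made up). The point is that the app resolves the key by name at startup, so rotating the credential at the vault is the whole fix:

```python
# Sketch of the "fetch from one place" pattern. Rotate the credential
# at the vault and every app picks up the new key on its next fetch,
# with no per-project .env hunting.
import os
import json
import urllib.request

VAULT_URL = os.environ["VAULT_URL"]
APP_TOKEN = os.environ["VAULT_APP_TOKEN"]  # per-app auth, not the provider key

def get_api_key(name: str) -> str:
    """Resolve a provider key by name at startup instead of hardcoding it."""
    req = urllib.request.Request(
        f"{VAULT_URL}/secrets/{name}",  # hypothetical route
        headers={"Authorization": f"Bearer {APP_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]  # hypothetical response shape

OPENAI_API_KEY = get_api_key("openai-prod")  # the name is illustrative
```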
Minute 15–30: Audit usage
Go to the provider's billing/usage dashboard. Check what's been spent in the last 24 hours. If there's an unexpected spike between when the key leaked and when you revoked it, someone used it.
If you see a spike: contact the provider's support immediately, tell them the key was leaked at [time], and ask if they can reverse the unauthorized charges. Most providers will work with you on this if you act fast.
If you don't see a spike: you're probably fine. Most leaks get caught before they're exploited, especially if you revoked within the first hour.
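If you want the check to be more than eyeballing a chart, most usage dashboards let you export daily spend. A rough sketch, assuming a CSV export with "date" and "usd" columns (column names vary by provider, so adjust to match yours):

```python
# Quick spike check against an exported daily-spend CSV.
import csv

with open("usage_export.csv", newline="") as f:
    spend = [float(row["usd"]) for row in csv.DictReader(f)]

today = spend[-1]
baseline = sum(spend[:-1]) / max(len(spend) - 1, 1)  # average of prior days

if today > 3 * baseline:  # the 3x threshold is a judgment call, not a standard
    print(f"SPIKE: ${today:.2f} today vs ${baseline:.2f}/day baseline. Contact support.")
else:
    print(f"no spike: ${today:.2f} today vs ${baseline:.2f}/day baseline")
```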
Minute 30–45: Find the leak source
Now figure out how the key got out. The most common culprits:
— Committed to a public git repo (check git history)
— In a screenshot you posted publicly
— In a Slack message to someone outside your trusted circle
— In a shared Docker image
— In an email to a contractor who saved it
You need to know the source so you can prevent it from happening again. If it was a git commit, check whether the repo is public, and if so, scrub the git history (BFG Repo-Cleaner and git filter-repo are the standard tools); the sketch below shows how to confirm which commits contain the key. If it was a Slack message, that key is in their search index forever: the revocation handles the security part, but be aware of the exposure.
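For the git case, you don't have to eyeball the log. Git's pickaxe search finds every commit whose diff contains a given string; here's a small sketch that wraps it (the key fragment is a placeholder, and you only need a distinctive chunk of the key, not the whole thing):

```python
# Find every commit whose diff contains the leaked key, via git's
# pickaxe search (git log -S).
import subprocess

KEY_FRAGMENT = "sk-live-0042"  # placeholder: a distinctive chunk of your key

result = subprocess.run(
    ["git", "log", "--all", "--oneline", "-S", KEY_FRAGMENT],
    capture_output=True, text=True, check=True,
)
if result.stdout:
    print("key appears in these commits:\n" + result.stdout)
else:
    print("no commits contain that fragment")
```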
Minute 45–60: Document and learn
Write down what happened in three sentences. The leak source. The detection time. The damage (if any). Save it somewhere you'll see it next time you're about to do whatever caused the leak.
Then update your workflow to prevent the specific pattern. If it was a git commit, set up a pre-commit hook that scans for keys (gitleaks, trufflehog). If it was a screenshot, get into the habit of redacting before posting. If it was a contractor share, set up your vault for next time.
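gitleaks and trufflehog are the right tools for real scanning, but the hook itself is simple enough to see in full. A minimal sketch of a pre-commit hook that blocks commits with key-shaped strings in the staged diff (the patterns are illustrative, not complete):

```python
#!/usr/bin/env python3
# Minimal pre-commit hook: save as .git/hooks/pre-commit and chmod +x.
# gitleaks and trufflehog do this properly; this just shows the shape.
import re
import subprocess
import sys

PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style keys
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key IDs
]

# Only the staged changes, with no context lines.
diff = subprocess.run(
    ["git", "diff", "--cached", "-U0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [
    line for line in diff.splitlines()
    if line.startswith("+") and not line.startswith("+++")
    and any(p.search(line) for p in PATTERNS)
]

if hits:
    print("possible key in staged changes. Commit blocked:", file=sys.stderr)
    for line in hits:
        print("  " + line, file=sys.stderr)
    sys.exit(1)  # nonzero exit aborts the commit
```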
The frame
The first hour is reactive. The hour after is preventive. Most people skip the second hour and go right back to whatever they were doing. That's how the same leak pattern repeats six months later with a different key.
One leak is bad luck. Two of the same kind is a workflow problem. Use the hour after the incident to make sure you're not setting up the next one.
— Jeff