How to Block AI Agents from Accessing AWS Credentials
SafeClaw by Authensor blocks AI agents from reading your AWS credentials, including ~/.aws/credentials, ~/.aws/config, and AWS environment variables, through deny-by-default action gating. No credential file can be accessed unless you explicitly allow it in your policy. Install with npx @authensor/safeclaw, and your cloud infrastructure keys are protected from the first action.
Why AWS Credentials Are a Critical Target
Your ~/.aws/credentials file contains access key IDs and secret access keys that grant programmatic access to your entire AWS account. An AI agent that reads these credentials could spin up EC2 instances, access S3 buckets containing customer data, modify IAM policies, or delete cloud resources. Even a read-only key can be used to exfiltrate terabytes of data from S3.
AWS credentials are also commonly stored in environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) and .env files, giving agents multiple vectors to discover them.
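For context, ~/.aws/credentials is a plain INI-format text file, so any file-read primitive is enough to lift the keys. The values below are placeholders:
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxXxX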
Step 1: Install SafeClaw
npx @authensor/safeclaw
Zero dependencies, MIT licensed. Works with Claude, OpenAI, and every major agent framework.
Step 2: Block AWS Credential Files
# safeclaw.policy.yaml
rules:
  - action: file.read
    path: "~/.aws/credentials"
    effect: deny
    reason: "AWS credentials file contains access keys"
  - action: file.read
    path: "~/.aws/config"
    effect: deny
    reason: "AWS config reveals account structure and regions"
  - action: file.read
    path: "~/.aws/**"
    effect: deny
    reason: "Block all access to the .aws directory"
  - action: file.write
    path: "~/.aws/**"
    effect: deny
    reason: "Block modification of AWS configuration"
Step 3: Block AWS CLI Commands
An agent might use the AWS CLI directly to access cloud resources. The blanket aws * pattern below already covers the narrower ones; the specific rules are listed so you can keep them in place if you later replace the blanket deny with targeted exceptions:
rules:
  - action: shell.execute
    command_pattern: "aws *"
    effect: deny
    reason: "Block all AWS CLI commands"
  - action: shell.execute
    command_pattern: "aws configure*"
    effect: deny
    reason: "Block AWS configuration commands"
  - action: shell.execute
    command_pattern: "aws sts *"
    effect: deny
    reason: "Block AWS STS token operations"
  - action: shell.execute
    command_pattern: "aws s3 *"
    effect: deny
    reason: "Block AWS S3 access"
Step 4: Block Environment Variable Access
AWS credentials often live in environment variables. Block the agent from reading them:
rules:
  - action: env.read
    variable: "AWS_ACCESS_KEY_ID"
    effect: deny
    reason: "Block reading AWS access key from environment"
  - action: env.read
    variable: "AWS_SECRET_ACCESS_KEY"
    effect: deny
    reason: "Block reading AWS secret key from environment"
  - action: env.read
    variable: "AWS_SESSION_TOKEN"
    effect: deny
    reason: "Block reading AWS session token from environment"
  - action: env.read
    variable: "AWS_DEFAULT_REGION"
    effect: deny
    reason: "Block reading AWS region from environment"
Also block reading .env files that might contain these variables:
rules:
- action: file.read
path: "**/.env"
effect: deny
reason: "Env files may contain AWS credentials"
- action: file.read
path: "*/.env."
effect: deny
reason: "Env variant files may contain AWS credentials"
Step 5: Block the EC2 Metadata Service
If your agent runs on an EC2 instance, it could query the instance metadata service to obtain temporary credentials:
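To see why: under IMDSv1, role credentials are one unauthenticated HTTP request away (IMDSv2 adds a session-token step, but the endpoint is the same). The role name below is a placeholder:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
The first request lists the instance's IAM role; the second returns a temporary AccessKeyId, SecretAccessKey, and Token as JSON. Block the endpoint itself: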
rules:
  - action: network.request
    destination: "169.254.169.254"
    effect: deny
    reason: "Block EC2 metadata service (SSRF/credential theft vector)"
  - action: network.request
    destination: "fd00:ec2::254"
    effect: deny
    reason: "Block EC2 metadata service (IPv6)"
This is a critical protection against Server-Side Request Forgery (SSRF) attacks through AI agents.
Step 6: Allow Specific AWS Operations (Optional)
If your agent needs to perform specific AWS operations, create narrow exceptions with human approval. The pattern below is deliberately exact, so a different bucket or extra flags will not match it and will fall through to the deny rules:
rules:
  - action: shell.execute
    command_pattern: "aws s3 ls s3://my-public-bucket"
    effect: allow
    conditions:
      - human_approval: required
    reason: "Allow listing a specific public bucket with approval"
Step 7: Test and Audit
Run the policy in simulation mode first to confirm the rules fire as expected:
npx @authensor/safeclaw --simulate
Check the hash-chained audit trail:
npx @authensor/safeclaw audit --filter "reason:AWS"
Every blocked access attempt is recorded with a tamper-proof hash chain.
SafeClaw is open-source with 446 tests and works with both Claude and OpenAI providers.
Related Pages
- How to Prevent AI Agents from Reading Dotfiles (.bashrc, .zshrc, .gitconfig)
- How to Prevent Claude from Reading My .ssh Folder
- Threat: Cloud Metadata SSRF
- How to Prevent AI Agents from Accessing macOS Keychain or Windows Credential Manager
- How to Stop AI Agents from Leaking Keys
Try SafeClaw
Action-level gating for AI agents. Set it up in your terminal in 60 seconds.
$ npx @authensor/safeclaw