SecurityTrails Blog · Jul 29 · by Luke Stephens

How I Lost the SecurityTrails #ReconMaster Contest, and How You Can Win: Edge-Case Recon Ideas


A while back, SecurityTrails announced that they would be running a contest dubbed "Recon Master"—the aim of which was to find hostnames that resolve to an IPv4 address and that SecurityTrails hadn't already found.

As it had been a while since I flexed my recon muscles, that sounded very interesting to me. These days, the majority of my asset discovery phase is spent literally just using SecurityTrails, so this would force me to think outside of the box and stop being so lazy.

Also, the whopping $5,000 prize for first place was pretty appealing.

At the beginning of the contest, I was in first place by a wide margin for quite some time. Now, the game is in full swing, I'm in seventh place, and I've given up on winning. Frankly, I've spent too much time on it, but during that time I've come up with a few interesting ideas that I believe are useful enough to share in a blog post—so here we are.

Much like bug bounties, winning this contest is all about creativity. SecurityTrails is already extremely good at finding hostnames because that is a huge part of their core business. In order to defy the odds and discover hostnames that the SecurityTrails team hasn't, a big chunk of creativity will need to be applied.

Let's jump into the techniques I tried, from failures and successes to successes that ended up failing... you'll see what I mean.

Streamlining the submission process

Amass is great and all, but currently it's the only DNS enumeration tool I'm aware of that has the SecurityTrails submit endpoint built right into it. The contest page actually advertised this, which means that a whole lot of people will simply be using Amass to discover hostnames. I figured that if I wanted to discover hostnames that nobody else has discovered, it would be wise for me to avoid using the same methods and tools.

Upgrading haktrails

The first thing I did was build the submit endpoint into haktrails, a tool I had already written to query SecurityTrails data. I didn't add the submit endpoint to any of the documentation because I thought that this would give me a slight advantage in the contest. Actually, I still haven't added the documentation, but I'll show you how to use it right here:

cat subdomains.txt | haktrails submit -b 1000000

This will take the lines in subdomains.txt and submit them in chunks of one million until there are no lines left to submit.

Now that I had an easy way to submit data, I could literally just pipe the output of any subdomain enumeration tool, or any list of hostnames, directly into haktrails to submit it.

Generating gzip files

I figured that there would be times when it would be more efficient to submit gzip files due to the sheer amount of data to upload, so I wrote another quick Golang tool which takes lines of text as input and gzips them into multiple gzip files with x lines each. I called it gzipsplit.

Usage is something like this:

cat huge-list-of-subdomains.txt | gzipsplit -b 1000000

This command would create multiple gzip files with one million subdomains in each. Note that this is different from splitting a gzip file into multiple files using the split command. If you use the split command, you end up with smaller split files, but they aren't valid gzip files unless you join them back together, so you can't extract the text from them.

Submitting bug bounty recon (or not)

My guess is that every bug bounty hunter and their dog will be submitting their bug bounty recon data to win this contest. That means that every bug bounty hunter and their dog will be submitting the same data.

The ideas

I started thinking about things to submit. In particular, I wanted to find hostnames that were most likely to be fresh, so that I might beat SecurityTrails to them. The first thing that came to mind was Certstream.

Certstream

What is CertStream? Their website says it best:

CertStream is an intelligence feed that gives you real-time updates from the Certificate Transparency Log network, allowing you to use it as a building block to make tools that react to new certificates being issued in real time. We do all the hard work of watching, aggregating, and parsing the transparency logs, and give you super simple libraries that enable you to do awesome things with minimal effort.

Essentially, they offer libraries that watch for new SSL certificates being issued in real time and display the details of those certificates. I didn't really care about most of the certificate details other than the hostnames, so I wrote a quick Golang tool to simply print the hostnames associated with the SSL certificates as they were issued.
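The extraction step can be sketched like this (not hakcertstream's actual source; it assumes you already have certstream's JSON messages on stdin, e.g. from a websocket client). In certstream's message format, the hostnames live under `data.leaf_cert.all_domains`:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// certstream certificate_update messages carry the certificate's hostnames
// in data.leaf_cert.all_domains; everything else can be ignored here.
type certMessage struct {
	MessageType string `json:"message_type"`
	Data        struct {
		LeafCert struct {
			AllDomains []string `json:"all_domains"`
		} `json:"leaf_cert"`
	} `json:"data"`
}

// hostnames pulls the hostname list out of one raw certstream message,
// returning nil for anything that isn't a certificate update.
func hostnames(raw []byte) []string {
	var msg certMessage
	if err := json.Unmarshal(raw, &msg); err != nil {
		return nil
	}
	if msg.MessageType != "certificate_update" {
		return nil
	}
	return msg.Data.LeafCert.AllDomains
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // cert messages can be large
	for sc.Scan() {
		for _, h := range hostnames(sc.Bytes()) {
			fmt.Println(h) // one hostname per line, ready to pipe onwards
		}
	}
}
```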

I used my incredible tool-naming skills to come up with the name: ✨ hakcertstream ✨.

Here's a gif of the tool in action. It doesn't look very exciting... but it is, okay?

Hakcertstream Tool

Submitting these was as easy as jumping on my VPS and running:

hakcertstream | haktrails submit -b 1000000

Now just to make this clear—I'm almost certain that SecurityTrails would already be collecting this data, but it's possible that some (or all) of the data I was submitting from these logs was merely getting through the system faster.

Root domains

My next thought was that the discovery methods used by SecurityTrails were most likely collating data from other sources, not necessarily brute-forcing for root domain names. Everybody knows that short domain names are in high demand; I read somewhere that every 2-, 3-, 4-, 5- and 6-character *.com domain has been registered. Why stop at the com domain though? We might as well do something similar for ALL of the TLDs.

I Googled something like "TLD list GitHub" and found this:

https://github.com/umpirsky/tld-list/blob/master/data/en/tld.txt

A list of 1,543 TLDs! "Yeeesss, this will do nicely," I said ominously, stroking a white cat in my armchair. Many of these TLDs do not allow 2-character registrations, so I decided to generate every 3-character root domain for every TLD, using this average Golang script that isn't even worthy of its own repository:

https://gist.github.com/hakluke/d4ba893149a65c737357b377de92c94e

That script generates a 734MB list of 3-character domain names. I ran them all through ZDNS to see which ones responded with an A record, then submitted those to SecurityTrails with a command similar to this:

go run hakcombos.go | zdns A -threads 10 | grep NOERROR | jq --unbuffered -r .data.answers[0].name | grep -v null | haktrails submit -b 1000000
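The generation step is simple enough to sketch (illustrative only; the real script is in the gist above): iterate every 3-character combination of a DNS-safe alphabet against each TLD read from stdin. Hyphens are left out for simplicity, since they can't lead or trail a label anyway.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// alphabet of characters valid at any position of a DNS label.
const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

// combos emits every 3-character name under each TLD via the emit callback:
// 36^3 = 46,656 candidates per TLD.
func combos(tlds []string, emit func(string)) {
	for _, tld := range tlds {
		for _, a := range alphabet {
			for _, b := range alphabet {
				for _, c := range alphabet {
					emit(fmt.Sprintf("%c%c%c.%s", a, b, c, tld))
				}
			}
		}
	}
}

func main() {
	// cat tld.txt | ./combos | zdns A ...
	var tlds []string
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if t := sc.Text(); t != "" {
			tlds = append(tlds, t)
		}
	}
	w := bufio.NewWriter(os.Stdout)
	defer w.Flush()
	combos(tlds, func(d string) { fmt.Fprintln(w, d) })
}
```

Across 1,543 TLDs that's roughly 72 million candidates, which lines up with a multi-hundred-megabyte output file.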

Dynamic DNS subdomains

Next I focused my sights on dynamic DNS domains. If you're unfamiliar with dynamic DNS, it's basically a good way to keep track of a dynamic IP address. The idea is that you run a little utility on your computer which constantly checks what your current IP address is, and then updates your dynamic DNS record (e.g. hakluke.dyn.com) to point to that IP address. That way, instead of constantly needing to check what your IP address is, you can simply remember the hostname.

The reason I focused my sights on these domains is that they're volatile by nature—constantly being created and deleted. It's unlikely that SecurityTrails would be focusing on them specifically, so there was a good chance I'd find subdomains that would count towards my score by constantly looking for new domains here and submitting them.

I used the #1 elite hacking tool (Google) to find this list of root domains for dynamic DNS services (33,322 in total):

https://gist.github.com/neu5ron/8dd695d4cb26b6dcd997

Then I used subfinder and some bash-fu to loop through each domain and find subdomains continuously. I saved the results to a file and only submitted new subdomains as they popped up.
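The "only submit new subdomains" part can be sketched as a small dedupe filter (my own illustrative code, in the spirit of tools like anew): it prints only lines not already in a state file, and remembers them for the next run.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// onlyNew returns the entries from lines that aren't already in seen,
// updating seen as it goes, so repeated runs only surface fresh hostnames.
func onlyNew(seen map[string]bool, lines []string) []string {
	var fresh []string
	for _, l := range lines {
		if l == "" || seen[l] {
			continue
		}
		seen[l] = true
		fresh = append(fresh, l)
	}
	return fresh
}

func main() {
	// subfinder -d some-ddns-domain.com | ./onlynew seen.txt | haktrails submit
	state := "seen.txt"
	if len(os.Args) > 1 {
		state = os.Args[1]
	}

	// load everything we've already submitted
	seen := map[string]bool{}
	if f, err := os.Open(state); err == nil {
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			seen[sc.Text()] = true
		}
		f.Close()
	}

	var input []string
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		input = append(input, sc.Text())
	}

	out, err := os.OpenFile(state, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer out.Close()

	for _, h := range onlyNew(seen, input) {
		fmt.Println(h)       // goes down the pipe for submission
		fmt.Fprintln(out, h) // remembered for next run
	}
}
```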

Rapid7 FDNS

If you're unfamiliar with Rapid7's FDNS data set, you can find it here. New data sets are released every month. If you sign up for a researcher account on Rapid7, you can get more recent/regular data sets.

I went ahead and downloaded the fdns_a dataset, which contains every hostname Rapid7 could find that resolves to an A record. It was 23.7GB after gzip compression. Honestly, for the effort, this really isn't worth submitting to SecurityTrails because they almost certainly already ingest this data. I did it anyway because I figured there would probably be some domains in there that did not resolve at the time the SecurityTrails system ingested the data, but have since had an A record added.

The process simply looked like this:

zcat fdns_a.gz | haktrails submit -b 1000000

The results were bad, so I stopped the submissions after a while.

Provider data

My next thought was to target the RDNS records of providers. There are loads of these! Your home internet probably has one, and you could find it by doing a PTR DNS lookup on your home IP address. These are also good candidates for hostnames that SecurityTrails would not have already found because they wouldn't show up in a lot of data sources or certificate transparency logs.

I figured that a good place to start would be AWS, because they have a lot of IP addresses, and they're publicly available in a nicely formatted JSON file, which you can find here. An example hostname would be:

ec2-35-180-16-65.eu-west-3.compute.amazonaws.com

The format is easy to see:

ec2-$IP.$zone.compute.amazonaws.com

I whipped up a tool to download/parse the AWS IP ranges data and generate every AWS hostname. It's called hakawshostnames.
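The per-address mapping is simple enough to sketch (this is not hakawshostnames' actual source; the real tool also walks every prefix in the ip-ranges.json file, and note that some regions, like us-east-1, historically use a different suffix):

```go
package main

import (
	"fmt"
	"strings"
)

// ec2Hostname maps an IPv4 address and an AWS zone to the auto-generated
// EC2 public DNS pattern ec2-$IP.$zone.compute.amazonaws.com, with the
// dots in the IP replaced by hyphens.
func ec2Hostname(ip, zone string) string {
	return fmt.Sprintf("ec2-%s.%s.compute.amazonaws.com",
		strings.ReplaceAll(ip, ".", "-"), zone)
}

func main() {
	// the example from above
	fmt.Println(ec2Hostname("35.180.16.65", "eu-west-3"))
	// → ec2-35-180-16-65.eu-west-3.compute.amazonaws.com
}
```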

After I did this, SecurityTrails clarified the rules, disallowed the submission of provider hostnames from these automatically generated host -> IP services, and deleted all the domains I had submitted in this format. Overnight I went from over seven million subdomains to less than one million, and from first place to seventh!

Fair enough. It wouldn't provide any value to their customers and it's pretty spammy behaviour.

The takeaway? Don't do this.

Unicode replacements

There are some Unicode characters that are automatically replaced with ASCII equivalents when you navigate to them in a web browser. For example, ᴳoogle.com will direct you to google.com. The same does not generally apply to DNS lookups.

Unicode Replacements

Regardless, I thought I'd give it a shot, because if it worked it might be possible to submit every domain ever, with some part of it replaced with Unicode. I submitted ᴳoogle.com as a test. It didn't work.

Unicode Replacement Submission Result

It's probably a good thing that it didn't work because that would be cheating anyway. And it really isn't useful as a recon technique... I was just curious, okay?

Zone transfers

I was doing some extremely important work on Twitter when I came across this tweet from Chris Ueland. Chris owns SecurityTrails, so I dunno, he probably knows some things about recon.

Wait... zone transfers are still a thing in 2021? I stole some code from Stack Overflow to create another quick Golang tool, hakaxfr. Basically, you feed it domains via stdin; it looks up the nameservers for each domain, then attempts to perform a zone transfer. Here's the tool in action:

Hakaxfr Tool

I tried this on a huge list of domains and found only one domain that allowed zone transfers, yielding about five subdomains. I'm not sure whether that's because the tool doesn't work, because I'm getting banned by nameservers, because there's some other voodoo going on, or because there just aren't many vulnerable domains. I believe Chris though, so I think it's probably an issue with hakaxfr, which I'll investigate further at some point. Feel free to make a pull request.

DNS permutations

The next recon method I thought about was generating and brute-forcing permutations of hostnames using a tool like DNSGen, AltDNS or DNSCewl. Even with some basic, common permutations, you could generate a huge list of domains with a high hit-rate. Brute-forcing DNS is an excellent way to discover domains that cannot be found in any other publicly available data source, including SecurityTrails, but it's quite difficult to do at scale without getting banned or rate-limited by nameservers.
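The flavour of permutations these tools generate can be sketched like this (my own simplified version; real tools ship far larger wordlists and also learn new words from the input itself):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// words commonly seen in real subdomains; a tiny stand-in for the
// wordlists that DNSGen-style tools use.
var words = []string{"dev", "staging", "test", "admin", "api"}

// permutations generates candidate hostnames from a known one, e.g.
// app.example.com -> dev-app.example.com, app-dev.example.com, dev.app.example.com, ...
func permutations(host string) []string {
	parts := strings.SplitN(host, ".", 2)
	if len(parts) != 2 {
		return nil
	}
	label, rest := parts[0], parts[1]
	var out []string
	for _, w := range words {
		out = append(out,
			w+"-"+label+"."+rest, // dev-app.example.com
			label+"-"+w+"."+rest, // app-dev.example.com
			w+"."+host,           // dev.app.example.com
		)
	}
	return out
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		for _, p := range permutations(sc.Text()) {
			fmt.Println(p) // pipe into a resolver like zdns to see which exist
		}
	}
}
```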

One way around this is to spread your DNS requests over a large list of DNS servers. That presents its own problems, because plenty of public DNS servers purposely serve inaccurate results. To mitigate this, there's a great tool called DNS Validator that helps you maintain a list of trustworthy public DNS resolvers.

I haven't started generating or brute-forcing domains yet, but I will if I get some time to set it up!

Hosting provider patterns

Another method I haven't tried yet, despite being pretty sure that it would yield excellent results, is looking for subdomain patterns on large hosting providers.

When you buy a hosting package from certain large hosting providers, they often automatically configure some subdomains for you. For example, if you buy a hosting package that includes webmail, cPanel and a personalised admin dashboard, those services are automatically configured on subdomains of the domain you purchased:

  • mail.example.com
  • cpanel.example.com
  • admin.example.com

Discovering domains that are purchased through this provider would be as easy as performing a reverse WHOIS lookup, and then submitting these three subdomains for every root domain that you discover.
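The expansion step described above amounts to a cross product of discovered root domains and the provider's known subdomain patterns. A minimal sketch (the pattern list here is hypothetical):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// patterns a hypothetical hosting provider auto-configures for each customer.
var patterns = []string{"mail", "cpanel", "admin"}

// expand crosses each root domain (e.g. from a reverse WHOIS lookup)
// with the provider's known subdomain patterns.
func expand(roots []string) []string {
	var out []string
	for _, root := range roots {
		for _, p := range patterns {
			out = append(out, p+"."+root)
		}
	}
	return out
}

func main() {
	// cat reverse-whois-roots.txt | ./expand | haktrails submit
	var roots []string
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if r := sc.Text(); r != "" {
			roots = append(roots, r)
		}
	}
	for _, h := range expand(roots) {
		fmt.Println(h)
	}
}
```

You'd still want to resolve the candidates before submitting, since not every package includes every service.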

Conclusion

At the time of this writing I'm in seventh place with a total of 1,358,051 domains. The people above me have many millions of subdomains, but it remains to be seen how many of them are legitimate and how many will be removed.

I don't have high hopes of placing in the top three, but regardless of the outcome, I've really enjoyed the challenge. It's forced me to think about recon differently and create some useful new Golang utilities in the process.

This will improve my recon process for bug bounties and pentest targets in the future, and hopefully it will aid your recon efforts too!
