This article is an explanation of what security through obscurity is, why it’s terrible if it’s your only defense, and some ways in which you might be using it in your webapps. If you do pentesting, this will give you some tips on where to look in webapps to find examples of poor security you can exploit.
Security through obscurity is the reliance on secrecy and confusing attackers instead of building proper controls to keep them out.
Let’s take a real-world example.
Say you’re a teenager again, and you’ve got a particular folder of files that you’d rather your parents don’t find. You know the kind.
You probably hid this folder behind a whole bunch of other folders and named it something boring. And you might have felt very confident knowing that there’s no reason your parents would ever look in the “homework” directory.
This would be security through obscurity. It might work for a while, but the moment anyone checks the “Frequent Files” section of Windows Explorer, your secret’s out. A much better bet would have been to password protect your files.
In computing, security through obscurity is used more commonly than you’d expect.
Here are some of the most harmful examples of security through obscurity I’ve seen.
Robots.txt is a file located at the root of your domain, e.g., mywebsite.com/robots.txt. Robots.txt is used to tell search engines such as Google not to crawl certain sections of your website. A robots.txt might look something like this:
```
User-agent: *
Disallow: /super-secret-passwords/
Disallow: /secret-admin-access-panel/
```
All this does is prevent Google from crawling those pages! It doesn’t ward away hackers. Checking for a robots.txt file is one of the first things a malicious person might do - and where do you think they’re going next when they see you’ve told Google not to index “super-secret-passwords/”?
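This recon step takes seconds to automate. Here's a minimal sketch (the sample file below matches the example above; a real script would fetch the file over HTTP first) of how an attacker might pull out every path you've asked Google to ignore:

```python
# Sketch: extract Disallow paths from a robots.txt body -
# the first thing a recon script does with the file.
def disallowed_paths(robots_txt: str) -> list[str]:
    paths = []
    for line in robots_txt.splitlines():
        line = line.strip()
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:
                paths.append(path)
    return paths

sample = """User-agent: *
Disallow: /super-secret-passwords/
Disallow: /secret-admin-access-panel/
"""
print(disallowed_paths(sample))
# ['/super-secret-passwords/', '/secret-admin-access-panel/']
```

Every "hidden" path comes straight back out as a to-do list for the attacker.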
Instead, if you don’t want a page to show up in search results, add a noindex meta tag to the page. Better yet, if a page shouldn’t be seen by anyone other than you, make sure it’s behind a secure login. Also consider IP-restricting it if you don’t move around too much.
Some WordPress websites try to conceal the fact that they’re running WordPress. Common ways of doing this include removing WordPress’s readme.html file and renaming folders such as wp-admin.
While these might deter a novice attacker, any hacker worth their internet connection will be able to figure out that you’re running WordPress by checking the paths your CSS and other assets load from.
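That fingerprint check is a few lines of code. Here's a sketch (the HTML sample and marker list are illustrative; real pages embed these paths in their link and script tags no matter what you rename):

```python
# Sketch: fingerprint WordPress from asset paths in a page's HTML.
# Renaming wp-admin doesn't touch the /wp-content/ and /wp-includes/
# references that themes and plugins load assets from.
WP_MARKERS = ("/wp-content/", "/wp-includes/", "wp-json")

def looks_like_wordpress(html: str) -> bool:
    return any(marker in html for marker in WP_MARKERS)

page = '<link rel="stylesheet" href="/wp-content/themes/foo/style.css">'
print(looks_like_wordpress(page))  # True
```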
The alternative? Honestly, don’t bother too much. Hiding the fact that you’re using WordPress isn’t as important as keeping WordPress itself and your plugins updated. If you have a WordPress website, try running wpscan on it to see if there are any glaring vulnerabilities you should fix.
Say there’s a part of your website that you want to hide - maybe some insecure code that you still need to test, or some admin controls. One of the ways you might do this is stowing it away in a subdomain.
This is fine, as long as it isn’t your only method of security. If the subdomain requires a secure login and is IP-restricted, you’re a-okay. But just putting your insecure code behind a random subdomain with no other controls is a terrible idea. I explain why in this post.
Surely, given how long the standard encryption and hashing algorithms have been around, they must be insecure by now, right? Maybe it’s better to make your own.
No, no no no.
No no no.
Unless you SERIOUSLY know what you’re doing, don’t try to make your own encryption or hashing algorithm. The current popular algorithms have been properly vetted by the security community and are much more secure than anything you could make on your own. If you make your own algorithm, it’s likely to contain serious mistakes that you might be overlooking. Please, just don’t.
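If password hashing is what you need, the vetted option is already sitting in your toolbox. Here's a sketch using Python's standard-library PBKDF2 (the iteration count here is illustrative; dedicated schemes like bcrypt or Argon2 are often preferred, but the point is the same - use a vetted primitive, don't invent one):

```python
import hashlib
import hmac
import os

# Sketch: password hashing with a vetted KDF (PBKDF2-HMAC-SHA256)
# from the standard library, instead of a homemade algorithm.
ITERATIONS = 200_000  # illustrative work factor; tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # a fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
```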
Sure, renaming your “User” SQL database column to something more esoteric might make it a little bit harder for attackers to perform SQL injection (SQLi). But if SQLi is possible on your site, fixing the vulnerability should be your main concern.
If you’re not sure whether or not your site is vulnerable to SQLi, check out sqlmap.
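The fix itself is usually just parameterizing your queries. Here's a sketch using Python's built-in sqlite3 with an in-memory database (the table and data are invented for illustration):

```python
import sqlite3

# Sketch: a parameterized query - the actual fix for SQLi.
# Table name, columns, and rows here are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

def find_user(name: str):
    # The ? placeholder keeps user input as data, never as SQL -
    # so an obscure column name buys you nothing a placeholder doesn't.
    return conn.execute(
        "SELECT name, is_admin FROM users WHERE name = ?", (name,)
    ).fetchone()

print(find_user("alice"))        # ('alice', 1)
print(find_user("' OR '1'='1"))  # None - the injection attempt matches nothing
```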
This is one of my favorites because of how often CTFs use it, but it also occurs in the real world.
Say you’ve got http://mywebsite.com/normalpage, but when you navigate to http://mywebsite.com/normalpage?admin=true, admin access is enabled.
An average user might not try to add the admin parameter, but any half-decent hacker with a fuzzing tool will find it in minutes. Using secret parameters to control access to hidden content is a bad idea.
Of course, “admin” isn’t such a secret word. What if the control parameter was instead a long random string - something like ?q8x3kfz2=true?
Now it’s pretty much the same as having a password, right? Nope. The main issue is that if your page links anywhere else, your secret parameter has a chance of leaking in the Referer header - something that wouldn’t have been an issue if you had just implemented a standard login panel.
If you must do this, for whatever reason, consider moving the parameter into the body of a POST request instead.
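The better alternative is to gate the page on a server-side credential rather than anything in the URL. Here's a sketch (the token value and cookie name are invented; a real app would issue the token at login and tie it to a session store):

```python
import hmac

# Sketch: a server-side access check instead of a ?admin=true
# style URL parameter. Token and cookie name are invented.
VALID_SESSION_TOKEN = "s3cr3t-session-token"

def is_admin(request_cookies: dict) -> bool:
    token = request_cookies.get("session", "")
    # compare_digest avoids timing leaks; more importantly, a cookie
    # never appears in the URL, so it can't leak via the Referer header.
    return hmac.compare_digest(token, VALID_SESSION_TOKEN)

print(is_admin({"session": "s3cr3t-session-token"}))  # True
print(is_admin({"session": "admin=true"}))            # False
```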
Of course, obscurity certainly has its place in webapp design. It’s perfectly reasonable to put sensitive code on a subdomain or remove references to your backend. The key consideration is that this should not be your only line of defense.
You can ensure you’re using obscurity appropriately by also implementing standard access controls, and just generally following best security practices. Remember: if the only thing between you and the hacker is obscurity, then the only thing between the hacker and you is time.
If you’re interested in upping your security skills, check out my other post on how to get into penetration testing.