Earlier this month, information security researcher and analyst Shay Chen released the 2013/2014 Web Application Vulnerability Scanners Benchmark, in which he compared 63 different web vulnerability scanners, also known as web application security scanners.
The comparison contains a wealth of information, and for those who have the time it is worth diving into and analysing all of the results. We have of course already done our homework: we analysed the results and are more than happy with them. Netsparker Web Application Security Scanner smoked the competition, finishing second only to IBM AppScan, with just a 4% difference between the two; and AppScan is a scanner that costs much more than Netsparker.
Hence, when you also include price in the equation, Netsparker is the web vulnerability scanner with the best return on investment. While IBM AppScan carries an expensive price tag, its users still have to spend a lot of time verifying its findings, as opposed to Netsparker, which automatically confirms its own findings and reports no false positives.
There are several different angles on how you can look at the results to determine which is the best web vulnerability scanner for you. To start off with, below are the graphs for each web vulnerability class tested in this benchmark:
Netsparker detected all of the 136 SQL injection vulnerabilities like most of the other web vulnerability scanners. Only NTOSpider and N-Stalker did not detect all SQL injection vulnerabilities.
Netsparker detected all of the 66 cross-site scripting vulnerabilities like most of the other web vulnerability scanners. Only BurpSuite and N-Stalker failed to detect all XSS vulnerabilities.
Here is where things start to become interesting; IBM AppScan and Netsparker are in a league of their own when it comes to detecting path traversal and local file inclusion vulnerabilities.
According to the benchmark, IBM AppScan detected all vulnerabilities while Netsparker missed 30. However, when these vulnerabilities were scanned individually, Netsparker identified them all. This led us to discover a very rare bug that is only triggered by a particular custom 404 configuration (a setup so uncommon in the real world that we had never come across it before this benchmark). We are looking into the issue and addressing it. Had these vulnerabilities been scanned individually, or hosted on a different website, Netsparker would have identified them all. Feel free to download the benchmark and test it yourself.
Third placed NTOSpider missed 154, HP WebInspect missed 228, Acunetix WVS missed 348 and so on.
In this case only Netsparker, IBM AppScan and HP WebInspect detected all 108 XSS via RFI vulnerabilities. Next in line is NTOSpider, which detected 86 instances, followed by Acunetix with 84.
HP WebInspect detected the most unvalidated redirect vulnerabilities by detecting 15 out of 60, followed by Netsparker and IBM AppScan with 11 detections.
Acunetix WVS leads the pack in this test, detecting 60 out of 184 backup files. It is followed by BurpSuite with 46 detections, SyHunt with 34, and the rest trail behind.
After going through each individual vulnerability class chart, it is now time to add up all the vulnerabilities and see how the scanners performed overall. As the chart below shows, Netsparker and IBM AppScan were the only two automated web vulnerability scanners to identify more than 1,000 web application vulnerabilities. Both scanners lead thanks to their excellent detection of critical path traversal and LFI vulnerabilities.
Netsparker detected 1,112 vulnerabilities, second only to IBM AppScan, which detected 1,147. Next in line is NTOSpider with 958 vulnerabilities, then HP WebInspect with 917, followed by Acunetix with 819. BurpSuite, SyHunt and N-Stalker follow with 791, 716 and 484 identified vulnerabilities respectively.
Below is another chart showing how many direct impact vulnerabilities each web vulnerability scanner detected. By direct impact we mean critical vulnerabilities that if exploited could affect the operations of the web application and the business itself, hence excluding the “Old backup files” and “Unvalidated / Open redirects” vulnerabilities from this chart.
As we can see, after excluding non-direct-impact vulnerabilities the performance of the two major players was unaffected, while the performance of all the other scanners, especially the last four in the group, dropped drastically. This shows that both IBM AppScan and Netsparker are more focused on identifying critical vulnerabilities.
Compared with previous years' benchmarks, all web vulnerability scanners improved their detection rates and managed to reduce the number of reported false positives. Funnily enough, Netsparker, the only false-positive-free web vulnerability scanner, reported 3 false positive SQL injection vulnerabilities. How did this happen?
To start off with, Netsparker is shipped with an exploitation engine that is automatically triggered once a vulnerability is detected. If the vulnerability is exploited it is not a false positive.
During these tests Netsparker detected all of the 136 SQL injections and reported 3 additional ones. Netsparker's exploitation engine confirmed all 136 valid SQL injections but was unable to confirm the 3 false positives, which were explicitly marked as unconfirmed.
Even though Netsparker reported 3 false positive SQL injection vulnerabilities, it still leads the pack. When using Netsparker, the user only has to verify the 3 unconfirmed vulnerabilities.
On the other hand, the other web vulnerability scanners do not have an exploitation engine, so the user has to confirm every single finding. In this case, a typical user would have had to confirm 136 SQL injection vulnerabilities, which can take quite a bit of time!
The best web vulnerability scanner is the one that detects the most vulnerabilities, is the easiest to use, and can automate most of your work. As we all know, users have to verify a scanner's findings, so automated vulnerability confirmation should also be part of the equation. Verifying findings is a time-consuming process, and you are certainly better off spending that time remediating the issues.
Although the above statistics are a good indication of who the web application security market leaders are, don't base your judgement on these facts alone. There is no better way to determine which tool is best for you than to get your hands dirty and scan some of your own test websites with a number of different web vulnerability scanners.
If you are new to this geeky world of automated scanning, the article how to evaluate web vulnerability scanners will give you a better insight of how to choose the right web scanner for you. And if you’d like to learn more, read this Getting Started with Web Application Security.
Of course we are very happy that, even though we are the youngest contender in this industry, we are already up there with major players such as IBM AppScan, although we have to admit it would have been even more awesome had we beaten them as well.
We have done very well in identifying almost all critical vulnerabilities, and we can see that our weakest point is detecting old backup files on websites. We never really focused on these types of issues, since the value of such findings rarely justifies the cost of identifying them. However, we will ship these checks as an option in upcoming releases, so users can enable them during a web application security scan. We will continue working hard to ensure that Netsparker is easy to use and automatically detects as many web application vulnerabilities as possible.
Last but not least we would like to thank Shay Chen for all his professional work and dedication.
Netsparker Web Application Security Scanner will be exhibited at the RSA Conference by our resellers Portcullis at booth 2134.
The 2014 expo is being held between the 24th and 28th of February and is the biggest to date. With more than 350 exhibitors, the expo is divided into two main halls, the North Expo and the South Expo. If you are at the RSA Conference in San Francisco, head down to booth 2134 and visit our resellers.
Advancements in web applications and other technology have changed the way we do business and access and share information. Many businesses have shifted most of their operations online so employees from remote offices and business partners from different countries can share sensitive data in real time and collaborate towards a common goal.
With the introduction of modern Web 2.0 and HTML5 web applications, our demands as customers have changed; we want to be able to access any data we want, twenty-four seven. Such demands are also pushing businesses into making data available online via web applications. Perfect examples of this are online banking systems and online shopping websites.
All of these advancements in web applications have also attracted malicious hackers and scammers, because, as in any other industry, there is money to be gained illegally. This has led to the birth of a new and young industry: web application security.
This article explains the basics and myths of web application security and how businesses can improve the security of their websites and web applications and keep malicious hackers at bay.
Table of Contents
This is most probably the most common web application security myth: many think that the network firewall they have in place to secure their network will also protect the websites and web applications sitting behind it.
Network security differs from web application security. In network security perimeter defences such as firewalls are used to block the bad guys out and allow the good guys in. For example administrators can configure firewalls to allow specific IP addresses or users to access specific services and block the rest.
But perimeter network defences are not suitable for protecting web applications from malicious attacks. Business websites and web applications have to be accessible to everyone, therefore administrators have to allow all incoming traffic on ports 80 (HTTP) and 443 (HTTPS) and hope that everyone plays by the rules.
Network firewalls cannot analyse the web traffic sent to and from web applications, therefore they can never block malicious requests sent by someone trying to exploit a vulnerability such as an SQL injection or cross-site scripting.
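To make this concrete, here is a minimal sketch, in Python, of the kind of request a network firewall happily lets through; the domain and the "id" parameter are placeholders invented for illustration.

```python
from urllib.parse import urlencode

# Hypothetical attack request. To a network firewall this is ordinary
# HTTP traffic on port 80; the injection payload lives inside the query
# string, which a port-and-address filter never inspects.
params = {"id": "1' OR '1'='1"}  # classic SQL injection payload
url = "http://example.com/products.php?" + urlencode(params)
print(url)
```

Only something that parses and understands the HTTP request itself, such as a web application firewall or the application's own input validation, can tell this apart from a legitimate product lookup.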
Network security scanners are designed to identify insecure server and network device configurations and vulnerabilities, not web application vulnerabilities. For example, if an FTP server allows anonymous users to write to the server, a network scanner will flag the problem as a security threat. Network security scanners can also be used to check whether all of the scanned components, mainly servers and network services such as FTP, DNS and SMTP, are fully patched.
A web application firewall, also known as a WAF, does analyse both HTTP and HTTPS web traffic, hence it can identify malicious attacks. For example, if an attacker is trying to exploit a number of known web application vulnerabilities in a website, the WAF can block the connection, stopping the attacker from successfully hacking the website. But this approach has a number of shortcomings:
A web application firewall determines whether a request is malicious by matching it against preconfigured patterns. Therefore, most of the time a web application firewall cannot protect you against new zero-day vulnerability variants.
A web application firewall is user-configurable software or an appliance, which means it depends on one of the weakest links in the web application security chain: the user. If not configured properly, the web application firewall will not fully protect the web application.
A web application firewall does not fix and close the security holes in a web application; it only hides them from the attacker by blocking the requests that try to exploit them. Therefore, if the web application firewall has a security issue of its own and can be bypassed, as seen in the next point, the web application vulnerability can still be exploited.
A web application firewall is a normal software application that can have its own vulnerabilities and security issues. Over time, security researchers have identified several vulnerabilities in web application firewalls that allow hackers to gain access to the firewall's admin console, switch the firewall off, or even bypass it altogether.
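The first shortcoming is easy to see in a minimal sketch. The signature and payloads below are invented for illustration; real WAF rulesets are far larger, but the principle is the same: a request that does not match a known pattern sails through.

```python
import re

# One illustrative signature; a real WAF ships thousands of these.
SIGNATURES = [re.compile(r"union\s+select", re.IGNORECASE)]

def waf_allows(request_param: str) -> bool:
    """Return True if no signature matches, i.e. the request is let through."""
    return not any(sig.search(request_param) for sig in SIGNATURES)

# The textbook payload matches the signature and is blocked...
print(waf_allows("1 UNION SELECT password FROM users"))     # False (blocked)
# ...but a trivially obfuscated variant of the same attack slips past.
print(waf_allows("1 UNION/**/SELECT password FROM users"))  # True (allowed)
```

The underlying SQL injection vulnerability is unchanged in both cases; only the fix in the application code removes it.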
Overall, web application firewalls are an extra defence layer, not a solution to the problem. In other words, if the budget permits, it is good practice to add a WAF after auditing a web application with a web vulnerability scanner. Additional layers of security are always welcome!
Web application vulnerabilities should be treated like normal functionality bugs and should therefore always be fixed, irrespective of whether there is a firewall or any other type of defence mechanism in front of the application. In fact, web application security testing should be part of the normal QA tests.
To ensure that a web application is secure you have to identify all security issues and vulnerabilities within the web application itself before a malicious hacker identifies and exploits them. That is why it is very important that the web application vulnerabilities detection process is done throughout all of the SDLC stages, rather than once the web application is live.
There are several ways to detect vulnerabilities in web applications. You can scan the web application with a black box scanner, do a manual source code audit, use an automated white box scanner to identify coding problems, or do a manual security audit and penetration test.
Which is the best method? There is no single bulletproof method that will identify all vulnerabilities in a web application. Each of the methods mentioned above has its own pros and cons.
For example, while an automated tool can discover almost all technical vulnerabilities, more than even a seasoned penetration tester can, it cannot identify logical vulnerabilities; those can only be found through a manual audit. On the other hand, a manual audit is inefficient, can take a considerable amount of time, can cost a fortune, and still carries the risk of leaving vulnerabilities unidentified. White box testing complicates the development process and can only be done by developers who have access to the code.
If budget and time permit, it is recommended to use a variety of tools and testing methodologies, but in reality hardly anyone has the time and budget for that. Therefore, one has to choose the most cost-effective solution that can realistically emulate a malicious hacker trying to hack a website: a black box scanner, also known as a web application security scanner or web vulnerability scanner. Of course, an automated web application security scan should always be accompanied by a manual audit; only by using both methodologies can you identify all types of vulnerabilities, both logical and technical.
A black box web vulnerability scanner, also known as a web application security scanner is a software that can automatically scan websites and web applications and identify vulnerabilities and security issues within them. Web application security scanners have become really popular because they automate most of the vulnerability detection process and are typically very easy to use. For example to use a white box scanner one has to be a developer and needs access to the source code, while a black box scanner can be used by almost any member of the technical teams, such as QA team members, software testers, product and project managers etc.
There are several commercial and non-commercial web vulnerability scanners available on the internet, and choosing the one that meets all your requirements is not an easy task. The best way to find out which scanner is best for you is to test them all. Below are some guidelines to help you plan your testing and identify the right web application security scanner.
Many factors will affect your decision when choosing a web application security scanner. The first obvious one is: should I use commercial software or a free, non-commercial solution? I recommend, and have always preferred, commercial software, for several reasons: frequent updates to both the software itself and the web security checks, ease of use, professional support, and more. For a detailed explanation of the advantages of a commercial solution over a free one, refer to the article Should You Pay for a Web Application Security Scanner?
Will you be scanning a custom web application built with .NET, or a well-known web application built in PHP, such as WordPress? Whichever web application you will be scanning, the scanner you choose must be able to crawl and scan your website. Although this sounds obvious, in practice it often is not the case.
For example, many choose a web vulnerability scanner based on the results of comparison reports released over the years, or on what the web security evangelists say. Although such information can indicate who the major players are, your purchasing decision should not be based entirely on it.
Many others take another wrong approach when comparing web vulnerability scanners: they scan popular vulnerable web applications, such as DVWA, bWAPP or other applications from OWASP's Broken Web Applications Project. This is the wrong approach because unless the web applications you want to scan are identical (in terms of coding and technology) to these broken web applications, which I really doubt, you are just wasting your time. Such vulnerable web applications are built for educational purposes and are not in any way similar to a real live web application.
The best approach to identify the right web application security scanner is to launch several security scans using different scanners against a web application, or a number of web applications that your business uses. Note that it is recommended to launch web security scans against staging and testing web applications, unless you really know what you are doing.
During the test scans, verify which of the automated black box scanners has the best crawler; the crawler is the component used to identify all the entry points and attack surfaces in a web application before attacking it. It is probably the most important component, because a vulnerability cannot be detected unless the crawler identifies the vulnerable entry point.
To identify the scanner with the best coverage, compare the lists of pages, directories, files and input parameters each crawler identified, and see which of them found the most, or ideally all, of them. If a particular scanner was unable to crawl the web application properly, it might simply need to be configured, which brings us to the next point: ease of use.
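Comparing crawler coverage can be as simple as diffing the exported entry-point lists. A minimal sketch follows; the URL lists are invented for illustration, but most scanners can export the pages and parameters they discovered.

```python
# Hypothetical entry points discovered by two scanners' crawlers.
scanner_a = {"/index.php", "/login.php", "/search.php?q=", "/admin/"}
scanner_b = {"/index.php", "/login.php", "/search.php?q="}

# Entry points scanner A found that scanner B's crawler never reached;
# any vulnerability behind these is invisible to scanner B.
missed_by_b = sorted(scanner_a - scanner_b)
print(missed_by_b)
```

Running the same diff in both directions across all the scanners you are testing quickly shows which crawler gives the most complete picture of the attack surface.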
While some black box scanners can automatically crawl almost any type of website using an out of the box configuration, some others might need to be configured before launching a scan.
Because web application security is a niche industry, not all businesses will have web security specialists who can understand and configure a web application security scanner. Therefore, go for an easy-to-use scanner that can automatically detect and adapt to the most common scenarios, such as custom 404 error pages, anti-CSRF protection on the website, URL rewrite rules, etc.
Easy-to-use web application security scanners also have a better return on investment, because you do not have to hire specialists, or train team members, to use them.
The next factor in comparing web application security scanners is which of them can identify the most vulnerabilities that are not false positives. I have seen vulnerability scanners identify hundreds of vulnerabilities on a website, of which more than 70% were false positives.
If a scanner reports a lot of false positives, developers, QA people and security professionals will spend more time verifying the findings than focusing on remediation, so avoid such scanners. For more information about false positives and their negative effect on web application security, refer to the article The Problem of False Positives in Web Application Security and How to Tackle Them.
The more a web application security scanner can automate, the better. For example, imagine a web application with 100 visible input fields, which by today's standards is a small application. If a penetration tester had to manually test each input for all known variants of cross-site scripting (XSS), launching hundreds of test payloads per field, the visible parameters alone would add up to tens of thousands of tests. At around two minutes per test, even if everything goes smoothly, that is weeks of nonstop testing. And that is just the visible parameters; what about the ones under the hood?
Typically there is much more going on in a web application under the hood than what can be seen. It is therefore difficult for a penetration tester to identify all the attack surfaces of a web application in a reasonable time, while an automated web application security scanner can run the same tests and identify all the "invisible" parameters in around 2 or 3 hours.
But it is not just about time and money. A web application penetration test is limited by the hired professional's knowledge, while a typical commercial web application security scanner contains a large number of security checks and variants, backed by years of research and experience.
Therefore automation is another important feature to look for. An automated security test costs less and is done more efficiently. For more information about the advantages of automating web application vulnerability detection, refer to Why Web Vulnerability Testing Needs to be Automated.
Web application security should be catered for during every stage of the development and design of a web application. The earlier security is included in the project, the more secure the web application will be, and the cheaper and easier it will be to fix issues identified at a later stage.
For example, an automated web application security scanner can be used throughout every stage of the software development lifecycle (SDLC), even when the web application is in its early stages of development and has just a couple of non-visible inputs. Testing in the early stages is of the utmost importance, because if such inputs are the basis of all other inputs, it will later be very difficult, if not impossible, to secure them without rewriting the whole web application.
There are several other advantages to using a vulnerability scanner throughout every stage of the SDLC. For example, developers are automatically trained to write more secure code, because apart from identifying vulnerabilities, most commercial scanners also provide a practical solution for fixing them. This helps developers understand and learn more about web application security.
Scanning a web application with an automated web application security scanner will help you identify technical vulnerabilities and secure parts of the web application itself. But what about the logical vulnerabilities and all the other components that make up a web application environment?
Web application security scanners can only identify technical vulnerabilities, such as SQL injection, cross-site scripting and remote code execution. Therefore an automated web application security scan should always be accompanied by a manual audit to identify logical vulnerabilities.
Logical vulnerabilities can also have a major impact on business operations, therefore it is very important to analyse the web application manually, testing several combinations of actions and ensuring that the web application works as it was meant to.
Imagine a shopping cart that has the price of the item specified as a parameter in the URL, along the lines of this hypothetical example: http://shop.example.com/checkout?item=1842&price=250
What happens if the user changes the price from $250 to $30 in the URL? Will the user be able to proceed with the checkout and pay just $30 for an item that costs $250? If yes then that is a logical vulnerability that could seriously impact your business.
These types of vulnerabilities can never be identified by an automated tool because tools do not have the intelligence that allows them to determine the effect such a parameter could have on the operations of the business.
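A minimal sketch of the fix is worth spelling out: the server must treat any client-supplied price as untrusted and look the real price up itself. The catalogue data and function name below are hypothetical, invented for illustration.

```python
# Server-side price catalogue: item id -> price in dollars.
# This is the only source of truth for prices.
CATALOGUE = {"1842": 250}

def checkout_total(item_id: str, client_supplied_price: int) -> int:
    # Deliberately ignore whatever price the client sent in the URL;
    # always charge the price the server knows about.
    return CATALOGUE[item_id]

# Even if the user tampers with the URL and sends price=30,
# the checkout still charges the real price.
print(checkout_total("1842", 30))
```

The same principle applies to any value that affects business logic: quantities, discount codes, account identifiers, and so on should all be validated or resolved server-side.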
There are several other components in a web application farm that make the hosting and running of a web application possible. Even a very basic environment includes web server software (such as Apache or IIS), a web server operating system (such as Windows or Linux), a database server (such as MySQL or MS SQL) and a network-based service that allows administrators to update the website, such as FTP or SFTP.
All of the components that make up the web server also need to be secure, because if any of them is broken into, a malicious attacker can still gain access to the web application and retrieve data from the database or tamper with it. Therefore, refer to the security guidelines and best-practice documentation for the software you are using on your web server. Below are also some basic security guidelines that apply to any type of server and network-based service:
The more functionality a network service or operating system has, the bigger the chances are of having an exploitable entry point. Therefore switch off and disable any functionality, services or daemons which are not used by your web application environment. For example typically a web server operating system has an SMTP service running. If you are not using such service switch it off and ensure that it is permanently disabled.
Ideally administrators should be able to login to the web server locally. If not possible though ensure that any type of remote access traffic such as RDP and SSH is tunnelled and encrypted. It would also be beneficial if you can limit the remote access to a specific number of IP addresses, such as those of the office.
Administrators typically do not like restrictions on their own accounts, because limited privileges can make specific tasks cumbersome. But if you work towards the right balance between security and practicality, you can have a secure web server while administrators can still do their job. For example, an administrator can have different accounts for different tasks: one used specifically for backups, one for generic operations such as pruning log files, and one used solely to change the configuration of services such as FTP, DNS, SMTP, etc.
By using such approach you are limiting the damage that could be done if one of the administrator’s account is hijacked by a malicious attacker.
The same principle applies to every other type of service and application, not just user accounts. For example, most of the time the database user a web application uses to connect to the database only needs to read and write data, and does not need privileges to create or drop tables. Yet administrators often give the account all possible privileges, because that way it "will always work".
Another typical example of this problem is FTP users. FTP accounts used to update the files of a web application should only have access to those files and nothing else. Take the time to analyse every application and service you are running, and ensure that each user, application and service is given the least possible privileges.
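As a hedged illustration of the database case (MySQL-style syntax; the database, user and host names are invented), the difference between a least-privilege account and the "will always work" default looks like this:

```sql
-- Least-privilege account: the web application can read and write rows
-- in its own database, but cannot alter the schema or touch other databases.
GRANT SELECT, INSERT, UPDATE, DELETE ON webapp_db.* TO 'webapp'@'localhost';

-- The "will always work" anti-pattern this section warns against:
-- GRANT ALL PRIVILEGES ON *.* TO 'webapp'@'localhost';
```

With the restricted account, an attacker who finds an SQL injection in the web application can still read and modify that application's data, but cannot drop tables, create new users, or pivot into other databases on the same server.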
It is of utmost importance to always segregate live environments from development and testing environments. By mixing such environments you are inviting hackers into your web application.
When developing or troubleshooting a web application, developers leave traces behind that can help a malicious hacker craft an attack against it. For example, debugging, which can expose sensitive information about the web application's environment, may be left enabled. Log files containing sensitive information about the database setup can be left on the website, where malicious users can access them.
That is why it is important that any development and troubleshooting is done in a staging environment. Once the development and testing of a web application is finished, the administrator should apply the changes to the live environment, ensure that none of the applied changes pose a security risk, and confirm that no files, such as log files or source code files with sensitive technical comments, are uploaded to the server.
Similarly, the same applies to the data itself. Do not keep unrelated information in the same database, such as customers' credit card numbers and website user activity; store such data in separate databases, using different database users.
Apply the same segregation concept to the operating system and web application files. Ideally, the web application files, i.e. the directory published on the web server, should be on a separate drive from the operating system and log files. By doing so you avoid exposing operating system files to a malicious attacker who manages to exploit a vulnerability on the web server.
Even though this is one of the most important steps in any type of security, it is unfortunately still the most overlooked. It cannot be stressed enough how important it is to always run the most recent version of the software you use and to always apply the vendor's security patches. By doing so you ensure that malicious hackers cannot exploit known security vulnerabilities in your software.
As the name implies, log files keep a log of everything happening on the server; they are not there simply to consume an infinite amount of hard disk space. Every administrator should analyse the server log files from time to time. By doing so, administrators can uncover a lot of information, such as suspicious behaviour on the server, and can therefore better protect the web server, or, in the case of an attack, can easily trace back what happened and what was exploited.
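As a minimal sketch of the idea, even a few lines of scripting can surface suspicious requests in an access log. The log lines and the patterns below are invented for illustration and are nowhere near a complete ruleset.

```python
import re

# A few illustrative attack signatures: SQL injection, path traversal, XSS.
SUSPICIOUS = re.compile(r"union\s+select|\.\./|<script", re.IGNORECASE)

# Invented access-log excerpt; in practice you would read the real log file.
log_lines = [
    '10.0.0.5 - - "GET /index.php HTTP/1.1" 200',
    '10.0.0.9 - - "GET /view.php?f=../../etc/passwd HTTP/1.1" 404',
    '10.0.0.7 - - "GET /about.php HTTP/1.1" 200',
]

flagged = [line for line in log_lines if SUSPICIOUS.search(line)]
for line in flagged:
    print("suspicious:", line)
```

Even this crude filter highlights the path traversal attempt in the second line, and the 404 status shows the probe failed; a spike of such entries is exactly the kind of behaviour regular log reviews are meant to catch.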
Apart from a web application security scanner you should also use a network security scanner and other relevant tools to scan the web server and ensure that all services running on the server are secure. Security tools should be included in every administrator’s toolbox.
Last but not least, stay informed! Today you can find a lot of free information on the internet from a number of web application security blogs and websites. By keeping up with what is happening in the web application security industry, or any other industry related to your job, you are arming and educating yourself, so you will be able to better protect and secure your web servers and web applications.