According to the 2020 Verizon Data Breach Investigations Report, exploiting vulnerabilities in the operating systems and applications we rely on is a key avenue of attack for cyber criminals, with approximately 40% of attacks starting with the exploitation of a vulnerability. With this in mind, it’s important to understand what regular application and infrastructure testing involves and the benefits it holds for your organisation.
What does application and infrastructure testing look like?
Most people will be familiar with infrastructure testing, which is concerned with “known” vulnerabilities in operating systems and applications – we’ve all experienced Windows wanting to install a patch just as we need to send an important email or meet a significant deadline! Put simply, when you’re on the latest version of a piece of software, its known vulnerabilities will have been fixed or patched. The point that often causes confusion is that the same applies to off-the-shelf applications such as Adobe’s products.
From an infrastructure perspective, patch management has always been a challenge. It starts with determining how many vulnerabilities exist across your systems and how many critical vulnerabilities have yet to be patched. Application testing follows the same basic concept, although there is an important difference between the two.
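The first step described above – working out how many critical vulnerabilities remain unpatched – amounts to simple triage over scan output. A minimal sketch in Python, using an invented record format (real scanners export CSV or JSON along broadly similar lines, and the hosts and CVE numbers here are purely illustrative):

```python
# Hypothetical scan output: each finding records the host, the CVE,
# the assessed severity, and whether a patch has been applied.
findings = [
    {"host": "web01", "cve": "CVE-2020-0601", "severity": "critical", "patched": False},
    {"host": "web01", "cve": "CVE-2019-0708", "severity": "critical", "patched": True},
    {"host": "db01",  "cve": "CVE-2020-1472", "severity": "critical", "patched": False},
    {"host": "db01",  "cve": "CVE-2018-1111", "severity": "medium",   "patched": False},
]

def critical_backlog(findings):
    """Return the critical findings that still await a patch."""
    return [f for f in findings if f["severity"] == "critical" and not f["patched"]]

backlog = critical_backlog(findings)
print(f"{len(backlog)} critical vulnerabilities awaiting a patch")
for f in backlog:
    print(f"  {f['host']}: {f['cve']}")
```

In practice this prioritised backlog, rather than the raw finding count, is what drives the patch management cycle.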
With application scanning you are looking for “unknown” vulnerabilities in custom code, so the testing needs to take a different approach. It scans for a range of potential exploits and acts more like a manual penetration test. For example, it checks whether a form on a website can be exploited to gain access to the underlying database via SQL injection.
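To make the SQL injection example concrete, here is a small self-contained sketch using Python’s built-in sqlite3 module. The table, credentials and login functions are invented for the demonstration; the point is the contrast between concatenating user input into a query and using parameterised placeholders:

```python
import sqlite3

# Toy database standing in for a real application's user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # BAD: user input is concatenated straight into the SQL string,
    # so a crafted value can rewrite the query's logic.
    query = (f"SELECT * FROM users WHERE username = '{username}' "
             f"AND password = '{password}'")
    return conn.execute(query).fetchone() is not None

def login_safe(username, password):
    # GOOD: placeholders let the driver treat input purely as data.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # True: login bypassed without the password
print(login_safe("alice", payload))        # False: the injection attempt fails
```

An application scan probes exposed forms with payloads like the one above; finding this kind of flaw requires exercising the custom code, which is why an infrastructure scan alone won’t catch it.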
Now that we’ve explored the different types of testing, the next step is to determine the frequency of your testing cycle. This is typically governed by your compliance frameworks: Cyber Essentials Plus (CE+), PCI DSS, the CIS Top 20 Critical Security Controls, NCSC guidance and so on. Frequent scanning makes sense: the longer a vulnerability remains in your system, the greater the risk of a cybercriminal exploiting your organisation’s environment. As a result, vulnerability management remains a vital tool in the fight against cyber crime.
Though vital, vulnerability management isn’t without its challenges. What are the key obstacles to getting a regular patch deployment programme working successfully?
Key vulnerability management challenges
With most organisations adopting a cloud-first strategy, at least some of the infrastructure and applications have already been moved to the cloud. It’s therefore easy to assume that the vulnerability management overhead has also been transferred to the outsourcer. However, this can lead to confusion over what the outsourcer is actually responsible for. Take infrastructure, for example. Typically, when outsourcing compute, the MSP or cloud provider will manage the environment only up to an agreed level: they may cover the stack up to the operating system, leaving it your responsibility to test and patch the applications you run in that environment.
Another example is a website that has been outsourced to a third party. A hosting provider can manage your site and patch it, but most web hosting providers aren’t cyber security experts. A company’s website is typically hosted on a multi-tenanted platform, most likely based on WordPress. WordPress itself is not a big risk if it is patched regularly: as an off-the-shelf application, it falls within the known vulnerabilities that an infrastructure scan will identify. The bigger risk lies in the plug-ins and themes, which typically don’t get updated when WordPress does. These are usually part of the dynamic code the web developer has written, so any vulnerabilities in them are unknown and require an application scan to identify.
With application scanning, a new trend is emerging that incorporates vulnerability testing into the Software Development Life Cycle (SDLC). In the past, organisations usually performed security-related activities only during testing at the end of the SDLC. As a result of this late-in-the-day approach, bugs, flaws and other significant vulnerabilities weren’t found until they were far more time-consuming, and therefore expensive, to fix. Worse still, some vulnerabilities were never found at all. A report from IBM’s Systems Sciences Institute found that a bug discovered during implementation costs six times more to fix than one identified during design, and that the cost of fixing bugs found during the testing phase can be 15 times higher than fixing those found during design.
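One simple form this “shift left” takes is encoding security rules as automated tests that run on every build, rather than waiting for an end-of-cycle penetration test. A minimal, illustrative sketch – the validator, its character rules and the probe strings are all invented for the example:

```python
import re

# A conservative allow-list for usernames: letters, digits and a few
# safe punctuation characters, 3-32 characters long. Anything else
# (quotes, angle brackets, SQL fragments) is rejected outright.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

def is_valid_username(value: str) -> bool:
    """Return True only for values matching the allow-list."""
    return bool(USERNAME_RE.fullmatch(value))

# Checks like these run in CI as part of the SDLC, so a code change
# that re-opens an injection route fails the build immediately.
assert is_valid_username("alice_01")
assert not is_valid_username("' OR '1'='1")  # classic SQL injection probe
assert not is_valid_username("<script>")     # cross-site scripting probe
print("input-validation checks passed")
```

Catching a flaw here, at design and implementation time, is exactly the cheap end of the cost curve the IBM figures describe.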
Embracing best practice to remain secure
With a variety of challenges to manage in order to remain secure, it’s unsurprising that even large, well-resourced businesses that have been breached in the past still find it challenging to keep their websites vulnerability-free. But there are some key actions you can take to keep your business and website secure.
An important best practice to adopt is to scan third-party environments where possible, and to stay in control of the patch management cycle and of changes to dynamic code. Even if your MSP is doing the work, it’s much better to be on top of your critical vulnerabilities and to work with your MSP when something is missed. After all, it’s your organisation that will be in breach if your cloud environment is compromised.
Maintel takes the cyber security of our customers extremely seriously. As such, we are offering customers a free application and infrastructure vulnerability scan to highlight the benefits of this simple step. Even if you already have scanners for both infrastructure and applications, it’s still worth taking advantage of the offer: not all scanners are equal, so it’s a good opportunity to compare the vulnerabilities each one identifies against your own results.
To find out more about security best practices, and how Maintel can support your business, click here.