When building software, choosing a design with the right security attributes matters. In this article, we'll cover the main benefits of secure design and why it can be your best option.
The early stages of development are when our design decisions can have a huge impact on aspects like scalability, extensibility, and of course, security.
Blindly picking a design without regard for context or a bit of forward thinking can be an expensive decision, one that takes a much bigger toll to correct or reverse later. That is why secure design can be your best option. Let's explore how!
What is secure design?
Secure design, or secure by design, aims to lay the foundations of software so that the product we build is inherently safe. Building software from secure design blueprints helps developers take the right precautions and follow established guidelines to produce code with the proper security attributes, something that would otherwise depend entirely on developer expertise and a bit of luck. A secure software design should also account for risks and known issues so that countermeasures can be planned and implemented ahead of time.
Designing for security is no different from designing software to be scalable or resilient. It takes a certain level of experience for designers to pick the best alternative for the job, but experience is not all there is. Nowadays, software is built on standards, best practices, framework patterns, infrastructure, and time constraints, all of which help designers document and communicate ideas and learn from the experience of other software solutions.
What about insecure designs?
It may sound simple, but shockingly, insecure design is one of the top 10 most common vulnerability categories in recent OWASP reports. OWASP created this new category in response to the increasing number of findings related to it.
These insecure design flaws vary in impact, exposure, exploitability, and the resources involved, but they all start the same way: somewhere along the line, a design was drawn up without thinking ahead about security concerns, or without the experience or guidance to do so.
Insecurely designed applications are rarely cheap or easy to fix, because what makes them vulnerable is not an algorithm or a handful of isolated methods in our code; it is the very rationale of how something was built that needs to change.
Secure design put into practice
We have covered plenty of theory so far, so let's review a few simple design examples that will hopefully make this clearer.
First, let's analyze a few designs for a password recovery page. Everyone is familiar with these pages, since users rely on them constantly thanks to forgotten, blocked, or poorly chosen passwords. There are multiple approaches to implementing them, each with different costs, benefits, and downsides, but not all of them properly take security into account.
Not long ago, answering security questions sounded like a good idea. More modern implementations take advantage of a security token generator pre-installed on a device, require you to follow a link sent to a predefined recovery email account, or send a text message to a predefined cell phone.
The differences between approaches are clear in cost and implementation time, but let's look at them more closely. Our cheapest and simplest approach, security recovery questions, doesn't give us much proof of identity, as it can be easily defeated by anyone with basic knowledge about the account owner.
A middle-ground approach is to send an activation link, or to prompt for a security token, delivered to a predefined and validated email address. This gives the user some confidence, since only the intended recipient of the recovery email should be able to get it.
Simple as it may seem, and despite how much it helps secure accounts, the recovery link or text message is more complex to build and requires more work than plain security questions. On top of that, sending emails from the application can be exploited for phishing if the recovery email does not include a certain level of anti-phishing protection.
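To make the recovery-link approach more concrete, here is a minimal sketch of how a time-limited, single-use recovery token could be issued and verified. The in-memory store and helper names are assumptions for illustration only; a real application would persist the hashed token server-side and deliver the raw value to the user exclusively by email.

```python
import hashlib
import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory store; a real application would persist this
# server-side (database or cache), keyed by user.
RESET_TOKENS = {}

TOKEN_TTL = timedelta(minutes=30)


def create_recovery_token(user_id: str) -> str:
    """Issue a single-use, time-limited recovery token for a user."""
    token = secrets.token_urlsafe(32)  # unguessable random value
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    RESET_TOKENS[user_id] = (token_hash, datetime.now(timezone.utc) + TOKEN_TTL)
    return token  # emailed to the user; only the hash is stored


def verify_recovery_token(user_id: str, token: str) -> bool:
    """Accept the token only if it matches, has not expired, and is unused."""
    record = RESET_TOKENS.get(user_id)
    if record is None:
        return False
    token_hash, expires_at = record
    if datetime.now(timezone.utc) > expires_at:
        RESET_TOKENS.pop(user_id, None)
        return False
    candidate = hashlib.sha256(token.encode()).hexdigest()
    if not secrets.compare_digest(token_hash, candidate):
        return False
    RESET_TOKENS.pop(user_id, None)  # single use: invalidate on success
    return True
```

The key properties are the ones discussed above: the token is random and unguessable, it expires, and it can only be used once, so possession of the recovery email is the actual proof of identity.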
Let’s look at another design scenario but this time on a different layer — an authorization logic which would allow us to test action permissions for a given user.
There are many implementations of these kinds of rule engines, but they all need to return the same outcome: grant or deny an action. Permission testing is often designed so that users are granted every action unless stated otherwise. The other approach is to deny every action unless it is explicitly granted.
Setting up permissions that only list the actions users are forbidden to perform is simpler, takes less time, and requires less maintenance than defining everything explicitly. However, it is far more prone to leaving security gaps behind.
In contrast, being explicit about what users can do takes more time, development, and analysis, and it is not rare to find missing permission mappings. When a mapping is omitted from an update, the result is an access-denied bug.
The question we should ask ourselves is: would we rather fix access-denied bugs when something gets omitted, or accept the risk of overexposing actions by forgetting to deny them during development, a very frequent issue on large development teams?
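Here is a minimal sketch of the deny-by-default approach. The roles, actions, and in-memory registry are hypothetical; a real rule engine would load its grants from configuration or a database, but the key property is the same: anything not explicitly granted falls through to a denial.

```python
# Hypothetical permission registry: (role, action) pairs that are explicitly granted.
GRANTED = {
    ("editor", "article:edit"),
    ("editor", "article:publish"),
    ("viewer", "article:read"),
}


def is_allowed(role: str, action: str) -> bool:
    """Deny by default: an action is allowed only if it was explicitly granted."""
    return (role, action) in GRANTED


# An unmapped combination results in a denial (and possibly a bug report),
# never a silent grant.
assert is_allowed("viewer", "article:read")
assert not is_allowed("viewer", "article:edit")
```

With the allow-by-default alternative, the same omission would go unnoticed until someone exploited it, which is exactly the trade-off described above.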
One final example of security by design is whether or not to use a proven ORM for data access.
When sketching out our application design, there is usually a need for persistence, and one of the most common ways to achieve it is an ORM in front of a SQL database backing the application. A solid database access layer design, with the aid of a proven ORM, helps prevent common SQL injection attacks and protects the database from dangerous inputs or values that should have been rejected at an upper layer in the first place. Skipping a proven ORM, or simplifying data access without proper escaping or sanitization, exposes the application to many vulnerabilities. Unfortunately, there are always trade-offs, as using an ORM can have an impact on performance and initial setup time.
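As a rough sketch of the difference, using SQLAlchemy as an example, here is what concatenating input into SQL looks like next to binding it as a parameter, which is the same mechanism a proven ORM relies on under the hood. The in-memory database and `users` table are assumptions for illustration.

```python
from sqlalchemy import create_engine, text

# Hypothetical setup: an in-memory SQLite database with a minimal users table.
engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)"))
    conn.execute(text("INSERT INTO users (email) VALUES ('alice@example.com')"))


def find_user_unsafe(email: str):
    # Vulnerable: the input is concatenated straight into the statement,
    # so an email such as "' OR '1'='1" changes the query itself.
    query = f"SELECT id, email FROM users WHERE email = '{email}'"
    with engine.connect() as conn:
        return conn.execute(text(query)).fetchone()


def find_user_safe(email: str):
    # Bound parameter: the driver treats the value as data, never as SQL,
    # which is what a proven ORM does for us behind the scenes.
    with engine.connect() as conn:
        return conn.execute(
            text("SELECT id, email FROM users WHERE email = :email"),
            {"email": email},
        ).fetchone()
```

The design decision is not about one query but about the whole data access layer: if the safe pattern is the default path, individual developers no longer have to remember to escape anything.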
Final words
It may sound like choosing the safest path is always the clear winner, but as with every design decision, there are always trade-offs and constraints. When they are not communicated properly to the team and not weighed properly for the solution or context, they can cause serious project delays, refactors, production bugs, and complaints from end users.
Security by design is not free and not without risk of misuse. It has a learning curve for the team and requires adherence to standards and best practices.
Insecure design flaws have quite an impact. They rarely involve a single isolated component, and they require proper regression testing when fixed, if they can be fixed at all.