
Meaning and History of Application Security

Updated: Sep 1, 2020

Application Security (AppSec) is a relatively recent addition to the list of possible tech careers, and it may not make a lot of sense to those outside the information security space. The role is part hacker, part developer, and part researcher, which makes it hard to define where the job begins and ends. Application Security is also a job in very high demand due to a lack of qualified talent, and with a good amount of computer experience you can self-train your way into it. We’ve already written a bit about getting into Application Security, so let’s get into what Application Security means and its history.

Examine the wording

Let’s break down the job title a little bit and get at the heart of what Application Security is. Application essentially refers to computer software. The modern world now revolves around these ‘applications,’ which we use for daily tasks like banking, shopping, and work. All of these applications are written by humans using computer code.

Unfortunately, computer code can be written insecurely. Security refers to the hardening of this code so that it cannot be abused by malicious hackers. News outlets regularly report on security incidents and breaches that leak company data onto the internet, and a large number of those security events involve exploiting applications. The 2020 Verizon Data Breach Investigations Report shows that breaches in the last year fell into the ‘Web Application’ pattern roughly 30% of the time, more than any other single pattern except the catch-all ‘Everything Else.’

Code vulnerabilities are a subset of bugs that cause unintended behavior in an application, generally leading to a breach of one or more of the following: Confidentiality, Integrity, and/or Availability, known as the CIA Triad. Confidentiality refers to privacy: keeping secret data secret, visible only to those authorized to view it. Integrity refers to the consistency and accuracy of data, preventing it from being altered or abused by anyone unauthorized to access it. Finally, Availability refers to data and services being accessible when needed, including resiliency to denial-of-service.

AppSec’s position in an InfoSec Organization

The computer security field at large is known as Information Security (InfoSec). Depending on the size of an organization, it will have a Chief Information Security Officer (CISO) at the top, who is generally responsible for the security of the company’s tech organization. Under the CISO there will generally be a few different groups.

There will be a ‘SOC,’ or Security Operations Center, that actively monitors the organization’s network for malicious activity. Security engineers handle tooling like email gateways, firewalls, intrusion detection or prevention systems, and other security devices. There will usually also be people who handle the security policies and procedures that apply to the tech organization as a whole. A much larger organization may also have a security architecture team that designs secure patterns and solutions for the organization’s unique needs.

Finally, there is the application security team. These are the impartial arbiters of the development organization, confirming that whatever code comes out of the tech organization is safe and secure for its customers’ use. You may then be asking yourself, “Why do we need people to confirm the security of code when the developers can just do it themselves?” Well, in a perfect world where all code is written securely, AppSec is not really needed. But we don’t live in a perfect world, so we still need impartial arbiters across jobs and industries to ensure the work is done safely. Think of AppSec as OSHA, but for programmers.

Also, classically trained developers who went through computer science courses in college usually receive only a basic understanding of code security concepts, enough to keep really egregious errors from making their way into code. However, code security can be much more nuanced, and we cannot ignore the human-error side of things, as we don’t always get everything right all the time.

Penetration Testing vs Application Security

There is a hazy divider between what is considered Penetration Testing and Application Security. When I got started 10 years ago in this industry, there really was no distinction. Application Security Specialists and Engineers would test websites dynamically, sending attacks to applications as if they were a real-world hacker, analyzing the application for insecurities and weaknesses. These would be bundled up as a report for that application’s responsible engineering team to review and correct the issues in the code.

The type of work I do now as an Application Security Engineer is quite different from the work I was doing back then. I went from doing pure penetration testing to doing higher-level security analysis work like application architecture review, threat modeling, and code security review. Now I have moved on to even bigger-picture issues, looking at how application security functions within the business and how to introduce security automation into code pipelines.

There are a few variables that help define what application security work is required in an organization. How large is the information security organization in the business? What is your experience in the field? And how has application security itself evolved to where it is today? The size of the business matters because people generally need to wear more hats in a smaller business, especially on the Information Security team. When I was on a smaller team, I was required to do all of the above: penetration testing, threat modeling, and secure code review. In a larger organization, I can now focus on the higher-level application security tasks because we have our own dedicated penetration testing team. Your experience in the field also matters, as you may be asked to perform only certain tasks while others are reserved for those with more experience.

But what’s more interesting are the overarching changes in our industry over the past decade-plus that have altered what Application Security is all about. In my time doing this work, the known-good processes, industry standards, and security recommendations have matured considerably. The version of AppSec someone would step into in 2020 is a different beast from the one I started with ten years ago.

Let’s dive a little bit into the history of Application Security so that we may get a better picture of the future.

A brief history of Application Security

Application Security is fairly late to the game as far as wide acknowledgement goes, but let’s look at some earlier forms of code security and notable events. In the ’70s and most of the ’80s, code security wasn’t really thought of as a risk. Most computer risk at that time centered on physical security, theft, and insider threats like access to secret documents. Earlier than that, encryption and decryption of messages was a major concern, especially during the wars of the twentieth century. At the time it didn’t really cross anyone’s mind that you could write computer software for malicious purposes.

In 1971, a researcher named Bob Thomas realized that you could write a computer program that could jump between network nodes and leave a message on each, which he called ‘The Creeper.’ The Creeper would spread over ARPANET and leave the following message on machines: “I’M THE CREEPER : CATCH ME IF YOU CAN.” Shortly thereafter, Ray Tomlinson wrote a program called ‘The Reaper’ that spread in the same manner as The Creeper and deleted it wherever it was found.

The back-and-forth history of malware is a long story in and of itself, but the first major malware outbreak was the ‘Morris Worm’ in 1988. Robert Morris had an idea to gauge the size of the internet by writing a program that would propagate across computer networks, exploit known Unix bugs, and copy itself. However, due to human error, the Morris Worm replicated itself so aggressively that control of it was lost, and it began to take infected machines down. And the Unix bugs it took advantage of? A debug feature in the ‘sendmail’ program and a buffer overflow vulnerability in the ‘finger’ daemon.

Up to this point, code security was mostly about malware, and anti-virus emerged as a new industry. However, it was almost time for the world wide web’s turn. For a long period, websites on the early web were just simple documents delivered over the internet. Released in 1995 by Netscape, JavaScript was the first foray into more reactive and interactive websites. But it wasn’t long before hackers discovered an exploit built on JavaScript: Cross-Site Scripting (XSS). And in 1998, the first SQL Injection (SQLi) was publicly documented. From that point on, more and more logic has been introduced into websites, making the internet an essential modern tool that unfortunately is built on a pyramid of matchsticks.

In the early 2000s we began to see the first tools and companies start up around protecting against these types of attacks. OWASP, the Open Web Application Security Project, was founded in 2001 and spearheads many of the standards put forth for application security. The Payment Card Industry released the first Data Security Standard (PCI DSS) in 2004, outlining the minimum security requirements for handling credit card data.


But it wasn’t until 2005 that application security gained mainstream notoriety. Not unlike The Creeper from 1971, the ‘Samy Worm’ hit MySpace using a combination of Cross-Site Scripting and Cross-Site Request Forgery. It spread from MySpace page to MySpace page, appending “but most of all, Samy is my hero” to the heroes section of each victim’s profile. Within 20 hours, over a million MySpace users had run the payload, and MySpace was taken offline.

And although it’s been about twenty years since the inception of Web Application Security (WebAppSec), there is still a huge problem with under-staffing. Web-facing companies are almost always at a deficit of qualified AppSec people. In general, there are upwards of one hundred developers for every one AppSec person. That is a ton of code being written without oversight. And the definition of that oversight is constantly changing.

Two general realms of AppSec

There are two general realms when we talk about application security, and both are viable career paths in their own right. One side deals with thick-client applications that run directly on the operating system; the other deals with thin clients running on the internet, which is Web Application Security.

Both of these disciplines use similar techniques, like code review, static analysis, design review, threat modeling, and fuzz testing. However, the approach is quite different. When testing an application that runs on the operating system, there are processes and tools involved that you wouldn’t use against a web application. First, thick clients run outside of the web browser, so browser-based issues are no longer a concern. Thick clients have their files and executables on the file system, available for review. These executables can be disassembled to look at a detailed flow of their underlying instructions. ‘Fuzz testing’ involves rapidly introducing inputs into the application until you begin to see crashes or odd behaviors that need to be analyzed, as in the sketch below.
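To make that last idea concrete, here is a minimal fuzzing sketch in Python. The target function parse_record() is made up purely for illustration, and real fuzzers (coverage-guided tools, for example) are far more sophisticated, but the core loop of generating inputs and watching for crashes looks roughly like this:

import random

def parse_record(data: bytes) -> None:
    """Hypothetical target standing in for the real code under test
    (a file parser, network handler, etc. inside a thick client)."""
    if len(data) > 3 and data[0] == 0x7F:
        raise ValueError("malformed header")  # stand-in for a crash or odd behavior

def random_input(max_len: int = 64) -> bytes:
    """Generate a random byte string to throw at the target."""
    length = random.randint(1, max_len)
    return bytes(random.randint(0, 255) for _ in range(length))

if __name__ == "__main__":
    crashes = []
    for _ in range(10_000):              # rapidly introduce inputs...
        data = random_input()
        try:
            parse_record(data)
        except Exception as exc:         # ...and record anything that blows up for later analysis
            crashes.append((data, exc))
    print(f"found {len(crashes)} crashing inputs out of 10,000 attempts")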

With the internet and websites the way they are now, Web Application Security is a discipline in its own right, and it is what I primarily focus on. Remember, when someone says something runs in the cloud, they just mean someone else’s computer. Since you cannot interact with the web application directly on the operating system, you have to interact with it via the network protocols it understands, and the vast majority of the time that is HTTP/S. Most of the principles of application security apply in both realms, but the approach is different and requires different training, skills, and tools. A dynamic web test might start with something as simple as the probe script below.
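As a rough sketch of what ‘interacting over HTTP/S’ means in practice, this snippet sends a few probe values to a made-up search endpoint and checks whether they come back reflected in the response. The URL and parameter name are hypothetical, and a real assessment would only ever target systems you are authorized to test.

import requests

# Hypothetical target for illustration only.
TARGET = "https://app.example.com/search"

# A few probe values a dynamic tester might send to a search parameter.
probes = ["hello", "<script>alert(1)</script>", "' OR '1'='1"]

for payload in probes:
    resp = requests.get(TARGET, params={"q": payload}, timeout=10)
    # If the payload comes back in the HTML unencoded, the application may be
    # reflecting input without sanitizing it, which is a lead worth investigating.
    reflected = payload in resp.text
    print(f"{payload!r}: status={resp.status_code} reflected={reflected}")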

Product Security

There is also a more recent addition under the Application Security umbrella known as Product Security. Product Security maintains all the same principles as Application Security as a whole, but the team is generally concerned only with the products an organization produces, rather than the security of the organization itself. For example, Apple would have an Information Security team concerned with the Apple organization itself, and there would also be an iPhone security team concerned purely with the security of the iPhone.

Sample Web Application Security Vulnerabilities

Looping back, let’s look at two issues that can affect web applications: XSS and SQL Injection. First coined by Microsoft in January 2000, Cross-Site Scripting turns a victim’s browser against them. Developers use JavaScript to add dynamic and interactive features to their web applications, but this can be abused by malicious actors using Cross-Site Scripting. If a web application does not properly sanitize external inputs, those inputs can be reflected back into the page as-is. If they contain valid JavaScript, it will execute in the victim’s browser, potentially performing a whole slew of malicious actions.
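Here is a minimal sketch of what that looks like in practice, using Flask purely for illustration; the routes and parameter names are made up. The first handler reflects user input into the HTML unmodified, while the second HTML-encodes it first.

from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

# Vulnerable: the "q" parameter is reflected into the HTML response as-is, so a
# payload like <script>alert(document.cookie)</script> executes in the victim's browser.
@app.route("/search-vulnerable")
def search_vulnerable():
    query = request.args.get("q", "")
    return f"<h1>Results for: {query}</h1>"

# Safer: HTML-encode the untrusted input before reflecting it, so the payload
# renders as harmless text instead of executing as script.
@app.route("/search-safe")
def search_safe():
    query = request.args.get("q", "")
    return f"<h1>Results for: {escape(query)}</h1>"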

SQL Injection is kinda what it sounds like. SQL is a database query language that has been around forever. It makes sense to use it to persist data for your web application, so you write code that calls out to the database with SQL queries, retrieves data, and displays it back to your users. However, depending on how you build those SQL queries in code, it’s possible for a malicious actor to slip SQL commands of their own into the query that gets passed along to the database.
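A minimal sketch, using Python’s built-in sqlite3 module and a made-up users table, shows the difference between concatenating input into a query and using parameterized queries.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'hunter2')")

def login_vulnerable(username: str, password: str) -> bool:
    # Vulnerable: untrusted input is concatenated straight into the query, so a
    # username of  admin' --  comments out the password check entirely.
    query = f"SELECT * FROM users WHERE username = '{username}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(username: str, password: str) -> bool:
    # Safer: parameterized queries keep data separate from SQL, so the input is
    # treated as a literal value and never as SQL syntax.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

print(login_vulnerable("admin' --", "anything"))  # True: authentication bypassed
print(login_safe("admin' --", "anything"))        # False: no such username

The parameterized version treats the attacker’s input as plain data rather than SQL, which is why parameterized queries are the standard defense against SQL Injection.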

Impact of these issues

So what’s the impact of these issues? With Cross-Site Scripting, JavaScript has access to many browser resources that can be exploited. First, JavaScript can rewrite portions of the webpage so that it looks normal visually while performing malicious actions in the background, like stealing usernames and passwords. Second, JavaScript has access to browser cookies, and those cookies typically contain sensitive token values that could allow a malicious actor to access your account.

With SQL Injection, the impact is more obvious. A user could log in as someone else, perhaps an administrator, or worse, dump the database and steal all of your private data!

Conclusion

Having a good understanding of the history of Application Security can help you understand some of the ‘whys’ we see nowadays. This is crucial so we do not repeat our mistakes. And like all other industries, the way we do this work is constantly maturing. Whether we like it or not, computer code’s primary vulnerability is human error, and human error is a factor that will never go away.

 

About the Author

Aaron is an Application Security Engineer with over 10 years of experience. His unorthodox career path has led to many unique insights in the security industry.

