Why Do Security Research?

Published: August 22, 2011

Author: Chris Wysopal, Chief Technology Officer and Co-Founder, Veracode

One of the constants over the past 15 years of my life, as I have changed jobs from software developer to IT specialist to security consultant to CTO of Veracode, has been security research. However, my personal and career goals have changed over this time, and security in the IT industry has matured and evolved. So when I think about my motivations to perform security research, I realize they, too, have changed significantly.

When I look back to the mid-1990s and my time as a researcher at L0pht Heavy Industries, a hacker think tank of sorts, I realize that back then we didn’t even call our work security research. We just called it hacking. This was because the concept of security research, sometimes called vulnerability research, was just beginning. In 1993, Dan Farmer and Wietse Venema published a paper, “Improving the Security of Your Site by Breaking Into It,” that was geared toward system administrators testing their host and network configurations by looking at the network through the eyes of an attacker. From this, network penetration testing was born. At L0pht, we took a more focused approach, which could have been documented as “improving the security of software by finding vulnerabilities and exploiting them.” We didn’t try to break into a site. We tried to break into a particular software product.

Security research was something new, and the information uncovered was very powerful. If you found a weakness in the security of a site, you could only break into that site or sites that were configured the same way. If you found a weakness in a software package, you could break into any site where that software was deployed and exposed. Whoa!

In the beginning, my motivation for security research was to educate the IT community and the world at large about the huge number of gaping holes in IT infrastructure that were created by vulnerabilities in software. We take it for granted now, but back in the mid-1990s it was a new concept to breach a site by determining what software was used and then discovering which new vulnerabilities of that software could be exploited. Today, this “discovery then exploit” process has been fully automated for simple-to-find and exploit vulnerability classes such as SQL injection.

In the late 1990s, when security research was better defined and accepted as a legitimate activity to improve security, my motivations changed. No longer did I research to spread the word about the risk insecure software posed. I researched to demonstrate my expertise in software security. It was around this time that the security research field began to specialize, and there was a desire to demonstrate your specialization. There was expertise by platform: Windows experts and Linux experts. There was expertise by type of vulnerability: web application weaknesses such as SQL injection and XSS, and memory corruption weaknesses such as heap overflows or integer overflows. There was expertise in exploitation, which typically meant bypassing network or host security controls and mitigations.

Expertise demonstration wasn’t just an individual career advancer. It was also done at the organizational level, with the formation of security research teams. @stake, the company I consulted for at the time, won business because its name was in the press as an expert in software or application security. We got hired by companies looking for expertise in the areas in which we published. Other consultancies demonstrated their expertise, too, releasing advisories and tools to find vulnerabilities. Security product companies also got in on the act: IDS and vulnerability scanning vendors all had research teams.

When security product companies got into the security research game in the early 2000s, things began to change. Security research started to become commercialized. It was a full-time job for people. The majority of the research wasn’t like an academic paper demonstrating expertise and pushing the envelope of vulnerability knowledge. Research became a numbers game. And, software vendors played the numbers game in return, claiming their products were more secure because no one was finding vulnerabilities in their software. Over the course of 10 years, security research came out of nowhere to become the way we judged the security of products.

I have to say once it became a numbers game, focusing security research on individual bugs to publish as advisories or to build detection signatures got a lot less interesting. Of course, having the knowledge of a zero day in a critical piece of software was still exciting, but publishing it (at least with coordinated disclosure) didn’t typically get you in the news or any recognition as an expert. There is still some great research going on, but it has moved away from focusing on yet another bug in a popular application to new classes of vulnerabilities in brand-new platforms such as the latest mobile devices.

Today, the type of research I do with my team at Veracode is to understand these new classes of vulnerabilities in software for emerging platforms such as mobile and cloud. We aren’t looking for individual vulnerabilities; those are just data to us. We are determining the patterns that cause these individual vulnerabilities. There are new languages, frameworks, and system APIs that need to be understood. There are new threat models. The myriad ways developers build software only seems to grow. It is hard to keep up.

Once we understand the platforms and the vulnerability patterns, we describe them to our engineering team members so they can build scans that can detect these vulnerabilities in our customers’ software before the software is deployed. To me, this is a wonderful way to leverage the security researcher’s skills and why security research is more important today than ever.

We need more research that can be applied to building software with fewer latent vulnerabilities. This entails making it impossible for developers to write software that contains vulnerable patterns, or at least being able to easily detect these patterns with automated testing. We also need more research into making the vulnerable patterns we cannot prevent or detect less exploitable. This can be done at the operating-system level with system protections such as ASLR, or at the application level with sandboxing.
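As a concrete illustration of the kind of vulnerable pattern an automated scan can flag, here is a minimal sketch (the table, data, and function names are hypothetical) contrasting string-built SQL, which is injectable, with a parameterized query, which binds input as data rather than SQL syntax:

```python
import sqlite3

# Hypothetical in-memory database with one user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_vulnerable(name):
    # Vulnerable pattern: attacker-controlled input is concatenated
    # into the query string, so it can be parsed as SQL syntax.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # Safe pattern: a parameterized query binds the input as data only.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # leaks every row
print(lookup_safe(payload))        # matches nothing
```

A scanner looking for the first pattern (query text built by concatenation) can flag it statically, without ever needing a working exploit; that is the pattern-level research the paragraph above describes.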

My motivations for research have changed over time. Yet, even over such a long period, I find security research is still a dynamic and thrilling pursuit. Without it, the IT industry would be in very bad shape. I encourage more people to get involved in security research and focus on the type of research that can help the software industry build increasingly secure products. There is at least another decade of fun and challenging work to do.

About the Authors

Chris Wysopal, Chief Technology Officer (CTO) and Co-Founder of Veracode, is responsible for the company's software security analysis capabilities. In 2008, he was named one of InfoWorld's Top 25 CTOs and one of the 100 most influential people in IT by eWeek.

One of the original vulnerability researchers and a member of L0pht Heavy Industries, Chris has testified on Capitol Hill in the U.S. on the subjects of government computer security and how vulnerabilities are discovered in software. He published his first advisory in 1996, on parameter tampering in Lotus Domino, and has been trying to help people not repeat this type of mistake for 15 years. He is also the author of "The Art of Software Security Testing," published by Addison-Wesley.
