The human brain is incredibly complex and powerful, but as with everything powerful, it comes with its own set of limitations. As humans, we’re not always as rational in our decision making as we’d hope to be. It’s unreasonable to think InfoSec professionals are an exception to this rule.
Cognitive biases can often lead to inaccurate assessments of true cybersecurity risks. Companies need to take human perceptions into consideration, rather than point fingers at human errors, so they can truly mitigate cybersecurity threats.
To further discuss the biases sometimes found among information security pros, and how we can use that knowledge to motivate more secure decision-making, we spoke with industry veteran Kelly Shortridge.
With a background in finance, Kelly is known for bringing her knowledge of behavioral economics into information security and highlighting the human aspect of the industry. Kelly is currently the VP of Product Strategy at Capsule8. She has experience in the industry not only as a product strategist but also as a founder, and she is a frequent presence at security conferences.
We recently caught up with her right after her Black Hat talk on bridging the gap between DevOps and security pros. While strolling the charming cobblestone streets of New York City’s West Village, surrounded by 19th-century townhouses, we spoke with Kelly about her latest research on accepting the irrationality of cognitive biases among security professionals, DevSecOps, and future trends in AI. We even got a chance to play a word game!
SecurityTrails: Were you born and raised here in NY? What do you enjoy most about the city?
Kelly Shortridge: I was born elsewhere, but I consider NYC my true home. The city is constantly in a state of flux — it is its own form of controlled chaos. I love the arts here, from the symphony or ballet at Lincoln Center, to the galleries of paintings dotting the Lower East Side like constellations. There is also a comforting sense of anonymity in NYC, since no one cares who you are or what you are doing, unless you are blocking their trajectory on the sidewalk.
You graduated with a B.A. in Economics and started your career in banking. What about security drew you away from finance?
Kelly: I really liked investment banking, but aside from the vagaries between individual deals, it consists largely of solved problems with established processes. As I delved deeper into information security as part of my market analysis, I realized how unbelievably broken infosec was, and how many incentive problems pervaded. Economics is ultimately the study of choice, so the messiness of choices in infosec made my brain lock onto these problems like a missile guidance system.
While in finance, you worked as a mergers and acquisitions analyst. Those deals can get quite messy in terms of information security, both during and after the deal. Was that front-row seat to security in M&As what tipped you off that you might want to change industries?
Kelly: Surprisingly, it was not. At the time (only a few years ago), information security concerns were not discussed much in M&A deals. I am still unsure how much they should matter in M&A, outside of buyers looking to get a better deal in their acquisition and using a breach as leverage, as Verizon did with its acquisition of Yahoo.
You managed to get into information security quite quickly, even with no technical background in terms of education. What tips would you give people wanting to switch careers?
Kelly: The benefit of a liberal arts education is it teaches you how to think critically — how to digest and synthesize information — and I credit that foundation, at least in part, with how I was able to get up to speed so rapidly. I also prioritized networking within the industry and meeting a diverse range of people within information security, though I learned the most from vulnerability researchers in my early days of covering information security as an i-banker. In fact, the amount I was able to learn just by listening to and asking the right questions of vulnerability researchers is precisely why I espouse that understanding “attacker math” — how attackers architect and conduct their operations — is essential for defensive strategy.
My general piece of advice for anyone looking to enter information security is act as a sponge and absorb all you can. Some people may be condescending toward you, which, to be clear, is not cool, but it is worthwhile to be honest about the fact that you are eager to learn and admit when you do not know something — because so many people are excited to share their thoughts and their work with those who are genuinely interested in learning. It is something I still practice today, because it is invaluable in leveling-up your abilities. I like the mantra of “iron sharpens iron,” and I believe it is difficult to accumulate and refine knowledge in a self-contained vacuum.
Understanding why humans make their supposedly silly decisions is essential if we want to craft solutions that encourage better decision-making within infosec.
We can often see your research in the application of behavioral economics models to infosec. What can information security learn from behavioral economics?
Kelly: Enumerating everything information security can learn from behavioral economics could probably fill an entire book! (Maybe I should write it.) Behavioral economics uses empirical evidence to understand how humans make choices. Information security involves humans, and ideally we want those humans to make more secure choices. Therefore, findings from behavioral economics can inform how we design, build, and maintain our systems and processes, to encourage more secure decision-making. But these insights do not just apply to how we architect infrastructure — they also extend to how we communicate security risk, how we craft UX and user workflows, how we handle humans who make security mistakes, and more.
To reference your Hacktivity talk¹ on cognitive biases among defenders and attackers: what are the main biases that influence red and blue team decision-making?
Kelly: My thinking is continually evolving on this topic, but risk and loss aversion, status quo bias, and time inconsistency all strike me as the most obviously influential to red and blue team dynamics. Prospect Theory, which observes risk aversion, risk seeking, and loss aversion in how people actually make decisions, is a concept I wish more people in information security understood. Relatedly, framing effects — how the interpretation of information differs depending on how it is communicated — are extremely underexplored in the context of information security, too.
I like to think I helped, at least in part, to seed the growing chorus of infosec professionals arguing that you should not blame users for “irrational” and insecure behavior. Likewise, I do not necessarily blame infosec professionals who succumb to their own “irrationalities” and cognitive biases as they conduct their work. We need to focus on pragmatic solutions that accept this “irrationality,” rather than participate in vain blame games. Understanding why humans make their supposedly silly decisions is essential if we want to craft solutions that encourage better decision-making within infosec.
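The Prospect Theory mentioned above can be made concrete with Kahneman and Tversky’s value function: gains show diminishing sensitivity, while losses loom larger than equivalent gains. This is a minimal sketch using the commonly cited parameter estimates (α = β = 0.88, λ = 2.25) purely for illustration:

```python
# Prospect Theory value function (Tversky & Kahneman, 1992).
# Parameters are the commonly cited estimates, used here only to illustrate.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of an outcome x relative to a reference point."""
    if x >= 0:
        return x ** alpha              # diminishing sensitivity to gains
    return -lam * ((-x) ** beta)       # losses weighted ~2.25x more heavily

# A $100 loss feels far worse than a $100 gain feels good:
gain = value(100)     # ≈ 57.5
loss = value(-100)    # ≈ -129.5
assert abs(loss) > 2 * gain
```

The asymmetry is one hypothesis for why “avoid a breach” framings land differently than “gain resilience” framings, even when the underlying risk numbers are identical.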
And what are the common assumptions of attackers you see in infosec that need to be reassessed?
Kelly: I think threat modeling is still somewhat immature as a discipline. As an industry, I do not think we have fully digested the idea that attackers will generally attempt to minimize resource expenditure and maximize reliability. Until the easy, low-cost paths for attackers to reach their goals are eliminated by security controls and mitigations, there is no reason for blue teams to consider elaborate defensive measures. We have seen time and again that advanced attackers will use the same tactics as a script kiddie if those tactics are likely to succeed. No one is going to drop 0day unless they are out of lower-cost, reliable options.
In your new talk last week at Black Hat, you talked about molding DevOps and infosec together. People have different definitions of DevSecOps, but how would you define it?
Kelly: I think DevSecOps is a redundant term, primarily pushed by security vendors to sell an aspirational vision of modernity to security professionals. Some security professionals now cling to the term because it helps them feel as important as DevOps, which is an understandable desire — no one likes feeling irrelevant. But, DevOps naturally should include security, just as it includes testing, QA, and operations. Unless we decide to adopt a Frankenstein’s monster term like “DevTestQASecOps,” using a term like DevSecOps orients the conversation away from how infosec must embrace DevOps, and instead beguiles security people into thinking if they automate a few vulnerability scans, they successfully did the DevOps.
Fundamentally, the term DevSecOps erodes the implication of cultural change being necessary for infosec to succeed in a DevOps world, where it can no longer insert itself as an unyielding gatekeeper — and I think that delusion is dangerous.
What are the main cultural changes DevOps and infosec teams need to adopt in order to have this perfect marriage and integration of security into every development stage?
Kelly: There are a lot of cultural changes required, but there are two in particular that I think are essential. Security teams need to shift from a “no” mindset to a “no, and…” mindset. It is not useful to shoot things down if you do not propose alternatives, but infosec is largely averse to compromise due to the drive for what I call “security purity.” Security purity is the almost moralistic sense among some security professionals that any risk acceptance is unacceptable, and that anything but full faith in security’s paramount importance warrants punishment and scorn.
Ensuring the security team’s activities are business enablers is a necessary cultural shift that quite a few industry veterans will find painful.
Infosec also needs to focus on activities that improve organizational outcomes, not meaningless outputs that exhibit proof of work but not proof of successfully preserving the organization’s ongoing fortunes in the digital realm. Can you tie your security policies to a positive impact on software delivery performance? Can you demonstrate how your fancy spider chart generated by a security vendor or benchmark actually furthers the organization’s business priorities and is not just busywork? Ensuring the security team’s activities are business enablers is a necessary cultural shift that quite a few industry veterans will find painful.
There is definitely a communication gap between security teams and DevOps (and other IT teams) in organizations. What can one learn from the other, so they can bridge that gap?
Kelly: I believe DevOps would benefit enormously from understanding “attacker math” — being cognizant of the way attackers strategize their operations. At Black Hat 2017, I spoke about decision trees, which show the potential paths attackers can take to reach their goal, and how creating decision trees in the design phase for software visualizes the lowest cost path attackers might take when attacking an application or service. Knowing that lowest cost or lowest risk path helps inform what design mitigations or security controls are needed early on in the development lifecycle. It also helps keep teams honest about the highest likelihood threats — because optimizing security controls and mitigations for unrealistic threats, before covering the “basics,” is a widespread problem in infosec.
Of course, few security teams even think in decision trees, so infosec has work to do on that front, too. Generally speaking, I think information security has far more to learn from DevOps than the other way around, because infosec is historically resistant to gleaning insights from external sources. Every time I speak to people in the realm of DevOps, they are refreshingly eager to learn about security and genuinely want to build systems more securely — they just feel overwhelmed and lost based on how bewilderingly infosec knowledge is presented.
In contrast, people in the infosec realm tend to adopt a defensive stance when I broach the topic of DevOps. They will define what the concept means as they please and elaborate on its perceived downsides at length, rather than exhibiting curiosity. Because of this, I often feel there is more hope in teaching DevOps about security, than teaching security how to play nice in the world of DevOps, though I am still striving to do my best on both fronts.
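The decision trees described above can be modeled as a graph whose edges carry an estimated attacker cost; finding the lowest-cost path to the attacker’s goal is then an ordinary shortest-path search. This is a minimal sketch with hypothetical steps and costs, not a real threat model:

```python
import heapq

# Hypothetical attack graph: edges are (next_step, estimated attacker cost).
# Steps and costs are illustrative only; real ones come from threat modeling.
attack_graph = {
    "start":            [("phish_employee", 2), ("exploit_vpn_0day", 9)],
    "phish_employee":   [("harvest_creds", 1)],
    "exploit_vpn_0day": [("internal_access", 0)],
    "harvest_creds":    [("internal_access", 1)],
    "internal_access":  [("exfiltrate_data", 2)],
    "exfiltrate_data":  [],
}

def cheapest_attack_path(graph, start, goal):
    """Dijkstra over the attack graph: the path a cost-minimizing attacker prefers."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge_cost in graph[node]:
            if nxt not in seen:
                heapq.heappush(frontier, (cost + edge_cost, nxt, path + [nxt]))
    return None

cost, path = cheapest_attack_path(attack_graph, "start", "exfiltrate_data")
# Phishing (total cost 6) beats burning a VPN 0day (total cost 11) --
# attackers avoid dropping 0day when cheaper, reliable options exist.
```

Building such a tree in the design phase makes the lowest-cost path explicit, which in turn shows which mitigation raises the attacker’s cost the most.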
In the blog post² you wrote for TechCrunch, you argued that AI in cybersecurity often falls short. What are the good and the bad of AI and ML in cyber?
Kelly: Unfortunately, at present, I think AI and ML in infosec is mostly in the “bad” territory. I feel genuine pity for the infosec professionals bamboozled by honeyed words of how algorithmic magic would revolutionize their security programs. There is limited honest discussion about the merits of statistical approaches to security problems, with most vendors attempting to handwave concerns away, rather than address them so that AI/ML technology can be adopted with realistic expectations. The aftermath is a lot of wasted time and budget — a lamentable result given how behind the curve defenders already tend to feel.
What do you think are going to be some future trends with AI and ML?
Kelly: My hope is that observability becomes a trend with AI and ML — can you understand the system’s decision making process? Do you have the ability to inspect the model’s weights? Another trend I would love to see is customers becoming more skeptical regarding AI and ML claims from security vendors and having the ability to fully evaluate false-positive rates and efficacy relative to thoughtfully created rulesets. Overall, I suspect attackers will derive more value from AI and ML before defenders learn how to more efficiently unlock value from it.
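Evaluating an ML detector against a thoughtfully created ruleset, as suggested above, comes down to basic confusion-matrix arithmetic. A minimal sketch, with made-up labels and verdicts purely for illustration:

```python
def confusion_rates(y_true, y_pred):
    """False-positive and true-positive rates from binary labels (1 = malicious)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fpr = fp / (fp + tn) if fp + tn else 0.0
    tpr = tp / (tp + fn) if tp + fn else 0.0
    return fpr, tpr

# Hypothetical ground truth and the verdicts of two detectors:
labels   = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
ml_model = [1, 1, 0, 1, 1, 0, 0, 0, 0, 0]  # catches 2/3, with 2 false alarms
ruleset  = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # catches 2/3, with no false alarms

# In this toy case the "AI" detects no more threats than the ruleset,
# at the cost of a ~29% false-positive rate.
```

Asking a vendor for exactly these numbers, measured on your own traffic against your own rules, is one practical way to pierce the algorithmic-magic marketing.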
You had this great post about InfoSec buzzwords. I want to play a game with you: give me one sentence using mainly industry buzzwords!
Kelly: Real-time cyber AI automates simplified intelligence through continuous, unknown prioritization without contextual insights.
You can find Kelly on Twitter to follow all her latest research, and on her blog to get a better glimpse into her thoughts on information security (you can also find some rather interesting reading lists).
We continue to bring industry leaders to our interview series to share knowledge about important topics in information security, but we want to know who you’d like to see next. Let us know your suggestions at [email protected] and follow our blog for all the latest additions to the interview series.
¹ https://www.youtube.com/watch?v=UdZDlt2dlqM ² https://techcrunch.com/2019/04/10/translating-ai-in-security