In the mid-1990s, I worked on a study for the United States secretary of defense that defined defensive information warfare, essentially what is now known as cybersecurity. Leading the effort was a brilliant person whose perspective added a great deal of value and ended up shaping my point of view on the field:
"We obviously need protection, but protection will inevitably fail. Therefore, while we still need to implement it, detection and reaction are just as important, if not more so."
From that point on, I've embraced this idea and embedded it in everything I do, because it's essential as a defender. At the same time, it's also critical to intelligent attackers. When I speak with friends who are former special forces and intelligence operatives about their previous professions, they always tell me about the lengths they went to in order not to get caught by an adversary. They always recognized that they could be caught if they slipped up, or simply because someone else happened to be in the wrong place at the right time, such as a guard sleeping on the job in a spot where no one would ever expect them. Frankly, this is how you know someone is good at offensive operations: they never think they are too smart to get caught. The bad spies always think they are too good to get caught.
Now, as I read stories of security executives getting burned out because of too many incidents and their frustrations over failed efforts, I admit to second-guessing their attitudes. To me, it means they don't understand security. Security is not about perfect protection; there is no such thing. Security is about protection, detection, and reaction. If you fail to realize that breaches are inevitable, you are dooming yourself and your organization to failure.
While you need to implement reasonable protection, design your cybersecurity program to expect failure despite the countermeasures. For example, you can have a great security awareness program, but you would be a fool to expect every user to behave perfectly. Be prepared to react to user behaviors and prevent harm proactively.
Consider that if a person is looking at a harmful email message, your technology has already failed to keep the dangerous message out of the organization in the first place. Even when a user clicks on a harmful email message, the user alone does not create the damage. The system has to grant the user privileges that allow them to download and install software. Then the system has to allow the malicious software to execute. It is not the user failing, but the entire system failing.
As I argue in my book, You Can Stop Stupid, there is no perfect security and no ideal protection. You therefore need to implement detection and reaction as a critical part of your cybersecurity program, not as an afterthought. Unfortunately, various studies indicate that organizations put the bulk of their cybersecurity budget into protection, with less than 20% combined going to detection and reaction. Remember, incidents are inevitable, and without the ability to detect and react to them, you are blind and at the whim of the incident and the attacker.
The industry is shocked by cybersecurity incidents and horrified when there's a significant one. In many ways, I equate that to being shocked to see sick people filling up doctors' offices. Frankly, I don't get horrified when protection fails; I get horrified when an incident goes undetected for months, or when the countermeasures to mitigate inevitable attacks were missing in the first place.
Now, as I take a step back and hear about cybersecurity professionals who are frustrated with dealing with constant incidents, I have to admit that I have little sympathy. They need to understand that handling incidents is as much a part of the cybersecurity professional's job as implementing a firewall. Lamenting the need to handle frequent cybersecurity incidents would be as bad as a doctor bemoaning sick patients. Frankly, a cybersecurity professional bemoaning security incidents is worse. A medical professional has little influence over the lifestyles or genetics of the people who walk through their door. Conversely, a cybersecurity team has, or at least should have, influence over their organization's security posture. At a minimum, they should know where their protections are likely to fail and what reactions and incident response (IR) playbooks to put in place.
If you are not on the cybersecurity team, you need to consider whether you are properly supporting it. Do you ignore recommendations from the CISO, or heed their advice for avoiding harm? Do you support the CISO's budget and resource requests? Are you making their job more difficult?
If you aren't heeding the advice of your CISO (though I'd question any CISO who bemoans the mere existence of incidents), I'd fully support them in leaving an organization that is choosing to ruin itself. You're making their job unnecessarily complicated, and they have good reason to leave their position.