I refuse to bow to our AI overlords
There’s been a lot of commentary by security nerds about how ChatGPT et al. (GPT chat services) are cause for concern and are raising the threat level for everyone. I disagree, and I believe this reactionary pattern of assuming that any new technological capability should raise defense conditions is rooted in ignorance, arrogance, or predation. Same old INFOSEC, just a different day.
For the sake of level-setting, here are some generic concepts I am assuming. Obviously the real world is grey and everyone models things differently, but I think these are basic enough that discussion can happen without getting bogged down in pedantry and semantics.
The primary elements that constitute a threat are:
Reason or motive
Opportunity
Capability or means
The primary elements that constitute risk posed by a particular threat are:
Likelihood of occurrence of the threat event
Severity of impact caused by the threat event
As one of the threat elements increases, so does one or both of the risk elements.
As either risk element increases, so does the risk level posed by the particular threat.
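The relationships above can be sketched as a toy scoring function. To be clear, this is my own illustrative sketch, not a standard: the 1–5 scale, the choice to let the weakest threat element cap likelihood, and the multiplicative combination of likelihood and severity are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    # Each element scored 1 (low) to 5 (high); the scale is arbitrary.
    motive: int       # reason or motive
    opportunity: int
    capability: int   # capability or means

def likelihood(t: Threat) -> int:
    # Assumption: the weakest element caps likelihood, since a threat
    # actor needs all three elements to actually act.
    return min(t.motive, t.opportunity, t.capability)

def risk(t: Threat, severity: int) -> int:
    # Risk level as likelihood of occurrence times severity of impact.
    return likelihood(t) * severity

# A hypothetical actor: motivated and capable, but with little opportunity.
actor = Threat(motive=4, opportunity=1, capability=5)
print(risk(actor, severity=3))  # 1 * 3 = 3
```

Under this sketch, raising any single threat element never lowers the risk level, which matches the statements above: as a threat element increases, so does one or both of the risk elements, and with them the overall risk.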
Let’s start with motive. Unless your organization is directly related to the development, operation, procurement, or acquisition of a particular GPT chat service, I have yet to hear any reasonable explanation of how a particular threat actor’s motive to perform some threat event against a given organization will increase due to the availability of GPT chat services. If an organization is involved in the development, operation, procurement, or acquisition of a particular GPT chat service, its adversaries’ motive is increased in a way that is no different from that faced by any organization involved with a new technology of value.
Next we have opportunity. I don’t see how GPT chat services increase the opportunities for threat actors any more than capable OSINT can. If a threat actor is looking for opportunities to attack a particular organization, those opportunities exist, with or without the availability of GPT chat services. The ability to discover and capitalize on those opportunities comes down to the threat actors’ capabilities.
Capability or means seems to be where the majority of the punditry is focused. GPT chat services are not increasing adversarial capabilities any more than Google, Shodan, Stack Overflow, or the CTI industry’s favorite boogeyman: *spooky voice* dark net forums. As @thegrugq@infosec.exchange said on 09 January 2023,
“CheckPoint are saying some dumb shit about how ChatGPT will super charge cyber criminals who suck at coding. (See linked post: https://infosec.exchange/@dannyjpalmer/109659661412633830)
This isn’t a legitimate concern because most of the day to day operations of cyber criminals is drudgery that can’t be handled just by asking chatGPT.
They have to register domains and maintain infrastructure. They need to update websites with new content and test that software which barely works continues to barely work on a slightly different platform. They need to monitor their infrastructure for health, and check what is happening in the news to make sure their campaign isn’t in an article about “top 5 most embarrassing phishing phails”
Actually getting malware and using it is a small part of the shit work that goes into being a bottom feeder cyber criminal.
That post^W toot really sums it up well. Any potential improvements to poorly written malware that GPT chat services can provide can already be had by simply asking other people on the Internet for help. Or outsourcing. Or acquiring. Or using existing similar services like GitHub Copilot. This is where my accusation of ignorance comes in. A lot of people with the word “Security” in their title don’t understand what it takes to operate and maintain highly-capable adversarial organizations. That’s expected. But the idea that GPT chat services will level up potential threat actors any better than YouTube videos with questionable music doesn’t seem rooted in reality.
There is one caveat to this that I will begrudgingly allow. I have heard people say that GPT chat services have lowered the barrier to entry for threat actors targeting SMBs that have historically considered themselves non-targets of state-adjacent threat actors. There still seems to be a belief that they should be concerned with high-capability TTPs and that their traditionally low-capability adversaries will now be unstoppable. The truth is that we are still a long way from getting the basics of security right, especially in the SMB world. They should first focus on the fundamentals before concerning themselves with the cool kid hacks. That said, organizations of all sizes should have had their eyes opened over the past 5–10 years to see that they can still be compromised whether they are targeted or not, as evidenced by events such as WannaCry and NotPetya, as well as the continued spread of opportunistic ransomware. All that said, if GPT chat services are what it takes to get more organizations to understand their potential as a target and start taking fundamental steps toward securing their assets, fine. Just minimize the FUD where possible.
If your threat models include the term “script kiddie” or any variation thereof, my accusation of arrogance is aimed directly at you. There seems to be significant overlap between the people who assume their adversaries are so incapable that they deserve ridicule and the people who are crying that GPT chat services are going to turn those adversaries into uber 1337 h4x0Rz. Grow up. Not only have we all seen that kind of hubris lead to embarrassing security incidents, it shows a lack of awareness to presume low adversarial capabilities when every potential threat actor has access to the Internet here in the year 2023. Stop building your threat models poorly and then using inevitable changes in technology as the excuse for your poor security posture.
Then we have the predators of this industry. We’ve all seen it, and I expect we’ve all come to expect it every time something new comes along. Chicken Little FUD slingers come out of the shadows as quickly as ambulance chasers to convince you that the sky is falling but, don’t worry, they can save you. Politicians, law enforcement, vendors, consultants, even individual contributors within organizations who see new known unknowns as their opportunity to profit in one way or another are everywhere. An entire book could probably already be written on this tactic as it relates to GPT chat services.
In summary, GPT chat services are definitely a disruptive technology. There will be a lot of interesting use cases for them that will touch most people in various ways. However, if they have caused you to modify your threat models in a significant manner, it’s probably time to revisit how you model your threats. And that is always a good result.
[Original: https://infosec.exchange/@cR0w/109678042902426197 ]