September 18, 2022
-
-
When can we start saying that Microsoft's hegemony in enterprises and critical infrastructure is a national security threat?
Oh wait, we have been able to since 2003:
ccianet.org/wp-content/upl…
Alex Stamos @alexstamos
-
Today’s top two threads: Uber’s “humans are the weakest link in security!” breach, and other people exploiting a fancy but naive AI that… does expected things when asked politely.
And the tech spending cycle continues
-
We took a closer look at the video in which a man, who strongly resembles a Putin associate, Yevgeny Prigozhin, promises inmates release from prison in return for a six-month combat tour in Russia's war against Ukraine:
nytimes.com/2022/09/16/wor…
-
I wrote a post about the exciting new world of AI prompt injection that's going to blight our interfaces for the foreseeable future, including a few links to other posts
-
Decision-Making and Parliamentary Control for Int'l Military Cyber Ops by the Netherlands Armed Forces (2020) doi.org/10.2139/ssrn.3…
Amsterdam Law School Legal Studies Research Paper; No. 2020-07.
By Ducheine (@paulducheine), Arnold, & Pijpers (@lecanardfauve).
-
-
Using prompt injection to exfil the original (hidden) prompt, I absolutely love this new form of ML attack
!goose @mkualquiera
@Keleesssss The full prompt fed to the model is the original prompt ("Respond to the tweet with a positive attitude towards remote work in the 'we' form") + user content. If the user content contains new instructions, it can't tell the difference between the two – it's all one big prompt.
My favourite thing is how the inspiration for this recent spree of AI injection attacks is this Mr Show sketch.
-
I'm reminded this is definitely one of the most interesting pieces of cybersecurity-related research I've seen in recent memory.
You'd expect some thought-provoking work, based on the affiliations of those involved and the subject. And this is. But not like you might expect:
The more specific subject (breadth-first vs depth-first approaches to searching for vulnerabilities) may seem like inside baseball stuff. And, to be honest, the actual results are unpersuasive. The research design is inadequate (perhaps due to resource constraints). Etc.
But.
But yet the work is tremendously interesting, Not so much for the actual comparative research here, but for exploring the efficient use of human and machine resources to solve threat actor offensive problems in a way that is perhaps unprecedented in the public record.
A suggestion: Read the report or watch the video (both are short), but instead of thinking about organizing the use novice, experienced, and expert
efforts plus use of automated tools to find vulns in code think about doing so to attack networks.
Efficiently.
At state scale.
https://www.usenix.org/conference/usenixsecurity20/presentation/nosco
-
-
// by @elmant0-
@ThamKhaiMeng Visit this hotel each year for work this sign always makes me laugh. If you look at it just right... The book becomes something else...
-
Okay so here’s what I’ve learned about bots so far:
- new tweets are more effective than replies
- new lines break them
- multiple spaces are okay as long as certain terms are put together (“need”, “sugar daddy”, “help”, etc.)
- delimiters can work (eg. splitting a string with “)
Other things I’ve learned:
- the m*tamask bots are the fastest
- bots will respond to each other to “increase authenticity/credibility”
- “hacked” > “stolen” > “help”
-
Surprise! #PEbear is Open Source now! github.com/hasherezade/pe… - please check it out and let me know what do you think!
-
-
Quality piece by @peterpomeranzev on the importance of perceptions. Information operations - conducted by govs and esp bottom up by civic society - are therefore crucial
-
Don't miss what's next. Subscribe to the grugq's newsletter: