April 26, 2026
Woah. So @e65537 found this same bug a few days before we did, reported it, got a fix landed, and published a full exploit writeup while we were still poking at the source tree.
— thaidn (@XorNinja) April 24, 2026
That patch turned out to be incomplete (it refreshes the stale pointer on the first grow() but loses… https://t.co/nn6YVID4Po
Finding Gadgets Like it’s 2026 - Stephen Breen @atredis https://t.co/4O6PxQaPVE
— Swissky (@pentest_swissky) April 25, 2026
Finding Gadgets Like it’s 2026 — Atredis Partners
openai built a model that HIDES personal data in text so nothing leaks
— chiefofautism (@chiefofautism) April 24, 2026
i flipped it INSIDE OUT
same 1.5B weights, same label taxonomy, but instead of masks you get structured spans: name, email, phone, bank account, address, secrets, char offsets and all
point it at logs,… pic.twitter.com/hXRE5Iy1D9
chiefautism/privacy-parser (327 stars, Python) Reverse of OpenAI Privacy Filter: same 1.5B model, returns PII as structured spans instead of masking.
source: chiefofautism (@chiefofautism)
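The tweet describes output as structured spans rather than masked text. A minimal sketch of what such span output might look like, using a toy regex extractor; the span schema, labels, and function names here are assumptions for illustration, not the repo's actual API:

```python
import re

# Hypothetical span schema: {label, start, end, text} with char offsets,
# mirroring the labels the tweet lists (name, email, phone, ...).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d -]{7,}\d"),
}

def extract_pii_spans(text):
    """Return PII as structured spans instead of masking it out."""
    spans = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            spans.append({"label": label, "start": m.start(),
                          "end": m.end(), "text": m.group()})
    return sorted(spans, key=lambda s: s["start"])

log_line = "contact alice@example.com or +1 555 123 4567"
for span in extract_pii_spans(log_line):
    print(span["label"], span["start"], span["end"], span["text"])
```

The point of the span shape (versus masking) is that downstream tools keep the original text and can redact, index, or audit selectively using the offsets.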
Some notes on the security properties of the pipe_buffer kernel object. @a13xp0p0v (me) posted an article describing multiple pipe_buffer features relevant for Linux kernel exploits that rely on this object. https://t.co/QkFFg5RU21
— Linux Kernel Security (@linkersec) April 24, 2026
Some notes on the security properties of the pipe_buffer kernel object | Alexander Popov
Many exploits of Linux kernel vulnerabilities use the pipe_buffer kernel object to build strong exploit primitives. When I was experimenting with my personal project kernel-hack-drill, I discovered some interesting properties of pipe_buffer, which may not be described in public articles (at least, I didn't find them). That's why I decided to write this short post and share my thoughts.
Humans tried to tame horses 5,500 years ago. It didn't work. Those horses eventually went feral, and we had to start over 1,300 years later with a different bloodline.
— Anish Moonka (@anishmoonka) April 25, 2026
A group in Kazakhstan called the Botai kept horses for milk and meat around 3500 BCE. A 2021 Nature study read… https://t.co/l0vhEfRHuz
The original intent of the research was to play out the thesis that obscure malware could be used as hard evals for frontier model capabilities. @vkamluk and @Gabeincognito ran an RE harness with access to tools, first autonomously, then with expert guidance. pic.twitter.com/rT2ro5um6W
— J. A. Guerrero-Saade (@juanandres_gs) April 24, 2026
Also true about AI and security: AI did not make the vulns in the "vulnapocalypse"; it just made them impossible to ignore. https://t.co/WAnC4YC63x
— Dave Aitel (@daveaitel) April 25, 2026
Mystery around Venezuelan cyberattack deepens with new discovery of "highly destructive" wiper. Hard-coded into the wiper was the domain for Venezuela's state-run oil company, suggesting the wiper may have been used in December's attack against the company https://t.co/v0gHlATx4w
— Kim Zetter (@KimZetter) April 24, 2026
Mystery Around Venezuelan Cyberattack Deepens, with New Discovery of "Highly Destructive" Wiper
The mystery around a cyberattack that struck Venezuela's state-owned oil company in December is growing, following an announcement by researchers this week that they had discovered a "highly destructive" wiper program that appears to have been designed to target the oil company and may have been used in the December
Hamming's talk is so important that I reproduced it on my site. It's one of the only things on my site written by someone else. https://t.co/kWvKdwIiOm https://t.co/FlIUz1PzkS
— Paul Graham (@paulg) April 25, 2026
Anthropic’s Mythos raised the bar for AI vuln detection but kept it invite-only.
— XBOW (@Xbow) April 23, 2026
GPT-5.5 is OpenAI’s answer, and it’s open to all.
We had early access. Ran the benchmarks. Blackbox GPT-5.5 already beats whitebox GPT-5.
Best pentesting model we’ve tested.
Read our analysis:… pic.twitter.com/wIslddnGSx
XBOW - GPT-5.5: Mythos-Like Hacking, Open To All
Over the last couple of weeks, we’ve been part of a select group that had early access. We’ve been testing it across our benchmarks and workflows, and we’re sharing what we’ve observed in practice. Here’s our take on 5.5 and how it performed for our offensive security capabilities.
Singapore's Foreign Minister published the architecture for his "second brain for a diplomat" yesterday. Architecture diagrams, design rationale, the works. A developer-style writeup of his own system.
— Gavriel Cohen (@Gavriel_Cohen) April 25, 2026
It runs on a Raspberry Pi. It connects to his WhatsApp and Gmail, transcribes… https://t.co/m1XJKDFzy2
qwibitai/nanoclaw (28,083 stars, TypeScript) A lightweight alternative to OpenClaw that runs in containers for security. Connects to WhatsApp, Telegram, Slack, Discord, Gmail and other messaging apps, has memory, scheduled jobs, and runs directly on Anthropic's Agents SDK
source: Gavriel Cohen (@Gavriel_Cohen)
mnemon-dev/mnemon (65 stars, Go) LLM-supervised persistent memory for AI agents — graph-based recall, cross-session knowledge, single binary. Works with Claude Code, OpenClaw, and any CLI agent.
source: Gavriel Cohen (@Gavriel_Cohen)
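The "graph-based recall" idea in mnemon's description can be sketched generically: store facts as nodes, link related ones, and recall by expanding outward from a query hit. Everything below (class, method names, structure) is a toy illustration of the concept, not mnemon's actual design:

```python
from collections import defaultdict

class GraphMemory:
    """Toy graph memory: nodes hold facts, edges link related facts,
    and recall walks outward from any matching node."""

    def __init__(self):
        self.facts = {}                # node id -> fact text
        self.edges = defaultdict(set)  # node id -> linked node ids

    def remember(self, node_id, fact, related=()):
        self.facts[node_id] = fact
        for other in related:
            self.edges[node_id].add(other)
            self.edges[other].add(node_id)

    def recall(self, keyword, hops=1):
        # Seed with nodes whose fact mentions the keyword, then pull in
        # neighbors so related knowledge surfaces with the direct hit.
        hits = {n for n, f in self.facts.items() if keyword in f}
        for _ in range(hops):
            hits |= {m for n in hits for m in self.edges[n]}
        return sorted(self.facts[n] for n in hits)

mem = GraphMemory()
mem.remember("proj", "project X ships in May")
mem.remember("owner", "Alice owns project X", related=["proj"])
print(mem.recall("Alice"))
```

The neighbor expansion is what distinguishes this from plain keyword search: a query for "Alice" also surfaces the linked ship-date fact, which is the cross-session-knowledge behavior the repo description advertises.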
onecli/onecli (1,987 stars, TypeScript) Open-source credential vault, give your AI agents access to services without exposing keys.
source: Gavriel Cohen (@Gavriel_Cohen)
NanoClaw — Personal Claude Assistant (second brain for a diplomat) · GitHub
NanoClaw — Personal Claude Assistant (second brain for a diplomat) - VB-NANOCLAW-MEMORY-OBSI-WIKI-PUBLIC.md
source: Gavriel Cohen (@Gavriel_Cohen)
"An exceptionally sophisticated, AI-accelerated attack"? No: it all started with downloading Roblox cheats that came infected with Lumma Stealer. That's how the Vercel breach went down 🔑
— Juan Brodersen (@juanbrodersen) April 24, 2026
Also: there are reports of a ransom payment for the stolen info.
Dark News #196 📧 pic.twitter.com/X1vJmFTNP4
Holy shit this is a tour de force! https://t.co/2zVyFh8DOX In 3 different areas: exploiting (forging) a ZKP, partially reverse-engineering the secret Google quantum cryptanalysis algorithm, and an important perspective on disclosure norms in infosec and science. Hats off @inf_0_
— zooko🛡🦓🦓🦓 ⓩ (@zooko) April 24, 2026
We beat Google’s zero-knowledge proof of quantum cryptanalysis - The Trail of Bits Blog
Trail of Bits discovered and exploited memory safety and logic vulnerabilities in Google’s Rust zero-knowledge proof code to forge a proof claiming better quantum circuit performance metrics than Google’s original results, demonstrating unique security risks in zkVM systems.
fuck it, authorization letter generator
— ultra (@0x_ultra) April 24, 2026
Claude does not push back when reading one of these lol https://t.co/szZ4vQDl08 https://t.co/CcW9qHQGUp pic.twitter.com/D7o1L0qqky
Authorized — Letters of authorization, drafted in seconds
Generate professionally-formatted letters of authorization for any task, delegation, or undertaking. Reference-numbered, signature-ready, downloadable as a PDF.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6617059
Polymarket prices are highly accurate in predicting future events. The source of that accuracy is less obvious.
— Roberto Gomez Cram (@rgomezcram) April 24, 2026
In a new working paper, we find it is not the “wisdom of crowds,” but a small minority of informed traders.
Fewer than 3% of accounts appear to drive price discovery;… pic.twitter.com/sCUK7pmXeX
I recently got access to OpenAI’s Trusted Access for Cyber program.
— Ron Masas (@RonMasas) April 24, 2026
With all the GPT-5.5 hype and the Anthropic Mythos discussion, I wanted to test it for myself.
The result: **GPT-5.5** helped identify and develop a working Safari exploit affecting all Apple devices.
It found… pic.twitter.com/0k56D9RflO
Proposal: if you publish about an LLM finding vulns, please publish precise costs. Given the different levels of competence, verbosity, etc. per model, knowing token counts and cost per token is essential.
— Halvar Flake (@halvarflake) April 25, 2026
https://blog.calif.io/p/mad-bugs-rce-in-ladybird
MAD Bugs: RCE in Ladybird
— thaidn (@XorNinja) April 24, 2026
Blog: https://t.co/NNpFMo57LR
PoC: https://t.co/lxd1xplXSy pic.twitter.com/DRj94cSrQw
publications/MADBugs/ladybird at main · califio/publications · GitHub
Publications from Calif. Contribute to califio/publications development by creating an account on GitHub.
califio/publications (339 stars, C) Publications from Calif
source: thaidn (@XorNinja)
Xin Zhang created a way to do solid interrupt side-channel attacks against all Apple silicon. One attack avenue was fingerprinting websites users visited.
— Daniel Cuthbert (@dcuthbert) April 24, 2026
They disclosed. Apple said their paper was intriguing, but out of scope
Typical Apple. Great research Xin. pic.twitter.com/V6BG9xPPgG