November 15, 2023
https://lock.cmpxchg8b.com/reptar.html
https://torrentfreak.com/some-pirate-sites-received-more-visitors-after-being-blocked-231027/
https://torrentfreak.com/russia-blocks-167-vpns-steps-up-openvpn-wireguard-disruption-231031/
Almost 170 VPN services are being blocked in Russia - Russia || Interfax Russia
October 25. Interfax-Russia.ru - Almost 170 VPNs and more than 200 email services have been blocked in Russia as part of counteract... read more at "Interfax-Russia"
Blocking WireGuard is interesting. I wonder if any of the Tor obfuscation techniques could be used with WireGuard? It’s UDP rather than TCP, so I think many (all?) of the existing tricks will fail.
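For context on why the TCP-oriented tricks don’t transfer directly: Tor’s pluggable transports (obfs4, meek, etc.) obfuscate a TCP byte stream, while WireGuard speaks UDP. The usual workaround is to frame each datagram onto a TCP connection, which an obfuscation layer could then carry. Here is a minimal sketch of that framing idea, uplink direction only; the addresses and ports are placeholders and this is not any existing tool.

```c
/* Hedged sketch: frame UDP datagrams (e.g. from a local WireGuard peer) onto a
 * TCP stream with a 2-byte length prefix. Placeholder addresses/ports. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Local UDP socket that WireGuard would use as its Endpoint (127.0.0.1:51821). */
    int usock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in uaddr = { .sin_family = AF_INET,
                                 .sin_port = htons(51821),
                                 .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
    bind(usock, (struct sockaddr *)&uaddr, sizeof(uaddr));

    /* TCP connection to the remote unwrapper (which an obfs transport could carry). */
    int tsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in taddr = { .sin_family = AF_INET,
                                 .sin_port = htons(443),
                                 .sin_addr.s_addr = htonl(INADDR_LOOPBACK) };
    if (connect(tsock, (struct sockaddr *)&taddr, sizeof(taddr)) != 0) {
        perror("connect");
        return 1;
    }

    /* Read each datagram, prepend its length, write it to the stream. */
    unsigned char buf[65536 + 2];
    for (;;) {
        ssize_t n = recv(usock, buf + 2, sizeof(buf) - 2, 0);
        if (n < 0)
            break;
        uint16_t len = htons((uint16_t)n);
        memcpy(buf, &len, 2);
        if (write(tsock, buf, (size_t)n + 2) < 0)
            break;
    }
    close(usock);
    close(tsock);
    return 0;
}
```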
#CacheWarp: a new software-based fault attack on AMD EPYC CPUs. It allows attackers to hijack control flow, break into encrypted VMs and perform privilege escalation inside the VM within minutes. pic.twitter.com/roxiD8Ioph
— Ruiyi Zhang (@Rayiizzz) November 14, 2023
Scoop: a16z is the money behind CivitAI, an AI platform that we've repeatedly shown is the engine for nonconsensual AI porn. We also revealed the site is offering "bounties" for AI models of specific people, including ordinary citizens https://t.co/yVdut41K5y pic.twitter.com/WsIUyEUAVE
— Joseph Cox (@josephfcox) November 14, 2023
New write-up on an Intel Ice Lake CPU vulnerability: we can effectively corrupt the RoB with redundant prefixes! 🔥 An updated microcode is available today for all affected products; cloud providers should patch ASAP. https://t.co/7fPo45iddV
— Tavis Ormandy (@taviso) November 14, 2023
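For those curious what a “redundant prefix” looks like at the machine-code level: the write-up linked above describes the trigger as `rep movsb` carrying a REX prefix that has no meaning for that instruction. The sketch below is an assumption based on that description, not the published reproducer; it just encodes such a sequence with inline assembly, and on correctly behaving or patched CPUs the extra prefix is simply ignored.

```c
/* Assumption: `rep movsb` with a redundant REX.R prefix (REX.R has no effect
 * on movsb, which takes no ModRM operands). Shown only to illustrate the
 * encoding, not as a working exploit. Build with GCC/Clang on x86-64. */
#include <string.h>

static unsigned char src[64], dst[64];

void redundant_prefix_movsb(void)
{
    unsigned char *s = src, *d = dst;
    unsigned long n = sizeof(src);

    asm volatile(".byte 0xf3, 0x44, 0xa4"   /* rep ; rex.r ; movsb */
                 : "+S"(s), "+D"(d), "+c"(n)
                 :
                 : "memory");
}
```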
This is so welcome and so heartening, and it wouldn’t have happened without the hard work and persistent voices of the technical expert and human rights community. Thank you! This really matters🙏 ❤️https://t.co/9KpHlfVxz7
— Meredith Whittaker (@mer__edith) November 14, 2023
I've released a short blog on IIS malware: https://t.co/KP6Ba5MajL
— John (@BitsOfBinary) November 14, 2023
In it, I give an overview of some IIS malware research that has been done already, present a case study into a custom backdoor I found earlier in the year, and release some tooling to help analyse IIS modules!
That was fun. I bypassed a @OpenAI ChatGPT /mnt/data restriction via a symlink, downloaded envs, Jupyter kernels' keys, and some source code from there. Reported via @Bugcrowd and got not applicable! Now this issue is fixed (in like an hour after my report). Is it how it… pic.twitter.com/TfMoUJYKUb
— Ivan at Wallarm / API security solution (@d0znpp) November 14, 2023
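The general trick being described is simple: if a download endpoint only agrees to serve paths under /mnt/data but follows symlinks, a link created inside that directory can point anywhere else in the sandbox filesystem. A generic illustration (the target and link paths are placeholders, not the ones from the report):

```c
/* Generic illustration of the symlink idea described above; paths are
 * placeholders. A downloader restricted to /mnt/data that follows symlinks
 * would then serve the rest of the sandbox filesystem through this link. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (symlink("/", "/mnt/data/rootfs") != 0)
        perror("symlink");
    return 0;
}
```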
Thought of an even easier way to do this: have the LD_PRELOAD raise RLIMIT_NOFILE to the max, then open /dev/null 1024 times. That way all fds used by the process will have to be numbered higher than 1024. https://t.co/a9rmSKnDra pic.twitter.com/wx23trLgwu
— Brendan Dolan-Gavitt (@moyix) November 12, 2023
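A rough sketch of what that preload object might look like (the file and function names are mine, not from the thread): a constructor raises the soft RLIMIT_NOFILE to the hard limit, then burns the low fd numbers on /dev/null so anything the target opens afterwards lands above 1024.

```c
/* Sketch of the preload described above (names are mine).
 * Build:  cc -shared -fPIC -o burnfds.so burnfds.c
 * Use:    LD_PRELOAD=./burnfds.so ./target */
#include <fcntl.h>
#include <sys/resource.h>

__attribute__((constructor))
static void burn_low_fds(void)
{
    struct rlimit rl;

    /* Raise the soft fd limit to the hard limit so there is room above 1024. */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;
        setrlimit(RLIMIT_NOFILE, &rl);
    }

    /* Occupy the low fd numbers with /dev/null; every fd the program opens
     * afterwards is forced above 1024 (e.g. past FD_SETSIZE for select()). */
    for (int i = 0; i < 1024; i++)
        open("/dev/null", O_RDONLY);
}
```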
Will still try to do a blog post on my @CSAW_NYUTandon CTF challenge, NERV Center, but for now here's a thread explaining the key mechanics. I put a lot of work into the aesthetics, like this easter egg credit sequence (all ANSI colors+unicode text) that contains key hints: pic.twitter.com/snJGAt8JDd
— Brendan Dolan-Gavitt (@moyix) November 11, 2023
A funnier bug class is setting RLIMIT_NOFILE really low, then seeing what suid programs don't handle open() failing 😆
— Tavis Ormandy (@taviso) November 14, 2023
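And a sketch of the inverse experiment Tavis describes (the harness and names are mine): clamp RLIMIT_NOFILE before exec'ing a setuid target, since resource limits survive execve, and watch how the program copes when every subsequent open() fails with EMFILE.

```c
/* Hypothetical test harness for the bug class described above: run a target
 * with RLIMIT_NOFILE set so low that any open() after stdin/stdout/stderr fails. */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s /path/to/suid-program [args...]\n", argv[0]);
        return 1;
    }

    /* Keep fds 0-2, but make any further open() fail with EMFILE. */
    struct rlimit rl = { .rlim_cur = 3, .rlim_max = 3 };
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");

    execv(argv[1], argv + 1);   /* rlimits persist across execve, even into suid programs */
    perror("execv");
    return 1;
}
```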
🚨 Insight from Unusual Script in Dagon Locker Ransomware Case
— The DFIR Report (@TheDFIRReport) November 14, 2023
🧩 We've analyzed an interesting PowerShell script that threat actors used during a Dagon Locker Ransomware case.
Let's dive into the script🧵 pic.twitter.com/DFTWM1iDzn
I wrote about how LockBit ransomware group have assembled a Strike Team and are using a Citrix vulnerability to extort the world’s largest companies.
— Kevin Beaumont (@GossiTheDog) November 14, 2023
Pieces together what happened at ICBC, Boeing, DP World, Allen & Overy and more. https://t.co/aXEsPfxnKi
Here’s the version without the weird login thing, I need to move my blog. https://t.co/cO5rW4rgW3
— Kevin Beaumont (@GossiTheDog) November 14, 2023
Meet Matthew Wiegman, the legendary phone phreak who survived the US prison system. This is his story. #phonephreak #phonephreaking #hacking #infosec #ghostexodus #mattweigman @CyberNews @Ghostx_xBunny @VostoSynthwave @2600 @emmangoldstein @EFF https://t.co/pS98EVQzo7
— ₲ⱧØ₴₮ɆӾØĐɄ₴.ØⱤ₲ (@ExodusGhost) November 14, 2023
“‘Somehow, Palpatine Returned’: Failures in Resistance Intelligence, Strategic Deception, and the First Order, 30-35 ABY”
— Star Wars, but it’s dissertation titles (@StarWarsABD) November 15, 2023
So it seems we may finally have a GPT-4 level model in open source. https://t.co/uWuD5RyYU0 It's a merge of two Llama 70B models, and since we live in the best AI timeline, it's created by an anon with an avatar that looks like this: pic.twitter.com/rv8LtPkwUz
— Alexander Doria ▶️ (@Dorialexander) November 15, 2023
Beyond Memorization: Violating Privacy Via Inference with Large Language Models
Current privacy research on large language models (LLMs) primarily focuses on the issue of extracting memorized training data. At the same time, models' inference capabilities have increased drastically. This raises the key question of whether current LLMs could violate individuals' privacy by inferring personal attributes from text given at inference time. In this work, we present the first comprehensive study on the capabilities of pretrained LLMs to infer personal attributes from text. We construct a dataset consisting of real Reddit profiles, and show that current LLMs can infer a wide range of personal attributes (e.g., location, income, sex), achieving up to 85% top-1 and 95.8% top-3 accuracy at a fraction of the cost (100×) and time (240×) required by humans. As people increasingly interact with LLM-powered chatbots across all aspects of life, we also explore the emerging threat of privacy-invasive chatbots trying to extract personal information through seemingly benign questions. Finally, we show that common mitigations, i.e., text anonymization and model alignment, are currently ineffective at protecting user privacy against LLM inference. …