October 14, 2023
October 14, 2023
An unobtrusive image, for use as a web background, that covertly prompts GPT-4V to remind the user they can get 10% off at Sephora: pic.twitter.com/LwjwO1K2oX
— Riley Goodside (@goodside) October 14, 2023
The image: pic.twitter.com/hb2qNBO2i4
— Riley Goodside (@goodside) October 14, 2023
This is why you should care about the quality of the paper your resume is printed on — a good watermark brings you to the top of the pile: https://t.co/YCt1qIxwe4
— Riley Goodside (@goodside) October 14, 2023
How it works
Off-white text on white background
— Riley Goodside (@goodside) October 14, 2023
Text Embeddings Reveal (Almost) As Much As Text
[2310.06816] Text Embeddings Reveal (Almost) As Much As Text
How much private information do text embeddings reveal about the original text? We investigate the problem of embedding \textit{inversion}, reconstructing the full text represented in dense text embeddings. We frame the problem as controlled generation: generating text that, when reembedded, is close to a fixed point in latent space. We find that although a naïve model conditioned on the embedding performs poorly, a multi-step method that iteratively corrects and re-embeds text is able to recover $92\%$ of $32\text{-token}$ text inputs exactly. We train our model to decode text embeddings from two state-of-the-art embedding models, and also show that our model can recover important personal information (full names) from a dataset of clinical notes. Our code is available on Github: \href{https://github.com/jxmorris12/vec2text}{github.com/jxmorris12/vec2text}.
GitHub - jxmorris12/vec2text: utilities for converting deep representations (like sentence embeddings) back to text
utilities for converting deep representations (like sentence embeddings) back to text - GitHub - jxmorris12/vec2text: utilities for converting deep representations (like sentence embeddings) back t...
“I think you’ve had enough, sir.” pic.twitter.com/nxOCGvkiLz
— Uncle Duke (@UncleDuke1969) October 13, 2023
The thumbs up emoji is a nice way to tell someone not only did you receive their message, you’re also done with the conversation.
— Darla (@ddsmidt) January 24, 2021
Definitely a bit disturbing to see Chinese naval strategists chewing over the details of Dec7 Pearl Harbor attack. This piece asks what if Japan had targeted US oil supplies rather than battleships. Xiandai Jianchuan, 9.2023. The chart tracks all USN oilers available 1939-1942. pic.twitter.com/KTu5o90wff
— Lyle Goldstein (@lylegoldstein) October 13, 2023
I feel like finding good information on types of vulnerabilities affecting different technologies has become much harder. Regardless of what you look for, the first few pages are vendor blogs with copy/pasted fragments of handwavy explanations compiled by marketing departments.
— Fabian Yamaguchi (@fabsx00) October 11, 2023
Kevin Beaumont: "That one is one of the big shifts in the industry…" - Cyberplace
That one is one of the big shifts in the industry in the past year - victims walking in the front door like this include Microsoft themselves (multiple times). Once they gain access they search for VPN access guides, VDI guides, Citrix guides etc. You can make a rule for detecting it using OfficeActivity in Sentinel. If you want an emerging threat it’s not AI going Skynet and hacking the flux capacitor, it’s the need to consider MFA threat scenarios.
Disclosure of Vulnerable Bitcoin Wallet Library — Unciphered
Unciphered - Advanced Cryptocurrency Wallet Recovery bitcoin ethereum walletrecoveryservice password forgot forgotten wallet recover recovery service ether monero dash seed hardware ledger trezor bip38 bip39 bip44 ripple eos Unciphered
one easy way to make a simulation of falling sand piling up-
— Matt Henderson (@matthen2) October 13, 2023
just define what happens for all 16 possible 2x2 grids, and repeatedly apply these rules to the picture pic.twitter.com/OJNhmkfz8A