Edition 1 - the first one is the hardest
Hi,
This is the first of hopefully many emails we’ll deliver to your inbox. Thanks for signing up!
For this week’s edition, Jan took a brief look at OpenAI’s Atlas browser from a researcher/data journo perspective. Does it spark joy? Spoiler: It doesn’t.
Got feedback, ideas for future topics, or thoughts on this newsletter in general? Drop us a line at readwritenewsletter@proton.me!
OpenAI's browser does in mere hours what would take minutes to do by hand
I recently tested the AI-powered browser that OpenAI released last week. It's essentially ChatGPT built into what feels like Apple's Safari browser, and the marketing is bold: "[The browser] takes us closer to a true super-assistant that understands your world and helps you achieve your goals", because it can "understand what you're trying to do," "complete tasks for you," and "automate research". Well, let’s see.
As someone who works with data, scraping, and research workflows, I wanted to see how it handles real-world tasks. Setting aside all the considerations that come with using AI, the idea of making small, somewhat technical tasks easy for users intrigues me. So I came up with three scenarios a journalist or researcher might come across, from simple to complex: download all PDF files from a page, extract information from a site into a table, and find corporate records in a registry.
simple downloads
I was on a train when I did this, so for the first test, I opened the investor relations page of the Deutsche Bahn, the German national railway company. It has a bunch of PDF files with financial reports, executive summaries, etc., which I wanted on my computer. It’s a trivial task: I would use a terminal one-liner like the one below to download all files in seconds (explained here).
# -r: recurse, -l1: one level deep, -H: follow links to other hosts,
# -nd: no directory tree, -A.pdf: keep only PDF files
wget -r -l1 -H -nd -A.pdf https://ir.deutschebahn.com/en/reports/db-group-and-db-ag/
The AI cheerfully confirmed that it would achieve the same, but then kept asking clarifying questions - only to eventually tell me that I should click the links myself. Thanks. I later read in the sparse documentation OpenAI published alongside the browser’s release that it won’t download files, for security reasons.

extracting lists
On to the second task. I was on my way to visit Rheinfels Castle (worth it!), so naturally what I needed was a CSV file with all the castles along the Rhine valley. I opened the Middle Rhine Wikipedia page and asked for a table of all castles with links to their Wikipedia pages.
I got my list, but it wasn’t consistent: the AI had added an extra column (left or right bank of the river - helpful, maybe, but not what I had asked for), and when I ran the same prompt again, it labeled the entries without a Wikipedia page differently than before.
Doing it manually, I used Python with the BeautifulSoup library to parse the page and create structured output - even without golfing the code, it takes only about ten lines (sketched below). Nevertheless, the AI browser more or less delivered, and it was definitely faster.
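For the curious, here is roughly what that manual version looks like. The selector and the name filter are my assumptions about the page’s markup and would need adjusting after a quick look at the HTML:

import csv
import requests
from bs4 import BeautifulSoup

# Fetch and parse the article
html = requests.get("https://en.wikipedia.org/wiki/Middle_Rhine").text
soup = BeautifulSoup(html, "html.parser")

with open("castles.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["castle", "url"])
    # Grab wiki links from list items and keep the ones that look like castles
    for a in soup.select("li a[href^='/wiki/']"):
        if "Burg" in a.get_text() or "Castle" in a.get_text():
            out.writerow([a.get_text(), "https://en.wikipedia.org" + a["href"]])

Crucially, running this twice produces the same CSV twice.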
I gave it another try, this time making things a bit harder: I opened my Bluesky profile and asked it to put all posts from 2024 and 2025 into a table. Again, it gave me a table, but I noticed that some posts were missing - a small inconsistency that was fixable with some back and forth, but not great.
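The deterministic route here would be Bluesky’s API instead of the rendered page. A minimal sketch, assuming the public AppView endpoint (which serves public posts without a login; the handle is a placeholder):

import requests

url = "https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed"
params = {"actor": "example.bsky.social", "limit": 100}  # placeholder handle
rows, cursor = [], None
while True:
    if cursor:
        params["cursor"] = cursor
    data = requests.get(url, params=params).json()
    for item in data.get("feed", []):
        rec = item["post"]["record"]
        if rec.get("createdAt", "").startswith(("2024", "2025")):
            rows.append((rec["createdAt"], rec.get("text", "")))
    cursor = data.get("cursor")
    if not cursor:  # no cursor means we've paged through the whole feed
        break

No missing posts, no columns I didn’t ask for.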
navigating sites
Next, I wanted to try something that’s actually hard. I went to the German Handelsregister website, which holds documents on all companies registered in the country, and asked the machine to find the records under a specific registration number. After some back and forth, which included me clicking a button manually, it managed to navigate to the company’s page - but then, again, failed to download the files.

To be fair, this would be a complex task to automate. It requires navigating an outdated user interface and some knowledge of how corporate records are archived in Germany. For a few records, I’d just do it by hand. For larger-scale work, I’d automate it with Selenium or Playwright - browser-automation tools that can handle JavaScript rendering, form submissions, cookies, and downloads.
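Here’s a sketch of what the Playwright route could look like. Every selector and field name below is a hypothetical stand-in, not the Handelsregister’s actual markup - the real site would need inspecting first:

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(accept_downloads=True)
    page.goto("https://www.handelsregister.de/")
    # Hypothetical form field and buttons - placeholders for the real selectors
    page.fill("input[name='registerNumber']", "HRB 12345")
    page.click("button[type='submit']")
    page.wait_for_load_state("networkidle")
    # expect_download() captures the file the click triggers so we can save it
    with page.expect_download() as dl:
        page.click("text=Download")
    dl.value.save_as("record.pdf")
    browser.close()

The upside over the AI browser: the script can click the button, and it can save the file.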
limitations
After a few more tests (clicking and navigating a map, recognizing the content of images on screen, comparing results of different searches), I felt I had seen what it is capable of. None of the other tests impressed me, and none were interesting enough to discuss here in detail. All in all, as a research or data journalism tool, it falls apart: it couldn’t perform searches reliably, failed to navigate many pages, and frequently stopped mid-task. Some sites are off-limits entirely - for example, everything OpenAI considers adult content.

But the core limitation is that it doesn't actually understand how the web works. It can capture the visible content, but can barely interpret the underlying DOM - the internal architecture of a website’s elements - or handle dynamic elements like JavaScript sliders or lazy loading properly.
Imagine navigating a website, but with your eyes closed. Every five seconds or so you can blink and process what you see, then you immediately close your eyes again. And you probably don’t remember what you saw two or three blinks ago. That, roughly, is how this tool approaches the tasks you give it.
And like all large language models, it is not deterministic. Asking it to do the same thing twice will yield different results - or maybe not. That makes it useless for structured data work, where accuracy and repeatability matter. I am not sure whether it has access to a site’s source HTML, and I did not find anything in the OpenAI material explaining what it actually processes and what it doesn’t.
the verdict
In short: it’s bad at most tasks, but it can be a quick help for non-techy users who need data summarized or put into tabular form and who can live with the downsides of running everything through an LLM. For serious research or data journalism (which, I fully understand, is not what it was designed for), it makes most tasks ten times more complicated than just doing them manually. OpenAI promises "a super-assistant that researches, analyzes, and automates tasks", but what I saw was a system that struggled to click a button.
Beyond my brief test drive, tools like this raise larger ethical and environmental questions. This post is meant as a quick overview, showcasing a tool that may or may not be helpful for some of our readers and offering a few alternatives along the way. We might dedicate another edition of this newsletter to the broader question of whether and how AI should be used in OSINT research and journalism.