This Station is Non-Operational: Issue 1
Hey!
Welcome to issue 1 of This Station is Non-Operational, an email newsletter for people who work (or live) in the technology space and spend most days battling a creeping sense of dread.
The plan, such as it is, is to put out one of these newsletters regularly, featuring links to interesting news stories, cool technology and software, things that have sparked thoughts in my mind, or just stuff that I think is relevant to trying to negotiate the modern world while piloting a computer.
If you don’t know me my name is Chris (my name is still Chris even if you do know me), I’m a Technical SEO Consultant that specialises in site crawling and scraping, but I’m also just generally interested in how humans and computers interface with each other. If you'd like to subscribe, I'd love that!
Thanks for checking it out, and with that let's delve into our first trough of stuff!
Artemis - A Calm Web Reader
Jamesg.Blog
I love RSS. I love syndicated feeds, and I spend a lot of my time as an observer of the AI industry in 2026 thinking "are you not just inventing RSS again?" (more on that in a bit).
But here's the problem - I also know I spend too much time on my phone, and I know that including a busy website like, say, The Guardian in an RSS reader immediately makes that reader almost unusable because they produce an absolute firehose of content which immediately drowns out smaller feeds that might only put out a piece of content once per week.
Artemis tries to solve these problems by doing two nice things: One, only updating once a day, and two offering things like rollup and filtering of busy feeds, meaning that quieter or less frequent voices still have time to shine. I've not had chance to sign up yet but I'm really looking forward to using what actually feels like an RSS Reader that will do what I want.
Internet spent Q4 '25 losing fights with cables, power, and itself
The Register
Every so often I'm reminded about the baffling physicality of modern infrastructure. The gas that powers your hob is actual gas! It's kept somewhere and sent down pipes when you need it! There's an explosive substance in your walls!
And this is also true of the internet. I understand how DDOS's work, I've seen what happens if you ask my poor aging macbook to open too many Chrome tabs. I understand how you can attack computers by forcing incorrect values into memory (I know how the missingno glitch works!), but the idea that there is a big cable in the sea that has the internet in it absolutely blows my mind for some reason. Is it covered over? Are bits of the sea not almost impossibly deep, filled with life straight from a horror film and also a big wire carrying season 2 of The Pitt? What happens if a crab snips it with its pincers?
Anyway yeah, I like that Cloudflare have published this. Big "We're all trying to find the guy who did this" energy.
Hidden prompt injection: The black hat trick AI outgrew
Search Engine Land
This is a really interesting piece of writing about how LLM models learned to filter and ignore hidden prompts - my apologies to Myriam for missing it when it was first published.
Of course, while this was a problem that needed solving, it doesn't even come close to addressing the bigger problem of LLMs just believing open and blatant misinformation such as the stuff highlighted in this blogpost from ahrefs.
More and more businesses are adopting Zero Trust frameworks today - that being the idea that no one user, device, or network is inherently or constantly trustworthy - it's why you have to 2fa into everything in work and refresh it every few days - but for some reason, the businesses who have decided that they are building the new oracle (not you, Oracle) in 2026 will simply believe any old junk that it finds under any old digital rock.
As SEOs, we spend a lot of time trying to demonstrate to Google that websites have EEAT (Experience, Expertise, Authoritativeness, and Trustworthiness), because Google has spent years being conned by enterprising black hat SEOs spinning up domains and using them to rank above well known brand for big money terms - and so now they use a web of different factors (both external and internal to the website itself) to understand whether a website is trustworthy or not, but it seems that still, LLMs are naive summer children, taking everything they see a face value.
(A funny aside: I am a big fan of the band Primus, and the big joke amongst Primus fans is declaring that everything they do sucks (Primus sucks! This song sucks!) - so it was very funny to see Youtube announce under a video interview with their singer Les Claypool, that the topic was "opinions are divided about the quality of this interview")
Image formats: Codecs and compression tools
Mozilla
This piece is great for a couple of reasons. One, because it really helps highlight how lacking the boilerplate guidance that appears in a lot of Tech SEO or Webperf audits are, and actually just how deep a rabbithole this stuff can be (I'd like to draw readers' attention to "can rotating gifs 90 degrees improve performance?" by Oliver Mason if they've not seen it before).
The second reason I think this is of interest is it asks the age old question "is a picture of a crow better if you compress it by removing the crow?"
Overrun with AI slop, cURL scraps bug bounties to ensure “intact mental health”
Ars Technica
I can't explain how important this feels. cURL is a service I would say most people with a hand in web development use almost daily; it's used by tech businesses of all sizes, in all kinds of projects and pipelines, and the fact they maintained a bug bounty program (A system that allows ethical security researchers a cash reward for responsibly identifying problems) was ultimately for the good of everyone who uses the internet.
So obviously, the fact that idiots have flooded the program so unrelentingly with AI generated junk that cURL are turning it off, is bad news.
See, the problem with vulnerability reports is that each one needs to be treated extremely carefully. It may be an incredibly complex vulnerability that requires intense triaging to understand, or a seemingly tiny issue with enormous repercussions that only become apparent with extra research. So obviously if cURL are now receiving tens (or more, they don't say) of these submissions every single day, it becomes impossible to identify the real issues from the crap, meaning that everyone who uses cURL (which really is everyone, I can't reiterate that enough) becomes a little worse off, and so does the web.
The Internet’s Clean Layer for AI Systems
Scrubnet.org
Very sorry to say that at the moment I am pulling a little bit of a face at this as a technology.
For a start; Aren't you just inventing RSS again?
Scrubnet markets itself as being for "the post website world", the idea being that you pay them to keep AI friendly versions of the content on your site available and up to date as html, json and text files, accessible via sitemaps linked to from your site, but hosted by scrubnet.
But of course, that poses a few interesting questions: number one, I appreciate that Scrubnet say it's for the post website world, but LLMs and AI overviews still operate at least partially in the website world, and like to link to websites to cite themselves, so I don't necessarily see what duplicating content and hosting it on a different domain does? Surely if a user is dumped onto a page of unformatted text, they simply won't trust it?
Maybe I'm seeing this through tired old SEO eyes, but dumping a copy of all your content on someone else's domain doesn't seem like the best way to drive visibility, although I'd love to be proven wrong. Sure, there's data to say that LLMs are accessign the content, but don't make me point to the cats.txt shaped sign.
Also as an aside, I would be fascinated to learn why scrubnet isn't using its own technology.