Let's talk tech Thursday #7
Last week I teased that I was going to talk about whether investment in AI was slowing down, and whether we're seeing the beginning of the end of the AI bubble. Well, I looked into it a little more, and... meh. Probably not?
The speculation comes from AWS pausing contract negotiations on some data centre leases. But it turns out that isn't necessarily indicative of anything other than AWS pausing contract negotiations on some data centre leases - something you might well expect in light of the current political and economic turmoil. I'm not saying it's not the beginning of the end, I just don't think it was interesting enough to make a big deal out of. Read more about it here if you want to.
So, if we aren't talking about the potential downfall of AI, what delights have we in store this week?
- We talk prime numbers, and how a new discovery could require a dramatic shift in our cybersecurity principles,
- If you thought we were done talking about the government's approach to security, I don't know what to tell you. New laws powered by the Online Safety Act come into play, which have drawn considerable criticism,
- And in a move that surprised exactly no one, Microsoft have launched a new feature that not everyone is happy with. This time it comes in the form of Recall - but why should we be concerned about this particular productivity tool?
We also look at things happening in the worlds of language learning, low earth orbit, and the Post Office.
And, as a bonus treat, stick around to the end where I talk a little about the inspiration for this newsletter, and why more isn't always better when it comes to tech.
Let's dig in...
Top Stories
'Prime numbers' discovery upends thousands of years of accepted beliefs and causes big security issues
Summary
A new discovery claims that prime numbers are not as random as previously thought, which could change mathematics and impact security systems. Researchers have created a chart called the Periodic Table of Primes that supposedly predicts when primes will appear. If accurate, this would have wide ramifications on everything from cybersecurity to astrophysics.
So what
As a quick intro, here's a 30-second rundown of how encryption works that would almost certainly put you bottom of the class in a CompSci course:
- In most encryption use cases, you need two elements - a public key and a private key.
- The public key is like a lockbox that you give out to people who you want to communicate securely with. You give them the open lockbox, they pop their message in, shut it, and give it back to you.
- The only way to then open that lockbox is with the private key, which only you have a copy of.
- What does this have to do with prime numbers? Well, these keys are generated by taking two very large primes. Multiplying them together gives you your public key; the original primes, kept secret, are what your private key is built from. Getting from the public key back to the primes means factoring an enormous number - and that's the hard part.
Still with me? Great! So the security of the system is predicated on the idea that finding the prime factors of very large numbers is very difficult. If prime numbers are suddenly easy to predict, then guessing private keys suddenly becomes much easier. This would allow people to decrypt messages, insert themselves into the secure connection between you and your bank, or fake security credentials to allow access to restricted systems. In short, almost the entirety of our cybersecurity infrastructure becomes weak to the point of uselessness overnight.
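If you want to see that lockbox idea in actual code, here's a toy sketch in Python. The primes are deliberately tiny (real keys use primes hundreds of digits long), and the variable names are my own, but the shape of the maths is the real thing:

```python
# Toy RSA with tiny primes - illustration only; real keys use
# primes that are hundreds of digits long.
p, q = 61, 53                # the two secret primes
n = p * q                    # their product: shared as part of the public key
phi = (p - 1) * (q - 1)      # only computable if you know p and q
e = 17                       # public exponent (must be coprime with phi)
d = pow(e, -1, phi)          # private exponent, derived from the secret primes

message = 42
ciphertext = pow(message, e, n)    # anyone with the public key (n, e) can lock
recovered = pow(ciphertext, d, n)  # only the private key (d) unlocks
print(recovered)  # 42
```

The whole scheme leans on one asymmetry: multiplying p and q is instant, but recovering them from n is (we believe) extremely slow once the numbers are big enough. Make primes predictable and that asymmetry collapses.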
This all sounds super scary. Is there an upside to any of this? Well somewhat counterintuitively, predictable primes might also help with cybersecurity. If you can guess "password123", then you know not to use that as a password. Similarly, we may be able to identify "weak" or overused primes. If we can replace the presumed randomness of today's security with provable structure, we might usher in a new kind of security. Prime-number-based cryptography isn't the only game in town, and a discovery like this could force us to look at better ways to stay secure.
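Hunting for weak keys isn't hypothetical, by the way - researchers have already broken real-world keys that were generated with poor randomness and ended up sharing a prime. If two public keys share a factor, a simple greatest-common-divisor check exposes both instantly. A minimal sketch (with made-up tiny numbers):

```python
from math import gcd

# Two RSA moduli that, thanks to bad randomness, share the prime 61.
n1 = 61 * 53
n2 = 61 * 59

shared = gcd(n1, n2)  # a shared prime factor, if any, falls out instantly
if shared > 1:
    # Both keys are now broken: divide out the shared prime
    # to recover each key's other secret factor.
    p1, p2 = n1 // shared, n2 // shared
    print(shared, p1, p2)  # 61 53 59
```

No factoring breakthrough needed - just two keys that weren't as random as their owners assumed. A "Periodic Table of Primes", if verified, would let defenders run exactly this kind of audit at a much grander scale.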
Also, we IT nerds would be remiss if we stole all the column inches on this story. Science nerds would also get a kick out of it. Prime numbers are a big part of signal processing (for things like radio telescopes), simulations of complex astrophysics concepts, and generally underpinning a lot of what we know about maths. The Periodic Table of Primes still needs to be verified, but these could be very exciting times indeed.
Ofcom accused of prioritising interest of tech firms over child safety online
Summary
Ofcom has been criticised for not doing enough to protect children online, as the new safety measures are seen as too weak. Among advocates for stronger action is children’s commissioner, Rachel de Souza, who argues that these codes prioritise the interests of tech companies over child safety. While Ofcom claims the rules will improve online safety, many believe they fall short of what is needed to truly protect young users.
So what?
Ever since its youth as a bill being batted around the Houses of Parliament back in the sunset of 2022, the Online Safety Act hasn't seen any respite from criticism on all sides. This latest wave comes as Ofcom announces new rules that will make it a legal requirement for companies to block children's access to harmful content.
I'll go out on a limb and assume you haven't read the 350+ page document, but if I may be allowed to provide my humble opinion, it is - at best - muddled. While the intention of the Act is laudable, there are some very concerning aspects to the language, among other issues. Remember our old friend, the Apple vs UK Government court case to implement a backdoor into everyone's iMessages? The Online Safety Act is one of the primary weapons in the government's arsenal.
While the government sees these new rules as a "fresh start" to securing children's online safety, de Souza and others counter that the legislation doesn't always go far enough, and that by pulling short it gives too much room for tech companies to interpret in favour of inaction.
So who is right? Is it the advocates for a stronger Online Safety Act, who want to ensure tech companies can't shirk their responsibilities? Or should we be backing those same tech companies, who are the ones fighting for our security in the face of government mandated backdoors into encryption? Clearly, this is a complicated issue, and well beyond the scope of a couple of hundred words of a newsletter. But I think there's room for both here.
I firmly believe that giving this government - giving any government - the ability to decrypt people's messages is the top of a very slippery slope to chaos. Equally, it is clear that there are far too many online spaces that cause harm and damage. And not just to kids, by the way - over 60% of adults experience online harm in any given week. The Online Safety Act is a mess. In its current state, it is an unwieldy tool that is not fit for purpose. It never has been. But I think there is hope. Despite the Act's flaws, it is at least shedding light on a series of serious conversations that deserve our time and attention.
Related story: Instagram Is Using AI to Automatically Enroll Minors Into 'Teen Accounts'
Microsoft's AI Starts Reading All Your WhatsApp, Signal Messages
Summary
Microsoft's Recall feature can screenshot and read messages on users' screens. The tool is designed as a productivity aide, but its use could potentially expose sensitive information without the users' knowledge.
So what?
Ok, so because of the structure of these newsletters, the title is verbatim the headline of the article in question. This one might be a little fearmonger-y. While it is technically possible for Microsoft to do this, and you should definitely have thoughts about it, you don't need to go chucking out your Microsoft laptop immediately.
For starters, and for the time being, Recall is an opt-in feature, so if you don't activate it then it isn't on. Equally, Microsoft claims all the storage and processing of the images is done locally - in other words, nothing is sent to Microsoft for analysis. Supporting this is the statement by Microsoft that Recall will only work on Copilot+ PCs.
I will admit though that these don't make me feel particularly safe. Anyone who's tried to use any of the Microsoft 365 suite of late will know that Microsoft have no compunctions about chopping and changing what their software will and won't do. But that (for now) isn't even my main problem with this.
My issue speaks to what feels like a general shift in technology towards convenience at the explicit expense of others' privacy. And I don't mean targeted ads on your Google Mail account because it's read your emails - this goes far deeper than that.
Specifically, and the issue highlighted in the article, consider the use of something like WhatsApp web. The service allows me to access my WhatsApp messages from my browser, for when my phone is just slightly out of arm's reach. When someone sends me a message, there is an underlying assumption that I've taken some precautions about the device itself - a PIN on a phone, not leaving my laptop unattended on the train, etc.
But Microsoft Recall pulls another element into the equation. And crucially, it's not the purview of Meta's WhatsApp to deal with this. They've satisfied their end-to-end encryption promise. The message is decrypted at the point it hits your browser - that's how you're able to read it. That Microsoft then might take a screenshot of that message unannounced and store it somewhere is not on WhatsApp.
This idea that we can record whatever we want about our interactions with people is getting more prevalent. AI meeting assistants are a great example of this. Often, I'll join a call with one already in the room, and I've no idea where that data is being stored or what is being done with it. The difference is, though, that at least I can ask those questions of the meeting host. I can affirm or withdraw my consent. Conversely, I would have no way of knowing if someone I'm speaking to has Microsoft Recall activated.
"But Will," you might, reasonably, be asking, "how is this any different from the other person forwarding the message without you knowing, or taking a screenshot manually and saving that somewhere insecure?" And the answer to that, I would have to admit, is technically nothing at all. But I'm aware of those risks when I send that message, and depending on my relationship with the person I'm messaging there might be an implicit (or even explicit) agreement to not do those things. In short, I know how messaging works, what the inherent risks are, and how to mitigate for them. Recall introduces a new layer of unknowns.
Oh, and just in case you thought I was defending WhatsApp too much, you should know that tucked into the updates of its recent AI conference, Meta announced it was working on "Private Processing". At the risk of oversimplifying this too much, this basically means that Meta's AI will be able to read your messages to allow you to ask ChatGPT-style questions of your conversations, without breaking the end-to-end encryption of the messages. I'll be keeping an eye on this, as I remain sceptical.
Related story: Meta has a plan to bring AI to WhatsApp chats without breaking privacy
What else is going on out there?
Post Office paid £600m to continue using bug-ridden Horizon IT system
The Post Office has spent over £600 million to keep using the faulty Horizon IT system, which it has wanted to replace for over a decade. This is all, it has been recently revealed, despite the government being warned of issues with the software back in 1999. More government funding is now being provided to help develop new technology, but there are considerable doubts about when a replacement system will be ready. Regular readers will remember that this comes just a week after news that Fujitsu (the makers of Horizon) has secured a £125 million public sector contract in Northern Ireland.
Duolingo is phasing out use of contractors, in favour of being an "AI-first" company
Duolingo's CEO announced the company will focus on AI technology, and no longer hire contractors. A lot of customers have since boycotted the app, citing (amongst other things) fears that the quality of the content would drop. Also in the world of "AI is actually taking jobs", a few weeks ago Shopify's CEO told the company that future hires would only be approved if they could prove AI couldn't perform the job.
Amazon launches Project Kuiper satellites designed to compete with Elon Musk's SpaceX
Amazon has launched 27 satellites into low-Earth orbit as part of its Project Kuiper, aiming to compete with SpaceX's Starlink for global internet connectivity. These types of satellites can provide much faster internet connections to remote areas, compared to traditional telecoms satellites, and Amazon aims to put around 3,200 in the air. Analysts are dubious about Amazon's ability to eat into SpaceX's market share, but with everyone from military leaders to digital nomads looking for ways to de-Musk their lives, there might be more support to be had. None of this, though, addresses that there's only so much space up there (yay first newsletter throwback)...
Also
I started this newsletter because I thought it would be easy to use AI and automation to pull together notes I'd been making on news posts into an email. And it was easy. It was also bad. Just because a technology exists, doesn't mean you have to use it, as I explore in my latest article. #selfpromotion
Felt like a bumper edition this week, so I'll keep the wrap up light. It won't have escaped your notice that safety and security were the common themes in our top stories. And despite my best efforts, unpicking the interplay between big tech and government eluded me for another week.
Let me know your thoughts - are you concerned about any of this?
Until then, enjoy the sunshine, and I'll see you next week.
Will