Let's talk tech Thursday #6
I know, "It's Friday," you're thinking, "not Thursday!" But in the words of Douglas Adams, time is an illusion.
Anyway, just for a change, this week we have some stories that look at the interplay between big tech and government. We're going to look at:
- Google’s staring down the barrel of a major antitrust ruling, which could fundamentally change the way it works,
- Telegram is threatening to shut up shop rather than weaken its encryption,
- And the UAE is betting big on AI to help write its laws, which is an absolutely foolproof plan with no possible bad outcomes.
We also check in on Gen Z's relationship with AI, a surprising public sector comeback from Fujitsu, and a new type of phishing scam that even those of us who've been round the block a few times might not spot.
Let's dig in...
Top Stories
Judge rules Google illegally monopolized adtech, opening door to potential breakup
Summary
A federal judge ruled that Google illegally monopolised the advertising technology market, violating antitrust laws. The court may require Google to break up its ad business (most likely by selling Google Ad Manager) or impose restrictions to ensure fair competition. This case follows a similar ruling last year where Google was found to have monopolised the internet search market.
So what?
We'll have to wait and see whether Google heads down door number one or two (or indeed whether the choice will be Google's to make), but regardless this is a big win in the battle against technology monopolies. It comes on the heels of Meta being called in front of the Federal Trade Commission.
Between the FTC, the Department of Justice, and numerous other bodies, there are dozens of similar cases going through various US courts at the moment. Antitrust battles aren't new, but if it seems like they're getting more and more prominent, you'd be right.

Back in 2017, third-year Yale Law student Lina Khan wrote an influential essay for the Yale Law Journal titled "Amazon's Antitrust Paradox", which laid out the case for how much damage Amazon specifically (and big tech monopolies in general) do to the economy. Fast forward to 2021, and Khan was appointed Chair of the FTC, where she wasted no time in bringing a whole raft of new regulations and proceedings to bear. Remember the Right to Repair chat we had a couple of weeks ago? While it's not a federal law (yet), in 2021 the FTC did vote to enforce it, meaning action can be taken against companies that restrict repairs, even in states without R2R laws.
Despite the change in administration since Khan's appointment, the FTC and other branches of the US Government, such as the DoJ, which brought the Google case to court, don't seem to be slowing their roll when it comes to tackling big tech.
All that said, these cases take time. The closing arguments for this case were presented in November last year, and we're only now hearing the ruling. We still don't know what the outcome of this is actually going to be for Google. Meanwhile, the tech world famously doesn't hang around. There will be more to see with this one...
Telegram pledges to exit the market rather than "undermine encryption with backdoors"
Summary
Telegram's CEO, Pavel Durov, stated that the company would leave a market rather than compromise user privacy with encryption backdoors. This response comes as governments across the world push for legal access to encrypted messages for law enforcement.
So what?
Another week, another story about governments trying to backdoor their way into your messaging apps. I promise, I don't go looking for these...
In the interests of painting a full picture of this, you should know that Durov was arrested in August 2024 and charged on 12 counts, including violations related to drug trafficking, child exploitation, and money laundering. Much of this stems from alleged complicity and negligence as the CEO of Telegram, an app that - among many legitimate and necessary uses - is also a safe home for others wanting to engage in these sorts of activities.
With that piece of housekeeping out of the way, Durov's response is an interesting one. It comes off the back of a defeat of the French government's Article 8, which would have mandated that all encrypted messaging apps and email services allow user data to be decrypted on request from the authorities. This, of course, is alongside similar court battles in the UK (as regular readers will know), Sweden, and in Florida.
Would Telegram really shut down if forced to put in a backdoor? It's hard to say. Durov has an estimated net worth in the region of $15 billion - largely due to his ownership of Telegram - which you have to imagine would be difficult money to walk away from. Equally though, if a backdoor was put in place, it seems likely that the value of Telegram itself would drop considerably.
UAE to use AI for writing laws
Summary
The United Arab Emirates will be the first country to use artificial intelligence to write and review laws, aiming to make the legislation process faster and clearer. The new approach is expected to create laws in simple language for the diverse population.
So what?
The article paints a rather upbeat view of events, focusing on the time-saving nature of creating laws this way, and also the ability to have plain-language laws in a variety of languages. With only 10% of the UAE's population being "locals", having laws written not only in simple language, but in simple language in anyone's native tongue, is a big step forward.
On top of that, the ability to parse thousands of court cases to determine how to adapt and change the law does sound like a great way to cut down on red tape. Given my point on the Google story above that the law often moves too slowly for tech, bringing tech into the fold might be one way of mitigating that.
There are a lot of very large caveats to all of this though. The chief one being our old friend bias. If you've been reading anything I've put online for the last few months, you'll know that the one thing above all else I teach when I give AI training is that it is impossible to avoid bias in AI systems. You can work around it: you can put humans in the loop to identify where biases creep in, you can widen the net of the Large Language Model's training data to include points outside of the mainstream, you can limit the scope of the AI to focus only on very specific facts and figures. But even with all of that, there is only so far AI is going to take you. And that, by the way, assumes that the human(s) you put in the loop are themselves unbiased. Which is exceptionally unlikely.
We already have examples of where AI is being used in the criminal justice system, and it isn't great. For example, the COMPAS algorithm (Correctional Offender Management Profiling for Alternative Sanctions) has long been used Stateside to help predict recidivism rates and therefore inform sentencing. Whatever the intended aims, it amplifies an inherently racist judicial system: ProPublica's 2016 analysis found Black defendants were nearly twice as likely as white defendants to be wrongly flagged as high risk.
It is theoretically possible to use AI to support the making and upholding of laws - I do believe that. I don't know, however, of any case study that has yet cracked that code. As with a lot of tech, containing its use and being clear about what you're using it for is key to how useful a tool it is. Whether the UAE gets this right remains to be seen.
Anything else of note?
Gen Z students won't use ChatGPT - but not because it's cheating
In a survey of 333 Cambridge University undergraduates, while some admitted to being wary of using Generative AI in their studies, no one opposed it on the grounds that it was cheating. Rather, the moral consternation comes from the impact on the environment, and the dubious ethics surrounding how Large Language Models are trained.
Fujitsu wins a £125 million Northern Ireland contract
Fujitsu has secured a £125 million contract to create a new land registry system for Northern Ireland, despite previous commitments not to bid for UK public sector work due to the Post Office Horizon scandal. Opinions on this are, as you might imagine, mixed.
Beware, hackers can apparently now send phishing emails from "no-reply@google.com"
Hackers are abusing Google's notification system to send phishing emails that appear to come from Google's real notifications account. They create Google accounts and OAuth apps to make these emails look legitimate, tricking users into giving away their credentials. This, the latest salvo in the arms race that is cybersecurity, can trick even the most tech-savvy. Stay vigilant!
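To make the mechanics a little more concrete, here's a minimal sketch (using entirely hypothetical headers, not a real message) of why these emails sail past the usual checks: the authentication result can be a genuine pass, because Google's own systems really did generate the message - it's the attacker-controlled content inside it that's the problem.

```python
from email import message_from_string

# Hypothetical raw email, modelled loosely on the kind of notification
# described above. The Authentication-Results header shows a DKIM pass
# for google.com - which is exactly what a mail client would see.
raw = """\
From: Google <no-reply@google.com>
Subject: Security alert
Authentication-Results: mx.example.com; dkim=pass header.d=google.com

A subpoena has been issued for your account. Review the case here...
"""

msg = message_from_string(raw)

auth = msg.get("Authentication-Results", "")
dkim_passed = "dkim=pass" in auth

# The signature check succeeds because the message genuinely originated
# from Google's infrastructure (e.g. an OAuth app notification). A pass
# is therefore necessary evidence of legitimacy, but not sufficient.
print(dkim_passed)  # prints True - and the mail can still be a phish
```

In other words, "the email is really from Google" and "the email is safe" are two different claims, and this scam exploits the gap between them.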
That's it for another Let's talk tech Thursday. A lot to keep an eye on, and I'm particularly curious to see what the UAE's AI endeavours will mean for other countries' relationship with the technology - even as rumours of an AI slowdown are on the horizon. But I'm getting ahead of myself. That's for next week's issue...
Speaking of next week, I promise I'll try and also bring you some stories that aren't about big tech and government. Oh - and I'll send on a Thursday too. How does that sound?
Have a great weekend, and see you next week.
Will