Let's talk tech Thursday #28
This week: the NHS goes closed source, electronic shelf labels might mean you pay more for your groceries, and we see what police on both sides of the pond have been up to.
Hello again,
To make up for a lack of LT3 last week (did you miss me?), we've got a bumper edition. Two top stories, and a smattering of additional pieces to get your teeth into.
This week we're looking at the NHS pulling access to its code, and what electronic shelf labels might mean for UK inflation. Also, we talk VPNs and age verification, and check in on what police on both sides of the pond have been up to.
And for our blog spotlight, we talk burnout in the age of AI.
Let's dig in...
Top Story #1
🏥 NHS to close-source hundreds of GitHub repos over AI, security concerns
A quick summary
The NHS has ordered that all of its previously freely available project repositories be shut off from the public, over cybersecurity concerns.
NHS England have said that this is a temporary measure while they shore up their cybersecurity posture. It seems to be a direct response to Mythos - the advanced AI model from Anthropic (makers of the Claude chatbot) that can identify cybersecurity holes in software.
What's the deal with Mythos?
Last week, Mythos found 271 vulnerabilities in the popular open-source Firefox web browser. Firefox has since released an update to close these holes, but it left a lot of the community a little shaken. Anthropic state that access to Mythos is highly restricted, but have since been investigating claims of unauthorised access. The worry is that if bad actors gain access to Mythos, they can use it to find vulnerabilities in all kinds of software.
The NHS is concerned because open source means that anyone can take the code and analyse it. Point Mythos at the easily available code, and those with ill-intent could cause some serious headaches with healthcare data.
The thing is, that's not really how this works. In a blog post, Terence Eden (a technologist specialising in open data standards, privacy, and open source software) points out that the NHS made the code for the Covid Contact Tracing app immediately available to anyone who wanted to use it. He writes:
That was a nationally mandated app, installed on millions of phones, subject to intense scrutiny from hostile powers - and yet, despite publishing the code, architecture and documentation, the open source code caused zero security incidents.
So Open Source is safer?
The security of a piece of software isn't necessarily determined by whether it is open or closed source.
Closed source does add two additional barriers for those looking for vulnerabilities in the code. The first is technical - it makes it more difficult (but, crucially, not impossible) to see what the code for an application actually says. The second is legal - trying to read and analyse the code for an app without permission is against the law (think of it as IP theft).
The thing is, neither a legal barrier, nor a weak technical one, are going to stop a determined criminal. On the flipside, if your code is open, then far more people with good intentions can review it and point out flaws to the relevant people. To put it another way, transparency improves security.
The NHS, by the way, agree with this. At the time of writing, point 12 of the NHS service standard still says that code should be made open source. And they have a direct response to security concerns, which is that you "shouldn't rely on closed code as your only security measure" (from the Gov.UK website).
What does it matter if the NHS is open or closed source?
In the grand scheme of things, perhaps it doesn't. And in any case, this is apparently a temporary measure. Maybe the NHS is trying to get access to Mythos for itself so it can do what Firefox did, before rereleasing the improved code?
But beyond the security of it, there are wider implications to this. For one, the NHS is funded by public money, so you should be able to see what that money is doing. Additionally, open sourcing a project allows others to use, build on, and iterate on it, ultimately reducing duplicated work and making sure that people's time is spent well. Again, these aren't just my opinions - this is all still in point 12 of the NHS service standard.
Good tech should be founded on principles of openness and of collaboration. As more and more of our European neighbours embrace (and in some cases, mandate) open source software, it's telling that we've made a decision to put up walls. If there's more to this than a misguided concern around security, if there are other reasons for restricting access to NHS IP, then perhaps we haven't seen the last of this story.
Top Story #2
🛒 Bank of England suggests electronic shelf labels might lead to dynamic pricing in supermarkets
A quick summary
Electronic Shelf Labels (ESLs) are digital displays that show the price of products. They are touted as an environmentally conscious way of keeping prices up to date, requiring very little power to operate and harming no trees.
Last month, the Bank of England (BoE) published a report on the implications of algorithmic pricing when used outside of the usual industries of travel and tourism. Their opinion is that it would rapidly increase inflation, especially if used in supermarkets.
What's the issue?
If you've ever tried to book an Uber at rush hour, you'll have experienced a form of dynamic pricing. Surge pricing, where you end up paying 2x the usual cost of an Uber because more people are trying to get an Uber, is a relatively straightforward example of price responding to supply and demand. No one likes it, but it is "fair" in the sense that everyone is affected equally. (In theory at least... there are semi-regular reports that in some locations Uber charges iPhone users more than Android users for the same rides. Uber, naturally, denies this.)
But what the BoE is alluding to here is a specific type of dynamic pricing, known as "personalised pricing". This is less about assessing the wider market conditions (though that is still important), and more about understanding the individual buyers.
Personalised pricing?
As my fellow economics nerds know, personalised pricing is the holy grail of capitalism. It ensures no value is left on the table. Let's say you'd be willing to pay £7 for a meal deal, but I won't go any higher than £5. So long as the store is still making a profit on a £5 meal deal, the most efficient thing they can do is to sell one to me at £5, and one to you at £7. If they offer it to us both for £5, you'd buy it, but the store would be losing out on the £2 extra you'd have paid. If they offer it to us both for £7, they'll lose out on a sale they could have made to me.
With the numbers: Say the meal deal costs the store £1 to produce. Selling it at £7 means that you'd buy it, but I wouldn't. The store makes a £6 profit. Selling it at £5 means we'd both buy it, and the store makes £8 profit. But selling one to me at £5 and one to you at £7 means that they make £10 of profit. Personalised pricing means the store can squeeze every bit of value they can out of customers.
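For fellow nerds who'd rather see that arithmetic as code, here's a tiny sketch. It uses exactly the figures from the example above (a £1 cost, and £5 and £7 willingness-to-pay):

```python
# Worked example: uniform vs personalised pricing for the meal deal.
COST = 1.00  # the store's cost per meal deal, in pounds
willingness_to_pay = {"me": 5.00, "you": 7.00}  # the most each of us will pay

def profit_at_uniform_price(price):
    """Profit if everyone sees the same price: only customers whose
    willingness-to-pay meets the price actually buy."""
    buyers = [w for w in willingness_to_pay.values() if w >= price]
    return sum(price - COST for _ in buyers)

def profit_personalised():
    """Profit if each customer is charged exactly their maximum."""
    return sum(w - COST for w in willingness_to_pay.values())

print(profit_at_uniform_price(7.00))  # 6.0  (only "you" buys)
print(profit_at_uniform_price(5.00))  # 8.0  (we both buy)
print(profit_personalised())          # 10.0 (every pound extracted)
```

Same conclusion as above: the uniform price always leaves money on the table, either as a lost sale or as a forgone £2.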
Is personalised pricing bad? Certainly not, if you listen to the supermarkets. It's also nothing new. Anyone with a Tesco Clubcard will have received vouchers for specific products based on shopping history. If you're a regular purchaser of Dolmio bolognaise sauce, the team at Loyd Grossman might pay Tesco to tempt you over to the other side with a 50p off voucher. We largely accept this as a reasonable way to do business. Supermarkets would argue though that these are offers, not prices. They are a reward for, literally, being part of the club. All anyone else has to do in order to get the same offer is do exactly what you did - sign up for a Clubcard and let Tesco understand their shopping habits.
The important part to remember is that, with the amount of data storecards have on you, they can work out - to a very high degree of accuracy - the most you'll pay for a product. That 30p off Fox's Chocolate Viennese biscuits isn't a gift. It's because they know 25p off wouldn't have been enough to tempt you.
What has this got to do with electronic shelf labels?
ESLs have the ability to change prices in real time, based on whatever is going on in the world. This could be at a micro level (Sainsbury's just started a 2-for-1 offer on bananas, so Tesco knocks the price down), or a macro one (research is released showing that blueberries have even more antioxidants than previously thought, so supermarkets hike the price).
But we can now add another, storecard-powered, layer onto that. It wouldn't take much - many shops already have AI cameras tracking individual shoppers (we covered this back in LT3 #14 and LT3 #15, but also ASDA have been trialling AI cameras for stock management). The technology exists to dynamically change the price of a product as you walk up to it, and for that price to be based on what the supermarket thinks is the most you'll pay for it.
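To make concrete what that would actually mean, here's a purely hypothetical sketch. Everything in it - the function name, the floor and ceiling figures, the idea of a per-shopper willingness-to-pay estimate fed in from a storecard profile - is my illustration, not anything a supermarket has confirmed doing:

```python
# Hypothetical sketch: an ESL picking a personalised price for the
# shopper a camera has just matched to a storecard profile.

FLOOR_PRICE = 3.00    # the lowest the store will go (cost plus margin)
CEILING_PRICE = 4.50  # the highest advertised price, to avoid headlines

def personalised_price(estimated_willingness_to_pay):
    """Charge as close to the shopper's estimated maximum as possible,
    clamped between the store's floor and ceiling prices."""
    return min(max(estimated_willingness_to_pay, FLOOR_PRICE), CEILING_PRICE)

# A bargain-hunter and a big spender see different prices for the same item.
print(personalised_price(2.40))  # 3.0 - never below the floor
print(personalised_price(3.80))  # 3.8 - squeezed to their exact maximum
print(personalised_price(9.99))  # 4.5 - capped at the ceiling
```

The unnerving part isn't the three lines of logic, which are trivial. It's the surveillance pipeline feeding that `estimated_willingness_to_pay` number.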
And by the way, this isn't a hypothetical. The US store Kroger trialled a system of ESLs and facial recognition cameras back in 2024. So far, no store this side of the pond has been as open about their intentions, but a telling line from Thomas Hill, co-founder of HyperFinity, in the title article suggests that "core commodities like bread and milk" will not be subject to personalised pricing. Why specify that, unless everything else is on the table?
What else is happening in the world of tech?
🖥️ Utah first state to hold websites liable for users who mask their location with VPNs
A new law went into effect in Utah yesterday, under which a person is deemed to be accessing a website from the state if they are physically there - regardless of where their connection says they are from. The main aim is to stop Utahns from using VPNs to circumvent online identification checks.
The law is controversial for a number of reasons. For one, it's technically impossible to enforce. As a result, the safest option for websites might be to block users with VPNs altogether. There are ways around this for the technically savvy, meaning the law is only really likely to stop people who use commercial VPN services. That would include a large number of people using VPNs for legitimate privacy reasons (journalists, those living under authoritarian regimes, etc), who aren't likely to have the technical skillset to work around it. It is one more step towards a deeply fragmented internet.
Meanwhile, in the UK, kids are bypassing age verification by drawing mustaches on their faces. No, I'm not kidding.
📹 Police are using surveillance tech to stalk love interests
Over in the US, it transpires that Flock, the largest surveillance company you've never heard of, gives any law enforcement agency access to footage from its more than 80,000 cameras, with very little in the way of required authorisation. As a result, there have been "at least 14 cases" of police officers using the system to "keep tabs on their romantic interests, including current partners, exes, and even strangers who unwittingly caught their eye in public".
Also worryingly, police departments have been using Flock to keep tabs on activists, and track the movements of people who attended specific protests.
Of course, we know that power corrupts. This isn't even close to the first time police have used tech to abuse their authority. Back in LT3 #4 we talked about the dangers of allowing an encryption backdoor, and discussed the NSA agents spying on ex-lovers, and the Australian cop-turned-serial-rapist. But of course, we don't need to look overseas to know that we have enough of our own law enforcement officers with evil intentions.
With more and more AI-powered CCTV cameras popping up across the UK, there's more and more for us to be aware of.
And speaking of corrupt police...
🐀 Met investigates hundreds of officers after using Palantir AI tool
Back in March, we covered Palantir's increasing hold on UK state services. Well, the Met is one of those, and last week it transpired that as a result of some Palantir surveillance, dozens of London cops are now under investigation for a range of corruption offences. This includes, as a surprise to absolutely no one who has been paying attention to UK police antics, three officers who have been arrested for "abuse of authority for sexual purposes" and for sexual assault.
Also on the docket are 42 senior officers who are being investigated for "serious non-compliance", and 500 officers who received prevention notices for abuse of IT systems.
It's left something of an odd taste in some people's mouths. Clearly, rooting out corrupt law enforcement is a good thing. But using Palantir to do it feels a little like burning the fields to starve the locusts. Do we need an AI like Palantir to be able to catch dirty cops, or should the Met be able to wash its own face properly?
Blog spotlight
💻 Do I belong in tech anymore?
Most of my blogs of the week have been inspirational, or nerdy and insightful. This week it's a little of both, and a little of neither.
Ky Decker recently quit their job as a design engineer, and talks about how burnout - and specifically burnout as a result of the rise of AI - was a major factor.
I'm not a design engineer, but I do recognise some of the same things Ky points out. AI is here, everyone is using it, and that bears a lot of thinking about. Whether it's the use of AI in a meeting without other people's consent, or the use of AI that means you don't engage your own human brain, there are many things that we are losing. Ky talks about a lot of it eloquently and with heart.
Read it, but don't be fooled into thinking that this only applies to tech roles. Replace "code reviews" with "strategy updates" or "funding applications" or "meeting minutes" - I'd argue that every office-based job that exists has the same set of challenges that Ky lists out. A rose by any other name, and all that.
And that wraps up another LT3. Thanks as ever for reading - do forward this on if you think someone else would find the articles interesting.
Have a great rest of the week, and see you (probably) next Thursday.
Will