Let's talk tech Thursday #16
Welcome to another edition of Let's talk tech Thursday, the newsletter that would have made room on the door for Jack.
This week we:
- Take a little look at some of the ways Microsoft is handling the move from Windows 10 to Windows 11,
- Explore what it might mean that a third of Google's code is now AI generated,
- Get our recommended daily allowance of court cases, with a double whammy of Proton vs Apple, and a revisit of GenAI vs Content Creators,
- Round off with a feel-good story about the power of AI as a tool for self-learning.
Let's dig in...
Top Stories
Don't be fooled: Microsoft's claim that Windows 11 PCs are 2.3x faster than Windows 10 ones is based on hardware, not software
Summary
Microsoft claims Windows 11 PCs are 2.3 times faster than Windows 10 PCs. But experts have shown that this speed difference comes mainly from newer hardware, not the software itself. The company compared old Windows 10 PCs with new Windows 11 PCs, which makes the claim misleading.
So what?
In the latest in a series of what we in the industry call "tech companies just straight up being dicks", Microsoft is trying to entice people into upgrading their existing devices by claiming that the newest version of their operating system is 2.3x faster. Except, they've been very clever, and careful not to actually claim that at all.
Instead, they are saying that if you take a PC that is running Windows 11, and compare it to another PC that is running Windows 10, the Windows 11 PC is likely to be faster. Which makes sense, because Windows 11 will only run on certain, newer, hardware. It therefore isn't actually anything to do with the operating system at all, but rather that the benchmarking for Windows 11 is done on faster equipment than the benchmarking for Windows 10.
Could this just be a marketing mix-up? Should we give Microsoft the benefit of the doubt? We could. But this isn't the first time they've pulled a fast one on customers to try and eke the most out of them.
You may remember a couple of weeks ago that Asus, among others, were trying to convince people they needed an expensive Copilot+ enabled laptop in order to get Windows 11. Something that would have left most people, had they followed through on the advice, out of pocket with hardware they weren't going to use to its full potential.
With support for Windows 10 ending later this year, people have been very critical of the way Microsoft is handling the transition. In what seemed to be a move to appease those who might not make the 14th October deadline, Microsoft allowed users to get the first year of Extended Security Updates for free. But the offer comes with a lot of caveats, and it doesn't apply to anyone using Windows 10 for commercial purposes. Oh, and the fee for those users goes up dramatically year on year.
This probably won't be the last time you hear me talk about the Windows 10 end of support. If you're still running Windows 10, just remember you have until 14th October to either upgrade to Windows 11 (or a different operating system entirely), or look into the Extended Security Update plan, in order to remain safe.
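As an aside, if you're not sure which version you're actually running, the build number gives it away: Windows 11 builds start at 22000, while Windows 10 tops out below that. Here's a minimal Python sketch of that check - the 22000 cutoff is the only assumption baked in, and it obviously only tells you anything useful on a Windows machine:

```python
import sys
import platform

# Windows 11 kept the "10.0" version prefix, so the reliable
# signal is the build number: 22000 and above means Windows 11.
WINDOWS_11_FIRST_BUILD = 22000  # assumption: first Windows 11 build

def windows_generation() -> str:
    """Return a rough label for the running Windows version."""
    if not sys.platform.startswith("win"):
        return f"Not Windows (this is {platform.system()})"
    # sys.getwindowsversion() only exists on Windows
    build = sys.getwindowsversion().build
    if build >= WINDOWS_11_FIRST_BUILD:
        return "Windows 11"
    return "Windows 10 or earlier"

if __name__ == "__main__":
    print(windows_generation())
```

Or just press Win+R and run winver, which is considerably less nerdy.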
Google issues official internal guidance on using AI for coding - and its devs might not be best pleased
Summary
Google has released official guidelines to help its engineers use AI safely and effectively for coding. While the guidance stresses that humans are still essential for reviewing and securing code, nearly one-third of Google's code is now AI generated.
So what?
This is the latest in a growing number of examples of businesses leaning on AI to do a lot of the heavy lifting when it comes to developing code.
There appear to be two schools of thought at the moment. The likes of Salesforce, Microsoft, and others are using the improvements in AI to lay off vast numbers of staff. The logic is that the much cheaper AI can augment the remaining staff and more than compensate for the reduction in human capital.
The other school of thought uses the exact same logic, but argues that because AI is so cheap, you're better off dramatically increasing staff capacity to make your company so much money that laying people off would be foolish. And as a side benefit you don't have to worry about things like the administrative headaches of redundancies.
In the latest Hard Fork podcast, OpenAI's Sam Altman claimed this was the point of AI. There is such a high demand for code, stated Altman, that we might never actually satiate it. Far better to augment your existing staff with AI, allow them to produce 5-10 times the code they could produce without it, and create 30 times the revenue you could before. Indeed, Altman and colleague Brad Lightcap both seemed to think that AI couldn't be taking any jobs for this very reason.
Setting aside the fact that "number of lines of code written" as a performance metric is about as meaningful as "number of times the chef opens the fridge", it's wild that we're still debating whether AI is taking people's jobs.
While the Google story doesn't talk about lay-offs, and in fact seems to follow Altman's thinking that it's better to keep the same number of staff and create exponential amounts of code, it does strike me that very few companies in history have ever turned down the opportunity to cut costs.
By the way, it isn't just Altman who thinks that AI isn't taking jobs. A colleague of mine recommended this episode of the BBC's The Food Chain podcast to me, which focuses on how AI is being used in the food industry. In it, AI thought-leader Tamsin Deasey Weinsten also states that AI isn't going to reduce jobs, before almost immediately talking about how the job market is going to change. I'm seeing more and more of this, where certain sectors of the AI world all seem to be reading from the same hymn-sheet. They claim that AI isn't putting anyone's jobs at risk, despite this being demonstrably incorrect.
I'm not saying AI is evil, and I still very much think that it can, should, and is being used for great things. But be wary about what you're reading, and know that even "AI experts" have some pretty big blind spots.
What else is going on?
Privacy-focused app maker Proton sues Apple over alleged anticompetitive practices and fees
It's been a while since we've talked about tech companies suing each other. Proton - famous for their suite of secure, privacy-first products - is taking Apple to court on the grounds of unfair control. In particular, the fees Apple charges developers to put their products on the App Store are so profitable that they can't possibly exist just to support the running of the App Store (as Apple claims).
But that's not the most interesting part of this case. Neither is the fact that Proton have said they'll donate any potential winnings to organisations supporting democracy and human rights. For me, the most interesting part is that Proton is arguing that Apple's practices allow dictators to silence free speech. An example is the blocking of Proton's own VPN service, which Apple claimed could be used to "unblock censored websites" - a decision they appear to have taken in order to allow the App Store to keep operating in (and allow Apple to keep profiting from) authoritarian regimes.
If this case is successful, it could set a very important precedent for holding big tech to a more moral set of standards.
The Court Battles That Will Decide if Silicon Valley Can Plunder Your Work
Last week, we talked about Anthropic's court win against Bartz et al. The following day, a similar case against Meta also went in big tech's favour. In both cases, the judges appeared to be reluctant in their rulings.
In the Meta case, Judge Vince Chhabria stated after his ruling that the plaintiffs "made the wrong arguments", and that while it seems obvious that it's a stretch to describe a company making billions of dollars off the back of creatives' work as "fair use", the authors had presented a case of "clear losers".
If you remember, Judge William Alsup of the Anthropic case had similar feelings. It seems that, when it comes to battling AI, the legal system has a lot of catching up to do.
Woman in $23K Debt Asks ChatGPT for Help
Not all AI is bad, so let's end this week on some good news out of the world of tech. Jennifer Allan, a woman from Delaware, posted on social media this week saying that she'd managed to make a pretty sizeable dent in her credit card debt by asking ChatGPT for tips and support. She followed a social media trend that sees people using ChatGPT for 30 days to improve a particular skill - in Allan's case, financial literacy.
At the end of the 30 days, she'd paid off a little over half of her debt. Now, before you all rush off to OpenAI, I feel obliged to point out that almost all of this came from money she already had, but didn't know she had access to. But while ChatGPT didn't turn her into a financial tycoon overnight, it did help her understand more about finance: where to look, and how to consolidate money from a large number of different sources.
What skills have you learned from AI chatbots? I'd love to hear about it!
Hope you're all enjoying the slightly cooler weather, and with it the ability to moan about how "that was our summer been and gone".
I'll see you next week for more tech-based news musings.
Will