S06E09 of Connection Problem: European AI, Artificial Stupidity & Sailing Tech Nomads
🤖👋 [“Greetings from your algorithmic overlords,”]
📰 📰 📰 ["Lots of news to discuss.]
🏁👇 [“Let's go.”]
×
If you'd like to work with me or bounce ideas, let's have a chat.
×
Personal-ish update
Travel load just lightened a bit as two upcoming trips fell off my schedule: I'll have to skip Mozfest, which is ongoing as we speak, since I'm home alone with little K and he's coming down with a thing, so putting him on a plane and into a conference of 1,000+ folks seems like Not A Good Idea. Also, the first installment of the brand-new Copenhagen conference TechCare is getting postponed to spring. On one hand I'm bummed, because both should have been great, but also elated because it frees up a lot of time when I can very much use it. To unexpectedly available time!
Also, there’s more stuff coming up that’s exciting (listed in more detail and with links at the end of this email), including speaking at Friedrich Ebert Stiftung’s Digital Capitalism conference as well as Körber Stiftung’s & Open Knowledge Foundation’s joint event Forum Offene Stadt (both on smart cities); and a closed-door workshop with, among others, Francesca Bria. If I ever got to have a professional smart city crush on anyone, it’d be her: as Barcelona’s CTO she singlehandedly redefined what better alternatives for smart cities can look like. Very, very much looking forward to this.
These events, and others I alas won’t be able to participate in (because schedules) around connected consumer devices, all happen at the intersection of tech & policy & society: It’s great to see more happen at those specific intersections!
×
AI made in Europe / Germany
There’s been a lot of talk about “AI made in Europe”: Based on European values (whatever those are), and European rights (i.e. focused on things like privacy/data protection). What that might mean and who’s doing the most relevant work in that space is something I’ve been having the opportunity to look into as part of an ongoing project. (I’ll be able to share findings and more info soon.)
But slowly and steadily, the government side is certainly stepping up its game in this space:
A few months ago, the European High-Level Expert Group on Artificial Intelligence (AI HLEG) published its findings and recommendations, summarized on its website: “The Guidelines put forward a human-centric approach on AI and list 7 key requirements that AI systems should meet in order to be trustworthy.” There was also some dissent from one of the ethicists in the group, especially around the group’s imbalanced composition between commercial players and civil society, and the removal of so-called “red lines”, i.e. bans on certain types of uses of AI. (See Connection Problem S05E13.) The German Bundestag, meanwhile, has a commission to study AI (Enquete Kommission KI).
And over the last couple of days, the German federal government’s Datenethikkommission (data ethics commission) has been getting ready to publish its recommendations, which appear to claim a pretty broad mandate to deal with all things algorithmic.
Now, before going any further, a quick note: As part of some project research I’ve been conducting a lot of interviews with really smart people in this space. And some of the key themes that emerged from those conversations include:
- More assertive government action (regulation, laws, privacy protection, etc.) is a big chance for Europe to raise the bar for citizens/consumers globally. Think GDPR, but for the more complex issues of AI.
- European regulators might be high on their success with GDPR and tempted to rinse and repeat the same approach for AI, which is doomed to fail: data protection & privacy are important for AI, but focusing on them alone would be myopic. More stringent regulation is good, but regulation without harmonized market incentives would be counterproductive.
The underlying question: Is Europe a one-trick pony that’ll just try to GDPR everything?
These two themes go hand in hand: they’re two sides of the same coin, and they’re inherently in tension. What the Datenethikkommission proposes could fall into either camp. I have yet to read the document (I registered for the release event in Berlin, but who knows if that’s going to happen). In the meantime, Politico has seen a draft and written up a first summary, which is a quick skim and which I’d recommend reading in full:
> A focus on regulating “algorithmic systems” — a bureaucratic way of describing essentially everything that can be considered artificial intelligence these days. AI systems should be labelled according to a 5-rank system depending on the risks they pose, the experts suggest. Systems ranked in category 3 and 4 would have to fulfill tough transparency obligations; those labeled 5 are outright banned. Generally, high-risk AI applications have to be visibly labeled as such, the document says. And lawmakers should, once and for all, drop the controversial idea to grant some AI systems the status of legal personalities.
> The document recommends “regulating algorithmic systems with common horizontal requirements in European law” [note: author’s translation]. In other words, lawmakers should come up with broad, overarching rules that spell out key principles any AI system has to follow and that apply to public institutions and private corporations across sectors.
The author, Janosch Delcker, interprets this recommendation to be directly in conflict with the High-Level Group’s recommendation that “unnecessarily prescriptive regulation should be avoided.” I’m not sure it is, but unlike him I haven’t actually had a chance to read the documents, so I lean towards trusting his interpretation. His reporting on the issue has been really good. (I also highly recommend Politico’s newsletter, which is linked above.)
One thing the document recommends, also reported by FAZ, appears to be a central authority for all things data protection, as well as a set of rules for algorithmic systems. In German: a „Verordnung für Algorithmische Systeme“ (a regulation for algorithmic systems), or EUVAS for short. Should this happen, and that seems highly likely, remember that acronym. These things tend to stick around.
×
Amazon AI made in Germany
So, Amazon and AI, huh? Turns out I had a massive blind spot on my professional map regarding Amazon’s AI research footprint in Germany. I had heard about a massive high-rise planned to house a lot of Amazon developer and researcher capacity in Berlin, not far from where I’m writing this.
But I didn’t know until just now (also via Politico’s Janosch Delcker) just how big Amazon’s AI research footprint in Germany really is. Across the country, Amazon’s Director of Machine Learning Ralf Herbrich has been opening research centers and partnerships (Tübingen, Dresden, Berlin), building not least on long-term AI research efforts from before AI was cool again:
> “Artificial intelligence was already supported here at a time when that wasn’t popular” (…) When Herbrich completed his doctorate at Berlin’s Technical University in the late 1990s, most people who wanted to find a job quickly focused on computer graphics or databases. But “to have the stamina and say ‘No, this field is important and it will be relevant, even if that takes two decades’ — that has helped us a lot,” said the 45-year-old. “This puts Germany in a very, very good position when it comes to the strength of its basic research.” After stints with Microsoft and Facebook, he joined Amazon in Seattle in 2012 as director of machine learning, before returning to Berlin in 2013 to oversee an expansion of AI research in his home country.
The occasion I learned this: Amazon just lost Herbrich to Zalando, which made him a VP. Zalando, for those of you who don’t know it, is a German tech company that got started as an early Zappos/Amazon knock-off (backed by the infamous Samwer brothers, whose VC gave Germany a reputation for being a web startup clone factory), selling shoes and other apparel online.
It’s interesting to learn this, and also how much of Germany’s AI talent (of which there appears to be more than I was aware) works for a tiny number of tech companies that productize the basic research done in these research centers.
(Cue Mazzucato’s The Entrepreneurial State, here’s a short video version. I think it’s more relevant now than ever, and it’s great to see her arguments develop so much currency especially at the European level, where her model of “mission driven organizations” is used to structure the next round of Horizon 2020 research funding. We’re talking €100 billion across 5 missions.)
In related news: Is Amazon unstoppable? A pretty unflattering long-read profile of Amazon in The New Yorker.
×
Artificial Stupidity & Misinformation
Zuckerberg has been under fire these last couple of weeks. With the US presidential campaign gaining ever more momentum (how do Americans live with these never-ending campaign cycles?!), there’s increasing scrutiny of Facebook’s role in the elections and the misinformation campaigns that they bring with them.
Zuckerberg has repeatedly defended allowing misinformation in campaign ads (The Hill): “I don’t think people want to live in a world where you can only say things that tech companies decide are 100 percent true. And I think that those tensions are something we have to live with.”
Last week I already mentioned that YES, I agree that tech companies shouldn’t be arbiters of what’s true. But I vehemently disagree with the notion that we’ll just have to live with the mess this leaves us in.
Either the organizations that enable these misinformation campaigns offer a convincing solution to the problem they amplified to this degree (they didn’t create it, but it wasn’t an issue of this scale before), or they simply don’t have a right to exist in their current shape. This isn’t an intractable issue, nor a super complex one: If you scale a problem up to the size of a societal problem, you gotta solve it, or you gotta go.
The issue, and Zuckerberg’s total lack of understanding of the role FB should be playing, becomes painfully apparent in this short clip of Alexandria Ocasio-Cortez questioning Zuck. (The relevant part starts around the 1:56 mark, but the whole clip is so worth watching.) “Congresswoman, I think lying is bad. And I think if you ran an ad that had a lie in it, that would be bad.” 😱 But of course not bad enough to do anything about it, or to turn down that business. Mostly, Zuck hides behind ever-more obscure internal rules that appear optimized for ingesting all the ad money while deflecting all responsibility.
×
Purpose & Sailing Tech Nomads
In a beautiful moment of signal confluence, a few things crossed my radar within just a few days:
1) I learned about the Purpose Company, a kind of legal hack to make sure a company will stay mission-driven and reinvest any profits rather than privatize them. I think of it as a B Corp with teeth: “Purpose-Companies serve their employees and customers. Profits are primarily reinvested and serve the purpose of the company. Responsibility lies with the people and inside the organisation. Purpose-Companies work for purpose maximisation rather than shareholder-value maximisation.” While they’re working with governments to make this an officially recognized format, the way they do it for now is interesting: The company mustn’t be owned by external shareholders, and a foundation set up for this purpose holds a 1% golden share that has no voting rights but does have veto rights if anyone wants to change the underlying rules that govern the company’s mission-driven status. (There’s a simple overview here.) Like I said, it’s still a hack, but it appears to be a working one.
2) I learned about the Purpose Company structure through a lovely conversation with someone involved in two projects/companies I love, have been following from a distance as they’ve slowly been coming together, and am happy to give a shout-out to: Greenloop, a kind of wall-hanging salad growing station for the kitchen. And WildPlastic, which takes plastic trash and recycles it into new products (first, somewhat ironically, trash bags).
3) Through Robin Sloan’s newsletter I stumbled upon Hundred Rabbits, a couple (Rekka Bellum and Devine Lu Linvega) who have taken to nomadic life on a sailboat. He’s a developer, she’s an illustrator, and they both have a fascinating approach to the way they go about things. (Including moving onto a boat and crossing the Pacific with, it seems, basically no idea of what they were getting themselves into.) The website is a true miracle, and their XOXO talk offers a great glimpse behind the scenes. I really enjoyed it, both for the practical day-to-day considerations and for the philosophical underpinnings. It feels extremely zeitgeisty in the way they cut down on consumption, and how their patchy connectivity and access to electricity mean they have to build their own low-power open source tools and other shenanigans to stay resilient and still get some work done.
×
If you’d like to work with me or have a chat to explore collaborations, let’s chat!
×
Currently reading: The Beauty of Everyday Things (Soetsu Yanagi), Lost Japan (Alex Kerr), Nemesis Games (James S. A. Corey)
×
What's next?
Some conference action: In October, I’ll be speaking at the FES event “digital capitalism” in Berlin, on smart cities. In November, the Edgeryders Festival (Berlin) as well as at a Körber Stiftung event (Forum Offene Stadt) in Hamburg. And in December, of course the annual ThingsCon conference in Rotterdam. Overview here.
Enjoy your day!
Yours truly,
Peter
×
Who writes here? Peter Bihr explores the impact of emerging technologies — like Internet of Things (IoT) and artificial intelligence. He is the founder of The Waving Cat, a boutique research, strategy & foresight firm. He co-founded ThingsCon, a non-profit that explores fair, responsible, and human-centric technologies for IoT and beyond. Peter was a Mozilla Fellow (2018-19) and is currently an Edgeryders fellow. He tweets at @peterbihr. Interested in working together? Let’s have a chat.
Know someone who might enjoy this newsletter? Please feel free to forward your copy or send folks to tinyletter.com/pbihr. If you'd like to support my independent writing directly, the easiest way is to join the Brain Trust membership.
×
Header image: Unsplash (Renee Fischer)