S06E25 of Connection Problem: Publishing season
×
Hi,
Today marks the end of Season 6 of Connection Problem. Thanks for indulging me and for giving your attention: It’s not a small thing to give and I really appreciate it.
I’ll be traveling with varying degrees of connectedness for about a month. So it’s time to throw a few things out here lest they get lost along the way: It’s publishing / release season, even though it might take another couple of days for all of this to appear online.
Enjoy.
×
If you'd like to work with me or bounce ideas, let's have a chat.
×
Personal-ish & project updates
Writing: My attempt at writing 20 blog posts in 20 work days (overview) is now officially out of reach. But at 15 or so, that’s not too bad. The latest:
- Innovation Rankings
- Introducing the Berlin Institute for Smart Cities and Civil Rights, see also the Secretary-General’s remarks to the UN Human Rights Council: “The Highest Aspiration: A Call to Action for Human Rights”
- Europe’s AI Strategy: Give your input today.
Attempts to also include a recent interview with Futurezone.at (about IoT and rights) and the releases of two reports (one on smart cities for the Foundation for European Progressive Studies, or FEPS; the other on the European AI & Society Ecosystem with Stiftung Neue Verantwortung for Luminate) have so far not materialized. They’ll be out within days, but I might already be in transit by then. I’ll share those via Twitter, too, and you can keep an eye on the respective websites.
Taking a break: Repeat klaxon: I’ll be traveling for most of March. If there’s stuff to discuss, please bring some patience. This is time for refueling and reading, so that afterwards I can hit the ground running. 🙏
Also, after I’m back I’ll be working from a new space, shared with some of my favorite Berlin digital rights initiatives and lots of lovely folks. More on that soon.
×
Algo accountability, wave 2
A good article by Frank Pasquale on which questions we have been asking, and which questions we should be asking:
Though at first besotted with computational evaluations of persons, even many members of the corporate and governmental establishment now acknowledge that data can be biased, inaccurate, or inappropriate. Academics have established conferences like “Fairness, Accountability, and Transparency in Machine Learning,” in order to create institutional forums for coders, lawyers, and social scientists to regularly interact in order to address social justice concerns. When businesses and governments announce plans to use AI, there are routine challenges and demands for audits. Some result in real policy change. For example, Australia’s Liberal government recently reversed some “robodebt” policies, finally caving to justified outrage at algorithmic dunning.
All these positive developments result from a “first wave” of algorithmic accountability advocacy and research (to borrow a periodization familiar from the history of feminism). These are vital actions, and need to continue indefinitely—there must be constant vigilance of AI in sociotechnical systems, which are all too often the unacknowledged legislators of our daily access to information, capital, and even dating. However, as Julia Powles and Helen Nissenbaum have warned, we cannot stop at this first wave. They pose the following questions:
Which systems really deserve to be built? Which problems most need to be tackled? Who is best placed to build them? And who decides? We need genuine accountability mechanisms, external to companies and accessible to populations. Any A.I. system that is integrated into people’s lives must be capable of contest, account, and redress to citizens and representatives of the public interest.
While the first wave of algorithmic accountability focuses on improving existing systems, a second wave of research has asked whether they should be used at all—and, if so, who gets to govern them.
×
Miscellanea
- An EU-funded research project (now wrapped up) published the Ethical Stack, a toolkit to explore ethical questions in product development. Looks pretty interesting!
- Clearview AI, everyone’s favorite crooked AI company, disclosed a major security breach. Who could’ve seen that coming? 🙈
×
If you’d like to work with me or explore collaborations, let’s chat!
×
Currently reading: The Shortest History of Germany (James Hawes), Gideon the Ninth (Tamsyn Muir), The Longing for Less (Kyle Chayka)
×
What's next?
Mic drop. See you all in 4-5 weeks 👋🎤
Yours truly,
Peter
×
Who writes here? Peter Bihr explores how emerging technologies can have a positive social impact. At the core of his work is the mission to align emerging technologies and citizen empowerment. To do this, he works at the intersection of technology, governance, policy and social impact — with foundations, public and private sector. He is the founder of The Waving Cat, a boutique research and strategic advisory firm. He co-founded ThingsCon, a non-profit that explores fair, responsible, and human-centric technologies for IoT and beyond. Peter was a Mozilla Fellow (2018-19) and an Edgeryders Fellow (2019). He tweets at @peterbihr and blogs at thewavingcat.com. Interested in working together? Let’s have a chat.
Know someone who might enjoy this newsletter? Please feel free to forward your copy or send folks to tinyletter.com/pbihr. If you'd like to support my independent writing directly, the easiest way is to join as a member.
×