S06E15 of Connection Problem: Just Enough City
Hello & a beautiful, productive Thursday.
I’ll move a little off the usual formula this week and include a full blog post I wrote, followed by a few smaller bits and pieces that caught my attention. Usually I refrain from creating too much redundancy. (I know that quite a few of you will have that same post in your ⚡️RSS feeds⚡️ as well.) However, it feels timely and relevant and like there’s something interesting there to explore at some more depth in the coming weeks — a kind of local maximum. And, for reasons, I wanted to make sure it’s out there sooner rather than later. So, enjoy!
×
If you'd like to work with me or bounce ideas, let's have a chat.
×
Just Enough City
In this piece, I’m advocating for a Smart City model based on restraint, and focused first and foremost on citizen needs and rights.
A little while ago, the ever-brilliant and eloquent Rachel Coldicutt wrote a piece on the role of public service internet, and why it should be a model of restraint. It’s titled Just enough Internet, and it resonated deeply with me. It was her article that inspired not just this piece’s title but also its theme: Thank you, Rachel!
Rachel argues that public service internet (broadcasters, government services) shouldn’t compete with commercial players on commercial metrics, but rather use approaches better suited to their mandate: Not engagement and more data, but providing the important basics while collecting as little as possible. (This summary doesn’t do Rachel’s text justice; she makes more, and more nuanced, points there, so please read her piece: it’s time well spent.)
I’ll argue that Smart Cities, too, should use an approach better suited to their mandate—an approach based on (data) restraint, and on citizens’ needs & rights.
This restraint and reframing matter because they prevent mission creep; they also reduce the carbon footprint of all those services.
Enter the Smart City
Wherever we look on the globe, we see so-called Smart City projects popping up. Some are incremental and add just a few sensors. Others are blank-slate projects, building whole connected cities or neighborhoods from scratch. What they have in common is that they are mostly built around a logic of data-driven management and optimization. You can’t manage what you can’t measure, management consultant Peter Drucker famously said, and so Smart Cities tend to measure… everything. Or so they try.
Of course, sensors only measure so many things, like physical movement (of people, or goods, or vehicles) through space, or the consumption and creation of energy. But thriving urban life is made up of many more things, and many of those cannot be measured as easily: Try measuring opportunity or intention or quality of life, and most Smart City management dashboards will throw an error: File not found.
The narrative of the Smart City is fundamentally one of optimizing a machine to run as efficiently as possible. It’s neoliberal market thinking in its purest form. (Greenfield and Townsend and Morozov and many other Smart City critics have made those points much more eloquently before.) But that doesn’t reflect urban life. The human side of it is missing, a glaring hole right in the center of that particular vision.
Instead of putting citizens in that spot in the center, the “traditional” Smart City model aims to build better (meaning: more efficient, lower-cost) services for citizens by collecting, collating, and analyzing data. It’s the logic of global supply chains and predictive maintenance and telecommunications networks and data analytics applied to the public space. (It’s no coincidence that the large tech vendors in that space come from one of those backgrounds.)
The city, however, is no machine to be run at maximum efficiency. It’s a messy agora, with competing and often conflicting interests, and it needs slack in the system: Slack and friction both increase resilience in the face of larger challenges, as do empowered citizens and municipal administrations. The last thing any city needs is to be fully algorithmically managed at maximum efficiency, only to come to a grinding halt when — not if! — the first technical glitch happens, or some company ceases operations.
Most importantly, I’m convinced that, depending on context, collecting data in public space can be a fundamental risk to a free society — and that it’s made even worse if the data collection regime is outside of the public’s control.
The option of anonymity plays a crucial role for the freedom of assembly, of organizing, of expressing thoughts and political speech. If sensitive data is collected in public space (even if it’s not necessarily personally identifiable information!) then the trust in the collecting entity needs to be absolute. But as we know from political science, the good king is just another straw man, and given the right circumstances even the best government can turn bad quickly. History has taught us the crucial importance of checks & balances, and of data avoidance.
We need a Smart City model of restraint
Discussing publicly owned media, Rachel argues:
It’s time to renegotiate the tacit agreement between the people, the market and the state to take account of the ways that data and technology have nudged behaviours and norms and changed the underlying infrastructure of everyday life.
This holds true for the (Smart) City, too: The tacit agreement between the people, the market and the state is, roughly stated, that the government provides essential services to its citizens, often with the help of the market, and with the citizens’ interest at the core. However, when we see technology companies lobby governments to green-light data-collecting pilot projects in public space with little accountability, that tacit agreement is violated. Not the citizens’ interests but those multinationals’ business models move into the center of these considerations.
There is no opt-out in public space. So when obtaining meaningful consent to the collection of data about citizens is hard or impossible, that data must not be collected, period. Surveillance in public space is more often detrimental to free societies than not. You know this! We all know this!
Less data collected, and more options for anonymity in public space, make for a more resilient public sphere. And what data is collected should be made available to the public at little or no cost, and to commercial interests only within a framework of ethical use (and probably for a fee).
What are better metrics for living in a (Smart) City?
In order to get to better Smart Cities, we need to think about better, more complete metrics than efficiency & cost savings, and we need to arrive at those metrics (and at all other big decisions about public space) through a strong commitment to participation: From external experts to citizen panels to digital participation platforms, there are many tools at our disposal to make better, more democratically legitimized decisions.
In that sense I cannot offer a final set of metrics to use. However, I can offer some potential starting points for a debate. I believe that every Smart City project should be evaluated against the following aspects:
- Would this substantially improve sustainability as laid out in the UN’s Sustainable Development Goals (SDG) framework?
- Is meaningful participation built into the process at every step from framing to early feedback to planning to governance? Are the implications clear, and laid out in an accessible, non-jargony way?
- Are there safeguards in place to prevent things from getting worse than before if something doesn’t work as planned?
- Will it solve a real issue and improve the lives of citizens? If in doubt, cut it out.
- Will participation, accountability, resilience, trust and security (P.A.R.T.S.) all improve through this project?
Obviously those can only be starting points.
The point I’m making is this: In the Smart City, less is more.
City administrations should optimize for thriving urban life and democracy; for citizens and digital rights — which also happen to be human rights; for resilience and opportunity rather than efficiency. That way we can create a canvas to be painted by citizens, administration and — yes! — the market, too.
We can only manage what we can measure? Not necessarily. Neither the population nor the urban organism needs to be managed; they just need a robust framework to thrive within. We don’t always need real-time data for every decision — we can also make good decisions based on values and trust in democratic processes, and by giving a voice to all impacted communities. We have a vast body of knowledge from decades of research in urban planning, sociology, and many other areas: Often enough we know the best decisions, and it’s only politics that keeps us from enacting them.
We can change that, and build the best public space we know to build. Our cities will be better off for it.
×
“Good” isn’t good enough
Ben Green at Harvard University wrote an excellent paper that blew up my Twitter timeline. (Which tells you a lot about my Twitter experience!) It’s titled “Good” isn’t good enough (PDF), and it takes pretty harsh aim at the folks in tech who try to do “good” by throwing tech at society:
Despite widespread enthusiasm among computer scientists to contribute to “social good,” the field’s efforts to promote good lack a rigorous foundation in politics or social change. There is limited discourse regarding what “good” actually entails, and instead a reliance on vague notions of what aspects of society are good or bad. Moreover, the field rarely considers the types of social change that result from algorithmic interventions, instead following a “greedy algorithm” approach of pursuing technology-centric incremental reform at all points.
In other words, if you have an algorithmic hammer, everything looks like an algorithmic nail.
The point is not that there exists a single optimal definition of “social good,” nor that every computer scientist should agree on one set of principles. Instead, there is a multiplicity of perspectives that must be openly acknowledged to surface debates about what “good” actually entails. Currently, however, the field lacks the language and perspective to sufficiently evaluate and debate differing visions of what is “good.” This allows computer scientists to make broad claims about solving social challenges while avoiding rigorous engagement with the social and political impacts.
Also known as the Ex-Tech Worker Turned Ethicist problem.
By the way: Turns out Ben Green is also the author of a book out earlier this year, The Smart Enough City, which looks amazing and went straight onto my "must read" list.
×
How to tackle climate & traffic in cities
An amazing, and extremely accessible, Twitter thread on how to tackle traffic-related climate questions in cities, with interesting numbers and what seems like a very solid understanding of creating the conditions for better outcomes. (Spoiler alert: We need carrot & stick, and also better infrastructure.) The best 101 I've seen on this. (via Chris Adams)
×
If you’d like to work with me or explore collaborations, let’s have a chat!
×
Currently reading: The Beauty of Everyday Things (Soetsu Yanagi), Lost Japan (Alex Kerr), Tiamat’s Wrath (James S. A. Corey)
×
What's next?
Next week, I’ll be co-hosting the ThingsCon conference in Rotterdam. Come swing by! (In fact, if you haven’t got a ticket yet, hit me up; I have a few heavy-discount vouchers that I can pass on to this lovely crowd here.) For all other presentations and talks as they come in, see the overview here. Otherwise it’s research and writing all the way well into January.
Enjoy your day!
Yours truly,
Peter
×
Who writes here? Peter Bihr explores how emerging technologies — like the Internet of Things (IoT) and artificial intelligence — can have a positive social impact. He is the founder of The Waving Cat, a boutique research, strategy & foresight firm. He co-founded ThingsCon, a non-profit that explores fair, responsible, and human-centric technologies for IoT and beyond. Peter was a Mozilla Fellow (2018-19) and is currently an Edgeryders fellow. He tweets at @peterbihr. Interested in working together? Let’s have a chat.
Know someone who might enjoy this newsletter? Please feel free to forward your copy or send folks to tinyletter.com/pbihr. If you'd like to support my independent writing directly, the easiest way is to join the Brain Trust membership.
×