[Petit Fours #401] On pleasing the algorithm, collaborating with bots, and a final CSCW workshop call
Hi, everyone! Without further ado, here’s what I have on my mind this week:
#1 The final deadline for expressions of interest to our CSCW workshop The Work of AI: Mapping Human Labour in the AI Pipeline is getting close. Please submit your position paper (or reach out to us) by September 20!
#2 Jesse Haapoja and colleagues have a cool new paper — Moral orders of pleasing the algorithm — out at New Media & Society: “This article examines how ‘pleasing the algorithm’, or engaging with algorithms to gain rewards such as visibility for one’s content on digital platforms, is treated from a moral perspective. Drawing from Harré’s work on moral orders, our qualitative analysis of Reddit messages focused on social media content creation illustrates how so-called folk theories of algorithms are used for moral evaluations about the responsibilities and worthiness of different actors. Moral judgements of the actions of content creators encompass ideas of individuals and their agency in relation to algorithmic systems, and these ideas influence the assessment of algorithm-pleasing as an integral part of the craft, as condemnable behaviour, or as a necessary evil. In this way, the feedback loops that arrange people and code into algorithmic systems inevitably make theories about those systems also theories about humans and their behaviour and agency.”
#3 For another interesting piece of recent research, check out Collaborating with Bots and Automation on OpenStreetMap by Niels Van Berkel & Henning Pohl: “OpenStreetMap (OSM) is a large online community where users collaborate to map the world. In addition to manual edits, the OSM mapping database is regularly modified by bots and automated edits. In this article, we seek to better understand how people and bots interact and conflict with each other. We start by analysing over 15 years of mailing list discussions related to bots and automated edits. From this data, we uncover five themes, including how automation results in power differentials between users and how community ideals of consensus clash with the realities of bot use. Subsequently, we surveyed OSM contributors on their experiences with bots and automated edits. We present findings about the current escalation and review mechanisms, as well as the lack of appropriate tools for evaluating and discussing bots. We discuss how OSM and similar communities could use these findings to better support collaboration between humans and bots.”
#4 For a bit of publication and peer-review doom and gloom, along with some thoughts on how to move past them, I point you to Ian Arawjo’s piece LLM Wrapper Papers are Hurting HCI Research.
-A