NLP-CoP Newsletter #22
Welcome to issue #22 of the Natural Language Processing Community of Practice (NLP-CoP) Newsletter - your monthly summary of what's happening inside the community, and in the wider world of NLP.
New to the CoP or looking to rediscover the community? Check out our Welcome Kit or our FAQ.
You may have noticed we've recently moved this newsletter to Buttondown. Please remember you can unsubscribe from this newsletter at any time. The link is at the bottom.
Events
Gender, AI and MERL Working Group Meeting: We are hosting an event on June 5, at 10am ET/7:30pm IST, with Working Group co-lead Savita Bailur and gender and inclusion specialist Medhavi Hassija, about gender inclusion in GenAI. Learn more and RSVP here.
The Geopolitics of Critical Minerals and the AI Supply Chain: The Institute for Advanced Study is hosting an event on June 2, where leading scholars in AI, geopolitics, infrastructure, and resource extraction will examine the intersections between the AI value chain and the extractive economies on which it relies. Register here.
Want to help build Feminist AI? Chayn, a global nonprofit working on gender-based violence, is trying to build an AI tool grounded in feminist values, and they’re inviting developers, researchers, and designers to join them on June 12. You can register here.
The AI+Africa Working Group is meeting on June 24 at 5pm CAT/6pm EAT. We’ll share more information about the call soon. In the meantime, you can RSVP here.
Community Updates
Propose a session: Gender, AI, and MERL Working Group Leads, Savita and Allison, are planning a series of monthly meetings to facilitate critical conversations, and they’d love to have community members join them as speakers! If you’d like to propose a session, please reach out to Bárbara.
AI Vendor Assessment Tool: Revolution Impact and The MERL Tech Initiative have teamed up to create an AI Vendor Assessment Tool designed for decision-makers who work in the international development, humanitarian, and social impact sectors and who need to assess AI vendors but may not have specialized knowledge in AI systems. Read it now.
Open Office Hours: NLP-CoP member Annie Brown, founder of Reliabl, a “for-good AI development provider focused on building inclusive, accurate AI,” is offering open office hours to help mission-driven founders and teams better understand the technical side of AI.
New brief on AI in the humanitarian sector: Quito (MTI core collaborator and Humanitarian AI Working Group Lead) and Linda (MTI Founder) wrote this new humanitarian AI brief, which synthesizes key information about AI’s varied applications for critical humanitarian decision-makers.
What we’re reading
AI and Evaluation:
A new paper examines the use of AI in evaluation, with a specific focus on democracy initiatives. The authors, Quito Tsui and Linda Raftree, emphasize the importance of taking a highly specific approach to AI tool selection, analyzing discrete applications and their possible utility for democracy evaluations. The paper pays particular attention to possible harms, both those broadly associated with AI in evaluation, such as biased data, and those specific to deploying AI in the evaluation of democracy programs, including the possibility of unintended outcomes. Read here.
Podcast: “Does AI really save work in evaluation?” This week, REvaluation Conference 2024 is sharing recorded panel discussions and keynote speeches in their REvaluation Podcast, including a talk by The MERL Tech Initiative’s Founder, Linda, about how GenAI is being used in the MERL space, how AI affects our societies and planet, and how we can think critically about AI in our work. Listen here.
Assessing evidence on the effectiveness of humanitarian AI use cases: Linda and Quito have also been featured in Humanitarian AI Today's latest podcast episode, focused on assessing evidence on the effectiveness of humanitarian AI use cases. Listen here.
Balancing innovation and rigor: New guidance argues that thoughtfully leveraging the potential of AI for evaluation will require continuous experimentation, learning, and adaptation.
Climate, environment, and AI:
The energy cost of our AI future: Power Hungry, a new project from MIT Technology Review, offers a brief on everything you need to know about estimating AI’s energy and emissions burden, looks at the effects of this “hunger for AI,” and even includes reasons to be optimistic despite it all.
Digital damage and environmental implications of generative AI: This podcast episode with researcher Dustin Edwards covers his research on the environmental implications of generative AI, feelings of generative AI ambivalence, and his research on digital damage. There is also a transcript here.
How is AI reshaping conservation science? This piece by Nathaniel Burola explores how AI is influencing conservation science and raising critical ethical questions. Burola challenges the assumption that conservation tech is inherently neutral or just, calling for data justice principles to ensure frontline communities are meaningfully included in the development of environmental AI tools.
AI and power:
Understand TESCREAL: Radical Futures put together this quick guide to TESCREAL, a cluster of ideologies guiding tech’s most powerful decisions. You can also learn more about the term, coined a few years ago by Timnit Gebru and Émile P. Torres, in this article.
An alternative to concentration of power and democratic challenges of mainstream AI: A new white paper about Public AI by Bertelsmann Stiftung presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized.
AI and government:
“Why Generative AI Isn’t Transforming Government (Yet) — and What We Can Do About It”: Tiago C. Peixoto in Tech Policy Press calls for more empirical work on the innovation-diffusion lag.
Cutting Through the Noise: Powering the Next Generation of Government Portals with Generative AI. Over at the Center for Global Development, Han Sheng Chia, Surbhi Bharadwaj, and Christine Hwang write about how generative AI can improve citizen access to digital public services by transforming e-government portals. Read here.
Limits of AI tools:
LLMs overgeneralize scientific texts (more than humans!): A new paper shows that when summarizing scientific texts, “LLMs may omit details that limit the scope of research conclusions, leading to generalizations of results broader than warranted by the original study”. The results of the study, which tested 10 prominent LLMs, indicate a strong bias in many widely used LLMs, posing “significant risk of large-scale misinterpretations of research findings."
AI’s limited understanding of gender puts health equity at risk: New research from Oxford Internet Institute shows how AI language models encode a flawed understanding of gender. The researchers explain, "This poses significant risks for transgender, nonbinary, and cisgender individuals, particularly where AI is integrated into health technologies."
Impact of AI in workplaces and labor markets:
Ethan Mollick summarizes “four key facts” about AI adoption, covering AI impacts on work performance, potential “transformational gains” from AI systems, and how companies are typically reporting small to moderate gains from AI so far.
A new study from Statistics Denmark and the University of Chicago found that half of workers have used ChatGPT, with younger, less experienced, higher-achieving, and especially male workers leading adoption. The study also showed women are 20% less likely to use ChatGPT than men in the same occupation, a gender gap that persists among coworkers within the same workplace. Surveyed workers were also twice as likely to say that ChatGPT provides “smaller rather than larger time savings for workers with greater expertise”. Read here.
New AI Usage Data: A new study analyzing AI’s potential impact on labor markets points to the fundamental difference between today’s AI systems and previous waves of automation. According to MIT Gov Lab, it is high-skilled, professional jobs that are being impacted first, rather than the routine jobs affected by previous technologies.
MTI Training
MTI offers a diverse menu of training modules covering everything from foundational concepts to advanced applications. Our trainings can be delivered virtually, in person, or in hybrid formats, and can be mixed and matched based on your individual interests or your organization’s specific needs. Learn more here.
Questions or Comments? Get in touch on Slack or reach out directly via hello@merltech.org!
Take care, all, and thanks for being part of the NLP-CoP!
Best,
Bárbara