NLP-CoP Newsletter #20
Hello!
Welcome to issue #20 of the Natural Language Processing Community of Practice (NLP-CoP) Newsletter: your monthly summary of what's happening inside the community, and in the wider world of NLP.
You may have noticed we've recently moved this newsletter to Buttondown. We chose this platform with community members' privacy in mind: Buttondown lets us control whether tracking occurs, and we have opted not to track anything. Please remember you can unsubscribe from this newsletter at any time. The link is at the bottom.
New to the CoP or looking to rediscover the community? Check out our FAQ.
Events
Paradigm Initiative is hosting the Digital Rights and Inclusion Forum (DRIF) 2025 in Lusaka, Zambia from April 29 to May 1, 2025. The theme this year is “Promoting Digital Ubuntu in Approaches to Technology.” There is still time to register.
Interested in a virtual course about AI & Social and Behaviour Change Communication (SBC)? The Social and Behavior Change Communications Working Group Leads, Isabelle Amazon-Brown (design, chatbot, and ethical AI specialist) and Sarah Osman (SBC expert), are gauging interest in a training delivered over two half-days. Express your interest in participating.
Community Updates
We recently launched a new Climate, MERL and AI Working Group for those interested in learning, sharing, and defining actionable steps for AI and climate-related MERL. The Working Group space also welcomes those who are using AI for MERL in other areas and have questions about the potential benefits and environmental downsides of AI in MERL. If you’re interested in joining, let us know!
We are thrilled to welcome Varaidzo Matimba to the role of AI+Africa Lead for the NLP-CoP, thanks to The Hewlett Foundation and its support for our AI+African MERL Working Group. With Vari on the team, we’ll be able to offer dedicated focus to this key Working Group and the interests of its members.
Back in February, the Ethics & Governance Working Group brought together an amazing panel of speakers to discuss AI and labour. The conversation explored the injustices faced by workers who power AI and machine learning, and what we, as development and humanitarian professionals, can do to acknowledge and address them. MTI Core Collaborator and Working Group Lead Isabelle Amazon-Brown shared highlights from the event here.
During RightsCon, Linda Raftree (MTI founder) and Quito Tsui (MTI Core Collaborator) convened a panel with Helen McElhinney (CDAC), Heather Leson (IFRC), and Sarah Spencer (expert on AI Policy and Governance). The discussion focused on the current state of M&E of humanitarian AI and the kinds of M&E frameworks necessary to ensure the sector can effectively assess AI tools. Quito summarised key takeaways here.
Earlier this month, we published a new paper by Savita Bailur (MTI Core Collaborator and Gender, AI & MERL Working Group Lead) and Medhavi Hassija (gender and inclusion specialist), examining how GenAI impacts women’s participation in key areas and highlighting the urgent need to address biases, technology-facilitated gender-based violence, and digital divides. Read the paper here.
We partnered with the Sexual Violence Research Initiative (SVRI) to launch a new guide to help researchers make decisions about the use of GenAI tools. In the guide, we lay out the big-picture, structural, and ethical questions that researchers should ask before using GenAI for research on violence against women (or other sensitive topics), as well as practical ways to mitigate risks when using GenAI in their work. The guide is available here.
The MERL Tech Initiative has joined some 20 global organizations that are launching “Evaluation and Learning in the Context of Climate Change: An Invitation to Take Action,” an initiative that aims to support climate action through evidence and learning. In this blog post, MTI founder Linda Raftree invites us to reflect on the extractive nature of the AI industry: “If we are going to use AI in the fight for the planet, we need to invest in green, sustainable, local-first AI”.
Call for Abstracts about Artificial Intelligence (AI) & Philanthropy. An upcoming themed issue of The Foundation Review will focus on the experiences, lessons, and challenges of foundations using or preparing to use AI as part of their philanthropic strategies and impact delivery across multiple sectors. Submissions are due April 18, 2025.
What we’re reading
Quito Tsui, MTI Core Collaborator and Humanitarian Working Group Lead, has recently published a book review of Mirca Madianou’s new book, “Technocolonialism”: “At this moment when humanitarians are recalibrating, it is clear a more comprehensive tool is needed to guide humanitarian work. Technocolonialism offers a mirror for humanitarian actions of the past, as well as a forward-looking litmus test for a more reflexive humanitarian practice.”
Building on Girl Effect's long-standing commitment to responsible digital practices, the organization recently conducted experiments with GenAI, which included a custom AI evaluation framework to rigorously assess response quality, safety, and relevance. They are sharing what they learned and next steps here.
Over at the Climate, MERL and AI Working Group, we’ve been sharing links and resources related to AI and Environmental Justice, like this Resource Hub with a curated collection of knowledge, tools, and insights at the intersection of artificial intelligence and sustainability; this report from Friends of the Earth on using AI for environmental justice, covering the concept of 'just enough internet' and seven principles for moving forward; and highlights from this experiment by Earth Genome and Conservation International on how to make environmental research more accessible by combining large language models with geospatial data.
As part of the development of ethical AI guidelines for chatbots, we consulted with both global staff and young women across Africa to find out whether their understanding and priorities around ethical AI matched emerging received wisdom on AI governance. Isabelle Amazon-Brown shared insights here.
In a new blog post at Better Evaluation Knowledge, Heather Britt proposes that, when using GenAI for qualitative data analysis, evaluation practitioners can adopt a principle-led analysis plan to help turn implicit choices into intentional, ethically and methodologically sound decisions.
AI’s environmental impact is growing and, in a recent piece for Tech Policy Press, Robert Diab argues for the need for law that takes a holistic approach to environmental sustainability in AI: “We won’t make real progress without binding obligations on companies to be more transparent about impact – and to do so further back in the supply chain and further out in the afterlife of products.”
Han Sheng Chia is sharing key insights for funders and policymakers from three days of technical presentations by program developers working at the frontier of nonprofit AI use.
Questions or Comments? Get in touch on Slack or reach out directly to Bárbara!
Take care all, and thanks for being part of the NLP-CoP!