
NLP-CoP Monthly Newsletter

May 5, 2025

NLP-CoP Newsletter #21


Welcome to issue #21 of the Natural Language Processing Community of Practice (NLP-CoP) Newsletter - your monthly summary of what's happening inside the community, and in the wider world of NLP. 

New to the CoP or looking to re-discover the community? Check out our FAQ. 

You may have noticed we've recently moved this newsletter to Buttondown. Please remember you can unsubscribe from this newsletter at any time. The link is at the bottom.

Events

  • On May 8, the NLP-CoP is hosting an event about the ethical and practical challenges of using AI in international development, humanitarian and social impact sectors in light of the current context. Register here.

  • On May 15, the AI+Africa Working Group is meeting to discuss NLP development, AI ethics and governance in Africa, and the working group's priorities. Join us here.

  • On May 19, the Social and Behavior Change Communications Working Group is bringing together leaders in the field of SBC and AI working in sectors such as health, mental health, and agriculture for a dynamic roundtable on emerging evidence about the use of GenAI to meet SBC challenges. Don’t miss out!

Community Updates

  • Savita Bailur joined The MERL Tech Initiative (MTI) as a Core Collaborator! We’re excited to add her background in research with a critical gender and technology lens to our core expertise. You may have met her already, as she’s been co-leading the Gender, AI and MERL Working Group at the NLP-CoP.

  • How can we better address online violence against women and girls? Back in March, we partnered with CNN’s As Equals to host a Technology Salon looking at what happens when girls’ and women’s rights are not protected online. Linda shares key takeaways from the conversation, including the need to shift away from Western-centric framing and to center the girls and women impacted by online violence.

  • On March 12, the NLP-CoP’s Sandbox Working Group hosted a webinar featuring Gerard Atkinson, Director at ARTD Consultants, who presented findings from his research comparing the performance of various language models on standard evaluation tasks, such as qualitative text analysis and the use of rubrics to assess documents. In case you missed it, Pedro Martin shared a detailed event recap here.

  • Varaidzo Matimba, AI+Africa Lead for the NLP-CoP, is conducting research to learn more about the needs, gaps, and priorities of the AI+Africa Working Group and related communities. She will soon reach out to community members for interviews! If you’d like to learn more, you’re welcome to reach out to Vari.

What we’re reading

  • Ethical guidelines for development of chatbots: Girl Effect commissioned MTI to develop comprehensive and accessible guidelines to steer its work on AI-powered SRH chatbots. Read them here.

  • The AI + Planetary Justice Alliance has created a framework to assess the climate impacts of AI, including a breakdown of each supply chain stage, stage-by-stage questions for assessing AI’s planetary justice footprint, guidance on possible data sources, stakeholders, and indicators for each stage, and more!

  • Large language models (LLMs) come with significant environmental costs, particularly in carbon emissions. A new study by Purdue University proposes the “first evaluation framework for unveiling LLM serving’s environmental impact by leveraging functional units as the basis for comparison” and highlights opportunities for greener LLM deployment by optimizing model selection, deployment strategies, and hardware choice. 

  • An AI Evaluation Framework for the Development Sector: The Center for Global Development is kicking off a series of articles to help implementers, policymakers, and funders unpack the different types of evaluations relevant for “AI for Good” applications.

  • Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI: The idea that the larger the AI system, the “more valuable, powerful and interesting it is” has become mainstream. But in their new paper, Varoquaux, Luccioni and Whittaker explore the collateral consequences of the 'bigger-is-better' AI paradigm. In addition to arguing this approach is scientifically fragile, they contend that it comes with undesirable consequences (such as unreasonable economic requirements and a disproportionate environmental footprint), that it prioritises “certain problems at the expense of others”, and that it exacerbates a concentration of power, centralizing decision-making in the hands of a few actors while threatening to disempower others.

  • A group of over 200 experts shared a public letter “affirming the scientific consensus on bias and discrimination in AI” and urging policymakers to “continue to develop public policy that is rooted in and builds on this scientific consensus”. Learn more about the goals of this letter in this podcast interview with some of the signatories.

  • Rural Senses concluded a webinar series on 'AI in Impact Measurement'. Zach Tilton, core collaborator at MTI and Sandbox Working Group Co-Lead at the NLP-CoP, was one of the presenters in this series, which brought together experts, practitioners, and policymakers to discuss opportunities, practical applications, ethical considerations, and future trends in AI-driven impact measurement. Find a summary of the four sessions and links to the recordings here.

Questions or Comments? Get in touch on Slack or reach out directly via hello@merltech.org!

Take care everyone, and thanks for being part of the NLP-CoP!

Best,
Bárbara

