
July 1, 2025

NLP-CoP Newsletter #23


Welcome to issue #23 of the Natural Language Processing Community of Practice (NLP-CoP) Newsletter - your monthly summary of what's happening inside the community, and in the wider world of NLP. 

New to the CoP or looking to re-discover the community? Check out our Welcome Kit or our FAQ. 

You may have noticed we've recently moved this newsletter to Buttondown. Please remember you can unsubscribe from this newsletter at any time. The link is at the bottom.

Events

  • Virtual training on Introduction to AI for Social & Behavior Change: SBC experts Sarah Osman and Isabelle Amazon-Brown are offering a two-day course on the responsible use of AI in SBC programming. Register now to join on August 27-28, 2025. 

  • We are launching a new Climate & AI Working Group on July 17 at 10 am ET! Join us for this meeting if you’d like to hear about and help shape our plans for the rest of the year, as well as discuss the many ways climate and AI intersect.

  • Deep Learning Indaba: The annual Deep Learning Indaba will be held in Kigali, Rwanda, from 17 to 22 August 2025. Applications are open now!

Community Updates

  • Event recap: “Should we be using AI right now?” Back in May, we hosted a conversation with over 90 CoP members about how we are navigating the practical and ethical challenges around the use of Artificial Intelligence in the current context. You can now read the key takeaways here.

  • Gender, AI and MERL Working Group Co-Lead, Savita Bailur, has shared her takeaways from the latest working group meeting, which focused on gender inclusion and GenAI. In case you missed the conversation (or would like to revisit the reflections made during the chat), visit our blog.

  • What can we learn from emerging evidence of GenAI use in Social and Behaviour Change (SBC) chatbots? The SBC Working Group brought together six organisations using GenAI to deploy and evaluate SBC interventions, and Isabelle Amazon-Brown shares key learnings from the conversation in our blog.

What we’re reading

  • AI and Power: 

    • Ideologies of Control: A Series on Tech Power and Democratic Crisis from Tech Policy Press and Data & Society features expert contributors who name and dispel the myths and ideologies behind some of the changes of the current political moment in the US, as they relate to data, AI, and the tech sector: "(...) what is clear is that the changes we’re seeing advance the personal power and enrichment of tech elites at great cost to the capacity of average citizens to exercise their rights. What we are witnessing—in real time—is the growing capacity of a very small group of people to leverage the technology they own and control to impose their untested, anti-democratic visions of how to organize society and the economy on the rest of us." 

    • Is Responsible AI possible? Lighthouse, MIT Tech Review and Trouw investigated Amsterdam's efforts to "build a 'fair' algorithm to detect welfare fraud." They found that though Amsterdam "followed every piece of advice in the Responsible AI playbook", when the city deployed a pilot in the real world, "the system continued to be plagued by biases" and was no more effective than the human caseworkers. "We reveal the different lessons drawn by participants and experts from Amsterdam’s experience of trying to build a Responsible AI system. These competing interpretations reflect deeper disagreements about whether Responsible AI can ever deliver on its promises, or whether some applications of artificial intelligence are fundamentally incompatible with human rights." 

    • Many African languages, despite being spoken by millions, are either misrepresented in or absent from current mainstream AI systems. In this article for Nature, Mpho Primus writes about how to develop AI models that capture the complexity of African languages. 

    • The AI Policy Playbook, published by GIZ, is a practice-oriented guide for crafting inclusive, responsible, and context-aware AI governance—through the eyes and experiences of policymakers across Africa and Asia. Available here. 

  • Environmental impact of AI: 

    • Harmful impacts of data centers: Four researchers at The Maybe conducted a case study analysis of five data centers across Chile, the US, the Netherlands, Mexico, and South Africa. Drawing on stakeholder interviews and secondary analysis, their report examines how government agencies and technology companies shape data center development, and the strategies local communities use to resist data centers’ harmful impacts. 

    • What is the Hidden Cost of Our AI Habits? Helena Rovner, Ezequiel Molina, and Maria Rebeca Barron Rodriguez (from the World Bank) have shared a guide for choosing the right AI for the job: “because not every task needs maximum power, and power takes a toll”. 

  • Generative AI: 

    • Deep fakes and tech-facilitated GBV: "While this abuse is another tool of the techno-patriarchy, the most alarming aspect of it is that these technologies are already cheap or free of cost, sophisticated, and freely accessible, so the abuse continues to increase. (...) A May 2025 study by the Oxford Internet Institute identified approximately 35,000 publicly downloadable deepfake generators, 96% of which targeted identifiable women, and many of which were intended to generate non-consensual nude or sexual imagery”.

    • A new report by the European Commission’s Joint Research Centre examines the transformative role of Generative AI (GenAI) for innovation, productivity, and societal change, with a specific emphasis on the European Union. The authors call for a comprehensive and nuanced policy approach to ensure that technological developments are fully aligned with democratic values and the EU legal framework.

    • A new report by the ARC Centre of Excellence for the Digital Child highlights nine of the most urgent challenges around the everyday use of GenAI tools, especially when children might be using these systems. Read here. 

  • AI and Evaluation:

    • How can we realize the potential of LLMs while maintaining rigor? To answer that question, the World Bank has published a guidance note demonstrating good practices for experimenting with LLMs based on a frequently occurring use case in their evaluations: structured literature review (SLR). Read the guide for concrete examples of how LLMs can be integrated into evaluation.

    • The Independent Advisory and Evaluation Service has published a technical note on Considerations and Practical Applications for Using Artificial Intelligence (AI) in Evaluations. It provides practical tools—such as software recommendations and prompt examples—to support responsible and effective use of AI in evaluation activities. 

MTI Training 

MTI offers a diverse menu of training modules covering everything from foundational concepts to advanced applications. Our trainings can be delivered virtually, in person, or in hybrid formats, and can be mixed and matched based on your individual interests or your organization’s specific needs. Learn more here. 

Take care all, and thanks for being part of the NLP-CoP!

Best,
Bárbara

Questions or Comments? Get in touch on Slack or reach out directly via hello@merltech.org!

