Refuge's Tech Safety Newsletter August 2024
King’s Speech 2024
The new government announced its legislative plans in the King’s Speech on 18 July. The Speech reiterated the Government’s pledge to halve violence against women and girls (VAWG) and set out several pieces of legislation that may be relevant to tech abuse:
Crime and Policing Bill
Aims to increase confidence in policing and set out early measures to tackle VAWG.
Victims, Courts and Public Protection Bill
Aims to ensure victims get the support they deserve and reduce delays in the courts.
Product Safety and Metrology Bill
Aims to respond to new product risks and enable the UK to keep pace with AI.
Digital Information and Smart Data Bill
Aims to establish Digital Verification Services to help with things like moving house, pre-employment checks, and buying age-restricted goods and services.
Ofcom children’s online safety consultation
Ofcom has been consulting on its proposals for the protection of children as part of its implementation of the Online Safety Act.
Refuge supported a joint response to the consultation, led by the Online Safety Act Network and signed by 12 organisations and academic experts on VAWG. The response welcomed Ofcom’s proposals for social media platforms to introduce greater controls on their recommender systems, but called for Ofcom to go further to protect girls online.
The risks of AI-generated content for survivors of domestic abuse
Like most technologies, artificial intelligence is not inherently bad. On the contrary, it can make our lives easier and help us perform tasks more efficiently. In the hands of perpetrators and abusers, however, AI can be used to exercise power and control over survivors.
By now, the term ‘artificial intelligence’ may be quite familiar to you, having heard it used in different contexts. That is because AI is everywhere. Even if you have never used tools like ChatGPT or an AI image generator, you are still likely to have used AI more often than you think. When you use a smartphone, watch movies or TV shows on a streaming platform, or ask your device’s virtual assistant a question, you are interacting with AI. The technology is a regular part of our daily lives, and it is not new, although its use has become widespread in recent years.
So, what is AI? In simple terms, artificial intelligence is technology that allows machines and systems to perform tasks that simulate human intelligence. AI systems work by processing large amounts of labelled training data, identifying patterns within that data, and then using those patterns to predict outcomes, make decisions and solve problems.
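For readers who are curious about what “learning patterns from labelled data” looks like in practice, here is a minimal sketch using Python and the widely used scikit-learn library. The weather-themed data, feature values and labels are entirely invented for illustration; they are not drawn from any real system.

```python
# A toy example of supervised machine learning: the system is shown
# labelled examples, finds patterns in them, and then uses those
# patterns to predict an unseen case.
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours of daylight, temperature in °C].
# The labels say what kind of day each example was.
# (All values here are made up purely for illustration.)
training_data = [[8, 12], [9, 15], [16, 25], [14, 22], [7, 10], [15, 24]]
labels = ["rainy", "rainy", "sunny", "sunny", "rainy", "sunny"]

# "Training": the model searches the labelled data for patterns.
model = DecisionTreeClassifier()
model.fit(training_data, labels)

# "Prediction": the learned patterns are applied to a new example.
print(model.predict([[13, 21]]))  # likely prints ['sunny']
```

Real AI systems work on the same basic principle, just with vastly more data and far more complex patterns.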
AI technology can bring incredible benefits to our lives, such as helping doctors diagnose diseases more accurately. However, with its extensive and widespread use come challenges and risks, particularly when the technology is misused. And as the technology continues to advance and become even more accessible, the potential for it to be used in harmful ways may grow.
One of the primary concerns, and one that may disproportionately affect survivors of domestic abuse, is abusers using AI to create content that looks real but has been entirely fabricated. This content, usually in the form of images, videos or audio clips, is often referred to as a ‘deepfake’.
Because this content can look so realistic, perpetrators can use it to spread disinformation, mislead people and damage survivors’ reputations, or use it for threats and blackmail. The alarming part is that the technology capable of creating this harmful content is readily available to anyone on the internet.
Of the types of content generated using AI, non-consensual intimate imagery is the most common, and it overwhelmingly depicts women. The technology used to create it has become easier to use and more accessible, as well as more sophisticated and realistic. Perpetrators of abuse are creating and sharing these images online, but the harm to survivors does not only occur when the images are distributed or posted: the mere threat of sharing AI-generated intimate images can itself be used to coerce, manipulate or intimidate survivors, causing great psychological harm.
Although the images are not real, the impact and harm they cause are. Moreover, because the technology used to create AI-generated intimate images is so sophisticated, it has become increasingly difficult to distinguish real images from fake ones. This form of abuse can have devastating effects on a survivor’s wellbeing and physical safety, and can be used by perpetrators to blackmail and humiliate survivors, damage their relationships and harm their employment prospects.
Because of the seriousness and profound impact of this offence, and as a result of campaigning by Refuge and our allies, the Online Safety Act took a step forward in addressing intimate image abuse by criminalising the sharing of AI-generated intimate images, a crucial legal protection for survivors of this form of abuse.
Tech Safety Summit
Refuge is pleased to be hosting the Tech Safety Summit 2024. Key speakers have already been announced; get more information and sign up here.