Refuge's Tech Safety Newsletter February 2026
Valentine’s Day and Romance Fraud
With Valentine’s Day once again fast approaching, many financial institutions are warning their customers to stay vigilant for signs of romance fraud, which perpetrators increasingly weaponise at this time of year. Romance scams continue to be a rising form of fraud, with the Financial Conduct Authority (FCA) reporting a 9% increase in incidents recorded for the previous financial year, generating losses of over £106 million for victims and the wider economy. Whilst this devastating figure highlights the significant losses suffered by those impacted by romance fraud, it is widely recognised that the overall impact of such scams is largely unknown due to underreporting, which is influenced by factors such as the ‘stigma, emotional distress and fear of judgement’ felt by victims, according to Dr Rasha Kassem.
Whilst perpetrators of romance fraud weaponise many different means to inflict harm on victims, authorised push payment (APP) fraud remains a common method. The FCA describes romance fraud involving APP as occurring when victims are ‘convinced they are in a genuine relationship and deceived into sending money to criminals’, which often results in ‘significant emotional, psychological and financial impact’ for the victim. Considering romance fraud tactics further, Anna Rowe from the think tank LoveSaid highlights that perpetrators will often build trust and a relationship with victims over a period of time, keep the relationship online within encrypted apps, avoid in-person meetings and request funds on the basis of detailed personal circumstances and difficulties. These can include personal and family medical emergencies, sudden loss of income due to unemployment, or travel costs to facilitate an in-person meeting with the targeted victim that never materialises. Fortunately, victims of APP fraud now have greater protections in place following regulatory changes in October 2024. The Payment Systems Regulator (PSR) has introduced a new set of rules which enable all eligible individuals impacted by APP fraud to seek mandatory reimbursement from their banking provider.
When considering the ages of victims targeted by romance fraudsters, data shared by Lloyds Banking Group shows that individuals aged 55 to 64 make up a fifth of romance scam victims, and that cases involving victims aged over 55 increased by 52% during 2024. However, beyond APP fraud, other forms of fraud can be exploited within a romantic relationship for financial gain; money laundering schemes, in particular, are increasingly used to target young people. The National Crime Agency reports that approximately six in ten victims targeted to move money as part of a laundering scheme are under the age of 30, with social media platforms being weaponised to recruit and manipulate young people into such schemes. The Home Office describes money laundering in the context of financial exploitation as occurring when ‘criminals target children and adults at risk and take advantage of an imbalance of power to coerce, control, manipulate, or deceive them into facilitating the movement of illicit funds’. Perpetrators gain access to victims’ bank accounts by exploiting the relationship, through coercion or threats of violence, or by fraudulent means such as stealing the victim’s identity to open new accounts for the transfers.
Victims affected by this form of financial exploitation, even those who were unaware that their accounts were being misused for criminal purposes, can face significant long-term impacts such as debanking, criminal prosecution and loss of access to credit such as mortgages, according to NatWest bank. The UK Government also highlights that employment opportunities for affected individuals can be limited, including losing the right to practise regulated professions such as medicine or law. These scams can therefore carry significant and long-lasting consequences for victims, and their impact cannot be overstated.
If you have been impacted by any of the issues discussed in this article, do know that support is available.
For domestic abuse support, please contact the National Domestic Abuse Helpline on a trusted device. The Helpline can be accessed 24/7 and provides both practical and emotional support to individuals impacted by domestic abuse. It can be contacted on freephone 0808 2000 247. Support is also available via British Sign Language, 10am to 6pm, Monday to Friday.
You can also access further guidance and information on financial exploitation from the Don’t Be Fooled project by CIFAS and UK Finance, which includes specialist advice for young people and for those working with them.
If you wish to report fraud to the police and you are not at immediate risk of harm, contact the UK’s fraud reporting service, Report Fraud, online or by phoning 0300 123 2040. If you are at immediate risk of harm, contact the police on 999, or on 101 for non-emergencies.
If you believe that your identity or bank accounts have been misused fraudulently, contact your banking provider as soon as possible to share your concerns. Your banking provider is required to investigate your concerns, provide you with support and issue you with a formal response. If you do not receive a response, or are unhappy with the outcome, consider raising a complaint and escalating it to the Financial Ombudsman Service.
Data from UK Finance shows that 75% of all romance scams originated online in 2024. To find out more about digital safety and how you can keep your information safe online, please access Refuge’s tech safety guides through our website on a trusted device.
Updates from the Technology-Facilitated Abuse and Economic Empowerment Team
In February, the cold winter months draw to an end, and we begin to see the green shoots pushing their way through the earth. It’s an exciting time, full of hope. It is with this in mind that I will update you on some of the developments in the team.
The 10th February was Safer Internet Day 2026, and the theme was “Smart Tech, Safe Choices – Exploring the Safe and Responsible Use of AI.”
Larome Hyde, our Children and Technology Project Lead, is developing a Safer Internet Day resource pack for parents and Children’s Support Workers to help start age-appropriate conversations with children and young people about AI, chatbots, and new technologies in refuge settings. She will also be running focus groups with children, young people, and parents to hear directly about their experiences of AI chatbots and companions: what they like, what worries them, and how these tools show up in everyday life. What people share will shape new Refuge resources and an external workshop co-facilitated with Training Lead Kiera Brodie, making sure survivor voices are at the heart of this work.
The Policy, Public Affairs and Research team kicked off January by attending a Ministerial and VAWG Sector event at Bumble's Head Office, where we discussed the benefits of proactive technologies to detect image-based sexual abuse. The event was attended by Minister Narayan from DSIT and Minister Phillips from the Home Office.
We responded to the fast-moving situation around X's AI tool, 'Grok', which some users have instructed to create non-consensual deepfake intimate images of women and children. The Policy team took part in several media opportunities discussing the impact of deepfake intimate image abuse on survivors, and we signed a joint letter with organisations in the VAWG, online safety, and children's sectors calling on the Government to stop using X in light of the scandal. We also put out several press statements. These reiterated calls on Ofcom to be swift and robust in its investigation of X's compliance with the Online Safety Act and to use the full extent of its enforcement powers. We also called on the Government to urgently bring into force the offence of creating or requesting deepfake intimate images without consent and to provide a timeline for its recent commitment to ban 'nudification' apps.
The Government has now announced that the offence of creating or requesting the creation of non-consensual deepfake intimate images will come into force on 6th February. It will also be made a 'priority offence' under the Online Safety Act, meaning tech companies must take proactive steps to prevent this content from appearing on their services. Further, it announced that the ban on nudification tools will be enacted via an amendment to the Crime and Policing Bill, which is currently going through Parliament. We will monitor these developments closely.
We briefed MPs ahead of a Westminster Hall debate on ‘tackling digital exploitation of women and girls’ on 27th January. The tabling MP cited Refuge during the debate and highlighted that almost all of the survivors we support have experienced some form of tech-facilitated abuse.
In January, Emma Pickering, Head of Technology-Facilitated Abuse and Economic Empowerment, was quoted in Women’s Grid about AI and nudification apps. She noted that Grok has been weaponised by perpetrators to generate misogynistic and violent content at the expense of women, and that the recent decision to restrict access to paying subscribers is not only inadequate – it represents the monetisation of abuse.
AI can also be used by perpetrators of domestic abuse to access a survivor’s bank accounts.
Advances in artificial intelligence have made deepfake voice cloning, the replication of a person’s voice using generative models, an emerging risk in financial fraud. Many banks and financial institutions have adopted systems known as Voice ID, or 'my voice is my password', for their phone banking, treating voiceprints as a convenient way to verify a customer’s identity against their recorded, unique vocal traits. However, deepfake voice technology now allows malicious actors, including perpetrators of economic abuse, to impersonate a survivor’s voice using publicly available recordings or intercepted audio, bypassing these voice authentication checks and gaining unauthorised access to accounts even after separation.
Because the bank’s systems treat the synthetic voice sample as credible, perpetrators can exploit this apparent evidence of vocal approval to make transfers or authorise instructions that look legitimate to banking systems and staff. This places an unfair and highly burdensome evidential challenge on survivors, who are then required to prove that they did not authorise the transaction. The dynamic intensifies the economic harm and psychological impact of abuse, as victims must navigate complex disputes with financial institutions over “authorised” transactions they did not approve. Although UK regulators such as the FCA mandate consumer-centric outcomes and fraud mitigation under frameworks like the Consumer Duty, current protections focus on general fraud and do not specifically address the vulnerabilities created by synthetic media or voice cloning used to defeat authentication systems. The FCA’s principles-based approach to emerging technologies means that detailed, explicit safeguards against AI-enabled impersonation, including recognition of deepfake voice fraud as a distinct risk category in customer protection obligations, are not yet part of the regulatory regime, leaving a gap in protections for survivors of economic abuse whose voice biometrics may be misused.
In a BBC News article, a journalist used voice cloning and successfully gained access to a bank account – a very eye-opening read. The team’s advice is to steer clear of voice security settings and instead rely on two-step authentication.
Lastly, February is also LGBT History Month, with the theme Science and Innovation. To celebrate, we invited Melissa Dee, Chief Operational Officer at Loving Me, a new and innovative service supporting trans and non-binary survivors of domestic abuse, to talk to us about her work with trans survivors. www.lovingme.uk
Please do check out the podcast on Spotify.