Safety Tech Challenge Fund

The #SafetyTech Challenge Fund will be awarding five organisations £85,000 each to develop technologies to help keep children safe online.

Find out more about these five innovative companies and the work they're involved in below.

What will the Fund do?

Through the Fund, HMG will award five companies £85,000 each to spend five months prototyping and evaluating innovative ways in which child sexual abuse images and videos can be detected and addressed within end-to-end encrypted environments, while ensuring user privacy is respected.

A further £130,000 will be available to help these companies deliver additional functionality as part of their projects, taking the prize total to £555,000.

What challenges will the fund tackle?

Last year, tech companies identified and reported 21.4 million global instances of child sexual abuse.

End-to-end encryption is a technology which encrypts communication data - including messages, images and recordings - between sender and recipient to prevent third parties from accessing them. End-to-end encryption will have significant consequences for tech companies' ability to detect, remove, and report child sexual abuse material and will detrimentally impact law enforcement's ability to identify offenders and safeguard victims.  

The UK Government supports strong encryption and is committed to ensuring that our citizens’ privacy is protected online. We believe that technology can support implementation of end-to-end encryption in such a way that can protect children from abuse online, whilst respecting user privacy.

Scroll down to learn about the five winners that will be awarded government funding:

Building on their existing technology which detects illegal child abuse material, Cyan Forensics and Crisp Thinking, in partnership with the University of Edinburgh and the Internet Watch Foundation, propose a solution using a Privacy Assured Matching protocol to secure user privacy, and will develop a ‘plug-in’ that can integrate with encrypted social platforms.

Cyan’s project aims to demonstrate a practical solution for detecting child sexual abuse material in encrypted messaging apps that maintains a high level of user privacy, and to show that it could be deployed at scale in the real world. A successful outcome will give messaging companies, law enforcement and regulators a compelling new option in an area that is critical for online safety.

GalaxKey, a provider of secure messaging services, will work with Poole-based Image Analyser and Yoti, an age-assurance company, combining their expertise and software in user privacy, in the detection and prevention of child sexual abuse material, and in age verification to detect child sexual abuse material before it reaches an end-to-end encrypted environment, preventing it from being uploaded and shared.

London-based SafeToNet, a leading provider of on-device safety tech, and the Policing Institute of the Eastern Region at Anglia Ruskin University will develop a suite of live video-moderation safeguarding AI technologies that can run on any smart device to prevent the filming of nudity, violence, pornography and child sexual abuse material in real time, as it is being produced.

Child sexual abuse material is being created at a staggering rate. SafeToWatch, SafeToNet's technology, is designed to proactively prevent that creation in real time, blocking the filming of nudity, violence, pornography and child sexual abuse material as it is being produced. By using efficient algorithms that exploit the processing power of the device, the technology runs locally, without the need for cloud interaction. This helps maximise the privacy rights of the user whilst automatically keeping them safe.

T3K-Forensics will work to implement their AI-based child sexual abuse detection technology on smartphones to detect newly created material, providing a toolkit that social platforms can integrate with their end-to-end encryption services. T3K's solution uses an AI classifier to detect child sexual abuse material: instead of the more commonly used method of comparing a picture’s unique hash or PhotoDNA value (the “fingerprint” of the image) with a database of previously known material, T3K looks at the actual content of a picture or video, using biological features to find visible children and placing emphasis on the situation those children are in.
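To illustrate the distinction, a minimal sketch of the hash-matching approach is shown below. This is not T3K's or Microsoft's actual implementation: real systems use perceptual hashes such as PhotoDNA, which match visually similar images, whereas this example uses a simple cryptographic hash that only matches byte-identical files. A classifier-based approach, by contrast, inspects the image content itself, so it can flag newly created material that has no database entry.

```python
import hashlib

# Hypothetical database of fingerprints of previously known material.
# (The entry below is just the SHA-256 of the placeholder bytes b"foo".)
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a cryptographic hash as a stand-in for a perceptual fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_material(image_bytes: bytes) -> bool:
    """Hash matching: flags only images whose fingerprint is already in the database."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

The limitation the passage describes falls out directly: any image not already fingerprinted in the database passes this check, which is why a content-inspecting classifier is needed to catch newly created material.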

DragonflAI is a cutting-edge technology company specialising in visual content moderation, focused on providing instant results on-device and completely offline. This drive to offer excellent protection without compromising user privacy led the company to join the fund in order to further one of its main goals: child protection. The fund, and working alongside another sector-leading company in Yoti, will help develop and deliver much-needed software in an increasingly online world.

DragonflAI's solution uses specialist algorithms that run entirely on mobile phones to identify key features in images, such as age and levels of nudity, to detect potentially indecent and illegal content. End-to-end encrypted messaging companies will be able to install software that analyses images before they can be sent, without the images needing to leave the phone.
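A minimal sketch of how such a pre-send check could be wired into a messaging client is shown below. The classifier, scores and threshold are hypothetical placeholders, not DragonflAI's actual API; the point is the flow: the image is analysed locally, and sending is blocked before anything is encrypted or transmitted.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    nudity_score: float   # hypothetical confidence in [0.0, 1.0]
    estimated_age: int    # hypothetical age estimate for the person depicted

def classify_on_device(image_bytes: bytes) -> ModerationResult:
    """Placeholder for an on-device model; the image never leaves the phone."""
    # A real implementation would run a local neural network here.
    return ModerationResult(nudity_score=0.0, estimated_age=30)

def allowed_to_send(image_bytes: bytes, nudity_threshold: float = 0.8) -> bool:
    """Run the local check and block sending if the content is flagged."""
    result = classify_on_device(image_bytes)
    if result.nudity_score >= nudity_threshold and result.estimated_age < 18:
        return False  # blocked before encryption and transmission
    return True
```

Because the check runs before the message enters the end-to-end encrypted channel, the platform never needs to decrypt or inspect traffic in transit, which is how this design aims to reconcile detection with user privacy.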