The UK government has earmarked £100 million to bolster AI regulation in an effort to foster a more agile regulatory environment, augment economic growth, and ensure public safety.
This initiative coincides with an additional £10 million allocation aimed at upskilling regulators to address the risks and opportunities posed by AI technology. Through cutting-edge research and practical tools, the fund is intended to support regulators in sectors ranging from telecommunications and healthcare to finance and education.
According to Michelle Donelan, Secretary of State for Science, Innovation, and Technology, the UK’s pioneering approach to AI regulation positions it as a global leader in AI safety and development. ‘I am personally driven by AI’s potential to transform our public services and the economy for the better—leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future,’ Donelan stated.
Nearly £90 million from the investment will be dedicated to establishing nine new research hubs across the UK, in collaboration with the US, to promote responsible AI. These hubs will concentrate on harnessing AI technologies in various fields, including healthcare, chemistry, and mathematics. Another £19 million will fund 21 projects focused on developing innovative, trusted, and responsible AI and machine learning solutions, aimed at accelerating technology deployment and driving productivity.
Furthermore, a steering committee will be launched in the spring to guide and support the activities of a formal regulator coordination structure within the government. This committee will complement the £100 million previously invested in the world’s first AI Safety Institute, which evaluates the risks of emerging AI models. The UK also demonstrated global leadership by hosting the world’s first major summit on AI safety at Bletchley Park in November, underscoring its commitment to leading in AI regulation.
Cybersecurity expert Andy Ward, VP International for Absolute Software, commented on the initiative: ‘The heightened risk of cyber-attacks, amplified by evolving AI-powered threats, makes vulnerable security systems a prime target for attackers. By investing in secure, trusted, and responsible AI systems, the government initiative contributes to strengthening the national cybersecurity infrastructure and protects against AI-related threats.’
Ward added, ‘Organisations must always look to adopt a comprehensive cybersecurity approach with proactive and responsive measures, especially around rapidly evolving innovations such as AI. This involves assessing current cyber defences, integrating resilient Zero Trust models for user authentication, and establishing complete visibility into the endpoint, giving organisations details on device usage, location, which apps are installed, and the ability to freeze and wipe data if a device is compromised or lost.’
Oseloka Obiora, CTO of RiverSafe, also weighed in: ‘This investment is a good first step, but in tandem part of the investment should be targeted towards defence and response research into some of the clearer threats understood around AI. These research activities should prioritise critical national infrastructure and threat scenarios posed through the use of AI now.’
Obiora further noted, ‘Boosting regulation is a key step forward, but we need to see much greater resources set aside for the inevitable fallout when hackers and cyber criminals gain access to AI systems to wreak havoc and steal data. We need a much more ambitious, broader international strategy to tackle the AI threat, bringing together governments around the world, regulators, and businesses to tackle this rapidly emerging threat.’
In summary, the UK government’s £100 million investment in AI regulation underscores its commitment to fostering a secure and innovative technological environment. This initiative aims to position the UK at the forefront of AI safety and development, addressing both the opportunities and risks associated with AI technologies.