BitBulteni

Policy & Regulation | May 20, 2024 | BitBulteni

Artificial Intelligence Risks and Potential: UK and USA in Cooperation

The United Kingdom's AI Safety Institute will begin operating internationally by opening a new branch in the United States.

UK Technology Secretary Michelle Donelan announced on May 20 that the institute will open its first overseas office in San Francisco this summer.

The strategically chosen San Francisco office will allow the UK to “tap into the rich tech talent pool available in the Bay Area” and engage with the world’s largest artificial intelligence labs, which are located between London and San Francisco, the announcement said.

Additionally, the announcement said the move would help the institute “solidify” relationships with key players in the US and promote global AI safety for the “public benefit”.

The London branch currently has a team of around 30 people and is set to grow, particularly to build greater expertise in risk assessment for new AI models.

Donelan said the initiative is a concrete demonstration of the UK’s leadership and vision in the field of AI safety.

“This is an important moment for the UK’s ability to examine both the risks and potential of AI from a global perspective. We continue to lead in this space by strengthening our partnership with the United States and enabling other countries to benefit from our expertise in AI safety.”

This announcement follows the UK’s landmark AI Safety Summit, held at Bletchley Park in November 2023. The summit was the first event focused on AI safety on a global scale.

The event featured leaders from around the world, including the United States and China, as well as prominent names in the field of artificial intelligence, such as Microsoft President Brad Smith, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Elon Musk.

In its latest announcement, the UK also said the institute had published some of the results of the safety tests it conducted on five publicly available advanced AI models.

Because the models were anonymized, the results provide a “snapshot” of their capabilities rather than labeling them “safe” or “unsafe,” the institute said.

Among the findings: some models were able to complete cybersecurity challenges while struggling with more complex ones, and some demonstrated PhD-level knowledge of chemistry and biology.

The institute also concluded that all tested models were “extremely vulnerable” to basic “jailbreaks” (prompts crafted to bypass a model’s safeguards) and that the tested models were unable to complete more “complex, time-consuming tasks” without human supervision.

The institute’s chair, Ian Hogarth, said these evaluations would contribute to an empirical assessment of model capabilities.

“AI safety is still a very young and developing field. These results represent only a small part of the assessment approach that AISI is developing.”

Tags: artificial intelligence, United Kingdom, USA, global cooperation