With a startup’s assist, the UK Government publishes new AI security guidelines

The British government has published a new collection of research reports on the cyber security of AI, drawing on sources from the private and public sectors. It includes a broad set of recommendations for organisations prepared by Mindgard, the report's only startup contributor. This report, together with the new draft Code of Practice on cyber security governance, was created in response to the Chinese cyberattack on the Ministry of Defence earlier this year, and is aimed specifically at directors and business leaders in the public and private sectors.


The Department for Science, Innovation, and Technology (DSIT) commissioned Mindgard to conduct a systematic study identifying recommendations for addressing cyber security risks to Artificial Intelligence (AI). Mindgard's contributions focused specifically on identifying and mapping vulnerabilities across the AI lifecycle. Titled Cyber Security for AI Recommendations, the Mindgard report describes 45 unique technical and general recommendations for addressing cyber security risks in AI.


The first type of recommendation proposed by Mindgard is technical. This technology-focused approach aims to mitigate cyber security risks in AI by altering the software, hardware, data, or network access of the computer system that runs the AI. It can also involve altering the AI model itself, encompassing adjustments to training methodologies, pre-processing techniques, and model architecture.
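To make this category concrete, the sketch below shows one hypothetical example of such a technical mitigation: a pre-processing gate that sanitises user input before it reaches an AI model. The function name, length limit, and character ranges are illustrative assumptions, not measures taken from the Mindgard report.

```python
import re

# Illustrative limit, not a figure from the report
MAX_INPUT_CHARS = 2000

def preprocess_user_input(text: str) -> str:
    """Hypothetical pre-processing gate applied before text reaches an AI model.

    Illustrates the report's category of technical mitigations that alter
    data handling around the model rather than the model itself.
    """
    # Strip control and zero-width characters that can conceal injected instructions
    cleaned = re.sub(r"[\u0000-\u0008\u000B-\u001F\u200B-\u200F\u2060-\u2064]", "", text)
    # Collapse runs of whitespace into single spaces
    cleaned = re.sub(r"\s+", " ", cleaned).strip()
    # Reject oversized inputs early, before they ever reach the model
    if len(cleaned) > MAX_INPUT_CHARS:
        raise ValueError("input exceeds permitted length")
    return cleaned
```

A guard like this changes only the data path around the model, which is what distinguishes this class of recommendation from adjustments to training or model architecture.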
