ACCA UK Calls for AI Cybersecurity Approach to Emphasise Global Applicability

A proposed AI cyber code is a “useful starting point” for a global regulatory approach, says accountancy body ACCA.

Responding to a UK Government consultation, led by the Department for Science, Innovation & Technology, outlining an AI cybersecurity code of practice, ACCA said the Government was best placed to set the overarching regulatory structure and principles, while those on the front line of AI development should be given the space to combat emerging cyber risks.

However, the pro-innovation approach of the proposed code – as set out in the Government’s white paper – needs safeguards, and its requirements may need to be revisited, says ACCA. The cyber challenge in AI is dynamic, and a ‘point in time’ view can quickly become outdated, it says.

ACCA also highlighted the risks and impacts for end users in SMEs, a segment in which a significant number of its members operate. The greater challenges this group of stakeholders faces on cyber readiness – across both skills and budgets – are well documented, it says. ACCA wants end-user SMEs to be protected from cyber risk while remaining free to adopt AI, given its potential to augment business productivity.

Glenn Collins, head of technical and strategic engagement, ACCA UK, said:

“ACCA is pleased to see the consultation taking a principles-based approach, as our current view of AI offers too many unseen scenarios. ACCA, its members and partners will be profoundly impacted by the planned use of AI, including equipping finance professionals with an optimal experience and skill set for the modern workplace.”

ACCA warned that adherence to any code carries a cost, including indirect costs and impacts felt through the supply chain. Effort and money will also be needed to raise awareness of the code, as well as to monitor and enforce it.

Narayanan Vaidyanathan, head of policy development, ACCA, said:

“We anticipate utility from such a code for those providing assurance or third-party verification of AI systems. This is an important category of stakeholders who will have a key role to play in creating a trusted AI ecosystem to supplement the regulatory and legal direction from policymakers.

“We do not anticipate that this group will be subject to the requirements of the code itself, but assurance requires checks against a well-defined and, ideally, publicly available standard – which this code could provide. Cyber risks are part of what the assurance of an AI system may need to check for. Therefore, those providing assurance would find such a cyber code and associated standards helpful.”

In its response, ACCA also called on the Government to tackle the skills gap that must be closed to combat cybersecurity risks. It suggested that the Apprenticeship Levy could be expanded into a more flexible ‘Growth and Skills Levy’, which could fund shorter-term accredited training programmes to upskill and reskill workers on the cybersecurity of AI.

It also says that companies should be able to transfer a greater proportion of their unspent levy funds to their supply chains – ACCA suggests an increase from 25% to 40%. This could unlock millions of pounds to develop AI skills, according to ACCA.

Ultimately, it says, tackling cybersecurity issues linked to AI requires staff trained on current and emerging risks; if insufficient training is given, standards and frameworks will fail to achieve any impact.
