AI disinformation in local communities may pose a more significant threat than Taylor Swift deepfakes or Biden robocalls, subtly undermining societal trust where it’s most vulnerable.
“Especially so in areas with low digital literacy,” the Australian Strategic Policy Institute’s former deputy director of Cyber, Technology and Security, Mike Bareja, says. Here, misinformation spreads quickly because people are less equipped to question what they see online.
He joined John Hines, Verizon Business’ APAC head of cybersecurity, in a hard-hitting episode of the Securing AI podcast series, as the democratisation of AI floods digital channels with fake content and fuels new reports of identity hijacking.
While Mr Hines agrees that “the information that we have is becoming less trustworthy,” he shared the positive side of the ledger: “AI has helped us reduce false positives — incorrect alerts about threats — by up to 90 per cent.”
This improvement lets Verizon focus more effectively on core cybersecurity work, including threat detection, prevention, incident response and authentication.
They explore the AI arms race, seen as both a challenge and an opportunity across sectors including defence, cybersecurity and the authenticity of public information.
Mr Bareja says the key is not to stop generative AI (genAI) but to “make sure that people can make an informed decision”.
Examples include opt-in metadata that tells viewers a piece of media was generated on an AI platform on a particular date, reflected in initiatives like the Tech Accord and the Coalition for Content Provenance and Authenticity (C2PA).
The accord, which sets out principles for managing deceptive AI election content, was signed by 20 major companies at the Munich Security Conference. C2PA develops technical standards for content provenance and authenticity, allowing users to verify whether media is AI-generated.
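The core idea behind provenance metadata can be sketched as binding a content hash, a generator label and a timestamp together under a signature, so that any edit to the media or the record is detectable. This is a simplified illustration using an HMAC and a made-up key, not the actual C2PA manifest format, which relies on certificate-based signatures:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key; real schemes use X.509 certificates

def attach_provenance(media_bytes: bytes, generator: str) -> dict:
    """Build a provenance record binding the media's hash to its origin."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,  # e.g. the AI platform that produced it
        "created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Check the record is untampered and still matches the media."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and record["sha256"] == hashlib.sha256(media_bytes).hexdigest())

image = b"...rendered pixels..."  # stand-in for real media bytes
rec = attach_provenance(image, "ExampleGenAI v2")
assert verify_provenance(image, rec)              # intact media passes
assert not verify_provenance(image + b"x", rec)   # altered media fails
```

Because the hash and generator fields are signed together, stripping or rewriting the “AI-generated” label invalidates the record, which is what makes opt-in labelling verifiable rather than merely advisory.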
Regionally, Mr Hines notes some nations are more mature in threat detection and response, particularly Singapore.
“As the financial hub of Asia, it’s a heavily regulated market that is very forward thinking from an IT and AI perspective,” he says. “Japan is advanced, too, holding several patents across the AI realm.”
He points to Japan’s Society 5.0 vision, “making them a very technical place in terms of living for citizens.”
The picture varies across other markets, such as Malaysia, Indonesia, and the Philippines, where multinational corporations tend to drive advances.
“Securing the data that trains AI algorithms is critical for both the private and public sectors across APAC,” Mr Hines says.
Organisations typically buy a model and then train it with their own data, which presents a challenge: How can you be sure the model hasn’t been tampered with or doesn’t contain malicious code?
Additionally, there’s the risk of data theft or unintentional leaks through the AI system.
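A baseline defence against the tampering risk described above is to verify downloaded model weights against a checksum published by the vendor before loading them. The sketch below uses hypothetical file names and a stand-in weights file:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large model weights need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path, published_sha256: str) -> bool:
    """Refuse to load weights whose hash differs from the vendor's published value."""
    return sha256_of(path) == published_sha256

# Demo with a stand-in "model" file (hypothetical names and contents).
model = Path("model.bin")
model.write_bytes(b"pretend these are model weights")
published = sha256_of(model)           # in practice, taken from the vendor's site
assert verify_model(model, published)  # untouched download passes

model.write_bytes(b"pretend these are TAMPERED weights")
assert not verify_model(model, published)  # modified download is rejected
```

A checksum only proves the file matches what the vendor published; it does not address the separate leak risk, which calls for controls on what data is fed into the system in the first place.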
He shares an example from the United States of a man who tricked a Chevrolet dealership’s chatbot into offering him a $50,000 car for just $1, even getting the bot to declare the offer legally binding.
Whether such a claim would hold up under consumer law remains an open question, but it underscores rising calls for companies to establish AI councils to steer innovation safely.
“It’s crucial to put a framework in place. At Verizon, for instance, we have stringent rules about who within the organisation is allowed to play with AI on our corporate networks,” Mr Hines says.
Mr Bareja describes the APAC cybersecurity landscape as a free market in which attackers gravitate to the highest-value targets.
“Financially motivated cybercriminals deploy ransomware on vulnerable organisations that they think will pay; state-based actors, after strategic effect or espionage, might target sensitive AI models.”
Ultimately, the “democratisation of AI” puts massive power into the hands of ordinary users, more sophisticated cybercriminals, and state actors.
“AI will exponentially speed up the cybersecurity arms race, introducing new threats and defences in an ongoing, iterative challenge. It’s about who can utilise AI more effectively,” says Mr Hines.
The Securing AI podcast series and accompanying articles are produced by InnovationAus.com and sponsored by Verizon.
Do you know more? Contact James Riley via Email.