What digital media leaders are watching in efforts to regulate AI
As Bill C-27 makes its way through parliament, experts are hoping for a transparent, ethical and responsive framework.
The ad industry has been adopting artificial intelligence for several years, but things like ChatGPT and Dall-E have put the tech's capabilities front and centre in mainstream discussions. In media, the applications thus far are not as flashy, but that doesn't make them any less powerful: AI enables more effective analysis of data and consumer behaviour to create more targeted and effective campaigns, and automates tasks like ad placement and optimization.
"We've embraced AI in our proprietary tech stack to create unique tools that analyze large amounts of data to gain insight, such as product reviews through to AI Attribution Modeling solutions, which minimize reliance on cookie tracking to better inform real-time optimizations and future performance planning decisions," says Matt Ramella, president of Reprise. "AI is also being used to increase our client's efficiency and speed to market by generating multiple variations of content that can be used to test and improve performance."
But in Canada, there is another reason AI is a hot topic, one that could shape how the technology is used here just as interest hits a fever pitch.
Bill C-27, a long-awaited effort to update Canada's privacy laws, would also create a law regulating the development and use of AI systems. It aims to ensure "high-impact" AI systems are developed in a way that mitigates the risk of harm and bias, while also outlining penalties for when AI systems unlawfully obtain data or are used in a "reckless" way. It would also create an AI and Data Commissioner role to monitor compliance and issue audits.
The bill is still making its way through parliament, and as it stands right now, the guidelines are still outlined in broad strokes.
"This is not to just mitigate risks and harms but to balance that with the need to allow for technological advancement," says Robin LeGassicke, managing director of digital at Cairns Oneil. "AI is built on algorithms that have the potential to continue to improve themselves and 'learn.' Some of the challenges we're seeing with this is built-in bias based on where the inputs come from, who builds the algorithm, and the depth of data and information available."
The human factor of bias in AI – if you put garbage in, you get garbage out – and transparency for consumers are the two main areas the industry expects the AI portion of the bill to address.
Erica Kokiw, EVP of digital for UM Canada, says that to ensure that AI systems are fair, unbiased, reliable and used ethically, transparency and interpretability in AI systems need to be part of the legislation.
"Bill C-27 should include requirements for transparency of data collection, as well as clear guidelines for how data can be shared and used," she says. "Additionally, the bill should include measures so individuals understand how decisions are being made and can challenge any decisions that may be discriminatory or biased. Finally, there should be oversight and enforcement mechanisms built into the legislation to ensure compliance with the law."
Kokiw points out that Canada can look to the European Union for inspiration on how to implement a governance structure and regulatory framework for AI. Proposed EU legislation classifies intelligent algorithms into buckets ranging from minimal risk (like AI-based video games) to high risk (like AI used in transportation). It then applies regulations appropriate to each level of classification, from free use of AI in minimal-risk applications to heavy regulation for high-risk ones.
Derek Bhopalsingh, EVP of platform media at Publicis Media, also thinks Canada should be looking at how other countries are handling their privacy legislation, especially when it comes to responding to changes in a sphere that is, by its nature, rapidly changing. "We have to find a way to be able to amend legislation at a quicker pace. That's difficult because the system by which we operate as a government makes that difficult. I think they've done a good job of trying to keep ahead of what's coming, but it's going to be difficult. It's going to evolve faster than we can make legislation to protect consumers."
The EU's GDPR also covers automated decision-making, which AI falls under. Consumers have the right to know when an automated decision is being made about them, to object to it, and to have a human assess the decision.
Walter Flaat, chief digital officer at Dentsu, says that when you build an AI application, it learns from data. That data comes from society, and society's biases get reflected in the AI as well. "With all forms of AI, machines don't have ethics, so it depends on the people that make them. Sometimes we like those outputs and they're acceptable, and sometimes they're not, and the computer comes up with a racist segment, or maybe gender bias. We have to really think about how to make it ethical by design instead of trying to fix it afterwards."