Since the start of 2023, we’ve seen a massive increase in the use of AI in everything: ChatGPT, DALL-E, Google’s Bard, and Snapchat’s My AI. Because these tools are widely available to the public and rely on vast amounts of personal data, many have raised safety concerns. Citizens are demanding that their governments regulate how AI is implemented in everyday society, and countries are tackling the problem in different ways.
While most people understand that AI poses a threat to our current society, fewer understand why. Potential problems vary in severity, from aggravating existing issues to creating entirely new ones. For example, because AI requires copious amounts of personal data, large companies now have new channels for collecting information that can be used for competitive advantage. AI has already begun replacing jobs in professions like journalism, web development, graphic design, finance, and research, and many more occupations are at risk, with an estimated 73 million potential job displacements in the US over the next five years. Experts also worry that, without regulation, AI could slip out of humanity’s control. Oren Etzioni, founder and CEO of the Allen Institute for AI, maintains that regulation and public policy are needed to keep the risks of AI at bay.
The EU has been at the forefront of AI regulation, advocating for global standards and drafting the AI Act, the first law on AI from a major regulator anywhere. The Act categorizes AI systems by risk, outright banning those the EU considers an unacceptable risk and placing heavy restrictions on those considered high-risk. (By contrast, places like the U.K. are trying to regulate AI based on its use in each industry rather than regulating AI as a whole.) The Act’s broad language, however, has prompted the tech industry to raise concerns about banning entire categories of AI usage; critics argue that the Act sacrifices all the benefits AI has to offer for fear of its potential risks, and that it should instead focus on regulating specific uses of AI. Meanwhile, experts assert that the AI Act will likely set the standard for global AI regulation, as the rest of the world is watching its implementation and success closely.
Canada’s existing laws already regulate most of AI’s uses. That’s why the highly anticipated Artificial Intelligence and Data Act (AIDA) works to fill in the gaps and ensure that regulation keeps pace with innovation. AIDA aims to hold companies to the privacy standards already in place, imposing necessary restrictions without outright banning any forms of AI, unlike the EU. These restrictions include human oversight, transparency with the public and government, prioritizing safety, holding companies accountable, and ensuring the technology’s resilience. Canada also goes a step further by addressing the bias that often pervades AI’s results, requiring that spurious correlations not be drawn when analyzing data.
Despite being one of the leaders in AI development, the US has not been as proactive in regulating AI: no notable legislation has yet been passed. However, US Senator Chuck Schumer has drafted a framework for regulation and is leading the congressional effort to regulate AI. Other American efforts include a guide for tech companies released by the US Department of Commerce, highlighting ways to successfully implement AI in business settings and important factors to consider when developing it. It is likely that the US will eventually follow the EU’s example and adopt a risk-based approach rather than the current voluntary system.
Unlike the EU’s risk-based standard, China regulates AI by its different uses, such as generated text, images, and videos. Experts speculate that Chinese officials fear the spread of false, unregulated information that AI enables, prompting such harsh restrictions to maintain control. While China aims to be the leader in AI, directly competing with the US, its excessive regulations may hinder its ability to achieve this goal.
While navigating the threat of AI is daunting, given how vast its potential impacts are, it is clear that progress is being made. Stanford University’s 2023 AI Index shows that, in 2022, 37 AI-related bills were passed into law globally. This only underscores how vital it is that governments and companies navigate the worlds of geopolitics, ethics, and technology to ensure that AI is used ethically and effectively, without becoming an obstacle to innovation.