Most of us are familiar with artificial intelligence (AI) through the use of generative AI language models like ChatGPT, Bard, etc.
Their ability to refine and steer conversations towards a desired tone, style, and language has not only opened our eyes to the potential of AI but has also shown how it can change the way we work.
The adoption and development of AI in Australia have gained significant momentum in the last couple of years. Organisations are embracing AI to enhance business operations, optimise workflows, and help fill the talent gap. The mining sector, which accounts for almost 15% of Australia’s GDP growth, is a prime example of an industry leveraging AI to optimise workflows.
AI in mining helps reduce workplace injuries by analysing data to predict potential accidents and monitoring equipment for wear and tear, and, more importantly, reduces time spent underground by creating geo-maps to sort minerals. This in turn allows companies to use their time efficiently to better plan and execute operations.
The Changing Face of AI
While it seems like AI is a recent innovation, it has actually been around since the 1950s. Since then, AI has evolved into an entire field dedicated to using robust datasets and computer science to enable problem-solving at speed and scale. There are graduate courses on the topic, and not-for-profits like Women in AI are helping attract and retain women in the field.
AI is playing a transformative role in everyday life for not only the consumer but for businesses too. The technology is helping streamline operations and improve return on investment, due to its ability to enhance efficiency and provide valuable insights from large volumes of data. AI-powered systems automate routine tasks, optimise resource allocation, and enable predictive analytics, enabling businesses to make data-driven decisions and gain a competitive edge in the market.
Contrary to popular belief, AI also creates job opportunities and opens new avenues for professional development. Because the technology reduces time spent on monotonous tasks, it frees employees to pursue innovative thinking and focus on higher-value business work.
Because of this, more and more businesses have been incentivised to invest in AI infrastructure to harness these benefits. In fact, aggressive user adoption is driving a five-year CAGR of more than 30% in AI server demand, which is why Lenovo recently announced a $1 billion investment over the next three years to expand its infrastructure solutions for accelerating AI.
As a result of the technology’s unprecedented growth and fanfare, however, and especially with the onset of new innovations such as generative AI, conversations are increasing around the need to regulate the technology to avoid the potential harms of irresponsible use.
This has since resulted in tangible action, such as the launch last month of UNESCO’s policy paper, which outlines “a procedural framework to address and mitigate risks that may arise with [the] use across the AI project life cycle”.
Meeting the Challenges of Tomorrow
Such action is justified. Implementing AI comes with challenges, including ensuring data quality, addressing algorithmic biases, and maintaining ethical standards. AI systems are not immune to bias: they learn from historical data that is selected and ‘fed’ into them by humans. In that sense, algorithms are opinions written in code, and they can lead to unfair, discriminatory outcomes. Countering these biases is difficult, as algorithms need to be tested continually in real-life situations, with careful attention to how “fairness” is defined and measured.
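The point that “fairness” must be explicitly defined and measured can be made concrete with a minimal sketch. The check below uses demographic parity, one of several (often mutually incompatible) fairness definitions; the loan-approval data and the 0.1 review threshold are illustrative assumptions, not figures from this article.

```python
# A minimal sketch of one common fairness check: demographic parity.
# All data here is synthetic; a real audit would use production
# decisions and compare multiple fairness definitions.

def selection_rate(decisions):
    """Fraction of positive (approved) decisions in a group."""
    return sum(decisions) / len(decisions)

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6 of 8 approved
group_b = [0, 1, 0, 0, 1, 0, 0, 1]   # 3 of 8 approved

# Demographic parity difference: the gap between group selection rates.
gap = selection_rate(group_a) - selection_rate(group_b)
print(f"Selection-rate gap: {gap:.3f}")  # prints 0.375 for this data

# An illustrative rule of thumb: flag gaps above 0.1 for human review.
if abs(gap) > 0.1:
    print("Potential disparity - review the model and its training data.")
```

Even this toy example shows why fairness cannot be left implicit: choosing the metric, the groups to compare, and the threshold are all human judgement calls written into code.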
Such ramifications can be dire, as outlined by local commentator Tracey Spicer in her latest book, Man-Made: How the bias of the past is being built into the future: “The phenomenon of ‘bias in-bias out’ can have catastrophic consequences for patients. Who’s offered the first Covid-19 vaccines? When resources are scarce, who’ll get the last ventilator, oxygen tank, or intensive care bed? Imagine being on your deathbed and hearing, ‘Computer says no’.”
Data and security remain top considerations for governments as more industries express interest in adopting AI. Safeguarding personal data and ensuring robust security in AI systems shouldn’t be borne by governments alone; it is a shared responsibility in which businesses must also partake.
Transparency in AI systems is essential for building trust and accountability. Users and stakeholders should have access to understandable explanations of how AI systems arrive at their decisions. Regulations can focus on ensuring that AI algorithms are explainable and transparent, enabling users to understand the underlying processes and facilitating accountability.
As AI continues to evolve, regulations should be adaptable to keep pace with emerging technologies and use-case scenarios. This requires a forward-looking approach to anticipate the potential risks and ethical considerations associated with new AI applications. Regulators should collaborate with experts, industry leaders, and researchers to identify areas where regulations need to be updated or expanded to address these emerging developments.