Balancing virtues and pitfalls

The discerning use of AI, coupled with education, would have
much greater efficacy than trying to prohibit it.

JOURNEYING WITH YOUTH

By Dr Paul Hine, Principal, Saint Ignatius’ College Riverview 

Like many technologies which impress, the advent of AI could be regarded as a new “holy grail”, given its processing power and impact. It is tempting to believe at times like this that we have found the answers, when sometimes we struggle to understand the sophistication of the questions. 

For those of us in a Jesuit school that promotes faith and values, I am not sure that machine learning and algorithms – no matter how clever – are the source of truth on either. We have entered a new era in human history, one in which intelligence has been decoupled from consciousness, and it is the latter that serves as the moral compass for action, enabling us to make sense not only of daily life but of the depth of human experience associated with it. We need to appropriate and understand the power and use of technology, including AI, but we should hardly be seduced into believing that we have arrived at a new frontier that rejects the primacy of faith, reason and values integral to human endeavour and experience. 

Artificial intelligence brings with it virtues and pitfalls that reflect its complex programming and diverse applications. There are advantages, just as there were when log books and slide rules in Mathematics (now in museums!) were replaced by graphics calculators. Among the benefits, AI systems can process vast amounts of data quickly and accurately, in accord with the sources from which they draw. Drawing on multiple sources, many of them from experts in their field, is not something to be dismissed when considering how to refine knowledge or pursue research. 

Efficiency is improved, and automation yields solutions that machines can find through processing power we would struggle to access without them. In areas such as healthcare and financial forecasting, diagnostics have already led to improved outcomes that could not have been realised without the quantum of processing that AI allows. There are also persuasive arguments around precision and accuracy, with systems that can respond to personal tastes and preferences – the downside being the surrender of private data into vast data banks for as yet unknown purposes. 

It is clear that while virtues and benefits undoubtedly exist, there are also concerns and pitfalls that are only beginning to be assessed. Recently, 50 girls at a school in Victoria were targeted with fake, AI-manipulated images created from photos taken from social media. This rightly caused immense distress to the girls and their families, and the teenager who was allegedly responsible was arrested by police. There is also the violation of intellectual property rights: the vast collection and analysis of data can compromise copyright, allowing material to be reproduced instantly without authority. 

In effect, AI is programmed to access and process, not to assess the status of such data, nor to discriminate about its use in a calibrated or selective manner. Detecting bias – that particularly human, ethical consideration we bring to conflicts of interest – is beyond the rubric of AI at the present time. 

The collection and analysis of vast amounts of personal data by AI systems raises specific concerns about privacy infringement and data security breaches. In areas such as employment, lending and law enforcement, data extracted from publicly available sites is harvested without regard for the subtleties of privacy, which, as we have seen all too often, leads to massive data breaches. I am sure that we have all had credit cards cancelled and personal details compromised, not only through the sub-world of hackers but also through the frequency of data leaks. The latter has become a scourge of daily life. 

In 2014, a Hong Kong-based investment company – Deep Knowledge Ventures (DKV) – appointed an algorithm to its Board of Directors. It seemed absurd at the time, but hedge funds are about profits, and having an algorithm at the table to track and project financial returns was not without merit. The concern relates to ethics, particularly in the use of generative AI in decision-making processes, should the investments to which it is applied produce profit to the exclusion of human benefit or the environment, or fund nefarious activities such as gambling, illicit drug production, arms procurement or mining that causes environmental destruction. 

Without discernment, profit lines can become blurred and subjugate morality. One needs more than a balance sheet or dividend projections to make informed choices. There is a fundamental essence here that needs to be pursued and understood if the true benefits of AI are to be realised.

One thing is certain: rapid technological developments inevitably bring advantages, but also risks and challenges. In the case of the digital world, and particularly AI, the technology has in most cases outpaced governments and legislation, which are currently trying to play catch-up. Policy platforms and regulatory frameworks are in arrears, and while upgrades to ChatGPT – version 4.0 and beyond – continue to develop, some of the identified risks remain. Perhaps there are precedents here: social media platforms such as Facebook, X (formerly known as Twitter), Instagram, Snapchat, TikTok and myriad others served a social function that saw populations across the world go online. We are still coming to understand the full impact of that, given the unfettered access provided to young people, and the corollaries of mental health harms, pornography and cyberbullying. 

In an opinion-page article published in The New York Times on 18 June, US Surgeon General Vivek Murthy proposed that social media platforms carry health warnings, similar to the mandatory labels on cigarette packets. He conceded that such warning labels wouldn’t of themselves make social media safe, but argued they would help alleviate the mental health crisis among young people. According to an ABC News report, “Last year, Dr Murthy warned that there wasn’t enough evidence to show that social media is safe for children and teens. He said at the time that policymakers needed to address the harms of social media the same way they regulate things like car seats, baby formula, medication and other products children use.” 

At around the same time that The New York Times published that article, Microsoft was briefly displaced by technology multinational Nvidia as the world’s most valuable company. What does that have to do with AI? Everything! Nvidia makes the chips that are the lifeblood of artificial intelligence. Not surprisingly, the rapid global pivot to AI has seen the company’s stock price rise from under $US20 in 2021 to nearly $US140 on 18 June, putting Nvidia’s value at $US3.34 trillion.

Does all of this mean that we should prohibit AI? I suggest not, for prohibition has never been an effective way of controlling anything; more discerning and thoughtful use, coupled with education, would have much greater efficacy. 

Understanding and addressing a complex technological landscape is crucial for harnessing the full potential of AI, while mitigating its risks and ensuring it aligns with human values and societal goals. 

This is an updated version of an article originally published in a recent edition of the ‘Viewpoint’ newsletter for Saint Ignatius’ College Riverview. 

Banner image by alexsl, Canva.
