AI & ALGORITHMS: FRIEND OR FOE?

At the 2023 Sydney Writers’ Festival panel entitled ‘Who’s Afraid of AI,’ Tracey Spicer compared certain forms of new AI to the first cars introduced to the public, when there were no seatbelts to keep people safe.

A Walkley Award-winning author, journalist, and broadcaster, Tracey said AI could be a powerful tool, as in the case of medical AI that can predict illness. Still, she warned that more gatekeeping and ethics need to be applied to the frameworks around AI, especially algorithms and tools such as ChatGPT.

ChatGPT is also free, which means we are helping build its next version because, as Tracey reminded us, “If you aren’t paying for the product, then you are the product.”

AI has recently been making waves in the content world, raising concerns not only about the elimination of content jobs but also about something even more insidious: bias within algorithms and AI.

In the Netflix documentary Coded Bias, MIT Media Lab researcher Joy Buolamwini uncovered that facial recognition tools worked reliably only on white male faces and were highly ineffective for the rest of the population. Her investigation eventually drove her to lobby for the first-ever legislation in the United States aimed at stamping out bias in the algorithms that affect us all, particularly women, people with disabilities, and minority populations.

Further highlighting the importance of testing new technologies before releasing them for public use, the documentary also reports on Amazon’s facial-recognition technology, which mistakenly tagged lawmakers, many of them women and people of non-white ethnicities, as criminals because of the bias built into the algorithm.

Much of AI, including the algorithms that stem from big tech, was created primarily by white men. A quick search for who founded the giants of big tech serves up a parade of white male faces, from Bill Gates to Steve Jobs. Whistle-blowers have had to sound the alarm that tech is not immune to the biases of its creators, which can have profound effects on the global population, from declined mortgage applications to false arrests.

Even the highly affluent are not immune to algorithmic bias. Tech entrepreneur David Heinemeier Hansson alleged that when he and his wife separately applied for the Apple Card, he was given a credit limit 20 times higher than hers, despite the couple having joint assets and the same spotless credit history.

This account is troubling because it shows how the entire global population could be affected by algorithmic bias, which can hinder not only people’s success but their quality of life. Expanding the integration of AI into our lives poses serious risks, and sadly, biases against women and minorities are hardwired into many of the current algorithms.

Dr Abeba Birhane received her PhD in cognitive science at University College Dublin. During her studies, she realised that software developers and data science students were immersed in the data sets they were using, yet there seemed to be little concern about what was actually in those data sets.

“I want, in an ideal world, a civilized system where corporations will take accountability and responsibility and make sure that the systems they’re putting out are as accurate and as fair and just for everybody…but that just feels like it’s asking too much,” she said.

When we consider that society has often granted algorithms the power to determine home loan approvals, credit ratings, and hiring eligibility, it is sobering to realise that many big tech companies aren’t entirely sure what is in the code.

Considering the potential dangers of AI and, specifically, of code, data scientist Cathy O’Neil has warned that “Algorithms are opinions embedded in code,” highlighting that we should be careful about how much power we give the automated tools we use. It’s clear that human guidance is not only needed but entirely necessary.

During the panel, the speakers highlighted some of the pros of using AI, such as apps that detect illnesses, like a new breast cancer detection app, and tools built for people who really need them, such as those living with disabilities.

“Computers allow us the opportunity to make better choices,” suggested panellist Toby Walsh, Chief Scientist of UNSW’s new AI Institute and an appointee to the international ‘Who’s Who in AI’ list of influencers. An activist for limits on AI, activism that eventually saw him ‘banned indefinitely’ from Russia, Toby advocates ensuring AI complements our lives and insists that systems like ChatGPT are still “incredibly stupid.”

“They’re not very good at reasoning, and we can still outthink them,” he said.

Toby said today’s questions about ethics and AI reminded him of when calculators were first invented and mass-produced, but his advice is not to eliminate AI or tools like ChatGPT entirely. He asserted that we need to learn how to harness new technology while setting ethical guidelines, much as was done with calculators, because AI can amplify productivity.

Despite the genuine challenges that need to be addressed, Toby hopes the current obstacles in AI and algorithms will usher in another golden age of philosophy. Tracey, for her part, would like to see those challenges push stakeholders, government officials, and the global population to apply critical thinking and find an equitable way forward for all.

It remains to be seen if this will come to pass, but for now, AI might still be more friend than foe.