AI still far from replacing humans

By Nigel P. Daly and Laurence Chen 陳家宏 (Taipei Times, Mar. 2, 2024, https://www.taipeitimes.com/News/editorials/archives/2024/03/02/2003814319)

It is job-hunting season in Taiwan, and many are becoming concerned about how advances in artificial intelligence (AI) will affect the workforce.

Fifty percent of 18-to-25-year-olds are worried about how generative AI will affect their job opportunities, a recent study by Taiwan’s Market Intelligence and Consulting Institute showed.

Before ChatGPT was released in 2022, concerns about job losses were mainly focused on blue-collar workers.

With the success of large language models (LLMs) that are able to generate human-like language, code and images, the jobs that now seem most threatened are white-collar ones, such as writers, programmers and artists.

It is clear that AI is revolutionizing job landscapes, but should you be worried about being replaced by LLMs?

To answer this question, we need to understand that jobs comprise many tasks, and for most jobs, whether blue-collar or white-collar, AI still has a long way to go before it can do them all.

Yes, AI automates some repetitive tasks, but others still need humans. Some would remain human-only, while others would require humans enhanced by AI.

In theory, this sounds very good. Writers and coders in particular are presented with the possibility of outsourcing menial, boring and repetitive tasks to AI while enjoying AI-enhanced abilities to undertake more creative tasks. This would undoubtedly make white-collar workers more productive and creative.

However, this is a double-edged sword.

Many people would lose jobs not to AI, but to AI-enhanced humans who streamline the workforce and increase profitability.

Generative AI, at least in current LLM forms such as ChatGPT or GitHub Copilot, would not be able to replace humans completely.

To understand why, we need to talk about intelligence, wisdom and agency.

Let’s start with tasks on a knowledge-action continuum that starts with data and ends in wisdom.

When data are organized, they become information, which — if made meaningful by humans — becomes knowledge.

When “knowing” becomes “doing,” knowledge becomes intelligence, which enables one to perform complex tasks such as writing a report, summary or subroutine.

Intelligence is where knowledge becomes action and is where competence takes off.

AI does these tasks, but does not understand what it is doing.

When coders simply copy functions without understanding why they do what they do, the code is “garbage,” Linux kernel creator and lead developer Linus Torvalds said.

Next is wisdom, or insight, which requires the competence to use intelligence to identify and produce value for humans.

An example is Steve Jobs’ idea of a touch screen phone, which transformed the simple phone into an “all-in-one” device.

AI cannot do this. This type of wisdom is hard-earned, taking 10,000 hours of learning, practice and skill building to achieve.

Let us think of wisdom as expert craftsmanship, which requires time, effort and practice.

Perhaps this is why people can enjoy a beautiful Da Vinci-style artwork generated by DALL-E 3 in 20 seconds, yet would not value it in the same way as a real Da Vinci.

Only humans who painstakingly work toward wisdom and become expert craftsmen are able to create works that are of value to humans and are valued by humans.

While generative AI has made impressive advances in creating text, code and images, it is still only a sophisticated statistical model.

Humans tell it to do something and it does it, but humans still need to evaluate the result.

Generative AI has no agency. It cannot act autonomously, intentionally or with purpose.

Perhaps this would happen one day when AI becomes artificial general intelligence.

Right now, we are not there yet and it is unclear whether LLMs are going to lead us there.

Currently, LLMs are “intelligent” models that generate text or code. However, they are only as good as the datasets they have been trained with.

They are essentially statistical models producing code or language that regresses to the mean. In other words, the output is of average quality.

Generative AI writing and coding might appear exceptional to the untrained eye, but for highly competent professionals, the results are mediocre at best and riddled with inefficiencies and mistakes at worst.

The lesson here for jobseekers is twofold:

The first is to become familiar with AI tools and to use them to boost personal work performance.

The second is to identify the skills that require human wisdom and to work on developing them.

This is in preparation for a future with AI that allows you to still stand out from the competition.

Nigel P. Daly is a business communications instructor at TAITRA’s International Trade Institute. He has a doctorate in English from National Taiwan Normal University.

Laurence Chen is a writer, speaker and IT consultant at REPLWARE. He assists companies in addressing programming issues and optimizing their data pipelines.
