AI Race -- The Future is Coming Too Fast

Note: this is an unedited version of my Jakarta Post article of March 31. The published version had to be condensed to meet the word limit. For the Bahasa Indonesia translation, see Balapan AI -- Masa Depan Datang Terlalu Cepat.

Mohamad Mova AlAfghani*

There have been many debates about, and much criticism of, the recently viral ChatGPT. Alongside those who are "wowed" by its capabilities, some note that ChatGPT sometimes "hallucinates" in its output. Oftentimes, these hallucinations can only be detected by a domain expert. This is correct, and I have experienced it myself when using ChatGPT to assist my research on water governance and environmental regulation -- the domains of my expertise.

However, people often mistake ChatGPT for GPT (Generative Pre-trained Transformer) models, or for AI "language models" in general. I am no expert in AI, but I have been using these tools for some time now to help with my research. First, ChatGPT is only a "fine-tuned" version of GPT from OpenAI, the company that created both. A better sense of what GPT can do comes from OpenAI's Playground, where users can select among several versions of the GPT models. Secondly, the GPT models, including ChatGPT, give better results when the user crafts the prompts carefully -- a skill known as "prompt engineering".
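To make "prompt engineering" concrete, here is a minimal sketch of calling a raw GPT model directly, which is essentially what the Playground does behind its interface. It assumes the pre-1.0 `openai` Python library and an API key; the model name and the two prompts are illustrative, and the only point is that the more specific prompt tends to yield a more useful answer.

```python
# Minimal sketch: calling a raw GPT model directly, as the Playground does.
# Assumes the pre-1.0 `openai` library with an API key in OPENAI_API_KEY;
# the model name and prompts below are illustrative.
import openai

bare_prompt = "Explain the Jobs Creation Law."
engineered_prompt = (
    "You are an Indonesian water-law specialist. In three bullet points, "
    "explain how the Jobs Creation Law changes environmental licensing, "
    "citing the relevant articles. If unsure, say so."
)

for prompt in (bare_prompt, engineered_prompt):
    resp = openai.Completion.create(
        model="text-davinci-003",  # one of several GPT models selectable in the Playground
        prompt=prompt,
        max_tokens=300,
        temperature=0.2,  # lower temperature gives more conservative output
    )
    print(resp["choices"][0]["text"].strip(), "\n---")
```

In the Playground, the same experiment amounts to picking a model from a drop-down and typing the prompt; no code is required.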

Thirdly, the GPT models can produce better output if they are "fine-tuned" (further trained on domain-specific data) or fed contextual documents (a process that relies on "embeddings"). GPT-4, the latest version, released on March 14, is said to be multimodal, capable of receiving both image and text input, and is claimed by OpenAI to perform better on various academic benchmarks.

That was OpenAI. The race in Large Language Models (LLMs) has only just begun. Facebook's model, called LLaMA, was published in February. On March 14, Google released access to its PaLM API, and Microsoft launched Copilot on March 16. It seems unlikely that these big companies care much about the socioeconomic implications of AI.

So, what are the socioeconomic impacts?

First, AI is coming for white-collar jobs. I think senior, experienced expert positions are going to be safe for a while; entry-level white-collar jobs, however, are at stake. This does not mean these LLMs will replace all entry-level jobs, but since AI makes work more efficient, we may not need as many people doing entry-level work. That raises the barrier to entry for the younger generation: it will become more difficult to become an expert as fewer positions are available.

These LLMs have billions of parameters and are trained on vast amounts of online data. Any kind of knowledge or information that is online -- legal texts, code, images, GPS locations, tweets, journal papers -- may have been fed to them. They can write simple contracts, computer code, and genetic sequences, and they can turn text into images, videos, music, or websites, and vice versa. They are not perfect (often only an expert can detect the flaws), but they are good enough to displace entry-level jobs.

On the other hand, work that depends on knowledge not yet online -- such as gaining an empirical understanding of a particular social situation through fieldwork, as anthropologists do -- is probably more secure than any kind of work whose data is already available online.

AI is also coming for blue-collar jobs -- driverless cars, driverless planes, driverless boats, delivery drones -- but automating blue-collar work requires investment and infrastructure, be it a good internet connection, spare parts, or better roads. These are lacking in most developing countries, including ours. Replacing knowledge workers, on the other hand, requires only a laptop and access to a remote cloud server, which is far cheaper than investing in infrastructure. Hence, displacement will come faster for white-collar jobs than for blue-collar ones.

Secondly, I think disruptions are coming too close together. In the past, when a new technology disrupted an existing one, there was usually enough time for investors to recoup their outlay by operating the new technology for a while before it was disrupted in turn. With AI, I think it is going to be different. When GPT-3 was launched, many new startups used the GPT backend to develop new products.

One of those business models is "document querying": essentially, users upload a stack of documents and then query the AI about their contents. I uploaded the bulky Jobs Creation Law, and the AI could pinpoint which articles regulate what and which responsibilities fall on which parties. But when Bing Chat was launched (I was given early access), this business model already seemed outdated. Bing does it better: it can read PDFs I put on my website, synthesize my publicly available papers, and answer questions about regulations containing hundreds of articles.
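For readers curious about what happens under the hood, "document querying" products typically work by retrieval: the documents are split into chunks, each chunk is embedded as a vector, the chunks most similar to the question are fetched, and only those are handed to the model. Below is a minimal sketch, assuming the pre-1.0 `openai` Python library and NumPy; the model names, chunk size, and prompt are illustrative assumptions, not any particular startup's implementation.

```python
# Minimal sketch of embedding-based "document querying".
# Assumes the pre-1.0 `openai` library with an API key in OPENAI_API_KEY;
# model names and chunk size are illustrative choices.
import numpy as np
import openai

def embed(texts):
    """Turn a list of text chunks into embedding vectors."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in resp["data"]])

def answer(question, document, chunk_size=1000, top_k=3):
    # 1. Split the document into fixed-size chunks (real products split more carefully).
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    # 2. Embed the chunks and the question into the same vector space.
    chunk_vecs = embed(chunks)
    q_vec = embed([question])[0]
    # 3. Rank chunks by cosine similarity to the question; keep the best few.
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec))
    context = "\n---\n".join(chunks[i] for i in np.argsort(sims)[-top_k:])
    # 4. Ask the chat model to answer using only the retrieved excerpts.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided excerpts."},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp["choices"][0]["message"]["content"]
```

Note that the model itself is never trained on the uploaded documents; relevance is handled entirely by the embeddings. That is also why the moat is so shallow: anyone with access to the same models, Bing included, can replicate the trick.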

On March 14 (the same day OpenAI launched GPT-4, claiming that "...it passes a simulated bar exam with a score around the top 10% of test takers"), Google announced that it would embed AI into Google Workspace. Two days later, Microsoft announced that it had started trials of Microsoft 365 "Copilot". This means that not only the "document querying" business model but also most existing startups relying on OpenAI's GPT could become obsolete. Everything from writing and summarizing email to copywriting and drafting presentations can be created by AI. Most importantly, these AI systems will be able to learn from and generate text or images based on data already stored in Google Drive or Microsoft's OneDrive. This creates a paradox: if you want to invest in AI, think again. Or perhaps you should only invest in big companies like Google or Microsoft, which already store God knows how many zettabytes of consumer data? That is not good for competition.

The third question is how our education system should cope. I am a lecturer. Knowing that Microsoft and Google will soon release their AI systems to the public, resistance to AI is futile. So I told my students to use ChatGPT (I have written about this in a separate article in Bahasa Indonesia) and any other available AI tools; for the time being, I just need to come up with better exam questions. Existing "AI text detectors" are no good; they mistake original human-written text for AI output. Besides, a little prompt-tweaking or the use of services like QuillBot makes AI content even harder to detect.

As soon as Google and Microsoft open their AI systems to existing services, AI-generated content will explode. In a few years, AI-generated or AI-assisted content, from text to music to videos, may dominate the web, and original, non-AI-assisted content may become much more difficult to find. AI will write everything, read everything, and then summarize it for people. AI will become the intermediary of all information.

How are we going to teach our children? Many newly created professions, such as "social media manager", which did not exist a few decades ago, may no longer exist in the future. Even selebgrams (Instagram celebrities) are in danger of being replaced by handsome and beautiful AI avatars. Voice synthesizers already replace narrators: you can easily find online services that will read this article in Sir David Attenborough's voice, with the proper intonation and style. You can also synthesize your own voice for a few dollars. I can likewise tell an AI to write articles in my writing style.

There is another paradox here: you cannot fully trust AI. At the time of writing, none of these LLMs is free from hallucination, and you often need a domain expert to tell which outputs are misleading. People who are not domain experts are not advised to rely heavily on AI for domain-specific content. So domain experts -- provided they know how to use AI -- are going to be even more productive. They can also cut costs by employing fewer entry-level knowledge workers. Jobs that are repetitive, common, or routine will be the easiest to displace.

For the time being, "prompt engineering" is still going to be an important skill. In other words, knowing how to "use" AI to produce the desired output will give people an advantage over those who do not know how to command it. In the near future, I think, imagination and lateral thinking will matter most for children. This is because AI has significantly improved in accuracy at deciphering brainwaves. A session at the last World Economic Forum discussed "brain transparency" and its impact on society. When AI can finally decipher brainwaves satisfactorily, there will be no use for prompt engineering (and perhaps even the use of written language will decline, but that is for another article).

Fourthly, fake information is going to be harder to detect. Fake images on WhatsApp are bad enough, but this AI craze makes things worse. If voices and writing styles can be cloned and human faces artificially generated, it will become even more difficult to distinguish fake from real.

Finally, if AI manages to displace entry- and mid-level knowledge workers, the political and economic ramifications will be huge. These workers belong to the middle class that currently drives the economy, and a strong middle class is also required to sustain democracy -- which is currently under threat everywhere.

The current presidential race needs to address these issues. We will need a safety net and a universal basic income if AI displaces entry-level knowledge workers. While some of our children are still crossing rivers to get to school, our education system will have to cope with this AI race. The changes that AI brings will be exponential, which means our existing values and institutions, including our legal system, will face tremendous challenges in coping with frequent disruptions. The future is coming too fast.

* Lecturer at Universitas Ibn Khaldun Bogor