By Nico Katzke, Head of Portfolio Solutions at Satrix*
Less than two years ago we were all enthralled by the emergence of a technology that promised to disrupt all facets of our lives. What started as a live social experiment – opening a large Natural Language Processing (NLP) model, in the form of ChatGPT, to the public’s unscripted questions – soon proved so effective that most now see it as a technology capable of major disruption.
Many observers’ enthusiasm has been somewhat curbed in recent months by the realisation that physical limitations constrain the building of ever larger and more capable models. It turns out that when we ask ChatGPT a question online, somewhere a machine is whirring and crunching numbers – consuming more than 10 times the power of a Google search. It is, after all, not magic. Nor is it a tireless sentient being answering our questions – it is machines doing complex maths. The more we ask of it, the bigger it needs to be.
Despite these clear physical challenges, to many the future domination of Artificial Intelligence (AI) through Large Language Models (LLMs) seems inevitable. Others are more sceptical. First, it is costly. Building, training and maintaining these models demands vast capital, expensive specialist manpower, access to enormous datasets and advanced chipmaking infrastructure. Securing a stable supply of the raw materials required in chipmaking also seems increasingly precarious given rising geopolitical tensions.
Other possible drawbacks include the environmental costs of training and running these energy-hungry algorithms; the inherent biases that undocumented training data could embed in model responses, with limited recourse; the inability of modellers to reverse-engineer the fitted parameters (making it a black-box process by design); and the lack of ethical scrutiny required to ensure the technology is safely deployed in society.
Proponents of AI’s unbridled growth might point to these being fixable problems; eggs broken in the pursuit of a sentient omelette. But a bigger problem might lurk in its current design, one that should give even the most optimistic pause: the lack of Intelligence – the “I” in AI. People are mesmerised by Generative AI’s output and the illusion of understanding it projects. It also doesn’t help that researchers and companies at the forefront of its development have strong incentives to feed this illusion with anthropomorphic language like “learning”, “intelligence” and “reasoning”.
But at their core, the models we interact with are simply computer algorithms trained on human-supplied information (think public Reddit threads, Wikipedia pages and the like) that take text as input and produce answers by predicting which word (or which pixel, when drawing) should come next. It is, ultimately, super-efficient predictive text strung together in a way that, given its vast library of human conversational data for training, seems (unsurprisingly) human-like. But it is a parrot, not a mind: a remarkable achievement in mimicry and information collation, but certainly not sentient. Its current design will always scupper its ability to “think” outside the (very black) box. And even if a methodology is identified that could bring us to the holy grail of Artificial General Intelligence (AGI), that does not mean it will necessarily be achieved – think of the decades-long pursuit of nuclear fusion, which is theoretically possible yet still beyond our grasp.
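To make the “predictive text” point concrete, the sketch below shows the idea in miniature: a few lines of Python that count which word tends to follow which in a tiny corpus, then string together the most likely successors. The corpus and names are invented purely for illustration, and real LLMs use neural networks trained on vastly more data rather than simple counts – but the core task of choosing a plausible next token is the same.

    from collections import Counter, defaultdict

    # A deliberately tiny, made-up corpus; real models train on billions
    # of documents rather than three sentences.
    corpus = ("the market rewards patience . "
              "the market punishes panic . "
              "the investor rewards patience .").split()

    # Count how often each word follows each other word (bigram counts).
    successors = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        successors[current_word][next_word] += 1

    def predict_next(word):
        # Always pick the most frequent successor seen in training.
        return successors[word].most_common(1)[0][0]

    # String "super-efficient predictive text" together, one word at a time.
    word, sentence = "the", ["the"]
    for _ in range(4):
        word = predict_next(word)
        sentence.append(word)
    print(" ".join(sentence))  # prints: the market rewards patience .

Nothing in that loop understands markets or patience; it merely replays the statistics of its training data – which is precisely the “parrot, not a mind” point above.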
Irrespective of our views on whether and to what extent the technology is destined to disrupt our lives, the toothpaste has been squeezed from the tube. We have been irreversibly thrust onto a path to discover how far generative technology will mature and how it will impact society in future, be it as a net positive or net negative force. At this point we can, at best, only speculate.
The speed at which we reach a world of broad-based generative AI integration will depend on various factors. The first is finding solutions to the aforementioned physical constraints, which at present carry an exponentially increasing cost structure.
The second, unconnected to the first, is how all facets of society embrace the technology. Some have argued that government institutions should limit its unbridled development for fear of mass worker displacement and the development of nefarious applications. But one of the greatest impediments to the feared displacement of labour might be the very companies set to benefit from these technologies.
There is an inherent tension for companies in appeasing both Wall Street and Main Street – or, in the South African case, Maude Street and Church Street. While companies will no doubt start to feel investor pressure to use the technology to improve efficiencies and reduce costs, most also care about not being seen as cold-hearted capitalists interested only in the bottom line. Managers may also realise that the jobs most at risk of full automation are precisely those that give new entrants the training needed to become more productive in future; a skills gap could emerge that proves costly should all simple tasks be automated. End consumers will likely also be slow to warm to fully automated creative output (think AI-generated music, art and writing) – something companies will no doubt be mindful of.
This means that even if the technology becomes more capable and more widely available than it is today, adoption will likely be slower and more gradual – giving workers a chance to adapt and making the disruption less severe than some anticipate. AI will likely prove a useful productivity tool rather than a source of mass displacement. Government intervention and its accompanying distortions may not be the best course of action: let the markets rage against the machines.
Finally, how should one go about investing in this technological revolution? It is worth remembering that when it comes to securing the rights to raw inputs for chip manufacturing, attracting the top minds and affording the expensive computing equipment needed to develop the technology further, size and scale matter. Mark Twain is reputed to have said that when everyone is looking for gold, it’s good to be in the picks-and-shovels business. Investing in the companies that own the rights to the picks and shovels of tomorrow’s applications may very well be the best course of action at this point – and few indices encapsulate this better than the Nasdaq 100.
*Satrix is a division of Sanlam Investment Management
Disclaimer
Satrix Investments (Pty) Ltd is an approved FSP in terms of the Financial Advisory and Intermediary Services Act (FAIS). The information does not constitute advice as contemplated in FAIS. Use or rely on this information at your own risk. Consult your Financial Adviser before making an investment decision. Satrix Managers is a registered Manager in terms of the Collective Investment Schemes Control Act, 2002.
While every effort has been made to ensure the reasonableness and accuracy of the information contained in this document (“the information”), the FSPs, their shareholders, subsidiaries, clients, agents, officers and employees do not make any representations or warranties regarding the accuracy or suitability of the information and shall not be held responsible and disclaim all liability for any loss, liability and damage whatsoever suffered as a result of or which may be attributable, directly or indirectly, to any use of or reliance upon the information.