by Tony Palladino
We are living in an amazing time, witnessing the meteoric evolution of AI. It reminds me of the Pink Floyd song, Welcome to the Machine. (Fun fact: they recorded that song’s intro using the sound made from an electric shaver.) The concept behind that song has much to do with the machine, invented by mankind, eventually consuming humanity.

In today’s world, the machine has evolved quickly: computing power has roughly doubled every two years, per Moore’s Law. Now, the bright and shiny object is Generative AI. How long will AI hold its luster? Where are we currently along the AI journey? Will the machine, or rather AI, eventually consume humanity? Perhaps it’s not a matter of “will”, but rather “when”. This raises all sorts of ethical questions. As I write this, our world leaders are trying to figure out how to regulate AI on a global scale, as if that were even possible. Let’s pause, take stock, and see what the hype is all about.
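To make the “doubling every two years” claim concrete, here is a minimal back-of-envelope sketch. The function name and the example horizons are illustrative assumptions, not figures from any report:

```python
def moores_law_growth(years: float, doubling_period: float = 2.0) -> float:
    """Growth multiple after `years`, assuming one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Five doublings over a decade yields a 32x increase;
# two decades yields roughly a 1,000x increase.
print(moores_law_growth(10))  # 32.0
print(moores_law_growth(20))  # 1024.0
```

Compounded over a career, that exponential curve is why capabilities that seem distant today can arrive in what feels like the blink of an eye.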
Cautious Optimism
The latest Generative AI Benchmark Report from Lucidworks surveyed more than 2,500 business execs responsible for AI adoption. The report revealed a 32% decrease in AI investment planned for the next 12 months. This “pumping of the brakes” is due to the execs’ serious concerns around data security, transparency, accuracy and, above all, the cost to implement.
The state of AI today, still in its adolescent stage, is simply not ready for prime time from a practical perspective: it is expensive and inaccurate. While there is optimism that the security, transparency, accuracy, and cost issues will be solved, it is clear that the perceived risks outweigh the benefits right now. The VC community is also being cautious, and angel investors even more so; many are quite happy with the 5%+ return from parking their funds in a bank CD. Almost everyone agrees that the concerns raised in the survey are realistic, reinforcing that Gen AI is still in its adolescence. I should point out that the Lucidworks survey does not address AI’s broader impact on society, such as the efficiencies to be gained, the many jobs to be replaced, or new product innovations; it covers only the concerns impeding these 2,500 execs’ investments in AI technology.
Sound the Alarm
What I find even more interesting is a separate report written recently by Leopold Aschenbrenner, a former OpenAI researcher who raised a red flag to OpenAI’s board about what he felt were lacking safety and security protocols in Generative AI’s development. His 165-page report, Situational Awareness: The Decade Ahead, offers an insider’s perspective. Aschenbrenner claims that Artificial General Intelligence (AGI), the level of AI that matches human cognitive capabilities, will be achieved by 2027. That time frame is conservative in my view. Regardless, it is happening fast, in the blink of an eye.
More interesting, and more concerning, is his claim that Artificial Super Intelligence (ASI), a level of AI that surpasses human intelligence such that it can think on its own, will be achieved by 2030. Another blink of an eye in the big picture. While that sounds like good progress, most people do not grasp the gravity of what is about to happen or the impact it will have on our society in the near term.
While there is much to unpack in his report, the idea of achieving ASI by 2030 is alarming on several levels. First, consider the quantity of GPUs required to run ASI: orders of magnitude more than today’s Generative AI requires. Then consider the electricity needed to run those GPUs; the consumption would devour our planet’s energy resources. We simply do not have the means to power artificial superintelligence today. But when has that ever stopped progress? Rest assured that the powers that be are already making decisions to divert our future energy resources toward the end game of ASI. For example, did you know that Amazon acquired a nuclear-powered datacenter campus in March 2024 for $650M to power its massive AWS cloud infrastructure? Yes, nuclear-powered datacenters.
On another, more alarming level, ASI will be able to design and implement its own path to the future. It would likely do so by creating its own language, one that we humans would not comprehend. Think about it: among the many jobs AI will replace are those of programmers and developers. Within the next few years, AI will be able to generate more accurate code, at scale, much faster than humans can. In doing so, AI won’t need our cumbersome programming languages or Large Language Models (LLMs); it will invent its own languages and models. When it does, humans will not be able to intervene. That is when AI will officially be deemed out of our control.
On yet another level, there are those who would use AI for nefarious purposes. Aschenbrenner makes this point rather strongly in his report, citing China’s CCP as a prime example. It has been proven time and again that the CCP is adept at copying (or dare I say “pirating”) American ingenuity for its own exploitation. And it’s not just China; the USA apparently has many enemies these days. AI used as a weapon will be a new threat to our nation, and ASI will be hard to stop, if not impossible, unless we can eliminate its power source.
I encourage you all to read both reports and get educated on the AI revolution. Aschenbrenner puts the current GPT-4 at high-school-level intelligence. Artificial General Intelligence (AGI), coming soon, will be beyond a university doctorate level. Artificial Super Intelligence (ASI) will be so far beyond our own intelligence that we may wind up causing our own demise. The transformation to come will surpass the Industrial Revolution in its impact on mankind. The impact on jobs alone will be devastating to many people, all in the name of progress. No governance or regulation will stop the AI train from barreling down the tracks.
Light at the End of the Tunnel
Is the integration of mankind and AI the answer? Elon Musk believes so, as evidenced by his development of Neuralink, a surgically implanted brain interface that connects us directly to AI. According to Musk, we must race to embrace embedding AI technology in our brains sooner rather than later. In fact, a Neuralink device was surgically implanted in a human for the first time in January 2024. It’s all happening so quickly, here and now, in our lifetime.
Welcome to the machine.