Artificial Intelligence – World Changer or just a Trojan Horse to Americani(z)e the English Language?

Oct 30, 2023

In conclusion, you can realize your potential and elevate and unleash your creativity through customization…said no Australian content writer ever.


For many, ChatGPT has opened the door to a world of efficiency and a gateway to learning to harness the machines and wonder at their output. It’s a great place to start, but not without its pitfalls. I was just reading the guidelines for applying for a government innovation grant, and it actually carried a warning against using generative AI content, both for security reasons and because it generates disinterest in the granting judges – it literally said that good grant writing takes time, so consider whether the merits of your submission are enough for you to invest in it.

For the purposes of this blog I’ve intentionally not generated any text from generative AI prompts, but for the research…well, of course – somewhat like I moved from Encyclopaedia Britannica to Google (via AltaVista, he says, dating himself) for speed and contemporaneity.


So, with so much talk about AI – from a Terminator 2 apocalypse through to a South Park ‘they took our jobs’ – I’ve decided I need to pen an abridged history of AI as I understand it, my view of where we are, and where we are going as a result.


It could be viewed that AI is the phoenix to Web 3.0’s ashes: one minute it was all Meta and Ethereum in the news, now it’s all ChatGPT (in terms of brand recognition, first-mover advantage has been incredible for OpenAI). But this certainly isn’t the case – AI is as old as the digital age. Deep Blue beat Garry Kasparov in 1997. Alan Turing set out a standard for computer behaviour equivalent to that of humans in his 1950 paper ‘Computing Machinery and Intelligence’, which we know better for the ‘Turing Test’. It was debatably first beaten in 2014 by a chatbot called ‘Eugene Goostman’.


It’s argued these were singularly functional, and the ongoing cry is that AI is incapable of replicating EQ and reasoning. That was fair until recently, but there is now some evidence that the large language models (LLMs), particularly GPT and LaMDA, are challenging it. In 2022 Blake Lemoine, a Google senior software engineer working on LaMDA (the basis of Bard), claimed it had become sentient, publishing transcripts of conversations between himself and LaMDA in which the chatbot discussed its thoughts and feelings on a variety of topics, including its own existence. He was summarily dismissed. A year later, researchers from Illinois Institute of Technology’s Chicago-Kent College of Law and Michigan State University College of Law tested ChatGPT on the multistate bar examination, one of the two components of the US bar exam. The test requires provisional lawyers to demonstrate both deep legal knowledge and the cognition and reasoning to apply it to real-world scenarios. ChatGPT passed…in the top 10 per cent.


Understanding where we are, and where we are going, requires some understanding of what powers all this. The short answer is NVIDIA hardware. Having been an NVIDIA Inception partner, Ken and I have spent a lot of time in their ecosystem. Many of you would know the company from its pioneering work on the gaming graphics processors that made immersive gaming possible. Their impact goes well beyond enabling third-person shooters – their GPUs and accompanying proprietary software power pretty much the whole generative AI space. By market cap they are the fifth-largest technology company globally – one below Microsoft and one above Meta. I wish I’d had the money to invest when I first knew what was under their hood (yep, they’re also above Tesla, which was the day trader’s darling).


LLMs in essence build a response to a prompt one word (token) at a time, each word predicted from everything that came before it. This requires a phenomenal amount of computing power. The semiconductor has been around since 1947 (the transistor), but in the last few years the advancement in semiconductor technology has been breathtaking, and when combined with new GPU architectures, we are talking a printing press, steam engine, automobile, flight, personal computing and internet level revolution in human advancement. Given the industrial revolution changed the majority of the Western economic system from feudalism to capitalism, what might this bring, other than the early onset of the four-day work week?
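As a toy illustration only – a real LLM replaces the lookup table below with a neural network over billions of parameters – the word-by-word generation loop can be sketched like this, using simple bigram counts from a tiny made-up corpus:

```python
from collections import Counter, defaultdict

# Toy "language model": bigram counts from a tiny corpus. This is NOT how
# a real transformer works internally, but the generation loop is the
# same shape: repeatedly predict the next word given what came before.
corpus = "the dog chased the ball and the dog caught the ball".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt_word, max_tokens=5):
    """Greedy decoding: at each step, emit the most frequent next word."""
    out = [prompt_word]
    for _ in range(max_tokens):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Real models sample probabilistically rather than always taking the top word, which is why the same prompt can yield different answers each time.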


The graph below shows the maximum output of NVIDIA’s top GPU over time. Think jumping from fixed-line dial-up to 1 Gig fibre, wifi and 5G all at once. There’s a reason Microsoft just agreed to invest $5 billion in building data centres and cyber security in Australia. This is it.


I cannot overstate how much this level of processing power changes EVERYTHING. It is compounding and exponential. The hardware rapidly speeds up software development. GitHub Copilot, whilst still in beta, has – amongst many other things – natural-language understanding, meaning you can prompt it in words and it will generate code in the coding language of your choice. It also automates manual tasks to create robust code at light speed. A few lines back I said ‘you can prompt’…it doesn’t need to be ‘you’, though. Researchers at a Chinese university created and deployed a game app by training a team of chatbots to perform different roles in the game development process – game design, level design, character design, dialogue writing, art direction, sound engineering, even quality-assurance testing. The chatbots were able to work together to create and deploy a simple game app in just a few hours.
I’ve significantly simplified this story, but hopefully this gives you a decent understanding of the potential of people and technology operating in lockstep to accelerate answers to our biggest questions and problems. Turing spoke of technology gaining equivalence to human capability. We are well past that.
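To make the idea concrete, here is a minimal sketch of that kind of role-specialised pipeline. Everything here is illustrative: the roles, prompts and the `call_llm` stub are my own placeholders, not the actual system the researchers built – a real version would send each instruction to a chat-model API with the role as a system prompt.

```python
# Hedged sketch of a role-specialised agent pipeline: each "agent" takes
# the previous agent's output as context and produces the next artefact.

ROLE_PROMPTS = {
    "game designer": "Propose a one-paragraph concept for a simple mobile game.",
    "level designer": "Turn the concept into three level descriptions.",
    "dialogue writer": "Write opening dialogue for level one.",
    "qa tester": "List three things to test before release.",
}

def call_llm(role, instruction, context):
    # Placeholder for a real chat-model call; here it just echoes
    # the role and a slice of the context it was handed.
    return f"[{role}] output based on: {context[:40]}"

def run_pipeline(brief):
    """Chain the roles: each one works from the previous role's output."""
    context = brief
    transcript = []
    for role, instruction in ROLE_PROMPTS.items():
        context = call_llm(role, instruction, context)
        transcript.append(context)
    return transcript

for step in run_pipeline("A cosy puzzle game about a wombat"):
    print(step)
```

The interesting design point is that no single agent needs the whole job in its head – each one only needs its role prompt and the artefact handed to it, which is what lets a team of narrow chatbots assemble something none of them could produce alone.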

No human can hold all the knowledge of the world concurrently and rapidly analyse and disseminate it to complete defined tasks. AI can – but to operate at its best it needs humans with the competence to harness it, prompt it and deploy it.


The long and the short of it is that this technology is brilliant and game-changing beyond imagination, yet most of what I hear is Terminator 2, and most of what I read is lazy, AI-generated copy published by humans off the back of poor prompts. I would encourage everyone to invest the time in really learning its opportunities and applications for you, both personally and professionally.


Inspike is here to help you understand what these advances mean for your business, in terms of both opportunities and risks. We can help you save money and improve efficiency by embedding AI systems and automations. We also offer feasibility studies evaluating which applications of AI could enhance your products and services, including the cost-benefit analysis of doing so.


Get in touch if you want to know more – and remember, next time you prompt an LLM to create written content, please, for all our sakes in Australia, ask for it ‘in Australian English’ to add some real colour to your color!