
AI, the History and Why It’s Happening Now!



AI - everyone is talking about it. 


I delivered forty keynotes on the topic in 2023, making it my most requested keynote topic of the year.


People’s interest in AI continues, and in January of this year, I began studying Artificial Intelligence at MIT. 


This blog breaks down AI's history and explains why it's happening now. In the coming weeks I intend to demystify the topic further.


So - What is AI? - It's machines acting in ways that seem intelligent.


MIT identifies four key waves in AI's development. Here's some history!


1. In 1956, a small group of scientists gathered for the Dartmouth Summer Research Project on Artificial Intelligence, which was the birth of this field of research. Prior to this, Alan Turing had proposed the Turing Test in 1950: if, after five minutes of conversation, a human couldn't tell whether they were talking to a human or a computer, the machine was deemed intelligent.


2. In the 1960s, the first wave of AI involved problem reduction: breaking problems down into simpler sub-problems until they became solvable.


3. In the 1970s, AI evolved to answer questions about situations described by a set of conventions, known as a 'representation'. The constraint with this AI was that you needed to get the representation right!

4. Since the inception of AI, many had expected AI at scale to be only fifteen years away. The progress was exciting for those working within the field, but AI didn't realise its potential. As a result, investment in AI dried up in the mid-1980s, a period known as the AI winter.


5. The third wave began around 2010 with Apple's Siri and IBM's Watson, made possible by large amounts of data and computing power. At this time, Geoffrey Hinton used gradient-based deep learning with great success for image recognition.


6. The fourth wave involves ChatGPT and AI at scale: it reached a million users within five days of its launch in November 2022. Its underlying deep learning neural network, GPT-3, has over 175 billion machine learning parameters. The largest trained language model before GPT-3 was Microsoft's Turing Natural Language Generation (T-NLG) model, which had 17 billion parameters (early 2020).


Why is AI happening now at scale?


There are several reasons: the accumulation of large amounts of data, greater computational resources, large models that are easier to train, and flexible deep learning architectures.


My blog in the coming weeks will break out these components … plus the differences between Machine Learning, Natural Language Processing and Robotics.


If you have an upcoming event or team offsite, I’m continuing to deliver keynotes. You can learn more about these in the video above (it’s one of my best!) or contact me here.

