Our guest author, Antero Duarte of Wallscope, takes a wry and somewhat insightful look into the world of all things AI. Take it away Antero… or should that be Siri, Hal, or even Holly…?
Artificial Intelligence has been around for a long time and it has been called about a gazillion different names throughout its history.
The term itself has also been used to describe about a bajillion different things.
While some of the uses of the term align with the definition, some have been a bastardisation of the concept for marketing purposes and can hurt the development of the technology. Or can they?
Defining AI the way only humans can
How do you get nothing done? Ask a group of AI students to agree on a definition of AI — Based on a true story
Let me take you back to my notes from when I took an Artificial Intelligence module at university. An artificially intelligent system was defined as belonging to one or more of four broad categories, by having one or more of the four following characteristics:
Can a system mimic the ways in which humans think? (e.g. introspectively)
Can a system think in a rational way? (e.g. by drawing conclusions through formal logic)
Can a system act in a way that is based on the logical analysis of its environment? (if temperature > 21 then turn heating off)
Can a system act in a human way? (usually to try to convince humans that it is a human, as in the Turing Test)
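The "logical analysis of its environment" characteristic is the easiest to sketch in code. Here is a minimal, purely illustrative take on the thermostat rule above (the function name and return values are my own, not from any real system):

```python
# A minimal sketch of a system that "acts rationally": a rule-based
# thermostat that reacts logically to a reading from its environment.
# (Hypothetical example for illustration only.)

def thermostat(temperature_c: float, threshold_c: float = 21.0) -> str:
    """Decide what to do with the heating based on one simple rule."""
    if temperature_c > threshold_c:
        return "heating off"
    return "heating on"

print(thermostat(23.5))  # reading above the threshold: "heating off"
print(thermostat(18.0))  # reading below the threshold: "heating on"
```

By the broad definition above, even this one-rule system technically qualifies, which is rather the point about how stretchy the definition is.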
Is that a broad definition? Yes it is. Is it useful? Probably not?!
Definitions like this one are never wrong about anything, which also makes them so vague that they are not very meaningful. Nonetheless, when I refer to an AI, I will be referring to a system that possesses at least one of those characteristics.
This is also the definition that no one came up with during a two-hour practical when asked to define AI.
(This section is pretty much just a summary of the Wikipedia page on the History of AI, so go there for the full picture.)
With such a broad definition, it’s no wonder that AI is everywhere. We can basically stretch the definition of AI to fit any system that replaces human behaviour/intervention in any way.
Since the 1950s, people have been developing systems and calling them artificial intelligence (it goes further back, but that’s when computer AI started).
When it started, it was based on the human brain and how signals are passed around it. This was the first attempt at a system that thinks humanly. But the problem is that, as far as we know, there’s more to thinking than just neurons firing. How do we define thinking? Is it based on consciousness? If so, are animals intelligent? Which ones? So many questions… Yes, this is the birth of the philosophy of AI, and the Turing Test marks it.
People are experimenting with computers and start developing Game AI (which would be used as a measure of the progress of AI throughout history). Game AI is important because it marks the realisation that a system can act humanly but still think rationally.
It’s 1956 and we have the Dartmouth conference. If AI had a birth certificate, this would be the time and place on it. This is where the name was picked, the mission was defined and the biggest players joined.
After that we have the first period (1956–1974) of heavy development, where a lot of the algorithms still used today were created, there were major breakthroughs in fields like Natural Language Processing, and everyone was pouring money into AI research. This is where systems in areas other than games were making the jump from acting rationally to acting humanly.
Then it slowed down (1974). Then it picked up again (1980). Then it slowed down again (1987).
The predictions were too optimistic, which meant that most of them didn’t come true. As Marvin Minsky, one of the founders of AI, put it: “So the question is why didn’t we get HAL in 2001?”.
There isn’t one answer to that so much as a combination of several (speculative) factors, like limited computing power, the end of funding, and profit-driven research that focused on short-term gain…
Excuses. We want HAL! (Actually, do we? It would kill us all… I’ll save that for another article).
We also want hoverboards. Also, the world didn’t end in 2012. It’s like human-made predictions never come true; someone should get a machine to predict these things. Anyway…
Big Data Wordart because I’m stuck in the 90s. send help
Big Data and Deep Learning changed everything. Suddenly we are able to throw a lot of data at a machine, and it will use these magic black boxes that allow it to act humanly; interestingly, they are also the closest we have gotten to thinking humanly.
That is where we are. We live in a world where more and more machines are making more and more decisions based on big data.
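Deep learning itself is far beyond a blog aside, but the core "learn it from data" idea can be sketched with the simplest possible learner. The toy single-weight perceptron below (my own illustration, not anything from the article) learns the thermostat rule from labelled examples instead of being told the rule:

```python
# A toy single-weight perceptron: the simplest "learn from data"
# machine. It is given (temperature - 21, label) pairs, where label 1
# means "turn heating off", and nudges its weight and bias until its
# predictions match the labels. Purely illustrative; deep learning
# stacks millions of units like this one.

data = [(-3, 0), (-2, 0), (-1, 0), (1, 1), (2, 1), (3, 1)]

w, b, lr = 0.0, 0.0, 0.1  # weight, bias, learning rate

def predict(x):
    return 1 if w * x + b > 0 else 0

for _ in range(10):  # a few passes over the data are enough here
    for x, label in data:
        error = label - predict(x)
        w += lr * error * x  # nudge the weight toward the answer
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1, 1, 1]
```

Nobody wrote the "greater than 21" rule into the code; the system recovered it from examples. That, plus scale, is the shift this section describes.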
That data is abundant, as corporations have been collecting it for decades now. It is also biased, because it reflects the biases that exist in the real world, and we are yet to find ways of preventing models from learning the biases in the data we train them on.
This should be the subject of the next article that I won’t write. People who know way more about this problem are writing about it. I recommend Invisible Women by Caroline Criado-Perez and Racist in the Machine: The Disturbing Implications of Algorithmic Bias by Megan Garcia.
These techniques are being used everywhere. They work most of the time, and they are cheaper and easier to build, as long as you’ve got the data. Open data is also abundant, so sometimes you don’t even need to own the data to get decent results.
These techniques are also so widely used because they are popular. They become popular by being easy to talk about. Artificial Intelligence is a concept that most people can easily grasp even if they are not technical, as long as they’ve got an interest in the subject and are reasonably tuned in to current events (see point above about living under a rock).
So to (mis)quote the great Salt N’ Pepa: Let’s talk about Buzzwords.
AI, Machine Learning, Lean Startup, Agile, Web 2.0 and 3.0 (any version of the web, really), Blockchain… the list goes on, and it’s actually quite fun to read some of them. What do they have in common? They are all buzzwords.
Buzz Lightyear on buzzwords
But wait. I’m here to make a case for buzzwords. After all, I didn’t just want to bastardise Kubrick’s film name, I’ve got a point to make… Let’s look at a chart.
These are the three main names for technologies that have been historically, or still are, associated with AI: Artificial Intelligence (AI), Machine Learning (ML) and Business Intelligence (BI). The number on the Y axis is a Google Trends metric that normalises search interest so the terms are comparable. Details here
For the more technical people this might seem like a strange comparison, but from my personal experience, these are the types of terms that people conflate in the business world when talking about software that acts humanly.
Artificial Intelligence starts as the dominant term (keep in mind this data only goes back to 2004), but it is already declining (presumably because of the big 2001 bust? I digress…).
The other two are already in use, but not nearly as much as AI. Machine Learning is a niche term, presumably used almost exclusively by technical people.
Around 2007, something interesting happens: BI catches up with Artificial Intelligence in a dramatic turn of events! (I get way too excited about this.) It then leads for almost 10 years, and during those 10-ish years we see a slow but steady decline in the usage of the term.
At a certain point (around 2015), both AI and Machine Learning catch up with BI. This shows a shift in language: the words used by a wider audience are moving closer to the words technical people use. Assuming that Machine Learning was used only in a technical sense until around 2015, the fact that it fluctuates very little suggests that the development and deployment of these systems didn’t grow, stop or decline; only the way people refer to them changed.
So with the adoption of a more technical language, what does this mean for AI?
Taking into consideration that correlation does not equal causation, and that it is just as likely that usage of the terms grew because the technology got more popular as the other way around, the truth is this: anyone can walk into a room nowadays and talk about artificial intelligence, and most people will know you don’t mean the Terminator or some other 80s-action-film-inspired singularity that will kill all humans. You probably mean Siri, Alexa, Google Home, Cortana, Tesla’s self-driving cars, or something along those lines.
And this is powerful. Because as a web developer, I can’t walk into a room and talk about REST APIs and expect anything other than weird looks and maybe people asking if I’m tired and need to lie down for a bit.
For people who are not technical, REST is the act of resting, not REpresentational State Transfer, so I can’t get people interested in my work by telling them about it.
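For the curious, the idea behind the name can be sketched without any web plumbing: clients transfer representations of resource state using a small, uniform set of verbs and resource paths. The toy in-memory stand-in for a server below is purely illustrative (real REST APIs speak HTTP over a network, and the paths and data here are made up):

```python
# A toy sketch of the REST idea: a uniform set of verbs (GET, PUT,
# DELETE) operating on resources identified by paths, transferring
# representations of their state. In-memory only; no network involved.

resources = {"/users/1": {"name": "Ada"}}

def handle(verb: str, path: str, body=None):
    if verb == "GET":
        return resources.get(path)        # read a representation
    if verb == "PUT":
        resources[path] = body            # create or replace the state
        return body
    if verb == "DELETE":
        return resources.pop(path, None)  # remove the resource
    return None

print(handle("GET", "/users/1"))          # {'name': 'Ada'}
handle("PUT", "/users/2", {"name": "Hal"})
print(handle("GET", "/users/2"))          # {'name': 'Hal'}
```

Useful, tidy, and still nobody at a party wants to hear about it, which is rather the point of this section.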
And at the end of the day, getting people interested in my work is what is going to make it prosper. Getting people interested in the technological side of my work is what will help me get funding to develop it, and that’s why everyone is pouring money into AI: it’s something a lot more people understand than REST.
Is that good? Is that bad? I don’t know… Some things about it are good, some things about it are bad. Sure some people will use that funding to put AIs in bins, but the rest will try to use AI to do good things.
The point is that we’re getting to a chicken-and-egg situation where we don’t know if the buzz is generating progress or the progress is generating buzz. But we want buzz. I just wish people were buzzing about more technical things. So to all the haters who think AI was cooler before more people were talking about it, this is what you sound like:
No hate towards hipsters though
So that’s my hot take on buzzwords, and I must thank Company Connecting for providing the platform to publish it on.
That’s it for now. I need a Capital REST, the lying down kind.