
A Gentle Introduction to Artificial Intelligence

AI for absolute beginners

Artificial Intelligence has been a buzzword for the past few years. But while everybody talks about it, not many people really understand it. In this series of tutorials, we will discuss what artificial intelligence is, why AI matters in 2022, how AI is different from machine learning yet intertwined with it, and much more. Other articles in this series will teach you from scratch how to build machine learning models. Well, not technically from scratch! A little background in Python is enough. But even if you do not have a background in Python, we’ve got you covered. Click here to view our Python tutorial for absolute beginners. It teaches you everything you need to know to write full-fledged Python code.

Anyway, let’s get back to AI. We’ll begin by understanding the basic definition.

What is AI?

AI has been defined in several ways by different people, but we will stick to the definition by two pioneers in the field, John McCarthy and Marvin Minsky. In 1959, they defined AI as the ability of a program or machine to carry out a task that a human would need to apply intelligence to do. This followed a test proposed by Alan Turing in 1950 for determining whether a machine can be called artificially intelligent. The Turing Test, as it is called, says that if a machine’s behavior is difficult to tell apart from a human’s, then the machine can be called intelligent.

That said, AI systems are built to exhibit intelligent human behaviors such as learning, problem-solving, knowledge representation, motion, creativity, and planning. At the very least, they should possess capabilities related to speech, reasoning, and vision.

By the end of the tutorial, you will be conversant with:

  • The Timeline of AI
  • The Two Approaches in AI
  • Machine Learning and Deep Learning
  • The AI Revolution
  • The reason for the explosive growth in AI

The Timeline of AI

AI was first discussed at a conference led by John McCarthy at Dartmouth College in 1956. At the conference, which lasted for eight weeks, cognitive scientists and mathematicians brainstormed on how to tackle AI. Many of the innovations that followed were conceived at this conference. A prominent academic, Marvin Minsky, advocated a top-down approach, which implies that computer programs can be written with the rules that guide human behavior. Other researchers were in favor of a bottom-up approach, which involved creating models that simulate the activities of the brain: neural networks.

In 1959, the term ‘machine learning’ was coined by Arthur Samuel. In 1969, researchers at the Stanford Research Institute completed Shakey, a general-purpose robot that could choose its actions based on its environment. But Shakey was slow! Minsky predicted in 1970 that in three to eight years, computers would be able to behave like humans in all ramifications. Years passed, and nothing came close to Minsky’s lofty dream; people began to doubt that AI could attain human-like intelligence. In the 1980s, there was a shift in approach. Rather than creating machines that could do all things, scientists began to explore machines that could do specific tasks at an expert level. R1 (also known as XCON), a system that helped configure orders for newly purchased computer systems, was the first commercial success of this wave. In 1986, it saved its owners a whopping $40 million in a single year.


In 1988, another major shift occurred when IBM researchers published the article ‘A Statistical Approach to Language Translation’. From that time, machine learning was increasingly based on statistical analysis of past events rather than a hand-crafted understanding of the task to be done. In 1997, IBM’s supercomputer Deep Blue took on the world chess champion, Garry Kasparov. Deep Blue could analyze more than 200 million moves in one second and think strategically, and it beat the world champion, much to his surprise. In November 2008, Google released an app for the new iPhone that could perform speech recognition. By this time, computers were already doing cool stuff. In 2012, AlexNet successfully classified images in the ImageNet challenge with an error of 16%. In 2014, the first self-driving car by Google passed a state driving test. By 2015, ResNet could do image classification with an error of 3.57%, while humans had an error of about 5%. In March 2016, Google’s AlphaGo defeated the world champion Go player, Lee Sedol. Go is a far more complex game than chess, and the win showed that AI was beginning to surpass some human abilities.

The Two Approaches

Artificial intelligence can be divided into two main categories: Narrow AI and Artificial General Intelligence.

  • Narrow AI: This is the kind of AI that does specific tasks. It operates within a particular context and is focused on performing that particular task very well. Narrow AI is sometimes called weak AI. Examples of narrow AI include classification models, self-driving cars, image recognition software, Siri, Cortana, and Google Assistant.

  • Artificial General Intelligence: This is the kind of AI Minsky was predicting in 1970: a machine that can solve all problems, much like a human itself. This is a herculean task, and it is still very much under development. Although we see it in science fiction movies, we are not close to that level of AI at the moment. Artificial General Intelligence is sometimes called strong AI.


The Two Subfields in AI

Artificial Intelligence can be divided into two subfields: machine learning and deep learning.

  • Machine learning: Machine learning involves the development of algorithms that learn from experience. It hinges on the fact that past experiences have underlying patterns. Given a lot of past experience (data), the algorithm can identify these patterns, learn from them, and use them to make future predictions. In machine learning, the machine finds its rules on its own; the first sketch after this list shows the idea.

  • Deep learning: Deep learning is a subfield of machine learning. In deep learning, the machine learns from the data in layers. The architecture, typically called a neural network, is such that the layers are stacked on top of one another. The first layer learns something about the input data, and the next layer in turn learns from what the first layer produced. This continues across each layer until the final layer, where the model has learned enough to make predictions. The number of layers is called the depth of the network, and deeper networks can generally capture more complex patterns; the second sketch after this list shows a small layered model.
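To make the machine learning idea concrete, here is a minimal sketch in Python using scikit-learn. The numbers are made up for illustration: the algorithm is never told the rule behind them (here, y = 2x + 1); it infers the pattern from the examples alone.

```python
# A minimal machine-learning sketch with made-up data following y = 2x + 1.
from sklearn.linear_model import LinearRegression

# "Past experiences": inputs paired with the outcomes observed for them.
X = [[1], [2], [3], [4], [5]]   # feature values
y = [3, 5, 7, 9, 11]            # observed outcomes

model = LinearRegression()
model.fit(X, y)                 # the algorithm finds the pattern on its own

# Prediction for an input it has never seen before.
print(model.predict([[10]]))    # approximately [21.]
```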
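And here is a sketch of the layered idea behind deep learning, written with the Keras API. The layer sizes and the toy data are arbitrary choices for illustration, not a tuned architecture.

```python
# A minimal deep-learning sketch: layers stacked on top of one another.
import numpy as np
from tensorflow import keras

# Toy data: 100 samples with 4 features each and a made-up binary label.
X = np.random.rand(100, 4)
y = (X.sum(axis=1) > 2).astype(int)

model = keras.Sequential([
    keras.Input(shape=(4,)),                      # the input data
    keras.layers.Dense(8, activation="relu"),     # first layer learns from the input
    keras.layers.Dense(8, activation="relu"),     # next layer learns from the first
    keras.layers.Dense(1, activation="sigmoid"),  # final layer makes the prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

print(model.predict(X[:3], verbose=0))  # predicted probabilities for 3 samples
```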


The AI Revolution

Artificial intelligence is changing the world at a very fast pace. With the advent of robots, nanotechnology, and the Internet of Things in the Fourth Industrial Revolution, AI has become the buzzword of the last decade and this one. Many countries are pumping a lot of money into research and innovation in this field. The biggest companies are likewise funding a great deal of research to come up with AI-driven solutions to world problems.

AI has penetrated virtually every field of work, and its impact is visible everywhere. In the automotive industry, cars are becoming self-driving; these cars autonomously evaluate traffic, movement, and alternative routes, cutting down on travel times. In production and manufacturing, robots are now used to complete tasks faster and more efficiently. These robots can work longer hours and in conditions that would be hazardous to humans. In the banking sector, AI can be used to predict which customers are worthy of credit approval. Surgical robots are now being used in healthcare to carry out delicate operations, and automated image diagnosis as well as virtual nursing assistants have revolutionized the sector in no small way.

AI can be used to make paintings and compose music tracks. In sports, it is used to predict game outcomes, ticket sales, and athletes’ performance. It is machine learning that determines the best ads to show in your social media feeds, classifies emails as spam or ham, suggests an appropriate reply to a message, translates languages in real time, and recommends the next videos to watch on YouTube or the songs you would most likely love when streaming online. Facebook, Twitter, and Instagram use AI to recommend people you should connect with as friends.

The reason for the explosive growth in AI

The next question to ask is: what caused this massive revolution? Two things, basically: superfast computers and data availability.

At the time of the Dartmouth conference, computers were painfully slow by today’s standards; the IBM 7090, one of the fastest machines of the late 1950s, could do up to 24,000 operations in a second. When Apple released its first iPhone in 2007, it could do up to 200,000 operations in a second. Today, AMD’s 8-core Bulldozer FX processor holds a clock-speed record of 8.429 GHz, billions of cycles per second. With such fast processors, computers can carry out extremely large computations in a short time.

The second ingredient of this AI revolution is data, and nothing has fueled data generation like the advent of the internet and social media. Any time you click on a link, like a picture, register for an online course, follow a celebrity, or purchase goods on Amazon, data is generated. Even offline activities like checking into a hotel or attending a conference generate data. It has become difficult to quantify the amount of data in the world; suffice it to say that the world creates roughly 2.2 billion gigabytes every single day, much of which holds untapped insights. With faster processors to crunch that data, progress keeps being made.

Tying it all together

With AI, programmers do not need to hardcode the exact task they want to perform. The machine can learn from data and past experience. AI mimics how a baby learns: babies use their sense organs to relate with the world, take actions, and learn from the consequences of their actions. AI models use their algorithms to learn from data, make decisions, and strive to reduce their error rate over time, as the sketch below illustrates.
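As a toy illustration of that error-reduction loop, the sketch below uses plain Python and gradient descent to fit a single weight w; the data and the learning rate are made up for the example.

```python
# Learning by reducing error over time: gradient descent on one weight w,
# so that w * x matches the targets (made-up data following y = 2x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0                # initial guess
learning_rate = 0.05

for step in range(50):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad   # adjust w to reduce the error
    if step % 10 == 0:
        error = sum((w * x - y) ** 2 for x, y in data) / len(data)
        print(f"step {step:2d}  w = {w:.3f}  error = {error:.4f}")

# w converges toward 2.0 as the error shrinks with each update.
```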

Artificial Intelligence is an interesting and ever-growing field. Even though there is still a lot of research and improvement ahead, its applications have become ubiquitous. The field is receiving a lot of investment from big companies and government agencies. It is estimated that artificial intelligence has the potential to double every industry’s growth rate.
