History of AI: Timeline and the Future

The concept of artificial intelligence (AI) dates back several decades and has roots in the origins of modern computing. The history of AI also includes the many ways television shows and movies have used it as a simplistic catchall for the fear that technology is taking over our lives. This is a misleading depiction, of course, especially in stories where artificial creations turn against their makers, as in “Blade Runner” and “2001: A Space Odyssey.”

Setting aside the anxieties that the technology may provoke, AI now offers many benefits to our lives. They include:

  • Increasing the efficiency of transportation
  • Limiting the need for human manual labor
  • Automating home care routines
  • Helping organizations make quicker, smarter decisions through data analytics
  • Improving the customer service experience

AI has evolved to the point of having multiple applications in various industries, and it continues to improve as more people use it and refine its capabilities. Dedicated academic disciplines have even grown up around artificial intelligence to formalize the development of these new technologies.

What Is Artificial Intelligence?

Artificial intelligence is commonly defined as computer systems that simulate human thinking and capacities such as learning. Modern AI is made up of different categories of systems that each have unique specializations. Given the numerous and varied types of artificial intelligence applications, it makes sense that more specific categories have developed over time. The broadest delineation of AI is between narrow (all current AI systems) and general (all potential future AI systems).

Narrow AI

Also known as “weak” AI, these tools perform a single, narrowly defined function that assists with a routine task. Examples include a digital assistant that can automate a series of steps and software that analyzes data to make recommendations. These tools usually require a person to set up the task or series of tasks, as well as to act on the information the AI provides.

General AI

General AI is sometimes referred to as “strong” AI. This category of AI does not yet exist, as every modern AI tool requires some level of human collaboration or maintenance. However, many developers continue to improve the capabilities of their systems in an effort to reach a level of effectiveness that requires less human intervention in the machine learning process.

Four Common AI Types

So, what is artificial intelligence beyond the categories of weak and strong? AI systems are generally divided into four main types. Like the weak and strong categories, these types range from practical applications that exist today to visions of what could exist in the future.

Reactive Machines

As the name implies, reactive machines are the simplest form of artificial intelligence system: They react to what is put in front of them. Their performance is reliable, as they respond the same way to the same stimuli every time. On the other hand, their functionality is very limited, because the AI does not learn or grow over time. A famous example is IBM’s chess-playing computer Deep Blue, which relied on programmed rules and variables to understand the game of chess, as the sketch below illustrates in miniature.
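
To make the idea concrete, here is a minimal, hypothetical sketch of a reactive agent in Python. The percepts and actions are invented for illustration; the point is the defining property of reactive machines: the same input always produces the same output, with no memory and no learning.

```python
# Minimal sketch of a reactive agent (hypothetical percepts and actions).
# A reactive machine is a fixed mapping from stimulus to response:
# it keeps no memory of past inputs and never updates its rules.

RULES = {
    "obstacle_ahead": "turn_left",
    "path_clear": "move_forward",
    "goal_visible": "move_toward_goal",
}

def react(percept: str) -> str:
    """Return the action wired to this percept; deterministic by design."""
    return RULES.get(percept, "stop")

# The same stimulus always yields the same response.
assert react("obstacle_ahead") == "turn_left"
assert react("obstacle_ahead") == "turn_left"  # identical on every call
```

Deep Blue operated at a vastly larger scale, but the principle was the same: it evaluated each position with fixed, programmed rules rather than learned experience.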

Limited Memory

Limited memory AI systems have some capacity to recall information and make predictions about what function is needed. They do require human feedback to train the machine on how best to perform in each instance. A common example of this type of AI is the automated chatbot that many organizations use to scale their customer support and streamline their interactions.
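
As a toy illustration (the events and responses here are hypothetical), the following Python sketch shows what distinguishes limited memory from a purely reactive system: the agent keeps a short window of recent observations and uses that context to decide its next response.

```python
from collections import deque

# Toy sketch of limited-memory behavior (hypothetical events and responses).
# Unlike a reactive machine, this agent's response depends on a short
# rolling window of recent observations, not just the current input.

class LimitedMemoryAgent:
    def __init__(self, window: int = 3):
        self.memory = deque(maxlen=window)  # only recent events are retained

    def observe(self, event: str) -> None:
        self.memory.append(event)

    def respond(self) -> str:
        # Invented rule: repeated complaints in recent memory trigger escalation.
        if self.memory.count("complaint") >= 2:
            return "escalate_to_human"
        return "standard_reply"

agent = LimitedMemoryAgent()
for event in ["greeting", "complaint", "complaint"]:
    agent.observe(event)
print(agent.respond())  # -> escalate_to_human
```

A production chatbot is far more sophisticated, but the rolling window captures the defining trait: recent context shapes the response, and human feedback is still needed to tune the rules.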

Theory of Mind

Theory of mind is the first of the two theoretical stages of AI development. It does not exist currently, but it represents the point when an artificial intelligence system will be able to discern emotion as it decides how to respond to a prompt from a user. Such a system might be able to tell whether there is an urgent tone in someone’s voice or whether someone is frustrated. This capacity would enable the system to adjust its responses to different situations far beyond what is possible today.

Self-Awareness

In the very distant future, an artificial intelligence system that has mastered theory of mind might be able to reach a stage of self-awareness. In this stage, the system would understand what it is and that it was made by humans. A self-aware AI system would essentially have human-level consciousness, which would allow it to adapt to immensely complex situations. This development would rely on vastly more advanced technology than what we have today.

Key Benefits of AI

The real-world applications of AI provide the potential for a few key benefits. AI tools can automate all sorts of tasks, whether mundane or complex, such as answering customer questions through a chatbot or analyzing large volumes of data to help make predictions. They can also predict what an employee or customer needs through recommendation engines, expediting their search experience. The applications are often limited only by the imagination of developers and the time they are willing to invest in refining the various systems.

Artificial intelligence promises to help organizations scale their teams and let their people focus on what truly needs their attention rather than menial tasks. Its key benefits often lie in augmenting human work rather than replacing it. While a great deal can be automated, human involvement is often necessary to avoid overreliance on an imperfect technology.

When Was AI Invented?

The concept of machines thinking like humans has a long history going back many centuries, with philosophers as far back as the 1700s writing about how knowledge is constructed and whether it could be predicted in some way. The possibility only came to fruition in the 1950s, however, when AI was invented largely thanks to two computer scientists: Alan Turing and John McCarthy.

Turing is considered the “father of AI” due in part to his work introducing the Turing Test in 1950. The test provides a theoretical means of distinguishing a machine from a human through a series of questions, centered on the issue of whether a machine can think. McCarthy coined the term “artificial intelligence” in 1955 as part of a research proposal. He wanted to test the theory that every feature of intelligence could be described precisely enough for a machine to simulate it.

Who Invented AI?

While Alan Turing had the famous test named after him, John McCarthy is usually acknowledged as the person who invented AI. The coining of the term is attributed to his proposal for the 1956 Dartmouth Summer Research Project on Artificial Intelligence, and that convening served as the basis for much of the early foundational development of AI theory.

From there, development of AI continued through organizations such as the Defense Advanced Research Projects Agency (DARPA) in the U.S. In the 1970s, DARPA’s projects included street mapping that allowed users to view immersive, interactive maps of cities; decades later, its research also helped give rise to the digital personal assistants we use today. Research and development of other AI projects over time have contributed to numerous advancements that we now use in our everyday lives.

Timeline of Artificial Intelligence

The centuries leading up to the 1950s saw the emergence of several philosophical and logical concepts that served as the foundation for theories of artificial intelligence. Ancient Greek philosophers had a major influence on Western culture with their ideas about the essence of consciousness, human thought, and learning. Over hundreds of years, as technology became further integrated into human life, these concepts evolved to focus on the possibility of machines gaining the capacity to learn, and eventually on artificial intelligence itself.

The timeline of artificial intelligence specifically dates back to 1763 and the publication of Thomas Bayes’ framework for reasoning about the probability of events. Known as Bayesian inference, it remains a leading approach in machine learning. The early 1900s saw the first depictions of robots in popular media around the world, such as the 1927 film “Metropolis”; Japan built its first functioning robot, Gakutensoku, in 1928.
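
For the mathematically curious, Bayesian inference rests on Bayes’ theorem, which describes how to update the probability of a hypothesis $H$ after observing evidence $E$:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

Here $P(H)$ is the prior belief in the hypothesis, $P(E \mid H)$ is how likely the evidence would be if the hypothesis were true, and $P(H \mid E)$ is the updated (posterior) belief. Machine learning systems built on this idea revise their predictions along these lines as new data arrives.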

Building upon these early roots, here is a high-level timeline of the major events during the rapid rise of artificial intelligence over the past several decades:

  • 1950s: Alan Turing publishes his seminal work, “Computing Machinery and Intelligence,” and the term “artificial intelligence” is coined by John McCarthy. McCarthy also develops the popular programming language Lisp, which is used in AI research.
  • 1960s: The first industrial robot starts working at a General Motors factory. The program ELIZA is developed, which is able to carry on a conversation with a person in English.
  • 1970s: The first anthropomorphic robot is built in Japan with the very basic ability to see, move, and converse. An early bacteria identification system is developed at Stanford University.
  • 1980s: Mercedes-Benz tests out the first driverless car that embodies the foundational principles of such cars made today. Jabberwacky is released as an early example of a modern chatbot system.
  • 1990s: Deep Blue, a chess-playing computer, beats the reigning world champion. Google’s first web index has 26 million pages.
  • 2000s: Several new robots are developed, such as Honda’s ASIMO and MIT’s Kismet. The amount of digital information being produced counts in the hundreds of exabytes and is growing fast. Google’s web index grows to 1 billion pages, up from 26 million just two years earlier.
  • 2010s: IBM’s Watson natural language processing computer defeats two former champions on the television show “Jeopardy!” The number of internet users worldwide surpasses 4 billion.

We are just at the beginning of the current decade, but further developments will doubtless come as the amount of data created and consumed by users increases exponentially. The pace of data proliferation has played a major role in AI’s evolution, as has the ease with which researchers can access this information, collaborate with one another, and share their results.

Future of AI

The future of AI is one where the benefits of this technology become more integrated into our daily lives. Even after several decades of research, AI is still in a relatively early stage of its development. There are many opportunities for these tools to impact areas such as healthcare, transportation, and retail.

In terms of healthcare, AI has the potential to increase access to personalized treatment. As machine learning evolves, systems will be able to diagnose illnesses and dispense medications without the need for a visit to a doctor’s office. In addition, medical research will become increasingly efficient as data can be analyzed and shared more quickly.

Transportation is an area where automation is already taking hold. Local trains often operate without a driver on board, and we’ll see more driverless cars and trucks on the roads. The positive outcomes will be fewer accidents, greater efficiency, and less stress on drivers.

Retail is another area where the pace of AI implementation will increase. This change will occur mainly in automated warehouses, where large inventories can be managed without overwhelming human workers. In addition, recommendations for customers will continue to evolve and become more relevant.

Help Build a More Technologically Advanced World

Whether in driverless cars, smart speakers, or chatbots, AI has an important role to play in our lives. The future of AI promises to build on these technologies, freeing people in many fields to focus on the work that matters most. As has been the case over the past several decades of artificial intelligence development, we must always keep in mind the importance of ethics in this work. AI must serve everyone and not create undue harm in people’s lives, for example by reducing employment opportunities through automation, increasing social isolation, or perpetuating bias.

As the world of AI continues to grow, the need for dedicated and trained professionals working in this space will also grow. Explore the opportunities through Maryville University’s online Master of Science in Artificial Intelligence and AI certificate programs. Through focused higher education and training in this future-oriented field, you can be a part of creating a more technologically advanced world for all.
