What is AI? The Development of Artificial Intelligence from Past to Present

Let's take a look at what AI (artificial intelligence) is, along with its types and application areas. AI became a big part of our lives well before the millennium, helped carry human beings into space, and is now indispensable in our daily lives.

Are we living in a simulation? Who are we? Is the human brain really unbeatable? Everyone must have asked themselves such questions at some point. You have probably seen, or at least heard of, the movie The Matrix. Now imagine living in a Matrix: would you realize that you were living in a world made up entirely of code?

The answer is simple: you wouldn't. Unless, of course, you were woken up in the real world. So how close is today's artificial intelligence to what we see in The Matrix? Will it take over our lives and push us out of our jobs, or will it help us with our work and give us more time for ourselves? Let's talk a little about what artificial intelligence is, how it has developed, and what awaits us in the future.

What is AI (Artificial Intelligence)?

AI is the abbreviation of "artificial intelligence". It is a broad branch of computer science concerned with building intelligent machines that can perform, without human assistance, tasks that usually require human intelligence.

Types of AI (Artificial intelligence):

  • Reactive machines
  • Limited memory
  • Theory of mind
  • Self-aware

Reactive machines: Use their intelligence only to perceive and react to the world in front of them

A reactive machine follows the most basic AI principles and, as the name suggests, can only use its intelligence to sense and react to the world in front of it. A reactive machine cannot store memories, and consequently cannot rely on past experience to inform decision making in real time.

One of the most famous examples of a reactive machine is Deep Blue, the chess-playing supercomputer IBM designed in the 1990s, which defeated grandmaster Garry Kasparov in a game. Deep Blue could only identify the pieces on a chessboard, know how each piece moves according to the rules of chess, register the current position of each piece, and determine the most logical move at that moment.
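To make the idea concrete, here is a minimal Python sketch of a reactive agent. The state encoding and scoring heuristic are invented for illustration, and this is not Deep Blue's actual algorithm; the point is only that the agent is a pure function of the current state and keeps no memory between decisions.

```python
# A reactive agent as a pure function of the current state: it scores each
# legal move against the position it can see right now and stores nothing.
# The position encoding and heuristic below are purely illustrative.

def evaluate(position: str) -> int:
    """Toy heuristic: score a resulting position (higher is assumed better)."""
    return sum(1 for ch in position if ch.isupper())

def reactive_move(position: str, legal_moves: list[str]) -> str:
    """Pick the move whose immediate result scores best; no history is used."""
    return max(legal_moves, key=lambda m: evaluate(position + m))

# Called twice with the same position, the agent answers the same way both
# times, because nothing from the first call is remembered.
print(reactive_move("abC", ["x", "Y", "z"]))  # -> "Y"
```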

Limited memory: Capable of storing previous data and predictions

Limited memory AI can store previous data and predictions while gathering information and weighing potential decisions, essentially looking to the past for clues about what might happen next. Limited memory AI is more complex than reactive machines, and it offers greater possibilities.

When using limited memory AI in machine learning, six steps must be followed: the training data must be created, the machine learning model must be created, the model must be able to make predictions, the model must be able to receive human or environmental feedback, that feedback must be stored as data, and these steps must be repeated as a loop. A minimal sketch of this loop appears below.
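Here is a rough Python sketch of those six steps, using a deliberately tiny one-parameter "model". The function names and the feedback source are invented for the example and are not a real API.

```python
# A minimal sketch of the six-step limited-memory loop: create data, create a
# model, predict, receive feedback, store the feedback as data, and repeat.
# The one-parameter "model" and the feedback source are illustrative only.

training_data: list[tuple[float, float]] = []  # step 1: training data store
weight = 0.0                                   # step 2: a trivial model

def predict(x: float) -> float:
    return weight * x                          # step 3: the model makes a prediction

def get_feedback(x: float) -> float:
    return 2.0 * x                             # stand-in for human/environmental feedback

for step in range(100):                        # step 6: repeat as a loop
    x = float(step % 10 + 1)
    target = get_feedback(x)                   # step 4: receive feedback
    training_data.append((x, target))          # step 5: store the feedback as data
    weight += 0.01 * (target - predict(x)) * x # adjust the model using the feedback

print(f"learned weight = {weight:.2f}")        # converges toward 2.0
```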

Theory of mind: Can understand how people feel

Theory of mind is just that: theoretical. We have not yet reached the technological and scientific capability necessary to move to this next level of artificial intelligence.

The concept is based on the psychological premise that other living things have thoughts and feelings which affect their own behavior. For AI machines, this would mean understanding how humans, animals, and other machines feel and make decisions through self-reflection and determination, and then using that information to make decisions of their own.

Self-aware: Understands the presence and emotional state of others

Once theory of mind is established in AI, at some point well in the future, the final step will be for AI to become self-aware. This type of AI would have human-level consciousness and understand its own presence in the world as well as the presence and emotional state of others. It would be able to understand what others may need based not just on what they communicate, but on how they communicate it.

What are some application areas of artificial intelligence?

  • Smart assistants like Siri, Alexa, and Google Assistant
  • Driverless vehicles
  • Robots
  • Chatbots
  • Email spam filters
  • Recommendation systems in apps like Netflix, YouTube, and Spotify

So, how did artificial intelligence first appear?

Intelligent robots and artificial beings first appeared in ancient Greek myths. Even then, Aristotle's development of the syllogism and his use of deductive reasoning marked a pivotal moment in humanity's quest to understand its own intelligence. Although its roots are long and deep, the history of artificial intelligence as we think of it today spans less than a century. Let's take a closer look at its historical development.

1940s:

  • (1943) Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity". The paper contains the first mathematical model for building a neural network.
  • (1949) In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experience and that the connections between neurons grow stronger the more frequently they are used. Hebbian learning remains an important model in artificial intelligence; a minimal sketch of the rule follows this list.
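As a quick illustration of Hebb's rule, here is a minimal Python sketch of the classic formulation dw = eta * x * y, in which a connection strengthens whenever its input and the output neuron fire together. The setup and numbers are invented for the example.

```python
# Hebb's rule in its simplest form: dw_i = eta * x_i * y, so the weight from
# input i grows whenever input i and the output are active at the same time.
import random

eta = 0.1        # learning rate
w = [0.0, 0.0]   # weights from two input neurons to one output neuron

random.seed(0)
for _ in range(100):
    x = [float(random.random() < 0.5), float(random.random() < 0.5)]
    y = x[0]                      # in this toy setup, input 0 drives the output
    for i in range(2):
        w[i] += eta * x[i] * y    # Hebbian update: co-activity strengthens w[i]

# w[0] grows on every firing of input 0; w[1] grows only when both inputs
# happen to fire together, so it ends up roughly half as large.
print(w)
```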

1950s:

  • (1950) Alan Turing publishes "Computing Machinery and Intelligence", which proposes what is now known as the Turing Test, a method for determining whether a machine is intelligent. Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
  • (1950) Claude Shannon publishes the article “Programming a Computer for Playing Chess”. Isaac Asimov publishes the “Three Laws of Robotics”.
  • (1952) Arthur Samuel develops a self-learning program for playing checkers.
  • (1954) The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.
  • (1956) The term "artificial intelligence" is coined at the Dartmouth Summer Research Project on Artificial Intelligence, led by John McCarthy. The conference, which defined the scope and goals of AI, is widely considered the genesis of artificial intelligence as we know it today.
  • (1956) Allen Newell and Herbert Simon produce the Logic Theorist (LT), the first reasoning program.
  • (1958) John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense". The paper proposes the hypothetical Advice Taker, a complete AI system able to learn from experience as effectively as humans do.
  • (1959) Allen Newell, Herbert Simon, and J.C. Shaw develop the General Problem Solver (GPS), a program designed to simulate human problem solving. Herbert Gelernter develops the Geometry Theorem Prover program. Arthur Samuel coins the term "machine learning" while at IBM. John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.

1960s:

  • (1963) John McCarthy starts the Artificial Intelligence Laboratory at Stanford.
  • (1966) The US government's Automatic Language Processing Advisory Committee (ALPAC) report details the lack of progress in machine translation research, a major Cold War initiative that had promised automatic, instant translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.
  • (1969) The first successful expert systems are developed at Stanford: DENDRAL, a program for identifying organic molecules, and MYCIN, designed to diagnose blood infections.

1970s:

  • (1972) The logic programming language PROLOG is created.
  • (1973) The British government publishes the "Lighthill Report", detailing the disappointments of artificial intelligence research; it results in severe cuts to funding for AI projects.
  • (1974-1980) Frustration with the slow progress of AI development leads to massive cuts in academic grants. Together with the earlier ALPAC report and the previous year's "Lighthill Report", AI funding dries up and research stalls. This period is called the "First AI Winter".

1980s:

  • (1980) Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 effectively ends the first AI winter, kicking off an investment boom in expert systems that will last for much of the decade.
  • (1982) Japan’s Ministry of International Trade and Industry initiates the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop a platform for supercomputer-like performance and AI development.
  • (1983) In response to Japan's FGCS, the US government initiates the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.
  • (1985) Companies spend more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies such as Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.
  • (1987-1993) As computing technology improves, cheaper alternatives emerge, and the Lisp machine market collapses in 1987, ushering in the "Second AI Winter". During this period, expert systems prove too expensive to maintain and update, and they eventually fall out of favor.

1990s:

  • (1991) US forces use DART, an automated logistics planning and scheduling tool, during the Gulf War.
  • (1992) Japan ends the FGCS project, arguing that the ambitious goals outlined a decade earlier had not been met.
  • (1993) DARPA terminates the Strategic Computing Initiative after spending nearly $1 billion and falling far short of expectations.
  • (1997) IBM's Deep Blue beats world chess champion Garry Kasparov.

2000s:

  • (2005) STANLEY, a self-driving car, wins the DARPA Grand Challenge. That same year, the US military begins investing in autonomous robots such as Boston Dynamics’ “Big Dog” and iRobot’s “PackBot”.
  • (2008) Google breaks new ground in speech recognition, introducing the feature in its iPhone app.

Transition period, 2010-2014:

  • (2011) IBM's Watson beats human champions at Jeopardy!. In the same year, Apple launches Siri, an AI-powered virtual assistant, on the iOS operating system.
  • (2012) Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural network 10 million YouTube videos as a training set using deep learning algorithms. The network learns to recognize a cat without ever being told what one is, ushering in a breakthrough era for neural networks and deep learning funding.
  • (2014) Google builds the first self-driving car to pass a state driving test. In the same year, Amazon launches Alexa, a virtual home assistant.

2015-present:

  • (2016) Google DeepMind's AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle for AI, but AlphaGo succeeded. That same year, Hanson Robotics creates Sophia, a humanoid robot capable of facial recognition, verbal communication, and facial expression, which later becomes the first "robot citizen".
  • (2018) Google releases BERT, its natural language processing engine, reducing barriers to translation and comprehension in machine learning applications. That same year, Waymo launches Waymo One, a service that lets users in the Phoenix metropolitan area request a pickup from one of the company's self-driving vehicles.
  • (2020) Baidu makes its LinearFold AI algorithm available to scientific and medical teams working to develop a vaccine in the early stages of the SARS-CoV-2 pandemic. The algorithm can predict the secondary structure of the virus's RNA sequence in just 27 seconds, 120 times faster than other methods.

That is the historical development of artificial intelligence so far. With the concept of the metaverse now introduced and the NFT era already underway, the future is not far off. Who knows, maybe 50 years from now there will be artificial intelligences that can simulate the past. What do you think? How will artificial intelligence develop in the future? Don't forget to share your thoughts in the comments.
