The idea of AI has been around since ancient times, but Alan Turing, working in the middle of the 20th century, is regarded as one of its first pioneers. In the 1950s, computers were not yet powerful enough to support AI, and leasing one could cost up to $200,000 per month.
– In the mid-1950s, Newell, Shaw, and Simon developed Logic Theorist, widely regarded as the first Artificial Intelligence program. It used symbolic reasoning to prove mathematical theorems.
– In the 1960s and 1970s, there was an increase in funding for artificial intelligence, with backing from DARPA (the US Defense Advanced Research Projects Agency).
– In the 1980s, significant advances in computer technology enabled both “deep learning” techniques, which allowed computers to learn from experience, and expert systems, which could imitate human decision-making.
– From the late 1990s into the 2000s, AI became increasingly commonplace thanks to the success of IBM’s Deep Blue, Furby toys, Roombas, and Apple’s Siri virtual assistant, just some of the examples that demonstrated how accessible AI had become.
– Developers are creating a greater range of AI-driven programs, including autonomous vehicles, machine learning tools, chatbots, virtual assistants, and beyond; this is leading to an ever-expanding Internet of Things (IoT) and plenty of new possibilities.
AI has its roots in ancient times and in tales of artificial beings with consciousness, such as the golem of Jewish mythology. These creatures were fashioned from nonliving matter such as clay and brought to life through incantation.
The origins of contemporary AI can be traced back to the first computers of the mid-20th century and Alan Turing, a British mathematician, cryptographer, and computer scientist. His 1950 work “Computing Machinery and Intelligence” was seminal.
Turing’s question of whether machines can use knowledge and logic to solve problems and make decisions as humans do still guides current efforts to build and advance AI technologies.
During Turing’s lifetime, the development of AI was largely hampered by technological constraints. Computers of the time were extremely expensive to lease (up to $200,000 per month in the 1950s) and very basic compared with current technology.
Turing’s contemporaries also faced the problem that computers of the era could execute instructions but not store them, meaning they could carry out tasks but could not remember what they had done afterward.
Developed in the mid-1950s by Allen Newell, Cliff Shaw, and Herbert Simon, Logic Theorist was one of the first AI programs. It used symbolic logic to prove mathematical theorems, a groundbreaking development for Artificial Intelligence, and it has had an enduring influence on cognitive psychology in the decades since.
In the 60s and 70s, computing technology experienced rapid growth. Computers were able to execute tasks faster and store more data. Most importantly, they became more widely available and more affordable.
Other computer scientists, influenced by Newell, Shaw, and Simon, developed algorithms and software better suited to solving particular problems. Joseph Weizenbaum’s ELIZA was one such program: an early natural language processor that simulated conversation by matching patterns in a user’s typed input and replying with scripted responses.
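As a rough illustration only, the sketch below shows the general flavor of that keyword-and-reflection technique. The rules are hypothetical and far simpler than Weizenbaum’s original decomposition and reassembly scripts:

```python
import re

# A minimal, illustrative sketch of ELIZA-style pattern matching.
# The rules below are hypothetical; the original program used far more
# elaborate decomposition and reassembly scripts.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father) (.+)", re.I), "Tell me more about your {0}."),
]

def respond(user_input: str) -> str:
    """Return a canned reflection based on the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Echo the captured fragment of the user's own words back at them.
            return template.format(*match.groups())
    return "Please, go on."  # default reply when nothing matches

if __name__ == "__main__":
    print(respond("I am feeling a bit tired"))  # -> How long have you been feeling a bit tired?
    print(respond("I need a vacation"))         # -> Why do you need a vacation?
```

Even this toy version hints at why ELIZA felt conversational: reflecting the user’s own words creates an illusion of understanding without any real comprehension of language.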
Scientists such as Marvin Minsky had high hopes for AI in the 1960s, predicting that a machine with the intellectual capabilities of an average human was only a few years away. Generous funding from DARPA to leading universities and research teams spurred developments at a rapid pace.
Computer scientists faced many obstacles while attempting to reach the goal of creating AI machines that could replicate human-specific skills such as natural language processing, self-recognition, and abstract thinking. One major issue was the lack of sufficient computational power in existing computers.
In the 1980s, advances in “deep learning” technology meant computers could be taught to learn from experience and acquire new skills. Expert systems, a concept introduced by Edward Feigenbaum, were another attempt to replicate human decision-making.
In 1997, IBM’s Deep Blue made history by beating grandmaster Garry Kasparov in a highly publicized chess match, a feat that won Artificial Intelligence (AI) an unprecedented level of public recognition.
In 1998, the Furby, a groundbreaking artificial intelligence “pet” robot toy, was released. Around the same time, speech recognition software had become sophisticated enough to be integrated into Windows, another mark of AI’s growing influence on public life.
In 2000, the release of two robots, Kismet and ASIMO, brought AI closer to science fiction. Kismet had a human-like face and could recognize and simulate emotions, while Honda’s humanoid ASIMO displayed similarly life-like qualities.
In 2009, Google built a prototype self-driving car, although the technology was not revealed to the public until later.
In the last decade or so, there has been a tremendous proliferation of AI technologies and applications. ImageNet, launched in 2007, plays an integral role as an annotated image database used to train AI programs to recognize what they see.
AI technology has quickly moved from the realm of science fiction to reality. IBM’s Watson won Jeopardy! in 2011, the same year Apple introduced the Siri virtual assistant; video games have long incorporated AI, and Amazon’s Alexa and Microsoft’s Cortana have only further fueled its popularity.
In this era of the Internet of Things (IoT), AI is becoming increasingly widespread; self-driving cars, machine learning algorithms, chatbots, virtual assistants, and more all rely on AI and are advancing rapidly in both number and sophistication.
The launch of ChatGPT in late 2022 has been widely publicized, and its user base is growing fast. Investor interest in AI firms is rising despite criticism over potential ethical issues, and the outlook for AI appears promising, attracting ever more attention.
The history of AI has been a rollercoaster ride, full of ups and downs, breakthroughs and setbacks. However, one thing is certain: AI will continue to shape our world in ways we cannot yet imagine, and we must embrace this technology with caution and foresight.
Source: Decrypt