Question: In what decade was the term artificial intelligence (AI) coined?
- A) 1950s
- B) 1970s
- C) 1980s
- D) 1990s
The answer is: A) 1950s. Unless you have a background in the field, the answer might surprise you. While AI has always been a part of our science fiction musings, it’s only recently become a cultural phenomenon with the release of ChatGPT, AI image generators, and other AI-powered tools available to the general public. However, AI is a field (and industry) that’s been around for almost 70 years.
With every advancement in AI, threat actors are also using the technology to increase the sophistication and impact of their attacks. In this post, we’ll look at the origins of AI, highlight key moments in its development, trace how it’s evolved over the past five years, and examine AI’s relationship to cyber risk and security.
Defining AI and Where It Started
When we think of AI, we usually picture famous examples from books and films (looking at you, “I, Robot” and “The Matrix”). In these depictions, AI is an intelligent robot or computer with thoughts and feelings (and malicious intent toward humans). They’re fun, but they can muddy our understanding of AI and how it started.
What is AI?
IBM defines AI as “A field that combines computer science and robust datasets to enable problem solving.” The problem solving occurs through the intelligent machines and applications that we engineer. So AI can be everything from intelligent data analysis apps to Starship delivery robots.
Ever wonder how an AI tool defines artificial intelligence? Here’s what ChatGPT had to say:
“Artificial Intelligence (AI) refers to the simulation of human intelligence in machines, designed to mimic cognitive functions such as learning, problem-solving, reasoning, perception, and language understanding. AI systems can be programmed to perform tasks that typically require human intelligence, with the aim of improving efficiency, accuracy, and reliability.”
The Origin of AI
The concept of AI has been around for hundreds of years, and we’ve seen predictive writing about intelligent machines from great scientific minds like Nikola Tesla. Experts in the field, however, tend to trace the first instance of practical AI back to Alan Turing, a British mathematician and logician. In the 1940s, Turing designed an electromechanical code-breaking machine called the Bombe, which deciphered messages encrypted by the German Enigma cipher machine. The Bombe is sometimes described as a primitive forerunner of machine learning.
Turing went on to publish “Computing Machinery and Intelligence” (1950), a paper in which he discussed intelligent machines and introduced the Turing Test. The Turing Test is a method of inquiry for determining whether a machine or application can successfully mimic human responses under certain conditions. Basically, can a computer trick you into thinking it’s not a computer?
The Evolution of AI Post-Turing
After Turing developed the Bombe and published his paper, the term “artificial intelligence” still wouldn’t appear until 1956, when John McCarthy and Marvin Minsky launched the Dartmouth Summer Research Project on Artificial Intelligence (and its accompanying conference).
Fun fact: The Logic Theorist program, created in 1955 and presented at the conference, is considered the first AI program. After the conference, AI would continue developing over the next few decades as computing speed, capacity, and our collective imagination grew. Let’s take a look at a few of the highlights.
Historical AI Highlights
Many AI programs and machines entered the scene after the Dartmouth conference in the ‘50s and ‘60s.
- Unimate, an industrial robot used in automobile assembly lines, was unveiled in 1961. According to inventor George Devol, the machine had its own “memory.”
- Four decades before Siri, there was ELIZA, a conversational program created by Joseph Weizenbaum in the mid-1960s that is considered a prototype for interactive assistants like Siri and Alexa.
- In 1970, Marvin Minsky famously told Life Magazine, “From three to eight years, we will have a machine with the general intelligence of an average human being.” Minsky was ambitious but still far off. A couple of years later, Japan’s Waseda University introduced WABOT-1, a full-scale humanoid robot with movable limbs that could see and talk.
- In the ‘80s, ‘90s, and early 2000s, we saw Mercedes-Benz release a driverless van (1986), the earliest versions of chatbots (late ‘80s to early ‘90s), and the release of Roomba, the autonomous AI robot vacuum (2002).
While each of these decades played a significant role in the development of AI, experts in the field consider the past five years an explosion in AI technology and tools.
AI in the Past Five Years
Stanford’s 100 Year Study on Artificial Intelligence found that in the past five years, the field of AI has progressed in nearly every one of its subcategories: vision, speech recognition and generation, natural language processing (understanding and generation), image and video generation, multi-agent systems, planning, decision-making, and integration of vision and motor control for robotics. Here are a few examples:
- Language processing technology: ChatGPT was released at the end of 2022 and is considered one of the fastest-growing consumer services ever. While we’ve seen language processing technology before (hello again, ELIZA), these newer programs have significant enhancements, like the ability to process and learn from large amounts of complex, context-sensitive data. OpenAI has reported that its GPT-3 models generate an average of 4.5 billion words per day across more than 300 applications (and growing).
- Computer vision and image processing: Intelligent apps and machines that use algorithms to understand images have become highly developed in the past five years — with the market growth to show for it. We currently use this technology to change the backgrounds in our Zoom calls, snap a filtered photo on our phones, shop at cashier-less grocery stores, or enjoy the perks of a self-parking car. These tools use deep learning to understand images and classify what they can “see.”
The Controversial/Dark Side of AI’s Growth
The growth in AI is both exciting and somewhat controversial. For example, ChatGPT’s ability to produce written content is impressive and for some, a cause for concern. Teachers and schools are grappling with “AI-generated plagiarism,” or students using these tools to produce some (or all) of their assignments.
This controversy also extends to AI images. Experts are both impressed and worried by the work produced by AI image-generating programs like DALL-E 2 and Imagen. On the one hand, the photorealistic images showcase the dramatic improvements in this type of tech. Still, these programs raise questions about whether such images undermine the creation of new art, and how human artists should be compensated when AI models draw on their work.
These concerns are smaller parts of a larger conversation around who should be allowed to regulate AI, how developers and organizations can monetize their AI technology, and whether we should develop AI technology without ethical/legal guidelines. Currently, governments are struggling to simply define the term for purposes of regulation.
Aside from these questions, there’s also the use of AI by threat actors. The evolution of AI is sparking a concurrent shift in the techniques that threat actors use to target victims.
Cyber Risk and AI
In the past few years, cybersecurity experts tracking the use of AI in cyber attacks have seen growth in both the scale and sophistication of AI-assisted cyber attacks. From AI-supported hacking to password generation, more threat actors are leveraging AI technology to improve the methods and effectiveness of their attacks.
In 2019, cybercriminals used deepfake audio to convince a U.K. executive to wire over $200,000 into a fraudulent account. The victim believed he was speaking to an executive at the parent company. The rise of deepfake technology like this falls under the image processing umbrella of AI and is just one example of cybercriminal abuse of the technology.
Phishing campaigns are also on the rise as threat actors use AI tools like ChatGPT to generate more effective email, text, and voice campaigns. Not only is the quality of phishing emails better, but experts are finding that AI tools help threat actors launch larger campaigns. AI is also lowering the barrier to entry for cybercriminals. In other words, targeting and attacking a victim requires less technical skill, which means more threat actors will try.
Cybersecurity and AI
The news isn’t all bleak, however. As threat actors leverage AI technology to improve their methods, so are cybersecurity experts. In fact, more organizations are investing in AI and machine learning in their IT budgets than ever before. AI helps organizations fight cybercriminals by providing pattern recognition for automated threat response, continuous monitoring to detect attacks in real time, and data analysis that filters out false positives for more accurate and efficient threat response.
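To make the “pattern recognition for automated threat response” idea concrete, here is a minimal sketch of one classic technique: flagging a metric (say, failed logins per hour) when it deviates sharply from its historical baseline. The threshold, the z-score approach, and the toy numbers are all illustrative assumptions, not any vendor’s actual detection logic.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it deviates from the historical baseline
    by more than `threshold` standard deviations (a z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:  # flat baseline: any change is anomalous
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Hourly counts of failed logins on a normal day (toy data).
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 7, 6]

print(is_anomalous(baseline, 6))    # → False (typical hour, no alert)
print(is_anomalous(baseline, 90))   # → True (sudden spike, trigger response)
```

Production systems layer far richer models on top of this, but the core loop — learn a baseline, score new events against it, act automatically on outliers — is the same.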
At Black Kite, we use AI and machine learning to ingest information. When calculating compliance levels for third-party vendors, our natural language processing (NLP) and deep learning techniques help us correlate cyber risk findings to industry standards and best practices. Linking cyber risk findings to industry standards allows us to accurately report on a company’s or its vendors’ level of compliance with standards like NIST 800-53, HIPAA, GDPR, and more. Companies can leverage this information to improve their compliance levels or assess the level of compliance of potential partners.
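The general idea of correlating a risk finding to a control framework can be sketched with a deliberately simplified keyword match. The control IDs and keyword sets below are hypothetical illustrations (real frameworks like NIST 800-53 define hundreds of controls), and real NLP models are far more sophisticated than string matching.

```python
# Hypothetical, heavily simplified control mapping for illustration only.
CONTROL_KEYWORDS = {
    "NIST 800-53 AC-2": {"account", "credential", "access"},
    "NIST 800-53 SI-2": {"patch", "vulnerability", "update"},
    "HIPAA 164.312(e)": {"encryption", "transmission", "tls"},
}

def map_finding(finding_text):
    """Return the controls whose keywords appear in a risk finding."""
    words = set(finding_text.lower().split())
    return sorted(c for c, kws in CONTROL_KEYWORDS.items() if words & kws)

print(map_finding("Outdated TLS encryption on public web server"))
# → ['HIPAA 164.312(e)']
print(map_finding("Missing patch for known vulnerability"))
# → ['NIST 800-53 SI-2']
```

A production pipeline would replace the keyword sets with learned text representations, but the output is the same shape: each finding lands on the controls it provides evidence for.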
AI in the Everyday
Outside of science fiction, AI has been a background player in our lives for years. We benefit from technological advances in the field, but until the past few years, AI failed to capture the public’s attention. Now, AI and its applications are at the forefront of most industries and quickly becoming a part of our personal lives.
As AI continues to evolve, society must grapple with questions around security and regulation, ethical usage, and more. Both individuals and corporations will also have to be on guard against the rising cyber risk that AI presents. Practicing cybersecurity awareness can help, and for companies, leveraging the latest AI-backed security technologies can help them stay one step ahead of threat actors.