TLDGPD (Too Long, Didn’t Get a Programming Degree) - Artificial Intelligence History Sparknotes
Unit 1: Age of AI; Topic 2: History of AI
Hello Angel,
A quick note on how to engage with this article.
1. Like a class curriculum, each topic is part of a larger unit that dives into a distinct subject. Think of it like a vinyl record: each topic can be enjoyed as a track or fully experienced as part of the album. Subscribe to get them weekly in your inbox!
2. No knowledge gatekeeping here! We’re neophytes ourselves, presenting our laywomen’s research from topics that make us curious. If you see a bolded term, it is defined in the glossary at the end of this article.
3. Look out for underlined passages; these are your portals down the knowledge rabbit hole and lead to further reading. Insatiable? Original sources are merchandised on our interactive reference page at the end.
4. This is a *reciprocal* experience! We invite you to contemplate the discussion questions at the back. Or even cooler, forward the topic to a friend, colleague, or your crush and start a meaningful conversation.
Curiously,
Ambi & Abbey
Since strutting upright, humans have had a fascination with playing God and creating life. If Frankenstein taught us literally anything, it's that this type of macabre deus ex machina doesn't always end well.
The age of AI seems to be dawning on us, or at least Bill Gates is tolling the bell. No human, or robot for that matter, can confidently predict the future, so consider this discussion caveated: any projections are no more than mere speculation.
History doesn't repeat itself, but it rhymes - and we can better equip ourselves for the murky waters of artificial intelligence by reviewing some cursory context.
The topic on everyone’s lips (or fingertips) seems to be what technology holds for subsequent generations.
Robot domination is not on the horizon; it's been on. Robots are fixtures in our households - dishwashers, Alexas, and smart devices are permanent installations in American life.
The ChatGPT that has been clogging your Twitter feed did not materialize out of vapor, though its capabilities tower over its predecessors.
Artificial Intelligence’s Midlife Crisis: A Cursory History of AI
Today’s near-future-sightedness is reminiscent of the retro-futurism of the midcentury, when robots and AI as we know it were initially conceived. “When two robots love each other very much…”
The 1950s is a natural starting point for this story, as that was when the modern definition of AI was conceived. Alan Turing, aka Big Daddy AI, was the first to postulate a machine that could converse with humans - a machine that would enter the uncanny valley, where humans can no longer discern the difference between the computer and another person (the premise behind what we now call the Turing test).
Alan's theory of a talking robot lit the tech scene's greed *ahem* imagination on fire. Shortly after Mr. Roboto uttered his first words, a 1956 Dartmouth workshop officially coined the term artificial intelligence.
A flood of Ivys soon sprinted to develop artificial intelligence programs, salivating at the prospect of pioneering the unplowed field. From the 1950s to the 1980s, global leaders took note of the profit *ahem* the promise of this emerging technology, and invested. Everyone was drinking that Jetsons juice.
The retro-futuristic lens of the 1950s had a rose-colored tinge - robots were doing our laundry and we were going to the moon! (Not like that, crypto bros).
But after a few decades of seemingly endless research, as the horizon for realizing this technology stretched into the indiscernible future, investors lost steam, and a sort of “AI-winter” froze progress. (This time you can relate, crypto bros.)
While AI progress itself might have lost some steam, there were a number of technological advancements from the 1980s to now that certainly snowballed into the explosion we're seeing today. Without going too deep, here's a quick timeline for reference:
AI Road Test: Not Quite Ready for Its Driver's License
Today, the field is gaining ground in the public sphere, but privately, automation at the hands of artificial intelligence has been underway for decades. As a society, we are so well acquainted with specialized AI - programmed to surgically execute a specific task - that we barely acknowledge its presence.
Without consciously registering it, the average American interacts with specialized AI multiple times a day - the machine learning algos that keep the hamster wheel of our TikTok feeds spinning or the Roomba that dutifully sweeps away our Cheez-It crumbs.
In a global survey, 84% of respondents used at least one AI-powered device in their daily lives.
These specialized AIs have various applications, such as machine learning, robotics, imaging, and natural language processing. They enable humans to offload the most mundane and routine tasks. These types of AI technologies are equipped to solve problems within a narrow scope of domain knowledge, but they cannot code switch (pun intended). But perhaps not for long…
To be clear, general AI, or AGI, the next evolution of AI, is not yet at the party. AGI would have the distinctly human capability to think critically and problem-solve on the fly, its methods unbeknownst to its original programmers.
Roadblocks to AGI: License to Red Pill
ChatGPT is classified as a sophisticated version of specialized AI. It studied abroad in Bar-the-lona and sips tea with a pinky out. It uses natural language processing (NLP), a type of specialized AI that basically allows humans to talk to robots.
The hype around the natural language processing capabilities of ChatGPT has positioned it as a harbinger of a new era of AI, one that might carry us to the cyberpunk land of Artificial General Intelligence (AGI). Could we be within touching distance of artificial intelligence that is truly generative and creative, a Renaissance of artificial artists? You *know* the scrubs who bought NFTs would snatch up some robo-art.
Many say interacting with the platform feels like interacting with a sentient being. When the robots gain consciousness, they might not clue us in on their enlightenment. Would we even know when AGI arrives?
Like a modern Odysseus calling an Uber from Club Trojan - we’ve got a long ride ahead. We’re still an odyssey away from manufactured creativity and there are some obvious roadblocks ahead. Buckle up, buddy!
You Are What You Eat: Why Algos Need to Avoid Junk
The front-and-center challenge is data availability. AI is on a steady diet of data inputs, and Mama hungry. In order to iterate, algos need access to massive silos of data - and not all of that data is quality. Call it the South Byte Diet: successful algorithms need to practice "clean eating".
Tainted with biases, "junk data" can be not only inaccurate but straight-up hazardous if it isn't neutralized. Those contaminants, known as noise, drown out the meaningful messages in the inputs, known as signals. Algorithms need to keep their diet clean in order to be successful (and non-toxic).
If there are disturbances in the data, researchers must hit the drawing board and reset the inputs driving the learning. With ChatGPT, users are doing the free labor of "training": their input requests and feedback can be folded into later rounds of tuning the algorithm's neural networks. Users even prompt ChatGPT to "try again" and provide more context when outputs are unsatisfactory.
Seemingly benign noise can cause later signal inaccuracies, making AI accurate only some of the time. For AI to replace most roles, it has to be right all of the time… you can't pilot a plane well *most* of the time.
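For the code-curious, here's a toy sketch in Python (using the scikit-learn library - a hypothetical classroom example, not anything resembling how ChatGPT is actually built or trained) of what junk data does to a model. We deliberately flip a slice of the training labels to simulate noise and watch the accuracy sag:

```python
# Toy illustration: inject label "noise" into training data and watch a
# simple model lose its grip on the "signal".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic dataset: 1,000 examples with learnable structure (the signal).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise_rate in [0.0, 0.2, 0.4]:
    rng = np.random.default_rng(0)
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate   # pick labels to corrupt
    y_noisy[flip] = 1 - y_noisy[flip]              # the "junk data"
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    print(f"label noise {noise_rate:.0%}: test accuracy {model.score(X_test, y_test):.2f}")
```

The point isn't the specific numbers; it's that a model can only ever be as trustworthy as the data it was fed.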
Who will be the data nutritionists of the future, evaluating how sanitary data is? Primary source data, such as arrest rates, are skewed by selection biases and other common factors that need to be neutralized. As with well-intentioned but maladjusted parents, we may be unwittingly programming AI to act in accordance with our own human biases, which, played out at scale, can wreak havoc IRL.
Even with clean data inputs, AI can sometimes experience hallucinations - for example, reporting fake news as fact. Unless there is a second layer of fact-checking on generated results, we might all begin to collectively trip out like a societal Ayahuasca circle.
Data quality aside, the sheer volume of data consumption poses its own concerns. AI systems are hungry and their metabolisms are hella fast. Their constant data chow equates to a heavy computational and electrical load with a lofty environmental price tag.
To put things into perspective: the cost of the electricity that powered the servers and graphical processing units (GPUs) during training of the GPT-3 language model is estimated at $4.6 million.
According to OpenAI, accelerating demand caused the compute required to train the largest models to skyrocket more than 300,000-fold between 2012 and 2018, doubling roughly every 3½ months. This explains why you kept getting that error message.
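If you want to feel that compounding in your bones, here's a scrap of back-of-the-envelope Python - the 3½-month doubling figure is just the number quoted above, nothing more precise than that:

```python
# What does "doubling every 3.5 months" compound to?
months_per_doubling = 3.5

per_year = 2 ** (12 / months_per_doubling)             # roughly 11x each year
per_five_years = 2 ** (12 * 5 / months_per_doubling)   # roughly 145,000x over five years

print(f"~{per_year:.0f}x per year, ~{per_five_years:,.0f}x over five years")
```

Compounding like that is how you get from "training fits on a lab server" to "training needs its own power bill" in a handful of years.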
A Lack of Self Awareness & AI’s Shadow Work
Finally, but perhaps most profoundly, there's a certain opaqueness to how AI processes information. You can't read minds, even algo-based ones.
It’s unsettling to hand over the reins to AI when we don’t fully understand how they “think” and what they are capable of. Like your ex, AI still needs many years of therapy to develop its self-awareness skills.
Trained through repeated pattern recognition, AI doesn't partake in the same abstract thinking that humans employ to shape our understanding of the world. This can lead to AI exhibiting what are known as emergent behaviors, AKA actions that go rogue from the original programming.
Impressively, it can translate the Odyssey from ancient Greek, but it still can't appreciate the hero's journey. Like a newborn, it's simply regurgitating. And before trusting AI completely, we must ask and be able to confidently answer: How is AI drawing these conclusions? What is informing the results?
For almost a century, robots have been stellar at repetitive fine motor skills, but they haven't yet mastered unpredictable physical inputs. Humans can also rest easy knowing we could beat a robot at hand-to-hand combat… at least for a little while longer, as the US military is aiming defense spending at programming an A.G.I. Joe. Roger Federer can sleep easy for the time being. But what about the rest of us?
Robots & the “Terminator”: What Does Job Security Look Like for Mere Mortals?
With the sophistication of ChatGPT, there is bound to be a subsequent Renaissance of human creation built atop this revolutionary technology. AI optimists see the tech as tools, not replacements, for people in the workforce. Historically, the death of one industry creates fertile soil for another, and the workforce up-skills and shifts rather than dissolving completely.
While we can reflect on the impacts of past technological leaps on how we work, plenty has changed in our working habits since the launch of the World Wide Web, let alone the steam engine.
To understand how AI might shape the future of work, we must first unpack how the nature of labor itself has transformed in recent decades.
In Topic 3, we'll take a look at how new forms of work, from gig-style employment to droves of digital contractors, came to be. We will then examine subsequent shifts in productivity, compensation, and spending. Then we will establish the conditions of contemporary labor markets in order to postulate the impact that our humanoid counterparts will have in the office and beyond. Finally, we will reflect on AI's skill gaps to determine what edge we flesh suits will still have.
Discussion Qs:
If you had to perform your daily tasks in different technological eras, how would it look different? What might be the same?
What do you think the digital footprint of the data you share says about you?
What do you think was lost in each technological era? What do you think was gained?
Glossary:
Deus Ex Machina: translating to "God from the Machine", a plot device dating back to ancient Greek theatre in which a predicament is abruptly resolved by an unlikely occurrence.
Retro-Futurism: referring to notions people in the past thought the future would hold. Often said in reference to a midcentury aesthetic characterized by shows like Star Trek.
Uncanny Valley: the point at which a robot is almost, but not quite, indistinguishable from a human, which tends to evoke an uneasy emotional response in the humans interacting with it.
Artificial Intelligence (AI): a branch of computer science that focuses on creating machines that are capable of intelligent behavior, such as learning, problem-solving, and decision-making.
Specialized AI: AI that is designed for a specific task or set of tasks, rather than being designed to be generally intelligent like a human. For example, a chatbot.
Artificial General Intelligence (AGI): this field aims to replicate human-like intelligence in machines, enabling them to reason, comprehend natural language, learn from experience, exhibit creativity, and adapt to new situations.
AI Hallucination: an event when an AI model generates an answer that is completely made up, often without the user's knowledge.
Data Training: In the field of AI, refers to the process of using a dataset to train a machine learning model or an AI system.
(Data) Neutralization: the process of removing any latent biases or preconceived notions that may be present in a dataset.
Natural Language Processing (NLP): a subfield of artificial intelligence and computational linguistics that involves the development of algorithms and techniques to enable machines to understand, interpret, and generate human language in a meaningful way.
Neural Networks: a type of artificial intelligence that can learn to recognize patterns and make decisions based on data. They are modeled after the structure of the human brain and consist of layers of interconnected nodes that process information, identifying archetypes and patterns.
Noise: unpredictable, meaningless fluctuations around the data signal that obscure it.
Signal: predictable and meaningful patterns in a data set that lend themselves to reasonable conclusions.
Emergent Behaviors: AI behaviors that emerge outside of their original programming.
Graphical Processing Units (GPUs): a specialized processor designed to accelerate the creation of images and video on a computer's display screen. Used for complex computations, such as rendering 3D graphics or machine learning.
Sources:
Demystifying GPT-3 by Chuan Li
Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367
The Age of AI Has Begun by Bill Gates
The History of Artificial Intelligence by Rockwell Anyoha
The Use of Robots and Artificial Intelligence In war by Abhinav Kumar & Feras A. Batarseh