The acronym “AI” seems to be everywhere, but what does it mean? What is its purpose?
AI stands for artificial intelligence. This article provides an overview of where AI started and how it’s going. It’s the second of the Neighbors for Change AI Impact series.
The AI evolution
Our brains can learn, reason, perceive, problem-solve, process language, and make decisions. Artificial intelligence describes the ability of computational systems to perform those types of tasks to complete defined goals. Essentially, it’s the study of creating machine intelligence that mimics human intelligence.
AI is used in things like recommendation algorithms, web search engines, self-driving vehicles, virtual assistants, and games. Many governments use AI in their daily operations and to deliver benefits and services. Businesses use it for things like analysis and automating workflows.
The concept of artificial intelligence dates to the early 1940s, when scientists from various fields first mulled over the possibility of electronic brains. Alan Turing was one of the first scientists to explore machine intelligence. He believed a machine could simulate all forms of mathematical reasoning.
In the mid-1950s, AI became an academic discipline, ushering in the study of artificial neural networks, cybernetic robots (albeit ones controlled by analog circuitry), and game AI, which involved creating software for playing checkers and chess. The development of digital computers during the same time prompted scientists to explore the idea of machines that could think.
Throughout its history, AI has cycled between periods of optimism and hype and periods of doubt accompanied by funding cuts. The hype and unrealistic predictions of the mid-20th century gave way to ethical and philosophical criticism of the technology's limited capabilities, and funding shrank during the 1970s and ’80s. Research continued, but at a much slower pace.
Between 2012 and 2017, advances in deep learning accelerated AI’s growth, enabling systems that could create and modify content such as images, video, and text. The 2020s brought further advances in generative AI, sparking an “AI boom.” One example is OpenAI’s ChatGPT, released in 2022.
AI: a new surge
As of this writing, the AI industry is in another hype cycle, with tech billionaires pushing mass adoption of AI and touting their ultimate goal of a “superintelligent AI.” Whether we asked for it or not, AI is being integrated into our schools, our jobs, and our social lives.
Caution is in order regarding the concept of thinking machines. In his book More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, astrophysicist Adam Becker notes that “not all neuroscientists agree that the brain’s activity can be fully represented by computation” (p. 186).
Machines can’t experience the world the way we do. Even the simple act of holding a flower involves a host of sensory experiences and information feedback loops unique to human biology that a computer can’t replicate. A robot can pick up a flower, but it won’t experience the flower the way a human does. Therefore, it can’t use information about that experience the way we would, such as by writing a poem about the flower’s beauty.
Knowing the limits of AI, we should question whether the pressure from tech billionaires for society to mass-adopt AI is merely an attempt to strong-arm consumers and/or disrupt the workplace. For example, AI didn’t live up to its hype when some companies tried to replace workers with it, only to rehire their former employees when the AI tools failed to turn a profit.
Next, let’s explore the pros and cons of what AI technology has accomplished so far.
AI: applications and risks
There are different types of AI, and companies have integrated them into our daily lives in a variety of ways. AI tools and their uses range from beneficial to harmful and potentially dangerous.
As an assistive technology tool, AI can have impressive benefits but also mixed results. For example, historians now have a new tool to decipher old, damaged texts, likely saving a lot of time. But at least one historian who asked Microsoft Copilot to write a journal article found the end result unusable.
Google Gemini helps many people with research, saving time and opening up new avenues of learning and brainstorming. It can give people like activists a strategic edge when they’re up against opponents such as data center companies.
In addition to the examples mentioned above, other applications of AI include
- chatbots to address customer service/technical issues
- early cancer detection
- AI in the classroom
- business analytics
- robots in settings like hospitals, warehouses, and aerospace
- autonomous vehicles (e.g., Waymo)
- surveillance technology
- facial recognition systems, such as the one being used by ICE to speed up its arrests
- military and intelligence operations
As with any technology, there exists the potential for problematic, criminal, and dangerous uses. Ethical questions abound regarding AI’s existential risks and long-term effects. For example, AI chatbots are sometimes powered by exploited workers.
In recent years, companies have trained their generative AI models on copyrighted content such as books and art without legal permission, resulting in both proven and alleged mass theft of that content. This means that when we make images using generative AI, the result is usually cobbled together from a database of stolen art. Many creators and publishers are fighting back. One example is the Anthropic class action lawsuit, which resulted in a $1.5 billion settlement.
OpenAI’s ChatGPT is facing lawsuits claiming it contributed to users’ deaths by suicide. Autonomous cars have also posed a safety risk: “Waymo’s driverless vehicles have been involved in about 30 different collisions resulting in some type of injury.”
There’s also the recent integration of Elon Musk’s AI tool Grok into American military networks. A watchdog group had already discovered that Grok was used to “create child sexual abuse imagery.” Does that mean child sexual abuse imagery is now in the Pentagon’s military network?
More discussion and action are needed on the topic of AI regulation and safety. The widespread integration of AI into our lives means we must grapple not only with its general uses but also with its unintended consequences.
AI can be a helpful tool, but who is at risk of being harmed by it? How can we ensure this technology is used safely and equitably? How much government regulation should there be over AI?
AI technology is making some people a lot of money, but are the profits being invested back into society in ways that benefit us all? In our next article, we’ll explore AI’s impact on jobs and the economy.
Take Action
- Read the first article in this series: The Impact of Artificial Intelligence