
A Brief History of Artificial Intelligence

January 31, 2026
By: Heather Massey

The acronym “AI” seems to be everywhere, but what does it mean? What is its purpose?

AI stands for artificial intelligence. This article provides an overview of where AI started and where it's headed. It's the second article in the Neighbors for Change AI Impact series.

The AI evolution

Our brains can learn, reason, perceive, problem-solve, process language, and make decisions. Artificial intelligence describes the ability of computational systems to perform those kinds of tasks in pursuit of defined goals. Essentially, it's the study of creating machine intelligence that mimics human intelligence.

AI is used in recommendation algorithms, web search engines, self-driving vehicles, virtual assistants, and games. Governments use AI in roughly half of their daily operations and in delivering benefits and services. Businesses use it for tasks like data analysis and automating workflows.

The concept of artificial intelligence dates to the early 1940s, when scientists from various fields first mulled over the possibility of electronic brains. Alan Turing was one of the first scientists to explore machine intelligence. He believed a machine could simulate all forms of mathematical reasoning.

In the mid-1950s, AI became an academic discipline, ushering in the study of artificial neural networks, cybernetic robots (albeit ones controlled by analog circuitry), and game AI, which involved creating software for playing checkers and chess. The development of digital computers during the same time prompted scientists to explore the idea of machines that could think.

Throughout its history, AI has cycled between optimism and hype and periods of doubt accompanied by cuts in funding. The hype and unrealistic predictions of the mid-20th century gave way to ethical and philosophical criticism of AI's limited capabilities, and funding shrank during the 1970s and '80s. Research continued, but at a much slower pace.

Between 2012 and 2017, advances in deep learning accelerated AI's growth, producing systems that could create and modify content such as images, video, and text. The 2020s ushered in further advances in generative AI, sparking an “AI boom.” One example is OpenAI’s ChatGPT, released in 2022.

AI: a new surge

As of this writing, the AI industry is in another hype cycle, with tech billionaires pushing mass adoption of AI in pursuit of their ultimate goal: a “superintelligent AI.” Whether we asked for it or not, AI is being integrated into our schools, our jobs, and our social lives.

Caution is in order regarding the concept of thinking machines. In his book More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, astrophysicist Adam Becker notes that “not all neuroscientists agree that the brain’s activity can be fully represented by computation” (p. 186).

Machines can’t experience the world the way we do. For example, even the simple act of holding a flower in one’s hand involves a host of sensory experiences and information feedback loops unique to human biology that a computer can’t replicate. A robot can pick up a flower but won’t experience it in the same way as a human. Therefore, it can’t use information about that experience the way we would, such as if we wrote a poem about the flower’s beauty.

Knowing AI's limits, we should ask whether the pressure from tech billionaires for society to mass-adopt AI is merely an attempt to strong-arm consumers and/or disrupt the workplace. AI didn't live up to its hype, for example, when some companies tried to replace workers with it, only to rehire their former employees when the AI tools failed to turn a profit.

Next, let’s explore the pros and cons of what AI technology has accomplished so far.

AI: applications and risks

There are different types of AI, and companies have integrated them into our daily lives in a variety of ways. AI tools and their uses range from beneficial to harmful and even dangerous.

As an assistive technology, AI can deliver impressive benefits but also mixed results. For example, historians now have a new tool for deciphering old, damaged texts, likely saving significant time. But at least one historian who asked Microsoft Copilot to draft a journal article found the end result unusable.

Google Gemini helps many people with research, saving time and opening up new avenues of learning and brainstorming. It can give people like activists a strategic edge when they’re up against opponents such as data center companies.

In addition to the examples mentioned above, other applications of AI include

  • chatbots to address customer service/technical issues
  • early cancer detection
  • AI in the classroom
  • business analytics
  • robots in settings like hospitals, warehouses, and aerospace
  • autonomous vehicles (e.g., Waymo)
  • surveillance technology
  • facial recognition systems, such as the one being used by ICE to speed up its arrests
  • military and intelligence applications

As with any technology, there exists the potential for problematic uses, criminal uses, and dangerous uses. Ethical questions abound regarding AI’s existential risks and long-term effects. For example, sometimes AI chatbots are powered by exploited workers.

In recent years, companies have trained their generative AI models on copyrighted content such as books and art without permission, resulting in both proven and alleged mass theft of that content. This means that when we make images using generative AI, the result is often cobbled together from a database of stolen art. Many creators and publishers are fighting back. One example is the class action lawsuit against Anthropic, which resulted in a $1.5 billion settlement.

OpenAI’s ChatGPT is facing lawsuits claiming it contributed to users’ suicides. Autonomous cars have also posed safety risks: “Waymo’s driverless vehicles have been involved in about 30 different collisions resulting in some type of injury.”

There’s also the recent integration of Elon Musk’s AI tool Grok into American military networks. A watchdog group had already discovered that Grok was used to “create child sexual abuse imagery.” Does that mean child sexual abuse imagery is now in the Pentagon’s military network?

More discussion and action are needed on AI regulation and safety. The widespread integration of AI into our lives means we must grapple not only with its everyday uses but also with its unintended consequences.

AI can be a helpful tool, but who is at risk of being harmed by it? How can we ensure this technology is used safely and equitably? How much government regulation should there be over AI?

AI technology is making some people a lot of money, but are the profits being invested back into society in ways that benefit us all? In our next article, we’ll explore AI’s impact on jobs and the economy.

Take Action

  • Read the first article in this series: The Impact of Artificial Intelligence

Learn More

  • Artificial Intelligence is a hot topic for VA Legislators


