
Friday, June 30, 2023

AI’s influence spans decades

© Mark Ollig


In 1956, John McCarthy, a computer scientist, first used the phrase “artificial intelligence” during a conference on “thinking machines” at Dartmouth College in Hanover, NH.

In 1973, noted British scientist James Lighthill authored a report titled “Artificial Intelligence: A General Survey” for the Science Research Council, the UK agency then responsible for publicly funded scientific and engineering research.

Lighthill evaluated many areas of artificial intelligence research and gave an overall negative assessment of its progress and potential, including criticism of work in robotics and language processing.

Known as “The Lighthill Report,” it initiated much debate among scientists and British government policymakers concerning artificial intelligence’s technical challenges and social implications.

The Science Research Council decreased its funding for artificial intelligence research programs after “The Lighthill Report” was published.

In June 1973, the Massachusetts Institute of Technology (MIT) sent its proposal to the US Defense Advanced Research Projects Agency (DARPA) to fund artificial intelligence research programs.

“The Artificial Intelligence Laboratory proposes to continue its work on a group of closely interconnected projects, all bearing on questions about how to make computers able to use more sophisticated kinds of knowledge to solve difficult problems,” wrote MIT in its research proposal to DARPA.

DARPA approved funding for MIT’s artificial intelligence research programs.

The United States Department of Defense oversees DARPA and provides financial support for innovative research and development projects with military and strategic significance.

DARPA has been instrumental in many groundbreaking projects, including ARPANET, the precursor to the internet we use today.

On Feb. 22, 1978, the US Department of Defense, with the help of DARPA, launched the first satellite of the Navigation Satellite Timing and Ranging (NAVSTAR) system into Earth orbit at an altitude of 12,625 miles.

This satellite system provided round-the-clock navigational capabilities for US military forces.

NAVSTAR evolved into the Global Positioning System (GPS) used by the public.

In 2019, Microsoft gave OpenAI, an American artificial intelligence research lab based in San Francisco, $1 billion in funding. This year, Microsoft is reportedly investing an additional $10 billion in OpenAI.

Today, Apple’s Siri, Amazon Alexa, and Google Home Hub are examples of virtual assistant technology that uses artificial intelligence to allow users to interact through voice commands, obtain information, and manage connected devices.

AI tools like ChatGPT-4, Microsoft’s Bing AI, and Google’s Bard AI are used in homes, businesses, and schools.

The US government employs AI in public services, security, policy-making, and in addressing societal challenges.

Around 55 years ago, AI became a threat to humans in the 1968 science fiction movie “2001: A Space Odyssey,” where the AI computer HAL turns against the astronauts aboard the Discovery One spacecraft.

In the new 2023 science fiction movie “The Creator,” an announcement broadcast from the House Chamber in the US Capitol’s south wing in Washington, DC, declares, “The artificial intelligence created to protect us detonated a nuclear warhead in Los Angeles. This is a fight for our very existence.”

Today, many people are worried that AI could endanger our way of life and even our existence.

AI and its effects on society are regularly discussed in the media by experts, journalists, and policymakers.

On May 30 of this year, the Center for AI Safety (CAIS) released a statement on its website signed by more than 600 AI experts, legal scholars, economists, entrepreneurs, physicists, computer scientists, public figures, and others developing advanced AI applications:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

To read this statement and see the list of signers, go to https://www.safe.ai/statement-on-ai-risk.

Granted, some dismiss the idea of AI-induced extinction.

What is important is to acknowledge that experts and prominent, respected individuals are seeking to raise awareness about the potential dangers of increasingly advanced artificial intelligence.

They are committed to addressing AI-related concerns and finding methods for its safe use.

In addition, US House and Senate lawmakers are pushing legislation for a commission to study AI and potentially regulate its technology.

The National AI Commission Act aims to create a 20-member commission of individuals with computer science or AI expertise and representatives from civil society, industry, labor, and government.

“This bipartisan bill can advance US innovation and competitiveness while bringing the tech industry to the table to examine how the US can harness AI’s transformative potential,” said Jason Oxman, president and CEO of the Information Technology Industry Council.

Some say that within the next 35 to 75 years, there is a possibility of an “artificial general intelligence” (AGI), sometimes referred to as an “artificial superintelligence.”

Some experts predict AGI will have self-awareness and cognitive abilities equal to or beyond those of humans.

The way we interact with smart devices and use online information is being transformed by artificial intelligence.

As we navigate this expanding AI technological landscape, staying updated on the latest developments is essential to making informed judgments about AI’s use, advantages, and risks.

Photo: John McCarthy, artificial intelligence pioneer, playing chess at Stanford using its IBM 7090 computer.