Friday, April 28, 2023

AI: there is nothing to fear

© by Mark Ollig


Artificial intelligence (AI) continues to be in the news.

In 1955, American computer scientist John McCarthy coined the term “artificial intelligence” in a proposal for a research workshop at Dartmouth College in Hanover, NH.

The workshop, held in the summer of 1956, brought researchers together to explore the potential of computers to simulate human intelligence, and it marked the birth of AI as a field of study.

During the field’s early years, the primary focus was creating, writing, and executing AI algorithms: sets of step-by-step computing instructions.

In the 1960s, research divided between symbolic AI (which relied on formal rules and logical reasoning) and early connectionist AI (which relied on neural networks loosely modeled after the brain).

In 1966, Joseph Weizenbaum, a computer scientist and professor at MIT, created ELIZA, widely regarded as the first chatbot, which allowed interactive human-machine text conversation using a natural language processing program.

The 1970s and 1980s saw significant progress in AI research with rule-based expert systems, programs that encoded specialist knowledge to mimic human decision-making.

These systems helped lay the groundwork for later techniques, such as machine learning and natural language processing, that transformed the landscape of artificial intelligence.

Other early AI research included complex problem-solving, recognizing grammatical patterns in language, and natural language processing.

These early efforts were fueled by optimism and a belief that human-level AI was just around the corner.

The 1990s and 2000s witnessed a significant breakthrough in AI research with the emergence of machine-learning algorithms and large digital data sets.

These advancements paved the way for sophisticated AI systems that predict outcomes by recognizing patterns within data.

Recent advances in AI computer technology have strengthened its handling of human speech; however, research into comprehending the context, nuance, and inflection of complex human language is still ongoing.

Deep learning models are artificial neural networks that use multiple layers of interconnected nodes to learn and recognize patterns in data.
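
The layered structure can be illustrated with a minimal sketch in plain Python; the weights below are arbitrary placeholders for illustration only, whereas a real network learns its weights from training data.

```python
def relu(x):
    # A common activation function: pass positive values through,
    # zero out negative ones.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each node in a layer sums its weighted inputs, adds a bias,
    # and applies the activation function.
    return [relu(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two stacked layers: 3 inputs -> 2 hidden nodes -> 1 output node.
# Data flows through each layer in turn; "deep" networks simply
# stack many such layers.
hidden = layer([0.5, -1.2, 3.0],
               weights=[[0.2, 0.8, -0.5], [0.5, -0.91, 0.26]],
               biases=[2.0, 3.0])
output = layer(hidden, weights=[[1.0, -0.3]], biases=[0.1])
print(output)
```

Training would adjust those weights and biases so the output matches known examples; the sketch shows only the forward pass through the layers.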

These systems also process audio data, recognizing and producing speech with impressive accuracy.

AI systems are becoming better at understanding and responding to natural human language interactions.

Another deep learning milestone is the development of neural networks that use internal self-assessment to understand complex speech patterns.

Language AI systems serve various applications, such as virtual business assistants, healthcare diagnosis, intelligent device interactions, computer vision, human speech and face recognition, language translation, and interactions with chatbots.

In 2016, Hanson Robotics built Sophia, a humanoid robot physically resembling an adult woman in her late 20s.

Its AI software observes people’s facial expressions and processes conversational and emotional data, which it uses to form relationships with the humans it interacts with.

Sophia is known for its lifelike appearance and its ability to hold conversations with humans using natural language processing, scripted dialogue software, and an artificial intelligence program for general reasoning.

In April 2017, “The Tonight Show Starring Jimmy Fallon” featured a segment introducing Sophia. Fallon, the audience, and this writer were very impressed with Sophia’s human-like conversational interaction.

To see Sophia’s appearance on “The Tonight Show,” go to http://bit.ly/2rxDC8m.

As AI becomes embedded in our phone apps, personal and work computing software, intelligent devices, homes, smart cities, and cars, questions are raised concerning our privacy, security, and even the future of our working lives.

Some worry AI will become unmanageable, as portrayed in science fiction movies with intelligent machines running amok.

Another concern is the application of AI technology to create video deepfakes, cloning individuals’ likenesses and voices, whether they are famous or someone you know.

AI synthetic voice generator programs can closely replicate the sound and tone of a known person’s speech, raising concerns about potential misuse.

Others worry about AI’s intentional exploitation by a government or corporation.

Many fear losing their job to AI technology.

Some suggest future AI networks may wield excessive power, and a human-designed “off-switch” needs to be available.

Given the rapid pace at which AI technology advances, predicting how an eventual autonomous AI network may perform is difficult.

Currently, leading semiconductor makers are busy manufacturing chips for AI workloads.

Today, AI is used in healthcare, banking, education, transportation, government, commerce, telecommunications call routing, and power grid routing, and the list keeps growing.

AI-powered virtual assistants (Siri, Alexa, Google Assistant, et al.) are everywhere.

I see artificial intelligence becoming deeply interwoven into most segments of our society on Earth, as well as into the spacecraft and satellites operating throughout the solar system.

In the more distant future, quantum processors combined with neuromorphic computing (hardware that emulates the human brain) may generate unique AI paradigms and technologies.

Addressing one’s fear of AI requires continued public discussion and research to ensure its safe, monitored use in current and future applications.

As President Franklin D. Roosevelt said in 1933, “The only thing we have to fear is fear itself.”

(Image: artificial intelligence chip architecture, 3D rendering)