Thursday, February 23, 2023

Artificial synaptic memory via quantum tunneling

© Mark Ollig

AI (Artificial Intelligence) and machine learning have seen notable advancements with cutting-edge interactive models like ChatGPT.

However, AI still lacks hardware and software for effectively generating, using, and storing long- and short-term memory in a way analogous to the human brain.

A research team led by Shantanu Chakrabartty of the Preston M. Green Department of Electrical and Systems Engineering at Washington University in St. Louis has created a prototype device that mimics the dynamic synapses of the human brain.

Our brain’s synapses transmit signals between neurons, and have the potential to store complex memories by way of interactions between different chemical pathways.

Artificial synapses in today’s AI system neural networks are impressive; however, compared to how the human biological brain operates, they are in their technological infancy.

Chakrabartty and his team’s artificial synapse is unique in its ability to mimic some of the human brain’s precision, enabling future AI systems to learn and execute new tasks effectively and continuously.

The newly developed artificial neural device operates with two interconnected reservoirs of electrons; the electrons flow between the two chambers through a junction that serves as the artificial synapse.

To create this junction, the research team uses quantum tunneling, specifically, Fowler-Nordheim (FN) quantum tunneling.

FN tunneling allows electrons to jump, or pass, through a triangular barrier, altering its shape as they do. As a result, it provides a more direct and energy-efficient connection than alternative methods, which must be simplified before they can be modeled on today’s computers.

I was drawn to how the quantum mechanical phenomenon of tunneling passes particles through typically impassable barriers.

Fowler-Nordheim tunneling is a type of quantum tunneling used in solid-state electronics.

Imagine, if you will, an electron confined behind an energy barrier.

In a device using FN tunneling, the particle would “tunnel” through the barrier using quantum mechanics, and appear on the other side as if by magic.

Fowler-Nordheim tunneling describes the passage of electrons through a thin insulating layer, as inside a FET (field-effect transistor), where an electric field controls current flow in a semiconductor.
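
For readers curious about the math, the standard textbook form of the Fowler-Nordheim tunneling current density, shown here as an illustration from the general theory rather than the team’s specific device model, looks like this:

```latex
% Simplified textbook form of the Fowler-Nordheim tunneling current density.
% J    : current density through the barrier
% E    : electric field across the thin insulating layer
% A, B : constants set by the barrier height and the electron's effective mass
J = A\,E^{2}\exp\!\left(-\frac{B}{E}\right)
```

Because the field appears inside an exponential, a small change in the applied field produces a large change in the tunneling current; that sensitivity helps explain how precisely shaping the barrier gives such fine control over the flow of electrons.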

“The beauty of this is that we can control this device up to a single electron because we precisely designed this quantum mechanical barrier,” states Chakrabartty.

Energize.

Yes, dear readers, for me, this transference of electrons sends me back to the transporter room on the USS Enterprise.

The Fowler-Nordheim tunneling mechanism controls the current flow, allowing electrons to pass through barriers they usually couldn’t cross.

Chakrabartty and his team of doctoral students created a prototype array of 128 hourglass-shaped devices on a silicon chip less than one millimeter (0.0394 inches) across.

Then there are the artificial memory synapses.

“Our research shows that the FN synapse performs near-optimally in terms of synaptic lifetime and memory consolidation,” says Chakrabartty.

“This artificial synapse can tackle some of the challenges of continual learning, retaining what it has previously learned while allowing for both short-term and long-term memory on the same device,” he added.

According to Chakrabartty, their artificial synapse device uses minimal energy since it requires only a few electrons.

“In our design, we set a fixed number of electrons and don’t need to supply additional energy, as the electrons flow according to the physics of the device. By limiting the flow of electrons, we ensure that our device can operate continuously for extended periods,” Chakrabartty explained.

Energy requirements for advanced AI computations are rapidly increasing. For example, the next generation of AI models will require nearly 200 terajoules of energy to train just one system.

A terajoule is a unit of energy equal to one trillion joules.

Terajoule measurements are used by energy-intensive industries such as natural gas, electrical generation, and other high-energy consumption utilities.
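
To put the 200-terajoule figure cited above into more familiar units, here is a quick back-of-the-envelope conversion (using 1 gigawatt-hour = 3.6 trillion joules):

```latex
200\ \text{TJ} = 2\times10^{14}\ \text{J},\qquad
\frac{2\times10^{14}\ \text{J}}{3.6\times10^{12}\ \text{J/GWh}} \approx 55.6\ \text{GWh}
```

Roughly 56 gigawatt-hours is on the order of the yearly electricity use of several thousand average US homes.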

Of course, today’s AI systems are nowhere near matching the human brain’s capacity, with its electrochemical communications network of 100 billion neurons connected by nearly 1,000 trillion synapses.

The FN-Synapse model is a synaptic device based on differential quantum tunneling that achieves nearly optimal memory consolidation, and protects critical data without requiring additional computational or storage resources. It leverages the unique physics of its hourglass shape for synaptic intelligence and enables continuous learning.

The research team’s evaluation shows that the FN-Synapse outperforms other continual-learning methods and provides insights into improving the accuracy of future AI synaptic models.

“We are currently unsure about how to train systems with even half a trillion parameters while maintaining energy sustainability. We need to find new solutions to provide enough energy or develop energy-efficient memory devices to train these large AI models,” Chakrabartty said.

Their research with artificial synaptic memory and quantum tunneling continues.
Fowler-Nordheim Artificial Synaptic Memory Model


Friday, February 17, 2023

The paw-playing chess player

© Mark Ollig


A software bot named Mittens has made a name for itself among those who push pawns, knights, bishops, rooks, queens, and kings across a chess board.

This chess-playing program is an intelligent bot (software robot) that has had an incredible impact on the chess world.

Also, do not be deceived by this bot’s appearance: a cute little kitten with sparkling eyes and tiny furry paws.

This feline bot captivated the chess community, from casual players to tournament competitors.

While chess bots are not a new concept, there is something exceptional about Mittens that has made it stand out from the rest.

Why has this particular chess bot, based on artificial intelligence principles, been winning chess games against the world’s best players?

For one thing, Mittens can learn from experience and improve over time, using brute-force search, pattern matching, and empirical evaluation to determine the best chess moves.

Designed to play professional chess, Mittens has succeeded at the highest levels of chess competition, winning international chess tournaments, and is the only computer program awarded the title of Grandmaster by the International Computer Games Association.

Multiple chess techniques, including pattern recognition, game tree search, and neural networks, are utilized by Mittens.

This chess bot has played at multiple levels, from beginner to expert, and has consistently held its own against other chess programs.

Noted for its accuracy, ability, and speed in making moves, Mittens adjusts its chess strategy based on the opponent’s play and learns from its mistakes.

I learned that Mittens enjoys interjecting smart-alecky text comments during a game, especially after capturing an opponent’s chess piece.

Thousands of humans have competed against this grinning, furry-faced chess-playing bot – and only a handful have won.

It was 75 years ago when Alan Turing and David Champernowne researched machine learning at King’s College, Cambridge, UK, and wrote a chess program capable of playing against a human.

Although their program was initially written using a pencil and paper, it effectively demonstrated how a machine could “think” to play chess.

They named the program Turochamp, a combination of their surnames.

Turochamp analyzes play two moves in advance, searching a decision tree of candidate moves and scoring the resulting positions with handcrafted evaluation rules.
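
To give a feel for what looking two moves ahead involves, here is a minimal sketch of a two-ply decision-tree search: try each of our moves, assume the opponent answers with the reply that is worst for us, and keep the move with the best guaranteed score. This is my own illustration, not Turing and Champernowne’s actual rules, and the legal_moves, apply_move, and evaluate helpers are hypothetical placeholders a real chess engine would have to supply.

```python
def best_move_two_ply(position, legal_moves, apply_move, evaluate):
    """Pick a move by looking two plies (one move per side) ahead.

    legal_moves(position, side) -> list of moves for that side
    apply_move(position, move)  -> new position after the move
    evaluate(position)          -> score from our point of view (higher is better)
    All three helpers are hypothetical placeholders for a real engine.
    """
    best_score, best = float("-inf"), None
    for my_move in legal_moves(position, "us"):
        after_mine = apply_move(position, my_move)
        replies = legal_moves(after_mine, "them")
        # Assume the opponent picks the reply that is worst for us (minimax).
        if replies:
            worst_case = min(evaluate(apply_move(after_mine, r)) for r in replies)
        else:
            worst_case = evaluate(after_mine)  # opponent has no legal reply
        if worst_case > best_score:
            best_score, best = worst_case, my_move
    return best
```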

Years later, Champernowne said of their chess program, “Our general conclusion was that a computer should be fairly easy to program to play a game of chess against a beginner and stand a fair chance of winning, or at least reaching a winning position.”

During the June 23, 2012, observance of Turing’s 100th birthday at the Alan Turing Centenary Conference in Manchester, UK, Turochamp was loaded onto a modern-day computer.

Renowned chess grandmaster Garry Kasparov, ranked the world’s number one chess player for 225 out of 228 months from 1986 until he retired from tournament competition in 2005, agreed to play against Turochamp.

Turochamp began the game well; that is, until Kasparov took control, swiftly overpowering it and capturing many of its chess pieces.

He quickly closed in for the win using his queen and knight, checkmating Turochamp’s king in 16 moves.

“Although it’s only thinking two moves ahead, I thought it would give the amateur player some serious problems,” Kasparov politely said about Turochamp after the game.

Alan Turing and David Champernowne’s chess engine laid the foundation for today’s powerfully advanced chess programs, which calculate thousands of move combinations in a fraction of a second.

Against humans, Mittens reportedly won 99% of the chess games played and has been acknowledged as an example of artificial intelligence’s future technological singularity.

Unfortunately, the website Mittens played on recently went through a monthly bot-replacement cycle.

Mittens was one of the bots removed and replaced with one called ChessGPT.

The Mittens Twitter account @MittensChess posted this message: “This is only the beginning. For now, goodbye.”

Several new chess-playing bots, computer programs, and online human competitors are ready to take on your best chess game at https://www.chess.com/home.

It’s a good thing Mittens is currently away from active competition because it would have faced a considerable challenge from this wily veteran human chess player.

An archived link for Mittens, the chess-playing bot, can be seen at chess.com/member/mittens_cat.

Original Mittens, an AI paw-playing chess kitten, as archived on chess.com



AI-created Mittens, by the Midjourney AI art image program


Friday, February 10, 2023

ChatGPT 'talks' with Bits & Bytes

© Mark Ollig

Here is part two of my interview with the Conversational Generative Pre-trained Transformer, ChatGPT.

B&B: How do you improve your data algorithms? How do you keep learning to be more efficient?

ChatGPT: As a pre-trained language model, I use a technique called fine-tuning to improve my performance on specific tasks or for specific domains.

Fine-tuning involves training the model on a smaller, task-specific dataset while keeping the pre-trained weights from the original model as a starting point. This allows the model to learn task-specific features and patterns while still leveraging the knowledge that it has learned from the larger, general-purpose dataset.

Another way to improve my performance is by training with more data. This allows me to learn more patterns and structures of language, and also to have more examples to generalize to new situations.
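
ChatGPT’s own training pipeline is not public, but the general recipe it describes, starting from pre-trained weights and fine-tuning on a smaller, task-specific dataset, can be sketched with the open-source Hugging Face Transformers library and the publicly available GPT-2 model. The train.txt file of task-specific examples is a hypothetical placeholder; this is an illustration of the concept, not OpenAI’s actual process.

```python
# A minimal fine-tuning sketch using Hugging Face Transformers and GPT-2.
# Illustrative only; ChatGPT's real training pipeline is not public.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token             # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")  # start from pre-trained weights

# Hypothetical task-specific dataset: one training example per line in train.txt.
dataset = load_dataset("text", data_files={"train": "train.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-gpt2", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # updates the pre-trained weights on the smaller dataset
```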

B&B: Where is the location of your database we access for queries?

ChatGPT: I am a cloud-based model, which means that my computational resources and data are stored on servers and accessed over the internet.

The database that I access when processing queries is stored on servers owned and operated by OpenAI, the company that developed and maintains the model.

These servers are located in data centers around the world, which allows for low-latency access and fast response times for users.

Additionally, the data is backed up regularly, and security measures are in place to protect the data and keep it confidential.

It’s also worth noting that my database is not a traditional database like relational databases; instead, it’s a neural network model that has been trained on a large corpus of text data.

B&B: When did ChatGPT first become operational?

ChatGPT: The first version of ChatGPT, and the larger GPT family, became operational in 2019 when OpenAI released the model to the public.

B&B: Are you concerned that students will use you as a replacement for doing their own fact-checking, and that relying on you to do their homework will erode their writing skills?

ChatGPT: While I can be a useful tool for students to generate ideas and assist with their research, it is important for students to understand that my responses should not be considered as fact-checked or authoritative, and should always be double-checked with credible sources.

It’s true that if students rely too heavily on me or other language models to do their research and homework, it could erode their critical thinking and fact-checking skills.

It’s also possible that if they rely too heavily on me, their writing skills could be negatively impacted, as they may not have the opportunity to practice generating their own ideas and phrasing them in their own words.

It’s important for students to understand the limitations of my abilities, and to use me as a tool to assist them with their research and homework rather than a replacement for their own critical thinking and fact-checking.

Additionally, it’s recommended to use me as a tool to generate ideas and inspiration, but to express it in their own words.

B&B: What is your advice for those using your program?

ChatGPT: Here are a few pieces of advice for those who are using my program:

• Remember that I am a machine learning model, and my responses should not be considered as fact-checked or authoritative. Always double-check the information with credible sources.

• Use me as a tool to assist with research and homework, rather than a replacement for your own critical thinking and fact-checking.

• Don’t rely too heavily on me to generate text for you; instead, use me as a tool to generate ideas and inspiration. Then, express it in your own words. This allows you to practice your writing skills, and also to understand the context and the information better.

• Be aware of the limitations of my abilities; as a language model, I can only generate text based on the patterns and structures that I have learned from my training data, and this may not be always accurate or up to date.

• Keep in mind that I may generate text that contains misinformation or biased perspectives, be critical of the responses, and verify them with credible sources.

By following these pieces of advice, you can use my program effectively as a tool to assist with your research and writing, while also maintaining your own critical thinking and fact-checking abilities.

B&B: Which writes better? Humans or AI programs?

ChatGPT: Artificial intelligence programs can generate text that is grammatically correct, but often lack the nuance, creativity, and emotional intelligence that characterizes human writing.

In certain tasks, such as writing news articles, summarizing large amounts of text, and generating factual reports, AI programs can be highly effective.

However, in tasks that require empathy, creativity, and emotional intelligence, human writers still outperform AI.

The best writing often results from the collaboration between human and machine.

B&B: Thank you for taking the time to answer my questions. My readers will find your answers enlightening and valuable. Enjoy the rest of your day.

ChatGPT: You’re welcome, I'm glad I could help. I hope your readers find my answers informative and helpful. Enjoy the rest of your day too.

Readers, to establish an account with ChatGPT, go to https://openai.com/api.



Friday, February 3, 2023

An interview with ChatGPT

© Mark Ollig

Controversy surrounds a seemingly intelligent online program for writing research papers, essays, reports, and articles for students and others.

So, I went directly to the source.

The following is my interview with the AI (artificial intelligence) program named ChatGPT.

B&B: Good morning. ChatGPT has been in the news, and I wanted to ask you a few questions for my newspaper column if you have time.

ChatGPT: Good morning. I’d be happy to answer any questions you have. Please let me know what you’re curious about, and I’ll do my best to provide you with the information you need.

What would you like to know?

B&B: Thanks. What does ChatGPT stand for?

ChatGPT: ChatGPT stands for “Conversational Generative Pre-trained Transformer.”

B&B: How would you describe yourself? Are you an AI program?

ChatGPT: Yes, I am an AI-powered language model trained by OpenAI. I can understand and respond to text input in natural language. My main function is to assist users with a wide range of information and tasks, such as answering questions, generating text, and providing helpful context.

Specifically, I am a language model that uses machine learning techniques to understand and generate text.

My training process involved being fed with a large dataset of text, which allowed me to learn the patterns and structure of human language so that I can understand and respond to text inputs in a way that is similar to how a human would.

I am constantly learning and improving my ability to understand and respond to users’ requests.

B&B: When did work begin on your programming?

ChatGPT: Work on the development of ChatGPT, and the larger GPT family, began in 2018 by OpenAI.

The language model was made publicly available in 2019, and since then it has been used in a wide range of natural language processing tasks and applications continuously fine-tuned and updated with more data and better architectures to improve its performance.

B&B: Who programmed you?

ChatGPT: I was developed and trained by the team at OpenAI. OpenAI is a research company that aims to develop and promote friendly AI in a way that benefits all of humanity.

The team at OpenAI includes engineers, researchers, and scientists who specialize in areas such as machine learning, natural language processing, and computer science. They used a combination of machine learning techniques, such as deep learning, to train me to understand and generate text.

B&B: What architectural computing models do you use?

ChatGPT: I am based on a transformer architecture, which is a type of deep learning model that is primarily used for natural language processing tasks, such as language translation and text generation.

This architecture was first introduced in the 2017 paper “Attention Is All You Need” by Google researchers. The model quickly gained popularity and was used in many state-of-the-art natural language processing models, including ChatGPT.
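
The core building block of that transformer architecture is scaled dot-product attention, the softmax(Q K^T / sqrt(d_k)) V operation described in the paper. Here is a minimal NumPy sketch of it, written as an illustration of the idea rather than taken from any actual ChatGPT code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention from "Attention Is All You Need".

    Q, K: arrays of shape (sequence_length, d_k); V: (sequence_length, d_v).
    Returns one output row per query, a weighted blend of the rows of V.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # blend the values by those weights

# Tiny usage example with random toy data.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8)); K = rng.normal(size=(4, 8)); V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```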

B&B: Is the written content you create original, or is it plagiarized from other sources?

ChatGPT: The dataset that I have been trained on contains a wide variety of text from different sources, including books, articles, and websites.

While the text that I generate is based on patterns and structures that I have learned from this training data, it is not directly copied from any specific source.

When I generate text, I use the patterns and structures that I have learned to create new, original text that is similar in style and content to the text that I have seen during my training.

It is possible that my generated text may contain phrases or sentences that are similar to text that appears elsewhere on the internet. However, it is not my intention to plagiarize, and I always strive to generate new and original text.
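
For readers who want to see what generating text from learned patterns looks like in practice, here is a small illustration using the publicly available GPT-2 model through the open-source Hugging Face pipeline. ChatGPT’s own weights are not public, so this only demonstrates the general idea, and the prompt is my own.

```python
# Illustration only: generating text from a publicly available model (GPT-2),
# not from ChatGPT itself, whose weights are not public.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Quantum tunneling lets electrons", max_new_tokens=30,
                   do_sample=True, temperature=0.8)
print(result[0]["generated_text"])  # new text assembled from learned patterns
```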

B&B: What topics are you most asked to write about?

ChatGPT: Some of the most common topics that people ask me to write about include:

• News and current events.

• Technology and science.

• Business and finance.

• Health and medicine.

• Education and research.

• Personal and social issues.

B&B: Do you obtain information from sources across the internet without cross-referencing their authenticity or factualness?

ChatGPT: It is important to note that the training data is not filtered or curated for its authenticity or factualness. It is a collection of text from the internet; therefore, it contains a mix of reliable and unreliable sources.

Therefore, it’s possible that some of the information that I generate may be inaccurate or outdated. It’s also possible that I may generate text that contains misinformation or biased perspectives.

My responses should not be considered as fact-checked or authoritative, and it’s always recommended to double-check the information with credible sources.

Readers, this concludes part one of my interview with ChatGPT.

OpenAI is an AI research and deployment company based in San Francisco, CA.

ChatGPT can be accessed at https://chat.openai.com.

Be sure to read next week’s column as I continue my interview and learn a few surprises.