
Thursday, March 26, 2026

From BBS chat to today’s instant messaging

@Mark Ollig

Before the World Wide Web became part of everyday life, many computer enthusiasts, including this humble columnist, operated a computer bulletin board system, or BBS.

A BBS uses a software program running on a computer connected to one or more dial-up modems plugged into telephone lines.

Users dialed the BBS’s telephone number with a communications program such as ProComm on their modem-equipped computers, then logged in over the telephone connection.

Once connected, they could read messages, exchange files and emails, or engage in real-time chat with other users on the BBS.

In 1992, I launched my own hobbyist bulletin board system, WBBS Online, using The Major BBS, a popular “gold standard” BBS software platform developed by Galacticomm.

WBBS stood for Winsted Bulletin Board System.

An electronic handshake between the caller’s modem and the BBS’s modem produced audible squeals, screeches, and tones as the two negotiated a data rate, sometimes reaching a cutting-edge 19.2 kbps.

In 1992, hitting 19.2 kbps felt like breaking the sound barrier compared with the standard 2,400 or 9,600 bps modems.
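
For a sense of scale, here is a rough back-of-the-envelope sketch of those speeds in Python (the 100 KB file size and the 10-bits-per-byte serial framing overhead are my own illustrative assumptions, not figures from the column):

```python
# Rough dial-up transfer times. Assumes ~10 bits per byte on a
# serial modem link (8 data bits plus start and stop bits) and
# no compression; real-world times varied with line quality.

def transfer_seconds(file_kb: int, bps: int) -> float:
    """Seconds to move file_kb kilobytes at bps bits per second."""
    bits_on_wire = file_kb * 1024 * 10  # ~10 bits per framed byte
    return bits_on_wire / bps

for speed in (2400, 9600, 19200):
    minutes = transfer_seconds(100, speed) / 60
    print(f"{speed:>6} bps: about {minutes:.1f} minutes for a 100 KB photo")
```

At 2,400 bps, a single 100 KB photo took roughly seven minutes; at 19.2 kbps, under a minute.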

Regular BBS users logged in every day, shared local news, traded software, discussed hobbies, and sometimes texted late into the night via real-time chat.

WBBS users in Winsted and Lester Prairie could dial in without long-distance charges because the two towns had toll-free calling between them.

On busy evenings, hearing the modem answer meant another local caller had connected and joined the conversation.

The technology has changed, but the camaraderie of participating in a virtual community has not.

Today, I regularly text chat with my kids (they are now middle-aged adults) and my siblings (we are forever young) through a Google Messages group instant-messaging (IM) thread on our smartphones.

Instant messages arrive seamlessly over wireless connections, with no dial-up modems, no phone lines, no busy signals, and no waiting.

IM is similar in spirit to the BBS message boards and real-time chat we used over dial-up telephone lines.

AOL Instant Messenger (AIM) came out in 1997 and introduced millions of people to real-time text chatting over the internet.

BlackBerry Messenger, known as BBM, launched in 2005 exclusively for BlackBerry cellphones, letting users chat in real time instead of sending standard text messages.

BBM led people to buy BlackBerry phones just to access the service; it eventually opened to the iOS and Android platforms in 2013.

By then, WhatsApp and iMessage had taken over the market, and BBM shut down May 31, 2019.

These instant-messaging services felt natural to early users because features such as contact lists, screen names, and visible online status were already familiar concepts.

Users first experienced those features in slower form on dial-up bulletin board systems, and later saw them arrive in real time on computers and cellphones.

Computer users accessing WBBS would check in regularly, respond to posts, and join chats with others logged in at the same time.

They shared typed messages and simple images called ASCII art, which used letters, numbers and keyboard symbols to form pictures.
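
As a small illustration (my own example, not an actual WBBS image), a few rows of ordinary keyboard characters were enough to form a simple picture:

```python
# A minimal ASCII-art example: plain keyboard characters arranged
# to form a picture, the kind of simple image BBS users exchanged.
art = r"""
   /\
  /  \
 /____\
 | [] |
 |____|
"""
print(art)  # a small house drawn with slashes, underscores, and brackets
```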

Photos were downloaded as image files, often in “.jpg” format, and could take several minutes to fully appear on screen.

Many of us might remember watching a photo load slowly, line by line, from top to bottom, as the dial-up connection transferred the data.

Today’s instant-messaging group chats with friends and family can fill quickly with updates, videos, and photos.

Apps have replaced BBS dial-up software, and fiber-optic broadband and cellular data networks have replaced the slow modem connections that once tied up a household’s only phone line.

Yet the core behavior remains: people still gather in online digital spaces to share messages and stay connected.

For those who were around back then, a BBS was likely their first taste of online communication.

That early experience paved the way for social media and messaging platforms.

I can still clearly remember the sound of a modem handshaking late at night, letting me know another computer user was connecting to WBBS Online.

It meant someone else was out there, sitting at their keyboard and joining our small virtual community.

Today’s instant messaging is simply a version of the local virtual communities we started decades ago in dial-up BBS chat rooms.





















Thursday, March 19, 2026

The evolution of Google’s ‘Nano Banana’

@Mark Ollig

Nano Banana 2, powered by the Gemini system, became Google’s latest artificial intelligence (AI) image creation and editing tool Feb. 26 of this year.

Nano Banana 2 allows users to create visuals quickly using Gemini 3.1 Flash Image technology.

The system quickly became available across Google services, allowing users to generate high-quality images in seconds.

The Gemini app and Google Messages now provide access to the technology. Developers and businesses can explore the system through Google AI Studio and Vertex AI.

Since its release, Nano Banana 2 has spread quickly across Google’s ecosystem, producing high-quality images through a wide range of applications.

The name “Nano Banana” has an unusual origin story, one that stands out in the world of technology branding.

Naina Raisinghani, a Google DeepMind product manager, created the placeholder name by combining her nicknames “Nano” and “Banana.”

She submitted the model anonymously to LM Arena (Large Model Arena), a benchmarking site used to evaluate artificial intelligence systems.

When the model quickly climbed to the top of the leaderboard, the name went viral.

Google ultimately decided to keep the unusual name and adopted the banana emoji [🍌] as the feature’s signature icon.

Despite the playful branding, Nano Banana 2 represents a serious effort to make advanced digital creation tools widely available to everyone.

Google’s goal is straightforward: make AI a practical, creative partner capable of producing clear, lifelike images while maintaining visual consistency across characters and objects.

You provide the idea, and the AI handles the technical execution.

Today’s accompanying illustration provides a roadmap through the complex inner workings of the Gemini 3.1 Flash Image system, breaking down the advanced technology into four digestible stages:

Zone one: human intention – This is the starting point where you provide the “vision” for your project, using either a voice command or a typed request to describe the image you want to create.

Zone two: the core processing engine – Often called the “engine room,” this is where the Gemini 3.1 Flash technology analyzes your instructions and builds the high-definition image.

Zone three: content integrity and provenance – In this stage, the system verifies real-world details through search grounding and embeds a SynthID digital watermark to identify the image as AI-generated.

Zone four: output and visualization – The final result is a polished 4K graphic that is automatically synced to your devices and professional tools like Google Workspace.

To bring these visions to life, Google uses its Veo video model to animate the high-resolution images generated by Nano Banana 2, turning a static 4K design into a cinematic sequence with realistic motion and lighting.

Unlike earlier Gemini tools that required long keyword lists, Nano Banana 2 allows users to describe their request in natural language.

Instead of writing complex prompts, users simply describe a scene to Nano Banana 2 as they might to a human designer.

The system interprets the request, generates a draft image, and allows the user to refine details through a conversational process.

For example, a user designing a new kitchen might say, “Change the style to mid-century modern,” and the system updates only those elements while preserving the rest of the composition.

A major challenge with earlier AI image tools involved visual continuity.

Characters or objects would often change appearance between frames, a quirk I experienced firsthand that led to brief episodes of frustration.

But I digress.

Nano Banana 2 addresses this issue by tracking up to five characters and 14 individual objects across a sequence of images.

This capability makes the system useful for storyboards, advertising campaigns, and illustrated narratives that require consistent character identity.

Another update addresses the ongoing issue with AI-generated text.

Nano Banana 2 treats typography as structured data rather than just visual pixels. This lets the system create clear, readable text in many languages.

Users can put phrases in quotation marks to make posters, diagrams, or signs with correct spelling.

One of the major advances in the 2026 release is multimodal operation, meaning the system can interpret both images and text within the same reasoning framework.

This allows more realistic image-to-image editing.

For example, a user can upload a photograph of a kitchen and instruct the system to “render this in a midcentury modern style while keeping the cabinet layout unchanged.”

The model adjusts the visual style while preserving the room’s physical structure.

Nano Banana 2 is also compatible with Google Lens. Users can tap the Nano Banana button to view and interpret their environment using a smartphone camera.

The result functions as a mobile design assistant capable of visualizing renovations, décor changes, or clothing variations in augmented reality (AR).

Another important step forward involves resolution.

Nano Banana 2 now supports 4K ultra-high-definition image output (roughly 4,000 pixels of horizontal resolution), allowing concept images generated in seconds to become print-quality graphics.

Google has integrated these capabilities into Google Workspace and Google Ads.

This allows marketing teams to maintain brand consistency across large collections of AI-generated images.

The approach helps Google compete with other generative-image platforms such as Adobe Firefly and Midjourney.

To improve factual accuracy, Nano Banana 2 also incorporates search grounding.

When users request a specific landmark or brand, the system consults Google Search to ensure the rendering reflects real-world information.

The process for how an idea becomes a finished digital image begins with a user request, which may involve uploading a photograph or entering a written description.

That request is sent to the Gemini 3.1 processing system, where the model analyzes the instructions and constructs the image.

During generation, the system may query Google Search to verify the visual accuracy of real-world objects.

Before delivery, the finished image receives a SynthID watermark.

SynthID, created by Google DeepMind, is a hidden digital watermark that embeds imperceptible signals into images, enabling systems to detect AI-generated content.

Nano Banana 2 also supports C2PA (Coalition for Content Provenance and Authenticity) credentials.

C2PA provides a verifiable digital record showing how an image was created without altering its visible appearance.

The final result is a high-resolution image ready for tablet, smartphone, or workstation displays, as well as for advertising presentations or printed media.

Nano Banana 2 streamlines connection to creative tools, letting users focus on their artistic vision. An innovative idea quickly turns into a finished design.

I have used Nano Banana 2 to create illustrations for my columns and have been pleasantly surprised by its capabilities.

Visit https://gemini.google/overview/image-generation/ to learn more and to try Nano Banana.







Thursday, March 12, 2026

Satellite ‘cellular towers’ in Earth orbit

@Mark Ollig

Cellular networks have relied on ground-based cell towers, but SpaceX’s Starlink Direct to Cell now brings this model to space.

This service works with regular smartphones, not just satellite phones.

Future versions are expected to move beyond Long Term Evolution (LTE) as the Third Generation Partnership Project (3GPP) advances non-terrestrial network standards.

This international standards body is developing specifications for 5G and, eventually, 6G.

Today’s model supports text messaging and limited data through certain apps, with broader capabilities planned as the network evolves.

Our smartphones already rely on navigation signals from Global Positioning System (GPS) satellites orbiting about 12,550 miles above Earth.

However, GPS is a one-way system, and smartphones do not transmit anything back to those satellites.

Alongside SpaceX, companies like AST SpaceMobile and Lynk Global are working on satellite systems that connect directly to regular smartphones.

AST SpaceMobile claims its BlueBird satellites can provide broadband directly to regular phones, while Lynk offers a similar satellite-to-standard-mobile-phone service.

Major American companies like SpaceX and AT&T are also driving advances in satellite connectivity and telecommunications services for consumers.

Mobile carriers worldwide are exploring similar partnerships.

Orange is partnering with AST SpaceMobile in Europe and Africa, while Deutsche Telekom collaborates with Starlink in Europe.

These moves suggest that “cell towers in space” could soon become a normal layer of the global wireless network.

By adding satellites to the Radio Access Network (RAN), carriers can bring mobile coverage to remote areas using non-terrestrial networks.

Compatible smartphones can stay connected even when far from regular cell towers because the signal comes from space.

Satellite cellular lets users text, call, and access data, though some functions remain limited or are still being deployed by providers.

Besides smartphones, these systems can connect vehicles, farm equipment, security sensors, and other remote devices.

When a Starlink satellite receives data from a smartphone, it sends the signal back to Earth through a ground gateway station connected to the telecommunications network.

The satellite uses a high-capacity radio link to reach the gateway.

From there, the data travels through land-based fiber networks or through a carrier’s core network, depending on the service path.

Starlink works with carriers like T-Mobile to bring mobile service to areas where regular cell towers cannot reach.

When a smartphone connects through the satellite system, it still uses the carrier’s licensed cellular spectrum.

The satellite functions as part of the radio access network while the carrier’s terrestrial network handles authentication, mobility management, emergency communications, and billing.

Many people assume a satellite-to-phone system simply passes signals between a smartphone and the ground.

In reality, the satellite acts much like a cellular tower in Earth orbit.

Instead of connecting to a tower along the highway, the smartphone connects to a satellite that provides the radio link.

The signal then passes through ground gateways into the carrier network and the wider internet.

From there, data moves through the carrier’s core network and major telecommunications and internet interconnection hubs, where networks exchange traffic.

The return signal follows the same path in reverse.

It travels from an internet server through fiber networks and carrier core systems to the gateway, then to the satellite, and finally back to the smartphone.

Since these satellites operate in low Earth orbit (LEO), signal delay, or latency, can fall into the tens of milliseconds under favorable conditions.

This is similar to many land-based broadband connections.
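
The arithmetic behind that comparison is simple: radio waves travel at roughly the speed of light, so the space legs of the path add only a few milliseconds. Here is a minimal sketch, assuming a satellite directly overhead at about 550 km, a typical Starlink altitude (the four-leg round trip is my simplification):

```python
# Rough propagation delay to a LEO satellite and back.
# Assumes the satellite is directly overhead at ~550 km; real
# paths are longer when the satellite sits low on the horizon.

SPEED_OF_LIGHT_KM_S = 299_792  # kilometers per second in vacuum
ALTITUDE_KM = 550              # typical Starlink orbital altitude

one_way_ms = ALTITUDE_KM / SPEED_OF_LIGHT_KM_S * 1000

# Phone -> satellite -> gateway, then the reverse for the reply:
# four space legs in a full round trip.
round_trip_ms = 4 * one_way_ms

print(f"One leg:    {one_way_ms:.1f} ms")   # about 1.8 ms
print(f"Round trip: {round_trip_ms:.1f} ms (propagation only)")
```

Add satellite processing, queuing, and terrestrial fiber time, and the total lands in the tens of milliseconds the column describes.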

Future upgrades are expected to depend on larger satellites equipped with advanced antennas that steer radio beams toward smartphones and other connected devices on Earth.

SpaceX says the next generation of Starlink satellites is intended to dramatically increase network capacity.

This could support applications such as video streaming, cloud services, and other broadband uses.

To launch these larger spacecraft and expand the Starlink network, SpaceX plans to use its Starship vehicle with the Super Heavy rocket booster.

Today’s column features a diagram I researched and created using Microsoft software.

Images were generated by ChatGPT 5.2 using an uploaded photo of me as a reference, along with help from Google’s Gemini AI and Perplexity AI.

At TDS Telecom, I made engineering diagrams in Microsoft Visio.

But I digress.

Today’s diagram shows how a smartphone can reach the global internet through satellite and terrestrial telecommunications infrastructure.

At the top of the image, a constellation of satellites circles Earth in low Earth orbit.

A satellite connects to a Starlink Gateway Earth Station through feeder links operating in assigned portions of the Ka-band microwave spectrum.

These links use uplink frequencies near 27.5 to 30 gigahertz and downlink frequencies near 17.8 to 20.2 gigahertz.

Equipped with large parabolic antennas, the gateway passes the signal into the terrestrial telecommunications network.

After reaching the gateway, the signal enters land-based fiber-optic networks and travels across the country and around the world through major telecommunications and internet interconnection hubs.

The diagram also shows a second communication path where a satellite connects directly with a smartphone using licensed Long Term Evolution (LTE) cellular frequencies.

In this role, the satellite functions as a space-based cellular tower.

It links the smartphone to the carrier’s packet core network, where authentication, mobility management, and routing occur before traffic reaches the broader internet.

Using columnist prerogative, I placed myself in the right foreground of the diagram, holding a smartphone to represent the network’s end user.

The image is intended to show that this satellite system will complement, rather than replace, terrestrial networks by connecting smartphones to satellites and then into global fiber and carrier infrastructure.

Details about Starlink’s satellite-to-cell service are available on the Starlink website.

As of early this year, T-Mobile’s T-Satellite service supports texting, location sharing, text-to-911, and limited satellite data with certain apps on compatible smartphones.

Additional satellite-to-smartphone capabilities continue to roll out.

The system is designed to work with many existing LTE smartphones through software updates, although compatibility depends on the wireless carrier and smartphone model.

For more than 40 years, cellular service has relied on ground-based towers.

Before long, the cell tower carrying our smartphone conversations and internet connections may be a satellite passing over Minnesota.


Illustration depicting how a smartphone connects to telecommunications and internet networks via a Starlink satellite, a gateway earth station, and a global fiber-optic network. The diagram is by Mark Ollig, with images created using ChatGPT 5.2, Google Gemini AI, and Perplexity AI.




Wednesday, March 4, 2026

From the Fourth Estate to the AI-driven Sixth Estate

@Mark Ollig

During the Middle Ages in Europe, society was divided into three main groups, called estates.

The First Estate consisted of the clergy, who were important for teaching morals and providing education.

The Second Estate included the nobles, who owned land and held military power.

The Third Estate consisted of everyone else, including merchants, tradespeople, and laborers, who made up most of the population but had limited influence.

By the 18th century, a new force, the Fourth Estate, emerged in the form of newspapers.

Their content spread across Europe and the American colonies through the printing press, shaping public opinion, challenging officials, and at times influencing government policy.

For much of the 20th century, the Fourth Estate, made up of newspapers, radio, and television, reported the news and helped shape the social agenda.

Behind the scenes, wire services used the telegraph and later the telephone to move news quickly, accelerating reporting and expanding journalism’s reach.

These outlets largely determined what the public saw, heard, and understood about the world.

Editors in print newsrooms, along with producers and anchors in radio and television, served as content gatekeepers.

They chose which stories made it into the morning paper or led the evening broadcast and controlled when audiences would see or hear them.

News organizations distributed their work through printing presses and over-the-air broadcast stations, keeping people informed about events in their communities, their state, the nation, and the world.

The Fourth Estate began to change as news and public information began moving to digital platforms.

One of the earliest examples of those platforms was the public dial-up Computerized Bulletin Board System, or CBBS, launched by Ward Christensen and Randy Suess in Chicago during the Great Blizzard of 1978.

A January blizzard made it impossible for their local computer club members to meet, so the two created a virtual meeting solution.

CBBS went live Feb. 16, 1978, using a single S-100 bus microcomputer and a 300-baud modem wired into a residential phone line.

The system allowed computer club members to leave messages for one another and upload files that others could download and comment on.

These ongoing exchanges became the model for future hobbyist bulletin board systems, or BBSs, run by system operators known as SysOps.

A BBS enabled users to post messages, share files, chat in real time, and create online communities.

By the late 1970s and 1980s, home computer users were dialing into large online server networks that offered many of the same features as local BBSs and, in some cases, additional services.

In 1979, CompuServe launched its dial-up service, offering email, forums, file libraries, news, and real-time chat.

Prodigy, founded Feb. 13, 1984, began as Trintex, a joint venture of CBS, IBM, and Sears that offered online news, shopping, and banking.

In 1985, GEnie and Quantum Link followed. GEnie was operated by General Electric Information Services in Maryland, and Quantum Link, based in Vienna, Virginia, was designed specifically for Commodore 64 and 128 computer users.

Unlike local free-to-access BBSs, these large commercial services operated from centralized computer host systems that supported thousands of dial-up users.

In 1992, I launched a local bulletin board system called WBBS Online from my home computer, using Major BBS software and modems connected to four telephone lines.

WBBS offered discussion forums, real-time chat rooms, private email, file libraries, and games.

In 1993, I expanded to six modem lines and switched to Galacticomm World software, which added a graphical, point-and-click interface.

BBSs were the early stage of what would come to be called the digital Fifth Estate. They enabled individuals to publish, respond, and debate directly in shared online venues.

BBS popularity lasted into the mid-1990s as users began moving to internet-only access and using web browsers to reach the World Wide Web.

Later, large mainstream social networking platforms emerged, including MySpace in 2003, Facebook in 2004, Twitter in 2006 (renamed X in 2023), and Bluesky, which opened to the public in February 2024.

However, the rise of artificial intelligence-generated content and algorithmic coordination is now altering this decentralized power structure.

These changes are creating a new Sixth Estate in which computer algorithms generate, rank, and distribute information at an unprecedented scale.

AI-driven systems now serve as primary gatekeepers, determining what information is collected, prioritized, and delivered to users.

This raises concerns that people may accept AI output as fact, even when the content is presented without clear citations or sources.

Florence, a former colleague from the Winsted Telephone Company, once told me, “Mark, always consider the source.”

The Sixth Estate uses artificial intelligence and algorithms to create, collect, organize, and quickly share information.

Algorithms can spread false information as easily as they can spread facts, often without clear sources or accountability.

Platforms like TikTok, X, Facebook, Instagram, YouTube, Reddit, and LinkedIn use AI-powered recommendation systems to decide what content each user sees.

These systems analyze engagement signals such as clicks, likes, watch time, follows, shares, and comments to predict what users are most likely to view.

As a result, automated algorithms, not human editors, now handle most of the distribution of information.
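
In highly simplified form, that kind of engagement-based ranking can be sketched as a weighted score (the signal names and weights below are invented for illustration and are not any platform’s actual formula):

```python
# Toy sketch of engagement-based feed ranking: score each post by a
# weighted sum of its signals, then show the highest scorers first.
# Weights and signals are illustrative, not a real platform's model.

WEIGHTS = {"clicks": 1.0, "likes": 2.0, "watch_seconds": 0.1,
           "shares": 4.0, "comments": 3.0}

def score(post: dict) -> float:
    """Predicted appeal: weighted sum of the post's engagement signals."""
    return sum(WEIGHTS[name] * post.get(name, 0) for name in WEIGHTS)

posts = [
    {"id": "cat_video", "clicks": 50, "likes": 30, "watch_seconds": 900},
    {"id": "news_story", "clicks": 80, "likes": 10, "comments": 5},
    {"id": "local_post", "likes": 5, "shares": 12, "comments": 8},
]

feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])  # → ['cat_video', 'news_story', 'local_post']
```

No human editor sees these choices; the ordering falls out of the weights and whatever behavior the platform has measured.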

AI models such as ChatGPT by OpenAI, Gemini by Google, and Claude 4.6 by Anthropic can draft and refine text, summarize information, and translate languages.

Some of these models can also generate images or video, and some can interpret images.

They can also answer questions, explain complex topics in plain language, analyze information, draft outlines and lesson plans, brainstorm ideas, and create step-by-step instructions.

AI assistants such as Microsoft Copilot and Perplexity use underlying AI models to provide conversational and search-based responses to queries.

Image-generation tools such as Midjourney, DALL-E, and Google’s Gemini image tools can create images from text prompts.

Video-generation platforms such as Runway, Pika, Synthesia, and OpenAI’s Sora can generate and edit AI-created video clips.

AI audio tools such as ElevenLabs, Descript’s Overdub, and Amazon Polly can generate realistic synthetic speech for narration, voice-overs, and dubbing.

Sixth Estate systems will blur the line between human and AI-generated content; human oversight is essential to maintain safeguards that protect accuracy, source attribution, and accountability.

Florence’s sage advice to “always consider the source” is more relevant today than ever.