
Thursday, March 26, 2026

From BBS chat to today’s instant messaging

@Mark Ollig

Before the World Wide Web became part of everyday life, many computer enthusiasts, including this humble columnist, operated a computer bulletin board system, or BBS.

A BBS uses a software program running on a computer connected to one or more dial-up modems plugged into telephone lines.

Users called the BBS’s telephone number using a communications program such as ProComm on their modem-equipped computers, then logged in over a dial-up telephone connection.

Once connected, they could read messages, exchange files and emails, or engage in real-time chat with other users on the BBS.

In 1992, I launched my own hobbyist bulletin board system, WBBS Online, using The Major BBS, a popular “gold standard” BBS software platform developed by Galacticomm.

WBBS stood for Winsted Bulletin Board System.

An electronic handshake between the caller’s modem and the BBS modem produced audible squeals, screeches, and tones as the two negotiated a data rate, sometimes reaching a cutting-edge 19.2 kbps.

In 1992, hitting 19.2 kbps was like breaking the sound barrier compared with the standard 2,400 or 9,600 bps modems.

Regular BBS users logged in every day, shared local news, traded software, discussed hobbies, and sometimes texted late into the night via real-time chat.

WBBS users in Winsted and Lester Prairie could dial in without long-distance charges because the two towns had toll-free calling between them.

On busy evenings, hearing the modem answer meant another local caller had connected and joined the conversation.

The technology has changed, but the camaraderie of participating in a virtual community has not.

Today, I regularly text chat with my kids (they are now middle-aged adults) and my siblings (we are forever young) through a Google Messages group instant-messaging (IM) thread on our smartphones.

Instant messages arrive seamlessly over wireless connections, with no dial-up modems, no phone lines, no busy signals, and no waiting.

IM is similar in spirit to the BBS message boards and real-time chat we used over dial-up telephone lines.

AOL Instant Messenger (AIM) came out in 1997 and introduced millions of people to real-time text chatting over the internet.

BlackBerry Messenger, known as BBM, launched in 2005 exclusively for BlackBerry cellphones, letting users chat in real time instead of sending standard text messages.

BBM drove people to buy BlackBerry phones just to access the service; it was eventually opened to the iOS and Android platforms in 2013.

By then, WhatsApp and iMessage had taken over the market, and BBM shut down May 31, 2019.

These instant-messaging services felt natural to early users because features such as contact lists, screen names, and visible online status were already familiar concepts.

Many had first experienced those features in a slower form on dial-up bulletin board systems, and later saw them arrive in real time on computers and cellphones.

Computer users accessing WBBS would check in regularly, respond to posts, and join chats with others logged in at the same time.

They shared typed messages and simple images called ASCII art, which used letters, numbers and keyboard symbols to form pictures.

Photos were downloaded as image files, often in “.jpg” format, and could take several minutes to fully appear on screen.

Many of us might remember watching a photo load slowly, line by line, from top to bottom, as the dial-up connection transferred the data.
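As a back-of-the-envelope illustration of why those photos crawled onto the screen (the 100 KB file size and the 10-bits-per-byte framing are my assumptions, not measurements), the arithmetic looks like this:

```python
# Rough dial-up transfer times for a photo. The 100 KB JPEG size and
# 10 bits per byte (8 data bits plus start/stop framing) are assumed
# values for illustration only.
FILE_BYTES = 100 * 1024
BITS_PER_BYTE = 10

for bps in (2_400, 9_600, 19_200):
    seconds = FILE_BYTES * BITS_PER_BYTE / bps
    print(f"{bps:>6} bps: {seconds / 60:.1f} minutes")
# at 2,400 bps, that works out to about 7 minutes for one photo
```

Even at a then-blazing 19.2 kbps, the same photo still took the better part of a minute to paint down the screen.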

Today’s instant-messaging group chats with friends and family can fill quickly with updates, videos, and photos.

Apps have replaced BBS dial-up software, and fiber-optic broadband and cellular data networks have replaced the slow modem connections that once tied up a household’s only phone line.

Yet the core behavior remains: people still gather in online digital spaces to share messages and stay connected.

For those who were around back then, a BBS was likely the first taste of online communication.

That early experience paved the way for social media and messaging platforms.

I can still clearly remember the sound of a modem handshaking late at night, letting me know another computer user was connecting to WBBS Online.

It meant someone else was out there, sitting at their keyboard and joining our small virtual community.

Today’s instant messaging is simply a version of the local virtual communities we started decades ago in dial-up BBS chat rooms.

Thursday, March 19, 2026

The evolution of Google’s ‘Nano Banana’

@Mark Ollig

On Feb. 26 of this year, Nano Banana 2 became Google’s latest artificial intelligence (AI) image creation and editing tool, powered by the Gemini system.

Nano Banana 2 allows users to create visuals quickly using Gemini 3.1 Flash Image technology.

The system quickly became available across Google services, allowing users to generate high-quality images in seconds.

The Gemini app and Google Messages now provide access to the technology. Developers and businesses can explore the system through Google AI Studio and Vertex AI.

Since its release, Nano Banana 2 has been woven into Google’s broader ecosystem, from consumer apps to developer platforms.

The name “Nano Banana” has an unusual origin story, one that stands out in the world of technology branding.

Naina Raisinghani, a Google DeepMind product manager, created the placeholder name by combining her nicknames “Nano” and “Banana.”

She submitted the model anonymously to LM Arena (Large Model Arena), a benchmarking site to evaluate artificial intelligence systems.

When the model quickly climbed to the top of the leaderboard, the name went viral.

Google ultimately decided to keep the unusual name and adopted the banana emoji (🍌) as the feature’s signature icon.

Despite the playful branding, Nano Banana 2 represents a serious effort to make advanced digital creation tools widely available to everyone.

Google’s goal is straightforward: make AI a practical, creative partner capable of producing clear, lifelike images while maintaining visual consistency across characters and objects.

You provide the idea, and the AI handles the technical execution.

Today’s accompanying illustration provides a roadmap through the complex inner workings of the Gemini 3.1 Flash Image system, breaking down the advanced technology into four digestible stages:

Zone one: human intention – This is the starting point, where you provide the “vision” for your project, using either a voice command or a typed request to describe the image you want to create.

Zone two: the core processing engine – Often called the “engine room,” this is where the Gemini 3.1 Flash technology analyzes your instructions and builds the high-definition image.

Zone three: content integrity and provenance – In this stage, the system verifies real-world details through search grounding and embeds a SynthID digital watermark to identify the image as AI-generated.

Zone four: output and visualization – The final result is a polished 4K graphic that is automatically synced to your devices and professional tools like Google Workspace.
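As a mental model only (these function bodies are placeholders I wrote for illustration, not Google’s actual code), the four zones can be read as a simple pipeline, each stage handing its result to the next:

```python
# Illustrative only: stage names follow the four zones above, and every
# function body is a stand-in, not Google's implementation.

def capture_intent(prompt: str) -> dict:
    """Zone one: turn a spoken or typed request into a structured job."""
    return {"prompt": prompt, "resolution": "4K"}

def generate_image(job: dict) -> dict:
    """Zone two: the model renders the image (a placeholder string here)."""
    job["image"] = f"<rendering of: {job['prompt']}>"
    return job

def add_provenance(job: dict) -> dict:
    """Zone three: verify real-world details and embed a watermark marker."""
    job["watermark"] = "SynthID"
    return job

def deliver(job: dict) -> str:
    """Zone four: hand the finished asset to the user's apps and tools."""
    return f"{job['image']} [{job['resolution']}, {job['watermark']}]"

result = deliver(add_provenance(generate_image(
    capture_intent("mid-century modern kitchen"))))
print(result)
```

The point of the sketch is the flow: intention in at one end, a watermarked, high-resolution asset out at the other.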

To bring these visions to life, Google uses its Veo video model to animate the high-resolution images generated by Nano Banana 2, turning a static 4K design into a cinematic sequence with realistic motion and lighting.

Unlike earlier Gemini tools that required long keyword lists, Nano Banana 2 allows users to describe their request in natural language.

Instead of writing complex prompts, users simply describe a scene to Nano Banana 2 as they might to a human designer.

The system interprets the request, generates a draft image, and allows the user to refine details through a conversational process.

For example, a user designing a new kitchen might say, “Change the style to mid-century modern,” and the system updates only those elements while preserving the rest of the composition.

A major challenge with earlier AI image tools involved visual continuity.

Characters or objects would often change appearance between frames, something I experienced firsthand, leading to brief episodes of frustration.

But I digress.

Nano Banana 2 addresses this issue by tracking up to five characters and 14 individual objects across a sequence of images.

This capability makes the system useful for storyboards, advertising campaigns, and illustrated narratives that require consistent character identity.

Another update addresses the ongoing issue with AI-generated text.

Nano Banana 2 treats typography as structured data rather than just visual pixels. This lets the system create clear, readable text in many languages.

Users can put phrases in quotation marks to make posters, diagrams, or signs with correct spelling.

One of the major advances in the 2026 release is multimodal operation, meaning the system can interpret both images and text within the same reasoning framework.

This allows more realistic image-to-image editing.

For example, a user can upload a photograph of a kitchen and instruct the system to “render this in a midcentury modern style while keeping the cabinet layout unchanged.”

The model adjusts the visual style while preserving the room’s physical structure.

Nano Banana is also compatible with Google Lens. Users can tap the Nano Banana button to view and interpret their environment using a smartphone camera.

The result functions as a mobile design assistant capable of visualizing renovations, décor changes, or clothing variations in augmented reality (AR).

Another important step forward involves resolution.

Nano Banana 2 now supports 4K image output (roughly 4,000 pixels across), allowing concept images generated in seconds to become print-quality graphics.

Google has integrated these capabilities into Google Workspace and Google Ads.

This allows marketing teams to maintain brand consistency across large collections of AI-generated images.

The approach helps Google compete with other generative-image platforms such as Adobe Firefly and Midjourney.

To improve factual accuracy, Nano Banana 2 also incorporates search grounding.

When users request a specific landmark or brand, the system consults Google Search to ensure the rendering reflects real-world information.

The process for how an idea becomes a finished digital image begins with a user request, which may involve uploading a photograph or entering a written description.

That request is sent to the Gemini 3.1 processing system, where the model analyzes the instructions and constructs the image.

During generation, the system may query Google Search to verify the visual accuracy of real-world objects.

Before delivery, the finished image receives a SynthID watermark.

SynthID, created by Google DeepMind, is a hidden digital watermark that embeds imperceptible signals into images, enabling systems to detect AI-generated content.

Nano Banana 2 also supports C2PA (Coalition for Content Provenance and Authenticity) credentials.

C2PA provides a verifiable digital record showing how an image was created without altering its visible appearance.

The final result is a high-resolution image ready for tablet, smartphone, or workstation displays, as well as for advertising presentations or printed media.

Nano Banana 2 streamlines connection to creative tools, letting users focus on their artistic vision. An innovative idea quickly turns into a finished design.

I have used Nano Banana 2 to create illustrations for my columns and have been pleasantly surprised by its capabilities.

Visit https://gemini.google/overview/image-generation/ to learn more and to try Nano Banana.







Thursday, March 12, 2026

Satellite ‘cellular towers’ in Earth orbit

@Mark Ollig

Cellular networks have relied on ground-based cell towers, but SpaceX’s Starlink Direct to Cell now brings this model to space.

This service works with regular smartphones, not just satellite phones.

Future versions are expected to move beyond Long Term Evolution (LTE) as the Third Generation Partnership Project (3GPP) advances non-terrestrial network standards.

This international standards body is developing specifications for 5G and eventually future 6G concepts.

Today’s model supports text messaging and limited data through certain apps, with broader capabilities planned as the network evolves.

Our smartphones already rely on the Global Positioning System (GPS) for navigation signals from satellites orbiting about 12,550 miles above Earth.

However, GPS is a one-way system, and smartphones do not transmit anything back to those satellites.

Alongside SpaceX, companies like AST SpaceMobile and Lynk Global are working on satellite systems that connect directly to regular smartphones.

AST SpaceMobile claims its BlueBird satellites can provide broadband directly to regular phones, while Lynk offers a similar satellite-to-standard-mobile-phone service.

Major American companies like SpaceX and AT&T are also driving advances in satellite connectivity and telecommunications services for consumers.

Mobile carriers worldwide are exploring similar partnerships.

Orange is partnering with AST SpaceMobile in Europe and Africa, while Deutsche Telekom collaborates with Starlink in Europe.

These moves suggest that “cell towers in space” could soon become a normal layer of the global wireless network.

By adding satellites to the Radio Access Network (RAN), carriers can bring mobile coverage to remote areas using non-terrestrial networks.

Compatible smartphones can stay connected even when far from regular cell towers because the signal comes from space.

Satellite cellular lets users text, call, and access data, though some functions remain limited or are still being deployed by providers.

Besides smartphones, these systems can connect vehicles, farm equipment, security sensors, and other remote devices.

When a Starlink satellite receives data from a smartphone, it sends the signal back to Earth through a ground gateway station connected to the telecommunications network.

The satellite uses a high-capacity radio link to reach the gateway.

From there, the data travels through land-based fiber networks or through a carrier’s core network, depending on the service path.

Starlink works with carriers like T-Mobile to bring mobile service to areas where regular cell towers cannot reach.

When a smartphone connects through the satellite system, it still uses the carrier’s licensed cellular spectrum.

The satellite functions as part of the radio access network while the carrier’s terrestrial network handles authentication, mobility management, emergency communications, and billing.

Many people assume a satellite-to-phone system simply passes signals between a smartphone and the ground.

In reality, the satellite acts much like a cellular tower in Earth orbit.

Instead of connecting to a tower along the highway, the smartphone connects to a satellite that provides the radio link.

The signal then passes through ground gateways into the carrier network and the wider internet.

From there, data moves through the carrier’s core network and major telecommunications and internet interconnection hubs, where networks exchange traffic.

The return signal follows the same path in reverse.

It travels from an internet server through fiber networks and carrier core systems to the gateway, then to the satellite, and finally back to the smartphone.

Since these satellites operate in Low Earth Orbit (LEO), signal delay, or latency, can fall into the tens of milliseconds under favorable conditions.

This is similar to many land-based broadband connections.
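A quick sanity check on that latency claim (the 550 km altitude is my assumption for a typical LEO satellite; real signal paths are slanted and add gateway routing and processing delays on top of this):

```python
# Best-case propagation delay for a phone-satellite-gateway round trip,
# assuming a satellite directly overhead at roughly 550 km altitude.
C = 299_792_458.0        # speed of light in vacuum, m/s
ALTITUDE_M = 550_000.0   # assumed LEO altitude, ~550 km

one_way_s = ALTITUDE_M / C                    # phone straight up to satellite
radio_round_trip_ms = 4 * one_way_s * 1000    # up and down, both directions
print(f"best-case radio path: {radio_round_trip_ms:.1f} ms")
```

The radio legs alone come to only a few milliseconds; slant angles, ground routing, and processing push the total into the tens of milliseconds the text describes.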

Future upgrades are expected to depend on larger satellites equipped with advanced antennas that steer radio beams toward smartphones and other connected devices on Earth.

SpaceX says the next generation of Starlink satellites is intended to dramatically increase network capacity.

This could support applications such as video streaming, cloud services, and other broadband uses.

To launch these larger spacecraft, SpaceX plans to use its Starship launch system powered by the Super Heavy rocket booster to expand the Starlink network.

Today’s column features a diagram I researched and created using Microsoft software.

Images were generated by ChatGPT 5.2 using an uploaded photo of me as a reference, along with help from Google’s Gemini AI and Perplexity AI.

At TDS Telecom, I made engineering diagrams in Microsoft Visio.

But I digress.

Today’s diagram shows how a smartphone can reach the global internet through satellite and terrestrial telecommunications infrastructure.

At the top of the image, a constellation of satellites circles Earth in Low Earth orbit.

A satellite connects to a Starlink Gateway Earth Station through feeder links operating in assigned portions of the Ka-band microwave spectrum.

These links use uplink frequencies near 27.5 to 30 gigahertz and downlink frequencies near 17.8 to 20.2 gigahertz.

Equipped with large parabolic antennas, the gateway passes the signal into the terrestrial telecommunications network.

After reaching the gateway, the signal enters land-based fiber-optic networks and travels across the country and around the world through major telecommunications and internet interconnection hubs.

The diagram also shows a second communication path where a satellite connects directly with a smartphone using licensed Long Term Evolution (LTE) cellular frequencies.

In this role, the satellite functions as a space-based cellular tower.

It links the smartphone to the carrier’s packet core network, where authentication, mobility management, and routing occur before traffic reaches the broader internet.

Using columnist prerogative, I placed myself in the right foreground of the diagram, holding a smartphone to represent the network’s end user.

The image is intended to show that this satellite system will complement, rather than replace, terrestrial networks by connecting smartphones to satellites and then into global fiber and carrier infrastructure.

Details about Starlink’s satellite-to-cell service are available on the Starlink website.

As of early this year, T-Mobile’s T-Satellite service supports texting, location sharing, text-to-911, and limited satellite data with certain apps on compatible smartphones.

Additional satellite-to-smartphone capabilities continue to roll out.

The system is designed to work with many existing LTE smartphones through software updates, although compatibility depends on the wireless carrier and smartphone model.

For more than 40 years, cellular service has relied on ground-based towers.

Before long, the “cell tower” carrying our smartphone conversations and internet connections may be a satellite passing over Minnesota.


Illustration depicting how a smartphone connects to telecommunications and internet networks via a Starlink satellite, a gateway earth station, and a global fiber-optic network. The diagram is by Mark Ollig, with images created using ChatGPT 5.2, Google Gemini AI, and Perplexity AI.




Wednesday, March 4, 2026

From the Fourth Estate to the AI-driven Sixth Estate

@Mark Ollig

During the Middle Ages in Europe, society was divided into three main groups, called estates.

The First Estate consisted of the clergy, who were important for teaching morals and providing education.

The Second Estate included the nobles, who owned land and held military power.

The Third Estate consisted of everyone else, including merchants, tradespeople, and laborers, who made up most of the population but had limited influence.

By the 18th century, a new force, the Fourth Estate, emerged in the form of newspapers.

Their content spread across Europe and the American colonies through the printing press, shaping public opinion, challenging officials and at times influencing government policy.

For much of the 20th century, the Fourth Estate, made up of newspapers, radio, and television, reported the news and helped shape the social agenda.

Behind the scenes, wire services used the telegraph and later the telephone to move news quickly, accelerating reporting and expanding journalism’s reach.

These outlets largely determined what the public saw, heard, and understood about the world.

Editors in print newsrooms, along with producers and anchors in radio and television, served as content gatekeepers.

They chose which stories made it into the morning paper or led the evening broadcast and controlled when audiences would see or hear them.

News organizations distributed their work through printing presses and over-the-air broadcast stations, keeping people informed about events in their communities, their state, the nation, and the world.

The Fourth Estate began to change as news and public information began moving to digital platforms.

One of the earliest examples of those platforms was the public dial-up Computerized Bulletin Board System, or CBBS, launched by Ward Christensen and Randy Suess in Chicago during the Great Blizzard of 1978.

A January blizzard made it impossible for their local computer club members to meet, so the two created a virtual meeting solution.

CBBS went live Feb. 16, using a single S-100 bus microcomputer and a 300-baud modem wired into a residential phone line.

The system allowed computer club members to leave messages for one another and upload files that others could download and comment on.

These ongoing exchanges became the model for future hobbyist bulletin board systems, or BBSs, run by system operators known as SysOps.

A BBS enabled users to post messages, share files, chat in real time, and create online communities.

By the late 1970s and 1980s, home computer users were dialing into large online server networks that offered many of the same features as local BBSs and, in some cases, additional services.

In 1979, CompuServe launched its dial-up service, offering email, forums, file libraries, news, and real-time chat.

Prodigy began Feb. 13, 1984, as Trintex, a joint venture of CBS, IBM, and Sears that offered online news, shopping, and banking.

In 1985, GEnie and Quantum Link followed. GEnie was operated by General Electric Information Services in Maryland, and Quantum Link, based in Vienna, VA, was designed specifically for Commodore 64 and 128 computer users.

Unlike local free-to-access BBSs, these large commercial services operated from centralized computer host systems that supported thousands of dial-up users.

In 1992, I launched a local bulletin board system called WBBS Online from my home computer, using Major BBS software and modems connected to four telephone lines.

WBBS offered discussion forums, real-time chat rooms, private email, file libraries, and games.

In 1993, I expanded to six modem lines and switched to Galacticomm World software, which added a graphical, point-and-click interface.

BBSs were the early stage of what would come to be called the digital Fifth Estate. They enabled individuals to publish, respond, and debate directly in shared online venues.

BBS popularity lasted into the mid-1990s as users began moving to internet-only access and using web browsers to reach the World Wide Web.

Later, large mainstream social networking platforms emerged, including MySpace in 2003, Facebook in 2004, Twitter in 2006 (renamed X in 2023), and Bluesky, which opened to the public in February 2024.

However, the rise of artificial intelligence-generated content and algorithmic coordination is now altering this decentralized power structure.

These changes are creating a new Sixth Estate in which computer algorithms generate, rank, and distribute information at an unprecedented scale.

AI-driven systems now serve as primary gatekeepers, determining what information is collected, prioritized, and delivered to users.

This raises concerns that people may accept AI output as fact, even when the content is presented without clear citations or sources.

Florence, a former colleague from the Winsted Telephone Company, once told me, “Mark, always consider the source.”

The Sixth Estate uses artificial intelligence and algorithms to create, collect, organize, and quickly share information.

Algorithms can spread false information as easily as they can spread facts, often without clear sources or accountability.

Platforms like TikTok, X, Facebook, Instagram, YouTube, Reddit, and LinkedIn use AI-powered recommendation systems to decide what content each user sees.

These systems analyze engagement signals such as clicks, likes, watch time, follows, shares, and comments to predict what users are most likely to view.

As a result, automated algorithms, not human editors, now handle most of the distribution of information.
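As a toy sketch of that idea (the signal names, weights, and numbers here are invented for illustration; no platform’s real ranking model is anywhere near this simple), an engagement-based feed ranker boils down to scoring and sorting:

```python
# Toy feed ranker: score posts by weighted engagement signals, then
# sort. All weights and numbers are invented for illustration only.
WEIGHTS = {"clicks": 1.0, "likes": 2.0, "watch_seconds": 0.1, "shares": 3.0}

posts = [
    {"id": "A", "clicks": 10, "likes": 4, "watch_seconds": 120, "shares": 0},
    {"id": "B", "clicks": 2, "likes": 1, "watch_seconds": 300, "shares": 2},
    {"id": "C", "clicks": 25, "likes": 0, "watch_seconds": 15, "shares": 0},
]

def score(post: dict) -> float:
    """Weighted sum of a post's engagement signals."""
    return sum(WEIGHTS[k] * post[k] for k in WEIGHTS)

feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])  # prints ['B', 'A', 'C']
```

Note what never appears in the loop: an editor. The ordering falls out of the weights, which is exactly the gatekeeping shift the Sixth Estate represents.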

AI models such as ChatGPT by OpenAI, Gemini by Google, and Claude 4.6 by Anthropic can draft and refine text, summarize information, and translate languages.

Some of these models also can generate images or create video, and some can interpret images.

They also can answer questions, explain complex topics in plain language, analyze information, draft outlines and lesson plans, brainstorm ideas and create step-by-step instructions.

AI assistants such as Microsoft Copilot and Perplexity use underlying AI models to provide conversational and search-based responses to queries.

Image-generation tools such as Midjourney, DALL-E, and Google’s Gemini image tools can create images from text prompts.

Video-generation platforms such as Runway, Pika, Synthesia, and OpenAI’s Sora can generate and edit AI-created video clips.

AI audio tools such as ElevenLabs, Descript’s Overdub, and Amazon Polly can generate realistic synthetic speech for narration, voice-overs, and dubbing.

Sixth Estate systems will blur the line between human and AI-generated content; human oversight is essential to maintain safeguards that protect accuracy, source attribution, and accountability.

Florence’s sage advice to “always consider the source” is more relevant today than ever.




Friday, February 27, 2026

She helped to bring astronauts home safely

@Mark Ollig

I recently read the Nov. 14, 2018, NASA Johnson Space Center oral history interview of Frances Marian “Poppy” Northcutt by Jennifer Ross-Nazzal.

The interview provides a detailed account of her work on the Apollo program and her experience as the first woman to serve in an engineering role in NASA’s Mission Control.

Northcutt, born Aug. 10, 1943, shares that her older brother gave her the nickname “Poppy” from a fairy tale in a favorite Little Golden Book he read.

Northcutt studied mathematics at the University of Texas and later earned a law degree from the University of Houston Law Center.

A job referral led her to TRW Systems, a NASA contractor in Houston, TX, where she started as a “computress,” handling data analysis and programming.

Northcutt noted similarities between her experience and that of the women mathematicians depicted in the 2016 book “Hidden Figures” and the film of the same name, which told the story of NASA’s early human computers.

Despite feeling intimidated by colleagues with advanced degrees from elite schools, she quickly proved her abilities and became a valuable team member within months.

Northcutt was part of the lunar return-to-Earth trajectory program, initially called the “abort program.”

Its main challenge was the three-body problem involving Earth, the moon, and the spacecraft.

Unlike returns from Earth orbit, coming home from the moon meant linking several curved trajectory segments, switching between Earth’s and the moon’s gravity, and relying on powerful computers to map the way back.

She noted that lunar return calculations could not be done with slide rules alone and required repeated runs on powerful mainframe computers.
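To give a flavor of why mainframes were needed, here is a toy numerical integration of a spacecraft pulled by both Earth and the moon. It is strictly illustrative and nothing like the Apollo software: the moon is held fixed, the step size and starting orbit are my assumptions, and the real return-to-Earth program handled far more (thrust, timing, reentry corridors).

```python
import math

# Toy two-attractor gravity integration -- illustrative only, not the
# Apollo return-to-Earth program.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # kg
M_MOON = 7.348e22      # kg
D_MOON = 3.844e8       # mean Earth-moon distance, m (moon held fixed)

def accel(x, y):
    """Acceleration from Earth (at origin) and a fixed moon on +x axis."""
    ax = ay = 0.0
    for (bx, by), m in (((0.0, 0.0), M_EARTH), ((D_MOON, 0.0), M_MOON)):
        dx, dy = bx - x, by - y
        r3 = (dx * dx + dy * dy) ** 1.5
        ax += G * m * dx / r3
        ay += G * m * dy / r3
    return ax, ay

def integrate(x, y, vx, vy, dt=10.0, steps=10_000):
    """Velocity-Verlet integration of the spacecraft state."""
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
    return x, y, vx, vy

# Start in a circular low Earth orbit at ~200 km altitude.
r0 = 6.571e6
v0 = math.sqrt(G * M_EARTH / r0)   # circular orbital speed, ~7.8 km/s
state = integrate(r0, 0.0, 0.0, v0)
```

Even this stripped-down loop takes thousands of arithmetic steps per simulated day, and the trajectory teams reran far richer versions again and again as conditions changed, which is work no slide rule could keep up with.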

Northcutt worked on reverse-engineering the trajectory software to understand it fully.

Her team, usually consisting of three to eight members, focused on return-to-Earth analysis and developed a flexible program for various mission conditions, which replaced a competing trajectory program.

Northcutt’s mastery of the code set her apart and helped her advance quickly, as she supported the retrofire officers in the staff support room (SSR-1) during the Apollo 8 mission around the moon.

Apollo 8’s most dramatic moment came when the spacecraft passed behind the moon and communications were lost during the lunar orbit insertion burn.

The wait for the signal was stressful, as it determined whether the burn was successful or if the crew was on a crash course to the lunar surface.

Northcutt recalled the tense silence and countdown clock until the spacecraft finally responded, confirming it had successfully entered lunar orbit and traveled around the moon.

She remembered drawing widespread media attention as the first woman to serve in an engineering role inside NASA’s Mission Control during Apollo 8.

In the interview, Northcutt said the press treated her as a novelty and more as a spectacle than a professional.

She felt intense pressure to perform flawlessly, knowing any mistake could reinforce gender stereotypes.

Northcutt also faced systemic discrimination as an hourly employee, with wage‑hour rules limiting her pay even when she worked extra hours, until her supervisor fought to have her promoted from “computress” to “technical staff.”

While in Mission Control, she often felt under scrutiny, especially after she learned a hidden camera was broadcasting her image without her knowledge.

Northcutt had support from Mission Control officers like John Llewellyn, who valued her technical expertise.

She mentioned receiving fan letters and even marriage proposals from around the world, including notes addressed simply to “Poppy, Space Center,” which still somehow found their way to her desk.

After the oxygen tank explosion aboard Apollo 13 April 13, 1970, Northcutt drew on her return‑to‑Earth trajectory work to help guide efforts to place the spacecraft back onto a free‑return path so it could loop around the moon, conserve as much fuel as possible, and slingshot home.

Mission Control adopted this strategy to conserve fuel and avoid the risks of a direct abort, which would have required untested maneuvers and much more fuel to be used with very little room for error.

She later noted that the most difficult work fell to the engineers struggling in real time to keep the environmental, life‑support, and power systems functioning.

Northcutt was part of the Apollo 13 Mission Operations Team that received the Presidential Medal of Freedom for developing the emergency procedures that helped bring the crew home after the oxygen tank explosion.

After Apollo, she spent a short period working on space shuttle development before moving to California, where she contributed to TRW’s antiballistic missile defense programs.

Northcutt stayed involved in the broader aerospace world connected to NASA into the early 1970s.

The Apollo program concluded with Apollo 17 in December 1972.

“I’m just full of pride, not about myself so much . . . It is about the whole achievement, that it’s a teamwork,” Northcutt said during an Oct. 30, 2024, KTRK-TV interview in Houston.

“I mean, there’s nothing, there’s no bigger team than that in terms of that kind of enterprise. So just a lot of pride about the accomplishments of that team in doing what President Kennedy challenged us to do. And then we actually did it,” she said.

Reflecting on her greatest accomplishment, Northcutt stated, “We never lost a customer. They all came home.”

Northcutt, now 82, is recognized in NASA’s oral history as the first woman to work as an engineer in Mission Control.

Her work played an important role in helping to bring astronauts home safely.

The full edited transcript of Northcutt’s interview is available on NASA’s website at this shortened link: https://www.nasa.gov/wp-content/uploads/2025/08/northcuttfm-11-14-18.pdf?emrc=8a7b05



Thursday, February 19, 2026

Astroflies: First living beings to reach space

@Mark Ollig

In August 1945, German-made V-2 rockets and their components were brought to the White Sands Proving Ground (later renamed White Sands Missile Range) in New Mexico.

The V-2, or Vergeltungswaffe 2 (“Vengeance Weapon 2”), was the world’s first rocket-powered ballistic missile, standing 46 feet tall and weighing nearly 28,000 pounds when fully fueled.

Developed at Peenemünde in northeastern Germany by a team led by Wernher von Braun, it first flew successfully Oct. 3, 1942, and became operational in 1944.

Equipped with an internal guidance system, this supersonic weapon had a flight range of roughly 200 miles and carried an explosive warhead of about 2,200 pounds.

The V-2 plunged toward its target at around 3,400 mph.

For those curious, the V-1 (Vergeltungswaffe 1), aka “buzz bomb,” was the world’s first operational cruise missile.

Designed under engineer Robert Lusser at Fieseler, it entered service in June 1944, with a flight range of about 160 miles.

The V-1 carried a roughly 1,870-pound high-explosive warhead, and typically flew at close to 400 mph toward its target before diving in.

After WWII, the United States acquired V-2 rockets, parts, and technical documentation, shipping about 300 freight-car loads of components to the new White Sands Proving Ground in New Mexico.

There, under Project Hermes, Army teams and General Electric personnel inventoried, reworked, assembled, modified, and launched V-2s for military testing and high-altitude scientific research.

In early 1946, a plan developed by Harvard University and the US Naval Research Laboratory was selected to send living organisms from Earth into space aboard a V-2 rocket.

The V-2 flight that carried the first living animals into space was V-2 No. 20, also known as the Blossom 1 mission.

It launched from White Sands Missile Range’s Launch Complex 33 Feb. 20, 1947, under US Army oversight.

Those first living animals from Earth to reach space and return alive were . . . drum roll . . . fruit flies (yes, I was surprised too).

Fruit flies, or Drosophila melanogaster, may seem like pesky insects, but they are highly valuable for scientific research.

Scientists chose them for early spaceflights because their genetics were already well mapped and they have only four pairs of chromosomes, which made it easier to spot radiation-related changes after recovery.

Their rapid life cycle allows researchers to study radiation effects in both the original spacefaring fruit flies and their offspring.

By the 1940s, fruit flies were already essential to genetics research, making them a practical choice for early biological experiments.

Scientists wanted to know whether living organisms could survive exposure to radiation at very high altitudes and the violent forces of a rocket launch before humans attempted space flight.

Along with fruit flies, the V-2 payload carried plant material like corn and other seeds to track visible genetic mutations in future generations.

It also included extra seeds so scientists could study whether radiation might impact the quality of future crops.

This allowed researchers to compare the effects on both animal and plant life during the same flight.

Fruit flies were placed in an ejectable metal canister built to protect them during the V-2 flight.

This payload canister kept the insects safe from the vacuum, extreme pressure shifts, and mechanical forces during ascent, the short time at peak altitude, and the descent.

The Blossom 1 V-2 rocket reached space Feb. 20, 1947, climbing to about 68 miles above Earth in roughly three minutes, 10 seconds.

That altitude placed the rocket roughly six miles above the commonly cited 62-mile Kármán line.

Near its peak altitude, the rocket ejected the recoverable payload canister carrying the fruit flies, which I’ve nicknamed “astroflies.”

As the payload canister began its descent, a small ribbon parachute deployed first to absorb the initial deceleration and aerodynamic shock and to stabilize it in the thin upper atmosphere.

A larger parachute then opened at about 30 miles for the remainder of the descent.

Army documentation from White Sands states that the payload canister “descended for 50 minutes and, with the aid of radar, was recovered immediately.”

Drifting slowly through the thin upper air, then descending gradually as the atmosphere thickened, stretched the canister's return to about 50 minutes.

“The parachute was ejected and functioned perfectly,” Commanding Officer Lt. Col. Harold R. Turner later said.

After the payload canister was recovered, scientists examined the fruit flies for possible radiation effects.

“Analysis made by Harvard on recovered seeds and flies has shown that no detectable changes are produced by the radiation,” wrote US Naval Research Laboratory nuclear physicist Ernst H. Krause.

The flight proved that living organisms could survive a rocket launch, reach space, and return safely to Earth.

At the time, some scientists feared acceleration, vibration, or radiation might make survival impossible for living organisms during a space flight.

In 1947, fruit flies answered one important question: Could life leave Earth, reach space, and come back alive?

We learned the answer was yes.

Today, NASA continues to send fruit flies to the International Space Station for testing and observation, exploring how space affects biology over time.

NASA notes that fruit flies share many fundamental genetic and cellular characteristics with humans, with approximately 75% of human disease genes having counterparts in fruit flies.

This makes them a small yet efficient model for studying changes related to the immune system, heart function, and other bodily systems in space.

Let’s pause to acknowledge those spacefaring “astroflies,” the first living beings to journey into space and return alive.


Thursday, February 12, 2026

How frequency modulation conquered static

@Mark Ollig 

In the 1920s, radio evolved from small experiments into a powerful new medium for sharing information and entertainment wirelessly across America.

Westinghouse station 8ZZ in Pittsburgh, the direct predecessor of KDKA, broadcast live the results of Warren G. Harding's presidential victory over James M. Cox at about 545 kHz Nov. 2, 1920, before newspapers could report them.

The station aired the first Major League Baseball play-by-play broadcast from Forbes Field in Pittsburgh Aug. 5, 1921, using a telephone as a microphone; in case you were interested, the Pittsburgh Pirates beat the Philadelphia Phillies 8 to 5.

According to a Feb. 12, 1922, article in the Trenton, New Jersey, Sunday Times-Advertiser, “President Harding is the latest notable wireless fan,” noting that “a radio phone has been placed in the presidential study in the White House.”

“The President proposes to make excellent use of the wireless and is planning to give some part of each of his busy days to listening-in. A radiophone is also to be installed in the press room in the White House,” the article stated.

WLAG in Minneapolis began broadcasting Sept. 4, 1922, at 720 kHz.

It was acquired by Washburn‑Crosby Co. and became WCCO Oct. 2, 1924.

WCCO moved to 810 kHz in 1928 and transmitted at 50,000 watts by September 1932, reaching much of the Upper Midwest.

In 1941, it moved to 830 kHz and was long promoted as “the station that serves the nation.”

Nationally, radio saw the launch of the National Broadcasting Company (NBC) in 1926 and the Columbia Broadcasting System (CBS) in 1927.

Radio owes much to Edwin Howard Armstrong, an American electrical engineer and inventor born Dec. 18, 1890, in New York City.

Armstrong was fascinated by gadgets from a young age, building a homemade antenna tower and conducting electrical experiments.

In 1912, while studying at Columbia University, Armstrong discovered that feeding some of a radio tube’s output back into its input could make weak signals regenerate into much stronger ones.

In 1914, Armstrong was granted US Patent 1,113,149, “Wireless receiving system,” for his regeneration design.

However, Lee de Forest, the American inventor who had patented the Audion tube in 1907, claimed regeneration was his own discovery and argued that Armstrong should not own the rights or collect royalties.

This argument over who really invented regeneration in a circuit ended in 1934 when the US Supreme Court sided with de Forest.

Many engineers believed Armstrong was the one who had actually turned regeneration into a practical, working radio circuit.

In 1918, while serving in the US Army Signal Corps during World War I, Armstrong developed the superheterodyne receiver.

His design mixed the incoming station signal with a locally generated signal and shifted the result to a fixed intermediate frequency that was easier to amplify and filter.

After amplification at that intermediate frequency, the receiver recovered the audio signal.

This approach produced clearer voices and music with less “bleed-over” from nearby stations, and it remains the basic architecture of most modern radio, television, radar, and cellular receivers.
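The mixing step behind the superheterodyne can be sketched numerically. The Python snippet below is a simplified illustration, not Armstrong's circuit; the frequencies, sample rate, and the `tone_power` helper are hypothetical values chosen for convenience. Multiplying the incoming tone by a local oscillator produces energy at the sum and difference frequencies, and the difference serves as the fixed intermediate frequency.

```python
import math

# Hypothetical example frequencies (in kHz), not Armstrong's values.
F_RF = 1000.0        # incoming station signal
F_LO = 1455.0        # local oscillator
F_IF = F_LO - F_RF   # 455 kHz difference, the intermediate frequency

RATE = 10000.0       # sample rate in kHz, well above the highest frequency
N = 10000            # sample count (an integer number of cycles of each tone)

def tone_power(samples, freq_khz):
    """Estimate signal power at freq_khz by correlating with sine and cosine."""
    c = sum(v * math.cos(2 * math.pi * freq_khz * i / RATE)
            for i, v in enumerate(samples))
    s = sum(v * math.sin(2 * math.pi * freq_khz * i / RATE)
            for i, v in enumerate(samples))
    return (c * c + s * s) / (N * N)

# The mixer: multiply the incoming signal by the local oscillator.
mixed = [math.cos(2 * math.pi * F_RF * i / RATE) *
         math.cos(2 * math.pi * F_LO * i / RATE) for i in range(N)]

# The product carries energy at the difference (455 kHz) and sum (2,455 kHz)
# frequencies, and essentially none at unrelated ones such as 700 kHz.
difference = tone_power(mixed, F_IF)        # strong
sum_freq = tone_power(mixed, F_LO + F_RF)   # strong
unrelated = tone_power(mixed, 700.0)        # near zero
```

Because the intermediate frequency is fixed, the amplifier and filter stages can be built once and tuned precisely; changing stations only means changing the local oscillator, which is the key insight that still shapes modern receivers.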

Starting in the 1920s, home radios tuned into AM stations, typically from about 550 to 1,500 kHz, until the AM band was extended to 1,600 kHz in 1941.

In the early 1930s, Armstrong engineered wide-band frequency modulation, or FM, which changed the signal’s frequency rather than its amplitude, making it much less susceptible to electrical noise.
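That amplitude-versus-frequency distinction can be illustrated in a few lines of Python. This is a sketch with made-up example values (the carrier, audio tone, and deviation figures are hypothetical, not anything Armstrong broadcast): the FM waveform carries the audio in its phase, so its envelope stays constant, while the AM waveform's envelope rises and falls with the audio, which is exactly where impulse noise does its damage.

```python
import math

# Hypothetical example values (in kHz), chosen only for illustration.
F_CARRIER = 100.0   # carrier frequency
F_AUDIO = 1.0       # audio tone being transmitted
DEVIATION = 5.0     # peak frequency deviation for FM
RATE = 1000.0       # sample rate
N = 2000

beta = DEVIATION / F_AUDIO  # FM modulation index

# FM: the audio shifts the carrier's phase (instantaneous frequency),
# so the transmitted amplitude never changes.
fm = [math.cos(2 * math.pi * F_CARRIER * i / RATE
               + beta * math.sin(2 * math.pi * F_AUDIO * i / RATE))
      for i in range(N)]

# AM: the audio scales the carrier's amplitude directly.
am = [(1 + 0.5 * math.sin(2 * math.pi * F_AUDIO * i / RATE))
      * math.cos(2 * math.pi * F_CARRIER * i / RATE)
      for i in range(N)]

fm_peak = max(abs(v) for v in fm)  # constant envelope: peak stays at 1.0
am_peak = max(abs(v) for v in am)  # envelope follows the audio: peak near 1.5
```

Because the FM envelope carries no information, a receiver can hard-limit (clip) the signal before demodulation, discarding the amplitude spikes that lightning and electrical machinery superimpose; that is the mechanism behind FM's resistance to static.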

He tested the system at very high frequencies (VHF) and launched the experimental station W2XMN in Alpine, NJ.

Assigned 42.8 MHz (megahertz) in 1938, the station began regular broadcasts in 1939, demonstrating that FM could deliver high-fidelity sound far superior to AM.

Minnesota engineers soon joined this high-frequency frontier.

In 1939, Minneapolis station W9XHW operated at 42.30 MHz with 50 watts from the Nicollet Hotel.

W9XHW was an experimental “Apex” station using amplitude modulation (AM) on very high radio frequencies above the standard AM band to provide wide audio bandwidth and higher‑fidelity sound.

Because it still relied on AM, it remained vulnerable to lightning and man‑made static, letting engineers see how Armstrong’s FM system improved on earlier high‑frequency AM.

Some Twin Cities broadcasters were also testing Apex signals.

KSTP’s high-frequency station W9XUP began “ultra-short-wave” Apex tests in 1938 at 25.95 MHz, then shifted to 26.15 MHz in 1939.

WTCN’s W9XTC, active in 1939, operated as an experimental FM transmitter on 26.05 MHz.

The Apex era of experimentation effectively ended when the Federal Communications Commission (FCC) eliminated the Apex band Jan. 1, 1941, reallocating its spectrum to the new FM broadcast band.

The FCC then used those former Apex channels to launch the first commercial FM band from 42 to 50 MHz, replacing high-frequency AM with Armstrong’s new static-resistant system.

In March 1941, Nashville’s W47NV went on the air at 44.7 MHz as the first licensed commercial FM station in the United States.

In June 1945, the FCC moved FM broadcasting from 42 to 50 MHz up to 88 to 106 MHz.

The band was expanded again in 1946 to 88 to 108 MHz.

The change disrupted many early FM stations, including Armstrong’s, but the core technology adjusted to the new allocation and eventually became the standard for high-quality broadcasting.

FM radio faced strong opposition from the Radio Corporation of America (RCA) and the AM radio establishment, which saw it as a threat to their existing investments.

In 1940, RCA offered Armstrong $1 million for a nonexclusive license to his FM patents with no royalties.
He refused, insisting that RCA pay royalties as other licensees did.

Years of lawsuits followed, with RCA President David Sarnoff applying financial and legal pressure while Armstrong fought to defend his work and reputation.

The struggle took a heavy toll on Armstrong, and by the early 1950s, he was financially strained and physically exhausted.

Edwin Howard Armstrong, at the age of 63, took his own life in New York City Feb. 1, 1954.

His wife, Marion, carried on the litigation, securing several important settlements, including a significant deal with RCA.

In 1983, the US Postal Service paid tribute to Edwin Howard Armstrong with a 20-cent commemorative stamp featuring his portrait alongside an illustration of his frequency modulation invention.