
Thursday, April 2, 2026

Automation and AI maintain circuit board production

@Mark Ollig

Your vehicles, smartphones, computers, and other electronic devices depend on circuit boards filled with hundreds or even thousands of tiny electronic parts.

These parts include resistors, capacitors, diodes, transistors, integrated circuits, connectors, inductors, relays, and switches.

They also include sensors, voltage regulators, crystals, oscillators, logic gates, and memory chips.

There are more, but I think you get the idea.

Picking, placing, and monitoring those parts during production demands considerable effort that most people never see.

Every completed circuit board relies on a precise system that ensures each tiny part is placed in the exact location and verified for quality.

This week’s column centers on my son Daniel, whose full-time production facility in Minnesota manufactures industrial electronic modules.

The company is family-owned, and Daniel works side by side with his son, my grandson.

Recently, they installed an SMT HW-T8-72/80F automated pick-and-place machine for prototyping and assembling printed circuit boards, or PCBs.

Much of their work happens behind the scenes, but the products they build help keep industrial equipment, vehicles, and control systems running across the country.

The SMT HW-T8-72/80F pick-and-place machine is manufactured in China by Beijing Huawei Silkroad Electronic Technology Co. Ltd., which exports its surface-mount equipment worldwide, including to customers in the United States.

The machine measures 4 feet, 6 inches by 4 feet, 8 inches by 4 feet, 7 inches and weighs about 1,100 pounds.

It operates on 220-volt alternating current and requires compressed air to power its pneumatic vacuum and motion systems.

I watched the SMT HW-T8-72/80F in operation in a manufacturer’s demonstration video.

About the size of two vending machines, this pick-and-place system features an operator screen, rows of component tape feeders, and a fast-moving placement head that works over a circuit board.

At the center of the machine is an eight-head placement system mounted on a horizontal gantry.

The circuit board moves into the work area on a conveyor.

Once inside, cameras locate reference marks on the circuit board so the machine knows its exact position.

The placement program then directs the moving heads to put each part at the correct X-Y location.

Several pickup heads work in rapid sequence as the machine gathers tiny electronic components from reels of carrier tape and places them onto the circuit board.

Feeders advance the carrier tape in small steps so each part is presented in the correct pickup position.

The machine uses vacuum nozzles to lift the parts, while its camera system checks their position and orientation before placing them on the board.

The entire process is controlled through an operator panel with a computer monitor, where jobs are loaded, feeder positions are assigned, and machine operation is monitored.

Safety panels enclose the work area while still allowing the operator to watch component placement on the circuit board through a viewing window during production.

The SMT HW-T8-72/80F can hold up to 80 feeders and is designed to handle a wide range of small electronic parts used on today’s circuit boards.

It works with standard digital files from design software, such as bills of materials and centroid files, which provide the X-Y coordinates and rotation data needed to place parts on a circuit board.

It also uses PCB layout data derived from Gerber files.

The files are named for the Gerber Scientific Instrument Co., which developed the format to describe a circuit board’s physical layout, including copper traces, connection pads, solder mask, and printed markings used in manufacturing.

These files help tell the machine where components go and how the job should be set up.
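For readers curious what that placement data looks like, here is a small sketch of my own, not the machine’s actual software: a centroid file is often just a comma-separated list of reference designators, X-Y coordinates, and rotation angles, and a short Python script can step through it much the way a placement program does. The column names and parts below are made up for the example.

```python
import csv

# Hypothetical centroid (pick-and-place) data; real column names vary by CAD tool.
SAMPLE = """RefDes,Value,Side,X_mm,Y_mm,Rotation
C1,100nF,Top,12.70,8.25,90
R5,10k,Top,14.60,8.25,0
U3,EXAMPLE-IC,Top,25.40,15.00,270
"""

def load_placements(text):
    """Return the list of placement records a machine would step through."""
    return [
        {
            "refdes": row["RefDes"],
            "value": row["Value"],
            "x_mm": float(row["X_mm"]),
            "y_mm": float(row["Y_mm"]),
            "rotation_deg": float(row["Rotation"]),
        }
        for row in csv.DictReader(text.splitlines())
    ]

for part in load_placements(SAMPLE):
    print(f"{part['refdes']} ({part['value']}): place at "
          f"({part['x_mm']}, {part['y_mm']}) mm, rotated {part['rotation_deg']} degrees")
```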

The process begins after the solder paste has been applied to the circuit board.

The machine places the component parts onto the pasted surface before the assembly moves to reflow, where heat melts the solder and forms permanent electrical connections.

After that, it is inspected, often by automated optical inspection, or AOI, and any needed touch-ups or rework are completed before final testing.

The main advantage of the SMT HW-T8-72/80F pick-and-place machine is its ability to place parts consistently, with speed, precision, and reliability.

In practice, success relies not only on the machine but also on careful setup.

Feeder positions must be planned, reels kept readily accessible, nozzle types matched to the corresponding parts, and circuit board alignment verified before full production begins.

These steps help prevent defects such as tombstoning, where one end of a component lifts off the board; skew, where a part is crooked; and solder bridging, where solder connects two points that should remain separate.

A slow first-article run ensures proper part orientation and placement before full production, after which production speed increases.

The SMT HW-T8-72/80F can place nearly 11,000 components per hour under ideal conditions, but actual speeds are usually lower because of part size and board complexity.

Its positioning accuracy is rated at plus or minus 0.0004 inch, and it can handle parts as small as 0201 components, roughly 0.008 by 0.004 inch, as well as larger parts used in industrial electronics.
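To put those ratings in perspective, a little back-of-the-envelope arithmetic of my own converts them into per-part timing and metric units; the 600-part board is a hypothetical example, not one of Daniel’s products.

```python
# Rough arithmetic based on the published ratings quoted above (ideal conditions).
rated_cph = 11_000                      # components per hour
seconds_per_part = 3600 / rated_cph     # about a third of a second per placement

accuracy_in = 0.0004                    # plus or minus, in inches
accuracy_mm = accuracy_in * 25.4        # about 0.01 mm

board_part_count = 600                  # hypothetical board for illustration
minutes_per_board = board_part_count / rated_cph * 60

print(f"About {seconds_per_part * 1000:.0f} milliseconds per placement")
print(f"Rated accuracy: +/- {accuracy_mm:.3f} mm")
print(f"A {board_part_count}-part board: roughly {minutes_per_board:.1f} minutes at the ideal rate")
```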

To guide component placement, the machine uses an eight-camera vision system to locate the board’s reference points, while vacuum nozzles pick up and place each component.

With the rise of automated board assembly, traceability became crucial because speed is only valuable if the system can track the parts used and their correct placement.

Daniel developed and implemented two in-house artificial intelligence software systems that run entirely on local hardware, with no cloud connectivity.

The first is an automated optical inspection system, or AOI, that uses a high-resolution camera to capture images of assembled circuit boards and compare them with known-good references.

Its software then uses a neural network and other AI tools to identify missing parts, misaligned components, and solder defects.

Circuit boards flagged by the AI system are sent to a human operator for review and confirmation.
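Daniel’s inspection software is proprietary and far more capable, but the underlying compare-with-a-known-good-image idea can be sketched with a much simpler pixel-difference check than the neural network his system uses. The file names and thresholds below are illustrative assumptions only.

```python
import numpy as np
from PIL import Image  # requires the Pillow package

DIFF_THRESHOLD = 0.08  # hypothetical: fraction of differing pixels that triggers review

def load_gray(path):
    """Load an image as a normalized grayscale array."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

def inspect(board_image_path, golden_image_path):
    """Flag a board if it differs too much from the known-good reference image."""
    board = load_gray(board_image_path)
    golden = load_gray(golden_image_path)
    if board.shape != golden.shape:
        raise ValueError("Images must be captured at the same resolution")
    # Fraction of pixels whose brightness differs noticeably from the reference.
    diff_fraction = float(np.mean(np.abs(board - golden) > 0.25))
    return "flag for operator review" if diff_fraction > DIFF_THRESHOLD else "pass"

# Example with hypothetical file names:
# print(inspect("board_0042.png", "golden_reference.png"))
```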

The second AI system manages inventory and production.

It scans components as they arrive, capturing part numbers, quantities, lot codes, date codes, and other details from standardized 2D barcodes to reduce manual-entry errors and improve traceability.

Each reel is tracked individually, including partial reels, so the system always knows exactly what is available for production.

When a component barcode is damaged, AI-assisted fallback tools can restore the missing data, maintaining full traceability of each component from receipt through storage to production.

Additionally, these AI tools verify the availability of required parts before a job begins and identify potential shortages to prevent production delays.
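Again, the in-house software is not public, but the general flow (parse the fields out of a reel’s barcode, then compare on-hand quantities against a job’s parts list) can be sketched in a few lines. The field prefixes, part numbers, and quantities below are made up for the illustration; real reel labels vary by supplier.

```python
# Hypothetical reel-label payload split into fields; prefixes such as "P" (part
# number), "Q" (quantity), "1T" (lot code), and "9D" (date code) follow common
# industry labeling conventions, but real labels differ between suppliers.
RAW_LABEL_FIELDS = ["P:RC0402FR-0710KL", "Q:5000", "1T:LOT8841", "9D:2542"]

def parse_reel_label(fields):
    """Turn prefixed barcode fields into a reel record."""
    prefix_map = {"P": "part_number", "Q": "quantity", "1T": "lot", "9D": "date_code"}
    reel = {}
    for field in fields:
        prefix, _, value = field.partition(":")
        key = prefix_map.get(prefix)
        if key:
            reel[key] = int(value) if key == "quantity" else value
    return reel

def check_shortages(job_requirements, inventory):
    """Report parts whose on-hand quantity cannot cover the job."""
    return {part: need - inventory.get(part, 0)
            for part, need in job_requirements.items()
            if inventory.get(part, 0) < need}

reel = parse_reel_label(RAW_LABEL_FIELDS)
inventory = {reel["part_number"]: reel["quantity"]}
job = {"RC0402FR-0710KL": 1200, "GRM155R71C104KA88D": 800}  # hypothetical job
print("Shortages:", check_shortages(job, inventory))
```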

Both AI systems operate fully on-site, so there are no cloud data transfers or subscription fees.

These systems ensure that every completed circuit board and module meets stringent quality and reliability standards.

The pick-and-place machine assembles the boards, while Daniel’s production team uses in-house AI and separate computing systems to monitor inventory, maintain detailed records, and oversee production operations.

Behind much of today’s advanced technology are skilled people like my son and grandson, who help produce the customized circuit boards, electronic control modules, and other electronic assemblies industry relies on every day.


Thursday, March 26, 2026

From BBS chat to today’s instant messaging

@Mark Ollig

Before the World Wide Web became part of everyday life, many computer enthusiasts, including this humble columnist, operated a computer bulletin board system, or BBS.

A BBS uses a software program running on a computer connected to one or more dial-up modems plugged into telephone lines.

Users called the BBS’s telephone number using a communications program such as ProComm on their modem-equipped computers, then logged in over a dial-up telephone connection.

Once connected, they could read messages, exchange files and emails, or engage in real-time chat with other users on the BBS.

In 1992, I launched my own hobbyist bulletin board system, WBBS Online, using The Major BBS, a popular “gold standard” BBS software platform developed by Galacticomm.

WBBS stood for Winsted Bulletin Board System.

An electronic handshake between the caller’s modem and the BBS modem produced audible squeals, screeches, and tones as the two negotiated a data rate, sometimes reaching a cutting-edge 19.2 kbps.

Back then, hitting 19.2 kbps felt like breaking the sound barrier compared with the standard 2,400 and 9,600 bps modems most callers used.

Regular BBS users logged in every day, shared local news, traded software, discussed hobbies, and sometimes texted late into the night via real-time chat.

WBBS users in Winsted and Lester Prairie could dial in without long-distance charges because the two towns had toll-free calling between them.

On busy evenings, hearing the modem answer meant another local caller had connected and joined the conversation.

The technology has changed, but the camaraderie of participating in a virtual community has not.

Today, I regularly text chat with my kids (they are now middle-aged adults) and my siblings (we are forever young) through a Google Messages group instant-messaging (IM) thread on our smartphones.

Instant messages arrive seamlessly over wireless connections, with no dial-up modems, no phone lines, no busy signals, and no waiting.

IM is similar in spirit to the BBS message boards and real-time chat we used over dial-up telephone lines.

AOL Instant Messenger (AIM) came out in 1997 and introduced millions of people to real-time text chatting over the internet.

BlackBerry Messenger, known as BBM, launched in 2005 exclusively for BlackBerry cellphones, letting users chat in real time instead of sending standard text messages.

BBM led people to buy BlackBerry phones just to access the service; it was eventually opened to the iOS and Android platforms in 2013.

By then, WhatsApp and iMessage had taken over the market, and BBM shut down May 31, 2019.

These instant-messaging services felt natural to early users because features such as contact lists, screen names, and visible online status were already familiar concepts.

They first experienced them in a slower form on dial-up bulletin board systems, and later saw them arrive in real time on computers and cellphones.

Computer users accessing WBBS would check in regularly, respond to posts, and join chats with others logged in at the same time.

They shared typed messages and simple images called ASCII art, which used letters, numbers and keyboard symbols to form pictures.

Photos were downloaded as image files, often in “.jpg” format, and could take several minutes to fully appear on screen.

Many of us might remember watching a photo load slowly, line by line, from top to bottom, as the dial-up connection transferred the data.
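For a rough sense of why those photos crawled onto the screen, here is a small back-of-the-envelope calculation of my own, with an assumed file size and no allowance for protocol overhead or line noise.

```python
# Rough transfer-time arithmetic for a dial-up connection (an illustration;
# real sessions were slower because of protocol overhead, retries, and noise).
def transfer_seconds(file_kilobytes, line_bits_per_second):
    bits = file_kilobytes * 1024 * 8
    return bits / line_bits_per_second

photo_kb = 80  # assumed size of a modest photo file of the era
for rate in (2400, 9600, 19200):
    minutes = transfer_seconds(photo_kb, rate) / 60
    print(f"{photo_kb} KB photo at {rate} bps: about {minutes:.1f} minutes")
```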

Today’s instant-messaging group chats with friends and family can fill quickly with updates, videos, and photos.

Apps have replaced BBS dial-up software, and fiber-optic broadband and cellular data networks have replaced the slow modem connections that once tied up a household’s only phone line.

Yet the core behavior remains: people still gather in online digital spaces to share messages and stay connected.

For those who were around back then, a BBS was likely the first taste of online communication.

That early experience paved the way for social media and messaging platforms.

I can still clearly remember the sound of a modem handshaking late at night, letting me know another computer user was connecting to WBBS Online.

It meant someone else was out there, sitting at their keyboard and joining our small virtual community.

Today’s instant messaging is simply a version of the local virtual communities we started decades ago in dial-up BBS chat rooms.


Thursday, March 19, 2026

The evolution of Google’s ‘Nano Banana’

@Mark Ollig

Nano Banana 2 became Google’s latest artificial intelligence (AI) image creation and editing tool Feb. 26 of this year, powered by the Gemini system.

Nano Banana 2 allows users to create visuals quickly using Gemini 3.1 Flash Image technology.

The system quickly became available across Google services, allowing users to generate high-quality images in seconds.

The Gemini app and Google Messages now provide access to the technology. Developers and businesses can explore the system through Google AI Studio and Vertex AI.

Since its release, Nano Banana 2 has spread quickly across Google’s ecosystem, producing high-quality images through a wide range of applications.

The name “Nano Banana” has an unusual origin story, one that stands out in the world of technology branding.

Naina Raisinghani, a Google DeepMind product manager, created the placeholder name by combining her nicknames “Nano” and “Banana.”

She submitted the model anonymously to LM Arena (Large Model Arena), a benchmarking site to evaluate artificial intelligence systems.

When the model quickly climbed to the top of the leaderboard, the name went viral.

Google ultimately decided to keep the unusual name and adopted the banana emoji [🍌] as the feature’s signature icon.

Despite the playful branding, Nano Banana 2 represents a serious effort to make advanced digital creation tools widely available to everyone.

Google’s goal is straightforward: make AI a practical, creative partner capable of producing clear, lifelike images while maintaining visual consistency across characters and objects.

You provide the idea, and the AI handles the technical execution.

Today’s accompanying illustration provides a roadmap through the complex inner workings of the Gemini 3.1 Flash Image system, breaking down the advanced technology into four digestible stages:

Zone one: human intention – This is the starting point where you provide the “vision” for your project, using either a voice command or a typed request to describe the image you want to create.

Zone two: the core processing engine – Often called the “engine room,” this is where the Gemini 3.1 Flash technology analyzes your instructions and builds the high-definition image.

Zone three: content integrity and provenance – In this stage, the system verifies real-world details through search grounding and embeds a SynthID digital watermark to identify the image as AI-generated.

Zone four: output and visualization – The final result is a polished 4K graphic that is automatically synced to your devices and professional tools like Google Workspace.

To bring these visions to life, Google uses its Veo video model to animate the high-resolution images generated by Nano Banana 2, turning a static 4K design into a cinematic sequence with realistic motion and lighting.

Unlike earlier Gemini tools that required long keyword lists, Nano Banana 2 allows users to describe their request in natural language.

Instead of writing complex prompts, users simply describe a scene to Nano Banana 2 as they might to a human designer.

The system interprets the request, generates a draft image, and allows the user to refine details through a conversational process.

For example, a user designing a new kitchen might say, “Change the style to mid-century modern,” and the system updates only those elements while preserving the rest of the composition.

A major challenge with earlier AI image tools involved visual continuity.

Characters or objects would often change appearance between frames, something I experienced firsthand, leading to brief episodes of frustration.

But I digress.

Nano Banana 2 addresses this issue by tracking up to five characters and 14 individual objects across a sequence of images.

This capability makes the system useful for storyboards, advertising campaigns, and illustrated narratives that require consistent character identity.

Another update addresses the ongoing issue with AI-generated text.

Nano Banana 2 treats typography as structured data rather than just visual pixels. This lets the system create clear, readable text in many languages.

Users can place phrases in quotation marks so posters, diagrams, or signs are generated with correct spelling.

One of the major advances in the 2026 release is multimodal operation, meaning the system can interpret both images and text within the same reasoning framework.

This allows more realistic image-to-image editing.

For example, a user can upload a photograph of a kitchen and instruct the system to “render this in a midcentury modern style while keeping the cabinet layout unchanged.”

The model adjusts the visual style while preserving the room’s physical structure.

Nano Banana is also compatible with Google Lens. Users can tap the Nano Banana button to view and interpret their environment using a smartphone camera.

The result functions as a mobile design assistant capable of visualizing renovations, décor changes, or clothing variations in augmented reality (AR).

Another important step forward involves resolution.

Nano Banana 2 now supports 4K image output (ultra-high definition, roughly 4,000 pixels across), allowing concept images generated in seconds to become print-quality graphics.

Google has integrated these capabilities into Google Workspace and Google Ads.

This allows marketing teams to maintain brand consistency across large collections of AI-generated images.

The approach helps Google compete with other generative-image platforms such as Adobe Firefly and Midjourney.

To improve factual accuracy, Nano Banana 2 also incorporates search grounding.

When users request a specific landmark or brand, the system consults Google Search to ensure the rendering reflects real-world information.

The process for how an idea becomes a finished digital image begins with a user request, which may involve uploading a photograph or entering a written description.

That request is sent to the Gemini 3.1 processing system, where the model analyzes the instructions and constructs the image.

During generation, the system may query Google Search to verify the visual accuracy of real-world objects.

Before delivery, the finished image receives a SynthID watermark.

SynthID, created by Google DeepMind, is a hidden digital watermark that embeds imperceptible signals into images, enabling systems to detect AI-generated content.

Nano Banana 2 also supports C2PA (Coalition for Content Provenance and Authenticity) credentials.

C2PA provides a verifiable digital record showing how an image was created without altering its visible appearance.

The final result is a high-resolution image ready for tablet, smartphone, or workstation displays, as well as for advertising presentations or printed media.

Nano Banana 2 streamlines connection to creative tools, letting users focus on their artistic vision. An innovative idea quickly turns into a finished design.

I have used Nano Banana 2 to create illustrations for my columns and have been pleasantly surprised by its capabilities.

Visit https://gemini.google/overview/image-generation/ to learn more and to try Nano Banana.







Thursday, March 12, 2026

Satellite ‘cellular towers’ in Earth orbit

@Mark Ollig

Cellular networks have relied on ground-based cell towers, but SpaceX’s Starlink Direct to Cell now brings this model to space.

This service works with regular smartphones, not just satellite phones.

Future versions are expected to move beyond Long Term Evolution (LTE) as the Third Generation Partnership Project (3GPP) advances non-terrestrial network standards.

This international standards body is developing specifications for 5G and eventually future 6G concepts.

Today’s model supports text messaging and limited data through certain apps, with broader capabilities planned as the network evolves.

Our smartphones already rely on the Global Positioning System (GPS) for navigation signals from satellites orbiting about 12,550 miles above Earth.

However, GPS is a one-way system, and smartphones do not transmit anything back to those satellites.

Alongside SpaceX, companies like AST SpaceMobile and Lynk Global are working on satellite systems that connect directly to regular smartphones.

AST SpaceMobile claims its BlueBird satellites can provide broadband directly to regular phones, while Lynk offers a similar satellite-to-standard-mobile-phone service.

Major American companies like SpaceX and AT&T are also driving advances in satellite connectivity and telecommunications services for consumers.

Mobile carriers worldwide are exploring similar partnerships.

Orange is partnering with AST SpaceMobile in Europe and Africa, while Deutsche Telekom collaborates with Starlink in Europe.

These moves suggest that “cell towers in space” could soon become a normal layer of the global wireless network.

By adding satellites to the Radio Access Network (RAN), carriers can bring mobile coverage to remote areas using non-terrestrial networks.

Compatible smartphones can stay connected even when far from regular cell towers because the signal comes from space.

Satellite cellular lets users text, call, and access data, though some functions remain limited or are still being deployed by providers.

Besides smartphones, these systems can connect vehicles, farm equipment, security sensors, and other remote devices.

When a Starlink satellite receives data from a smartphone, it sends the signal back to Earth through a ground gateway station connected to the telecommunications network.

The satellite uses a high-capacity radio link to reach the gateway.

From there, the data travels through land-based fiber networks or through a carrier’s core network, depending on the service path.

Starlink works with carriers like T-Mobile to bring mobile service to areas where regular cell towers cannot reach.

When a smartphone connects through the satellite system, it still uses the carrier’s licensed cellular spectrum.

The satellite functions as part of the radio access network while the carrier’s terrestrial network handles authentication, mobility management, emergency communications, and billing.

Many people assume a satellite-to-phone system simply passes signals between a smartphone and the ground.

In reality, the satellite acts much like a cellular tower in Earth orbit.

Instead of connecting to a tower along the highway, the smartphone connects to a satellite that provides the radio link.

The signal then passes through ground gateways into the carrier network and the wider internet.

From there, data moves through the carrier’s core network and major telecommunications and internet interconnection hubs, where networks exchange traffic.

The return signal follows the same path in reverse.

It travels from an internet server through fiber networks and carrier core systems to the gateway, then to the satellite, and finally back to the smartphone.

Since these satellites operate in Low Earth Orbit (LEO), signal delay, or latency, can fall into the tens of milliseconds under favorable conditions.

This is similar to many land-based broadband connections.
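A quick back-of-the-envelope calculation of my own, assuming a satellite altitude of about 340 miles and ignoring processing and queuing time, shows why the physics permits such low delay.

```python
# Minimum radio propagation delay for a low-Earth-orbit relay (an illustration;
# ignores processing, queuing, and the longer slant paths when the satellite
# is not directly overhead).
SPEED_OF_LIGHT_MI_PER_S = 186_282
orbit_altitude_miles = 340  # assumed altitude, roughly 550 km

one_hop_ms = orbit_altitude_miles / SPEED_OF_LIGHT_MI_PER_S * 1000
phone_to_gateway_ms = 2 * one_hop_ms       # phone -> satellite -> ground gateway
round_trip_ms = 2 * phone_to_gateway_ms    # and back again

print(f"Single hop to the satellite: about {one_hop_ms:.1f} ms")
print(f"Phone to gateway: about {phone_to_gateway_ms:.1f} ms")
print(f"Minimum round trip: about {round_trip_ms:.1f} ms")
```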

Future upgrades are expected to depend on larger satellites equipped with advanced antennas that steer radio beams toward smartphones and other connected devices on Earth.

SpaceX says the next generation of Starlink satellites is intended to dramatically increase network capacity.

This could support applications such as video streaming, cloud services, and other broadband uses.

To launch these larger spacecraft, SpaceX plans to use its Starship launch system powered by the Super Heavy rocket booster to expand the Starlink network.

Today’s column features a diagram I researched and created using Microsoft software.

Images were generated by ChatGPT 5.2 using an uploaded photo of me as a reference, along with help from Google’s Gemini AI and Perplexity AI.

At TDS Telecom, I made engineering diagrams in Microsoft Visio.

But I digress.

Today’s diagram shows how a smartphone can reach the global internet through satellite and terrestrial telecommunications infrastructure.

At the top of the image, a constellation of satellites circles Earth in Low Earth orbit.

A satellite connects to a Starlink Gateway Earth Station through feeder links operating in assigned portions of the Ka-band microwave spectrum.

These links use uplink frequencies near 27.5 to 30 gigahertz and downlink frequencies near 17.8 to 20.2 gigahertz.

Equipped with large parabolic antennas, the gateway passes the signal into the terrestrial telecommunications network.

After reaching the gateway, the signal enters land-based fiber-optic networks and travels across the country and around the world through major telecommunications and internet interconnection hubs.

The diagram also shows a second communication path where a satellite connects directly with a smartphone using licensed Long Term Evolution (LTE) cellular frequencies.

In this role, the satellite functions as a space-based cellular tower.

It links the smartphone to the carrier’s packet core network, where authentication, mobility management, and routing occur before traffic reaches the broader internet.

Using columnist prerogative, I placed myself in the right foreground of the diagram, holding a smartphone to represent the network’s end user.

The image is intended to show that this satellite system will complement, rather than replace, terrestrial networks by connecting smartphones to satellites and then into global fiber and carrier infrastructure.

Details about Starlink’s satellite-to-cell service are available on the Starlink website.

As of early this year, T-Mobile’s T-Satellite service supports texting, location sharing, text-to-911, and limited satellite data with certain apps on compatible smartphones.

Additional satellite-to-smartphone capabilities continue to roll out.

The system is designed to work with many existing LTE smartphones through software updates, although compatibility depends on the wireless carrier and smartphone model.

For more than 40 years, cellular service has relied on ground-based towers.

Before long, the cell tower our smartphone conversations and internet connections travel through will be a satellite in Earth orbit passing over Minnesota.


Illustration depicting how a smartphone connects to telecommunications and internet networks via a Starlink satellite, a gateway earth station, and a global fiber-optic network. The diagram is by Mark Ollig, with images created using ChatGPT 5.2, Google Gemini AI, and Perplexity AI.




Wednesday, March 4, 2026

From the Fourth Estate to the AI-driven Sixth Estate

@Mark Ollig

During the Middle Ages in Europe, society was divided into three main groups, called estates.

The First Estate consisted of the clergy, who were important for teaching morals and providing education.

The Second Estate included the nobles, who owned land and held military power.

The Third Estate consisted of everyone else, including merchants, tradespeople, and laborers, who made up most of the population but had limited influence.

By the 18th century, a new force, the Fourth Estate, emerged in the form of newspapers.

Their content spread across Europe and the American colonies through the printing press, shaping public opinion, challenging officials and at times influencing government policy.

For much of the 20th century, the Fourth Estate, made up of newspapers, radio, and television, reported the news and helped shape the social agenda.

Behind the scenes, wire services used the telegraph and later the telephone to move news quickly, accelerating reporting and expanding journalism’s reach.

These outlets largely determined what the public saw, heard, and understood about the world.

Editors in print newsrooms, along with producers and anchors in radio and television, served as content gatekeepers.

They chose which stories made it into the morning paper or led the evening broadcast and controlled when audiences would see or hear them.

News organizations distributed their work through printing presses and over-the-air broadcast stations, keeping people informed about events in their communities, their state, the nation, and the world.

The Fourth Estate began to change as news and public information began moving to digital platforms.

One of the earliest examples of those platforms was the public dial-up Computerized Bulletin Board System, or CBBS, launched by Ward Christensen and Randy Suess in Chicago during the Great Blizzard of 1978.

A January blizzard made it impossible for their local computer club members to meet, so the two created a virtual meeting solution.

CBBS went live Feb. 16, using a single S-100 bus microcomputer and a 300-baud modem wired into a residential phone line.

The system allowed computer club members to leave messages for one another and upload files that others could download and comment on.

These ongoing exchanges became the model for future hobbyist bulletin board systems, or BBSs, run by system operators known as SysOps.

A BBS enabled users to post messages, share files, chat in real time, and create online communities.

By the late 1970s and 1980s, home computer users were dialing into large online server networks that offered many of the same features as local BBSs and, in some cases, additional services.

In 1979, CompuServe launched its dial-up service, offering email, forums, file libraries, news, and real-time chat.

Prodigy, founded Feb. 13, 1984, began as Trintex, a joint venture of CBS, IBM, and Sears that offered online news, shopping, and banking.

In 1985, GEnie and Quantum Link followed. GEnie was operated by General Electric Information Services in Maryland, and Quantum Link, based in Vienna, VA, was designed specifically for Commodore 64 and 128 computer users.

Unlike local free-to-access BBSs, these large commercial services operated from centralized computer host systems that supported thousands of dial-up users.

In 1992, I launched a local bulletin board system called WBBS Online from my home computer, using Major BBS software and modems connected to four telephone lines.

WBBS offered discussion forums, real-time chat rooms, private email, file libraries, and games.

In 1993, I expanded to six modem lines and switched to Galacticomm World software, which added a graphical, point-and-click interface.

BBSs were the early stage of what would come to be called the digital Fifth Estate. They enabled individuals to publish, respond, and debate directly in shared online venues.

BBS popularity lasted into the mid-1990s as users began moving to internet-only access and using web browsers to reach the World Wide Web.

Later, large mainstream social networking platforms emerged, including MySpace in 2003, Facebook in 2004, Twitter in 2006 (later renamed X in 2023), and Bluesky, which opened to the public in February 2024.

However, the rise of artificial intelligence-generated content and algorithmic coordination is now altering this decentralized power structure.

These changes are creating a new Sixth Estate in which computer algorithms generate, rank, and distribute information at an unprecedented scale.

AI-driven systems now serve as primary gatekeepers, determining what information is collected, prioritized, and delivered to users.

This raises concerns that people may accept AI output as fact, even when the content is presented without clear citations or sources.

Florence, a former colleague from the Winsted Telephone Company, once told me, “Mark, always consider the source.”

The Sixth Estate uses artificial intelligence and algorithms to create, collect, organize, and quickly share information.

Algorithms can spread false information as easily as they can spread facts, often without clear sources or accountability.

Platforms like TikTok, X, Facebook, Instagram, YouTube, Reddit, and LinkedIn use AI-powered recommendation systems to decide what content each user sees.

These systems analyze engagement signals such as clicks, likes, watch time, follows, shares, and comments to predict what users are most likely to view.

As a result, automated algorithms, not human editors, now handle most of the distribution of information.

AI models such as ChatGPT by OpenAI, Gemini by Google, and Claude 4.6 by Anthropic can draft and refine text, summarize information, and translate languages.

Some of these models also can generate images or create video, and some can interpret images.

They also can answer questions, explain complex topics in plain language, analyze information, draft outlines and lesson plans, brainstorm ideas and create step-by-step instructions.

AI assistants such as Microsoft Copilot and Perplexity use underlying AI models to provide conversational and search-based responses to queries.

Image-generation tools such as Midjourney, DALL-E, and Google’s Gemini image tools can create images from text prompts.

Video-generation platforms such as Runway, Pika, Synthesia, and OpenAI’s Sora can generate and edit AI-created video clips.

AI audio tools such as ElevenLabs, Descript’s Overdub, and Amazon Polly can generate realistic synthetic speech for narration, voice-overs, and dubbing.

Sixth Estate systems will blur the line between human and AI-generated content; human oversight is essential to maintain safeguards that protect accuracy, source attribution, and accountability.

Florence’s sage advice to “always consider the source” is more relevant today than ever.




Friday, February 27, 2026

She helped to bring astronauts home safely

@Mark Ollig

I recently read the Nov. 14, 2018, NASA Johnson Space Center oral history interview of Frances Marian “Poppy” Northcutt by Jennifer Ross-Nazzal.

The interview provides a detailed account of her work on the Apollo program and her experience as the first woman to serve in an engineering role in NASA’s Mission Control.

Northcutt, born Aug. 10, 1943, shares that her older brother gave her the nickname “Poppy” from a favorite Little Golden Book fairy tale he read.

Northcutt studied mathematics at the University of Texas and later earned a law degree from the University of Houston Law Center.

A job referral led her to TRW Systems, a NASA contractor in Houston, TX, where she started as a “computress,” handling data analysis and programming.

Northcutt noted similarities between her experience and that of the women mathematicians depicted in the 2016 book “Hidden Figures” and the film of the same name, which told the story of NASA’s early human computers.

Despite feeling intimidated by colleagues with advanced degrees from elite schools, she quickly proved her abilities and became a valuable team member within months.

Northcutt was part of the lunar return-to-Earth trajectory program, initially called the “abort program.”

Its main challenge was the three-body problem involving Earth, the moon, and the spacecraft.

Unlike returns from Earth orbit, coming home from the moon meant linking several curved trajectory segments, switching between Earth’s and the moon’s gravity, and relying on powerful computers to map the way back.

She noted that lunar return calculations could not be done with slide rules alone and required repeated mainframe computer processing power.

Northcutt worked on reverse-engineering the trajectory software to understand it fully.

Her team, usually consisting of three to eight members, focused on return-to-Earth analysis and developed a flexible program for various mission conditions, which replaced a competing trajectory program.

Northcutt’s mastery of the code set her apart and helped her advance quickly, as she supported the retrofire officers in the staff support room (SSR-1) during the Apollo 8 mission around the moon.

Apollo 8’s most dramatic moment came when the spacecraft passed behind the moon and communications were lost during the lunar orbit insertion burn.

The wait for the signal was stressful, as it determined whether the burn was successful or if the crew was on a crash course to the lunar surface.

Northcutt recalled the tense silence and countdown clock until the spacecraft finally responded, confirming it had successfully entered lunar orbit and traveled around the moon.

She remembered drawing widespread media attention as the first woman to serve in an engineering role inside NASA’s Mission Control during Apollo 8.

In the interview, Northcutt said the press treated her as a novelty and more as a spectacle than a professional.

She felt intense pressure to perform flawlessly, knowing any mistake could reinforce gender stereotypes.

Northcutt also faced systemic discrimination as an hourly employee, with wage‑hour rules limiting her pay even when she worked extra hours, until her supervisor fought to have her promoted from “computress” to “technical staff.”

While in Mission Control, she often felt under scrutiny, especially after she learned a hidden camera was broadcasting her image without her knowledge.

Northcutt had support from Mission Control officers like John Llewellyn, who valued her technical expertise.

She mentioned receiving fan letters and even marriage proposals from around the world, including notes addressed only to Poppy, Space Center, which still somehow found their way to her desk.

After the oxygen tank explosion aboard Apollo 13 April 13, 1970, Northcutt drew on her return‑to‑Earth trajectory work to help guide efforts to place the spacecraft back onto a free‑return path so it could loop around the moon, conserve as much fuel as possible, and slingshot home.

Mission Control adopted this strategy to conserve fuel and avoid the risks of a direct abort, which would have required untested maneuvers and much more fuel to be used with very little room for error.

She later noted that the most difficult work fell to the engineers struggling in real time to keep the environmental, life‑support, and power systems functioning.

Northcutt was part of the Apollo 13 Mission Operations Team that received the Presidential Medal of Freedom for developing the emergency procedures that helped bring the crew home after the oxygen tank explosion.

After Apollo, she spent a short period working on space shuttle development before moving to California, where she contributed to TRW’s antiballistic missile defense programs.

Northcutt stayed involved in the broader aerospace world connected to NASA into the early 1970s.

The Apollo program concluded with Apollo 17 in December 1972.

“I’m just full of pride, not about myself so much . . . It is about the whole achievement, that it’s a teamwork,” Northcutt said during an Oct. 30, 2024, KTRK-TV interview in Houston.

“I mean, there’s nothing, there’s no bigger team than that in terms of that kind of enterprise. So just a lot of pride about the accomplishments of that team in doing what President Kennedy challenged us to do. And then we actually did it,” she said.

Reflecting on her greatest accomplishment, Northcutt stated, “We never lost a customer. They all came home.”

Northcutt, now 82, is recognized in NASA’s oral history as the first woman to work as an engineer in Mission Control.

Her work played an important role in helping to bring astronauts home safely.

The full edited transcript of Northcutt’s interview is available on NASA’s website at this shortened link: https://www.nasa.gov/wp-content/uploads/2025/08/northcuttfm-11-14-18.pdf?emrc=8a7b05



Thursday, February 19, 2026

Astroflies: First living beings to reach space

@Mark Ollig

In August 1945, German-made V-2 rockets and their parts were brought to the White Sands Proving Ground (now the White Sands Missile Range) in New Mexico.

The V-2, or Vergeltungswaffe 2 (“Vengeance Weapon 2”), was the world’s first rocket-powered ballistic missile, standing 46 feet tall and weighing nearly 28,000 pounds when fully fueled.

Developed at Peenemünde in northeastern Germany by a team led by Wernher von Braun, it first flew successfully Oct. 3, 1942, and became operational in 1944.

Equipped with an internal guidance system, this supersonic weapon had a flight range of roughly 200 miles and carried an explosive warhead of about 2,200 pounds.

The V-2 plunged toward its target at around 3,400 mph.

For those curious, the V-1 (Vergeltungswaffe 1), aka “buzz bomb,” was the world’s first operational cruise missile.

Designed under engineer Robert Lusser at Fieseler, it entered service in June 1944, with a flight range of about 160 miles.

The V-1 carried a roughly 1,870-pound high-explosive warhead, and typically flew at close to 400 mph toward its target before diving in.

After WWII, the United States acquired V-2 rockets, parts, and technical documentation, shipping about 300 freight-car loads of components to the new White Sands Proving Ground in New Mexico.

There, under Project Hermes, Army teams and General Electric personnel inventoried, reworked, assembled, modified, and launched V-2s for military testing and high-altitude scientific research.

In early 1946, a plan developed by Harvard University and the US Naval Research Laboratory was selected to send life from Earth into space aboard a V-2 rocket.

The V-2 flight that carried the first living animals into space was V-2 No. 20, also known as the Blossom 1 mission.

It launched from White Sands Missile Range’s Launch Complex 33 Feb. 20, 1947, under US Army oversight.

Those first living animals from Earth to reach space and return alive were . . . drum roll . . . fruit flies (yes, I was surprised too).

Fruit flies, or Drosophila melanogaster, may seem like pesky insects, but they are highly valuable for scientific research.

Scientists chose them for early spaceflights because their genetics are well-mapped, including four pairs of chromosomes, which made it easier to spot radiation-related changes after recovery.

Their rapid life cycle allows researchers to study radiation effects in both the original spacefaring fruit flies and their offspring.

By the 1940s, fruit flies were already essential to genetics research, making them a practical choice for early biological experiments.

Scientists wanted to know whether living organisms could survive exposure to radiation at very high altitudes and the violent forces of a rocket launch before humans attempted space flight.

Along with fruit flies, the V-2 payload carried plant material like corn and other seeds to track visible genetic mutations in future generations.

It also included extra seeds so scientists could study whether radiation might impact the quality of future crops.

This allowed researchers to compare the effects on both animal and plant life during the same flight.

Fruit flies were placed in an ejectable metal canister built to protect them during the V-2 flight.

This payload canister kept the insects safe from the vacuum, extreme pressure shifts, and mechanical forces during ascent, the short time at peak altitude, and the descent.

The Blossom 1 V-2 rocket reached space Feb. 20, 1947, climbing to about 68 miles above Earth in roughly three minutes, 10 seconds.

That altitude placed the rocket roughly six miles above the commonly cited 62-mile Kármán line.

Near its peak altitude, the rocket ejected the recoverable payload canister carrying the fruit flies, which I’ve nicknamed “astroflies.”

As the payload canister began its descent, a small ribbon parachute deployed first to absorb the initial deceleration and aerodynamic shock and to stabilize it in the thin upper atmosphere.

A larger parachute then opened at about 30 miles for the remainder of the descent.

Army documentation from White Sands states that the payload canister “descended for 50 minutes and, with the aid of radar, was recovered immediately.”

Using a two-stage parachute system, it drifted slowly through the thin upper atmosphere before continuing its gradual descent as the air grew thicker, making the return last about 50 minutes.

“The parachute was ejected and functioned perfectly,” Commanding Officer Lt. Col. Harold R. Turner later said.

After recovery of the payload canister, the fruit flies were examined and scientists assessed possible radiation effects.

“Analysis made by Harvard on recovered seeds and flies has shown that no detectable changes are produced by the radiation,” wrote US Naval Research Laboratory nuclear physicist Ernst H. Krause.

The flight proved that living organisms could survive a rocket launch, reach space, and return safely to Earth.

At the time, some scientists feared acceleration, vibration, or radiation might make survival impossible for living organisms during a space flight.

In 1947, fruit flies answered one important question: Could life leave Earth, reach space, and come back alive?

We learned the answer was yes.

Today, NASA continues to send fruit flies to the International Space Station for testing and observation, exploring how space affects biology over time.

NASA notes that fruit flies share many fundamental genetic and cellular characteristics with humans, with approximately 75% of human disease genes having counterparts in fruit flies.

This makes them a small yet efficient model for studying changes related to the immune system, heart function, and other bodily systems in space.

Let’s pause to acknowledge those spacefaring “astroflies,” the first living beings to journey into space and return alive.